10 interesting stories served every morning and every evening.
On January 14, 2006, John Resig introduced a JavaScript library called jQuery at BarCamp in New York City. Now, 20 years later, the jQuery team is happy to announce the final release of jQuery 4.0.0. After a long development cycle and several pre-releases, jQuery 4.0.0 brings many improvements and modernizations. It is the first major version release in almost 10 years and includes some breaking changes, so be sure to read through the details below before upgrading. Still, we expect that most users will be able to upgrade with minimal changes to their code.
Many of the breaking changes are ones the team has wanted to make for years, but couldn’t in a patch or minor release. We’ve trimmed legacy code, removed some previously-deprecated APIs, removed some internal-only parameters to public functions that were never documented, and dropped support for some “magic” behaviors that were overly complicated.
We have an upgrade guide and jQuery Migrate plugin release ready to assist with the transition. Please upgrade and let us know if you encounter any issues.
As usual, the release is available on our CDN and the npm package manager. Other third party CDNs will probably have it available soon as well, but remember that we don’t control their release schedules and they will need some time. Here are the highlights for jQuery 4.0.0.
jQuery 4.0 drops support for IE 10 and older. Some may be asking why we didn’t remove support for IE 11. We plan to remove support in stages, and the next step will be released in jQuery 5.0. For now, we’ll start by removing code specifically supporting IE versions older than 11.
We also dropped support for other very old browsers, including Edge Legacy, iOS versions earlier than the last 3, Firefox versions earlier than the last 2 (aside from Firefox ESR), and Android Browser. No changes should be required on your end. If you need to support any of these browsers, stick with jQuery 3.x.
jQuery 4.0 adds support for Trusted Types, ensuring that HTML wrapped in TrustedHTML can be used as input to jQuery manipulation methods in a way that doesn’t violate the require-trusted-types-for Content Security Policy directive.
Along with this, while some AJAX requests were already using <script> tags to maintain attributes such as crossdomain, we have since switched most asynchronous script requests to use <script> tags to avoid any CSP errors caused by inline scripts. There are still a few cases where XHR is used for asynchronous script requests, such as when the "headers" option is passed (use scriptAttrs instead!), but we now use a <script> tag whenever possible.
It was a special day when the jQuery source on the main branch was migrated from AMD to ES modules. The jQuery source has always been published with jQuery releases on npm and GitHub, but it could not be imported directly as modules without RequireJS, which was jQuery's build tool of choice at the time. We have since switched to Rollup for packaging jQuery, and we run all tests on the ES modules separately. This makes jQuery compatible with modern build tools, development workflows, and browsers.
...
Read the original on blog.jquery.com »
• The 2025 US tariffs are an own goal: American importers and consumers bear nearly the entire cost. Foreign exporters absorb only about 4% of the tariff burden—the remaining 96% is passed through to US buyers.
• Using shipment-level data covering over 25 million transactions valued at nearly $4 trillion, we find near-complete pass-through of tariffs to US import prices.
• US customs revenue surged by approximately $200 billion in 2025—a tax paid almost entirely by Americans.
• Event studies around discrete tariff shocks on Brazil (50%) and India (25–50%) confirm: export prices did not decline. Trade volumes collapsed instead.
• Indian export customs data validates our findings: when facing US tariffs, Indian exporters maintained their prices and reduced shipments. They did not “eat” the tariff.
...
Read the original on www.kielinstitut.de »
The other day I was browsing my one-and-only social network — which is not a social network, but I’m tired of arguing with people online about it — HackerNews. It’s like this dark corner of the internet, where anonymous tech-enthusiasts, scientists, entrepreneurs, and internet-trolls, like to lurk. I like HackerNews. It helps me stay up-to-date about recent tech news (like Cloudflare acquiring Astro which makes me happy for the Astro team, but also sad and worried since I really like Astro, and big-tech has a tendency to ruin things); it mostly avoids politics; and it’s not a social network.
And, in the fashion of HackerNews, I stumbled upon someone sharing their open-source project. It’s great to see people work on their projects and decide to show them to the world. I think people underestimate the fear of actually shipping stuff, which involves sharing it with the world.
Upon glancing at the comment section, I started to see other anonymous participants questioning the validity of said open-source project in terms of how much of it was AI-generated. I grabbed my popcorn, and started to follow this thread. More accusations started to appear: the commit timeline does not make sense; the code has AI-generated comments; etc. And at the same time, the author tried to reply to every comment claiming that they wrote this 100% without using AI.
I don’t mind people using AI to write code, even though I tried to resist it myself, until eventually succumbing to it. But I think it’s fair to disclose the use of AI, especially in open-source software. People on the internet are, mostly, anonymous, and it’s not always possible to verify the claims or expertise of particular individuals. But as the amount of code is growing, considering that everyone is using AI to generate whatever-app they want, it’s impossible to verify every piece of code we are going to use. So it’s fair to know, I think, if some project is AI generated and to what extent. In the end, LLMs are just probabilistic next-token generators. And while they are getting extremely good at most simple tasks, they have the potential to wreak havoc with harder problems or edge-cases (especially if there are no experienced engineers, with domain knowledge, to review the generated code).
As I was following this thread, I started to see a pattern: the comments of the author looked AI generated too:
The use of em-dashes, which on most keyboards require a special key combination that most people don't know. (In Markdown, two dashes render as an em-dash, but HackerNews does no such conversion; hence you often see a literal "--" in HackerNews comments, where the author is probably used to a Markdown renderer turning it into an em-dash.)
The notorious "you are absolutely right", which no living human has ever used before, at least not that I know of
The other notorious “let me know if you want to [do that thing] or [explore this other thing]” at the end of the sentence
I was sitting there, refreshing the page, watching the author being confronted over the use of AI in both their code and their comments, while claiming to have not used AI at all. Honestly, I thought I was going insane. Am I wrong to suspect them? What if people DO USE em-dashes in real life? What if English is not their native language, and in their native language it's fine to use phrases like "you are absolutely right"? Is this even a real person? Are the people who are commenting real?
And then it hit me. We have reached the Dead Internet. The Dead Internet Theory claims that since around 2016 (a whopping 10 years already), the internet is mostly dead, i.e. most interactions are between bots, and most content is machine-generated to either sell you stuff or game SEO (in order to sell you stuff).
I’m proud to say that I spent a good portion of my teenage years on the internet, chatting and learning from real people who knew more than me. Back in the early 2000s, there were barely any bots on the internet. The average non-tech human didn’t know anything about phpBB forums, or the weird people with pseudonyms who hung out there. I spent countless hours in IRC channels and on phpBB forums, learning things like network programming, OS development, game development, and of course web development (which has been my profession for almost two decades now). I’m basically a graduate of the Internet University. Back then, nobody had doubts that they were talking to a human being. Sure, you could think that you spoke to a hot girl who in reality was a fat guy, but hey, at least they were real!
But today, I no longer know what is real. I saw a picture on LinkedIn, from a real tech company, posting about their “office vibes” and their happy employees. And then I went to the comment section, and sure enough the picture is AI-generated (mangled text that does not make sense, weird hand artifacts). It was posted by an employee of the company, it showed other employees of said company, and it was altered with AI to showcase a different reality. Hell, maybe the people in the picture do not even exist!
And these are mild examples. I don’t use social networks (and no, HackerNews is not a social network), but I hear horror stories about AI generated content on Facebook, Xitter, TikTok, ranging from photos of giants that built the pyramids in Egypt, all the way to short videos of pretty girls saying that the EU is bad for Poland.
I honestly got sad that day. Hopeless, if I may say. AI is easily available to the masses, which allows them to generate a shitload of AI slop. People no longer need to write comments or code; they can just feed this to AI agents, who will generate the next “you are absolutely right” masterpiece.
I like technology. I like software engineering, and the concept of an internet where people can share knowledge and create communities. Were there malicious actors back then on the internet? For sure. But what I am seeing today makes me question whether the future we are headed to is one where technology is useful anymore. Or, rather, a future where bots talk with bots, and human knowledge just gets recycled and repackaged into “10 steps to fix your [daily problem]” for the sake of selling you more stuff.
Unless otherwise noted, all content is generated by a human.
...
Read the original on kudmitry.com »
bitchat is a decentralized peer-to-peer messaging application that operates over bluetooth mesh networks. no internet required, no servers, no phone numbers.
traditional messaging apps depend on centralized infrastructure that can be monitored, censored, or disabled. bitchat creates ad-hoc communication networks using only the devices present in physical proximity. each device acts as both client and server, automatically discovering peers and relaying messages across multiple hops to extend the network’s reach.
this approach provides censorship resistance, surveillance resistance, and infrastructure independence. the network remains functional during internet outages, natural disasters, protests, or in regions with limited connectivity.
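the multi-hop relay described above is essentially hop-limited flooding with duplicate suppression. a minimal sketch of the idea in Python (hypothetical names and message format; bitchat itself is written in Swift and its real protocol differs):

```python
import uuid

class Node:
    """Toy mesh node: rebroadcasts unseen messages until their TTL runs out."""
    def __init__(self, name):
        self.name = name
        self.seen = set()    # message IDs already handled (duplicate suppression)
        self.peers = []      # nodes currently in radio range

    def send(self, text, ttl=7):
        self.receive({"id": uuid.uuid4().hex, "text": text, "ttl": ttl})

    def receive(self, msg):
        if msg["id"] in self.seen or msg["ttl"] <= 0:
            return                    # drop duplicates and expired messages
        self.seen.add(msg["id"])
        for peer in self.peers:       # relay with one less hop remaining
            peer.receive({**msg, "ttl": msg["ttl"] - 1})

# A chain a-b-c: a and c are out of range of each other, but b relays.
a, b, c = Node("a"), Node("b"), Node("c")
a.peers, b.peers, c.peers = [b], [a, c], [b]
a.send("hello mesh")
print(sorted(n.name for n in (a, b, c) if n.seen))  # ['a', 'b', 'c']
```

the TTL cap bounds how far a message floods, and the seen-set keeps relays from ping-ponging messages between neighbors forever.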
ios/macos version:
appstore: bitchat mesh
source code: https://github.com/permissionlesstech/bitchat
supports ios 16.0+ and macos 13.0+. build using xcode with xcodegen or swift package manager.
the software is released into the public domain.
...
Read the original on bitchat.free »
Do you require a (replacement) smartphone for your work at Radboud University? If so, there is a strong possibility that you will receive a Fairphone from 1 February 2026 onwards. Radboud University has decided to choose Fairphone as its standard company smartphone model for reasons of sustainability, cost efficiency and management support.
The Fairphone is a sustainable smartphone with easily replaceable parts such as the battery and screen. This makes the device last longer. Fair and recycled materials, such as plastic and aluminium, are used as much as possible in the production of this smartphone. Fairphone also pays attention to good and safe working conditions in its factories.
Fairphones are issued to employees by the Information & Library Services (ILS) division. In addition to new Fairphones, the university can also reissue used Samsung devices where possible. These are Samsung devices that have already been returned and still meet the technical and age requirements. As long as these devices are still available, not every employee will receive a Fairphone immediately. Employees who have an iPhone from Radboud University can continue to use it as long as the device is still functioning. However, returned iPhones will no longer be reissued.
Employees who prefer to use their private phone for work can request an RU SIM card for this purpose. The costs for using your own device will not be reimbursed. Naturally, smartphone models that have already been issued will continue to be supported by ILS colleagues, as will privately purchased smartphone models used for work.
Due to its longer lifespan, the total cost of a Fairphone is lower than that of comparable devices. In addition, Radboud University only needs to purchase, manage and support one standard model. This results in smaller stock, easier management and faster support. Manuals and instructions also only need to be maintained for one device.
Furthermore, less investment is required in knowledge of different models/brands. This also helps to speed up incident handling and, where necessary, smartphone replacement.
Fairphone offers a five-year warranty and long-term software support for up to eight years. This means that devices need to be replaced less quickly. This fits in with Radboud University’s circularity strategy, which focuses on the longest possible use and reuse of ICT hardware.
...
Read the original on www.ru.nl »
...
Read the original on gitlab.winehq.org »
Mastering the Schengen Shuffle: How to Use Precise Date Counting for 90/180-Day Visa Compliance
Navigate the complex 90/180-day visa rule with precision. Learn how to use a days between dates calculator to avoid entry bans, fines, and border issues.
The Time-Debt Audit: Is Your Loan Stealing Your Future Autonomy?
Stop viewing loans as monthly payments and start seeing them as ‘life hours.’ Use our Time-Debt Audit to calculate your Freedom Ratio before you sign.
Beyond the ‘Bad Request’: A Guide to URL Encoding for No-Code Automation
Stop ‘Bad Request’ errors in Zapier, Make, and Airtable. Master URL encoding to protect your automation workflows from data corruption and broken API calls.
Digital Quarantine: How to Use Subnetting to Secure Your Home Office and IoT Devices
Learn how to use a subnet calculator to build a ‘Digital Quarantine’ for your home. Isolate vulnerable IoT devices from your work data and personal files.
The ROI of Your Ride: Using a Car Loan Calculator to Turn Your Vehicle into a Business Asset
Don’t let car payments kill your gig economy profits. Learn to use a car loan calculator to determine the ROI of your vehicle for Uber, DoorDash, and more.
Beyond the Nest Egg: Finding Your Financial ‘Crossover Point’ with Compound Interest
Discover the Crossover Point: the milestone where interest earnings exceed your contributions. A guide to compound interest for late-start investors.
The ‘Wait Tax’: Quantifying the Exact Cost of Delaying Your Investments
Stop waiting for the ‘perfect time’ to invest. Learn how to calculate your ‘Wait Tax’—the massive financial penalty of delaying your portfolio by just 12–24 months.
The Minimum Viable Rest Strategy: A Survival Guide to Sleep Cycles for High-Stakes Performance
Master the Minimum Viable Rest (MVR) strategy using the 90-minute sleep cycle rule. Learn how to calculate survival windows to maintain clarity during crunch periods.
Beat the Monday Blues: The ‘Social Jetlag’ Recovery Plan Using the 90-Minute Rule
Stop the Sunday night panic. Use the 90-minute sleep cycle rule and our Sleep Calculator to recover from social jetlag and wake up refreshed on Monday morning.
...
Read the original on calquio.com »
Calculate how your investments grow over time with compound interest.
Stop waiting for the ‘perfect time’ to invest. Learn how to calculate your ‘Wait Tax’—the massive financial penalty of delaying your portfolio by just 12–24 months.
Discover the Crossover Point: the milestone where interest earnings exceed your contributions. A guide to compound interest for late-start investors.
Compound interest is interest calculated on both the initial principal and the accumulated interest from previous periods. Unlike simple interest, which only earns interest on the original amount, compound interest allows your money to grow exponentially over time.
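The difference is easy to see in a few lines. A quick sketch (the principal, rate, and horizon below are made-up figures for illustration, not from the article):

```python
# Compound vs. simple interest on the same principal.
# The figures below are illustrative, not from the article.
principal = 10_000
rate = 0.05   # 5% annual
years = 30

simple = principal * (1 + rate * years)      # interest on the principal only
compound = principal * (1 + rate) ** years   # interest on interest as well

print(f"Simple:   ${simple:,.2f}")    # $25,000.00
print(f"Compound: ${compound:,.2f}")  # about $43,219
```

Over 30 years the compounded balance ends up nearly twice the simple-interest one, which is the exponential growth the paragraph describes.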
Albert Einstein reportedly called compound interest “the eighth wonder of the world,” saying: “He who understands it, earns it; he who doesn’t, pays it.”
The basic formula for compound interest is A = P(1 + r/n)^(nt), where A is the final amount, P the principal, r the annual rate (as a decimal), n the number of compounding periods per year, and t the time in years.
For continuous compounding, the formula becomes A = Pe^(rt).
There is also a quick mental math trick to estimate how long it takes to double your money.
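The trick usually meant is the Rule of 72: dividing 72 by the annual rate (in percent) approximates the number of years to double. A quick check against the exact answer from the compound-interest formula:

```python
import math

def doubling_time_exact(rate):
    """Years to double money at a given annual rate, from (1 + r)^t = 2."""
    return math.log(2) / math.log(1 + rate)

for pct in (4, 6, 8, 12):
    approx = 72 / pct                  # Rule of 72 estimate
    exact = doubling_time_exact(pct / 100)
    print(f"{pct}%: rule of 72 = {approx:.1f} yrs, exact = {exact:.1f} yrs")
```

For typical rates the estimate stays within a few months of the exact value, which is why it works well as mental math.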
The more frequently interest compounds, the more you earn. Think of it as: how often the bank calculates and adds interest to your balance.
Consider, for example, a 10% annual rate on $10,000 over 10 years.
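That comparison can be computed directly from the formula A = P(1 + r/n)^(nt); the frequencies chosen below are common ones, assumed for illustration:

```python
import math

# $10,000 at a 10% nominal annual rate over 10 years,
# compounded at different (assumed) frequencies.
P, r, t = 10_000, 0.10, 10
for label, n in [("annually", 1), ("quarterly", 4), ("monthly", 12), ("daily", 365)]:
    amount = P * (1 + r / n) ** (n * t)
    print(f"{label:>10}: ${amount:,.2f}")
print(f"continuous: ${P * math.exp(r * t):,.2f}")  # A = P * e^(rt)
```

Annual compounding yields about $25,937, while continuous compounding tops out near $27,183; more frequent compounding always earns more, but with diminishing returns.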
When planning long-term investments, it’s crucial to understand the difference between nominal returns (the number you see) and real returns (actual purchasing power).
Nominal Return: The raw percentage your investment grows — what your account statement shows.
Real Return: Your return after accounting for inflation — what your money can actually buy.
Example: You invest $10,000 at 10% annual return for 20 years.
* Nominal value: $67,275 (what your account shows)
Historical inflation rates vary by country, but a common assumption for developed economies is 2-3% annually. During high-inflation periods, this can exceed 5-10%.
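For the $10,000-at-10%-for-20-years example above, the deflation step looks like this (the 3% inflation rate is an assumption for illustration, not a figure from the article):

```python
# Nominal vs. real value of the example above:
# $10,000 at a 10% nominal annual return over 20 years.
P, r, years = 10_000, 0.10, 20
inflation = 0.03   # assumed average inflation, not from the article

nominal = P * (1 + r) ** years
real = nominal / (1 + inflation) ** years   # deflated to today's purchasing power

print(f"Nominal: ${nominal:,.0f}")   # $67,275, matching the article
print(f"Real:    ${real:,.0f}")
```

Under that assumption, roughly $67,275 on the statement buys what about $37,000 buys today; the gap is the inflation drag the section warns about.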
Start early — Time is your greatest ally. Even small amounts grow significantly over decades.
Be consistent — Regular contributions amplify the effect of compounding.
Seek higher rates — Even a 1% difference compounds to significant amounts over time.
Beat inflation — Ensure your real return is positive; otherwise, you’re losing purchasing power.
...
Read the original on calquio.com »
👋 Join our Discord community.
📖 Check out the GLM-4.7 technical blog and the technical report (GLM-4.5).
📍 Use GLM-4.7-Flash API services on Z.ai API Platform.
👉 One click to GLM-4.7.
GLM-4.7-Flash is a 30B-A3B MoE model. As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.
For local deployment, GLM-4.7-Flash supports inference frameworks including vLLM and SGLang. Comprehensive deployment instructions are available in the official GitHub repository.
vLLM and SGLang only support GLM-4.7-Flash on their main branches.
* using pip (must use pypi.org as the index url):
pip install -U vllm --pre --index-url https://pypi.org/simple --extra-index-url https://wheels.vllm.ai/nightly
pip install git+https://github.com/huggingface/transformers.git
* using pip: install sglang from source, then update transformers to the latest main branch.
* using with transformers:
pip install git+https://github.com/huggingface/transformers.git
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "zai-org/GLM-4.7-Flash"
messages = [{"role": "user", "content": "hello"}]
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
inputs = inputs.to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
output_text = tokenizer.decode(generated_ids[0][inputs.input_ids.shape[1]:])
print(output_text)
vllm serve zai-org/GLM-4.7-Flash \
--tensor-parallel-size 4 \
--speculative-config.method mtp \
--speculative-config.num_speculative_tokens 1 \
--tool-call-parser glm47 \
--reasoning-parser glm45 \
--enable-auto-tool-choice \
--served-model-name glm-4.7-flash
python3 -m sglang.launch_server \
--model-path zai-org/GLM-4.7-Flash \
--tp-size 4 \
--tool-call-parser glm47 \
--reasoning-parser glm45 \
--speculative-algorithm EAGLE \
--speculative-num-steps 3 \
--speculative-eagle-topk 1 \
--speculative-num-draft-tokens 4 \
--mem-fraction-static 0.8 \
--served-model-name glm-4.7-flash \
--host 0.0.0.0 \
--port 8000
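Once launched, both servers expose an OpenAI-compatible HTTP API. A request against it can be sketched as follows (the URL and model name assume the launch commands above; this is an illustration, not official client code):

```python
# Minimal client for the OpenAI-compatible /v1/chat/completions endpoint
# that both vLLM and SGLang expose. The host/port and model name below
# assume the launch commands above.
import json
import urllib.request

API_URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt, model="glm-4.7-flash", max_tokens=128):
    """Build the HTTP request; the model name matches --served-model-name."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def chat(prompt):
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# print(chat("hello"))  # uncomment with a server running locally
```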
If you find our work useful in your research, please consider citing the following paper:
@misc{5team2025glm45agenticreasoningcoding,
title={GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models},
author={GLM Team and Aohan Zeng and Xin Lv and Qinkai Zheng and Zhenyu Hou and Bin Chen and Chengxing Xie and Cunxiang Wang and Da Yin and Hao Zeng and Jiajie Zhang and Kedong Wang and Lucen Zhong and Mingdao Liu and Rui Lu and Shulin Cao and Xiaohan Zhang and Xuancheng Huang and Yao Wei and Yean Cheng and Yifan An and Yilin Niu and Yuanhao Wen and Yushi Bai and Zhengxiao Du and Zihan Wang and Zilin Zhu and Bohan Zhang and Bosi Wen and Bowen Wu and Bowen Xu and Can Huang and Casey Zhao and Changpeng Cai and Chao Yu and Chen Li and Chendi Ge and Chenghua Huang and Chenhui Zhang and Chenxi Xu and Chenzheng Zhu and Chuang Li and Congfeng Yin and Daoyan Lin and Dayong Yang and Dazhi Jiang and Ding Ai and Erle Zhu and Fei Wang and Gengzheng Pan and Guo Wang and Hailong Sun and Haitao Li and Haiyang Li and Haiyi Hu and Hanyu Zhang and Hao Peng and Hao Tai and Haoke Zhang and Haoran Wang and Haoyu Yang and He Liu and He Zhao and Hongwei Liu and Hongxi Yan and Huan Liu and Huilong Chen and Ji Li and Jiajing Zhao and Jiamin Ren and Jian Jiao and Jiani Zhao and Jianyang Yan and Jiaqi Wang and Jiayi Gui and Jiayue Zhao and Jie Liu and Jijie Li and Jing Li and Jing Lu and Jingsen Wang and Jingwei Yuan and Jingxuan Li and Jingzhao Du and Jinhua Du and Jinxin Liu and Junkai Zhi and Junli Gao and Ke Wang and Lekang Yang and Liang Xu and Lin Fan and Lindong Wu and Lintao Ding and Lu Wang and Man Zhang and Minghao Li and Minghuan Xu and Mingming Zhao and Mingshu Zhai and Pengfan Du and Qian Dong and Shangde Lei and Shangqing Tu and Shangtong Yang and Shaoyou Lu and Shijie Li and Shuang Li and Shuang-Li and Shuxun Yang and Sibo Yi and Tianshu Yu and Wei Tian and Weihan Wang and Wenbo Yu and Weng Lam Tam and Wenjie Liang and Wentao Liu and Xiao Wang and Xiaohan Jia and Xiaotao Gu and Xiaoying Ling and Xin Wang and Xing Fan and Xingru Pan and Xinyuan Zhang and Xinze Zhang and Xiuqing Fu and Xunkai Zhang and Yabo Xu and Yandong Wu and Yida Lu and Yidong Wang and Yilin Zhou and Yiming 
Pan and Ying Zhang and Yingli Wang and Yingru Li and Yinpei Su and Yipeng Geng and Yitong Zhu and Yongkun Yang and Yuhang Li and Yuhao Wu and Yujiang Li and Yunan Liu and Yunqing Wang and Yuntao Li and Yuxuan Zhang and Zezhen Liu and Zhen Yang and Zhengda Zhou and Zhongpei Qiao and Zhuoer Feng and Zhuorui Liu and Zichen Zhang and Zihan Wang and Zijun Yao and Zikang Wang and Ziqiang Liu and Ziwei Chai and Zixuan Li and Zuodong Zhao and Wenguang Chen and Jidong Zhai and Bin Xu and Minlie Huang and Hongning Wang and Juanzi Li and Yuxiao Dong and Jie Tang},
year={2025},
eprint={2508.06471},
archivePrefix={arXiv},
primaryClass={cs.CL},
...
Read the original on huggingface.co »
At least 39 people have died in a train collision in southern Spain and dozens more have been injured in the country’s worst rail crash in more than a decade, Spain’s Civil Guard has said. Carriages on a Madrid-bound train derailed and crossed over to the opposite tracks, colliding with an oncoming train in Adamuz on Sunday evening. Four hundred passengers and staff were on board the two trains, the rail networks said. Emergency services treated 122 people, with 43, including four children, still in hospital. Of those, 12 adults and one child are in intensive care. Spanish Transport Minister Óscar Puente said the death toll “is not yet final”, as officials launched an investigation.
Puente described the incident as “extremely strange”. All the railway experts consulted by the government “are extremely baffled by the accident”, he told reporters in Madrid. Rail network operator Adif said the collision happened at 19:45 local time (18:45 GMT), about an hour after the train left Málaga heading north to Madrid, when it derailed on a straight stretch of track near the city of Córdoba. The force of the crash pushed the carriages of the second train into an embankment, Puente said. He added that most of those killed and injured were in the front carriages of the second train, which was travelling south from Madrid to Huelva. The type of train involved in the crash was a Freccia 1000, which can reach top speeds of 400 km/h (250 mph), a spokesperson for the Italian rail company Ferrovie dello Stato told Reuters news agency. Rescue teams said the twisted wreckage of the trains made it difficult to recover people trapped inside the carriages. Córdoba fire chief Francisco Carmona told Spanish public broadcaster RTVE: “We have even had to remove a dead person to be able to reach someone alive. It is hard, tricky work.”
Salvador Jimenez, a journalist with RTVE who was on one of the trains, said the impact felt like an “earthquake”. “I was in the first carriage. There was a moment when it felt like an earthquake and the train had indeed derailed,” Jimenez said. Footage from the scene appears to show some train carriages had tipped over on their sides. Rescue workers can be seen scaling the train to pull people out of the lopsided train doors and windows. A Madrid-bound passenger, José, told public broadcaster Canal Sur: “There were people and screaming, calling for doctors.”
All rail services between Madrid and Andalusia were suspended following the accident and are expected to remain closed all day on Monday. Iryo, a private rail company that operated the journey from Málaga, said around 300 passengers were on board the train that first derailed, while the other train — operated by the state-funded firm Renfe — had around 100 passengers. The official cause is not yet known. An investigation is not expected to determine what happened for at least a month, according to the transport minister. Spain’s Prime Minister, Pedro Sánchez, said the country will endure a “night of deep pain”. The mayor of Adamuz, Rafael Moreno, was one of the first people on the scene of the accident, describing it as “a nightmare”. King Felipe VI and Queen Letizia said they were following news of the disaster “with great concern”. “We extend our most heartfelt condolences to the relatives and loved ones of the dead, as well as our love and wishes for a swift recovery to the injured,” the royal palace said on X. The emergency agency in the region of Andalusia urged any crash survivors to contact their families or post on social media that they are alive.
Advanced medical posts were set up for impacted passengers to be treated for injuries and transferred to hospital. Adif said it set up spaces for relatives of the victims at Atocha, Seville, Córdoba, Málaga and Huelva stations. The Spanish Red Cross has deployed emergency support services to the scene, while also offering counselling to families nearby. Miguel Ángel Rodríguez from the Red Cross told RNE radio: “The families are going through a situation of great anxiety due to the lack of information. These are very distressing moments.”
...
Read the original on www.bbc.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.