10 interesting stories served every morning and every evening.
The Foundation that promotes the Zig programming language has quit GitHub due to what its leadership perceives as the code sharing site’s decline.
The drama began in April 2025 when GitHub user AlekseiNikiforovIBM started a thread titled “safe_sleep.sh rarely hangs indefinitely.” GitHub addressed the problem in August, but didn’t reveal that in the thread, which remained open until Monday.
The code uses 100 percent CPU all the time, and will run forever
That timing appears notable. Last week, Andrew Kelley, president and lead developer of the Zig Software Foundation, announced that the Zig project is moving to Codeberg, a non-profit Git hosting service, because GitHub no longer demonstrates commitment to engineering excellence.
One piece of evidence he offered for that assessment was the “safe_sleep.sh rarely hangs indefinitely” thread.
“Most importantly, Actions has inexcusable bugs while being completely neglected,” Kelly wrote. “After the CEO of GitHub said to ‘embrace AI or get out’, it seems the lackeys at Microsoft took the hint, because GitHub Actions started ‘vibe-scheduling’ — choosing jobs to run seemingly at random. Combined with other bugs and inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked.”
Kelley’s gripe seems justified, as the bug discussed in the thread appears to have popped up following a code change in February 2022 that users flagged in prior bug reports.
The code change replaced instances of the posix “sleep” command with a “safe_sleep” script that failed to work as advertised. It was supposed to allow the GitHub Actions runner — the application that runs a job from a GitHub Actions workflow — to pause execution safely.
“The bug in this ‘safe sleep’ script is obvious from looking at it: if the process is not scheduled for the one-second interval in which the loop would return (due to $SECONDS having the correct value), then it simply spins forever,” wrote Zig core developer Matthew Lugg in a comment appended to the April bug thread.
“That can easily happen on a CI machine under extreme load. When this happens, it’s pretty bad: it completely breaks a runner until manual intervention. On Zig’s CI runner machines, we observed multiple of these processes which had been running for hundreds of hours, silently taking down two runner services for weeks.”
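Lugg’s description maps onto a simple busy-wait pattern. The sketch below is a hypothetical reconstruction of that pattern and of one way to harden it; it is not the actual runner script, and the function names are illustrative:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the pattern described above -- NOT the actual
# GitHub Actions runner script; names and the fixed variant are illustrative.

safe_sleep_buggy() {
  local target=$(( SECONDS + $1 ))
  # Busy-waits at 100% CPU. If this shell is never scheduled during the
  # one second in which $SECONDS equals $target, the equality check below
  # never succeeds and the loop spins forever.
  while [ "$SECONDS" -ne "$target" ]; do
    :
  done
}

safe_sleep_robust() {
  local target=$(( SECONDS + $1 ))
  # A >= style comparison cannot miss the exit condition, and sleeping
  # inside the loop stops the CPU spin.
  while [ "$SECONDS" -lt "$target" ]; do
    sleep 1
  done
}
```

The robust variant trades exactness for safety: it may overshoot by up to a second, but it cannot hang and it yields the CPU while waiting.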
The fix was merged on August 20, 2025, from a separate issue opened back in February 2024. The related bug report from April 2025 remained open until Monday, December 1, 2025. A separate CPU usage bug remains unresolved.
Jeremy Howard, co-founder of Answer.AI and Fast.AI, said in a series of social media posts that users’ claims about GitHub Actions being in a poor state of repair appear to be justified.
“The bug,” he wrote, “was implemented in a way that, very obviously to nearly anyone at first glance, uses 100 percent CPU all the time, and will run forever unless the task happens to check the time during the correct second.”
I can’t see how such an extraordinary collection of outright face-palming events could be made
He added that the platform-independent fix for the CPU issue proposed last February lingered for a year without review and was closed by the GitHub bot in March 2025 before being revived and merged.
“Whilst one could say that this is just one isolated incident, I can’t see how such an extraordinary collection of outright face-palming events could be made in any reasonably functioning organization,” Howard concluded.
GitHub did not immediately respond to a request for comment.
While Kelley has gone on to apologize for the incendiary nature of his post, Zig is not the only software project publicly parting ways with GitHub.
Over the weekend, Rodrigo Arias Mallo, creator of the Dillo browser project, said he’s planning to move away from GitHub owing to concerns about over-reliance on JavaScript, GitHub’s ability to deny service, declining usability, inadequate moderation tools, and “over-focusing on LLMs and generative AI, which are destroying the open web (or what remains of it) among other problems.”
Codeberg, for its part, has doubled its supporting membership since January, going from more than 600 members to over 1,200 as of last week.
GitHub has not disclosed how many of its users currently pay for its services. The code hosting biz had “over 1.3 million paid GitHub Copilot subscribers, up 30 percent quarter-over-quarter,” Microsoft CEO Satya Nadella said on the company’s Q2 2024 earnings call.
In Q4 2024, when GitHub reported an annual revenue run rate of $2 billion, GitHub Copilot subscriptions accounted for about 40 percent of the company’s annual revenue growth.
Nadella offered a different figure during Microsoft’s Q3 2025 earnings call: “we now have over 15 million GitHub Copilot users, up over 4X year-over-year.” It’s not clear how many GitHub users pay for Copilot, or for runner scripts that burned CPU cycles when they should have been sleeping. ®
...
Read the original on www.theregister.com »
Ghostty is now fiscally sponsored by Hack Club, a registered 501(c)(3) non-profit.
Fiscal sponsorship is a legal and financial arrangement in which a recognized non-profit extends its tax-exempt status to a project that aligns with its mission. This allows Ghostty to operate as a charitable initiative while Hack Club manages compliance, donations, accounting, and governance oversight.
Being non-profit clearly demonstrates our commitment to keeping Ghostty free and open source for everyone. It paves the way for a model for sustainable development beyond my personal involvement. And it also provides important legal protections and assurances to the people and communities that adopt and use Ghostty.
Since the beginning of the project in 2023 and the private beta days of Ghostty, I’ve repeatedly expressed my intention that Ghostty legally become a non-profit. This intention stems from several core beliefs I have.
First, I want to lay bricks for a sustainable future for Ghostty that doesn’t depend on my personal involvement technically or financially. Financially, I am still the largest donor to the project, and I intend to remain so, but a non-profit structure allows others to contribute financially without fear of misappropriation or misuse of funds (as protected by legal requirements and oversight from the fiscal sponsor).
Second, I want to squelch any possible concerns about a
“rug pull”. A non-profit structure provides enforceable assurances: the mission cannot be quietly changed, funds cannot be diverted to private benefit, and the project cannot be sold off or repurposed for commercial gain. The structure legally binds Ghostty to the public-benefit purpose it was created to serve.
Finally, despite being decades-old technology, terminals and terminal-related technologies remain foundational to modern computing and software infrastructure. They’re often out of the limelight, but they’re ever present on developer machines, embedded in IDEs, visible as read-only consoles for continuous integration and cloud services, and still one of the primary ways remote access is done on servers around the world.
I believe infrastructure of this kind should be stewarded by a mission-driven,
non-commercial entity that prioritizes public benefit over private profit.
That structure increases trust, encourages adoption, and creates the conditions for Ghostty to grow into a widely used and impactful piece of open-source infrastructure.
From a technical perspective, nothing changes for Ghostty. Our technical goals for the project remain the same, the license (MIT) remains the same, and we continue our work towards better Ghostty GUI releases and
libghostty.
Financially, Ghostty can now accept tax-deductible donations in the United States. This opens up new avenues for funding the project and sustaining development over the long term. Most immediately, I’m excited to begin
compensating contributors, but I also intend to support upstream dependencies, fund community events, and pay for boring operational costs.
All our financial transactions will be transparent down to individual transactions for both inflows and outflows. You can view our public ledger at Ghostty’s page on Hack Club Bank. At the time of writing, this is empty, but you’ll soon see some initial funding from me and the beginning of paying for some of our operational costs.
All applicable names, marks, and intellectual property associated with Ghostty have been transferred to Hack Club and are now owned under the non-profit umbrella. Copyright continues to be held by individual contributors under the continued and existing license structure.
From a leadership perspective, I remain the project lead and final authority on all decisions, but as stated earlier, the creation of a non-profit structure lays the groundwork for an eventual future beyond this model.
As our fiscal sponsor, Hack Club provides essential services to Ghostty, including accounting, legal compliance, and governance oversight. To support this, 7% of all donations to Ghostty go to Hack Club to cover these costs in addition to supporting their broader mission of empowering young people around the world interested in technology and coding.
In the words of Zach Latta, Hack Club’s founder and executive director, this is a “good-for-good” trade. Instead of donor fees going to a for-profit management company or covering the pure overhead of a single project, the fees go to another non-profit doing important work in the tech community and the overhead is amortized across many projects.
In addition to the 7% fees, my family is personally donating $150,000
directly to the Hack Club project (not to Ghostty within it). Hack Club does amazing work and I would’ve supported them regardless of their fiscal sponsorship of Ghostty, but I wanted to pair these two things together to amplify the impact of both.
Please consider donating to support Ghostty’s continued development.
I recognize that Ghostty is already in an abnormally fortunate position to have myself as a backer, but I do envision a future where Ghostty is more equally supported by a broader community. And with our new structure, you can be assured about the usage of your funds
towards public-benefit goals.
This post isn’t meant to be a direct fundraising pitch, so it is purposely lacking critical details about our funding goals, budget, project goals, project metrics, etc. I’ll work on those in the future. In the meantime, if you’re interested in talking more about supporting Ghostty, please email me at m@mitchellh.com.
I’m thankful for Hack Club and their team for working with us to make this happen. I’m also thankful for the Ghostty community who has supported this project and has trusted me and continues to trust me to steward it responsibly.
For more information about Ghostty’s non-profit structure, see the
dedicated page on Ghostty’s website.
...
Read the original on mitchellh.com »
Accepting US car standards would risk European lives, warn cities and civil society
EU officials must revisit the hastily agreed trade deal with the US, in which the EU stated that it “intends to accept” lower US vehicle standards, say cities — including Paris, Brussels and Amsterdam — and more than 75 civil society organisations. In a letter to European lawmakers, the signatories warn that aligning European standards with laxer rules in the US would undermine the EU’s global leadership in road safety, public health, climate policy and competitiveness.
The deal agreed over summer states that “with respect to automobiles, the United States and the European Union intend to accept and provide mutual recognition to each other’s standards.” Yet, EU vehicle safety regulations have supported a 36% reduction in European road deaths since 2010. By contrast, road deaths in the US over the same period increased 30%, with pedestrian deaths up 80% and cyclist deaths up 50%.
Europe currently has mandatory requirements for life-saving technologies, such as pedestrian protection, automated emergency braking and lane-keeping assistance. Some of the most basic pedestrian protection requirements that have long been in place in the EU, such as deformation zones at the front of vehicles to reduce crash severity and the prohibition of sharp edges, have made cars like the Tesla Cybertruck illegal to sell in Europe.
“Europe built its reputation on pioneering robust vehicle standards. To accept lower US standards would undo decades of EU progress,” say the signatories. According to the letter, “the consequences of such a move for European road safety would be profound.”
The EU is set to apply limits to harmful pollution from brake and tyre wear from 2026 onwards, while at the same time the US is moving to weaken air pollution rules for vehicles. Accepting weaker US standards would increase European exposure to pollutants linked to asthma, cancer and numerous cardiovascular and neurological conditions, warn the signatories.
Major EU brands such as BMW, Mercedes and Stellantis already build large numbers of vehicles in US automotive plants to EU standards — particularly larger SUVs. However, if lower US vehicle standards are accepted in Europe, those production lines could build to the lower standards before shipping vehicles to the EU, and overall vehicle production would shift from the EU to the US. Accepting lower US car standards would therefore risk large-scale job losses in EU car plants and across Europe’s automotive supply chain.
The European Commission is already working to tighten Individual Vehicle Approval (IVA), which is being abused to put thousands of oversized US pick-up trucks on EU streets without complying with core EU safety, air pollution and climate standards. To now accept lower US vehicle standards across the board would open the floodgates to US pick-ups and large SUVs.
The signatories urge EU lawmakers to oppose the intention to accept lower US vehicle standards in the EU–US Joint Statement and affirm publicly that EU vehicle standards are non-negotiable.
2025 10 20 Civil society + city letter on risk of EU accepting lower US car standards (FINAL)Download
...
Read the original on etsc.eu »
We thank Sumit Agarwal, Ron Kaniel, Roni Michaely, Lyndon Moore, Antoinette Schoar, and seminar/conference participants at the Chinese University of Hong Kong, Columbia Business School, Deakin University, Macquarie University, Peking University (HSBC and Guanghua), Shanghai Lixin University of Accounting and Finance, Tsinghua University, University of Sydney, University of Technology Sydney, 2023 Australasian Finance and Banking Conference, 2023 Finance Down Under, and 2023 Five Star Workshop in Finance for their helpful comments. We thank Lei Chen, Jingru Pan, Yiyun Yan, Zitong Zeng, and Tianyue Zheng for their excellent research assistance. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.
...
Read the original on www.nber.org »
I grabbed lunch with a former Microsoft coworker I’ve always admired—one of those engineers who can take any idea, even a mediocre one, and immediately find the gold in it. I wanted her take on Wanderfugl 🐦, the AI-powered map I’ve been building full-time. I expected encouragement. At worst, overly generous feedback because she knows what I’ve sacrificed.
Instead, she reacted to it with a level of negativity I’d never seen her direct at me before.
When I finally got her to explain what was wrong, none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she’s forced to use at work. My product barely featured. Her reaction wasn’t about me at all. It was about her entire environment.
Her PM had been laid off months earlier. The team asked why. Their director told them it was because the PM org “wasn’t effective enough at using Copilot 365.”
I nervously laughed. This director got up in a group meeting and said that someone lost their job over this?
After a pause I tried to share how much better I’ve been feeling—how AI tools helped me learn faster, how much they accelerated my work on Wanderfugl. I didn’t fully grok how tone deaf I was being though. She’s drowning in resentment.
I left the lunch deflated and weirdly guilty, like building an AI product made me part of the problem.
But then I realized this was bigger than one conversation. Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn’t true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building. But in Seattle? Instant hostility the moment they heard “AI.”
The people at big tech in Seattle are not ok
When I joined Microsoft, there was still a sense of possibility. Satya was pushing “growth mindset” everywhere. Leaders talked about empowerment and breaking down silos. And even though there was always a gap between the slogans and reality, there was room to try things.
I leaned into it. I pushed into areas nobody wanted to touch, like Windows update compression, because it lived awkwardly across three teams. Somehow, a 40% improvement made it out alive. Leadership backed it. The people trying to kill it shrank back into their fiefdoms. It felt like the culture wanted change.
That world is gone.
When the layoff directive hit, every org braced for impact. Anything not strictly inside the org’s charter was axed. I went from shipping a major improvement in Windows 11 to having zero projects overnight. I quit shortly after. In hindsight, getting laid off with severance might’ve been better than watching the culture collapse in slow motion.
Then came the AI panic.
If you could classify your project as “AI,” you were safe and prestigious. If you couldn’t, you were nobody. Overnight, most engineers got rebranded as “not AI talent.” And then came the final insult: everyone was forced to use Microsoft’s AI tools whether they worked or not.
Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors’ tools. Sometimes worse than doing the work manually.
But you weren’t allowed to fix them—that was the AI org’s turf. You were supposed to use them, fail to see productivity gains, and keep quiet.
Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren’t “embracing AI.”
Bring up AI in a Seattle coffee shop now and people react like you’re advocating asbestos.
Amazon folks are slightly more insulated, but not by much. The old Seattle deal—Amazon treats you poorly but pays you more—only masks the rot.
This belief system—that AI is useless and that you’re not good enough to work on it anyway—hurts three groups:
1. The companies.
They’ve taught their best engineers that innovation isn’t their job.
2. The engineers.
They’re stuck in resentment and self-doubt while their careers stall.
3. Anyone trying to build anything new in Seattle.
Say “AI” and people treat you like a threat or an idiot.
And the loop feeds itself:
Engineers don’t try because they think they can’t.
Companies don’t empower them because they assume they shouldn’t.
Bad products reinforce the belief that AI is doomed.
The spiral locks in.
My former coworker—a composite of three people, for anonymity—now believes she’s both unqualified for AI work and that AI isn’t worth doing anyway. She’s wrong on both counts, but the culture made sure she’d land there.
Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.
...
Read the original on jonready.com »
The game itself is a Windows executable, right? At a core level, the Linux operating system does not even know how to load the program, and so, instead of invoking it through the OS, you invoke it through Proton, which is going to do the first step of setting up the address space, loading the segments of code into memory. The code coming from the app is all x86, and so Proton is a facilitator. It puts the existing code of the app in a format and a layout that the Linux OS can understand and then starts executing that code.
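As a rough illustration of that point, here is what running a Windows binary directly versus through Proton’s launcher script might look like; the paths, Proton version, and environment variables are illustrative, and Steam normally sets all of this up automatically:

```sh
# The Linux kernel's loader rejects a Windows PE binary outright:
./Game.exe
# => bash: ./Game.exe: cannot execute binary file: Exec format error

# Invoking the same binary through Proton instead (illustrative paths):
export STEAM_COMPAT_CLIENT_INSTALL_PATH="$HOME/.steam/steam"
export STEAM_COMPAT_DATA_PATH="$HOME/proton-prefix"   # per-game compat prefix
"$HOME/.steam/steam/steamapps/common/Proton - Experimental/proton" run ./Game.exe
```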
...
Read the original on www.theverge.com »
Update: This post received a large amount of attention on Hacker News — see the discussion thread.
Initial Contact: Upon discovering this vulnerability on October 27, 2025, I immediately reached out to Filevine’s security team via email.
November 4, 2025: Filevine’s security team thanked me for the writeup and confirmed they would review the vulnerability and fix it quickly.
November 20, 2025: I followed up to confirm the patch was in place from my end, and informed them of my intention to write a technical blog post.
November 21, 2025: Filevine confirmed the issue was resolved and thanked me for responsibly reporting it.
The Filevine team was responsive, professional, and took the findings seriously throughout the disclosure process. They acknowledged the severity, worked to remediate the issues, allowed responsible disclosure, and maintained clear communication. This is another great example of how organizations should handle security disclosures.
AI legal-tech companies are exploding in value, and Filevine, now valued at over a billion dollars, is one of the fastest-growing platforms in the space. Law firms feed tools like this enormous amounts of highly confidential information.
Because I’d recently been working with Yale Law School on a related project, I decided to take a closer look at how Filevine handles data security. What I discovered should concern every legal professional using AI systems today.
When I first navigated to the site to see how it worked, it seemed that I needed to be part of a law firm to actually play around with the tooling, or request an official demo. However, I know that companies often have a demo environment that is open, so I used a technique called subdomain enumeration (which I had first heard about in Gal Nagli’s article last year) to see if there was a demo environment. I found something much more interesting instead.
I saw a subdomain called margolis.filevine.com. When I navigated to that site, I was greeted with a loading page that never resolved:
I wanted to see what was actually loading, so I opened Chrome’s developer tools, but saw no Fetch/XHR requests (the requests you usually expect to see if a page is loading data). Then I decided to dig through some of the JavaScript files to see if I could figure out what was supposed to be happening. I saw a snippet in a JS file like POST await fetch(${BOX_SERVICE}/recommend). This piqued my interest — recommend what? And what is the BOX_SERVICE? That variable was not defined in the JS file the fetch would be called from, but (after looking through minified code, which SUCKS to do) I found it in another one: “dxxxxxx9.execute-api.us-west-2.amazonaws.com/prod”. Now I had a new endpoint to test; I just had to figure out the correct payload structure for it. After looking at more minified JS to determine the correct structure for this endpoint, I was able to construct a working payload to /prod/recommend:
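A hypothetical reconstruction of that request follows; the payload field name is illustrative, and the API host stays redacted as above:

```sh
curl -s -X POST \
  "https://dxxxxxx9.execute-api.us-west-2.amazonaws.com/prod/recommend" \
  -H "Content-Type: application/json" \
  -d '{"name": "Any Project Name"}'
```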
(the name could be anything of course). No authorization tokens needed, and I was greeted with the response:
At first I didn’t entirely understand the impact of what I saw. No matter the name of the project I passed in, I was recommended the same boxFolders and couldn’t seem to access any files. Then, not yet realizing I had stumbled upon something massive, I turned my attention to the boxToken in the response.
After reading some documentation on the Box API, I realized this was a maximum-access, fully scoped admin token to the entire Box filesystem (like an internal shared Google Drive) of this law firm. This includes all confidential files, logs, user information, etc. Once I was able to prove this had an impact (by searching for “confidential” and getting nearly 100k results back), I immediately stopped testing and responsibly disclosed this to Filevine. They responded quickly and professionally and remediated the issue.
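For a sense of what that token enabled: a fully scoped Box token works against Box’s public REST API, so the kind of search described above is a single request (the token value is a placeholder):

```sh
# Any content visible to the admin-scoped token is searchable enterprise-wide.
curl -s "https://api.box.com/2.0/search?query=confidential&limit=100" \
  -H "Authorization: Bearer <leaked-box-token>"
```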
If someone had malicious intent, they would have been able to extract every single file used by Margolis lawyers — countless records protected by HIPAA and other legal standards, internal memos and payroll data, literally millions of the most sensitive documents this law firm has in its possession. Documents protected by court orders! This could have been a real nightmare for both the law firm and the clients whose data would have been exposed.
To companies who feel pressure to rush into the AI craze in their industry — be careful! Always ensure the companies you are giving your most sensitive information to secure that data.
...
Read the original on alexschapiro.com »
These are GitHub’s standard descriptions of the CVSS base metrics used to score the advisory:

Attack Vector: more severe the more remote (logically and physically) an attacker can be in order to exploit the vulnerability.
Attack Complexity: more severe for the least complex attacks.
Privileges Required: more severe if no privileges are required.
User Interaction: more severe when no user interaction is required.
Scope: more severe when a scope change occurs, e.g. one vulnerable component impacts resources in components beyond its security scope.
Confidentiality: more severe when loss of data confidentiality is highest, measuring the level of data access available to an unauthorized user.
Integrity: more severe when loss of data integrity is the highest, measuring the consequence of data modification possible by an unauthorized user.
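These metrics combine into a CVSS vector string and a numeric base score; for example, the vector CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H corresponds to a base score of 9.8, rated Critical.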
...
Read the original on github.com »
**This project is currently under maintenance and is not accepting new changes.**

- The codebase is in a maintenance-only state
- No new features, enhancements, or pull requests will be accepted
- Critical security fixes may be evaluated on a case-by-case basis
- Existing issues and pull requests will not be actively reviewed

For enterprise support and actively maintained versions, please see [MinIO AIStor](https://www.min.io/product/aistor).
...
Read the original on github.com »
It’s no surprise to see modern AAA games occupying hundreds of gigabytes of storage these days, especially if you are gaming on a PC. But somehow, Arrowhead Game Studios, the developers behind the popular co-op shooter Helldivers 2, have managed to cut the game’s size by a substantial 85%.
As per a recent post on Steam, this reduction was made possible with support from Nixxes Software, best known for developing high-quality PC ports of Sony’s biggest PlayStation titles. The developers achieved this by de-duplicating game data, bringing the size down from ~154GB to just ~23GB and saving a massive ~131GB of storage space.
Originally, the game’s large install size was attributed to optimization for mechanical hard drives since duplicating data is used to reduce loading times on older storage media. However, it turns out that Arrowhead’s estimates for load times on HDDs, based on industry data, were incorrect.
With their latest data measurements specific to the game, the developers have confirmed that the small number of players (11% last week) using mechanical hard drives will see mission load times increase by only a few seconds in the worst cases. Additionally, the post reads, “the majority of the loading time in Helldivers 2 is due to level-generation rather than asset loading. This level generation happens in parallel with loading assets from the disk and so is the main determining factor of the loading time.”
This is a promising development and a nudge to other game developers to take note and make an effort to save precious storage space for PC gamers.
One can access the ‘slim’ version of Helldivers 2 by opting in to the latest beta update via Steam, which is said to functionally offer the same experience as the legacy versions, apart from its smaller installation size. All progression, war contributions, and purchases are also expected to be carried over to the new slim version. There’s also the option to opt out of the beta at any time in case there are any potential issues.
...
Read the original on www.tomshardware.com »