10 interesting stories served every morning and every evening.

The West Forgot How to Build. Now It's Forgetting Code

techtrenches.dev

In 2023, Raytheon’s president stood at the Paris Air Show and described what it took to restart Stinger missile production. They brought back engineers in their 70s to teach younger workers how to build a missile from paper schematics drawn during the Carter administration. Test equipment had been sitting in warehouses for years. The nose cone still had to be attached by hand, exactly as it was forty years ago.

The Pentagon hadn’t bought a new Stinger in twenty years. Then Russia invaded Ukraine, and suddenly everyone needed them. The production line was shut down. The electronics were obsolete. The seeker component was out of production. An order placed in May 2022 wouldn’t deliver until 2026. Four years. Not because of money. Because the people who knew how to build them retired a decade earlier and nobody replaced them.

I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now.

In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. Ukraine was consuming 5,000 to 7,000 rounds per day. Anyone with a calculator could see this wouldn’t work.
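Plugging the paragraph’s own numbers into a few lines makes the mismatch concrete. A back-of-the-envelope sketch, nothing more:

```python
# Figures from the text: the March 2023 EU promise vs. capacity and burn rate.
promised_shells = 1_000_000                    # promised within 12 months
eu_capacity_per_year = 230_000                 # European annual production
daily_use_low, daily_use_high = 5_000, 7_000   # Ukraine's daily consumption

# A full year of EU production covers under a quarter of the promise...
coverage = eu_capacity_per_year / promised_shells
print(f"one year of production covers {coverage:.0%} of the promise")

# ...and even the promised million shells is only months of combat use.
days_low = promised_shells / daily_use_high
days_high = promised_shells / daily_use_low
print(f"one million shells last {days_low:.0f} to {days_high:.0f} days at wartime rates")
```

Roughly 23 percent coverage, and a stockpile that lasts five to seven months at the stated firing rates.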

By the deadline, Europe delivered about half. Macron called the original promise reckless. An investigation by eleven media outlets across nine countries found actual production capacity was roughly one-third of official EU claims. The million-shell mark wasn’t hit until December 2024, nine months late.

It wasn’t one bottleneck. It was all of them. France had halted domestic propellant production in 2007. Seventeen years of nothing. Europe’s single major TNT producer was in Poland. Germany had two days of ammunition stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The entire continent’s defense industry had been optimized for making small batches of expensive custom products. Nobody planned for volume. Nobody planned for crisis.

The U.S. wasn’t much better. One plant in Scranton, one facility in Iowa for explosive fill, no domestic TNT production since 1986. Billions of investment later, production still hadn’t hit half the target.

This wasn’t an accident. In 1993, the Pentagon told defense CEOs to consolidate or die. Fifty-one major defense contractors collapsed into five. Tactical missile suppliers went from thirteen to three. Shipbuilders from eight to two. The workforce fell from 3.2 million to 1.1 million. A 65% cut.

The ammunition supply chain had single points of failure everywhere. One manufacturer for 155mm shell casings, sitting in Coachella, California, on the San Andreas Fault. One facility in Canada for propellant charges. Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.

Then there’s Fogbank. A classified material used in nuclear warheads. Produced from 1975 to 1989, then the facility was shut down. When the government needed to reproduce it for a warhead life extension program in 2000, they discovered they couldn’t. A GAO report found that almost all staff with production expertise had retired, died, or left the agency. Few records existed.

After spending an additional $69 million and years of reverse engineering, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original had contained an unintentional impurity that was critical to its function. That fact existed nowhere in any document. Only the workers who made the original batch knew it, and they had retired years earlier.

A nuclear weapons program lost the ability to make a material it invented. The knowledge existed only in people, and the people were gone.

I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.

In defense, the substitute was the peace dividend. In software, it’s AI.

I wrote about the talent pipeline collapse before. The hiring numbers and the junior-to-senior problem are documented. So is the comprehension crisis. What I didn’t have was the right historical parallel. Now I do.

And it tells you something the hiring data doesn’t: how long rebuilding actually takes.

Every major defense production ramp-up took three to five years for simple systems. Five to ten for complex ones. Stinger: thirty months minimum from order to delivery. Javelin: four and a half years to less than double production. 155mm shells: four years and still not at target despite five billion dollars invested. France only restarted propellant production in 2024, seventeen years after shutting it down.

Money was never the constraint. Knowledge was. RAND found that 10% of technical skills for submarine design need ten years of on-the-job experience to develop, sometimes following a PhD. Apprenticeships in defense trades take two to four years, with five to eight years to reach supervisory competence.

Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can’t be compressed by throwing money at it. It can’t be compressed by AI either.

A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points. When researchers tried to run a follow-up, a significant share of developers refused to participate if it meant working without AI. They couldn’t imagine going back.

The software industry is in year three of the same optimization. Salesforce said it won’t hire more software engineers in 2025. A LeadDev survey found 54% of engineering leaders believe AI copilots will reduce junior hiring long-term. A CRA survey of university computing departments found 62% reported declining enrollment this year.

I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slow. The industry’s answer is predictable: let AI review AI’s code. I’m not doing that. I’ve reworked our pull request templates instead. Every PR now has to explain what changed, why, what type of change it is, with screenshots of before and after. Structured context, so the reviewer isn’t guessing. I’m adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.
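For illustration, a minimal version of that kind of template might look like this (the section names are mine, not our actual template):

```markdown
## What changed
<!-- One or two sentences describing the change itself -->

## Why
<!-- The problem or ticket this solves; link any context the reviewer needs -->

## Type of change
<!-- e.g. bugfix / feature / refactor / dependency bump -->

## Before / after
<!-- Screenshots or sample output demonstrating the behavior change -->
```

The point is that the author, not the reviewer, pays the cost of reconstructing context, which is exactly where AI-generated diffs tend to be thinnest.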

But even that doesn’t solve the deeper problem. The skills you need to be effective now are different. Technical expertise alone isn’t enough anymore. You need people who can take ownership, communicate tradeoffs, and push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.

We document everything. Site Books, SDDs, RVS reports, boilerplate modules with full coverage. It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t matter. Maybe the problem stays manageable. I can’t predict the capabilities of models in 2031.

But crises don’t send calendar invites. Nobody expected a full-scale land war in Europe in 2022. The defense industry had thirty years to prepare and didn’t. Even Fogbank had records. They weren’t enough without the people who understood what they meant.

Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or are developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong.

It’s Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don’t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn’t transfer to the AI.

It just disappears.

The West already made this mistake once. The bill came due in Ukraine.

I know how this sounds. I know I’ve written about the talent pipeline before. The defense example isn’t about repeating the argument. It’s about showing what happens if the industry’s expectations don’t work out. Stinger, Javelin, Fogbank, a million shells nobody could make. That’s the cost of betting wrong on optimization. We’re making the same bet with software engineering right now.

Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t. The defense industry thought peace would last forever, too.


Amateur armed with ChatGPT 'vibe-maths' solves a 60-year-old problem

www.scientificamerican.com

April 24, 2026



An amateur just solved a 60-year-old math problem—by asking AI


A ChatGPT AI has proved a conjecture with a method no human had thought of. Experts believe it may have further uses

By Joseph Howlett edited by Lee Billings


Liam Price just cracked a 60-year-old problem that world-class mathematicians have tried and failed to solve. He’s 23 years old and has no advanced mathematics training. What he does have is a ChatGPT Pro subscription, which gives him access to the latest large language models from OpenAI.

Artificial intelligence has recently made headlines for solving a number of “Erdős problems,” conjectures left behind by the prolific mathematician Paul Erdős. But experts have warned that these problems are an imperfect benchmark of artificial intelligence’s mathematical prowess. They range dramatically in both significance and difficulty, and many AI solutions have turned out to be less original than they appeared.

The new solution—which Price got in response to a single prompt to GPT-5.4 Pro and posted on www.erdosproblems.com, a website devoted to the Erdős problems, just over a week ago—is different. The problem it solves has eluded some prominent minds, bestowing it some esteem. And more importantly, the AI seems to have used a totally new method for problems of this kind. It’s too soon to say with certainty, but this LLM-conceived connection may be useful for broader applications—something hard to find among recently touted AI triumphs in math.


“This one is a bit different because people did look at it, and the humans that looked at it just collectively made a slight wrong turn at move one,” says Terence Tao, a mathematician at the University of California, Los Angeles, who has become a prominent scorekeeper for AI’s push into his field. “What’s beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block.”

The question Price solved—or prompted ChatGPT to solve—concerns special sets of whole numbers, where no number in the set can be evenly divided by any other. Erdős called these “primitive sets” because of their connection to similarly indivisible prime numbers.

“A number is prime if it has no other divisors, and this is kind of generalizing that definition from an individual number to a collection of numbers,” says Jared Lichtman, a mathematician at Stanford University. Any set of prime numbers is automatically primitive, because primes have no factors (except themselves and the number one).

Erdős also came up with the Erdős sum, a “score” you can calculate for any primitive set. He showed that the biggest the sum could be was about 1.6—and conjectured that this value must also hold for the (infinite) set of all prime numbers. Lichtman proved Erdős right as part of his doctoral thesis in 2022.

Erdős also noticed that the score drops if all of a set’s numbers are large—the larger the numbers, the lower the score. He guessed that the lowest this score could be was exactly one, a limit that the score would approach as the set’s numbers approached infinity. Lichtman tried to prove this, too, but got stuck like everyone else before him.
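The article doesn’t spell out the formula, but the “score” in question is the Erdős sum: add up 1/(n log n) over every number n in the set. A small sketch (the function names are mine) that checks the primitivity condition and computes the partial score for the primes below a bound:

```python
from math import log
from itertools import combinations

def is_primitive(numbers):
    """True if no element of the set evenly divides another element."""
    nums = sorted(set(numbers))
    # combinations of a sorted list yields pairs (a, b) with a < b
    return not any(b % a == 0 for a, b in combinations(nums, 2))

def erdos_sum(numbers):
    """The 'score' described above: sum of 1/(n log n), for n > 1."""
    return sum(1 / (n * log(n)) for n in numbers)

def primes_below(bound):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * bound
    sieve[:2] = [False, False]
    for i in range(2, int(bound ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [n for n, is_p in enumerate(sieve) if is_p]

primes = primes_below(100)
assert is_primitive(primes)           # primes never divide one another
assert not is_primitive([2, 4, 7])    # not primitive: 2 divides 4
print(f"partial score for primes < 100: {erdos_sum(primes):.3f}")
```

For the primes below 100 the partial score is already around 1.4, creeping toward the roughly 1.6 ceiling the article mentions; the conjecture Price prompted ChatGPT to settle concerns the opposite end, how low the score can sink when every element of a primitive set is large.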

Price wasn’t aware of this history when he entered the problem into ChatGPT on an idle Monday afternoon. “I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI and seeing what it can come up with,” he says. “And it came up with what looked like a right solution.”

He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge. The duo had jump-started the AI-for-Erdős craze late last year by prompting a free version of ChatGPT with open problems chosen at random from the Erdős problems website. (An AI researcher subsequently gifted them each a ChatGPT Pro subscription to encourage their “vibe mathing.”)

Reviewing Price’s message, Barreto realized what they had was special, and experts whom he notified quickly took notice.

“There was kind of a standard sequence of moves that everyone who worked on the problem previously started by doing,” Tao says. The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.

The raw output of ChatGPT’s proof was actually quite poor. “So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight.

More importantly, they already see other potential applications of the AI’s cognitive leap. “We have discovered a new way to think about large numbers and their anatomy,” Tao says. “It’s a nice achievement. I think the jury is still out on the long-term significance.”

Lichtman is hopeful because ChatGPT’s discovery validates a sense he’s had since graduate school. “I had the intuition that these problems were kind of clustered together and they had some kind of unifying feel to them,” he says. “And this new method is really confirming that intuition.”



Progress Report: Linux 7.0 - Asahi Linux

asahilinux.org

After almost three years of 6.x series kernels, Linux 7.0 is finally here.

That means it’s also time for another Asahi progress report!

Automate Everything

Users of alternate distros and keen-eyed individuals may have noticed some changes to the Asahi Installer. After almost two years, we finally got around to pushing an updated version of the installer to the CDN! Two years is a long time to go between updates, so what took so long?

Our upstream installer package is a little bit of a Rube Goldberg machine. The bulk of the installer is written in Python, with some small Bash scripts to bootstrap it. When you run curl | sh, you’re actually downloading the bootstrap script, which then fetches the actual installer bundle from our CDN. This bundle consists of a Python interpreter and a very stripped-down standard library, a built m1n1 stage 1 binary, and the installer itself.

Until recently, cutting an installer release meant:

- Tagging the installer repo
- Downloading a macOS Python build
- Building m1n1 from a blessed commit
- Bundling Python, m1n1, and the installer
- Uploading the installer bundle to the CDN
- Updating the CDN’s version flag file

This process was time-consuming and required administrative access to the CDN. As a result, we neglected to push installer updates for quite some time; the previous installer tag was from June 2024! As upstreaming work has progressed and Devicetree bindings churned, this became rather problematic for our friends maintaining distros.

The Asahi Installer offers a UEFI-only installation option. This option shrinks macOS and only installs what is necessary to boot a UEFI executable, meaning m1n1 stage 1, the Devicetrees, and U-Boot. This allows users to boot from live media with Asahi support, such as specialised Gentoo Asahi LiveCD images.

Since the Devicetrees on a fresh UEFI-only install come from the installer bundle itself, a kernel will only successfully boot when the installer-bundled Devicetrees match what that kernel expects to see. The two have gotten rather out of sync as time has gone on due to Devicetree bindings changing as a result of the upstreaming process. This situation finally came to a head with kernel 6.18, which required numerous changes to both m1n1 and the Devicetree bindings for the Apple USB subsystem. This made booting kernel 6.18 and above from live media impossible. Oops.

Rather than go through the trouble of manually pushing out another update, we took the opportunity to build some automation and solve this problem permanently.

We moved the manifest of installable images into the asahi-installer-data repo, allowing us to update it independently of the installer codebase. On top of this, we also now deploy the installer using GitHub workflows. Going forward, every push to the main branch of asahi-installer will automatically build the installer and upload it to https://alx.sh/dev. Every tag pushed to GitHub will do the same for https://alx.sh.
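As a sketch of what that kind of automation looks like (this is illustrative only, not the actual asahi-installer workflow; the job names, scripts, and secret are hypothetical):

```yaml
# Illustrative deploy-on-push workflow: pushes to main go to the dev
# channel, tags go to the release channel.
name: deploy-installer
on:
  push:
    branches: [main]   # every push to main -> https://alx.sh/dev
    tags: ['v*']       # every tag -> https://alx.sh
jobs:
  build-and-upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build installer bundle
        run: ./build.sh            # hypothetical build entry point
      - name: Upload to CDN
        run: ./upload.sh "${{ github.ref_type == 'tag' && 'release' || 'dev' }}"
        env:
          CDN_TOKEN: ${{ secrets.CDN_TOKEN }}   # hypothetical secret
```

The win is that release credentials live in CI secrets rather than on one maintainer’s machine, so cutting a release no longer requires administrative access to the CDN.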

The latest version, 0.8.0, bumps the bundled m1n1 stage 1 binary to version 1.5.2, introduces installer support for the Mac Pro, and adds a firmware update mode which ties in nicely with…

How do you overengineer a light sensor?

Basically everything with a screen now comes with some sort of light sensor. This is usually to enable automatic brightness adjustment based on ambient conditions. It’s a very convenient feature in devices like smartphones, where a user may walk outside and find their display too dim to see. The cheapest versions of this use a simple photoresistor. This is fine if the goal is just to change brightness, but brightness is not the only thing affected by ambient lighting conditions. What about colour rendering?

Apple’s devices have had the True Tone display feature for quite some time. This works by measuring both the brightness and the colour characteristics of the environment’s ambient lighting. This data is then used to apply brightness and colour transformations to the display to ensure that it is always displaying content as accurately as possible. This is most noticeable in environments with lighting fixtures that have a low Colour Rendering Index, such as fluorescent tubes or cheap cool white LEDs. The devices that enable this, ambient light sensors, are usually little ICs that connect to the system over I2C or another industry-standard bus. This is fine for basic applications, but this is Apple. There are some other considerations to be had:

- The light sensor is doing stuff whenever the screen is on, so processing its output should be as efficient as possible
- The light sensor should be able to be calibrated for maximum accuracy
- There are multiple models of light sensor in use, and the OS should not have to care too much about that
- The light sensor has to have a three-letter acronym like every other piece of hardware on this platform (ALS)

Naturally, this sounds like a job for the Always-On Processor (AOP)! We’ve had a working AOP+ALS driver set for a while thanks to chaos_princess; however, the raw data the AOP reports back from the ALS is rather inaccurate without calibration. That calibration is a binary blob that must be uploaded to the AOP at runtime. It is essentially firmware. Since we cannot redistribute Apple’s binaries, it must be retrieved from macOS at install time and then stored somewhere the driver knows to look for it.

To achieve this, the Asahi Installer gathers up all the firmware it knows we will need in Linux and stores it on the EFI System Partition it creates. A Dracut module then mounts this to a subdirectory of /lib/firmware/, where drivers can find it. However, issues arise when we need to retrieve more firmware from macOS after Asahi Linux has already been installed. To avoid a repeat of the webcam situation, where users were required to manually do surgery on their EFI System Partition, chaos_princess added the ability for the Asahi Installer to automatically update the firmware package. Starting with ALS, any required firmware updates will be a simple matter of booting into macOS or macOS Recovery, re-running the Asahi Installer, and following the prompts.

To enable ALS support (and to do firmware upgrades in the future), follow these steps:

- Ensure you are running version 6.19 or above of the Asahi kernel
- Ensure your distro ships iio-sensor-proxy as a dependency of your DE (Fedora Asahi Remix does this)

The Free Universal Construction Kit

fffff.at

Ever wanted to connect your Legos and Tinkertoys together? Now you can — and much more. Announcing the Free Universal Construction Kit: a set of adapters for complete interoperability between 10 popular construction toys.

Fig. 1. The Free Universal Construction Kit.


Overview

Video by Riley Harmon for F.A.T. Lab + Sy-Lab.

F.A.T. Lab and Sy-Lab are pleased to present the Free Universal Construction Kit: a matrix of nearly 80 adapter bricks that enable complete interoperability between ten* popular children’s construction toys. By allowing any piece to join to any other, the Kit encourages totally new forms of intercourse between otherwise closed systems—enabling radically hybrid constructive play, the creation of previously impossible designs, and ultimately, more creative opportunities for kids. As with other grassroots interoperability remedies, the Free Universal Construction Kit implements proprietary protocols in order to provide a public service unmet—or unmeetable—by corporate interests.

The Free Universal Construction Kit offers adapters between Lego, Duplo, Fischertechnik, Gears! Gears! Gears!, K’Nex, Krinkles (Bristle Blocks), Lincoln Logs, Tinkertoys, Zome, and Zoob. Our adapters can be downloaded from Thingiverse.com and other sharing sites as a set of 3D models in .STL format, suitable for reproduction by personal manufacturing devices like the Makerbot (an inexpensive, open-source 3D printer).

Motivation

Our kids are already doing it! And when we were growing up, ourselves, we did it too—or we tried to, anyway. Connecting our toys together. Because: what if we want to make a construction which is half-Tinkertoys, half-K’Nex? Why shouldn’t we be able to? We dreamed about this possibility years ago, when we were small, and we knew then, as we know now, that we’d need some adapters to help. The advent of low-cost 3D printing has made such adapters possible, and with it, a vast new set of combinatorial possibilities for children’s creative construction toys.

Opening doors to new creative worlds is one major reason we created the Free Universal Construction Kit. Another is that we believe expertise shouldn’t be disposable — and that children’s hard-won creative fluency with their toys shouldn’t become obsolete each Christmas. By allowing different toy systems to work together, the Free Universal Construction Kit makes possible new forms of “forward compatibility”, extending the value of these systems across the life of a child. Thus, with the Kit’s adapters, playsets like Krinkles (often enjoyed by toddlers) can still retain their use-value for older children using Lego, and for even older tweens using Zome.

The Kit offers a “best of all worlds” approach to play and learning that combines the advantages of each toy system. We selected construction sets for inclusion based on their significant level of market penetration, as well as for the diversity of features they brought to the Kit’s collection. Some of the supported construction systems, for example, offer great mechanical strength, or the ability to build at large scales; others offer the means to design kinetic movements; and still others permit the creation of a wide range of crystallographic geometries and symmetries. Using these classic toys as a foundation, the Free Universal Construction Kit offers a “meta-mashup system” ideally provisioned for the creation of transgressive architecture and chimeric readymades.

Finally, in producing the Free Universal Construction Kit, we hope to demonstrate a model of reverse engineering as a civic activity: a creative process in which anyone can develop the necessary pieces to bridge the limitations presented by mass-produced commercial artifacts. We hope that the Kit will not only prompt people to create new designs, but more importantly, to reflect on our relationship with material mass-culture—and the rapidly-evolving ways in which we can better adapt it to our imaginations.

Download

The Free Universal Construction Kit 3D models are freely available in .STL format from three locations:

Individual adapters from the Free Universal Construction Kit may be downloaded from Thingiverse.com — the world’s foremost website dedicated to the free sharing and remixing of user-created digital design files.

The complete Free Universal Construction Kit can also be downloaded in its entirety*, as a 29MB .zip archive from the F.A.T. Lab web site, here. Note: all units are in inches.

We expect the Kit to be available shortly from The Pirate Bay, as a torrent in TPB’s new “physibles” (physical downloadables) channel.

In addition to the Kit itself, we also offer for download this attractive B1 poster (4.5MB PDF, in two versions: gray background / white background).

Figure 2. The Free Universal Construction Kit adapter matrix. (PDFs: Gray, White)

We (F.A.T. Lab and Sy-Lab) neither sell nor distribute physical copies of the Free Universal Construction Kit. Please do not ask us to do so. Individuals seeking their own physical copies of the Kit, in whole or in part, are encouraged to download our files and reproduce them with open-hardware desktop 3D printers like the Makerbot, RepRap, Ultimaker, or Printrbot. Alternatively, copies for private use may be available from a personal fabrication service bureau; for awesome service, international/anywhere shipping and quick turnaround, we highly recommend Ponoko.com for personalized 3D printing in a wide variety of materials. Shapeways and QuickParts are good, too. You may also find a 3D printer in the architecture, industrial design, and/or mechanical engineering departments of your local university.

Please note that our license for the Free Universal Construction Kit prohibits commercial use of these designs in mass production; note, however, that we encourage individuals to contract with fabrication service bureaus for the creation of personal copies. For more information, see our license and disclaimers, below.

Implementation

The Free Universal Construction Kit com­prises nearly 80 two-way adapters. These al­low each of the dif­fer­ent con­struc­tion toys (Lego, Tinkertoy, Fischertechnik etc.) to in­ter­face with any of the other sup­ported sys­tems. Prior to mod­el­ing, the di­men­sions of the var­i­ous toy con­nec­tors were re­verse-en­gi­neered with an op­ti­cal com­para­tor fit­ted with a dig­i­tal read-out ac­cu­rate to less than one ten-thou­sandth of an inch (0.0001in., or 2.54 mi­crons).

Figure 3. A Bristle Block be­ing mea­sured in the op­ti­cal com­para­tor.

The re­sult­ing pre­ci­sion en­sures that the Free Universal Construction Kit “ac­tu­ally works”, en­abling tight snap-fits be­tween cus­tom and com­mer­cial com­po­nents.

Figure 4. The Kit in use, con­nect­ing four dif­fer­ent sys­tems to­gether.

Below is a par­tial gallery of as­sorted Kit adapters, re­spec­tively com­pat­i­ble with (clockwise from top left): Lego, Zoob, Tinkertoys, and Gears! Gears! Gears!. Click on the im­ages for higher-res­o­lu­tion pho­tographs:

In ad­di­tion to its many one-to-one adapters, the Free Universal Construction Kit also in­cludes a spe­cial fist-sized Universal Adapter Brick which pro­vides con­nec­tiv­ity be­tween all of the sup­ported con­struc­tion sys­tems:

Fig. 9. The Universal Adapter Brick.

Producing physical prints from our provided 3D models prompts certain fabrication considerations. According to Wikipedia, the manufacturing tolerance of Lego pieces is less than 10 microns. As of early 2012, however, standard Makerbot printers have an XY resolution of 100 microns (0.1mm) and a default layer thickness of 360 microns (0.36mm). We thus caution that fabrication of the Free Universal Construction Kit with current (2012-era) solutions for DIY 3D printing, such as the Makerbot, Printrbot or RepRap, may lack the precision required for reliable or satisfactory coupling with standard commercial pieces. A great deal depends on how well-tuned the printer is; thus, your mileage may vary. In any case, we expect this situation will improve gradually, but inexorably, in tandem with improvements to these vibrantly evolving fabrication platforms. The artist's proof shown here was created in a UV-cured white resin using a commercial-grade Objet ("polyjet") 3D printer, which has a horizontal resolution of 42 microns and a layer thickness of 16 microns. Ponoko.com and other private fabrication services offer printing from Objet machines and other high-resolution devices.

Legal and Commercial Implications

Consider the frus­trat­ing ex­pe­ri­ence of pur­chas­ing a new com­puter (a Mac, say) and dis­cov­er­ing that it will not play your aunt’s Windows Media video of your lit­tle cousins. Likewise, imag­ine your aunt’s cor­re­spond­ing an­noy­ance when she finds that her PC will not play the Apple Quicktime video you sent her of your cats. This hu­mil­i­at­ing lit­tle episode isn’t an ac­ci­dent; it’s just a skir­mish in a never-end­ing bat­tle be­tween gi­ant com­mer­cial en­ti­ties, played out, thou­sands of times every day, in ex­actly such mi­cro-pun­ish­ments to cus­tomers like you. If you’re well-in­formed, you may hap­pen to know about VLC — a free, open-source video player, de­vel­oped by in­de­pen­dent hack­ers as a grass­roots rem­edy for ex­actly this prob­lem. Until the ad­vent of ubiq­ui­tous 3D print­ing, soft­ware reme­dies like VLC weren’t read­ily avail­able for hard­ware prod­ucts, like toys. That’s chang­ing.

Today’s man­u­fac­tur­ers have lit­tle or no in­trin­sic mo­ti­va­tion to make their prod­ucts com­pat­i­ble with any­one else’s. Indeed—despite ob­vi­ous ben­e­fits to users every­where—the im­ple­men­ta­tion of cross-brand in­ter­op­er­abil­ity can be nearly im­pos­si­ble, given the tan­gled re­stric­tions of patents, de­sign rights, and trade­marks in­volved in do­ing so. So we stepped up. The Free Universal Construction Kit is the VLC of chil­dren’s play­sets.

As we can see from the ex­am­ple above, in­ter­op­er­abil­ity is a ques­tion of power and mar­ket dom­i­nance. Most mar­ket lead­ers re­gard in­ter­op­er­abil­ity as an anti-com­pet­i­tive nui­sance, a reg­u­la­tory check on their am­bi­tion, or a con­ces­sion to the whin­ing of lesser play­ers. Quite sim­ply, in­ter­op­er­abil­ity is the re­quest of the dis­en­fran­chised. And which end-user, in so many ways, is less en­fran­chised than a pre­lit­er­ate child?

The sim­ple fact is that no toy com­pany would ever make the Free Universal Construction Kit. Instead, each con­struc­tion toy wants (and in­deed, pre­tends) to be your only play­set. Within this world­view, the other man­u­fac­tur­ers’ con­struc­tion sets are just so many ele­phants in the room, com­pet­ing for your at­ten­tion on the shelves of Toys-R-Us. No longer. The Free Universal Construction Kit pre­sents what no man­u­fac­turer could: a rem­edy pro­vid­ing ex­ten­si­ble, post-facto syn­tac­tic in­ter­op­er­abil­ity for con­struc­tion toys. Let the fun be­gin!

Some may ex­press con­cern that the Free Universal Construction Kit in­fringes such cor­po­rate pre­rog­a­tives as copy­right, de­sign right, trade dress, trade­marks or patents of the sup­ported toy sys­tems. We en­cour­age those ea­ger to en­force these rights to please think of the chil­dren (or per­haps the Streisand ef­fect) — and we as­sert that the home print­ing of the Free Universal Construction Kit con­sti­tutes pro­tected fair use. Simon Bradshaw et al., writ­ing in “The Intellectual Property Implications of Low-Cost 3D Printing”, con­clude that the pub­lic is legally al­lowed to make 3D prints that mate with pro­pri­etary parts, es­pe­cially in cases (the “Must Fit Exception”) where a piece’s shape “is de­ter­mined by the need to con­nect to or fit into or around an­other pro­duct”:

"Even where a registered design is copied via a 3D printer this would not be an infringement if it were done 'privately and for purposes which are not commercial'. Both criteria must be met; it is insufficient that copying is not done for profit. Purely personal use of a 3D printer to make items will thus not infringe a registered design."

In fact, the Free Universal Construction Kit deliberately avoids patent infringement. Part of our strategy for doing so is our choice to support older ("classic") playsets: of the ten toy systems supported by the Kit, eight are no longer protected by active (20-year) patents. To take a few examples: Lego was patented in 1958; Lincoln Logs, in 1920; and Tinkertoys, in 1932. There are, however, two instances in which toy systems nominally supported by the Kit are still protected (as of this writing) by active patents: Zoob (patented 1996) and ZomeTool (patented 2002). For the Zoob and Zome systems, please note that we have delayed the release of pertinent adapter models until December 2016 and November 2022, respectively.

The Free Universal Construction Kit is sim­ply one “toy” il­lus­tra­tion of a com­ing grass­roots rev­o­lu­tion, in which every­day peo­ple can—with desk­top tools—over­come ar­bi­trary re­stric­tions in mass-man­u­fac­tured phys­i­cal cul­ture. The bur­geon­ing pos­si­bil­ity of freely shared down­load­able adapters has sig­nif­i­cant im­pli­ca­tions for in­dus­tries where the at­tempt to cre­ate “tech­no­log­i­cal lock-in” is a com­mon busi­ness prac­tice. For more on this sub­ject, and the le­gal hori­zons of re­pro­duc­ing com­mer­cial prod­ucts with home fab­ri­ca­tion sys­tems, please see:

Bradshaw, Simon; A. Bowyer and P. Haufe, “The Intellectual Property Implications of Low-Cost 3D Printing”, 7:1 SCRIPTed 5, 2010.

de Bruijn, Erik. “Fab It Yourself: Adapters & Consumer Lock-In”. Blog.erikdebruijn.nl, 13 September 2010.

Hanna, Peter. “The next Napster? Copyright ques­tions as 3D print­ing comes of age”. Arstechnica.com, April 2011.

Ross, Valerie. “Can You Patent a Shape? 3D Printing on Collision Course With Intellectual Property Law”. Discover Magazine, 7 April 2011.

Weinberg, Michael. “3D Printing Settlers of Catan is Probably Not Illegal: Is This a Problem?”. PublicKnowledge.org, 28 January 2011.

Weinberg, Michael. “It Will Be Awesome if They Don’t Screw it Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology”. PublicKnowledge.org, 10 November 2010.

In ad­di­tion to the writ­ers above, we tip our hats to Thingiverse user Zydac, whose re­lated pro­ject (a Duplo-to-Brio track adapter) led us to these le­gal writ­ings; to Andrew Plumb (Clothbot) who has probed the le­gal and prac­ti­cal im­pli­ca­tions of Lego-compatible bricks for some time; and to Daan van den Berg, who has ex­plored 3D-printed remixes of branded forms as a mode of crit­i­cal artis­tic prac­tice.

License and Disclaimers

The Free Universal Construction Kit and its as­so­ci­ated me­dia are li­censed un­der and sub­ject to the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License (http://​cre­ativecom­mons.org/​li­censes/​by-nc-sa/​3.0/​legal­code). The of­fi­cial URL for the Free Universal Construction Kit is https://​fffff.at/​free-uni­ver­sal-con­struc­tion-kit. You are free to copy, dis­trib­ute and trans­mit the Kit, and to remix and/​or adapt the Kit; in do­ing so, you must at­tribute the Kit to “F.A.T. Lab and Sy-Lab”, and in­clude a link to the pro­ject us­ing the URL above. We es­pe­cially wel­come ex­ten­sions to the Kit which pro­vide com­pat­i­bil­ity with as-yet-un­sup­ported play sys­tems. Please note that ex­ten­sions to the Kit re­quire the same or sim­i­lar li­cense. You may not use the Kit in com­mer­cial mass pro­duc­tion; how­ever, we per­mit in­di­vid­u­als to con­tract with fab­ri­ca­tion ser­vice bu­reaus (e.g. Ponoko, Shapeways, etc.) for per­sonal copies.

Lego®, Duplo®, Fischertechnik®, Gears! Gears! Gears!®, K’Nex®, Krinkles®, Bristle Blocks®, Lincoln Logs®, Tinkertoys®, Zome®, ZomeTool® and Zoob® are trade­marks of their re­spec­tive own­ers. The Free Universal Construction Kit is not as­so­ci­ated or af­fil­i­ated with, or en­dorsed, spon­sored, cer­ti­fied or ap­proved by, any of the fore­go­ing own­ers or their re­spec­tive prod­ucts.

We are not a com­mer­cial com­pany; we are artists, hack­ers and ac­tivists. The Kit is not a prod­uct; it is a provo­ca­tion. F.A.T. Lab and Sy-Lab, in co­op­er­a­tion with Adapterz LLC, (1) per­form solely the ser­vice of pub­lish­ing the Free Universal Construction Kit, (2) do not par­tic­i­pate in any pro­duc­tion, pub­lic man­u­fac­ture or sale of the items dis­played here, and (3) of­fer no opin­ion, war­ranty or rep­re­sen­ta­tion as to the safety, qual­ity or func­tion­al­ity of the Kit. The F.A.T. Lab, Sy-Lab and Adapterz LLC there­fore of­fer no war­ranty of any kind, ex­press or im­plied.

Please cite the Free Universal Construction Kit, and/​or this ar­ti­cle, as fol­lows:

Free Art and Technology [F.A.T.] Lab and Sy-Lab. “The Free Universal Construction Kit.” Fffff.at, 20 March 2012. <https://​fffff.at/​free-uni­ver­sal-con­struc­tion-kit>.


WARNING: CHOKING HAZARD!

Small parts. Not for chil­dren un­der 3 years.

Credits, Contact and Acknowledgements

For press or other inquiries about the Free Universal Construction Kit, please contact info@adapterz.org. The Kit was conceived and developed by the F.A.T. (Free Art and Technology) Lab in collaboration with Sy-Lab, and is represented, for legal purposes, by Adapterz, LLC. The Kit's "advertisement" video was created by Riley Harmon.

The cre­ators ex­press grat­i­tude to: our fam­i­lies; our lawyers; the chil­dren ap­pear­ing in our demon­stra­tion video, and their fam­i­lies; Jean Aw, Eric Brockmeyer, David Familian, Andy Flowers, Michael Joaquin Grey, Mark Gross, Riley Harmon, Marcie and Lawrence Hayhurst, Allie Oswell, Eric Paulos, Bre Pettis, Kent Sheely, Michael Weinberg, and the STUDIO for Creative Inquiry. The Kit files are sport­ingly hosted by Thingiverse.com.

Keywords

Toys, kits, construction sets, construction toys, construction systems, Lego, Duplo, Fischertechnik, Gears! Gears! Gears!, K'Nex, Krinkles, Bristle Blocks, Lincoln Logs, Tinkertoys, Zome, ZomeTool, Zoob, Constructivist learning, play, connectors, adaptors, adapter piece, adapter brick, adapters, universal translator, gender changer, modularity, interoperability, interoperability remedy, compatibility layer, technological lock-in, post-facto plug-and-play syntactic interoperability, shim, computer aided design, 3D models, STL files, physibles, rapid prototyping, 3D printing, Makerbot, RepRap, Printrbot, Thingiverse, Ponoko, F.A.T. Lab, Sy-Lab, fair use, remix, hybrid, mashup.

The com­mons and the pub­lic good are con­tin­u­ally threat­ened by nar­row in­ter­ests seek­ing pri­vate gain. Please con­tinue to sup­port and pro­tect the free, open, and non-pro­pri­etary ex­change and de­vel­op­ment of ideas and in­for­ma­tion on­line.


It's OK to Use Coding Assistance Tools To Revive The Projects You Never Were Going To Finish

blog.matthewbrunelle.com

Note: I initially drafted this before my last post on how Claude Code is getting worse. I'm putting it out now so I can reference it in a future post on OpenCode. As you can imagine, my opinion on Claude Code has shifted since I wrote this.

Long ago I at­tempted a per­sonal pro­ject, but never fin­ished due to life be­ing busy. [1] Sort of like the Japanese word Tsundoku, for the pile of books you in­tend to even­tu­ally read one day. We all have these pro­jects and they are good can­di­dates for test­ing out AI cod­ing as­sis­tance. After all, they were never go­ing to get done any­way.

The POC I put together was a shim between YouTube Music and the OpenSubsonic API. Explaining OpenSubsonic could be its own article, but for our purposes it's an API contract for nicely decoupling music streaming clients and servers. You can pick your own options for both. In my case I like Navidrome for the server, Feishin for desktop, and, as I mentioned in my post on GrapheneOS, Symfonium for Android.

Anyways, the shim made YouTube Music con­form to the API so I could add it to any of my clients. Under the hood I used yt­mu­s­i­capi for meta­data lookup and pro­gram­mat­i­cally called yt-dlp to stream the mu­sic. Getting ba­sic stream­ing work­ing was pretty sim­ple. However, there was a long tail im­ple­ment­ing all the end­points in a con­for­mant way. Then as al­ways, there were new shiny pro­jects that stole my at­ten­tion away. Like that em­bed­ded rust lo­ca­tion pro­ject I promise I’ll fin­ish at some point. Maybe.

Luckily, noth­ing was re­ally novel in that stream­ing pro­ject, and there is a clear spec to im­ple­ment which is per­fect for as­sisted cod­ing. So a month and a half ago I thought I would test Claude Code with Opus 4.6 and see how it did im­ple­ment­ing the pro­ject from scratch. After all, they gave me a free $50 in credit, so I might as well.

The setup

Since I had al­ready writ­ten a proof of con­cept by hand, I had my own opin­ions about the im­ple­men­ta­tion and lay­ing all of that out be­fore­hand con­strained the tool in a nice way.

I did the fol­low­ing:

Created a uv pro­ject with fastapi, py­dan­tic, yt­mu­s­i­capi and yt-dlp as de­pen­den­cies.

Changed main.py to the ex­am­ple FastAPI main file.

Dropped the ope­napi spec for OpenSubsonic in the folder.

Added a brief de­scrip­tion in a readme file:

"This project acts as a shim, exposing YouTube music as an opensubsonic client. It uses fastapi for its server with pydantic, ytmusicapi for metadata and yt-dlp for streaming."

open­sub­sonic docs are avail­able at: https://​ex­am­ple.docsy.dev/​docs/​ref­er­ence/

The ope­napi spec is in ope­napi.json.

Added an empty TODO file.

Generated a CLAUDE.md file us­ing /init.

I also of­ten add a sec­tion like this to the CLAUDE.md file:

## Conventions

- Methods should have type an­no­ta­tions for args and re­turns as well as doc­strings.

- Use Pydantic for data mod­el­ing. Use mod­ern Pydantic V2 con­ven­tions.

- Doc strings should use the Google style for­mat with an args and re­turns sec­tions.

- Write unit tests with modern pytest style, e.g. top-level methods using `assert` and fixtures.

That’s mostly based on past ex­pe­ri­ence for what I have to re­peat­edly ask Claude Code not to do.
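For readers who haven't pinned down conventions like these before, here is a minimal sketch of what they produce in practice. The `Song` model and `format_duration` helper are hypothetical examples, not code from the project:

```python
from pydantic import BaseModel


class Song(BaseModel):
    """A minimal song record, modeled with Pydantic V2.

    Attributes:
        id: The YouTube Music video ID.
        title: The song title.
        duration_seconds: Track length in seconds.
    """

    id: str
    title: str
    duration_seconds: int


def format_duration(song: Song) -> str:
    """Render a song's duration as M:SS.

    Args:
        song: The song whose duration should be formatted.

    Returns:
        The duration as a minutes:seconds string, e.g. "3:05".
    """
    minutes, seconds = divmod(song.duration_seconds, 60)
    return f"{minutes}:{seconds:02d}"


def test_format_duration() -> None:
    """Modern pytest style: a top-level function with a bare assert."""
    song = Song(id="abc123", title="Example", duration_seconds=185)
    assert format_duration(song) == "3:05"
```

Spelling this out once in CLAUDE.md is cheaper than re-requesting type annotations and docstrings on every change.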

I’ve bun­dled up this start­ing point into a git repos­i­tory in case any­one else wants to try the ex­per­i­ment.

Implementing the MVP

With that setup done, I let Claude kick things off. The work­flow I typ­i­cally use is:

Enter plan mode.

Prompt for the next piece of work.

After get­ting the ini­tial plan, look for gaps / prob­lems and ask fol­low up ques­tions un­til I like the plan.

Provide links to re­sources when Claude is off.

Ask Claude to use the search tool to fig­ure out what is id­iomatic when there are mul­ti­ple op­tions and it is un­clear to me which to take.

Use "Accept and clear context".

Repeat.

The first prompt I used was:

Have a look at the ope­napi.json file. This is a spec for the open­sub­sonic api. Implement an async fastapi server that stubs out all of the meth­ods. There are both older xml end­points and newer style json end­points. You only need to han­dle the newer json end­points.


For this kind of change I like to clear context after implementing and then ask a follow-up question:

I im­ple­mented stubbed ver­sions of all the meth­ods spec­i­fied in ope­napi.json. Double-check they are cor­rect.


Even with a spec, Claude Code makes mis­takes the first time, but then will catch them (mostly) the sec­ond time through.

Also, af­ter im­ple­ment­ing larger changes, I like to re-run /init to up­date the CLAUDE.md file to cover the new pieces.

The next ma­jor prompt was:

The meth­ods for all end­points are stubbed out now. I want to con­nect a sub­sonic client, search for a song, and stream it to the client. What is the min­i­mum amount of func­tion­al­ity needed to im­ple­ment that? Use yt­mu­s­i­capi for search­ing YouTube mu­sic and yt-dlp for stream­ing.


I got an implementation that looked reasonable pretty quickly, but it fell over when trying to actually connect with Feishin. At that point I iterated by testing the client and handing the server request logs to Claude Code. Even with a spec there are details that are not spelled out clearly, like how endpoints may have a .view suffix that needs to be stripped. Every time there was an error I generated new unit tests to cover it.

I was shocked to hear the audio streaming through Feishin after only a couple of iterations. The main issues involved stubbed endpoints returning nothing. They mostly had to be updated to return empty, but correctly structured, responses.

Just getting an MVP is the easy part though; it's not that far beyond what I implemented in my POC.

Working through the long tail

The rest of the work was the less interesting drudgery required to make the project actually usable. From the docs, OpenSubsonic has ~80 endpoints spread over 15 different categories.

For the MVP use case I only had to sup­port:

getLicense, getUser, getGenres and getMusicDirectories with empty, but valid, collections.

getSong as a pass-through that returned the ID in the query params and default values.

search3 with a very basic ytmusicapi call.

stream with a yt-dlp call wrapped in an asyncio.to_thread to extract the URL for the "bestaudio" format.

getCoverArt with a call to yt-dlp to extract the cover image URL.

To sup­port the full func­tion­al­ity of a sub­sonic client I:

Added simple in-memory caching for ytmusicapi calls to avoid hitting usage limits.

Used sqlite for storing music metadata and implemented all the endpoints in the browsing category, even getTopSongs by querying for the top songs list.

Saved the song to disk as it streamed, to avoid redownloading songs. I had to add handling to clean up the incomplete file when a client disconnects from the stream endpoint before the file is fully downloaded.
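The disconnect-cleanup logic in that last item can be sketched like this. The chunk source is injected here for clarity (in the real service it would come from the upstream HTTP stream), and all names are illustrative:

```python
import os
import tempfile
from collections.abc import AsyncIterator

CACHE_DIR = "cache"


async def stream_and_cache(
    chunks: AsyncIterator[bytes], song_id: str
) -> AsyncIterator[bytes]:
    """Relay audio chunks to the client while saving them to disk.

    Bytes go to a temporary file first and are only moved into the
    cache when the stream finishes, so a client that disconnects
    mid-song never leaves a truncated file behind.
    """
    os.makedirs(CACHE_DIR, exist_ok=True)
    fd, tmp_path = tempfile.mkstemp(dir=CACHE_DIR)
    complete = False
    try:
        with os.fdopen(fd, "wb") as tmp:
            async for chunk in chunks:
                tmp.write(chunk)
                yield chunk
        complete = True
        os.replace(tmp_path, os.path.join(CACHE_DIR, f"{song_id}.audio"))
    finally:
        # Runs both on normal completion and when the framework closes
        # the generator after a client disconnect.
        if not complete and os.path.exists(tmp_path):
            os.remove(tmp_path)
```

Writing to a temp file and renaming on completion means the cache only ever contains whole songs, with no separate bookkeeping of which downloads were interrupted.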

I knew all these things had to be done to make my own POC more us­able, and I could have done them, but never did. At the same time, since I never planned to re­lease any­thing I ab­solutely skipped the hard bits around au­then­ti­ca­tion.

All together I was able to get a working service that I could connect to from a subsonic client in a short evening. In the end I dubbed the project "Sub-standard".

Is this good?

I don’t want to sound like an AI cod­ing as­sist booster. I still have fears around deskilling from re­ly­ing on these tools too much. That’s why I still bang my head against the wall try­ing to learn Rust.

In my mind there are different buckets for personal projects. One is things I do to learn and grow, and the other is things I really wish existed. [2] This kind of project falls into the second bucket. Using AI coding assist to reify those projects is sort of a form of wish fulfillment. I never would have gotten to it, but now I can have the project. One less metaphorical book sitting unread on the bookshelf.

In the end I think the im­por­tant thing is not whether you are do­ing pro­jects in bucket 2, but whether you are also still do­ing the stretch pro­jects in bucket 1.

Or at least that is the ex­cuse I tell my­self. ↩︎


Also other buck­ets, I don’t want to im­ply those are the only two. ↩︎


EU Age Control: The trojan horse for digital IDs

juraj.bednar.io

Most peo­ple think EU Age Control apps are about iden­ti­fy­ing users. The sales pitch is all zero-knowl­edge proofs of age. You prove you’re over 18 with­out the site learn­ing your name, ex­act birth­day or any­thing that can link one proof to an­other.

Before going further, it is worth laying out three separate problems this post is worried about. They are easy to blur, but they are very different. First: the DSA fallback — platforms don't actually need the privacy-preserving wallet; the rules let them use a normal KYC provider instead. Second: attestation lock-in — Google and Apple decide what software runs on the phones that can use this system. Third: the system itself is weaker than advertised — the cryptography the reference app actually ships is not the cryptography the marketing describes, unlinkability depends on wallet behavior, not math, and there is a whole class of relay attacks the protocol cannot stop. When commentators wave away "the hacks," they usually mean bugs in the mock-up.

It is also worth asking when this app started being described as just a "reference implementation" or a "white-label demo." The README tells a story. On 12 May 2025, a disclaimer appeared framing the project as an "Age Verification Solution Toolbox" that Member States are expected to build on. On 31 July 2025, further softening was added — language explicitly calling the app a white-label reference for countries to adapt — and in the exact same edit, the earlier, blunter disclaimer (which said this was an initial version not intended for production) was quietly removed. In any case, it was always presented as a toolbox that countries should adapt into their own apps — so judging the app by itself does not make much sense; it depends on how these techniques are implemented in each country's verification app. There will be no single EU app, despite what the EU's honchos say.

The DSA fall­back no­body talks about

Big platforms must verify age for certain content. They can use the fancy EU wallet with its privacy features. They can also just plug in a normal KYC provider that scans your full passport, runs liveness checks and sees everything. Which path do you think most companies will actually take when the "privacy-preserving" option requires integrating with systems that barely exist yet across 27 countries?

It’s mar­ket­ing sleight of hand. They push the pri­vacy an­gle hard while the rules qui­etly al­low the non-pri­vate fall­back. The pri­vacy part is op­tional. (I think they mainly know the apps will not be ready by the end of the year).

KYC com­pa­nies have been avoid­ing real elec­tronic IDs for years. I have a Slovak eID chip that’s been in my wal­let for­ever. It has proper cryp­to­graphic keys and can prove who I am far more cleanly than a photo of my dri­ver’s li­cense plus video call. Yet al­most every KYC provider still does the bitmap and live­ness rou­tine. The rea­son is sim­ple. Integrating with 27 dif­fer­ent na­tional eID sys­tems is a night­mare. Maintaining a data­base of what every coun­try’s phys­i­cal ID looks like is cheaper and works every­where. The cryp­to­graphic route does­n’t — in prac­tice, not in the­ory.

So the EU solution only "works" if platforms decide to do all that integration work themselves. Right now the official trusted list has zero production apps. The reference implementation is still half-baked. Believing this turns into clean interoperability across all EU countries by the end of 2026 is wishful thinking.

How ver­i­fi­ca­tion ac­tu­ally works

The main high-assurance path in the reference app uses an NFC passport. You scan the MRZ (the machine-readable zone at the bottom of the photo page); its contents derive the keys needed to read and decrypt the data on the NFC chip. That chip contains signed data, including a JPEG photo of the holder. The design calls for a live photo to be taken and matched locally against the chip's JPEG — this is intended to stop a kid scanning a parent's passport to get a credential for themselves.
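Concretely, those "keys" come from three MRZ fields under ICAO 9303 Basic Access Control (newer chips use the stronger PACE protocol, but the idea is the same: what you can photograph is what unlocks the chip). Below is a simplified sketch of the seed derivation only, not a full BAC implementation; the real protocol goes on to derive session keys and authenticate to the chip:

```python
import hashlib


def check_digit(field: str) -> str:
    """ICAO 9303 check digit: weights 7, 3, 1 repeating, summed mod 10."""

    def value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch == "<":  # filler character counts as zero
            return 0
        return ord(ch) - ord("A") + 10  # A=10 ... Z=35

    weights = (7, 3, 1)
    return str(sum(value(c) * weights[i % 3] for i, c in enumerate(field)) % 10)


def bac_seed(document_number: str, birth_date: str, expiry_date: str) -> bytes:
    """Derive the 16-byte BAC key seed from the three MRZ fields.

    Dates are YYMMDD strings; the document number is padded to nine
    characters with '<'. The chip's read keys are then derived from
    this seed (that further key derivation is omitted here).
    """
    doc = document_number.ljust(9, "<")
    mrz_information = (
        doc + check_digit(doc)
        + birth_date + check_digit(birth_date)
        + expiry_date + check_digit(expiry_date)
    )
    return hashlib.sha1(mrz_information.encode("ascii")).digest()[:16]
```

With the spec's well-known sample fields (document number `L898902C`, birth date `690806`, expiry `940623`) the check digits come out to 3, 1 and 6. The point: everything needed to read the chip is printed on the data page itself.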

The app is open-source so you can read every line. But chang­ing even one bit would break the hard­ware at­tes­ta­tion once at­tes­ta­tion is ac­tu­ally en­forced by na­tional de­ploy­ments. In the cur­rent ref­er­ence code, at­tes­ta­tion ver­i­fi­ca­tion is not wired up on the server side — it is a promise na­tional de­ploy­ments would need to add. The bi­nary must ul­ti­mately match ex­actly what Google or Apple signed. No GrapheneOS, no cus­tom Linux phones.

Attestation locks it down. It is the same EU that hates these American cor­po­ra­tions and wants EU al­ter­na­tives for every­thing — yet no one can make a phone us­able for age ver­i­fi­ca­tion with­out the bless­ing of Google (or Apple, who does not cer­tify third-party de­vices for iOS at all). Bought a Huawei phone that does not pass Play Integrity? Sorry. Note: Huawei phones can pro­duce hard­ware at­tes­ta­tion via their fac­tory key chain, but they can­not pass Google’s Play Integrity ver­dict — the same ap­plies to GrapheneOS, Linux phones, and any­thing out­side the Google bless­ing. Use a Daylight com­puter that does­n’t wreck your cir­ca­dian rhythm? Back to the of­fice.

There is a sim­pler MRZ-only path in the ref­er­ence app where you pho­to­graph an ID card with no NFC read or face match. Real na­tional apps may not sup­port it, and the ref­er­ence rec­om­mends the high-as­sur­ance path. Countries will prob­a­bly force the chip-based route. It’s a tro­jan horse to dig­i­tal ID any­way.

The mar­keted crypto and the shipped crypto are not the same thing

The public story is built around zero-knowledge proofs. The reference Android app doesn't actually use zero-knowledge crypto in the flow that runs. It uses an older ISO standard (ISO 18013-5 mdoc with ES256) where each attribute is signed in advance and the wallet reveals only the ones asked for, hiding the rest using salted-digest commitments. A ZK library is pulled in, but nothing in the presentation path ever calls it. So when people cite "ZK age proofs" as the innovation, they are citing something that is in the repo but is not switched on. Whether national apps eventually turn it on is an open question. Today's reference is plain signatures.
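A toy sketch of how those salted-digest commitments work. The attribute names and the JSON encoding are illustrative (the real mdoc uses CBOR structures covered by an ES256 signature), but the reveal-and-recompute mechanics are the same idea:

```python
import hashlib
import json
import secrets


def commit(salt: bytes, name: str, value: str) -> bytes:
    """Salted digest over one attribute (illustrative encoding, not
    the exact CBOR structure ISO 18013-5 uses)."""
    return hashlib.sha256(salt + json.dumps([name, value]).encode()).digest()


# Issuer side: sign the digests of every attribute, not the values.
attributes = {"age_over_18": "true", "nationality": "SK", "birth_date": "1990-01-01"}
salts = {name: secrets.token_bytes(16) for name in attributes}
signed_digests = {commit(salts[n], n, v) for n, v in attributes.items()}
# In the real mdoc this digest set is what the issuer's signature
# covers; here we treat the set itself as "signed".

# Wallet side: reveal only the requested attribute, with its salt.
disclosed = ("age_over_18", attributes["age_over_18"], salts["age_over_18"])

# Verifier side: recompute the digest and check it is covered.
name, value, salt = disclosed
assert commit(salt, name, value) in signed_digests
# The undisclosed attributes stay hidden: without their salts, their
# digests reveal nothing about nationality or birth date.
```

Note there is no zero-knowledge machinery here at all: hiding comes from withholding salts, and the whole presentation is tied to one issuer signature.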

The cryptography could be solid — zero-knowledge proofs over passport signatures are a real and tractable thing. But the crypto actually shipping in the current reference is the older plain-signature format with disposable-batch unlinkability, not ZK. So when people defend "the math works," they are defending math that is not turned on. Although if you use each signed attestation only once, it only reveals that you are over 18 and, from the signature, perhaps who issued the attestation. There's no unique identifier.

What’s pri­vate and what is­n’t

The over­all flow is lo­cal-first, but still needs a server to is­sue cre­den­tials. Scanning and ini­tial checks hap­pen on the phone. Because the app is (or would be) at­tested, the is­su­ing server can be rea­son­ably con­fi­dent what ex­act code ac­tu­ally ex­e­cuted. The server ver­i­fies the doc­u­ment sig­na­tures and is­sues a signed cre­den­tial. That cre­den­tial can then be used to pro­duce a proof of age when talk­ing to web­sites.

From the verifier's (say, a porn site or social media platform) point of view it looks unlinkable — as long as the wallet behaves. The design is not "the math guarantees two proofs can't be correlated." The design is "the wallet hands out a pile of disposable credentials, uses each one once, then asks for more." If the wallet obeys that rule, two verifiers see two different signatures and can't tie them together. If the wallet cheats, or if a proof is replayed, the two verifiers see the same signature bytes, and the linkage is trivial. This is an important nuance — the usual "ZK = math = unlinkable forever" pitch doesn't apply here. The property holds because the wallet is supposed to rotate credentials, not because the cryptography makes reuse impossible. Real cryptographic unlinkability schemes like BBS+ or CL signatures would produce uncorrelated proofs even on reuse. This is not that.
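The difference between "unlinkable by wallet discipline" and "unlinkable by math" can be made concrete in a few lines. Random bytes stand in for the distinct issuer signatures in a disposable batch; everything here is a simplification for illustration:

```python
import hashlib
import secrets

# A batch of disposable credentials: in reality each is a distinct
# issuer signature over the same "over 18" claim.
batch = [secrets.token_bytes(64) for _ in range(30)]


def present(credential: bytes) -> bytes:
    """What a verifier ultimately stores: a fingerprint of the exact
    signature bytes it was shown."""
    return hashlib.sha256(credential).digest()


# Obedient wallet: a fresh credential per verifier -> no correlation.
seen_by_site_a = present(batch[0])
seen_by_site_b = present(batch[1])
assert seen_by_site_a != seen_by_site_b

# Cheating or replaying wallet: same credential twice -> two sites can
# compare notes and link the sessions trivially.
replay_a = present(batch[2])
replay_b = present(batch[2])
assert replay_a == replay_b
```

Nothing cryptographic prevents the second case; only the wallet's rotation policy does.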

From the issuer's point of view — they issue credentials when you present your ID. The issuer doesn't know what you'll use the credential for, or how many times you'll use it — the one-use rule lives inside the wallet, not on the server. So if the wallet is modified, or if proofs are captured and replayed, nobody upstream sees it. Any "rate limit" you might imagine is a limit on how many credentials you mint, not on how many times a credential is used in the wild.

They can of course in­fer that you are an EU coun­try cit­i­zen. But they can’t (under nor­mal wal­let be­hav­ior) tell which ac­counts are yours or link your ac­tiv­ity across sites.

What about re­lay at­tacks?

Here’s a sce­nario the spec does­n’t re­ally an­swer. Suppose a child wants to get into an age-gated site. A ser­vice pops up — call it Grandma-as-a-Service — that of­fers to ver­ify on their be­half for a few eu­ros. The child opens the site, gets a QR code or a link, and in­stead of scan­ning it them­selves they paste it into the proxy ser­vice. The proxy for­wards it to a real adult some­where with a real, gov­ern­ment-is­sued wal­let on a clean phone. The adult ap­proves. The adult’s wal­let pro­duces a cryp­to­graph­i­cally per­fect over 18” proof. The site sees a valid proof and lets the child in.

Nothing failed. Every signature is real, every attestation is real, the adult really is over 18, the wallet really is running unmodified on a genuine Android. The catch is that the protocol binds the proof to "some wallet somewhere said yes," not to "the human at this browser right now." There is no proximity check. The browser-side Digital Credentials API partially closes this — but only when the user verifies on the same phone they're browsing from. QR codes and deep links, which work across devices, are wide open.

People as­sume Google’s Play Integrity would stop this. It does­n’t. Play Integrity at­tests what code is run­ning on what de­vice. It says noth­ing about who is in front of it or where the de­vice is. In the proxy flow, the adult’s phone is a real phone and every at­tes­ta­tion is real. The re­lay — the web ser­vice the child talks to — is­n’t be­ing at­tested; it’s just mov­ing bytes.

And once an adult is enrolled, the resale version gets ugly. The wallet has thirty disposable credentials, refreshed on a short interval. The issuer never sees how those get used. So the proxy operator can reuse each credential across many children; nothing upstream raises an alarm. The “one-time use” rule is an honor-system rule inside the wallet software, not something the issuer can enforce after the fact. This is not a bug that “production apps will fix.” It’s inherited from the shape of the protocol, so it will be present in all 27 national apps.

In any case, this is the trojan horse. Start with “protect the children from porn and scary social media.” Create enough friction that people reach for the convenient attested wallet. The app itself must be attested — which in practice means Google or Apple decide what runs. The credential can be killed by the issuer.

The reference app leaks face photos, although only locally. Twenty-seven countries will each build their own version. With their own privacy bugs.

Then you get a chilling effect. Every controversial site that makes you pull out the wallet creates self-censorship, even if the proof is supposedly anonymous. Governments have a terrible track record protecting this data. Any data. History is full of examples.

(Want to watch porn? Criticize a politician? Are you really going to open your EU country’s ID app to verify that you are over 18 and believe it’s an unlinkable ZK proof — even if it really is?)

Later they link it to the Digital Euro and everything else. Suddenly a big chunk of your life can be switched off remotely. Didn’t pay a parking ticket on time? Let’s temporarily revoke your credentials — when you can’t log in anywhere, you will come and pay the ticket.

The architecture and politics are the usual control layer with fresher paint. We don’t need revocable digital IDs as the price of entry to the internet. We were doing just fine.

Are the published hacks real?

It’s worth splitting the reported problems into two piles. Pile one: bugs in the “mock-up” — leaked files, unchecked MRZ scans, Chrome-extension demos hitting a placeholder backend. These are fixable and will be fixed per country. Pile two: structural properties that fall out of the protocol itself — no proximity binding, client-side one-time use, unlinkability that breaks on reuse. These are not bugs. They’ll be present in every national implementation that follows the spec. When commentators wave away “the hacks,” they usually mean pile one. Pile two is what this post is actually about.

There have been several “hacks,” mostly by people who don’t understand how this is supposed to work. Leaving files on disk in the reference app is something that will be fixed, and does not really matter. The reference app will not be used by any country directly — each country will have its own bugs. It exists so countries know how to generate the proofs and stay interoperable. It doesn’t even matter that you can fool it into giving you a test credential, because the primary verification path will be countries’ own eID systems, not the mock-up’s unchecked MRZ scanner.

There was a “hack” that created a custom Chrome extension. That would fail app attestation once attestation is enforced. The MRZ path also does not connect to a real backend, because there is no real EU-side backend — the registries of valid documents are the competence of individual countries.

I’m 99% sure that, even though I consider the EU completely incompetent, these particular mock-up hacks won’t work in production apps. So this “haha, I hacked the app with my Claude Max subscription” doesn’t mean anything. They’re hacking a mock-up showing the use of a library. Yes, Frau Ursula called it the “EU Age Verification app,” but there will not be an EU app — there will be a Slovak app, a Hungarian app, a German app, a Dutch app, a French app…

But Why?

Many of us naturally ask why people want this. I think it’s a mistake to think they don’t. There is demand for this. The internet is scary, parents feel they can’t protect their children from the many bad things happening there, and someone came along to provide a “solution.” It doesn’t matter that I am sure the kids will get around it easily. The clients (the voters) are not the children being protected, but their parents, feeling good.

I think a very good and deep explanation is in my novel Tamers of Entropy. Have a look. It is very cypherpunk/lunarpunk and also explains the psychology behind these dystopias — and a way out. Plus it’s fun to read. Check it out at tamersofentropy.net. The characters also have Nostr accounts.

Conclusion

The EU’s fancy ZK apps will not be ready. Platforms will use normal KYC providers, AI face age estimators, and other means.

When done according to spec, the age verification app has meaningful privacy properties — the platforms don’t learn your identity or link your accounts. But those properties rest on wallet behavior, not cryptographic guarantees. The ZK math that would make unlinkability a hard guarantee is in the repo and not switched on.

The apps will not work unless you have a Google- or Apple-approved device. Forget Linux, GrapheneOS, Huawei, aftermarket firmware. It’s part of the security model.

And relay attacks — children using adult proxies to get into age-gated sites — are not fixable bugs. They are a structural property of the protocol that will ship in all 27 national apps.

The privacy theater hides the wolf. The wolf is still there.


The AI Industry Is Discovering That the Public Hates It

newrepublic.com

On April 10, the house of OpenAI CEO Sam Altman was attacked with a Molotov cocktail by 20-year-old Daniel Moreno-Gama. The suspect, who was arrested the same day, had written a manifesto warning of the existential threat of artificial intelligence. In his missive, he advocated for killing the CEOs of AI companies, and he referred to himself as a “butlerian jihadist” on Instagram (a reference to a war against machines in Frank Herbert’s Dune universe).

Three days prior in Indianapolis, an unknown perpetrator fired 13 shots into the home of local Democratic councilman Ron Gibson while his 8-year-old son was home. Neither was hurt, but a note reading “No Data Centers” was left on the doorstep. Gibson had lent his support to a potential data center project in his district. There have not yet been any arrests in the case.

Both incidents were frightening examples of abhorrent, politically motivated violence. But the reaction, at least on social media, seemed to revel in it.

The mood exemplified by inflamed Instagram commenters on these incidents was further reinforced on April 13, when Stanford University released its annual Artificial Intelligence Index, which provides a yearly snapshot of where the industry stands.

In the report, one of the most standout contrasts was the gulf between what AI experts predict for AI’s future and the public’s reaction to the industry’s designs. On jobs, 73 percent of experts were positive about the long-term effect, with 69 percent positive about the long-term effect on the economy. Among the public, those numbers were 23 percent and 21 percent, respectively, with nearly two-thirds of Americans thinking that AI would lead to fewer jobs over the next 20 years.

A separate survey, released in March 2026 by Gallup, also showed a sharp increase in negative attitudes toward AI among Gen Z. According to the poll, the percentage of Gen Zers who felt excited about AI had dropped from 36 percent to 22 percent, while the number who felt angry about it increased from 22 percent to 31 percent.

These numbers and actions point in the same direction: a rapidly growing populist backlash toward AI, which tech journalist Jasmine Sun defined as “a worldview in which AI is viewed not only as a normal technology, but an elite political project to be resisted … a thing manufactured by out-of-touch billionaires and pushed onto an unwilling public.”

Naturally, violence is never an answer, nor is it a politically effective tactic. But you also cannot ignore how the tone-deaf public messaging of the AI industry has helped to contribute to this reaction.

For years, CEOs like Altman and Anthropic’s Dario Amodei have very publicly oscillated between two suboptimal scenarios. In one, AI exterminates humanity with a biological super-weapon. In the other, AI either takes your job entirely or creates an economy where your only option is to downshift into the gig economy.

These pitches may be perfect for attracting attention at tech conferences or funding rounds, but they utterly ignore the daily concerns of regular Americans, at a time when the job market (especially for newer graduates) is incredibly shaky; economic gains are concentrated among the top 0.1 percent; and the price of food, housing, and, now, gasoline all continue to skyrocket.

This is the environment in which the AI industry is very publicly asking for hundreds of billions of dollars in continued investment, as well as a massive data center buildout that has had significant effects on local populations’ electrical bills. For example, in Virginia, the epicenter of the U.S. data center boom, residential electrical rates have been projected to increase by up to 25 percent by 2030.

These costs could be ignored, or even accepted, if there were a clear idea of how precisely AI would streamline and improve the workplace — or offer any tangible public benefit significant enough to make these underlying trade-offs acceptable. But the answers to these questions remain extremely tenuous. According to a February 2026 paper by the National Bureau of Economic Research, 80 percent of companies that have begun actively using AI have reported no impact on company productivity. A separate, widely cited 2025 MIT study revealed that 95 percent of corporate AI pilot programs received zero return.

Even within tech and coding, one of the areas where AI is reported to have the most promise, there’s the question of whether the productivity gains reported can be trusted. In a provocative GitHub post, machine-learning engineer Han-Chung Lee argued that even rosy internal numbers that do show AI-assisted productivity gains are suspect, as they’re produced to hit adoption targets no one can effectively audit.

This isn’t to say that AI doesn’t show immense and possibly incredibly valuable potential, especially bearing in mind that ChatGPT (which can be considered the first mainstream demonstration of AI technology) was only launched in November 2022. It’s natural for new technology to have a bumpy adoption period as both users and designers stress-test its strengths and limitations in the real world.

But the gap between how AI companies talk about themselves and how the general public has experienced the technology (and its side effects) has grown into a chasm, and now the results of these divisions are starting to show: data center projects canceled or delayed; an industry that is less popular than ICE or Donald Trump; and now, violent acts against AI leaders.

In its defense, Big Tech has realized the extent of the potential problems that AI could pose to regular Americans. Earlier in April, for example, OpenAI released an Industrial Policy White Paper, which included suggestions such as the creation of a Public Wealth Fund for all Americans to share in AI growth, revamping social safety nets, and investing in real-time measurement of how AI affects work. In January, Microsoft released a Community-First AI Infrastructure Initiative, promising to subsidize utility rates and minimize water use in communities where it was building data centers.

But it’s one thing for AI companies to make lofty promises in press releases, and another thing entirely for them to follow through.
