10 interesting stories served every morning and every evening.
In 2023, Raytheon’s president stood at the Paris Air Show and described what it took to restart Stinger missile production. They brought back engineers in their 70s to teach younger workers how to build a missile from paper schematics drawn during the Carter administration. Test equipment had been sitting in warehouses for years. The nose cone still had to be attached by hand, exactly as it was forty years ago.
The Pentagon hadn’t bought a new Stinger in twenty years. Then Russia invaded Ukraine, and suddenly everyone needed them. The production line was shut down. The electronics were obsolete. The seeker component was out of production. An order placed in May 2022 wouldn’t deliver until 2026. Four years. Not because of money. Because the people who knew how to build them retired a decade earlier and nobody replaced them.
I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes said ten months of war had burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now.
In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. Ukraine was consuming 5,000 to 7,000 rounds per day. Anyone with a calculator could see this wouldn’t work.
By the deadline, Europe delivered about half. Macron called the original promise reckless. An investigation by eleven media outlets across nine countries found actual production capacity was roughly one-third of official EU claims. The million-shell mark wasn’t hit until December 2024, nine months late.
It wasn’t one bottleneck. It was all of them. France had halted domestic propellant production in 2007. Seventeen years of nothing. Europe’s single major TNT producer was in Poland. Germany had two days of ammunition stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The entire continent’s defense industry had been optimized for making small batches of expensive custom products. Nobody planned for volume. Nobody planned for crisis.
The U.S. wasn’t much better. One plant in Scranton, one facility in Iowa for explosive fill, no domestic TNT production since 1986. Billions of investment later, production still hadn’t hit half the target.
This wasn’t an accident. In 1993, the Pentagon told defense CEOs to consolidate or die. Fifty-one major defense contractors collapsed into five. Tactical missile suppliers went from thirteen to three. Shipbuilders from eight to two. The workforce fell from 3.2 million to 1.1 million. A 65% cut.
The ammunition supply chain had single points of failure everywhere. One manufacturer for 155mm shell casings, sitting in Coachella, California, on the San Andreas Fault. One facility in Canada for propellant charges. Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.
Then there’s Fogbank. A classified material used in nuclear warheads. Produced from 1975 to 1989, then the facility was shut down. When the government needed to reproduce it for a warhead life extension program in 2000, they discovered they couldn’t. A GAO report found that almost all staff with production expertise had retired, died, or left the agency. Few records existed.
After spending an additional $69 million and years of reverse engineering, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original had contained an unintentional impurity that was critical to its function. That fact existed nowhere in any document. Only the workers who made the original batch knew it, and they had retired years earlier.
A nuclear weapons program lost the ability to make a material it invented. The knowledge existed only in people, and the people were gone.
I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.
In defense, the substitute was the peace dividend. In software, it’s AI.
I wrote about the talent pipeline collapse before. The hiring numbers and the junior-to-senior problem are documented. So is the comprehension crisis. What I didn’t have was the right historical parallel. Now I do.
And it tells you something the hiring data doesn’t: how long rebuilding actually takes.
Every major defense production ramp-up took three to five years for simple systems. Five to ten for complex ones. Stinger: thirty months minimum from order to delivery. Javelin: four and a half years to less than double production. 155mm shells: four years and still not at target despite five billion dollars invested. France only restarted propellant production in 2024, seventeen years after shutting it down.
Money was never the constraint. Knowledge was. RAND found that 10% of technical skills for submarine design need ten years of on-the-job experience to develop, sometimes following a PhD. Apprenticeships in defense trades take two to four years, with five to eight years to reach supervisory competence.
Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can’t be compressed by throwing money at it. It can’t be compressed by AI either.
A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points. When researchers tried to run a follow-up, a significant share of developers refused to participate if it meant working without AI. They couldn’t imagine going back.
The software industry is in year three of the same optimization. Salesforce said it won’t hire more software engineers in 2025. A LeadDev survey found 54% of engineering leaders believe AI copilots will reduce junior hiring long-term. A CRA survey of university computing departments found 62% reported declining enrollment this year.
I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slow. The industry’s answer is predictable: let AI review AI’s code. I’m not doing that. I’ve reworked our pull request templates instead. Every PR now has to explain what changed, why, what type of change it is, screenshots of before and after. Structured context so the reviewer isn’t guessing. I’m adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.
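A minimal sketch of the kind of template I mean (the exact field names here are illustrative, not our literal template):

```markdown
## What changed
<!-- One or two sentences on the change itself -->

## Why
<!-- The problem this solves and why this approach -->

## Type of change
- [ ] Bug fix
- [ ] Feature
- [ ] Refactor

## Before / after
<!-- Screenshots for anything user-visible -->
```

The point is not the fields themselves but that the author, not the reviewer, pays the cost of reconstructing context.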
But even that doesn’t solve the deeper problem. The skills you need to be effective now are different. Technical expertise alone isn’t enough anymore. You need people who can take ownership, communicate tradeoffs, push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.
We document everything. Site Books, SDDs, RVS reports, boilerplate modules with full coverage. It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t matter. Maybe the problem stays manageable. I can’t predict the capabilities of models in 2031.
But crises don’t send calendar invites. Nobody expected a full-scale land war in Europe in 2022. The defense industry had thirty years to prepare and didn’t. Even Fogbank had records. They weren’t enough without the people who understood what they meant.
Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong.
It’s Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don’t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn’t transfer to the AI.
It just disappears.
The West already made this mistake once. The bill came due in Ukraine.
I know how this sounds. I know I’ve written about the talent pipeline before. The defense example isn’t about repeating the argument. It’s about showing what happens if the industry’s expectations don’t work out. Stinger, Javelin, Fogbank, a million shells nobody could make. That’s the cost of betting wrong on optimization. We’re making the same bet with software engineering right now.
Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t. The defense industry thought peace would last forever, too.
April 24, 2026
4 min read
An amateur just solved a 60-year-old math problem—by asking AI
A ChatGPT AI has proved a conjecture with a method no human had thought of. Experts believe it may have further uses
By Joseph Howlett edited by Lee Billings
Liam Price just cracked a 60-year-old problem that world-class mathematicians have tried and failed to solve. He’s 23 years old and has no advanced mathematics training. What he does have is a ChatGPT Pro subscription, which gives him access to the latest large language models from OpenAI.
Artificial intelligence has recently made headlines for solving a number of “Erdős problems,” conjectures left behind by the prolific mathematician Paul Erdős. But experts have warned that these problems are an imperfect benchmark of artificial intelligence’s mathematical prowess. They range dramatically in both significance and difficulty, and many AI solutions have turned out to be less original than they appeared.
The new solution—which Price got in response to a single prompt to GPT-5.4 Pro and posted on www.erdosproblems.com, a website devoted to the Erdős problems, just over a week ago—is different. The problem it solves has eluded some prominent minds, bestowing it some esteem. And more importantly, the AI seems to have used a totally new method for problems of this kind. It’s too soon to say with certainty, but this LLM-conceived connection may be useful for broader applications—something hard to find among recently touted AI triumphs in math.
“This one is a bit different because people did look at it, and the humans that looked at it just collectively made a slight wrong turn at move one,” says Terence Tao, a mathematician at the University of California, Los Angeles, who has become a prominent scorekeeper for AI’s push into his field. “What’s beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block.”
The question Price solved—or prompted ChatGPT to solve—concerns special sets of whole numbers, where no number in the set can be evenly divided by any other. Erdős called these “primitive sets” because of their connection to similarly indivisible prime numbers.
“A number is prime if it has no other divisors, and this is kind of generalizing that definition from an individual number to a collection of numbers,” says Jared Lichtman, a mathematician at Stanford University. Any set of prime numbers is automatically primitive, because primes have no factors (except themselves and the number one).
Erdős also came up with the Erdős sum, a “score” you can calculate for any primitive set. He showed that the biggest the sum could be was about 1.6—and conjectured that this value must also hold for the (infinite) set of all prime numbers. Lichtman proved Erdős right as part of his doctoral thesis in 2022.
Erdős also noticed that the score drops if all of a set’s numbers are large—the larger the numbers, the lower the score. He guessed that the lowest this score could be was exactly one, a limit that the score would approach as the set’s numbers approached infinity. Lichtman tried to prove this, too, but got stuck like everyone else before him.
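The “score” described above is the Erdős sum, conventionally defined for a primitive set A as the sum of 1/(a log a) over the elements a of A; Lichtman’s 2022 result is that the primes maximize it, at roughly 1.64. Both definitions are simple enough to sketch in a few lines (my own illustration, not code from the article):

```python
import math
from itertools import combinations

def is_primitive(numbers):
    """A set is primitive if no element evenly divides another."""
    xs = sorted(set(numbers))
    # combinations of a sorted list yields pairs (a, b) with a < b
    return not any(b % a == 0 for a, b in combinations(xs, 2))

def erdos_sum(numbers):
    """Erdos sum f(A) = sum of 1/(a * ln a) over a in A (each a >= 2)."""
    return sum(1.0 / (a * math.log(a)) for a in numbers)

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
assert is_primitive(primes)          # primes never divide one another
assert not is_primitive([2, 3, 6])   # 6 is divisible by both 2 and 3
print(round(erdos_sum(primes), 3))   # 1.353, creeping toward the ~1.64 bound
```

Summed over all primes, the series converges to about 1.64, the maximum Lichtman proved; the conjecture Price’s prompt settled concerns the opposite end, how small the sum can be forced when every element of the set is large.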
Price wasn’t aware of this history when he entered the problem into ChatGPT on an idle Monday afternoon. “I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI and seeing what it can come up with,” he says. “And it came up with what looked like a right solution.”
He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge. The duo had jump-started the AI-for-Erdős craze late last year by prompting a free version of ChatGPT with open problems chosen at random from the Erdős problems website. (An AI researcher subsequently gifted them each a ChatGPT Pro subscription to encourage their “vibe mathing.”)
Reviewing Price’s message, Barreto realized what they had was special, and experts whom he notified quickly took notice.
“There was kind of a standard sequence of moves that everyone who worked on the problem previously started by doing,” Tao says. The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.
“The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight.
More importantly, they already see other potential applications of the AI’s cognitive leap. “We have discovered a new way to think about large numbers and their anatomy,” Tao says. “It’s a nice achievement. I think the jury is still out on the long-term significance.”
Lichtman is hopeful because ChatGPT’s discovery validates a sense he’s had since graduate school. “I had the intuition that these problems were kind of clustered together and they had some kind of unifying feel to them,” he says. “And this new method is really confirming that intuition.”
After almost three years of 6.x series kernels, Linux 7.0 is finally here.
That means it’s also time for another Asahi progress report!
Automate Everything
Users of alternate distros and keen-eyed individuals may have noticed some
changes to the Asahi Installer. After almost two years, we finally got around
to pushing an updated version of the installer to the CDN! Two years is a long
time to go between updates, so what took so long?
Our upstream installer package is a little bit of a Rube-Goldberg machine. The
bulk of the installer is written in Python, with some small Bash scripts to
bootstrap it. When you run curl | sh, you’re actually downloading the bootstrap
script, which then fetches the actual installer bundle from our CDN. This bundle
consists of a Python interpreter and a very stripped-down standard library, a built
m1n1 stage 1 binary, and the installer itself.
Until recently, cutting an installer release meant:
Tagging the installer repo
Downloading a macOS Python build
Building m1n1 from a blessed commit
Bundling Python, m1n1 and the installer
Uploading the installer bundle to the CDN
Updating the CDN’s version flag file
This process was time-consuming and required administrative access to the CDN.
As a result, we neglected to push installer updates for quite some time; the
previous installer tag was from June 2024! As upstreaming work has progressed
and Devicetree bindings churned, this became rather problematic for our friends
maintaining distros.
The Asahi Installer offers a UEFI-only installation option. This option
shrinks macOS and only installs what is necessary to boot a UEFI executable,
meaning m1n1 stage 1, the Devicetrees, and U-Boot. This allows users to
boot from live media with Asahi support, such as specialised Gentoo Asahi
LiveCD images.
Since the Devicetrees on a fresh UEFI-only install come from the installer
bundle itself, a kernel will only successfully boot when the installer-bundled
Devicetrees match what that kernel expects to see. The two have gotten rather
out of sync as time has gone on due to Devicetree bindings changing
as a result of the upstreaming process. This situation finally came to a
head with kernel 6.18, which required numerous changes to both m1n1 and
the Devicetree bindings for the Apple USB subsystem. This made booting
kernel 6.18 and above from live media impossible. Oops.
Rather than go through the trouble of manually pushing out another update,
we took the opportunity to build some automation and solve this problem
permanently.
We moved the manifest of installable images into the asahi-installer-data repo,
allowing us to update it independently of the installer codebase.
On top of this, we also now deploy
the installer using GitHub workflows. Going forward, every push to the main
branch of asahi-installer will
automatically build the installer and upload it to https://alx.sh/dev.
Every tag pushed to GitHub will do the same for https://alx.sh.
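The push/tag split above maps naturally onto a single workflow file. As a rough sketch only: the job layout, script names, and upload targets below are illustrative assumptions, not the actual asahi-installer configuration:

```yaml
# Hypothetical workflow sketch: build on every push to main, deploy
# dev builds to alx.sh/dev and tagged builds to alx.sh.
name: deploy-installer
on:
  push:
    branches: [main]
    tags: ['v*']
jobs:
  build-and-upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build installer bundle   # assumed build script
        run: ./build.sh
      - name: Upload to CDN            # assumed upload helper
        run: |
          if [[ "${GITHUB_REF}" == refs/tags/* ]]; then
            ./upload.sh stable         # served at https://alx.sh
          else
            ./upload.sh dev            # served at https://alx.sh/dev
          fi
```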
The latest version, 0.8.0, bumps the bundled m1n1 stage 1 binary to
version 1.5.2, introduces installer support for the Mac Pro, and adds
a firmware update mode which ties in nicely with…
How do you overengineer a light sensor?
Basically everything with a screen now comes with some sort of light
sensor. This is usually to enable automatic brightness adjustment based
on ambient conditions. It’s a very convenient feature in devices like
smartphones, where a user may walk outside and find their display too
dim to see. The cheapest versions of this use a simple photoresistor.
This is fine if the goal is just to change brightness, but brightness
is not the only thing affected by ambient lighting conditions. What about
colour rendering?
Apple’s devices have had the True Tone display feature for quite some time.
This works by measuring both the brightness and the colour characteristics
of the environment’s ambient lighting. This data is then used to apply
brightness and colour transformations to the display to ensure that it is
always displaying content as accurately as possible. This is most noticeable
in environments with lighting fixtures that have a low Colour Rendering
Index, such as fluorescent tubes or cheap cool white LEDs. The devices that
enable this, ambient light sensors, are usually little ICs that connect
to the system over I2C or other industry-standard bus. This is
fine for basic applications, but this is Apple. There are some other considerations
to be had:
The light sensor is doing stuff whenever the screen is on, so processing its
output should be as efficient as possible
The light sensor should be able to be calibrated for maximum accuracy
There are multiple models of light sensor in use, and the OS should not
have to care too much about that
The light sensor has to have a three letter acronym like every other piece
of hardware on this platform (ALS)
Naturally, this sounds like a job for the Always-On Processor (AOP)!
We’ve had a working AOP+ALS driver set for a while thanks to chaos_princess;
however the raw data AOP reports back from ALS is rather inaccurate without
calibration. That calibration is a binary blob that must be uploaded to the AOP
at runtime. It is essentially firmware. Since we cannot redistribute Apple’s
binaries, it must be retrieved from macOS at install time and then stored somewhere the driver
knows to look for it.
To achieve this, the Asahi Installer gathers up all
the firmware it knows we will need in Linux and stores it on the EFI System
Partition it creates. A Dracut module then mounts this to a subdirectory of
/lib/firmware/, where drivers can find it. However, issues arise when we
need to retrieve more firmware from macOS after Asahi Linux has already been
installed. To avoid a repeat of the webcam situation, where users
were required to manually do surgery on their EFI System Partition, chaos_princess
added the ability for the Asahi Installer to automatically update the firmware
package. Starting with ALS, any required firmware updates will be a simple
matter of booting into macOS or macOS Recovery, re-running the Asahi Installer, and following
the prompts.
To enable ALS support (and to do firmware upgrades in the future), follow these steps:
Ensure you are running version 6.19 or above of the Asahi kernel
Ensure your distro ships iio-sensor-proxy as a dependency of your DE (Fedora
Asahi Remix does this)
Ever wanted to connect your Legos and Tinkertoys together? Now you can — and much more. Announcing the Free Universal Construction Kit: a set of adapters for complete interoperability between 10 popular construction toys.
Fig. 1. The Free Universal Construction Kit.
Overview
Motivation
Download
Implementation
Legal and Commercial Implications
License and Disclaimers
Credits, Contact and Acknowledgements
Keywords
Overview
Video by Riley Harmon for F.A.T. Lab + Sy-Lab.
F.A.T. Lab and Sy-Lab are pleased to present the Free Universal Construction Kit: a matrix of nearly 80 adapter bricks that enable complete interoperability between ten* popular children’s construction toys. By allowing any piece to join to any other, the Kit encourages totally new forms of intercourse between otherwise closed systems—enabling radically hybrid constructive play, the creation of previously impossible designs, and ultimately, more creative opportunities for kids. As with other grassroots interoperability remedies, the Free Universal Construction Kit implements proprietary protocols in order to provide a public service unmet—or unmeetable—by corporate interests.
The Free Universal Construction Kit offers adapters between Lego, Duplo, Fischertechnik, Gears! Gears! Gears!, K’Nex, Krinkles (Bristle Blocks), Lincoln Logs, Tinkertoys, Zome, and Zoob. Our adapters can be downloaded from Thingiverse.com and other sharing sites as a set of 3D models in .STL format, suitable for reproduction by personal manufacturing devices like the Makerbot (an inexpensive, open-source 3D printer).
Motivation
Our kids are already doing it! And when we were growing up, ourselves, we did it too—or we tried to, anyway. Connecting our toys together. Because: what if we want to make a construction which is half-Tinkertoys, half-K’Nex? Why shouldn’t we be able to? We dreamed about this possibility years ago, when we were small, and we knew then, as we know now, that we’d need some adapters to help. The advent of low-cost 3D printing has made such adapters possible, and with it, a vast new set of combinatorial possibilities for children’s creative construction toys.
Opening doors to new creative worlds is one major reason we created the Free Universal Construction Kit. Another is that we believe expertise shouldn’t be disposable — and that children’s hard-won creative fluency with their toys shouldn’t become obsolete each Christmas. By allowing different toy systems to work together, the Free Universal Construction Kit makes possible new forms of “forward compatibility”, extending the value of these systems across the life of a child. Thus, with the Kit’s adapters, playsets like Krinkles (often enjoyed by toddlers) can still retain their use-value for older children using Lego, and for even older tweens using Zome.
The Kit offers a “best of all worlds” approach to play and learning that combines the advantages of each toy system. We selected construction sets for inclusion based on their significant level of market penetration, as well as for the diversity of features they brought to the Kit’s collection. Some of the supported construction systems, for example, offer great mechanical strength, or the ability to build at large scales; others offer the means to design kinetic movements; and still others permit the creation of a wide range of crystallographic geometries and symmetries. Using these classic toys as a foundation, the Free Universal Construction Kit offers a “meta-mashup system” ideally provisioned for the creation of transgressive architecture and chimeric readymades.
Finally, in producing the Free Universal Construction Kit, we hope to demonstrate a model of reverse engineering as a civic activity: a creative process in which anyone can develop the necessary pieces to bridge the limitations presented by mass-produced commercial artifacts. We hope that the Kit will not only prompt people to create new designs, but more importantly, to reflect on our relationship with material mass-culture—and the rapidly-evolving ways in which we can better adapt it to our imaginations.
Download
The Free Universal Construction Kit 3D models are freely available in .STL format from three locations:
Individual adapters from the Free Universal Construction Kit may be downloaded from Thingiverse.com — the world’s foremost website dedicated to the free sharing and remixing of user-created digital design files.
The complete Free Universal Construction Kit can also be downloaded in its entirety*, as a 29MB .zip archive from the F.A.T. Lab web site, here. Note: all units are in inches.
We expect the Kit to be available shortly from The Pirate Bay, as a torrent in TPB’s new “physibles” (physical downloadables) channel.
In addition to the Kit itself, we also offer for download this attractive B1 poster (4.5MB PDF, in two versions: gray background / white background).
Figure 2. The Free Universal Construction Kit adapter matrix. (PDFs: Gray, White)
We (F.A.T. Lab and Sy-Lab) neither sell nor distribute physical copies of the Free Universal Construction Kit. Please do not ask us to do so. Individuals seeking their own physical copies of the Kit, in whole or in part, are encouraged to download our files and reproduce them with open-hardware desktop 3D printers like the Makerbot, RepRap, Ultimaker, or Printrbot. Alternatively, copies for private use may be available from a personal fabrication service bureau; for awesome service, international/anywhere shipping and quick turnaround, we highly recommend Ponoko.com for personalized 3D printing in a wide variety of materials. Shapeways and QuickParts are good, too. You may also find a 3D printer in the architecture, industrial design, and/or mechanical engineering departments of your local university.
Please note that our license for the Free Universal Construction Kit prohibits commercial use of these designs in mass production; note, however that we encourage individuals to contract with fabrication service bureaus for the creation of personal copies. For more information, see our license and disclaimers, below.
Implementation
The Free Universal Construction Kit comprises nearly 80 two-way adapters. These allow each of the different construction toys (Lego, Tinkertoy, Fischertechnik etc.) to interface with any of the other supported systems. Prior to modeling, the dimensions of the various toy connectors were reverse-engineered with an optical comparator fitted with a digital read-out accurate to less than one ten-thousandth of an inch (0.0001in., or 2.54 microns).
Figure 3. A Bristle Block being measured in the optical comparator.
The resulting precision ensures that the Free Universal Construction Kit “actually works”, enabling tight snap-fits between custom and commercial components.
Figure 4. The Kit in use, connecting four different systems together.
Below is a partial gallery of assorted Kit adapters, respectively compatible with (clockwise from top left): Lego, Zoob, Tinkertoys, and Gears! Gears! Gears!. Click on the images for higher-resolution photographs:
In addition to its many one-to-one adapters, the Free Universal Construction Kit also includes a special fist-sized Universal Adapter Brick which provides connectivity between all of the supported construction systems:
Fig. 9. The Universal Adapter Brick.
Producing physical prints from our provided 3D models prompts certain fabrication considerations. According to Wikipedia, the precision of Lego pieces is less than 10 microns. As of early 2012, however, standard Makerbot printers have an XY resolution of 100 microns (0.1mm) and a default layer thickness of 360 microns (0.36mm). We thus caution that fabrication of the Free Universal Construction Kit with current (2012-era) solutions for DIY 3D printing, such as the Makerbot, Printrbot or RepRap, may lack the precision required for reliable or satisfactory coupling with standard commercial pieces. A great deal depends on how well-tuned the printer is; thus, your mileage may vary. In any case, we expect this situation will improve gradually, but inexorably, in tandem with improvements to these vibrantly evolving fabrication platforms. The artist’s proof shown here was created in a UV-cured white resin using a commercial-grade Objet (“polyjet”) 3D printer, which has a horizontal resolution of 42 microns, and a layer thickness of 16 microns. Ponoko.com and other private fabrication services offer printing from Objet machines and other high-resolution devices.
Legal and Commercial Implications
Consider the frustrating experience of purchasing a new computer (a Mac, say) and discovering that it will not play your aunt’s Windows Media video of your little cousins. Likewise, imagine your aunt’s corresponding annoyance when she finds that her PC will not play the Apple Quicktime video you sent her of your cats. This humiliating little episode isn’t an accident; it’s just a skirmish in a never-ending battle between giant commercial entities, played out, thousands of times every day, in exactly such micro-punishments to customers like you. If you’re well-informed, you may happen to know about VLC — a free, open-source video player, developed by independent hackers as a grassroots remedy for exactly this problem. Until the advent of ubiquitous 3D printing, software remedies like VLC weren’t readily available for hardware products, like toys. That’s changing.
Today’s manufacturers have little or no intrinsic motivation to make their products compatible with anyone else’s. Indeed—despite obvious benefits to users everywhere—the implementation of cross-brand interoperability can be nearly impossible, given the tangled restrictions of patents, design rights, and trademarks involved in doing so. So we stepped up. The Free Universal Construction Kit is the VLC of children’s playsets.
As we can see from the example above, interoperability is a question of power and market dominance. Most market leaders regard interoperability as an anti-competitive nuisance, a regulatory check on their ambition, or a concession to the whining of lesser players. Quite simply, interoperability is the request of the disenfranchised. And which end-user, in so many ways, is less enfranchised than a preliterate child?
The simple fact is that no toy company would ever make the Free Universal Construction Kit. Instead, each construction toy wants (and indeed, pretends) to be your only playset. Within this worldview, the other manufacturers’ construction sets are just so many elephants in the room, competing for your attention on the shelves of Toys-R-Us. No longer. The Free Universal Construction Kit presents what no manufacturer could: a remedy providing extensible, post-facto syntactic interoperability for construction toys. Let the fun begin!
Some may express concern that the Free Universal Construction Kit infringes such corporate prerogatives as copyright, design right, trade dress, trademarks or patents of the supported toy systems. We encourage those eager to enforce these rights to please think of the children (or perhaps the Streisand effect) — and we assert that the home printing of the Free Universal Construction Kit constitutes protected fair use. Simon Bradshaw et al., writing in “The Intellectual Property Implications of Low-Cost 3D Printing”, conclude that the public is legally allowed to make 3D prints that mate with proprietary parts, especially in cases (the “Must Fit Exception”) where a piece’s shape “is determined by the need to connect to or fit into or around another product”:
“Even where a registered design is copied via a 3D printer this would not be an infringement if it were done ‘privately and for purposes which are not commercial’. Both criteria must be met; it is insufficient that copying is not done for profit. Purely personal use of a 3D printer to make items will thus not infringe a registered design.”
In fact, the Free Universal Construction Kit deliberately avoids patent infringement. Part of our strategy for doing so is our choice to support older (“classic”) playsets: of the ten toy systems supported by the Kit, eight are no longer protected by active (20-year) patents. To take a few examples: Lego was patented in 1958; Lincoln Logs, in 1920; and Tinkertoys, in 1932. There are, however, two instances in which toy systems nominally supported by the Kit are still protected (as of this writing) by active patents: Zoob (patented 1996) and ZomeTool (patented 2002). For the Zoob and Zome systems, please note that we have delayed the release of pertinent adapter models until December 2016 and November 2022, respectively.
The Free Universal Construction Kit is simply one “toy” illustration of a coming grassroots revolution, in which everyday people can—with desktop tools—overcome arbitrary restrictions in mass-manufactured physical culture. The burgeoning possibility of freely shared downloadable adapters has significant implications for industries where the attempt to create “technological lock-in” is a common business practice. For more on this subject, and the legal horizons of reproducing commercial products with home fabrication systems, please see:
Bradshaw, Simon; A. Bowyer and P. Haufe, “The Intellectual Property Implications of Low-Cost 3D Printing”, 7:1 SCRIPTed 5, 2010.
de Bruijn, Erik. “Fab It Yourself: Adapters & Consumer Lock-In”. Blog.erikdebruijn.nl, 13 September 2010.
Hanna, Peter. “The next Napster? Copyright questions as 3D printing comes of age”. Arstechnica.com, April 2011.
Ross, Valerie. “Can You Patent a Shape? 3D Printing on Collision Course With Intellectual Property Law”. Discover Magazine, 7 April 2011.
Weinberg, Michael. “3D Printing Settlers of Catan is Probably Not Illegal: Is This a Problem?”. PublicKnowledge.org, 28 January 2011.
Weinberg, Michael. “It Will Be Awesome if They Don’t Screw it Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology”. PublicKnowledge.org, 10 November 2010.
In addition to the writers above, we tip our hats to Thingiverse user Zydac, whose related project (a Duplo-to-Brio track adapter) led us to these legal writings; to Andrew Plumb (Clothbot) who has probed the legal and practical implications of Lego-compatible bricks for some time; and to Daan van den Berg, who has explored 3D-printed remixes of branded forms as a mode of critical artistic practice.
License and Disclaimers
The Free Universal Construction Kit and its associated media are licensed under and subject to the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License (http://creativecommons.org/licenses/by-nc-sa/3.0/legalcode). The official URL for the Free Universal Construction Kit is https://fffff.at/free-universal-construction-kit. You are free to copy, distribute and transmit the Kit, and to remix and/or adapt the Kit; in doing so, you must attribute the Kit to “F.A.T. Lab and Sy-Lab”, and include a link to the project using the URL above. We especially welcome extensions to the Kit which provide compatibility with as-yet-unsupported play systems. Please note that extensions to the Kit require the same or similar license. You may not use the Kit in commercial mass production; however, we permit individuals to contract with fabrication service bureaus (e.g. Ponoko, Shapeways, etc.) for personal copies.
Lego®, Duplo®, Fischertechnik®, Gears! Gears! Gears!®, K’Nex®, Krinkles®, Bristle Blocks®, Lincoln Logs®, Tinkertoys®, Zome®, ZomeTool® and Zoob® are trademarks of their respective owners. The Free Universal Construction Kit is not associated or affiliated with, or endorsed, sponsored, certified or approved by, any of the foregoing owners or their respective products.
We are not a commercial company; we are artists, hackers and activists. The Kit is not a product; it is a provocation. F.A.T. Lab and Sy-Lab, in cooperation with Adapterz LLC, (1) perform solely the service of publishing the Free Universal Construction Kit, (2) do not participate in any production, public manufacture or sale of the items displayed here, and (3) offer no opinion, warranty or representation as to the safety, quality or functionality of the Kit. The F.A.T. Lab, Sy-Lab and Adapterz LLC therefore offer no warranty of any kind, express or implied.
Please cite the Free Universal Construction Kit, and/or this article, as follows:
Free Art and Technology [F.A.T.] Lab and Sy-Lab. “The Free Universal Construction Kit.” Fffff.at, 20 March 2012. <https://fffff.at/free-universal-construction-kit>.
WARNING: CHOKING HAZARD!
Small parts. Not for children under 3 years.
Credits, Contact and Acknowledgements
For press or other inquiries about the Free Universal Construction Kit, please contact info@adapterz.org. The Kit was conceived and developed by the F.A.T. (Free Art and Technology) Lab in collaboration with Sy-Lab, and is represented, for legal purposes, by Adapterz, LLC. The Kit’s “advertisement” video was created by Riley Harmon.
The creators express gratitude to: our families; our lawyers; the children appearing in our demonstration video, and their families; Jean Aw, Eric Brockmeyer, David Familian, Andy Flowers, Michael Joaquin Grey, Mark Gross, Riley Harmon, Marcie and Lawrence Hayhurst, Allie Oswell, Eric Paulos, Bre Pettis, Kent Sheely, Michael Weinberg, and the STUDIO for Creative Inquiry. The Kit files are sportingly hosted by Thingiverse.com.
Keywords
Toys, kits, construction sets, construction toys, construction systems, Lego, Duplo, Fischertechnik, Gears! Gears! Gears!, K’Nex, Krinkles, Bristle Blocks, Lincoln Logs, Tinkertoys, Zome, ZomeTool, Zoob, Constructivist learning, play, connectors, adaptors, adapter piece, adapter brick, adapters, universal translator, gender changer, modularity, interoperability, interoperability remedy, compatibility layer, technological lock-in, post-facto plug-and-play syntactic interoperability, shim, computer aided design, 3D models, STL files, physibles, rapid prototyping, 3D printing, Makerbot, RepRap, Printrbot, Thingiverse, Ponoko, F.A.T. Lab, Sy-Lab, fair use, remix, hybrid, mashup.
The commons and the public good are continually threatened by narrow interests seeking private gain. Please continue to support and protect the free, open, and non-proprietary exchange and development of ideas and information online.
Note: I initially drafted this before my last post on how Claude Code is getting worse. I’m putting it out now so I can reference it in a future post on OpenCode. As you can imagine my opinion on Claude Code has shifted since I wrote this.
Long ago I attempted a personal project, but never finished due to life being busy. [1] Sort of like the Japanese word Tsundoku, for the pile of books you intend to eventually read one day. We all have these projects and they are good candidates for testing out AI coding assistance. After all, they were never going to get done anyway.
The POC I put together was a shim between YouTube Music and the OpenSubsonic API. Explaining OpenSubsonic could be its own article, but for our purposes it’s an API contract for nicely decoupling music streaming clients and servers. You can pick your own options for both. In my case I like Navidrome for the server, Feishin for desktop, and, as I mentioned in my post on GrapheneOS, Symfonium for Android.
Anyways, the shim made YouTube Music conform to the API so I could add it to any of my clients. Under the hood I used ytmusicapi for metadata lookup and programmatically called yt-dlp to stream the music. Getting basic streaming working was pretty simple. However, there was a long tail implementing all the endpoints in a conformant way. Then as always, there were new shiny projects that stole my attention away. Like that embedded rust location project I promise I’ll finish at some point. Maybe.
Luckily, nothing was really novel in that streaming project, and there is a clear spec to implement which is perfect for assisted coding. So a month and a half ago I thought I would test Claude Code with Opus 4.6 and see how it did implementing the project from scratch. After all, they gave me a free $50 in credit, so I might as well.
The setup
Since I had already written a proof of concept by hand, I had my own opinions about the implementation and laying all of that out beforehand constrained the tool in a nice way.
I did the following:
Created a uv project with fastapi, pydantic, ytmusicapi and yt-dlp as dependencies.
Changed main.py to the example FastAPI main file.
Dropped the openapi spec for OpenSubsonic in the folder.
Added a brief description in a readme file:
“This project acts as a shim, exposing YouTube Music as an opensubsonic client. It uses fastapi for its server with pydantic, ytmusicapi for metadata and yt-dlp for streaming.”
opensubsonic docs are available at: https://example.docsy.dev/docs/reference/
The openapi spec is in openapi.json.
Added an empty TODO file.
Generated a CLAUDE.md file using /init.
I also often add a section like this to the CLAUDE.md file:
## Conventions
- Methods should have type annotations for args and returns as well as docstrings.
- Use Pydantic for data modeling. Use modern Pydantic V2 conventions.
- Doc strings should use the Google style format with an args and returns sections.
- Write unit tests with modern pytest style, eg top level methods using `assert` and fixtures.
That’s mostly based on past experience for what I have to repeatedly ask Claude Code not to do.
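Concretely, the conventions above ask for code shaped roughly like this. A hypothetical sketch, not code from the project — and it uses a stdlib dataclass where the real project would use a Pydantic V2 BaseModel, so the snippet stays dependency-free:

```python
from dataclasses import dataclass


@dataclass
class Song:
    """A minimal song record (a Pydantic V2 BaseModel in the real project)."""

    id: str
    title: str
    duration_seconds: int = 0


def format_duration(song: Song) -> str:
    """Format a song's duration as mm:ss.

    Args:
        song: The song whose duration should be formatted.

    Returns:
        The duration rendered as a zero-padded mm:ss string.
    """
    minutes, seconds = divmod(song.duration_seconds, 60)
    return f"{minutes}:{seconds:02d}"


# Modern pytest style: a top-level function using a bare `assert`.
def test_format_duration() -> None:
    assert format_duration(Song(id="x", title="Example", duration_seconds=125)) == "2:05"
```

Spelling this out up front saves a round of "please add docstrings and type hints" prompts later.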
I’ve bundled up this starting point into a git repository in case anyone else wants to try the experiment.
Implementing the MVP
With that setup done, I let Claude kick things off. The workflow I typically use is:
Enter plan mode.
Prompt for the next piece of work.
After getting the initial plan, look for gaps / problems and ask follow up questions until I like the plan.
Provide links to resources when Claude is off.
Ask Claude to use the search tool to figure out what is idiomatic when there are multiple options and it is unclear to me which to take.
Use “Accept and clear context”.
Repeat.
The first prompt I used was:
Have a look at the openapi.json file. This is a spec for the opensubsonic api. Implement an async fastapi server that stubs out all of the methods. There are both older xml endpoints and newer style json endpoints. You only need to handle the newer json endpoints.
For this kind of change I like to clear context after implementing and then ask a follow up question:
I implemented stubbed versions of all the methods specified in openapi.json. Double-check they are correct.
Even with a spec, Claude Code makes mistakes the first time, but then will catch them (mostly) the second time through.
Also, after implementing larger changes, I like to re-run /init to update the CLAUDE.md file to cover the new pieces.
The next major prompt was:
The methods for all endpoints are stubbed out now. I want to connect a subsonic client, search for a song, and stream it to the client. What is the minimum amount of functionality needed to implement that? Use ytmusicapi for searching YouTube music and yt-dlp for streaming.
I got an implementation that looked reasonable pretty quickly, but fell over when trying to actually connect with Feishin. At that point I iterated by testing the client and handing the server request logs to Claude Code. Even with a spec there are details that are not spelled out clearly, like how endpoints may have a .view suffix that needs to be stripped. Every time there was an error I generated new unit tests to cover them.
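The .view quirk, for instance, is easy to handle by normalizing the path before routing. A minimal sketch (the function name is mine; in FastAPI this logic would live in ASGI middleware that rewrites `scope["path"]` before the router sees the request):

```python
def normalize_subsonic_path(path: str) -> str:
    """Strip the legacy ``.view`` suffix that older Subsonic clients append.

    ``/rest/ping.view`` and ``/rest/ping`` should reach the same handler,
    so we canonicalize on the suffix-free form.
    """
    if path.startswith("/rest/") and path.endswith(".view"):
        return path[: -len(".view")]
    return path
```

With this in place, one route definition per endpoint covers both client styles.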
I was shocked to hear the audio streaming through Feishin after only a couple of iterations. The main issues involved stubbed endpoints returning nothing; they mostly had to be updated to return empty but correctly structured responses.
Just getting an MVP is the easy part though. Not that far beyond what I implemented in my POC.
Working through the long tail
The rest of the work was the less interesting drudgery needed to make the project actually usable. From the docs, OpenSubsonic has ~80 endpoints spread over 15 different categories.
For the MVP use case I only had to support:
getLicense, getUser, getGenres and getMusicDirectories with empty but valid collections.
getSong as a pass through that returned the ID in the query params and default values.
search3 with a very basic ytmusicapi call.
stream with a yt-dlp call wrapped in an asyncio.to_thread to extract the URL for the “bestaudio” format.
getCoverArt with a call to yt-dlp to extract the cover image URL.
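The stream endpoint's threading trick is worth sketching. Since yt-dlp's extraction is blocking, it has to run off the event loop via asyncio.to_thread. The resolver below is a stand-in that returns a fake URL so the sketch is self-contained; the real version would call yt-dlp roughly as shown in the docstring (hedged — consult yt-dlp's embedding docs for the exact options):

```python
import asyncio


def resolve_stream_url(video_id: str) -> str:
    """Blocking stand-in for the yt-dlp lookup.

    The real implementation would be along the lines of:
        with yt_dlp.YoutubeDL({"format": "bestaudio"}) as ydl:
            info = ydl.extract_info(video_id, download=False)
            return info["url"]
    """
    return f"https://example.invalid/audio/{video_id}"


async def stream(video_id: str) -> str:
    # The extraction is synchronous, so run it in a worker thread to keep
    # the async FastAPI server responsive while yt-dlp does its work.
    return await asyncio.to_thread(resolve_stream_url, video_id)
```

The handler would then proxy or redirect to the resolved URL.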
To support the full functionality of a subsonic client I:
Added simple in memory caching for ytmusicapi calls to avoid hitting usage limits.
Used SQLite for storing music metadata and implemented all the endpoints in the browsing category, even getTopSongs, by querying for the top songs list.
Saved the song to disk as it streamed to avoid redownloading songs. I had to have additional handling to clean up the incomplete file when a client disconnects from the stream endpoint before the file was fully downloaded.
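The caching layer doesn't need to be fancy. Something like this tiny TTL decorator (a hypothetical sketch of the pattern, not the project's actual code) is enough to keep repeated ytmusicapi lookups from burning through usage limits:

```python
import time
from functools import wraps


def ttl_cache(seconds: float):
    """A tiny in-memory TTL cache decorator for expensive API calls."""

    def decorator(fn):
        store = {}  # maps args -> (expiry_timestamp, cached_value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # still fresh: serve from cache
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value

        return wrapper

    return decorator


calls = []  # tracks how many real "API" invocations happened


@ttl_cache(seconds=600)
def search_songs(query: str) -> str:
    calls.append(query)
    return f"results for {query}"


search_songs("kraftwerk")
search_songs("kraftwerk")  # second call is served from cache
```

A real deployment might swap in an LRU bound or per-endpoint TTLs, but the shape is the same.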
I knew all these things had to be done to make my own POC more usable, and I could have done them, but never did. At the same time, since I never planned to release anything I absolutely skipped the hard bits around authentication.
Altogether, I was able to get a working service I could connect to from a Subsonic client in a short evening. In the end I dubbed the project “Sub-standard”.
Is this good?
I don’t want to sound like an AI coding assist booster. I still have fears around deskilling from relying on these tools too much. That’s why I still bang my head against the wall trying to learn Rust.
In my mind there are different buckets for personal projects. One is things I do to learn and grow, and the other is things I really wish existed. [2] This kind of project falls into the second bucket. Using AI coding assistance to reify those projects is a form of wish fulfillment: I never would have gotten to it, but now I can have the project. One less metaphorical book sitting unread on the bookshelf.
In the end I think the important thing is not whether you are doing projects in bucket 2, but whether you are also still doing the stretch projects in bucket 1.
Or at least that is the excuse I tell myself. ↩︎
Also other buckets, I don’t want to imply those are the only two. ↩︎
Most people think EU Age Control apps are about identifying users. The sales pitch is all zero-knowledge proofs of age. You prove you’re over 18 without the site learning your name, exact birthday or anything that can link one proof to another.
Before going further, it is worth laying out three separate problems this post is worried about. They are easy to blur but they are very different. First: the DSA fallback — platforms don’t actually need the privacy-preserving wallet; the rules let them use a normal KYC provider instead. Second: attestation lock-in — Google and Apple decide what software runs on the phones that can use this system. Third: the system itself is weaker than advertised — the cryptography the reference app actually ships is not the cryptography the marketing describes, unlinkability depends on wallet behavior not math, and there is a whole class of relay attacks the protocol cannot stop. When commentators wave away “the hacks,” they usually mean bugs in the mock-up.
It is also worth asking when this app started being described as “just a reference implementation” or a “white-label demo.” The README tells a story. On 12 May 2025, a disclaimer appeared framing the project as an “Age Verification Solution Toolbox” that Member States are expected to build on. On 31 July 2025, further softening was added — language explicitly calling the app a white-label reference for countries to adapt — and in the exact same edit, the earlier, blunter disclaimer (which said this was an initial version not intended for production) was quietly removed. In any case, it was always presented as a toolbox that countries should adapt into their apps — so judging the app by itself does not make much sense; everything depends on how these techniques are implemented in each country’s verification app. There will be no single EU app, despite what the EU’s honchos say.
The DSA fallback nobody talks about
Big platforms must verify age for certain content. They can use the fancy EU wallet with its privacy features. They can also just plug in a normal KYC provider that scans your full passport, runs liveness checks and sees everything. Which path do you think most companies will actually take when the “privacy-preserving” option requires integrating with systems that barely exist yet across 27 countries?
It’s marketing sleight of hand. They push the privacy angle hard while the rules quietly allow the non-private fallback. The privacy part is optional. (I think they mainly know the apps will not be ready by the end of the year).
KYC companies have been avoiding real electronic IDs for years. I have a Slovak eID chip that’s been in my wallet forever. It has proper cryptographic keys and can prove who I am far more cleanly than a photo of my driver’s license plus video call. Yet almost every KYC provider still does the bitmap and liveness routine. The reason is simple. Integrating with 27 different national eID systems is a nightmare. Maintaining a database of what every country’s physical ID looks like is cheaper and works everywhere. The cryptographic route doesn’t — in practice, not in theory.
So the EU solution only “works” if platforms decide to do all that integration work themselves. Right now the official trusted list has zero production apps. The reference implementation is still half-baked. Believing this turns into clean interoperability across all EU countries by the end of 2026 is wishful thinking.
How verification actually works
The main high-assurance path in the reference app uses an NFC passport. You scan the MRZ (machine-readable zone) at the bottom of the photo page; that yields the keys needed to read and decrypt the data on the NFC chip. The chip contains signed data, including a JPEG photo of the holder. The design calls for a live photo to be taken and matched locally against the chip’s JPEG — this is intended to stop a kid from scanning a parent’s passport to get a credential for themselves.
The app is open-source so you can read every line. But changing even one bit would break the hardware attestation once attestation is actually enforced by national deployments. In the current reference code, attestation verification is not wired up on the server side — it is a promise national deployments would need to add. The binary must ultimately match exactly what Google or Apple signed. No GrapheneOS, no custom Linux phones.
Attestation locks it down. It is the same EU that hates these American corporations and wants EU alternatives for everything — yet no one can make a phone usable for age verification without the blessing of Google (or Apple, who does not certify third-party devices for iOS at all). Bought a Huawei phone that does not pass Play Integrity? Sorry. Note: Huawei phones can produce hardware attestation via their factory key chain, but they cannot pass Google’s Play Integrity verdict — the same applies to GrapheneOS, Linux phones, and anything outside the Google blessing. Use a Daylight computer that doesn’t wreck your circadian rhythm? Back to the office.
There is a simpler MRZ-only path in the reference app where you photograph an ID card with no NFC read or face match. Real national apps may not support it, and the reference recommends the high-assurance path. Countries will probably force the chip-based route. It’s a trojan horse to digital ID anyway.
The marketed crypto and the shipped crypto are not the same thing
The public story is built around zero-knowledge proofs. The reference Android app doesn’t actually use zero-knowledge crypto in the flow that runs. It uses an older ISO standard (ISO 18013-5 mdoc with ES256) where each attribute is signed in advance and the wallet reveals only the ones asked for, hiding the rest using salted-digest commitments. A ZK library is pulled in, but nothing in the presentation path ever calls it. So when people cite “ZK age proofs” as the innovation, they are citing something that is in the repo but is not switched on. Whether national apps eventually turn it on is an open question. Today’s reference is plain signatures.
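The salted-digest mechanism is simple to sketch: the issuer signs a list of digests, each committing to one attribute under a random salt, and the wallet later reveals only the (salt, value) pairs for the attributes requested. The verifier recomputes those digests and checks them against the signed list; undisclosed attributes stay hidden behind hashes. This is an illustration of the idea only, not the actual mdoc encoding, and HMAC stands in for the issuer's real ES256 public-key signature:

```python
import hashlib
import hmac
import os


def attr_digest(salt: bytes, name: str, value: str) -> bytes:
    """Commit to one attribute under a random per-attribute salt."""
    return hashlib.sha256(salt + name.encode() + b"=" + value.encode()).digest()


# Issuer side: salt every attribute, then sign the sorted digest list.
attrs = {"age_over_18": "true", "nationality": "SK", "birth_date": "1990-01-01"}
salts = {k: os.urandom(16) for k in attrs}
digests = sorted(attr_digest(salts[k], k, v) for k, v in attrs.items())
issuer_key = os.urandom(32)  # stand-in for the issuer's signing key
signature = hmac.new(issuer_key, b"".join(digests), hashlib.sha256).digest()

# Wallet side: disclose only the requested attribute; the rest stay hidden.
disclosed = {"age_over_18": (salts["age_over_18"], attrs["age_over_18"])}

# Verifier side: recompute the disclosed digest and confirm it belongs to
# the signed list; the other attributes are never revealed.
salt, value = disclosed["age_over_18"]
assert attr_digest(salt, "age_over_18", value) in digests
```

Nothing zero-knowledge happens here: it is plain hashing plus an ordinary signature, which is exactly the point.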
The cryptography could be solid — zero-knowledge proofs over passport signatures are a real and tractable thing. But the crypto actually shipping in the current reference is the older plain-signature format with disposable-batch unlinkability, not ZK. So when people defend “the math works,” they are defending math that is not turned on. Although if you use each signed attestation only once, it only reveals that you are over 18 and maybe from the signature who issued the attestation. There’s no unique identifier.
What’s private and what isn’t
The overall flow is local-first, but still needs a server to issue credentials. Scanning and initial checks happen on the phone. Because the app is (or would be) attested, the issuing server can be reasonably confident what exact code actually executed. The server verifies the document signatures and issues a signed credential. That credential can then be used to produce a proof of age when talking to websites.
From the verifier’s (say, a porn site or social media platform) point of view it looks unlinkable — as long as the wallet behaves. The design is not “the math guarantees two proofs can’t be correlated.” The design is “the wallet hands out a pile of disposable credentials, uses each one once, then asks for more.” If the wallet obeys that rule, two verifiers see two different signatures and can’t tie them together. If the wallet cheats, or if a proof is replayed, the two verifiers see the same signature bytes, and the linkage is trivial. This is an important nuance — the usual “ZK = math = unlinkable forever” pitch doesn’t apply here. The property holds because the wallet is supposed to rotate credentials, not because the cryptography makes reuse impossible. Real cryptographic unlinkability schemes like BBS+ or CL signatures would produce uncorrelated proofs even on reuse. This is not that.
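The rotation rule can be shown with a toy model (random bytes stand in for real signed mdoc credentials; the class is mine, not the reference app's):

```python
import os


class Wallet:
    """Toy wallet holding a batch of disposable, pre-signed credentials."""

    def __init__(self, credentials: list[bytes]):
        self._credentials = list(credentials)

    def present(self) -> bytes:
        # The one-use rule: hand each verifier a fresh credential.
        # Nothing but this pop() enforces non-reuse.
        return self._credentials.pop()


# Issuer hands the wallet a batch; each entry is an independent signature.
wallet = Wallet([os.urandom(32) for _ in range(30)])

seen_by_site_a = wallet.present()
seen_by_site_b = wallet.present()
# Honest rotation: the two verifiers see unrelated byte strings.
assert seen_by_site_a != seen_by_site_b

# A cheating or replaying wallet hands out the same credential twice,
# and the two verifiers can correlate the user by comparing bytes.
replayed = seen_by_site_a
assert replayed == seen_by_site_a
```

Notice that unlinkability lives entirely in the `pop()`: nothing cryptographic prevents reuse, which is the gap between this design and schemes like BBS+.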
From the issuer’s point of view — they issue credentials when you present your ID. The issuer doesn’t know what you’ll use the credential for, or how many times you’ll use it — the one-use rule lives inside the wallet, not on the server. So if the wallet is modified, or if proofs are captured and replayed, nobody upstream sees it. Any “rate limit” you might imagine is a limit on how many credentials you mint, not on how many times a credential is used in the wild.
They can of course infer that you are an EU country citizen. But they can’t (under normal wallet behavior) tell which accounts are yours or link your activity across sites.
What about relay attacks?
Here’s a scenario the spec doesn’t really answer. Suppose a child wants to get into an age-gated site. A service pops up — call it Grandma-as-a-Service — that offers to verify on their behalf for a few euros. The child opens the site, gets a QR code or a link, and instead of scanning it themselves they paste it into the proxy service. The proxy forwards it to a real adult somewhere with a real, government-issued wallet on a clean phone. The adult approves. The adult’s wallet produces a cryptographically perfect “over 18” proof. The site sees a valid proof and lets the child in.
Nothing failed. Every signature is real, every attestation is real, the adult really is over 18, the wallet really is running unmodified on a genuine Android. The catch is that the protocol binds the proof to “some wallet somewhere said yes,” not to “the human at this browser right now.” There is no proximity check. The browser-side Digital Credentials API partially closes this — but only when the user verifies on the same phone they’re browsing from. QR codes and deep links, which work across devices, are wide open.
People assume Google’s Play Integrity would stop this. It doesn’t. Play Integrity attests what code is running on what device. It says nothing about who is in front of it or where the device is. In the proxy flow, the adult’s phone is a real phone and every attestation is real. The relay — the web service the child talks to — isn’t being attested; it’s just moving bytes.
And once an adult is enrolled, the resale version gets ugly. The wallet has thirty disposable credentials, refreshed on a short interval. The issuer never sees how those get used. So the proxy operator can reuse each credential across many children; nothing upstream raises an alarm. The “one-time use” rule is an honor-system rule inside the wallet software, not something the issuer can enforce after the fact. This is not a bug that production apps will “fix.” It’s inherited from the shape of the protocol, so it will be present in all 27 national apps.
In any case, this is the trojan horse. Start with “protect the children from porn and scary social media.” Create enough friction that people reach for the convenient attested wallet. The app itself must be attested — which in practice means Google or Apple decide what runs. The credential can be killed by the issuer.
The reference app leaks face photos, although only locally. Twenty-seven countries will each build their own version. With their own privacy bugs.
Then you get the Hawthorne effect. Every controversial site that makes you pull out the wallet creates self-censorship, even if the proof is supposedly anonymous. Governments have a terrible track record protecting this data. Any data. History is full of examples.
(Want to watch porn? Criticize a politician? Are you really going to open the EU country’s ID app to verify that you are over 18 and believe it’s unlinkable ZK proof — even if it really is?)
Later they link it to Digital Euro and everything else. Suddenly a big chunk of your life can be switched off remotely. Didn’t pay a parking ticket on time? Let’s temporarily revoke your credentials — when you can’t log in anywhere, you will come and pay the ticket.
The architecture and politics are the usual control layer with fresher paint. We don’t need revocable digital IDs as the price of entry to the internet. We were doing just fine.
Are the published hacks real?
It’s worth splitting the reported problems into two piles. Pile one: “bugs in the mock-up” — leaked files, unchecked MRZ scans, Chrome-extension demos hitting a placeholder backend. These are fixable and will be fixed per country. Pile two: structural properties that fall out of the protocol itself — no proximity binding, client-side one-time-use, unlinkability that breaks on reuse. These are not bugs. They’ll be present in every national implementation that follows the spec. When commentators wave away “the hacks,” they usually mean pile one. Pile two is what this post is actually about.
There have been several “hacks,” mostly by people who don’t understand how this is supposed to work. Leaving files on disk in the reference app is something that will be fixed, and does not really matter. The reference app will not be used by any country directly — each country will have its own bugs. It exists so countries know how to generate the proofs and stay interoperable. It doesn’t even matter that you can fool it into giving you a test credential, because the primary verification path will be countries’ own eID systems, not this mock-up’s unchecked MRZ scanner.
There was a “hack” that created a custom Chrome extension. That would fail app attestation once attestation is enforced. The MRZ path also does not connect to a real backend, because there’s no real EU-side backend — the registries of valid documents are the competence of individual countries.
I’m 99% sure that even though I consider the EU completely incompetent, these particular mock-up hacks won’t work in production apps. So this “haha, I hacked the app with my Claude Max subscription” doesn’t mean anything. They’re hacking a mock-up that demonstrates the use of a library. Yes, Frau Ursula called it the “EU Age Verification app,” but there will not be an EU app — there will be a Slovak app, a Hungarian app, a German app, a Dutch app, a French app…
But Why?
Many of us naturally ask why people want this. I think it’s a mistake to think they don’t. There is demand for this. The internet is scary, parents think they can’t protect their children from many bad things happening, and someone came to provide a “solution.” Doesn’t matter that I am sure the kids will go around it easily. The clients (the voters) are not the children being protected, but their parents, feeling good.
I think a very good and deep explanation is in my novel Tamers of Entropy. Have a look. It is very cypherpunk/lunarpunk and also explains the psychology behind these dystopias — and a way out. Plus it’s fun to read. Check it out at tamersofentropy.net. The characters even have Nostr accounts.
Conclusion
The EU’s fancy ZK apps will not be ready in time. Platforms will use normal KYC providers, AI face-age estimators, and other means.
When done according to spec, the age verification app has meaningful privacy properties — the platforms don’t know your identity or link your accounts. But those properties rest on wallet behavior, not cryptographic guarantees. The ZK math that would make unlinkability a hard guarantee is in the repo and not switched on.
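Why does reuse break unlinkability? A minimal sketch (my own illustration, not the actual EUDI wallet code; the SD-JWT-style salted-hash disclosure is simplified) shows the core problem: without the ZK layer, presenting the same credential twice reveals the same salt and digest, so two colluding sites can trivially link the accounts:

```python
# Hypothetical sketch: salted-hash attribute disclosure without ZK proofs.
# The issuer's signature is omitted; only the linkability argument matters.
import hashlib
import secrets

def issue_credential(age_over_18: bool) -> dict:
    # Issuer commits to the attribute via a salted hash (SD-JWT style).
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}|age_over_18={age_over_18}".encode()).hexdigest()
    return {"salt": salt, "claim": ("age_over_18", age_over_18), "digest": digest}

def present(credential: dict) -> dict:
    # Without a ZK proof, the wallet discloses the same salt and digest
    # every time the credential is shown.
    return {"salt": credential["salt"], "digest": credential["digest"]}

cred = issue_credential(True)
seen_by_site_a = present(cred)
seen_by_site_b = present(cred)

# Two colluding sites compare notes: identical values mean the same person.
assert seen_by_site_a == seen_by_site_b
```

The spec’s workaround is batch-issuing many single-use credentials and trusting the wallet to never present one twice — a policy enforced by app behavior, which is exactly the “wallet behavior, not cryptographic guarantees” point above.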
The apps will not work unless you have a Google- or Apple-approved device. Forget Linux, GrapheneOS, Huawei, aftermarket firmware. It’s part of the security model.
And relay attacks — children using adult proxies to get into age-gated sites — are not fixable bugs. They are a structural property of the protocol that will ship in all 27 national apps.
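A sketch of why the relay is structural (a toy illustration with an invented token format, not the real protocol): the age proof is a bearer artifact with nothing binding it to the person presenting it, so a proof generated on an adult’s phone verifies just as well when a child’s browser submits it:

```python
# Hypothetical sketch: a bearer age proof with no proximity or presenter
# binding can be relayed. HMAC stands in for the issuer's signature.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"  # stand-in for the issuer's signing key

def make_proof(claims: dict) -> str:
    # Sign the claims; note there is no channel, session, or device binding.
    body = json.dumps(claims, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + tag

def verify(proof: str) -> bool:
    # The verifier checks only the signature and the claim -- it has no way
    # to tell who is holding the proof.
    body, tag = proof.rsplit(".", 1)
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and json.loads(body).get("age_over_18", False)

adult_proof = make_proof({"age_over_18": True})  # generated on the adult's phone
relayed = adult_proof                            # forwarded to the child over chat

assert verify(relayed)  # accepted: nothing ties the proof to its presenter
```

Expiry windows and one-time-use counters shrink the relay window but cannot close it, because the protocol verifies a device and a credential, never the human at the keyboard.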
The privacy theater hides the wolf. The wolf is still there.
On April 10, the house of OpenAI CEO Sam Altman was attacked with a Molotov cocktail by 20-year-old Daniel Moreno-Gama. The suspect, who was arrested the same day, had written a manifesto warning of the existential threat of artificial intelligence. In his missive, he advocated for killing the CEOs of AI companies, and on Instagram he referred to himself as a “butlerian jihadist” (a reference to a war against machines in Frank Herbert’s Dune universe).
Three days prior in Indianapolis, an unknown perpetrator fired 13 shots into the home of local Democratic councilman Ron Gibson while his 8-year-old son was home. Neither was hurt, but a note reading “No Data Centers” was left on the doorstep. Gibson had lent his support to a potential data center project in his district. There have not yet been any arrests in the case.
Both incidents were frightening examples of abhorrent, politically motivated violence. But the reaction, at least on social media, seemed at times to revel in it.
The mood exemplified by inflamed Instagram commenters was further reinforced on April 13, when Stanford University released its annual Artificial Intelligence Index, which provides a yearly snapshot of where the industry stands.
One of the starkest contrasts in the report was the gulf between what AI experts predict for AI’s future and the public’s reaction to the industry’s designs. On jobs, 73 percent of experts were positive about the long-term effect, and 69 percent were positive about the long-term effect on the economy. Among the public, those numbers were 23 percent and 21 percent, respectively, with nearly two-thirds of Americans thinking that AI will lead to fewer jobs over the next 20 years.
A separate survey, released in March 2026 by Gallup, also showed a sharp increase in negative attitudes toward AI among Gen Z. According to the poll, the percentage of Gen Zers who felt excited about AI had dropped from 36 percent to 22 percent, while the share who felt angry about it had increased from 22 percent to 31 percent.
These numbers and actions point in the same direction: a rapidly growing populist backlash toward AI, which tech journalist Jasmine Sun defined as “a worldview in which AI is viewed not only as a normal technology, but an elite political project to be resisted … a thing manufactured by out-of-touch billionaires and pushed onto an unwilling public.”
Naturally, violence is never an answer, nor is it a politically effective tactic. But you also cannot ignore how the AI industry’s tone-deaf public messaging has contributed to this reaction.
For years, CEOs like Altman and Anthropic’s Dario Amodei have very publicly oscillated between two grim scenarios. In one, AI exterminates humanity with a biological super-weapon. In the other, AI either takes your job entirely or creates an economy where your only option is to downshift into the gig economy.
These pitches may be perfect for attracting attention at tech conferences or funding rounds, but they utterly ignore the daily concerns of regular Americans at a time when the job market (especially for new graduates) is incredibly shaky; economic gains are concentrated among the top 0.1 percent; and the prices of food, housing, and now gasoline all continue to skyrocket.
This is the environment in which the AI industry is very publicly asking for hundreds of billions of dollars in continued investment, as well as a massive data center buildout that has had significant effects on local populations’ electrical bills. In Virginia, for example, the epicenter of the U.S. data center boom, residential electrical rates are projected to increase by up to 25 percent by 2030.
These costs could be ignored, or even accepted, if there were a clear idea of how precisely AI would streamline and improve the workplace—or offer any tangible public benefit significant enough to make the underlying trade-offs acceptable. But the answers to these questions remain extremely tenuous. According to a February 2026 paper by the National Bureau of Economic Research, 80 percent of companies that have begun actively using AI report no impact on company productivity. A separate, widely cited 2025 MIT study found that 95 percent of corporate AI pilot programs received zero return.
Even within tech and coding, one of the areas where AI is reported to hold the most promise, there’s the question of whether the reported productivity gains can be trusted. In a provocative GitHub post, machine-learning engineer Han-Chung Lee argued that even rosy internal numbers showing AI-assisted productivity gains are suspect, as they’re produced to hit adoption targets no one can effectively audit.
This isn’t to say that AI doesn’t show immense and potentially very valuable promise, especially bearing in mind that ChatGPT (arguably the first mainstream demonstration of the technology) launched only in November 2022. It’s natural for a new technology to have a bumpy adoption period as both users and designers stress-test its strengths and limitations in the real world.
But the gap between how AI companies talk about themselves and how the general public has experienced the technology (and its side effects) has grown into a chasm, and the results of these divisions are starting to show: data center projects canceled or delayed; an industry less popular than ICE or Donald Trump; and now, violent acts against AI leaders.
In its defense, Big Tech has realized the extent of the potential problems that AI could pose to regular Americans. Earlier in April, for example, OpenAI released an Industrial Policy White Paper, which included suggestions such as the creation of a Public Wealth Fund for all Americans to share in AI growth, revamping social safety nets, and investing in real-time measurement of how AI affects work. In January, Microsoft released a Community-First AI Infrastructure Initiative, promising to subsidize utility rates and minimize water use in communities where it builds data centers.
But it’s one thing for AI companies to make lofty promises in press releases, and another thing entirely for them to follow through.