In 2023, Raytheon’s president stood at the Paris Air Show and described what it took to restart Stinger missile production. They brought back engineers in their 70s to teach younger workers how to build a missile from paper schematics drawn during the Carter administration. Test equipment had been sitting in warehouses for years. The nose cone still had to be attached by hand, exactly as it was forty years ago.
The Pentagon hadn’t bought a new Stinger in twenty years. Then Russia invaded Ukraine, and suddenly everyone needed them. The production line was shut down. The electronics were obsolete. The seeker component was out of production. An order placed in May 2022 wouldn’t deliver until 2026. Four years. Not because of money. Because the people who knew how to build them retired a decade earlier and nobody replaced them.
I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now.
In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. Ukraine was consuming 5,000 to 7,000 rounds per day. Anyone with a calculator could see this wouldn’t work.
By the deadline, Europe delivered about half. Macron called the original promise reckless. An investigation by eleven media outlets across nine countries found actual production capacity was roughly one-third of official EU claims. The million-shell mark wasn’t hit until December 2024, nine months late.
It wasn’t one bottleneck. It was all of them. France had halted domestic propellant production in 2007. Seventeen years of nothing. Europe’s single major TNT producer was in Poland. Germany had two days of ammunition stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The entire continent’s defense industry had been optimized for making small batches of expensive custom products. Nobody planned for volume. Nobody planned for crisis.
The U.S. wasn’t much better. One plant in Scranton, one facility in Iowa for explosive fill, no domestic TNT production since 1986. Billions of investment later, production still hadn’t hit half the target.
This wasn’t an accident. In 1993, the Pentagon told defense CEOs to consolidate or die. Fifty-one major defense contractors collapsed into five. Tactical missile suppliers went from thirteen to three. Shipbuilders from eight to two. The workforce fell from 3.2 million to 1.1 million. A 65% cut.
The ammunition supply chain had single points of failure everywhere. One manufacturer for 155mm shell casings, sitting in Coachella, California, on the San Andreas Fault. One facility in Canada for propellant charges. Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.
Then there’s Fogbank. A classified material used in nuclear warheads. Produced from 1975 to 1989, then the facility was shut down. When the government needed to reproduce it for a warhead life extension program in 2000, they discovered they couldn’t. A GAO report found that almost all staff with production expertise had retired, died, or left the agency. Few records existed.
After spending an additional $69 million and years of reverse engineering, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original had contained an unintentional impurity that was critical to its function. That fact existed nowhere in any document. Only the workers who made the original batch knew it, and they had retired years earlier.
A nuclear weapons program lost the ability to make a material it invented. The knowledge existed only in people, and the people were gone.
I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.
In defense, the substitute was the peace dividend. In software, it’s AI.
I wrote about the talent pipeline collapse before. The hiring numbers and the junior-to-senior problem are documented. So is the comprehension crisis. What I didn’t have was the right historical parallel. Now I do.
And it tells you something the hiring data doesn’t: how long rebuilding actually takes.
Every major defense production ramp-up took three to five years for simple systems. Five to ten for complex ones. Stinger: thirty months minimum from order to delivery. Javelin: four and a half years to less than double production. 155mm shells: four years and still not at target despite five billion dollars invested. France only restarted propellant production in 2024, seventeen years after shutting it down.
Money was never the constraint. Knowledge was. RAND found that 10% of technical skills for submarine design need ten years of on-the-job experience to develop, sometimes following a PhD. Apprenticeships in defense trades take two to four years, with five to eight years to reach supervisory competence.
Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can’t be compressed by throwing money at it. It can’t be compressed by AI either.
A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points. When researchers tried to run a follow-up, a significant share of developers refused to participate if it meant working without AI. They couldn’t imagine going back.
The software industry is in year three of the same optimization. Salesforce said it won’t hire more software engineers in 2025. A LeadDev survey found 54% of engineering leaders believe AI copilots will reduce junior hiring long-term. A CRA survey of university computing departments found 62% reported declining enrollment this year.
I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slow. The industry’s answer is predictable: let AI review AI’s code. I’m not doing that. I’ve reworked our pull request templates instead. Every PR now has to explain what changed, why, what type of change it is, screenshots of before and after. Structured context so the reviewer isn’t guessing. I’m adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.
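For illustration, a template along these lines can enforce that structure (the section names below are my own sketch of the idea, not the team's actual template):

```markdown
## What changed
<!-- One or two sentences describing the change itself -->

## Why
<!-- The problem this solves; link the ticket or incident -->

## Type of change
- [ ] Bug fix
- [ ] New feature
- [ ] Refactor (no behavior change)
- [ ] Breaking change

## Before / after
<!-- Screenshots for anything user-visible -->

## Reviewer notes
<!-- Where to start reading; anything machine-generated that deserves extra scrutiny -->
```

The point is that the reviewer gets context up front instead of reconstructing intent from the diff.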
But even that doesn’t solve the deeper problem. The skills you need to be effective now are different. Technical expertise alone isn’t enough anymore. You need people who can take ownership, communicate tradeoffs, push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.
We document everything. Site Books, SDDs, RVS reports, boilerplate modules with full coverage. It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t matter. Maybe the problem stays manageable. I can’t predict the capabilities of models in 2031.
But crises don’t send calendar invites. Nobody expected a full-scale land war in Europe in 2022. The defense industry had thirty years to prepare and didn’t. Even Fogbank had records. They weren’t enough without the people who understood what they meant.
Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong.
It’s Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don’t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn’t transfer to the AI.
It just disappears.
The West already made this mistake once. The bill came due in Ukraine.
I know how this sounds. I know I’ve written about the talent pipeline before. The defense example isn’t about repeating the argument. It’s about showing what happens if the industry’s expectations don’t work out. Stinger, Javelin, Fogbank, a million shells nobody could make. That’s the cost of betting wrong on optimization. We’re making the same bet with software engineering right now.
Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t. The defense industry thought peace would last forever, too.
April 24, 2026
An amateur just solved a 60-year-old math problem—by asking AI
A ChatGPT AI has proved a conjecture with a method no human had thought of. Experts believe it may have further uses
By Joseph Howlett edited by Lee Billings
Liam Price just cracked a 60-year-old problem that world-class mathematicians have tried and failed to solve. He’s 23 years old and has no advanced mathematics training. What he does have is a ChatGPT Pro subscription, which gives him access to the latest large language models from OpenAI.
Artificial intelligence has recently made headlines for solving a number of “Erdős problems,” conjectures left behind by the prolific mathematician Paul Erdős. But experts have warned that these problems are an imperfect benchmark of artificial intelligence’s mathematical prowess. They range dramatically in both significance and difficulty, and many AI solutions have turned out to be less original than they appeared.
The new solution—which Price got in response to a single prompt to GPT-5.4 Pro and posted on www.erdosproblems.com, a website devoted to the Erdős problems, just over a week ago—is different. The problem it solves has eluded some prominent minds, bestowing it some esteem. And more importantly, the AI seems to have used a totally new method for problems of this kind. It’s too soon to say with certainty, but this LLM-conceived connection may be useful for broader applications—something hard to find among recently touted AI triumphs in math.
“This one is a bit different because people did look at it, and the humans that looked at it just collectively made a slight wrong turn at move one,” says Terence Tao, a mathematician at the University of California, Los Angeles, who has become a prominent scorekeeper for AI’s push into his field. “What’s beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block.”
The question Price solved—or prompted ChatGPT to solve—concerns special sets of whole numbers, where no number in the set can be evenly divided by any other. Erdős called these “primitive sets” because of their connection to similarly indivisible prime numbers.
“A number is prime if it has no other divisors, and this is kind of generalizing that definition from an individual number to a collection of numbers,” says Jared Lichtman, a mathematician at Stanford University. Any set of prime numbers is automatically primitive, because primes have no factors (except themselves and the number one).
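The definition is mechanical enough to check in a few lines. A minimal sketch in Python (the function name is mine, purely illustrative):

```python
from itertools import combinations

def is_primitive(nums):
    """True if no element of nums evenly divides any other element."""
    distinct = sorted(set(nums))
    # combinations over the sorted list always yields (smaller, larger)
    # pairs, so it suffices to test whether the smaller divides the larger.
    return all(b % a != 0 for a, b in combinations(distinct, 2))

print(is_primitive([2, 3, 5, 7]))   # a set of primes is always primitive
print(is_primitive([4, 6, 9, 10]))  # primitive: no member divides another
print(is_primitive([3, 5, 15]))     # not primitive: 3 and 5 both divide 15
```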
Erdős also came up with the Erdős sum, a “score” you can calculate for any primitive set. He showed that this score is always bounded and conjectured that its maximum possible value, roughly 1.64, is achieved by the (infinite) set of all prime numbers. Lichtman proved Erdős right as part of his doctoral thesis in 2022.
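In symbols (the notation here is mine, not the article's), the Erdős sum of a primitive set $A$ is

```latex
f(A) = \sum_{n \in A} \frac{1}{n \log n},
\qquad\text{and Lichtman's 2022 theorem states}\qquad
f(A) \,\le\, f(\mathbb{P}) = \sum_{p\ \mathrm{prime}} \frac{1}{p \log p} \approx 1.6366
\quad\text{for every primitive set } A.
```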
Erdős also noticed that the score drops if all of a set’s numbers are large—the larger the numbers, the lower the score. He guessed that the lowest this score could be was exactly one, a limit that the score would approach as the set’s numbers approached infinity. Lichtman tried to prove this, too, but got stuck like everyone else before him.
Price wasn’t aware of this history when he entered the problem into ChatGPT on an idle Monday afternoon. “I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI and seeing what it can come up with,” he says. “And it came up with what looked like a right solution.”
He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge. The duo had jump-started the AI-for-Erdős craze late last year by prompting a free version of ChatGPT with open problems chosen at random from the Erdős problems website. (An AI researcher subsequently gifted them each a ChatGPT Pro subscription to encourage their “vibe mathing.”)
Reviewing Price’s message, Barreto realized what they had was special, and experts whom he notified quickly took notice.
“There was kind of a standard sequence of moves that everyone who worked on the problem previously started by doing,” Tao says. The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.
“The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight.
More importantly, they already see other potential applications of the AI’s cognitive leap. “We have discovered a new way to think about large numbers and their anatomy,” Tao says. “It’s a nice achievement. I think the jury is still out on the long-term significance.”
Lichtman is hopeful because ChatGPT’s discovery validates a sense he’s had since graduate school. “I had the intuition that these problems were kind of clustered together and they had some kind of unifying feel to them,” he says. “And this new method is really confirming that intuition.”
Ever wanted to connect your Legos and Tinkertoys together? Now you can — and much more. Announcing the Free Universal Construction Kit: a set of adapters for complete interoperability between 10 popular construction toys.
Fig. 1. The Free Universal Construction Kit.
Overview
Motivation
Download
Implementation
Legal and Commercial Implications
License and Disclaimers
Credits, Contact and Acknowledgements
Keywords
Overview
Video by Riley Harmon for F.A.T. Lab + Sy-Lab.
F.A.T. Lab and Sy-Lab are pleased to present the Free Universal Construction Kit: a matrix of nearly 80 adapter bricks that enable complete interoperability between ten* popular children’s construction toys. By allowing any piece to join to any other, the Kit encourages totally new forms of intercourse between otherwise closed systems—enabling radically hybrid constructive play, the creation of previously impossible designs, and ultimately, more creative opportunities for kids. As with other grassroots interoperability remedies, the Free Universal Construction Kit implements proprietary protocols in order to provide a public service unmet—or unmeetable—by corporate interests.
The Free Universal Construction Kit offers adapters between Lego, Duplo, Fischertechnik, Gears! Gears! Gears!, K’Nex, Krinkles (Bristle Blocks), Lincoln Logs, Tinkertoys, Zome, and Zoob. Our adapters can be downloaded from Thingiverse.com and other sharing sites as a set of 3D models in .STL format, suitable for reproduction by personal manufacturing devices like the Makerbot (an inexpensive, open-source 3D printer).
Motivation
Our kids are already doing it! And when we were growing up, ourselves, we did it too—or we tried to, anyway. Connecting our toys together. Because: what if we want to make a construction which is half-Tinkertoys, half-K’Nex? Why shouldn’t we be able to? We dreamed about this possibility years ago, when we were small, and we knew then, as we know now, that we’d need some adapters to help. The advent of low-cost 3D printing has made such adapters possible, and with it, a vast new set of combinatorial possibilities for children’s creative construction toys.
Opening doors to new creative worlds is one major reason we created the Free Universal Construction Kit. Another is that we believe expertise shouldn’t be disposable — and that children’s hard-won creative fluency with their toys shouldn’t become obsolete each Christmas. By allowing different toy systems to work together, the Free Universal Construction Kit makes possible new forms of “forward compatibility”, extending the value of these systems across the life of a child. Thus, with the Kit’s adapters, playsets like Krinkles (often enjoyed by toddlers) can still retain their use-value for older children using Lego, and for even older tweens using Zome.
The Kit offers a “best of all worlds” approach to play and learning that combines the advantages of each toy system. We selected construction sets for inclusion based on their significant level of market penetration, as well as for the diversity of features they brought to the Kit’s collection. Some of the supported construction systems, for example, offer great mechanical strength, or the ability to build at large scales; others offer the means to design kinetic movements; and still others permit the creation of a wide range of crystallographic geometries and symmetries. Using these classic toys as a foundation, the Free Universal Construction Kit offers a “meta-mashup system” ideally provisioned for the creation of transgressive architecture and chimeric readymades.
Finally, in producing the Free Universal Construction Kit, we hope to demonstrate a model of reverse engineering as a civic activity: a creative process in which anyone can develop the necessary pieces to bridge the limitations presented by mass-produced commercial artifacts. We hope that the Kit will not only prompt people to create new designs, but more importantly, to reflect on our relationship with material mass-culture—and the rapidly-evolving ways in which we can better adapt it to our imaginations.
Download
The Free Universal Construction Kit 3D models are freely available in .STL format from three locations:
Individual adapters from the Free Universal Construction Kit may be downloaded from Thingiverse.com — the world’s foremost website dedicated to the free sharing and remixing of user-created digital design files.
The complete Free Universal Construction Kit can also be downloaded in its entirety*, as a 29MB .zip archive from the F.A.T. Lab web site, here. Note: all units are in inches.
We expect the Kit to be available shortly from The Pirate Bay, as a torrent in TPB’s new “physibles” (physical downloadables) channel.
In addition to the Kit itself, we also offer for download this attractive B1 poster (4.5MB PDF, in two versions: gray background / white background).
Figure 2. The Free Universal Construction Kit adapter matrix. (PDFs: Gray, White)
We (F.A.T. Lab and Sy-Lab) neither sell nor distribute physical copies of the Free Universal Construction Kit. Please do not ask us to do so. Individuals seeking their own physical copies of the Kit, in whole or in part, are encouraged to download our files and reproduce them with open-hardware desktop 3D printers like the Makerbot, RepRap, Ultimaker, or Printrbot. Alternatively, copies for private use may be available from a personal fabrication service bureau; for awesome service, international/anywhere shipping and quick turnaround, we highly recommend Ponoko.com for personalized 3D printing in a wide variety of materials. Shapeways and QuickParts are good, too. You may also find a 3D printer in the architecture, industrial design, and/or mechanical engineering departments of your local university.
Please note that our license for the Free Universal Construction Kit prohibits commercial use of these designs in mass production; note, however that we encourage individuals to contract with fabrication service bureaus for the creation of personal copies. For more information, see our license and disclaimers, below.
Implementation
The Free Universal Construction Kit comprises nearly 80 two-way adapters. These allow each of the different construction toys (Lego, Tinkertoy, Fischertechnik etc.) to interface with any of the other supported systems. Prior to modeling, the dimensions of the various toy connectors were reverse-engineered with an optical comparator fitted with a digital read-out accurate to less than one ten-thousandth of an inch (0.0001in., or 2.54 microns).
Figure 3. A Bristle Block being measured in the optical comparator.
The resulting precision ensures that the Free Universal Construction Kit “actually works”, enabling tight snap-fits between custom and commercial components.
Figure 4. The Kit in use, connecting four different systems together.
Below is a partial gallery of assorted Kit adapters, respectively compatible with (clockwise from top left): Lego, Zoob, Tinkertoys, and Gears! Gears! Gears!. Click on the images for higher-resolution photographs:
In addition to its many one-to-one adapters, the Free Universal Construction Kit also includes a special fist-sized Universal Adapter Brick which provides connectivity between all of the supported construction systems:
Fig. 9. The Universal Adapter Brick.
Producing physical prints from our provided 3D models prompts certain fabrication considerations. According to Wikipedia, the precision of Lego pieces is less than 10 microns. As of early 2012, however, standard Makerbot printers have an XY resolution of 100 microns (0.1mm) and a default layer thickness of 360 microns (0.36mm). We thus caution that fabrication of the Free Universal Construction Kit with current (2012-era) solutions for DIY 3D printing, such as the Makerbot, Printrbot or RepRap, may lack the precision required for reliable or satisfactory coupling with standard commercial pieces. A great deal depends on how well-tuned the printer is; thus, your mileage may vary. In any case, we expect this situation will improve gradually, but inexorably, in tandem with improvements to these vibrantly evolving fabrication platforms. The artist’s proof shown here was created in a UV-cured white resin using a commercial-grade Objet (“polyjet”) 3D printer, which has a horizontal resolution of 42 microns, and a layer thickness of 16 microns. Ponoko.com and other private fabrication services offer printing from Objet machines and other high-resolution devices.
Legal and Commercial Implications
Consider the frustrating experience of purchasing a new computer (a Mac, say) and discovering that it will not play your aunt’s Windows Media video of your little cousins. Likewise, imagine your aunt’s corresponding annoyance when she finds that her PC will not play the Apple Quicktime video you sent her of your cats. This humiliating little episode isn’t an accident; it’s just a skirmish in a never-ending battle between giant commercial entities, played out, thousands of times every day, in exactly such micro-punishments to customers like you. If you’re well-informed, you may happen to know about VLC — a free, open-source video player, developed by independent hackers as a grassroots remedy for exactly this problem. Until the advent of ubiquitous 3D printing, software remedies like VLC weren’t readily available for hardware products, like toys. That’s changing.
Today’s manufacturers have little or no intrinsic motivation to make their products compatible with anyone else’s. Indeed—despite obvious benefits to users everywhere—the implementation of cross-brand interoperability can be nearly impossible, given the tangled restrictions of patents, design rights, and trademarks involved in doing so. So we stepped up. The Free Universal Construction Kit is the VLC of children’s playsets.
As we can see from the example above, interoperability is a question of power and market dominance. Most market leaders regard interoperability as an anti-competitive nuisance, a regulatory check on their ambition, or a concession to the whining of lesser players. Quite simply, interoperability is the request of the disenfranchised. And which end-user, in so many ways, is less enfranchised than a preliterate child?
The simple fact is that no toy company would ever make the Free Universal Construction Kit. Instead, each construction toy wants (and indeed, pretends) to be your only playset. Within this worldview, the other manufacturers’ construction sets are just so many elephants in the room, competing for your attention on the shelves of Toys-R-Us. No longer. The Free Universal Construction Kit presents what no manufacturer could: a remedy providing extensible, post-facto syntactic interoperability for construction toys. Let the fun begin!
Some may express concern that the Free Universal Construction Kit infringes such corporate prerogatives as copyright, design right, trade dress, trademarks or patents of the supported toy systems. We encourage those eager to enforce these rights to please think of the children (or perhaps the Streisand effect) — and we assert that the home printing of the Free Universal Construction Kit constitutes protected fair use. Simon Bradshaw et al., writing in “The Intellectual Property Implications of Low-Cost 3D Printing”, conclude that the public is legally allowed to make 3D prints that mate with proprietary parts, especially in cases (the “Must Fit Exception”) where a piece’s shape “is determined by the need to connect to or fit into or around another product”:
“Even where a registered design is copied via a 3D printer this would not be an infringement if it were done ‘privately and for purposes which are not commercial’. Both criteria must be met; it is insufficient that copying is not done for profit. Purely personal use of a 3D printer to make items will thus not infringe a registered design.”
*In fact, the Free Universal Construction Kit deliberately avoids patent infringement. Part of our strategy for doing so is our choice to support older (“classic”) playsets: of the ten toy systems supported by the Kit, eight are no longer protected by active (20-year) patents. To take a few examples: Lego was patented in 1958; Lincoln Logs, in 1920; and Tinkertoys, in 1932. There are, however, two instances in which toy systems nominally supported by the Kit are still protected (as of this writing) by active patents: Zoob (patented 1996) and ZomeTool (patented 2002). For the Zoob and Zome systems, please note that we have delayed the release of pertinent adapter models until December 2016 and November 2022, respectively.
The Free Universal Construction Kit is simply one “toy” illustration of a coming grassroots revolution, in which everyday people can—with desktop tools—overcome arbitrary restrictions in mass-manufactured physical culture. The burgeoning possibility of freely shared downloadable adapters has significant implications for industries where the attempt to create “technological lock-in” is a common business practice. For more on this subject, and the legal horizons of reproducing commercial products with home fabrication systems, please see:
Bradshaw, Simon; A. Bowyer and P. Haufe, “The Intellectual Property Implications of Low-Cost 3D Printing”, 7:1 SCRIPTed 5, 2010.
de Bruijn, Erik. “Fab It Yourself: Adapters & Consumer Lock-In”. Blog.erikdebruijn.nl, 13 September 2010.
Hanna, Peter. “The next Napster? Copyright questions as 3D printing comes of age”. Arstechnica.com, April 2011.
Ross, Valerie. “Can You Patent a Shape? 3D Printing on Collision Course With Intellectual Property Law”. Discover Magazine, 7 April 2011.
Weinberg, Michael. “3D Printing Settlers of Catan is Probably Not Illegal: Is This a Problem?”. PublicKnowledge.org, 28 January 2011.
Weinberg, Michael. “It Will Be Awesome if They Don’t Screw it Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology”. PublicKnowledge.org, 10 November 2010.
In addition to the writers above, we tip our hats to Thingiverse user Zydac, whose related project (a Duplo-to-Brio track adapter) led us to these legal writings; to Andrew Plumb (Clothbot) who has probed the legal and practical implications of Lego-compatible bricks for some time; and to Daan van den Berg, who has explored 3D-printed remixes of branded forms as a mode of critical artistic practice.
License and Disclaimers
The Free Universal Construction Kit and its associated media are licensed under and subject to the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License (http://creativecommons.org/licenses/by-nc-sa/3.0/legalcode). The official URL for the Free Universal Construction Kit is https://fffff.at/free-universal-construction-kit. You are free to copy, distribute and transmit the Kit, and to remix and/or adapt the Kit; in doing so, you must attribute the Kit to “F.A.T. Lab and Sy-Lab”, and include a link to the project using the URL above. We especially welcome extensions to the Kit which provide compatibility with as-yet-unsupported play systems. Please note that extensions to the Kit require the same or similar license. You may not use the Kit in commercial mass production; however, we permit individuals to contract with fabrication service bureaus (e.g. Ponoko, Shapeways, etc.) for personal copies.
Lego®, Duplo®, Fischertechnik®, Gears! Gears! Gears!®, K’Nex®, Krinkles®, Bristle Blocks®, Lincoln Logs®, Tinkertoys®, Zome®, ZomeTool® and Zoob® are trademarks of their respective owners. The Free Universal Construction Kit is not associated or affiliated with, or endorsed, sponsored, certified or approved by, any of the foregoing owners or their respective products.
We are not a commercial company; we are artists, hackers and activists. The Kit is not a product; it is a provocation. F.A.T. Lab and Sy-Lab, in cooperation with Adapterz LLC, (1) perform solely the service of publishing the Free Universal Construction Kit, (2) do not participate in any production, public manufacture or sale of the items displayed here, and (3) offer no opinion, warranty or representation as to the safety, quality or functionality of the Kit. The F.A.T. Lab, Sy-Lab and Adapterz LLC therefore offer no warranty of any kind, express or implied.
Please cite the Free Universal Construction Kit, and/or this article, as follows:
Free Art and Technology [F.A.T.] Lab and Sy-Lab. “The Free Universal Construction Kit.” Fffff.at, 20 March 2012. <https://fffff.at/free-universal-construction-kit>.
WARNING: CHOKING HAZARD!
Small parts. Not for children under 3 years.
Credits, Contact and Acknowledgements
For press or other inquiries about the Free Universal Construction Kit, please contact info@adapterz.org. The Kit was conceived and developed by the F.A.T. (Free Art and Technology) Lab in collaboration with Sy-Lab, and is represented, for legal purposes, by Adapterz, LLC. The Kit’s “advertisement” video was created by Riley Harmon.
The creators express gratitude to: our families; our lawyers; the children appearing in our demonstration video, and their families; Jean Aw, Eric Brockmeyer, David Familian, Andy Flowers, Michael Joaquin Grey, Mark Gross, Riley Harmon, Marcie and Lawrence Hayhurst, Allie Oswell, Eric Paulos, Bre Pettis, Kent Sheely, Michael Weinberg, and the STUDIO for Creative Inquiry. The Kit files are sportingly hosted by Thingiverse.com.
Keywords
Toys, kits, construction sets, construction toys, construction systems, Lego, Duplo, Fischertechnik, Gears! Gears! Gears!, K’Nex, Krinkles, Bristle Blocks, Lincoln Logs, Tinkertoys, Zome, ZomeTool, Zoob, Constructivist learning, play, connectors, adaptors, adapter piece, adapter brick, adapters, universal translator, gender changer, modularity, interoperability, interoperability remedy, compatibility layer, technological lock-in, post-facto plug-and-play syntactic interoperability, shim, computer aided design, 3D models, STL files, physibles, rapid prototyping, 3D printing, Makerbot, RepRap, Printrbot, Thingiverse, Ponoko, F.A.T. Lab, Sy-Lab, fair use, remix, hybrid, mashup.
The commons and the public good are continually threatened by narrow interests seeking private gain. Please continue to support and protect the free, open, and non-proprietary exchange and development of ideas and information online.
Note: I initially drafted this before my last post on how Claude Code is getting worse. I’m putting it out now so I can reference it in a future post on OpenCode. As you can imagine my opinion on Claude Code has shifted since I wrote this.
Long ago I attempted a personal project, but never finished due to life being busy. [1] Sort of like the Japanese word Tsundoku, for the pile of books you intend to eventually read one day. We all have these projects and they are good candidates for testing out AI coding assistance. After all, they were never going to get done anyway.
The POC I put together was a shim between YouTube Music and the OpenSubsonic API. Explaining OpenSubsonic could be its own article, but for our purposes it’s an API contract for nicely decoupling music streaming clients and servers. You can pick your own options for both. In my case I like Navidrome for the server, Feishin for desktop, and as I mentioned in my post on GrapheneOS, Symfonium for Android.
Anyways, the shim made YouTube Music conform to the API so I could add it to any of my clients. Under the hood I used ytmusicapi for metadata lookup and programmatically called yt-dlp to stream the music. Getting basic streaming working was pretty simple. However, there was a long tail implementing all the endpoints in a conformant way. Then as always, there were new shiny projects that stole my attention away. Like that embedded rust location project I promise I’ll finish at some point. Maybe.
Luckily, nothing was really novel in that streaming project, and there is a clear spec to implement which is perfect for assisted coding. So a month and a half ago I thought I would test Claude Code with Opus 4.6 and see how it did implementing the project from scratch. After all, they gave me a free $50 in credit, so I might as well.
The setup
Since I had already written a proof of concept by hand, I had my own opinions about the implementation and laying all of that out beforehand constrained the tool in a nice way.
I did the following:
Created a uv project with fastapi, pydantic, ytmusicapi and yt-dlp as dependencies.
Changed main.py to the example FastAPI main file.
Dropped the openapi spec for OpenSubsonic in the folder.
Added a brief description in a readme file:
This project acts as a shim, exposing YouTube music as an opensubsonic client. It uses fastapi for its server with pydantic, ytmusicapi for metadata and yt-dlp for streaming.
opensubsonic docs are available at: https://example.docsy.dev/docs/reference/
The openapi spec is in openapi.json.
Added an empty TODO file.
Generated a CLAUDE.md file using /init.
I also often add a section like this to the CLAUDE.md file:
## Conventions
- Methods should have type annotations for args and returns as well as docstrings.
- Use Pydantic for data modeling. Use modern Pydantic V2 conventions.
- Doc strings should use the Google style format with Args and Returns sections.
- Write unit tests with modern pytest style, e.g. top-level methods using `assert` and fixtures.
That’s mostly based on past experience for what I have to repeatedly ask Claude Code not to do.
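For what it’s worth, here is roughly what code following those conventions looks like. The function itself is just an illustrative stand-in, not something from the project:

```python
def format_duration(seconds: int) -> str:
    """Format a track duration for display.

    Args:
        seconds: Total track length in whole seconds.

    Returns:
        The duration as a "minutes:seconds" string, e.g. "3:05".
    """
    minutes, secs = divmod(seconds, 60)
    return f"{minutes}:{secs:02d}"


def test_format_duration() -> None:
    # Modern pytest style: a top-level function using bare asserts.
    assert format_duration(185) == "3:05"
    assert format_duration(59) == "0:59"
```

Having this spelled out once in CLAUDE.md beats re-explaining the docstring and test style on every prompt.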
I’ve bundled up this starting point into a git repository in case anyone else wants to try the experiment.
Implementing the MVP
With that setup done, I let Claude kick things off. The workflow I typically use is:
Enter plan mode.
Prompt for the next piece of work.
After getting the initial plan, look for gaps / problems and ask follow up questions until I like the plan.
Provide links to resources when Claude is off.
Ask Claude to use the search tool to figure out what is idiomatic when there are multiple options and it is unclear to me which to take.
Use “Accept and clear context”.
Repeat.
The first prompt I used was:
Have a look at the openapi.json file. This is a spec for the opensubsonic api. Implement an async fastapi server that stubs out all of the methods. There are both older xml endpoints and newer style json endpoints. You only need to handle the newer json endpoints.
For this kind of change I like to clear context after implementing and then ask a follow up question:
I implemented stubbed versions of all the methods specified in openapi.json. Double-check they are correct.
Even with a spec, Claude Code makes mistakes the first time, but then will catch them (mostly) the second time through.
Also, after implementing larger changes, I like to re-run /init to update the CLAUDE.md file to cover the new pieces.
The next major prompt was:
The methods for all endpoints are stubbed out now. I want to connect a subsonic client, search for a song, and stream it to the client. What is the minimum amount of functionality needed to implement that? Use ytmusicapi for searching YouTube music and yt-dlp for streaming.
I got an implementation that looked reasonable pretty quickly, but fell over when trying to actually connect with Feishin. At that point I iterated by testing the client and handing the server request logs to Claude Code. Even with a spec there are details that are not spelled out clearly, like how endpoints may have a .view suffix that needs to be stripped. Every time there was an error I generated new unit tests to cover them.
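To make the .view detail concrete: older Subsonic clients hit paths like /rest/ping.view while newer ones hit /rest/ping. One way to handle both (the helper name is mine, not the project’s actual code) is to normalize the path before routing, for example in ASGI middleware that rewrites the request path before FastAPI dispatches:

```python
def normalize_endpoint(path: str) -> str:
    """Strip the legacy ".view" suffix from a Subsonic request path.

    Args:
        path: The raw request path from the client, e.g. "/rest/ping.view".

    Returns:
        The path with any trailing ".view" removed, e.g. "/rest/ping",
        so a single route can serve both old and new clients.
    """
    return path.removesuffix(".view")
```

It is exactly the kind of unglamorous spec gap that only shows up when you point a real client at the server.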
I was shocked to hear the audio streaming through Feishin after only a couple of iterations. The main issues involved stubbed endpoints returning nothing. They mostly had to be updated to return empty but correctly structured responses.
Just getting an MVP is the easy part though. Not that far beyond what I implemented in my POC.
Working through the long tail
The rest of the work was the less interesting, more drudgery parts to make the project actually usable. From the docs, OpenSubsonic has ~80 endpoints spread over 15 different categories.
For the MVP use case I only had to support:
getLicense, getUser, getGenres and getMusicDirectories with empty, but valid collections.
getSong as a pass through that returned the ID in the query params and default values.
search3 with a very basic ytmusicapi call.
stream with a yt-dlp call wrapped in an asyncio.to_thread to extract the URL for the “bestaudio” format.
getCoverArt with a call to yt-dlp to extract the cover image URL.
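The stream bullet is the one pattern worth sketching: yt-dlp’s URL extraction is blocking, so it has to be pushed off the event loop with asyncio.to_thread. Here is a self-contained sketch of that pattern, with the actual yt-dlp call replaced by a hypothetical stand-in so it runs without the dependency:

```python
import asyncio
import time


def extract_stream_url(video_id: str) -> str:
    """Stand-in for the blocking yt-dlp call that resolves a stream URL.

    In the real shim this would invoke yt-dlp to pick the "bestaudio"
    format; here it just simulates a slow, blocking lookup.
    """
    time.sleep(0.1)  # pretend network latency
    return f"https://example.invalid/audio/{video_id}"


async def stream_endpoint(video_id: str) -> str:
    # Run the blocking extraction in a worker thread so the event
    # loop keeps serving other requests in the meantime.
    return await asyncio.to_thread(extract_stream_url, video_id)


url = asyncio.run(stream_endpoint("abc123"))
```

The same wrapping works for any blocking library call inside an async FastAPI handler.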
To support the full functionality of a subsonic client I:
Added simple in memory caching for ytmusicapi calls to avoid hitting usage limits.
Used sqlite for storing music metadata and implemented all the endpoints in the browsing category. Even getTopSongs by querying for the top songs list.
Saved the song to disk as it streamed to avoid redownloading songs. I had to add handling to clean up the incomplete file when a client disconnects from the stream endpoint before the file was fully downloaded.
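The in-memory cache from the first bullet can be as small as a dict keyed by the call signature, with a TTL so stale metadata eventually refreshes. A sketch (the names are mine, not the project’s actual code):

```python
import time
from typing import Any, Callable

# Maps cache key -> (insertion time, cached value).
_cache: dict[str, tuple[float, Any]] = {}


def cached(key: str, ttl: float, fetch: Callable[[], Any]) -> Any:
    """Return a cached value for key, calling fetch() on a miss.

    Args:
        key: Cache key, e.g. the ytmusicapi method name plus its args.
        ttl: Seconds before a cached entry goes stale.
        fetch: Zero-argument callable that produces a fresh value.

    Returns:
        The cached or freshly fetched value.
    """
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and now - hit[0] < ttl:
        return hit[1]
    value = fetch()
    _cache[key] = (now, value)
    return value


hits: list[int] = []


def fake_search() -> list[str]:
    hits.append(1)
    return ["Song A", "Song B"]


first = cached("search:beatles", 60.0, fake_search)
second = cached("search:beatles", 60.0, fake_search)  # served from cache
```

A dozen lines like this is usually enough to stay under third-party rate limits for a single-user service.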
I knew all these things had to be done to make my own POC more usable, and I could have done them, but never did. At the same time, since I never planned to release anything I absolutely skipped the hard bits around authentication.
Altogether I was able to get a working service that I could connect to from a subsonic client in a short evening. In the end I dubbed the project “Sub-standard”.
Is this good?
I don’t want to sound like an AI coding assist booster. I still have fears around deskilling from relying on these tools too much. That’s why I still bang my head against the wall trying to learn Rust.
In my mind there are different buckets for personal projects. One is things I do to learn and grow, and the other is things I really wish existed. [2] This kind of project falls into the second bucket. Using AI coding assist to reify those projects is a form of wish fulfillment. I never would have gotten to it, but now I can have the project. One less metaphorical book sitting unread on the bookshelf.
In the end I think the important thing is not whether you are doing projects in bucket 2, but whether you are also still doing the stretch projects in bucket 1.
Or at least that is the excuse I tell myself. ↩︎
Also other buckets, I don’t want to imply those are the only two. ↩︎
On April 10, the house of OpenAI CEO Sam Altman was attacked
with a Molotov cocktail by 20-year-old Daniel Moreno-Gama. The suspect, who
was arrested the same day, had written a manifesto warning of the
existential threat of artificial intelligence. In his missive, he advocated for
killing the CEOs of AI companies, and he referred to himself as a “Butlerian jihadist”
on Instagram (a reference to a war against machines in Frank Herbert’s Dune
universe).
Three days prior in Indianapolis, an unknown perpetrator
fired 13 shots into the home
of local Democratic councilman Ron Gibson while his 8-year-old son was
home. Neither were hurt, but a note reading “No Data Centers” was left on the
doorstep. Gibson had lent his support to a potential data center project in
his district. There have not yet been any arrests in the case.
Both incidents were frightening examples of abhorrent, politically motivated violence. But much of the reaction, at least on social media, seemed to revel in them.
The mood, exemplified by inflamed Instagram commenters on these incidents, was further reinforced on April 13 when Stanford University
released its annual
Artificial Intelligence Index, which provides a yearly snapshot of where
the industry stands.
In the report, one of the most striking contrasts was the
gulf between what AI experts predict for AI’s future and the public’s
reaction to the industry’s designs. On jobs, 73 percent of experts were
positive about the long-term effect, with 69 percent positive about the
long-term effect on the economy. Among the public, those numbers were 23 percent
and 21 percent, respectively, with nearly two-thirds of Americans thinking that
AI would lead to fewer jobs over the next 20 years.
A separate survey, released in March
2026 by Gallup, also showed a sharp increase in negative attitudes toward
AI among Gen Z. According to the poll, the percentage of Gen Zers who felt
excited about AI had dropped from 36 percent to 22 percent, while the number
who felt angry about it increased from 22 percent to 31 percent.
These numbers and actions point in the same direction: a
rapidly growing populist backlash toward AI, which tech journalist Jasmine Sun defined as “a
worldview in which AI is viewed not only
as a normal technology, but an elite political project to be resisted … a thing
manufactured by out-of-touch billionaires and pushed onto an unwilling public.”
Naturally, violence is never an answer, nor is it a politically effective
tactic. But you also cannot ignore how the tone-deaf public messaging of the AI
industry has contributed to this reaction.
For years, CEOs like Altman and Anthropic’s Dario Amodei have
very publicly oscillated between two suboptimal scenarios. In one, AI exterminates
humanity with a biological super-weapon. In the other, AI
either takes your job entirely or creates an economy where your only option
is to downshift
into the gig economy.
These pitches may be perfect for attracting attention at
tech conferences or funding rounds, but they utterly ignore the daily concerns
of regular Americans, at a time when the job market (especially for newer
graduates) is incredibly
shaky; economic gains are concentrated
among the top 0.1 percent; and the price
of food, housing, and, now, gasoline all continue to skyrocket.
This is the environment in which the AI industry is very
publicly asking for hundreds
of billions of dollars in continued investment, as well as a massive data center
buildout that has had significant effects on
local populations’ electrical bills. For example, in Virginia, the epicenter of the U.S.
data center boom, residential electrical rates have been projected to increase
by up to 25 percent by 2030.
These costs could be ignored, or even accepted, if there was a clear idea of how precisely AI would streamline and improve the workplace—or
offer any tangible public benefit significant enough to make these underlying
trade-offs acceptable. But the answers to these questions remain extremely
tenuous. According to a February 2026 paper by the National Bureau of Economic
Research, 80
percent of companies that have begun actively using AI have reported no
impact on company productivity. A separate, widely cited 2025 MIT study
revealed that 95 percent of corporate AI
pilot programs received zero return.
Even within tech and coding, one of the areas where AI is
reported to have the most promise, there’s the question of whether the
productivity gains reported can be trusted. In a provocative GitHub
post, machine-learning engineer Han-Chung Lee argued that even rosy
internal numbers that do show AI-assisted productivity gains are
suspect, as they’re produced to hit adoption targets no one can effectively
audit.
This isn’t to say that AI doesn’t show immense and possibly
incredibly valuable potential, especially bearing in mind that ChatGPT (which
can be considered the first mainstream demonstration of AI technology) was only
launched in November 2022. It’s natural for new technology to have a bumpy
adoption period as both users and designers stress-test its strengths and limitations
in the real world.
But the gap between how AI companies talk about themselves
and how the general public has experienced the technology (and its side
effects) has grown into a chasm, and now the results of these divisions are
starting to show: data
center projects canceled or delayed; an industry that is less
popular than ICE or Donald Trump; and now, violent acts against AI
leaders.
In its defense, Big Tech has realized the extent of the
potential problems that AI could pose to regular Americans. Earlier in April,
for example, OpenAI released an Industrial
Policy White Paper, which included suggestions such as the creation of a
Public Wealth Fund for all Americans to share in AI growth, revamping social
safety nets, and investing in real-time measurement of how AI affects work. In
January, Microsoft released a Community-First
AI Infrastructure Initiative, promising to subsidize utility rates and
minimize water use in communities where it was building data centers.
But it’s one thing for AI companies to make lofty promises
in press releases, and another thing entirely for them to follow through.
Niri is a scrollable-tiling Wayland compositor. Windows are arranged in columns on an infinite strip going to the right. Opening a new window never causes existing windows to resize.
As you may have noticed, niri now lives in a GitHub org rather than my (@YaLTeR) personal account.
The primary reason was the ability to give out issue triage permissions: I’d like to give a massive thanks to @Sempyos for triaging all of our issues and pull requests, answering many, many questions, and helping people diagnose their problems with niri.
We’ve also moved a few niri-adjacent projects to the GitHub org, like the awesome-niri list of related projects maintained by @Vortriz and a new artwork repo by @bluelinden and @HumpityDumpityDumber—two of the creators of our project logo. In the artwork repo, you can find a badge and several wallpapers, including two stunning 3D works created by @Duncan-Rose in Blender:
The main niri repo also flew past 20,000 stars in February! 🌟 Thanks, everyone, for your support.
Note
Packagers:
our minimum supported Rust version is now 1.85.
niri.service no longer hardcodes /usr/bin/ in the niri binary path (thanks @Axlefublr).
@markK24 restructured the dinit service files: 3bfa4a7
Now with introductions out of the way, here are the improvements from the last release.
Blur
It’s here. The most requested niri feature by far. Our highest upvoted issue on GitHub. After tireless fork maintenance by @visualglitch91 and @Naxdy, blur is in mainline niri for everyone to use!
Windows and layer-shell components can request blur through the ext-background-effect Wayland protocol with no extra niri configuration. Many already do:
Dank Material Shell v1.4.5: enable background blur in settings
Noctalia shell: enable in settings and see docs
Vicinae launcher
foot terminal v1.26: set blur=true in colors config
kitty terminal v0.46.2: set background_blur 1
Ghostty terminal: will have support in v1.4
Toolkits:
Quickshell: will have support in v0.3
winit: will have support in v0.31
For apps that don’t support ext-background-effect yet, you can enable blur through the niri config:
// Enable blur behind the Alacritty terminal.
window-rule {
    match app-id="^Alacritty$"
    background-effect {
        blur true
    }
}

// Enable blur behind the fuzzel launcher.
layer-rule {
    match namespace="^launcher$"
    background-effect {
        blur true
    }
}
Keep in mind that niri-configured blur needs the right geometry-corner-radius, and it won’t work with complex surface shapes. See the Window Effects wiki page for details.
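For instance, one way to pair the blur with a matching corner radius in a single rule (the radius value and the clip-to-geometry pairing here are illustrative, not a recommendation):

```
// Blur behind Alacritty, with the blur clipped to rounded corners.
window-rule {
    match app-id="^Alacritty$"
    geometry-corner-radius 12
    clip-to-geometry true
    background-effect {
        blur true
    }
}
```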
Have I seen this screenshot before?..
We have both normal blur and xray blur that always shows the wallpaper. Xray blur is the default because it’s much more efficient: niri computes the blurred wallpaper once, and then reuses it as a static image, which is extremely cheap. The blur is only recomputed when the wallpaper changes (so an animated background will shrink the efficiency gains).
If you prefer non-xray (normal) blur, you can enable it with a window/layer rule. For example, you can set it on top and overlay layers (that usually overlap other content), via the new layer matcher:
// Make top and overlay layers use the regular blur (if enabled),
// while bottom and background layers keep using the efficient xray blur.
layer-rule {
    match layer="top"
    match layer="overlay"
    background-effect {
        xray false
    }
}
So, if blur is so good, where’s blur 2? Err, I mean, why did it take so long to add?
In short, background blur turned out to be a massive undertaking. Not because of the blur algorithm itself (by the way, if you want to learn about different blurs, including the widely used Dual Kawase, I highly recommend this blog post), but because window background effects in general required a lot of thinking and additions to the code, especially to make them as efficient as possible. This is one of the most complex niri features thus far.
Xray and non-xray effects are also pretty much two entirely separate and very different beasts, code-wise. Non-xray reads back the just-rendered pixels in the middle of a frame, blurs them, then continues drawing the frame. This required extensive refactors of Smithay’s rendering architecture (big thanks to @Drakulix!). Xray on the other hand requires threading the window positions all throughout the rendering code to draw the right cut-out of the background.
But it gets worse: we have our Overview. It was quite a challenge figuring out how to support xray blur in the overview, while maintaining the property that it is never re-rendered.
I also had to get both of them working with all other niri features, like blocking out from screencasts. When the window itself is blocked out that’s easy, but what if something in the background layer, inside the blur, is blocked out? An unusual case for sure, but hardly a good excuse if your sensitive data gets accidentally leaked.
By the way, I made it so xray can be used on its own, without the blur. As well as the noise and saturation effects (normally for reducing blur color banding and bumping the vividness). For example:
window-rule {
    match app-id="Alacritty"
    // Xray without the blur!
    background-effect {
        xray true
    }
}
One more thing you can do starting from this release is to configure niri to apply transparency and background effects to pop-up menus, using the new popups block in window or layer rules.
// Blur the background behind pop-up menus in Loupe.
window-rule {
    match app-id="Loupe"
    popups {
        // Matches the default libadwaita pop-up corner radius.
        geometry-corner-radius 15
        // Note: it’ll look better to set background opacity
        // through your GTK theme CSS and not here.
        // This is just an example that makes it look obvious.
        opacity 0.5
        background-effect {
            blur true
        }
    }
}
Keep in mind that pop-up rules tend to bump even more into problems with application behavior and surface shapes. For example, web apps or Electron don’t use Wayland pop-ups at all; they’re entirely emulated inside the client—niri cannot do anything with them.
Shape-wise, in GTK 4, pop-ups with has-arrow=true won’t look right because they aren’t rounded rectangles. Thankfully, clients implementing ext-background-effect can shape their blur in any sort of elaborate pattern.
Well, enough about blur, we’ve got more interesting things to cover!
Credit: Houl Floof
Optional includes
Pretty much right after I added config includes last release (before I merged them even), people started requesting optional includes—that can be absent without failing config loading. Some use-cases are being able to change parts of an immutable niri config on NixOS, or having local/private overrides for parts of the config.
I pushed back for a time because I think some of those problems should be solved elsewhere, rather than requiring every program with includes to support optional. However, the added code complexity was rather low, so I eventually went ahead and accepted @johnrichardrinehart’s implementation.
Starting from this release, you can make an include optional by setting optional=true:
// Won't fail if this file doesn't exist.
include optional=true "optional-config.kdl"

// Regular include, will fail if the file doesn't exist.
include "required-config.kdl"
When an optional include file is missing, niri will emit a warning in the logs on every config reload. This reminds you that the file is missing while still loading the config successfully.
The optional file is still watched for changes, so if you create it later, the config will automatically reload and apply the new settings. Finally, optional=true only affects whether a missing file causes an error, so if the file exists but contains invalid syntax or other errors, those errors will still cause a parsing failure.
While we’re talking about includes: they now expand paths starting with ~ to the home directory, so ~/file.kdl will expand to /home/user/file.kdl. Thanks to @HigherOrderLogic and @BennyDeeDev for prototype implementations of this feature.
Pointer warping while scrolling
Last release, I made dragging windows horizontally by their titlebars scroll the view left and right. This made mouse-only navigation much more convenient, but I still felt that something was missing.
This release makes the pointer warp from one side of the screen to the other during view scrolling gestures, similarly to Blender. It makes scrolling through several windows natural and convenient, even when you start right next to the monitor edge.
Screencasting features
Earlier in the release cycle, I spent some time improving various aspects of our screencasting support. In niri, you can screencast through xdg-desktop-portal-gnome via PipeWire (the recommended approach), or through wlr-screencopy (mainly intended for tools such as wf-recorder). Both of these have seen improvements.
April 19, 2026
3 minute read
I was chatting to a friend about my folding bike and I had the urge to write about it - because this falls in the category of “Things I wish I’d had decades ago”. And maybe I can encourage some others to try these wonderful devices.
Note: I’m 3 months into a new job so blogging has taken a back seat to drinking from a firehose of new domain knowledge, new people, new tech. I’m still playing with AI-assisted coding, but at a slower pace - I do hope to blog more about this when things calm down.
12 years ago I started cycling in London, commuting by train, and I used the bicycle hire scheme mis-named at the time “Boris Bikes”. It was OK but a bit of a hassle - bikes were heavy, payment was fiddly, and often the hire racks would be empty in the morning and full in the evening.
So I followed the advice of other commuters and got this beautiful device - it cost £1000 at the time, a fair bit of money, but on a Ride to Work scheme I could pay this weekly over a year, so it was £4 a week, pre-tax, which made it quite affordable.
It’s a Brompton - and they are a marvellous brand, but I don’t want to just say “Get a Brompton” as I’m sure other brands must be competing in this space - and Bromptons are pricey. So do your own research.
I also (after a couple of annoying flats) got puncture-proof Schwalbe Marathon Plus tyres - and I haven’t had a single puncture since.
And like I said at the start - I so wish I’d had something like this years and years ago. So many years of commuting in Melbourne where I’d walk slowly to a station, or drive to a station and have to cram into busy parking. So many years where my bike would languish in a shed, probably with flat tyres because I only got it out on specific “exercise” attempts.
The folding bike:
Lives in my study. I have a nicer bike in the shed but almost never get it out because the bike in my study is so convenient.
Can be carried in one hand - it’s heavy, about 12kg plus bags, but that’s ok for short distances.
Can go on the train - this is the biggest benefit, commuting is so much easier when you can go cycle -> train -> cycle. Most trains, even ones with “no bikes” rules, allow them - they aren’t any bigger than a large suitcase.
Never gets punctures
Can go in the boot of the car easily - when I get the car serviced, I drive to the garage, then cycle home, and cycle back to the garage at the end of the day.
Can be carried in to the office or cafes or shops - no locking it on the street; a big benefit in London where bike thieves are everywhere and tend to carry bolt cutters or angle grinders!
I do have a lock - a folding ‘silver’ grade Abus Bordo lock that mounts on the bike. But I only really use it in my home town where thieves are much rarer, or on the very rare case where I want to go in a cafe and there isn’t room for the bike - but only if I can sit with the bike in eyeshot!
I get it serviced every year or two. And after 11 years, it’s had nothing major go wrong - a few cable replacements and the like, but it still has the original frame, wheels, and gears. That’s pretty impressive for 11 years of commuting, though post-Covid I only tend to commute one day a week.
For a lot of people this should be fairly simple economics. Our station parking is £10 a day - current Brompton prices start at £1400 - so even ignoring pre-tax schemes and savings in other transport like the underground, a Brompton would pay for itself in 140 working days, or 28 weeks for the poor folks still commuting every day.
Plus I just love the freedom of cycling, and the exercise!
#protip If cycling in one of the supported areas the free Cycle Streets app is marvellous. It uses OpenStreetMap data so users can update it when roads change, and lets you choose quiet vs fast routes. People ask me if cycling in London is safe - it’s fine if you use an app like this to avoid the worst roads, and ride sensibly with a bit of care about passing trucks or buses, and (gasp) actually obey traffic signals.