10 interesting stories served every morning and every evening.
I am a great lover of brutalist architecture. 1960s concrete buildings may not be for everyone, but I love the aesthetic. I’ve made a laptop stand to help me hack in true brutalist style. It has the characteristic béton brut
(raw concrete) surface texture, and is quite possibly the heaviest laptop stand in the world. It also boasts 2 × 2.1 amp USB charge ports, a three-pin plug socket for my laptop, and an integral plant pot. Here are some of its highlights.
Rusted rebar and exposed wire add to the theme of urbex and decay
It was a slow process, but here are some action shots of making the laptop stand:
There were two main pours of concrete, to do the base and the side walls. It intentionally wasn’t mixed very thoroughly, to produce areas on the surface where there was more sand or more cement. Sanding the sides has also exposed the gravel in the concrete. This helps to make it look aged and weathered.
On smaller pieces such as little plant pots or coasters, it is possible to use quick drying cement and get the bubbles out by vibrating the form with an electric toothbrush after the pour. For very large pieces such as a dining table, you need to use slow drying cement, and walk around the tabletop for ages, tapping the form with a rubber mallet to remove any air bubbles. For a medium-sized piece like this, a vibrating dildo is actually the best thing to use. Just think of it like any other power tool.
The plant pot is made of a ghee tin. Four bolts were drilled through it and covered in concrete during the first pour to fix it in place. The inner pot is a grey plastic plant pot which fits perfectly in the ghee tin. I’ve chosen a string of pearls plant, because I liked the effect of a running plant hanging over the edge. It reminds me of the derelict buildings I’ve seen during urban exploration.
The exposed wire really adds a sense of dilapidation and urban decay. This isn’t actually the live power cable, but it has been made to look like one. The real cable disappears into the concrete on the right hand side of the laptop stand, and the damaged fake cable comes out of the other side of the wall. The real power lead is strapped to the rebar cage with cable ties, but the overall effect is that it looks like the live cable is badly damaged.
The wire had to be wrapped in kitchen paper and sprayed with ammonia and water, to produce the appropriate corrosion effect. Attempts to lower it into a little pot filled with liquid didn’t really work - the copper compounds turned the liquid blue, but it wasn’t forming a patina on the wire.
Here’s what seems to be happening here:
$$ \ce{Cu^2+ + 2NH3 + 2H2O -> Cu(OH)2 + 2NH4+} $$
The exposed rebar was first polished with a wire brush attachment on a Dremel tool, to remove the concrete and expose the metal, then it was rusted with water, salt, and hydrogen peroxide.
The pen pot was similarly rusted with salt water and peroxide, after being scuffed up with some sandpaper. It has also had some moss added: acrylic paint cut with sand, dabbed on to produce a realistic texture. Dab, don’t wipe.
I’m delighted with my laptop stand, even if the aesthetic isn’t to everyone’s taste. The themes of brutalist architecture, urban decay, and dilapidation have worked out really nicely, especially with the deliberate hole and the rusted metal. It now has pride of place on my desk, which it had to be carried to on a trolley because of its sheer weight. But nothing worthwhile comes easy.
...
Read the original on sam-burns.com »
Today we’re announcing Project Glasswing1, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos2 Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.

Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.

As part of Project Glasswing, the launch partners listed above will use Mythos Preview as part of their defensive security work; Anthropic will share what we learn so the whole industry can benefit. We have also extended access to a group of over 40 additional organizations that build or maintain critical software infrastructure so they can use the model to scan and secure both first-party and open-source systems. Anthropic is committing up to $100M in usage credits for Mythos Preview across these efforts, as well as $4M in direct donations to open-source security organizations.

Project Glasswing is a starting point.
No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play. The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now.

Cybersecurity in the age of AI

The software that all of us rely on every day—responsible for running banking systems, storing medical records, linking up logistics networks, keeping power grids functioning, and much more—has always contained bugs. Many are minor, but some are serious security flaws that, if discovered, could allow cyberattackers to hijack systems, disrupt operations, or steal data.

We have already seen the serious consequences of cyberattacks for important corporate networks, healthcare systems, energy infrastructure, transport hubs, and the information security of government agencies across the world. On the global stage, state-sponsored attacks from actors like China, Iran, North Korea, and Russia have threatened to compromise the infrastructure that underpins both civilian life and military readiness. Even smaller-scale attacks, such as those where individual hospitals or schools are targeted, can still inflict substantial economic damage, expose sensitive data, and even put lives at risk. The current global financial costs of cybercrime are challenging to estimate, but might be around $500B every year.

Many flaws in software go unnoticed for years because finding and exploiting them has required expertise held by only a few skilled security experts. With the latest frontier AI models, the cost, effort, and level of expertise required to find and exploit software vulnerabilities have all dropped dramatically.
Over the past year, AI models have become increasingly effective at reading and reasoning about code—in particular, they show a striking ability to spot vulnerabilities and work out ways to exploit them. Claude Mythos Preview demonstrates a leap in these cyber skills—the vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests, and the exploits it develops are increasingly sophisticated.

Ten years after the first DARPA Cyber Grand Challenge, frontier AI models are now becoming competitive with the best humans at finding and exploiting vulnerabilities. Without the necessary safeguards, these powerful cyber capabilities could be used to exploit the many existing flaws in the world’s most important software. This could make cyberattacks of all kinds much more frequent and destructive, and empower adversaries of the United States and its allies. Addressing these issues is therefore an important security priority for democratic states.

Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs. Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.
Over the past few weeks, we have used Claude Mythos Preview to identify thousands of zero-day vulnerabilities (that is, flaws that were previously unknown to the software’s developers), many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software.

In a post on our Frontier Red Team blog, we provide technical details for a subset of these vulnerabilities that have already been patched and, in some cases, the ways that Mythos Preview found to exploit them. It was able to identify nearly all of these vulnerabilities—and develop many related exploits—entirely autonomously, without any human steering. The following are three examples:

Mythos Preview found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world and is used to run firewalls and other critical infrastructure. The vulnerability allowed an attacker to remotely crash any machine running the operating system just by connecting to it;

It also discovered a 16-year-old vulnerability in FFmpeg—which is used by innumerable pieces of software to encode and decode video—in a line of code that automated testing tools had hit five million times without ever catching the problem;

The model autonomously found and chained together several vulnerabilities in the Linux kernel—the software that runs most of the world’s servers—to allow an attacker to escalate from ordinary user access to complete control of the machine.

We have reported the above vulnerabilities to the maintainers of the relevant software, and they have all now been patched.
For many other vulnerabilities, we are providing a cryptographic hash of the details today (see the Red Team blog), and we will reveal the specifics after a fix is in place.

Evaluation benchmarks such as CyberGym reinforce the substantial difference between Mythos Preview and our next-best model, Claude Opus 4.6.

In addition to our own work, many of our partners have already been using Claude Mythos Preview for several weeks. This is what they’ve found:

“AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back. Our foundational work with these models has shown we can identify and fix security vulnerabilities across hardware and software at a pace and scale previously impossible. That is a profound shift, and a clear signal that the old ways of hardening systems are no longer sufficient.
Providers of technology must aggressively adopt new approaches now, and customers need to be ready to deploy. That is why Cisco joined Project Glasswing—this work is too important and too urgent to do alone.”

“At AWS, we build defenses before threats emerge, from our custom silicon up through the technology stack. Security isn’t a phase for us; it’s continuous and embedded in everything we do. Our teams analyze over 400 trillion network flows every day for threats, and AI is central to our ability to defend at scale.
We’ve been testing Claude Mythos Preview in our own security operations, applying it to critical codebases, where it’s already helping us strengthen our code. We’re bringing deep security expertise to our partnership with Anthropic and are helping to harden Claude Mythos Preview so even more organizations can advance their most ambitious work with security that sets the standard.”

“As we enter a phase where cybersecurity is no longer bound by purely human capacity, the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented. Joining Project Glasswing, with access to Claude Mythos Preview, allows us to identify and mitigate risk early and augment our security and development solutions so we can better protect customers and Microsoft.
When tested against CTI-REALM, our open-source security benchmark, Claude Mythos Preview showed substantial improvements compared to previous models. We look forward to partnering with Anthropic and the broader industry to improve security outcomes for all.”

“The window between a vulnerability being discovered and being exploited by an adversary has collapsed—what once took months now happens in minutes with AI.
Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities. That is not a reason to slow down; it’s a reason to move together, faster. If you want to deploy AI, you need security. That is why CrowdStrike is part of this effort from day one.”

“In the past, security expertise has been a luxury reserved for organizations with large security teams. Open source maintainers—whose software underpins much of the world’s critical infrastructure—have historically been left to figure out security on their own. Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software.
By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation. This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.”

“Promoting the cybersecurity and resiliency of the financial system is central to JPMorganChase’s mission, and we believe the industry is strongest when leading institutions work together on shared challenges. Project Glasswing provides a unique, early stage opportunity to evaluate next-generation AI tools for defensive cybersecurity across critical infrastructure both on our own terms and alongside respected technology leaders.
We will take a rigorous, independent approach to determining how to proceed and where we can help. Anthropic’s initiative reflects the kind of forward-looking, collaborative approach that this moment demands.”

“Google is pleased to see this cross-industry cybersecurity initiative coming together and to make Mythos Preview available to participants via Vertex AI. It’s always been critical that the industry work together on emerging security issues, whether it’s post-quantum cryptography, responsible zero-day disclosure, secure open source software, or defense against AI-based attacks.
We have long believed that AI poses new challenges and opens new opportunities in cyber defense, which is why we’ve built AI-powered tools—such as Big Sleep and CodeMender—to find and fix critical software flaws. We will continue investing in our leading cybersecurity platform and a culture focused on protecting users, customers, the ecosystem, and national security.”

“Over the past few weeks, we’ve had access to the Claude Mythos Preview model, using it to identify complex vulnerabilities that prior-generation models missed entirely. This is not only a game changer for finding previously hidden vulnerabilities, but it also signals a dangerous shift where attackers can soon find even more zero-day vulnerabilities and develop exploits faster than ever before.
It’s clear that these models need to be in the hands of open source owners and defenders everywhere to find and fix these vulnerabilities before attackers get access. Perhaps even more important: everyone needs to prepare for AI-assisted attackers. There will be more attacks, faster attacks, and more sophisticated attacks. Now is the time to modernize cybersecurity stacks everywhere. We commend Anthropic for partnering with the industry to ensure these powerful capabilities prioritize defense first.”

The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills. For example, as shown in the evaluation results below, the model has the highest scores of any model yet developed on a variety of software coding tasks.

More information on the model’s capabilities, its safety properties, and its general characteristics can be found in the Claude Mythos Preview system card.

We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale—for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring. To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model’s most dangerous outputs. We plan to launch new safeguards with an upcoming Claude Opus model, allowing us to improve and refine them with a model that does not pose the same level of risk as Mythos Preview3.

Today’s announcement is the beginning of a longer-term effort. To be successful, it will require broad involvement from across the technology industry and beyond.

Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities or weaknesses in their foundational systems—systems that represent a very large portion of the world’s shared cyberattack surface.
We anticipate this work will focus on tasks like local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing of systems.

Anthropic’s commitment of $100M in model usage credits to Project Glasswing and additional participants will cover substantial usage throughout this research preview. Afterward, Claude Mythos Preview will be available to participants at $25/$125 per million input/output tokens (participants can access the model on the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry).

In addition to our commitment of model usage credits, we’ve donated $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to enable the maintainers of open-source software to respond to this changing landscape (maintainers interested in access can apply through the Claude for Open Source program).

We intend for this work to grow in scope and continue for many months, and we’ll share as much as we can so that other organizations can apply the lessons to their own security. Partners will, to the extent they’re able, share information and best practices with each other; within 90 days, Anthropic will report publicly on what we’ve learned, as well as the vulnerabilities fixed and improvements made that can be disclosed. We will also collaborate with leading security organizations to produce a set of practical recommendations for how security practices should evolve in the AI era.

Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities. As we noted above, securing critical infrastructure is a top national security priority for democratic countries—the emergence of these cyber capabilities is another reason why the US and its allies must maintain a decisive lead in AI technology.
Governments have an essential role to play in helping maintain that lead, and in both assessing and mitigating the national security risks associated with AI models. We are ready to work with local, state, and federal representatives to assist in these tasks.

We are hopeful that Project Glasswing can seed a larger effort across industry and the public sector, with all parties helping to address the biggest questions around the impact of powerful models on security. We invite other AI industry members to join us in helping to set the standards for the industry. In the medium term, an independent, third-party body—one that can bring together private- and public-sector organizations—might be the ideal home for continued work on these large-scale cybersecurity projects.
1. The project is named for the glasswing butterfly, Greta oto. The metaphor can be applied in two ways: the butterfly’s transparent wings let it hide in plain sight, much like the vulnerabilities discussed in this post; they also allow it to evade harm—like the transparency we’re advocating for in our approach.

2. From the Ancient Greek for “utterance” or “narrative”: the system of stories through which civilizations made sense of the world.

3. Security professionals whose legitimate work is affected by these safeguards will be able to apply to an upcoming Cyber Verification Program.
...
Read the original on www.anthropic.com »
TL;DR: my motivation and experience moving my blog from Cloudflare to bunny.net.
I’ve been a long-time Cloudflare user. They offer a solid service that is free for the vast majority of their users, which is very generous. Their infrastructure is massive and their feature set is undeniably incredible.
One of my biggest concerns, though, is how easily I could become heavily dependent on this one single company, which could then decide to cut me off and disable all of my websites for any arbitrary reason. It’s a single point of failure for the internet. Every Cloudflare outage ends up in the news. And the idea of centralizing the internet into a single US corporation feels off to me. Not to mention the various scandals that have surrounded them. So I was open to alternatives.
Bunny.net (affiliate link because why not, raw link here) is a Slovenian (EU) company that is building up a lot of momentum. Their CDN-related services rival Cloudflare already, and although their PoP network is smaller than Cloudflare’s, they score highly on performance and speed across the globe. It’s a genuinely competitive alternative to Cloudflare.
It has the additional benefit of being a European company, and I like the idea of growing and supporting the European tech scene.
What I was moving away from
I’ve been using various different services, but focusing on this blog, the first thing was Cloudflare as the registrar for the domain name. I did some research on alternative registrars, but I just didn’t find any good European options. The closest I found was INWX, but their lack of free WHOIS Privacy made them a non-option. I ended up with Porkbun. They run on Cloudflare infrastructure, but they have better support. So the remaining thing Cloudflare was doing for me was the “Orange Cloud”: automatic caching, origin hiding, and optional protection features.
So that’s what we’re moving over! I’m gonna walk you through how to set up the bunny.net CDN for your website, with some sensible defaults.
Setting up your bunny.net account is quick and you get $20 worth of free credits to play around with; those are valid for 14 days. You don’t need to give them a credit card up front to try things out, but if you do, you get another $30 worth of credits. You do need to confirm your email though before you can start setting things up. Once you’re out of the trial, you pay per use, which for most cases is cents a month. However, note that bunny.net requires a minimum payment of $1 per month.
I guess a cheap price to pay to stop being the product and start becoming the customer.
The pull zone is the main mechanism for enabling the CDN for your website. You’ll find them under CDN in the left navigation bar. Here’s how to set one up:
Fill in the pull zone name. Just make it something meaningful to you, for example the website name.
Fill in your Origin URL. This would be the address for directly accessing your server. In my case, it’s the public IP of my server.
If you’re running multiple apps on your server, for example using Dokploy, Coolify, or similar self-hosted PaaS tools, you’ll want to pass the Host header as well. Here you put in the domain of your app. In my case, that’s jola.dev.
Finally you can select your pricing zones. Note that some zones are more expensive, so you can choose to disable them. This just means that people in those areas will get redirected to the closest zone you do have enabled.
And you’re done with the first part!
Now that you’ve set up the pull zone, it’s time to hook it up to your website and domain. Go to the pull zone you created. You’ll see a “hostnames” screen. Time to connect things.
Under “Add a custom hostname” fill in your website domain name.
You’ll get a modal with some instructions. You need to follow them to set up the DNS name to point your website to go through the CDN.
Go to where you manage your domain’s DNS and add a CNAME record pointing your domain to the CNAME value given in the modal, something like website.b-cdn.net.
Once you’ve done that, wait a few minutes to let it propagate, and then click “Verify & Activate SSL”.
If it says success, you’re done. Your website is now running through the bunny.net CDN, similar to the Cloudflare orange cloud.
This is the part where bunny.net will really shine through!
If your website is set up to return the appropriate cache headers for each resource, things will just work. Bunny defaults to respecting the cache control headers when pointing a pull zone at an origin site. To verify, go to Caching → General and check that “Respect origin Cache-Control” is set under “Cache expiration time”. Note that if you set no-cache, bunny will use that and will not cache at the edge.
Alternatively, if you don’t have cache headers set up, and you don’t want to control that yourself, you can instead enable Smart Cache. This will default to caching typically cached resources like images, CSS, JS files etc, while avoiding caching things like HTML pages. This will work for most cases!
But I wanted to go faster. If you’ve read my post about building this website, here’s how I’ve set up my cache headers: I added a new pipeline in the router called public and added an extra middleware to it. I technically have everything using this pipeline, but leaving the standard browser pipeline that comes out of the box with Phoenix keeps my options open to add authenticated (uncached) pages in the future.
pipeline :public do
  plug :accepts, ["html"]
  plug :put_root_layout, html: {JolaDevWeb.Layouts, :root}
  plug :put_secure_browser_headers, @secure_headers
  plug :put_cdn_cache_header
end

defp put_cdn_cache_header(conn, _opts) do
  put_resp_header(conn, "cache-control", "public, s-maxage=86400, max-age=0")
end
You can see the whole router here: https://github.com/joladev/jola.dev/blob/main/lib/jola_dev_web/router.ex
This setup means I even cache the HTML pages, which makes this ridiculously fast. Here’s the landing page response time from various locations, using the Larm response time checker tool:
Because I’m caching the HTML pages, if I publish a new post I do need to purge the pull zone to reset the cached HTML files.
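Purging can also be scripted. The sketch below builds the purge request with Python’s standard library; the endpoint shape and the AccessKey header are my understanding of bunny.net’s management API, so verify against their current API docs before relying on it:

```python
# Sketch: purging a pull zone's cache via the bunny.net management API
# after publishing a new post. The endpoint path and the AccessKey
# header are assumptions based on my reading of the API docs -- check
# them against the current documentation before use.
import urllib.request


def build_purge_request(pull_zone_id: int, api_key: str) -> urllib.request.Request:
    """Build (but do not send) the purge request for a whole pull zone."""
    return urllib.request.Request(
        f"https://api.bunny.net/pullzone/{pull_zone_id}/purgeCache",
        method="POST",
        headers={"AccessKey": api_key},
    )


# To actually purge, send it with a real pull zone id and API key:
#   urllib.request.urlopen(build_purge_request(12345, "your-api-key"))
```

Hooking something like this into a deploy step means a new post goes live at the edge immediately, instead of waiting out the 24-hour TTL.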
All of these are optional, but nice to have!
On your pull zone page, under General → Hostnames, go toggle “Force SSL” on for your domain to ensure that all requests use SSL. SSL/TLS is pretty standard these days, and many TLDs and websites use HSTS to enforce it, but no harm in enabling it here too.
DDoS protection comes out of the box, but we can set some other things up. First of all, go to Caching and then Origin Shield in the left menu on your pull zone, and activate Origin Shield. Select the location closest to your origin. This reduces load on your server, as bunny.net will cache everything in the Origin Shield location, and all edge locations will try that location first before hitting your server.
Next, go to Caching → General and scroll down. At the bottom of the page you can select Stale Cache: While Origin Offline and While Updating. This means bunny will keep serving cached content, even if it is stale, when it can’t reach your origin, and that it will serve stale content while fetching the latest version. Both are nice-to-haves, nothing you have to enable, but they provide a slightly better service to your users!
Next, let’s set up an Edge rule to redirect any requests to our automatically generated pull zone domain to our actual domain, to avoid confusing crawlers. On your pull zone, in the left menu, click Edge rules.
For URL, input your URL plus the path variable. E.g. for me it’s https://jola.dev{{path}}
For conditions, pick Match any and Request URL Match any.
Input *://yourpullzonename.b-cdn.net/*, replacing yourpullzonename with the name given to your pull zone.
Now you should be able to go to https://slug.b-cdn.net for your pull zone and get redirected to your proper domain!
This post just covers the very basics of getting set up on bunny.net. I haven’t even scratched the surface of edge rules, cache configuration, the Shield features for security and firewalls, video hosting and streaming, edge scripting and edge distributed containers, and much more.
I especially appreciate the great statistics, logs, and metrics you get out of the dashboard. You can even see every single request coming through to help you investigate issues, and get clear feedback on what’s getting cached and what isn’t. I’m actively moving everything else over and I’m excited for the upcoming S3-compatible storage!
...
Read the original on jola.dev »
The Apollo Guidance Computer (AGC) is one of the most scrutinised codebases in history. Thousands of developers have read it. Academics have published papers on its reliability. Emulators run it instruction by instruction. We found a bug in it that had been missed for fifty-seven years: a resource lock in the gyro control code that leaks on an error path, silently disabling the guidance platform’s ability to realign.
We used Claude and Allium, our open-source behavioural specification language, to distil 130,000 lines of AGC assembly into 12,500 lines of specs. The specs were derived from the code itself, and the process signposted us directly to the defect.
The source code has been publicly available since 2003, when Ron Burkey and a team of volunteers began painstakingly transcribing it by hand from printed listings at the MIT Instrumentation Laboratory. In 2016, former NASA intern Chris Garry’s GitHub repository went viral, topping the trending page. Thousands of developers scrolled through the assembly language of a machine with 2K of erasable RAM and a 1MHz clock.
The AGC’s programs were stored in 74KB of core rope: copper wire threaded by hand through tiny magnetic cores in a factory (a wire passing through a core was a 1; a wire bypassing it was a 0). The women who wove it were known internally as the “Little Old Ladies”, and the memory itself was called LOL memory. The program was physically woven into the hardware. Ken Shirriff has analysed it down to individual gates, and the Virtual AGC project runs the software in emulation, having confirmed the recovered source byte-for-byte against the original core rope dumps.
As far as we can determine, no formal verification, model checking or static analysis has been published against the flight code. The scrutiny has been deep, but it has been a particular kind of scrutiny: reading the code, emulating the code, verifying the transcription.
We took a different approach. We used Allium to distil a behavioural specification from the Inertial Measurement Unit (IMU) subsystem, the gyroscope-based platform that tells the spacecraft which way it is pointing. The specification models the lifecycle of every shared resource: when it is acquired, when it must be released, and on which paths.
It surfaced a flaw that reading and emulation had missed.
The AGC manages the IMU through a shared resource lock called LGYRO. When the computer needs to torque the gyroscopes (to correct platform drift or perform a star alignment), it acquires LGYRO at the start and releases it when all three axes have been torqued. The lock prevents two routines from fighting over the gyro hardware at the same time.
The lock is acquired on the way in and released on the way out. But there is a third possibility, and it doesn’t release the lock.
‘Caging’ is an emergency measure: a physical clamp that locks the IMU’s gimbals in place to protect the gyroscopes from damage. The crew could trigger it with a guarded switch in the cockpit.
When the torque completes normally, the routine exits via STRTGYR2 and the LGYRO lock is cleared. When the IMU is caged while a torque is in progress, the code exits via a routine called BADEND, which does not clear the lock. Two instructions are missing:
    CAF ZERO        # load the constant zero into the accumulator
    TS  LGYRO       # store it into LGYRO, clearing the lock
Once LGYRO is stuck, every subsequent attempt to torque the gyros finds the lock held, sleeps waiting for a wake signal that will never come, and hangs. Fine alignment, drift compensation, manual gyro torque: all blocked.
On 21 July 1969, while Neil Armstrong and Buzz Aldrin walked on the lunar surface, Michael Collins orbited alone in the Command Module Columbia. Every two hours he disappeared behind the Moon, out of radio contact with Earth. “I am alone now, truly alone, and absolutely isolated from any known life. I am it,” he wrote in Carrying the Fire. “If a count were taken, the score would be three billion plus two over on the other side of the moon, and one plus God knows what on this side.”
During each pass he ran Program 52, a star-sighting alignment that kept the guidance platform pointed in the right direction. If the platform drifted, the engine burn to bring him home would point the wrong way.
Here’s how the bug might have manifested.
Collins has just finished his star sightings at the optics station in the lower equipment bay and keyed in the final commands. The computer is torquing the gyroscopes to apply the correction across all three axes.
He moves back toward the main panel in a cramped cockpit, past a cage switch protected by a flip-up cover. An elbow catches the cover and nudges the switch. The code handles this gracefully: a routine called CAGETEST detects the cage, abandons the torque and exits. The P52 fails, and he understands why: the cage interrupted the correction. He uncages the IMU and heads back to the optics station to realign.
He starts a new P52. The program hangs.
No alarm, no program light. The DSKY (display and keyboard, his only interface to the computer) accepts the input and does nothing. He tries V41, the manual gyro torque verb. Same result. Everything else on the computer works. Only gyro operations are dead.
The first failure looked normal: a cage event during alignment, with a known recovery. The second gives no clue what is wrong. The trained response to an accidental cage is to uncage and realign. Collins had been trained to restart the computer, but nothing about this failure would suggest he needed to. Commands were accepted, everything else worked. It would look like faulty hardware, not a stuck lock.
“My secret terror for the last six months has been leaving them on the Moon and returning to Earth alone”, Collins later wrote of the rendezvous. A dead gyro system behind the Moon, with Armstrong and Aldrin on the surface waiting for a rendezvous burn that depends on a platform he can no longer align, is exactly that scenario.
A hard reset would have cleared it. But the 1202 alarms during the lunar descent had been stressful enough with Mission Control on the line and Steve Bales making a snap abort-or-continue call.
Behind the Moon, alone, with a computer that was accepting commands and doing nothing, Collins would have had to make that call by himself.
Margaret Hamilton (as “rope mother” for LUMINARY) approved the final flight programs before they were woven into core rope memory. Her team at the MIT Instrumentation Laboratory pioneered concepts we now take for granted: priority scheduling, asynchronous multitasking, restart protection and software-based error recovery. Even the term ‘software engineering’ is hers.
Their priority scheduling saved the Apollo 11 landing when the 1202 alarms fired during descent, shedding low-priority tasks under load exactly as designed. Most modern systems don’t handle overload that gracefully.
The most serious bugs that did surface were specification errors, not coding mistakes. Don Eyles, who wrote the lunar landing guidance code, documented several. For example, the ICD for the rendezvous radar specified that two 800 Hz power supplies would be frequency-locked but said nothing about phase synchronisation. The resulting phase drift made the antenna appear to dither, generating roughly 6,400 spurious interrupts per second per angle and consuming roughly 13% of the computer’s capacity during Apollo 11’s descent. This was the underlying cause of the 1202 alarms.
This defect has the same shape. BADEND is a general-purpose termination routine shared by all IMU mode-switching operations. It clears MODECADR (the stall register), wakes sleeping jobs, and exits. But LGYRO is a gyro-specific lock, acquired only by the pulse-torquing code and released only by the normal completion path in STRTGYR2. When the error path routes through BADEND, it handles the general resources correctly, but not the gyro-specific lock.
The AGC was written so defensively that latent faults like this would be silently corrected by the restart logic, which clears LGYRO as a side effect of full erasable-memory initialisation. Any test that happened to trigger a restart after the bug would see the system recover seamlessly.
The defensive coding hid the problem, but it didn’t eliminate it. A cage event without a subsequent restart would still leave the gyros locked. Collins would have no way to realign the guidance platform and no diagnostic clue pointing to the fix.
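The shape of the defect translates directly into modern code. Here is a minimal Python sketch of the same pattern; the names mirror the AGC routines, but the code is illustrative, not a transcription of the assembly:

```python
import threading

# Two shared resources, echoing MODECADR and LGYRO.
mode_cadr = threading.Lock()  # general IMU mode-switching resource
lgyro = threading.Lock()      # gyro-specific torquing lock

def start_gyro_torque(caged: bool) -> None:
    lgyro.acquire()           # acquired only by the torquing code
    mode_cadr.acquire()
    if caged:
        bad_end()             # cage event interrupts the torque
        return
    # ... dispatch torquing pulses on all three axes ...
    strtgyr2()

def bad_end() -> None:
    # General-purpose termination: handles the general resource...
    mode_cadr.release()
    # ...but never releases lgyro. The next torque attempt blocks forever.

def strtgyr2() -> None:
    # Normal completion: the only path that clears the gyro lock.
    mode_cadr.release()
    lgyro.release()
```

After one caged torque, lgyro stays held, and every later call to start_gyro_torque blocks on the acquire, just as the AGC jobs slept forever waiting for LGYRO to clear.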
We found this defect by distilling a behavioural specification of the IMU subsystem using Allium, an AI-native behavioural specification language. The specification models each shared resource as an entity with a lifecycle: acquired, held, released.
The IMU entity declares a gyros_busy field modelling LGYRO. Two rules govern it:
rule GyroTorque {
  -- Sends gyro torquing pulse commands. Reserves the gyros,
  -- enables power supply, and dispatches pulses per axis.
  when: GyroTorque(command: GyroTorqueCommand)
  requires:
    imu.mode != caged
    imu.gyros_busy = false
  ensures:
    imu.gyros_busy = true
    GyroTorqueStarted()
}

rule GyroTorqueBusy {
  -- Gyros already reserved by another torquing operation.
  -- Caller sleeps until LGYRO is cleared.
  when: GyroTorque(command: GyroTorqueCommand)
  requires: imu.gyros_busy = true
  ensures:
    JobSleep(job: calling_job())
}
GyroTorque requires gyros_busy = false and ensures gyros_busy = true: the lock is acquired. Somewhere, on every path that follows, the lock must be released. The spec doesn’t show where in the code the release happens, but it makes the obligation explicit: if gyros_busy goes to true, something must set it back to false.
With that obligation written down, Claude traced every path that runs after gyros_busy is set to true. The normal completion path (STRTGYR2) clears it. The cage-interrupted path (BADEND) does not. MODECADR, the other shared resource, is correctly cleared in BADEND: LGYRO is missing.
The specification forces this question on every path through the IMU mode-switching code. A reviewer examining BADEND sees correct, complete cleanup for every resource BADEND was designed to handle, and nothing looks wrong. The specification approaches from the other direction: it starts from LGYRO and asks whether any path fails to clear it.
Tests verify the code as written; a behavioural specification asks what the code is for.
A specification distilled by Allium models resource lifecycles across all paths, including the ones nobody thought to test. You can view the Allium specifications and reproduction of the bug on GitHub.
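The audit behind that claim can be illustrated mechanically. This toy Python path check is not Allium's actual tooling, just a sketch of the obligation: every path that sets a resource to held must eventually set it back. The path names mirror the AGC flows but are illustrative:

```python
# Each path is the sequence of resource-state changes along one
# route through the code.
paths = {
    "normal completion (STRTGYR2)": [
        ("gyros_busy", True),     # GyroTorque acquires LGYRO
        ("modecadr", True),
        ("modecadr", False),
        ("gyros_busy", False),    # STRTGYR2 releases it
    ],
    "cage-interrupted (BADEND)": [
        ("gyros_busy", True),
        ("modecadr", True),
        ("modecadr", False),      # BADEND clears MODECADR only
    ],
}

def leaked(path):
    """Return the resources still held when the path ends."""
    state = {}
    for resource, held in path:
        state[resource] = held
    return sorted(r for r, held in state.items() if held)

for name, path in paths.items():
    print(name, "->", leaked(path) or "clean")
```

The caged path reports gyros_busy as leaked; the normal path comes back clean. A behavioural specification performs this audit across every path the code can take.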
Hamilton’s team released resources by loading the constant zero into the accumulator (CAF ZERO) and storing it into the lock register (TS LGYRO). Every release placed manually, by a programmer who remembered every path that could reach that point.
Modern languages have tried to make lock leaks structurally impossible: Go has defer, Java has try-with-resources, Python has with, and Rust's lock guards release automatically when they go out of scope.
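In Python, for instance, the same torque flow written with a context manager cannot leak the lock, because release is tied to block exit on every path. A sketch, not AGC code; the names are illustrative:

```python
import threading

lgyro = threading.Lock()

class CagedError(Exception):
    """Raised when a cage event interrupts the torque (illustrative)."""

def torque_gyros(caged: bool) -> None:
    # `with` guarantees release on every exit: normal completion,
    # early return, or an exception raised mid-torque.
    with lgyro:
        if caged:
            raise CagedError("torque abandoned by cage event")
        # ... torque all three axes ...

try:
    torque_gyros(caged=True)
except CagedError:
    pass

assert not lgyro.locked()  # released even on the error path
```

The release is written once, next to the acquire, instead of once per exit path.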
Nevertheless, lock leaks persist. MITRE classifies the pattern as CWE-772: “Missing Release of Resource after Effective Lifetime”, and rates its likelihood of exploitation as high. Not all resources are managed by a language runtime. Database connections, distributed locks, file handles in shell scripts, infrastructure that must be torn down in the right order: these are still often the programmer’s responsibility. Anywhere the programmer is responsible for writing the cleanup, the same bug is waiting.
Every Apollo crew came home safely. But the IMU mode-switching routines were carried forward across missions in both the Command Module software (COMANCHE) and the Lunar Module software (LUMINARY). The fault was never noticed and never fixed.
A fifty-seven-year-old bug hid in flight-proven assembly. What’s hiding in yours? Let’s talk.
Thanks to Farzad “Fuzz” Pezeshkpour for independently reproducing the bug, and to Danny Smith and Prashant Gandhi for reviewing early drafts of this article.
...
Read the original on www.juxt.pro »
I’ve been inspired to write something for April Cools Club, and what could be further from my normal content than my experience rice farming in rural Japan!
For those who aren’t aware, in 2025 I spent January-July in Japan staying with my wife’s family. During that time we helped out on the family rice farm near Shuzenji in Shizuoka Prefecture. I unfortunately had to leave before the full harvest was done, but I’ll take you as far as I got and try to share other insights I gleaned.
Unfortunately, while I thought I took a lot of photos, it seems I’m missing some things I would have liked to capture for this. Where applicable there’ll be other sources and at least one video linked for more information.
The farm is primarily a rice farm; there are no animals (ignoring the koi fish in the garden pond). There is a portion of a bamboo forest and space to plant non-rice crops, so we also grow or harvest for consumption:
Warabi - an edible bracken. Not so much planted; it just grows everywhere
Whatever other vegetables they decide to plant
And looking out from the driveway this is the view at the end of winter before everything starts growing and spring properly kicks in:
Obviously, my wife and I aren’t around all the time. When we’re not, my brother-in-law and mother-in-law work on the farm part time, often 1-2 days per week each.
At the start of spring we come back to the fields. They’ve been left fallow over the winter, and the dead rice plants from last year cover them. They’re currently dry; we’ll flood them later once they’ve been prepared. Because the dead growth is hard and spiky, we use cordless strimmers with metal blades so cutting through it doesn’t take forever.
I don’t have a picture of a rice field before clearing but here is one of the fields partially cleared:
As part of clearing and getting ready we also dig out the drainage ditches along the sides of the fields. When the fields were drained last year, soft mud flowed back into the ditches and hardened, so they need to be redug.
The field will also be ploughed to break up the soil and loosen it, and we’ll remove large rocks we find. After ploughing we can level the field to flatten it. With the field level, the rice will be at equal depths and the planting process is more consistent.
This might be the first time someone’s prepared a rice field wearing a Rust London t-shirt. It’s definitely my first time driving a tractor!
But before we get that far we have to prepare the route for the water to get into the field. This work is only needed for one of our fields; the others have a fairly direct route from the river. For that one field we have to clear a few hundred metres of channel running along the edge of a bamboo forest, removing the dead bamboo and other natural detritus.
I don’t have any pictures of this, but imagine all the joys of clearing out hundreds of metres of ditches among dense vegetation in high humidity.
I do have a picture after ploughing with the drainage ditch for the field dug next to the river that will supply our water:
One last thing we might do before flooding is drive metal rods into the perimeter of the field as part of building a fence. This doesn’t have to be done for every field, just the ones that border the bamboo forest where the wild boar and deer might sneak through at night and eat the rice.
Rice fields are typically placed near rivers; before planting we have to flood and level the fields. We go down into the river and place a wooden board by a drainage pipe at the edge, redirecting water down that pipe and into a channel next to the field. We can then open a hole and let water flow from that channel into the field. When the field gets full, water drains out of its far edge, continuing on into other fields and eventually back into the river. Returning the water means farming doesn’t dry out the rivers and helps ensure the longevity of the environment.
For the field with the more onerous ditch clearing, that water flows under the field and eventually back into the river. There’s an ad-hoc construction of some old drain and bamboo to move the water across into the field:
And the water entering that field:
After flooding, a tractor with a flat rear blade is run over the field a few times to level it. When the planting vehicle goes over the field, little arms pluck off some rice and stick it down. If the soil is too far from the arm, you end up with loose rice floating on top of the water. Obviously we don’t want to waste rice like this, so levelling is an important step.
One thing to note: in a rice field the deeper soil is compacted and firm, so water shouldn’t be able to drain away into the water table and disappear. However, our field with the tricky water intake did suffer a minor sinkhole, as water was able to go down and rejoin the stream that flows under the field. This meant digging down and filling the leaking area with rocks and harder mud, compacting it with the bucket of a digger. After this work was done the field held its water and we were able to think about planting.
After poking around to figure out why water was draining I managed to get this picture of the hole that started to open up. I guess that’s a sinkhole of sorts.
An interesting fact is that rice doesn’t actually need standing water to grow. The water helps by stopping weeds growing around the rice taking resources and protects the rice from certain pests that would eat it.
For some further watching, this video
shows a more advanced but very similar process on a different farm. The main difference is that they don’t need to manually go to the inlet gates to open and close them; instead they have more modern gates controlled via mobile phone.
It’s planting day, turning up I can see the neighbours have already planted and here you can see our ready but empty field next to their freshly planted field:
But here we go, everything we’ve been working toward. The previous process has taken from mid-February and now it’s early May. We go off and buy seed trays of rice to load into the Rice Transplanter. Below you can see a picture of the planting process:
An arm moves along the bottom of the seed tray, pulls off a clump of rice and plunges it into the ground, repeating this back and forth at regular intervals. The motion of it working is reminiscent of a typewriter.
After it’s done there’s some leftover rice, and there might be gaps where things weren’t perfectly level. We go out into the field wearing
jika-tabi. These are boots with a split between the big toe and the other toes. It’s meant to help our feet not get stuck in the wet mud. Grabbing rice in small bunches we pull them from the seed tray and plant them about an inch deep into the mud and compact some mud around it.
Fun language note, my wife asked me if I saw the tabi once and I thought she meant a tabby cat. I wasn’t aware of the name of the footwear.
Now the rice is in the field, we’re at risk of attack. Wild boar and deer just love to snack on our hard work, so it’s time to put up the electric fence. This is fairly simple: drive the poles into the ground at regular intervals, then feed the wire along them, wrapping it round the clips and keeping it moderately taut. Also check for any breaks in the wire and fix them with a bit of electrical tape.
After wiring we place a box which is just a solar panel and battery on a timer next to the fence and try to hammer it into the ground or prop it up securely enough with rocks where the dirt is too shallow.
We’ll have to come back every week or so to cut the grass that sprouts up on the edge of the field. If we don’t it will ground the fence and drain the battery and our rice will fall victim to the local wildlife.
After planting our fields look like this:
When the rice gets older (around waist height) the field is drained. Some sort of narrow plough is moved between the rows, pushing the mud up around the rice to hold it up; then the water intake is closed and the field is left to drain and dry out. The rice then continues to grow until it’s harvest time.
Unfortunately, I left Japan a couple of weeks after draining and I haven’t experienced the final stage of harvest yet. I have this picture I was sent of the rice near harvest time but the final stages will have to remain shrouded in mystery for now. I’m not ready for spoilers when I may learn this in future firsthand:
A spectre has been looming over this post: the wild boar. I got an update one day that a baby boar had managed to squeeze under an unelectrified part of the fence and help itself to an all-you-can-eat buffet:
Luckily someone came to the farm both the day before and the day after it happened, and the gap was closed up before the boars started visiting nightly. But it seems important to remain vigilant about your defences. I’ve still not seen a boar in the wild, even going through the nearby forests; they’re nocturnal and rather dangerous, so I’m glad of that!
In rice fields you can see a lot of interesting wildlife. Frogs and salamanders help protect the crop by eating bugs that might feed on the rice. You also might see snakes nearby that feed on them as well.
Once, when clearing grass, I saw a snake dart out from under a pickup truck that had been parked up for a few hours as I walked past. I then looked at the grass I was about to cut and saw it hunkered down, but obscured enough that I couldn’t get a picture. Not wanting to disturb it, I moved on; after all, I didn’t know how dangerous it might be.
I asked my brother-in-law about the snakes later, when he came to the truck to get a drink, and asked if they’re dangerous. He asked if it was brown or “blue” (aoi 青い); it was brown. Blue here isn’t really blue but green: historically ao covered the entire blue-green spectrum, so for some older terms (often things like animal colours) aoi is still used instead of the more modern word for green (midori 緑). Anyway, his response to my answer was how I first heard the Japanese word 有毒な (yūdokuna), or venomous in English. Not speaking any English, he further translated it by grabbing his throat and miming frothing at the mouth.
There are also black kites flying around; they’ve been known to swoop down and snatch up kittens, and there are warning signs in some more populated places about keeping small pets close. I’ve seen them circling in the heat, but it’s hard to get a good photograph of birds with a normal smartphone camera. Here is my best capture of one:
When I was in Japan there was a rice price crisis (try saying that three times fast). With a 95% increase in price, it actually became cheaper to fly to South Korea, fill a suitcase with rice and fly back. Eventually, the government released part of the emergency rice supply it keeps in storage to tackle food shortages and mitigate disasters. This situation is likely to occur again; as an outsider looking at how Japan’s farming system is organised, that seems unavoidable without significant reforms.
In Japan the average age of a full-time rice farmer is around 70. Younger generations can only afford to do it part time, 1 or 2 days a week, and the average farmer owns just 4-6 rice fields. There are no factory farms or large-scale operations.
In this respect my wife’s family is very average. Rice farming doesn’t generate enough income to do full time, so my mother-in-law and brother-in-law only farm 1 or 2 days a week at most. Without more time, they can only plant enough fields to cover the family’s rice consumption, not to sell rice.
Part of the reason for this is the Gentan system. Designed to protect small-scale farmers’ incomes, it prevents large-scale factory farming of rice and encourages ownership of smaller farms. It has been officially abolished, but it still shapes how the rice economy works. It began partly as an effort to discourage communism by encouraging business ownership and preventing absentee landlords from accumulating large tracts of land that the people working the fields would be forced to rent. It should be noted the UK’s system is like this, with rich landowners accumulating farmland for tax reasons and renting it to farmers who often struggle to make farming profitable.
Farmers also sell their crops via a centrally managed system which fixes the price. Historically, crops used for animal feed have fetched a higher price than human-quality rice, leading a number of farmers to plant rice for themselves and sell animal feed to make a living.
Another issue is the automation of farming. Reading this account of rice farming you might think it seems very manual, and it is. In America rice is aerially planted; consistent fields and an even distribution of the rice lead to higher yields. But when you’re dealing with such a small farm area, techniques like aerial planting become less economically viable. An American farm can be roughly 100 times larger than a Japanese one.
Additionally, with the rising cost of living a lot of Japan’s youth move to cities like Tokyo, Osaka and Nagoya where they can find better-paid office work. Local rural economies struggle as they lose people, and income from work doesn’t keep up with the cost of living. It seems unavoidable that we’ll see more and more rice farms close, with further impacts from decreased output.
If you’re interested in this there’s a video about this on
Asianometry.
Reading this last section it might seem to end in doom and gloom. This isn’t really how I wanted to sign off on things. Rice farming was a positive experience for me, a connection with nature, building relationships with my wife’s family and growing my Japanese skills. Doing a day of manual labour, chatting shit, then going for the onsen and some BBQ and beers is far better than grinding away at some enterprise SaaS that will probably disappear in a few years.
Farming becoming economically unviable seems to be something afflicting many countries. At some point I expect a wakeup call or transition. Either things are changed to make it viable full-time or Japan’s system of small independent farms will gradually fade away. Only time will tell, but I hope that rural communities can continue to survive and also thrive.
...
Read the original on xd009642.github.io »
49 cards across 30 years: every GPU plotted by year and transistor count.
What Gamers Actually Use
The flagship costs $1,999. The most popular card costs $329.
RTX 3060 at 4.1% vs RTX 5090 at 0.42%.
...
Read the original on sheets.works »
We have signed a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity that we expect to come online starting in 2027. This significant expansion of our compute infrastructure will power our frontier Claude models and help us serve extraordinary demand from customers worldwide.
“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development,” said Krishna Rao, CFO of Anthropic. “We are making our most significant compute commitment to date to keep pace with our unprecedented growth.”
Demand from Claude customers has accelerated in 2026. Our run-rate revenue has now surpassed $30 billion—up from approximately $9 billion at the end of 2025. When we announced our Series G fundraising in February, we shared that over 500 business customers were each spending over $1 million on an annualized basis. Today that number exceeds 1,000, doubling in less than two months.
The vast majority of the new compute will be sited in the United States, making this partnership a major expansion of our November 2025 commitment to invest $50 billion in strengthening American computing infrastructure.
The partnership deepens our existing work with Google Cloud—building on the increased TPU capacity we announced last October—as well as our relationship with Broadcom.
We train and run Claude on a range of AI hardware—AWS Trainium, Google TPUs, and NVIDIA GPUs—which means we can match workloads to the chips best suited for them. This diversity of platforms translates to better performance and greater resilience for customers who depend on Claude for critical work. Amazon remains our primary cloud provider and training partner, and we continue to work closely with AWS on Project Rainier. Claude remains the only frontier AI model available to customers on all three of the world’s largest cloud platforms: Amazon Web Services (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry).
...
Read the original on www.anthropic.com »
Cloudflare is accelerating its post-quantum roadmap. We now target 2029 to be fully post-quantum (PQ) secure, including, crucially, post-quantum authentication.
At Cloudflare, we believe in making the Internet private and secure by default. We started by offering free universal SSL certificates in 2014, began preparing our post-quantum migration in 2019, and enabled post-quantum encryption for all websites and APIs in 2022, mitigating harvest-now/decrypt-later attacks. While we’re excited by the fact that over 65% of human traffic to Cloudflare is post-quantum encrypted, our work is not done until authentication is also upgraded. Credible new research and rapid industry developments suggest that the deadline to migrate is much sooner than expected. This is a challenge that any organization must treat with urgency, which is why we’re expediting our own internal Q-Day readiness timeline.
What happened? Last week, Google announced they had drastically improved upon the quantum algorithm to break elliptic curve cryptography, which is widely used to secure the Internet. They did not reveal the algorithm, but instead provided a zero-knowledge proof that they have one.
This is not even the biggest breakthrough. That same day, Oratomic published a resource estimate for breaking RSA-2048 and P-256 on a neutral atom computer. For P-256, it only requires a shockingly low 10,000 qubits. Google’s motivation behind their recent announcement to also pursue neutral atoms alongside superconducting quantum computers becomes clear now. Although Oratomic explains their basic approach, they still leave out crucial details on purpose.
These independent advances prompted Google to accelerate their post-quantum migration timeline to 2029. What’s more, in their announcement and other talks, Google has placed a priority on quantum-secure authentication over mitigating harvest-now/decrypt-later attacks. As we discuss next, this priority indicates that Google is concerned about Q-Day coming as soon as 2030. Following the announcements, IBM Quantum Safe’s CTO is more pessimistic and can’t rule out quantum “moonshot attacks” on high value targets as early as 2029.
The quantum threat is well known: Q-Day is the day that sufficiently capable quantum computers can break essential cryptography used to protect data and access across systems today. Cryptographically relevant quantum computers (CRQCs) don’t exist yet, but many labs across the world are pursuing different approaches to building one. Until recently, progress on CRQCs has been mostly public, but there is no reason to expect that will continue. Indeed, there is ample reason to expect that progress will leave the public eye. As quantum computer scientist Scott Aaronson warned at the end of 2025:
[A]t some point, the people doing detailed estimates of how many physical qubits and gates it’ll take to break actually deployed cryptosystems using Shor’s algorithm are going to stop publishing those estimates, if for no other reason than the risk of giving too much information to adversaries. Indeed, for all we know, that point may have been passed already.
That point has now passed indeed.
We’d like to spend some words on why it’s difficult to predict progress on quantum computing. Sudden “quantum” leaps in understanding, like the one we witnessed last week, can occur even if everything happens in the public eye. Simply put, breaking cryptography with a quantum computer requires engineering on three independent fronts: quantum hardware, error correction, and quantum software. Progress on each front compounds progress on the others.
Hardware. There are many different competing approaches. We mentioned neutral atoms and superconducting qubits, but there are also ion-trap, photonics, and moonshots like topological qubits. Complementary approaches can even be combined. Most of these approaches are pursued by several labs around the world. They all have their distinct engineering challenges and problems to solve before they can scale up. A few years ago, all of them had a long list of open challenges, and it was unclear if any of them would scale. Today most of them have made good progress. None have been demonstrated to scale yet: if they had, we wouldn’t have a couple of years left. But these approaches are much closer now, especially neutral atoms. To ignore this progress, you’d have to believe that every single approach will hit a wall.
Error correction. All quantum computers are noisy and require error-correcting codes to perform meaningful computation. This adds quite a bit of overhead, though how much depends on the architecture. More noise requires more error correction, but more interestingly, improved qubit connectivity allows for much more efficient codes. For a sense of scale: typically around a thousand physical qubits are required for one logical qubit for the superconducting quantum computers that are noisy and only have neighbor qubit connectivity. We knew “reconfigurable qubits” such as those of neutral-atom machines allow for an order of magnitude better error-correcting codes. Surprisingly, Oratomic showed the advantage is even larger: only about 3-4 physical neutral atom qubits are required per logical qubit.
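To put those overheads side by side, here is a back-of-the-envelope sketch using only the figures above. The derived numbers are rough, and they assume the 10,000-qubit figure is Oratomic’s physical-qubit estimate for P-256:

```python
# Error-correction overheads quoted above (approximate).
PHYSICAL_PER_LOGICAL_NEUTRAL_ATOM = 4        # Oratomic: ~3-4 per logical
PHYSICAL_PER_LOGICAL_SUPERCONDUCTING = 1000  # neighbour-only connectivity

# Oratomic's resource estimate for breaking P-256 on neutral atoms.
neutral_atom_physical = 10_000

# Implied logical-qubit budget, and the superconducting equivalent.
logical_budget = neutral_atom_physical // PHYSICAL_PER_LOGICAL_NEUTRAL_ATOM
superconducting_physical = logical_budget * PHYSICAL_PER_LOGICAL_SUPERCONDUCTING

print(logical_budget)            # 2500
print(superconducting_physical)  # 2500000
```

A gap of over two orders of magnitude in physical-qubit requirements is why neutral atoms moved to the front of the Q-Day timeline.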
Software. Lastly, the quantum algorithms to crack cryptography can be improved. This is Google’s breakthrough: they massively sped up the algorithm to crack P-256. On top of that, Oratomic showed further architecture specific optimizations for reconfigurable qubits.
The picture comes together: in 2025 neutral atoms turned out to be more scalable than expected, and now Oratomic figured out how to do much better error-correcting codes with such highly connected qubits. On top of that, breaking P-256 requires much less work. The result is that Q-Day has been pulled forward significantly from typical 2035+ timelines, with neutral atoms in the lead, and other approaches not far behind.
In previous blog posts we've discussed how different quantum computers compare on physical qubit count and fidelity, measured against the conservative goalpost of cracking RSA-2048 on a superconducting-qubit architecture. This analysis gives us a rough idea of how much time we have, and it's certainly better than tracking quantum factoring records, but it misses architecture-specific optimizations and software improvements. What to watch for now is when the final missing capabilities for each architecture are achieved.
Historically, the industry's focus on post-quantum cryptography (PQC) has centered largely on PQ encryption, which stops harvest-now/decrypt-later (HNDL) attacks. In an HNDL attack, an adversary harvests sensitive encrypted network traffic today and stores it until a future date when a powerful quantum computer can decrypt it. HNDL attacks are the primary threat when Q-Day is far away. That's why our focus, thus far, has been on mitigating this risk by adopting post-quantum encryption by default in our products since 2022. Today, as we mentioned above, most Cloudflare products are secure against HNDL attacks, and we're working to upgrade the rest as we speak.
The other category of attacks is against authentication: adversaries armed with functioning quantum computers impersonate servers or forge access credentials. If Q-Day is far off, authentication is not urgent: deploying PQ certificates and signatures does not add any value, only effort.
An imminent Q-Day flips the script: data leaks are severe, but broken authentication is catastrophic. Any overlooked quantum-vulnerable remote-login key is an access point for an attacker to do as they wish, whether that's to extort, take down, or snoop on your systems. Any automatic software-update mechanism becomes a remote code execution vector. An active quantum attacker has it easy: they only need to find one trusted quantum-vulnerable key to get in.
When experts in the field of building quantum computers start patching authentication systems, we should all listen. The question is no longer “when will our encrypted data be at risk?” but “how long before an attacker walks in the front door with a quantum-forged key?”
If quantum computers arrive in the next few years, they will be scarce and expensive. Attackers will prioritize high-value targets: long-lived keys that unlock substantial assets or persistent access, such as root certificates, API auth keys, and code-signing certs. If an attacker compromises one such key, they retain indefinite access until they are discovered or the key is revoked.
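The prioritization logic above can be sketched as a simple triage over a key inventory: filter to quantum-vulnerable algorithms, then rank by lifetime and privilege. The inventory entries and field names below are hypothetical, chosen only to illustrate the ordering; a real audit would pull this data from certificate stores and secret managers.

```python
# Triage sketch (hypothetical inventory, not a real scanner): rank keys
# by quantum exposure, putting long-lived, high-privilege
# quantum-vulnerable keys first.

# RSA and all elliptic-curve schemes (including Ed25519) fall to a
# cryptographically relevant quantum computer; ML-DSA is post-quantum.
QUANTUM_VULNERABLE = {"rsa-2048", "rsa-4096", "ecdsa-p256", "ed25519"}

inventory = [
    {"name": "root CA cert",       "alg": "rsa-4096",   "lifetime_days": 7300, "privilege": 3},
    {"name": "code-signing key",   "alg": "ecdsa-p256", "lifetime_days": 1095, "privilege": 3},
    {"name": "API auth key",       "alg": "ed25519",    "lifetime_days": 365,  "privilege": 2},
    {"name": "TLS leaf cert",      "alg": "ecdsa-p256", "lifetime_days": 90,   "privilege": 1},
    {"name": "ML-DSA signing key", "alg": "ml-dsa-65",  "lifetime_days": 365,  "privilege": 3},
]

at_risk = [k for k in inventory if k["alg"] in QUANTUM_VULNERABLE]
# Longest-lived, most privileged keys first: these are the ones a
# scarce, expensive quantum computer would be pointed at.
at_risk.sort(key=lambda k: (k["lifetime_days"], k["privilege"]), reverse=True)

for k in at_risk:
    print(f'{k["name"]}: {k["alg"]}, {k["lifetime_days"]} days')
```

The root CA sorts to the top: it is both the longest-lived and the most privileged, exactly the kind of key the text says attackers would spend scarce quantum cycles on.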
This suggests long-lived keys should be prioritized. That is certainly true if a quantum attack on a single key is expensive and slow, which is to be expected for the first generation of neutral-atom quantum computers. That's not the case for scalable superconducting quantum computers and later generations of neutral-atom machines, which could well crack keys much faster. Such fast CRQCs flip the script again: an adversary with one might focus purely on HNDL attacks so that their attacks remain undetected. Google's Sophie Schmieg compares this scenario to the cryptanalysis of Enigma, which changed the course of World War II.
Adding support for PQ cryptography is not enough. Systems must disable support for quantum-vulnerable cryptography to be secure against downgrade attacks. In larger, especially federated systems such as the web, this is not feasible because not every client (browser) will support post-quantum certificates, and servers need to keep supporting these legacy clients. However, downgrade protection for HTTPS is still achievable using “PQ HSTS” and/or certificate transparency.
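To see why merely adding PQ support leaves a hole, consider a toy model of key-exchange negotiation. This is not a real TLS implementation; the group names follow the X25519MLKEM768 hybrid convention, and the negotiation logic is deliberately simplified to show the downgrade.

```python
# Toy downgrade-attack model (simplified, not real TLS): the server
# supports both a PQ hybrid group and a classical one. An active
# attacker strips the PQ group from the client's offer, and the
# handshake silently falls back to quantum-vulnerable key exchange.

PREFERENCE = ["X25519MLKEM768", "X25519"]  # PQ hybrid first, classical fallback

def negotiate(client_offer, server_supported):
    """Return the first mutually supported group, or None."""
    for group in PREFERENCE:
        if group in client_offer and group in server_supported:
            return group
    return None

server = {"X25519MLKEM768", "X25519"}  # classical still enabled

honest = negotiate(["X25519MLKEM768", "X25519"], server)
print(honest)  # X25519MLKEM768

# Attacker-modified offer with the PQ group stripped out:
attacked = negotiate(["X25519"], server)
print(attacked)  # X25519 — downgrade succeeds while classical is enabled

# Once quantum-vulnerable groups are disabled, the downgrade fails
# outright instead of silently weakening the connection:
pq_only_server = {"X25519MLKEM768"}
print(negotiate(["X25519"], pq_only_server))  # None
```

The takeaway matches the text: as long as the quantum-vulnerable option remains enabled for legacy clients, an active attacker can steer the handshake onto it, which is why mechanisms like "PQ HSTS" or certificate transparency are needed where outright disabling is infeasible.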
Disabling quantum-vulnerable cryptography is not the last step: once done, all secrets such as passwords and access tokens previously exposed in the quantum-vulnerable system need to be rotated. Unlike post-quantum encryption, which takes one big push, migrating to post-quantum authentication has a long dependency chain — not to mention third-party validation and fraud monitoring. This will take years, not months.
It's natural for organizations reading this to jump straight to which internal systems they need to upgrade, but that's not the whole story. Q-Day threatens all systems, so it's important to understand the impact of a potential Q-Day on third-party dependencies, both direct and indirect: not just the third parties you speak cryptography with, but also any that are critical business dependencies, like financial services and utilities.
With Q-day approaching on a shorter timeline, post-quantum authentication is top priority. Long-term keys should be upgraded first. Deep dependency chains and the fact that everyone has third-party vendors means this effort will take on the order of years, not months. Upgrading to post-quantum cryptography is not enough: to prevent downgrades, quantum-vulnerable cryptography must also be turned off.
Today, Cloudflare provides post-quantum encryption for the majority of our products, mitigating harvest-now/decrypt-later attacks. This is the product of work we started over a decade ago to protect our customers and the Internet at large.
We are targeting full post-quantum security, including authentication, for our entire product suite by 2029. Here we're sharing some intermediate milestones we've set, subject to change as our understanding of the risk and deployment challenges evolves.
For businesses, we recommend making post-quantum support a requirement for any procurement. Common best practices, like keeping software updated and automating certificate issuance, are meaningful and will get you pretty far. We recommend assessing critical vendors early for what their failure to take action would mean for your business.
For regulatory agencies and governments: leading by setting early timelines has been crucial for industry-wide progress so far. We are now in a pivotal position where fragmentation in standards and effort between and within jurisdictions could put progress at risk. We recommend that governments assign and empower a lead agency to coordinate the migration on a clear timeline, stay security-focused, and promote the use of existing international standards. Governments need not panic, but can lead migration with confidence.
For Cloudflare customers, with respect to our services, you do not need to take any mitigating action. We are following the latest advancements in quantum computing closely and taking proactive steps to protect your data. As we have done in the past, we will turn on post-quantum security by default, with no switches to flip. What we don't control is the other side: browsers, applications, and origins need to upgrade. Customers routing corporate network traffic through Cloudflare need not worry: Cloudflare One offers end-to-end protection when tunnelling traffic through our post-quantum encrypted infrastructure.
Privacy and security are table stakes for the Internet. That’s why every post-quantum upgrade we build will continue to be available to all customers, on every plan, at no additional cost. Making post-quantum security the default is the only way to protect the Internet at scale.
Free TLS helped encrypt the web. Free post-quantum cryptography will help secure it for what comes next.
Read the original on blog.cloudflare.com »