10 interesting stories served every morning and every evening.
Writing this makes me irrationally sad, but Ghostty will be leaving GitHub.[1]
I’m GitHub user 1299, joined Feb 2008.
Since then, I’ve opened GitHub every single day. Every day, multiple times per
day, for over 18 years. Over half my life. A handful of exceptions in there
(I’d love to see the data), but I can’t imagine more than a week per year.
GitHub is the place that has made me the most happy. I always made time for
it. When I went through tough breakups? I lost myself in open source… on
GitHub. During college at 4 AM when everyone is passed out? Let me get one
commit in. During my honeymoon while my wife is still asleep? Yeah, GitHub.
It’s where I’ve historically been happiest and wanted to be.
Even the annoying stuff! Some people doom scroll social media. I’ve been doom
scrolling GitHub issues since before that was a word. On vacations I’d have
bookmarks of different projects on GitHub I wanted to study. Not just source
code, but OSS processes, how other maintainers react to difficult situations.
Etc. Believe it or not, I like this.
Some might call this sick, but my hobby and work and passion all align and for
most of my life they got to also live in one place on the internet: GitHub.
Did you know I started Vagrant (my first successful open source project) in
large part because I hoped it would get me a job at GitHub? It’s no secret,
I’ve said this repeatedly, and in my first public talk about Vagrant, when I
was a mere 20 years old, I joked “maybe GitHub will hire me if it’s good!”
GitHub was my dream job. I didn’t ever get to work there (not their fault).
But it was the perfect place I wanted to be. The engineers were incredible,
the product was incredible, and it was something I lived and breathed every
day. I still do and consistently have… for these 18 years. Enough time for
an entire human to become an adult, all on GitHub.
Lately, I’ve been very publicly critical of GitHub. I’ve been mean about it.
I’ve been angry about it. I’ve hurt people’s feelings. I’ve been lashing out.
Because GitHub is failing me, every single day, and it is personal. It is
irrationally personal. I love GitHub more than a person should love a thing,
and I’m mad at it. To the people working on it: I’m sorry for the hurt feelings.
I’ve felt this way for a long time, but for the past month I’ve kept a journal
where I put an “X” next to every date where a GitHub outage has negatively
impacted my ability to work.[2] Almost every day has an X. On the day I am
writing this post, I’ve been unable to do any PR review for ~2 hours because
there is a GitHub Actions outage.[3] This is no longer a place for serious
work if it just blocks you out for hours per day, every day.
It’s not a fun place for me to be anymore. I want to be there but it doesn’t
want me to be there. I want to get work done and it doesn’t want me to get
work done. I want to ship software and it doesn’t want me to ship software.
I want it to be better, but I also want to code. And I can’t code with GitHub
anymore. I’m sorry. After 18 years, I’ve got to go. I’d love to come back one
day, but this will have to be predicated on real results and improvements,
not words and promises.
I’ll share more details about where the Ghostty project will be moving to in
the coming months. We have a plan but I’m also very much still in discussions
with multiple providers (both commercial and FOSS).
It’ll take us time to remove all of our dependencies on GitHub and we have a
plan in place to do it as incrementally as possible. We plan on keeping a
read-only mirror available on GitHub at the current URL.
My personal projects and other work will remain on GitHub for now.
Ghostty is where I, our maintainers, and our open source community are
most impacted so that is the focus of this change. We’ll see where it
goes after that.
Footnotes
1. The timing of this is coincidental with the large outage on April 27, 2026. We’ve been discussing and putting together a plan to leave GitHub for months, and this blog post was written over a week ago. We only made the final decision this week. ↩
2. To the “Git is distributed!” crowd: the issue isn’t Git, it’s the infrastructure we rely on around it: issues, PRs, Actions, etc. ↩
3. This is not the large Elasticsearch outage they had on April 27, 2026. This blog post was written a week before that, so this was a different outage. ↩
Your phone is about to stop being yours.
126 days until lockdown
Starting September 2026, a silent update, nonconsensually pushed by Google, will block every Android app whose developer hasn’t registered with Google, signed their contract, paid up, and handed over government ID.
Every app and every device, worldwide, with no opt-out.
What Google is doing
In August 2025, Google announced a new requirement: starting September 2026, every Android app developer must register centrally with Google before their software can be installed on any device. Not just Play Store apps: all apps. This includes apps shared between friends, distributed through F-Droid, built by hobbyists for personal use. Independent developers, church and community groups, and hobbyists alike will all be frozen out of being able to develop and distribute their software.
Registration requires:
Paying a fee to Google
Agreeing to Google’s Terms and Conditions
Surrendering your government-issued identification
Providing evidence of your private signing key
Listing all current and all future application identifiers
If a developer does not comply, their apps get silently blocked on every Android device worldwide.
Who this hurts
You
You bought an Android phone because Google told you it was open. You could install what you wanted, and that was the deal.
Google is now rewriting that deal, retroactively, on hardware you already own. After the update lands, you can only run software that Google has pre-approved. On your phone: your property, that you paid for.
Independent developers
A teenager’s first app, a volunteer’s privacy tool, or a company’s confidential internal beta. It doesn’t matter. After September 2026, none of these can be installed without Google’s blessing.
F-Droid, home to thousands of free and open-source Android apps, has called this an “existential” threat. Cory Doctorow calls it “Darth Android”.
Governments & civil society
Google has a documented track record of complying when authoritarian regimes demand app removals. With this program, the software that runs your country’s institutions will exist at the pleasure of a single unaccountable foreign corporation.
The EFF calls app gatekeeping “an ever-expanding pathway to internet censorship.”
Google’s “escape hatch” is a trap door
Google says “power users” can “still install” unverified apps. Here’s what that actually looks like:
Delve into System Settings, find Developer Options
Tap the build number seven times to enable Developer Mode
Dismiss scare screens about coercion
Enter your PIN
Restart the device
Wait 24 hours
Come back, dismiss more scare screens
Pick “allow temporarily” (7 days) or “allow indefinitely”
Confirm, again, that you understand “the risks”
Nine steps. A mandatory 24-hour cooling-off period. For installing software on a device you own.
Worse: this flow runs entirely through Google Play Services, not the Android OS. Google can change it, tighten it, or kill it at any time, with no OS update required and no consent needed. And as of today, it hasn’t shipped in any beta, preview, or canary build. It exists only as a blog post and some mockups.
This is bigger than Android
If Google can retroactively lock down billions of devices that were sold as open platforms, every hardware manufacturer on the planet is watching.
The principle being established: the company that made your device gets to decide, after you’ve bought it, what software you’re allowed to run. In software, this is called a “rug pull”, but at least with software you could always install a competing alternative. In hardware, it is a fait accompli that strips you of your agency and leaves you powerless before the whims of a single unaccountable gatekeeper and convicted monopolist.
Android’s openness was never just a feature. It was the promise that distinguished it from iPhone. Millions chose Android for exactly that reason. Google is now revoking that promise unilaterally, on devices already in people’s pockets, because they’ve decided they have enough market dominance and regulatory capture to get away with it.
Ars Technica: “Google’s Apple envy threatens to dismantle Android’s open legacy.”
But wait, isn’t this…
“…just about security?”
The security rationale is a smokescreen. Google Play Protect already scans for malware independent of developer identity. Requiring a government ID doesn’t make code safer. It makes developers identifiable and controllable. Malware authors can register. Indie developers and dissidents often can’t. The EFF is blunt: identity-based gatekeeping is a censorship tool, not a security one.
“…still sideloading if you use the advanced flow?”
Nine steps, 24-hour wait, buried in Developer Options, delivered through a proprietary service that Google can revoke whenever they want. That’s not sideloading. That’s a deterrence mechanism built to ensure almost nobody completes it. And since it runs through Play Services rather than the OS, Google can tighten or kill it silently.
“…only a problem if you have something to hide?”
Whistleblowers, journalists, and activists under authoritarian governments will be the first victims. People in domestic abuse situations are next. All these groups have legitimate reasons to distribute or use software without putting their legal identity in a Google database. Anonymous open-source contribution is a tradition older than Google itself. This policy ends it on Android.
“…the same thing Apple does?”
Apple has been a walled garden from day one. People chose Android because it was different. “Apple does it too” is a race to the bottom and a weak tu quoque argument. And under regulatory pressure (the EU’s Digital Markets Act), even Apple is being forced to open up. Google is moving in the opposite direction: attempting to further entrench its gatekeeping status.
“…just $25 and some paperwork?”
Maybe, if you’re a developer in the US with a credit card and a driver’s license. Try being a student in sub-Saharan Africa, or a dissident in Myanmar, or a volunteer maintaining a community health app. The cost isn’t only financial: you’re surrendering government ID and evidence of your signing keys to a company that routinely complies with government demands to remove apps and expose developers.
Fight back
Everyone
Install F-Droid on every Android device you own. Alternative stores only survive if people actually use them.
Contact your regulators. Regulators worldwide are genuinely concerned about monopolies and the centralization of power in the tech sector, and want to hear directly from individuals who are affected and concerned.
Share this page. Link to keepandroidopen.org everywhere.
Push back on astroturfers. The “well, actually…” crowd is out in force. Don’t let them set the narrative.
Sign the change.org petition and join the over 100,000 signatories who have made their voices heard.
Read and share our open letter
Tell Google what you think of this through their own developer verification survey (for all the good that will do).
Developers
Do not sign up. Don’t join the program by signing up for the Android Developer Console and agreeing to their irrevocable Terms and Conditions. Don’t verify your identity. Don’t play ball.
Google’s plan only works if developers comply. Don’t.
Talk other developers and organizations out of signing up.
Add the FreeDroidWarn library to your apps to warn users.
Run a website? Add the countdown banner.
Google employees
If you know something about the program’s technical implementation or internal rationale, contact tips@keepandroidopen.org from a non-work machine and a non-Gmail account. Strict confidence guaranteed.
All those opposed…
69 organizations from 21 countries have signed the open letter
Read the full open letter and thank the signatories →
What they’re saying
Tech press
“Google’s Android developer verification program draws pushback” InfoWorld
“Google’s New Developer ID Rule Could Harm F-Droid” Reclaim The Net
“Google will make you wait 24 hours to sideload Android apps” How-To Geek
“An ‘existential’ threat to alternative app stores” The New Stack
“I’ve been an Android user for almost 15 years — and Google’s sideloading changes are pushing me back to iPhone” Tom’s Guide
“F-Droid Says Google Is Lying About the Future of Sideloading on Android” How-To Geek
“Android Security or Vendor Lock-In? Google’s New Sideloading Rules Smell Fishy” It’s FOSS News
“Keep Android Open” Linux Magazine
“F-Droid says Google’s new sideloading restrictions will kill the project” Ars Technica
“Sideloading is dead for all intents and purposes. The Android you know and love is slowly disappearing.” Android Police
“F-Droid project threatened by Google’s new dev registration rules” Bleeping Computer
“Google’s new developer rules could threaten sideloading and F-Droid’s future” Gizmochina
“Open letter warns mandatory registration ‘threatens innovation, competition, privacy and user freedom’” Infosecurity Magazine
The phones we carry around in our pockets have two million times more memory and are thousands of times faster than the room-sized computers that guided the Apollo mission to the Moon. This incredible shrinking act has been driven by our ability to make transistors smaller and smaller.
Each transistor is a microscopic switch that can alternate between a one and a zero, the basic language of all computing. Billions are packed onto tiny silicon chips called semiconductors. The more transistors that fit onto a chip, the more logic and memory circuits it holds, and the more it can do.
Advanced semiconductors are, arguably, the most important technology in the world. Over the last five years, they have even emerged as a geopolitical flashpoint between the US and China. But for all this rivalry, any country or company that hopes to manufacture semiconductors is dependent on a single firm: ASML. Dubbed ‘a relatively obscure Dutch company’ by the BBC in 2020, ASML makes the only machines in the world capable of stenciling the transistors onto chips with the precision necessary to fit billions on a 30-centimeter wafer.
These machines are roughly the size of double-decker buses. To ship one requires 40 freight containers, three cargo planes, and 20 trucks. They are arguably the most complex machines ever built. Each contains over one hundred thousand components, all of which have to be perfectly calibrated for the machine to produce light consistently at the right wavelength.
While ASML is now the sole supplier of these machines, and will be for some time to come, it started out as a laggard in the chipmaking industry. Overtaking its competition required many things rarely associated with European companies: close collaboration with the American government, selling large stakes to foreign competitors, and a huge gamble on an unproven technology.
Let there be light
The key to ASML’s success is a technology called photolithography (sometimes just called lithography). The technique involves transferring a pattern onto a semiconductor wafer by exposing it to light. In the 1950s, the first chipmakers had tried to draw these patterns by hand, but anything that physically touches the wafer scratches it, dirties it, or warps the pattern. Scientists working independently for Bell Labs and the US military realized that they could use light to print identical patterns without making physical contact with the wafer.
To make chips, engineers start with a thin wafer of semiconductor material, usually silicon. This wafer is coated with a chemical called photoresist, which reacts when exposed to light. In photolithography, light is projected through a detailed pattern onto the photoresist-coated wafer, softening the exposed areas. The wafer is washed to remove any softened areas, revealing the silicon underneath. It is then moved to an etching machine that blasts it with charged chlorine or bromine gas, carving the desired pattern into the exposed silicon. These features are later filled with metal, such as tungsten and copper, to connect the transistor to power. These etched layers then combine into an intricate network of transistors.
Over time, the semiconductor manufacturing ecosystem has developed increasingly sophisticated etching using ever smaller wavelengths of light. Smaller wavelengths diffract less, allowing the light to travel in straighter lines and print sharper, tinier details without blurring. These allow for more precise pattern projections that, in turn, allow smaller and more densely packed transistors.
Early lithography relied on mercury vapor lamps that were similar to streetlights, while more modern machines rely on lasers created using argon and fluorine gases. By 2010, such lasers made it possible to create a 22-nanometer feature through multiple exposures using a 193-nanometer wavelength.
The most advanced version of this technology, extreme ultraviolet lithography, is used to make the very smallest chips. The smallest in 2025 were marketed as three nanometers, roughly 25,000 times thinner than a human hair.
To make them, a droplet of liquid tin is released into a chamber and hit with a single pulse of light, which melts and flattens it. As the droplet continues to fall, a second, more powerful pulse vaporizes the tin, creating an extremely hot plasma that emits light at the narrow wavelengths needed for extreme ultraviolet lithography. The light beam is then concentrated by reflecting it across a series of slightly concave mirrors so flawless that, if scaled to the size of Germany, their imperfections would be measured in millimeters. Engineers need to use mirrors, rather than the glass lenses used in standard lithography, as almost all solid materials absorb light at such short wavelengths.
The light eventually hits the mask, which contains the pattern to be printed on the chip. As the pattern on the mask is usually several times larger than what is wanted on the chip, the light is then reflected by a second system of mirrors.
Path of light through an extreme ultraviolet lithography scanner. Image: ASML.
After the light reflects from the mask, it carries the pattern as a bundle of rays spreading out from each point. The next mirrors tip these rays inward so that, instead of spreading widely, they reunite over a shorter distance. When the rays from each point come together sooner, the picture they form is physically smaller. By repeating this with several carefully shaped mirrors, engineers shrink the pattern by a fixed amount while keeping it in focus. After being shrunk to a quarter of its original size, the pattern hits the wafer.
The great shrinking act
Longer wavelengths act like a blunt chisel, suitable for rough shaping, but they struggle to capture finer details. The longer light waves are larger relative to the tiny features on the reticle that they must reflect from. When a wave meets something smaller than itself, it naturally spreads and bends around its edges instead of casting a sharp shadow. To create the same details, the blunt chisel needs to go over the same spot a number of times (creating blurrier edges). Lithography had to take wavelengths all the way to the extreme ultraviolet range to achieve the high resolution patterning needed for cutting-edge process nodes.
Wavelengths as low as 13.5 nanometers can achieve more precise patterns in a single exposure. In fact, extreme ultraviolet lithography can combine three or four photolithography patterning cycles into a single one on a seven-nanometer node. Without EUV, producing five-nanometer nodes might require as many as one hundred different steps.
Extreme ultraviolet lithography was able to produce more accurate patterns on wafers than the older techniques could manage, even when those techniques were applied through multiple exposures.
Today, ASML dominates the overall market for lithography and has an effective monopoly in extreme ultraviolet lithography. Its EUV machines sell for more than $120 million. With a market capitalization of over $400 billion, ASML is one of Europe’s most valuable companies. But it wasn’t always like this.
Origins
ASML started off life within Philips, the Dutch consumer electronics giant. During the 1970s, Philips had roughly 20 percent of the global electronics market and was a major chipmaker. In this era, lithography machines used wavelengths of over 400 nanometers to pattern 1,000-nanometer features. The industry struggled to shrink features without losing accuracy or letting dust and flaws creep in. Philips began to work on its own prototype, drawing on its expertise in optics and precision mechanics. By the early 1980s, the project was running into trouble. The company was looking to cut costs and engineers estimated that they would need over $280 million in today’s money to finish the machine’s development and production.
In 1984, Philips spun out Advanced Semiconductor Materials Lithography (which later dropped the full name in favor of its acronym) as a joint venture with ASM International, a Dutch conglomerate that sold equipment to the semiconductor industry. The business originally struggled. It had no market share and no brand recognition. Its first product, the PAS 2000, was a commercial failure. The machine used oil pressure, like that in power steering, to move the table that held the wafer during exposure, rather than electric motors. This made it smooth and precise, but it was prone to leaking. At the first conference ASML attended, one industry executive told them: ‘The race has already been run. There’s no room for you here.’ ASML switched back to electric motors.
The company took an unusual approach from the outset. While Japanese giants Nikon and Canon were vertically integrated, ASML outsourced key components like optics and motors so that it could focus on assembling and optimizing the final machine. Given this outsourcing, it made sense for ASML to embrace a modular design with clearly defined subsystems. This approach was mocked in European manufacturing circles. German engineers warned ASML’s leadership that they were ‘asking for trouble’ and would ‘lose all control’ if they didn’t make critical components themselves. But ASML had no choice: it lacked the capital, expertise, and time to build these subsystems from scratch.
By 1988, ASML was on the verge of collapse. ASM International had already pulled out, and Philips considered shutting it down. It was saved by a single Philips board member, Gerd Lorenz, who was particularly worried about Europe’s growing dependence on Asia for strategic technology. Lorenz argued that Europe needed a stake in chip manufacturing. This was enough to convince Philips to give ASML more time, but didn’t fix its fundamental problem: it was still an inferior supplier with no competitive edge.
ASML used the time it was given to develop the PAS 5500, released in 1991 and the company’s first commercial breakout. While Nikon’s contemporary photolithography system was more precise, ASML’s modular design meant that machines could be fixed quickly on site. This reduced downtime and, by making it easy to replace parts when they broke, it was possible to extend the machine’s life. This was a key factor that led John Kelly, IBM’s director of semiconductor R&D, to push IBM to order the PAS 5500 over the Japanese machines. ASML had gone global.
The first breakthroughs
ASML’s success depended on two projects in the late 1990s and 2000s that gave it a huge advantage in research and development. The first was a public-private partnership, started in 1997, called the Extreme Ultraviolet Limited Liability Company, which began life as a rescue mission. Before 1997, basic semiconductor research was carried out in a small handful of research labs, all dependent on government grants.
The original program for EUV research was a ‘virtual national lab’ that combined Lawrence Livermore National Laboratory, Sandia National Laboratories, and the Lawrence Berkeley National Laboratory. Each covered a different component: Livermore focused on mirrors and optics, Sandia on the light source and systems engineering, and Berkeley on advanced equipment for testing. But in 1996, Department of Energy budget cuts had placed the virtual national lab program on the chopping block.
Intel, then the undisputed world leader in microprocessors, was keen to preserve the work and spearheaded the creation of the Extreme Ultraviolet Limited Liability Company, the largest public-private partnership of its kind in the history of the US Department of Energy. During its six-year life, the company invested over $270 million into extreme ultraviolet lithography development, funded by the sale of shares to member companies, giving them a right of first refusal to purchase the photolithography tools being produced.
The company initially restricted membership to American firms. ASML, along with its main Japanese rivals, Canon and Nikon, was initially barred from membership.
The only established semiconductor equipment manufacturer to join the partnership from the beginning was Silicon Valley Group, which had a market share of just 5 percent to ASML’s 20 percent. Fearing the danger of being reliant on such a small manufacturer, the rest of the companies involved concluded that it would be better to open up to foreign firms, rather than risk ceding the entire market.
ASML was allowed to participate so long as it committed to establish a research center in the US and source 55 percent of components for the systems sold in the US from American suppliers. In practice, this commitment was never enforced. Its Japanese competitors were never allowed to join, due to widespread fear in the US of Japanese competition.
The program built up a vast base of intellectual property and process knowledge. These types of public-private partnerships typically grant the participating companies a non-exclusive license to use the intellectual property generated, but in this case the companies in partnership got complete ownership.
In 2001, ASML acquired Silicon Valley Group after it ran into cash flow difficulties, making ASML the sole surviving equipment manufacturer in the partnership. When the consortium produced the first full-scale extreme ultraviolet lithography prototype — the Engineering Test Stand — ASML stood alone at the vanguard of lithography. This was the first demonstration that 13.5-nanometer light could print dense features on a chip.
By the time the Engineering Test Stand was built, the program had already proved that it was possible to generate extreme ultraviolet light reliably, which let engineers start building mirrors and lenses that could be used in real production tools. To solve outstanding questions, such as how to boost the throughput of their machines or increase the power of their light sources in production settings, ASML needed to test its machines in environments close to the real world. But no chipmakers were willing to shoulder a project so large and risky at such an early stage.
The second project essential to ASML’s success was the Belgium-based Interuniversity Microelectronics Centre (IMEC), a research organization that collects machines from different companies and allows researchers to test them in semi-real environments while protecting the companies’ intellectual property.
As potential customers began to consider different options for next generation lithography technologies, ASML used IMEC to promote its extreme ultraviolet lithography prototype. Topping ASML’s target list was TSMC, which today is the world’s largest semiconductor foundry. Founded in 1987, TSMC’s history had been intertwined with ASML’s since its birth: Philips, ASML’s former parent, owned a 27.5 percent stake in it. Seeing ASML’s machinery exhibited at IMEC was what led TSMC to partner with ASML in EUV development.
By contrast, Canon and Nikon were tight-lipped about their research and made little effort to cooperate with outside companies. While this theoretically allowed them to maintain greater control over their work, and capture more of the value chain, it also made them solely responsible for simultaneously solving a bewildering array of fundamental physics problems, while assuming all the financial risk of doing so.
ASML’s prototype extreme ultraviolet lithography system. Image: ASML.
Since almost all of the parts in ASML’s machines are made by other companies, it has become master of a sprawling supply chain of over five thousand companies. It has diversified its suppliers over the years in a very deliberate way: 80 percent of its spending goes to companies across Europe and the Middle East (notably not the US, despite prior agreements), which reduces the risk of potential export restrictions, tariffs, and other geopolitical risks that may face critical suppliers based in the US or Asia. It also aims for its suppliers to make no more than 25 percent of their revenue from ASML, so that they do not become overreliant on the volatile semiconductor market.
While most of its components come from a large number of small suppliers, ASML has formed deep bonds with its biggest suppliers. It acquired a 24.9 percent stake in Carl Zeiss SMT, the optics subsidiary of Zeiss. Peter Leibinger, vice chairman of laser manufacturer Trumpf, has said that ASML and Trumpf are a ‘virtually merged company’.
Winning the war
Extreme ultraviolet lithography would not become a successful commercial technology until 2018, over 20 years after the creation of the Extreme Ultraviolet Limited Liability Company and 34 years after IMEC was founded. In the meantime, it was consuming more and more resources. By 2015, ASML was spending more than $1 billion a year on R&D, more than double its 2010 total. According to some estimates, by 2014, the industry had collectively invested over $20 billion in extreme ultraviolet lithography, with no guarantee of any return.
ASML was able to continue pouring money into this black hole partly because it had already beaten its competitors. By 2010, it had two thirds of the overall lithography market and was the dominant supplier for the rapidly growing smartphone market, with deep ties to Intel, Samsung, and TSMC. It had secured this position by winning the decisive technical battle of the 2000s.
At the start of the millennium, the entire semiconductor industry hit a physical wall. Circuits had been getting steadily smaller for decades by simply switching to shorter wavelengths, but the standard 193-nanometer light (roughly one five-hundredth of the thickness of a human hair) was too blunt to draw smaller circuits.
Nikon tried to solve this by developing a new light source with a smaller wavelength of 157 nanometers. But this shorter wavelength light was absorbed and distorted by standard glass, forcing Nikon to build lenses out of calcium fluoride, a rare, brittle crystal that was expensive to polish and prone to cracking under heat. The industry poured hundreds of millions of dollars into this ‘dry’ lithography path, only to find the manufacturing challenges insurmountable.
ASML’s partnerships helped it avoid this dead end. TSMC researcher Burn Lin had advised them to switch to a technology called immersion lithography. ASML continued to use 193-nanometer light but placed a layer of water between the lens and the silicon wafer. Just as a straw appears bent and magnified when placed in a glass of water, the water in the machine bent the light waves, sharpening the focus and allowing smaller circuits to be printed without needing new lenses.
ASML compounded this advantage by introducing a revolutionary machine architecture called TWINSCAN. In older machines, the light source would sit idle while the machine stopped to measure the surface of the silicon wafer to ensure it was flat. ASML replaced this with a dual-stage system: a massive machine with two tables would measure one wafer in the background while another was being printed simultaneously. This eliminated the dead time in the manufacturing process, allowing chipmakers to produce significantly more chips per hour. By the time Nikon abandoned its 157-nanometer project in 2005, ASML had become the industry standard, with 53.2 percent of the market.
ASML’s machines were so much better than the competition that it could charge nearly twice as much for them: $55 million versus $30 million for the comparable Nikon device.
But even this was not enough. While ASML was beginning to ship prototype EUV machines to IMEC from 2006 onwards, they were so slow and prone to breaking down that they were commercially useless. In 2012, ASML, still reeling from the global financial crisis, was struggling to continue financing its EUV efforts.
In a drastic move — part desperate attempt to keep the company’s research efforts afloat and part strategic bet to win the EUV market once and for all — the ASML leadership launched a co-investment program that sold 23 percent of the company to its three largest customers: Intel, TSMC and Samsung.
The funding also allowed ASML to complete a $2.5 billion acquisition of one of its suppliers, Cymer, which produces lithography light sources. The acquisition allowed ASML to invest in Cymer’s R&D efforts to perfect its soft X-ray light source, which involved hitting fast-moving droplets of tin hard enough to strip their electrons, but precisely enough that the process did not shed so much debris that it coated the mirrors. They accomplished this by moving from a single pulse to two separate laser pulses: the pre-pulse would shape the droplet and the main pulse would generate the plasma. This improved efficiency and stability.
ASML’s close partnership with TSMC proved especially critical. In 2014, TSMC launched its first chip for Apple, which was now its largest customer and was putting pressure on the chipmaker to produce higher performance chips than its existing machinery was capable of. It had become urgent for ASML to complete a commercial EUV machine.
The two companies worked so closely together that Anthony Yen, the Division Director at TSMC responsible for overseeing EUV development, described them as ‘one team’. ASML and TSMC engineers on the ground worked tirelessly, troubleshooting and iterating until they had reached the necessary throughput: 500 wafers a day for a month.
During this period, the joint team redesigned both the tin-droplet generator and the way the laser hit each droplet. The new setup produced droplets that were about half the original size while still yielding the same ultraviolet energy. Smaller droplets throw off far less debris when vaporized, which slows the rate at which tin builds up on the collector mirror. Because the mirror degrades more slowly, it needs fewer replacements, keeping the machine up and running for longer stretches.
The partnership was a win for ASML, as it was able to work through some of its key engineering and commercialization challenges. It also helped TSMC become an early adopter of the most cutting-edge technology. By 2019, TSMC was ramping up mass production of its seven-nanometer process and the first phones with EUV chips were being sold by the end of the year.
ASML’s most advanced extreme ultraviolet lithography scanner, the TWINSCAN EXE:5000. Image: ASML.
Meanwhile, competitor firms like Nikon, which had never believed as strongly in extreme ultraviolet lithography, effectively gave up. In its 2013 annual report, Nikon noted that its own EUV progress had not proceeded as planned, and it was not mentioned in an annual report again. With ASML pulling ahead on R&D and locking up key customer demand, and with competitors struggling to justify their own R&D spending in the wake of the financial crisis, ASML became the last company standing in the race to commercialize the technology.
The importance of tacit knowledge
Early on, ASML cultivated a culture that was more risk tolerant than other players in the industry. It promoted high-potential talent early and had a track record of retaining key employees for decades. Much of this is a product of its challenging early years. ASML needed the talent of its younger generation to save the company, so it was more willing to promote and empower them quickly.
For example, Martin van den Brink joined ASML in 1984. Within 18 months, aged 29, he became one of two people promoted to lead the development of one of the company’s early flagship projects. He carried on working at ASML for his entire career, serving as president and chief technical officer until his retirement in 2024. This practice was far less common among ASML’s Japanese rivals, which were more hierarchical and tended to reward seniority over performance.
Retaining the best workers is especially crucial in an area like photolithography, where a huge amount of tacit knowledge is used to assemble its machines. An ASML engineer once told He Rongming, the founder of Shanghai Micro Electronics Equipment, one of China’s top ASML competitors, that the company wouldn’t be able to replicate ASML’s products even if it had the blueprints. He suggested that ASML’s products reflected ‘decades, if not centuries’ of knowledge and experience. ASML’s Chinese competitors have systematically attempted to hire former ASML engineers, and there is at least one documented case of a former ASML employee unlawfully handing over proprietary information. But none of this appears to have narrowed the gap.
A European giant
ASML is a rare example of a European tech giant. Its success was the result of transatlantic cooperation, not continental parochialism. Had the company not joined a program funded by US chipmakers, Canon and Nikon would likely still dominate a less advanced lithography industry.
Cooperation with other companies was just as important. While vertical integration gave Nikon and Canon total control, it capped their innovation at the limits of their internal resources. In a system exceeding one hundred thousand components, that ceiling proved fatal. ASML’s modular approach allowed it to import cutting-edge physics by acquiring Cymer and investing in Zeiss, while distributing the risk to customers like Intel and TSMC. This strategy created a collective engine that outspent and outpaced every rival attempting to shoulder the burden alone.
This took a great deal of courage. ASML sank billions of dollars into the development and commercialization of EUV technology, with no guarantee that it would ever work. As late as the 2010s, many semiconductor experts doubted that the technology could be successfully commercialized. Now it is the most important technology in the world.
But ASML, and by extension the continent, cannot stand still. As ASML enjoys its place as an indispensable pillar in one of the world’s most important industries, others are working to create a new paradigm in chip technology. Moore’s Law probably doesn’t end here, and in a matter of years, five nanometers won’t be small enough.
A few years back, I had a running joke with the guy I was seeing about adding him to my period tracker. Being a women’s health expert, I enjoy weaving nerdy anecdotes about cycles and attraction and desires into my flirtations and marveling at my own wit and woo-woo mastery of my cyclical body. This ruse seemed like a harmless jab at my digitally tracked self-awareness — a very late millennial feminist living in the Bay Area version of coquetry.
It maybe wasn’t all that harmless, after all.
Turns out, the matter of sharing the data around my cycle, and potentially the even more private information about my intimate experiences, wasn’t as much of a matter of choice as I might have expected. Worse, it might have been used to sell me stretch mark cream or dental dams.
Caught bloody handed
That period tracking app, Flo, has been found liable in connection with selling user data to Meta, all the while promising users their privacy was protected. The class action suit had 13 million Flo users included as plaintiffs, which is a sizeable chunk of pissed-off users amongst their reported 75 million-strong user base.
Those lawsuits against Meta and Flo, first filed in 2021 with more in the US and Canada, reveal a bigger issue in non-medical health tracking software — there’s too much gray area around consent when it comes to selling your health information to advertisers.
What’s important about the legal precedent being set is in highlighting how the current guidelines around health data privacy (like HIPAA) are woefully lagging behind the health tracking tech already available directly to users. It raises a number of critical questions:
What does this legal vagueness mean for how we choose to self monitor our biological markers?
In a post-Dobbs environment, how do concerns around digital privacy impact our consumer choices in sexual health and period tracking apps?
Why is it still up to the consumer to run safety checks when it should be the role of product teams and healthtech brands to build less creepy tech?
Do we really need to be tracking every possible symptom and mood and cramp and letting private tech companies decide what to do with that data?
Feeling “creamy” today? Great, we’ll let Mark Zuckerberg know.
Joking about the consistency of my ovulation was already a bridge too far and a line I opted not to venture to cross with said beau. I certainly wouldn’t have willingly announced to anyone parsing through data at Meta if I had masturbated or had unprotected sex on any given day. The Flo app might have made that decision for me, though.
For all my mental back and forths about whether or not to actually send a partner my cycle calendar, Flo might have been sending the intimate details of our sexual encounters to a bunch of tech bros behind my back. Turns out, Flo had embedded a secret “eavesdropping” tool which passed along information like menstruation cycle, ovulation, and if a user was trying to get pregnant to Meta, even while explicitly claiming not to in their privacy policy.
As slippery as an ovulation flow, Flo was telling us our private data was safely hidden from prying eyes. The verdict in the August 2025 Frasco v. Flo lawsuit proved otherwise:
“Flo, through the Flo App, unlawfully shared users’ sensitive health data — including menstrual cycle, ovulation, and pregnancy-related information — with third parties such as Meta, Google, and Flurry for their own commercial use (Burr & Forman, 2025).”
The jury found Meta liable for collecting sensitive reproductive health data and using it for its own gain. The other parties listed settled out of court, which means their involvement in the breach gets to stay more private than the health data of Flo users between 2016 and 2019.
Nothing feminism needs more nowadays than a bit more irony, right?
This wasn’t a hack. It was a design decision.
It’s important to call out that these third-party platforms didn’t hack into the Flo app. The folks in charge of making privacy decisions at Flo handed them our sensitive data on a silver platter. It was simple track-and-sell data sharing and we maybe should have seen it coming.
I’ve written before about how ‘pinkwashing’ femtech can disguise a whole host of unethical product decisions. Prior to heading for greener and more private pastures with my period tracking app selection, Flo was already starting to give me the ick. The UX design was getting more convoluted, more cluttered, more cartoonish with every update.
Quickly, the Flo home screen was becoming more bloated than a late-luteal phase tummy. Opening the app to log whether I had spotted a bit that morning or had insomnia or tender breasts was like navigating a minefield of tired femme designs and redundant reminders to meditate.
With each update, the home display presented me with the option for ever growing opportunities for negative symptom reporting. Without any differentiation in hierarchy, everything seemed flatly pathological. The symptoms were pushed more and more to the front and advice popped out at every turn, essentially burying the actual cycle tracker.
In the context of the Flo-Meta filings, this makes sense — focusing on the “problems” of periods can help drive sales of items purporting to alleviate symptoms. There isn’t much to monetize from a simple period calendar, is there? It’s dystopian to realize the emphasis on symptomology was helping to drive advertising on sites even more recently found liable for personal harm on par with tobacco companies.
At the end of the day, no amount of pinkwashed ‘empowerment’ or ‘evolved’ mentions of sex toys and self pleasure can cover up who benefitted* from these design choices.
The gap between HIPAA and ‘wellness’ is where consent goes to die
Flo changed its privacy policy a whopping 13 times in the three years relevant to the legal claims (2016 – 2019). These lawsuits show that none of those edits made the consent users thought they were giving meaningful in any real way.
Lawsuits like the Flo-Meta lawsuits are notable in that they are helping to build a foundation of legal precedent within the gray zone of non-HIPAA compliant wellness tech. Much of health tech, which includes a lot of reproductive health tech currently on the market, isn’t explicitly clinical or directly tied to communications with a healthcare provider.
Which means you can be logging deep information about the functions of your body and receiving automated advice on adjustments that might improve those functions, and in all likelihood none of it falls under the protection of current health and privacy laws. It is left to the discretion of the apps themselves to create the policies around what data to share, sell, or report to government agencies.
They also have pretty broad discretion in the designs around consent they are willing and able to offer users. The design decisions and consent frameworks in-product can be guided by best-practices, but those choices are still largely driven by the opinions within product teams. This is how sloppy consent patterns continue to get shipped out to users, even when the product might deal in incredibly sensitive data collection.
It wasn’t like some cybercriminal was holding Flo ransom. These decisions came from people embedded in legal, design, engineering, and sales roles, passing through a chain of employees who ultimately threw users under the bus for profit.
It’s hard to track down exact information on the number of staff employed by Flo from 2016 – 2019 and who was directly responsible for these choices. By most accounts, it was a lean operation — probably around 350 employees at any given time in those years. That’s a pretty small group of folks making potentially monumental decisions about how highly sensitive health data got collected, stored, and shared in addition to how those processes and policies were communicated to their millions of users worldwide.
If we’re left to our own devices, who will protect us?
It seems like we can’t just necessarily leave it up to companies — or their ragtag teams of crackpot lawyers rewriting privacy policies every few months — to keep our private data private. I guess we’re left needing to hurt Mark Zuckerberg’s feelings every now and again in order to just use our vibrators in peace.
The law is slow to catch up, even more so when it comes to regulating tech. This makes me nervous when considering the rush to increase the collection of data around women’s health in an effort to close the data gap. This is a worthy aim, but how much trust can we really place in private companies operating outside of clinically guided structures?
This is even before we factor in the increased use of generative AI to populate health advice within apps that seem to intentionally circumvent the healthcare space and thus avoid complying with the user protections under that categorical umbrella. There is such a thing as too much data, though try telling that to a PM trying to make his KPIs. If the data comes from unmanaged flows, is collected by methods prioritized for third-party ad sales, and is gathered without the direct consent of users, how much can we even rely on the derivative generative outputs? Is this the standard we want to set for collecting women’s health data? Is it worth all the costs?
Personally, this reeks of moving fast and breaking things to me. Flo definitely broke my trust, along with at least 13 million former Flo users. With (reportedly) over a third of US women utilizing period tracking apps and a similar rate of use amongst women in the EU, there’s a significant market to capture here. Unlike in 2016 when Flo was one of few players on the field, there are hundreds of cycle tracking apps for savvy users to select from today, not to mention the increasing availability of built-in cycle trackers within other health apps and wearables.
Though Flo remains one of the top downloaded of the bunch, for many of us, it’s a matter of once burned, twice shy. Personally, I’m a big fan of WildAI, which doesn’t bother to ask me if I’ve rubbed one out and therefore has no interest in telling a tech behemoth a whole lot more than if I bothered to note if I was thirsty and horny and hungry on the same day. You and Mark can guess how much space those notes take up on my cycle calendar all on your own. I prefer it that way, and Flo should too.
*Let’s just take a moment, by the way, to reflect on how the dev dudes setting up personalized ad gating at Google might have been tracking the sex toy use and prevalence of anal sex amongst Flo users so they might drive up pay per click (PPC) rates across your apps. Obviously, this is feminism at its finest.
**It might be worth arguing that, in a post-Dobbs world and in countries with wishy-washy digital privacy standards, meticulously logging sexy self-play might not carry potential health benefits worth the risk of having it wind up in the hands of such loose-lipped data brokers. It’s bad enough we have to worry about the privacy violations of the vibrators themselves. Maybe “dumb” dildos are the better option these days, actually. We’ll have to get to that in another post.
📰 News
2026-03-06: 🚀 VibeVoice ASR is now part of a Transformers release! You can now use our speech recognition model directly through the Hugging Face Transformers library for seamless integration into your projects (a usage sketch appears under Models below).
2026-01-21: 📣 We open-sourced VibeVoice-ASR, a unified speech-to-text model designed to handle 60-minute long-form audio in a single pass, generating structured transcriptions containing Who (Speaker), When (Timestamps), and What (Content), with support for User-Customized Context. Try it in Playground.
⭐️ VibeVoice-ASR is natively multilingual, supporting over 50 languages — check the supported languages for details.
🔥 The VibeVoice-ASR finetuning code is now available!
⚡️ vLLM inference is now supported for faster inference; see vllm-asr for more details.
📑 VibeVoice-ASR Technique Report is available.
2025-12-16: 📣 We added experimental speakers to VibeVoice‑Realtime‑0.5B for exploration, including multilingual voices in nine languages (DE, FR, IT, JP, KR, NL, PL, PT, ES) and 11 distinct English style voices. Try it. More speaker types will be added over time.
2025-12-03: 📣 We open-sourced VibeVoice‑Realtime‑0.5B, a real‑time text‑to‑speech model that supports streaming text input and robust long-form speech generation. Try it on Colab.
2025-09-05: VibeVoice is an open-source research framework intended to advance collaboration in the speech synthesis community. After release, we discovered instances where the tool was used in ways inconsistent with the stated intent. Since responsible use of AI is one of Microsoft’s guiding principles, we have removed the VibeVoice-TTS code from this repository.
2025-08-25: 📣 We open-sourced VibeVoice-TTS, a long-form multi-speaker text-to-speech model that can synthesize speech up to 90 minutes long with up to 4 distinct speakers. — accepted as an Oral at ICLR 2026! 🔥
Overview
VibeVoice is a family of open-source frontier voice AI models that includes both Text-to-Speech (TTS) and Automatic Speech Recognition (ASR) models.
A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a next-token diffusion framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.
For more information, demos, and examples, please visit our Project Page.
Models
1. 📖 VibeVoice-ASR - Long-form Speech Recognition
VibeVoice-ASR is a unified speech-to-text model designed to handle 60-minute long-form audio in a single pass, generating structured transcriptions containing Who (Speaker), When (Timestamps), and What (Content), with support for Customized Hotwords.
🕒 60-minute Single-Pass Processing:
Unlike conventional ASR models that slice audio into short chunks (often losing global context), VibeVoice-ASR accepts up to 60 minutes of continuous audio input within a 64K token length. This ensures consistent speaker tracking and semantic coherence across the entire hour.
👤 Customized Hotwords:
Users can provide customized hotwords (e.g., specific names, technical terms, or background info) to guide the recognition process, significantly improving accuracy on domain-specific content.
📝 Rich Transcription (Who, When, What):
The model jointly performs ASR, diarization, and timestamping, producing a structured output that indicates who said what and when.
📖 Documentation | 🤗 Hugging Face | 🎮 Playground | 🛠️ Finetuning | 📊 Paper
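Given the Transformers integration announced in the news section, loading the model through the standard ASR pipeline should look roughly like the sketch below. The model id is a guess on our part, and we omit a hotwords argument because the exact interface for passing customized hotwords lives in the official documentation, not here:

```python
# Minimal sketch, assuming VibeVoice-ASR is exposed through the standard
# Transformers ASR pipeline. The model id "microsoft/VibeVoice-ASR" is
# hypothetical; consult the linked documentation for the real identifier
# and for how to pass customized hotwords.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="microsoft/VibeVoice-ASR",  # hypothetical model id
)

# Long-form audio goes in as a single file; the README says up to 60 minutes.
result = asr("meeting.wav", return_timestamps=True)
print(result["text"])
for chunk in result.get("chunks", []):
    print(chunk["timestamp"], chunk["text"])
```

The `return_timestamps=True` flag is standard pipeline behavior and maps naturally onto the "When" part of the structured output described above.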
2. 🎙️ VibeVoice-TTS - Long-form Multi-speaker TTS
Best for: Long-form conversational audio, podcasts, multi-speaker dialogues
⏱️ 90-minute Long-form Generation:
Synthesizes conversational/single-speaker speech up to 90 minutes in a single pass, maintaining speaker consistency and semantic coherence throughout.
👥 Multi-speaker Support:
Supports up to 4 distinct speakers in a single conversation, with natural turn-taking and speaker consistency across long dialogues.
🎭 Expressive Speech:
Generates expressive, natural-sounding speech that captures conversational dynamics and emotional nuances.
🌐 Multi-lingual Support:
Supports English, Chinese, and other languages.
📖 Documentation | 🤗 Hugging Face | 📊 Paper
Demo samples: English, Chinese, Cross-Lingual, Spontaneous Singing, Long Conversation with 4 people.
3. ⚡ VibeVoice-Streaming - Real-time Streaming TTS
VibeVoice-Realtime is a lightweight real‑time text-to-speech model supporting streaming text input and robust long-form speech generation.
Parameter size: 0.5B (deployment-friendly)
Real-time TTS (~300 milliseconds first audible latency)
Streaming text input
Robust long-form speech generation (~10 minutes)
📖 Documentation | 🤗 Hugging Face | 🚀 Colab
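To make the latency figures concrete: in streaming TTS the model starts emitting audio while text is still arriving, so time-to-first-audio is decoupled from total synthesis time. The sketch below uses a stub in place of the model (the real VibeVoice-Realtime API will differ) just to show how the two measurements come apart:

```python
# Conceptual sketch of streaming TTS with a stubbed synthesizer; the real
# VibeVoice-Realtime API will differ. The point: audio chunks are yielded as
# text arrives, so first audible latency (the README quotes ~300 ms) can be
# far lower than the time to render the full utterance.
import time
from typing import Iterator

def fake_stream_tts(text_chunks: Iterator[str]) -> Iterator[bytes]:
    """Stub standing in for the model: yields audio as text arrives."""
    for _text in text_chunks:
        time.sleep(0.05)        # pretend per-chunk synthesis work
        yield b"\x00" * 2400    # ~0.05 s of 24 kHz 16-bit mono silence

start = time.monotonic()
first_audio_at = None
total_chunks = 0
for audio in fake_stream_tts(iter(["Hello", " there,", " world."])):
    if first_audio_at is None:
        first_audio_at = time.monotonic() - start
    total_chunks += 1           # a real client would play or buffer `audio`
elapsed = time.monotonic() - start
print(f"time to first audio: {first_audio_at * 1000:.0f} ms")
print(f"total time for {total_chunks} chunks: {elapsed * 1000:.0f} ms")
```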
Contributing
Please see CONTRIBUTING.md for detailed contribution guidelines.
⚠️ Risks and Limitations
While efforts have been made to optimize VibeVoice through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions from its base model (specifically, Qwen2.5-1.5B in this release).
Potential for Deepfakes and Disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.
We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.
Star History
I wanted to give an update on GitHub’s availability in light of two recent incidents. Neither incident was acceptable, and we are sorry for the impact they had on you. I want to share some details on both, as well as explain what we’ve done and what we’re doing to improve our reliability.
We started executing our plan to increase GitHub’s capacity by 10X in October 2025 with a goal of substantially improving reliability and failover. By February 2026, it was clear that we needed to design for a future that requires 30X today’s scale.
The main driver is a rapid change in how software is being built. Since the second half of December 2025, agentic development workflows have accelerated sharply. By nearly every measure, the direction is already clear: repository creation, pull request activity, API usage, automation, and large-repository workloads are all growing quickly.
This exponential growth does not stress one system at a time. A pull request can touch Git storage, mergeability checks, branch protection, GitHub Actions, search, notifications, permissions, webhooks, APIs, background jobs, caches, and databases. At high scale, small inefficiencies compound: queues deepen, cache misses become database load, indexes fall behind, retries amplify traffic, and one slow dependency can affect several product experiences.
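One phrase in that list, "retries amplify traffic," names a classic failure mode: clients that retry immediately can multiply load on an already struggling service. The standard mitigation is capped exponential backoff with full jitter; the sketch below is a generic illustration, not GitHub's code:

```python
# Generic sketch of capped exponential backoff with full jitter, the
# textbook mitigation for retry amplification. Not GitHub's implementation.
import random
import time

def call_with_backoff(op, max_attempts=5, base=0.1, cap=5.0):
    """Run op(); on failure wait random(0, min(cap, base * 2**attempt))."""
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Full jitter desynchronizes clients so a recovering service
            # is not hit by a thundering herd of simultaneous retries.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

if __name__ == "__main__":
    outcomes = iter([RuntimeError("busy"), RuntimeError("busy"), "ok"])
    def op():
        item = next(outcomes)
        if isinstance(item, Exception):
            raise item
        return item
    print(call_with_backoff(op))  # succeeds on the third attempt
```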
Our priorities are clear: availability first, then capacity, then new features. We are reducing unnecessary work, improving caching, isolating critical services, removing single points of failure, and moving performance-sensitive paths into systems designed for these workloads. This is distributed systems work: reducing hidden coupling, limiting blast radius, and making GitHub degrade gracefully when one subsystem is under pressure. We’re making progress quickly, but these incidents are examples of where there’s still work to do.
What we’re doing
In the short term, we had to resolve a variety of bottlenecks that appeared faster than expected: moving webhooks to a different backend (out of MySQL), redesigning the user session cache, and redoing authentication and authorization flows to substantially reduce database load. We also leveraged our migration to Azure to stand up much more compute.
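For context on why a session-cache redesign reduces database load: in the usual cache-aside layout, only cache misses ever reach the database. The sketch below is the generic pattern with hypothetical helper names, nothing GitHub-specific:

```python
# Generic cache-aside sketch: sessions are served from an in-process cache
# and only misses touch the database. Names are hypothetical; nothing here
# is GitHub's actual design.
import time

CACHE: dict[str, tuple[float, dict]] = {}
TTL_SECONDS = 300

def load_session_from_db(session_id: str) -> dict:
    return {"id": session_id, "user": "example"}  # stand-in for a DB query

def get_session(session_id: str) -> dict:
    entry = CACHE.get(session_id)
    if entry and time.monotonic() - entry[0] < TTL_SECONDS:
        return entry[1]                          # cache hit: no database load
    session = load_session_from_db(session_id)   # miss: exactly one DB query
    CACHE[session_id] = (time.monotonic(), session)
    return session

get_session("abc")
get_session("abc")  # second call is served entirely from the cache
```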
Next we focused on isolating critical services like Git and GitHub Actions from other workloads and reducing the blast radius by removing single points of failure. This work started with a careful analysis of dependencies and of different tiers of traffic, to understand what needed to be pulled apart and how we could minimize the impact of various attacks on legitimate traffic. We then addressed those risks in priority order. Similarly, we accelerated the migration of performance- and scale-sensitive code out of the Ruby monolith into Go.
While we were already migrating out of our smaller custom data centers into the public cloud, we also started working on a path to multi-cloud. This longer-term measure is necessary to achieve the level of resilience, low latency, and flexibility that will be needed in the future.
The number of repositories on GitHub is growing faster than ever, but a much harder scaling challenge is the rise of large monorepos. For the last three months, we’ve been investing heavily in response to this trend, both within the Git system and in the pull request experience.
We will have a separate blog post soon describing the extensive work we’ve done and the new upcoming API design for greater efficiency and scale. As part of this work, we have invested in optimizing merge queue operations, since that is key for repos that have many thousands of pull requests a day.
Recent incidents
The two recent incidents were different in cause and impact, but both reflect why we are increasing our focus on availability, isolation, and blast-radius reduction.
April 23 merge queue incident
On April 23, pull requests experienced a regression affecting merge queue operations.
Pull requests merged through merge queue using the squash merge method produced incorrect merge commits when a merge group contained more than one pull request. In affected cases, changes from previously merged pull requests and prior commits were inadvertently reverted by subsequent merges.
During the impact window, 658 repositories and 2,092 pull requests were affected. We initially shared slightly higher numbers because our first assessment was intentionally conservative. The issue did not affect pull requests merged outside merge queue, nor did it affect merge queue groups using merge or rebase methods.
There was no data loss: all commits remained stored in Git. However, the state of affected default branches was incorrect, and we could not safely repair every repository automatically. More details are available in the incident root cause analysis.
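The root cause analysis linked above has the authoritative details. As a toy model of this general failure class (our illustration, not GitHub's description of the actual defect), consider a squash that writes a pull request's snapshot, taken against a stale base, instead of replaying its diff onto the current branch:

```python
# Toy model of the failure class: branch states are dicts of file -> content.
# This is our illustration only, not GitHub's merge queue implementation.
base = {"a.txt": "v1", "b.txt": "v1"}
pr1 = {**base, "a.txt": "v2"}   # PR1 edits a.txt
pr2 = {**base, "b.txt": "v2"}   # PR2 edits b.txt, branched from the same base

main = dict(pr1)                # PR1 squash-merges first

# Buggy squash: write PR2's snapshot (built against the stale base) as the
# new state of main, instead of replaying PR2's diff on top of current main.
buggy_main = dict(pr2)
assert buggy_main["a.txt"] == "v1"   # PR1's change silently reverted

# Correct squash: apply only PR2's changes relative to its merge base.
pr2_diff = {k: v for k, v in pr2.items() if base.get(k) != v}  # {"b.txt": "v2"}
correct_main = {**main, **pr2_diff}
assert correct_main == {"a.txt": "v2", "b.txt": "v2"}
```

In the toy model the snapshot and the diff coincide when the group holds a single pull request, which matches the report that only merge groups containing more than one pull request were affected.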
This incident exposed multiple process failures, and we are changing those processes to prevent this class of issue from recurring.
April 27 Elasticsearch incident
On April 27, an incident affected our Elasticsearch subsystem, which powers several search-backed experiences across GitHub, including parts of pull requests, issues, and projects.
We are still completing the root cause analysis and will publish it shortly. What we know now is that the cluster became overloaded (likely due to a botnet attack) and stopped returning search results. There was no data loss, and Git operations and APIs were not impacted. However, parts of the UI that depended on search showed no results, which caused a significant disruption.
This is one of the systems we had not yet fully isolated to eliminate as a single point of failure, because other areas had been higher in our risk-prioritized reliability work. That impact is unacceptable, and we are using the same dependency and blast-radius analysis described above to reduce the likelihood and impact of this type of failure in the future.
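Isolating a dependency like search so that its failure degrades the experience rather than breaking it is commonly done with a circuit breaker in front of the backend plus an explicit fallback response. A minimal generic sketch, not GitHub's architecture:

```python
# Minimal circuit-breaker sketch: after repeated failures, stop calling the
# unhealthy backend for a cooldown period and serve a degraded response
# instead of an empty or broken page. Generic illustration only.
import time

class CircuitBreaker:
    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, None

    def call(self, op, fallback):
        if self.opened_at and time.monotonic() - self.opened_at < self.cooldown:
            return fallback()                 # breaker open: skip the backend
        try:
            result = op()
            self.failures, self.opened_at = 0, None   # healthy again: close
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()     # trip the breaker
            return fallback()

def search_backend(query: str) -> dict:
    raise TimeoutError("cluster overloaded")  # simulate the outage

breaker = CircuitBreaker()
for _ in range(4):
    print(breaker.call(lambda: search_backend("repo:x is:open"),
                       lambda: {"results": [], "degraded": True}))
```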
Increasing transparency
We have also heard clear feedback that customers need greater transparency during incidents.
We recently updated the GitHub status page to include availability numbers. We have also committed to posting status updates for incidents both large and small, so you do not have to guess whether an issue is on your side or ours.
We are continuing to improve how we categorize incidents so that the scale and scope are easier to understand. We are also working on better ways for customers to report incidents and share signals with us during disruptions.
Our commitment
GitHub’s role has always been to support developers on an open and extensible platform.
The team at GitHub is incredibly passionate about our work. We hear the pain you’re experiencing. We read every email, social post, and support ticket, and we take it all to heart. We’re sorry.
We are committed to improving availability, increasing resilience, scaling for the future of software development, and communicating more transparently along the way.
Editor’s note: This post was updated on April 28, 2026, to update the number of repos affected during the April 23 incident.
Written by
Vladimir Fedorov is GitHub’s Chief Technology Officer, bringing decades of experience in engineering leadership and innovation. A passionate advocate for developer productivity, Vlad is leading GitHub’s engineering team to shape the future of developer tools and innovation with a developer-first mindset.
Before joining GitHub, Vlad co-founded UserClouds, a startup specializing in data governance and privacy. He spent 12 years at Facebook, now Meta, as Senior Vice President, leading engineering teams of over 2,000 across Privacy, Ads, and Platform. Earlier in his career, Vlad worked at Microsoft and earned both his BS and MS in Computer Science from Caltech. He currently serves on the board of Codepath.org, an organization dedicated to reprogramming higher education to create the first AI-native generation of engineers, CTOs, and founders.
Vlad lives in the Bay Area and when not working enjoys spending time outside and on the water with his family.
On April 17, 2026, Sam Altman’s other AI company, Tools For Humanity, announced a partnership with Bruno Mars as he embarks on his Romantic Tour. The announcement coincided with the company’s Concert Kit tool, which allegedly allows “verified humans” to access VIP tickets and concert experiences.
However, Bruno Mars’ management and Live Nation released a joint statement on April 22, claiming that the partnership didn’t exist. “To be clear, we were never even approached by [Tools For Humanity], nor were we in any discussions regarding a partnership or tour access,” the statement read. “We first learned that our tour was being used to promote their project after their keynote made those initial claims.”
Those claims originated from TFH’s chief product officer, Tiago Sada, during a company event. The company then published a post on its website including Sada’s quote about Bruno Mars’ Romantic Tour. Eventually, word got back to Mars’ team.
AI Company Executive Gets His Marses Confused, Is Actually Partnering with Jared Leto’s Band
The initial post on Tools For Humanity’s website has since been edited to correct the false information. A spokesperson also confirmed the company “does not have any agreement with Bruno Mars to test or feature Concert Kit.” Additionally, “there is no association or affiliation with the artist or his tour.”
Tools For Humanity is actually partnering with Thirty Seconds to Mars on their 2027 European tour. While TFH has not disclosed the actual reason for the false Bruno Mars announcement, it looks a bit like a case of mistaken identity. Pretty ironic, since the company’s whole shtick is supposedly verifying human identities.
The company launched in 2019, initially as a way to verify human identities in online spaces to prevent fraud. Its pitch extended to live music monopoly Live Nation-Ticketmaster, which is often plagued by bots and scammers. In 2023, TFH launched a physical identity verification device in the form of an orb that scans human irises.
Unfortunately, the orb does not also tell fortunes, which is clearly a major design flaw. If it did, they’d probably be able to prevent this Mars mix-up before it happened.