10 interesting stories served every morning and every evening.
Writing this makes me irrationally sad, but Ghostty will be leaving GitHub[1].
I’m GitHub user 1299, joined Feb 2008.
Since then, I’ve opened GitHub every single day. Every day, multiple times per
day, for over 18 years. Over half my life. A handful of exceptions in there
(I’d love to see the data), but I can’t imagine more than a week per year.
GitHub is the place that has made me the most happy. I always made time for
it. When I went through tough breakups? I lost myself in open source… on
GitHub. During college at 4 AM when everyone is passed out? Let me get one
commit in. During my honeymoon while my wife is still asleep? Yeah, GitHub.
It’s where I’ve historically been happiest and wanted to be.
Even the annoying stuff! Some people doom scroll social media. I’ve been doom
scrolling GitHub issues since before that was a word. On vacations I’d have
bookmarks of different projects on GitHub I wanted to study. Not just source
code, but OSS processes, how other maintainers react to difficult situations.
Etc. Believe it or not, I like this.
Some might call this sick, but my hobby and work and passion all align and for
most of my life they got to also live in one place on the internet: GitHub.
Did you know I started Vagrant (my first successful open source project) in
large part because I hoped it would get me a job at GitHub? It’s no secret,
I’ve said this repeatedly, and in my first public talk about Vagrant, when I
was a mere 20 years old, I joked “maybe GitHub will hire me if it’s good!”
GitHub was my dream job. I didn’t ever get to work there (not their fault).
But it was the perfect place I wanted to be. The engineers were incredible,
the product was incredible, and it was something I lived and breathed every
day. I still do and consistently have… for these 18 years. Enough time for
an entire human to become an adult, all on GitHub.
Lately, I’ve been very publicly critical of GitHub. I’ve been mean about it.
I’ve been angry about it. I’ve hurt people’s feelings. I’ve been lashing out.
Because GitHub is failing me, every single day, and it is personal. It is
irrationally personal. I love GitHub more than a person should love a thing,
and I’m mad at it. I’m sorry about the hurt feelings to the people working on
it.
I’ve felt this way for a long time, but for the past month I’ve kept a journal
where I put an “X” next to every date where a GitHub outage has negatively
impacted my ability to work[2]. Almost every day has an X. On the day I am
writing this post, I’ve been unable to do any PR review for ~2 hours because
there is a GitHub Actions outage[3]. This is no longer a place for serious
work if it locks you out for hours per day, every day.
It’s not a fun place for me to be anymore. I want to be there but it doesn’t
want me to be there. I want to get work done and it doesn’t want me to get
work done. I want to ship software and it doesn’t want me to ship software.
I want it to be better, but I also want to code. And I can’t code with GitHub
anymore. I’m sorry. After 18 years, I’ve got to go. I’d love to come back one
day, but this will have to be predicated on real results and improvements,
not words and promises.
I’ll share more details about where the Ghostty project will be moving to in
the coming months. We have a plan but I’m also very much still in discussions
with multiple providers (both commercial and FOSS).
It’ll take us time to remove all of our dependencies on GitHub and we have a
plan in place to do it as incrementally as possible. We plan on keeping a
read-only mirror available on GitHub at the current URL.
My personal projects and other work will remain on GitHub for now.
Ghostty is where I, our maintainers, and our open source community are
most impacted so that is the focus of this change. We’ll see where it
goes after that.
Footnotes
[1] The timing of this is coincidental with the large outage on April 27, 2026. We’ve been discussing and putting together a plan to leave GitHub for months, and this blog post was written over a week ago. We only made the final decision this week. ↩
[2] To the “Git is distributed!” crowd: the issue isn’t Git, it’s the infrastructure we rely on around it: issues, PRs, Actions, etc. ↩
[3] This is not the large Elasticsearch outage they had on April 27, 2026. This blog post was written a week before that, so this was a different outage. ↩
Your phone is about to stop being yours.
125 days until lockdown
Starting September 2026, a silent update, nonconsensually pushed by Google, will block every Android app whose developer hasn’t registered with Google, signed their contract, paid up, and handed over government ID.
Every app and every device, worldwide, with no opt-out.
What Google is doing
In August 2025, Google announced a new requirement: starting September 2026, every Android app developer must register centrally with Google before their software can be installed on any device. Not just Play Store apps: all apps. This includes apps shared between friends, distributed through F-Droid, built by hobbyists for personal use. Independent developers, church and community groups, and hobbyists alike will all be frozen out of being able to develop and distribute their software.
Registration requires:
Paying a fee to Google
Agreeing to Google’s Terms and Conditions
Surrendering your government-issued identification
Providing evidence of your private signing key
Listing all current and all future application identifiers
If a developer does not comply, their apps get silently blocked on every Android device worldwide.
Who this hurts
You
You bought an Android phone because Google told you it was open. You could install what you wanted, and that was the deal.
Google is now rewriting that deal, retroactively, on hardware you already own. After the update lands, you can only run software that Google has pre-approved. On your phone: your property, that you paid for.
Independent developers
A teenager’s first app, a volunteer’s privacy tool, or a company’s confidential internal beta. It doesn’t matter. After September 2026, none of these can be installed without Google’s blessing.
F-Droid, home to thousands of free and open-source Android apps, has called this an “existential” threat. Cory Doctorow calls it “Darth Android”.
Governments & civil society
Google has a documented track record of complying when authoritarian regimes demand app removals. With this program, the software that runs your country’s institutions will exist at the pleasure of a single unaccountable foreign corporation.
The EFF calls app gatekeeping “an ever-expanding pathway to internet censorship.”
Google’s “escape hatch” is a trap door
Google says “power users” can “still install” unverified apps. Here’s what that actually looks like:
Delve into System Settings, find Developer Options
Tap the build number seven times to enable Developer Mode
Dismiss scare screens about coercion
Enter your PIN
Restart the device
Wait 24 hours
Come back, dismiss more scare screens
Pick “allow temporarily” (7 days) or “allow indefinitely”
Confirm, again, that you understand “the risks”
Nine steps. A mandatory 24-hour cooling-off period. For installing software on a device you own.
Worse: this flow runs entirely through Google Play Services, not the Android OS. Google can change it, tighten it, or kill it at any time, with no OS update required and no consent needed. And as of today, it hasn’t shipped in any beta, preview, or canary build. It exists only as a blog post and some mockups.
This is bigger than Android
If Google can retroactively lock down billions of devices that were sold as open platforms, every hardware manufacturer on the planet is watching.
The principle being established: the company that made your device gets to decide, after you’ve bought it, what software you’re allowed to run. In software, this is called a “rug pull”; but at least you could always install competing software. In hardware, it is a fait accompli that strips you of your agency and renders you powerless to the whims of a single unaccountable gatekeeper and convicted monopolist.
Android’s openness was never just a feature. It was the promise that distinguished it from iPhone. Millions chose Android for exactly that reason. Google is now revoking that promise unilaterally, on devices already in people’s pockets, because they’ve decided they have enough market dominance and regulatory capture to get away with it.
Ars Technica: “Google’s Apple envy threatens to dismantle Android’s open legacy.”
But wait, isn’t this…
“…just about security?”
The security rationale is a smokescreen. Google Play Protect already scans for malware independent of developer identity. Requiring a government ID doesn’t make code safer. It makes developers identifiable and controllable. Malware authors can register. Indie developers and dissidents often can’t. The EFF is blunt: identity-based gatekeeping is a censorship tool, not a security one.
“…still sideloading if you use the advanced flow?”
Nine steps, 24-hour wait, buried in Developer Options, delivered through a proprietary service that Google can revoke whenever they want. That’s not sideloading. That’s a deterrence mechanism built to ensure almost nobody completes it. And since it runs through Play Services rather than the OS, Google can tighten or kill it silently.
“…only a problem if you have something to hide?”
Whistleblowers, journalists, and activists under authoritarian governments will be the first victims. People in domestic abuse situations are next. All these groups have legitimate reasons to distribute or use software without putting their legal identity in a Google database. Anonymous open-source contribution is a tradition older than Google itself. This policy ends it on Android.
“…the same thing Apple does?”
Apple has been a walled garden from day one. People chose Android because it was different. “Apple does it too” is a race to the bottom and a weak tu quoque argument. And under regulatory pressure (the EU’s Digital Markets Act), even Apple is being forced to open up. Google is moving in the opposite direction: attempting to further entrench its gatekeeping status.
“…just $25 and some paperwork?”
Maybe, if you’re a developer in the US with a credit card and a driver’s license. Try being a student in sub-Saharan Africa, or a dissident in Myanmar, or a volunteer maintaining a community health app. The cost isn’t only financial: you’re surrendering government ID and evidence of your signing keys to a company that routinely complies with government demands to remove apps and expose developers.
Fight back
Everyone
Install F-Droid on every Android device you own. Alternative stores only survive if people actually use them.
Contact your regulators. Regulators worldwide are genuinely concerned about monopolies and the centralization of power in the tech sector, and want to hear directly from individuals who are affected and concerned.
Share this page. Link to keepandroidopen.org everywhere.
Push back on astroturfers. The “well, actually…” crowd is out in force. Don’t let them set the narrative.
Sign the change.org petition and join the over 100,000 signatories who have made their voices heard.
Read and share our open letter
Tell Google what you think of this through their own developer verification survey (for all the good that will do).
Developers
Do not sign up. Don’t join the program by signing up for the Android Developer Console and agreeing to their irrevocable Terms and Conditions. Don’t verify your identity. Don’t play ball.
Google’s plan only works if developers comply. Don’t.
Talk other developers and organizations out of signing up.
Add the FreeDroidWarn library to your apps to warn users.
Run a website? Add the countdown banner.
Google employees
If you know something about the program’s technical implementation or internal rationale, contact tips@keepandroidopen.org from a non-work machine and a non-Gmail account. Strict confidence guaranteed.
All those opposed…
69 organizations from 21 countries have signed the open letter
Read the full open letter and thank the signatories →
What they’re saying
Tech press
“Google says it’s making Android sideloading ‘high-friction’ to better warn users about potential risks” XDA Developers
“Google’s dev registration plan ‘will end the F-Droid project’” The Register
“Over 67 groups urge the company to drop ID checks for apps distributed outside Play” The Register
“Google’s Attack on Sideloading Will Rob Android of One of Its Best Features” How-To Geek
“Google kneecaps indie Android devs, forces them to register” The Register
“Android Security or Vendor Lock-In? Google’s New Sideloading Rules Smell Fishy” It’s FOSS News
“F-Droid Says Google Is Lying About the Future of Sideloading on Android” How-To Geek
“Google’s Apple envy threatens to dismantle Android’s open legacy” Ars Technica
“F-Droid project threatened by Google’s new dev registration rules” Bleeping Computer
“Keep Android Open” Linux Magazine
“Google’s Android developer verification program draws pushback” InfoWorld
“We all know that’s a load of bullshit. Adding a goddamn 24-hour waiting period is batshit insanity.” Thom Holwerda, OSnews
“Google will verify Android developers distributing apps outside the Play store” The Verge
written on April 28, 2026
GitHub was not the first home of my Open Source software. SourceForge
was.
Before GitHub, I had my own Trac installation. I had Subversion repositories,
tickets, tarballs, and documentation on infrastructure I controlled. Later I
moved projects to Bitbucket, back when Bitbucket still felt like a serious
alternative place for Open Source projects, especially for people who were not
all-in on Git yet.
And then, eventually, GitHub became the place, and I moved all of it there.
It is hard for me to overstate how important GitHub became in my life. A large
part of my Open Source identity formed there. Projects I worked on found users
there. People found me there, and I found other people there. Many professional
relationships and many friendships started because some repository, issue, pull
request, or comment thread made two people aware of each other.
That is why I find what is happening to GitHub today so sad and so
disappointing. I do not look at it as just the folks at Microsoft making
product decisions I dislike. GitHub was part of the social infrastructure of
Open Source for a very long time. For many of us, it was not merely where the
code lived; it was where a large part of the community lived.
So when I think about GitHub’s decline, I also think about what came before it,
and what might come after it. I have written a few times over the years about
dependencies, and in particular about the problem of micro
dependencies. In my mind, GitHub gave
life to that phenomenon. It was something I definitely did not completely
support, but it also made Open Source more inclusive. GitHub changed how Open
Source feels,
and later npm and other systems changed how dependencies feel. Put them
together and you get a world in which publishing code is almost frictionless,
consuming code is almost frictionless, and the number of projects in the world
explodes.
That has many upsides. But it is worth remembering that Open Source did not
always work this way.
A Smaller World
Before GitHub, Open Source was a much smaller world. Not necessarily in the
number of people who cared about it, but in the number of projects most of us
could realistically depend on.
There were well-known projects, maintained over long periods of time by a
comparatively small number of people. You knew the
names. You knew the mailing lists. You knew who
had been around for years and who had earned trust. That trust was not perfect,
and the old world had plenty of gatekeeping, but reputation mattered in a very
direct way. We took pride (and got frustrated) when the Debian folks came and
told us our licensing stuff was murky or the copyright headers were not up to
snuff, because they packaged things up.
A dependency was not just a package name. It was a project with a history, a
website, a maintainer, a release process, a lot of friction, and often a place in
a larger community. You did not add dependencies casually, because the act of
depending on something usually meant you had to understand where it came from.
Not all of this was necessarily intentional, but because these projects were
comparatively large, they also needed to bring their own infrastructure. Small
projects might run on a university server, and many of them were on SourceForge,
but the larger ones ran their own show. They grouped together into larger
collectives to make it work.
We Ran Our Own Infrastructure
My first Open Source projects lived on infrastructure I ran myself. There was a
Trac installation, Subversion repositories, tarballs, documentation, and release
files served from my own machines or from servers under my control. That was
normal. If you wanted to publish software, you often also became a small-time
system administrator. Georg and I ran our own
collective for our Open Source projects: Pocoo. We
shared server costs and the burden of maintaining Subversion and Trac, mailing
lists and more.
Subversion in particular made this “running your own forge” natural. It was
centralized: you needed a server, and somebody had to operate it.
The project had a home, and that home was usually quite literal: a hostname, a
directory, a Trac instance, a mailing list archive.
When Mercurial and Git arrived, they were philosophically the opposite. Both
were distributed. Everybody could have the full repository. Everybody could
have their own copy, their own branches, their own history. In principle, those
distributed version control systems should have reduced the need for a single
center. But despite all of this, GitHub became the center.
That is one of the great ironies of modern Open Source. The distributed version
control system won, and then the world standardized on one enormous centralized
service for hosting it.
What GitHub Gave Us
It is easy now to talk only about GitHub’s failures, of which there are currently
many, but that would be unfair: GitHub was, and continues to be, a tremendous
gift to Open Source.
It made creating a project easy and it made discovering projects easy. It made
contributing understandable to people who had never subscribed to a development
mailing list in their life. It gave projects issue trackers, pull requests,
release pages, wikis, organization pages, API access, webhooks, and later CI.
It normalized the idea that Open Source happens in the open, with visible
history and visible collaboration. And it was an excellent and reasonable
default choice for a decade.
But maybe the most underappreciated thing GitHub did was archival work: GitHub
became a library. It became an index of a huge part of the software commons
because even abandoned projects remained findable. You could find forks, and
old issues and discussions all stayed online. For all the complaints one can
make about centralization, that centralization also created discoverable memory.
The leaders there once cared a lot about keeping GitHub available even in
countries that were sanctioned by the US.
I know what the alternative looks like, because I was living it. Some of my
earliest Open Source projects are technically still on
PyPI, but the actual packages are gone.
The metadata points to my old server, and that server has long stopped serving
those files.
That was normal before the large platforms. A personal domain expired, a VPS
In April 2026, Canonical disclosed 44 CVEs in uutils, the Rust reimplementation of GNU coreutils that ships by default since 25.10. Most of them came out of an external audit commissioned ahead of the 26.04 LTS.
I read through the list and thought there’s a lot to learn from it.
What’s notable is that all of these bugs landed in a production Rust codebase, written by people who knew what they were doing, and none of them were caught by the borrow checker, clippy lints, or cargo audit.
I’m not writing this to criticize the uutils team. Quite the contrary; I actually want to thank them for sharing the audit results in such detail so that we can all learn from them.
We also had Jon Seager, VP Engineering for Ubuntu, on our ‘Rust in Production’ podcast recently and a lot of listeners appreciated his honesty about the state of Rust at Canonical.
If you write systems code in Rust, this is the most concentrated look at where Rust’s safety ends that you’ll likely find anywhere right now.
Don’t Trust a Path Across Two Syscalls
This is the largest cluster of bugs in the audit. It’s also the reason cp, mv, and rm are still GNU in Ubuntu 26.04 LTS. :(
The pattern is always the same. You do one syscall to check something about a path, then another syscall to act on the same path. Between those two calls, an attacker with write access to a parent directory can swap the path component for a symbolic link. The kernel re-resolves the path from scratch on the second call, and the privileged action lands on the attacker’s chosen target.
Rust’s standard library makes this easy to get wrong. The ergonomic APIs you reach for first (fs::metadata, File::create, fs::remove_file, fs::set_permissions) all take a path and re-resolve it every time, rather than taking a file descriptor and operating relative to that.
That’s fine for a normal program, but if you’re writing a privileged tool that needs to be secure against local attackers, you have to be careful.
Case Study: CVE-2026-35355
Here’s the bug, simplified from src/uu/install/src/install.rs.
// 1. Clear the destination
fs::remove_file(to)?;
// …
// 2. Create the destination. The path is re-resolved here!
let mut dest = File::create(to)?; // follows symlinks, truncates
copy(from, &mut dest)?;
Between step 1 and step 2, anyone with write access to the parent directory can plant to as a symlink to, say, /etc/shadow. Then File::create follows the symlink and the privileged process happily overwrites /etc/shadow with whatever from happened to contain.
The fix uses OpenOptions::create_new(true):
fs::remove_file(to)?;
let mut dest = OpenOptions::new()
.write(true)
.create_new(true)
.open(to)?;
copy(from, &mut dest)?;
The docs for create_new say (emphasis mine):
No file is allowed to exist at the target location, also no (dangling) symlink. In this way, if the call succeeds, the file returned is guaranteed to be new.
Rule: Anchor on a File Descriptor Instead
A &Path in Rust looks like a value, but remember that to the kernel it’s just a name. That name can point to different things from one syscall to the next.
Anchor your operations on a file descriptor instead.
create_new() only helps with that when you’re creating a new file. For everything else, open the parent directory once and work relative to that handle.
If you act on the same path twice, assume it’s a TOCTOU (Time Of Check To Time Of Use) bug until you’ve proven otherwise.
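A minimal standard-library sketch of the "anchor on a descriptor" idea: std's File::set_permissions operates on the already-open file descriptor (fchmod under the hood), so the path is resolved exactly once. The chmod_by_fd helper and the scratch file name are hypothetical illustrations, not code from uutils.

```rust
use std::fs::{self, File, Permissions};
use std::io;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

// Hypothetical helper: resolve the path once, then do everything
// through the file descriptor. Even if the *name* is swapped for a
// symlink after the open, the chmod still lands on the file we opened.
fn chmod_by_fd(path: &Path, mode: u32) -> io::Result<()> {
    let f = File::open(path)?; // single path resolution
    f.set_permissions(Permissions::from_mode(mode)) // fchmod on the fd
}

fn main() -> io::Result<()> {
    let p = std::env::temp_dir().join("fd_anchor_demo");
    File::create(&p)?;
    chmod_by_fd(&p, 0o600)?;
    assert_eq!(fs::metadata(&p)?.permissions().mode() & 0o777, 0o600);
    fs::remove_file(&p)
}
```

For operations std exposes only path-based (unlink, rename), you would drop down to openat(2)-style calls relative to a directory descriptor; std does not wrap those directly.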
Set Permissions at Creation Time, Not After
This is a close relative of TOCTOU. You want a directory with restrictive permissions, so you write something like this.
// Create with default permissions
fs::create_dir(&path)?;
// Fix up permissions
fs::set_permissions(&path, Permissions::from_mode(0o700))?;
For a brief moment, path exists with the default permissions. Any other user on the system can open() it during that window. Once they have a file descriptor, the later chmod doesn’t take it away from them.
Rule: Set Permissions at Creation, Never After
Reach for OpenOptions::mode() and DirBuilderExt::mode() so the file or directory is born with the permissions you want. The kernel will apply your umask on top, so set that explicitly too if you really care.
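A sketch of what that looks like in practice, using only the standard library. The directory layout and names here are invented for illustration; the point is that both the directory and the file are born with restrictive modes, so there is no window for another user to grab a descriptor.

```rust
use std::fs::{self, DirBuilder, OpenOptions};
use std::io;
use std::os::unix::fs::{DirBuilderExt, OpenOptionsExt, PermissionsExt};
use std::path::Path;

// Hypothetical helper: create a private directory and a private file
// inside it, with the permissions applied atomically at creation time.
fn create_private(dir: &Path) -> io::Result<()> {
    DirBuilder::new().mode(0o700).create(dir)?; // mkdir(2) with mode 0700
    OpenOptions::new()
        .write(true)
        .create_new(true) // also refuses a pre-planted symlink
        .mode(0o600)      // open(2) with O_CREAT and mode 0600
        .open(dir.join("secret"))?;
    Ok(())
}

fn main() -> io::Result<()> {
    let d = std::env::temp_dir().join("born_private");
    let _ = fs::remove_dir_all(&d);
    create_private(&d)?;
    // Owner bits survive any common umask (022, 077).
    assert_eq!(fs::metadata(&d)?.permissions().mode() & 0o700, 0o700);
    fs::remove_dir_all(&d)
}
```

Note that the kernel still subtracts the process umask from these modes, which is why the rule says to set the umask explicitly if the exact bits matter.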
String Equality on Paths Is Not the Same as Filesystem Identity
The original --preserve-root check in chmod was literally this:
if recursive && preserve_root && file == Path::new("/") {
    return Err(PreserveRoot);
}
That comparison is bypassed by anything that resolves to / but isn’t spelled /. So /../, /./, /usr/.., or a symlink that points to /. Run chmod -R 000 /../ and see it rip right past your check and lock down the whole system.
Here’s the fix:
fn is_root(file: &Path) -> bool {
    matches!(fs::canonicalize(file), Ok(p) if p == Path::new("/"))
}

if recursive && preserve_root && is_root(file) {
    return Err(PreserveRoot);
}
Rule: Resolve Paths Before Comparing Them
canonicalize resolves .., ., and symlinks into a real absolute path. That’s a lot better than string comparison.
Oh and if you were wondering about this line:
matches!(fs::canonicalize(file), Ok(p) if p == Path::new("/"))
I think that’s just a fancy way of saying
// First, resolve the path to its canonical form
if let Ok(p) = fs::canonicalize(file) {
    // If that succeeded, check if the canonical path is "/"
    p == Path::new("/")
} else {
    false
}
In the specific case of --preserve-root, this works because / has no parent directory, so there’s nothing for an attacker to swap from underneath you. In the more general case of comparing two arbitrary paths for filesystem identity, however, you’d want to open both and compare their (dev, inode) pairs, the way GNU coreutils does. (Think identity, not string equality.)
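A sketch of that (dev, inode) comparison using std's unix MetadataExt. The same_file helper is an invented name, not the coreutils implementation. One caveat: fs::metadata itself re-resolves the path, so for TOCTOU-sensitive code you would stat already-open file descriptors rather than paths.

```rust
use std::fs;
use std::io;
use std::os::unix::fs::MetadataExt;
use std::path::Path;

// Hypothetical helper: two paths name the same filesystem object iff
// they resolve to the same (device, inode) pair.
fn same_file(a: &Path, b: &Path) -> io::Result<bool> {
    let (ma, mb) = (fs::metadata(a)?, fs::metadata(b)?);
    Ok(ma.dev() == mb.dev() && ma.ino() == mb.ino())
}

fn main() -> io::Result<()> {
    // "/", "/..", and "/./" are different strings but the same inode.
    assert!(same_file(Path::new("/"), Path::new("/.."))?);
    assert!(same_file(Path::new("/"), Path::new("/./"))?);
    Ok(())
}
```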
By the way, my favorite bug in this group is CVE-2026-35363:
rm . # ❌
rm .. # ❌
rm ./ # ✅
rm ./// # ✅
It refused . and .. but happily accepted ./ and .///, then deleted the current directory while printing Invalid input. 😅
Stay in Bytes at Unix Boundaries
Rust’s String and &str are always UTF-8.
That’s a great choice in 99% of all cases, but Unix paths, environment variables, arguments, and the inputs flowing through tools like cut, comm, and tr live in the messy world of bytes.
Every time a Rust program bridges that gap, it has three options.
Lossy conversion with from_utf8_lossy silently rewrites invalid bytes to U+FFFD. That’s just fancy data corruption.
🫤 Strict conversion with unwrap or ? crashes or refuses to operate.
😚 Staying in bytes with OsStr or &[u8] is what you should usually do.
The audit found bugs in both of the first two categories. Here’s an example.
Case Study: comm (CVE-2026-35346)
This is the original code, from src/uu/comm/src/comm.rs.
// ra, rb are &[u8], raw bytes from the input files.
print!("{}", String::from_utf8_lossy(ra));
print!("{delim}{}", String::from_utf8_lossy(rb));
GNU comm works on binary files because it just shuffles bytes around. The uutils version replaced anything that wasn’t valid UTF-8 with U+FFFD, which silently corrupted the output.
Here’s the fix: stay in bytes.
let mut out = BufWriter::new(io::stdout().lock());
out.write_all(ra)?;
out.write_all(delim)?;
out.write_all(rb)?;
print! forces a UTF-8 round-trip through Display. Write::write_all does not.
It writes the raw bytes directly to stdout.
Rule: Pick the Right Type for the Situation
For Unix-flavored systems code, use Path and PathBuf for filesystem paths, OsString for environment variables, and Vec<u8> or &[u8] for stream contents. It’s tempting to round-trip them through String for easier formatting, but that’s where the corruption creeps in.
UTF-8 is a great default for application strings, but it’s absolutely, positively the wrong default for the raw byte stuff Unix tools work with.
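The three options above can be seen side by side with std's OsStr on Unix. The byte string here is an invented example of a file name that is perfectly legal on Unix but not valid UTF-8.

```rust
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;

fn main() {
    // A legal Unix file name that is not valid UTF-8 (0xff byte).
    let raw = OsStr::from_bytes(b"caf\xc3\xa9-\xff.txt");

    // Strict conversion refuses: 0xff has no meaning in UTF-8.
    assert!(raw.to_str().is_none());

    // Lossy conversion "succeeds" by rewriting 0xff to U+FFFD,
    // i.e. it silently corrupts the name.
    assert!(raw.to_string_lossy().contains('\u{FFFD}'));

    // Staying in bytes preserves the data exactly.
    assert_eq!(raw.as_bytes(), b"caf\xc3\xa9-\xff.txt");
}
```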
Wiz Research uncovered a critical vulnerability (CVE-2026-3854) in GitHub’s internal git infrastructure that could have affected both GitHub.com and GitHub Enterprise Server. By exploiting an injection flaw in GitHub’s internal protocol, any authenticated user could execute arbitrary commands on GitHub’s backend servers with a single git push command - using nothing but a standard git client.
Notably, this is one of the first critical vulnerabilities discovered in closed-source binaries using AI, highlighting a shift in how these flaws are identified. Despite the complexity of the underlying system, the vulnerability is remarkably easy to exploit. On GitHub.com, this vulnerability allowed remote code execution on shared storage nodes. We confirmed that millions of public and private repositories belonging to other users and organizations were accessible on the affected nodes. On GitHub Enterprise Server, the same vulnerability grants full server compromise, including access to all hosted repositories and internal secrets.
GitHub mitigated this issue on GitHub.com within 6 hours of our report, released patches for all supported versions of GitHub Enterprise Server, and published the CVE at the time of release. GitHub Enterprise Server customers should upgrade immediately - at the time of this writing, our data indicates that 88% of instances are still vulnerable. Detailed remediation steps and further technical details are available in GitHub’s security blog post.
“GitHub greatly appreciates the collaboration, professionalism, and partnership that Wiz has shown throughout this process. A finding of this caliber and severity is rare, earning one of the highest rewards available in our Bug Bounty program, and serves as a reminder that the most impactful security research comes from skilled researchers who know how to ask the right questions. As the landscape evolves, these close partnerships with talented hunters and researchers are more important than ever.”
Alexis Wales, GitHub CISO
This post breaks down the vulnerability, walks through the exploitation chain, and provides recommendations for GHES administrators to protect their environments.
Required Actions and Mitigations
GitHub.com: GitHub has mitigated this issue. No action is required for GitHub.com users.
GitHub Enterprise Server: Immediate action required.
Upgrade to GHES version 3.19.3 or later - this release patches CVE-2026-3854
Affected Versions
Find Vulnerable GHES Instances with Wiz
Wiz customers can identify vulnerable GitHub Enterprise Server instances in their environments using this pre-built query in the Wiz Threat Center. The query identifies all GHES instances running a version vulnerable to this issue.
Why We Researched GitHub’s Git Infrastructure
GitHub is the world’s largest code hosting platform, home to hundreds of millions of repositories spanning open source projects, enterprise codebases, and critical infrastructure. Its internal git infrastructure - the pipeline that processes every git push - is one of the most security-sensitive systems on the internet. When a user pushes code, it passes through multiple internal services, each written in a different programming language. This multi-service architecture creates opportunities for inconsistencies in how each component parses and trusts shared data.
We’ve looked into GitHub Enterprise Server (GHES) in the past to hunt for these exact types of vulnerabilities. However, extracting and auditing the sheer volume of compiled blackbox binaries that run this pipeline historically required an impractical amount of time and manual effort.
But this is Round 2, and the landscape has shifted. By leveraging AI-augmented tooling - particularly automated reverse engineering using IDA MCP - we were able to do what was previously too costly. Using AI, we rapidly analyzed GitHub’s compiled binaries, reconstructed internal protocols, and systematically identified where user input could influence server behavior across the entire pipeline. Thanks to this new capability, we found a fundamental flaw in how that input flows through GitHub’s multi-service architecture.
Technical Deep-Dive
Understanding the Architecture
When a user runs git push against GitHub via SSH, the request flows through several key components:
babeld - A git proxy and the entry point for all git operations. It receives the user’s SSH connection and forwards authentication to gitauth.
gitauth - An internal authentication service. It verifies the user’s credentials, checks whether they have push access to the target repository, and returns the security policies that apply to the session - file size limits, branch naming rules, and more. babeld takes this response and constructs an internal header containing all of this security metadata.
gitrpcd - An internal RPC server. It receives the request from babeld, parses the X-Stat header, and sets up the environment for downstream processes. Critically, gitrpcd performs no authentication of its own - it trusts babeld completely and treats every field in the X-Stat header as authoritative.
The pre-receive hook - A compiled Go binary that enforces security policies before a push is accepted. It checks file size limits, branch naming rules, LFS integrity, and runs any admin-defined custom hooks.
The critical link between these components is the X-Stat header. It carries security-critical fields as semicolon-delimited key=value pairs. Internal services parse this header by splitting on ; and populating a map. A key detail: the map uses last-write-wins semantics. If a key appears twice, the later value silently overrides the earlier one.
When babeld forwards a push request, one of the internal requests includes push options in the X-Stat header. Git push options are arbitrary strings that users can pass with git push -o. They are a standard git protocol feature, intended for server-side hints. babeld encodes them as numbered fields - push_option_0, push_option_1, and so on - alongside a push_option_count.
The Vulnerability: X-Stat Field Injection
So what happens when user-controlled input reaches the X-Stat header without proper sanitization?
babeld copies git push option values directly into the X-Stat header - without sanitizing semicolons. Since ; is the X-Stat field delimiter, any semicolon in a push option value breaks out of its designated field and creates new, attacker-controlled fields.
Consider a push option value that contains a semicolon followed by a security field name. babeld embeds the value verbatim, so when internal services later split the header on ;, the smuggled text parses as additional key=value fields alongside the legitimate ones.
The attacker’s value wins because it appears later in the header - last-write-wins.
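The last-write-wins behavior is easy to reproduce. Below is a hedged sketch - the field names come from this post, but the real X-Stat layout is internal to GitHub, so treat the header shape as illustrative:

```python
def parse_x_stat(header: str) -> dict:
    """Split a semicolon-delimited header into key=value pairs.
    If a key repeats, the later value silently overwrites the earlier one."""
    fields = {}
    for part in header.split(";"):
        key, sep, value = part.partition("=")
        if sep:
            fields[key] = value  # last-write-wins
    return fields

# babeld embeds the push option value verbatim after "push_option_0=",
# so a semicolon in the value creates a new, attacker-controlled field:
injected = "x;large_blob_rejection_enabled=false"
header = "large_blob_rejection_enabled=true;push_option_0=" + injected

print(parse_x_stat(header)["large_blob_rejection_enabled"])  # → false
```

The legitimate `large_blob_rejection_enabled=true` appears first, but the injected copy appears later in the header and wins.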
We confirmed this both through binary analysis and on the wire - a packet capture on a live GHES instance showed injected fields appearing alongside and overriding their legitimate counterparts in the X-Stat header.
By combining reverse engineering of the pre-receive binary with wire-level analysis, we mapped out the injectable X-Stat fields. Among the most security-relevant are rails_env, custom_hooks_dir, and repo_pre_receive_hooks, along with policy flags such as large_blob_rejection_enabled.
The first three are the ones that matter most - together, they lead to remote code execution.
Escalation to RCE
Overriding security flags like large_blob_rejection_enabled is interesting, but the real question is: can we turn field injection into code execution?
The answer lies in three fields from the table above: rails_env, custom_hooks_dir, and repo_pre_receive_hooks. To understand why, we need to look at how the pre-receive hook binary handles custom hooks.
GHES supports admin-defined custom pre-receive hooks - scripts that run before a push is accepted. By reverse engineering the pre-receive binary, we discovered it has two execution paths, controlled entirely by the rails_env field from the X-Stat header: a production value that runs hooks inside a sandbox, and any other value that runs hooks directly - no sandbox, no isolation - as the git service user with full filesystem access.
The only thing separating these two paths is the value of rails_env. And we can inject it.
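A minimal reconstruction of that branch - names are ours, and the real pre-receive hook is a compiled Go binary whose sandbox involves far more than a string comparison:

```python
def select_execution_path(rails_env: str) -> str:
    """Sketch of the dispatch described above: only the literal value
    "production" takes the sandboxed path; anything else runs hooks
    directly as the git service user."""
    if rails_env == "production":
        return "sandboxed"
    return "unsandboxed"  # no sandbox, no isolation

print(select_execution_path("production"))   # → sandboxed
print(select_execution_path("development"))  # → unsandboxed
```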
The escalation to RCE chains three injections together:
Step 1 - Bypass the sandbox. Inject a non-production rails_env value to switch from the sandboxed production path to the unsandboxed path.
Step 2 - Redirect the hook directory. Inject custom_hooks_dir to control the base directory where the binary looks up hook scripts.
Step 3 - Inject a hook definition with path traversal. Inject repo_pre_receive_hooks with a crafted hook entry whose script field contains a path traversal sequence. The binary’s path resolution joins the attacker- controlled base directory with the traversal payload, resolving to an arbitrary binary on the filesystem.
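Step 3 works because of naive path joining. A hedged sketch of the resolution (the directory and payload are illustrative; the real binary is Go):

```python
import os

custom_hooks_dir = "/data/attacker-controlled"  # injected in step 2
script = "../../usr/bin/id"                     # traversal payload from step 3

# Joining the attacker-controlled base directory with the traversal
# sequence resolves to an arbitrary binary outside the hooks directory.
resolved = os.path.normpath(os.path.join(custom_hooks_dir, script))
print(resolved)  # → /usr/bin/id
```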
The non-production path then executes the resolved path directly - no arguments, no sandbox - as the git service user.
With unsandboxed code execution as the git user, we had full control over the GHES instance, including filesystem read/write access and visibility into internal service configuration.
From GHES to GitHub.com
We had RCE on GitHub Enterprise Server. The next question was obvious - does this work on GitHub.com?
We ran the same exploitation chain against a repository on GitHub.com. The push completed successfully, but the custom hooks never executed. No remote: output, no code execution - nothing.
To understand what was happening, we injected user_operator_mode=bool:true to enable debug output on both platforms. Comparing the output side by side, we noticed that GitHub.com was missing certain hook execution steps that appeared on GHES - the custom hooks code path was simply not being reached.
We went back to the binary and dug deeper. Through further reverse engineering, we identified a boolean flag in the X-Stat header that controls whether the server operates in enterprise mode. On GHES, this flag defaults to true - so the custom hooks path is always active. On GitHub.com, it defaults to false, meaning custom hooks are never reached under normal conditions.
Since this flag was also carried in the X-Stat header, it was injectable through the same mechanism. One more injected field, and the full exploitation chain worked on GitHub.com. This time, we executed hostname instead of id.
RCE on GitHub.com - confirmed.
Cross-Tenant Impact
RCE on GitHub Enterprise Server is a critical vulnerability. On GitHub.com, the same flaw had broader implications due to the shared infrastructure serving multiple users and organizations.
GitHub.com is a multi-tenant platform. Repositories belonging to millions of different organizations and users are stored on shared backend infrastructure. When we achieved code execution on GitHub.com, we landed on a shared storage node running as the git user.
The git user exists for a reason: it serves all repository operations across the node. By design, it has broad filesystem access to every repository hosted on that node. Compromising this user meant we could read any repository on the node, regardless of which organization or user owned it. We enumerated repository index entries accessible from two compromised nodes and found millions of entries across each, belonging to other users and organizations.
To be clear: we did not access the contents of other tenants’ repositories. We validated the cross-tenant exposure using only our own test accounts, confirming that the git user’s filesystem permissions would allow reading any repository on the node.
Conclusion
A single git push command was enough to exploit a flaw in GitHub’s internal protocol and achieve code execution on backend infrastructure. The vulnerability chain highlights a pattern that extends well beyond GitHub. When multiple services written in different languages pass data through a shared internal protocol, the assumptions each service makes about that data become a critical attack surface. In this case, one service assumed push option values were safe to embed verbatim. Another assumed every field in the X-Stat header was set by a trusted source. The pre-receive hook assumed an environment variable could only be production in production. Each assumption was reasonable in isolation - and dangerous in combination.
The presence of a non-production code path in a production binary, the lack of path traversal validation on hook scripts, and the use of a delimiter-based protocol without input sanitization are patterns that appear across many codebases. We encourage teams building multi-service architectures to audit how user-controlled input flows through internal protocols - especially where security-critical configuration is derived from shared data formats.
This research was made possible by AI-augmented reverse engineering tooling, particularly IDA MCP, which allowed us to rapidly analyze compiled binaries and reconstruct internal protocols at a speed that would not have been feasible manually. As these tools continue to mature, we expect them to play an increasingly important role in uncovering vulnerability classes that require deep cross-component analysis.
Responsible Disclosure Timeline
2026-03-04 - Wiz Research discovers the X-Stat push option injection vulnerability.
2026-03-04 - RCE confirmed on GHES 3.19.1.
2026-03-04 - Wiz Research reports the vulnerability to GitHub.
2026-03-04 - GitHub acknowledges receipt.
2026-03-04 - GitHub deploys fix on GitHub.com.
2026-03-10 - CVE-2026-3854 assigned with CVSS 8.7.
2026-03-10 - GHES patch released.
2026-04-28 - Public disclosure.
Stay in touch!
Hi there! We are Sagi Tzadik (@sagitz_), Nir Ohfeld (@nirohfeld), Ronen Shustin (@ronenshh), Hillai Ben-Sasson (@hillai), Yuval Avrahami (@yuvalavra), and Noam Malron (@noamsec) from the Wiz Research Team (@wiz_io). We are a group of veteran white-hat hackers with a single goal: to make the cloud a safer place for everyone. We primarily focus on finding new attack vectors in the cloud and uncovering isolation issues in cloud vendors and service providers. We would love to hear from you! Feel free to contact us on X (Twitter) or via email: research@wiz.io.
28 Apr 2026
OpenAI’s ad platform has two halves. On the ChatGPT side, the backend injects structured single_advertiser_ad_unit objects into the conversation SSE stream while the model is responding. On the merchant side, a tracking SDK called OAIQ runs in the visitor’s browser and reports product views back to OpenAI. The two are tied together by Fernet-encrypted click tokens, four of them per ad.
I captured both halves on a consented mobile-traffic research fleet. Everything below comes from observed traffic.
When you send a message to ChatGPT, the backend opens an SSE response at chatgpt.com/backend-api/f/conversation. Most events in that stream are model-output. Some are ad units. They look like this:
event: delta
data: {
  "type": "single_advertiser_ad_unit",
  "ads_request_id": "069e89b3-c038-7764-8000-6e5a193e5f69",
  "ads_spam_integrity_payload": "gAAAAABp6Js_<…redacted…>",
  "preamble": "",
  "advertiser_brand": {
    "name": "Grubhub",
    "url": "www.grubhub.com",
    "favicon_url": "https://bzrcdn.openai.com/cabfae7ead26b03d.png",
    "id": "adacct_6984ed0ba55481a29894bb192f7773b4"
  },
  "carousel_cards": [{
    "title": "Get Chinese Food Delivered",
    "body": "Satisfy Your Cravings with Grubhub Delivery.",
    "image_url": "https://bzrcdn.openai.com/cabfae7ead26b03d.png",
    "target": {
      "type": "url",
      "value": "https://www.grubhub.com/?utm_source=chatgptpilot&utm_medium=paid&utm_campaign=diner_gh_search_chatgpt_kw_traffic_nb_x_nat_x&utm_content=nbchinese&oppref=gAAAA<…>&olref=gAAAA<…>",
      "open_externally": false
    },
    "ad_data_token": "eyJwYXlsb2<…>"
  }]
}
Notes:
single_advertiser_ad_unit is a typed schema. The naming implies siblings (multi-advertiser, etc.).
advertiser_brand.id is adacct_<32-hex> - a stable per-merchant account identifier.
Brand favicon and ad image both load from bzrcdn.openai.com. OpenAI hosts the advertiser’s creative, not the merchant.
target.open_externally: false opens the link in ChatGPT’s in-app webview, so OpenAI observes the post-click navigation on top of any pixel signal.
Four Fernet tokens per ad: ads_spam_integrity_payload, oppref, olref, and a base64-wrapped ad_data_token. Each is AES-128-CBC under a server-only key with HMAC-SHA256 integrity.
How ads get selected
A single account in the panel received six different ads across six conversations on six different topics. The targeting is contextual to the chat.
Same account, different topic, different brand. I didn’t find evidence one way or the other on whether targeting also incorporates prior conversation history.
The four-token attribution chain
Every ad ships with four distinct Fernet-encrypted blobs. Their roles, based on where they appear:
ads_spam_integrity_payload - sent inside the SSE data, never on the click URL. Server-side integrity check against forged ad clicks.
oppref - present on the click URL and copied verbatim by the OAIQ pixel into the cookie __oppref (TTL 720 hours / 30 days). The forward attribution token; travels with every subsequent merchant pixel event.
olref - paired with oppref on the click URL but not stored by the SDK we observed. Likely impression-side / outbound-link-reference logging on OpenAI’s servers.
ad_data_token - base64-wrapped JSON containing yet another Fernet token. Carried in the SSE payload, presumably reconciled server-side at click time.
Fernet’s first nine bytes are public: version byte 0x80 plus an 8-byte big-endian Unix timestamp. So the mint time of any of these tokens is recoverable without OpenAI’s key:
import base64, datetime, struct

b = base64.urlsafe_b64decode("gAAAAABp7fdA")  # first 12 base64 chars → 9 bytes
ts = struct.unpack(">Q", b[1:9])[0]           # bytes 1-8: big-endian timestamp
print(datetime.datetime.fromtimestamp(ts, datetime.timezone.utc))
# → 2026-04-26 11:30:08+00:00
The Home Depot click URL I captured was minted at 11:30:08; the browser fetched the merchant page at 11:31:43. Click latency: 95 seconds.
How the loop closes on the merchant side
User taps the card. Browser opens:
https://www.grubhub.com/?utm_source=chatgptpilot&…
&oppref=gAAAA<…>
&olref=gAAAA<…>
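Extracting the two attribution tokens from a captured click URL is a one-liner with the standard library. The URL below is abbreviated, with placeholder token values:

```python
from urllib.parse import urlparse, parse_qs

click_url = ("https://www.grubhub.com/?utm_source=chatgptpilot"
             "&oppref=gAAAA_oppref_placeholder&olref=gAAAA_olref_placeholder")

params = parse_qs(urlparse(click_url).query)
oppref = params["oppref"][0]  # what the OAIQ SDK copies into __oppref
olref = params["olref"][0]    # not stored client-side in observed traffic

print(oppref)  # → gAAAA_oppref_placeholder
```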
The merchant page loads the OAIQ SDK:
<script src="https://bzrcdn.openai.com/sdk/oaiq.min.js"></script>
<script>
  oaiq('init', { pid: '<merchant pixel ID>' });
  oaiq('measure', 'contents_viewed', { … });
</script>
oaiq.min.js is at version 0.1.3. On init it reads ?oppref= from window.location, writes it into the first-party cookie __oppref with a 720-hour TTL, and sets a probe cookie __oaiq_domain_probe. Every subsequent measure call POSTs JSON to:
POST https://bzr.openai.com/v1/sdk/events?pid=<merchant>&st=oaiq-web&sv=0.1.3
Two domains to add to your filter list if you want to block ChatGPT ad events: bzrcdn.openai.com, bzr.openai.com. Two cookie names to inspect after any ChatGPT-recommended click: __oppref, __oaiq_domain_probe.
Today we are announcing a fundamental change in how we build Warp: the Warp client is now open-source, and the community can participate in building it using an agent-first workflow managed by Oz, our cloud agent orchestration platform. OpenAI is the founding sponsor of the new, open‑source Warp repository, and the new agentic management workflows are powered by GPT models.
Open-sourcing with an agent-powered repo is our vision of how software will be built in the future. Humans managing agents at scale to build production-grade software is the model, and implementing this model in the open will allow software to improve most quickly.
Put simply: we believe that a diverse collection of contributors with unique ideas + Oz agents with structured processes + a rich corpus of context and self-improvement loops will yield a magic product, beyond what we might build internally.
Why now
The primary reason is that we think we can ship a better Warp, more quickly, if we open source and work with our community to help supervise a fleet of agents. The biggest bottleneck to development is no longer writing code - it’s all the human-in-the-loop activity around the code: speccing the product and verifying behavior. Frankly, we are limited in what our internal team can do and the pace at which we want to move.
We’ve found that agents can handle the implementation heavy lifting really well. That frees contributors to focus on the higher-leverage work: shaping what gets built and making sure it’s right.
Inviting our community into the process to help manage agents will be a big unlock. We now have a lot of confidence in code that is generated by Oz with our rules, context and verification, so anyone contributing should have a high chance of success coding a feature correctly. Moreover, leaning on agents creates pressure for us to nail orchestration, memory, handoff, and all of the other parts of agentic engineering that are core to our business. There’s a virtuous loop here.
The second reason is about giving developers a chance to shape the future. There isn’t a full-featured open agentic development environment on the market and we want to offer the community an alternative to closed-source options provided by more established companies. No one knows exactly what the future of agentic development will look like and we think the community ought to be able to participate in shaping it.
Warp is multi-model and multi-harness and we want to double down on that openness. Opening will allow us to be more responsive to users, working with them on the long tail of our backlog to make Warp the best ADE on the market.
In this spirit, we are coupling a few major product improvements with this launch to make Warp more open and customizable.
First we are launching support for a much wider range of open source models in Warp today, including the latest models Kimi, MiniMax, and Qwen, along with a new “auto (open)” model-routed version that picks the best open model for a task.
Second, we’ve made it much easier to customize your Warp experience however you’d like — from just a terminal, to having some minimal features for improving agentic development like a diff view and file tree, to a full fledged ADE with built-in agents.
Finally, we are shipping a (long-overdue) settings file so that users and agents get programmatic control over settings and easy portability between devices.
How it works
Warp’s source code is now available at github.com/warpdotdev/warp with an AGPL license.
You can learn more about the contribution process in CONTRIBUTING.md, but the tl;dr is we want agents doing the heavy lifting (coding, planning, testing, etc.) and community members helping with ideas, direction and verification. The Warp team will help guide what gets built, when and how, but we see Warp fundamentally becoming a collaborative effort with our community. This is a cool opportunity for folks to contribute to a fast growing app that’s used by nearly a million active developers, working closely alongside the Warp team.
Warp’s new open-source agent workflows are powered by OpenAI models, with OpenAI supporting the next generation of collaborative software development. You’re free to use other coding agents as well to contribute, but our preference is using Oz since it has the correct skills and verification loops built-in.
“Open source has long been central to how developers learn, build, and push the field forward. We’re excited to support experiments that explore how AI can help maintainers and contributors collaborate more effectively at scale.”
— Thibault Sottiaux, Engineering Lead, OpenAI
Finally, as part of making this change, we are moving from a closed product development process to an open one. That means public GitHub issues will become the source of truth for tracking features. We will publicize our roadmap for the ADE, and have technical and product discussions for ADE issues in the open.
Closing thoughts
Open-sourcing is fundamentally coming from our desire to build a successful business. We are competing with other highly funded, closed-source competitors, and we think opening and providing the resources for the community to improve Warp is a smart way for us to accelerate product development. Yes, we are a VC funded startup, but we do not have the resources to compete on price or massively subsidize usage — we need to build our business by offering the best possible product to the most excited community.
Business goals aside, we also think there’s value in the community itself shaping the future. Warp is a unique product with five years of engineering behind it, and we hope it provides the community a compelling starting point for building the future of agentic development, beyond any other open alternative.
We could just keep going with our current model, privately guessing at the roadmap and scaling more and more agents to build internally, but that feels like a missed opportunity. It would be missing a chance for the community to shape the direction, to hack on the product, and to actually work with agents to build the best possible product.
When we first launched Warp as a terminal 5 years ago, the plan was always to open source the client - we said as much in the original Show HN post for Warp.
We’ve debated whether to open source every year, and for the first time, this year, it became clear that the balance had shifted because of the rise of agents. I’ve been a developer for a long time and I’ve never seen a change like what’s happening now. Warp’s mission has always been to empower developers to ship better software more quickly, and I hope that by changing the way we are building it, we get a little closer to achieving that mission.
Jess Weatherbed
is a news writer focused on creative industries, computing, and internet culture. Jess started her career at TechRadar, covering news and hardware reviews.
Google has signed a classified deal that allows the US Department of Defense to use its AI models for “any lawful government purpose,” The Information reports. The agreement was reported less than a day after Google employees demanded CEO Sundar Pichai block the Pentagon from using its AI amid concerns that it would be used in “inhumane or extremely harmful ways.”
If the agreement is confirmed, it would place Google alongside OpenAI and xAI, which have also made classified AI deals with the US government. Anthropic was also among that list until it was blacklisted by the Pentagon for refusing the Department of Defense’s demands to remove weapon and surveillance-related guardrails from its AI models.
Citing a single anonymous source “with knowledge of the situation,” The Information reports that the deal states that both parties have agreed that the search giant’s AI systems shouldn’t be used for domestic mass surveillance or autonomous weapons “without appropriate human oversight and control.” But the contract also says it doesn’t give Google “any right to control or veto lawful government operational decision-making,” which would suggest the agreed restrictions are more of a pinky promise than legally binding obligations. The deal also requires Google to assist with making adjustments to its AI safety settings and filters at the government’s request.
“We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security,” a Google spokesperson said in a statement to The Information, adding that the new agreement is an amendment to its existing government deal. “We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.”
Update, April 28th: Replaced statement provided to Reuters with a statement that Google provided to The Information.