10 interesting stories served every morning and every evening.

Ghostty Is Leaving GitHub

mitchellh.com

Writing this makes me irrationally sad, but Ghostty will be leaving GitHub¹.

I’m GitHub user 1299, joined Feb 2008.

Since then, I’ve opened GitHub every single day. Every day, multiple times per day, for over 18 years. Over half my life. A handful of exceptions in there (I’d love to see the data), but I can’t imagine more than a week per year.

GitHub is the place that has made me the most happy. I always made time for it. When I went through tough breakups? I lost myself in open source… on GitHub. During college at 4 AM when everyone is passed out? Let me get one commit in. During my honeymoon while my wife is still asleep? Yeah, GitHub. It’s where I’ve historically been happiest and wanted to be.

Even the annoying stuff! Some people doom scroll social media. I’ve been doom scrolling GitHub issues since before that was a word. On vacations I’d have bookmarks of different projects on GitHub I wanted to study. Not just source code, but OSS processes, how other maintainers react to difficult situations. Etc. Believe it or not, I like this.

Some might call this sick, but my hobby and work and passion all align, and for most of my life they got to also live in one place on the internet: GitHub.

Did you know I started Vagrant (my first successful open source project) in large part because I hoped it would get me a job at GitHub? It’s no secret, I’ve said this repeatedly, and in my first public talk about Vagrant, when I was a mere 20 years old, I joked “maybe GitHub will hire me if it’s good!”

GitHub was my dream job. I didn’t ever get to work there (not their fault). But it was the perfect place I wanted to be. The engineers were incredible, the product was incredible, and it was something I lived and breathed every day. I still do and consistently have… for these 18 years. Enough time for an entire human to become an adult, all on GitHub.

Lately, I’ve been very publicly critical of GitHub. I’ve been mean about it. I’ve been angry about it. I’ve hurt people’s feelings. I’ve been lashing out. Because GitHub is failing me, every single day, and it is personal. It is irrationally personal. I love GitHub more than a person should love a thing, and I’m mad at it. To the people working on it: I’m sorry for the hurt feelings.

I’ve felt this way for a long time, but for the past month I’ve kept a journal where I put an “X” next to every date where a GitHub outage has negatively impacted my ability to work². Almost every day has an X. On the day I am writing this post, I’ve been unable to do any PR review for ~2 hours because there is a GitHub Actions outage³. This is no longer a place for serious work if it just blocks you out for hours per day, every day.

It’s not a fun place for me to be anymore. I want to be there but it doesn’t want me to be there. I want to get work done and it doesn’t want me to get work done. I want to ship software and it doesn’t want me to ship software. I want it to be better, but I also want to code. And I can’t code with GitHub anymore. I’m sorry. After 18 years, I’ve got to go. I’d love to come back one day, but this will have to be predicated on real results and improvements, not words and promises.

I’ll share more details about where the Ghostty project will be moving to in the coming months. We have a plan, but I’m also very much still in discussions with multiple providers (both commercial and FOSS).

It’ll take us time to remove all of our dependencies on GitHub, and we have a plan in place to do it as incrementally as possible. We plan on keeping a read-only mirror available on GitHub at the current URL.

My per­sonal pro­jects and other work will re­main on GitHub for now.

Ghostty is where I, our maintainers, and our open source community are most impacted, so that is the focus of this change. We’ll see where it goes after that.

Footnotes

1. The timing of this is coincidental with the large outage on April 27, 2026. We’ve been discussing and putting together a plan to leave GitHub for months, and this blog post was written over a week ago. We only made the final decision this week. ↩

2. To the “Git is distributed!” crowd: the issue isn’t Git, it’s the infrastructure we rely on around it: issues, PRs, Actions, etc. ↩

3. This is not the large Elasticsearch outage they had on April 27, 2026. This blog post was written a week before that, so this was a different outage. ↩

Keep Android Open

keepandroidopen.org

Your phone is about to stop be­ing yours.

125 days un­til lock­down

Starting September 2026, a silent update, nonconsensually pushed by Google, will block every Android app whose developer hasn’t registered with Google, signed their contract, paid up, and handed over government ID.

Every app and every de­vice, world­wide, with no opt-out.


What Google is do­ing

In August 2025, Google announced a new requirement: starting September 2026, every Android app developer must register centrally with Google before their software can be installed on any device. Not just Play Store apps: all apps. This includes apps shared between friends, distributed through F-Droid, built by hobbyists for personal use. Independent developers, church and community groups, and hobbyists alike will all be frozen out of being able to develop and distribute their software.

Registration requires:

Paying a fee to Google

Agreeing to Google’s Terms and Conditions

Surrendering your government-issued identification

Providing evidence of your private signing key

Listing all current and all future application identifiers

If a developer does not comply, their apps get silently blocked on every Android device worldwide.

Who this hurts

You

You bought an Android phone because Google told you it was open. You could install what you wanted, and that was the deal.

Google is now rewriting that deal, retroactively, on hardware you already own. After the update lands, you can only run software that Google has pre-approved. On your phone: your property, that you paid for.

Independent de­vel­op­ers

A teenager’s first app, a volunteer’s privacy tool, or a company’s confidential internal beta. It doesn’t matter. After September 2026, none of these can be installed without Google’s blessing.

F-Droid, home to thousands of free and open-source Android apps, has called this an “existential” threat. Cory Doctorow calls it “Darth Android”.

Governments & civil so­ci­ety

Google has a documented track record of complying when authoritarian regimes demand app removals. With this program, the software that runs your country’s institutions will exist at the pleasure of a single unaccountable foreign corporation.

The EFF calls app gatekeeping “an ever-expanding pathway to internet censorship.”

Google’s escape hatch” is a trap door

Google says “power users” can still “install” unverified apps. Here’s what that actually looks like:

Delve into System Settings, find Developer Options

Tap the build number seven times to enable Developer Mode

Dismiss scare screens about coercion

Enter your PIN

Restart the device

Wait 24 hours

Come back, dismiss more scare screens

Pick “allow temporarily” (7 days) or “allow indefinitely”

Confirm, again, that you understand “the risks”

Nine steps. A mandatory 24-hour cooling-off period. For installing software on a device you own.

Worse: this flow runs entirely through Google Play Services, not the Android OS. Google can change it, tighten it, or kill it at any time, with no OS update required and no consent needed. And as of today, it hasn’t shipped in any beta, preview, or canary build. It exists only as a blog post and some mockups.

This is big­ger than Android

If Google can retroactively lock down billions of devices that were sold as open platforms, every hardware manufacturer on the planet is watching.

The principle being established: the company that made your device gets to decide, after you’ve bought it, what software you’re allowed to run. In software, this is called a “rug pull”; but at least you could always install competing software. In hardware, it is a fait accompli that strips you of your agency and renders you powerless to the whims of a single unaccountable gatekeeper and convicted monopolist.

Android’s openness was never just a feature. It was the promise that distinguished it from iPhone. Millions chose Android for exactly that reason. Google is now revoking that promise unilaterally, on devices already in people’s pockets, because they’ve decided they have enough market dominance and regulatory capture to get away with it.

Ars Technica: “Google’s Apple envy threatens to dismantle Android’s open legacy.”

But wait, is­n’t this…

“…just about security?”

The security rationale is a smokescreen. Google Play Protect already scans for malware independent of developer identity. Requiring a government ID doesn’t make code safer. It makes developers identifiable and controllable. Malware authors can register. Indie developers and dissidents often can’t. The EFF is blunt: identity-based gatekeeping is a censorship tool, not a security one.

“…still sideloading if you use the advanced flow?”

Nine steps, a 24-hour wait, buried in Developer Options, delivered through a proprietary service that Google can revoke whenever they want. That’s not sideloading. That’s a deterrence mechanism built to ensure almost nobody completes it. And since it runs through Play Services rather than the OS, Google can tighten or kill it silently.

“…only a problem if you have something to hide?”

Whistleblowers, journalists, and activists under authoritarian governments will be the first victims. People in domestic abuse situations are next. All these groups have legitimate reasons to distribute or use software without putting their legal identity in a Google database. Anonymous open-source contribution is a tradition older than Google itself. This policy ends it on Android.

“…the same thing Apple does?”

Apple has been a walled garden from day one. People chose Android because it was different. “Apple does it too” is a race to the bottom and a weak tu quoque argument. And under regulatory pressure (the EU’s Digital Markets Act), even Apple is being forced to open up. Google is moving in the opposite direction: attempting to further entrench its gatekeeping status.

“…just $25 and some paperwork?”

Maybe, if you’re a developer in the US with a credit card and a driver’s license. Try being a student in sub-Saharan Africa, or a dissident in Myanmar, or a volunteer maintaining a community health app. The cost isn’t only financial: you’re surrendering government ID and evidence of your signing keys to a company that routinely complies with government demands to remove apps and expose developers.

Fight back

Everyone

Install F-Droid on every Android device you own. Alternative stores only survive if people actually use them.

Contact your regulators. Regulators worldwide are genuinely concerned about monopolies and the centralization of power in the tech sector, and want to hear directly from individuals who are affected and concerned.

Share this page. Link to keepandroidopen.org everywhere.

Push back on astroturfers. The “well, actually…” crowd is out in force. Don’t let them set the narrative.

Sign the change.org petition and join the over 100,000 signatories who have made their voices heard.

Read and share our open letter.

Tell Google what you think of this through their own developer verification survey (for all the good that will do).

Developers

Do not sign up. Don’t join the program by signing up for the Android Developer Console and agreeing to their irrevocable Terms and Conditions. Don’t verify your identity. Don’t play ball.

Google’s plan only works if developers comply. Don’t.

Talk other developers and organizations out of signing up.

Add the FreeDroidWarn library to your apps to warn users.

Run a website? Add the countdown banner.

Google employees

If you know something about the program’s technical implementation or internal rationale, contact tips@keepandroidopen.org from a non-work machine and a non-Gmail account. Strict confidence guaranteed.

All those op­posed…

69 organizations from 21 countries have signed the open letter.

Read the full open letter and thank the signatories →

What they’re say­ing

Tech press

“Google says it’s making Android sideloading ‘high-friction’ to better warn users about potential risks” XDA Developers

“Google’s dev registration plan ‘will end the F-Droid project’” The Register

“Over 67 groups urge the company to drop ID checks for apps distributed outside Play” The Register

“Google’s Attack on Sideloading Will Rob Android of One of Its Best Features” How-To Geek

“Google kneecaps indie Android devs, forces them to register” The Register

“Android Security or Vendor Lock-In? Google’s New Sideloading Rules Smell Fishy” It’s FOSS News

“F-Droid Says Google Is Lying About the Future of Sideloading on Android” How-To Geek

“Google’s Apple envy threatens to dismantle Android’s open legacy” Ars Technica

“F-Droid project threatened by Google’s new dev registration rules” Bleeping Computer

“Keep Android Open” Linux Magazine

“Google’s Android developer verification program draws pushback” InfoWorld

“We all know that’s a load of bullshit. Adding a goddamn 24-hour waiting period is batshit insanity.” Thom Holwerda, OSnews

“Google will verify Android developers distributing apps outside the Play store” The Verge

Before GitHub

lucumr.pocoo.org

writ­ten on April 28, 2026

GitHub was not the first home of my Open Source software. SourceForge was.

Before GitHub, I had my own Trac installation. I had Subversion repositories, tickets, tarballs, and documentation on infrastructure I controlled. Later I moved projects to Bitbucket, back when Bitbucket still felt like a serious alternative place for Open Source projects, especially for people who were not all-in on Git yet.

And then, eventually, GitHub became the place, and I moved all of it there.

It is hard for me to overstate how important GitHub became in my life. A large part of my Open Source identity formed there. Projects I worked on found users there. People found me there, and I found other people there. Many professional relationships and many friendships started because some repository, issue, pull request, or comment thread made two people aware of each other.

That is why I find what is happening to GitHub today so sad and so disappointing. I do not look at it as just the folks at Microsoft making product decisions I dislike. GitHub was part of the social infrastructure of Open Source for a very long time. For many of us, it was not merely where the code lived; it was where a large part of the community lived.

So when I think about GitHub’s decline, I also think about what came before it, and what might come after it. I have written a few times over the years about dependencies, and in particular about the problem of micro dependencies. In my mind, GitHub gave life to that phenomenon. It was something I definitely did not completely support, but it also made Open Source more inclusive. GitHub changed how Open Source feels, and later npm and other systems changed how dependencies feel. Put them together and you get a world in which publishing code is almost frictionless, consuming code is almost frictionless, and the number of projects in the world explodes.

That has many upsides. But it is worth remembering that Open Source did not always work this way.

A Smaller World

Before GitHub, Open Source was a much smaller world. Not necessarily in the number of people who cared about it, but in the number of projects most of us could realistically depend on.

There were well-known projects, maintained over long periods of time by a comparatively small number of people. You knew the names. You knew the mailing lists. You knew who had been around for years and who had earned trust. That trust was not perfect, and the old world had plenty of gatekeeping, but reputation mattered in a very direct way. We took pride (and got frustrated) when the Debian folks came and told us our licensing stuff was murky or the copyright headers were not up to snuff, because they packaged things up.

A dependency was not just a package name. It was a project with a history, a website, a maintainer, a release process, a lot of friction, and often a place in a larger community. You did not add dependencies casually, because the act of depending on something usually meant you had to understand where it came from.

Not all of this was necessarily intentional, but because these projects were comparatively large, they also needed to bring their own infrastructure. Small projects might run on a university server, and many of them were on SourceForge, but the larger ones ran their own show. They grouped together into larger collectives to make it work.

We Ran Our Own Infrastructure

My first Open Source projects lived on infrastructure I ran myself. There was a Trac installation, Subversion repositories, tarballs, documentation, and release files served from my own machines or from servers under my control. That was normal. If you wanted to publish software, you often also became a small-time system administrator. Georg and I ran our own collective for our Open Source projects: Pocoo. We shared server costs and the burden of maintaining Subversion and Trac, mailing lists and more.

Subversion in particular made this “running your own forge” natural. It was centralized: you needed a server, and somebody had to operate it. The project had a home, and that home was usually quite literal: a hostname, a directory, a Trac instance, a mailing list archive.

When Mercurial and Git arrived, they were philosophically the opposite. Both were distributed. Everybody could have the full repository. Everybody could have their own copy, their own branches, their own history. In principle, those distributed version control systems should have reduced the need for a single center. But despite all of this, GitHub became the center.

That is one of the great ironies of modern Open Source. The distributed version control system won, and then the world standardized on one enormous centralized service for hosting it.

What GitHub Gave Us

It is easy now to talk only about GitHub’s failures, of which there are currently many, but that would be unfair: GitHub was, and continues to be, a tremendous gift to Open Source.

It made creating a project easy and it made discovering projects easy. It made contributing understandable to people who had never subscribed to a development mailing list in their life. It gave projects issue trackers, pull requests, release pages, wikis, organization pages, API access, webhooks, and later CI. It normalized the idea that Open Source happens in the open, with visible history and visible collaboration. And it was an excellent and reasonable default choice for a decade.

But maybe the most underappreciated thing GitHub did was archival work: GitHub became a library. It became an index of a huge part of the software commons because even abandoned projects remained findable. You could find forks, and old issues and discussions all stayed online. For all the complaints one can make about centralization, that centralization also created discoverable memory. The leaders there once cared a lot about keeping GitHub available even in countries that were sanctioned by the US.

I know what the alternative looks like, because I was living it. Some of my earliest Open Source projects are technically still on PyPI, but the actual packages are gone. The metadata points to my old server, and that server has long stopped serving those files.

That was normal before the large platforms. A personal domain expired, a VPS


Bugs Rust Won't Catch

corrode.dev

In April 2026, Canonical disclosed 44 CVEs in uutils, the Rust reimplementation of GNU coreutils that ships by default since Ubuntu 25.10. Most of them came out of an external audit commissioned ahead of the 26.04 LTS.

I read through the list and thought there’s a lot to learn from it.

What’s notable is that all of these bugs landed in a production Rust codebase, written by people who knew what they were doing, and none of them were caught by the borrow checker, clippy lints, or cargo audit.

I’m not writing this to criticize the uutils team. Quite the contrary; I actually want to thank them for sharing the audit results in such detail so that we can all learn from them.

We also had Jon Seager, VP Engineering for Ubuntu, on our ‘Rust in Production’ podcast recently and a lot of listeners appreciated his honesty about the state of Rust at Canonical.

If you write systems code in Rust, this is the most concentrated look at where Rust’s safety ends that you’ll likely find anywhere right now.

Don’t Trust a Path Across Two Syscalls

This is the largest cluster of bugs in the audit. It’s also the reason cp, mv, and rm are still GNU in Ubuntu 26.04 LTS. :(

The pattern is always the same. You do one syscall to check something about a path, then another syscall to act on the same path. Between those two calls, an attacker with write access to a parent directory can swap the path component for a symbolic link. The kernel re-resolves the path from scratch on the second call, and the privileged action lands on the attacker’s chosen target.

Rust’s standard library makes this easy to get wrong. The ergonomic APIs you reach for first (fs::metadata, File::create, fs::remove_file, fs::set_permissions) all take a path and re-resolve it every time, rather than taking a file descriptor and operating relative to that.

That’s fine for a normal program, but if you’re writing a privileged tool that needs to be secure against local attackers, you have to be careful.

Case Study: CVE-2026-35355

Here’s the bug, simplified from src/uu/install/src/install.rs.

// 1. Clear the destination
fs::remove_file(to)?;
// …
// 2. Create the destination. The path is re-resolved here!
let mut dest = File::create(to)?; // follows symlinks, truncates
copy(from, &mut dest)?;

Between step 1 and step 2, anyone with write access to the parent directory can plant to as a symlink to, say, /etc/shadow. Then File::create follows the symlink and the privileged process happily overwrites /etc/shadow with whatever from happened to contain.

The fix uses OpenOptions::create_new(true):

fs::remove_file(to)?;
let mut dest = OpenOptions::new()
    .write(true)
    .create_new(true)
    .open(to)?;
copy(from, &mut dest)?;

The docs for create_new say (emphasis mine):

No file is allowed to exist at the target location, also no (dangling) symlink. In this way, if the call succeeds, the file returned is guaranteed to be new.

Rule: Anchor on a File Descriptor Instead

A &Path in Rust looks like a value, but remember that to the kernel it’s just a name. That name can point to different things from one syscall to the next. Anchor your operations on a file descriptor instead.

create_new() only helps with that when you’re creating a new file. For everything else, open the parent directory once and work relative to that handle.

If you act on the same path twice, assume it’s a TOCTOU (Time Of Check To Time Of Use) bug until you’ve proven otherwise.

Set Permissions at Creation Time, Not After

This is a close relative of TOCTOU. You want a directory with restrictive permissions, so you write something like this.

// Create with default permissions
fs::create_dir(&path)?;
// Fix up permissions
fs::set_permissions(&path, Permissions::from_mode(0o700))?;

For a brief moment, path exists with the default permissions. Any other user on the system can open() it during that window. Once they have a file descriptor, the later chmod doesn’t take it away from them.

Rule: Set Permissions at Creation, Never After

Reach for OpenOptions::mode() and DirBuilderExt::mode() so the file or directory is born with the permissions you want. The kernel will apply your umask on top, so set that explicitly too if you really care.

String Equality on Paths Is Not the Same as Filesystem Identity

The original --preserve-root check in chmod was literally this:

if recursive && preserve_root && file == Path::new("/") {
    return Err(PreserveRoot);
}

That comparison is bypassed by anything that resolves to / but isn’t spelled /. So /../, /./, /usr/.., or a symlink that points to /. Run chmod -R 000 /../ and see it rip right past your check and lock down the whole system.

Here’s the fix:

fn is_root(file: &Path) -> bool {
    matches!(fs::canonicalize(file), Ok(p) if p == Path::new("/"))
}

if recursive && preserve_root && is_root(file) {
    return Err(PreserveRoot);
}

Rule: Resolve Paths Before Comparing Them

canonicalize resolves .., ., and symlinks into a real absolute path. That’s a lot better than string comparison.

Oh and if you were wondering about this line:

matches!(fs::canonicalize(file), Ok(p) if p == Path::new("/"))

I think that’s just a fancy way of saying

// First, resolve the path to its canonical form
if let Ok(p) = fs::canonicalize(file) {
    // If that succeeded, check if the canonical path is "/"
    p == Path::new("/")
} else {
    false
}

In the specific case of --preserve-root, this works because / has no parent directory, so there’s nothing for an attacker to swap from underneath you. In the more general case of comparing two arbitrary paths for filesystem identity, however, you’d want to open both and compare their (dev, inode) pairs, the way GNU coreutils does. (Think identity, not string equality.)
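To make that last point concrete, here is a hypothetical same_file helper (my sketch, not the uutils or GNU implementation) that opens both paths and compares their (dev, inode) pairs via std’s MetadataExt:

```rust
use std::fs;
use std::os::unix::fs::MetadataExt;
use std::path::Path;

// Hypothetical helper: two paths name the same filesystem object
// exactly when their (device, inode) pairs match, however the paths
// happen to be spelled.
fn same_file(a: &Path, b: &Path) -> std::io::Result<bool> {
    let (fa, fb) = (fs::File::open(a)?, fs::File::open(b)?);
    let (ma, mb) = (fa.metadata()?, fb.metadata()?);
    Ok(ma.dev() == mb.dev() && ma.ino() == mb.ino())
}

fn main() -> std::io::Result<()> {
    // Both spellings resolve to the root inode...
    assert!(same_file(Path::new("/"), Path::new("/../"))?);
    // ...while component-wise path comparison treats them as different.
    assert_ne!(Path::new("/"), Path::new("/../"));
    Ok(())
}
```

Because the comparison goes through opened handles, it answers “are these the same object?” rather than “are these the same string?”.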

By the way, my favorite bug in this group is CVE-2026-35363:

rm .    # ❌
rm ..   # ❌
rm ./   # ✅
rm ./// # ✅

It refused . and .. but happily accepted ./ and .///, then deleted the current directory while printing “Invalid input”. 😅

Stay in Bytes at Unix Boundaries

Rust’s String and &str are always UTF-8.

That’s a great choice in 99% of all cases, but Unix paths, environment variables, arguments, and the inputs flowing through tools like cut, comm, and tr live in the messy world of bytes.

Every time a Rust program bridges that gap, it has three options.

🫩 Lossy conversion with from_utf8_lossy silently rewrites invalid bytes to U+FFFD. That’s just fancy data corruption.

🫤 Strict conversion with unwrap or ? crashes or refuses to operate.

😚 Staying in bytes with OsStr or &[u8] is what you should usually do.

The audit found bugs in both of the first two categories. Here’s an example.

Case Study: comm (CVE-2026-35346)

This is the original code, from src/uu/comm/src/comm.rs.

// ra, rb are &[u8], raw bytes from the input files.
print!("{}", String::from_utf8_lossy(ra));
print!("{delim}{}", String::from_utf8_lossy(rb));

GNU comm works on binary files because it just shuffles bytes around. The uutils version replaced anything that wasn’t valid UTF-8 with U+FFFD, which silently corrupted the output.

Here’s the fix: stay in bytes.

let mut out = BufWriter::new(io::stdout().lock());
out.write_all(ra)?;
out.write_all(delim)?;
out.write_all(rb)?;

print! forces a UTF-8 round-trip through Display. Write::write_all does not. It writes the raw bytes directly to stdout.

Rule: Pick the Right Type for the Situation

For Unix-flavored systems code, use Path and PathBuf for filesystem paths, OsString for environment variables, and Vec<u8> or &[u8] for stream contents. It’s tempting to round-trip them through String for easier formatting, but that’s where the corruption creeps in.

UTF-8 is a great default for application strings, but it’s absolutely, positively the wrong default for the raw byte stuff Unix tools work with.
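To see all three options side by side, here is a small sketch of my own (the filename is invented for the demo) showing strict conversion refusing, lossy conversion corrupting, and the byte-level route preserving the data exactly:

```rust
use std::ffi::OsStr;
use std::io::Write;
use std::os::unix::ffi::OsStrExt;

fn main() -> std::io::Result<()> {
    // A byte string that is a legal Unix filename but not valid UTF-8
    // (0xE9 is 'é' in Latin-1, not a valid UTF-8 sequence here).
    let raw: &[u8] = b"caf\xe9.txt";
    let name = OsStr::from_bytes(raw);

    // Strict conversion refuses to produce a &str...
    assert!(name.to_str().is_none());
    // ...lossy conversion corrupts (0xE9 becomes U+FFFD)...
    assert_eq!(name.to_string_lossy(), "caf\u{fffd}.txt");
    // ...staying in bytes round-trips perfectly.
    let mut out: Vec<u8> = Vec::new();
    out.write_all(name.as_bytes())?;
    assert_eq!(out, raw);
    Ok(())
}
```

OsStrExt::from_bytes/as_bytes are the Unix-only bridge between OsStr and raw bytes; no allocation and no validation happen, which is exactly what you want at a Unix boundary.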

Securing GitHub: Wiz Research uncovers Remote Code Execution in GitHub.com and GitHub Enterprise Server (CVE-2026-3854)

www.wiz.io

Wiz Research uncovered a critical vulnerability (CVE-2026-3854) in GitHub’s internal git infrastructure that could have affected both GitHub.com and GitHub Enterprise Server. By exploiting an injection flaw in GitHub’s internal protocol, any authenticated user could execute arbitrary commands on GitHub’s backend servers with a single git push command, using nothing but a standard git client.

Notably, this is one of the first critical vulnerabilities discovered in closed-source binaries using AI, highlighting a shift in how these flaws are identified. Despite the complexity of the underlying system, the vulnerability is remarkably easy to exploit. On GitHub.com, this vulnerability allowed remote code execution on shared storage nodes. We confirmed that millions of public and private repositories belonging to other users and organizations were accessible on the affected nodes. On GitHub Enterprise Server, the same vulnerability grants full server compromise, including access to all hosted repositories and internal secrets.

GitHub mit­i­gated this is­sue on GitHub.com within 6 hours of our re­port, re­leased patches for all sup­ported ver­sions of GitHub Enterprise Server, and pub­lished the CVE at the time of re­lease. GitHub Enterprise Server cus­tomers should up­grade im­me­di­ately - at the time of this writ­ing, our data in­di­cates that 88% of in­stances are still vul­ner­a­ble. Detailed re­me­di­a­tion steps and fur­ther tech­ni­cal de­tails are avail­able in GitHub’s se­cu­rity blog post.

"GitHub greatly appreciates the collaboration, professionalism, and partnership that Wiz has shown throughout this process. A finding of this caliber and severity is rare, earning one of the highest rewards available in our Bug Bounty program, and serves as a reminder that the most impactful security research comes from skilled researchers who know how to ask the right questions. As the landscape evolves, these close partnerships with talented hunters and researchers are more important than ever."

Alexis Wales, GitHub CISO

This post breaks down the vul­ner­a­bil­ity, walks through the ex­ploita­tion chain, and pro­vides rec­om­men­da­tions for GHES ad­min­is­tra­tors to pro­tect their en­vi­ron­ments.

Required Actions and Mitigations

GitHub.com: GitHub has mit­i­gated this is­sue. No ac­tion is re­quired for GitHub.com users.

GitHub Enterprise Server: Immediate ac­tion re­quired.

Upgrade to GHES version 3.19.3 or later - this release patches CVE-2026-3854

Affected Versions

Find Vulnerable GHES Instances with Wiz

Wiz cus­tomers can iden­tify vul­ner­a­ble GitHub Enterprise Server in­stances in their en­vi­ron­ments us­ing this pre-built query in the Wiz Threat Center. The query iden­ti­fies all GHES in­stances run­ning a ver­sion vul­ner­a­ble to this is­sue.

Why We Researched GitHub’s Git Infrastructure

GitHub is the world’s largest code host­ing plat­form, home to hun­dreds of mil­lions of repos­i­to­ries span­ning open source pro­jects, en­ter­prise code­bases, and crit­i­cal in­fra­struc­ture. Its in­ter­nal git in­fra­struc­ture-the pipeline that processes every git push-is one of the most se­cu­rity-sen­si­tive sys­tems on the in­ter­net. When a user pushes code, it passes through mul­ti­ple in­ter­nal ser­vices, each writ­ten in a dif­fer­ent pro­gram­ming lan­guage. This multi-ser­vice ar­chi­tec­ture cre­ates op­por­tu­ni­ties for in­con­sis­ten­cies in how each com­po­nent parses and trusts shared data.

We’ve looked into GitHub Enterprise Server (GHES) in the past to hunt for these ex­act types of vul­ner­a­bil­i­ties. However, ex­tract­ing and au­dit­ing the sheer vol­ume of com­piled black­box bi­na­ries that run this pipeline his­tor­i­cally re­quired an im­prac­ti­cal amount of time and man­ual ef­fort.

But this is Round 2, and the land­scape has shifted. By lever­ag­ing AI-augmented tool­ing-par­tic­u­larly au­to­mated re­verse en­gi­neer­ing us­ing IDA MCP-we were able to do what was pre­vi­ously too costly. Using AI, we rapidly an­a­lyzed GitHub’s com­piled bi­na­ries, re­con­structed in­ter­nal pro­to­cols, and sys­tem­at­i­cally iden­ti­fied where user in­put could in­flu­ence server be­hav­ior across the en­tire pipeline. Thanks to this new ca­pa­bil­ity, we found a fun­da­men­tal flaw in how that in­put flows through GitHub’s multi-ser­vice ar­chi­tec­ture.

Technical Deep-Dive

Understanding the Architecture

When a user runs git push against GitHub via SSH, the re­quest flows through sev­eral key com­po­nents:

babeld - A git proxy and the entry point for all git operations. It receives the user's SSH connection and forwards authentication to gitauth.

gitauth - An internal authentication service. It verifies the user's credentials, checks whether they have push access to the target repository, and returns the security policies that apply to the session - file size limits, branch naming rules, and more. babeld takes this response and constructs an internal header containing all of this security metadata.

gitrpcd - An internal RPC server. It receives the request from babeld, parses the X-Stat header, and sets up the environment for downstream processes. Critically, gitrpcd performs no authentication of its own - it trusts babeld completely and treats every field in the X-Stat header as authoritative.

The pre-receive hook - A compiled Go binary that enforces security policies before a push is accepted. It checks file size limits, branch naming rules, LFS integrity, and runs any admin-defined custom hooks.

The critical link between these components is the X-Stat header. It carries security-critical fields as semicolon-delimited key=value pairs. Internal services parse this header by splitting on ; and populating a map. A key detail: the map uses last-write-wins semantics. If a key appears twice, the later value silently overrides the earlier one.

When ba­beld for­wards a push re­quest, one of the in­ter­nal re­quests in­cludes push op­tions in the X-Stat header. Git push op­tions are ar­bi­trary strings that users can pass with git push -o. They are a stan­dard git pro­to­col fea­ture, in­tended for server-side hints. ba­beld en­codes them as num­bered fields - push_op­tion_0, push_op­tion_1, and so on - along­side a push_op­tion_­count.

The Vulnerability: X-Stat Field Injection

So what hap­pens when user-con­trolled in­put reaches the X-Stat header with­out proper san­i­ti­za­tion?

ba­beld copies git push op­tion val­ues di­rectly into the X-Stat header - with­out san­i­tiz­ing semi­colons. Since ; is the X-Stat field de­lim­iter, any semi­colon in a push op­tion value breaks out of its des­ig­nated field and cre­ates new, at­tacker-con­trolled fields.

Consider a push op­tion value that con­tains a semi­colon fol­lowed by a se­cu­rity field name. ba­beld em­beds it ver­ba­tim, pro­duc­ing a header like:

When split­ting on ;, this header parses as:

The at­tack­er’s value wins be­cause it ap­pears later in the header - last-write-wins.
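A minimal Python sketch of this failure mode. The field names rails_env, push_option_count, and push_option_0 come from the write-up; the parser and header builder are illustrative, not GitHub's actual code:

```python
def parse_x_stat(header: str) -> dict:
    """Split a semicolon-delimited key=value header into a map.
    Last write wins: a repeated key silently overrides the earlier value."""
    fields = {}
    for part in header.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            fields[key] = value
    return fields

def build_header(push_option: str) -> str:
    # What a proxy might do: embed a user-supplied push option verbatim,
    # without sanitizing the ';' field delimiter.
    return f"rails_env=production;push_option_count=1;push_option_0={push_option}"

# A benign push option stays confined to its own field.
assert parse_x_stat(build_header("ci-skip"))["rails_env"] == "production"

# A semicolon in the value breaks out of the field and injects a new one,
# and last-write-wins lets it override the security-critical setting.
fields = parse_x_stat(build_header("x;rails_env=development"))
assert fields["rails_env"] == "development"
```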

We con­firmed this both through bi­nary analy­sis and on the wire - a packet cap­ture on a live GHES in­stance showed in­jected fields ap­pear­ing along­side and over­rid­ing their le­git­i­mate coun­ter­parts in the X-Stat header.

By com­bin­ing re­verse en­gi­neer­ing of the pre-re­ceive bi­nary with wire-level analy­sis, we mapped out in­jectable X-Stat fields. The fol­low­ing are par­tic­u­larly se­cu­rity-rel­e­vant:

The first three are the ones that mat­ter most - to­gether, they lead to re­mote code ex­e­cu­tion.

Escalation to RCE

Overriding se­cu­rity flags like large_blob_re­jec­tion_en­abled is in­ter­est­ing, but the real ques­tion is: can we turn field in­jec­tion into code ex­e­cu­tion?

The an­swer lies in three fields from the table above: rail­s_env, cus­tom_hook­s_dir, and re­po_pre_re­ceive_hooks. To un­der­stand why, we need to look at how the pre-re­ceive hook bi­nary han­dles cus­tom hooks.

GHES sup­ports ad­min-de­fined cus­tom pre-re­ceive hooks - scripts that run be­fore a push is ac­cepted. By re­verse en­gi­neer­ing the pre-re­ceive bi­nary, we dis­cov­ered it has two ex­e­cu­tion paths, con­trolled en­tirely by the rail­s_env field from the X-Stat header: a pro­duc­tion value that runs hooks in­side a sand­box, and any other value that runs hooks di­rectly - no sand­box, no iso­la­tion - as the git ser­vice user with full filesys­tem ac­cess.

The only thing sep­a­rat­ing these two paths is the value of rail­s_env. And we can in­ject it.

The es­ca­la­tion to RCE chains three in­jec­tions to­gether:

Step 1 - Bypass the sand­box. Inject a non-pro­duc­tion rail­s_env value to switch from the sand­boxed pro­duc­tion path to the un­sand­boxed path.

Step 2 - Redirect the hook di­rec­tory. Inject cus­tom_hook­s_dir to con­trol the base di­rec­tory where the bi­nary looks up hook scripts.

Step 3 - Inject a hook de­f­i­n­i­tion with path tra­ver­sal. Inject re­po_pre_re­ceive_hooks with a crafted hook en­try whose script field con­tains a path tra­ver­sal se­quence. The bi­na­ry’s path res­o­lu­tion joins the at­tacker- con­trolled base di­rec­tory with the tra­ver­sal pay­load, re­solv­ing to an ar­bi­trary bi­nary on the filesys­tem.

The non-pro­duc­tion path then ex­e­cutes the re­solved path di­rectly - no ar­gu­ments, no sand­box - as the git ser­vice user:

With un­sand­boxed code ex­e­cu­tion as the git user, we had full con­trol over the GHES in­stance, in­clud­ing filesys­tem read/​write ac­cess and vis­i­bil­ity into in­ter­nal ser­vice con­fig­u­ra­tion.
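The path resolution flaw in step 3 follows a well-known pattern. A hypothetical Python sketch (the actual GHES paths and field formats are not published; directory names here are made up):

```python
import posixpath  # use POSIX path rules regardless of host OS

def resolve_hook(custom_hooks_dir: str, script: str) -> str:
    # The vulnerable pattern: join an attacker-controlled base directory
    # with an unvalidated script name; ".." components escape the base.
    return posixpath.normpath(posixpath.join(custom_hooks_dir, script))

# Intended use: a script inside the hooks directory.
assert resolve_hook("/data/hooks", "check.sh") == "/data/hooks/check.sh"

# With both fields injected, traversal resolves to an arbitrary binary.
assert resolve_hook("/tmp/x", "../../usr/bin/id") == "/usr/bin/id"

# The standard defense: resolve first, then verify containment.
def resolve_safely(base: str, script: str) -> str:
    resolved = posixpath.normpath(posixpath.join(base, script))
    if not resolved.startswith(base.rstrip("/") + "/"):
        raise ValueError("path escapes hooks directory")
    return resolved
```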

From GHES to GitHub.com

We had RCE on GitHub Enterprise Server. The next ques­tion was ob­vi­ous - does this work on GitHub.com?

We ran the same ex­ploita­tion chain against a repos­i­tory on GitHub.com. The push com­pleted suc­cess­fully, but the cus­tom hooks never ex­e­cuted. No re­mote: out­put, no code ex­e­cu­tion - noth­ing.

To un­der­stand what was hap­pen­ing, we in­jected user_­op­er­a­tor_­mode=bool:true to en­able de­bug out­put on both plat­forms. Comparing the out­put side by side, we no­ticed that GitHub.com was miss­ing cer­tain hook ex­e­cu­tion steps that ap­peared on GHES - the cus­tom hooks code path was sim­ply not be­ing reached.

We went back to the bi­nary and dug deeper. Through fur­ther re­verse en­gi­neer­ing, we iden­ti­fied a boolean flag in the X-Stat header that con­trols whether the server op­er­ates in en­ter­prise mode. On GHES, this flag de­faults to true - so the cus­tom hooks path is al­ways ac­tive. On GitHub.com, it de­faults to false, mean­ing cus­tom hooks are never reached un­der nor­mal con­di­tions.

Since this flag was also car­ried in the X-Stat header, it was in­jectable through the same mech­a­nism. One more in­jected field, and the full ex­ploita­tion chain worked on GitHub.com. This time, we ex­e­cuted host­name in­stead of id:

RCE on GitHub.com - con­firmed.

Cross-Tenant Impact

RCE on GitHub Enterprise Server is a crit­i­cal vul­ner­a­bil­ity. On GitHub.com, the same flaw had broader im­pli­ca­tions due to the shared in­fra­struc­ture serv­ing mul­ti­ple users and or­ga­ni­za­tions.

GitHub.com is a multi-ten­ant plat­form. Repositories be­long­ing to mil­lions of dif­fer­ent or­ga­ni­za­tions and users are stored on shared back­end in­fra­struc­ture. When we achieved code ex­e­cu­tion on GitHub.com, we landed on a shared stor­age node run­ning as the git user.

The git user ex­ists for a rea­son: it serves all repos­i­tory op­er­a­tions across the node. By de­sign, it has broad filesys­tem ac­cess to every repos­i­tory hosted on that node. Compromising this user meant we could read any repos­i­tory on the node, re­gard­less of which or­ga­ni­za­tion or user owned it. We enu­mer­ated repos­i­tory in­dex en­tries ac­ces­si­ble from two com­pro­mised nodes and found mil­lions of en­tries across each, be­long­ing to other users and or­ga­ni­za­tions.

To be clear: we did not ac­cess the con­tents of other ten­ants’ repos­i­to­ries. We val­i­dated the cross-ten­ant ex­po­sure us­ing only our own test ac­counts, con­firm­ing that the git user’s filesys­tem per­mis­sions would al­low read­ing any repos­i­tory on the node.

Conclusion

A sin­gle git push com­mand was enough to ex­ploit a flaw in GitHub’s in­ter­nal pro­to­col and achieve code ex­e­cu­tion on back­end in­fra­struc­ture. The vul­ner­a­bil­ity chain high­lights a pat­tern that ex­tends well be­yond GitHub. When mul­ti­ple ser­vices writ­ten in dif­fer­ent lan­guages pass data through a shared in­ter­nal pro­to­col, the as­sump­tions each ser­vice makes about that data be­come a crit­i­cal at­tack sur­face. In this case, one ser­vice as­sumed push op­tion val­ues were safe to em­bed ver­ba­tim. Another as­sumed every field in the X-Stat header was set by a trusted source. The pre-re­ceive hook as­sumed an en­vi­ron­ment vari­able could only be pro­duc­tion in pro­duc­tion. Each as­sump­tion was rea­son­able in iso­la­tion - and dan­ger­ous in com­bi­na­tion.

The pres­ence of a non-pro­duc­tion code path in a pro­duc­tion bi­nary, the lack of path tra­ver­sal val­i­da­tion on hook scripts, and the use of a de­lim­iter-based pro­to­col with­out in­put san­i­ti­za­tion are pat­terns that ap­pear across many code­bases. We en­cour­age teams build­ing multi-ser­vice ar­chi­tec­tures to au­dit how user-con­trolled in­put flows through in­ter­nal pro­to­cols - es­pe­cially where se­cu­rity-crit­i­cal con­fig­u­ra­tion is de­rived from shared data for­mats.

This re­search was made pos­si­ble by AI-augmented re­verse en­gi­neer­ing tool­ing, par­tic­u­larly IDA MCP, which al­lowed us to rapidly an­a­lyze com­piled bi­na­ries and re­con­struct in­ter­nal pro­to­cols at a speed that would not have been fea­si­ble man­u­ally. As these tools con­tinue to ma­ture, we ex­pect them to play an in­creas­ingly im­por­tant role in un­cov­er­ing vul­ner­a­bil­ity classes that re­quire deep cross-com­po­nent analy­sis.

Responsible Disclosure Timeline

2026-03-04 - Wiz Research discovers the X-Stat push option injection vulnerability.
2026-03-04 - RCE confirmed on GHES 3.19.1.
2026-03-04 - Wiz Research reports the vulnerability to GitHub.
2026-03-04 - GitHub acknowledges receipt.
2026-03-04 - GitHub deploys fix on GitHub.com.
2026-03-10 - CVE-2026-3854 assigned with CVSS 8.7.
2026-03-10 - GHES patch released.
2026-04-28 - Public disclosure.

Stay in touch!

Hi there! We are Sagi Tzadik (@sagitz_), Nir Ohfeld (@nirohfeld), Ronen Shustin (@ronenshh), Hillai Ben-Sasson (@hillai), Yuval Avrahami (@yuvalavra), and Noam Malron (@noamsec) from the Wiz Research Team (@wiz_io). We are a group of veteran white-hat hackers with a single goal: to make the cloud a safer place for everyone. We primarily focus on finding new attack vectors in the cloud and uncovering isolation issues in cloud vendors and service providers. We would love to hear from you! Feel free to contact us on X (Twitter) or via email: research@wiz.io.

How ChatGPT serves ads. Here's the full attribution loop.

www.buchodi.com

28 Apr 2026

OpenAI's ad platform has two halves. On the ChatGPT side, the backend injects structured single_advertiser_ad_unit objects into the conversation SSE stream while the model is responding. On the merchant side, a tracking SDK called OAIQ runs in the visitor's browser and reports product views back to OpenAI. The two are tied together by Fernet-encrypted click tokens, four of them per ad.

I cap­tured both halves on a con­sented mo­bile-traf­fic re­search fleet. Everything be­low comes from ob­served traf­fic.

When you send a message to ChatGPT, the backend opens an SSE response at chatgpt.com/backend-api/f/conversation. Most events in that stream are model output. Some are ad units. They look like this:

event: delta

data: {
  "type": "single_advertiser_ad_unit",
  "ads_request_id": "069e89b3-c038-7764-8000-6e5a193e5f69",
  "ads_spam_integrity_payload": "gAAAAABp6Js_<…redacted…>",
  "preamble": "",
  "advertiser_brand": {
    "name": "Grubhub",
    "url": "www.grubhub.com",
    "favicon_url": "https://bzrcdn.openai.com/cabfae7ead26b03d.png",
    "id": "adacct_6984ed0ba55481a29894bb192f7773b4"
  },
  "carousel_cards": [{
    "title": "Get Chinese Food Delivered",
    "body": "Satisfy Your Cravings with Grubhub Delivery.",
    "image_url": "https://bzrcdn.openai.com/cabfae7ead26b03d.png",
    "target": {
      "type": "url",
      "value": "https://www.grubhub.com/?utm_source=chatgptpilot&utm_medium=paid&utm_campaign=diner_gh_search_chatgpt_kw_traffic_nb_x_nat_x&utm_content=nbchinese&oppref=gAAAA<…>&olref=gAAAA<…>",
      "open_externally": false
    },
    "ad_data_token": "eyJwYXlsb2<…>"
  }]
}

Notes:

single_advertiser_ad_unit is a typed schema. The naming implies siblings (multi-advertiser, etc.).

advertiser_brand.id is adacct_<32-hex>, a stable per-merchant account identifier.

Brand favicon and ad image both load from bzrcdn.openai.com. OpenAI hosts the advertiser's creative, not the merchant.

target.open_externally: false opens the link in ChatGPT's in-app webview, so OpenAI observes the post-click navigation on top of any pixel signal.

Four Fernet tokens per ad: ads_spam_integrity_payload, oppref, olref, and a base64-wrapped ad_data_token. Each is AES-128-CBC under a server-only key with HMAC-SHA256 integrity.

How ads get se­lected

A sin­gle ac­count in the panel re­ceived six dif­fer­ent ads across six con­ver­sa­tions on six dif­fer­ent top­ics. The tar­get­ing is con­tex­tual to the chat:

Same ac­count, dif­fer­ent topic, dif­fer­ent brand. I did­n’t find ev­i­dence one way or the other on whether tar­get­ing also in­cor­po­rates prior con­ver­sa­tion his­tory.

The four-to­ken at­tri­bu­tion chain

Every ad ships with four dis­tinct Fernet-encrypted blobs. Their roles, based on where they ap­pear:

ads_spam_integrity_payload - sent inside the SSE data, never on the click URL. Server-side integrity check against forged ad clicks.

oppref - present on the click URL and copied verbatim by the OAIQ pixel into the cookie __oppref (TTL 720 hours / 30 days). The forward attribution token. Travels with every subsequent merchant pixel event.

olref - paired with oppref on the click URL but not stored by the SDK we observed. Likely impression-side / outbound-link-reference logging on OpenAI's servers.

ad_data_token - base64-wrapped JSON containing yet another Fernet token. Carried in the SSE payload, presumably reconciled server-side at click time.

Fernet's first nine bytes are public: version byte 0x80 plus an 8-byte big-endian Unix timestamp. So the mint time of any of these tokens is recoverable without OpenAI's key:

import base64, struct, datetime

# The first 12 base64 characters decode to exactly 9 bytes
# (version byte + timestamp), so no padding is needed.
b = base64.urlsafe_b64decode("gAAAAABp7fdA")

print(datetime.datetime.utcfromtimestamp(struct.unpack(">Q", b[1:9])[0]))

# → 2026-04-26 11:30:08 UTC

The Home Depot click URL I cap­tured was minted at 11:30:08; the browser fetched the mer­chant page at 11:31:43. Click la­tency: 95 sec­onds.

How the loop closes on the mer­chant side

User taps the card. Browser opens:

https://www.grubhub.com/?utm_source=chatgptpilot&

&oppref=gAAAA<…>

&olref=gAAAA<…>

The mer­chant page loads the OAIQ SDK:

<script src="https://bzrcdn.openai.com/sdk/oaiq.min.js"></script>

<script>

oaiq('init', { pid: '<merchant pixel ID>' });

oaiq('measure', 'contents_viewed', { … });

</script>

oaiq.min.js is at version 0.1.3. On init it reads ?oppref= from window.location, writes it into the first-party cookie __oppref with a 720-hour TTL, and sets a probe cookie __oaiq_domain_probe. Every subsequent measure call POSTs JSON to:

POST https://bzr.openai.com/v1/sdk/events?pid=<merchant>&st=oaiq-web&sv=0.1.3

Two domains to add to your filter list if you want to block ChatGPT ad events: bzrcdn.openai.com, bzr.openai.com. Two cookie names to inspect after any ChatGPT-recommended click: __oppref, __oaiq_domain_probe.

Warp is now open-source

www.warp.dev

Today we are an­nounc­ing a fun­da­men­tal change in how we build Warp: the Warp client is now open-source, and the com­mu­nity can par­tic­i­pate in build­ing it us­ing an agent-first work­flow man­aged by Oz, our cloud agent or­ches­tra­tion plat­form. OpenAI is the found­ing spon­sor of the new, open‑source Warp repos­i­tory, and the new agen­tic man­age­ment work­flows are pow­ered by GPT mod­els.

Open-sourcing with an agent-pow­ered repo is our vi­sion of how soft­ware will be built in the fu­ture. Humans man­ag­ing agents at scale to build pro­duc­tion-grade soft­ware is the model, and im­ple­ment­ing this model in the open will al­low soft­ware to im­prove most quickly.

Put sim­ply: we be­lieve that a di­verse col­lec­tion of con­trib­u­tors with unique ideas + Oz agents with struc­tured processes + a rich cor­pus of con­text and self-im­prove­ment loops will yield a magic prod­uct, be­yond what we might build in­ter­nally.

Why now

The pri­mary rea­son is that we think we can ship a bet­ter Warp, more quickly, if we open source and work with our com­mu­nity to help su­per­vise a fleet of agents. The biggest bot­tle­neck to de­vel­op­ment is no longer writ­ing code — it’s all the hu­man-in-the-loop ac­tiv­i­ties around the code: spec­c­ing the prod­uct and ver­i­fy­ing be­hav­ior, and frankly, we are lim­ited in what our in­ter­nal team can do and the pace we want to move at.

We’ve found that agents can han­dle the im­ple­men­ta­tion heavy lift­ing re­ally well. That frees con­trib­u­tors to fo­cus on the higher-lever­age work: shap­ing what gets built and mak­ing sure it’s right.

Inviting our com­mu­nity into the process to help man­age agents will be a big un­lock. We now have a lot of con­fi­dence in code that is gen­er­ated by Oz with our rules, con­text and ver­i­fi­ca­tion, so any­one con­tribut­ing should have a high chance of suc­cess cod­ing a fea­ture cor­rectly. Moreover, lean­ing on agents cre­ates pres­sure for us to nail or­ches­tra­tion, mem­ory, hand­off, and all of the other parts of agen­tic en­gi­neer­ing that are core to our busi­ness. There’s a vir­tu­ous loop here.

The sec­ond rea­son is about giv­ing de­vel­op­ers a chance to shape the fu­ture. There is­n’t a full-fea­tured open agen­tic de­vel­op­ment en­vi­ron­ment on the mar­ket and we want to of­fer the com­mu­nity an al­ter­na­tive to closed-source op­tions pro­vided by more es­tab­lished com­pa­nies. No one knows ex­actly what the fu­ture of agen­tic de­vel­op­ment will look like and we think the com­mu­nity ought to be able to par­tic­i­pate in shap­ing it.

Warp is multi-model and multi-har­ness and we want to dou­ble down on that open­ness. Opening will al­low us to be more re­spon­sive to users, work­ing with them on the long tail of our back­log to make Warp the best ADE on the mar­ket.

In this spirit, we are cou­pling a few ma­jor prod­uct im­prove­ments with this launch to make Warp more open and cus­tomiz­able.

First, we are launching support for a much wider range of open source models in Warp today, including the latest Kimi, MiniMax, and Qwen models, along with a new "auto (open)" model-routed version that picks the best open model for a task.

Second, we’ve made it much eas­ier to cus­tomize your Warp ex­pe­ri­ence how­ever you’d like — from just a ter­mi­nal, to hav­ing some min­i­mal fea­tures for im­prov­ing agen­tic de­vel­op­ment like a diff view and file tree, to a full fledged ADE with built-in agents.

Finally, we are ship­ping a (long-overdue) set­tings file so that users and agents get pro­gram­matic con­trol over set­tings and easy porta­bil­ity be­tween de­vices.

How it works

Warp's source code is now available at github.com/warpdotdev/warp under the AGPL license.

You can learn more about the con­tri­bu­tion process in CONTRIBUTING.md, but the tl;dr is we want agents do­ing the heavy lift­ing (coding, plan­ning, test­ing, etc.) and com­mu­nity mem­bers help­ing with ideas, di­rec­tion and ver­i­fi­ca­tion. The Warp team will help guide what gets built, when and how, but we see Warp fun­da­men­tally be­com­ing a col­lab­o­ra­tive ef­fort with our com­mu­nity. This is a cool op­por­tu­nity for folks to con­tribute to a fast grow­ing app that’s used by nearly a mil­lion ac­tive de­vel­op­ers, work­ing closely along­side the Warp team.

Warp’s new open-source agent work­flows are pow­ered by OpenAI mod­els, with OpenAI sup­port­ing the next gen­er­a­tion of col­lab­o­ra­tive soft­ware de­vel­op­ment. You’re free to use other cod­ing agents as well to con­tribute, but our pref­er­ence is us­ing Oz since it has the cor­rect skills and ver­i­fi­ca­tion loops built-in.

"Open source has long been central to how developers learn, build, and push the field forward. We're excited to support experiments that explore how AI can help maintainers and contributors collaborate more effectively at scale."

— Thibault Sottiaux, Engineering Lead, OpenAI

Finally, as part of mak­ing this change, we are mov­ing from a closed prod­uct de­vel­op­ment process to an open one. That means pub­lic GitHub is­sues will be­come the source of truth for track­ing fea­tures. We will pub­li­cize our roadmap for the ADE, and have tech­ni­cal and prod­uct dis­cus­sions for ADE is­sues in the open.

Closing thoughts

Open-sourcing is fun­da­men­tally com­ing from our de­sire to build a suc­cess­ful busi­ness. We are com­pet­ing with other highly funded, closed-source com­peti­tors, and we think open­ing and pro­vid­ing the re­sources for the com­mu­nity to im­prove Warp is a smart way for us to ac­cel­er­ate prod­uct de­vel­op­ment. Yes, we are a VC funded startup, but we do not have the re­sources to com­pete on price or mas­sively sub­si­dize us­age — we need to build our busi­ness by of­fer­ing the best pos­si­ble prod­uct to the most ex­cited com­mu­nity.

Business goals aside, we also think there’s value in the com­mu­nity it­self shap­ing the fu­ture. Warp is a unique prod­uct with five years of en­gi­neer­ing be­hind it, and we hope it pro­vides the com­mu­nity a com­pelling start­ing point for build­ing the fu­ture of agen­tic de­vel­op­ment, be­yond any other open al­ter­na­tive.

We could just keep go­ing with our cur­rent model, pri­vately guess­ing at the roadmap and scal­ing more and more agents to build in­ter­nally, but that feels like a missed op­por­tu­nity. It would be miss­ing a chance for the com­mu­nity to shape the di­rec­tion, to hack on the prod­uct, and to ac­tu­ally work with agents to build the best pos­si­ble prod­uct.

When we first launched Warp as a ter­mi­nal 5 years ago, the plan was al­ways to open source the client. This is from the orig­i­nal Show HN post for Warp:

We’ve de­bated whether to open source every year, and for the first-time this year it be­came clear that the bal­ance had shifted be­cause of the rise of agents. I’ve been a de­vel­oper for a long time and I’ve never seen a change like what’s hap­pen­ing now. Warp’s mis­sion has al­ways been to em­power de­vel­op­ers to ship bet­ter soft­ware more quickly and I hope by chang­ing the way we are build­ing it we get a lit­tle closer to achiev­ing that mis­sion.

Google and Pentagon reportedly agree on deal for ‘any lawful’ use of AI

www.theverge.com

Jess Weatherbed

is a news writer fo­cused on cre­ative in­dus­tries, com­put­ing, and in­ter­net cul­ture. Jess started her ca­reer at TechRadar, cov­er­ing news and hard­ware re­views.

Google has signed a classified deal that allows the US Department of Defense to use its AI models for "any lawful government purpose," The Information reports. The agreement was reported less than a day after Google employees demanded CEO Sundar Pichai block the Pentagon from using its AI amid concerns that it would be used in "inhumane or extremely harmful ways."

If the agree­ment is con­firmed, it would place Google along­side OpenAI and xAI, which have also made clas­si­fied AI deals with the US gov­ern­ment. Anthropic was also among that list un­til it was black­listed by the Pentagon for re­fus­ing the Department of Defense’s de­mands to re­move weapon and sur­veil­lance-re­lated guardrails from its AI mod­els.

Citing "a single anonymous source with knowledge of the situation," The Information reports that the deal states that both parties have agreed that the search giant's AI systems shouldn't be used for domestic mass surveillance or autonomous weapons "without appropriate human oversight and control." But the contract also says it doesn't give Google "any right to control or veto lawful government operational decision-making," which would suggest the agreed restrictions are more of a pinky promise than legally binding obligations. The deal also requires Google to assist with making adjustments to its AI safety settings and filters at the government's request.

"We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security," a Google spokesperson said in a statement to The Information, adding that the new agreement is an amendment to its existing government deal. "We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight."

Update, April 28th: Replaced state­ment pro­vided to Reuters with a state­ment that Google pro­vided to The Information.

Follow top­ics and au­thors from this story to see more like this in your per­son­al­ized home­page feed and to re­ceive email up­dates.

Jess Weatherbed
