10 interesting stories served every morning and every evening.

GrapheneOS (@GrapheneOS@grapheneos.social)

grapheneos.social

Local AI Needs to be the Norm · unix.foo

unix.foo

One of the current trends in modern software is for developers to slap an API call to OpenAI or Anthropic into their apps for new features. Reasonable people can quibble over whether those features actually bring value to users, but what I want to discuss is the fundamental concept of taking a dependency on a cloud-hosted AI model inside applications.

This laziness is creating a generation of software that is fragile, privacy-invasive, and fundamentally broken. We are building applications that stop working the moment a server crashes or a credit card expires.

We need to return to a habit of building software where our local devices do the work. The silicon in our pocket is mind-bogglingly faster than what was available a decade ago. It has a dedicated Neural Engine sitting there, mostly idle, while we wait for a JSON response from a server farm in Virginia. That's ridiculous.

Even if your intentions are pure, the moment you stream user content to a third-party AI provider, you've changed the nature of your product. You now have data retention questions and all the baggage that comes with them (consent, audits, breaches, government requests, training, etc.).

On top of that, you've also substantially complicated your stack, because your feature now depends on network conditions, external vendor uptime, rate limits, account billing, and your own backend health.

Congratulations! You took a UX fea­ture and turned it into a dis­trib­uted sys­tem that costs you money.

If the fea­ture can be done lo­cally, opt­ing into this mess is self in­flicted dam­age.

"AI everywhere" is not the goal. Useful software is the goal.

Concrete Example: Brutalist Report’s On-Device Summaries

Years ago I launched a fun side project named The Brutalist Report, a news aggregator service inspired by the 1990s-style web.

Recently, I decided to build a native iOS client for it with the design goal of ensuring it would remain a high-density news reading experience. Headlines in a stark list, a reader mode that strips the cancer that has overtaken the web, and (optionally) an "intelligence" view that generates a summary of the article.

Here's the key point though: the summary is generated on-device using Apple's local model APIs. No server detours. No prompt or user logs. No vendor account. No "we store your content for 30 days" footnotes needed.

It has become so normal for folks to assume that any AI use happens server-side. We have a lot of work to do as an industry to turn this around.

It's not lost on me that some use cases will demand the intelligence only a cloud-hosted model can provide, but that's not true of every use case you're trying to solve. We need to be thoughtful here.

Available Tooling

I can only speak on the tool­ing avail­able within the Apple ecosys­tem since that’s what I fo­cused ini­tial de­vel­op­ment ef­forts on. In the last year, Apple has in­vested heav­ily here to al­low de­vel­op­ers to make use of a built-in lo­cal AI model eas­ily.

The core flow looks roughly like this:

```swift
import FoundationModels

let model = SystemLanguageModel.default
guard model.availability == .available else { return }

let session = LanguageModelSession {
    """
    Provide a brutalist, information-dense summary in Markdown format.
    - Use **bold** for key concepts.
    - Use bullet points for facts.
    - No fluff. Just facts.
    """
}

let response = try await session.respond(options: .init(maximumResponseTokens: 1_000)) {
    articleText
}

let markdown = response.content
```

And for longer content, we can chunk the plain text (around 10k characters per chunk), produce concise "facts only" notes per chunk, then run a second pass to combine them into a final summary.
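A minimal sketch of that chunking step (the helper name and the paragraph-boundary heuristic are my own, not from the app's actual code):

```swift
import Foundation

// Split plain text into chunks of at most ~maxChars characters, preferring
// paragraph boundaries. A single paragraph longer than maxChars stays whole;
// this is an illustrative sketch, not the app's implementation.
func chunk(_ text: String, maxChars: Int = 10_000) -> [String] {
    var chunks: [String] = []
    var current = ""
    for paragraph in text.components(separatedBy: "\n\n") {
        if !current.isEmpty, current.count + paragraph.count + 2 > maxChars {
            chunks.append(current)
            current = ""
        }
        current += (current.isEmpty ? "" : "\n\n") + paragraph
    }
    if !current.isEmpty { chunks.append(current) }
    return chunks
}
```

Each chunk then gets its own "facts only" respond call, and the concatenated notes feed the final summarization pass.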

This is the kind of work lo­cal mod­els are per­fect for. The in­put data is al­ready on the de­vice (because the user is read­ing it). The out­put is light­weight. It’s fast and pri­vate. It’s okay if it’s not a su­per­hu­man PhD level in­tel­li­gence be­cause it’s sum­ma­riz­ing the page you just loaded, not in­vent­ing world knowl­edge.

Local AI shines when the mod­el’s job is trans­form­ing user-owned data, not act­ing as a search en­gine for the uni­verse.

There are plenty of AI features that people want but don't trust: summarizing emails, extracting action items from notes, categorizing documents, etc.

The usual cloud approach turns every one of those into a trust exercise: "Please send your data to our servers. We promise to be cool about it."

Local AI changes that. Your de­vice al­ready has the data. We’ll do the work right here.

You don’t build trust with your users by writ­ing a 2,000 word pri­vacy pol­icy. You build trust by not need­ing one to be­gin with.

The tool­ing avail­able on the plat­form goes even fur­ther.

One of the best moves Apple has made re­cently is push­ing AI out­put” away from un­struc­tured blobs of text and to­ward typed data.

Instead of "ask the model for JSON and pray", the newer and better pattern is to define a Swift struct that represents the thing you want. Give the model guidance for each field in natural language. Ask the model to generate an instance of that type.

That’s it.

Conceptually, it looks like this:

```swift
import FoundationModels

@Generable
struct ArticleIntel {
    @Guide(description: "One sentence. No hype.")
    var tldr: String

    @Guide(description: "3–7 bullets. Facts only.")
    var bullets: [String]

    @Guide(description: "Comma-separated keywords.")
    var keywords: [String]
}

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Extract structured notes from the article.",
    generating: ArticleIntel.self
) {
    articleText
}

let intel = response.content
```

Now your UI does­n’t have to scrape bul­let points out of Markdown or hope the model re­mem­bered your JSON schema. You get a real type with real fields, and you can ren­der it con­sis­tently. It pro­duces struc­tured out­put your app can ac­tu­ally use. And it’s all run­ning lo­cally!

This is­n’t just nicer er­gonom­ics. It’s an en­gi­neer­ing im­prove­ment.

And if you're building a local-first app, this is the difference between "AI as novelty" and "AI as a trustworthy subsystem".

"But Local Models Aren't As Smart"

Correct.

But also so what?

Most app fea­tures don’t need a model that can write Shakespeare, ex­plain quan­tum me­chan­ics, and pass the bar exam. They need a model that can do one of these re­li­ably: sum­ma­rize, clas­sify, ex­tract, rewrite, or nor­mal­ize.

And for those tasks, lo­cal mod­els can be truly ex­cel­lent.

If you try to use a local model as a replacement for the entire internet, you will be disappointed. If you use it as a "data transformer" sitting inside your app, you'll wonder why you ever sent this stuff to a server.

Use cloud models only when they're genuinely necessary. Keep the user's data where it belongs. And when you do use AI, don't just glue a chat box onto your app. Use it as a real subsystem with typed outputs and predictable behavior.

Stop ship­ping dis­trib­uted sys­tems when you meant to ship a fea­ture.

Google Broke reCAPTCHA for De-Googled Android Users

reclaimthenet.org

Google has tied its next-gen­er­a­tion re­CAPTCHA sys­tem to Google Play Services on Android, mean­ing any­one run­ning a de-Googled phone will au­to­mat­i­cally fail ver­i­fi­ca­tion when the sys­tem de­cides to chal­lenge them.

The re­quire­ment forces Android users to run Google’s pro­pri­etary app frame­work ver­sion 25.41.30 or higher just to prove they’re hu­man.

When re­CAPTCHA flags what it con­sid­ers sus­pi­cious ac­tiv­ity, it aban­dons the old im­age puz­zles and de­mands you scan a QR code. That scan re­quires Play Services run­ning in the back­ground, com­mu­ni­cat­ing with Google’s servers. If you’re us­ing GrapheneOS or any other cus­tom ROM that strips out Google’s soft­ware, the ver­i­fi­ca­tion fails.

Google an­nounced the broader sys­tem, Google Cloud Fraud Defense, at Cloud Next on April 23, pitch­ing it as a trust plat­form de­signed to han­dle au­tonomous AI agents and tra­di­tional bots alike. What Google did­n’t em­pha­size was the part where prov­ing you’re hu­man now re­quires sub­mit­ting to its pro­pri­etary sur­veil­lance.

This was­n’t sud­den, ei­ther. An Internet Archive snap­shot from October 2025 shows the same sup­port page al­ready list­ing a Play Services re­quire­ment at ver­sion 25.39.30. Google built this de­pen­dency qui­etly for at least seven months be­fore a Reddit user on the de­google sub­red­dit flagged it, with re­port­ing from PiunikaWeb and Android Authority bring­ing wider at­ten­tion.

The iOS com­par­i­son is re­veal­ing be­cause Apple de­vices run­ning iOS 16.4 or later com­plete the same ver­i­fi­ca­tion with­out in­stalling any ad­di­tional apps. Google did­n’t de­mand iPhone users in­stall Google soft­ware to pass the test. Only Android users who refuse Play Services get locked out. The asym­me­try re­veals what this is re­ally about: not se­cu­rity, but ecosys­tem con­trol.

re­CAPTCHA sits in front of mil­lions of web­sites. When Google ties ver­i­fi­ca­tion to Play Services, it es­tab­lishes a prece­dent where ac­cess­ing ba­sic web con­tent re­quires run­ning Google’s soft­ware and trans­mit­ting data to Google’s servers.

People run­ning de-Googled phones chose those se­tups be­cause they read the data prac­tices, un­der­stood what Play Services phones home about, and de­cided they did­n’t con­sent. Google’s new sys­tem pun­ishes that de­ci­sion by treat­ing the ab­sence of its pro­pri­etary soft­ware as sus­pi­cious by de­fault.

Web de­vel­op­ers adopt­ing this re­CAPTCHA should un­der­stand what they’re choos­ing. Every site that im­ple­ments it tells de-Googled Android users they’re not wel­come. That’s a small au­di­ence to­day. It’s also the au­di­ence most likely to care about how a web­site treats their data, and the least likely to ca­pit­u­late.

Bambu Lab is abusing the open source social contract

www.jeffgeerling.com

Last year I said I’d prob­a­bly never rec­om­mend an­other Bambu Lab printer again.

I still use my P1S, but af­ter Bambu Lab started push­ing their al­ways-con­nected cloud so­lu­tion as the new de­fault:

I blocked the printer from the Internet via my OPNsense Firewall

I stopped up­dat­ing the firmware

I locked the printer into Developer mode

I deleted Bambu Studio and started us­ing OrcaSlicer

I had to do that to keep it un­der my con­trol, in­stead of Bambu’s.

But I’m weird—I ac­knowl­edge that. I’m one of those crazy ones who likes to own some­thing they pur­chased, and not have the com­pany watch every­thing I do with hard­ware I paid for.

Bambu Lab could’ve left the sta­tus quo at that, and I would­n’t be writ­ing this blog post.

But they did­n’t.

What hap­pened this time?

For con­text: OrcaSlicer is a fork of the open source pro­ject Bambu Studio, which is a fork of Prusa Slicer, which is a fork of slic3r. (They are all li­censed un­der the AGPLv3 open source li­cense).

OrcaSlicer al­ready has to dance around Bambu’s weird de­fault setup where every file you print goes through Bambu’s servers, mean­ing they can see every­thing you ever print on your printer.

That is, un­less you’re like me and you run it in Developer mode, and com­pletely block it from the Internet on old firmware.

Some peo­ple are okay with us­ing OrcaSlicer and print­ing through Bambu’s cloud. It’s con­ve­nient if you’re on the road and want to start a print on your printer at home, with­out man­ag­ing your own VPN.

I run my own WireGuard VPN, so I don’t need that, but I un­der­stand not every­one has the re­sources to man­age their own re­mote ac­cess.

Bambu saw a fork of OrcaSlicer, called OrcaSlicer-bambulab, that allowed you to use all your printer's features without routing prints through Bambu's cloud, and was like, "You know what? No. For the 0.1% of power users who want to run OrcaSlicer without the cloud delivery mechanism like we have in our AGPL-licensed Linux Bambu Studio code… no. You have to use our app, and only our app."

So they threat­ened that OrcaSlicer fork’s de­vel­oper with le­gal ac­tion for things that de­vel­oper did­n’t do. For ex­am­ple, they in­di­cated the fork used an im­per­son­ation at­tack, de­spite the fork us­ing Bambu Studio’s up­stream code ver­ba­tim.

These are very se­ri­ous pub­lic ac­cu­sa­tions.

Bambu Lab did not write to me with these spe­cific pub­lic claims first. They also re­fused my re­quest to pub­lish the full cor­re­spon­dence. Instead, they pub­lished a one-sided pub­lic state­ment where I can­not re­ply di­rectly.

In prac­tice, this pre­sents me to the pub­lic as some­one by­pass­ing se­cu­rity, im­per­son­at­ing their client, and cre­at­ing a risk to their in­fra­struc­ture. I re­ject that char­ac­ter­i­za­tion.

— OrcaSlicer-bambulabs de­vel­op­er’s re­sponse

Bambu is abusing the open source social contract, and using their legal might, to suppress a tiny number of their users [1], for who knows what reason.

It seems dumb to me, because it would've been easier (and more profitable) to do nothing at all [2]. Instead, they wrote a blog post blaming an individual open source developer for their own infrastructure and security problems.

This is where the ac­tual is­sue arises: the mod­i­fi­ca­tion in ques­tion worked by in­ject­ing fal­si­fied iden­tity meta­data into net­work com­mu­ni­ca­tion.

In sim­ple terms: it pre­tended to be the of­fi­cial Bambu Studio client when com­mu­ni­cat­ing with our servers.

— Bambu Lab blog post

I don’t think they un­der­stand open source cul­ture. Security ei­ther, if a pub­lic user agent string is their only pro­tec­tion against DDoS at­tacks…

Instead of finding solutions to ecosystem problems and building a more secure platform, Bambu is putting devoted power users like the fork's developer on blast [3].

When tensions flared last year, they wrote a similar blog post blaming community backlash on 'unfortunate misinformation'. I imagine they meant speculation from community members (like myself) frustrated that the whole software ecosystem and ownership model was turned upside down post-purchase.

This year they’re blam­ing one de­vel­oper of a tiny slicer fork for the po­ten­tial im­pact he could have on their en­tire cloud in­fra­struc­ture.

It cre­ates struc­tural vul­ner­a­bil­ity. If this method were widely adopted or in­cor­rectly con­fig­ured, thou­sands of clients could si­mul­ta­ne­ously hit our servers while im­per­son­at­ing the of­fi­cial client. Our sys­tems would have no way to dis­tin­guish traf­fic, be­cause the re­quests would look iden­ti­cal.

— Bambu Lab blog post

I love how they frame this as a de­vel­oper try­ing to im­per­son­ate their app, when he’s lit­er­ally us­ing the same AGPL-licensed code their Linux app uses.

I find it dou­bly ironic since their own fork caused Bambu users’ teleme­try to hit Prusa’s servers back in 2022, and (to my knowl­edge) Prusa did­n’t snap back with a C&D.

They spent the rest of their blog post talk­ing about vul­ner­a­bil­i­ties, bugs, and in­sta­bil­i­ties—as if that has any­thing to do with a de­vel­oper us­ing up­stream code ver­ba­tim in his fork.

Maybe they could take a new ap­proach and just not lock down their whole ecosys­tem in the first place.

But who am I kid­ding? Nothing I say, and no amount of com­plain­ing in the com­ments be­low, seems to help Bambu see the fault in their ways.

Spending a lit­tle more for a printer from an­other com­pany just might do it, though.

Louis Rossmann posted a video say­ing he’d pledge $10,000 to help the open source dev fight Bambu’s le­gal threats. And I’d hap­pily chip in too, but that’s only use­ful if the dev wants to put him­self back in Bambu’s crosshairs.

The bet­ter play might just be to skip Bambu al­to­gether.

[1] The OrcaSlicer fork in question didn't seem to have much uptake outside of a very small subset of users prior to Bambu Lab's cease and desist order.

[2] Maybe ask for the fork to not include "bambulabs" in the name, since that could be a reasonable trademark-related demand.

[3] The fork's developer mentioned: "I previously helped Bambu Studio users with Linux and Wayland issues, including on Bambu Lab's own GitHub. That makes it especially absurd to me that I am now being publicly presented as someone dangerous to their infrastructure."

Postmortem: TanStack npm supply-chain compromise | TanStack Blog

tanstack.com

by Tanner Linsley on May 11, 2026.

Last updated: 2026-05-11

On 2026-05-11 between 19:20 and 19:26 UTC, an attacker published 84 malicious versions across 42 @tanstack/* npm packages by combining: the pull_request_target "Pwn Request" pattern, GitHub Actions cache poisoning across the fork↔base trust boundary, and runtime memory extraction of an OIDC token from the GitHub Actions runner process. No npm tokens were stolen and the npm publish workflow itself was not compromised.

The malicious versions were detected publicly within 20 minutes by an external researcher, ashishkurmi, working for StepSecurity. All affected versions have been deprecated; npm security has been engaged to pull tarballs from the registry. We have no evidence of npm credentials being stolen, but we strongly recommend that anyone who installed an affected version on 2026-05-11 rotate AWS, GCP, Kubernetes, Vault, GitHub, npm, and SSH credentials reachable from the install host.

Tracking issue: TanStack/router#7383

GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx

Packages af­fected

42 pack­ages, 84 ver­sions (two per pack­age, pub­lished roughly 6 min­utes apart). See the track­ing is­sue for the full table. Confirmed-clean fam­i­lies: @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, @tanstack/store, @tanstack/start (the meta-pack­age, not @tanstack/start-*).

What the mal­ware does

When a de­vel­oper or CI en­vi­ron­ment runs npm in­stall, pnpm in­stall, or yarn in­stall against any af­fected ver­sion, npm re­solves the ma­li­cious op­tion­alDe­pen­den­cies en­try, fetches the or­phan pay­load com­mit from the fork net­work, runs its pre­pare life­cy­cle script, and ex­e­cutes a ~2.3 MB ob­fus­cated router_init.js smug­gled into the af­fected tar­ball. The script:

Harvests cre­den­tials from com­mon lo­ca­tions: AWS IMDS / Secrets Manager, GCP meta­data, Kubernetes ser­vice-ac­count to­kens, Vault to­kens, ~/.npmrc, GitHub to­kens (env, gh CLI, .git-credentials), SSH pri­vate keys

Exfiltrates over the Session/Oxen mes­sen­ger file-up­load net­work (filev2.getsession.org, seed{1,2,3}.get­ses­sion.org) — end-to-end en­crypted with no at­tacker-con­trolled C2, so block­ing by IP/domain is the only net­work mit­i­ga­tion

Self-propagates: enumerates other packages the victim maintains via registry.npmjs.org/-/v1/search?text=maintainer:<user> and republishes them with the same injection

Because the pay­load runs as part of npm in­stal­l’s life­cy­cle, any­one who in­stalled an af­fected ver­sion on 2026 – 05-11 must treat the in­stall host as po­ten­tially com­pro­mised.
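Since the post notes that blocking by IP/domain is the only network mitigation, one blunt local stopgap is to null-route the exfiltration hosts. A sketch (hostnames are those named in the post; adapt to your resolver or firewall):

```shell
# Generate null-route /etc/hosts entries for the exfiltration hosts named
# in the post; append them to /etc/hosts (as root) to block the upload path.
for h in filev2.getsession.org seed1.getsession.org seed2.getsession.org seed3.getsession.org; do
  printf '0.0.0.0 %s\n' "$h"
done
```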

All times UTC. Timestamps are from the GitHub API and the npm registry.

Pre-attack (cache poi­son­ing phase)

Detonation (publish phase)

Workflow run 25613093674 starts (19:15:44), and fails.

Detection and re­sponse

Formal mal­ware re­ports are sub­mit­ted via npm

Tanner be­gins npm dep­re­ca­tion process for all 84 af­fected pack­ages.

Public Twitter/X/LinkedIn/Bluesky dis­clo­sure from @tan_stack and main­tain­ers

All cache en­tries for all TanStack/* GitHub repos­i­to­ries purged via API.

Hardening PR merged: bun­dle-size.yml re­struc­tured, repos­i­to­ry_owner guards added, third-party ac­tion refs pinned to SHAs.

Official GitHub Security Advisory is pub­lished, CVE re­quested

Three vul­ner­a­bil­i­ties chained to­gether. Each is nec­es­sary for the at­tack; none alone is suf­fi­cient.

1. pull_request_target "Pwn Request" pattern in bundle-size.yml

bundle-size.yml ran pull_request_target for fork PRs and, inside that trigger context, checked out the fork's PR-merge ref and ran a build:

```yaml
on:
  pull_request_target:
    paths: ['packages/**', 'benchmarks/**']

jobs:
  benchmark-pr:
    steps:
      - uses: actions/checkout@v6.0.2
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge  # fork's merged code

      - uses: TanStack/config/.github/setup@main  # transitively calls actions/cache@v5

      - run: pnpm nx run @benchmarks/bundle-size:build  # executes fork-controlled code
```

The author of the workflow attempted a trust split (the comment-pr job is separate from benchmark-pr, with a comment in the YAML noting the intent to keep benchmark-pr "untrusted with read-only permissions"). The split is correct in spirit but missed two facts:

actions/cache@v5's post-job save is not gated by permissions:. Cache writes use a runner-internal token, not the workflow's GITHUB_TOKEN. Setting permissions: contents: read does not block cache mutation.

Cache scope is per-repo, shared across pull_request_target runs (which use the base repo's cache scope) and pushes to main. A PR running in the base repo's cache scope can poison entries that production workflows on main will later restore.

2. GitHub Actions cache poi­son­ing across trust bound­aries

The malicious vite_setup.mjs was specifically designed to write data into the pnpm-store directory under a key the legit release.yml workflow would compute and look up: Linux-pnpm-store-${hashFiles('**/pnpm-lock.yaml')}. When the benchmark-pr job ended, actions/cache@v5's post-step saved the (now-poisoned) pnpm store to that exact key. When release.yml next ran on a push to main, its Setup Tools step restored the poisoned entry — entirely as designed.

This is the class of at­tack doc­u­mented by Adnan Khan in 2024. It’s not a TanStack-specific bug; it’s a known GitHub Actions de­sign is­sue that re­quires con­scious mit­i­ga­tion.
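A conscious mitigation for this pattern might look like the following sketch. This is illustrative only, not TanStack's actual hardening diff; the guard condition, the pinned SHA, and the ref choice are placeholders:

```yaml
# Illustrative hardening sketch (not the actual TanStack diff).
on:
  pull_request_target:
    paths: ['packages/**', 'benchmarks/**']

jobs:
  benchmark-pr:
    # Guard: never run this workflow in forks of the repository itself.
    if: github.repository_owner == 'TanStack'
    permissions:
      contents: read
    steps:
      # Pin third-party actions to a full commit SHA, not a floating tag
      # (the SHA below is a placeholder).
      - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567
        with:
          # Check out the trusted base ref, never the fork's merge ref.
          ref: ${{ github.event.pull_request.base.sha }}
      # Avoid actions/cache entirely in pull_request_target jobs, or use a
      # key namespace that release workflows never read.
```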

3. OIDC token extraction from runner memory

release.yml declares id-token: write (legitimately needed for npm OIDC trusted publishing). When the poisoned pnpm store is restored on the runner, attacker-controlled binaries are on disk and get invoked during the build step. Those binaries:

Locate the GitHub Actions Runner.Worker process via /proc/*/cmdline

Read /proc/<pid>/maps and /proc/<pid>/mem to dump the work­er’s mem­ory

Extract the OIDC to­ken (which the run­ner mints lazily, in mem­ory, when id-to­ken: write is set)

Use the to­ken to au­then­ti­cate POST re­quests di­rectly to reg­istry.npmjs.org — by­pass­ing the work­flow’s Publish Packages step en­tirely

This is the same mem­ory-ex­trac­tion tech­nique (and ver­ba­tim Python script, with at­tri­bu­tion com­ment) used in the tj-ac­tions/​changed-files com­pro­mise of March 2025. The at­tacker did not in­vent novel trade­craft; they re­com­bined pub­lished re­search.

Why none alone is enough

pull_request_target alone is fine for trusted operations (labeling, comments)

Cache poi­son­ing alone (e.g., from in­side an al­ready-com­pro­mised dep) re­quires a sep­a­rate pub­lish ve­hi­cle

OIDC to­ken ex­trac­tion alone re­quires ex­ist­ing code ex­e­cu­tion on the run­ner

The chain only works be­cause each vul­ner­a­bil­ity bridges the trust bound­ary the oth­ers as­sumed: PR fork code cross­ing into base-repo cache, base-repo cache cross­ing into re­lease-work­flow run­time, and re­lease-work­flow run­time cross­ing into npm reg­istry write ac­cess.

How we found out

Detection was external. Researcher ashishkurmi of StepSecurity opened issue #7383 ~20 minutes after the publish, with a full technical analysis. Tanner received a phone call from Socket.dev moments after starting the war room, confirming the situation.

IOC fin­ger­prints (for down­stream main­tain­ers and se­cu­rity tools)

In any @tanstack/* pack­age’s man­i­fest:

```json
"optionalDependencies": {
  "@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
}
```

File: router_init.js (~2.3 MB, at the package root, not listed in "files")

Cache key: Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11

2nd-stage payload URLs: https://litter.catbox.moe/h8nc9u.js, https://litter.catbox.moe/7rrc6l.mjs

Exfiltration network: filev2.getsession.org, seed{1,2,3}.getsession.org

Forged com­mit iden­tity: claude <claude@users.noreply.github.com> (note: not the real Anthropic Claude — fab­ri­cated GitHub no-re­ply email)

Real at­tacker ac­counts: zblgg (id 127806521), voicpro­ducoes (id 269549300)

Attacker fork: github.com/zblgg/configuration (a fork of TanStack/router renamed to evade fork searches)

Orphan pay­load com­mit (in fork net­work): 79ac49eedf774dd4b0cfa308722bc463cfe5885c

Workflow runs that per­formed the ma­li­cious pub­lishes:

github.com/​TanStack/​router/​ac­tions/​runs/​25613093674 (attempt 4)

github.com/​TanStack/​router/​ac­tions/​runs/​25691781302
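For downstream maintainers, a quick local check against these fingerprints might look like this (the paths assume a standard node_modules layout; adjust to your project):

```shell
# Look for the malicious optionalDependencies pin and the smuggled payload
# file in an installed project. No output means neither IOC was found.
grep -rl "79ac49eedf774dd4b0cfa308722bc463cfe5885c" node_modules --include=package.json
find node_modules -maxdepth 3 -name "router_init.js"
```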

What went well

External re­searchers no­ticed and re­ported with full tech­ni­cal de­tail within ~20 min of the in­ci­dent

Maintainer team co­or­di­nated im­me­di­ately and ef­fec­tively across many time­zones

The de­tec­tion com­mu­nity al­ready had a clear pub­lic IOC pat­tern within hours

What could have been bet­ter

No internal alerting. We learned about the compromise from a third party; we need monitoring on our own publishes. We'll work closely with security research firms in the ecosystem that can detect these issues quickly (potentially even bringing that capability in-house) to make the feedback loop even tighter.

pull_request_target workflows had not been audited despite being a long-known dangerous pattern

Floating refs (@v6.0.2, @main) on third-party ac­tions cre­ate stand­ing sup­ply-chain risk in­de­pen­dent of this in­ci­dent

Unpublish was unavailable for nearly all affected packages because of npm's "no unpublish if dependents exist" policy. We have to rely on npm security to pull tarballs server-side, which adds hours of delay during which malicious tarballs remain installable

The 7-maintainer list on the npm scope means seven sep­a­rate cre­den­tial-theft tar­gets for the same blast ra­dius

OIDC trusted-publisher binding has no per-publish review. Once configured, any code path in the workflow can mint a publish-capable token. We need to either (a) move to short-lived classic tokens with manual review, or (b) add provenance-source verification to detect publishes from unexpected workflow steps

What we got lucky on

The at­tacker chose a pay­load that broke tests, which made the pub­lish step (which would have pro­duced cleaner-look­ing tar­balls) skip — mean­ing the at­tack was loud enough to de­tect quickly. A more care­ful at­tacker who did­n’t break tests could have pub­lished silently for hours longer

The at­tacker reused pub­lic trade­craft (verbatim mem­ory-dump script with at­tri­bu­tion com­ment) in­stead of writ­ing novel code — mak­ing the IOC-matching faster

Open questions

These need answers before we close the postmortem.

Did bundle-size.yml's Setup Tools step actually call actions/cache@v5? Verify by reading the post-job logs from one of the pull_request_target runs against PR #7378 (e.g., run id 25666610798). Tanner has access; this needs to be done manually

What was in the ini­tial PR head com­mit (before the force-pushes wiped it)? GitHub’s re­flog may have it. Check via gh api or the GitHub sup­port team

How did the ma­li­cious com­mit get into the fork’s git ob­ject store specif­i­cally — was it pushed di­rectly via git, or was it cre­ated via the GitHub web UI (which would leave au­dit-log en­tries)?

Was voicpro­ducoes a real ac­count or a sock pup­pet? Cross-reference its ac­tiv­ity his­tory

Did the npm cache also get poi­soned (the 6 du­pli­cate linux-npm-store-* en­tries)? Were any ac­tu­ally used?

Can we identify any other fork in the TanStack/router fork network that contains the orphan payload commit? (If yes, the cleanup is harder — every fork hosting it keeps it accessible via github:tanstack/router#79ac49ee…)

Are any other TanStack re­pos (router, query, table, form, vir­tual, etc.) us­ing the same bun­dle-size.yml-style pat­tern? Audit needed

How many users ac­tu­ally down­loaded the af­fected ver­sions dur­ing the pub­lish win­dow? Get from npm sup­port

Did any of the seven listed main­tain­ers’ ma­chines get com­pro­mised sep­a­rately? (None of the ma­li­cious pub­lishes used a main­tain­er’s npm to­ken, but main­tainer ma­chines could have been the sec­ondary tar­get via the self-prop­a­ga­tion logic)

Tracking is­sue: TanStack/router#7383

GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx

Related re­search:

Adnan Khan, "The Monsters in Your Build Cache: GitHub Actions Cache Poisoning" (May 2024) — adnanthekhan.com

GitHub Security Lab, "Keeping your GitHub Actions and workflows secure: Preventing pwn requests" — securitylab.github.com

StepSecurity, "Harden-Runner detection: tj-actions/changed-files action is compromised" (March 2025) — stepsecurity.io

Several npm latest releases are compromised · Issue #7383 · TanStack/router

github.com


Poland is a model for economic growth | AP News

apnews.com

POZNAN, Poland (AP) — A gen­er­a­tion ago, Poland ra­tioned sugar and flour while its cit­i­zens were paid one-tenth what West Germans earned. Today, the econ­omy of the coun­try has edged past Switzerland to be­come the world’s 20th largest with more than $1 tril­lion in an­nual out­put.

It's a historic leap from the post-Communist ruins of 1989–90 to European growth champion, which economists say has lessons on how to bring prosperity to ordinary people — and that the Trump administration says should be recognized by Poland's presence at a summit of the Group of 20 leading economies later this year.

The trans­for­ma­tion is re­flected in peo­ple like Joanna Kowalska, an en­gi­neer from Poznan, a city of around 500,000 peo­ple mid­way be­tween Berlin and Warsaw. She re­turned home af­ter five years in the U.S.

"I get asked often if I'm missing something by coming back to Poland, and, to be honest, I feel it's the other way around," Kowalska said. "We are ahead of the United States in so many areas."

Kowalska works at the Poznan Supercomputing and Networking Center, which is de­vel­op­ing the first ar­ti­fi­cial in­tel­li­gence fac­tory in Poland and in­te­grat­ing it with a quan­tum com­puter, one of 10 on the con­ti­nent fi­nanced by a European Union pro­gram.


Kowalska worked for Microsoft in the U.S. after graduating from the Poznan University of Technology, in a job she saw as "a dream come true."

Newer skyscrapers flank the communist-era Palace of Culture and Science, foreground, in Warsaw, Poland, May 25, 2018. (AP Photo/Alik Keplicz, File)

But she missed having "a sense of mission," she said.

"Especially when it comes to artificial intelligence, the technology started developing so rapidly in Poland," Kowalska said. "So it was very tempting to come back."

Breaking out of poverty

The guest invitation to the G20 summit is mostly symbolic. No guest country has been promoted to full member since the original G20 met at the finance minister level in 1999, and that would take a consensus decision of all the members. Moreover, the original countries were chosen not just by gross domestic product rank, but by their "systemic significance" in the global economy.

But the ges­ture re­flects a sta­tis­ti­cal truth: In 35 years — a lit­tle less than one per­son’s work­ing life­time — Poland’s per capita GDP rose to $55,340 in 2025, or 85% of the EU av­er­age. That’s up from $6,730 in 1990, or 38% of the EU av­er­age and now roughly equal to Japan’s $52,039, ac­cord­ing to International Monetary Fund fig­ures mea­sured in to­day’s dol­lars and ad­justed for Poland’s lower cost of liv­ing.

Poland’s econ­omy has grown an av­er­age 3.8% a year since join­ing the EU in 2004, eas­ily beat­ing the European av­er­age of 1.8%.

It was­n’t sim­ply one fac­tor that helped Poland break out of the poverty trap, says Marcin Piątkowski of Warsaw’s Kozminski University and au­thor of a book on the coun­try’s eco­nomic rise.

One of the most im­por­tant fac­tors was rapidly build­ing a strong in­sti­tu­tional frame­work for busi­ness, he said. That in­cluded in­de­pen­dent courts, an anti-mo­nop­oly agency to en­sure fair com­pe­ti­tion, and strong reg­u­la­tion to keep trou­bled banks from chok­ing off credit.

As a re­sult, the econ­omy was­n’t hi­jacked by cor­rupt prac­tices and oli­garchs, as hap­pened else­where in the post-Com­mu­nist world.

Poland also ben­e­fited from bil­lions of eu­ros in EU aid, both be­fore and af­ter it joined the bloc in 2004 and gained ac­cess to its huge sin­gle mar­ket.

Above all, there was the broad con­sen­sus, from across the po­lit­i­cal spec­trum, that Poland’s long-term goal was join­ing the EU.

"Poles knew where they were going," Piątkowski said. "Poland downloaded the institutions and the rules of the game, and even some cultural norms that the West spent 500 years developing."

As op­pres­sive as it was, com­mu­nism con­tributed by break­ing down old so­cial bar­ri­ers and open­ing higher ed­u­ca­tion to fac­tory and farm­work­ers who had no chance be­fore. A post-Com­mu­nist boom in higher ed­u­ca­tion means half of young peo­ple now have de­grees.

"Young Poles are, for instance, better educated than young Germans," Piątkowski said, but earn half what Germans do. That's "an unbeatable combination" for attracting investors, he said.

Success of an elec­tric bus com­pany

Solaris, a com­pany founded in 1996 in Poznan by Krzysztof Olszewski, is one of the lead­ing man­u­fac­tur­ers of elec­tric buses in Europe with a mar­ket share of around 15%. Its story shows one hall­mark of Poland’s suc­cess: en­tre­pre­neur­ship, or the will­ing­ness to take risks and build some­thing new.

Workers build elec­tric buses at the Solaris bus fac­tory in Poznan, Poland, Thursday, Jan. 29, 2026. (AP Photo/Pietro De Cristofaro)

Educated as an en­gi­neer un­der the Communist gov­ern­ment, Olszewski opened a car re­pair shop where he used spare parts from West Germany to fix Polish cars. While most en­ter­prises were na­tion­al­ized, au­thor­i­ties gave per­mis­sion to small-scale pri­vate work­shops like his to op­er­ate, ac­cord­ing to Katarzyna Szarzec, an econ­o­mist at the Poznan University of Economics and Business.

"These were enclaves of private entrepreneurship," she said.

In 1996, Olszewski opened a sub­sidiary of the German bus com­pany Neoplan and started pro­duc­ing for the Polish mar­ket.

"Poland's entry to the EU in 2004 gave us credibility and access to a vast, open European market with the free movement of goods, services and people," said Mateusz Figaszewski, who is responsible for institutional relations at Solaris.

Then came a risky de­ci­sion to start pro­duc­ing elec­tric buses in 2011, a time when few in Europe were ex­per­i­ment­ing with the tech­nol­ogy. Figaszewski said larger com­pa­nies in the West had more to lose if switch­ing to elec­tric ve­hi­cles did­n’t work out.

"It became an opportunity to achieve technological leadership ahead of the market," he said.

An ag­ing pop­u­la­tion

Challenges still re­main for Poland. Due to a low birth rate and an ag­ing so­ci­ety, fewer work­ers will be able to sup­port re­tirees. Average wages are lower than the EU av­er­age. While small and medium en­ter­prises flour­ish, few have be­come global brands.

Poznan Mayor Jacek Jaśkowiak sees do­mes­tic in­no­va­tion as a third wave in Poland’s post­so­cial­ist eco­nomic de­vel­op­ment. In the first wave, for­eign coun­tries opened fac­to­ries in Poland in the early 1990s, tak­ing ad­van­tage of a skilled lo­cal pop­u­la­tion.

Around the turn of the mil­len­nium, he said, Western com­pa­nies brought more ad­vanced branches, in­clud­ing fi­nance, in­for­ma­tion tech­nol­ogy and en­gi­neer­ing.

"Now it's the time to start such sophisticated activities here," Jaśkowiak says, adding that one of his main priorities is investing in universities.

"There is still much to do when it comes to innovation and technological progress," added Szarzec, the Poznan economist. "But we keep climbing up on that ladder of added value. We're no longer just a supplier of spare parts."

Szarzec’s stu­dents say more needs to be done to re­duce ur­ban-rural in­equal­i­ties, make hous­ing af­ford­able and sup­port young peo­ple start­ing fam­i­lies. They say Poles need to ac­knowl­edge that im­mi­grants, such as the mil­lions of Ukrainians who fled Russia’s full-scale in­va­sion in 2022, con­tribute to eco­nomic de­vel­op­ment in an ag­ing pop­u­la­tion.

"Poland has such a dynamic economy, with so many opportunities for development, that of course I am staying," said Kazimierz Falak, 27, one of Szarzec's graduate students. "Poland is promising."

Computer equipment at the Poznan Supercomputing and Networking Center is seen in Poznan, Poland, Wednesday, Jan. 28, 2026. (AP Photo/Pietro De Cristofaro)

___

David McHugh re­ported from Frankfurt, Germany.

How I Moved My Digital Stack to Europe

monokai.com

On dig­i­tal sov­er­eignty, and why European cloud is bet­ter than you think

April 29, 2026 · 10 min read · Digital Sovereignty, Digital Infrastructure, Digital Autonomy, European Cloud, Europe

There’s a ver­sion of this post that starts with a spread­sheet and ends with a quiet sense of sat­is­fac­tion. That’s mostly how it went. But un­der­neath the prac­ti­cal ex­er­cise of swap­ping one SaaS tool for an­other was some­thing that felt more ur­gent, a grow­ing dis­com­fort with how much of my dig­i­tal in­fra­struc­ture sat on servers I did­n’t con­trol, in a ju­ris­dic­tion in­creas­ingly prone to un­pre­dictabil­ity, op­er­ated by com­pa­nies whose in­cen­tives don’t al­ways align with mine.

Digital sov­er­eignty sounds like a buzz­word un­til you think care­fully about what it means. It means know­ing where your data lives. It means not be­ing one pol­icy change, one ac­qui­si­tion, or one ex­ec­u­tive’s bad mood away from los­ing ac­cess to tools your busi­ness de­pends on. It means choos­ing in­fra­struc­ture based on val­ues, not just con­ve­nience.

So I started mi­grat­ing.

Analytics

Google Analytics was the ob­vi­ous first tar­get. It’s the canon­i­cal ex­am­ple of a ser­vice that’s free be­cause you are the prod­uct, your vis­i­tors’ be­hav­ior fun­neled back into Google’s ad­ver­tis­ing ma­chin­ery.

Self-hosting Matomo solved this cleanly. The data stays on my own server, and I’m fully GDPR-compliant with­out the cookie con­sent the­ater that Google Analytics typ­i­cally re­quires. The re­port­ing is com­pre­hen­sive, the in­ter­face is fa­mil­iar enough, and I own every­thing.

The main down­side is main­te­nance over­head. You’re now re­spon­si­ble for up­dates, back­ups, and keep­ing the server healthy. For most se­tups this is low-fric­tion, but it’s not zero fric­tion.

Email

Proton Mail is based in Switzerland, not EU ter­ri­tory, but Swiss pri­vacy law is closely aligned with GDPR and ar­guably stronger in some re­spects. Proton builds its busi­ness model around pri­vacy rather than ad­ver­tis­ing, and end-to-end en­cryp­tion is baked in at the pro­to­col level rather than bolted on. The email client is solid, the cal­en­dar works well, and for any­one mov­ing away from US-based ser­vices, it sits com­fort­ably in the same spirit as the rest of this stack.

One ad­just­ment is get­ting used to Proton’s fil­ter sys­tem, which is a bit more lim­ited than Gmail’s. Gmail lets you write fil­ters against vir­tu­ally any­thing, in­clud­ing the full body of the mes­sage. Proton does­n’t sup­port fil­ter­ing on email con­tent at all. So if you’ve built a work­flow around catch­ing spe­cific phrases or key­words in mes­sage bod­ies, you’ll have to re­think it. For most peo­ple this won’t be a deal­breaker, but it’s worth know­ing be­fore you mi­grate.

There’s also a prac­ti­cal lim­i­ta­tion worth flag­ging: Proton caps cus­tom do­mains at three, even on the Duo plan. If you run sev­eral do­mains, like sep­a­rate ad­dresses for dif­fer­ent pro­jects or busi­nesses, you’ll hit that ceil­ing quickly and need to re­think how you route and send mail. I ended up con­sol­i­dat­ing, which was prob­a­bly over­due any­way, but it was­n’t a choice I made en­tirely freely.

Proton isn't free, and it charges a substantial fee compared to other options, though the subscription includes access to the whole suite of Proton apps.

Password Management

Once I was in the Proton ecosys­tem, mov­ing pass­word man­age­ment there as well made sense. Proton Pass is end-to-end en­crypted, open source, and ben­e­fits from the same Swiss ju­ris­dic­tion as the rest of Proton’s stack.

1Password is a gen­uinely great prod­uct and this was a lat­eral move more than an up­grade. The in­ter­face is sim­ple, the browser ex­ten­sion works re­li­ably, and hav­ing pass­words, email, and cal­en­dar un­der one en­crypted roof has a cer­tain sat­is­fy­ing co­her­ence to it.

Compute

DigitalOcean has earned its rep­u­ta­tion by do­ing one thing ex­cep­tion­ally well: get­ting out of your way. The UI is clean, the men­tal model is sim­ple, and spin­ning up in­fra­struc­ture never feels like a chore. It’s the plat­form that proved de­vel­oper ex­pe­ri­ence could be a com­pet­i­tive moat.

Scaleway was a pleas­ant sur­prise. I ex­pected a ca­pa­ble-but-rough European al­ter­na­tive, but what I found was a plat­form that’s gen­uinely well thought out. Servers spun up quickly in­side a pri­vate net­work of my own con­fig­u­ra­tion, the con­trol panel is clean, and the op­tions avail­able matched every­thing I ac­tu­ally needed. Scaleway dis­plays pro­jected CO₂ emis­sions along­side server lo­ca­tion choices, a nice touch.

Object Storage

Scaleway's object storage is S3-compatible, which makes migration mechanical rather than painful: update your endpoint and credentials, and existing code works unchanged.
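In practice the swap is confined to client construction. A minimal sketch, assuming boto3 and Scaleway's fr-par region (the provider names and region are illustrative; credentials would come from your environment):

```python
# Sketch of the only part of an S3 integration that changes when moving
# from AWS to an S3-compatible provider: the client configuration.
# Region and endpoint are illustrative (Scaleway's Paris region).

def s3_client_kwargs(provider: str) -> dict:
    """Return the kwargs you would pass to boto3.client("s3", **kwargs)."""
    if provider == "aws":
        # Default AWS endpoint; no endpoint_url override needed.
        return {"region_name": "us-east-1"}
    if provider == "scaleway":
        return {
            "region_name": "fr-par",
            "endpoint_url": "https://s3.fr-par.scw.cloud",
        }
    raise ValueError(f"unknown provider: {provider}")

# Usage (requires boto3 and credentials in the environment):
#   import boto3
#   s3 = boto3.client("s3", **s3_client_kwargs("scaleway"))
#   s3.list_buckets()
```

Everything downstream of client construction — uploads, presigned URLs, bucket listings — stays untouched, which is the whole point of S3 compatibility.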

I used a tool called rclone to sync my old AWS S3 stor­age buck­ets to the new Scaleway S3 buck­ets. This took a lit­tle more than a week of con­stant sync­ing, as these buck­ets were quite large.

Offsite Backups

OVH is the largest European cloud provider and brings the re­li­a­bil­ity and pric­ing you’d ex­pect at that scale. Their ob­ject stor­age works well as a backup des­ti­na­tion and ends up cheaper than Backblaze B2 once you con­fig­ure life­cy­cle rules to move older back­ups to the cold stor­age class.

Getting there, how­ever, re­quires some pa­tience. The OVHcloud con­trol panel is a labyrinth: the life­cy­cle rule con­fig­u­ra­tion is buried some­where in the doc­u­men­ta­tion, and it in­volves some work in the ter­mi­nal. Once it’s set up, it works re­li­ably and the cost dif­fer­ence is mean­ing­ful.
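For reference, the lifecycle rule itself is a standard S3 lifecycle configuration document, whatever the control panel hides. A sketch of the shape — the storage-class name, prefix, and day thresholds below are placeholders, not verified OVH values, so check their S3 documentation for the exact class names your region supports:

```python
# Sketch of an S3 lifecycle configuration that moves backups to a
# colder storage class after 30 days and expires them after a year.
# "COLD_ARCHIVE", "backups/", and the day counts are placeholders.

lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "COLD_ARCHIVE"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# Applied with boto3 against an OVH S3 endpoint (credentials required):
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-backups", LifecycleConfiguration=lifecycle_config)
```

Keeping this document in version control also sidesteps the labyrinthine panel next time the rule needs a tweak.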

Transactional Emails

Lettermint is a European trans­ac­tional email ser­vice that does the job with­out the bloat. Deliverability is solid, the API is clean, and it has straight­for­ward pric­ing.

Compared to SendGrid, the an­a­lyt­ics are leaner and the ecosys­tem in­te­gra­tions are fewer. SendGrid has years of tool­ing, doc­u­men­ta­tion, and com­mu­nity an­swers be­hind it. Lettermint is newer and smaller. For most trans­ac­tional send­ing use cases (password re­sets, no­ti­fi­ca­tions, re­ceipts) that does­n’t mat­ter much. But if you’re do­ing com­plex multi-stream email in­fra­struc­ture, you’ll want to au­dit the fea­ture set care­fully first.

Error Tracking

Bugsink is a self-hosted error tracking tool that accepts Sentry's SDK, which means the migration path is almost frictionless: change one line of configuration and you're done.

To be honest: Bugsink is bare-bones. There's no performance monitoring, no session replays, no advanced alerting. It's not a Sentry replacement for teams that use Sentry properly. For me, it's a simple remote error log: when something breaks in production I get a stack trace, and that's enough. Sentry's cloud product is genuinely excellent if you need the full feature set, and for larger engineering teams the breadth almost certainly justifies the cost. But if your use case is "tell me when something broke and show me the stack trace", self-hosted Bugsink does exactly that with no data leaving your infrastructure.
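The one line in question is the DSN handed to the SDK at startup; only its host changes between Sentry's cloud and a self-hosted instance. A sketch with made-up DSNs (neither is a real credential):

```python
# The only change when pointing a Sentry SDK at self-hosted Bugsink is
# the DSN. Both DSNs below are made-up examples, not real credentials.
from urllib.parse import urlparse

SENTRY_CLOUD_DSN = "https://abc123@o0.ingest.sentry.io/1234"
SELF_HOSTED_DSN = "https://abc123@bugsink.example.com/1"

def dsn_host(dsn: str) -> str:
    """Extract the ingest host from a Sentry-style DSN."""
    return urlparse(dsn).hostname

# In application code the switch is literally one line:
#   import sentry_sdk
#   sentry_sdk.init(dsn=SELF_HOSTED_DSN)  # was: dsn=SENTRY_CLOUD_DSN
```

Everything else — exception capture, breadcrumbs, release tags — keeps working because the SDK speaks the same protocol to either backend.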

AI API in­te­gra­tion

For my AI API in­te­gra­tions, I switched from OpenAI to Mistral. It worked out per­fectly as I was mostly us­ing sim­pler mod­els any­way.

Mistral is head­quar­tered in Paris and has pub­lished com­pelling open-weight mod­els along­side its API of­fer­ing. The API is clean, the mod­els are fast and ca­pa­ble, and there’s some­thing co­her­ent about a European AI provider that leans into open­ness rather than away from it. For my in­fer­ence work­loads, the switch was lat­eral in qual­ity and mean­ing­fully bet­ter in terms of where the money goes.
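Mistral's chat API uses the familiar chat-completions request shape, which is what makes the switch mostly mechanical. A sketch of the request body — the model name is illustrative, and actually sending it requires an API key:

```python
# Sketch of a chat request against Mistral's API. The JSON shape
# mirrors the common chat-completions format; the model name is
# illustrative, not a recommendation.
import json

MISTRAL_CHAT_URL = "https://api.mistral.ai/v1/chat/completions"

def chat_payload(prompt: str, model: str = "mistral-small-latest") -> str:
    """Serialize a minimal chat-completions request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# POSTed with an "Authorization: Bearer <API key>" header, e.g. via
# urllib.request or the official mistralai client library.
```

Because the payload shape matches what most OpenAI-style integrations already emit, the migration is largely a matter of changing the base URL, the key, and the model name.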

CDN

Exception № 1

Not every­thing moved. Cloudflare is a US com­pany, I still use it, and I’m at peace with that.

Here’s the rea­son­ing: Cloudflare sits in front of my pub­lic-fac­ing web­sites. Its job is to cache, pro­tect against DDoS at­tacks, and make con­tent load fast for vis­i­tors around the world. The data flow­ing through it is al­ready pub­lic by de­f­i­n­i­tion. I’m not rout­ing pri­vate com­mu­ni­ca­tions or sen­si­tive ap­pli­ca­tion data through Cloudflare; I’m us­ing it to serve pages that any­one on the in­ter­net can read. The sov­er­eignty cal­cu­lus is dif­fer­ent when the thing you’re pro­tect­ing is al­ready pub­lic.

I did try Bunny CDN, which is European-based and has a great rep­u­ta­tion. For straight­for­ward CDN use it’s ex­cel­lent. But Cloudflare’s fea­ture set (security rules, Workers plat­form, breadth of con­fig­u­ra­tion op­tions) was­n’t matched closely enough to jus­tify the switch for my spe­cific needs. Sometimes the prag­matic an­swer wins.

Payments

Exception № 2

Stripe is one of the few services I haven't moved yet, even though payment infrastructure is exactly the kind of thing I care about having in a jurisdiction I trust. The likely replacement is Mollie, a Dutch payment processor with full EU incorporation, strong GDPR compliance by design, and a product that has matured considerably in recent years. The API has converged toward parity with Stripe for most common payment flows, and for a European business the regional payment method coverage (iDEAL, Bancontact, SEPA) is arguably better.

The migration is on the list. It's just not a trivial one. Payment integrations touch billing logic, webhooks, tax invoicing and customer-facing flows in ways that require careful testing and a good moment to cut over. Mollie is also more expensive than Stripe for my use case.

AI Code as­sis­tance

Exception № 3

This one felt over­due. OpenAI works fine, but the com­pa­ny’s tra­jec­tory does­n’t align with my own views any­more. After a pe­riod of de­lib­er­ate drift, I felt the need to switch. Ideally I wanted to use Mistral Vibe here, but it just did­n’t make the cut as it could­n’t com­pete with Claude.

Claude Code is now my day-to-day AI as­sis­tant for cod­ing. The rea­son­ing qual­ity is strong, the con­text han­dling is gen­uinely im­pres­sive, and Anthropic’s ap­proach to safety and trans­parency feels more struc­turally grounded.

Anthropic is a US com­pany, so this does­n’t sat­isfy the ju­ris­dic­tional cri­te­rion I ap­plied else­where. But it sat­is­fies some­thing else, the sense that the or­ga­ni­za­tion build­ing the thing has given se­ri­ous thought to what it’s build­ing and why.

It’s also worth not­ing that lo­cal mod­els are be­com­ing in­creas­ingly vi­able. Qwen, Alibaba’s open-weight model fam­ily, is a strong ex­am­ple: ca­pa­ble enough for many real work­loads, run­ning en­tirely on your own hard­ware, with no data leav­ing your ma­chine. The gap be­tween fron­tier API mod­els and what you can run lo­cally is nar­row­ing faster than most peo­ple re­al­ize.

Not every­thing is ideal. Most data cen­ters still sit out­side Europe, and open” means dif­fer­ent things to dif­fer­ent or­ga­ni­za­tions. But the di­rec­tion is right. A world where ca­pa­ble AI runs on your own hard­ware, with pub­lished weights and trans­par­ent train­ing, is a much bet­ter world for dig­i­tal au­ton­omy than one where all in­fer­ence routes through a hand­ful of closed API providers. We’re not there yet, but the tra­jec­tory is en­cour­ag­ing.

Git Version Control

Exception № 4

GitLab also re­mains for now. GitLab is head­quar­tered in the US but of­fers self-hosted op­tions, and the com­pany has long had a strong com­mit­ment to trans­parency and open source. A self-hosted in­stance is on the roadmap, but mov­ing source con­trol is a more sig­nif­i­cant un­der­tak­ing than most of these mi­gra­tions.

GitHub stays in the pic­ture for one spe­cific pur­pose: pub­lic-fac­ing NPM pack­ages and is­sue track­ing for open source soft­ware. When you pub­lish a pack­age or main­tain pub­lic tool­ing, GitHub is where de­vel­op­ers ex­pect to find it. The net­work ef­fects are real, it’s where the forks, stars, and is­sue re­ports come from. For the pub­lic-fac­ing sur­face of open source work, there’s no mean­ing­ful sov­er­eignty con­cern and a lot of prac­ti­cal up­side.

Was it worth it?

The prac­ti­cal fric­tion was real but man­age­able. Most mi­gra­tions were an af­ter­noon of work: up­date a cre­den­tial here, point a DNS record there, ex­port and im­port some data. A few took longer. None were cat­a­strophic. All in all it took longer than ex­pected, but most time was spent in re­search­ing and plan­ning when to do what. Two months in, every­thing is run­ning with­out in­ci­dent. No fires, no re­grets.

Digital sov­er­eignty is­n’t about para­noia. It’s about be­ing con­scious about your in­fra­struc­ture, where you de­cide who holds your data, who can reach it, and what hap­pens when pol­i­tics shift. The tools are there. The ecosys­tem is mostly ma­ture. The only thing that was stop­ping me was in­er­tia. It’s en­tirely pos­si­ble to run a re­li­able, ca­pa­ble, pro­fes­sional dig­i­tal stack mostly from European in­fra­struc­ture. This mi­gra­tion was proof of that.

Linux gaming is getting faster because Windows APIs are becoming Linux kernel features

www.xda-developers.com

Published May 10, 2026, 12:30 PM EDT

His love of PCs and their com­po­nents was born out of try­ing to squeeze every ounce of per­for­mance out of the fam­ily com­puter. Tinkering with his own build at age 10 turned into build­ing PCs for friends and fam­ily, fos­ter­ing a pas­sion that would ul­ti­mately take shape as a ca­reer path.

Besides be­ing the first call for tech sup­port for those close to him, Ty is a com­puter sci­ence stu­dent, with his fo­cus be­ing cloud com­put­ing and net­work­ing. He also com­peted in semi-pro Counter-Strike for 8 years, mak­ing him in­ti­mately fa­mil­iar with every­thing to do with pe­riph­er­als.


In March 2026, Linux crossed five per­cent of Steam’s user base for the first time, an all-time high for an op­er­at­ing sys­tem that spent two decades as a nov­elty when it came to any kind of gam­ing. Microsoft’s end-of-sup­port dead­line for Windows 10 last October pushed many users to look at al­ter­na­tives, and the Steam Deck has qui­etly turned mil­lions of peo­ple into Linux gamers with­out them re­ally think­ing about it, lead­ing to more wide­spread adop­tion on desk­top ma­chines.

Most of that progress used to hap­pen in­side a piece of soft­ware called Wine, the trans­la­tion layer that con­vinces Windows games they’re run­ning on Windows. Valve’s tuned ver­sion of Wine, called Proton, is what makes Steam Play and the Steam Deck work. For years, every mean­ing­ful im­prove­ment to Linux gam­ing came from changes to Wine and Proton them­selves. That’s still true, but in­creas­ingly the most im­por­tant changes are hap­pen­ing one layer deeper, in­side the Linux ker­nel. The lat­est ex­am­ple of that is some­thing called NTSYNC, a ker­nel-level dri­ver that has of­fered great per­for­mance gains over pre­vi­ous ver­sions of Wine, and is loaded by de­fault on every Steam Deck that’s up-to-date.


What NTSYNC ac­tu­ally is

An ad­di­tional piece of the per­for­mance puz­zle

NTSYNC is a small driver added directly to the Linux kernel that gives it a native implementation of the Windows-specific synchronization primitives games depend on to coordinate their work.

Modern games jug­gle dozens of things at once. While you’re play­ing, your CPU man­ages the ren­der­ing pipeline, load­ing as­sets, run­ning physics, pro­cess­ing au­dio, han­dling AI NPC rou­tines, and track­ing in­puts, all in par­al­lel across mul­ti­ple cores. All those jobs con­stantly have to co­or­di­nate so they don’t trip over each other.
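As a loose analogy (this is not how a game engine is written, just the shape of the problem), here is the kind of cross-thread signaling at stake, sketched with Python's threading primitives:

```python
# Loose analogy only: two workers coordinating through an event, the
# category of primitive (event objects, mutexes, semaphores) that
# Windows games use constantly and that NTSYNC now models natively.
import threading

assets_ready = threading.Event()
frames = []

def loader():
    # ... load textures, meshes, audio ...
    assets_ready.set()  # signal: safe to start rendering

def renderer():
    assets_ready.wait()  # block until the loader signals readiness
    frames.append("frame 0")

t1 = threading.Thread(target=renderer)
t2 = threading.Thread(target=loader)
t1.start(); t2.start()
t1.join(); t2.join()
```

Windows exposes richer versions of these primitives (waitable event objects, mutexes, semaphores, and multi-object waits), and it's exactly those semantics that Wine previously had to fake.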


Windows handles this coordination with a specific set of mechanisms, and before NTSYNC, Wine had to mimic them using things like esync and fsync, which both worked but didn't always match Windows exactly. NTSYNC builds these mechanisms straight into the Linux kernel for the first time, which means Wine doesn't have to emulate anything anymore. The developer-facing API calls don't change; Linux just knows how to answer them natively.
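The driver surfaces as a character device, so checking whether your kernel ships it takes one line (a small sketch; /dev/ntsync is the device node the driver exposes when loaded):

```python
# Check whether the NTSYNC driver is available on this machine.
# The driver exposes a /dev/ntsync character device when loaded.
import os

def ntsync_available(device: str = "/dev/ntsync") -> bool:
    return os.path.exists(device)

if __name__ == "__main__":
    state = "available" if ntsync_available() else "not loaded"
    print(f"ntsync: {state}")
```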


NTSYNC is part of a grow­ing pat­tern

Not the first time Linux has in­her­ited fea­tures be­cause of Windows

NTSYNC isn't the first time Linux has gained a new feature specifically because Windows games needed it. A few years back, Linux added a way for software to wait on several events at once (the futex_waitv() system call), something Windows had built in for decades but Linux didn't. Wine had been working around the gap with awkward tricks until the kernel finally got native support.

This work is driven by Valve, by CodeWeavers (the company that employs many of the core Wine developers, including NTSYNC's author Elizabeth Figura), and by a steady stream of contributors who want Linux to be a real gaming platform without depending on out-of-tree patches forever.


These aren’t mag­i­cal per­for­mance gains

fsync was al­ready pretty good

The headline performance gains look great, but they need some context. The eye-catching 40 to 200 percent FPS gains cited in NTSYNC's original benchmarks were measured against unmodified upstream Wine, which almost nobody uses to play games on Linux anymore. Most Linux gamers, including every Steam Deck owner, use Proton, which already has fsync. Compared to fsync, NTSYNC's gains are far more modest. The games that benefit most from the switch to NTSYNC are the ones that were really struggling before; anything that already ran at decent framerates will keep running fine.


Valve adopted it any­way

It’s a great sign

Pierre-Loup Griffais, an engineer at Valve, has gone on record saying that fsync was already fast enough. Despite that, Valve still shipped NTSYNC in stable SteamOS in March, which speaks to the fact that fsync is still a workaround at its core and can cause issues beyond poor raw FPS.

These old workarounds got sub­tle edge cases wrong in ways that pro­duced oc­ca­sional hitches, dead­locks, or weird be­hav­ior in spe­cific games, which are bugs that don’t show up on bench­mark charts but can ab­solutely ruin in­di­vid­ual ex­pe­ri­ences. NTSYNC fixes those at the source by match­ing Windows be­hav­ior ex­actly, and that means as soon as your fa­vorite dis­tro moves to the new ker­nel ver­sion, whether it be Bazzite, CachyOS, Fedora, or a fla­vor of Ubuntu, they all get this much-needed fix.


Gaming on Linux con­tin­ues to im­prove by the month

Linux has grown so much in the gam­ing de­part­ment. Where there once was noth­ing but clever Wine patches and com­mu­nity workarounds now lies sup­port from gam­ing be­he­moths like Valve, dri­ving changes to the Linux ker­nel it­self. NTSYNC won’t be the last time a piece of Windows gets re­built in­side Linux be­cause gamers needed it, and with more than five per­cent of Steam’s user base now run­ning Linux, the in­cen­tive to keep do­ing it has never been stronger.

I'm going back to writing code by hand

blog.k10s.dev

09 May, 2026

This dev-log is get­ting a lot of at­ten­tion on HN (scary!): HN Thread.

To those who are coming here from HN: this started as an investigation, or rather a question: "How far can I get with building a piece of software while keeping myself completely out of the loop?" The tl;dr of this dev log is that I still need to be in the loop to make anything meaningful. Takeaways:

"em-dash" is to AI writing what "god-object" is to AI coding

vibe-cod­ing makes every­thing feel cheap and you may end up los­ing fo­cus and build­ing bloat

let a hu­man (you) write the ar­chi­tec­ture and don’t just keep ask­ing it for fea­tures

And some AGENTS.md/CLAUDE.md directives that I felt helped me stay out of the loop a little more

Human intervention is still needed as of 10/05/2026. You can totally go back now! [3]

I

Here is k10s: https://github.com/shvbsle/k10s/tree/archive/go-v0.4.0

234 com­mits. ~30 week­ends. Built en­tirely on vibe-coded ses­sions with Claude, when­ever my to­kens lasted long enough to ship some­thing.

I’m archiv­ing my TUI tool and rewrit­ing it from scratch.

k10s started as a GPU-aware Kubernetes dash­board (and my first foray into build­ing some­thing se­ri­ous with AI). Think k9s but built for the peo­ple run­ning NVIDIA clus­ters, peo­ple who ac­tu­ally care about GPU uti­liza­tion, DCGM met­rics, and which nodes are sit­ting idle burn­ing $32/hr. I built it in Go with Bubble Tea [1] and it worked.

For a while… :(

What I learned over these 7 months is worth more than the 1690 lines of model.go I'm throwing away. And I think anyone doing serious vibe-coding can benefit from it, because this part doesn't surface much (I feel it gets buried under the demo reels and the velocity wins).

tl;dr: AI writes fea­tures, not ar­chi­tec­ture. The longer you let it drive with­out con­straints, the worse the wreck­age gets. The ve­loc­ity makes you think you’re win­ning right up un­til the mo­ment every­thing col­lapses si­mul­ta­ne­ously.

II

vibe cod­ing high

I started k10s in late September 2025. The first few weeks were magic. I'd prompt Claude with "add a pods view with live updates" and boom, it worked. Resource list views, namespace filtering, log streaming, describe panels, keyboard navigation. Each feature landed clean because the project was small enough that the AI could hold the whole thing in context.

The ba­sic k9s clone took maybe 3 week­ends. Resource views for pods, nodes, de­ploy­ments, ser­vices. A com­mand palette. Watch-based live up­dates. Vim key­bind­ings. All work­ing, all vibe-coded in sin­gle ses­sions. I was build­ing at maybe 10x my nor­mal speed and it felt in­cred­i­ble.

Then I wanted the main sell­ing point.

The whole rea­son k10s ex­ists is the GPU fleet view. A ded­i­cated screen that shows you every node’s GPU al­lo­ca­tion, uti­liza­tion from DCGM, tem­per­a­ture, power draw, mem­ory. Not buried in kubectl de­scribe node out­put, but right there in a pur­pose-built table with color-coded sta­tus. Idle nodes in yel­low. Busy in green. Saturated in red.

And Claude one-shot it. I prompted for the fleet view, it gen­er­ated the FleetView struct, the tab fil­ter­ing (GPU/CPU/All), the cus­tom ren­der­ing with al­lo­ca­tion bars. It looked beau­ti­ful. I was rid­ing the high.

Then I typed :rs pods to switch back to the pods view.

Nothing ren­dered. The table was empty. Live up­dates had stopped. I switched to nodes, it showed stale data from the fleet view’s fil­ter. I went back to fleet, the tab counts were wrong.

The god ob­ject had con­sumed it­self.

This is the ti­tle of the blog post. This is where I in­ter­vened for the first time. For 7 months I’d been prompt­ing and ship­ping with­out ever sit­ting down and ac­tu­ally read­ing the code Claude wrote. I’d look at the diff, ver­ify it com­piled, test the happy path, move on. But now some­thing was fun­da­men­tally bro­ken and I could­n’t just prompt my way out of it.

So I sat down and read model.go. All 1690 lines. I was hor­ri­fied.

Here’s what it looked like. One struct to rule them all:

```go
type Model struct {
	// 3rd party UI components
	table        table.Model
	paginator    paginator.Model
	commandInput textinput.Model
	help         help.Model

	// cluster info and state
	k8sClient         *k8s.Client
	currentGVR        schema.GroupVersionResource
	resourceWatcher   watch.Interface
	resources         []k8s.OrderedResourceFields
	listOptions       metav1.ListOptions
	clusterInfo       *k8s.ClusterInfo
	logLines          []k8s.LogLine
	describeContent   string
	currentNamespace  string
	navigationHistory *NavigationHistory
	logView           *LogViewState
	describeView      *DescribeViewState
	viewMode          ViewMode
	viewWidth         int
	viewHeight        int
	err               error
	pluginRegistry    *plugins.Registry
	helpModal         *HelpModal
	describeViewport  *DescribeViewport
	logViewport       *LogViewport
	logStreamCancel   func()
	logLinesChan      <-chan k8s.LogLine
	horizontalOffset  int
	mouse             *MouseHandler
	fleetView         *FleetView
	creationTimes     []time.Time
	allResources      []k8s.OrderedResourceFields // fleet's unfiltered set
	allCreationTimes  []time.Time                 // fleet's timestamps
	rawObjects        []unstructured.Unstructured
	ageColumnIndex    int
	// …
}
```

UI wid­gets. K8s client. Per-view state for logs, de­scribe, fleet. Navigation his­tory. Caching. Mouse han­dling. All in one struct. And the Update() method was a 500-line func­tion dis­patch­ing on msg.(type) with 110 switch/​case branches.

This is the mo­ment I stopped vibe-cod­ing and started think­ing.

III

five tenets from the wreck­age

Here’s what I ex­tracted from 7 months of watch­ing AI gen­er­ate a code­base that slowly ate it­self. Each of these is some­thing I did wrong, why it hap­pens with AI-assisted cod­ing, and what you should ac­tu­ally put in your CLAUDE.md or agents.md to pre­vent it.

Tenet 1: AI builds fea­tures, not ar­chi­tec­ture.

Every time I prompted Claude for a feature, it delivered. Perfectly. The fleet view worked on the first try. Log streaming worked. Mouse support worked. The problem is that each feature was implemented in the context of "make this work right now", without any awareness of the 49 other features sharing the same state.

Here’s what the re­sources­Load­edMsg han­dler looks like. This is the code that runs every time you switch views:

```go
case resourcesLoadedMsg:
	m.logLines = nil       // Clear log lines when loading resources
	m.horizontalOffset = 0 // Reset horizontal scroll on resource change

	if m.currentGVR != msg.gvr && m.resourceWatcher != nil {
		m.resourceWatcher.Stop()
		m.resourceWatcher = nil
	}
	m.currentGVR = msg.gvr
	m.currentNamespace = msg.namespace
	m.listOptions = msg.listOptions
	m.rawObjects = msg.rawObjects

	// For nodes: store the full unfiltered set, classify, then filter
	if msg.gvr.Resource == k8s.ResourceNodes && m.fleetView != nil {
		m.allResources = msg.resources
		m.allCreationTimes = msg.creationTimes
		if len(msg.rawObjects) > 0 {
			m.fleetView.ClassifyAndCount(m.rawObjectPtrs())
		}
		m.applyFleetFilter()
	} else {
		m.resources = msg.resources
		m.creationTimes = msg.creationTimes
		m.allResources = nil
		m.allCreationTimes = nil
	}
```

See the `if msg.gvr.Resource == k8s.ResourceNodes && m.fleetView != nil` conditional? That's the fleet view being special-cased inside the generic resource loading path. Every new view that needed custom behavior got another branch here. And every branch needed to manually clear the right combination of fields, or the previous view's data would bleed through.

How many = nil cleanup lines ex­ist in this file? I counted:

```go
m.logLines = nil     // Clear log lines when loading resources
m.allResources = nil // Clear fleet data when not on nodes
m.resources = nil    // Clear resources when loading logs
m.resources = nil    // Clear resources when loading describe view
m.logLines = nil     // Clear log lines when loading describe view
m.resources = nil    // Clear resources when loading yaml view
m.logLines = nil     // Clear log lines when loading yaml view
m.logLines = nil     // … two more in other handlers
m.logLines = nil
```

Nine man­ual nil as­sign­ments scat­tered across a 1690-line file. Miss one and you get ghost data from the pre­vi­ous view. This is what hap­pens when there’s no view iso­la­tion. AI can’t see this pat­tern de­cay­ing over time be­cause each prompt only touches one code path.

What to do in­stead: Write the ar­chi­tec­ture your­self be­fore any code. Not a vague de­sign doc. A con­crete set of in­ter­faces, mes­sage types, and own­er­ship rules. Then put those rules in your CLAUDE.md so the AI sees them on every prompt:

```markdown
# Architecture Invariants (CLAUDE.md)

- Each view implements the View trait. Views do NOT access other views' state.
- All async data arrives via AppMsg variants. No direct field mutation from background tasks.
- Adding a new view MUST NOT require modifying existing views.
- The App struct is a thin router. It owns navigation and message dispatch. Nothing else.
```

The AI will fol­low these if you write them down. It just won’t in­vent them for you.
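As a concrete sketch of those invariants in Go, here is roughly what "app as thin router, views own their state" can look like. All names here (View, LogsView, logLinesMsg, App) are illustrative, not the real k10s types:

```go
package main

import "fmt"

// Msg is anything a background task can send; tasks never mutate
// view fields directly.
type Msg interface{}

// View is the isolation boundary: each view owns its state and
// handles its own messages.
type View interface {
	Update(msg Msg)
	Render() string
}

// logLinesMsg is an example async-data message for the logs view.
type logLinesMsg []string

type LogsView struct{ lines []string }

func (v *LogsView) Update(msg Msg) {
	if m, ok := msg.(logLinesMsg); ok {
		v.lines = append(v.lines, m...)
	}
}
func (v *LogsView) Render() string { return fmt.Sprintf("logs: %d lines", len(v.lines)) }

type PodsView struct{ rows []string }

func (v *PodsView) Update(msg Msg)  {}
func (v *PodsView) Render() string { return fmt.Sprintf("pods: %d rows", len(v.rows)) }

// App is a thin router: it owns navigation and dispatch, nothing else.
type App struct {
	views  map[string]View
	active string
}

func (a *App) Switch(name string) { a.active = name }
func (a *App) Update(msg Msg)     { a.views[a.active].Update(msg) }
func (a *App) Render() string     { return a.views[a.active].Render() }

func main() {
	app := &App{views: map[string]View{"logs": &LogsView{}, "pods": &PodsView{}}, active: "logs"}
	app.Update(logLinesMsg{"line 1", "line 2"})
	fmt.Println(app.Render()) // the logs view holds its own lines
	app.Switch("pods")
	fmt.Println(app.Render()) // nothing bled through from logs
}
```

Notice what's absent: switching views needs no `m.x = nil` cleanup, because no view can see another view's fields in the first place.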

Tenet 2: The god ob­ject is the de­fault AI ar­ti­fact.

AI grav­i­tates to­ward sin­gle-struct-holds-every­thing be­cause it sat­is­fies the im­me­di­ate prompt with min­i­mal cer­e­mony. But it gets worse. Because there’s no view iso­la­tion, key han­dling be­comes a night­mare. Here’s the ac­tual key dis­patch for the s key:

```go
case m.config.KeyBind.For(config.ActionToggleAutoScroll, key):
	if m.currentGVR.Resource == k8s.ResourceLogs {
		m.logView.Autoscroll = !m.logView.Autoscroll
		if m.logView.Autoscroll {
			m.table.GotoBottom()
		}
		return m, nil
	}
	// Shell exec for pods and containers views
	if m.currentGVR.Resource == k8s.ResourcePods {
		// … 20 lines to look up selected pod, get name, namespace …
		return m, m.commandWithPreflights(
			m.execIntoPod(selectedName, selectedNamespace),
			m.requireConnection,
		)
	}
	if m.currentGVR.Resource == k8s.ResourceContainers {
		// … container exec logic …
		return m, m.commandWithPreflights(m.execIntoContainer(), m.requireConnection)
	}
	return m, nil
```

One keybinding. Three completely different behaviors depending on which view you're in. The `s` key means "autoscroll" in logs, "shell" in pods, and "shell into container" in containers. This is all in one flat switch because there are no per-view key maps. The AI generated this because I said "add shell support for pods" and it found the nearest key handler and jammed it in.

And look at how Enter works. This is the drill-down han­dler:

```go
case m.config.KeyBind.For(config.ActionSubmit, key):
	// Special handling for contexts view
	if m.currentGVR.Resource == "contexts" {
		// … 12 lines …
		return m, m.executeCtxCommand([]string{contextName})
	}
	// Special handling for namespaces view
	if m.currentGVR.Resource == "namespaces" {
		// … 12 lines …
		return m, m.executeNsCommand([]string{namespaceName})
	}
	if m.currentGVR.Resource == k8s.ResourceLogs {
		return m, nil
	}
	// … 25 more lines of generic drill-down …
```

Every view is a conditional in a flat dispatch. There are 20+ occurrences of `m.currentGVR.Resource ==` used as a type discriminator in this single file. Not types. String comparisons. Every new view means touching every handler.

What to do in­stead: Put this in your CLAUDE.md:

```markdown
# State Ownership Rules

- NEVER add fields to the App/Model struct for view-specific state.
- Each view is a separate struct implementing the View trait/interface.
- Each view declares its own key bindings. The app dispatches keys to the active view.
- If you need to add a keybinding, add it to the relevant view's keymap, not a global one.
- Adding a view means adding a file. If your change requires modifying existing views, stop and ask.
```

The AI will al­ways take the short­est path (“add an­other if-branch”). Your job is to make the short­est path also the cor­rect path by putting guardrails in the file it reads on every in­vo­ca­tion.
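Per-view keymaps are simpler than they sound. A minimal Go sketch (names are hypothetical, not the real k10s code): the router looks up the pressed key in the active view's own map, so `s` can mean "autoscroll" in logs and "shell" in pods without a flat 500-line switch.

```go
package main

import "fmt"

// Action is what a keybinding does; here it just returns a label.
type Action func() string

// ViewKeys bundles a view with the key bindings it declares itself.
type ViewKeys struct {
	Name string
	Keys map[string]Action
}

// KeyRouter dispatches keys to the active view's keymap only.
type KeyRouter struct{ Active *ViewKeys }

func (r *KeyRouter) Dispatch(key string) string {
	if act, ok := r.Active.Keys[key]; ok {
		return act()
	}
	return "" // unknown key: ignored, no global fallthrough
}

func NewLogsKeys() *ViewKeys {
	return &ViewKeys{Name: "logs", Keys: map[string]Action{
		"s": func() string { return "toggle autoscroll" },
	}}
}

func NewPodsKeys() *ViewKeys {
	return &ViewKeys{Name: "pods", Keys: map[string]Action{
		"s": func() string { return "shell into pod" },
	}}
}

func main() {
	router := &KeyRouter{Active: NewLogsKeys()}
	fmt.Println(router.Dispatch("s")) // in logs, "s" toggles autoscroll
	router.Active = NewPodsKeys()
	fmt.Println(router.Dispatch("s")) // in pods, the same key shells in
}
```

The conflict between Autoscroll and Shell disappears because each binding is scoped to its view, and you can read a view's file to know exactly what its keys do.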

Tenet 3: Velocity il­lu­sion widens your scope.

This one’s psy­cho­log­i­cal, not tech­ni­cal, and I think it’s the most dan­ger­ous.

When I started k10s, I wanted a GPU-focused tool. For people running training clusters. A niche audience that I'm part of. But vibe-coding made everything feel cheap. "Oh, I can add a pods view in one session? Let me add deployments too. And services. And a full command palette. And mouse support. And contexts. And namespaces."

Suddenly I was build­ing k9s. A gen­eral-pur­pose Kubernetes TUI. For every­one. Because the AI made it feel like each fea­ture was free.

It was­n’t free. Each fea­ture was an­other branch in the god ob­ject. Here’s the key­bind­ing struct:

```go
type keyMap struct {
	Up, Down, Left, Right key.Binding
	GotoTop, GotoBottom   key.Binding
	AllNS, DefaultNS      key.Binding
	Enter, Back           key.Binding
	Command, Quit         key.Binding
	Fullscreen            key.Binding // log view
	Autoscroll            key.Binding // log view (also shell in pods!)
	ToggleTime            key.Binding // log view
	WrapText              key.Binding // log + describe view
	CopyLogs              key.Binding // log view
	ToggleLineNums        key.Binding // describe view
	Describe              key.Binding // resource views
	YamlView              key.Binding // resource views
	Edit                  key.Binding // resource views
	Shell                 key.Binding // pods (CONFLICTS with Autoscroll!)
	FilterLogs            key.Binding // log view
	FleetTabNext          key.Binding // fleet view only
	FleetTabPrev          key.Binding // fleet view only
}
```

One flat keymap for all views. Comments in parens show which view each binding applies to. Autoscroll and Shell are both `s`. This "works" because the dispatch checks `m.currentGVR.Resource` before acting. But it means you can't reason about keybindings locally; you have to trace through the entire 500-line Update function to know what a key does.

The complexity was accumulating invisibly while the velocity metric said "you're shipping!"

What to do in­stead: Write a vi­sion doc that ex­plic­itly says who you’re NOT build­ing for, and put the scope bound­ary in your CLAUDE.md:

```markdown
# Scope (do NOT expand beyond this)

k10s is for GPU cluster operators. Not all Kubernetes users.
Supported views: fleet, node-detail, gpu-detail, workload. That's it.
Do NOT add generic resource views (pods, deployments, services).
Do NOT add features that duplicate k9s functionality.
If a feature request doesn't serve someone running GPU training jobs, reject it.
```

Vibe-coding makes you feel like you have in­fi­nite im­ple­men­ta­tion bud­get. You don’t. You have in­fi­nite LINE bud­get (the AI will gen­er­ate as much code as you want). But you have the same fi­nite com­plex­ity bud­get as al­ways. The ar­chi­tec­ture can only sup­port so many fea­tures be­fore it buck­les, re­gard­less of how fast you wrote them. The CLAUDE.md scope sec­tion is you say­ing no in ad­vance, be­fore the ve­loc­ity high con­vinces you to say yes.

Tenet 4: Positional data is a time bomb.

Every re­source in k10s was fetched from the Kubernetes API and im­me­di­ately flat­tened:

```go
type OrderedResourceFields []string
```

Column iden­tity was purely po­si­tional. Here’s the sort func­tion for the fleet view. Look at the in­dex ac­cess:

```go
func sortFilteredResources(rows []k8s.OrderedResourceFields, times []time.Time, tab FleetTab) {
	sort.SliceStable(indices, func(a, b int) bool {
		ra := rows[indices[a]]
		rb := rows[indices[b]]

		switch tab {
		case FleetTabGPU:
			// Sort by Alloc column (index 3) ascending
			allocA, allocB := "", ""
			if len(ra) > 3 {
				allocA = ra[3]
			}
			if len(rb) > 3 {
				allocB = rb[3]
			}
			return allocA < allocB

		case FleetTabCPU:
			// Sort by Name column (index 0) ascending
			nameA, nameB := "", ""
			if len(ra) > 0 {
				nameA = ra[0]
			}
			if len(rb) > 0 {
				nameB = rb[0]
			}
			return nameA < nameB

		case FleetTabAll:
			// GPU nodes first, then CPU nodes.
			// Within GPU: sort by Alloc (index 3).
			// Within CPU: sort by Name (index 0).
			computeA, computeB := "", ""
			if len(ra) > 2 {
				computeA = ra[2]
			}
			if len(rb) > 2 {
				computeB = rb[2]
			}
			aIsGPU := strings.HasPrefix(computeA, "gpu")
			bIsGPU := strings.HasPrefix(computeB, "gpu")
			// …
		}
	})
}
```

`ra[3]` is Alloc. `ra[2]` is Compute. `ra[0]` is Name. These are magic numbers. The only thing connecting index 3 to "Alloc" is a comment and the column order defined in resource.views.json:

```json
{
  "nodes": {
    "fields": [
      { "name": "Name",     "weight": 0.28 },
      { "name": "Instance", "weight": 0.15 },
      { "name": "Compute",  "weight": 0.12 },
      { "name": "Alloc",    "weight": 0.12 },
      …
    ]
  }
}
```

Add a col­umn be­tween Instance and Compute? Every sort, every con­di­tional ren­der, every place that says ra[2] or ra[3] is now silently wrong. The com­piler can’t help you be­cause it’s all []string. And the JSON con­fig can’t ex­press sort be­hav­ior, con­di­tional ren­der­ing, or cus­tom drill tar­gets, so those live in Go code that hard­codes the po­si­tional as­sump­tions from the JSON.

AI generates this pattern because it's the shortest path from "fetch data" to "render table." A `[]string` satisfies any table widget immediately. Typed structs require more ceremony upfront. So the AI picks the fast path, and six months later you're debugging why sort puts "Name" values in the "Alloc" column.

What to do in­stead: Put this di­rec­tive in your CLAUDE.md:

```markdown
# Data Representation

- NEVER flatten structured data into []string, Vec<String>, or positional arrays.
- All data flows as typed structs (FleetNode, PodInfo, etc.) until the render() call.
- Column identity comes from struct field names, not array indices.
- Sort functions operate on typed fields, never on positional access like row[3].
- The ONLY place strings are created for display is inside render()/view() functions.
```

Then your typed struct makes im­pos­si­ble states im­pos­si­ble [2]:

```rust
struct FleetNode {
    name: String,
    instance_type: String,
    compute_class: ComputeClass,
    alloc: GpuAlloc,
}
```

You can’t sort by the wrong col­umn when columns are named fields. You can’t ac­ci­den­tally com­pare Alloc strings as names. The com­piler en­forces this for you. AI will al­ways pick Vec<String> be­cause it sat­is­fies the prompt faster. Your CLAUDE.md makes the typed path the path of least re­sis­tance.
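In Go, the language the original k10s was written in, the same idea might look like this sketch. FleetNode, AllocPct, and sortByAlloc are illustrative names, not the real k10s code; the point is that a sort reads a named field, so there is no `row[3]` to drift out of sync with the column order:

```go
package main

import (
	"fmt"
	"sort"
)

// FleetNode is a hypothetical typed replacement for a []string row:
// column identity lives in field names, not array positions.
type FleetNode struct {
	Name     string
	Instance string
	Compute  string
	AllocPct int
}

// sortByAlloc orders nodes by GPU allocation, ascending. Adding or
// reordering struct fields cannot silently break this comparison.
func sortByAlloc(nodes []FleetNode) {
	sort.SliceStable(nodes, func(a, b int) bool {
		return nodes[a].AllocPct < nodes[b].AllocPct
	})
}

func main() {
	nodes := []FleetNode{
		{Name: "gpu-node-b", Compute: "gpu", AllocPct: 80},
		{Name: "gpu-node-a", Compute: "gpu", AllocPct: 10},
	}
	sortByAlloc(nodes)
	fmt.Println(nodes[0].Name) // least-allocated node comes first
}
```

Rendering then becomes the one place that turns `FleetNode` fields into display strings, instead of every handler juggling indices.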

Tenet 5: AI does­n’t own state tran­si­tions.

The Bubble Tea ar­chi­tec­ture has a beau­ti­ful idea: Update() is the only place state mu­tates, dri­ven by mes­sages. But k10s vi­o­lated this. The up­dateTableMsg han­dler spawned a clo­sure that mu­tated Model fields from in­side a gor­ou­tine:

```go
case updateTableMsg:
	return m, func() tea.Msg {
		// block on someone sending the update message.
		<-m.updateTableChan
		// Preserve cursor position across column/row updates so that
		// background refreshes don't reset the user's selection.
		savedCursor := max(m.table.Cursor(), 0)
		// run the necessary table view update calls.
		m.updateColumns(m.viewWidth)
		m.updateTableData()
		// Restore cursor, clamped to valid range.
		rowCount := len(m.table.Rows())
		if rowCount > 0 {
			if savedCursor >= rowCount {
				savedCursor = rowCount - 1
			}
			m.table.SetCursor(savedCursor)
		}
		return updateTableMsg{}
	}
```

This returned function (a `tea.Cmd`) is executed by Bubble Tea in a separate goroutine. It calls `m.updateColumns(m.viewWidth)` and `m.updateTableData()`, which read and write `m.resources`, `m.table`, and `m.viewWidth`. Meanwhile, `View()` is called on the main goroutine, reading the same fields. There's no lock. No mutex. The channel receive `<-m.updateTableChan` blocks the goroutine until someone sends an update signal, but nothing prevents `View()` from reading half-written state.

This is a text­book data race. It worked 99% of the time. Corrupted the dis­play 1% of the time in ways that made me think I was go­ing in­sane.

AI generates this because "just mutate it in the closure" is the shortest path to working code. Proper message passing (send a message back to `Update()`, let `Update()` apply the mutation atomically on the main loop) requires more types, more plumbing. The AI is optimizing for the prompt, not for correctness under concurrency.

What to do in­stead: All mu­ta­tions to ren­der-vis­i­ble state hap­pen on the main loop. Period. Background work­ers pro­duce data. They send it as a mes­sage. The main loop re­ceives the mes­sage and ap­plies it. This is the one rule you can­not break in con­cur­rent UI code.

```rust
// Background task:
tx.send(AppMsg::FleetData(nodes)).await;

// Main loop:
match msg {
    AppMsg::FleetData(nodes) => {
        self.fleet_view.update_nodes(nodes);
    }
}
```
