10 interesting stories served every morning and every evening.

Local AI Needs to be the Norm · unix.foo

unix.foo

One of the current trends in modern software is for developers to slap an API call to OpenAI or Anthropic onto features within their app. Reasonable people can quibble over whether those features actually bring value to users, but what I want to discuss is the fundamental concept of taking on a dependency on a cloud-hosted AI model for applications.

This laziness is creating a generation of software that is fragile, privacy-invading, and fundamentally broken. We are building applications that stop working the moment a server crashes or a credit card expires.

We need to return to the habit of building software where our local devices do the work. The silicon in our pocket is mind-bogglingly faster than what was available a decade ago. It has a dedicated Neural Engine sitting there, mostly idle, while we wait for a JSON response from a server farm in Virginia. That's ridiculous.

Even if your intentions are pure, the moment you stream user content to a third-party AI provider, you've changed the nature of your product. You now have data-retention questions and all the baggage that comes with them (consent, audit, breach, government requests, training, etc.).

On top of that, you've substantially complicated your stack, because your feature now depends on network conditions, external vendor uptime, rate limits, account billing, and your own backend health.

Congratulations! You took a UX feature and turned it into a distributed system that costs you money.

If the feature can be done locally, opting into this mess is self-inflicted damage.

"AI everywhere" is not the goal. Useful software is the goal.

Concrete Example: Brutalist Report's On-Device Summaries

Years ago I launched a fun side project named The Brutalist Report, a news aggregator service inspired by the 1990s-style web.

Recently, I decided to build a native iOS client for it with the design goal of ensuring it would remain a high-density news reading experience: headlines in a stark list, a reader mode that strips the cancer that has overtaken the web, and (optionally) an "intelligence" view that generates a summary of the article.

Here's the key point though: the summary is generated on-device using Apple's local model APIs. No server detours. No prompt or user logs. No vendor account. No "we store your content for 30 days" footnotes needed.

It has become normal for folks to assume that any AI use happens server-side. We have a lot of work to do to turn this around as an industry.

It's not lost on me that some use cases demand the intelligence only a cloud-hosted model can provide, but that's not true of every problem you're trying to solve. We need to be thoughtful here.

Available Tooling

I can only speak to the tooling available within the Apple ecosystem, since that's where I focused initial development efforts. In the last year, Apple has invested heavily here to let developers make use of a built-in local AI model easily.

The core flow looks roughly like this:

import FoundationModels

let model = SystemLanguageModel.default
guard model.availability == .available else { return }

let session = LanguageModelSession {
    """
    Provide a brutalist, information-dense summary in Markdown format.
    - Use **bold** for key concepts.
    - Use bullet points for facts.
    - No fluff. Just facts.
    """
}

let response = try await session.respond(options: .init(maximumResponseTokens: 1_000)) {
    articleText
}

let markdown = response.content

And for longer content, we can chunk the plain text (around 10k characters per chunk), produce concise "facts only" notes per chunk, then run a second pass to combine them into a final summary.
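That chunk-then-combine flow can be sketched in plain Swift. The function name and shape here are illustrative assumptions, not the app's actual code; each chunk would get its own "facts only" respond pass, and a final pass would merge the notes:

```swift
/// Split plain text into fixed-size pieces (default ~10k characters),
/// suitable for feeding one chunk per summarization pass.
/// Hypothetical helper for illustration; not the app's real implementation.
func chunk(_ text: String, size: Int = 10_000) -> [String] {
    var chunks: [String] = []
    var start = text.startIndex
    while start < text.endIndex {
        // Advance by `size` characters, clamping at the end of the string.
        let end = text.index(start, offsetBy: size, limitedBy: text.endIndex) ?? text.endIndex
        chunks.append(String(text[start..<end]))
        start = end
    }
    return chunks
}
```

A production version would likely split on paragraph boundaries rather than raw character counts, so chunks don't cut sentences in half.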

This is the kind of work local models are perfect for. The input data is already on the device (because the user is reading it). The output is lightweight. It's fast and private. It's okay if it's not a superhuman, PhD-level intelligence, because it's summarizing the page you just loaded, not inventing world knowledge.

Local AI shines when the model's job is transforming user-owned data, not acting as a search engine for the universe.

There are plenty of AI features that people want but don't trust: summarizing emails, extracting action items from notes, categorizing documents, and so on.

The usual cloud approach turns every one of those into a trust exercise: "Please send your data to our servers. We promise to be cool about it."

Local AI changes that: your device already has the data, and the work happens right there.

You don't build trust with your users by writing a 2,000-word privacy policy. You build trust by not needing one to begin with.

The tooling available on the platform goes even further.

One of the best moves Apple has made recently is pushing "AI output" away from unstructured blobs of text and toward typed data.

Instead of "ask the model for JSON and pray", the newer and better pattern is to define a Swift struct that represents the thing you want, give the model guidance for each field in natural language, and ask the model to generate an instance of that type.

That’s it.

Conceptually, it looks like this:

import FoundationModels

@Generable
struct ArticleIntel {
    @Guide(description: "One sentence. No hype.")
    var tldr: String

    @Guide(description: "3–7 bullets. Facts only.")
    var bullets: [String]

    @Guide(description: "Comma-separated keywords.")
    var keywords: [String]
}

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Extract structured notes from the article.",
    generating: ArticleIntel.self
) {
    articleText
}

let intel = response.content

Now your UI doesn't have to scrape bullet points out of Markdown or hope the model remembered your JSON schema. You get a real type with real fields, and you can render it consistently. It produces structured output your app can actually use. And it's all running locally!
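To make that payoff concrete, here's a framework-free sketch: a hand-written mirror of the generated type plus a renderer over its fields. Both names are hypothetical, for illustration only:

```swift
// Hypothetical mirror of the @Generable type, for illustration only.
struct ArticleIntel {
    var tldr: String
    var bullets: [String]
    var keywords: [String]
}

// Rendering is ordinary Swift over real fields: no Markdown scraping,
// no JSON parsing, so the layout can't drift from the model's output.
func renderPlainText(_ intel: ArticleIntel) -> String {
    var lines = ["TL;DR: \(intel.tldr)"]
    lines += intel.bullets.map { "- \($0)" }
    lines.append("Keywords: " + intel.keywords.joined(separator: ", "))
    return lines.joined(separator: "\n")
}
```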

This isn't just nicer ergonomics. It's an engineering improvement.

And if you're building a local-first app, this is the difference between "AI as novelty" and "AI as a trustworthy subsystem".

But "Local Models Aren't As Smart"

Correct.

But also so what?

Most app features don't need a model that can write Shakespeare, explain quantum mechanics, and pass the bar exam. They need a model that can do one of these reliably: summarize, classify, extract, rewrite, or normalize.

And for those tasks, local models can be truly excellent.

If you try to use a local model as a replacement for the entire internet, you will be disappointed. If you use it as a "data transformer" sitting inside your app, you'll wonder why you ever sent this stuff to a server.

Use cloud models only when they're genuinely necessary. Keep the user's data where it belongs. And when you do use AI, don't just bolt it on as a chat box; use it as a real subsystem with typed outputs and predictable behavior.

Stop shipping distributed systems when you meant to ship a feature.

Google Broke reCAPTCHA for De-Googled Android Users

reclaimthenet.org

Google has tied its next-generation reCAPTCHA system to Google Play Services on Android, meaning anyone running a de-Googled phone will automatically fail verification when the system decides to challenge them.

The requirement forces Android users to run Google's proprietary app framework, version 25.41.30 or higher, just to prove they're human.

When reCAPTCHA flags what it considers suspicious activity, it abandons the old image puzzles and demands you scan a QR code. That scan requires Play Services running in the background, communicating with Google's servers. If you're using GrapheneOS or any other custom ROM that strips out Google's software, the verification fails.

Google announced the broader system, Google Cloud Fraud Defense, at Cloud Next on April 23, pitching it as a trust platform designed to handle autonomous AI agents and traditional bots alike. What Google didn't emphasize was the part where proving you're human now requires submitting to its proprietary surveillance.

This wasn't sudden, either. An Internet Archive snapshot from October 2025 shows the same support page already listing a Play Services requirement at version 25.39.30. Google built this dependency quietly for at least seven months before a Reddit user on the degoogle subreddit flagged it, with reporting from PiunikaWeb and Android Authority bringing wider attention.

The iOS comparison is revealing because Apple devices running iOS 16.4 or later complete the same verification without installing any additional apps. Google didn't demand iPhone users install Google software to pass the test. Only Android users who refuse Play Services get locked out. The asymmetry reveals what this is really about: not security, but ecosystem control.

reCAPTCHA sits in front of millions of websites. When Google ties verification to Play Services, it establishes a precedent where accessing basic web content requires running Google's software and transmitting data to Google's servers.

People running de-Googled phones chose those setups because they read the data practices, understood what Play Services phones home about, and decided they didn't consent. Google's new system punishes that decision by treating the absence of its proprietary software as suspicious by default.

Web developers adopting this reCAPTCHA should understand what they're choosing. Every site that implements it tells de-Googled Android users they're not welcome. That's a small audience today. It's also the audience most likely to care about how a website treats their data, and the least likely to capitulate.

Bambu Lab is abusing the open source social contract

www.jeffgeerling.com

Last year I said I'd probably never recommend another Bambu Lab printer again.

I still use my P1S, but after Bambu Lab started pushing their always-connected cloud solution as the new default:

I blocked the printer from the Internet via my OPNsense firewall

I stopped updating the firmware

I locked the printer into Developer mode

I deleted Bambu Studio and started using OrcaSlicer

I had to do that to keep it under my control, instead of Bambu's.

But I'm weird, and I acknowledge that. I'm one of those crazy ones who likes to own something they purchased, and not have the company watch everything I do with hardware I paid for.

Bambu Lab could've left the status quo at that, and I wouldn't be writing this blog post.

But they didn't.

What happened this time?

For context: OrcaSlicer is a fork of the open source project Bambu Studio, which is a fork of PrusaSlicer, which is a fork of Slic3r. (They are all licensed under the AGPLv3 open source license.)

OrcaSlicer already has to dance around Bambu's weird default setup where every file you print goes through Bambu's servers, meaning they can see everything you ever print on your printer.

That is, unless you're like me and you run it in Developer mode, and completely block it from the Internet on old firmware.

Some people are okay with using OrcaSlicer and printing through Bambu's cloud. It's convenient if you're on the road and want to start a print on your printer at home, without managing your own VPN.

I run my own WireGuard VPN, so I don't need that, but I understand not everyone has the resources to manage their own remote access.

Bambu saw a fork of OrcaSlicer, called OrcaSlicer-bambulab, that allowed you to use all your printer's features without having to route prints through Bambu's cloud, and was like, "You know what? No. For the 0.1% of power users who want to run OrcaSlicer without the cloud delivery mechanism like we have in our AGPL-licensed Linux Bambu Studio code… no. You have to use our app, and only our app."

So they threatened that OrcaSlicer fork's developer with legal action for things that developer didn't do. For example, they indicated the fork used an "impersonation attack", despite the fork using Bambu Studio's upstream code verbatim.

These are very serious public accusations.

Bambu Lab did not write to me with these specific public claims first. They also refused my request to publish the full correspondence. Instead, they published a one-sided public statement where I cannot reply directly.

In practice, this presents me to the public as someone bypassing security, impersonating their client, and creating a risk to their infrastructure. I reject that characterization.

— OrcaSlicer-bambulabs developer's response

Bambu is abusing the open source social contract, and using their legal might, to suppress a tiny number of their users [1], for who knows what reason.

It seems dumb to me, because it would've been easier (and more profitable) to do nothing at all [2]. Instead, they wrote a blog post blaming an individual open source developer for their own infrastructure and security problems.

This is where the actual issue arises: the modification in question worked by injecting falsified identity metadata into network communication.

In simple terms: it pretended to be the official Bambu Studio client when communicating with our servers.

— Bambu Lab blog post

I don't think they understand open source culture. Or security, if a public user-agent string is their only protection against DDoS attacks…

Instead of finding solutions to ecosystem problems and building a more secure platform, Bambu is putting devoted power users like the fork's developer on blast [3].

When tensions flared last year, they wrote a similar blog post blaming community backlash on "unfortunate misinformation". I imagine they meant speculation from community members (like myself) frustrated that the whole software ecosystem and ownership model was turned upside down post-purchase.

This year they're blaming one developer of a tiny slicer fork for the potential impact he could have on their entire cloud infrastructure.

It creates structural vulnerability. If this method were widely adopted or incorrectly configured, thousands of clients could simultaneously hit our servers while impersonating the official client. Our systems would have no way to distinguish traffic, because the requests would look identical.

— Bambu Lab blog post

I love how they frame this as a developer trying to impersonate their app, when he's literally using the same AGPL-licensed code their Linux app uses.

I find it doubly ironic since their own fork caused Bambu users' telemetry to hit Prusa's servers back in 2022, and (to my knowledge) Prusa didn't snap back with a C&D.

They spent the rest of their blog post talking about vulnerabilities, bugs, and instabilities, as if that has anything to do with a developer using upstream code verbatim in his fork.

Maybe they could take a new approach and just not lock down their whole ecosystem in the first place.

But who am I kidding? Nothing I say, and no amount of complaining in the comments below, seems to help Bambu see the fault in their ways.

Spending a little more for a printer from another company just might do it, though.

Louis Rossmann posted a video saying he'd pledge $10,000 to help the open source dev fight Bambu's legal threats. And I'd happily chip in too, but that's only useful if the dev wants to put himself back in Bambu's crosshairs.

The better play might just be to skip Bambu altogether.

1. The OrcaSlicer fork in question didn't seem to have much uptake outside of a very small subset of users prior to Bambu Lab's cease-and-desist order. ↩︎

2. Maybe ask for the fork to not include "bambulabs" in the name, since that could be a reasonable trademark-related demand. ↩︎

3. The fork's developer mentioned: "I previously helped Bambu Studio users with Linux and Wayland issues, including on Bambu Lab's own GitHub. That makes it especially absurd to me that I am now being publicly presented as someone dangerous to their infrastructure." ↩︎


Building For The Future

blog.cloudflare.com

2026-05-07

3 min read

This afternoon, we sent the following email to our global team. One of our core values at Cloudflare is transparency, and we believe it's important that you hear this directly from us because it's a major moment at Cloudflare.

Team:

We are writing to let you know directly that we've made the decision to reduce Cloudflare's workforce by more than 1,100 employees globally.

The way we work at Cloudflare has fundamentally changed. We don't just build and sell AI tools and platforms. We are our own most demanding customer. Cloudflare's usage of AI has increased by more than 600% in the last three months alone. Employees across the company, from engineering to HR to finance to marketing, run thousands of AI agent sessions each day to get their work done. That means we have to be intentional in how we architect our company for the agentic AI era in order to supercharge the value we deliver to our customers and to honor our mission to help build a better Internet for everyone, everywhere.

Today is a hard day. This decision unfortunately means saying goodbye to teammates who have contributed meaningfully to our mission and to building Cloudflare into one of the world's most successful companies. We want to be clear that this decision is not a reflection of the individual work or talent of those leaving us. Instead, we are reimagining every internal process, team, and role across the company. Today's actions are not a cost-cutting exercise or an assessment of individuals' performance; they are about Cloudflare defining how a world-class, high-growth company operates and creates value in the agentic AI era.

This is a moment we need to own as founders and leaders of the company. Matthew has personally sent out every offer letter we've extended. It is a practice he has always looked forward to because it represented our growth and the incredible talent joining our mission. It didn't feel right for this message to come from anyone other than the two of us. Rather than trickling out notices through managers, we will be sending emails to every employee.

Within the next hour, every member of our global team will receive an email from both of us clarifying how this change affects them. For those departing today, we will send this update to both their personal and Cloudflare addresses to ensure they receive the information immediately.

It's important to us that we treat departing team members right and in a way that exceeds what we've seen from other companies. We believe acting with empathy isn't about avoiding hard decisions but rather about how you treat people when those decisions are made. If we are asking our team to be world-class, we have a reciprocal obligation to be world-class in how we treat them. We are pairing the directness of these measures with severance packages that lead the industry. The packages for departing employees will include the equivalent of their full base pay through the end of 2026. Healthcare coverage is different across the globe, and if you're in the United States, we'll continue to provide support through the end of the year. We are also vesting equity for departing team members through August 15th, so they receive stock beyond their departure date. And, if departing team members haven't hit their one-year cliffs, we are going to waive those and vest their pro-rated equity through August as well.

We've asked the team to do this only once, as hard as that may be today. We don't want to do it again for the foreseeable future. By taking decisive action now, we provide immediate clarity to those departing and protect the stability of the team that remains. We are making these changes now because making smaller, repeated cuts or dragging a reorganization out over multiple quarters creates prolonged emotional uncertainty for employees and stalls our ability to build. It's the right thing to do; it's the honest thing to do; and it reflects the values of the company we are continuing to build.

Cloudflare started as a digitally native company built in the cloud. That allowed us to catch up to and pass companies that had a head start of years or decades but were slowed down by outdated systems and processes. As we've now become the leader, we cannot rest on the workflows and organizational structures that worked yesterday. We're confident that our reshaped organization will be even faster and more innovative as we continue building the future.

To those departing us: you've helped build the strong foundation Cloudflare stands on today. We have the utmost respect for your work and gratitude for the impact you have made. We're confident you will land at other great places and build many future great companies, bringing with you a unique set of skills learned while building Cloudflare.

Transparency is a core principle at Cloudflare, and it was important that you hear this from us first. We will be heading to our earnings conference call at 2 PM PT, when we'll share more. We also plan to address today's announcements live with the team at our all-hands meeting.

It's not an easy day, but it's the right decision. Our mission to help build a better Internet is more important now than ever, and there's a lot of work left to be done.

Postmortem: TanStack npm supply-chain compromise | TanStack Blog

tanstack.com

by Tanner Linsley on May 11, 2026.

Last updated: 2026-05-11

On 2026-05-11, between 19:20 and 19:26 UTC, an attacker published 84 malicious versions across 42 @tanstack/* npm packages by combining: the pull_request_target "Pwn Request" pattern, GitHub Actions cache poisoning across the fork↔base trust boundary, and runtime memory extraction of an OIDC token from the GitHub Actions runner process. No npm tokens were stolen, and the npm publish workflow itself was not compromised.

The malicious versions were detected publicly within 20 minutes by an external researcher, ashishkurmi, working for StepSecurity. All affected versions have been deprecated; npm security has been engaged to pull tarballs from the registry. We have no evidence of npm credentials being stolen, but we strongly recommend that anyone who installed an affected version on 2026-05-11 rotate AWS, GCP, Kubernetes, Vault, GitHub, npm, and SSH credentials reachable from the install host.

Tracking issue: TanStack/router#7383. GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx.

Packages af­fected

42 pack­ages, 84 ver­sions (two per pack­age, pub­lished roughly 6 min­utes apart). See the track­ing is­sue for the full table. Confirmed-clean fam­i­lies: @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, @tanstack/store, @tanstack/start (the meta-pack­age, not @tanstack/start-*).

What the mal­ware does

When a de­vel­oper or CI en­vi­ron­ment runs npm in­stall, pnpm in­stall, or yarn in­stall against any af­fected ver­sion, npm re­solves the ma­li­cious op­tion­alDe­pen­den­cies en­try, fetches the or­phan pay­load com­mit from the fork net­work, runs its pre­pare life­cy­cle script, and ex­e­cutes a ~2.3 MB ob­fus­cated router_init.js smug­gled into the af­fected tar­ball. The script:

Harvests cre­den­tials from com­mon lo­ca­tions: AWS IMDS / Secrets Manager, GCP meta­data, Kubernetes ser­vice-ac­count to­kens, Vault to­kens, ~/.npmrc, GitHub to­kens (env, gh CLI, .git-credentials), SSH pri­vate keys

Exfiltrates over the Session/Oxen mes­sen­ger file-up­load net­work (filev2.getsession.org, seed{1,2,3}.get­ses­sion.org) — end-to-end en­crypted with no at­tacker-con­trolled C2, so block­ing by IP/domain is the only net­work mit­i­ga­tion

Self-propagates: enu­mer­ates other pack­ages the vic­tim main­tains via reg­istry.npmjs.org/-/​v1/​search?text=main­tainer:<user> and re­pub­lishes them with the same in­jec­tion

Because the payload runs as part of npm install's lifecycle, anyone who installed an affected version on 2026-05-11 must treat the install host as potentially compromised.
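For quick triage, the injected dependency described above can be matched mechanically against a parsed package.json. A minimal sketch; the function name and sample manifests are mine, not from any official tooling:

```python
import re

# IOC from this incident: the injected optionalDependencies entry points a
# fake "@tanstack/setup" package at an orphan commit in the fork network.
PAYLOAD_COMMIT = "79ac49eedf774dd4b0cfa308722bc463cfe5885c"
GIT_DEP_RE = re.compile(r"^github:tanstack/router#([0-9a-f]{40})$")

def manifest_is_compromised(manifest: dict) -> bool:
    """Return True if a parsed package.json carries the injected dependency."""
    for name, spec in manifest.get("optionalDependencies", {}).items():
        match = GIT_DEP_RE.match(spec)
        if name == "@tanstack/setup" or (match and match.group(1) == PAYLOAD_COMMIT):
            return True
    return False

clean = {"name": "@tanstack/react-router", "optionalDependencies": {}}
bad = {
    "name": "@tanstack/react-router",
    "optionalDependencies": {
        "@tanstack/setup": "github:tanstack/router#" + PAYLOAD_COMMIT
    },
}
print(manifest_is_compromised(clean), manifest_is_compromised(bad))  # False True
```

The same check should be run against lockfiles, since the entry may only appear in a transitive @tanstack/* dependency rather than your own manifest.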

All times UTC. Timestamps are taken from the GitHub API and the npm registry.

Pre-attack (cache poi­son­ing phase)

Detonation (publish phase)

Workflow run 25613093674 starts (19:15:44) and fails.

Detection and re­sponse

Formal mal­ware re­ports are sub­mit­ted via npm

Tanner be­gins npm dep­re­ca­tion process for all 84 af­fected pack­ages.

Public Twitter/X/LinkedIn/Bluesky dis­clo­sure from @tan_stack and main­tain­ers

All cache en­tries for all TanStack/* GitHub repos­i­to­ries purged via API.

Hardening PR merged: bun­dle-size.yml re­struc­tured, repos­i­to­ry_owner guards added, third-party ac­tion refs pinned to SHAs.

Official GitHub Security Advisory is pub­lished, CVE re­quested

Three vul­ner­a­bil­i­ties chained to­gether. Each is nec­es­sary for the at­tack; none alone is suf­fi­cient.

1. pull_request_target "Pwn Request" pattern in bundle-size.yml

bun­dle-size.yml ran pul­l_re­quest_­tar­get for fork PRs and, in­side that trig­ger con­text, checked out the fork’s PR-merge ref and ran a build:

```yaml
on:
  pull_request_target:
    paths: ['packages/**', 'benchmarks/**']

jobs:
  benchmark-pr:
    steps:
      - uses: actions/checkout@v6.0.2
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge  # fork's merged code

      - uses: TanStack/config/.github/setup@main  # transitively calls actions/cache@v5

      - run: pnpm nx run @benchmarks/bundle-size:build  # executes fork-controlled code
```

The author of the workflow attempted a trust split (the comment-pr job is separate from benchmark-pr, with a comment in the YAML noting the intent to keep benchmark-pr "untrusted with read-only permissions"). The split is correct in spirit but missed two facts:

actions/cache@v5's post-job save is not gated by permissions:. Cache writes use a runner-internal token, not the workflow GITHUB_TOKEN. Setting permissions: contents: read does not block cache mutation.

Cache scope is per-repo, shared across pul­l_re­quest_­tar­get runs (which use the base re­po’s cache scope) and pushes to main. A PR run­ning in the base re­po’s cache scope can poi­son en­tries that pro­duc­tion work­flows on main will later re­store.

2. GitHub Actions cache poi­son­ing across trust bound­aries

The malicious vite_setup.mjs was specifically designed to write data into the pnpm-store directory under a key the legit release.yml workflow would compute and look up: Linux-pnpm-store-${hashFiles('**/pnpm-lock.yaml')}. When the benchmark-pr job ended, actions/cache@v5's post-step saved the (now-poisoned) pnpm store to that exact key. When release.yml next ran on a push to main, its Setup Tools step restored the poisoned entry — entirely as designed.
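The collision is easy to see if you model the key computation. A rough Python approximation of how both workflows end up at the same cache key (GitHub's real hashFiles() hashes each matched file and then the concatenated digests; the exact algorithm differs in detail and doesn't matter for the point):

```python
import hashlib

def pnpm_store_key(lockfile_contents: list[bytes]) -> str:
    """Rough model of the Linux-pnpm-store-${hashFiles('**/pnpm-lock.yaml')}
    cache key: hash each matched file, then hash the concatenated digests."""
    outer = hashlib.sha256()
    for content in lockfile_contents:
        outer.update(hashlib.sha256(content).digest())
    return "Linux-pnpm-store-" + outer.hexdigest()

lock = b"lockfileVersion: '9.0'\n"
pr_key = pnpm_store_key([lock])       # computed in the fork-controlled benchmark-pr job
release_key = pnpm_store_key([lock])  # computed later by release.yml on main
print(pr_key == release_key)  # True: same lockfile, same key, shared cache scope
```

Because the key depends only on the lockfile and both runs share the base repo's cache scope, whichever job saves first wins, and release.yml faithfully restores it.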

This is the class of at­tack doc­u­mented by Adnan Khan in 2024. It’s not a TanStack-specific bug; it’s a known GitHub Actions de­sign is­sue that re­quires con­scious mit­i­ga­tion.

re­lease.yml de­clares id-to­ken: write (legitimately needed for npm OIDC trusted pub­lish­ing). When the poi­soned pnpm store is re­stored on the run­ner, at­tacker-con­trolled bi­na­ries are now on disk and get in­voked dur­ing the build step. Those bi­na­ries:

Locate the GitHub Actions Runner.Worker process via /proc/*/cmdline

Read /proc/<pid>/maps and /proc/<pid>/mem to dump the work­er’s mem­ory

Extract the OIDC to­ken (which the run­ner mints lazily, in mem­ory, when id-to­ken: write is set)

Use the to­ken to au­then­ti­cate POST re­quests di­rectly to reg­istry.npmjs.org — by­pass­ing the work­flow’s Publish Packages step en­tirely

This is the same mem­ory-ex­trac­tion tech­nique (and ver­ba­tim Python script, with at­tri­bu­tion com­ment) used in the tj-ac­tions/​changed-files com­pro­mise of March 2025. The at­tacker did not in­vent novel trade­craft; they re­com­bined pub­lished re­search.
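Since GitHub's OIDC tokens are JWTs, the extraction step reduces to scanning the dumped memory for dot-separated base64url segments beginning with "eyJ" (the encoding of a JSON object's opening brace). A simplified, self-contained illustration of that pattern match — not the attacker's actual script, and the fake dump below is constructed for the example:

```python
import base64
import json
import re

# JWT shape: three base64url segments separated by dots, header first.
JWT_RE = re.compile(rb"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

def find_jwts(memory: bytes) -> list[bytes]:
    """Scan a raw byte buffer (e.g. a /proc/<pid>/mem dump) for JWT-shaped strings."""
    return JWT_RE.findall(memory)

# Build a fake "memory dump" containing one JWT-shaped token amid noise.
header = base64.urlsafe_b64encode(json.dumps({"alg": "RS256"}).encode()).rstrip(b"=")
payload = base64.urlsafe_b64encode(b'{"aud":"npm"}').rstrip(b"=")
fake_dump = b"\x00heap noise\x00" + header + b"." + payload + b".fakesig" + b"\x00"
print(find_jwts(fake_dump))
```

The real mitigation is upstream: without id-token: write, the runner never mints the token, so there is nothing in memory to find.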

Why none alone is enough

pul­l_re­quest_­tar­get alone is fine for trusted op­er­a­tions (labeling, com­ments)

Cache poi­son­ing alone (e.g., from in­side an al­ready-com­pro­mised dep) re­quires a sep­a­rate pub­lish ve­hi­cle

OIDC to­ken ex­trac­tion alone re­quires ex­ist­ing code ex­e­cu­tion on the run­ner

The chain only works be­cause each vul­ner­a­bil­ity bridges the trust bound­ary the oth­ers as­sumed: PR fork code cross­ing into base-repo cache, base-repo cache cross­ing into re­lease-work­flow run­time, and re­lease-work­flow run­time cross­ing into npm reg­istry write ac­cess.

How we found out

Detection was external. Researcher ashishkurmi of StepSecurity opened issue #7383 about 20 minutes after the publish, with a full technical analysis. Tanner received a phone call from Socket.dev moments after starting the war room, confirming the situation.

IOC fin­ger­prints (for down­stream main­tain­ers and se­cu­rity tools)

In any @tanstack/* pack­age’s man­i­fest:

```json
"optionalDependencies": {
  "@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
}
```

File: router_init.js (~2.3 MB, package root, not in "files")

Cache key: Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11

2nd-stage pay­load URLs: https://​lit­ter.cat­box.moe/​h8nc9u.js, https://​lit­ter.cat­box.moe/​7r­rc6l.mjs

Exfiltration net­work: filev2.get­ses­sion.org, seed{1,2,3}.get­ses­sion.org

Forged com­mit iden­tity: claude <claude@users.noreply.github.com> (note: not the real Anthropic Claude — fab­ri­cated GitHub no-re­ply email)

Real at­tacker ac­counts: zblgg (id 127806521), voicpro­ducoes (id 269549300)

Attacker fork: github.com/​zblgg/​con­fig­u­ra­tion (fork of TanStack/router re­named to evade fork searches)

Orphan pay­load com­mit (in fork net­work): 79ac49eedf774dd4b0cfa308722bc463cfe5885c

Workflow runs that per­formed the ma­li­cious pub­lishes:

github.com/​TanStack/​router/​ac­tions/​runs/​25613093674 (attempt 4)

github.com/​TanStack/​router/​ac­tions/​runs/​25691781302
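For network-level detection, the exfiltration hosts above are the only available signal, since there is no attacker-controlled C2. A minimal sketch of an egress check against that list (the function name is mine; a real deployment would hook this into a DNS or proxy log pipeline):

```python
# Network IOCs from this incident: the Session/Oxen file-upload endpoints
# the payload exfiltrates through.
EXFIL_HOSTS = {
    "filev2.getsession.org",
    "seed1.getsession.org",
    "seed2.getsession.org",
    "seed3.getsession.org",
}

def is_exfil_host(hostname: str) -> bool:
    """Flag an outbound connection to a known exfiltration endpoint."""
    return hostname.lower().rstrip(".") in EXFIL_HOSTS

print(is_exfil_host("filev2.getsession.org"))  # True
print(is_exfil_host("registry.npmjs.org"))     # False
```

Note that blocking these hosts only stops exfiltration, not the credential harvesting itself; rotation is still required on any host that installed an affected version.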

What went well

External re­searchers no­ticed and re­ported with full tech­ni­cal de­tail within ~20 min of the in­ci­dent

Maintainer team co­or­di­nated im­me­di­ately and ef­fec­tively across many time­zones

The de­tec­tion com­mu­nity al­ready had a clear pub­lic IOC pat­tern within hours

What could have been bet­ter

No internal alerting. We learned about the compromise from a third party; we need monitoring on our own publishes. We will work closely with security research firms in the ecosystem that can detect these issues quickly, potentially build that capability in-house, and make the feedback loop even tighter.

pul­l_re­quest_­tar­get work­flows had not been au­dited de­spite be­ing a long-known dan­ger­ous pat­tern

Floating refs (@v6.0.2, @main) on third-party ac­tions cre­ate stand­ing sup­ply-chain risk in­de­pen­dent of this in­ci­dent

Unpublish was unavailable for nearly all affected packages because of npm's "no unpublish if dependents exist" policy. We had to rely on npm security to pull tarballs server-side, which adds hours of delay during which malicious tarballs remain installable

The 7-maintainer list on the npm scope means seven sep­a­rate cre­den­tial-theft tar­gets for the same blast ra­dius

OIDC trusted-publisher binding has no per-publish review. Once configured, any code path in the workflow can mint a publish-capable token. We need to either (a) move to short-lived classic tokens with manual review, or (b) add provenance-source verification to detect publishes from unexpected workflow steps
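On the floating-refs point above: pinning can be audited mechanically, because a ref is pinned only when it is a full 40-character commit SHA. A rough sketch of such a check (the regexes are approximate, real workflows have more `uses:` forms than this handles, and the SHA in the sample is illustrative):

```python
import re

# Match "uses: owner/repo[/path]@ref" lines in workflow YAML text.
USES_RE = re.compile(r"^\s*-?\s*uses:\s*([\w./-]+)@(\S+)", re.MULTILINE)
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def floating_refs(workflow_yaml: str) -> list[str]:
    """Return 'action@ref' strings whose ref is not a 40-char commit SHA."""
    out = []
    for action, ref in USES_RE.findall(workflow_yaml):
        if action.startswith("./"):  # local actions carry no remote ref
            continue
        if not SHA_RE.match(ref):
            out.append(f"{action}@{ref}")
    return out

snippet = """
    steps:
      - uses: actions/checkout@v6.0.2
      - uses: TanStack/config/.github/setup@main
      - uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830
"""
print(floating_refs(snippet))
```

A check like this belongs in CI itself, so a PR that reintroduces a tag or branch ref fails review automatically.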

What we got lucky on

The at­tacker chose a pay­load that broke tests, which made the pub­lish step (which would have pro­duced cleaner-look­ing tar­balls) skip — mean­ing the at­tack was loud enough to de­tect quickly. A more care­ful at­tacker who did­n’t break tests could have pub­lished silently for hours longer

The at­tacker reused pub­lic trade­craft (verbatim mem­ory-dump script with at­tri­bu­tion com­ment) in­stead of writ­ing novel code — mak­ing the IOC-matching faster

Open questions

These need answers before we close the postmortem.

Did bundle-size.yml's Setup Tools step actually call actions/cache@v5? Verify by reading the post-job logs from one of the pull_request_target runs against PR #7378 (e.g., run id 25666610798). Tanner has access; this needs to be done manually

What was in the ini­tial PR head com­mit (before the force-pushes wiped it)? GitHub’s re­flog may have it. Check via gh api or the GitHub sup­port team

How did the ma­li­cious com­mit get into the fork’s git ob­ject store specif­i­cally — was it pushed di­rectly via git, or was it cre­ated via the GitHub web UI (which would leave au­dit-log en­tries)?

Was voicpro­ducoes a real ac­count or a sock pup­pet? Cross-reference its ac­tiv­ity his­tory

Did the npm cache also get poi­soned (the 6 du­pli­cate linux-npm-store-* en­tries)? Were any ac­tu­ally used?

Can we iden­tify any other fork in the TanStack/router fork net­work that con­tains the or­phan pay­load com­mit? (If yes, the cleanup is harder — every fork host­ing it keeps it ac­ces­si­ble via github:tanstack/​router#79ac49ee…)

Are any other TanStack re­pos (router, query, table, form, vir­tual, etc.) us­ing the same bun­dle-size.yml-style pat­tern? Audit needed

How many users ac­tu­ally down­loaded the af­fected ver­sions dur­ing the pub­lish win­dow? Get from npm sup­port

Did any of the seven listed main­tain­ers’ ma­chines get com­pro­mised sep­a­rately? (None of the ma­li­cious pub­lishes used a main­tain­er’s npm to­ken, but main­tainer ma­chines could have been the sec­ondary tar­get via the self-prop­a­ga­tion logic)

Tracking is­sue: TanStack/router#7383

GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx

Related re­search:

Adnan Khan, "The Monsters in Your Build Cache: GitHub Actions Cache Poisoning" (May 2024) — adnanthekhan.com

GitHub Security Lab, "Keeping your GitHub Actions and workflows secure: Preventing pwn requests" — securitylab.github.com

StepSecurity, "Harden-Runner detection: tj-actions/changed-files action is compromised" (March 2025) — stepsecurity.io

Several npm latest releases are compromised · Issue #7383 · TanStack/router

github.com


Poland is a model for economic growth | AP News

apnews.com

POZNAN, Poland (AP) — A gen­er­a­tion ago, Poland ra­tioned sugar and flour while its cit­i­zens were paid one-tenth what West Germans earned. Today, the econ­omy of the coun­try has edged past Switzerland to be­come the world’s 20th largest with more than $1 tril­lion in an­nual out­put.

It's a historic leap from the post-Communist ruins of 1989-90 to European growth champion, which economists say has lessons on how to bring prosperity to ordinary people — and that the Trump administration says should be recognized by Poland's presence at a summit of the Group of 20 leading economies later this year.

The trans­for­ma­tion is re­flected in peo­ple like Joanna Kowalska, an en­gi­neer from Poznan, a city of around 500,000 peo­ple mid­way be­tween Berlin and Warsaw. She re­turned home af­ter five years in the U.S.

"I get asked often if I'm missing something by coming back to Poland, and, to be honest, I feel it's the other way around," Kowalska said. "We are ahead of the United States in so many areas."

Kowalska works at the Poznan Supercomputing and Networking Center, which is de­vel­op­ing the first ar­ti­fi­cial in­tel­li­gence fac­tory in Poland and in­te­grat­ing it with a quan­tum com­puter, one of 10 on the con­ti­nent fi­nanced by a European Union pro­gram.

3 MIN READ

2 MIN READ

3 MIN READ

Kowalska worked for Microsoft in the U.S. after graduating from the Poznan University of Technology, in a job she saw as a "dream come true."

Newer skyscrapers flank the communist-era Palace of Culture and Science, foreground, in Warsaw, Poland, May 25, 2018. (AP Photo/Alik Keplicz, File)

But she missed having "a sense of mission," she said.

"Especially when it comes to artificial intelligence, the technology started developing so rapidly in Poland," Kowalska said. "So it was very tempting to come back."

Breaking out of poverty

The guest invitation to the G20 summit is mostly symbolic. No guest country has been promoted to full member since the original G20 met at the finance minister level in 1999, and that would take a consensus decision of all the members. Moreover, the original countries were chosen not just by gross domestic product rank, but by their "systemic significance" in the global economy.

But the gesture reflects a statistical truth: in 35 years, a little less than one person's working lifetime, Poland's per capita GDP rose from $6,730 in 1990 (38% of the EU average) to $55,340 in 2025 (85% of the EU average), roughly equal to Japan's $52,039, according to International Monetary Fund figures measured in today's dollars and adjusted for Poland's lower cost of living.

Poland’s econ­omy has grown an av­er­age 3.8% a year since join­ing the EU in 2004, eas­ily beat­ing the European av­er­age of 1.8%.

It was­n’t sim­ply one fac­tor that helped Poland break out of the poverty trap, says Marcin Piątkowski of Warsaw’s Kozminski University and au­thor of a book on the coun­try’s eco­nomic rise.

One of the most im­por­tant fac­tors was rapidly build­ing a strong in­sti­tu­tional frame­work for busi­ness, he said. That in­cluded in­de­pen­dent courts, an anti-mo­nop­oly agency to en­sure fair com­pe­ti­tion, and strong reg­u­la­tion to keep trou­bled banks from chok­ing off credit.

As a re­sult, the econ­omy was­n’t hi­jacked by cor­rupt prac­tices and oli­garchs, as hap­pened else­where in the post-Com­mu­nist world.

Poland also ben­e­fited from bil­lions of eu­ros in EU aid, both be­fore and af­ter it joined the bloc in 2004 and gained ac­cess to its huge sin­gle mar­ket.

Above all, there was the broad con­sen­sus, from across the po­lit­i­cal spec­trum, that Poland’s long-term goal was join­ing the EU.

"Poles knew where they were going," Piątkowski said. "Poland downloaded the institutions and the rules of the game, and even some cultural norms that the West spent 500 years developing."

As op­pres­sive as it was, com­mu­nism con­tributed by break­ing down old so­cial bar­ri­ers and open­ing higher ed­u­ca­tion to fac­tory and farm­work­ers who had no chance be­fore. A post-Com­mu­nist boom in higher ed­u­ca­tion means half of young peo­ple now have de­grees.

"Young Poles are, for instance, better educated than young Germans," Piątkowski said, but earn half what Germans do. That's "an unbeatable combination" for attracting investors, he said.

Success of an elec­tric bus com­pany

Solaris, a com­pany founded in 1996 in Poznan by Krzysztof Olszewski, is one of the lead­ing man­u­fac­tur­ers of elec­tric buses in Europe with a mar­ket share of around 15%. Its story shows one hall­mark of Poland’s suc­cess: en­tre­pre­neur­ship, or the will­ing­ness to take risks and build some­thing new.

Workers build elec­tric buses at the Solaris bus fac­tory in Poznan, Poland, Thursday, Jan. 29, 2026. (AP Photo/Pietro De Cristofaro)

Educated as an en­gi­neer un­der the Communist gov­ern­ment, Olszewski opened a car re­pair shop where he used spare parts from West Germany to fix Polish cars. While most en­ter­prises were na­tion­al­ized, au­thor­i­ties gave per­mis­sion to small-scale pri­vate work­shops like his to op­er­ate, ac­cord­ing to Katarzyna Szarzec, an econ­o­mist at the Poznan University of Economics and Business.

"These were enclaves of private entrepreneurship," she said.

In 1996, Olszewski opened a sub­sidiary of the German bus com­pany Neoplan and started pro­duc­ing for the Polish mar­ket.

"Poland's entry to the EU in 2004 gave us credibility and access to a vast, open European market with the free movement of goods, services and people," said Mateusz Figaszewski, responsible for institutional relations.

Then came a risky de­ci­sion to start pro­duc­ing elec­tric buses in 2011, a time when few in Europe were ex­per­i­ment­ing with the tech­nol­ogy. Figaszewski said larger com­pa­nies in the West had more to lose if switch­ing to elec­tric ve­hi­cles did­n’t work out.

"It became an opportunity to achieve technological leadership ahead of the market," he said.

An ag­ing pop­u­la­tion

Challenges still re­main for Poland. Due to a low birth rate and an ag­ing so­ci­ety, fewer work­ers will be able to sup­port re­tirees. Average wages are lower than the EU av­er­age. While small and medium en­ter­prises flour­ish, few have be­come global brands.

Poznan Mayor Jacek Jaśkowiak sees do­mes­tic in­no­va­tion as a third wave in Poland’s post­so­cial­ist eco­nomic de­vel­op­ment. In the first wave, for­eign coun­tries opened fac­to­ries in Poland in the early 1990s, tak­ing ad­van­tage of a skilled lo­cal pop­u­la­tion.

Around the turn of the mil­len­nium, he said, Western com­pa­nies brought more ad­vanced branches, in­clud­ing fi­nance, in­for­ma­tion tech­nol­ogy and en­gi­neer­ing.

"Now it's the time to start such sophisticated activities here," Jaśkowiak says, adding that one of his main priorities is investing in universities.

"There is still much to do when it comes to innovation and technological progress," added Szarzec, the Poznan economist. "But we keep climbing up on that ladder of added value. We're no longer just a supplier of spare parts."

Szarzec’s stu­dents say more needs to be done to re­duce ur­ban-rural in­equal­i­ties, make hous­ing af­ford­able and sup­port young peo­ple start­ing fam­i­lies. They say Poles need to ac­knowl­edge that im­mi­grants, such as the mil­lions of Ukrainians who fled Russia’s full-scale in­va­sion in 2022, con­tribute to eco­nomic de­vel­op­ment in an ag­ing pop­u­la­tion.

"Poland has such a dynamic economy, with so many opportunities for development, that of course I am staying," said Kazimierz Falak, 27, one of Szarzec's graduate students. "Poland is promising."

Computer equip­ment at the Poznan Supercomputing and Networking cen­ter is seen in Poznan, Poland, Wednesday, Jan. 28, 2026. (AP Photo/Pietro De Cristofaro)

___

David McHugh re­ported from Frankfurt, Germany.

How I Moved My Digital Stack to Europe

monokai.com

On dig­i­tal sov­er­eignty, and why European cloud is bet­ter than you think

April 29, 2026 · 10 min read · Digital Sovereignty, Digital Infrastructure, Digital Autonomy, European Cloud, Europe

There’s a ver­sion of this post that starts with a spread­sheet and ends with a quiet sense of sat­is­fac­tion. That’s mostly how it went. But un­der­neath the prac­ti­cal ex­er­cise of swap­ping one SaaS tool for an­other was some­thing that felt more ur­gent, a grow­ing dis­com­fort with how much of my dig­i­tal in­fra­struc­ture sat on servers I did­n’t con­trol, in a ju­ris­dic­tion in­creas­ingly prone to un­pre­dictabil­ity, op­er­ated by com­pa­nies whose in­cen­tives don’t al­ways align with mine.

Digital sov­er­eignty sounds like a buzz­word un­til you think care­fully about what it means. It means know­ing where your data lives. It means not be­ing one pol­icy change, one ac­qui­si­tion, or one ex­ec­u­tive’s bad mood away from los­ing ac­cess to tools your busi­ness de­pends on. It means choos­ing in­fra­struc­ture based on val­ues, not just con­ve­nience.

So I started mi­grat­ing.

Analytics

Google Analytics was the ob­vi­ous first tar­get. It’s the canon­i­cal ex­am­ple of a ser­vice that’s free be­cause you are the prod­uct, your vis­i­tors’ be­hav­ior fun­neled back into Google’s ad­ver­tis­ing ma­chin­ery.

Self-hosting Matomo solved this cleanly. The data stays on my own server, and I’m fully GDPR-compliant with­out the cookie con­sent the­ater that Google Analytics typ­i­cally re­quires. The re­port­ing is com­pre­hen­sive, the in­ter­face is fa­mil­iar enough, and I own every­thing.

The main down­side is main­te­nance over­head. You’re now re­spon­si­ble for up­dates, back­ups, and keep­ing the server healthy. For most se­tups this is low-fric­tion, but it’s not zero fric­tion.

Email

Proton Mail is based in Switzerland, not EU ter­ri­tory, but Swiss pri­vacy law is closely aligned with GDPR and ar­guably stronger in some re­spects. Proton builds its busi­ness model around pri­vacy rather than ad­ver­tis­ing, and end-to-end en­cryp­tion is baked in at the pro­to­col level rather than bolted on. The email client is solid, the cal­en­dar works well, and for any­one mov­ing away from US-based ser­vices, it sits com­fort­ably in the same spirit as the rest of this stack.

One ad­just­ment is get­ting used to Proton’s fil­ter sys­tem, which is a bit more lim­ited than Gmail’s. Gmail lets you write fil­ters against vir­tu­ally any­thing, in­clud­ing the full body of the mes­sage. Proton does­n’t sup­port fil­ter­ing on email con­tent at all. So if you’ve built a work­flow around catch­ing spe­cific phrases or key­words in mes­sage bod­ies, you’ll have to re­think it. For most peo­ple this won’t be a deal­breaker, but it’s worth know­ing be­fore you mi­grate.

There’s also a prac­ti­cal lim­i­ta­tion worth flag­ging: Proton caps cus­tom do­mains at three, even on the Duo plan. If you run sev­eral do­mains, like sep­a­rate ad­dresses for dif­fer­ent pro­jects or busi­nesses, you’ll hit that ceil­ing quickly and need to re­think how you route and send mail. I ended up con­sol­i­dat­ing, which was prob­a­bly over­due any­way, but it was­n’t a choice I made en­tirely freely.

Proton is­n’t free and charges a sub­stan­tial fee com­pared to other op­tions. You’ll get ac­cess to a whole suite of Proton apps though.

Password Management

Once I was in the Proton ecosys­tem, mov­ing pass­word man­age­ment there as well made sense. Proton Pass is end-to-end en­crypted, open source, and ben­e­fits from the same Swiss ju­ris­dic­tion as the rest of Proton’s stack.

1Password is a genuinely great product, so this was a lateral move more than an upgrade. Still, Proton Pass's interface is simple, the browser extension works reliably, and having passwords, email, and calendar under one encrypted roof has a certain satisfying coherence to it.

Compute

DigitalOcean has earned its rep­u­ta­tion by do­ing one thing ex­cep­tion­ally well: get­ting out of your way. The UI is clean, the men­tal model is sim­ple, and spin­ning up in­fra­struc­ture never feels like a chore. It’s the plat­form that proved de­vel­oper ex­pe­ri­ence could be a com­pet­i­tive moat.

Scaleway was a pleas­ant sur­prise. I ex­pected a ca­pa­ble-but-rough European al­ter­na­tive, but what I found was a plat­form that’s gen­uinely well thought out. Servers spun up quickly in­side a pri­vate net­work of my own con­fig­u­ra­tion, the con­trol panel is clean, and the op­tions avail­able matched every­thing I ac­tu­ally needed. Scaleway dis­plays pro­jected CO₂ emis­sions along­side server lo­ca­tion choices, a nice touch.

Object Storage

Scaleway's object storage is S3-compatible, which makes migration mechanical rather than painful: update your endpoint and credentials, and existing code works unchanged.

I used a tool called rclone to sync my old AWS S3 stor­age buck­ets to the new Scaleway S3 buck­ets. This took a lit­tle more than a week of con­stant sync­ing, as these buck­ets were quite large.
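That sync boils down to one rclone invocation per bucket. A sketch of the command construction, assuming remotes named aws and scw have already been configured in rclone.conf (the remote and bucket names here are placeholders, not from the original post):

```python
def rclone_sync_cmd(bucket: str, src_remote: str = "aws", dst_remote: str = "scw") -> list[str]:
    """Build an rclone sync command copying a bucket between two remotes.

    rclone addresses buckets as <remote>:<bucket>; each remote's endpoint
    and credentials live in rclone.conf, so the same command shape works
    for any pair of S3-compatible providers."""
    return [
        "rclone", "sync",
        f"{src_remote}:{bucket}",
        f"{dst_remote}:{bucket}",
        "--checksum",  # compare by hash rather than mtime across providers
        "--progress",
    ]

print(rclone_sync_cmd("my-assets"))
```

For large buckets, a long-running sync like this is restartable: rclone skips objects whose checksums already match, so interruptions cost little.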

Offsite Backups

OVH is the largest European cloud provider and brings the re­li­a­bil­ity and pric­ing you’d ex­pect at that scale. Their ob­ject stor­age works well as a backup des­ti­na­tion and ends up cheaper than Backblaze B2 once you con­fig­ure life­cy­cle rules to move older back­ups to the cold stor­age class.

Getting there, how­ever, re­quires some pa­tience. The OVHcloud con­trol panel is a labyrinth: the life­cy­cle rule con­fig­u­ra­tion is buried some­where in the doc­u­men­ta­tion, and it in­volves some work in the ter­mi­nal. Once it’s set up, it works re­li­ably and the cost dif­fer­ence is mean­ing­ful.

Transactional Emails

Lettermint is a European trans­ac­tional email ser­vice that does the job with­out the bloat. Deliverability is solid, the API is clean, and it has straight­for­ward pric­ing.

Compared to SendGrid, the an­a­lyt­ics are leaner and the ecosys­tem in­te­gra­tions are fewer. SendGrid has years of tool­ing, doc­u­men­ta­tion, and com­mu­nity an­swers be­hind it. Lettermint is newer and smaller. For most trans­ac­tional send­ing use cases (password re­sets, no­ti­fi­ca­tions, re­ceipts) that does­n’t mat­ter much. But if you’re do­ing com­plex multi-stream email in­fra­struc­ture, you’ll want to au­dit the fea­ture set care­fully first.

Error Tracking

Bugsink is a self-hosted error tracking tool that accepts Sentry's SDK, which means the migration path is almost frictionless: change one line of configuration and you're done.

To be hon­est: Bugsink is bare-bones. There’s no per­for­mance mon­i­tor­ing, no ses­sion re­plays, no ad­vanced alert­ing. It’s not a Sentry re­place­ment for teams that use Sentry prop­erly. For me, it’s a sim­ple re­mote er­ror log, when some­thing breaks in pro­duc­tion I get a stack trace and that’s enough. Sentry’s cloud prod­uct is gen­uinely ex­cel­lent if you need the full fea­ture set, and for larger en­gi­neer­ing teams the breadth al­most cer­tainly jus­ti­fies the cost. But if your use case is tell me when some­thing broke and show me the stack trace”, self-hosted Bugsink does ex­actly that with no data leav­ing your in­fra­struc­ture.

AI API in­te­gra­tion

For my AI API in­te­gra­tions, I switched from OpenAI to Mistral. It worked out per­fectly as I was mostly us­ing sim­pler mod­els any­way.

Mistral is head­quar­tered in Paris and has pub­lished com­pelling open-weight mod­els along­side its API of­fer­ing. The API is clean, the mod­els are fast and ca­pa­ble, and there’s some­thing co­her­ent about a European AI provider that leans into open­ness rather than away from it. For my in­fer­ence work­loads, the switch was lat­eral in qual­ity and mean­ing­fully bet­ter in terms of where the money goes.

CDN

Exception № 1

Not every­thing moved. Cloudflare is a US com­pany, I still use it, and I’m at peace with that.

Here’s the rea­son­ing: Cloudflare sits in front of my pub­lic-fac­ing web­sites. Its job is to cache, pro­tect against DDoS at­tacks, and make con­tent load fast for vis­i­tors around the world. The data flow­ing through it is al­ready pub­lic by de­f­i­n­i­tion. I’m not rout­ing pri­vate com­mu­ni­ca­tions or sen­si­tive ap­pli­ca­tion data through Cloudflare; I’m us­ing it to serve pages that any­one on the in­ter­net can read. The sov­er­eignty cal­cu­lus is dif­fer­ent when the thing you’re pro­tect­ing is al­ready pub­lic.

I did try Bunny CDN, which is European-based and has a great rep­u­ta­tion. For straight­for­ward CDN use it’s ex­cel­lent. But Cloudflare’s fea­ture set (security rules, Workers plat­form, breadth of con­fig­u­ra­tion op­tions) was­n’t matched closely enough to jus­tify the switch for my spe­cific needs. Sometimes the prag­matic an­swer wins.

Payments

Exception № 2

Stripe is one of the few ser­vices I haven’t moved yet, even though pay­ment in­fra­struc­ture is ex­actly the kind of thing I care about hav­ing in a ju­ris­dic­tion I trust. Mollie is a Dutch pay­ment proces­sor with full EU in­cor­po­ra­tion, strong GDPR com­pli­ance by de­sign, and a prod­uct that has ma­tured con­sid­er­ably in re­cent years. The API has con­verged to­ward par­ity for most com­mon pay­ment flows, and for a European busi­ness the re­gional pay­ment method cov­er­age (iDEAL, Bancontact, SEPA) is ar­guably bet­ter.

The migration is on the list. It's just not a trivial one. Payment integrations touch billing logic, webhooks, tax invoicing, and customer-facing flows in ways that require careful testing and a good moment to cut over. Mollie is also more expensive than Stripe for my use case.

AI Code as­sis­tance

Exception № 3

This one felt over­due. OpenAI works fine, but the com­pa­ny’s tra­jec­tory does­n’t align with my own views any­more. After a pe­riod of de­lib­er­ate drift, I felt the need to switch. Ideally I wanted to use Mistral Vibe here, but it just did­n’t make the cut as it could­n’t com­pete with Claude.

Claude Code is now my day-to-day AI as­sis­tant for cod­ing. The rea­son­ing qual­ity is strong, the con­text han­dling is gen­uinely im­pres­sive, and Anthropic’s ap­proach to safety and trans­parency feels more struc­turally grounded.

Anthropic is a US com­pany, so this does­n’t sat­isfy the ju­ris­dic­tional cri­te­rion I ap­plied else­where. But it sat­is­fies some­thing else, the sense that the or­ga­ni­za­tion build­ing the thing has given se­ri­ous thought to what it’s build­ing and why.

It’s also worth not­ing that lo­cal mod­els are be­com­ing in­creas­ingly vi­able. Qwen, Alibaba’s open-weight model fam­ily, is a strong ex­am­ple: ca­pa­ble enough for many real work­loads, run­ning en­tirely on your own hard­ware, with no data leav­ing your ma­chine. The gap be­tween fron­tier API mod­els and what you can run lo­cally is nar­row­ing faster than most peo­ple re­al­ize.

Not everything is ideal. Most data centers still sit outside Europe, and "open" means different things to different organizations. But the direction is right. A world where capable AI runs on your own hardware, with published weights and transparent training, is a much better world for digital autonomy than one where all inference routes through a handful of closed API providers. We're not there yet, but the trajectory is encouraging.

Git Version Control

Exception № 4

GitLab also re­mains for now. GitLab is head­quar­tered in the US but of­fers self-hosted op­tions, and the com­pany has long had a strong com­mit­ment to trans­parency and open source. A self-hosted in­stance is on the roadmap, but mov­ing source con­trol is a more sig­nif­i­cant un­der­tak­ing than most of these mi­gra­tions.

GitHub stays in the pic­ture for one spe­cific pur­pose: pub­lic-fac­ing NPM pack­ages and is­sue track­ing for open source soft­ware. When you pub­lish a pack­age or main­tain pub­lic tool­ing, GitHub is where de­vel­op­ers ex­pect to find it. The net­work ef­fects are real, it’s where the forks, stars, and is­sue re­ports come from. For the pub­lic-fac­ing sur­face of open source work, there’s no mean­ing­ful sov­er­eignty con­cern and a lot of prac­ti­cal up­side.

Was it worth it?

The prac­ti­cal fric­tion was real but man­age­able. Most mi­gra­tions were an af­ter­noon of work: up­date a cre­den­tial here, point a DNS record there, ex­port and im­port some data. A few took longer. None were cat­a­strophic. All in all it took longer than ex­pected, but most time was spent in re­search­ing and plan­ning when to do what. Two months in, every­thing is run­ning with­out in­ci­dent. No fires, no re­grets.

Digital sov­er­eignty is­n’t about para­noia. It’s about be­ing con­scious about your in­fra­struc­ture, where you de­cide who holds your data, who can reach it, and what hap­pens when pol­i­tics shift. The tools are there. The ecosys­tem is mostly ma­ture. The only thing that was stop­ping me was in­er­tia. It’s en­tirely pos­si­ble to run a re­li­able, ca­pa­ble, pro­fes­sional dig­i­tal stack mostly from European in­fra­struc­ture. This mi­gra­tion was proof of that.
