One of the current trends in modern software is for developers to slap an API call to OpenAI or Anthropic onto features within their app. Reasonable people can quibble over whether those features actually bring value to users, but what I want to discuss is the fundamental concept of taking a dependency on a cloud-hosted AI model in your applications.
This laziness is creating a generation of software that is fragile, privacy-invasive, and fundamentally broken. We are building applications that stop working the moment the server crashes or a credit card expires.
We need to return to a habit of building software where our local devices do the work. The silicon in our pocket is mind-bogglingly faster than what was available a decade ago. It has a dedicated Neural Engine sitting there, mostly idle, while we wait for a JSON response from a server farm in Virginia. That’s ridiculous.
Even if your intentions are pure, the moment you stream user content to a third-party AI provider, you’ve changed the nature of your product. You now have data retention questions and all the baggage that comes with them (consent, audits, breaches, government requests, training, etc.).
On top of that, you’ve substantially complicated your stack, because your feature now depends on network conditions, external vendor uptime, rate limits, account billing, and your own backend health.
Congratulations! You took a UX feature and turned it into a distributed system that costs you money.
If the feature can be done locally, opting into this mess is self inflicted damage.
“AI everywhere” is not the goal. Useful software is the goal.
Concrete Example: Brutalist Report’s On-Device Summaries
Years ago I launched a fun side project named The Brutalist Report, a news aggregator service inspired by the 1990s-style web.
Recently, I decided to build a native iOS client for it with the design goal of ensuring it would remain a high-density news reading experience. Headlines in a stark list, a reader mode that strips the cancer that has overtaken the web, and (optionally) an “intelligence” view that generates a summary of the article.
Here’s the key point though: the summary is generated on-device using Apple’s local model APIs. No server detours. No prompt or user logs. No vendor account. No “we store your content for 30 days” footnotes needed.
It has become so normal for folks to assume that any AI use happens server-side. We have a lot of work to do as an industry to turn this around.
It’s not lost on me that some use-cases will demand the intelligence only a cloud-hosted model can provide, but that isn’t true of every problem you’re trying to solve. We need to be thoughtful here.
Available Tooling
I can only speak to the tooling available within the Apple ecosystem, since that’s where I focused my initial development efforts. In the last year, Apple has invested heavily here to let developers make use of a built-in local AI model easily.
The core flow looks roughly like this:
```swift
import FoundationModels

let model = SystemLanguageModel.default
guard model.availability == .available else { return }

let session = LanguageModelSession {
    """
    Provide a brutalist, information-dense summary in Markdown format.
    - Use **bold** for key concepts.
    - Use bullet points for facts.
    - No fluff. Just facts.
    """
}

let response = try await session.respond(options: .init(maximumResponseTokens: 1_000)) {
    articleText
}

let markdown = response.content
```
And for longer content, we can chunk the plain text (around 10k characters per chunk), produce concise “facts only” notes per chunk, then run a second pass to combine them into a final summary.
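A minimal sketch of that two-pass approach, assuming the same FoundationModels setup as above; the chunk size, prompts, and token limits here are illustrative, not the app’s actual values:

```swift
import FoundationModels

// Sketch of the chunk-then-combine flow described above. Chunk size, prompts,
// and token limits are illustrative.
func summarizeLongArticle(_ text: String) async throws -> String {
    // Split the plain text into roughly 10k-character chunks.
    let chunkSize = 10_000
    var chunks: [String] = []
    var start = text.startIndex
    while start < text.endIndex {
        let end = text.index(start, offsetBy: chunkSize, limitedBy: text.endIndex) ?? text.endIndex
        chunks.append(String(text[start..<end]))
        start = end
    }

    // Pass 1: concise "facts only" notes per chunk.
    var notes: [String] = []
    for chunk in chunks {
        let session = LanguageModelSession {
            "Extract concise, facts-only notes from the provided text. No fluff."
        }
        let response = try await session.respond(options: .init(maximumResponseTokens: 300)) {
            chunk
        }
        notes.append(response.content)
    }

    // Pass 2: combine the per-chunk notes into one final summary.
    let combiner = LanguageModelSession {
        "Combine these notes into a single, information-dense Markdown summary."
    }
    let final = try await combiner.respond(options: .init(maximumResponseTokens: 1_000)) {
        notes.joined(separator: "\n")
    }
    return final.content
}
```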
This is the kind of work local models are perfect for. The input data is already on the device (because the user is reading it). The output is lightweight. It’s fast and private. It’s okay if it’s not a superhuman PhD level intelligence because it’s summarizing the page you just loaded, not inventing world knowledge.
Local AI shines when the model’s job is transforming user-owned data, not acting as a search engine for the universe.
There are plenty of AI features that people want but don’t trust: summarizing emails, extracting action items from notes, categorizing documents, and so on.
The usual cloud approach turns every one of those into a trust exercise. “Please send your data to our servers. We promise to be cool about it.”
Local AI changes that. Your device already has the data. We’ll do the work right here.
You don’t build trust with your users by writing a 2,000 word privacy policy. You build trust by not needing one to begin with.
The tooling available on the platform goes even further.
One of the best moves Apple has made recently is pushing “AI output” away from unstructured blobs of text and toward typed data.
Instead of “ask the model for JSON and pray”, the newer and better pattern is to define a Swift struct that represents the thing you want. Give the model guidance for each field in natural language. Ask the model to generate an instance of that type.
That’s it.
Conceptually, it looks like this:
```swift
import FoundationModels

@Generable
struct ArticleIntel {
    @Guide(description: "One sentence. No hype.")
    var tldr: String

    @Guide(description: "3-7 bullets. Facts only.")
    var bullets: [String]

    @Guide(description: "Comma-separated keywords.")
    var keywords: [String]
}

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Extract structured notes from the article.",
    generating: ArticleIntel.self
) {
    articleText
}

let intel = response.content
```
Now your UI doesn’t have to scrape bullet points out of Markdown or hope the model remembered your JSON schema. You get a real type with real fields, and you can render it consistently. It produces structured output your app can actually use. And it’s all running locally!
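To illustrate the rendering point, here is a hypothetical SwiftUI view that consumes the typed ArticleIntel value directly; the layout is purely illustrative:

```swift
import SwiftUI

// Hypothetical view that renders the typed ArticleIntel value directly,
// with no Markdown scraping or JSON parsing in between.
struct ArticleIntelView: View {
    let intel: ArticleIntel

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            Text(intel.tldr)
                .font(.headline)
            ForEach(intel.bullets, id: \.self) { bullet in
                Text("• \(bullet)")
            }
            Text(intel.keywords.joined(separator: ", "))
                .font(.footnote)
        }
    }
}
```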
This isn’t just nicer ergonomics. It’s an engineering improvement.
And if you’re building a local first app, this is the difference between “AI as novelty” and “AI as a trustworthy subsystem”.
“But Local Models Aren’t As Smart”
Correct.
But also so what?
Most app features don’t need a model that can write Shakespeare, explain quantum mechanics, and pass the bar exam. They need a model that can do one of these reliably: summarize, classify, extract, rewrite, or normalize.
And for those tasks, local models can be truly excellent.
If you try to use a local model as a replacement for the entire internet, you will be disappointed. If you use it as a “data transformer” sitting inside your app, you’ll wonder why you ever sent this stuff to a server.
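As a concrete, hypothetical illustration of that “data transformer” framing, the same typed-output pattern shown earlier handles a small classification task; noteText and the category names below are made up:

```swift
import FoundationModels

// Hypothetical local "data transformer": classify a note the user already has
// on device into one of a few fixed buckets. Category names are illustrative.
@Generable
struct NoteTriage {
    @Guide(description: "One of: todo, idea, reference, journal.")
    var category: String

    @Guide(description: "One short sentence explaining the choice.")
    var reason: String
}

let session = LanguageModelSession {
    "You classify personal notes. Output only what is asked for."
}
let response = try await session.respond(
    to: "Classify this note:\n\(noteText)",
    generating: NoteTriage.self
)
let triage = response.content   // triage.category, triage.reason
```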
Use cloud models only when they’re genuinely necessary. Keep the user’s data where it belongs. And when you do use AI, don’t just bolt it on as a chat box. Use it as a real subsystem with typed outputs and predictable behavior.
Stop shipping distributed systems when you meant to ship a feature.
Google has tied its next-generation reCAPTCHA system to Google Play Services on Android, meaning anyone running a de-Googled phone will automatically fail verification when the system decides to challenge them.
The requirement forces Android users to run Google’s proprietary app framework version 25.41.30 or higher just to prove they’re human.
When reCAPTCHA flags what it considers suspicious activity, it abandons the old image puzzles and demands you scan a QR code. That scan requires Play Services running in the background, communicating with Google’s servers. If you’re using GrapheneOS or any other custom ROM that strips out Google’s software, the verification fails.
Google announced the broader system, Google Cloud Fraud Defense, at Cloud Next on April 23, pitching it as a trust platform designed to handle autonomous AI agents and traditional bots alike. What Google didn’t emphasize was the part where proving you’re human now requires submitting to its proprietary surveillance.
This wasn’t sudden, either. An Internet Archive snapshot from October 2025 shows the same support page already listing a Play Services requirement at version 25.39.30. Google built this dependency quietly for at least seven months before a Reddit user on the degoogle subreddit flagged it, with reporting from PiunikaWeb and Android Authority bringing wider attention.
The iOS comparison is revealing because Apple devices running iOS 16.4 or later complete the same verification without installing any additional apps. Google didn’t demand iPhone users install Google software to pass the test. Only Android users who refuse Play Services get locked out. The asymmetry reveals what this is really about: not security, but ecosystem control.
reCAPTCHA sits in front of millions of websites. When Google ties verification to Play Services, it establishes a precedent where accessing basic web content requires running Google’s software and transmitting data to Google’s servers.
People running de-Googled phones chose those setups because they read the data practices, understood what Play Services phones home about, and decided they didn’t consent. Google’s new system punishes that decision by treating the absence of its proprietary software as suspicious by default.
Web developers adopting this reCAPTCHA should understand what they’re choosing. Every site that implements it tells de-Googled Android users they’re not welcome. That’s a small audience today. It’s also the audience most likely to care about how a website treats their data, and the least likely to capitulate.
Last year I said I’d probably never recommend another Bambu Lab printer again.
I still use my P1S, but after Bambu Lab started pushing their always-connected cloud solution as the new default:
I blocked the printer from the Internet via my OPNsense Firewall
I stopped updating the firmware
I locked the printer into Developer mode
I deleted Bambu Studio and started using OrcaSlicer
I had to do that to keep it under my control, instead of Bambu’s.
But I’m weird—I acknowledge that. I’m one of those crazy ones who likes to own something they purchased, and not have the company watch everything I do with hardware I paid for.
Bambu Lab could’ve left the status quo at that, and I wouldn’t be writing this blog post.
But they didn’t.
What happened this time?
For context: OrcaSlicer is a fork of the open source project Bambu Studio, which is a fork of Prusa Slicer, which is a fork of slic3r. (They are all licensed under the AGPLv3 open source license).
OrcaSlicer already has to dance around Bambu’s weird default setup where every file you print goes through Bambu’s servers, meaning they can see everything you ever print on your printer.
That is, unless you’re like me and you run it in Developer mode, and completely block it from the Internet on old firmware.
Some people are okay with using OrcaSlicer and printing through Bambu’s cloud. It’s convenient if you’re on the road and want to start a print on your printer at home, without managing your own VPN.
I run my own WireGuard VPN, so I don’t need that, but I understand not everyone has the resources to manage their own remote access.
Bambu saw a fork of OrcaSlicer that allowed you to use all your printer’s features without having to route prints through Bambu’s cloud called OrcaSlicer-bambulab and was like, “You know what? No. For the 0.1% of power users who want to run OrcaSlicer without the cloud delivery mechanism like we have in our AGPL-licensed Linux Bambu Studio code… no. You have to use our app, and only our app.”
So they threatened that OrcaSlicer fork’s developer with legal action for things that developer didn’t do. For example, they indicated the fork used an impersonation attack, despite the fork using Bambu Studio’s upstream code verbatim.
These are very serious public accusations.
Bambu Lab did not write to me with these specific public claims first. They also refused my request to publish the full correspondence. Instead, they published a one-sided public statement where I cannot reply directly.
In practice, this presents me to the public as someone bypassing security, impersonating their client, and creating a risk to their infrastructure. I reject that characterization.
— OrcaSlicer-bambulabs developer’s response
Bambu is abusing the open source social contract, and using their legal might, to suppress a tiny number of their users¹, for who knows what reason.
It seems dumb to me, because it would’ve been easier (and more profitable) to do nothing at all². Instead, they wrote a blog post blaming an individual open source developer for their own infrastructure and security problems.
This is where the actual issue arises: the modification in question worked by injecting falsified identity metadata into network communication.
In simple terms: it pretended to be the official Bambu Studio client when communicating with our servers.
— Bambu Lab blog post
I don’t think they understand open source culture. Security either, if a public user agent string is their only protection against DDoS attacks…
Instead of finding solutions to ecosystem problems and building a more secure platform, Bambu is putting devoted power users like the fork’s developer on blast³.
When tensions flared last year, they wrote a similar blog post blaming community backlash on ‘unfortunate misinformation’. I imagine they meant speculation from community members (like myself) frustrated that the whole software ecosystem and ownership model was turned upside down post-purchase.
This year they’re blaming one developer of a tiny slicer fork for the potential impact he could have on their entire cloud infrastructure.
It creates structural vulnerability. If this method were widely adopted or incorrectly configured, thousands of clients could simultaneously hit our servers while impersonating the official client. Our systems would have no way to distinguish traffic, because the requests would look identical.
— Bambu Lab blog post
I love how they frame this as a developer trying to impersonate their app, when he’s literally using the same AGPL-licensed code their Linux app uses.
I find it doubly ironic since their own fork caused Bambu users’ telemetry to hit Prusa’s servers back in 2022, and (to my knowledge) Prusa didn’t snap back with a C&D.
They spent the rest of their blog post talking about vulnerabilities, bugs, and instabilities—as if that has anything to do with a developer using upstream code verbatim in his fork.
Maybe they could take a new approach and just not lock down their whole ecosystem in the first place.
But who am I kidding? Nothing I say, and no amount of complaining in the comments below, seems to help Bambu see the fault in their ways.
Spending a little more for a printer from another company just might do it, though.
Louis Rossmann posted a video saying he’d pledge $10,000 to help the open source dev fight Bambu’s legal threats. And I’d happily chip in too, but that’s only useful if the dev wants to put himself back in Bambu’s crosshairs.
The better play might just be to skip Bambu altogether.
1. The OrcaSlicer fork in question didn’t seem to have much uptake outside of a very small subset of users prior to Bambu Lab’s cease and desist order. ↩︎
2. Maybe ask for the fork to not include “bambulabs” in the name, since that could be a reasonable trademark-related demand. ↩︎
3. The fork’s developer mentioned “I previously helped Bambu Studio users with Linux and Wayland issues, including on Bambu Lab’s own GitHub. That makes it especially absurd to me that I am now being publicly presented as someone dangerous to their infrastructure.” ↩︎
2026-05-07 · 3 min read
This afternoon, we sent the following email to our global team. One of our core values at Cloudflare is transparency, and we believe it’s important that you hear this directly from us because it’s a major moment at Cloudflare.
Team:
We are writing to let you know directly that we’ve made the decision to reduce Cloudflare’s workforce by more than 1,100 employees globally.
The way we work at Cloudflare has fundamentally changed. We don’t just build and sell AI tools and platforms. We are our own most demanding customer. Cloudflare’s usage of AI has increased by more than 600% in the last three months alone. Employees across the company from engineering to HR to finance to marketing run thousands of AI agent sessions each day to get their work done. That means we have to be intentional in how we architect our company for the agentic AI era in order to supercharge the value we deliver to our customers and to honor our mission to help build a better Internet for everyone, everywhere.
Today is a hard day. This decision unfortunately means saying goodbye to teammates who have contributed meaningfully to our mission and to building Cloudflare into one of the world’s most successful companies. We want to be clear that this decision is not a reflection of the individual work or talent of those leaving us. Instead, we are reimagining every internal process, team, and role across the company. Today’s actions are not a cost-cutting exercise or an assessment of individuals’ performance; they are about Cloudflare defining how a world-class, high-growth company operates and creates value in the agentic AI era.
This is a moment we need to own as founders and leaders of the company. Matthew has personally sent out every offer letter we’ve extended. It is a practice he has always looked forward to because it represented our growth and the incredible talent joining our mission. It didn’t feel right for this message to come from anyone other than the two of us. Rather than trickling out notices through managers, we will be sending emails to every employee.
Within the next hour, every member of our global team will receive an email from both of us clarifying how this change affects them. For those departing today, we will send this update to both their personal and Cloudflare addresses to ensure they receive the information immediately.
It’s important to us that we treat departing team members right and in a way that exceeds what we’ve seen from other companies. We believe acting with empathy isn’t about avoiding hard decisions but rather about how you treat people when those decisions are made. If we are asking our team to be world-class, we have a reciprocal obligation to be world-class in how we treat them. We are pairing the directness of these measures with severance packages that lead the industry. The packages for departing employees will include the equivalent of their full base pay through the end of 2026. Healthcare coverage is different across the globe, and if you’re in the United States, we’ll continue to provide support through the end of the year. We are also vesting equity for departing team members through August 15th, so they receive stock beyond their departure date. And, if departing team members haven’t hit their one-year cliffs, we are going to waive those and vest their pro-rated equity through August as well.
We’ve asked the team to do this only once, as hard as that may be today. We don’t want to do it again for the foreseeable future. By taking decisive action now, we provide immediate clarity to those departing and protect the stability of the team that remains. We are making these changes now because making smaller, repeated cuts or dragging a reorganization out over multiple quarters creates prolonged emotional uncertainty for employees and stalls our ability to build. It’s the right thing to do; it’s the honest thing to do; and it reflects the values of the company we are continuing to build.
Cloudflare started as a digitally native company built in the cloud. That allowed us to catch up to and pass companies that had a head start of years or decades but were slowed down by outdated systems and processes. As we’ve now become the leader, we cannot rest on the workflows and organizational structures that worked yesterday. We’re confident that our reshaped organization will be even faster and more innovative as we continue building the future.
To those departing us: you’ve helped build the strong foundation Cloudflare stands on today. We have the utmost respect for your work and gratitude for the impact you have made. We’re confident you will land at other great places and build many future great companies, bringing with you a unique set of skills learned while building Cloudflare.
Transparency is a core principle at Cloudflare, and it was important that you hear this from us first. We will be heading to our earnings conference call at 2 PM PT, when we’ll share more. We also plan to address today’s announcements live with the team at our all-hands meeting.
It’s not an easy day, but it’s the right decision. Our mission to help build a better Internet is more important now than ever, and there’s a lot of work left to be done.
by Tanner Linsley on May 11, 2026.
Last updated: 2026-05-11
On 2026-05-11 between 19:20 and 19:26 UTC, an attacker published 84 malicious versions across 42 @tanstack/* npm packages by combining: the pull_request_target “Pwn Request” pattern, GitHub Actions cache poisoning across the fork↔base trust boundary, and runtime memory extraction of an OIDC token from the GitHub Actions runner process. No npm tokens were stolen and the npm publish workflow itself was not compromised.
The malicious versions were detected publicly within 20 minutes by external researcher ashishkurmi of StepSecurity. All affected versions have been deprecated; npm security has been engaged to pull tarballs from the registry. We have no evidence of npm credentials being stolen, but we strongly recommend that anyone who installed an affected version on 2026-05-11 rotate AWS, GCP, Kubernetes, Vault, GitHub, npm, and SSH credentials reachable from the install host.
Tracking issue: TanStack/router#7383
GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx
Packages affected
42 packages, 84 versions (two per package, published roughly 6 minutes apart). See the tracking issue for the full table. Confirmed-clean families: @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, @tanstack/store, @tanstack/start (the meta-package, not @tanstack/start-*).
What the malware does
When a developer or CI environment runs npm install, pnpm install, or yarn install against any affected version, npm resolves the malicious optionalDependencies entry, fetches the orphan payload commit from the fork network, runs its prepare lifecycle script, and executes a ~2.3 MB obfuscated router_init.js smuggled into the affected tarball. The script:
Harvests credentials from common locations: AWS IMDS / Secrets Manager, GCP metadata, Kubernetes service-account tokens, Vault tokens, ~/.npmrc, GitHub tokens (env, gh CLI, .git-credentials), SSH private keys
Exfiltrates over the Session/Oxen messenger file-upload network (filev2.getsession.org, seed{1,2,3}.getsession.org) — end-to-end encrypted with no attacker-controlled C2, so blocking by IP/domain is the only network mitigation
Self-propagates: enumerates other packages the victim maintains via registry.npmjs.org/-/v1/search?text=maintainer:<user> and republishes them with the same injection
Because the payload runs as part of npm install’s lifecycle, anyone who installed an affected version on 2026-05-11 must treat the install host as potentially compromised.
All times UTC. Timestamps are taken from the GitHub API and the npm registry.
Pre-attack (cache poisoning phase)
Detonation (publish phase)
Workflow run 25613093674 starts (19:15:44), and fails.
Detection and response
Formal malware reports are submitted via npm
Tanner begins npm deprecation process for all 84 affected packages.
Public Twitter/X/LinkedIn/Bluesky disclosure from @tan_stack and maintainers
All cache entries for all TanStack/* GitHub repositories purged via API.
Hardening PR merged: bundle-size.yml restructured, repository_owner guards added, third-party action refs pinned to SHAs.
Official GitHub Security Advisory is published, CVE requested
Three vulnerabilities chained together. Each is necessary for the attack; none alone is sufficient.
1. pull_request_target “Pwn Request” pattern in bundle-size.yml
bundle-size.yml ran pull_request_target for fork PRs and, inside that trigger context, checked out the fork’s PR-merge ref and ran a build:
```yaml
on:
  pull_request_target:
    paths: ['packages/**', 'benchmarks/**']

jobs:
  benchmark-pr:
    steps:
      - uses: actions/checkout@v6.0.2
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge  # fork's merged code

      - uses: TanStack/config/.github/setup@main  # transitively calls actions/cache@v5

      - run: pnpm nx run @benchmarks/bundle-size:build  # executes fork-controlled code
```
The author of the workflow attempted a trust split (the comment-pr job is separate from benchmark-pr, with a comment in the YAML noting the intent to keep benchmark-pr “untrusted with read-only permissions”). The split is correct in spirit but missed two facts:
actions/cache@v5's post-job save is not gated by permissions:. Cache writes use a runner-internal token, not the workflow GITHUB_TOKEN. Setting permissions: contents: read does not block cache mutation.
Cache scope is per-repo, shared across pull_request_target runs (which use the base repo’s cache scope) and pushes to main. A PR running in the base repo’s cache scope can poison entries that production workflows on main will later restore.
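For reference, a hedged sketch of what the repository_owner guards and SHA pinning mentioned in the timeline's hardening PR commonly look like; the exact condition, job layout, and pinned revisions below are illustrative, not the contents of the actual PR:

```yaml
jobs:
  benchmark-pr:
    # Illustrative guard: only run the untrusted build when the PR's head branch
    # lives in the base repository (i.e. it is not from a fork).
    if: github.event.pull_request.head.repo.owner.login == github.repository_owner
    permissions:
      contents: read
    steps:
      # Pin third-party actions to a full commit SHA instead of a floating tag or branch.
      - uses: actions/checkout@<full-commit-sha> # placeholder; pin to a reviewed SHA
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge
      - uses: TanStack/config/.github/setup@<full-commit-sha> # placeholder; pin to a reviewed SHA
```

A guard like this only addresses the fork-code-execution side; the cache-write behavior described above is a separate concern.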
2. GitHub Actions cache poisoning across trust boundaries
The malicious vite_setup.mjs was specifically designed to write data into the pnpm-store directory under a key the legitimate release.yml workflow would compute and look up: Linux-pnpm-store-${hashFiles('**/pnpm-lock.yaml')}. When the benchmark-pr job ended, actions/cache@v5's post-step saved the (now-poisoned) pnpm store to that exact key. When release.yml next ran on a push to main, its Setup Tools step restored the poisoned entry — entirely as designed.
This is the class of attack documented by Adnan Khan in 2024. It’s not a TanStack-specific bug; it’s a known GitHub Actions design issue that requires conscious mitigation.
3. OIDC token extraction from the release workflow’s runner memory

release.yml declares id-token: write (legitimately needed for npm OIDC trusted publishing). When the poisoned pnpm store is restored on the runner, attacker-controlled binaries are now on disk and get invoked during the build step. Those binaries:
Locate the GitHub Actions Runner.Worker process via /proc/*/cmdline
Read /proc/<pid>/maps and /proc/<pid>/mem to dump the worker’s memory
Extract the OIDC token (which the runner mints lazily, in memory, when id-token: write is set)
Use the token to authenticate POST requests directly to registry.npmjs.org — bypassing the workflow’s Publish Packages step entirely
This is the same memory-extraction technique (and verbatim Python script, with attribution comment) used in the tj-actions/changed-files compromise of March 2025. The attacker did not invent novel tradecraft; they recombined published research.
Why none alone is enough
pull_request_target alone is fine for trusted operations (labeling, comments)
Cache poisoning alone (e.g., from inside an already-compromised dep) requires a separate publish vehicle
OIDC token extraction alone requires existing code execution on the runner
The chain only works because each vulnerability bridges the trust boundary the others assumed: PR fork code crossing into base-repo cache, base-repo cache crossing into release-workflow runtime, and release-workflow runtime crossing into npm registry write access.
How we found out
Detection was external: researcher ashishkurmi of StepSecurity opened issue #7383 ~20 minutes after the publish, with full technical analysis. Tanner received a phone call from Socket.dev moments after starting the war room, confirming the situation.
IOC fingerprints (for downstream maintainers and security tools)
In any @tanstack/* package’s manifest:
```json
"optionalDependencies": {
  "@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
}
```
File: router_init.js (~2.3 MB, package root, not in “files”)
Cache key: Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11
2nd-stage payload URLs: https://litter.catbox.moe/h8nc9u.js, https://litter.catbox.moe/7rrc6l.mjs
Exfiltration network: filev2.getsession.org, seed{1,2,3}.getsession.org
Forged commit identity: claude <claude@users.noreply.github.com> (note: not the real Anthropic Claude — fabricated GitHub no-reply email)
Real attacker accounts: zblgg (id 127806521), voicproducoes (id 269549300)
Attacker fork: github.com/zblgg/configuration (fork of TanStack/router renamed to evade fork searches)
Orphan payload commit (in fork network): 79ac49eedf774dd4b0cfa308722bc463cfe5885c
Workflow runs that performed the malicious publishes:
github.com/TanStack/router/actions/runs/25613093674 (attempt 4)
github.com/TanStack/router/actions/runs/25691781302
What went well
External researchers noticed and reported with full technical detail within ~20 min of the incident
Maintainer team coordinated immediately and effectively across many timezones
The detection community already had a clear public IOC pattern within hours
What could have been better
No internal alerting. We learned about the compromise from a third party. We need monitoring on our own publishes. We’ll work closely with security research firms in the ecosystem that can detect these issues quickly, potentially bring some of that detection in-house, and tighten the feedback loop.
pull_request_target workflows had not been audited despite being a long-known dangerous pattern
Floating refs (@v6.0.2, @main) on third-party actions create standing supply-chain risk independent of this incident
Unpublish was unavailable for nearly all affected packages because of npm’s “no unpublish if dependents exist” policy. We have to rely on npm security to pull tarballs server-side, which adds hours of delay during which malicious tarballs remain installable
The 7-maintainer list on the npm scope means seven separate credential-theft targets for the same blast radius
OIDC trusted-publisher binding has no per-publish review. Once configured, any code path in the workflow can mint a publish-capable token. We need to either (a) move to short-lived classic tokens with manual review, or (b) add provenance-source verification to detect publishes from unexpected workflow steps
What we got lucky on
The attacker chose a payload that broke tests, which made the publish step (which would have produced cleaner-looking tarballs) skip — meaning the attack was loud enough to detect quickly. A more careful attacker who didn’t break tests could have published silently for hours longer
The attacker reused public tradecraft (verbatim memory-dump script with attribution comment) instead of writing novel code — making the IOC-matching faster
Open questions

These need answers before we close the postmortem.
Did bundle-size.yml’s Setup Tools step actually call actions/cache@v5? Verify by reading the post-job logs from one of the pull_request_target runs against PR #7378 (e.g., run id 25666610798). Tanner has access; needs to be done manually
What was in the initial PR head commit (before the force-pushes wiped it)? GitHub’s reflog may have it. Check via gh api or the GitHub support team
How did the malicious commit get into the fork’s git object store specifically — was it pushed directly via git, or was it created via the GitHub web UI (which would leave audit-log entries)?
Was voicproducoes a real account or a sock puppet? Cross-reference its activity history
Did the npm cache also get poisoned (the 6 duplicate linux-npm-store-* entries)? Were any actually used?
Can we identify any other fork in the TanStack/router fork network that contains the orphan payload commit? (If yes, the cleanup is harder — every fork hosting it keeps it accessible via github:tanstack/router#79ac49ee…)
Are any other TanStack repos (router, query, table, form, virtual, etc.) using the same bundle-size.yml-style pattern? Audit needed
How many users actually downloaded the affected versions during the publish window? Get from npm support
Did any of the seven listed maintainers’ machines get compromised separately? (None of the malicious publishes used a maintainer’s npm token, but maintainer machines could have been the secondary target via the self-propagation logic)
Tracking issue: TanStack/router#7383
GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx
Related research:
Adnan Khan, “The Monsters in Your Build Cache: GitHub Actions Cache Poisoning” (May 2024) — adnanthekhan.com
GitHub Security Lab, “Keeping your GitHub Actions and workflows secure: Preventing pwn requests” — securitylab.github.com
StepSecurity, “Harden-Runner detection: tj-actions/changed-files action is compromised” (March 2025) — stepsecurity.io
POZNAN, Poland (AP) — A generation ago, Poland rationed sugar and flour while its citizens were paid one-tenth what West Germans earned. Today, the economy of the country has edged past Switzerland to become the world’s 20th largest with more than $1 trillion in annual output.
It’s a historic leap from the post-Communist ruins of 1989-90 to European growth champion, which economists say has lessons on how to bring prosperity to ordinary people — and that the Trump administration says should be recognized by Poland’s presence at a summit of the Group of 20 leading economies later this year.
The transformation is reflected in people like Joanna Kowalska, an engineer from Poznan, a city of around 500,000 people midway between Berlin and Warsaw. She returned home after five years in the U.S.
“I get asked often if I’m missing something by coming back to Poland, and, to be honest, I feel it’s the other way around,” Kowalska said. “We are ahead of the United States in so many areas.”
Kowalska works at the Poznan Supercomputing and Networking Center, which is developing the first artificial intelligence factory in Poland and integrating it with a quantum computer, one of 10 on the continent financed by a European Union program.
Kowalska worked for Microsoft in the U.S. after graduating from the Poznan University of Technology, in a job she saw as a “dream come true.”
Newer skyscrapers flank the communist-era Palace of Culture and Science, foreground, in Warsaw, Poland, May 25, 2018. (AP Photo/Alik Keplicz, File)
But she missed having a “sense of mission,” she said.
“Especially when it comes to artificial intelligence, the technology started developing so rapidly in Poland,” Kowalska said. “So it was very tempting to come back.”
Breaking out of poverty
The guest invitation to the G20 summit is mostly symbolic. No guest country has been promoted to full member since the original G20 met at the finance minister level in 1999, and that would take a consensus decision of all the members. Moreover, the original countries were chosen not just by gross domestic product rank, but by their “systemic significance” in the global economy.
But the gesture reflects a statistical truth: In 35 years — a little less than one person’s working lifetime — Poland’s per capita GDP rose to $55,340 in 2025, or 85% of the EU average, up from $6,730 in 1990, when it was 38% of the EU average. The 2025 figure is roughly equal to Japan’s $52,039, according to International Monetary Fund figures measured in today’s dollars and adjusted for Poland’s lower cost of living.
Poland’s economy has grown an average 3.8% a year since joining the EU in 2004, easily beating the European average of 1.8%.
It wasn’t simply one factor that helped Poland break out of the poverty trap, says Marcin Piątkowski of Warsaw’s Kozminski University and author of a book on the country’s economic rise.
One of the most important factors was rapidly building a strong institutional framework for business, he said. That included independent courts, an anti-monopoly agency to ensure fair competition, and strong regulation to keep troubled banks from choking off credit.
As a result, the economy wasn’t hijacked by corrupt practices and oligarchs, as happened elsewhere in the post-Communist world.
Poland also benefited from billions of euros in EU aid, both before and after it joined the bloc in 2004 and gained access to its huge single market.
Above all, there was the broad consensus, from across the political spectrum, that Poland’s long-term goal was joining the EU.
“Poles knew where they were going,” Piątkowski said. “Poland downloaded the institutions and the rules of the game, and even some cultural norms that the West spent 500 years developing.”
As oppressive as it was, communism contributed by breaking down old social barriers and opening higher education to factory and farmworkers who had no chance before. A post-Communist boom in higher education means half of young people now have degrees.
“Young Poles are, for instance, better educated than young Germans,” Piątkowski said, but earn half what Germans do. That’s “an unbeatable combination” for attracting investors, he said.
Success of an electric bus company
Solaris, a company founded in 1996 in Poznan by Krzysztof Olszewski, is one of the leading manufacturers of electric buses in Europe with a market share of around 15%. Its story shows one hallmark of Poland’s success: entrepreneurship, or the willingness to take risks and build something new.
Workers build electric buses at the Solaris bus factory in Poznan, Poland, Thursday, Jan. 29, 2026. (AP Photo/Pietro De Cristofaro)
Educated as an engineer under the Communist government, Olszewski opened a car repair shop where he used spare parts from West Germany to fix Polish cars. While most enterprises were nationalized, authorities gave permission to small-scale private workshops like his to operate, according to Katarzyna Szarzec, an economist at the Poznan University of Economics and Business.
“These were enclaves of private entrepreneurship,” she said.
In 1996, Olszewski opened a subsidiary of the German bus company Neoplan and started producing for the Polish market.
“Poland’s entry to the EU in 2004 gave us credibility and access to a vast, open European market with the free movement of goods, services and people,” said Mateusz Figaszewski, who is responsible for institutional relations at Solaris.
Then came a risky decision to start producing electric buses in 2011, a time when few in Europe were experimenting with the technology. Figaszewski said larger companies in the West had more to lose if switching to electric vehicles didn’t work out.
“It became an opportunity to achieve technological leadership ahead of the market,” he said.
An aging population
Challenges still remain for Poland. Due to a low birth rate and an aging society, fewer workers will be able to support retirees. Average wages are lower than the EU average. While small and medium enterprises flourish, few have become global brands.
Poznan Mayor Jacek Jaśkowiak sees domestic innovation as a third wave in Poland’s postsocialist economic development. In the first wave, foreign countries opened factories in Poland in the early 1990s, taking advantage of a skilled local population.
Around the turn of the millennium, he said, Western companies brought more advanced branches, including finance, information technology and engineering.
“Now it’s the time to start such sophisticated activities here,” Jaśkowiak says, adding that one of his main priorities is investing in universities.
“There is still much to do when it comes to innovation and technological progress,” added Szarzec, the Poznan economist. “But we keep climbing up on that ladder of added value. We’re no longer just a supplier of spare parts.”
Szarzec’s students say more needs to be done to reduce urban-rural inequalities, make housing affordable and support young people starting families. They say Poles need to acknowledge that immigrants, such as the millions of Ukrainians who fled Russia’s full-scale invasion in 2022, contribute to economic development in an aging population.
“Poland has such a dynamic economy, with so many opportunities for development, that of course I am staying,” said Kazimierz Falak, 27, one of Szarzec’s graduate students. “Poland is promising.”
Computer equipment at the Poznan Supercomputing and Networking center is seen in Poznan, Poland, Wednesday, Jan. 28, 2026. (AP Photo/Pietro De Cristofaro)
___
David McHugh reported from Frankfurt, Germany.
On digital sovereignty, and why European cloud is better than you think
April 29, 2026 · 10 min read
Digital Sovereignty, Digital Infrastructure, Digital Autonomy, European Cloud, Europe
There’s a version of this post that starts with a spreadsheet and ends with a quiet sense of satisfaction. That’s mostly how it went. But underneath the practical exercise of swapping one SaaS tool for another was something that felt more urgent, a growing discomfort with how much of my digital infrastructure sat on servers I didn’t control, in a jurisdiction increasingly prone to unpredictability, operated by companies whose incentives don’t always align with mine.
Digital sovereignty sounds like a buzzword until you think carefully about what it means. It means knowing where your data lives. It means not being one policy change, one acquisition, or one executive’s bad mood away from losing access to tools your business depends on. It means choosing infrastructure based on values, not just convenience.
So I started migrating.
Analytics
Google Analytics was the obvious first target. It’s the canonical example of a service that’s free because you are the product, your visitors’ behavior funneled back into Google’s advertising machinery.
Self-hosting Matomo solved this cleanly. The data stays on my own server, and I’m fully GDPR-compliant without the cookie consent theater that Google Analytics typically requires. The reporting is comprehensive, the interface is familiar enough, and I own everything.
The main downside is maintenance overhead. You’re now responsible for updates, backups, and keeping the server healthy. For most setups this is low-friction, but it’s not zero friction.
Email

Proton Mail is based in Switzerland, not EU territory, but Swiss privacy law is closely aligned with GDPR and arguably stronger in some respects. Proton builds its business model around privacy rather than advertising, and end-to-end encryption is baked in at the protocol level rather than bolted on. The email client is solid, the calendar works well, and for anyone moving away from US-based services, it sits comfortably in the same spirit as the rest of this stack.
One adjustment is getting used to Proton’s filter system, which is a bit more limited than Gmail’s. Gmail lets you write filters against virtually anything, including the full body of the message. Proton doesn’t support filtering on email content at all. So if you’ve built a workflow around catching specific phrases or keywords in message bodies, you’ll have to rethink it. For most people this won’t be a dealbreaker, but it’s worth knowing before you migrate.
There’s also a practical limitation worth flagging: Proton caps custom domains at three, even on the Duo plan. If you run several domains, like separate addresses for different projects or businesses, you’ll hit that ceiling quickly and need to rethink how you route and send mail. I ended up consolidating, which was probably overdue anyway, but it wasn’t a choice I made entirely freely.
Proton isn’t free, and it charges a substantial fee compared to other options, though you do get access to a whole suite of Proton apps.
Password Management
Once I was in the Proton ecosystem, moving password management there as well made sense. Proton Pass is end-to-end encrypted, open source, and benefits from the same Swiss jurisdiction as the rest of Proton’s stack.
1Password is a genuinely great product and this was a lateral move more than an upgrade. The interface is simple, the browser extension works reliably, and having passwords, email, and calendar under one encrypted roof has a certain satisfying coherence to it.
Compute
DigitalOcean has earned its reputation by doing one thing exceptionally well: getting out of your way. The UI is clean, the mental model is simple, and spinning up infrastructure never feels like a chore. It’s the platform that proved developer experience could be a competitive moat.
Scaleway was a pleasant surprise. I expected a capable-but-rough European alternative, but what I found was a platform that’s genuinely well thought out. Servers spun up quickly inside a private network of my own configuration, the control panel is clean, and the options available matched everything I actually needed. Scaleway displays projected CO₂ emissions alongside server location choices, a nice touch.
Object Storage
Scaleway’s object storage is S3-compatible, which makes migration mechanical rather than painful: update your endpoint and credentials, and existing code works unchanged.
I used a tool called rclone to sync my old AWS S3 storage buckets to the new Scaleway S3 buckets. This took a little more than a week of constant syncing, as these buckets were quite large.
Offsite Backups
OVH is the largest European cloud provider and brings the reliability and pricing you’d expect at that scale. Their object storage works well as a backup destination and ends up cheaper than Backblaze B2 once you configure lifecycle rules to move older backups to the cold storage class.
Getting there, however, requires some patience. The OVHcloud control panel is a labyrinth: the lifecycle rule configuration is buried somewhere in the documentation, and it involves some work in the terminal. Once it’s set up, it works reliably and the cost difference is meaningful.
Transactional Emails
Lettermint is a European transactional email service that does the job without the bloat. Deliverability is solid, the API is clean, and it has straightforward pricing.
Compared to SendGrid, the analytics are leaner and the ecosystem integrations are fewer. SendGrid has years of tooling, documentation, and community answers behind it. Lettermint is newer and smaller. For most transactional sending use cases (password resets, notifications, receipts) that doesn’t matter much. But if you’re doing complex multi-stream email infrastructure, you’ll want to audit the feature set carefully first.
Error Tracking
Bugsink is a self-hosted error tracking tool that accepts Sentry’s SDK, which means the migration path is almost frictionless: change one line of configuration and you’re done.
To be honest: Bugsink is bare-bones. There’s no performance monitoring, no session replays, no advanced alerting. It’s not a Sentry replacement for teams that use Sentry properly. For me, it’s a simple remote error log: when something breaks in production, I get a stack trace, and that’s enough. Sentry’s cloud product is genuinely excellent if you need the full feature set, and for larger engineering teams the breadth almost certainly justifies the cost. But if your use case is “tell me when something broke and show me the stack trace”, self-hosted Bugsink does exactly that with no data leaving your infrastructure.
AI API integration
For my AI API integrations, I switched from OpenAI to Mistral. It worked out perfectly as I was mostly using simpler models anyway.
Mistral is headquartered in Paris and has published compelling open-weight models alongside its API offering. The API is clean, the models are fast and capable, and there’s something coherent about a European AI provider that leans into openness rather than away from it. For my inference workloads, the switch was lateral in quality and meaningfully better in terms of where the money goes.
CDN
Exception № 1
Not everything moved. Cloudflare is a US company, I still use it, and I’m at peace with that.
Here’s the reasoning: Cloudflare sits in front of my public-facing websites. Its job is to cache, protect against DDoS attacks, and make content load fast for visitors around the world. The data flowing through it is already public by definition. I’m not routing private communications or sensitive application data through Cloudflare; I’m using it to serve pages that anyone on the internet can read. The sovereignty calculus is different when the thing you’re protecting is already public.
I did try Bunny CDN, which is European-based and has a great reputation. For straightforward CDN use it’s excellent. But Cloudflare’s feature set (security rules, Workers platform, breadth of configuration options) wasn’t matched closely enough to justify the switch for my specific needs. Sometimes the pragmatic answer wins.
Payments
Exception № 2
Stripe is one of the few services I haven’t moved yet, even though payment infrastructure is exactly the kind of thing I care about having in a jurisdiction I trust. Mollie is a Dutch payment processor with full EU incorporation, strong GDPR compliance by design, and a product that has matured considerably in recent years. The API has converged toward parity for most common payment flows, and for a European business the regional payment method coverage (iDEAL, Bancontact, SEPA) is arguably better.
The migration is on the list. It’s just not a trivial one. Payment integrations touch billing logic, webhooks, tax invoicing and customer-facing flows in ways that require careful testing and a good moment to cut over. It’s also more expensive than Stripe for my use case.
AI Code assistance
Exception № 3
This one felt overdue. OpenAI works fine, but the company’s trajectory doesn’t align with my own views anymore. After a period of deliberate drift, I felt the need to switch. Ideally I wanted to use Mistral Vibe here, but it just didn’t make the cut as it couldn’t compete with Claude.
Claude Code is now my day-to-day AI assistant for coding. The reasoning quality is strong, the context handling is genuinely impressive, and Anthropic’s approach to safety and transparency feels more structurally grounded.
Anthropic is a US company, so this doesn’t satisfy the jurisdictional criterion I applied elsewhere. But it satisfies something else, the sense that the organization building the thing has given serious thought to what it’s building and why.
It’s also worth noting that local models are becoming increasingly viable. Qwen, Alibaba’s open-weight model family, is a strong example: capable enough for many real workloads, running entirely on your own hardware, with no data leaving your machine. The gap between frontier API models and what you can run locally is narrowing faster than most people realize.
Not everything is ideal. Most data centers still sit outside Europe, and “open” means different things to different organizations. But the direction is right. A world where capable AI runs on your own hardware, with published weights and transparent training, is a much better world for digital autonomy than one where all inference routes through a handful of closed API providers. We’re not there yet, but the trajectory is encouraging.
Git Version Control
Exception № 4
GitLab also remains for now. GitLab is headquartered in the US but offers self-hosted options, and the company has long had a strong commitment to transparency and open source. A self-hosted instance is on the roadmap, but moving source control is a more significant undertaking than most of these migrations.
GitHub stays in the picture for one specific purpose: public-facing NPM packages and issue tracking for open source software. When you publish a package or maintain public tooling, GitHub is where developers expect to find it. The network effects are real, it’s where the forks, stars, and issue reports come from. For the public-facing surface of open source work, there’s no meaningful sovereignty concern and a lot of practical upside.
Was it worth it?
The practical friction was real but manageable. Most migrations were an afternoon of work: update a credential here, point a DNS record there, export and import some data. A few took longer. None were catastrophic. All in all it took longer than expected, but most of the time was spent researching and planning when to do what. Two months in, everything is running without incident. No fires, no regrets.
Digital sovereignty isn’t about paranoia. It’s about being conscious about your infrastructure, where you decide who holds your data, who can reach it, and what happens when politics shift. The tools are there. The ecosystem is mostly mature. The only thing that was stopping me was inertia. It’s entirely possible to run a reliable, capable, professional digital stack mostly from European infrastructure. This migration was proof of that.