One of the current trends in modern software is for developers to slap an API call to OpenAI or Anthropic onto features within their app. Reasonable people can quibble over whether those features actually bring value to users, but what I want to discuss is the fundamental concept of taking a dependency on a cloud-hosted AI model in your application.
This laziness is creating a generation of software that is fragile, privacy-invasive, and fundamentally broken. We are building applications that stop working the moment the server crashes or a credit card expires.
We need to return to a habit of building software where our local devices do the work. The silicon in our pocket is mind-bogglingly faster than what was available a decade ago. It has a dedicated Neural Engine sitting there, mostly idle, while we wait for a JSON response from a server farm in Virginia. That's ridiculous.
Even if your intentions are pure, the moment you stream user content to a third-party AI provider, you've changed the nature of your product. You now have data-retention questions and all the baggage that comes with them (consent, audits, breaches, government requests, training, and so on).
On top of that, you've also substantially complicated your stack, because your feature now depends on network conditions, external vendor uptime, rate limits, account billing, and your own backend health.
Congratulations! You took a UX feature and turned it into a distributed system that costs you money.
If the feature can be done locally, opting into this mess is self inflicted damage.
“AI everywhere” is not the goal. Useful software is the goal.
Concrete Example: Brutalist Report’s On-Device Summaries
Years ago I launched a fun side project named The Brutalist Report, a news aggregator service inspired by the 1990s-style web.
Recently, I decided to build a native iOS client for it with the design goal of ensuring it would remain a high-density news reading experience. Headlines in a stark list, a reader mode that strips the cancer that has overtaken the web, and (optionally) an “intelligence” view that generates a summary of the article.
Here’s the key point though: the summary is generated on-device using Apple’s local model APIs. No server detours. No prompt or user logs. No vendor account. No “we store your content for 30 days” footnotes needed.
It has become normal for folks to assume that any AI use happens server-side. We have a lot of work to do to turn this around as an industry.
It's not lost on me that some use cases will demand the intelligence that only a cloud-hosted model can provide, but that isn't true of every problem you're trying to solve. We need to be thoughtful here.
Available Tooling
I can only speak to the tooling available within the Apple ecosystem, since that's where I focused my initial development efforts. In the last year, Apple has invested heavily here to let developers make use of a built-in local AI model easily.
The core flow looks roughly like this:
import FoundationModels
let model = SystemLanguageModel.default
guard model.availability == .available else { return }

let session = LanguageModelSession {
    """
    Provide a brutalist, information-dense summary in Markdown format.
    - Use **bold** for key concepts.
    - Use bullet points for facts.
    - No fluff. Just facts.
    """
}

let response = try await session.respond(options: .init(maximumResponseTokens: 1_000)) {
    articleText
}
let markdown = response.content
And for longer content, we can chunk the plain text (around 10k characters per chunk), produce concise "facts only" notes per chunk, then run a second pass to combine them into a final summary.
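As a rough sketch of that two-pass flow, something like the following works, reusing the session API from the snippet above. The chunk size, prompt wording, and function name here are illustrative assumptions, not the app's actual implementation:

```swift
import FoundationModels

// Illustrative two-pass summarization: per-chunk "facts only" notes, then a
// combining pass. Chunk size and prompts are assumptions for this sketch.
func summarizeLongArticle(_ text: String) async throws -> String {
    let chunkSize = 10_000
    var notes: [String] = []

    // Pass 1: concise notes for each ~10k-character chunk.
    var start = text.startIndex
    while start < text.endIndex {
        let end = text.index(start, offsetBy: chunkSize, limitedBy: text.endIndex) ?? text.endIndex
        let chunk = String(text[start..<end])
        let session = LanguageModelSession {
            "Produce concise, facts-only notes for this portion of an article."
        }
        let response = try await session.respond(options: .init(maximumResponseTokens: 500)) {
            chunk
        }
        notes.append(response.content)
        start = end
    }

    // Pass 2: combine the per-chunk notes into one final summary.
    let combiner = LanguageModelSession {
        "Combine these notes into a single information-dense Markdown summary. No fluff."
    }
    let final = try await combiner.respond(options: .init(maximumResponseTokens: 1_000)) {
        notes.joined(separator: "\n\n")
    }
    return final.content
}
```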
This is the kind of work local models are perfect for. The input data is already on the device (because the user is reading it). The output is lightweight. It’s fast and private. It’s okay if it’s not a superhuman PhD level intelligence because it’s summarizing the page you just loaded, not inventing world knowledge.
Local AI shines when the model’s job is transforming user-owned data, not acting as a search engine for the universe.
There are plenty of AI features that people want but don't trust: summarizing emails, extracting action items from notes, categorizing documents, and so on.
The usual cloud approach turns every one of those into a trust exercise. “Please send your data to our servers. We promise to be cool about it.”
Local AI changes that. Your device already has the data. We’ll do the work right here.
You don’t build trust with your users by writing a 2,000 word privacy policy. You build trust by not needing one to begin with.
The tooling available on the platform goes even further.
One of the best moves Apple has made recently is pushing “AI output” away from unstructured blobs of text and toward typed data.
Instead of “ask the model for JSON and pray”, the newer and better pattern is to define a Swift struct that represents the thing you want. Give the model guidance for each field in natural language. Ask the model to generate an instance of that type.
That’s it.
Conceptually, it looks like this:
import FoundationModels
@Generable
struct ArticleIntel {
    @Guide(description: "One sentence. No hype.")
    var tldr: String

    @Guide(description: "3-7 bullets. Facts only.")
    var bullets: [String]

    @Guide(description: "Comma-separated keywords.")
    var keywords: [String]
}

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Extract structured notes from the article.",
    generating: ArticleIntel.self
) {
    articleText
}
let intel = response.content
Now your UI doesn’t have to scrape bullet points out of Markdown or hope the model remembered your JSON schema. You get a real type with real fields, and you can render it consistently. It produces structured output your app can actually use. And it’s all running locally!
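To make that concrete, here's a hypothetical SwiftUI view that renders an ArticleIntel value directly from its typed fields, with no Markdown scraping. The view itself is an illustration, not code from the app:

```swift
import SwiftUI

// Hypothetical view consuming the ArticleIntel struct defined above.
// Typed fields render directly; no parsing of model output required.
struct ArticleIntelView: View {
    let intel: ArticleIntel

    var body: some View {
        List {
            Section("TL;DR") {
                Text(intel.tldr)
            }
            Section("Key points") {
                ForEach(intel.bullets, id: \.self) { bullet in
                    Text(bullet)
                }
            }
            Section("Keywords") {
                Text(intel.keywords.joined(separator: ", "))
            }
        }
    }
}
```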
This isn’t just nicer ergonomics. It’s an engineering improvement.
And if you’re building a local first app, this is the difference between “AI as novelty” and “AI as a trustworthy subsystem”.
“But Local Models Aren’t As Smart”
Correct.
But also so what?
Most app features don’t need a model that can write Shakespeare, explain quantum mechanics, and pass the bar exam. They need a model that can do one of these reliably: summarize, classify, extract, rewrite, or normalize.
And for those tasks, local models can be truly excellent.
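For example, the same typed-output pattern from earlier handles classification and extraction just as well. The NoteTriage type and prompt below are invented for illustration, not taken from any shipping app:

```swift
import FoundationModels

// Illustrative classification/extraction with a typed result.
// NoteTriage and its guide text are assumptions made for this sketch.
@Generable
struct NoteTriage {
    @Guide(description: "One of: todo, reference, idea, junk.")
    var category: String
    @Guide(description: "Action items found in the note, if any.")
    var actions: [String]
}

let noteText = "Call the dentist tomorrow and ask whether the invoice was sent."

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Classify this note and extract any action items.",
    generating: NoteTriage.self
) {
    noteText
}
let triage = response.content  // e.g. triage.category and triage.actions, as typed fields
```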
If you try to use a local model as a replacement for the entire internet, you will be disappointed. If you use it as a “data transformer” sitting inside your app, you’ll wonder why you ever sent this stuff to a server.
Use cloud models only when they're genuinely necessary. Keep the user's data where it belongs. And when you do use AI, don't just bolt it on as a chat box; use it as a real subsystem with typed outputs and predictable behavior.
Stop shipping distributed systems when you meant to ship a feature.
Last year I said I’d probably never recommend another Bambu Lab printer again.
I still use my P1S, but after Bambu Lab started pushing their always-connected cloud solution as the new default:
I blocked the printer from the Internet via my OPNsense Firewall
I stopped updating the firmware
I locked the printer into Developer mode
I deleted Bambu Studio and started using OrcaSlicer
I had to do that to keep it under my control, instead of Bambu’s.
But I’m weird—I acknowledge that. I’m one of those crazy ones who likes to own something they purchased, and not have the company watch everything I do with hardware I paid for.
Bambu Lab could’ve left the status quo at that, and I wouldn’t be writing this blog post.
But they didn’t.
What happened this time?
For context: OrcaSlicer is a fork of the open source project Bambu Studio, which is a fork of Prusa Slicer, which is a fork of slic3r. (They are all licensed under the AGPLv3 open source license).
OrcaSlicer already has to dance around Bambu’s weird default setup where every file you print goes through Bambu’s servers, meaning they can see everything you ever print on your printer.
That is, unless you’re like me and you run it in Developer mode, and completely block it from the Internet on old firmware.
Some people are okay with using OrcaSlicer and printing through Bambu’s cloud. It’s convenient if you’re on the road and want to start a print on your printer at home, without managing your own VPN.
I run my own WireGuard VPN, so I don’t need that, but I understand not everyone has the resources to manage their own remote access.
Bambu saw a fork of OrcaSlicer called OrcaSlicer-bambulab, which let you use all your printer's features without routing prints through Bambu's cloud, and was like, "You know what? No. For the 0.1% of power users who want to run OrcaSlicer without the cloud delivery mechanism like we have in our AGPL-licensed Linux Bambu Studio code… no. You have to use our app, and only our app."
So they threatened that OrcaSlicer fork’s developer with legal action for things that developer didn’t do. For example, they indicated the fork used an impersonation attack, despite the fork using Bambu Studio’s upstream code verbatim.
These are very serious public accusations.
Bambu Lab did not write to me with these specific public claims first. They also refused my request to publish the full correspondence. Instead, they published a one-sided public statement where I cannot reply directly.
In practice, this presents me to the public as someone bypassing security, impersonating their client, and creating a risk to their infrastructure. I reject that characterization.
— OrcaSlicer-bambulabs developer’s response
Bambu is abusing the open source social contract, and using their legal might, to suppress a tiny number of their users1, for who knows what reason.
It seems dumb to me, because it would’ve been easier (and more profitable) to do nothing at all2. Instead, they wrote a blog post blaming an individual open source developer for their own infrastructure and security problems.
This is where the actual issue arises: the modification in question worked by injecting falsified identity metadata into network communication.
In simple terms: it pretended to be the official Bambu Studio client when communicating with our servers.
— Bambu Lab blog post
I don’t think they understand open source culture. Security either, if a public user agent string is their only protection against DDoS attacks…
Instead of finding solutions to ecosystem problems and building a more secure platform, Bambu is putting devoted power users like the fork’s developer on blast3.
When tensions flared last year, they wrote a similar blog post blaming community backlash on 'unfortunate misinformation'. I imagine they meant speculation from community members (like myself) who were frustrated that the whole software ecosystem and ownership model had been turned upside down post-purchase.
This year they’re blaming one developer of a tiny slicer fork for the potential impact he could have on their entire cloud infrastructure.
It creates structural vulnerability. If this method were widely adopted or incorrectly configured, thousands of clients could simultaneously hit our servers while impersonating the official client. Our systems would have no way to distinguish traffic, because the requests would look identical.
— Bambu Lab blog post
I love how they frame this as a developer trying to impersonate their app, when he’s literally using the same AGPL-licensed code their Linux app uses.
I find it doubly ironic since their own fork caused Bambu users’ telemetry to hit Prusa’s servers back in 2022, and (to my knowledge) Prusa didn’t snap back with a C&D.
They spent the rest of their blog post talking about vulnerabilities, bugs, and instabilities—as if that has anything to do with a developer using upstream code verbatim in his fork.
Maybe they could take a new approach and just not lock down their whole ecosystem in the first place.
But who am I kidding? Nothing I say, and no amount of complaining in the comments below, seems to help Bambu see the fault in their ways.
Spending a little more for a printer from another company just might do it, though.
Louis Rossmann posted a video saying he’d pledge $10,000 to help the open source dev fight Bambu’s legal threats. And I’d happily chip in too, but that’s only useful if the dev wants to put himself back in Bambu’s crosshairs.
The better play might just be to skip Bambu altogether.
The OrcaSlicer fork in question didn’t seem to have much uptake outside of a very small subset of users prior to Bambu Lab’s cease and desist order. ↩︎
Maybe ask for the fork to not include “bambulabs” in the name, since that could be a reasonable trademark-related demand. ↩︎
The fork’s developer mentioned “I previously helped Bambu Studio users with Linux and Wayland issues, including on Bambu Lab’s own GitHub. That makes it especially absurd to me that I am now being publicly presented as someone dangerous to their infrastructure.” ↩︎
by Tanner Linsley on May 11, 2026.
Last updated: 2026-05-11
On 2026-05-11 between 19:20 and 19:26 UTC, an attacker published 84 malicious versions across 42 @tanstack/* npm packages by combining: the pull_request_target "Pwn Request" pattern, GitHub Actions cache poisoning across the fork↔base trust boundary, and runtime memory extraction of an OIDC token from the GitHub Actions runner process. No npm tokens were stolen and the npm publish workflow itself was not compromised.
The malicious versions were detected publicly within 20 minutes by external researcher ashishkurmi of StepSecurity. All affected versions have been deprecated; npm security has been engaged to pull tarballs from the registry. We have no evidence of npm credentials being stolen, but we strongly recommend that anyone who installed an affected version on 2026-05-11 rotate AWS, GCP, Kubernetes, Vault, GitHub, npm, and SSH credentials reachable from the install host.
Tracking issue: TanStack/router#7383
GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx
Packages affected
42 packages, 84 versions (two per package, published roughly 6 minutes apart). See the tracking issue for the full table. Confirmed-clean families: @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, @tanstack/store, @tanstack/start (the meta-package, not @tanstack/start-*).
What the malware does
When a developer or CI environment runs npm install, pnpm install, or yarn install against any affected version, the package manager resolves the malicious optionalDependencies entry, fetches the orphan payload commit from the fork network, runs its prepare lifecycle script, and executes a ~2.3 MB obfuscated router_init.js smuggled into the affected tarball. The script:
Harvests credentials from common locations: AWS IMDS / Secrets Manager, GCP metadata, Kubernetes service-account tokens, Vault tokens, ~/.npmrc, GitHub tokens (env, gh CLI, .git-credentials), SSH private keys
Exfiltrates over the Session/Oxen messenger file-upload network (filev2.getsession.org, seed{1,2,3}.getsession.org) — end-to-end encrypted with no attacker-controlled C2, so blocking by IP/domain is the only network mitigation
Self-propagates: enumerates other packages the victim maintains via registry.npmjs.org/-/v1/search?text=maintainer:<user> and republishes them with the same injection
Because the payload runs as part of npm install's lifecycle, anyone who installed an affected version on 2026-05-11 must treat the install host as potentially compromised.
Timeline
All times UTC; timestamps are taken from the GitHub API and npm registry.
Pre-attack (cache poisoning phase)
Detonation (publish phase)
Workflow run 25613093674 starts (19:15:44) and fails.
Detection and response
Formal malware reports are submitted via npm
Tanner begins npm deprecation process for all 84 affected packages.
Public Twitter/X/LinkedIn/Bluesky disclosure from @tan_stack and maintainers
All cache entries for all TanStack/* GitHub repositories purged via API.
Hardening PR merged: bundle-size.yml restructured, repository_owner guards added, third-party action refs pinned to SHAs.
Official GitHub Security Advisory is published, CVE requested
Three vulnerabilities chained together. Each is necessary for the attack; none alone is sufficient.
1. pull_request_target “Pwn Request” pattern in bundle-size.yml
bundle-size.yml ran pull_request_target for fork PRs and, inside that trigger context, checked out the fork’s PR-merge ref and ran a build:
on:
  pull_request_target:
    paths: ['packages/**', 'benchmarks/**']

jobs:
  benchmark-pr:
    steps:
      - uses: actions/checkout@v6.0.2
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge  # fork's merged code
      - uses: TanStack/config/.github/setup@main  # transitively calls actions/cache@v5
      - run: pnpm nx run @benchmarks/bundle-size:build  # executes fork-controlled code
The author of the workflow attempted a trust split (the comment-pr job is separate from benchmark-pr, with a comment in the YAML noting the intent to keep benchmark-pr “untrusted with read-only permissions”). The split is correct in spirit but missed two facts:
actions/cache@v5's post-job save is not gated by permissions:. Cache writes use a runner-internal token, not the workflow GITHUB_TOKEN. Setting permissions: contents: read does not block cache mutation.
Cache scope is per-repo, shared across pull_request_target runs (which use the base repo’s cache scope) and pushes to main. A PR running in the base repo’s cache scope can poison entries that production workflows on main will later restore.
2. GitHub Actions cache poisoning across trust boundaries
The malicious vite_setup.mjs was specifically designed to write data into the pnpm-store directory under a key the legit release.yml workflow would compute and look up: Linux-pnpm-store-${hashFiles('**/pnpm-lock.yaml')}. When the benchmark-pr job ended, actions/cache@v5's post-step saved the (now-poisoned) pnpm store to that exact key. When release.yml next ran on a push to main, its Setup Tools step restored the poisoned entry — entirely as designed.
This is the class of attack documented by Adnan Khan in 2024. It’s not a TanStack-specific bug; it’s a known GitHub Actions design issue that requires conscious mitigation.
3. OIDC token extraction from the runner process
release.yml declares id-token: write (legitimately needed for npm OIDC trusted publishing). When the poisoned pnpm store is restored on the runner, attacker-controlled binaries are now on disk and get invoked during the build step. Those binaries:
Locate the GitHub Actions Runner.Worker process via /proc/*/cmdline
Read /proc/<pid>/maps and /proc/<pid>/mem to dump the worker’s memory
Extract the OIDC token (which the runner mints lazily, in memory, when id-token: write is set)
Use the token to authenticate POST requests directly to registry.npmjs.org — bypassing the workflow’s Publish Packages step entirely
This is the same memory-extraction technique (and verbatim Python script, with attribution comment) used in the tj-actions/changed-files compromise of March 2025. The attacker did not invent novel tradecraft; they recombined published research.
Why none alone is enough
pull_request_target alone is fine for trusted operations (labeling, comments)
Cache poisoning alone (e.g., from inside an already-compromised dep) requires a separate publish vehicle
OIDC token extraction alone requires existing code execution on the runner
The chain only works because each vulnerability bridges the trust boundary the others assumed: PR fork code crossing into base-repo cache, base-repo cache crossing into release-workflow runtime, and release-workflow runtime crossing into npm registry write access.
How we found out
Detection was external. Researcher ashishkurmi of StepSecurity opened issue #7383 roughly 20 minutes after the publish, with a full technical analysis. Moments after the war room started, Tanner received a phone call from Socket.dev confirming the situation.
IOC fingerprints (for downstream maintainers and security tools)
In any @tanstack/* package’s manifest:
"optionalDependencies": {
  "@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
}
File: router_init.js (~2.3 MB, package root, not in “files”)
Cache key: Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11
2nd-stage payload URLs: https://litter.catbox.moe/h8nc9u.js, https://litter.catbox.moe/7rrc6l.mjs
Exfiltration network: filev2.getsession.org, seed{1,2,3}.getsession.org
Forged commit identity: claude <claude@users.noreply.github.com> (note: not the real Anthropic Claude — fabricated GitHub no-reply email)
Real attacker accounts: zblgg (id 127806521), voicproducoes (id 269549300)
Attacker fork: github.com/zblgg/configuration (fork of TanStack/router renamed to evade fork searches)
Orphan payload commit (in fork network): 79ac49eedf774dd4b0cfa308722bc463cfe5885c
Workflow runs that performed the malicious publishes:
github.com/TanStack/router/actions/runs/25613093674 (attempt 4)
github.com/TanStack/router/actions/runs/25691781302
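For a quick local check against the fingerprints above, one option is to walk node_modules/@tanstack and flag any package that ships a root-level router_init.js or whose manifest references the payload commit. The sketch below is purely illustrative (a shell one-liner or your package manager's audit tooling works just as well), and the node_modules path is an assumption about your project layout:

```swift
import Foundation

// Illustrative IOC sweep: flag @tanstack packages that ship router_init.js at
// their root or whose package.json references the malicious payload commit.
// The node_modules path is an assumption; adjust for your project layout.
let maliciousCommit = "79ac49eedf774dd4b0cfa308722bc463cfe5885c"
let scopeDir = URL(fileURLWithPath: "node_modules/@tanstack")
let fm = FileManager.default

let packages = (try? fm.contentsOfDirectory(at: scopeDir, includingPropertiesForKeys: nil)) ?? []
for package in packages {
    let payload = package.appendingPathComponent("router_init.js")
    if fm.fileExists(atPath: payload.path) {
        print("SUSPECT (payload file present): \(package.lastPathComponent)")
    }

    let manifest = package.appendingPathComponent("package.json")
    if let data = try? Data(contentsOf: manifest),
       let text = String(data: data, encoding: .utf8),
       text.contains(maliciousCommit) {
        print("SUSPECT (malicious optionalDependencies): \(package.lastPathComponent)")
    }
}
```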
What went well
External researchers noticed and reported with full technical detail within ~20 min of the incident
Maintainer team coordinated immediately and effectively across many timezones
The detection community already had a clear public IOC pattern within hours
What could have been better
No internal alerting. We learned about the compromise from a third party. We need monitoring on our own publishes. We'll be working closely with security research firms in the ecosystem that can detect these issues very quickly, potentially building that capability in-house, to make the feedback loop even tighter.
pull_request_target workflows had not been audited despite being a long-known dangerous pattern
Floating refs (@v6.0.2, @main) on third-party actions create standing supply-chain risk independent of this incident
Unpublish was unavailable for nearly all affected packages because of npm’s “no unpublish if dependents exist” policy. We have to rely on npm security to pull tarballs server-side, which adds hours of delay during which malicious tarballs remain installable
The 7-maintainer list on the npm scope means seven separate credential-theft targets for the same blast radius
OIDC trusted-publisher binding has no per-publish review. Once configured, any code path in the workflow can mint a publish-capable token. We need to either (a) move to short-lived classic tokens with manual review, or (b) add provenance-source verification to detect publishes from unexpected workflow steps
What we got lucky on
The attacker chose a payload that broke tests, which made the publish step (which would have produced cleaner-looking tarballs) skip — meaning the attack was loud enough to detect quickly. A more careful attacker who didn’t break tests could have published silently for hours longer
The attacker reused public tradecraft (verbatim memory-dump script with attribution comment) instead of writing novel code — making the IOC-matching faster
Open questions
These need answers before we close the postmortem.
Did bundle-size.yml’s Setup Tools step actually call actions/cache@v5? Verify by reading the post-job logs from one of the pull_request_target runs against PR #7378 (e.g., run id 25666610798). Tanner has access; needs to be done manually
What was in the initial PR head commit (before the force-pushes wiped it)? GitHub’s reflog may have it. Check via gh api or the GitHub support team
How did the malicious commit get into the fork’s git object store specifically — was it pushed directly via git, or was it created via the GitHub web UI (which would leave audit-log entries)?
Was voicproducoes a real account or a sock puppet? Cross-reference its activity history
Did the npm cache also get poisoned (the 6 duplicate linux-npm-store-* entries)? Were any actually used?
Can we identify any other fork in the TanStack/router fork network that contains the orphan payload commit? (If yes, the cleanup is harder — every fork hosting it keeps it accessible via github:tanstack/router#79ac49ee…)
Are any other TanStack repos (router, query, table, form, virtual, etc.) using the same bundle-size.yml-style pattern? Audit needed
How many users actually downloaded the affected versions during the publish window? Get from npm support
Did any of the seven listed maintainers’ machines get compromised separately? (None of the malicious publishes used a maintainer’s npm token, but maintainer machines could have been the secondary target via the self-propagation logic)
Tracking issue: TanStack/router#7383
GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx
Related research:
Adnan Khan, "The Monsters in Your Build Cache: GitHub Actions Cache Poisoning" (May 2024) — adnanthekhan.com
GitHub Security Lab, "Keeping your GitHub Actions and workflows secure: Preventing pwn requests" — securitylab.github.com
StepSecurity, "Harden-Runner detection: tj-actions/changed-files action is compromised" (March 2025) — stepsecurity.io
May 13th, 2026 | 14 minute read
Modern cars are computers on wheels - they have more sensors than you can count and are constantly phoning home with telemetry data like your location, speed, fuel levels, sudden accelerations/decelerations, video footage, driver-attention data from eye-monitoring systems, and hundreds of other data points. Cars have inward- and outward-facing cameras. They have microphones. They have always-on modems. It's all enabled by default with difficult or meaningless opt-outs, and your data is monetized through brokers like LexisNexis or Verisk. This all brings a host of security and privacy issues - here are a few examples from over the years:
In 2025 Subaru had vulnerabilities allowing anyone to remotely unlock customers' cars, as well as access the car's real-time GPS location and location history
Car manufacturers share your driving data with insurance companies, which then increase your premiums
In 2023 Tesla employees internally shared camera footage of naked customers and other sensitive images
In 2015 Charlie Miller and Chris Valasek famously took over a Jeep Cherokee with full control of the ignition, brakes, locks, steering, etc.
Mozilla detailed how 25 car manufacturers scored abysmally on privacy and how they collect data including “sexual activity, immigration status, race, facial expressions, weight and genetic information.” They sell this data to third parties and use it to build profiles about you covering “intelligence, abilities, characteristics, preferences, and more.”
Tesla had a vulnerability in 2017 that allowed anyone to remotely see your car’s location, manage other features, and even summon the car to themselves
The Car That Watches You Back details how cars are now serving you ads, as well as collecting vast amounts of data about you. The Hacker News discussion about this article is what prompted this blog post
Now that we’re sufficiently motivated, what can we do about it? In this blog post, rather than relying on companies’ promises or meaningless opt-outs, we’re going to stop the data at the source by physically removing the modem (the DCM, or Data Communication Module) as well as the built-in GPS on my 2024 RAV4 Hybrid, so the car will no longer have the capability to send any telemetry data back home. Let’s dive in:
Will the car still be functional?
Yes. Depending on how different car manufacturers have wired their cars, how their software and firmware were written, etc., varying levels of functionality might be affected by removing the modem and GPS. For this car:
Everything that relies on a data connection will no longer work. This includes things like over-the-air updates as well as Toyota cloud-based services and SOS functionality
This is a safety tradeoff - you’re disabling automatic crash notification and emergency calling
The car's microphone is wired through the DCM, and in the absence of any other changes, removing the DCM means the in-car microphone won't work, which is inconvenient if you plan on taking calls in the car. However, we'll install a DCM Bypass Kit (discussed more below) to restore full functionality and have a working microphone
CarPlay has a quirk: the phone uses its own GPS but also accepts a location signal from the car's GPS unit. After removing the DCM, the car would get confused about its location and sometimes jump my position to the middle of Nevada (I live in San Francisco), making navigation annoying. To work around this we'll fully disconnect the car's GPS, so it can't send a bad location to the phone
From the title of the blog post you might have wondered why bother removing the GPS after we’ve removed the modem - who cares if the car has built-in location when it can’t phone home with that data? This is why
This is a well-documented bug with discussions on Apple Support threads as well as car-specific forums like rav4world. This bug affects more than just Toyotas, it’s a generic Apple bug even for people who haven’t removed their modem (but anecdotally removing my modem made the problem worse)
Removing the DCM and GPS may void parts of your warranty - just something to be aware of. Thanks to the Magnuson–Moss Warranty Act, it cannot void the whole car warranty. It can void coverage related to the work you did (cloud services, telematics, etc.) but unrelated failures like engine problems must still be covered
So thankfully everything in the car remains 100% functional except the cloud-based services mentioned above, which I didn’t want anyway. There is also one critical caveat about Bluetooth:
No more Bluetooth
Important: Even after the modem is removed, if you connect your phone to the car via Bluetooth then the car will use your phone as an internet connection and send all the same telemetry data back to Toyota. However, if you use a wired USB connection then it does not do that (see the discussion here and elsewhere), so I exclusively use CarPlay via USB. I wish I had a way to completely disable the car’s Bluetooth functionality, but it’s deeply integrated into the head unit.
If you need USB cables for CarPlay I like these USB-A to Lightning and USB-A to USB-C cables from Anker.
Or, if you prefer the convenience of Bluetooth, you can use a Bluetooth -> wired USB adapter like this one. The adapter receives Bluetooth from your phone and presents itself to the car as a USB device, so the car treats it like a wired connection and won’t tether through your phone.
Now, onto the necessary tools and parts:
Tools/parts needed
For this project you’ll need:
A trim removal kit (I used this one)
A ratchet, extension, 10mm socket, and 8mm socket
I’ve been extremely happy with this set. However if you’re not planning on doing more handyperson type work then just borrow these 4 parts from a neighbor instead of spending the money on a whole set
(Optional) A precision flathead screwdriver (like this one). This can help with disconnecting wire plugs
This Telematics DCM Bypass Kit, for fixing the in-car microphone
$90 is a bit steep for a part that probably costs less than $1 to produce, but the makers of the kit did the work of reading the (paywalled) Toyota diagnostics to produce a working product. If you’d like to build your own version you’ll need to subscribe to Toyota TIS to access the car wiring schematics. It’s unfortunate that these schematics and other repair manuals aren’t public
Overall this was a medium-difficulty project that took me a few hours to complete. Now, let’s get to work:
Removing the car modem
1) Push down on the leather of your shifter and remove the pin (don’t lose it!):
2) Remove the shifter top:
3) Use the trim tool to pop out the base of the shifter. Just lean it to the side, no need to disconnect anything:
4) Use your hands to pop out the next panel and lean it to the side:
5) Remove these three 10mm bolts:
6) Pull on this light gray trim piece until it disconnects slightly:
7) Pull the radio out, disconnect the plug, and put the radio aside. The radio is held on by clips only and can even be pulled out with your hands, but it requires a little force and the trim removal tool may be helpful. When disconnecting the plug it may help to use the precision screwdriver to push down on the tab to unlock it, but you can also do it with your hands:
8) Pull the next panel (the seat warming controls) out with your hands. It’s only held on by clips but may require a bit of force to remove:
9) Take a photo of all the wiring connections on the seat warming controls so you can assemble it correctly later, unplug all the wires, and set the controls aside:
10) You now have access to the DCM:
11) Removing the DCM requires a lot of maneuvering, tight spaces, and patience, but you can do it. There are two 8mm bolts on the right and one 8mm bolt on the left that need to be removed. Getting access to them may require removing some of the other harnesses or components that are in the way - just go slow and steady, take your time, and take photos of things before you move them. After those 3 bolts are removed you have a little more play to pull the unit out, and after disconnecting the wires in the back you can completely remove the DCM. Here's mine out of the car, part number 86741-06130:
12) Now that the modem is removed, we need to install the DCM Bypass Kit so the in-car microphone continues to work. It's extremely straightforward: just plug it into the wiring harness that you removed from the DCM. The plugs will only fit on the correct wires; there's no way to get it wrong:
13) Reassemble everything by going in reverse order. Make sure all clips, bolts, etc. are back in their original position and everything is seated correctly. This part should go much faster than disassembly.
Now you’re done with the hard part. Next we disconnect the GPS from the head unit, which is significantly easier:
Removing the GPS antenna
1) Use the trim tool to remove the back panel behind the infotainment screen:
2) Unscrew these four 10mm bolts:
3) Pop the head unit out (it's only held on by 2 clips at this point). The part number will vary but for my car it was 86140-0R710.
4) The GPS antenna is one of the single-wire cables (not the multi-wire plugs). I had 3 single-wire cables in my unit and the GPS wire was the black wire shown in the picture. I was able to determine this by process of elimination - unplugging one of the wires disconnected my car’s reverse camera, unplugging another one disconnected CarPlay completely, and the last one was the GPS - worked like a charm. Again, with a Toyota TIS subscription you can get access to the head unit wiring diagram and not have to make guesses about which wire is which, but process of elimination worked fine for me:
5) Reassemble everything by going in reverse order. Again, make sure that all the clips seat properly.
Confirming it worked
After you have everything reassembled, turn the car on.
1) If you unplugged the modem successfully then:
The infotainment screen will have an icon in the upper right corner indicating no connection
The SOS light in the overhead console will be off:
2) If the DCM Bypass Kit was installed successfully then:
Make a phone call through CarPlay. The recipient should be able to hear you / the microphone should be working
Congratulations - your car no longer has the capability to transmit telemetry data. Of course it may still be captured to local storage and can be physically collected later, but for me that was fine.
Conclusion
Overall I'm very happy with this project. Unfortunately I think it's only a matter of time before the modem and GPS become more deeply integrated into the car (making the approach in this post infeasible), or cars develop more drastic failure modes when the modem/GPS is removed, or anti-right-to-repair laws get passed to further clamp down on this kind of modification. For now the win stands - no telemetry leaves the car. Strong federal privacy laws would make posts like this unnecessary; that's the world I'd rather live in.
On digital sovereignty, and why European cloud is better than you think
April 29, 2026 · 10 min. · Digital Sovereignty, Digital Infrastructure, Digital Autonomy, European Cloud, Europe
There’s a version of this post that starts with a spreadsheet and ends with a quiet sense of satisfaction. That’s mostly how it went. But underneath the practical exercise of swapping one SaaS tool for another was something that felt more urgent, a growing discomfort with how much of my digital infrastructure sat on servers I didn’t control, in a jurisdiction increasingly prone to unpredictability, operated by companies whose incentives don’t always align with mine.
Digital sovereignty sounds like a buzzword until you think carefully about what it means. It means knowing where your data lives. It means not being one policy change, one acquisition, or one executive’s bad mood away from losing access to tools your business depends on. It means choosing infrastructure based on values, not just convenience.
So I started migrating.
Analytics
Google Analytics was the obvious first target. It’s the canonical example of a service that’s free because you are the product, your visitors’ behavior funneled back into Google’s advertising machinery.
Self-hosting Matomo solved this cleanly. The data stays on my own server, and I’m fully GDPR-compliant without the cookie consent theater that Google Analytics typically requires. The reporting is comprehensive, the interface is familiar enough, and I own everything.
The main downside is maintenance overhead. You’re now responsible for updates, backups, and keeping the server healthy. For most setups this is low-friction, but it’s not zero friction.
Email
Proton Mail is based in Switzerland, not EU territory, but Swiss privacy law is closely aligned with GDPR and arguably stronger in some respects. Proton builds its business model around privacy rather than advertising, and end-to-end encryption is baked in at the protocol level rather than bolted on. The email client is solid, the calendar works well, and for anyone moving away from US-based services, it sits comfortably in the same spirit as the rest of this stack.
One adjustment is getting used to Proton’s filter system, which is a bit more limited than Gmail’s. Gmail lets you write filters against virtually anything, including the full body of the message. Proton doesn’t support filtering on email content at all. So if you’ve built a workflow around catching specific phrases or keywords in message bodies, you’ll have to rethink it. For most people this won’t be a dealbreaker, but it’s worth knowing before you migrate.
There’s also a practical limitation worth flagging: Proton caps custom domains at three, even on the Duo plan. If you run several domains, like separate addresses for different projects or businesses, you’ll hit that ceiling quickly and need to rethink how you route and send mail. I ended up consolidating, which was probably overdue anyway, but it wasn’t a choice I made entirely freely.
Proton isn’t free and charges a substantial fee compared to other options. You’ll get access to a whole suite of Proton apps though.
Password Management
Once I was in the Proton ecosystem, moving password management there as well made sense. Proton Pass is end-to-end encrypted, open source, and benefits from the same Swiss jurisdiction as the rest of Proton’s stack.
1Password is a genuinely great product and this was a lateral move more than an upgrade. The interface is simple, the browser extension works reliably, and having passwords, email, and calendar under one encrypted roof has a certain satisfying coherence to it.
Compute
DigitalOcean has earned its reputation by doing one thing exceptionally well: getting out of your way. The UI is clean, the mental model is simple, and spinning up infrastructure never feels like a chore. It’s the platform that proved developer experience could be a competitive moat.
Scaleway was a pleasant surprise. I expected a capable-but-rough European alternative, but what I found was a platform that’s genuinely well thought out. Servers spun up quickly inside a private network of my own configuration, the control panel is clean, and the options available matched everything I actually needed. Scaleway displays projected CO₂ emissions alongside server location choices, a nice touch.
Object Storage
Scaleway's object storage is S3-compatible, which makes migration mechanical rather than painful: update your endpoint and credentials, and existing code works unchanged.
I used a tool called rclone to sync my old AWS S3 storage buckets to the new Scaleway S3 buckets. This took a little more than a week of constant syncing, as these buckets were quite large.
Offsite Backups
OVH is the largest European cloud provider and brings the reliability and pricing you’d expect at that scale. Their object storage works well as a backup destination and ends up cheaper than Backblaze B2 once you configure lifecycle rules to move older backups to the cold storage class.
Getting there, however, requires some patience. The OVHcloud control panel is a labyrinth: the lifecycle rule configuration is buried somewhere in the documentation, and it involves some work in the terminal. Once it’s set up, it works reliably and the cost difference is meaningful.
Transactional Emails
Lettermint is a European transactional email service that does the job without the bloat. Deliverability is solid, the API is clean, and it has straightforward pricing.
Compared to SendGrid, the analytics are leaner and the ecosystem integrations are fewer. SendGrid has years of tooling, documentation, and community answers behind it. Lettermint is newer and smaller. For most transactional sending use cases (password resets, notifications, receipts) that doesn’t matter much. But if you’re doing complex multi-stream email infrastructure, you’ll want to audit the feature set carefully first.
Error Tracking
Bugsink is a self-hosted error tracking tool that accepts Sentry's SDK, which means the migration path is almost frictionless: change one line of configuration and you're done.
To be honest: Bugsink is bare-bones. There's no performance monitoring, no session replays, no advanced alerting. It's not a Sentry replacement for teams that use Sentry properly. For me, it's a simple remote error log: when something breaks in production I get a stack trace, and that's enough. Sentry's cloud product is genuinely excellent if you need the full feature set, and for larger engineering teams the breadth almost certainly justifies the cost. But if your use case is "tell me when something broke and show me the stack trace", self-hosted Bugsink does exactly that with no data leaving your infrastructure.
AI API integration
For my AI API integrations, I switched from OpenAI to Mistral. It worked out perfectly as I was mostly using simpler models anyway.
Mistral is headquartered in Paris and has published compelling open-weight models alongside its API offering. The API is clean, the models are fast and capable, and there’s something coherent about a European AI provider that leans into openness rather than away from it. For my inference workloads, the switch was lateral in quality and meaningfully better in terms of where the money goes.
CDN
Exception № 1
Not everything moved. Cloudflare is a US company, I still use it, and I’m at peace with that.
Here’s the reasoning: Cloudflare sits in front of my public-facing websites. Its job is to cache, protect against DDoS attacks, and make content load fast for visitors around the world. The data flowing through it is already public by definition. I’m not routing private communications or sensitive application data through Cloudflare; I’m using it to serve pages that anyone on the internet can read. The sovereignty calculus is different when the thing you’re protecting is already public.
I did try Bunny CDN, which is European-based and has a great reputation. For straightforward CDN use it’s excellent. But Cloudflare’s feature set (security rules, Workers platform, breadth of configuration options) wasn’t matched closely enough to justify the switch for my specific needs. Sometimes the pragmatic answer wins.
Payments
Exception № 2
Stripe is one of the few services I haven’t moved yet, even though payment infrastructure is exactly the kind of thing I care about having in a jurisdiction I trust. Mollie is a Dutch payment processor with full EU incorporation, strong GDPR compliance by design, and a product that has matured considerably in recent years. The API has converged toward parity for most common payment flows, and for a European business the regional payment method coverage (iDEAL, Bancontact, SEPA) is arguably better.
The migration is on the list. It's just not a trivial one. Payment integrations touch billing logic, webhooks, tax invoicing, and customer-facing flows in ways that require careful testing and a good moment to cut over. It's also more expensive than Stripe for my use case.
AI Code assistance
Exception № 3
This one felt overdue. OpenAI works fine, but the company’s trajectory doesn’t align with my own views anymore. After a period of deliberate drift, I felt the need to switch. Ideally I wanted to use Mistral Vibe here, but it just didn’t make the cut as it couldn’t compete with Claude.
Claude Code is now my day-to-day AI assistant for coding. The reasoning quality is strong, the context handling is genuinely impressive, and Anthropic’s approach to safety and transparency feels more structurally grounded.
Anthropic is a US company, so this doesn't satisfy the jurisdictional criterion I applied elsewhere. But it satisfies something else: the sense that the organization building the thing has given serious thought to what it's building and why.
It’s also worth noting that local models are becoming increasingly viable. Qwen, Alibaba’s open-weight model family, is a strong example: capable enough for many real workloads, running entirely on your own hardware, with no data leaving your machine. The gap between frontier API models and what you can run locally is narrowing faster than most people realize.
Not everything is ideal. Most data centers still sit outside Europe, and “open” means different things to different organizations. But the direction is right. A world where capable AI runs on your own hardware, with published weights and transparent training, is a much better world for digital autonomy than one where all inference routes through a handful of closed API providers. We’re not there yet, but the trajectory is encouraging.
Git Version Control
Exception № 4
GitLab also remains for now. GitLab is headquartered in the US but offers self-hosted options, and the company has long had a strong commitment to transparency and open source. A self-hosted instance is on the roadmap, but moving source control is a more significant undertaking than most of these migrations.
GitHub stays in the picture for one specific purpose: public-facing NPM packages and issue tracking for open source software. When you publish a package or maintain public tooling, GitHub is where developers expect to find it. The network effects are real, it’s where the forks, stars, and issue reports come from. For the public-facing surface of open source work, there’s no meaningful sovereignty concern and a lot of practical upside.
Was it worth it?
The practical friction was real but manageable. Most migrations were an afternoon of work: update a credential here, point a DNS record there, export and import some data. A few took longer. None were catastrophic. All in all it took longer than expected, but most of the time went into researching and planning when to do what. Two months in, everything is running without incident. No fires, no regrets.
Digital sovereignty isn't about paranoia. It's about being deliberate about your infrastructure: deciding who holds your data, who can reach it, and what happens when politics shift. The tools are there. The ecosystem is mostly mature. The only thing stopping me was inertia. It's entirely possible to run a reliable, capable, professional digital stack mostly from European infrastructure. This migration was proof of that.
Published May 10, 2026, 12:30 PM EDT
In March 2026, Linux crossed five percent of Steam’s user base for the first time, an all-time high for an operating system that spent two decades as a novelty when it came to any kind of gaming. Microsoft’s end-of-support deadline for Windows 10 last October pushed many users to look at alternatives, and the Steam Deck has quietly turned millions of people into Linux gamers without them really thinking about it, leading to more widespread adoption on desktop machines.
Most of that progress used to happen inside a piece of software called Wine, the translation layer that convinces Windows games they’re running on Windows. Valve’s tuned version of Wine, called Proton, is what makes Steam Play and the Steam Deck work. For years, every meaningful improvement to Linux gaming came from changes to Wine and Proton themselves. That’s still true, but increasingly the most important changes are happening one layer deeper, inside the Linux kernel. The latest example of that is something called NTSYNC, a kernel-level driver that has offered great performance gains over previous versions of Wine, and is loaded by default on every Steam Deck that’s up-to-date.
What NTSYNC actually is
An additional piece of the performance puzzle
NTSYNC is a small driver added directly to the Linux kernel that gives it a native implementation of the Windows-specific synchronization primitives that games depend on to coordinate themselves.
Modern games juggle dozens of things at once. While you're playing, your CPU manages the rendering pipeline, loads assets, runs physics, processes audio, handles NPC AI routines, and tracks inputs, all in parallel across multiple cores. All those jobs constantly have to coordinate so they don't trip over each other.
Windows handles this coordination with a specific set of kernel synchronization primitives (events, mutexes, semaphores, and the waits that operate on them). Before NTSYNC, Wine had to approximate those primitives with mechanisms like esync and fsync, which both worked but didn't always match Windows behavior exactly. NTSYNC builds the real primitives straight into the Linux kernel for the first time, so Wine no longer has to emulate anything. The developer-facing API calls don't change; Linux just knows how to answer them natively.
NTSYNC is part of a growing pattern
Not the first time Linux has inherited features because of Windows
NTSYNC isn't the first time Linux has gained a new feature specifically because Windows games needed it. A few years back, Linux added a way for software to wait on several events at once (the futex_waitv() system call), something Windows has had built in for decades but Linux lacked. Wine had been working around the gap with awkward tricks until the kernel finally got native support.
This work is driven by Valve, by CodeWeavers (the company that employs many of the core Wine developers, including NTSYNC’s author Elizabeth Figura), and by a steady stream of contributors who want Linux to be a real gaming platform without depending on out-of-ecosystem patches forever.
These aren’t magical performance gains
fsync was already pretty good
The headline performance gains look great, but they need some context. The eye-catching 40 to 200 percent FPS gains cited in NTSYNC’s original benchmarks were measured against unmodified upstream Wine, which almost nobody uses to play games on Linux anymore. Most Linux gamers, including every Steam Deck owner, use Proton, which already has fsync. Compared to fsync, NTSYNC’s performance gains are far more modest. The games that benefit most from the change to NTSYNC are games that were really struggling before. Anything that was running at decent framerates beforehand is still going to run fine.
Valve adopted it anyway
It’s a great sign
Pierre-Loup Griffais, an engineer at Valve, has gone on the record to say that fsync was already fast enough. Valve shipped NTSYNC in stable SteamOS in March anyway, and that speaks to the real issue: fsync is still a workaround at its core, and it can cause problems that have nothing to do with raw FPS.
Those old workarounds got subtle edge cases wrong in ways that produced occasional hitches, deadlocks, or weird behavior in specific games: bugs that never show up on a benchmark chart but can absolutely ruin an individual's experience. NTSYNC fixes them at the source by matching Windows behavior exactly, which means that as soon as your favorite distro, whether it's Bazzite, CachyOS, Fedora, or a flavor of Ubuntu, moves to the new kernel version, it gets this much-needed fix too.
Gaming on Linux continues to improve by the month
Linux has grown enormously as a gaming platform. Where there was once nothing but clever Wine patches and community workarounds, there is now backing from gaming behemoths like Valve, driving changes to the Linux kernel itself. NTSYNC won't be the last time a piece of Windows gets rebuilt inside Linux because gamers needed it, and with more than five percent of Steam's user base now running Linux, the incentive to keep doing it has never been stronger.
09 May, 2026
This dev-log is getting a lot of attention on HN (scary!): HN Thread.
To those coming here from HN: this started as an investigation, or rather a question: "How far can I get building a piece of software while keeping myself completely out of the loop?" The tl;dr of this dev log is that I still need to be in the loop to make anything meaningful. Takeaways:
like “em-dash” is to ai writing, “god-object” is to ai coding
vibe-coding makes everything feel cheap and you may end up losing focus and building bloat
let a human (you) write the architecture and don’t just keep asking it for features
And some AGENTS.md/CLAUDE.md directives that I felt helped me stay out of the loop a little more.
Human intervention is still needed as of 10/05/2026. You can totally go back now! [3]
I
Here is k10s: https://github.com/shvbsle/k10s/tree/archive/go-v0.4.0
234 commits. ~30 weekends. Built entirely on vibe-coded sessions with Claude, whenever my tokens lasted long enough to ship something.
I’m archiving my TUI tool and rewriting it from scratch.
k10s started as a GPU-aware Kubernetes dashboard (and my first foray into building something serious with AI). Think k9s but built for the people running NVIDIA clusters, people who actually care about GPU utilization, DCGM metrics, and which nodes are sitting idle burning $32/hr. I built it in Go with Bubble Tea [1] and it worked.
For a while… :(
What I learned over these 7 months is worth more than the 1690 lines of model.go I'm throwing away. And I think anyone doing serious vibe-coding can benefit from this, because this part doesn't surface much (I feel it gets buried under the demo reels and the velocity wins).
tl;dr: AI writes features, not architecture. The longer you let it drive without constraints, the worse the wreckage gets. The velocity makes you think you’re winning right up until the moment everything collapses simultaneously.
II
vibe coding high
I started k10s in late September 2025. The first few weeks were magic. I’d prompt Claude with “add a pods view with live updates” and boom, it worked. Resource list views, namespace filtering, log streaming, describe panels, keyboard navigation. Each feature landed clean because the project was small enough that the AI could hold the whole thing in context.
The basic k9s clone took maybe 3 weekends. Resource views for pods, nodes, deployments, services. A command palette. Watch-based live updates. Vim keybindings. All working, all vibe-coded in single sessions. I was building at maybe 10x my normal speed and it felt incredible.
Then I wanted the main selling point.
The whole reason k10s exists is the GPU fleet view. A dedicated screen that shows you every node’s GPU allocation, utilization from DCGM, temperature, power draw, memory. Not buried in kubectl describe node output, but right there in a purpose-built table with color-coded status. Idle nodes in yellow. Busy in green. Saturated in red.
And Claude one-shot it. I prompted for the fleet view, it generated the FleetView struct, the tab filtering (GPU/CPU/All), the custom rendering with allocation bars. It looked beautiful. I was riding the high.
Then I typed :rs pods to switch back to the pods view.
Nothing rendered. The table was empty. Live updates had stopped. I switched to nodes, it showed stale data from the fleet view’s filter. I went back to fleet, the tab counts were wrong.
The god object had consumed itself.
This is the title of the blog post. This is where I intervened for the first time. For 7 months I’d been prompting and shipping without ever sitting down and actually reading the code Claude wrote. I’d look at the diff, verify it compiled, test the happy path, move on. But now something was fundamentally broken and I couldn’t just prompt my way out of it.
So I sat down and read model.go. All 1690 lines. I was horrified.
Here’s what it looked like. One struct to rule them all:
type Model struct {
    // 3rd party UI components
    table        table.Model
    paginator    paginator.Model
    commandInput textinput.Model
    help         help.Model

    // cluster info and state
    k8sClient         *k8s.Client
    currentGVR        schema.GroupVersionResource
    resourceWatcher   watch.Interface
    resources         []k8s.OrderedResourceFields
    listOptions       metav1.ListOptions
    clusterInfo       *k8s.ClusterInfo
    logLines          []k8s.LogLine
    describeContent   string
    currentNamespace  string
    navigationHistory *NavigationHistory
    logView           *LogViewState
    describeView      *DescribeViewState
    viewMode          ViewMode
    viewWidth         int
    viewHeight        int
    err               error
    pluginRegistry    *plugins.Registry
    helpModal         *HelpModal
    describeViewport  *DescribeViewport
    logViewport       *LogViewport
    logStreamCancel   func()
    logLinesChan      <-chan k8s.LogLine
    horizontalOffset  int
    mouse             *MouseHandler
    fleetView         *FleetView
    creationTimes     []time.Time
    allResources      []k8s.OrderedResourceFields // fleet's unfiltered set
    allCreationTimes  []time.Time                 // fleet's timestamps
    rawObjects        []unstructured.Unstructured
    ageColumnIndex    int
    // …
}
UI widgets. K8s client. Per-view state for logs, describe, fleet. Navigation history. Caching. Mouse handling. All in one struct. And the Update() method was a 500-line function dispatching on msg.(type) with 110 switch/case branches.
This is the moment I stopped vibe-coding and started thinking.
III
five tenets from the wreckage
Here’s what I extracted from 7 months of watching AI generate a codebase that slowly ate itself. Each of these is something I did wrong, why it happens with AI-assisted coding, and what you should actually put in your CLAUDE.md or agents.md to prevent it.
Tenet 1: AI builds features, not architecture.
Every time I prompted Claude for a feature, it delivered. Perfectly. The fleet view worked on the first try. Log streaming worked. Mouse support worked. The problem is that each feature was implemented in the context of “make this work right now” without any awareness of the 49 other features sharing the same state.
Here’s what the resourcesLoadedMsg handler looks like. This is the code that runs every time you switch views:
case resourcesLoadedMsg:
    m.logLines = nil       // Clear log lines when loading resources
    m.horizontalOffset = 0 // Reset horizontal scroll on resource change

    if m.currentGVR != msg.gvr && m.resourceWatcher != nil {
        m.resourceWatcher.Stop()
        m.resourceWatcher = nil
    }
    m.currentGVR = msg.gvr
    m.currentNamespace = msg.namespace
    m.listOptions = msg.listOptions
    m.rawObjects = msg.rawObjects

    // For nodes: store the full unfiltered set, classify, then filter
    if msg.gvr.Resource == k8s.ResourceNodes && m.fleetView != nil {
        m.allResources = msg.resources
        m.allCreationTimes = msg.creationTimes
        if len(msg.rawObjects) > 0 {
            m.fleetView.ClassifyAndCount(m.rawObjectPtrs())
        }
        m.applyFleetFilter()
    } else {
        m.resources = msg.resources
        m.creationTimes = msg.creationTimes
        m.allResources = nil
        m.allCreationTimes = nil
    }
See the if msg.gvr.Resource == k8s.ResourceNodes && m.fleetView != nil conditional? That’s the fleet view being special-cased inside the generic resource loading path. Every new view that needed custom behavior got another branch here. And every branch needed to manually clear the right combination of fields or the previous view’s data would bleed through.
How many = nil cleanup lines exist in this file? I counted:
m.logLines = nil     // Clear log lines when loading resources
m.allResources = nil // Clear fleet data when not on nodes
m.resources = nil    // Clear resources when loading logs
m.resources = nil    // Clear resources when loading describe view
m.logLines = nil     // Clear log lines when loading describe view
m.resources = nil    // Clear resources when loading yaml view
m.logLines = nil     // Clear log lines when loading yaml view
m.logLines = nil     // … two more in other handlers
m.logLines = nil
Nine manual nil assignments scattered across a 1690-line file. Miss one and you get ghost data from the previous view. This is what happens when there’s no view isolation. AI can’t see this pattern decaying over time because each prompt only touches one code path.
What to do instead: Write the architecture yourself before any code. Not a vague design doc. A concrete set of interfaces, message types, and ownership rules. Then put those rules in your CLAUDE.md so the AI sees them on every prompt:
# Architecture Invariants (CLAUDE.md)
- Each view implements the View trait. Views do NOT access other views' state.
- All async data arrives via AppMsg variants. No direct field mutation from background tasks.
- Adding a new view MUST NOT require modifying existing views.
- The App struct is a thin router. It owns navigation and message dispatch. Nothing else.
The AI will follow these if you write them down. It just won’t invent them for you.
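To make that concrete, here is a minimal sketch of those invariants in code. It's written in Rust to match the "what to do instead" snippets later in this post; the names (AppMsg, View, App, FleetView) are illustrative, not the actual rewrite:

// Minimal sketch of the invariants above (illustrative names, not the real k10s rewrite).

enum AppMsg {
    FleetData(Vec<String>), // async data arrives only as a message variant
    Tick,
}

trait View {
    // Each view mutates only its own state in response to messages.
    fn update(&mut self, msg: &AppMsg);
    // Rendering reads the view's own state, nothing else.
    fn render(&self) -> String;
}

struct FleetView {
    nodes: Vec<String>,
}

impl View for FleetView {
    fn update(&mut self, msg: &AppMsg) {
        if let AppMsg::FleetData(nodes) = msg {
            self.nodes = nodes.clone();
        }
    }
    fn render(&self) -> String {
        format!("fleet: {} nodes", self.nodes.len())
    }
}

// The App struct is a thin router: it owns navigation and dispatch, nothing else.
struct App {
    views: Vec<Box<dyn View>>,
    active: usize,
}

impl App {
    fn dispatch(&mut self, msg: AppMsg) {
        // Adding a view means adding a struct and pushing it here;
        // no existing view is modified.
        self.views[self.active].update(&msg);
    }
}

fn main() {
    let mut app = App { views: vec![Box::new(FleetView { nodes: vec![] })], active: 0 };
    app.dispatch(AppMsg::FleetData(vec!["node-a".into(), "node-b".into()]));
    println!("{}", app.views[app.active].render());
    app.dispatch(AppMsg::Tick); // not handled by FleetView, harmlessly ignored
}

The structure itself is the guardrail: the router has no view-specific fields to smear state into, so the shortest path for the next feature is a new view struct, not another branch.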
Tenet 2: The god object is the default AI artifact.
AI gravitates toward single-struct-holds-everything because it satisfies the immediate prompt with minimal ceremony. But it gets worse. Because there’s no view isolation, key handling becomes a nightmare. Here’s the actual key dispatch for the s key:
case m.config.KeyBind.For(config.ActionToggleAutoScroll, key):
    if m.currentGVR.Resource == k8s.ResourceLogs {
        m.logView.Autoscroll = !m.logView.Autoscroll
        if m.logView.Autoscroll {
            m.table.GotoBottom()
        }
        return m, nil
    }
    // Shell exec for pods and containers views
    if m.currentGVR.Resource == k8s.ResourcePods {
        // … 20 lines to look up selected pod, get name, namespace …
        return m, m.commandWithPreflights(
            m.execIntoPod(selectedName, selectedNamespace),
            m.requireConnection,
        )
    }
    if m.currentGVR.Resource == k8s.ResourceContainers {
        // … container exec logic …
        return m, m.commandWithPreflights(m.execIntoContainer(), m.requireConnection)
    }
    return m, nil
One keybinding. Three completely different behaviors depending on which view you’re in. The s key means “autoscroll” in logs, “shell” in pods, and “shell into container” in containers. This is all in one flat switch because there are no per-view key maps. The AI generated this because I said “add shell support for pods” and it found the nearest key handler and jammed it in.
And look at how Enter works. This is the drill-down handler:
case m.config.KeyBind.For(config.ActionSubmit, key):
    // Special handling for contexts view
    if m.currentGVR.Resource == "contexts" {
        // … 12 lines …
        return m, m.executeCtxCommand([]string{contextName})
    }
    // Special handling for namespaces view
    if m.currentGVR.Resource == "namespaces" {
        // … 12 lines …
        return m, m.executeNsCommand([]string{namespaceName})
    }
    if m.currentGVR.Resource == k8s.ResourceLogs {
        return m, nil
    }
    // … 25 more lines of generic drill-down …
Every view is a conditional in a flat dispatch. There are 20+ occurrences of m.currentGVR.Resource == used as a type discriminator in this single file. Not types. String comparisons. Every new view means touching every handler.
What to do instead: Put this in your CLAUDE.md:
# State Ownership Rules
- NEVER add fields to the App/Model struct for view-specific state.
- Each view is a separate struct implementing the View trait/interface.
- Each view declares its own key bindings. The app dispatches keys to the active view.
- If you need to add a keybinding, add it to the relevant view's keymap, not a global one.
- Adding a view means adding a file. If your change requires modifying existing views, stop and ask.
The AI will always take the shortest path (“add another if-branch”). Your job is to make the shortest path also the correct path by putting guardrails in the file it reads on every invocation.
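Here's what per-view key ownership can look like in practice, as a small illustrative Rust sketch (hypothetical names, not the actual rewrite). The same physical key gets its meaning from whichever view is active, so no global switch ever has to know about both:

// Illustrative sketch of per-view key ownership.

enum Action {
    ToggleAutoscroll,
    ShellIntoPod,
    None,
}

trait View {
    // Each view interprets keys for itself; there is no global keymap.
    fn handle_key(&mut self, key: char) -> Action;
}

struct LogsView {
    autoscroll: bool,
}

impl View for LogsView {
    fn handle_key(&mut self, key: char) -> Action {
        match key {
            // "s" means autoscroll here...
            's' => {
                self.autoscroll = !self.autoscroll;
                Action::ToggleAutoscroll
            }
            _ => Action::None,
        }
    }
}

struct PodsView;

impl View for PodsView {
    fn handle_key(&mut self, key: char) -> Action {
        match key {
            // ...and shell-exec here, without either view knowing about the other.
            's' => Action::ShellIntoPod,
            _ => Action::None,
        }
    }
}

fn main() {
    let mut logs = LogsView { autoscroll: false };
    let mut pods = PodsView;
    // The app just forwards the key to whichever view is active.
    assert!(matches!(logs.handle_key('s'), Action::ToggleAutoscroll));
    assert!(matches!(pods.handle_key('s'), Action::ShellIntoPod));
    println!("same key, two views, two meanings");
}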
Tenet 3: Velocity illusion widens your scope.
This one’s psychological, not technical, and I think it’s the most dangerous.
When I started k10s, I wanted a GPU-focused tool. For people running training clusters. A niche audience that I’m part of. But vibe-coding made everything feel cheap. “Oh I can add pods view in one session? Let me add deployments too. And services. And a full command palette. And mouse support. And contexts. And namespaces.”
Suddenly I was building k9s. A general-purpose Kubernetes TUI. For everyone. Because the AI made it feel like each feature was free.
It wasn’t free. Each feature was another branch in the god object. Here’s the keybinding struct:
type keyMap struct {
    Up, Down, Left, Right key.Binding
    GotoTop, GotoBottom   key.Binding
    AllNS, DefaultNS      key.Binding
    Enter, Back           key.Binding
    Command, Quit         key.Binding
    Fullscreen            key.Binding // log view
    Autoscroll            key.Binding // log view (also shell in pods!)
    ToggleTime            key.Binding // log view
    WrapText              key.Binding // log + describe view
    CopyLogs              key.Binding // log view
    ToggleLineNums        key.Binding // describe view
    Describe              key.Binding // resource views
    YamlView              key.Binding // resource views
    Edit                  key.Binding // resource views
    Shell                 key.Binding // pods (CONFLICTS with Autoscroll!)
    FilterLogs            key.Binding // log view
    FleetTabNext          key.Binding // fleet view only
    FleetTabPrev          key.Binding // fleet view only
}
One flat keymap for all views. Comments in parens show which view each binding applies to. Autoscroll and Shell are both s. This “works” because the dispatch checks m.currentGVR.Resource before acting. But it means you can’t reason about keybindings locally. You have to trace through the entire 500-line Update function to know what a key does.
The complexity was accumulating invisibly while the velocity metric said “you’re shipping!”
What to do instead: Write a vision doc that explicitly says who you’re NOT building for, and put the scope boundary in your CLAUDE.md:
# Scope (do NOT expand beyond this)
k10s is for GPU cluster operators. Not all Kubernetes users.
Supported views: fleet, node-detail, gpu-detail, workload. That's it.
Do NOT add generic resource views (pods, deployments, services).
Do NOT add features that duplicate k9s functionality.
If a feature request doesn't serve someone running GPU training jobs, reject it.
Vibe-coding makes you feel like you have infinite implementation budget. You don’t. You have infinite LINE budget (the AI will generate as much code as you want). But you have the same finite complexity budget as always. The architecture can only support so many features before it buckles, regardless of how fast you wrote them. The CLAUDE.md scope section is you saying no in advance, before the velocity high convinces you to say yes.
Tenet 4: Positional data is a time bomb.
Every resource in k10s was fetched from the Kubernetes API and immediately flattened:
type OrderedResourceFields []string
Column identity was purely positional. Here’s the sort function for the fleet view. Look at the index access:
func sortFilteredResources(rows []k8s.OrderedResourceFields, times []time.Time, tab FleetTab) {
    sort.SliceStable(indices, func(a, b int) bool {
        ra := rows[indices[a]]
        rb := rows[indices[b]]

        switch tab {
        case FleetTabGPU:
            // Sort by Alloc column (index 3) ascending
            allocA, allocB := "", ""
            if len(ra) > 3 { allocA = ra[3] }
            if len(rb) > 3 { allocB = rb[3] }
            return allocA < allocB

        case FleetTabCPU:
            // Sort by Name column (index 0) ascending
            nameA, nameB := "", ""
            if len(ra) > 0 { nameA = ra[0] }
            if len(rb) > 0 { nameB = rb[0] }
            return nameA < nameB

        case FleetTabAll:
            // GPU nodes first, then CPU nodes.
            // Within GPU: sort by Alloc (index 3).
            // Within CPU: sort by Name (index 0).
            computeA, computeB := "", ""
            if len(ra) > 2 { computeA = ra[2] }
            if len(rb) > 2 { computeB = rb[2] }
            aIsGPU := strings.HasPrefix(computeA, "gpu")
            bIsGPU := strings.HasPrefix(computeB, "gpu")
            // …
        }
    })
}
ra[3] is Alloc. ra[2] is Compute. ra[0] is Name. These are magic numbers. The only thing connecting index 3 to “Alloc” is a comment and the column order defined in resource.views.json:
{
  "nodes": {
    "fields": [
      { "name": "Name",     "weight": 0.28 },
      { "name": "Instance", "weight": 0.15 },
      { "name": "Compute",  "weight": 0.12 },
      { "name": "Alloc",    "weight": 0.12 },
      …
    ]
  }
}
Add a column between Instance and Compute? Every sort, every conditional render, every place that says ra[2] or ra[3] is now silently wrong. The compiler can’t help you because it’s all []string. And the JSON config can’t express sort behavior, conditional rendering, or custom drill targets, so those live in Go code that hardcodes the positional assumptions from the JSON.
AI generates this pattern because it’s the shortest path from “fetch data” to “render table.” A []string satisfies any table widget immediately. Typed structs require more ceremony upfront. So the AI picks the fast path, and six months later you’re debugging why sort puts “Name” values in the “Alloc” column.
What to do instead: Put this directive in your CLAUDE.md:
# Data Representation
- NEVER flatten structured data into []string, Vec<String>, or positional arrays.
- All data flows as typed structs (FleetNode, PodInfo, etc.) until the render() call.
- Column identity comes from struct field names, not array indices.
- Sort functions operate on typed fields, never on positional access like row[3].
- The ONLY place strings are created for display is inside render()/view() functions.
Then your typed struct makes impossible states impossible [2]:
struct FleetNode {
    name: String,
    instance_type: String,
    compute_class: ComputeClass,
    alloc: GpuAlloc,
}
You can’t sort by the wrong column when columns are named fields. You can’t accidentally compare Alloc strings as names. The compiler enforces this for you. AI will always pick Vec<String> because it satisfies the prompt faster. Your CLAUDE.md makes the typed path the path of least resistance.
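For a sense of the payoff, here's the earlier fleet sort redone over named fields. This is a sketch with simplified stand-ins (ComputeClass reduced to GPU/CPU, GpuAlloc reduced to a count): no magic indices, no len() guards, no comments doing the compiler's job.

// Sketch of the same fleet sort over typed fields (simplified types; the real
// ComputeClass / GpuAlloc would be richer than this).

#[derive(PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
enum ComputeClass {
    Gpu,
    Cpu,
}

struct FleetNode {
    name: String,
    compute_class: ComputeClass,
    gpus_allocated: u32, // stands in for GpuAlloc
}

fn sort_fleet(nodes: &mut Vec<FleetNode>) {
    // GPU nodes first, then CPU nodes; GPU nodes by allocation, CPU nodes by name.
    nodes.sort_by(|a, b| {
        a.compute_class
            .cmp(&b.compute_class)
            .then_with(|| match a.compute_class {
                ComputeClass::Gpu => a.gpus_allocated.cmp(&b.gpus_allocated),
                ComputeClass::Cpu => a.name.cmp(&b.name),
            })
    });
}

fn main() {
    let mut nodes = vec![
        FleetNode { name: "cpu-b".into(), compute_class: ComputeClass::Cpu, gpus_allocated: 0 },
        FleetNode { name: "gpu-a".into(), compute_class: ComputeClass::Gpu, gpus_allocated: 8 },
        FleetNode { name: "gpu-c".into(), compute_class: ComputeClass::Gpu, gpus_allocated: 2 },
        FleetNode { name: "cpu-a".into(), compute_class: ComputeClass::Cpu, gpus_allocated: 0 },
    ];
    sort_fleet(&mut nodes);
    for n in &nodes {
        println!("{}", n.name); // gpu-c, gpu-a, cpu-a, cpu-b
    }
}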
Tenet 5: AI doesn’t own state transitions.
The Bubble Tea architecture has a beautiful idea: Update() is the only place state mutates, driven by messages. But k10s violated this. The updateTableMsg handler spawned a closure that mutated Model fields from inside a goroutine:
case updateTableMsg:
    return m, func() tea.Msg {
        // block on someone sending the update message.
        <-m.updateTableChan

        // Preserve cursor position across column/row updates so that
        // background refreshes don't reset the user's selection.
        savedCursor := max(m.table.Cursor(), 0)

        // run the necessary table view update calls.
        m.updateColumns(m.viewWidth)
        m.updateTableData()

        // Restore cursor, clamped to valid range.
        rowCount := len(m.table.Rows())
        if rowCount > 0 {
            if savedCursor >= rowCount {
                savedCursor = rowCount - 1
            }
            m.table.SetCursor(savedCursor)
        }
        return updateTableMsg{}
    }
This returned function (a tea.Cmd) is executed by Bubble Tea in a separate goroutine. It calls m.updateColumns(m.viewWidth) and m.updateTableData() which read and write m.resources, m.table, m.viewWidth. Meanwhile, View() is called on the main goroutine reading the same fields. There’s no lock. No mutex. The channel <-m.updateTableChan blocks the goroutine until someone sends an update signal, but nothing prevents View() from reading half-written state.
This is a textbook data race. It worked 99% of the time. Corrupted the display 1% of the time in ways that made me think I was going insane.
AI generates this because “just mutate it in the closure” is the shortest path to working code. Proper message passing (send a message back to Update(), let Update() apply the mutation atomically on the main loop) requires more types, more plumbing. The AI is optimizing for the prompt, not for correctness under concurrency.
What to do instead: All mutations to render-visible state happen on the main loop. Period. Background workers produce data. They send it as a message. The main loop receives the message and applies it. This is the one rule you cannot break in concurrent UI code.
// Background task:
tx.send(AppMsg::FleetData(nodes)).await;

// Main loop:
match msg {
    AppMsg::FleetData(nodes) => {
        self.fleet_view.update_nodes(nodes);
    }
}
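Spelled out as a self-contained sketch (plain std threads and channels for brevity; a real TUI would route this through its framework's command or runtime machinery), the pattern looks like this:

// Minimal sketch of the ownership rule: background threads only send messages;
// the main loop is the single place render-visible state changes.

use std::sync::mpsc;
use std::thread;

enum AppMsg {
    FleetData(Vec<String>),
    Quit,
}

struct FleetView {
    nodes: Vec<String>,
}

fn main() {
    let (tx, rx) = mpsc::channel::<AppMsg>();

    // Background worker: produces data, never touches UI state directly.
    thread::spawn(move || {
        let nodes = vec!["gpu-node-1".to_string(), "gpu-node-2".to_string()];
        tx.send(AppMsg::FleetData(nodes)).unwrap();
        tx.send(AppMsg::Quit).unwrap();
    });

    // Main loop: the only place state mutates, one message at a time.
    let mut fleet_view = FleetView { nodes: Vec::new() };
    for msg in rx {
        match msg {
            AppMsg::FleetData(nodes) => {
                fleet_view.nodes = nodes;
                println!("fleet view now shows {} nodes", fleet_view.nodes.len());
            }
            AppMsg::Quit => break,
        }
    }
}

There is no window where the renderer can observe half-written state, because the mutation and the read both happen on the same loop, in message order.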