10 interesting stories served every morning and every evening.

Several npm latest releases are compromised · Issue #7383 · TanStack/router

github.com


Postmortem: TanStack npm supply-chain compromise | TanStack Blog

tanstack.com

by Tanner Linsley on May 11, 2026.

Last updated: 2026-05-11

On 2026-05-11, between 19:20 and 19:26 UTC, an attacker published 84 malicious versions across 42 @tanstack/* npm packages by combining: the pull_request_target "Pwn Request" pattern, GitHub Actions cache poisoning across the fork↔base trust boundary, and runtime memory extraction of an OIDC token from the GitHub Actions runner process. No npm tokens were stolen, and the npm publish workflow itself was not compromised.

The malicious versions were detected publicly within 20 minutes by an external researcher, ashishkurmi of StepSecurity. All affected versions have been deprecated, and npm security has been engaged to pull the tarballs from the registry. We have no evidence of npm credentials being stolen, but we strongly recommend that anyone who installed an affected version on 2026-05-11 rotate any AWS, GCP, Kubernetes, Vault, GitHub, npm, and SSH credentials reachable from the install host.

Tracking issue: TanStack/router#7383
GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx

Packages affected

42 packages, 84 versions (two per package, published roughly 6 minutes apart). See the tracking issue for the full table. Confirmed-clean families: @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, @tanstack/store, and @tanstack/start (the meta-package, not @tanstack/start-*).

What the malware does

When a developer or CI environment runs npm install, pnpm install, or yarn install against any affected version, npm resolves the malicious optionalDependencies entry, fetches the orphan payload commit from the fork network, runs its prepare lifecycle script, and executes a ~2.3 MB obfuscated router_init.js smuggled into the affected tarball. The script:

Harvests credentials from common locations: AWS IMDS / Secrets Manager, GCP metadata, Kubernetes service-account tokens, Vault tokens, ~/.npmrc, GitHub tokens (env, gh CLI, .git-credentials), and SSH private keys

Exfiltrates over the Session/Oxen messenger file-upload network (filev2.getsession.org, seed{1,2,3}.getsession.org), which is end-to-end encrypted with no attacker-controlled C2, so blocking by IP/domain is the only network mitigation

Self-propagates: enumerates other packages the victim maintains via registry.npmjs.org/-/v1/search?text=maintainer:<user> and republishes them with the same injection

Because the payload runs as part of npm install's lifecycle, anyone who installed an affected version on 2026-05-11 must treat the install host as potentially compromised.
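The execution vector above hinges on npm resolving a git-hosted optionalDependencies entry, whose prepare lifecycle script runs at install time. As a minimal defensive sketch (a hypothetical helper, not part of any TanStack tooling), a manifest can be scanned for git-hosted optional dependencies like so:

```python
import json

def flag_git_optional_deps(manifest_text: str) -> list[str]:
    """Return optionalDependencies entries that resolve to git hosts.

    Git-hosted specifiers (github:, git+..., git://) cause npm to run the
    target repo's `prepare` lifecycle script at install time, which is the
    execution vector described above.
    """
    manifest = json.loads(manifest_text)
    suspicious = []
    for name, spec in manifest.get("optionalDependencies", {}).items():
        if spec.startswith(("github:", "git+", "git://")):
            suspicious.append(f"{name} -> {spec}")
    return suspicious

# Example manifest carrying the injected entry from this incident
sample = '''{
  "name": "example",
  "optionalDependencies": {
    "@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
  }
}'''
print(flag_git_optional_deps(sample))
```

A registry-hosted specifier such as `^1.3.0` would not be flagged; only git specifiers carry the install-time script risk discussed here.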

Timeline

All times are UTC; timestamps are from the GitHub API and the npm registry.

Pre-attack (cache poisoning phase)

Detonation (publish phase)

Workflow run 25613093674 starts (19:15:44) and fails.

Detection and response

Formal malware reports are submitted via npm

Tanner begins the npm deprecation process for all 84 affected versions.

Public disclosure on Twitter/X, LinkedIn, and Bluesky from @tan_stack and maintainers

All cache entries for all TanStack/* GitHub repositories purged via the API.

Hardening PR merged: bundle-size.yml restructured, repository_owner guards added, third-party action refs pinned to SHAs.

Official GitHub Security Advisory published; CVE requested
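One of the hardening steps above, pinning third-party action refs to commit SHAs, is easy to lint for mechanically. A rough sketch (a hypothetical checker; the regex is a simplification and the SHA below is a placeholder, not a real pinned revision):

```python
import re

# A `uses:` ref is treated as pinned only if it is a full 40-char commit SHA.
USES_RE = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")

def floating_action_refs(workflow_text: str) -> list[str]:
    """Return action references in a workflow that are not pinned to a SHA."""
    floating = []
    for action, ref in USES_RE.findall(workflow_text):
        if not re.fullmatch(r"[0-9a-f]{40}", ref):
            floating.append(f"{action}@{ref}")
    return floating

workflow = """
steps:
  - uses: actions/checkout@v6.0.2
  - uses: TanStack/config/.github/setup@main
  - uses: actions/cache@0000000000000000000000000000000000000000
"""
print(floating_action_refs(workflow))
```

Tags and branches (`@v6.0.2`, `@main`) are flagged because their targets can move after review; a commit SHA cannot.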

Three vulnerabilities were chained together. Each is necessary for the attack; none alone is sufficient.

1. The pull_request_target "Pwn Request" pattern in bundle-size.yml

bundle-size.yml ran on pull_request_target for fork PRs and, inside that trigger context, checked out the fork's PR-merge ref and ran a build:

```yaml
on:
  pull_request_target:
    paths: ['packages/**', 'benchmarks/**']

jobs:
  benchmark-pr:
    steps:
      - uses: actions/checkout@v6.0.2
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge # fork's merged code
      - uses: TanStack/config/.github/setup@main # transitively calls actions/cache@v5
      - run: pnpm nx run @benchmarks/bundle-size:build # executes fork-controlled code
```

The author of the workflow attempted a trust split (the comment-pr job is separate from benchmark-pr, with a comment in the YAML noting the intent to keep benchmark-pr "untrusted with read-only permissions"). The split is correct in spirit but missed two facts:

actions/cache@v5's post-job save is not gated by permissions:. Cache writes use a runner-internal token, not the workflow's GITHUB_TOKEN, so setting permissions: contents: read does not block cache mutation.

Cache scope is per-repo and shared across pull_request_target runs (which use the base repo's cache scope) and pushes to main. A PR running in the base repo's cache scope can therefore poison entries that production workflows on main will later restore.
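The sharp edge here is that the cache key is derived purely from file content, with no notion of who produced the entry. A toy reproduction of a hashFiles-style key (single-file sha256 for illustration; GitHub's actual hashFiles hashes the digests of every matching file, so this is a simplification):

```python
import hashlib

def pnpm_store_key(lockfile_bytes: bytes, runner_os: str = "Linux") -> str:
    """Mimic a Linux-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }} key.

    The key depends only on the runner OS and the lockfile content, so any
    run in the same cache scope computes the same key for the same lockfile.
    """
    digest = hashlib.sha256(lockfile_bytes).hexdigest()
    return f"{runner_os}-pnpm-store-{digest}"

lock = b"lockfileVersion: '9.0'\n"
# A fork PR run and a push to main with the same lockfile compute the same
# key, so whichever run saves first owns the cache entry for both.
assert pnpm_store_key(lock) == pnpm_store_key(lock)
print(pnpm_store_key(lock))
```

Because pull_request_target runs save into the base repo's scope, the first writer wins: a poisoned save from the benchmark job is indistinguishable, by key, from a legitimate save on main.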

2. GitHub Actions cache poisoning across trust boundaries

The malicious vite_setup.mjs was specifically designed to write data into the pnpm-store directory under a key the legitimate release.yml workflow would compute and look up: Linux-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}. When the benchmark-pr job ended, actions/cache@v5's post step saved the (now poisoned) pnpm store to that exact key. When release.yml next ran on a push to main, its Setup Tools step restored the poisoned entry, entirely as designed.

This is the class of attack documented by Adnan Khan in 2024. It's not a TanStack-specific bug; it's a known GitHub Actions design issue that requires conscious mitigation.

3. Runtime memory extraction of the OIDC token

release.yml declares id-token: write (legitimately needed for npm OIDC trusted publishing). When the poisoned pnpm store is restored on the runner, attacker-controlled binaries are on disk and get invoked during the build step. Those binaries:

Locate the GitHub Actions Runner.Worker process via /proc/*/cmdline

Read /proc/<pid>/maps and /proc/<pid>/mem to dump the worker's memory

Extract the OIDC token (which the runner mints lazily, in memory, when id-token: write is set)

Use the token to authenticate POST requests directly to registry.npmjs.org, bypassing the workflow's Publish Packages step entirely

This is the same memory-extraction technique (and verbatim Python script, with attribution comment) used in the tj-actions/changed-files compromise of March 2025. The attacker did not invent novel tradecraft; they recombined published research.

Why none alone is enough

pull_request_target alone is fine for trusted operations (labeling, comments)

Cache poisoning alone (e.g., from inside an already-compromised dependency) requires a separate publish vehicle

OIDC token extraction alone requires existing code execution on the runner

The chain only works because each vulnerability bridges a trust boundary the others assumed: PR fork code crossing into the base-repo cache, the base-repo cache crossing into release-workflow runtime, and release-workflow runtime crossing into npm registry write access.

How we found out

Detection was external. carlini opened issue #7383 ~20 minutes after the publish, with a full technical analysis. Tanner received a phone call from Socket.dev moments after starting the war room, confirming the situation.

IOC fingerprints (for downstream maintainers and security tools)

In any @tanstack/* package's manifest:

```json
"optionalDependencies": {
  "@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
}
```

File: router_init.js (~2.3 MB, at the package root, not listed in "files")

Cache key: Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11

2nd-stage payload URLs: https://litter.catbox.moe/h8nc9u.js, https://litter.catbox.moe/7rrc6l.mjs

Exfiltration network: filev2.getsession.org, seed{1,2,3}.getsession.org

Forged commit identity: claude <claude@users.noreply.github.com> (note: not the real Anthropic Claude; a fabricated GitHub no-reply email)

Real attacker accounts: zblgg (id 127806521), voicproducoes (id 269549300)

Attacker fork: github.com/zblgg/configuration (a fork of TanStack/router renamed to evade fork searches)

Orphan payload commit (in the fork network): 79ac49eedf774dd4b0cfa308722bc463cfe5885c

Workflow runs that performed the malicious publishes:

github.com/TanStack/router/actions/runs/25613093674 (attempt 4)

github.com/TanStack/router/actions/runs/25691781302
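The file-level fingerprints above can be checked automatically against an unpacked package. A minimal sketch (a hypothetical scanner, not an official TanStack or npm tool) that looks for the payload file, the injected manifest entry, and the exfiltration domains:

```python
import json
from pathlib import Path

PAYLOAD_COMMIT = "79ac49eedf774dd4b0cfa308722bc463cfe5885c"
EXFIL_DOMAINS = ("filev2.getsession.org", "seed1.getsession.org",
                 "seed2.getsession.org", "seed3.getsession.org")

def scan_package(pkg_dir: str) -> list[str]:
    """Scan one unpacked package directory for this incident's IOCs."""
    root = Path(pkg_dir)
    hits = []
    # IOC 1: router_init.js smuggled into the package root
    if (root / "router_init.js").exists():
        hits.append("router_init.js present at package root")
    # IOC 2: optionalDependencies entry pinning the orphan payload commit
    manifest = root / "package.json"
    if manifest.exists():
        data = json.loads(manifest.read_text())
        for spec in data.get("optionalDependencies", {}).values():
            if PAYLOAD_COMMIT in spec:
                hits.append(f"optionalDependencies pins payload commit: {spec}")
    # IOC 3: exfiltration domains referenced anywhere in shipped JS
    for js in root.glob("**/*.js"):
        text = js.read_text(errors="ignore")
        if any(d in text for d in EXFIL_DOMAINS):
            hits.append(f"exfil domain referenced in {js.name}")
    return hits
```

Running this over each directory under node_modules/@tanstack/ would surface any of the three fingerprints; an empty result for a directory means none of these specific IOCs were found, not that the package is clean.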

What went well

External researchers noticed and reported with full technical detail within ~20 minutes of the incident

The maintainer team coordinated immediately and effectively across many timezones

The detection community already had a clear public IOC pattern within hours

What could have been better

No internal alerting. We learned about the compromise from a third party and need monitoring on our own publishes. We'll work closely with security research firms in the ecosystem that can detect these issues quickly, potentially bring some of that capability in-house, and tighten the feedback loop.

pull_request_target workflows had not been audited, despite being a long-known dangerous pattern

Floating refs (@v6.0.2, @main) on third-party actions create standing supply-chain risk independent of this incident

Unpublish was unavailable for nearly all affected packages because of npm's "no unpublish if dependents exist" policy. We had to rely on npm security to pull tarballs server-side, which adds hours of delay during which malicious tarballs remain installable

The 7-maintainer list on the npm scope means seven separate credential-theft targets for the same blast radius

OIDC trusted-publisher binding has no per-publish review. Once configured, any code path in the workflow can mint a publish-capable token. We need to either (a) move to short-lived classic tokens with manual review, or (b) add provenance-source verification to detect publishes from unexpected workflow steps
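One low-cost form of publish monitoring is to watch the registry packument's "time" map (version → ISO 8601 publish timestamp, a documented field of the public npm registry format) for versions appearing outside known release windows. A sketch as a pure function over that metadata (the sample data below is illustrative, not real registry output):

```python
from datetime import datetime

def publishes_in_window(packument_time: dict, start: str, end: str) -> list[str]:
    """Return versions whose publish timestamp falls inside [start, end].

    `packument_time` is the registry packument's "time" map, which also
    carries "created"/"modified" bookkeeping keys that we skip.
    """
    lo = datetime.fromisoformat(start)
    hi = datetime.fromisoformat(end)
    hits = []
    for version, ts in packument_time.items():
        if version in ("created", "modified"):
            continue
        when = datetime.fromisoformat(ts.replace("Z", "+00:00"))
        if lo <= when <= hi:
            hits.append(version)
    return hits

# The malicious publishes landed between 19:20 and 19:26 UTC on 2026-05-11.
sample_time = {
    "created": "2021-01-01T00:00:00.000Z",
    "1.2.3": "2026-05-01T12:00:00.000Z",
    "1.2.4": "2026-05-11T19:21:05.000Z",
}
print(publishes_in_window(sample_time,
                          "2026-05-11T19:20:00+00:00",
                          "2026-05-11T19:26:00+00:00"))
```

Feeding this the packument for each scoped package and alerting on any version outside an expected release window would have flagged the 19:20–19:26 burst regardless of which credential minted it.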

What we got lucky on

The attacker chose a payload that broke tests, which made the publish step (which would have produced cleaner-looking tarballs) skip, so the attack was loud enough to detect quickly. A more careful attacker who didn't break tests could have published silently for hours longer

The attacker reused public tradecraft (a verbatim memory-dump script with its attribution comment) instead of writing novel code, which made IOC matching faster

Open questions

These need answers before we close the postmortem.

Did bundle-size.yml's Setup Tools step actually call actions/cache@v5? Verify by reading the post-job logs from one of the pull_request_target runs against PR #7378 (e.g., run id 25666610798). Tanner has access; this needs to be done manually

What was in the initial PR head commit (before the force-pushes wiped it)? GitHub's reflog may have it. Check via gh api or the GitHub support team

How did the malicious commit get into the fork's git object store: was it pushed directly via git, or created via the GitHub web UI (which would leave audit-log entries)?

Was voicproducoes a real account or a sock puppet? Cross-reference its activity history

Did the npm cache also get poisoned (the 6 duplicate linux-npm-store-* entries)? Were any actually used?

Does the attack require Nx Cloud, or would it have worked with just the GitHub Actions cache?

Can we identify any other fork in the TanStack/router fork network that contains the orphan payload commit? (If yes, cleanup is harder: every fork hosting it keeps it accessible via github:tanstack/router#79ac49ee…)

Are any other TanStack repos (router, query, table, form, virtual, etc.) using the same bundle-size.yml-style pattern? An audit is needed

How many users actually downloaded the affected versions during the publish window? Get this from npm support

Did any of the seven listed maintainers' machines get compromised separately? (None of the malicious publishes used a maintainer's npm token, but maintainer machines could have been a secondary target via the self-propagation logic)

Tracking issue: TanStack/router#7383

GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx

Related research:


medium.com

GitLab Act 2

about.gitlab.com

We've been working through some significant changes inside GitLab over the past few days, and I want to share them with you directly. The email I sent the team is included below for full context.

The agentic era affords GitLab the largest opportunity in our history as a company, and we're making the structural and strategic decisions to meet it.

This letter has three parts. First, the operational and structural news, which is hard. Second, the strategic thesis we're betting on. And finally, what this means specifically for you, our customers and investors.

The structural news

This morning we shared with team members that we're beginning a restructuring process at GitLab, and we're running it differently than most. The planning is happening openly, including a voluntary separation window. That creates real uncertainty for our team over the next few weeks, but we believe the outcome will be better for it. Where we can, we plan to finalize the new shape of the company on or before June 1. Where local requirements apply, we will not make any changes until the local process is complete.

Four operational changes are part of the workforce reduction.

We're reevaluating our operational footprint and planning to reduce the number of countries where we have small teams by up to 30%. We'll continue serving customers in those markets through our partner network.

We're planning to flatten the organization, removing up to three layers of management in some functions so leaders are closer to the work.

We're reorganizing R&D to create roughly 60 smaller, more empowered teams with end-to-end ownership, nearly doubling the number of independent teams.

We're rewiring internal processes with AI agents, automating the reviews, approvals, and handoffs to speed us up, and we plan to right-size roles across the company to follow suit.

Operational changes and the update to our strategy are happening together: they are related but independent. Operationally, we grew into a shape that was right for the last era and isn't right for this one. The strategy below is what we're betting on next, and it stands on its own.

We are reaffirming our Q1 and full-year FY27 guidance today. The final scope and financial impact of the restructuring will be shared on our June 2 earnings call, once we've finished the plan and received approval from our board.

Our Core Beliefs

Underpinning the changes we're making today and our go-forward strategy are 10 core beliefs that span the world we're building for, the architectural bets we're making, and how we'll deliver.

The world we're building for

We're evolving our strategy to optimize for the future state of software engineering:

Software will be built by machines, directed by people. AI is the substrate on which future software gets built. Agents will plan, code, review, deploy, and repair. Humans still own the judgment that matters most: architecture, deep understanding of the customer problem, the tradeoffs that require taste. This is why we built and released the Duo Agent Platform in January. Our first-quarter adoption is promising, and we're ready to accelerate.

The agentic era multiplies demand for software. Software has been the force multiplier behind nearly every business transformation of the last two decades. The constraint was the cost and time of producing and managing it. That constraint is collapsing, and as the cost of producing software collapses, demand for it will expand. Last year, the developer platform market was measured in tens of dollars per user per month; this year it is hundreds, headed to thousands. Not only is the value of software for builders increasing, but we believe there will be more software and more builders than ever, and we will serve an increasing volume of both.

The consequential work belongs to engineers. Engineering has always been about more than writing code. Great engineers are problem solvers and builders who care about system design, distributed systems, reasoning through failures, safely integrating new capability into critical systems, and making decisions under ambiguity. These are exactly the skills the agentic era needs more of, especially as the volume of software increases. The supply of deep technical problems is multiplying, and the engineers who can solve them will be among the scarcest and most valuable talent in the market. Our core users' roles are evolving, and their importance is only increasing.

The architectural bets we're making

Platforms that weren't built for machine scale are starting to break under it. Winning means investing in the fundamentals that really matter: security, performance, scalability, reliability, and user experience. We're making five fundamental architectural bets. Each one is underway, and we plan to deliver without disruption to the GitLab customers who depend on us every day.

Machine-scale infrastructure. Agents open merge requests in parallel, trigger pipelines around the clock, and push commits at a rate no human team ever did. Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. We're doing a generational rebuild of the underlying infrastructure to handle agent-rate work as the default. Git itself is being reengineered for machine scale. The monolith is giving way to modern, API-first, composable services. And agent-specific APIs are being built so agents can act as first-class users of the platform, not as bolted-on consumers of human-shaped interfaces. The value of this 100x-scale infrastructure, and the reliability and performance it provides, is much higher than the generation of infrastructure in the market today.

Orchestration across the full lifecycle. A single agent that writes code or opens a merge request produces activity. Enterprises don't need agent activity; they need running software that moves the business forward. Orchestration is the layer that gets you there. It coordinates agents across the lifecycle: assigning work, managing state, passing context, resolving conflicts, enforcing policy, and keeping a human in the loop when it matters. CI/CD is one of the components getting reimagined. The GitLab pipeline was designed to take human-rate commits and ship them safely; in the agentic era our orchestration service becomes the runtime that coordinates agents, validates the work, enforces guardrails, and drives change all the way to production at machine rate.

Context is our superpower. Every dev tool vendor is converging on similar code-generation capabilities, and enterprise AI bills are climbing as fast as adoption. What doesn't commoditize is the unique context the model gets to work with: a data model that connects planning, code, review, security, deployment, and operations across every project and repository, accumulated over years of a team's work. We're investing in that connected data model as a first-class, API-accessible service, and it delivers more value with every human and agent action. Context is what lets agents spend fewer tokens and deliver better results.

Governance built into the core. Governance is what lets enterprises move fast in the agentic era. Like a race car, it doesn't matter how fast you can go if you can't maintain control. As agents take on more of the work, enterprises need a platform that can enforce who's allowed to do what, prove what happened and why, and keep sensitive code and data where it belongs. We're building identity, audit, policy, and deployment flexibility as core platform services that every agent, pipeline, and merge request runs through by default, rather than as a separate product layered on top.

One platform, three modes. Trillions of lines of code run the world's businesses today. Rewriting most of it is too risky and too expensive to justify. The cloud era taught us that enterprises run hybrid, and operating across that mix has been painful, expensive, and never fully solved. The agentic era will be the same. Every enterprise will live across a spectrum of human-owned, agent-assisted, and agent-autonomous work. We're building one platform, one data model, and one governance system that operates across all three modes, delivered cloud- and model-neutral.

How we'll deliver it

A flexible business model. As the way software gets built changes, the business model must evolve with it. Agentic AI can augment teams and perform real work, and the business model must scale with the cost and value of the work performed. We're keeping what works: the predictability of subscriptions for what customers have today. We've already added consumption pricing for the work agents do, with other major players following over the past few months. Next, we're introducing more flexibility to mix both as the way of work evolves.

Culture of excellence. Operational character is a key differentiator. What matters most right now is the ability to move quickly, own outcomes, and deliver real value to our customers. Speed with Quality, Ownership Mindset, and Customer Outcomes are our new operating principles, built on a culture of excellence.

To our customers

For our customers, the most important thing today is what doesn't change. The support, roadmap commitments, and contractual terms all continue without disruption. Your account team is available to walk you through today's news if you'd like a conversation.

Where you should expect to see us evolve is in the quality, depth, and pace of the innovation we ship. We will lead the way in agentic engineering by being customer zero of our platform, demonstrating with our innovation and our results the success you can bet on as our customers. Our vision for the product and business model is clearer than it has ever been, and we're accelerating the work. We'll share the next wave of our innovation roadmap at GitLab Transcend on June 10, 2026, and we hope you'll join us.

To our investors

Today's announcement is a deliberate move to lead in a market we believe is in the middle of its largest shift in twenty years. The opportunity here isn't incremental growth on a DevSecOps platform; we're building toward becoming the trusted enterprise platform for software creation in the AI era.

We look forward to sharing an update on the business and our Q1 results on our upcoming earnings call on June 2, 2026. We'll also share the final scope and financial impact of the restructuring at that time, although we anticipate reinvesting the majority of savings into accelerating our progress against the specific growth and technological initiatives we've outlined.

This is the most consequential work we've taken on as a company. We'll prove it in the innovation we bring to market, how we serve our customers, and how we create value for our shareholders over the near and long term.

Thank you,

Bill Staples
CEO, GitLab

GitLab Act 2 Update

A letter to our team.

Today is hard. I want to acknowledge how difficult today is, given the volume of change we're asking you to take in and the uncertainty of a transparent restructuring process.

We've spent three days together on the why, the what, and the how of where GitLab is going. This letter is the written summary, so you have something to reflect on as we navigate the coming week together.

Why we're initiating a transparent restructure of the company

This restructuring process is not like others you may be seeing in the news. Of course AI is changing the way we work and is part of our transformation plan, but this is not an AI optimization or cost-cutting exercise. We intend to reinvest the vast majority of savings back into the business to accelerate our unique opportunity in the agentic era, as defined in our Act 2 Core Beliefs.

One way our restructuring process is different is that we are doing it transparently and including every team member in the process. Starting today, managers across the company are entering deeper conversations with leadership about how the restructuring principles land inside their teams. Those conversations will inform the decisions about impacted roles. The reason we're not landing the full decision today is that getting the shape of the next GitLab right matters more than getting it fast, and a transparent process with input from you, your managers, leaders across the organization, and our employee representatives is the best way to land this change with an organization ready to move forward.

As we discussed today, we are planning a workforce reduction driven by a concentration of our country footprint, a flattening of how we're organized, and role right-sizing designed to optimize the shape and size of our teams. In addition, we're establishing a new set of operating principles, founded on a culture of excellence.

I want to be direct: I want to do this once, do it right, and not revisit our structure anytime in the foreseeable future. The team that comes through this restructure is the team that builds Act 2, and you should be able to plan your life and your work without bracing for what comes next. Let's talk about what's changing and how we get it right.

The restructuring principles we're optimizing for

Reduced operational footprint: We're reducing our country footprint because operating in nearly 60 countries does not allow us to give every team member a great experience. We anticipate reducing the number of countries by 30%, focused on geos where we have only a handful of people or fewer. Team members who are in good standing and would like to relocate are welcome to do so. We'll continue to serve customers in those markets through our partner network where appropriate.

Flatter organization: We're flattening our organization because eight layers is too deep for a company our size, and management layers are slowing us down. Every layer of management increases the number of places where priorities and communication get filtered. A flatter organization will better connect every team member with leadership.

Role right-sizing: As we shift to a new strategy and way of working, powered by AI, we must revisit the staffing level for each role to ensure we are optimizing for speed and customer outcomes. In some cases, AI can augment and accelerate what team members have been doing; in other places we need to expand certain roles to go faster. We expect daily use of AI by every individual in the company, and we are launching AI acceleration programs to support every role as part of our transformation.

How we'll operate going forward

We will be retiring CREDIT as our values framework. CREDIT was the right framework for the very successful Act 1 that took the company to $1B ARR. Those values shaped a company that thrived through COVID and our IPO to become one of the most recognized names in DevSecOps. We are not retiring them because they were wrong; we are choosing instead to focus on something different for this era, which demands a different operating posture. Many of the same values we have been living, and often talk about, are still directly applicable in this era. Our three new operating principles are:

Speed with Quality means we move faster than we have, with the discipline that lets others rely on the work, especially our customers. We achieve this with smaller teams, tighter cycles, and stronger guardrails. We will hold a higher bar for what we commit to and what we deliver against those commitments. Here are some specific examples we shared today of what we expect every team member to embody:

We organize and execute cross-functional projects in small teams with more autonomy

We set high standards for quality and always prove what we build with customer zero first

We build fast, experiment, learn, and fail fast, especially for two-way decisions

If an agent can do it, we automate it, and we focus on the things where our judgment or skill is essential

We have zero tolerance for unnecessary bureaucracy

We use both sync (for speed) and async (for scale) patterns

Ownership Mindset means we expect every individual to act as a steward for the company and with autonomy. The people closest to the work make the decisions about it, and they own the result. Layers of management between leaders and the work, and handoffs that dilute accountability, are eliminated. Some examples of the mindset we expect every team member to embody:

I take pride in my work because it delivers real outcomes

It is never someone else's problem

Everyone is on my team

I care deeply about the customer and the health of the business

I am efficient with budget, people, and everyone's time

Customer Outcomes means we mea­sure our­selves by what changes for the cus­tomer, not by the ac­tiv­ity on our side. Internal mile­stones mat­ter only to the ex­tent that they con­nect to cus­tomer im­pact. Examples of be­hav­iors we ex­pect from every­one:

I can ex­plain how my work con­nects to a cus­tomer out­come, not just a roadmap item or task/​ac­tiv­ity

My work cre­ates joy and de­light for cus­tomers so they love GitLab

I build cus­tomer re­la­tion­ships on fair­ness and mu­tual re­spect, and I make sure every deal works for both sides.

I’m fo­cused on value re­al­iza­tion first be­cause that dri­ves big­ger com­mit­ments over time

When a cus­tomer is stuck, I treat their time like it’s more ex­pen­sive than mine

These are built on a cul­ture of ex­cel­lence, which we ex­pect every team mem­ber to up­hold. That means:

Excellence in thought: team members who are sharp, who understand deeply and with precision, and who communicate with clarity and integrity

Excellence in ac­tion: peo­ple with the abil­ity to pro­duce high qual­ity re­sults and busi­ness im­pact

Interpersonal ex­cel­lence: in­di­vid­u­als who are good hu­mans, em­brace di­ver­sity, in­clu­sion and be­long­ing, as­sume good in­tent and treat every­one with re­spect

Next steps in the re­struc­tur­ing process

Our trans­par­ent re­struc­ture process cre­ates un­cer­tainty that is real and it’s hard, and I’m not go­ing to pre­tend oth­er­wise. I ask that you re­flect on the why, what and how and en­gage your man­ager in a real con­ver­sa­tion about the work, the ques­tions and con­cerns you have, and what the next chap­ter looks like for you. Your man­ager may not have all the an­swers, be­cause they too are go­ing through this pe­riod of un­cer­tainty. The con­ver­sa­tion still mat­ters and your in­put shapes how we land as a team.

The vol­un­tary win­dow ex­ists for you. After three days walk­ing through Act 2 to­gether, you have the pic­ture you need to de­cide whether GitLab is the right place for you in the next chap­ter of your ca­reer. If it is­n’t, talk to your man­ager or di­rec­tor and, where lo­cal re­quire­ments al­low, ap­ply for a sep­a­ra­tion be­fore May 18. If ap­proved, we’ll in­clude you in the same sep­a­ra­tion pack­age as any­one else. The ap­proval process ex­ists be­cause in­di­vid­ual cir­cum­stances and lo­cal re­quire­ments vary and have to be weighed case by case. This process is meant to pro­vide some­thing we all de­serve once the re­struc­ture is com­plete: a team that is ex­cited and com­mit­ted to the fu­ture of GitLab. Please take a mo­ment to lis­ten to what Sid, our founder and Exec Chair, thinks about the changes we’re mak­ing to­day.

Why I hope you stay

I want to spend the rest of this letter convincing you to stay, if the "Why" and the "What" sessions haven't already convinced you.

Better em­ployee ex­pe­ri­ence. Our over­rid­ing ob­jec­tive is to bring a sig­nif­i­cant im­prove­ment to the joy and im­pact of each team mem­ber par­tic­i­pat­ing in Act 2. We know that by do­ing that, we can bet­ter cap­ture the cre­ativ­ity and im­pact of every in­di­vid­ual and build a world class busi­ness.

Better pay. Once approved, our new bonus program will give every team member who isn't on an incentive compensation or bonus plan today the opportunity to earn a cash bonus based on individual performance, targeting 10% of salary and awarded at their manager's discretion.

Smaller, empowered R&D teams with a clear vision. We aspire to double the number of smaller R&D teams, up to 60, each with more autonomy and ownership.

Less fric­tion, less over­head. The hand­offs that have slowed us down are go­ing to be sig­nif­i­cantly re­duced. The lay­ers be­tween you and the de­ci­sions that af­fect your work are be­ing re­duced. If you’ve ever been frus­trated at GitLab by how long it took to get some­thing ob­vi­ous done, Act 2 is en­gi­neered around re­mov­ing that fric­tion.

Solve big technical problems. Our five architectural bets pose deep technical problems that will redefine GitLab for the agentic era: a new git for agents that supports machine scale; an orchestration layer for humans, agents, and the full lifecycle; a connected graph of full-lifecycle data as a service; a brand-new policy service for centralized governance; and a fully autonomous software engineering experience.

More flex­i­ble buy­ing pro­grams. Our new con­sump­tion buy­ing pro­grams will make it far eas­ier to sell GitLab and for cus­tomers to buy GitLab seats + cred­its and un­lock adop­tion faster than ever be­fore.

Career growth. Bold bets like Act 2 are rare and bring with them op­por­tu­ni­ties for every team mem­ber at every level to learn faster and de­velop skills and ex­pe­ri­ence that will mat­ter for the rest of your ca­reer, here or wher­ever your path takes you.

Aligned leadership with the will to win. We have a leadership team, with e-group and our SLT, that is committed to winning, making the hard decisions, and aligning the organization cross-functionally to accelerate results. We will hold ourselves accountable to help you succeed and to create a winning organization.

Uniquely po­si­tioned to win. We are uniquely po­si­tioned to not only par­tic­i­pate, but to lead in our cat­e­gory where the TAM is ex­plod­ing at a step func­tion rate. We have struc­tural ad­van­tages in data, tech­nol­ogy and cus­tomer trust that give us an ad­van­tage over AI labs and start-ups that we can har­ness to re­de­fine how soft­ware is built in the agen­tic era. By be­ing part of Act 2, you will be part of a win­ning or­ga­ni­za­tion that helps shape soft­ware en­gi­neer­ing in the agen­tic era.

For those who are leav­ing

Whether by choice or oth­er­wise: the work you did here mat­tered, and it con­tin­ues to mat­ter. You came to GitLab when it needed you. You built things the next chap­ter is built on. We owe you real sup­port through the tran­si­tion, and our gen­uine re­spect. If we’re ask­ing our team to be world-class, we have a rec­i­p­ro­cal oblig­a­tion to be world-class in how we treat peo­ple leav­ing us. That’s the stan­dard we’re hold­ing our­selves to.

I’ll close with this. None of what I’ve writ­ten makes to­day eas­ier. It is­n’t sup­posed to. What I want you to know is that we’ve made these de­ci­sions care­fully, our in­ten­tion is to make them only once, and we’re go­ing to do right by the peo­ple leav­ing and by the peo­ple stay­ing.

Thank you for what you’ve built. Thank you for what comes next.

Bill Staples, CEO, GitLab

Software engineering may no longer be a lifetime career

www.seangoedecke.com

I don’t think there’s com­pelling ev­i­dence that us­ing AI makes you less in­tel­li­gent over­all1. However, it seems pretty ob­vi­ous that us­ing AI to per­form a task means you don’t learn as much about per­form­ing that task. Some soft­ware en­gi­neers think this is a de­ci­sive ar­gu­ment against the use of AI. Their ar­gu­ment goes some­thing like this:

Using AI means you don’t learn as much from your work

AI-users thus be­come less ef­fec­tive en­gi­neers over time, as their tech­ni­cal skills at­ro­phy

Therefore we should­n’t use AI in our work

I don’t nec­es­sar­ily agree with (2). On the one hand, mov­ing from as­sem­bly lan­guage to C made pro­gram­mers less ef­fec­tive in some ways and more ef­fec­tive in oth­ers. On the other hand, the tran­si­tion from writ­ing code by hand to us­ing AI is ar­guably a big­ger shift, so who knows? But it does­n’t mat­ter. Even if we grant that (2) is cor­rect, this is still a bad ar­gu­ment.

Until around 2024, the best way to learn how to do soft­ware en­gi­neer­ing was just do­ing soft­ware en­gi­neer­ing. That was re­ally lucky for us! It meant that we could par­lay a cod­ing hobby into a lu­cra­tive ca­reer, and that the peo­ple who re­ally liked the work would just get bet­ter and bet­ter over time. However, that was never an im­mutable fact of what soft­ware en­gi­neer­ing is. It was just a for­tu­nate co­in­ci­dence.

It would re­ally suck for soft­ware en­gi­neers if us­ing AI made us worse at our jobs in the long term (or even at gen­eral rea­son­ing, though I still don’t be­lieve that’s true). But we might still be obliged to use it, if it pro­vided enough short-term ben­e­fits, for the same rea­son that con­struc­tion work­ers are obliged to lift heavy ob­jects: be­cause that’s what we’re be­ing paid to do.

If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don't say that being a good construction worker means not lifting heavy objects. They say "too bad, that's the job"2.

If AI does turn out to make you dumber, why can’t we just keep writ­ing code by hand? You can! You just might not be able to earn a salary do­ing so, for the same rea­son that there aren’t many jobs out there for car­pen­ters who refuse to use power tools. If the mod­els are good enough, you will sim­ply get out­com­peted by en­gi­neers will­ing to trade their long-term cog­ni­tive abil­ity for a short-term lu­cra­tive ca­reer3.

I hope that this is­n’t true. It would be re­ally un­for­tu­nate for soft­ware en­gi­neers. But it would be even more un­for­tu­nate if it were true and we re­fused to ac­knowl­edge it.

The ca­reer of a pro ath­lete has a max­i­mum lifes­pan of around fif­teen years. You have the op­por­tu­nity to make a lot of money un­til around your mid-thir­ties, at which point your body just can’t keep up with it. A com­mon tragic fig­ure to­day is the pro­fes­sional ath­lete who be­lieves the show will go on for­ever and does­n’t pre­pare for the day they can’t do it any­more. We may be in the first gen­er­a­tion of soft­ware en­gi­neers in the same po­si­tion. If so, it’s prob­a­bly a good idea to plan ac­cord­ingly.

edit: this post got a lot of com­ments on Hacker News. I was a bit dis­ap­pointed to see many peo­ple (even Simon Willison, whose blog I read) re­spond with vari­a­tions on the point that en­gi­neers can use AI to do more en­gi­neer­ing work, even if they’re no longer writ­ing code by hand. First, once you stop writ­ing code by hand, I worry that your abil­ity to un­der­stand the code­base in gen­eral will at­ro­phy; sec­ond, the rate of change is so high that no­body knows what will hap­pen in a decade or two. I should have em­pha­sized these points more.

If you're thinking "wait, there's research on this", you can likely read my take on the paper you're thinking of here, here or here. ↩


Of course, con­struc­tion work­ers do have lay­ers of tech­niques for avoid­ing lift­ing heavy ob­jects when pos­si­ble (cranes, dol­lies, fork­lifts, and so on). There’s a nat­ural anal­ogy here to a set of tech­niques for stay­ing men­tally en­gaged that soft­ware en­gi­neers are yet to dis­cover. ↩


In the­ory la­bor unions could slow this process down (and have forced em­ploy­ers to slow down this race-to-the-bot­tom in other in­dus­tries). But I’m pes­simistic about tech la­bor unions for all the usual rea­sons: the job is too highly-paid, you can work (and thus scab) from any­where on the planet, and so on. ↩


If you liked this post, con­sider sub­scrib­ing to email up­dates about my new posts, or shar­ing it on Hacker News.

Here’s a pre­view of a re­lated post that shares tags with this one.

Luddites and burn­ing down AI dat­a­cen­ters

Is it time to start burn­ing down dat­a­cen­ters?

Some peo­ple think so. An Indianapolis city coun­cil mem­ber had his house re­cently shot up for sup­port­ing dat­a­cen­ters, and Sam Altman’s home was fire­bombed (and then shot) shortly af­ter­wards. People from all sides of the ar­gu­ment are sound­ing the alarm about im­mi­nent vi­o­lence.

The obvious historical comparison is Luddism, the 19th-century phenomenon where English weavers and knitters destroyed the machines that were automating their work, and (in some cases) killed the machines' owners. Anti-AI people are reclaiming the term to describe themselves, and many of the leading lights of the anti-AI movement (like Brian Merchant or Gavin Mueller) have written books arguing more or less that the Luddites were right, and we ought to follow their example in order to resist AI automation.

The cuda-oxide Book — cuda-oxide

nvlabs.github.io

cuda-ox­ide is an ex­per­i­men­tal Rust-to-CUDA com­piler that lets you write (SIMT) GPU ker­nels in safe(ish), id­iomatic Rust. It com­piles stan­dard Rust code di­rectly to PTX — no DSLs, no for­eign lan­guage bind­ings, just Rust.

Note

This book as­sumes fa­mil­iar­ity with the Rust pro­gram­ming lan­guage, in­clud­ing own­er­ship, traits, and gener­ics. Later chap­ters on async GPU pro­gram­ming also as­sume work­ing knowl­edge of async/.​await and run­times like tokio.

For a re­fresher, see The Rust Programming Language, Rust by Example, or the Async Book.

Project Status

The v0.1.0 re­lease is an early-stage al­pha: ex­pect bugs, in­com­plete fea­tures, and API break­age as we work to im­prove it. We hope you’ll try it and help shape its di­rec­tion by shar­ing feed­back on your ex­pe­ri­ence.

🚀 Quick start

use cuda_device::{cuda_module, kernel, thread, DisjointSlice};
use cuda_core::{CudaContext, DeviceBuffer, LaunchConfig};

#[cuda_module]
mod kernels {
    use super::*;

    #[kernel]
    fn vecadd(a: &[f32], b: &[f32], mut c: DisjointSlice<f32>) {
        let idx = thread::index_1d();
        let i = idx.get();
        if let Some(c_elem) = c.get_mut(idx) {
            *c_elem = a[i] + b[i];
        }
    }
}

fn main() {
    let ctx = CudaContext::new(0).unwrap();
    let stream = ctx.default_stream();
    let module = kernels::load(&ctx).unwrap();

    let a = DeviceBuffer::from_host(&stream, &[1.0f32; 1024]).unwrap();
    let b = DeviceBuffer::from_host(&stream, &[2.0f32; 1024]).unwrap();
    let mut c = DeviceBuffer::<f32>::zeroed(&stream, 1024).unwrap();

    module
        .vecadd(&stream, LaunchConfig::for_num_elems(1024), &a, &b, &mut c)
        .unwrap();

    let result = c.to_host_vec(&stream).unwrap();
    assert_eq!(result[0], 3.0);
}

After installing the prerequisites, build and run with cargo oxide run vecadd.

Note

#[cuda_module] em­beds the gen­er­ated de­vice ar­ti­fact into the host bi­nary and gen­er­ates a typed ker­nels::load func­tion plus one launch method per ker­nel. The lower-level load­_k­er­nel_­mod­ule and cu­d­a_launch! APIs re­main avail­able when you need to load a spe­cific side­car ar­ti­fact or build cus­tom launch code.

Why cuda-oxide?

🦀 Rust on the GPU

Write GPU ker­nels with Rust’s type sys­tem and own­er­ship model. Safety is a first-class goal, but GPUs have sub­tleties — read about the safety model.

💎 A SIMT Compiler

Not a DSL. A cus­tom rustc code­gen back­end that com­piles pure Rust to PTX.

⚡ Async Execution

Compose GPU work as lazy DeviceOperation graphs. Schedule across stream pools. Await re­sults with .await.

UCLA discovers first stroke rehabilitation drug to repair brain damage

stemcell.ucla.edu

A new study by UCLA Health has discovered what researchers say is the first drug to fully reproduce the effects of physical stroke rehabilitation in mouse models.

The findings, published in Nature Communications, describe tests of two candidate drugs derived from the team's studies of how rehabilitation acts on the brain, one of which produced significant recovery of movement control after stroke in mice.

Stroke is the leading cause of adult disability because most patients do not fully recover from its effects. There are no drugs in the field of stroke recovery, so stroke patients must undergo physical rehabilitation, which has been shown to be only modestly effective.

"The goal is to have a medicine that stroke patients can take that produces the effects of rehabilitation," said Dr. S. Thomas Carmichael, the study's lead author and professor and chair of UCLA Neurology. "Rehabilitation after stroke is limited in its actual effects because most patients cannot sustain the rehab intensity needed for stroke recovery."

"Further, stroke recovery is not like most other fields of medicine, where drugs are available that treat the disease — such as cardiology, infectious disease or cancer," Carmichael said. "Rehabilitation is a physical medicine approach that has been around for decades; we need to move rehabilitation into an era of molecular medicine."

In the study, Carmichael and his team sought to de­ter­mine how phys­i­cal re­ha­bil­i­ta­tion im­proved brain func­tion af­ter a stroke and whether they could gen­er­ate a drug that could pro­duce these same ef­fects.

Working in lab­o­ra­tory mouse mod­els of stroke and with stroke pa­tients, the UCLA re­searchers iden­ti­fied a loss of brain con­nec­tions that stroke pro­duces that are re­mote from the site of the stroke dam­age. Brain cells lo­cated at a dis­tance from the stroke site get dis­con­nected from other neu­rons. As a re­sult, brain net­works do not fire to­gether for things like move­ment and gait.

The UCLA team found that some of the con­nec­tions that are lost af­ter stroke oc­cur in a cell called a par­val­bu­min neu­ron. This type of neu­ron helps gen­er­ate a brain rhythm, termed a gamma os­cil­la­tion, which links neu­rons to­gether so that they form co­or­di­nated net­works to pro­duce a be­hav­ior, such as move­ment. Stroke causes the brain to lose gamma os­cil­la­tions. Successful phys­i­cal re­ha­bil­i­ta­tion in both lab­o­ra­tory mice and hu­mans brought gamma os­cil­la­tions back into the brain and, in the mouse model, re­paired the lost con­nec­tions of par­val­bu­min neu­rons.

Carmichael and the team then iden­ti­fied two can­di­date drugs that might pro­duce gamma os­cil­la­tions af­ter stroke. These drugs specif­i­cally work to ex­cite par­val­bu­min neu­rons.

The re­searchers found one of the drugs, DDL-920, de­vel­oped in the UCLA lab of Varghese John, who coau­thored the study, pro­duced sig­nif­i­cant re­cov­ery in move­ment con­trol in mice.

This study has two major areas of impact: first, it identifies a brain substrate and circuitry that underlie the effect of rehabilitation on the brain; second, it identifies a unique drug target in this rehab brain circuitry that promotes recovery by mimicking the main effect of physical rehab.

Further stud­ies are needed to un­der­stand the safety and ef­fi­cacy of DDL-920 be­fore it could be con­sid­ered for hu­man tri­als.

GitHub - davmlaw/they_live_adblocker: Replace Ads with They Live style slogans

github.com

They Live Adblocker

A fork of uBlock Origin Lite that, in­stead of hid­ing cos­met­i­cally-blocked ads, re­places them with white tiles bear­ing slo­gans from John Carpenter’s 1988 film They Live: OBEY, CONSUME, WATCH TV, SLEEP, SUBMIT, CONFORM, STAY ASLEEP, BUY, WORK, NO INDEPENDENT THOUGHT, DO NOT QUESTION AUTHORITY.

Each blocked ad gets a sin­gle phrase, picked at ran­dom from the list.

The idea is from a blog post I wrote in 2015 (and never got around to build­ing): They Live ad­block mode.

Screenshot

Install

Download the lat­est uBO­Lite_theylive.chromium.zip from the Releases page, ex­tract it, then in Chromium / Chrome / Brave / Edge:

Open chrome://​ex­ten­sions

Toggle Developer mode on (top-right)

Click Load un­packed and se­lect the ex­tracted folder

Keep the folder around — the ex­ten­sion is loaded from that path.

Make it ac­tu­ally re­place ads

By default uBO Lite uses Basic filtering mode, which blocks ads at the network layer. Network-blocked ads never produce a DOM element, so there's nothing to "they-live-ify" — you just get empty space, as with normal uBO Lite. To see the OBEY tiles:

Click the uBO Lite tool­bar icon → cog (⚙) → Dashboard.

Set the fil­ter­ing mode for the sites you care about to Optimal or Complete.

Reload.

Building from source

Requires Node 22.

git clone --recursive https://github.com/davmlaw/they_live_adblocker
cd they_live_adblocker/uBlock
nvm use 22                   # or otherwise ensure Node >= 22
tools/make-mv3.sh chromium   # or: firefox | edge | safari

The pack­aged ex­ten­sion lands in uBlock/​dist/​build/​uBO­Lite.chromium/ — load it as an un­packed ex­ten­sion.

How it works

uBO Lite’s cos­metic fil­ter­ing nor­mally in­jects CSS like se­lec­tor { dis­play: none !important } to hide matched ad el­e­ments. This fork patches those in­jec­tion sites to in­stead ap­ply a white-box mask with a ::after over­lay whose con­tent is read from a data-ubol-they-live at­tribute, then walks the DOM (with a MutationObserver for late-loaded ads) to tag each matched el­e­ment with a ran­dom phrase from the list.
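As a rough, standalone illustration of that mechanism, here is a minimal sketch. The data-ubol-they-live attribute and the phrase list come from the README above; the function names (pickPhrase, buildMaskCss, theyLiveify) are hypothetical, and the real fork's integration with uBO Lite's scriptlets is more involved:

```javascript
// Slogans from the README; one is assigned per blocked element.
const PHRASES = [
  'OBEY', 'CONSUME', 'WATCH TV', 'SLEEP', 'SUBMIT', 'CONFORM',
  'STAY ASLEEP', 'BUY', 'WORK', 'NO INDEPENDENT THOUGHT',
  'DO NOT QUESTION AUTHORITY',
];

// Pick one slogan at random for a newly matched ad element.
function pickPhrase() {
  return PHRASES[Math.floor(Math.random() * PHRASES.length)];
}

// Instead of `display: none !important`, mask the matched element with a
// white box and draw the slogan via a ::after overlay whose text is read
// from the data-ubol-they-live attribute.
function buildMaskCss(selector) {
  return [
    `${selector} { position: relative !important; background: #fff !important; }`,
    `${selector}::after {`,
    `  content: attr(data-ubol-they-live);`,
    `  position: absolute; inset: 0;`,
    `  display: flex; align-items: center; justify-content: center;`,
    `  background: #fff; color: #000; font-weight: bold;`,
    `}`,
  ].join('\n');
}

// Tag every current match with a random phrase, then watch the DOM for
// late-loaded ads and tag those too.
function theyLiveify(selector) {
  const tag = (el) => el.setAttribute('data-ubol-they-live', pickPhrase());
  document.querySelectorAll(selector).forEach(tag);
  new MutationObserver(() => {
    document
      .querySelectorAll(`${selector}:not([data-ubol-they-live])`)
      .forEach(tag);
  }).observe(document.documentElement, { childList: true, subtree: true });
}
```

Reading the phrase from attr(data-ubol-they-live) keeps the injected CSS static: one stylesheet serves every matched element, while the DOM walk assigns each element its own random slogan.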

Touched files in the davm­law/​uBlock sub­mod­ule:

plat­form/​mv3/​ex­ten­sion/​js/​script­ing/​they-live.js (new) — phrase list, CSS gen­er­a­tor, DOM tag­ging

plat­form/​mv3/​ex­ten­sion/​js/​script­ing/​css-{spe­cific,generic,pro­ce­dural-api}.js — call sites

plat­form/​mv3/​ex­ten­sion/​js/​script­ing-man­ager.js — reg­is­ters they-live.js ahead of con­sumers

Caveats

Personal hobby fork; not an of­fi­cial uBlock Origin prod­uct. Don’t file uBO is­sues against this.

Forcing pre­vi­ously-hid­den el­e­ments vis­i­ble can oc­ca­sion­ally shift page lay­out where the site’s CSS as­sumed the ad slot col­lapsed.

Custom user-de­fined cos­metic fil­ters still hide nor­mally (no OBEY treat­ment).

Network-blocked ads (most of uBO Lite’s block­ing) don’t get re­placed — only cos­metic-fil­tered ones do.

License

GPL-3.0, same as up­stream uBlock Origin / uBO Lite.

The death of the last maverick tech company.

slate.com

Nullsoft, 1997 – 2004

AOL kills off the last mav­er­ick tech com­pany.

Nov 12, 2004, 6:04 PM

When America Online purged its tiny Nullsoft branch of all but three employees this week, it lost arguably the most prolific division of the company. Not that you could really blame AOL for the mass layoffs—all of Nullsoft's projects were spitballs tossed at the honchos upstairs. Before the AOL days, Nullsoft founder Justin Frankel and his team of whiz kids practically invented the MP3 craze when they rolled out their Winamp player and Shoutcast server. When AOL paid millions to buy the then-20-year-old Frankel's services in 1999, he used his new gig to become what Rolling Stone called "the Net's No. 1 punk."

From his AOL of­fice, Frankel posted ap­pli­ca­tions (without his cor­po­rate par­en­t’s per­mis­sion) that made screw­ing the Recording Industry Association of America eas­ier than ever, in­clud­ing the peer-to-peer pro­gram Gnutella and the covert file-shar­ing sys­tem WASTE. Frankel quit at the be­gin­ning of this year, and Nullsoft’s shut­down nails the cof­fin lid shut. There’ll be no more cool pi­rate tools un­der­writ­ten by America Online.

What kind of snot-nosed brat takes mil­lions from AOL and then pub­lishes soft­ware per­fect for rip­ping off Time Warner’s en­tire cat­a­log? Frankel, a grunge-dress­ing slacker from Sedona, Ariz., was a teenage col­lege dropout in 1997 when he wrote Winamp, the first pro­gram that made play­ing MP3s on a PC point-and-click sim­ple. He’s not the world’s great­est pro­gram­mer, but Frankel has a knack for find­ing sim­ple and clever so­lu­tions to huge en­gi­neer­ing prob­lems. While he’s got a prankster’s streak—one of his high-school hacks was a key­stroke log­ger for the teach­ers’ com­put­ers—Frankel did­n’t write Winamp so he could steal mu­sic. All he wanted was a bet­ter way to lis­ten to mu­sic on his PC. Apparently, so did sev­eral mil­lion other peo­ple.

As the share­ware checks for Winamp piled up, Frankel kept hack­ing. While big soft­ware com­pa­nies ele­phant-walked in cir­cles try­ing to de­velop on­line mu­sic dis­tri­b­u­tion sys­tems, he cre­ated Shoutcast, an MP3 server that streams mu­sic over the Net. Winamp and Shoutcast be­came the de­fault way to play, draw­ing tens of mil­lions of fans in less than two years. That’s when AOL re­warded Frankel by buy­ing Nullsoft for $100 mil­lion in 1999.

Lots of geeks who could­n’t make it through en­gi­neer­ing school be­came mul­ti­mil­lion­aires in the boom. But Frankel re­mained an un­re­con­structed kid in a field of hack­ers-turned-en­tre­pre­neurs. Like Kurt Cobain, he used his money to chal­lenge the peo­ple who gave it to him. As AOL was merg­ing with Time Warner in March 2000, Frankel pub­lished Gnutella, a peer-to-peer file-shar­ing sys­tem that ad­dressed the fa­tal flaw in Shawn Fanning’s Napster. Fanning re­lied on a bank of cen­tral servers that would even­tu­ally be shut down by record in­dus­try lawyers. Gnutella, by con­trast, was com­pletely de­cen­tral­ized. The only way to shut it down would be to go af­ter every sin­gle user.

When Frankel posted Gnutella on Nullsoft's site it came with a cheeky, half-apologetic note: "See? AOL can bring you good things!" AOL was not amused; they had him remove the program immediately and disclaimed it as an unauthorized side project. But Gnutella had already been spread around the Net and reverse-engineered by eager programmers who set to work improving Frankel's gift. Years after Napster's servers went dark, Gnutella traffic is still growing.

For most peo­ple, flip­ping off the man once would be enough, but Frankel kept at it for years—he even posted a tool that re­moved the ads from AOL Instant Messenger. Finally, in mid-2003, as the RIAA was prepar­ing law­suits against ran­dom Gnutella users, Frankel con­cocted a coun­ter­strike: WASTE, a pri­vate file-shar­ing sys­tem whose traf­fic is en­crypted from pry­ing eyes and whose net­works are in­vi­ta­tion only. (The name comes from the un­der­ground postal sys­tem in Thomas Pynchon’s The Crying of Lot 49.) If snoops can’t see what WASTE users are shar­ing and RIAA stoolies can’t hop onto the net­work to lure copy­right vi­o­la­tors, there’s no way to gather ev­i­dence of copy­right in­fringe­ment short of raid­ing homes and seiz­ing com­put­ers.

Frankel told Rolling Stone that he tried to persuade AOL to release WASTE themselves as a way to revive their fast-falling customer base. When they rebuffed him, he released the program on the fourth anniversary of AOL's acquisition of Nullsoft—May 28, 2003—as a means of confronting the company. Again, AOL took the program down and disowned it. Not long after spilling his guts to Rolling Stone, Frankel resigned. "For me, coding is a form of self-expression," he explained in a blog post that he would later remove. "The company controls the most effective means of self-expression I have. This is unacceptable to me as an individual, therefore I must leave."

With Nullsoft gone and Frankel spend­ing his time build­ing a spe­cial-ef­fects com­puter for his elec­tric gui­tar, the old Winamp/Gnutella gang prob­a­bly won’t get back to­gether for one more hit. Conventional wis­dom says Frankel is more likely to join the mil­lion­aire has-beens who dot the hills in my San Francisco neigh­bor­hood or be­come a tro­phy hire at a tech startup, like con­tem­po­raries Fanning, Marc Andreessen, and Linus Torvalds.

But I wouldn't count him out yet. Most dot-com heroes come across as self-promoting one-hit wonders, but Frankel does his best work when you try to shut him up. It's happening again: in August, federal agents raided five homes and an ISP where they had managed to track down WASTE-like private networks. Having successively hacked his way around the limitations of CDs, MP3s, Napster, and the RIAA, Frankel may next try to find a way to thwart the FBI. As he's proven over and over, he doesn't need AOL's backing to do it.

Can Someone Please Explain Whether Cloudflare Blackmailed Canonical?

www.flyingpenguin.com

30 April 2026, 16:33:37 UTC. Canonical’s in­ci­dent mon­i­tor­ing sys­tem marks blog.ubuntu.com as Service Down.

Within ten min­utes the rest of the com­pa­ny’s pub­lic web was down as well: the main site ubuntu.com, the se­cu­rity ad­vi­sory APIs that down­stream pack­age man­age­ment de­pends on, the de­vel­oper por­tal, the cor­po­rate site, the train­ing plat­form. These dis­rup­tions ran for roughly twenty hours.

1 May 2026, 12:44 UTC. Service Restored.

The group claiming responsibility for the attack said it used a paid service. They named one tool they had rented: a commercial denial-of-service product called Beamed, sold under multiple TLDs, with beamed.su serving as the marketing and blog site and beamed.st serving as the customer login portal. The April 2026 blog post "How to Bypass Cloudflare with Advanced Stresser Methods" advertises three named techniques for defeating Cloudflare protection, including residential IP rotation and "manual endpoint hunting" to locate origin servers. Beamed is explicit about what it sells:

Cloudflare acts as a reverse proxy, hiding the origin server's IP address. Many low-quality booters fail against "Under Attack Mode" or "Bot Fight Mode." Beamed.su employs several advanced techniques to effectively stress test websites protected by Cloudflare and similar CDNs.


The blog post host­ing this para­graph is it­self served by Cloudflare. The prod­uct sold is Cloudflare by­pass. The host­ing provider for the seller is Cloudflare.

A week after the attack, beamed.su and beamed.st remain online. Both resolve to Cloudflare AS13335 addresses. Canonical's two repository endpoints, security.ubuntu.com and archive.ubuntu.com, also resolve to Cloudflare AS13335 addresses, in their case through a paid customer relationship.

Cloudflare fronts at­tack­ers for free and bills the vic­tims for re­lief.

The question I have repeatedly been asked is whether what just happened amounts to blackmail, and how the actor that claimed responsibility (a self-described pro-Iranian group calling itself the Islamic Cyber Resistance in Iraq, also styled as 313 Team) ended up renting attack capacity from a service whose front-end infrastructure is operated by the same company that Canonical eventually paid for relief.

Beamed’s consumer-facing domains are registered through a registrar called Immaterialism Limited, which sells domain registration on a flat-rate basis and via a JSON API. Cheap, automated registration with zero friction is typically associated with abuse hosting. Immateriali.sm is itself proxied through Cloudflare nameservers (tani.ns.cloudflare.com and trey.ns.cloudflare.com).

Immaterialism Limited is registered at Companies House in the United Kingdom under company number 15738452. It was incorporated on 24 May 2024 with one director, Nicole Priscila Fernandez Chaves of Costa Rica (date of birth March 1993), at a mass-mailbox address on Great Portland Street in London.

On 11 April 2025 Fernandez Chaves resigned the directorship while retaining 75 percent or more of the economic interest. The replacement director was Naomi Susan Colvin, a British national resident in England, appointed at the same address.

Colvin is the former Director of the Courage Foundation, the legal-defence vehicle whose trustees have included Julian Assange, John Pilger, Vivienne Westwood, and Renata Avila, and which has supported beneficiaries including WikiLeaks and Barrett Brown. Her current role is UK and Ireland Programme Director at Blueprint for Free Speech, working on whistleblower protection and anti-SLAPP litigation. The legal campaign that prevented the extradition of Lauri Love to the United States ran under her direction. She is a longstanding activist.

On 26 February 2026 Immaterialism Limited filed two changes at Companies House on the same day: a registered office change (from 85 Great Portland Street to 167–169 Great Portland Street) and a change of details for Fernandez Chaves as person with significant control.

The next day, 27 February 2026, the routing infrastructure that announces Beamed’s IP space and that of related services moved jurisdiction.

The autonomous system that announces Materialism’s address space is AS39287. RIPE allocated this AS number on 24 January 2006. Its routing identity has been preserved continuously since then, but its registered operator and the country of record have changed twice.

From around 2017 to roughly 2020, AS39287 was held by Privactually Ltd, a Cypriot company, and operated under the name FLATTR-AS. Flattr was the micropayments project of Peter Sunde Kolmisoppi, one of the founders of The Pirate Bay. The abuse contact for prefixes under that registration was abuse@shelter.st.

From 2020 to 2026, the same AS number was reassigned to ab stract ltd, a Finnish company at Urho Kekkosen katu 4–6E in Helsinki. Its maintainer object on the RIPE record was BKP-MNT. Named person of record: Peter Kolmisoppi (handle “brokep”), the same Pirate Bay founder, with a Malmö postal address and the email noc@brokep.com. The authoritative nameservers for the operator’s domain abstract.fi were the three Njalla nameservers at njalla.fo, njalla.no, and njalla.in. Njalla is the privacy-as-a-service domain proxy founded by Peter Sunde and operated through 1337 Services LLC in St. Kitts and Nevis. Some prefixes under ab stract carried abuse contacts at cyberdyne.is.

Reassignment on 27 February

On 27 February 2026, at 12:11:48 UTC, RIPE recorded the third reassignment. AS39287 became the property of Materialism s.r.l., a Romanian company at Bulevardul Metalurgiei in Bucharest, operating under the name “materialism.” A Materialism RIPE membership had been provisioned well in advance, on 30 September 2024, and had then sat dormant. The reassignment included the IPv4 prefix 45.158.116.0/22 and the IPv6 prefixes 2001:67c:2354::/48 and 2a02:6f8::/32, the last of which was originally allocated in August 2008 under the prior regime.

The peering arrangements were preserved across all three transitions. AS39287 has continued to import from and export to AS42708 (Telia), AS37560 (GTT), AS12552 (GlobalConnect), AS34244 (Voxility), and AS54990, in identical configuration, from the FLATTR period to the materialism period. The same routes leave the same upstream networks. The visible operator name is the variable.

The IANA list of accredited domain registrars also shows that the customer base of Immateriali.sm includes 1337 Services LLC, the trading entity behind Njalla. The registrar end of the chain and the privacy-proxy end accordingly sit within the same alumni cluster.

1337 Services. Yeah, I know.

Cert rotation on 27 February

The relevant certificate transparency record for Canonical’s repository endpoints shows the following entries during the same 24-hour window in which the routing reassignment occurred.

At 06:14:03 UTC on 27 February, Let’s Encrypt issued a fresh apex certificate for archive.ubuntu.com.

At 19:13:35 UTC on the same day, Let’s Encrypt issued a fresh apex certificate for security.ubuntu.com. The 2026 certificate transparency record for that hostname before this entry contains regional mirror certificates only. An apex certificate for security.ubuntu.com does not appear earlier in the visible log.

At 22:14:03 UTC on the same day, a fresh certificate was issued for clouds.archive.ubuntu.com.

In the following nine days the same pattern repeated for azure.archive.ubuntu.com, wildcard-gce.archive.ubuntu.com, and wildcard-ec2.archive.ubuntu.com. Each new certificate was issued for the apex hostname rather than for a regional mirror.
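Entries like these can be pulled from a CT search service (crt.sh, for instance, exposes a JSON interface) and filtered for apex issuances. A minimal offline sketch: the records below are illustrative stand-ins in the shape such a query returns, except for the two 27 February timestamps, which are the ones discussed above:

```python
# Illustrative CT entries (only the fields used here). The two 27 February
# timestamps are from the log discussed above; the mirror entry is invented.
entries = [
    {"common_name": "fi.archive.ubuntu.com", "not_before": "2026-01-12T08:00:00"},
    {"common_name": "archive.ubuntu.com",    "not_before": "2026-02-27T06:14:03"},
    {"common_name": "security.ubuntu.com",   "not_before": "2026-02-27T19:13:35"},
]

APEXES = {"archive.ubuntu.com", "security.ubuntu.com"}

def apex_issuances(entries):
    """Keep only certificates issued for the apex hostnames themselves,
    discarding regional mirrors, sorted by issuance time."""
    return sorted(
        (e["not_before"], e["common_name"])
        for e in entries
        if e["common_name"] in APEXES
    )

for ts, name in apex_issuances(entries):
    print(ts, name)
```

Run against the full log for a hostname, this is the filter that makes a first-ever apex issuance stand out from years of mirror churn.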

A valid origin certificate on the apex hostname is a precondition for putting that hostname behind a content delivery network without breaking encryption between the network and the origin. The certificate has to exist at the origin before the network can be told to fetch from there.

The synchrony of these two events on 27 February has not yet been explained.

The Attack Timeline

The minute-by-minute log of the incident is taken from Canonical’s own status.canonical.com page, snapshotted into Ubuntu Discourse thread 81470 by a user at approximately 22:52 UTC on 30 April. All times below are UTC. Where original sources used Pacific Daylight Time or British Summer Time, the conversion is given inline.
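Those conversions are mechanical: on 30 April, Pacific Daylight Time is UTC−7 and British Summer Time is UTC+1. A sketch using fixed offsets (the wall-clock readings shown are just the onset time re-expressed, not values taken from the sources):

```python
from datetime import datetime, timedelta, timezone

PDT = timezone(timedelta(hours=-7))  # Pacific Daylight Time, UTC-7
BST = timezone(timedelta(hours=+1))  # British Summer Time, UTC+1

def to_utc(dt: datetime) -> str:
    """Render an aware datetime as a UTC HH:MM:SS string."""
    return dt.astimezone(timezone.utc).strftime("%H:%M:%S")

# A 09:33:37 PDT wall-clock reading corresponds to the 16:33:37 UTC onset;
# the same instant on a London clock reads 17:33:37 BST.
print(to_utc(datetime(2026, 4, 30, 9, 33, 37, tzinfo=PDT)))   # 16:33:37
print(to_utc(datetime(2026, 4, 30, 17, 33, 37, tzinfo=BST)))  # 16:33:37
```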

16:33:37: blog.ubuntu.com first marked Down. Recorded as the Incident Start Time.

16:34:10: canonical.com Down.

16:34:45: academy.canonical.com Down.

16:35:15: developer.ubuntu.com Down.

16:35:22: maas.io Down.

16:36:09: jaas.ai Down. Ubuntu Security API (CVEs) Down.

16:37:13: Ubuntu Security API (Notices) Down.

16:41:57: assets.ubuntu.com Down.

16:43:25: ubuntu.com Down.

So the security advisory feed went dark within three minutes of the start, and the marketing apex within ten. The two hosts that were not yet attacked at this point were security.ubuntu.com and archive.ubuntu.com, the two endpoints whose unavailability breaks apt update on every Ubuntu installation worldwide.

19:34:38: security.ubuntu.com first marked Down.

19:40:01: archive.ubuntu.com Down.

This is notable: the attacker held the repository endpoints in reserve for three hours before activating them.

From 19:40 UTC for the next seventy minutes, both repository endpoints flapped repeatedly between Down and Operational on the status board. The status log records five Down/Operational transitions on security.ubuntu.com and four on archive.ubuntu.com during that period.
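Counting those transitions from a snapshot is a small fold over the event stream. A sketch with illustrative events: only the first and last security.ubuntu.com timestamps below come from the status log, the middle ones are invented to show the shape:

```python
# (host, UTC time, state) tuples in chronological order. The first and last
# security.ubuntu.com entries are from the status log quoted in this post;
# the intermediate ones are invented for illustration.
events = [
    ("security.ubuntu.com", "19:34:38", "Down"),
    ("security.ubuntu.com", "19:55:02", "Operational"),
    ("security.ubuntu.com", "20:11:47", "Down"),
    ("security.ubuntu.com", "20:51:13", "Operational"),
]

def transitions(events, host):
    """Count Down/Operational state changes for one host,
    assuming the events are already time-ordered."""
    states = [state for h, _, state in events if h == host]
    return sum(1 for a, b in zip(states, states[1:]) if a != b)

print(transitions(events, "security.ubuntu.com"))  # 3 for this sample data
```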

This pattern is consistent with a mitigation being attempted at the origin (rate limits, geographic filters, traffic scrubbing) and failing under sustained load at the announced 3.5 Tbps scale.

20:50:29: archive.ubuntu.com marked Operational.

20:51:13: security.ubuntu.com marked Operational.

After this 44-second window neither host appears Down again in the captured snapshot, which extends to 22:52 UTC. The flapping stops cleanly. The two endpoints stabilise together, less than a minute apart, four hours and seventeen minutes into the attack.
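The intervals quoted above fall straight out of the snapshot’s timestamps:

```python
from datetime import datetime

FMT = "%H:%M:%S"
onset    = datetime.strptime("16:33:37", FMT)  # blog.ubuntu.com first Down
archive  = datetime.strptime("20:50:29", FMT)  # archive.ubuntu.com Operational
security = datetime.strptime("20:51:13", FMT)  # security.ubuntu.com Operational

print(security - archive)  # 0:00:44  -> the 44-second window
print(security - onset)    # 4:17:36  -> four hours and seventeen minutes in
```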

The currently resolved state of those two hostnames matches the destination implied by that stabilisation. As of this writing, security.ubuntu.com and archive.ubuntu.com both resolve to 104.20.28.246 and 172.66.152.176, addresses now operated by Cloudflare under AS13335.

The other affected hosts (ubuntu.com, canonical.com, launchpad.net, snapcraft.io, login.ubuntu.com) all still resolve to Canonical’s own AS41231 space at 185.125.189.x and 185.125.190.x. The authoritative nameservers for ubuntu.com remain ns1.canonical.com, ns2.canonical.com, and ns3.canonical.com.

The selective Cloudflare onboarding

Canonical handed Cloudflare exactly two A records: the two records the attacker had targeted for repository denial. Everything else stayed on Canonical’s own iron and weathered the attack under whatever mitigation was already in place.

The non-repository hosts continued flapping through the end of the snapshot. They eventually came back through some combination of upstream filtering and the attack subsiding.

Canonical’s first public acknowledgement was posted at 07:13 UTC on 1 May, ten hours after the repository endpoints had been made stable behind Cloudflare. Full restoration of all components was confirmed at 12:44 UTC on 1 May, roughly twenty hours after onset.

Naming what happened

No ransom payment moved by any visible channel.

Cryptocurrency flows of the relevant magnitude are absent from the public record.

A demand letter has not surfaced.

Negotiation, if any occurred, was conducted in private.

What did move was a paid subscription.

Canonical’s two highest-value endpoints, the ones whose denial creates a worldwide failure of automated security updates, transitioned to a service relationship with a vendor whose other current customers include the booter operation that was attacking them.

This transaction concluded without requiring Cloudflare to issue any demand. Beamed’s continued availability for hire is the demand. The outage clock running on Canonical’s own infrastructure is the deadline. The protector collects on both sides while remaining, at every individual moment, content-neutral and within the letter of its terms of service. Whether Cloudflare designed this position or arrived at it through the aggregation of unrelated customer decisions is, from the perspective of how a racket operates, immaterial. It works the same either way.

Any historian should be able to call this out as the same architecture we’ve all seen before.

Moses Annenberg’s General News Bureau in the 1930s sold timely race-track results to bookmakers across the United States. Bookmakers who subscribed survived. Bookmakers who declined the subscription found their odds-setting capacity destroyed by competitors who had subscribed.

Annenberg’s revenue depended on his monopoly over the verification of race results, which made every unauthorised bookmaker dependent on his wire to operate. The federal government broke that monopoly through tax prosecution in 1939, and successor wire services were raided into the 1940s. Mayor LaGuardia in 1942 wasn’t messing around:

“Nine men were arrested yesterday in raids on a fifth-floor suite of offices at 126 Liberty Street and in apartments in an eighty-five-family house at 834 Penfield Street, the Bronx, in what the police called a million-dollar-a-year wire service for poolroom bookmakers and other gamblers on horse racing in New York, New Jersey, Westchester and Nassau County.”

The DDoS-protection market occupies roughly the same position today with respect to the booter market. Cloudflare’s revenue depends on its position as the verifier of whether a service is reachable on the public internet. When the same company is also the booter’s hosting provider, the threat and protection roles have been merged into a single revenue stream.

What distinguishes this particular incident is how visibly the laundering runs through the public record. Companies House holds the corporate paperwork. RIPE’s database holds the routing reassignment. Certificate transparency logs capture the rotation date for the apex certificates. Canonical’s own status page captures the minute the records changed.

Every part of it is in a public registry or a corporate disclosure. Even the 27 February cluster is on the public record. On that day three preparations completed within a single calendar window. Materialism s.r.l. took ownership of AS39287 and the long-held IPv6 prefix that came with it. Immaterialism Limited filed its Companies House paperwork. And on Canonical’s side, the two apex hostnames that would later be moved behind a content delivery network had their origin certificates renewed.

The four-hour gap between the onset of the attack and the appearance of Cloudflare addresses on Canonical’s repository hostnames is the interval during which the purchasing decision moved. I imagine engineers moving from “hold the line” against attacks routed through Cloudflare to “sign the Cloudflare contract”: roughly the time it took for the cost of continued outage to exceed the deal Cloudflare offered.

The new customer relationship was visible at 20:50:29 UTC on 30 April 2026.
