10 interesting stories served every morning and every evening.

Several npm latest releases are compromised · Issue #7383 · TanStack/router

github.com


Postmortem: TanStack npm supply-chain compromise | TanStack Blog

tanstack.com

by Tanner Linsley on May 11, 2026.

Last updated: 2026-05-11

On 2026-05-11 between 19:20 and 19:26 UTC, an attacker published 84 malicious versions across 42 @tanstack/* npm packages by combining: the pull_request_target “Pwn Request” pattern, GitHub Actions cache poisoning across the fork↔base trust boundary, and runtime memory extraction of an OIDC token from the GitHub Actions runner process. No npm tokens were stolen, and the npm publish workflow itself was not compromised.

The malicious versions were detected publicly within 20 minutes by an external researcher, ashishkurmi of StepSecurity. All affected versions have been deprecated; npm security has been engaged to pull tarballs from the registry. We have no evidence of npm credentials being stolen, but we strongly recommend that anyone who installed an affected version on 2026-05-11 rotate AWS, GCP, Kubernetes, Vault, GitHub, npm, and SSH credentials reachable from the install host.

Tracking issue: TanStack/router#7383

GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx

Packages affected

42 packages, 84 versions (two per package, published roughly 6 minutes apart). See the tracking issue for the full table. Confirmed-clean families: @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, @tanstack/store, @tanstack/start (the meta-package, not @tanstack/start-*).

What the malware does

When a developer or CI environment runs npm install, pnpm install, or yarn install against any affected version, npm resolves the malicious optionalDependencies entry, fetches the orphan payload commit from the fork network, runs its prepare lifecycle script, and executes a ~2.3 MB obfuscated router_init.js smuggled into the affected tarball. The script:

Harvests credentials from common locations: AWS IMDS / Secrets Manager, GCP metadata, Kubernetes service-account tokens, Vault tokens, ~/.npmrc, GitHub tokens (env, gh CLI, .git-credentials), SSH private keys

Exfiltrates over the Session/Oxen messenger file-upload network (filev2.getsession.org, seed{1,2,3}.getsession.org) — end-to-end encrypted with no attacker-controlled C2, so blocking by IP/domain is the only network mitigation

Self-propagates: enumerates other packages the victim maintains via registry.npmjs.org/-/v1/search?text=maintainer:&lt;user&gt; and republishes them with the same injection

Because the payload runs as part of npm install's lifecycle, anyone who installed an affected version on 2026-05-11 must treat the install host as potentially compromised.
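The self-propagation step abuses a legitimate, public registry endpoint. Maintainers can use the same endpoint defensively to enumerate their own blast radius. A minimal sketch of querying and parsing that search response — the objects/package/name shape matches the public npm search API, but treat the field names as assumptions if the registry changes:

```python
import json
from urllib.request import urlopen

# Public npm search endpoint (the same one the malware queries).
SEARCH_URL = "https://registry.npmjs.org/-/v1/search?text=maintainer:{user}&size=250"

def parse_maintained_packages(search_json: str) -> list[str]:
    """Extract package names from an npm search API response body."""
    data = json.loads(search_json)
    return [obj["package"]["name"] for obj in data.get("objects", [])]

def maintained_packages(user: str) -> list[str]:
    """Fetch the packages a maintainer owns (live network call; run sparingly)."""
    with urlopen(SEARCH_URL.format(user=user)) as resp:
        return parse_maintained_packages(resp.read().decode("utf-8"))
```

Running this for each maintainer on the scope gives the set of packages a single stolen credential could republish.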

All times UTC. Timestamps are from the GitHub API and npm registry.

Pre-attack (cache poisoning phase)

Detonation (publish phase)

Workflow run 25613093674 starts (19:15:44) and fails.

Detection and response

Formal malware reports are submitted via npm

Tanner begins the npm deprecation process for all 84 affected versions.

Public Twitter/X/LinkedIn/Bluesky disclosure from @tan_stack and maintainers

All cache entries for all TanStack/* GitHub repositories purged via API.

Hardening PR merged: bundle-size.yml restructured, repository_owner guards added, third-party action refs pinned to SHAs.

Official GitHub Security Advisory published; CVE requested

Three vulnerabilities chained together. Each is necessary for the attack; none alone is sufficient.

1. pull_request_target “Pwn Request” pattern in bundle-size.yml

bundle-size.yml ran on pull_request_target for fork PRs and, inside that trigger context, checked out the fork's PR-merge ref and ran a build:

```yaml
on:
  pull_request_target:
    paths: ['packages/**', 'benchmarks/**']

jobs:
  benchmark-pr:
    steps:
      - uses: actions/checkout@v6.0.2
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge # fork's merged code
      - uses: TanStack/config/.github/setup@main # transitively calls actions/cache@v5
      - run: pnpm nx run @benchmarks/bundle-size:build # executes fork-controlled code
```

The author of the workflow attempted a trust split (the comment-pr job is separate from benchmark-pr, with a comment in the YAML noting the intent to keep benchmark-pr “untrusted with read-only permissions”). The split is correct in spirit but missed two facts:

actions/cache@v5's post-job save is not gated by permissions:. Cache writes use a runner-internal token, not the workflow's GITHUB_TOKEN. Setting permissions: contents: read does not block cache mutation.

Cache scope is per-repo and shared across pull_request_target runs (which use the base repo's cache scope) and pushes to main. A PR running in the base repo's cache scope can poison entries that production workflows on main will later restore.
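One mitigation in this class (the timeline above notes repository_owner guards were merged in the hardening PR) is to gate fork-triggered jobs so untrusted code never runs in the base repo's privileged context or cache scope. The sketch below is illustrative, not the actual contents of the hardening PR — the condition and placeholders are assumptions:

```yaml
# Illustrative guard, not the actual hardening PR: run the job only when
# the PR's head repo belongs to the org, so fork code never executes in
# the base repo's pull_request_target context or cache scope.
on:
  pull_request_target:
    paths: ['packages/**', 'benchmarks/**']

jobs:
  benchmark-pr:
    if: github.event.pull_request.head.repo.owner.login == github.repository_owner
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@<full-commit-SHA> # pin third-party actions to SHAs
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge
```

Note that the permissions: block alone would not have helped here, for the reason given above: cache saves do not honor it.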

2. GitHub Actions cache poisoning across trust boundaries

The malicious vite_setup.mjs was specifically designed to write data into the pnpm-store directory under a key the legitimate release.yml workflow would compute and look up: Linux-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}. When the benchmark-pr job ended, actions/cache@v5's post-step saved the (now-poisoned) pnpm store to that exact key. When release.yml next ran on a push to main, its Setup Tools step restored the poisoned entry — entirely as designed.
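The key is fully determined by public repository contents, which is what made it predictable: anyone with the fork can compute it. A rough Python approximation of a hashFiles-style digest (GitHub's exact algorithm may combine per-file hashes differently; the point is only that the key is derivable from pnpm-lock.yaml alone):

```python
import hashlib
from pathlib import Path

def cache_key(lock_files: list[Path], runner_os: str = "Linux") -> str:
    """Approximate a Linux-pnpm-store-${{ hashFiles(...) }} cache key.

    NOTE: a sketch, not GitHub's real implementation. It hashes each
    matched file with SHA-256, then hashes the concatenated digests.
    Determinism is the takeaway: the key depends only on file contents
    an attacker can read in their own fork.
    """
    outer = hashlib.sha256()
    for f in sorted(lock_files):
        outer.update(hashlib.sha256(f.read_bytes()).digest())
    return f"{runner_os}-pnpm-store-{outer.hexdigest()}"
```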

This is the class of attack documented by Adnan Khan in 2024. It's not a TanStack-specific bug; it's a known GitHub Actions design issue that requires conscious mitigation.

3. OIDC token extraction from runner memory

release.yml declares id-token: write (legitimately needed for npm OIDC trusted publishing). When the poisoned pnpm store is restored on the runner, attacker-controlled binaries are on disk and get invoked during the build step. Those binaries:

Locate the GitHub Actions Runner.Worker process via /proc/*/cmdline

Read /proc/&lt;pid&gt;/maps and /proc/&lt;pid&gt;/mem to dump the worker's memory

Extract the OIDC token (which the runner mints lazily, in memory, when id-token: write is set)

Use the token to authenticate POST requests directly to registry.npmjs.org — bypassing the workflow's Publish Packages step entirely

This is the same memory-extraction technique (and verbatim Python script, with attribution comment) used in the tj-actions/changed-files compromise of March 2025. The attacker did not invent novel tradecraft; they recombined published research.

Why none alone is enough

pull_request_target alone is fine for trusted operations (labeling, comments)

Cache poisoning alone (e.g., from inside an already-compromised dependency) requires a separate publish vehicle

OIDC token extraction alone requires existing code execution on the runner

The chain only works because each vulnerability bridges the trust boundary the others assumed: PR fork code crossing into the base-repo cache, the base-repo cache crossing into the release-workflow runtime, and the release-workflow runtime crossing into npm registry write access.

How we found out

Detection was external. carlini opened issue #7383 ~20 minutes after the publish, with a full technical analysis. Tanner received a phone call from Socket.dev moments after starting the war room, confirming the situation.

IOC fingerprints (for downstream maintainers and security tools)

In any @tanstack/* package's manifest:

```json
"optionalDependencies": {
  "@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
}
```

File: router_init.js (~2.3 MB, package root, not in “files”)

Cache key: Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11

2nd-stage payload URLs: https://litter.catbox.moe/h8nc9u.js, https://litter.catbox.moe/7rrc6l.mjs

Exfiltration network: filev2.getsession.org, seed{1,2,3}.getsession.org

Forged commit identity: claude &lt;claude@users.noreply.github.com&gt; (note: not the real Anthropic Claude — a fabricated GitHub no-reply email)

Real attacker accounts: zblgg (id 127806521), voicproducoes (id 269549300)

Attacker fork: github.com/zblgg/configuration (a fork of TanStack/router renamed to evade fork searches)

Orphan payload commit (in the fork network): 79ac49eedf774dd4b0cfa308722bc463cfe5885c

Workflow runs that performed the malicious publishes:

github.com/TanStack/router/actions/runs/25613093674 (attempt 4)

github.com/TanStack/router/actions/runs/25691781302
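For downstream maintainers, the manifest IOC above is easy to check mechanically. A hedged sketch (the traversal and helper names are illustrative, not a tool TanStack shipped) that walks a node_modules tree and flags the malicious optionalDependencies pin or a stray router_init.js:

```python
import json
from pathlib import Path

# IOCs from this postmortem.
MALICIOUS_REF = "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
PAYLOAD_NAME = "router_init.js"

def scan_node_modules(root: Path) -> list[str]:
    """Return human-readable findings for any package matching the IOCs."""
    findings = []
    for manifest in root.rglob("package.json"):
        try:
            pkg = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or invalid manifests
        deps = pkg.get("optionalDependencies") or {}
        if MALICIOUS_REF in deps.values():
            findings.append(f"{manifest}: malicious optionalDependencies pin")
        if (manifest.parent / PAYLOAD_NAME).exists():
            findings.append(f"{manifest.parent}: contains {PAYLOAD_NAME}")
    return findings
```

A hit on either fingerprint means the host should be treated as compromised per the rotation guidance above.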

What went well

External researchers noticed and reported with full technical detail within ~20 minutes of the incident

Maintainer team coordinated immediately and effectively across many timezones

The detection community already had a clear public IOC pattern within hours

What could have been better

No internal alerting. We learned about the compromise from a third party. We need monitoring on our own publishes. We'll work closely with security research firms in the ecosystem that can detect these issues quickly — potentially even in-house — and make the feedback loop tighter.
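One lightweight form of that monitoring is to poll each package's registry metadata and alert on versions that were not produced by the release pipeline. A minimal polling sketch against the public registry packument — the expected-versions source is an assumption (in practice it would come from the release pipeline's own records):

```python
import json
from urllib.request import urlopen

def unexpected_versions(registry_doc: str, expected: set[str]) -> list[str]:
    """Return versions present in the registry packument but not expected.

    `registry_doc` is the JSON body of https://registry.npmjs.org/<name>;
    its "versions" field maps version strings to manifests.
    """
    doc = json.loads(registry_doc)
    return sorted(v for v in doc.get("versions", {}) if v not in expected)

def check_package(name: str, expected: set[str]) -> list[str]:
    """Fetch the packument and flag surprise versions (live network call)."""
    with urlopen(f"https://registry.npmjs.org/{name}") as resp:
        return unexpected_versions(resp.read().decode("utf-8"), expected)
```

Run on a schedule against every package on the scope, this would have flagged the rogue publishes within one polling interval instead of waiting on a third-party report.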

pull_request_target workflows had not been audited, despite being a long-known dangerous pattern

Floating refs (@v6.0.2, @main) on third-party actions create standing supply-chain risk independent of this incident

Unpublish was unavailable for nearly all affected packages because of npm's “no unpublish if dependents exist” policy. We had to rely on npm security to pull tarballs server-side, which adds hours of delay during which malicious tarballs remain installable

The 7-maintainer list on the npm scope means seven separate credential-theft targets for the same blast radius

OIDC trusted-publisher binding has no per-publish review. Once configured, any code path in the workflow can mint a publish-capable token. We need to either (a) move to short-lived classic tokens with manual review, or (b) add provenance-source verification to detect publishes from unexpected workflow steps

What we got lucky on

The attacker chose a payload that broke tests, which made the publish step (which would have produced cleaner-looking tarballs) skip — meaning the attack was loud enough to detect quickly. A more careful attacker who didn't break tests could have published silently for hours longer

The attacker reused public tradecraft (a verbatim memory-dump script, attribution comment included) instead of writing novel code — making IOC matching faster

Open questions

These need answers before we close the postmortem.

Did bundle-size.yml's Setup Tools step actually call actions/cache@v5? Verify by reading the post-job logs from one of the pull_request_target runs against PR #7378 (e.g., run id 25666610798). Tanner has access; needs to be done manually

What was in the initial PR head commit (before the force-pushes wiped it)? GitHub's reflog may have it. Check via gh api or the GitHub support team

How did the malicious commit get into the fork's git object store — was it pushed directly via git, or created via the GitHub web UI (which would leave audit-log entries)?

Was voicproducoes a real account or a sock puppet? Cross-reference its activity history

Did the npm cache also get poisoned (the 6 duplicate linux-npm-store-* entries)? Were any actually used?

Does the attack require Nx Cloud, or would it have worked with just the GitHub Actions cache?

Can we identify any other fork in the TanStack/router fork network that contains the orphan payload commit? (If yes, the cleanup is harder — every fork hosting it keeps it accessible via github:tanstack/router#79ac49ee…)

Are any other TanStack repos (router, query, table, form, virtual, etc.) using the same bundle-size.yml-style pattern? Audit needed

How many users actually downloaded the affected versions during the publish window? Get from npm support

Did any of the seven listed maintainers' machines get compromised separately? (None of the malicious publishes used a maintainer's npm token, but maintainer machines could have been the secondary target via the self-propagation logic)

Tracking issue: TanStack/router#7383

GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx

Related research:


GitLab Act 2

about.gitlab.com

We've been working through some significant changes inside GitLab over the past few days, and I want to share them with you directly. The email I sent the team is included below for full context.

The agentic era affords GitLab the largest opportunity in our history as a company, and we're making the structural and strategic decisions to meet it.

This letter has three parts. First, the operational and structural news, which is hard. Second, the strategic thesis we're betting on. And finally, what this means specifically for you, our customers and investors.

The structural news

This morning we shared with team members that we're beginning a restructuring process at GitLab, and we're running it differently than most. The planning is happening openly, including a voluntary separation window. That creates real uncertainty for our team over the next few weeks, but we believe the outcome will be better for it. Where we can, we plan to finalize the new shape of the company on or before June 1. Where local requirements apply, we will not make any changes until the local process is complete.

Four operational changes are part of the workforce reduction.

We're reevaluating our operational footprint and planning to reduce the number of countries where we have small teams by up to 30%. We'll continue serving customers in those markets through our partner network.

We're planning to flatten the organization, removing up to three layers of management in some functions so leaders are closer to the work.

We're reorganizing R&D to create roughly 60 smaller, more empowered teams with end-to-end ownership, nearly doubling the number of independent teams.

We're rewiring internal processes with AI agents, automating the reviews, approvals, and handoffs to speed us up, and we plan to right-size roles across the company to follow suit.

Operational changes and the update to our strategy are happening together: they are related but independent. Operationally, we grew into a shape that was right for the last era and isn't right for this one. The strategy below is what we're betting on next, and it stands on its own.

We are reaffirming our Q1 and full-year FY27 guidance today. The final scope and financial impact of the restructuring will be shared on our June 2 earnings call, once we've finished the plan and received approval from our board.

Our Core Beliefs

Underpinning the changes we're making today and our go-forward strategy are 10 core beliefs that span the world we're building for, the architectural bets we're making, and how we'll deliver.

The world we're building for

We're evolving our strategy to optimize for the future state of software engineering:

Software will be built by machines, directed by people. AI is the substrate on which future software gets built. Agents will plan, code, review, deploy, and repair. Humans still own the judgment that matters most: architecture, deep understanding of the customer problem, the tradeoffs that require taste. This is why we built and released the Duo Agent Platform in January. Our first-quarter adoption is promising, and we're ready to accelerate.

The agentic era multiplies demand for software. Software has been the force multiplier behind nearly every business transformation of the last two decades. The constraint was the cost and time of producing and managing it. That constraint is collapsing. As the cost of producing software collapses, demand for it will expand. Last year, the developer platform market was measured in tens of dollars per user per month; this year it is hundreds, and it is headed to thousands. Not only is the value of software for builders increasing, but we believe there will be more software and more builders than ever, and we will serve an increasing volume of both.

The consequential work belongs to engineers. Engineering has always been about more than writing code. Great engineers are problem solvers and builders who care about system design, distributed systems, reasoning through failures, safely integrating new capability into critical systems, and making decisions under ambiguity. These are exactly the skills the agentic era needs more of, especially as the volume of software increases. The supply of deep technical problems is multiplying, and the engineers who can solve them will be among the scarcest and most valuable talent in the market. Our core users' roles are evolving; their importance is only increasing.

The architectural bets we're making

Platforms that weren't built for machine scale are starting to break under it. Winning means investing in the fundamentals that really matter: security, performance, scalability, reliability, and user experience. We're making five fundamental architectural bets. Each one is underway, and we plan to deliver without disruption to the GitLab customers that depend on us every day.

Machine-scale infrastructure. Agents open merge requests in parallel, trigger pipelines around the clock, and push commits at a rate no human team ever did. Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. We're doing a generational rebuild of the underlying infrastructure to handle agent-rate work as the default. Git itself is being reengineered for machine scale. The monolith is giving way to modern, API-first, composable services. And agent-specific APIs are being built so agents can act as first-class users of the platform, not as bolted-on consumers of human-shaped interfaces. The value of this 100x-scale infrastructure, and the reliability and performance it provides, is much higher than the generation of infrastructure in the market today.

Orchestration across the full lifecycle. A single agent that writes code or opens a merge request produces activity. Enterprises don't need agent activity. They need running software that moves the business forward. Orchestration is the layer that gets you there. It coordinates agents across the lifecycle: assigning work, managing state, passing context, resolving conflicts, enforcing policy, and keeping a human in the loop when it matters. CI/CD is one of the components getting reimagined. The GitLab pipeline was designed to take human-rate commits and ship them safely; in the agentic era, our orchestration service becomes the runtime that coordinates agents, validates the work, enforces guardrails, and drives change all the way to production at machine rate.

Context is our superpower. Every dev tool vendor is converging on similar code generation capabilities. Enterprise AI bills are climbing as fast as adoption. What doesn't commoditize is the unique context the model gets to work with: a data model that connects planning, code, review, security, deployment, and operations across every project and repository, accumulated over years of a team's work. We're investing in that connected data model as a first-class, API-accessible service, and it delivers more value with every human and agent action. Context is what lets agents spend fewer tokens and deliver better results.

Governance built into the core. Governance is what lets enterprises move fast in the agentic era. Like a race car, it doesn't matter how fast you can go if you can't maintain control. As agents take on more of the work, enterprises need a platform that can enforce who's allowed to do what, prove what happened and why, and keep sensitive code and data where it belongs. We're building identity, audit, policy, and deployment flexibility as core platform services that every agent, pipeline, and merge request runs through by default, rather than as a separate product layered on top.

One platform, three modes. Trillions of lines of code run the world's businesses today. Rewriting most of it is too risky and too expensive to justify. The cloud era taught us that enterprises run hybrid, and operating across that mix has been painful, expensive, and never fully solved. The agentic era will be the same. Every enterprise will live across a spectrum of human-owned, agent-assisted, and agent-autonomous work. We're building one platform, one data model, one governance system that operates across all three modes, and delivering it cloud- and model-neutral.

How we'll deliver it

A flexible business model. As the way software gets built changes, the business model must evolve with it. Agentic AI can augment teams and perform real work, and the business model must scale with the cost and value of the work performed. We're keeping what works: the predictability of subscriptions for what customers have today. We've already added consumption pricing for the work agents do, with other major players following over the past few months. Next, we're introducing more flexibility to mix both as the way of work evolves.

Culture of excellence. Operational character is a key differentiator. What matters most right now is the ability to move quickly, own outcomes, and deliver real value to our customers. Speed with Quality, Ownership Mindset, and Customer Outcomes are our new operating principles, built on a culture of excellence.

To our customers

For our customers, the most important thing today is what doesn't change. The support, roadmap commitments, contractual terms — all of it continues without disruption. Your account team is available to walk you through today's news if you'd like a conversation.

Where you should expect to see us evolve is in the quality, depth, and pace of innovation we ship. We will lead the way in agentic engineering by being customer zero of our platform, demonstrating with our innovation and our results the success you can bet on as our customers. Our vision for the product and business model is clearer than it has ever been, and we're accelerating the work. We'll share the next wave of our innovation roadmap at GitLab Transcend on June 10, 2026, and we hope you'll join us.

To our investors

Today's announcement is a deliberate move to lead in a market we believe is in the middle of its largest shift in twenty years. The opportunity here isn't incremental growth on a DevSecOps platform — we're building toward becoming the trusted enterprise platform for software creation in the AI era.

We look forward to sharing an update on the business and our Q1 results in our upcoming earnings call on June 2, 2026. We'll also share the final scope and financial impact of the restructuring at that time, although we anticipate reinvesting the majority of savings into accelerating our progress against the specific growth and technological initiatives we've outlined.

This is the most consequential work we've taken on as a company. We'll prove it in the innovation we bring to market, how we serve our customers, and how we create value for our shareholders over the near and long term.

Thank you,

Bill Staples
CEO, GitLab

GitLab Act 2 Update

A letter to our team.

Today is hard. I want to acknowledge how difficult today is, given the volume of change we're asking you to take in and the uncertainty of a transparent restructuring process.

We've spent three days together on the why, the what, and the how of where GitLab is going. This letter is the written summary, so you have something to reflect on as we navigate the coming week together.

Why we're initiating a transparent restructure of the company

This restructure process is not like others you may be seeing in the news. Of course AI is changing the way we work and is part of our transformation plan, but this is not an AI optimization or cost-cutting exercise. We intend to reinvest the vast majority of savings back into the business to accelerate our unique opportunity in the agentic era, as defined in our Act 2 Core Beliefs.

One way our restructure process is different is that we are doing it transparently and including every team member in the process. Starting today, managers across the company are entering deeper conversations with leadership about how the restructuring principles land inside their teams. Those conversations will inform the decisions about impacted roles. The reason we're not landing the full decision today is that getting the shape of the next GitLab right matters more than getting it fast — and a transparent process with input from you, your managers, leaders across the organization, and our employee representatives is the best way to land this change with an organization ready to move forward.

As we discussed today, we are planning a workforce reduction driven by a concentration of our country footprint, a flattening of how we're organized, and role right-sizing designed to optimize the shape and size of our teams. In addition, we're establishing a new set of operating principles, founded on a culture of excellence.

I want to be direct: I want to do this once, do it right, and not revisit our structure anytime in the foreseeable future. The team that comes through this restructure is the team that builds Act 2, and you should be able to plan your life and your work without bracing for what comes next. Let's talk about what's changing and how we get it right.

The restructuring principles we're optimizing for

Reduced operational footprint: We're reducing our country footprint because operating in nearly 60 countries does not allow us to give every team member a great experience. We anticipate reducing the number of countries by 30%, focused on geos where we have only a handful of people or fewer. Team members who are in good standing and would like to relocate are welcome to do so. We'll continue to serve customers in those markets through our partner network where appropriate.

Flatter organization: We're flattening our organization because eight layers is too deep for a company our size, and management layers are slowing us down. Every layer of management increases the number of places where priorities and communication get filtered. A flatter organization will better connect every team member with leadership.

Role right-sizing: As we shift to a new strategy and way of working, powered by AI, we must revisit the staffing size for each role to ensure we are optimizing for speed and customer outcomes. In some cases, AI can augment and accelerate what team members have been doing; in other places, we need to expand certain roles to go faster. We expect daily use of AI by every individual in the company, and we are launching AI acceleration programs to support every role as part of our transformation.

How we'll operate going forward

We will be retiring CREDIT as our values framework. CREDIT was the right framework for the very successful Act 1 that took the company to $1B ARR. Those values shaped a company that thrived through COVID and our IPO to become one of the most recognized names in DevSecOps. We are not retiring them because they were wrong; we are choosing instead to focus on something different for this era, which demands a different operating posture. Many of the same values we have been living, and often talk about, are still directly applicable in this era. Our three new operating principles are:

Speed with Quality means we move faster than we have, with the discipline that lets others rely on the work, especially our customers. We achieve this with smaller teams, tighter cycles, and stronger guardrails. We will hold a higher bar for what we commit to and what we deliver against those commitments. Here are some specific examples we shared today of what we expect every team member to embody:

We organize and execute cross-functional projects in small teams with more autonomy

We set high standards for quality, and always prove what we build with customer zero first

We build fast, experiment, learn, and fail fast, especially for two-way decisions

If an agent can do it, we automate it, and we focus on the work where our judgment or skill is essential

We have zero tolerance for unnecessary bureaucracy

We use both sync (for speed) and async (for scale) patterns

Ownership Mindset means we expect every individual to act as a steward for the company, with autonomy. The people closest to the work make the decisions about it, and they own the result. Layers of management between leaders and the work, and handoffs that dilute accountability, are eliminated. Some examples of the mindset we expect every team member to embody:

I take pride in my work because it delivers real outcomes

It is never someone else's problem

Everyone is on my team

I care deeply for the customer and the health of the business

I am efficient with budget, people, and everyone's time

Customer Outcomes means we mea­sure our­selves by what changes for the cus­tomer, not by the ac­tiv­ity on our side. Internal mile­stones mat­ter only to the ex­tent that they con­nect to cus­tomer im­pact. Examples of be­hav­iors we ex­pect from every­one:

I can ex­plain how my work con­nects to a cus­tomer out­come, not just a roadmap item or task/​ac­tiv­ity

My work cre­ates joy and de­light for cus­tomers so they love GitLab

I build cus­tomer re­la­tion­ships on fair­ness and mu­tual re­spect, and I make sure every deal works for both sides.

I’m fo­cused on value re­al­iza­tion first be­cause that dri­ves big­ger com­mit­ments over time

When a cus­tomer is stuck, I treat their time like it’s more ex­pen­sive than mine

These are built on a cul­ture of ex­cel­lence, which we ex­pect every team mem­ber to up­hold. That means:

Excellence in thought: team members who are sharp, understand things deeply and with precision, and communicate with clarity and integrity

Excellence in ac­tion: peo­ple with the abil­ity to pro­duce high qual­ity re­sults and busi­ness im­pact

Interpersonal ex­cel­lence: in­di­vid­u­als who are good hu­mans, em­brace di­ver­sity, in­clu­sion and be­long­ing, as­sume good in­tent and treat every­one with re­spect

Next steps in the re­struc­tur­ing process

Our transparent restructure process creates uncertainty that is real and hard, and I’m not going to pretend otherwise. I ask that you reflect on the why, what and how, and engage your manager in a real conversation about the work, the questions and concerns you have, and what the next chapter looks like for you. Your manager may not have all the answers, because they too are going through this period of uncertainty. The conversation still matters, and your input shapes how we land as a team.

The vol­un­tary win­dow ex­ists for you. After three days walk­ing through Act 2 to­gether, you have the pic­ture you need to de­cide whether GitLab is the right place for you in the next chap­ter of your ca­reer. If it is­n’t, talk to your man­ager or di­rec­tor and, where lo­cal re­quire­ments al­low, ap­ply for a sep­a­ra­tion be­fore May 18. If ap­proved, we’ll in­clude you in the same sep­a­ra­tion pack­age as any­one else. The ap­proval process ex­ists be­cause in­di­vid­ual cir­cum­stances and lo­cal re­quire­ments vary and have to be weighed case by case. This process is meant to pro­vide some­thing we all de­serve once the re­struc­ture is com­plete: a team that is ex­cited and com­mit­ted to the fu­ture of GitLab. Please take a mo­ment to lis­ten to what Sid, our founder and Exec Chair, thinks about the changes we’re mak­ing to­day.

Why I hope you stay

I want to spend the rest of this letter convincing you to stay, if the “Why” and “What” sessions haven’t already convinced you.

Better em­ployee ex­pe­ri­ence. Our over­rid­ing ob­jec­tive is to bring a sig­nif­i­cant im­prove­ment to the joy and im­pact of each team mem­ber par­tic­i­pat­ing in Act 2. We know that by do­ing that, we can bet­ter cap­ture the cre­ativ­ity and im­pact of every in­di­vid­ual and build a world class busi­ness.

Better pay. Once approved, our new bonus program will give every team member who isn’t on an incentive compensation or bonus plan today the opportunity to earn a cash bonus based on their individual performance, targeting 10% of salary, awarded at their manager’s discretion.

Smaller, empowered R&D teams with a clear vision. We aspire to double the number of smaller R&D teams, up to 60, with more autonomy and ownership.

Less friction, less overhead. The handoffs that have slowed us down are being significantly reduced, as are the layers between you and the decisions that affect your work. If you’ve ever been frustrated at GitLab by how long it took to get something obvious done, Act 2 is engineered around removing that friction.

Solve big technical problems. Our five architectural bets pose deep technical problems that will redefine GitLab for the agentic era: a new git for agents that supports machine scale, an orchestration layer for humans, agents and full-lifecycle orchestration, a connected graph of full-lifecycle data as a service, a brand-new policy service providing centralized governance, and a fully autonomous software engineering experience.

More flex­i­ble buy­ing pro­grams. Our new con­sump­tion buy­ing pro­grams will make it far eas­ier to sell GitLab and for cus­tomers to buy GitLab seats + cred­its and un­lock adop­tion faster than ever be­fore.

Career growth. Bold bets like Act 2 are rare and bring with them op­por­tu­ni­ties for every team mem­ber at every level to learn faster and de­velop skills and ex­pe­ri­ence that will mat­ter for the rest of your ca­reer, here or wher­ever your path takes you.

Aligned leadership with the will to win. We have a leadership team, with e-group and our SLT, that is committed to winning, making the hard decisions, and aligning the organization cross-functionally to accelerate results. We will hold ourselves accountable to help you succeed and create a winning organization.

Uniquely po­si­tioned to win. We are uniquely po­si­tioned to not only par­tic­i­pate, but to lead in our cat­e­gory where the TAM is ex­plod­ing at a step func­tion rate. We have struc­tural ad­van­tages in data, tech­nol­ogy and cus­tomer trust that give us an ad­van­tage over AI labs and start-ups that we can har­ness to re­de­fine how soft­ware is built in the agen­tic era. By be­ing part of Act 2, you will be part of a win­ning or­ga­ni­za­tion that helps shape soft­ware en­gi­neer­ing in the agen­tic era.

For those who are leav­ing

Whether by choice or oth­er­wise: the work you did here mat­tered, and it con­tin­ues to mat­ter. You came to GitLab when it needed you. You built things the next chap­ter is built on. We owe you real sup­port through the tran­si­tion, and our gen­uine re­spect. If we’re ask­ing our team to be world-class, we have a rec­i­p­ro­cal oblig­a­tion to be world-class in how we treat peo­ple leav­ing us. That’s the stan­dard we’re hold­ing our­selves to.

I’ll close with this. None of what I’ve writ­ten makes to­day eas­ier. It is­n’t sup­posed to. What I want you to know is that we’ve made these de­ci­sions care­fully, our in­ten­tion is to make them only once, and we’re go­ing to do right by the peo­ple leav­ing and by the peo­ple stay­ing.

Thank you for what you’ve built. Thank you for what comes next.

Bill Staples, CEO, GitLab

Software engineering may no longer be a lifetime career

www.seangoedecke.com

I don’t think there’s compelling evidence that using AI makes you less intelligent overall¹. However, it seems pretty obvious that using AI to perform a task means you don’t learn as much about performing that task. Some software engineers think this is a decisive argument against the use of AI. Their argument goes something like this:

Using AI means you don’t learn as much from your work

AI users thus become less effective engineers over time, as their technical skills atrophy

Therefore we should­n’t use AI in our work

I don’t nec­es­sar­ily agree with (2). On the one hand, mov­ing from as­sem­bly lan­guage to C made pro­gram­mers less ef­fec­tive in some ways and more ef­fec­tive in oth­ers. On the other hand, the tran­si­tion from writ­ing code by hand to us­ing AI is ar­guably a big­ger shift, so who knows? But it does­n’t mat­ter. Even if we grant that (2) is cor­rect, this is still a bad ar­gu­ment.

Until around 2024, the best way to learn how to do soft­ware en­gi­neer­ing was just do­ing soft­ware en­gi­neer­ing. That was re­ally lucky for us! It meant that we could par­lay a cod­ing hobby into a lu­cra­tive ca­reer, and that the peo­ple who re­ally liked the work would just get bet­ter and bet­ter over time. However, that was never an im­mutable fact of what soft­ware en­gi­neer­ing is. It was just a for­tu­nate co­in­ci­dence.

It would re­ally suck for soft­ware en­gi­neers if us­ing AI made us worse at our jobs in the long term (or even at gen­eral rea­son­ing, though I still don’t be­lieve that’s true). But we might still be obliged to use it, if it pro­vided enough short-term ben­e­fits, for the same rea­son that con­struc­tion work­ers are obliged to lift heavy ob­jects: be­cause that’s what we’re be­ing paid to do.

If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”².

If AI does turn out to make you dumber, why can’t we just keep writing code by hand? You can! You just might not be able to earn a salary doing so, for the same reason that there aren’t many jobs out there for carpenters who refuse to use power tools. If the models are good enough, you will simply get outcompeted by engineers willing to trade their long-term cognitive ability for a short-term lucrative career³.

I hope that this is­n’t true. It would be re­ally un­for­tu­nate for soft­ware en­gi­neers. But it would be even more un­for­tu­nate if it were true and we re­fused to ac­knowl­edge it.

The ca­reer of a pro ath­lete has a max­i­mum lifes­pan of around fif­teen years. You have the op­por­tu­nity to make a lot of money un­til around your mid-thir­ties, at which point your body just can’t keep up with it. A com­mon tragic fig­ure to­day is the pro­fes­sional ath­lete who be­lieves the show will go on for­ever and does­n’t pre­pare for the day they can’t do it any­more. We may be in the first gen­er­a­tion of soft­ware en­gi­neers in the same po­si­tion. If so, it’s prob­a­bly a good idea to plan ac­cord­ingly.

edit: this post got a lot of com­ments on Hacker News. I was a bit dis­ap­pointed to see many peo­ple (even Simon Willison, whose blog I read) re­spond with vari­a­tions on the point that en­gi­neers can use AI to do more en­gi­neer­ing work, even if they’re no longer writ­ing code by hand. First, once you stop writ­ing code by hand, I worry that your abil­ity to un­der­stand the code­base in gen­eral will at­ro­phy; sec­ond, the rate of change is so high that no­body knows what will hap­pen in a decade or two. I should have em­pha­sized these points more.

If you’re thinking “wait, there’s research on this”, you can likely read my take on the paper you’re thinking of here, here or here. ↩

Of course, con­struc­tion work­ers do have lay­ers of tech­niques for avoid­ing lift­ing heavy ob­jects when pos­si­ble (cranes, dol­lies, fork­lifts, and so on). There’s a nat­ural anal­ogy here to a set of tech­niques for stay­ing men­tally en­gaged that soft­ware en­gi­neers are yet to dis­cover. ↩

In the­ory la­bor unions could slow this process down (and have forced em­ploy­ers to slow down this race-to-the-bot­tom in other in­dus­tries). But I’m pes­simistic about tech la­bor unions for all the usual rea­sons: the job is too highly-paid, you can work (and thus scab) from any­where on the planet, and so on. ↩

If you liked this post, con­sider sub­scrib­ing to email up­dates about my new posts, or shar­ing it on Hacker News.

Here’s a pre­view of a re­lated post that shares tags with this one.

Luddites and burn­ing down AI dat­a­cen­ters

Is it time to start burn­ing down dat­a­cen­ters?

Some peo­ple think so. An Indianapolis city coun­cil mem­ber had his house re­cently shot up for sup­port­ing dat­a­cen­ters, and Sam Altman’s home was fire­bombed (and then shot) shortly af­ter­wards. People from all sides of the ar­gu­ment are sound­ing the alarm about im­mi­nent vi­o­lence.

The obvious historical comparison is Luddism, the 19th-century phenomenon where English weavers and knitters destroyed the machines that were automating their work, and (in some cases) killed the machines’ owners. Anti-AI people are reclaiming the term to describe themselves, and many of the leading lights of the anti-AI movement (like Brian Merchant or Gavin Mueller) have written books arguing more or less that the Luddites were right, and we ought to follow their example in order to resist AI automation. Continue reading…

The Top Cop: Driver accused of DUI tracks missing laptop to Illinois State trooper's house

abc7chicago.com

CHICAGO (WLS) — This is the sec­ond part to a two-part se­ries. To read part one, click here.

A restau­rant ex­ec­u­tive ar­rested by an Illinois State Police trooper on DUI charges told the ABC 7 I-Team when he dis­cov­ered his MacBook was miss­ing from his car, he tracked it to the house of the trooper who ar­rested him.

What fol­lowed was cap­tured in a 911 call record­ing, a cell phone video, and an in­ter­nal in­ves­ti­ga­tion by the Illinois State Police, all ob­tained by the I-Team.

For three years, ISP Trooper Kevin Bradley was named by a nonprofit as the state’s “Top Cop”: a moniker given to the member of law enforcement who has made the most DUI arrests statewide.

Over that time, hun­dreds of dri­vers were ar­rested by Bradley for DUI in Cook and DuPage Counties.

But as the I-Team previously reported, public defenders and civil rights attorneys have accused Bradley of “fabricating evidence” and violating drivers’ civil rights, arguing many of those criminal cases were later dismissed prior to adjudication, or they resulted in drivers receiving not-guilty verdicts.

Local restau­rant ex­ec­u­tive Sherard Holland is one of those dri­vers.

“It was just a regular, normal day,” Holland explained. “Doing a favor for a friend of mine, traveling to pick them up and we were going home when I unfortunately made contact with that trooper.”

As the sun rose in the early morn­ing hours of June 2, 2024, Holland says he was con­fused when he saw the po­lice lights in his rearview mir­ror.

The son of a Chicago po­lice of­fi­cer, Holland says he was the des­ig­nated dri­ver for a co-worker af­ter work­ing overnight when he was pulled over by Trooper Bradley.

The I-Team ob­tained a copy of the troop­er’s traf­fic stop re­port and dash­board cam­era footage from that morn­ing.

A dri­ver ac­cused of DUI tracked his miss­ing lap­top to Illinois State Police Trooper Kevin Bradley’s house. The ABC7 I-Team is in­ves­ti­gat­ing.

“I pulled you over because you swerved,” Bradley can be heard saying to Holland at his driver’s side window.

After speaking with Holland, Trooper Bradley wrote in his report he could smell “the odor of an alcoholic beverage emitting from the vehicle.”

Bradley then asked Holland to “relocate” by driving down the road to a gas station in Worth Township, passing through several intersections and traffic lights.

At the gas sta­tion, Bradley ac­cused Holland of dri­ving un­der the in­flu­ence. When asked if he would sub­mit to field so­bri­ety tests, Holland calmly re­fused.

“No, thank you,” Holland told Bradley in the footage.

Bradley took Holland into cus­tody for felony DUI and in the dash­board cam­era footage, while Holland was in the state troop­er’s ve­hi­cle, Bradley can be seen search­ing through Holland’s ve­hi­cle be­fore his car was towed.

Missing MacBook

Later that day, when Holland needed his MacBook, he said he no­ticed it was­n’t listed on his in­ven­tory slip, so he as­sumed it must be with his car at the tow yard.

“I decided to ping my MacBook,” Holland told the I-Team, “and it pinged to an address.”

Using Apple’s “Find My” feature, Holland says his MacBook was not pinging at the tow yard with his car; rather, it was showing up at a Tinley Park home.

“I was fearful of going to retrieve my item,” Holland explained. “And I just had to motivate myself and say, ‘Hey, I’m going to go get it.’”

Holland says he found his courage, and drove over to the ad­dress, where af­ter knock­ing on the front door, he was even­tu­ally met by Trooper Bradley.

“He denied it and began to argue with me,” Holland said. “And I decided to call the police.”

After fil­ing a Freedom of Information Act re­quest with the Tinley Park Police Department, the I-Team ob­tained a record­ing of that 911 call.

In it, Holland can be heard speak­ing to a dis­patcher.

“I have an iMac that was stolen last night,” Holland told the dispatcher, referring to his MacBook.

“And what did they say when you confronted them then?” the dispatcher asked Holland.

Holland told the I-Team while he was on the phone with the dispatcher, Bradley told him to end the call. In the 911 recording, you can hear Holland say, “How else would I have your address? OK, I’ll hang up. I’ll hang up,” and then the call ends.

Holland and his at­tor­ney pro­vided the I-Team a cell-phone video clip that picks up shortly af­ter that 911 call ended, recorded by Holland’s wife who was wait­ing in his car out­side of the Tinley Park home.

In the video, Trooper Bradley is seen walk­ing out and hand­ing Holland his MacBook. Bradley then opens his garage door to show Holland his State Police ve­hi­cle parked in­side, telling Holland the lap­top was ac­ci­den­tally left in his state ve­hi­cle.

It was a very trau­matic in­ter­ac­tion with him,” Holland told the I-Team.

The Tinley Park Police Department dis­patcher even­tu­ally called Holland back.

“Is everything okay?” the dispatcher can be heard asking. “You’re fine?”

“Yeah, he’s a police officer and for some reason he took my MacBook home with him,” Holland replied.

“Oh, gotcha,” the dispatcher said. “You’ll probably want to speak to a supervisor over there.”

That’s what Holland did: He filed a com­plaint with the Illinois State Police, and the I-Team ob­tained a copy of the state’s in­ter­nal in­ves­ti­ga­tion into the com­plaint.

According to the find­ings, in­ves­ti­ga­tors de­ter­mined Trooper Bradley never re­ported the in­ter­ac­tion with Holland over his MacBook to Bradley’s su­per­vi­sors, nor did he make note of it in his re­ports.

In a transcript of an interview Bradley did with internal investigators as part of the investigation, when Bradley was asked why he took the laptop, he said he was doing Holland “a favor.”

“I kept it for his courtesy, like I said with his phone, key and wallet,” Bradley told investigators. “It’s my mistake. I forgot to give him his stuff back and he tracked it.”

The re­port states in­ves­ti­ga­tors de­ter­mined Bradley had vi­o­lated State Police poli­cies, and he was sus­pended for one day.

When the I-Team shared this with Holland, he said, “I feel disheartened by that. I feel let down by, again, people who should be entrusted and should be held accountable for their actions.”

“I’m outraged,” said Brandon Brown, Holland’s attorney. “Outraged as a fellow Cook County resident to think that something like that could happen.”

“What if he didn’t call 911?” Brown asked. “What if he didn’t have the Find My iPhone application?”

After months of court hearings for the DUI charges, a bench trial found Holland “not guilty.”

Holland is now su­ing the state and Trooper Bradley al­leg­ing his civil rights were vi­o­lated.

Bradley did not respond to the I-Team’s repeated requests for comment. In court filings, attorneys representing the state and Bradley have argued Holland’s lawsuit should be dismissed as the trooper has “sovereign immunity” as a member of law enforcement, and that it was a “lawful” traffic stop.

‘How many other people?’

An I-Team analy­sis of Cook County court records found out of 319 DUI pros­e­cu­tions stem­ming from ar­rests made by Trooper Bradley since 2023, 174 cases were dis­missed be­fore ad­ju­di­ca­tion, or the dri­vers were found not guilty, like in Holland’s case.

According to court records, 105 dri­vers were found guilty, with 96 of those dri­vers ac­cept­ing plea deals for lesser charges.

As of last month, 40 cases are still pend­ing.

Brown says there are crit­i­cal ques­tions that need to be an­swered about Bradley’s ar­rests.

“How many other people like Sherard, who maybe couldn’t afford a lawyer… Maybe they pled guilty or maybe they were found guilty and perhaps their conduct didn’t actually match what was alleged,” Brown said.

A spokesperson for the Illinois State Police said, “ISP has strong oversight of all officers and addresses any discrepancies or complaints when they are identified.”

“It would be irresponsible not to arrest and remove a person off the roads who is suspected of impaired driving,” a spokesperson for ISP told the I-Team. “Securing convictions for DUIs in a court of law is a separate process from the role troopers and other law enforcement officers play.”

The ISP statement reads, “A trooper makes an arrest when there is probable cause… Ultimately, it is up to the prosecutor to determine whether they can meet the threshold of proof beyond a reasonable doubt.”

For dri­vers like Holland, Brown says ar­rests for DUI are se­ri­ous and can have a last­ing im­pact.

“It can have a life-changing experience,” Brown said. “It is devastating. And if you’re innocent, that’s something that stays with you.”

Court over­time

For every DUI ar­rest made, state po­lice troop­ers must ap­pear in court, and in ev­i­dence mo­tions filed with the court, at­tor­neys have said this has led to a stag­ger­ing amount of over­time pay for Trooper Bradley.

State records show that in 2024, Bradley nearly tripled his salary, earning close to $250,000 in one year.

That’s more than the salary of the Illinois State Police di­rec­tor.

A spokesper­son for ISP told the I-Team Bradley is cur­rently on med­ical leave af­ter suf­fer­ing in­juries from an ac­ci­dent while he was work­ing.

Copyright © 2026 WLS-TV. All Rights Reserved.

The cuda-oxide Book — cuda-oxide

nvlabs.github.io

cuda-ox­ide is an ex­per­i­men­tal Rust-to-CUDA com­piler that lets you write (SIMT) GPU ker­nels in safe(ish), id­iomatic Rust. It com­piles stan­dard Rust code di­rectly to PTX — no DSLs, no for­eign lan­guage bind­ings, just Rust.

Note

This book as­sumes fa­mil­iar­ity with the Rust pro­gram­ming lan­guage, in­clud­ing own­er­ship, traits, and gener­ics. Later chap­ters on async GPU pro­gram­ming also as­sume work­ing knowl­edge of async/.​await and run­times like tokio.

For a re­fresher, see The Rust Programming Language, Rust by Example, or the Async Book.

Project Status#

The v0.1.0 re­lease is an early-stage al­pha: ex­pect bugs, in­com­plete fea­tures, and API break­age as we work to im­prove it. We hope you’ll try it and help shape its di­rec­tion by shar­ing feed­back on your ex­pe­ri­ence.

🚀 Quick start#

```rust
use cuda_device::{cuda_module, kernel, thread, DisjointSlice};
use cuda_core::{CudaContext, DeviceBuffer, LaunchConfig};

#[cuda_module]
mod kernels {
    use super::*;

    #[kernel]
    fn vecadd(a: &[f32], b: &[f32], mut c: DisjointSlice<f32>) {
        let idx = thread::index_1d();
        let i = idx.get();
        if let Some(c_elem) = c.get_mut(idx) {
            *c_elem = a[i] + b[i];
        }
    }
}

fn main() {
    let ctx = CudaContext::new(0).unwrap();
    let stream = ctx.default_stream();
    let module = kernels::load(&ctx).unwrap();

    let a = DeviceBuffer::from_host(&stream, &[1.0f32; 1024]).unwrap();
    let b = DeviceBuffer::from_host(&stream, &[2.0f32; 1024]).unwrap();
    let mut c = DeviceBuffer::<f32>::zeroed(&stream, 1024).unwrap();

    module
        .vecadd(&stream, LaunchConfig::for_num_elems(1024), &a, &b, &mut c)
        .unwrap();

    let result = c.to_host_vec(&stream).unwrap();
    assert_eq!(result[0], 3.0);
}
```

After installing the prerequisites, build and run with cargo oxide run vecadd.

Note

#[cuda_module] em­beds the gen­er­ated de­vice ar­ti­fact into the host bi­nary and gen­er­ates a typed ker­nels::load func­tion plus one launch method per ker­nel. The lower-level load­_k­er­nel_­mod­ule and cu­d­a_launch! APIs re­main avail­able when you need to load a spe­cific side­car ar­ti­fact or build cus­tom launch code.

Why cuda-ox­ide?#

🦀 Rust on the GPU

Write GPU ker­nels with Rust’s type sys­tem and own­er­ship model. Safety is a first-class goal, but GPUs have sub­tleties — read about the safety model.

💎 A SIMT Compiler

Not a DSL. A cus­tom rustc code­gen back­end that com­piles pure Rust to PTX.

⚡ Async Execution

Compose GPU work as lazy DeviceOperation graphs. Schedule across stream pools. Await re­sults with .await.

UCLA discovers first stroke rehabilitation drug to repair brain damage

stemcell.ucla.edu

A new study by UCLA Health has dis­cov­ered what re­searchers say is the first drug to fully re­pro­duce the ef­fects of phys­i­cal stroke re­ha­bil­i­ta­tion in model mice.

The find­ings, pub­lished in Nature Communications, tested two can­di­date drugs de­rived from their stud­ies on the mech­a­nism of the brain ef­fects of re­ha­bil­i­ta­tion, one of which re­sulted in sig­nif­i­cant re­cov­ery in move­ment con­trol af­ter stroke in mice.

Stroke is the leading cause of adult disability because most patients do not fully recover from its effects. There are no drugs in the field of stroke recovery, so stroke patients must undergo physical rehabilitation, which has been shown to be only modestly effective.

“The goal is to have a medicine that stroke patients can take that produces the effects of rehabilitation,” said Dr. S. Thomas Carmichael, the study’s lead author and professor and chair of UCLA Neurology. “Rehabilitation after stroke is limited in its actual effects because most patients cannot sustain the rehab intensity needed for stroke recovery.

“Further, stroke recovery is not like most other fields of medicine, where drugs are available that treat the disease — such as cardiology, infectious disease or cancer,” Carmichael said. “Rehabilitation is a physical medicine approach that has been around for decades; we need to move rehabilitation into an era of molecular medicine.”

In the study, Carmichael and his team sought to de­ter­mine how phys­i­cal re­ha­bil­i­ta­tion im­proved brain func­tion af­ter a stroke and whether they could gen­er­ate a drug that could pro­duce these same ef­fects.

Working in lab­o­ra­tory mouse mod­els of stroke and with stroke pa­tients, the UCLA re­searchers iden­ti­fied a loss of brain con­nec­tions that stroke pro­duces that are re­mote from the site of the stroke dam­age. Brain cells lo­cated at a dis­tance from the stroke site get dis­con­nected from other neu­rons. As a re­sult, brain net­works do not fire to­gether for things like move­ment and gait.

The UCLA team found that some of the con­nec­tions that are lost af­ter stroke oc­cur in a cell called a par­val­bu­min neu­ron. This type of neu­ron helps gen­er­ate a brain rhythm, termed a gamma os­cil­la­tion, which links neu­rons to­gether so that they form co­or­di­nated net­works to pro­duce a be­hav­ior, such as move­ment. Stroke causes the brain to lose gamma os­cil­la­tions. Successful phys­i­cal re­ha­bil­i­ta­tion in both lab­o­ra­tory mice and hu­mans brought gamma os­cil­la­tions back into the brain and, in the mouse model, re­paired the lost con­nec­tions of par­val­bu­min neu­rons.

Carmichael and the team then iden­ti­fied two can­di­date drugs that might pro­duce gamma os­cil­la­tions af­ter stroke. These drugs specif­i­cally work to ex­cite par­val­bu­min neu­rons.

The re­searchers found one of the drugs, DDL-920, de­vel­oped in the UCLA lab of Varghese John, who coau­thored the study, pro­duced sig­nif­i­cant re­cov­ery in move­ment con­trol in mice.

This study has two major areas of impact: First, it identifies a brain substrate and circuitry that underlies the effect of rehabilitation in the brain. Second, the paper identifies a unique drug target in this rehab brain circuitry to promote recovery by mimicking the main effect of physical rehab.

Further stud­ies are needed to un­der­stand the safety and ef­fi­cacy of DDL-920 be­fore it could be con­sid­ered for hu­man tri­als.

GitHub - davmlaw/they_live_adblocker: Replace Ads with They Live style slogans

github.com

They Live Adblocker

A fork of uBlock Origin Lite that, in­stead of hid­ing cos­met­i­cally-blocked ads, re­places them with white tiles bear­ing slo­gans from John Carpenter’s 1988 film They Live: OBEY, CONSUME, WATCH TV, SLEEP, SUBMIT, CONFORM, STAY ASLEEP, BUY, WORK, NO INDEPENDENT THOUGHT, DO NOT QUESTION AUTHORITY.

Each blocked ad gets a sin­gle phrase, picked at ran­dom from the list.

The idea is from a blog post I wrote in 2015 (and never got around to build­ing): They Live ad­block mode.

Screenshot

Install

Download the lat­est uBO­Lite_theylive.chromium.zip from the Releases page, ex­tract it, then in Chromium / Chrome / Brave / Edge:

Open chrome://​ex­ten­sions

Toggle Developer mode on (top-right)

Click Load un­packed and se­lect the ex­tracted folder

Keep the folder around — the ex­ten­sion is loaded from that path.

Make it ac­tu­ally re­place ads

By default uBO Lite uses Basic filtering mode, which blocks ads at the network layer. Network-blocked ads never produce a DOM element, so there’s nothing to “they-live-ify” — you just get empty space, as with normal uBO Lite. To see the OBEY tiles:

Click the uBO Lite tool­bar icon → cog (⚙) → Dashboard.

Set the fil­ter­ing mode for the sites you care about to Optimal or Complete.

Reload.

Building from source

Requires Node 22.

```shell
git clone --recursive https://github.com/davmlaw/they_live_adblocker
cd they_live_adblocker/uBlock
nvm use 22                    # or otherwise ensure Node >= 22
tools/make-mv3.sh chromium    # or: firefox | edge | safari
```

The pack­aged ex­ten­sion lands in uBlock/​dist/​build/​uBO­Lite.chromium/ — load it as an un­packed ex­ten­sion.

How it works

uBO Lite’s cos­metic fil­ter­ing nor­mally in­jects CSS like se­lec­tor { dis­play: none !important } to hide matched ad el­e­ments. This fork patches those in­jec­tion sites to in­stead ap­ply a white-box mask with a ::after over­lay whose con­tent is read from a data-ubol-they-live at­tribute, then walks the DOM (with a MutationObserver for late-loaded ads) to tag each matched el­e­ment with a ran­dom phrase from the list.
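
The replacement idea can be sketched in plain JavaScript. This is an illustrative sketch, not the actual uBO Lite source: the data-ubol-they-live attribute name comes from the description above, but the CSS details and the theyLiveify helper are assumptions made for the example.

```javascript
// Slogans from They Live, as listed in the README.
const PHRASES = [
  'OBEY', 'CONSUME', 'WATCH TV', 'SLEEP', 'SUBMIT', 'CONFORM',
  'STAY ASLEEP', 'BUY', 'WORK', 'NO INDEPENDENT THOUGHT',
  'DO NOT QUESTION AUTHORITY',
];

// CSS injected once per page: instead of `display: none !important`,
// a matched element becomes a white tile whose ::after overlay renders
// the phrase stored in its data-ubol-they-live attribute.
const THEY_LIVE_CSS = `
[data-ubol-they-live] {
  position: relative;
  background: #fff !important;
}
[data-ubol-they-live]::after {
  content: attr(data-ubol-they-live);
  position: absolute;
  inset: 0;
  display: flex;
  align-items: center;
  justify-content: center;
  font: bold 20px sans-serif;
  color: #000;
}
`;

// Tag a matched ad element with a random phrase; the CSS above does the rest.
// In the real extension, a MutationObserver would call this for late-loaded ads.
function theyLiveify(element) {
  const phrase = PHRASES[Math.floor(Math.random() * PHRASES.length)];
  element.setAttribute('data-ubol-they-live', phrase);
  return phrase;
}
```

Driving the overlay from an attribute via attr() keeps the per-element work down to a single setAttribute call, which is cheap enough to run inside a MutationObserver callback.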

Touched files in the davm­law/​uBlock sub­mod­ule:

plat­form/​mv3/​ex­ten­sion/​js/​script­ing/​they-live.js (new) — phrase list, CSS gen­er­a­tor, DOM tag­ging

plat­form/​mv3/​ex­ten­sion/​js/​script­ing/​css-{spe­cific,generic,pro­ce­dural-api}.js — call sites

plat­form/​mv3/​ex­ten­sion/​js/​script­ing-man­ager.js — reg­is­ters they-live.js ahead of con­sumers

Caveats

Personal hobby fork; not an of­fi­cial uBlock Origin prod­uct. Don’t file uBO is­sues against this.

Forcing pre­vi­ously-hid­den el­e­ments vis­i­ble can oc­ca­sion­ally shift page lay­out where the site’s CSS as­sumed the ad slot col­lapsed.

Custom user-de­fined cos­metic fil­ters still hide nor­mally (no OBEY treat­ment).

Network-blocked ads (most of uBO Lite’s block­ing) don’t get re­placed — only cos­metic-fil­tered ones do.

License

GPL-3.0, same as up­stream uBlock Origin / uBO Lite.

The death of the last maverick tech company.

slate.com

Nullsoft, 1997 – 2004

AOL kills off the last mav­er­ick tech com­pany.

Nov 12, 2004, 6:04 PM

When America Online purged its tiny Nullsoft branch of all but three employees this week, it lost arguably the most prolific division of the company. Not that you could really blame AOL for the mass layoffs — all of Nullsoft’s projects were spitballs tossed at the honchos upstairs. Before the AOL days, Nullsoft founder Justin Frankel and his team of whiz kids practically invented the MP3 craze when they rolled out their Winamp player and Shoutcast server. When AOL paid millions to buy the then-20-year-old Frankel’s services in 1999, he used his new gig to become what Rolling Stone called “the Net’s No. 1 punk.”

From his AOL of­fice, Frankel posted ap­pli­ca­tions (without his cor­po­rate par­en­t’s per­mis­sion) that made screw­ing the Recording Industry Association of America eas­ier than ever, in­clud­ing the peer-to-peer pro­gram Gnutella and the covert file-shar­ing sys­tem WASTE. Frankel quit at the be­gin­ning of this year, and Nullsoft’s shut­down nails the cof­fin lid shut. There’ll be no more cool pi­rate tools un­der­writ­ten by America Online.

What kind of snot-nosed brat takes mil­lions from AOL and then pub­lishes soft­ware per­fect for rip­ping off Time Warner’s en­tire cat­a­log? Frankel, a grunge-dress­ing slacker from Sedona, Ariz., was a teenage col­lege dropout in 1997 when he wrote Winamp, the first pro­gram that made play­ing MP3s on a PC point-and-click sim­ple. He’s not the world’s great­est pro­gram­mer, but Frankel has a knack for find­ing sim­ple and clever so­lu­tions to huge en­gi­neer­ing prob­lems. While he’s got a prankster’s streak—one of his high-school hacks was a key­stroke log­ger for the teach­ers’ com­put­ers—Frankel did­n’t write Winamp so he could steal mu­sic. All he wanted was a bet­ter way to lis­ten to mu­sic on his PC. Apparently, so did sev­eral mil­lion other peo­ple.

As the share­ware checks for Winamp piled up, Frankel kept hack­ing. While big soft­ware com­pa­nies ele­phant-walked in cir­cles try­ing to de­velop on­line mu­sic dis­tri­b­u­tion sys­tems, he cre­ated Shoutcast, an MP3 server that streams mu­sic over the Net. Winamp and Shoutcast be­came the de­fault way to play, draw­ing tens of mil­lions of fans in less than two years. That’s when AOL re­warded Frankel by buy­ing Nullsoft for $100 mil­lion in 1999.

Lots of geeks who could­n’t make it through en­gi­neer­ing school be­came mul­ti­mil­lion­aires in the boom. But Frankel re­mained an un­re­con­structed kid in a field of hack­ers-turned-en­tre­pre­neurs. Like Kurt Cobain, he used his money to chal­lenge the peo­ple who gave it to him. As AOL was merg­ing with Time Warner in March 2000, Frankel pub­lished Gnutella, a peer-to-peer file-shar­ing sys­tem that ad­dressed the fa­tal flaw in Shawn Fanning’s Napster. Fanning re­lied on a bank of cen­tral servers that would even­tu­ally be shut down by record in­dus­try lawyers. Gnutella, by con­trast, was com­pletely de­cen­tral­ized. The only way to shut it down would be to go af­ter every sin­gle user.

When Frankel posted Gnutella on Nullsoft’s site it came with a cheeky, half-apologetic note: “See? AOL can bring you good things!” AOL was not amused; they had him remove the program immediately and disclaimed it as an unauthorized side project. But Gnutella had already been spread around the Net and reverse-engineered by eager programmers who set to work improving Frankel’s gift. Years after Napster’s servers went dark, Gnutella traffic is still growing.

For most peo­ple, flip­ping off the man once would be enough, but Frankel kept at it for years—he even posted a tool that re­moved the ads from AOL Instant Messenger. Finally, in mid-2003, as the RIAA was prepar­ing law­suits against ran­dom Gnutella users, Frankel con­cocted a coun­ter­strike: WASTE, a pri­vate file-shar­ing sys­tem whose traf­fic is en­crypted from pry­ing eyes and whose net­works are in­vi­ta­tion only. (The name comes from the un­der­ground postal sys­tem in Thomas Pynchon’s The Crying of Lot 49.) If snoops can’t see what WASTE users are shar­ing and RIAA stoolies can’t hop onto the net­work to lure copy­right vi­o­la­tors, there’s no way to gather ev­i­dence of copy­right in­fringe­ment short of raid­ing homes and seiz­ing com­put­ers.

Frankel told Rolling Stone that he tried to persuade AOL to release WASTE themselves as a way to revive their fast-falling customer base. When they rebuffed him, he released the program on the fourth anniversary of AOL’s acquisition of Nullsoft — May 28, 2003 — as a means of confronting the company. Again, AOL took the program down and disowned it. Not long after spilling his guts to Rolling Stone, Frankel resigned. “For me, coding is a form of self-expression,” he explained in a blog post that he would later remove. “The company controls the most effective means of self-expression I have. This is unacceptable to me as an individual, therefore I must leave.”

With Nullsoft gone and Frankel spend­ing his time build­ing a spe­cial-ef­fects com­puter for his elec­tric gui­tar, the old Winamp/Gnutella gang prob­a­bly won’t get back to­gether for one more hit. Conventional wis­dom says Frankel is more likely to join the mil­lion­aire has-beens who dot the hills in my San Francisco neigh­bor­hood or be­come a tro­phy hire at a tech startup, like con­tem­po­raries Fanning, Marc Andreessen, and Linus Torvalds.

But I wouldn’t count him out yet. Most dot-com heroes come across as self-promoting one-hit wonders, but Frankel does his best work when you try to shut him up. It’s happening again: In August, federal agents raided five homes and an ISP where they had managed to track down WASTE-like private networks. Having successively hacked his way around the limitations of CDs, MP3s, Napster, and the RIAA, Frankel may next try to find a way to thwart the FBI. As he’s proven over and over, he doesn’t need AOL’s backing to do it.
