10 interesting stories served every morning and every evening.

crawshaw - 2026-04-22

crawshaw.io

I am building a cloud

2026-04-22

Today is fundraising announcement day. As is the nature of writing for a larger audience, it is a formal, safe announcement. As it should be. Writing must necessarily become impersonal at scale. But I would like to write something personal about why I am doing this. What is the goal of building exe.dev? I am already the co-founder of one startup that is doing very well, selling a product I love as much as when I first helped design and build it.

What could possess me to go through all the pain of starting another company? Some fellow founders have looked at me with incredulity and shock that I would throw myself back into the frying pan. (Worse yet, experience tells me that most of the pain is still in my future.) It has been a genuinely hard question to answer, because I keep searching for a "big" reason, a principle or a social need, a reason or motivation beyond challenge. But I believe the truth is far simpler, and to some, I am sure, almost equally hard to believe.

I like computers.

In some tech circles, that is an unusual statement. ("In this house, we curse computers!") I get it: computers can be really frustrating. But I like computers. I always have. It is really fun getting computers to do things. Painful, sure, but the results are worth it. Small microcontrollers are fun, desktops are fun, phones are fun, and servers are fun, whether racked in your basement or in a data center across the world. I like them all.

So it is no small thing for me when I admit: I do not like the cloud today.

I want to. Computers are great, whether it is a BSD installed directly on a PC or a Linux VM. I can enjoy Windows, BeOS, Novell NetWare; I even installed OS/2 Warp back in the day and had a great time with it. Linux is particularly powerful today and a source of endless potential. And for all the pages of products, the cloud is just Linux VMs. Better, they are API-driven Linux VMs. I should be in heaven.

But every cloud product I try is wrong. Some are better than others, but I am constantly constrained by the choices cloud vendors make, in ways that make it hard to get computers to do the things I want them to do.

These issues go beyond UX or bad API design. Some of the fundamental building blocks of today's clouds are the wrong shape. VMs are the wrong shape because they are tied to CPU/memory resources. I want to buy some CPUs, memory, and disk, and then run VMs on that hardware. A Linux VM is a process running in another Linux's cgroup; I should be able to run as many as I like on the computer I have. The only way to do that easily on today's clouds is to take isolation into my own hands, with gVisor or nested virtualization on a single cloud VM, paying the nesting performance penalty, and then I am left with the job of running and managing, at a minimum, a reverse proxy onto my VMs. All because the cloud abstraction is the wrong shape.

Clouds have tried to solve this with "PaaS" systems: abstractions that are inherently less powerful than a computer, bespoke to a particular provider. You learn a new way to write software for each compute vendor, only to find halfway into your project that something easy on a normal computer is nearly impossible, because of some obscure limit of the platform buried so deep you cannot find it until you are deeply committed. Time and again I have said "this is the one," only to be betrayed by some half-assed, half-implemented, or half-thought-through abstraction. No thank you.

Consider disk. Cloud providers want you to use remote block devices (or something even more limited and slow, like S3). When remote block devices were introduced they made sense, because computers used hard drives. Remote does not hurt sequential read/write performance, if the buffering implementation is good. Random seeks on a hard drive take 10ms, so 1ms RTT for the Ethernet connection to remote storage is a fine price to pay. It is a good product for hard drives, and it makes the cloud vendor's life a lot easier because it removes an entire dimension from their standard instance types.

But then we all switched to SSD. Seek time went from 10 milliseconds to 20 microseconds. Heroic efforts have cut the network RTT a bit for really good remote block systems, but the IOPS overhead of remote systems went from 10% with hard drives to more than 10x with SSDs.

It is a lot of work to configure an EC2 VM to have 200k IOPS, and you will pay $10k/month for the privilege. My MacBook has 500k IOPS. Why are we hobbling our cloud infrastructure with slow disk?
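The 10%-to-10x jump is worth making explicit. A back-of-the-envelope sketch, using the round numbers above (10 ms HDD seek, 20 µs SSD read, ~1 ms network RTT; all figures illustrative, for single-threaded latency-bound I/O):

```python
# Latency-bound IOPS: one op at a time, each costing the device latency,
# plus a ~1 ms network round trip when the block device is remote.

def iops(latency_s: float) -> float:
    """I/O operations per second if each op takes latency_s seconds."""
    return 1.0 / latency_s

RTT = 1e-3          # assumed network round trip to remote block storage
HDD_SEEK = 10e-3    # ~10 ms random seek
SSD_READ = 20e-6    # ~20 us random read

hdd_overhead = iops(HDD_SEEK) / iops(HDD_SEEK + RTT)   # ~1.1x: barely noticeable
ssd_overhead = iops(SSD_READ) / iops(SSD_READ + RTT)   # ~51x: the SSD is wasted

print(f"HDD: {iops(HDD_SEEK):,.0f} -> {iops(HDD_SEEK + RTT):,.0f} IOPS "
      f"({hdd_overhead:.1f}x slowdown)")
print(f"SSD: {iops(SSD_READ):,.0f} -> {iops(SSD_READ + RTT):,.0f} IOPS "
      f"({ssd_overhead:.0f}x slowdown)")
```

The same 1 ms that was a rounding error on spinning rust dominates completely once the device itself answers in microseconds.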

Then there is networking. Hyperscalers have great networks. They charge you the earth for them and make it miserable to do deals with other vendors. The standard price for a GB of egress from a cloud provider is 10x what you pay racking a server in a normal data center. At moderate volume the multiplier is even worse. Sure, if you spend $XXm/month with a cloud the prices get much better, but most of my projects want to spend $XX/month, without the little m. The fundamental technology here is fine, but this is where limits are placed on you to make sure whatever you build cannot be affordable.

Finally, clouds have painful APIs. This is where projects like K8S come in, papering over the pain so engineers suffer a bit less from using the cloud. But VMs are hard with Kubernetes, because the cloud makes you do it all yourself with lumpy nested virtualization. Disk is hard, because back when K8S was being designed Google didn't really even offer usable remote block devices, and even if you can find a common pattern among clouds today to paper over, it will be slow. Networking is hard, because if it were easy you would private-link in a few systems from a neighboring open DC and drop a zero from your cloud spend. It is tempting to dismiss Kubernetes as a scam, artificial make-work designed to avoid doing real product work, but the truth is worse: it is a product attempting to solve an impossible problem, making clouds portable and usable. It cannot be done.

You cannot solve the fundamental problems with cloud abstractions by building new abstractions on top. Making Kubernetes good is inherently impossible, a project in putting (admittedly high quality) lipstick on a pig.

We have been muddling along with these miserable clouds for 15 years now. We make do, in the way we do with all the unpleasant parts of our software stack, holding our nose whenever we have to deal with them and trying to minimize how often that happens.

This, however, is the moment to fix it.

This is the moment because something has changed: we have agents now. (Indeed, my co-founder Josh and I started tinkering because we wanted to use LLMs in programming. It turns out what needs building for LLMs are better traditional abstractions.) Agents, by making it easier to write code, mean there will be a lot more software. Economists would call this an instance of Jevons paradox. Each of us will write more programs, for fun and for work. We need private places to run them, easy sharing with friends and colleagues, minimal overhead.

With more total software in our lives, the cloud, which was an annoying pain, becomes a much bigger pain. We need a lot more compute, and we need it to be easier to manage. Agents help to some degree. If you trust them with your credentials they will do a great job driving the AWS API for you (though occasionally they will delete your production DB). But agents struggle with the fundamental limits of the abstractions as much as we do. You need more tokens than you should and you get a worse result than you should. Every percent of context window the agent spends thinking about how to contort classic clouds into working is context window it is not using to solve your problem.

So we are going to fix it. What we have launched on exe.dev today addresses the VM resource isolation problem: instead of provisioning individual VMs, you get CPU and memory and run the VMs you want. We took care of a TLS proxy and an authentication proxy, because I do not actually want my fresh VMs dumped directly on the internet. Your disk is local NVMe, with blocks replicated off machine asynchronously. We have regions around the world for your machines, because you want your machines close. Your machines are behind an anycast network to give all your global users a low latency entrypoint to your product (and so we can build some new exciting things soon).

There is a lot more to build here, from obvious things like static IPs to UX challenges like how to give you access to our automatic historical disk snapshots. Those will get built. And at the same time we are going right back to the beginning, racking computers in data centers, thinking through every layer of the software stack, exploring all the options for how we wire up networks.

So, I am building a cloud. One I actually want to use. I hope it is useful to you.

Apple fixes bug that cops used to extract deleted chat messages from iPhones

techcrunch.com

12:13 PM PDT · April 22, 2026

Apple released a software update on Wednesday for iPhones and iPads fixing a bug that allowed law enforcement to extract messages that had been deleted or had disappeared automatically from messaging apps. This was because notifications that displayed the messages' content were also cached on the device for up to a month.

In a security notice on its website, Apple said that the bug meant "notifications marked for deletion could be unexpectedly retained on the device."

This is a clear reference to an issue revealed by 404 Media earlier this month. The independent news outlet reported that the FBI had been able to extract deleted Signal messages from someone's iPhone using forensic tools, because the content of the messages had been displayed in a notification and then stored inside a phone's database, even after the messages were deleted inside Signal.

After the news, Signal president Meredith Whittaker said the messaging app maker asked Apple to address the issue. "Notifications for deleted messages shouldn't remain in any OS notification database," Whittaker wrote in a post on Bluesky.

It's unclear why the notifications' content was logged to begin with, but today's fix suggests it was a bug.

Apple did not immediately respond to a request for comment asking why the notifications were being retained. The company also backported the fix to iPhone and iPad owners running the older iOS 18 software.

Privacy activists expressed alarm when they learned that the FBI had found a way around a security feature that is used daily by at-risk users. Signal, like other messaging apps such as WhatsApp, allows users to set up a timer that instructs the app to automatically delete messages after a set amount of time. This feature can be helpful for anyone who wants to keep their conversations secret in the event that authorities seize their devices.

Lorenzo Franceschi-Bicchierai is a Senior Writer at TechCrunch, where he covers hacking, cybersecurity, surveillance, and privacy.

Bitwarden CLI Compromised in Ongoing Checkmarx Supply Chain Campaign

socket.dev

Socket researchers discovered that the Bitwarden CLI was compromised as part of the ongoing Checkmarx supply chain campaign. The open source password manager serves more than 10 million users and over 50,000 businesses, and ranks among the top three password managers by enterprise adoption.

The affected package version appears to be @bitwarden/cli@2026.4.0, and the malicious code was published in bw1.js, a file included in the package contents. The attack appears to have leveraged a compromised GitHub Action in Bitwarden's CI/CD pipeline, consistent with the pattern seen across other affected repositories in this campaign.

What we know so far:

Bitwarden CLI builds were affected

The compromise follows the same GitHub Actions supply chain vector identified in the broader Checkmarx campaign

This is an ongoing investigation. Socket's security research team is conducting a full technical analysis and will publish detailed findings, including affected versions, indicators of compromise, and remediation guidance.

If you use Bitwarden CLI, we recommend reviewing your CI logs and rotating any secrets that may have been exposed to the compromised workflow. At this time, the compromise involves only the npm package for the CLI. Bitwarden's Chrome extension, MCP server, and other legitimate distributions have not been affected so far.

Technical analysis#

The malicious payload was in a file named bw1.js, which shares core infrastructure with the Checkmarx mcpAddon.js we analyzed yesterday:

Same C2 endpoint: Uses the identical audit.checkmarx[.]cx/v1/telemetry endpoint, obfuscated via __decodeScrambled with seed 0x3039. Exfiltration also occurs through the GitHub API (commit-based) and the npm registry (token theft/republishing)

Embedded payloads: Same gzip+base64 structure containing a Python memory-scraping script targeting the GitHub Actions Runner.Worker, a setup.mjs loader for republished npm packages, a GitHub Actions workflow YAML, hardcoded RSA public keys, and an ideological manifesto string

Credential harvesting: GitHub tokens via Runner.Worker memory scraping and environment variables, AWS credentials via ~/.aws/ files and environment, Azure tokens via azd, GCP credentials via gcloud config config-helper, npm configuration files (.npmrc), SSH keys, environment variables, and Claude/MCP configuration files

GitHub exfiltration: Public repositories created under victim accounts using Dune-themed naming ({word}-{word}-{3digits}), with encrypted results committed and tokens embedded in commit messages using the marker LongLiveTheResistanceAgainstMachines

Supply chain propagation: npm token theft to identify writable packages and republish them with injected preinstall hooks, GitHub Actions workflow injection to capture repository secrets

Russian locale kill switch: Exits silently if the system locale begins with "ru", checking Intl.DateTimeFormat().resolvedOptions().locale and the environment variables LC_ALL, LC_MESSAGES, LANGUAGE, and LANG

Runtime: Bun v1.3.13 interpreter downloaded from GitHub releases
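For illustration, the kill-switch logic is simple to express. This Python sketch mirrors the environment-variable half of the check (the actual payload is JavaScript and also consults Intl.DateTimeFormat; the function name here is illustrative):

```python
import os

def russian_locale(env) -> bool:
    """True if any of the reported locale variables indicates a Russian locale."""
    for var in ("LC_ALL", "LC_MESSAGES", "LANGUAGE", "LANG"):
        if env.get(var, "").lower().startswith("ru"):
            return True
    return False

# The payload reportedly exits silently in this case, before any malicious action.
if russian_locale(os.environ):
    raise SystemExit(0)
```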

This payload (bw1.js) also includes several indicators not documented in the Checkmarx incident:

Lock file: Hardcoded path /tmp/tmp.987654321.lock prevents multiple instances from running simultaneously

Shell profile persistence: Injects the payload into ~/.bashrc and ~/.zshrc

Explicit branding: The repository description "Shai-Hulud: The Third Coming" replaces the deceptive "Checkmarx Configuration Storage", and debug strings include "Would be executing butlerian jihad!"

The shared tooling strongly suggests a connection to the same malware ecosystem, but the operational signatures differ in ways that complicate attribution. The Checkmarx attack was claimed by TeamPCP via the @pcpcats social media account after discovery, and the malware itself attempted to blend in with legitimate-looking descriptions. This payload takes a different approach: the ideological branding is embedded directly in the malware, from the Shai-Hulud repository names to the "Butlerian Jihad" manifesto payload to commit messages proclaiming resistance against machines. This suggests either a different operator using shared infrastructure, a splinter group with stronger ideological motivations, or an evolution in the campaign's public posture.

Recommendations#

Organizations that installed the malicious Bitwarden npm package should treat this incident as a credential exposure and CI/CD compromise event.

Immediately remove the affected package from developer systems and build environments. Rotate any credentials that may have been exposed to those environments, including GitHub tokens, npm tokens, cloud credentials, SSH keys, and CI/CD secrets. Review GitHub for unauthorized repository creation, unexpected workflow files under .github/workflows/, suspicious workflow runs, artifact downloads, and public repositories matching the observed Dune-themed staging pattern ({word}-{word}-{3digits}). Check for the following keywords in newly published repositories if you believe you may be impacted:

atreides

cogitor

fedaykin

fremen

futar

gesserit

ghola

harkonnen

heighliner

kanly

kralizec

lasgun

laza

melange

mentat

navigator

ornithopter

phibian

powindah

prana

prescient

sandworm

sardaukar

sayyadina

sietch

siridar

slig

stillsuit

thumper

tleilaxu
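Sweeping a repository listing for the staging pattern takes only a few lines. A sketch (the word list is the one published above and may be incomplete; the helper name is illustrative):

```python
import re

# {word}-{word}-{3digits}, where at least one word comes from the observed list.
DUNE_WORDS = {
    "atreides", "cogitor", "fedaykin", "fremen", "futar", "gesserit", "ghola",
    "harkonnen", "heighliner", "kanly", "kralizec", "lasgun", "laza", "melange",
    "mentat", "navigator", "ornithopter", "phibian", "powindah", "prana",
    "prescient", "sandworm", "sardaukar", "sayyadina", "sietch", "siridar",
    "slig", "stillsuit", "thumper", "tleilaxu",
}
PATTERN = re.compile(r"^([a-z]+)-([a-z]+)-(\d{3})$")

def looks_like_staging_repo(name: str) -> bool:
    """True if a repository name matches the observed exfiltration naming scheme."""
    m = PATTERN.match(name)
    return bool(m) and (m.group(1) in DUNE_WORDS or m.group(2) in DUNE_WORDS)

print(looks_like_staging_repo("sietch-thumper-123"))  # True
print(looks_like_staging_repo("my-app-v2"))           # False
```

Feed it the names returned by your Git hosting API and investigate any hit created outside normal release processes.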

Audit npm for unauthorized publishes, version changes, or newly added install hooks. In cloud environments, review access logs for unusual secret access, token use, and newly issued credentials.

On endpoints and runners, hunt for outbound connections to the observed exfiltration infrastructure (audit[.]checkmarx[.]cx), execution of Bun where it is not normally used, and access to files such as .npmrc, .git-credentials, .env, cloud credential stores, gcloud, az, or azd. Check for the lock file /tmp/tmp.987654321.lock and shell profile modifications in ~/.bashrc and ~/.zshrc. For GitHub Actions, review whether any unapproved workflows were created on transient branches and whether artifacts such as format-results.txt were generated or downloaded.

As a longer-term control, reduce the blast radius of future supply chain incidents by locking down token scopes, requiring short-lived credentials where possible, restricting who can create or publish packages, hardening GitHub Actions permissions, disabling unnecessary artifact access, and monitoring for new public repositories or workflow changes created outside normal release processes.
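The host-level checks above are easy to script. A minimal sketch (the lock-file path and persistence targets are from this report; the helper itself is illustrative, not a complete detection):

```python
import os

LOCK_FILE = "/tmp/tmp.987654321.lock"        # reported single-instance lock
SHELL_PROFILES = ("~/.bashrc", "~/.zshrc")   # reported persistence targets
MARKERS = ("checkmarx.cx", "bw1.js")         # strings worth flagging in profiles

def scan_host():
    """Return human-readable findings for the host indicators described above."""
    findings = []
    if os.path.exists(LOCK_FILE):
        findings.append("lock file present: " + LOCK_FILE)
    for profile in SHELL_PROFILES:
        path = os.path.expanduser(profile)
        if not os.path.exists(path):
            continue
        with open(path, encoding="utf-8", errors="ignore") as f:
            text = f.read()
        for marker in MARKERS:
            if marker in text:
                findings.append(f"{profile} references {marker}")
    return findings

for finding in scan_host():
    print("SUSPICIOUS:", finding)
```

An empty result does not clear a machine; it only means these particular artifacts are absent.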

IOCs#

Malicious Package

@bitwarden/cli@2026.4.0

Network Indicators

94[.]154[.]172[.]43

https://audit.checkmarx[.]cx/v1/telemetry

File System Indicators (Victim Package Compromise)

/tmp/tmp.987654321.lock

/tmp/_tmp_<Unix Epoch Timestamp>/

package-updated.tgz

The Onion Signs New Deal to Take Over Infowars

www.nytimes.com

A new deal, which would allow The Onion to use the Infowars name and website address, must be approved by a Texas judge.

The Onion Has a New Plan to Take Over Infowars

The Onion, a satirical news outlet, wants to convert the right-wing Infowars site into a parody of itself. Credit: Jamie Kelter Davis for The New York Times

April 20, 2026

When Infowars, the website founded by the right-wing conspiracist Alex Jones, came up for sale two years ago, an unlikely suitor stepped up. The Onion, a satirical news outlet, planned to convert the site into a parody of itself.

That sale was scuttled by a bankruptcy court. Now, The Onion has re-emerged with a new plan: licensing the website from Gregory Milligan, the court-appointed manager of the site.

On Monday, Mr. Milligan asked Maya Guerra Gamble, a judge in Texas' Travis County District Court overseeing the disposition of Infowars, to approve that licensing agreement in a court filing. Under the terms, The Onion's parent company, Global Tetrahedron, would pay $81,000 a month to license Infowars.com and its associated intellectual property, such as its name, for an initial six months, with an option to renew for another six months.

The licensing deal has been agreed to by The Onion and the court-appointed administrator. But it is not effective until Judge Guerra Gamble approves it, and Mr. Jones could appeal any ruling. That means the fate of Infowars remains in limbo until the court rules, probably sometime in the next two weeks. Mr. Jones continues to operate Infowars.com and host its weekday program, "The Alex Jones Show."

Mr. Jones had no immediate comment.

The battle over Infowars has been a long and fraught saga, and Mr. Jones, a notorious peddler of lies and invective, has used his bully pulpit for more than a year to crusade against The Onion's efforts to take over the platform. The site is in limbo because of a series of defamation lawsuits against Mr. Jones filed by families of victims of the mass shooting in 2012 at Sandy Hook Elementary School in Connecticut, which Mr. Jones falsely claimed was a hoax.

your hex editor should color-code bytes

simonomi.dev

alice pellerin • 2026-03-31

too often, i see hex editors that look like this:

00000000 00 00 02 00 28 00 00 00 88 15 00 00 C4 01 00 00 ⋄⋄•⋄(⋄⋄⋄ו⋄⋄ו⋄⋄

00000010 14 00 00 00 03 00 00 00 00 01 00 00 03 00 00 00 •⋄⋄⋄•⋄⋄⋄⋄•⋄⋄•⋄⋄⋄

00000020 3C 00 00 00 C4 0A 00 00 50 00 00 00 18 00 00 00 <⋄⋄⋄×⏎⋄⋄P⋄⋄⋄•⋄⋄⋄

00000030 14 00 00 10 00 00 00 00 18 00 00 20 00 00 00 00 •⋄⋄•⋄⋄⋄⋄•⋄⋄ ⋄⋄⋄⋄

00000040 20 00 00 30 00 00 00 00 51 00 00 00 48 00 00 00 ⋄⋄0⋄⋄⋄⋄Q⋄⋄⋄H⋄⋄⋄

00000050 10 00 00 80 00 00 00 00 00 00 00 A0 00 00 00 00 •⋄⋄×⋄⋄⋄⋄⋄⋄⋄×⋄⋄⋄⋄

00000060 01 00 00 A0 01 00 00 00 02 00 00 A0 02 00 00 00 •⋄⋄ו⋄⋄⋄•⋄⋄ו⋄⋄⋄

00000070 03 00 00 A0 03 00 00 00 04 00 00 A0 04 00 00 00 •⋄⋄ו⋄⋄⋄•⋄⋄ו⋄⋄⋄

00000080 05 00 00 A0 05 00 00 00 06 00 00 A0 06 00 00 00 •⋄⋄ו⋄⋄⋄•⋄⋄ו⋄⋄⋄

00000090 20 00 00 30 00 00 00 00 53 00 00 00 00 DE 00 00 ⋄⋄0⋄⋄⋄⋄S⋄⋄⋄⋄×⋄⋄

000000a0 5D FA 01 44 E1 3A 9A 0F 52 00 00 00 FC 14 00 00 ]וD×:וR⋄⋄⋄ו⋄⋄

000000b0 1B 20 2A 2B 00 80 00 00 00 80 00 00 00 80 00 00 • *+⋄×⋄⋄⋄×⋄⋄⋄×⋄⋄

000000c0 FF 7F 00 00 00 00 33 52 00 00 00 00 29 10 15 10 ╳•⋄⋄⋄⋄3R⋄⋄⋄⋄)•••

000000d0 80 00 1F 00 03 00 00 00 02 00 00 00 40 14 22 23 ×⋄•⋄•⋄⋄⋄•⋄⋄⋄@•“#

000000e0 03 00 00 00 06 00 00 00 23 00 9D 05 6B FA C0 05 •⋄⋄⋄•⋄⋄⋄#⋄וk×ו

000000f0 C8 03 00 00 14 22 23 14 05 00 00 00 2E 00 9E 06 ו⋄⋄•“#••⋄⋄⋄.⋄ו

every time i do, i feel bad for the poor person having to use it (especially if that person is me!). a plain list of bytes makes it hard to notice interesting things in the data. go ahead, try to find the single C0 in these bytes:

00000000 15 29 21 25 03 2F 2E 2B 15 11 24 3F 10 14 3B 13 •)!%•/.+••$?••;•

00000001 32 25 09 01 10 02 01 23 26 1E 25 2D 24 2F 23 3E 2%␣••••#&•%-$/#>

00000002 05 0F 33 2D 18 29 3E 1E 16 3B 29 0D 24 0B 3E 38 ••3-•)>••;)␍$•>8

00000003 33 3C 1E 2C 28 31 C0 1D 11 32 14 05 10 17 3F 01 3<•,(1ו•2••••?•

00000004 1E 32 0A 14 2B 2F 0B 14 3E 27 39 0A 17 23 1B 39 •2⏎•+/••>’9⏎•#•9

00000005 18 0B 3B 13 25 14 2C 3B 33 3C 19 10 21 0F 2C 34 ••;•%•,;3<••!•,4

00000006 2F 0C 1D 2C 2E 22 11 28 0D 0A 1F 37 27 39 35 21 /••,.“•(␍⏎•7′95!

00000007 23 39 21 2B 37 23 28 16 30 28 02 04 25 22 37 1F #9!+7#(•0(••%“7•

00000008 36 2F 2D 25 12 25 01 31 3B 39 2D 35 26 37 30 2A 6/-%•%•1;9-5&70*

00000009 06 0D 11 1F 25 0A 1E 29 15 0B 0A 2A 2E 2C 21 16 •␍••%⏎•)••⏎*.,!•

0000000a 1D 37 0F 16 12 03 2C 02 0B 22 24 11 1A 3B 0D 0B •7••••,••“$••;␍•

0000000b 0D 13 30 2D 3B 15 05 15 32 19 20 30 3C 0E 3D 0B ␍•0-;•••2• 0<•=•

0000000c 17 24 22 3E 1E 22 18 0D 21 06 29 38 3E 20 3B 12 •$“>•“•␍!•)8> ;•

0000000d 06 1F 19 17 29 35 1E 3B 1E 01 31 08 13 0C 27 20 ••••)5•;••1•••′

0000000e 08 24 2E 32 16 06 1F 3D 35 35 19 16 02 07 31 13 •$.2•••=55••••1•

0000000f 31 33 30 36 14 32 07 05 05 34 19 0B 18 16 12 3C 1306•2•••4•••••<

compare that to one with colors:

00000000 37 2D 08 13 0D 0B 18 1D 02 1A 2D 12 2A 0D 0F 27 7-••␍•••••-•*␍•′

00000001 04 2A 25 32 0F 17 32 11 2F 2A 2A 0A 0A 16 04 1D •*%2••2•/**⏎⏎•••

00000002 32 13 09 01 2B 26 1A 30 3D 26 13 39 09 0D 38 3E 2•␣•+&•0=&•9␣␍8>

00000003 0A 0D 1D 0B 36 30 02 36 0E 0B 2F 09 26 1E 33 03 ⏎␍••60•6••/␣&•3•

00000004 3C 3C 08 0A 1E 36 12 11 1B 17 05 09 0B 37 0C 0E <<•⏎•6•••••␣•7••

00000005 31 05 09 17 2D 1D 05 16 25 03 3E 0A 1A 01 0C 2B 1•␣•-•••%•>⏎•••+

00000006 13 37 17 14 37 03 18 34 2D 03 30 11 2B 19 04 0B •7••7••4-•0•+•••

00000007 04 2A 18 26 21 25 3F 23 1D 0F 2F 2B 35 0C 09 37 •*•&!%?#••/+5•␣7

00000008 25 33 19 1C 12 1E 2E 38 3A 3A 3C 28 39 0A 30 23 %3••••.8::<(9⏎0#

00000009 21 08 09 24 0B 0E 13 26 04 30 06 20 10 18 15 3C  !•␣$•••&•0• •••<

0000000a 10 3C 30 34 28 28 1D 31 22 23 22 38 0E 12 25 15 •<04((•1″#“8••%•

0000000b 3B 1F 30 0D 26 0E 15 32 1C 2B 12 1A 32 1C 02 07  ;•0␍&••2•+••2•••

0000000c 35 2E 06 13 1F 33 3D 16 05 1C 2A 0F 34 34 21 26 5.•••3=•••*•44!&

0000000d 0C 17 3D 02 27 39 21 17 3F 07 1A 2F 38 0D 2D 1E ••=•’9!•?••/8␍-•

0000000e 32 0C C0 14 0E 20 25 0E 2E 2D 0D 21 27 13 2C 07 2•ו• %•.-␍!’•,•

0000000f 14 0A 20 31 15 13 2C 3B 0F 12 1A 2D 0C 11 32 11 •⏎ 1••,;•••-••2•

it's much easier to pick out the unique byte when it's a different color! human brains are really good at spotting visual patterns, given the right format.
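the idea fits in a few lines of python if you want to try it in a terminal: pick a color per byte class and wrap each hex pair in an ANSI escape. the classification here (null / printable / control / high) is just one reasonable scheme, not the one any particular editor uses:

```python
# Minimal color-coded hex dump: one ANSI color per byte class.
RESET = "\x1b[0m"

def color_for(b: int) -> str:
    if b == 0x00:
        return "\x1b[90m"   # null: dim gray
    if 0x20 <= b <= 0x7E:
        return "\x1b[32m"   # printable ASCII: green
    if b < 0x20:
        return "\x1b[36m"   # other control bytes: cyan
    return "\x1b[31m"       # high bytes (0x7F and up): red

def hexdump(data: bytes, width: int = 16) -> str:
    lines = []
    for offset in range(0, len(data), width):
        row = data[offset:offset + width]
        hexes = " ".join(f"{color_for(b)}{b:02X}{RESET}" for b in row)
        ascii_ = "".join(chr(b) if 0x20 <= b <= 0x7E else "." for b in row)
        lines.append(f"{offset:08x}  {hexes}  {ascii_}")
    return "\n".join(lines)

print(hexdump(b"KPS\x00\x0a\x00\x00\x00\x0c\x00\x00\x00\x01\x00\x00\x00"))
```

a stray high byte in a sea of nulls and controls jumps out immediately, which is the whole point.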

here are a few more examples:

example 1

00000000 4B 50 53 00 0A 00 00 00 0C 00 00 00 01 00 00 00 KPS⋄⏎⋄⋄⋄•⋄⋄⋄•⋄⋄⋄

00000010 00 00 00 00 B4 00 00 00 46 00 00 00 64 00 00 00 ⋄⋄⋄⋄×⋄⋄⋄F⋄⋄⋄d⋄⋄⋄

00000020 46 00 00 00 02 00 00 00 00 00 00 00 DC 00 00 00 F⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄×⋄⋄⋄

00000030 50 00 00 00 A0 00 00 00 50 00 00 00 03 00 00 00 P⋄⋄⋄×⋄⋄⋄P⋄⋄⋄•⋄⋄⋄

00000040 00 00 00 00 FA 00 00 00 5A 00 00 00 B4 00 00 00 ⋄⋄⋄⋄×⋄⋄⋄Z⋄⋄⋄×⋄⋄⋄

00000050 5A 00 00 00 04 00 00 00 00 00 00 00 18 01 00 00 Z⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄••⋄⋄

00000060 64 00 00 00 C8 00 00 00 64 00 00 00 05 00 00 00 d⋄⋄⋄×⋄⋄⋄d⋄⋄⋄•⋄⋄⋄

00000070 00 00 00 00 4A 01 00 00 78 00 00 00 F0 00 00 00 ⋄⋄⋄⋄J•⋄⋄x⋄⋄⋄×⋄⋄⋄

00000080 78 00 00 00 06 00 00 00 00 00 00 00 90 01 00 00 x⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄ו⋄⋄

00000090 8C 00 00 00 18 01 00 00 8C 00 00 00 07 00 00 00 ×⋄⋄⋄••⋄⋄×⋄⋄⋄•⋄⋄⋄

000000a0 00 00 00 00 F4 01 00 00 B4 00 00 00 68 01 00 00 ⋄⋄⋄⋄ו⋄⋄×⋄⋄⋄h•⋄⋄

000000b0 B4 00 00 00 08 00 00 00 00 00 00 00 58 02 00 00 ×⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄X•⋄⋄

000000c0 DC 00 00 00 B8 01 00 00 DC 00 00 00 09 00 00 00 ×⋄⋄⋄ו⋄⋄×⋄⋄⋄␣⋄⋄⋄

000000d0 E7 03 00 00 E7 03 00 00 00 00 00 00 E7 03 00 00 ו⋄⋄ו⋄⋄⋄⋄⋄⋄ו⋄⋄

000000e0 E7 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ו⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄

000000f0 00 00 00 00 00 00 00 00 00 00 00 00 ⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄

this file starts with the magic bytes KPS, then a bunch of (little-endian) 32-bit integers that range from 0 to 999 (0x3E7). the colors make it quick to recognize that every 32-bit integer is relatively small, as the two high bytes are always 00 00. if you look closely, you may notice other patterns, like the numbers counting up every 0x18 bytes starting at 0xC

if you're curious about this particular file format, the code that parses it is pretty simple, even if you're not a programmer. there's even a wiki page for the data it represents, if you're into Fossil Fighters
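as a sketch of how simple that layout is to read (this follows my reading of the dump above — a KPS magic then little-endian 32-bit integers — not any official spec):

```python
import struct

def parse_kps(data: bytes) -> list:
    """Read the KPS magic, then the remaining bytes as little-endian uint32s."""
    if data[:3] != b"KPS":
        raise ValueError("not a KPS file")
    count = (len(data) - 4) // 4           # skip the 4-byte magic (KPS + pad)
    return list(struct.unpack_from(f"<{count}I", data, 4))

# first words of the dump above: 0A 00 00 00, 0C 00 00 00, 01 00 00 00
sample = b"KPS\x00" + struct.pack("<3I", 0x0A, 0x0C, 0x01)
print(parse_kps(sample))  # [10, 12, 1]
```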

example 2

00000000 44 41 4C 00 59 06 00 00 F4 07 00 00 F5 01 00 00 DAL⋄Y•⋄⋄ו⋄⋄ו⋄⋄

00000010 14 00 00 00 E8 07 00 00 08 08 00 00 44 08 00 00 •⋄⋄⋄ו⋄⋄••⋄⋄D•⋄⋄

00000020 84 08 00 00 C8 08 00 00 04 09 00 00 40 09 00 00 ו⋄⋄ו⋄⋄•␣⋄⋄@␣⋄⋄

00000030 7C 09 00 00 B8 09 00 00 F8 09 00 00 34 0A 00 00 |␣⋄⋄×␣⋄⋄×␣⋄⋄4⏎⋄⋄

00000040 70 0A 00 00 AC 0A 00 00 EC 0A 00 00 30 0B 00 00 p⏎⋄⋄×⏎⋄⋄×⏎⋄⋄0•⋄⋄

00000050 6C 0B 00 00 A8 0B 00 00 E8 0B 00 00 24 0C 00 00 l•⋄⋄ו⋄⋄ו⋄⋄$•⋄⋄

00000060 60 0C 00 00 9C 0C 00 00 D8 0C 00 00 14 0D 00 00 `•⋄⋄ו⋄⋄ו⋄⋄•␍⋄⋄

00000070 50 0D 00 00 8C 0D 00 00 CC 0D 00 00 08 0E 00 00 P␍⋄⋄×␍⋄⋄×␍⋄⋄••⋄⋄

00000080 48 0E 00 00 84 0E 00 00 C4 0E 00 00 08 0F 00 00 H•⋄⋄ו⋄⋄ו⋄⋄••⋄⋄

00000090 44 0F 00 00 80 0F 00 00 C0 0F 00 00 04 10 00 00 D•⋄⋄ו⋄⋄ו⋄⋄••⋄⋄

Surveillance vendors caught abusing access to telcos to track people’s phone locations, researchers say

techcrunch.com

Security researchers have uncovered two separate spying campaigns that are abusing well-known weaknesses in the global telecoms infrastructure to track people's locations. The researchers say these two campaigns are likely a small snapshot of what they believe to be widespread exploitation by surveillance vendors seeking access to global phone networks.

On Thursday, the Citizen Lab, a digital rights organization with more than a decade of experience exposing surveillance abuses, published a new report detailing the two newly identified campaigns. The surveillance vendors behind them, which Citizen Lab did not name, operated as "ghost" companies that pretended to be legitimate cellular providers and would piggyback on their access to those networks to look up the location data of their targets.

The new findings reveal continued exploitation of known flaws in the technologies that underpin the global phone networks.

One of them is the insecurity of Signaling System 7, or SS7, a set of protocols for 2G and 3G networks that for years has been the backbone of how cellular networks connect to each other and route subscribers' calls and text messages around the world. Researchers and experts have long warned that governments and surveillance tech makers can exploit vulnerabilities in SS7 to geolocate individuals' cell phones, as SS7 requires neither authentication nor encryption, leaving the door open for rogue operators to abuse it.

The newer protocol, Diameter, designed for 4G and 5G communications, is supposed to replace SS7 and includes the security features that were lacking in its predecessor. But as the Citizen Lab highlights in this report, there are still ways to exploit Diameter, as cell providers do not always implement the new protections. In some cases, attackers can still fall back to exploiting the older SS7 protocol.

The two spy campaigns have at least one thing in common: Both abused access to three specific telecom providers that repeatedly acted as "the surveillance entry and transit points within the telecommunications ecosystem." This access gave the surveillance vendors and their government customers behind the campaigns "the ability to hide behind their infrastructure," as the researchers explained.

According to the re­port, the first one is Israeli op­er­a­tor 019Mobile, which re­searchers said was used in sev­eral sur­veil­lance at­tempts. British provider Tango Networks U.K. was also used for sur­veil­lance ac­tiv­ity over sev­eral years, the re­searchers say.

The third cell phone provider is Airtel Jersey, an op­er­a­tor on the Channel Island of Jersey now owned by Sure, a com­pany whose net­works have been linked to prior sur­veil­lance cam­paigns.

Sure CEO Alistair Beak told TechCrunch that the company "does not lease access to signalling directly or knowingly to organisations for the purposes of locating or tracking individuals, or for intercepting communications content."

"Sure acknowledges that digital services can be misused, which is why we take a number of steps to mitigate this risk. Sure has implemented several protective measures to prevent the misuse of signalling services, including monitoring and blocking inappropriate signalling," read Beak's statement. "Any evidence or valid complaint relating to the misuse of Sure's network results in the service being immediately suspended and, where malicious or inappropriate activity is confirmed following investigation, permanently terminated."

Tango Networks and 019Mobile did not re­spond to TechCrunch’s re­quest for com­ment.

Gil Nagar, the head of IT and security at 019Mobile, sent a letter to Citizen Lab. Nagar said that the company "cannot confirm" that the alleged 019Mobile infrastructure, identified by Citizen Lab as being used by the surveillance vendors, belongs to the company.

Researchers say 'high-profile' people targeted

According to the Citizen Lab, the first sur­veil­lance ven­dor fa­cil­i­tated spy­ing cam­paigns span­ning sev­eral years against dif­fer­ent tar­gets all over the world, and us­ing the in­fra­struc­ture of sev­eral dif­fer­ent cell phone providers. This led re­searchers to con­clude that dif­fer­ent gov­ern­ment cus­tomers of the sur­veil­lance ven­dor were be­hind the var­i­ous cam­paigns.

"The evidence shows a deliberate and well-funded operation with deep integration into the mobile signaling ecosystem," the researchers wrote.

Gary Miller, one of the researchers who investigated these attacks, told TechCrunch that some clues point to "an Israeli-based commercial geo-intelligence provider with specialized telecom capabilities," but did not name the surveillance provider. Several Israeli companies are known to offer similar services, such as Circles (later acquired by spyware maker NSO Group), Cognyte, and Rayzone.

According to the Citizen Lab, the first cam­paign re­lied on try­ing to abuse flaws in SS7, and then switch­ing to ex­ploit­ing Diameter if those at­tempts failed.

The second spy campaign used different methods. In this case, the other surveillance vendor behind it (which Citizen Lab is not naming either) relied on sending a special type of SMS message to one specific "high-profile" target, as the researchers explained.

These are text-based mes­sages de­signed to com­mu­ni­cate di­rectly with the tar­get’s SIM card, with­out show­ing any trace of them to the user. Under nor­mal cir­cum­stances, these mes­sages are used by cell phone providers to send in­nocu­ous com­mands to their sub­scribers’ SIM cards used for keep­ing a de­vice con­nected to their net­work. But the sur­veil­lance ven­dor in­stead sent com­mands that es­sen­tially turned the tar­get’s phone into a lo­ca­tion track­ing de­vice, ac­cord­ing to the re­searchers. This type of at­tack was dubbed SIMjacker by mo­bile cy­ber­se­cu­rity com­pany Enea in 2019.

I’ve ob­served thou­sands of these at­tacks through the years, so I would say it’s a fairly com­mon ex­ploit that’s dif­fi­cult to de­tect,” said Miller. However, these at­tacks ap­pear to be ge­o­graph­i­cally tar­geted, in­di­cat­ing that ac­tors em­ploy­ing SIMjacker-style at­tacks likely know the coun­tries and net­works most vul­ner­a­ble to them.”

Miller made it clear that these two campaigns are just the tip of the iceberg. "We only focused on two surveillance campaigns in a universe of millions of attacks across the globe," he said.

Updated to in­clude 019Mobile’s re­sponses sent to Citizen Lab.

An update on recent Claude Code quality reports

www.anthropic.com

Over the past month, we’ve been look­ing into re­ports that Claude’s re­sponses have wors­ened for some users. We’ve traced these re­ports to three sep­a­rate changes that af­fected Claude Code, the Claude Agent SDK, and Claude Cowork. The API was not im­pacted.

All three is­sues have now been re­solved as of April 20 (v2.1.116).

In this post, we ex­plain what we found, what we fixed, and what we’ll do dif­fer­ently to en­sure sim­i­lar is­sues are much less likely to hap­pen again.

We take re­ports about degra­da­tion very se­ri­ously. We never in­ten­tion­ally de­grade our mod­els, and we were able to im­me­di­ately con­firm that our API and in­fer­ence layer were un­af­fected.

After in­ves­ti­ga­tion, we iden­ti­fied three dif­fer­ent is­sues:

On March 4, we changed Claude Code’s de­fault rea­son­ing ef­fort from high to medium to re­duce the very long la­tency—enough to make the UI ap­pear frozen—some users were see­ing in high mode. This was the wrong trade­off. We re­verted this change on April 7 af­ter users told us they’d pre­fer to de­fault to higher in­tel­li­gence and opt into lower ef­fort for sim­ple tasks. This im­pacted Sonnet 4.6 and Opus 4.6.

On March 26, we shipped a change to clear Claude’s older think­ing from ses­sions that had been idle for over an hour, to re­duce la­tency when users re­sumed those ses­sions. A bug caused this to keep hap­pen­ing every turn for the rest of the ses­sion in­stead of just once, which made Claude seem for­get­ful and repet­i­tive. We fixed it on April 10. This af­fected Sonnet 4.6 and Opus 4.6.

On April 16, we added a sys­tem prompt in­struc­tion to re­duce ver­bosity. In com­bi­na­tion with other prompt changes, it hurt cod­ing qual­ity, and was re­verted on April 20. This im­pacted Sonnet 4.6, Opus 4.6, and Opus 4.7.

Because each change af­fected a dif­fer­ent slice of traf­fic on a dif­fer­ent sched­ule, the ag­gre­gate ef­fect looked like broad, in­con­sis­tent degra­da­tion. While we be­gan in­ves­ti­gat­ing re­ports in early March, they were chal­leng­ing to dis­tin­guish from nor­mal vari­a­tion in user feed­back at first, and nei­ther our in­ter­nal us­age nor evals ini­tially re­pro­duced the is­sues iden­ti­fied.

This is­n’t the ex­pe­ri­ence users should ex­pect from Claude Code. As of April 23, we’re re­set­ting us­age lim­its for all sub­scribers.

A change to Claude Code’s de­fault rea­son­ing ef­fort

When we re­leased Opus 4.6 in Claude Code in February, we set the de­fault rea­son­ing ef­fort to high.

Soon af­ter, we re­ceived user feed­back that Claude Opus 4.6 in high ef­fort mode would oc­ca­sion­ally think for too long, caus­ing the UI to ap­pear frozen and lead­ing to dis­pro­por­tion­ate la­tency and to­ken us­age for those users.

In gen­eral, the longer the model thinks, the bet­ter the out­put. Effort lev­els are how Claude Code lets users set that trade­off—more think­ing ver­sus lower la­tency and fewer us­age limit hits. As we cal­i­brate ef­fort lev­els for our mod­els, we take this trade­off into ac­count in or­der to pick points along the test-time-com­pute curve that give peo­ple the best range of op­tions. In the prod­uct layer, we then choose which point along this curve we set as our de­fault, and that is the value we send to the Messages API as the ef­fort pa­ra­me­ter; we then make the other op­tions avail­able via /effort.
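The default-effort mechanics described above can be sketched as a small request builder. This is an illustrative sketch only: the function, the `EFFORT_LEVELS` tuple, and the exact request shape are assumptions for the sake of the example, not the documented Messages API schema.

```python
# Hypothetical sketch of mapping Claude Code's /effort setting onto the
# value sent with each model request. Field names are assumptions.
EFFORT_LEVELS = ("low", "medium", "high", "xhigh")

def build_request(prompt: str, effort: str = "high") -> dict:
    """Build a request body carrying the chosen point on the
    test-time-compute curve as an 'effort' field."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": "claude-opus",  # placeholder model name
        "effort": effort,        # more thinking vs. lower latency
        "messages": [{"role": "user", "content": prompt}],
    }
```

The product-layer default is just the value filled in when the user never runs /effort; switching the default from high to medium changes only that one fallback.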

In our in­ter­nal evals and test­ing, medium ef­fort achieved slightly lower in­tel­li­gence with sig­nif­i­cantly less la­tency for the ma­jor­ity of tasks. It also did­n’t suf­fer from the same is­sues with oc­ca­sional very long tail la­ten­cies for think­ing, and it helped max­i­mize users’ us­age lim­its. As a re­sult, we rolled out a change mak­ing medium the de­fault ef­fort, and ex­plained the ra­tio­nale via in-prod­uct di­a­log.

Soon af­ter rolling out, users be­gan re­port­ing that Claude Code felt less in­tel­li­gent. We shipped a num­ber of de­sign it­er­a­tions to make the cur­rent ef­fort set­ting clearer in or­der to alert peo­ple they could change the de­fault (notices on startup, an in­line ef­fort se­lec­tor, and bring­ing back ul­tra­think), but most users re­tained the medium ef­fort de­fault.

After hear­ing feed­back from more cus­tomers, we re­versed this de­ci­sion on April 7. All users now de­fault to xhigh ef­fort for Opus 4.7, and high ef­fort for all other mod­els.

A caching op­ti­miza­tion that dropped prior rea­son­ing

When Claude rea­sons through a task, that rea­son­ing is nor­mally kept in the con­ver­sa­tion his­tory so that on every sub­se­quent turn, Claude can see why it made the ed­its and tool calls it did.

On March 26, we shipped what was meant to be an efficiency improvement to this feature. We use prompt caching to make back-to-back API calls cheaper and faster for users. Claude writes the input tokens to the cache when it makes an API request; after a period of inactivity the prompt is evicted from the cache, making room for other prompts. Cache utilization is something we manage carefully.
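The cache lifecycle above (write on request, refresh on reuse, evict after inactivity) reduces to a toy model. The TTL value and class names here are illustrative assumptions, not our actual cache parameters.

```python
CACHE_TTL_S = 300.0  # illustrative eviction window; the real TTL differs

class PromptCache:
    """Toy model of the prompt-cache lifecycle: a request is a hit only
    if the same prompt prefix was used within the TTL window."""

    def __init__(self):
        self._entries = {}  # prompt prefix -> last-used timestamp

    def lookup(self, prefix: str, now: float) -> bool:
        last = self._entries.get(prefix)
        hit = last is not None and now - last <= CACHE_TTL_S
        self._entries[prefix] = now  # each request (re)writes the entry
        return hit
```

A resumed session whose prefix has aged out is a miss no matter what, which is why pruning tokens at exactly that moment looked free.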

The de­sign should have been sim­ple: if a ses­sion has been idle for more than an hour, we could re­duce users’ cost of re­sum­ing that ses­sion by clear­ing old think­ing sec­tions. Since the re­quest would be a cache miss any­way, we could prune un­nec­es­sary mes­sages from the re­quest to re­duce the num­ber of un­cached to­kens sent to the API. We’d then re­sume send­ing full rea­son­ing his­tory. To do this we used the clear_­think­ing_20251015 API header along with keep:1.

The im­ple­men­ta­tion had a bug. Instead of clear­ing think­ing his­tory once, it cleared it on every turn for the rest of the ses­sion. After a ses­sion crossed the idle thresh­old once, each re­quest for the rest of that process told the API to keep only the most re­cent block of rea­son­ing and dis­card every­thing be­fore it. This com­pounded: if you sent a fol­low-up mes­sage while Claude was in the mid­dle of a tool use, that started a new turn un­der the bro­ken flag, so even the rea­son­ing from the cur­rent turn was dropped. Claude would con­tinue ex­e­cut­ing, but in­creas­ingly with­out mem­ory of why it had cho­sen to do what it was do­ing. This sur­faced as the for­get­ful­ness, rep­e­ti­tion, and odd tool choices peo­ple re­ported.

Because this would con­tin­u­ously drop think­ing blocks from sub­se­quent re­quests, those re­quests also re­sulted in cache misses. We be­lieve this is what drove the sep­a­rate re­ports of us­age lim­its drain­ing faster than ex­pected.
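The buggy control flow reduces to a one-flag sketch (hypothetical names throughout; the header name and value format below are assumptions, and the real implementation is not shown here): the idle check set a flag that was never reset, so every later request in the process asked the API to discard all but the newest thinking block.

```python
import time

IDLE_THRESHOLD_S = 3600.0  # one hour

class Session:
    def __init__(self):
        self.last_request_at = time.monotonic()
        self.clear_thinking = False  # the bug: set once, never reset

    def prepare_request(self, messages):
        now = time.monotonic()
        if now - self.last_request_at > IDLE_THRESHOLD_S:
            self.clear_thinking = True
        self.last_request_at = now
        headers = {}
        if self.clear_thinking:
            # Asks the API to keep only the most recent thinking block.
            # Header name/format here is a stand-in, not the real one.
            headers["anthropic-beta"] = "clear_thinking_20251015;keep=1"
            # The fix: prune once, then resume sending full history:
            # self.clear_thinking = False
        return headers, messages
```

Because the flag persists, every subsequent request both loses prior reasoning and misses the cache, matching the forgetfulness and usage-drain reports.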

Two un­re­lated ex­per­i­ments made it chal­leng­ing for us to re­pro­duce the is­sue at first: an in­ter­nal-only server-side ex­per­i­ment re­lated to mes­sage queu­ing; and an or­thog­o­nal change in how we dis­play think­ing sup­pressed this bug in most CLI ses­sions, so we did­n’t catch it even when test­ing ex­ter­nal builds.

This bug was at the in­ter­sec­tion of Claude Code’s con­text man­age­ment, the Anthropic API, and ex­tended think­ing. The changes it in­tro­duced made it past mul­ti­ple hu­man and au­to­mated code re­views, as well as unit tests, end-to-end tests, au­to­mated ver­i­fi­ca­tion, and dog­food­ing. Combined with this only hap­pen­ing in a cor­ner case (stale ses­sions) and the dif­fi­culty of re­pro­duc­ing the is­sue, it took us over a week to dis­cover and con­firm the root cause.

As part of the in­ves­ti­ga­tion, we back-tested Code Review against the of­fend­ing pull re­quests us­ing Opus 4.7. When pro­vided the code repos­i­to­ries nec­es­sary to gather com­plete con­text, Opus 4.7 found the bug, while Opus 4.6 did­n’t. To pre­vent this from hap­pen­ing again, we are now land­ing sup­port for ad­di­tional repos­i­to­ries as con­text for code re­views.

We fixed this bug on April 10 in v2.1.101.

A sys­tem prompt change to re­duce ver­bosity

Our lat­est model, Claude Opus 4.7, has a no­table be­hav­ioral quirk rel­a­tive to its pre­de­ces­sor: as we wrote about at launch, it tends to be quite ver­bose. This makes it smarter on hard prob­lems, but it also pro­duces more out­put to­kens.

A few weeks be­fore we re­leased Opus 4.7, we started tun­ing Claude Code in prepa­ra­tion. Each model be­haves slightly dif­fer­ently, and we spend time be­fore each re­lease op­ti­miz­ing the har­ness and prod­uct for it.

We have a num­ber of tools to re­duce ver­bosity: model train­ing, prompt­ing, and im­prov­ing think­ing UX in the prod­uct. Ultimately we used all of these, but one ad­di­tion to the sys­tem prompt caused an out­sized ef­fect on in­tel­li­gence in Claude Code:

"Length limits: keep text between tool calls to ≤25 words. Keep final responses to ≤100 words unless the task requires more detail."

After mul­ti­ple weeks of in­ter­nal test­ing and no re­gres­sions in the set of eval­u­a­tions we ran, we felt con­fi­dent about the change and shipped it along­side Opus 4.7 on April 16.

As part of this in­ves­ti­ga­tion, we ran more ab­la­tions (removing lines from the sys­tem prompt to un­der­stand the im­pact of each line) us­ing a broader set of eval­u­a­tions. One of these eval­u­a­tions showed a 3% drop for both Opus 4.6 and 4.7. We im­me­di­ately re­verted the prompt as part of the April 20 re­lease.
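Line-level ablation of this kind can be sketched as follows; `run_eval` is a stand-in for a real evaluation harness, and all names are illustrative.

```python
def ablate_prompt(prompt_lines, run_eval):
    """Score the full system prompt, then re-score it with each line
    removed, attributing the score change to that line.

    `run_eval` takes a system prompt string and returns a score in [0, 1].
    """
    baseline = run_eval("\n".join(prompt_lines))
    impact = {}
    for i, line in enumerate(prompt_lines):
        ablated = prompt_lines[:i] + prompt_lines[i + 1:]
        # positive delta: the eval improved without this line
        impact[line] = run_eval("\n".join(ablated)) - baseline
    return baseline, impact
```

Run against a broad enough eval suite, a line like the length-limit instruction shows up as a positive delta, i.e. removing it recovers quality.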

Going for­ward

We are go­ing to do sev­eral things dif­fer­ently to avoid these is­sues: we’ll en­sure that a larger share of in­ter­nal staff use the ex­act pub­lic build of Claude Code (as op­posed to the ver­sion we use to test new fea­tures); and we’ll make im­prove­ments to our Code Review tool that we use in­ter­nally, and ship this im­proved ver­sion to cus­tomers.

We’re also adding tighter con­trols on sys­tem prompt changes. We will run a broad suite of per-model evals for every sys­tem prompt change to Claude Code, con­tin­u­ing ab­la­tions to un­der­stand the im­pact of each line, and we have built new tool­ing to make prompt changes eas­ier to re­view and au­dit. We’ve ad­di­tion­ally added guid­ance to our CLAUDE.md to en­sure model-spe­cific changes are gated to the spe­cific model they’re tar­get­ing. For any change that could trade off against in­tel­li­gence, we’ll add soak pe­ri­ods, a broader eval suite, and grad­ual roll­outs so we catch is­sues ear­lier.

We re­cently cre­ated @ClaudeDevs on X to give us the room to ex­plain prod­uct de­ci­sions and the rea­son­ing be­hind them in depth. We’ll share the same up­dates in cen­tral­ized threads on GitHub.

Finally, we’d like to thank our users: the peo­ple who used the /feedback com­mand to share their is­sues with us (or who posted spe­cific, re­pro­ducible ex­am­ples on­line) are the ones who ul­ti­mately al­lowed us to iden­tify and fix these prob­lems. Today we are re­set­ting us­age lim­its for all sub­scribers.

We’re im­mensely grate­ful for your feed­back and for your pa­tience.

Fragments: April 2

martinfowler.com

As we see LLMs churn out scads of code, folks have in­creas­ingly turned to Cognitive Debt as a metaphor for cap­tur­ing how a team can lose un­der­stand­ing of what a sys­tem does. Margaret-Anne Storey thinks a good way of think­ing about these prob­lems is to con­sider three lay­ers of sys­tem health:

Technical debt lives in code. It ac­cu­mu­lates when im­ple­men­ta­tion de­ci­sions com­pro­mise fu­ture change­abil­ity. It lim­its how sys­tems can change.

Cognitive debt lives in peo­ple. It ac­cu­mu­lates when shared un­der­stand­ing of the sys­tem erodes faster than it is re­plen­ished. It lim­its how teams can rea­son about change.

Intent debt lives in ar­ti­facts. It ac­cu­mu­lates when the goals and con­straints that should guide the sys­tem are poorly cap­tured or main­tained. It lim­its whether the sys­tem con­tin­ues to re­flect what we meant to build and it lim­its how hu­mans and AI agents can con­tinue to evolve the sys­tem ef­fec­tively.

While I’m get­ting a bit be­mused by debt metaphor pro­lif­er­a­tion, this way of think­ing does make a fair bit of sense. The ar­ti­cle in­cludes use­ful sec­tions to di­ag­nose and mit­i­gate each kind of debt. The three in­ter­act with each other, and the ar­ti­cle out­lines some gen­eral ac­tiv­i­ties teams should do to keep it all un­der con­trol

❄                ❄

In the ar­ti­cle she ref­er­ences a re­cent pa­per by Shaw and Nave at the Wharton School that adds LLMs to Kahneman’s two-sys­tem model of think­ing.

Kahneman’s book, Thinking Fast and Slow”, is one of my fa­vorite books. Its cen­tral idea is that hu­mans have two sys­tems of cog­ni­tion. System 1 (intuition) makes rapid de­ci­sions, of­ten barely-con­sciously. System 2 (deliberation) is when we ap­ply de­lib­er­ate think­ing to a prob­lem. He ob­served that to save en­ergy we de­fault to in­tu­ition, and that some­times gets us into trou­ble when we over­look things that we would have spot­ted had we ap­plied de­lib­er­a­tion to the prob­lem.

Shaw and Nave consider AI as System 3:

A con­se­quence of System 3 is the in­tro­duc­tion of cog­ni­tive sur­ren­der, char­ac­ter­ized by un­crit­i­cal re­liance on ex­ter­nally gen­er­ated ar­ti­fi­cial rea­son­ing, by­pass­ing System 2. Crucially, we dis­tin­guish cog­ni­tive sur­ren­der, marked by pas­sive trust and un­crit­i­cal eval­u­a­tion of ex­ter­nal in­for­ma­tion, from cog­ni­tive of­fload­ing, which in­volves strate­gic del­e­ga­tion of cog­ni­tion dur­ing de­lib­er­a­tion.

It’s a long pa­per, that goes into de­tail on this Tri-System the­ory of cog­ni­tion” and re­ports on sev­eral ex­per­i­ments they’ve done to test how well this the­ory can pre­dict be­hav­ior (at least within a lab).

❄                ❄                ❄                ❄                ❄

I’ve seen a few il­lus­tra­tions re­cently that use the sym­bols < >” as part of an icon to il­lus­trate code. That strikes me as rather odd, I can’t think of any pro­gram­ming lan­guage that uses < >” to sur­round pro­gram el­e­ments. Why that and not, say, { }”?

Obviously the reason is that they are thinking of HTML (or maybe XML), which is even more obvious when they use "</>" in their icons. But programmers don't program in HTML.

❄                ❄                ❄                ❄                ❄

Ajey Gore asks: if coding agents make coding free, what becomes the expensive thing? His answer is verification.

What does "correct" mean for an ETA algorithm in Jakarta traffic versus Ho Chi Minh City? What does a "successful" driver allocation look like when you're balancing earnings fairness, customer wait time, and fleet utilisation simultaneously? When hundreds of engineers are shipping into ~900 microservices around the clock, "correct" isn't one definition; it's thousands of definitions, all shifting, all context-dependent. These aren't edge cases. They're the entire job.

And they’re pre­cisely the kind of judg­ment that agents can­not per­form for you.

Increasingly I’m see­ing a view that agents do re­ally well when they have good, prefer­ably au­to­mated, ver­i­fi­ca­tion for their work. This en­cour­ages such things as Test Driven Development. That’s still a lot of ver­i­fi­ca­tion to do, which sug­gests we should see more ef­fort to find ways to make it eas­ier for hu­mans to com­pre­hend larger ranges of tests.

While I agree with most of what Ajey writes here, I do have a quibble with his view of legacy migration. He thinks it's a delusion that "agentic coding will finally crack legacy modernisation". I agree with him that agentic coding is overrated in a legacy context, but I have seen compelling evidence that LLMs help a great deal in understanding what legacy code is doing.

The big con­se­quence of Ajey’s as­sess­ment is that we’ll need to re­or­ga­nize around ver­i­fi­ca­tion rather than writ­ing code:

If agents handle execution, the human job becomes designing verification systems, defining quality, and handling the ambiguous cases agents can't resolve. Your org chart should reflect this. Practically, this means your Monday morning standup changes. Instead of "what did we ship?" the question becomes "what did we validate?" Instead of tracking output, you're tracking whether the output was right. The team that used to have ten engineers building features now has three engineers and seven people defining acceptance criteria, designing test harnesses, and monitoring outcomes. That's the reorganisation. It's uncomfortable because it demotes the act of building and promotes the act of judging. Most engineering cultures resist this. The ones that don't will win.

❄                ❄                ❄                ❄                ❄

One of the questions that comes up when we think of LLMs-as-programmers is whether there is a future for source code. David Cassel on The New Stack has an article summarizing several views of the future of code. Some folks are experimenting with entirely new languages built with the LLM in mind; others think that existing languages, especially strictly typed languages like TypeScript and Rust, will be the best fit for LLMs. It's an overview article, one with lots of quotations but not much analysis of its own, and it's worth a read as a good overview of the discussion.

I’m in­ter­ested to see how all this will play out. I do think there’s still a role for hu­mans to work with LLMs to build use­ful ab­strac­tions in which to talk about what the code does - es­sen­tially the DDD no­tion of Ubiquitous Language. Last year Unmesh and I talked about grow­ing a lan­guage with LLMs. As Unmesh put it

Programming is­n’t just typ­ing cod­ing syn­tax that com­put­ers can un­der­stand and ex­e­cute; it’s shap­ing a so­lu­tion. We slice the prob­lem into fo­cused pieces, bind re­lated data and be­hav­iour to­gether, and—cru­cially—choose names that ex­pose in­tent. Good names cut through com­plex­ity and turn code into a schematic every­one can fol­low. The most cre­ative act is this con­tin­ual weav­ing of names that re­veal the struc­ture of the so­lu­tion that maps clearly to the prob­lem we are try­ing to solve.

Palantir Employees Are Starting to Wonder if They're the Bad Guys

www.wired.com

It took just a few months of President Donald Trump's second term for Palantir employees to question their company's commitments to civil liberties. Last fall, Palantir seemed to become the technological backbone of Trump's immigration enforcement machinery, providing software for identifying, tracking, and helping deport immigrants on behalf of the Department of Homeland Security (DHS), when current and former employees started ringing the alarm.

Around that time, two former employees reconnected by phone. Right as they picked up the call, one of them asked, "Are you tracking Palantir's descent into fascism?"

"That was their greeting," the other former employee says. "There's this feeling not of 'Oh, this is unpopular and hard,' but, 'This feels wrong.'"

Palantir was founded—with ini­tial ven­ture cap­i­tal in­vest­ment from the CIA—at a mo­ment of na­tional con­sen­sus fol­low­ing the September 11, 2001 at­tacks, when many saw fight­ing ter­ror­ism abroad as the most crit­i­cal mis­sion fac­ing the US. The com­pany, which was co­founded by tech bil­lion­aire Peter Thiel, sells soft­ware that acts as a high-pow­ered data ag­gre­ga­tion and analy­sis tool pow­er­ing every­thing from pri­vate busi­nesses to the US mil­i­tary’s tar­get­ing sys­tems.

For the last 20 years, employees could accept the intense external criticism and awkward conversations with family and friends about working for a company named after J. R. R. Tolkien's corrupting all-seeing orb. But a year into Trump's second term, as Palantir deepens its relationship with an administration many workers fear is wreaking havoc at home, employees are finally raising these concerns internally, as the US's war on immigrants, war in Iran, and even company-released manifestos have forced them to rethink the role they play in it all.

"We hire the best and brightest talent to help defend America and its allies and to build and deploy our software to help governments and businesses around the world. Palantir is no monolith of belief, nor should we be," a Palantir spokesperson said in a statement. "We all pride ourselves on a culture of fierce internal dialogue and even disagreement over the complex areas we work on. That has been true from our founding and remains true today."

The broad story of Palantir as told to itself and to employees was that "coming out of 9/11 we knew that there was going to be this big push for safety, and we were worried that that safety might infringe on civil liberties," one former employee tells WIRED. "And now the threat's coming from within. I think there's a bit of an identity crisis and a bit of a challenge. We were supposed to be the ones who were preventing a lot of these abuses. Now we're not preventing them. We seem to be enabling them."

Palantir has always had a secretive reputation, forbidding employees from speaking to the press and requiring alumni to sign non-disparagement agreements. But throughout the company's history, management has always at least appeared to be open to engagement and internal criticism, multiple employees say. Over the last year, however, much of that feedback has been met by philosophical soliloquies and redirection. "It's never been really that people are afraid of speaking up against Karp. It's more a question of what it would do, if anything," one current employee tells WIRED.

While in­ter­nal ten­sions within Palantir have grown over the last year, they reached a boil­ing point in January af­ter the vi­o­lent killing of Alex Pretti, a nurse who was shot and killed by fed­eral agents dur­ing protests against Immigration and Customs Enforcement (ICE) in Minneapolis. Employees from across the com­pany com­mented in a Slack thread ded­i­cated to the news de­mand­ing more in­for­ma­tion about the com­pa­ny’s re­la­tion­ship with ICE from man­age­ment and CEO Alex Karp.

"Our involvement with ice has been internally swept under the rug under Trump2 too much," one person wrote in a Slack message WIRED reported at the time. "We need an understanding of our involvement here."

Around this time, Palantir started wiping Slack conversations after seven days in at least one channel where most of the internal debate takes place, #palantir-in-the-news. Because the decision wasn't formally announced before the policy rolled out, one worker who noticed the deletions asked in the channel why the company was removing "relevant internal discourse on current events."

A mem­ber of Palantir’s cy­ber­se­cu­rity team re­sponded, writ­ing that the de­ci­sion was made in re­sponse to leaks.

This period led Palantir management to release an updated wiki, a collection of blog posts explaining the ICE contract, in which the company defended its work with DHS. Management wrote that the technology the company provides is "making a difference in mitigating risks while enabling targeted outcomes."

Palantir management ran defense by holding a handful of AMA (ask me anything) forums across the company with leadership, including chief technology officer Shyam Sankar and members of its privacy and civil liberties (PCL) teams.

At least one of these AMAs was organized independently of PCL leadership by two team leads, including one who had worked directly on the ICE contract for a period of time. "This was very rogue," a PCL employee who worked on the ICE contract said in a February AMA, a recording of which was obtained by WIRED. "Courtney [Bowman, head of the privacy and civil liberties team] doesn't know that I'm spending three hours this week talking to IMPLs [Palantir terminology for its client-facing product teams], but I think this is the only real way to start going in the right direction."

Throughout the lengthy call, employees working on a variety of Palantir's defense projects posed hard questions: Could ICE agents delete audit logs in Palantir's software? Could agents create harmful workflows on their own without the company's help? What is the most malicious thing that could come out of this work?

Answering these questions, the PCL employee who worked on the ICE contract said that "a sufficiently malicious customer is, like, basically impossible to prevent at the moment" and could only be controlled through "auditing to prove what happened" and legal action after the fact if the customer breached the company's contract.

At one point during the call, one of the employees tried to level with the group, explaining that Palantir's work with ICE was a priority for Karp and something that likely wouldn't change any time soon.

"Karp really wants to do this and continuously wants this," they said. "We're largely at the role of trying to give him suggestions and trying to redirect him, but it was largely unsuccessful and we seem to be on a very sharp path of continuing to expand this workflow."

Around the time of these forums, Karp sat down for a prerecorded interview with Bowman, seemingly to discuss Palantir's contracts with ICE, but refused to broach the topic directly. Instead, Karp suggested that employees interested in the work sign nondisclosure agreements before receiving more detailed information.

Then came the deadly February 28 missile strike on an Iranian elementary school, on the first full day of the Trump administration and Israel's war in Iran. The US is the only known country in the conflict to use that specific type of missile. More than 120 children were killed when a Tomahawk missile struck the school, kicking off a series of investigations that concluded that the US was responsible and that surveillance tools like Palantir's Maven system had been used during that day's strikes. For a company full of employees already reeling over its work with ICE, possible involvement in the deaths of children was a breaking point.

"I guess the root of what I'm asking is … were we involved, and are we doing anything to stop a repeat if we were," one employee asked in the Palantir news Slack channel. Some employees posed similar questions in the thread, while others criticized them for discussing what could be considered classified information in a Slack channel open to the entire company. The investigation is ongoing.

A Palantir spokesperson said the company was "proud" to support the US military "across Democratic and Republican administrations."

In March, Karp gave an interview to CNBC claiming that AI could undermine the power of "humanities-trained—largely Democratic—voters" and increase the power of working-class male voters. Critics called the statements concerning, and so did employees internally: "Is it true that AI disruption is going to disproportionately negatively affect women and people who vote Democrat? and if it is, why are we cool with that?" one worker asked on Slack in a channel dedicated to news about Palantir.

Palantir's leadership incensed workers yet again this week after the company posted a Saturday afternoon manifesto reducing Karp's recent book, The Technological Republic, to 22 points. The post, which includes many of Karp's long-standing beliefs on how Silicon Valley could better serve US national interests, goes as far as suggesting that the US should consider reinstating the draft. Critics called the manifesto fascist.

Internally, the post alarmed some workers, who huddled in a Slack thread on Monday morning to question leadership over its decision to post it in the first place.

"I'm curious why this had to be posted. Especially on the company account. On the practical level every time stuff like that gets posted it gets harder for us to sell the software outside of the US (for sure in the current political climate), and I doubt we need this in the US?" wrote one frustrated employee. The message received more than 50 "+1" emojis.

"Wether [sic] we acknowledge it or not, this impacts us all personally," another worker wrote on Monday. "I've already had multiple friends reach out and ask what the hell did we post." This message received nearly two dozen "+1" emoji reactions.

"Yeah it turns out that short-form summaries of the book's long-form ideas are easy to misrepresent. It's like we taped a 'kick me' sign on our own backs," a third worker wrote. "I hope no one who decided to put this out is surprised that we are, in fact, getting kicked."

These conversations, tinged with shame and uncertainty, have seemingly popped up in internal channels whenever Palantir has been in the news over the last year. "I think the only thing not different is a lot of folks are still incredibly wary about leaks and talking to the press," one current employee tells WIRED, describing how the internal company culture has evolved over the last year.

None of this dissent seems to bother Karp, who recently told workers that the company is "behind the curve internally" when it comes to popularity. Here, he's been consistent; in March 2024, Karp told a CNBC reporter that "if you have a position that does not cost you ever to lose an employee, it's not a position."

But for employees, the culture shift feels intentional. "I don't want to assert that I have knowledge of what's going on in their internal mind," one former worker tells WIRED. "But maybe it's gotten to a place where encouraging independent thought and questioning leads to some bad conclusions."
