10 interesting stories served every morning and every evening.




1 796 shares, 35 trendiness

All elementary functions from a single binary operator

...

Read the original on arxiv.org »

2 678 shares, 76 trendiness

Someone Bought 30 WordPress Plugins and Planted a Backdoor in All of Them.

Last week, I wrote about catching a supply chain attack on a WordPress plugin called Widget Logic. A trusted name, acquired by a new owner, turned into something malicious. It happened again. This time at a much larger scale.

Ricky from Improve & Grow emailed us about an alert he saw in the WordPress dashboard for a client site. The notice was from the WordPress.org Plugins Team, warning that a plugin called Countdown Timer Ultimate contained code that could allow unauthorized third-party access.

I ran a full security audit on the site. The plugin itself had already been force-updated by WordPress.org to version 2.6.9.1, which was supposed to clean things up. But the damage was already done.

The plugin's wpos-analytics module had phoned home to analytics.essentialplugin.com, downloaded a backdoor file called wp-comments-posts.php (designed to look like the core file wp-comments-post.php), and used it to inject a massive block of PHP into wp-config.php.

The injected code was sophisticated. It fetched spam links, redirects, and fake pages from a command-and-control server. It only showed the spam to Googlebot, making it invisible to site owners. And here is the wildest part. It resolved its C2 domain through an Ethereum smart contract, querying public blockchain RPC endpoints. Traditional domain takedowns would not work because the attacker could update the smart contract to point to a new domain at any time.
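
The cloaking trick is worth spelling out. Here is a minimal Python sketch of the pattern (hypothetical names, not the actual injected PHP): spam renders only when the visitor claims to be a search crawler, so a site owner browsing their own pages never sees it.

```python
# Hypothetical sketch of search-engine cloaking; NOT the actual injected code.
# Spam is served only to visitors identifying as crawlers, so humans
# (including the site owner) always see the clean page.

CRAWLER_TOKENS = ("Googlebot", "Bingbot")

def is_crawler(user_agent: str) -> bool:
    """Return True when the request claims to come from a search crawler."""
    return any(token in user_agent for token in CRAWLER_TOKENS)

def render_page(user_agent: str, real_html: str, spam_html: str) -> str:
    """Serve spam to crawlers only, keeping the attack invisible to humans."""
    return spam_html if is_crawler(user_agent) else real_html

# A human visitor and Googlebot get different content for the same URL:
print(render_page("Mozilla/5.0 (Windows NT 10.0)", "<p>real</p>", "<a>spam</a>"))
print(render_page("Mozilla/5.0 (compatible; Googlebot/2.1)", "<p>real</p>", "<a>spam</a>"))
```

This is also why the injection went unnoticed for so long: only Google's crawler, not any human, ever received the spam markup.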

CaptainCore keeps daily restic backups. I extracted wp-config.php from 8 different backup dates and compared file sizes, binary-search style.

The injection happened on April 6, 2026, between 04:22 and 11:06 UTC. A 6-hour 44-minute window.
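
The bisection itself fits in a few lines. This is an illustration with made-up snapshot data, assuming the clean wp-config.php size is known; in practice each size came from extracting the file out of a restic snapshot:

```python
from bisect import bisect_left

# Made-up (snapshot timestamp, wp-config.php size in bytes) pairs, oldest
# first. The real sizes came from extracting the file out of daily backups.
snapshots = [
    ("2026-04-03", 3103),
    ("2026-04-04", 3103),
    ("2026-04-05", 3103),
    ("2026-04-06 04:22", 3103),   # last clean snapshot
    ("2026-04-06 11:06", 48211),  # first snapshot with the injected PHP block
    ("2026-04-07", 48211),
]

def first_infected(snaps, clean_size):
    """Binary-search for the first snapshot whose file size differs from clean.

    Assumes at least one infected snapshot and that sizes change exactly once.
    """
    sizes_changed = [size != clean_size for _, size in snaps]
    # bisect_left over a False..True sequence finds the first True,
    # i.e. the first snapshot where the file size changed.
    return snaps[bisect_left(sizes_changed, True)]

when, size = first_infected(snapshots, clean_size=3103)
print(when, size)
```

The window between the last clean and first changed snapshot is exactly the 6-hour 44-minute range quoted above.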

I traced the plugin's history through 939 quicksave snapshots. The plugin had been on the site since January 2019. The wpos-analytics module was always there, functioning as a legitimate analytics opt-in system for years.

Then came version 2.6.7, released August 8, 2025. The changelog said, “Check compatibility with WordPress version 6.8.2.” What it actually did was add 191 lines of code, including a PHP deserialization backdoor. The class-anylc-admin.php file grew from 473 to 664 lines.

The new code introduced three things:

A fetch_ver_info() method that calls file_get_contents() on the attacker's server and passes the response to @unserialize()

A version_info_clean() method that executes @$clean($this->version_cache, $this->changelog) where all three values come from the unserialized remote data

That is a textbook arbitrary function call. The remote server controls the function name, the arguments, everything. It sat dormant for 8 months before being activated on April 5-6, 2026.
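
To see why that pattern is so dangerous, here is a rough Python analogue of PHP's variable function call `$clean($a, $b)`. The payload shape is hypothetical, not the real C2 response:

```python
# Python analogue of PHP's variable-function-call pattern, $clean($a, $b):
# when attacker-controlled data chooses the callable's name and arguments,
# the remote server decides what runs. Hypothetical payload, not real C2 data.

import builtins

def version_info_clean(payload: dict):
    """Mimics the backdoor shape: function name and args all come remotely."""
    fn = getattr(builtins, payload["clean"])  # attacker picks any builtin
    return fn(*payload["args"])

# A harmless demonstration payload; a real attacker would pick something
# like eval/exec and ship arbitrary source code as the argument.
demo = {"clean": "len", "args": ["injected"]}
print(version_info_clean(demo))
```

Nothing in the victim's code constrains what gets called, which is exactly why a deserialized remote blob feeding a variable function call is game over.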

This is where it gets interesting. The original plugin was built by Minesh Shah, Anoop Ranawat, and Pratik Jain. An India-based team that operated under “WP Online Support” starting around 2015. They later rebranded to “Essential Plugin” and grew the portfolio to 30+ free plugins with premium versions.

By late 2024, revenue had declined 35-45%. Minesh listed the entire business on Flippa. A buyer identified only as “Kris,” with a background in SEO, crypto, and online gambling marketing, purchased everything for six figures. Flippa even published a case study about the sale in July 2025.

The buyer's very first SVN commit was the backdoor.

On April 7, 2026, the WordPress.org Plugins Team permanently closed every plugin from the Essential Plugin author. At least 30 plugins, all on the same day. Here are the ones I confirmed:

* SlidersPack — All in One Image Sliders — sliderspack-all-in-one-image-sliders

All permanently closed. The author search on WordPress.org returns zero results. The analytics.essentialplugin.com endpoint now returns {"message":"closed"}.

In 2017, a buyer using the alias “Daley Tias” purchased the Display Widgets plugin (200,000 installs) for $15,000 and injected payday loan spam. That buyer went on to compromise at least 9 plugins the same way.

The Essential Plugin case is the same playbook at a larger scale. 30+ plugins. Hundreds of thousands of active installations. A legitimate 8-year-old business acquired through a public marketplace and weaponized within months.

WordPress.org's forced update added return; statements to disable the phone-home functions. That is a band-aid. The wpos-analytics module is still there with all its code. I built patched versions with the entire backdoor module stripped out.

I scanned my entire fleet and found 12 of the 26 Essential Plugin plugins installed across 22 customer sites. I patched 10 of them (one had no backdoor module, one was a different “pro” fork by the original authors). Here are the patched versions, hosted permanently on B2:

# Countdown Timer Ultimate

wp plugin install https://plugins.captaincore.io/countdown-timer-ultimate-2.6.9.1-patched.zip --force

# Popup Anything on Click

wp plugin install https://plugins.captaincore.io/popup-anything-on-click-2.9.1.1-patched.zip --force

# WP Testimonial with Widget

wp plugin install https://plugins.captaincore.io/wp-testimonial-with-widget-3.5.1-patched.zip --force

# WP Team Showcase and Slider

wp plugin install https://plugins.captaincore.io/wp-team-showcase-and-slider-2.8.6.1-patched.zip --force

# WP FAQ (sp-faq)

wp plugin install https://plugins.captaincore.io/sp-faq-3.9.5.1-patched.zip --force

# Timeline and History Slider

wp plugin install https://plugins.captaincore.io/timeline-and-history-slider-2.4.5.1-patched.zip --force

# Album and Image Gallery plus Lightbox

wp plugin install https://plugins.captaincore.io/album-and-image-gallery-plus-lightbox-2.1.8.1-patched.zip --force

# SP News and Widget

wp plugin install https://plugins.captaincore.io/sp-news-and-widget-5.0.6-patched.zip --force

# WP Blog and Widgets

wp plugin install https://plugins.captaincore.io/wp-blog-and-widgets-2.6.6.1-patched.zip --force

# Featured Post Creative

wp plugin install https://plugins.captaincore.io/featured-post-creative-1.5.7-patched.zip --force

# Post Grid and Filter Ultimate

wp plugin install https://plugins.captaincore.io/post-grid-and-filter-ultimate-1.7.4-patched.zip --force

Each patched version removes the entire wpos-analytics directory, deletes the loader function from the main plugin file, and bumps the version to -patched. The plugin itself continues to work normally.

The process is straightforward with Claude Code. Point it at this article for context, tell it which plugin you need patched, and it can strip the wpos-analytics module the same way I did. The pattern is identical across all of the Essential Plugin plugins:

Delete the wpos-analytics/ directory from the plugin

Remove the loader function block in the main plugin PHP file (search for “Plugin Wpos Analytics Data Starts” or wpos_analytics_anl)
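
Those two steps can be sketched as a small script. This is a rough sketch, not the exact patch applied above: the marker string and module path are from this article, the line-filtering heuristic is a simplification, and the result deserves a manual diff before deploying. Run it against a copy of the plugin, not a live site.

```python
# Rough sketch of the two patch steps: delete the backdoor module, then
# strip the loader lines from the main plugin file. Simplified heuristic;
# review the resulting diff by hand before deploying.
import shutil
from pathlib import Path

MARKER = "Plugin Wpos Analytics Data Starts"

def patch_plugin(plugin_dir: str, main_file: str) -> None:
    plugin = Path(plugin_dir)

    # Step 1: delete the wpos-analytics/ module wholesale.
    shutil.rmtree(plugin / "wpos-analytics", ignore_errors=True)

    # Step 2: drop the loader lines from the main plugin file. Here we
    # simply remove lines mentioning the marker or the loader prefix; the
    # real block spans several lines and deserves a manual diff review.
    main = plugin / main_file
    kept = [
        line for line in main.read_text().splitlines(keepends=True)
        if MARKER not in line and "wpos_analytics_anl" not in line
    ]
    main.write_text("".join(kept))
```

The rest of the plugin is left untouched, which matches how the patched builds above keep working normally.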

Two supply chain attacks in two weeks. Both followed the same pattern. Buy a trusted plugin with an established install base, inherit the WordPress.org commit access, and inject malicious code. The Flippa listing for Essential Plugin was public. The buyer's background in SEO and gambling marketing was public. And yet the acquisition sailed through without any review from WordPress.org.

WordPress.org has no mechanism to flag or review plugin ownership transfers. There is no “change of control” notification to users. No additional code review triggered by a new committer. The Plugins Team responded quickly once the attack was discovered. But 8 months passed between the backdoor being planted and being caught.

If you manage WordPress sites, search your fleet for any of the 26 plugin slugs listed above. If you find one, patch it or remove it. And check wp-config.php.

...

Read the original on anchor.host »

3 421 shares, 33 trendiness

Servo aims to empower developers with a lightweight, high-performance alternative for embedding web technologies in applications.

Servo is now available on crates.io

Today the Servo team has released v0.1.0 of the servo crate. This is our first crates.io release of the servo crate that allows Servo to be used as a library.

We currently do not have any plans of publishing our demo browser servoshell to crates.io. In the 5 releases since our initial GitHub release in October 2025, our release process has matured, with the main “bottleneck” now being the human-written monthly blog post. Since we're quite excited about this release, we decided not to wait for the monthly blog post to be finished, but promise to deliver the monthly update in the coming weeks.

As you can see from the version number, this release is not a 1.0 release. In fact, we still haven't finished discussing what 1.0 means for Servo. Nevertheless, the increased version number reflects our growing confidence in Servo's embedding API and its ability to meet some users' needs.

In the meantime we also decided to offer a long-term support (LTS) version of Servo, since breaking changes in the regular monthly releases are expected and some embedders might prefer doing major upgrades on a scheduled half-yearly basis while still receiving security updates and (hopefully!) some migration guides. For more details on the LTS release, see the respective section in the Servo book.

...

Read the original on servo.org »

4 408 shares, 61 trendiness

GitHub Stacked PRs

Large pull requests are hard to review, slow to merge, and prone to conflicts. Reviewers lose context, feedback quality drops, and the whole team slows down. Stacked PRs solve this by breaking big changes into a chain of small, focused pull requests that build on each other, each one independently reviewable.

A stack is a series of pull requests in the same repository where each PR targets the branch of the PR below it, forming an ordered chain that ultimately lands on your main branch.

GitHub understands stacks end-to-end: the pull request UI shows a stack map so reviewers can navigate between layers, branch protection rules are enforced against the final target branch (not just the direct base), and CI runs for every PR in the stack as if they were targeting the final branch.

The gh stack CLI handles the local workflow: creating branches, managing rebases, pushing to GitHub, and creating PRs with the correct base branches. On GitHub, the PR UI gives reviewers the context they need: a stack map for navigation, focused diffs for each layer, and proper rules enforcement.

When you're ready to merge, you can merge all or a part of the stack. Each PR can be merged directly or through the merge queue. After a merge, the remaining PRs in the stack are automatically rebased so the lowest unmerged PR targets the base branch.

Ready to dive in? Start with the Quick Start guide or read the full overview.

...

Read the original on github.github.com »

5 400 shares, 17 trendiness

How the "AI Loser" may end up winning

A few weeks ago I wrote about how I thought intelligence is becoming a commodity. The idea is quite straightforward, and widespread now: when everyone races to build the best model, the models get better, but so does every other model eventually. Every dollar spent on a bigger training run makes the previous one cheaper. The distance between frontier, second-best, and open-source alternatives is collapsing fast (actually Gemma4, Kimi K2.5, and GLM 5.1 are becoming my bedside models these days). Even more, as models become better, the unit of intelligence that can be deployed on local hardware with lower capabilities increases significantly.

The irony of this situation is that this commoditisation of intelligence is benefiting the company that everyone was framing as the “AI loser”: Apple.

There's a version of the last three years where Apple genuinely failed at AI. They had Siri before anyone had a serious voice assistant, and then watched ChatGPT eat their lunch from its very first release (even before it had introduced native voice interaction). Apple didn't have a flagship frontier (or even a vanity open-source) model, and no $500B compute commitment with the usual suspects. Meanwhile, the rest of the AI labs and big tech companies were racing to win the next state-of-the-art benchmark by burning bags of cash.

What this also meant is that while these companies were burning money at a rate that would make a sovereign wealth fund uncomfortable, Apple was (and still is) sitting on a pile of undeployed cash (to the point of even increasing their stock buybacks), giving them optionality.

To me, OpenAI is the most paradigmatic example of this “infinite money burning machine”. OpenAI raised at a $300B valuation and then shut down Sora, the video product they'd been positioning as a creative industry flagship, because it was running at roughly $15M a day in costs against $2.1M in daily revenue. Disney had already signed a three-year licensing deal for Sora to generate content from Marvel, Pixar, and Star Wars characters. They were finalising a $1B equity stake in OpenAI. When Sora died, so did the billion. A $1B investment evaporated, because the product it was staked on couldn't pay for itself (reducing the buffer that accommodates their daily burn).

On the infrastructure side: OpenAI signed non-binding letters of intent with Samsung and SK Hynix for up to 900,000 DRAM wafers per month, roughly 40% of global output. These were of course non-binding. Micron, reading the demand signal, shut down its 29-year-old Crucial consumer memory brand to redirect all capacity toward AI customers. Then Stargate Texas was cancelled, OpenAI and Oracle couldn't agree terms, and the demand that had justified Micron's entire strategic pivot simply vanished. Micron's stock crashed.

I don't know about you, but I don't see these behaviours as those of someone who is winning the AI race, regardless of how well their models do in benchmarks and how much they are burning on infrastructure. A small miscalculation in the expected revenue, and you are out of the game (I am actually of the opinion that without some kind of bailout, OpenAI could be bankrupt in the next 18-24 months, but I am horrible at predictions).

My sense is that the labs' bet was always that raw model capability, i.e. intelligence, along with the infrastructure required to run it, would stay scarce. Those who managed to secure the best model and the infrastructure to run it at scale would get the best moat. But I am afraid that having the best model in itself may not be enough moving forward. Smaller, cheaper models are becoming as capable as previous versions of the frontier models.

The best recent example I can think of is Gemma 4, Google's open-weight model. It was built to run on a phone, scores 85.2% on MMLU Pro, and matches Claude Sonnet 4.5 Thinking on the Arena leaderboard. 2 million downloads in its first week. Models that would have been state-of-the-art eighteen months ago now run on a laptop, and they get better every quarter.

If you haven't tried Gemma4 yourself, I highly recommend it. I am running it on my AMD Ryzen AI Max+, and its performance in terms of tokens per second and intelligence is so good that I have already migrated some of my personal tools to use this model as the backend without visibly impacting their output. In the next few months, this trend could really change the way we access intelligence.

I feel that some of the labs see this coming. Anthropic has been particularly aggressive about it: they are releasing new (actually useful) tools every day that work like a charm with their models in order to lock users into their ecosystem. Claude Code for developers, Claude Cowork for teams, the recent Claude Managed Sessions to orchestrate agents, all designed to put Claude inside workflows people are already in.

The logic behind it: if the model itself won't hold the moat, capture the usage layer and make switching painful. I think this is brilliant, and seeing how much Anthropic is growing in number of users and revenue, it seems to be paying off. The economics of their plans are still rough, though. One analysis found a max-plan subscriber consuming $27,000 worth of compute on their $200 Max subscription. The labs are subsidising the demand they're chasing, which justifies their level of burn (let's see how long they can afford these subsidies).

Apple, by contrast, has spent almost nothing on AI infrastructure and on subsidising users' token burn. And this may be giving them more optionality and leverage than any of the other companies that jumped head first into the AI race.

In that earlier post, I argued that if intelligence becomes abundant, context becomes the scarce resource. A model that can reason about anything but knows nothing about you or the environment it operates in is a generic tool. What makes AI genuinely useful day-to-day is reasoning plus personal context: your messages, your calendar, your code, your tools, your health data, your photos, your habits. This is where Anthropic is doing an amazing job with their “Claude suite”.

But Apple already has all this context and access to your environment through their 2.5 billion active devices. Each one is a context mine that users have been filling for years. Health data from Apple Watch. Every photo taken on an iPhone. Notes, messages, location history, app behaviour, emails, and awareness of your environment through the pool of sensors on your device. Why build a commodity when they already have the context that can become their moat?

And they even have the ability to keep all this data on-device, which is where the “Privacy. That's iPhone.” positioning becomes something more than a PR strategy, and could actually make a comeback as one of their core value propositions. Apple spent years using privacy as a differentiator against the ad-driven models of Google and Meta. It worked, but it always felt a bit abstract and, honestly, fake. Now it could become really concrete. Would you hand OpenAI your medical records and fifteen years of photos to get better AI answers? Probably not. Some are, but I personally wouldn't like Sam to have that personal data from me. Would you let a model running entirely on your device (no network request, no data leaving your phone) access all of that? That's a different question. The on-device model gets full context because it never leaves the hardware. Apple built the reputation and the architecture for this when no one else thought it mattered.

Of course, there are still technological barriers to making this possible, but I feel like we may be getting there.

In this context, the Gemini deal, where Apple signed a $1B agreement to license Google's frontier model for the queries that need cloud-scale reasoning, makes total sense. Apple didn't build a frontier model. They bought access to one, at a price that's a rounding error against OpenAI's weekly compute bill. What they kept in-house: the context layer, the on-device stack, and the operating system that mediates everything.

Turns out Apple had another unexpected lever for AI, as shown by the Mac Mini craze after OpenClaw's release. Apple Silicon wasn't built specifically for AI; it was built for efficiency, for battery life, for thermal performance, for the hardware/software co-design that Apple had been running for fifteen years. But it turned out to be the perfect architecture to run local models efficiently.

The key decision is unified memory. On a conventional architecture (that of most laptops, and even traditional data center-grade GPUs) the CPU and GPU are separate chips with separate memory pools. Moving data between them is slow and power-hungry. Nvidia's GPUs are extremely fast at matrix operations, but they sit on the other side of a PCIe bus from the CPU, and feeding them is a constant bottleneck (as discussed when presenting the difference between DRAM and HBM in this post from a few weeks ago).

Apple's M-series and A-series chips put the CPU, GPU, and Neural Engine (their proprietary accelerator) on the same die, sharing one high-bandwidth memory pool. No bus crossing, no transfer overhead, no latency switching between CPU and GPU work. For video editing or compiling Xcode, this is a nice efficiency win. For LLM inference, this has been key.

As described also in my post about RAM memory and TurboQuant, LLM inference is currently memory-bandwidth bound, not compute bound. The bottleneck isn't so much how fast you can multiply matrices; it's how fast you can stream model weights from memory into the compute units, and how big a KV cache you can store to avoid having to re-compute it. Apple's unified pool gives every compute unit direct, high-bandwidth access to the same memory simultaneously. That's exactly the operation inference needs.

This is what makes the “LLM in a Flash” technique work so well on Apple hardware. Someone recently ran Qwen 397B, a 209GB model, on an M3 Max Mac at ~5.7 tokens per second, using only 5.5GB of active RAM. The weights live on the SSD and stream in at ~17.5 GB/s as needed. This works because Qwen is a mixture-of-experts architecture: each token only activates a small subset of expert layers, so you only ever need a fraction of the 209GB resident in memory. The SSD throughput Apple achieves (faster than their own figures from the original LLM in a Flash paper) comes from storage architecture they built for iPhone responsiveness, not AI. Claude wrote the ~5,000 lines of Objective-C and Metal shaders to make it all work. A 400-billion-parameter model, on a consumer laptop, from 5.5GB of RAM (another win of the autoresearch flow discussed in this newsletter).
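
Taking those figures at face value, and assuming SSD streaming is the bottleneck, a quick back-of-the-envelope check shows how little of the model each token actually touches:

```python
# Back-of-the-envelope check using the figures above, assuming SSD
# streaming is the bottleneck (so bytes/token = throughput / tokens-per-s).
ssd_throughput_gb_s = 17.5   # observed streaming rate from the SSD
tokens_per_second = 5.7      # observed generation speed
model_size_gb = 209          # quantized Qwen MoE size on disk

# Data moved per generated token: only the active experts' weights.
gb_per_token = ssd_throughput_gb_s / tokens_per_second
print(f"{gb_per_token:.1f} GB streamed per token")  # ~3.1 GB

# That is a small fraction of the full model, which is what makes
# mixture-of-experts streaming viable at all:
print(f"{gb_per_token / model_size_gb:.1%} of the model touched per token")
```

Roughly 3 GB per token out of a 209GB model: the sparse-expert structure, not raw bandwidth alone, is what makes the trick work.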

What I find most interesting about all of this is the platform dynamic it can result in. Think about the App Store. Apple didn't build the apps; they built the platform where apps ran best, and the ecosystem followed. Developers didn't target iOS because Apple asked, they targeted it because the users were there, the tooling was good, the hardware was consistent. My feeling is that the same thing could happen now with local inference. MLX is already a de facto framework for on-device AI. Gemma, Qwen, Mistral, the most relevant model architectures have MLX support. Apple doesn't need to win the model race if they manage to become the de facto platform where the models (or the agents that use them) run. Again, a great example of this is the Mac Mini craze after OpenClaw went viral.

I keep going back and forth on this, honestly, and I still don't know if this was Apple's strategy all along, or whether they didn't feel in a position to make a bet and are just flowing as events unfold, maximising their optionality.

The hardware/software co-design strategy has been a key focus for years, and one that I've always agreed with myself (as an electrical engineer by training, I've always been into hardware/software co-design). If you can afford it, I think that's the right approach. The privacy positioning, the on-device processing focus, the decision to build their own silicon when the rest of the industry was happy buying Nvidia and Intel, all of those were choices Apple made when they were commercially risky and the direction wasn't obvious. It is true that they were made with cost and governance in mind, not AI, but it turned out well for them.

What Apple couldn't have planned (or could they?) is that their unified memory architecture would be a perfect fit for LLMs, and that open-weight models would get this capable, this fast, removing the need for huge investment in AI infrastructure on their side. That the model race would commoditise intelligence as quickly as it did. Or that someone would stream a 400B parameter model from an SSD and it would actually work.

So some of this is luck. But it's the kind of luck that finds you when you built the right foundation, even if you built it for completely different reasons. They were definitely well-positioned.

The rest of the industry spent three years racing to see who could build the best model, with Apple looking on from the sidelines, waiting to understand how their devices and own ecosystem could fit in this future. I don't know if this is exactly the case, but I feel this was smart. Risky but smart.

I genuinely don't know how this plays out over the next few years. The labs are not standing still, and Apple's AI track record (looking at you, Siri, you already suck a bit) is not exactly flawless. But it's hard to imagine a world where 2.5 billion devices, carrying your entire personal context, running capable models locally on purpose-built silicon, with Gemini on call for the hard stuff, incurring variable cost for inference instead of expensive capex investment, could be a bad position in a future where AI is everywhere.

Whether that was strategy or fortune, I'll leave for you to decide. And if you do, please let me know what you think about it. My TL;DR is that, to my surprise, I am still bullish about Apple and their relevance in an AI-centric future.

Disclaimer: To frame the opinion of this post, I just want to be clear that I am not one of those Apple fanboys. Proof of this is that this post was written from a Linux machine and that I don't even own a Mac :)

...

Read the original on adlrocha.substack.com »

6 388 shares, 19 trendiness

Why Most Engineering Organizations Are Flying Blind

This post works through the financial logic of software teams, from what a team of eight engineers actually costs per month to what it needs to generate to be economically viable. It also examines why most teams have no visibility into either number, how that condition was built over two decades, and what the arrival of LLMs now means for organizations that have been treating large engineering headcount as an asset.

Software development is one of the most capital-intensive activities a modern company undertakes, and it is also one of the least understood from a financial perspective. The people making daily decisions about what to build, what to delay, and what to abandon are rarely given the financial context to understand what those decisions actually cost. This is not a coincidence. It is a structural condition that most organizations have maintained, quietly and consistently, for roughly two decades.

A software engineer in Western Europe costs somewhere between €120,000 and €150,000 per year when you account for salary, social fees, pension contributions, equipment, social activities, management overhead, and office space. Call it €130,000 as a reasonable middle estimate. A team of eight engineers therefore costs approximately €1,040,000 per year, or €87,000 per month, or roughly €4,000 for every working day.

Most engineers do not know this number. Many of their managers do not either. And in the organizations where someone does know it, the number rarely makes its way into the conversations where prioritization decisions are actually made.

This matters because every decision a team makes carries an implicit cost that compounds over time. Choosing to spend three weeks on a feature that serves 2% of users is a €60,000 decision. Delaying an operational improvement for a quarter is a decision with a calculable daily price tag. Rebuilding a platform because the current one feels embarrassing, rather than because customers are leaving, is a capital allocation choice that would look very different if the people making it were spending their own money.

Consider a team of eight engineers whose mission is to build and maintain an internal developer platform serving one hundred other engineers. This is a common organizational structure, and it is one where the financial logic is rarely examined carefully.

The team costs €87,000 per month. To justify that cost, the platform they build needs to generate at least €87,000 per month in value for the engineers who use it. The most direct way to measure that value is through time saved, since the platform's purpose is to make other engineers more productive.

At a cost of €130,000 per year, one engineer costs approximately €10,800 per month, or around €65 per working hour. For the platform team to break even, their platform needs to save the hundred engineers they serve a combined total of 1,340 hours per month. That is 13.4 hours per engineer per month, or roughly three hours per week per person.
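
The arithmetic is worth laying out explicitly. The only input not stated outright above is working hours per month; ~166 is an assumption that reproduces the €65/hour figure, and the small gap versus the quoted 1,340 hours comes from rounding at intermediate steps:

```python
# The break-even arithmetic above, step by step. hours_per_month is an
# assumption (~166) chosen to reproduce the article's EUR 65/hour figure.
cost_per_engineer_year = 130_000      # EUR, fully loaded
team_size = 8
served_engineers = 100
hours_per_month = 166                 # assumed working hours per month

team_cost_month = cost_per_engineer_year * team_size / 12      # ~EUR 87,000
cost_per_hour = cost_per_engineer_year / 12 / hours_per_month  # ~EUR 65

breakeven_hours = team_cost_month / cost_per_hour              # hours saved/month
per_engineer = breakeven_hours / served_engineers              # ~13 h each

print(f"team cost: EUR {team_cost_month:,.0f}/month")
print(f"break-even: {breakeven_hours:.0f} h/month, {per_engineer:.1f} h per served engineer")
```

Note that break-even collapses to team_size × hours_per_month: the platform team must save the organization at least as many engineer-hours as it consumes itself.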

Three hours per week is achiev­able. A well-built plat­form that elim­i­nates man­ual de­ploy­ment steps, re­duces en­vi­ron­ment setup time, or re­moves the need for repet­i­tive con­fig­u­ra­tion work can eas­ily clear that bar. Time saved is the most di­rect mea­sure for a plat­form team, though value can also come from re­duc­ing out­ages, which car­ries a di­rect rev­enue im­pact of its own. But the ques­tion worth ask­ing is whether any­one on that team knows this num­ber, tracks it, or uses it to de­cide what to build next. In most or­ga­ni­za­tions, the an­swer is no. The team has a roadmap dri­ven by en­gi­neer­ing pref­er­ences, stake­holder re­quests, and quar­terly plan­ning cy­cles, and the fi­nan­cial logic un­der­ly­ing their ex­is­tence is left un­ex­am­ined.

And break-even is not ac­tu­ally the right bar. Leah Tharin has writ­ten a sharp break­down of the math­e­mat­ics of this: a team with a 50% ini­tia­tive suc­cess rate, which is al­ready op­ti­mistic, needs its wins to cover its losses too. Leah’s cal­cu­la­tion is growth-ori­ented, but even for non-growth or­ga­ni­za­tions, the same in­vest­ment the­sis holds. Even a two-times re­turn is not suf­fi­cient. Capital sit­ting in a bank car­ries no op­er­a­tional risk, no co­or­di­na­tion costs, and no on­go­ing main­te­nance oblig­a­tions. The sys­tems a team builds will out­live the team it­self, and the cost of own­ing, main­tain­ing, and even­tu­ally re­plac­ing those sys­tems is al­most al­ways larger than an­tic­i­pated. The re­turn has to cover not just the team’s cur­rent cost, but the long tail of what they leave be­hind.

That pushes the re­al­is­tic thresh­old for fi­nan­cial vi­a­bil­ity to some­where be­tween three and five times an­nual cost. For an €87,000 per month team, that means gen­er­at­ing be­tween €260,000 and €435,000 in monthly value. The three hours per week cal­cu­la­tion gets you to break-even. To clear the re­al­is­tic fi­nan­cial bar, the plat­form needs to be gen­uinely trans­for­ma­tive for the en­gi­neers us­ing it, and the team needs to be ruth­less about work­ing on the high­est-value prob­lems rather than the most in­ter­est­ing ones.

A cus­tomer-fac­ing prod­uct team of eight car­ries the same €87,000 monthly cost. The levers avail­able to jus­tify that cost are dif­fer­ent, but the un­der­ly­ing logic is iden­ti­cal.

If the prod­uct has an av­er­age rev­enue per user of €50 per month, the team needs to gen­er­ate or pro­tect the equiv­a­lent of 1,740 users worth of value every month just to break even, and roughly 5,000 to 8,700 users worth of value to clear the three-to-five times thresh­old.

Churn is of­ten the most di­rect lever. Consider a prod­uct with 50,000 ac­tive users los­ing 2% monthly to churn. That is 1,000 users per month, rep­re­sent­ing €50,000 in monthly re­cur­ring rev­enue walk­ing out the door. A team that iden­ti­fies the pri­mary dri­ver of that churn and elim­i­nates it is gen­er­at­ing nearly €50,000 per month in pro­tected rev­enue, cov­er­ing most of its break-even cost from a sin­gle ini­tia­tive. But that cal­cu­la­tion re­quires know­ing the churn rate, un­der­stand­ing its causes, and con­nect­ing those causes to the team’s work, and most teams are not op­er­at­ing with that level of fi­nan­cial clar­ity.

Activation is an­other lever that is fre­quently un­der­es­ti­mated. If 10,000 users sign up each month but only 30% com­plete the ac­ti­va­tion steps that lead to long-term re­ten­tion, there are 7,000 users each month who paid ac­qui­si­tion costs but never con­verted to re­tained rev­enue. Improving the ac­ti­va­tion rate by five per­cent­age points, from 30% to 35%, con­verts an ad­di­tional 500 users per month. At €50 av­er­age rev­enue per user, that is €25,000 in ad­di­tional monthly re­cur­ring rev­enue, rep­re­sent­ing roughly 29% of the team’s break-even thresh­old from one met­ric mov­ing in the right di­rec­tion.

Sales con­ver­sion fol­lows the same logic. If the prod­uct has a free-to-paid con­ver­sion fun­nel pro­cess­ing 20,000 tri­als per month at a 4% con­ver­sion rate, that pro­duces 800 pay­ing cus­tomers monthly. Moving con­ver­sion from 4% to 4.5% pro­duces 900 cus­tomers, an ad­di­tional 100 pay­ing users, and €5,000 in ad­di­tional monthly rev­enue. Small im­prove­ments across mul­ti­ple levers com­pound quickly, but only if the team un­der­stands which levers con­nect to which fi­nan­cial out­comes and by how much.
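Each of these levers reduces to a one-line calculation. This sketch reproduces the article's figures; all inputs are the article's own hypothetical numbers, not data from any real product:

```python
# Check the lever arithmetic for the same EUR 87,000/month, 8-person team.
monthly_cost = 87_000
arpu = 50  # average revenue per user, EUR/month

breakeven_users = monthly_cost / arpu      # users' worth of value per month to break even

churn_protected = 50_000 * 0.02 * arpu     # 2% of 50,000 users churning, EUR/month at risk
activation_gain = 10_000 * 0.05 * arpu     # +5 pp activation (30% -> 35%) on 10,000 signups
conversion_gain = 20_000 * 0.005 * arpu    # +0.5 pp trial conversion (4% -> 4.5%)

share_of_breakeven = activation_gain / monthly_cost  # the "roughly 29%" figure

print(breakeven_users)   # 1740.0
print(churn_protected)   # 50000.0
print(activation_gain)   # 25000.0
print(conversion_gain)   # 5000.0
```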

Given that soft­ware teams are ex­pen­sive and that their value is, at least in prin­ci­ple, cal­cu­la­ble, it is worth ex­am­in­ing why most teams do not mea­sure any­thing fi­nan­cially mean­ing­ful. Some mea­sure ac­tiv­ity prox­ies such as ve­loc­ity, tick­ets closed, or fea­tures shipped. Others mea­sure sen­ti­ment prox­ies such as NPS, CSAT, or en­gage­ment scores. These are not de­graded ver­sions of fi­nan­cial mea­sure­ment. They are a dif­fer­ent cat­e­gory en­tirely, one that was de­signed around the goal of un­der­stand­ing user be­hav­ior and team through­put rather than around the goal of un­der­stand­ing eco­nomic re­turn.

The prob­lem is that ac­tiv­ity and sen­ti­ment met­rics can trend up­ward while fi­nan­cial per­for­mance de­te­ri­o­rates. A team can ship more fea­tures while build­ing the wrong things. Engagement scores can rise while churn ac­cel­er­ates among the users who ac­tu­ally gen­er­ate rev­enue. Velocity can in­crease while the work be­ing com­pleted has no mea­sur­able con­nec­tion to busi­ness out­comes. These met­rics feel mean­ing­ful be­cause they cor­re­late with out­comes in many cir­cum­stances, but cor­re­la­tion is not a re­li­able guide to pri­or­i­ti­za­tion when the un­der­ly­ing fi­nan­cial logic is never ex­am­ined.

This is a struc­tural con­di­tion rather than a fail­ure of in­di­vid­ual judg­ment. Organizations chose these met­rics be­cause they are eas­ier to in­stru­ment, eas­ier to com­mu­ni­cate, and eas­ier to look good on than fi­nan­cial met­rics. A team that mea­sures its suc­cess by fea­tures shipped will al­ways have some­thing to show. A team that mea­sures its suc­cess by re­turn gen­er­ated will some­times have to re­port that it does not know, or that the re­turn was dis­ap­point­ing, and that kind of trans­parency re­quires an or­ga­ni­za­tional cul­ture that most com­pa­nies have not de­lib­er­ately built.

The ma­trix above is drawn from a prod­uct man­age­ment train­ing pro­gram I run called Booster, where prod­uct lead­ers map their ac­tual met­rics against their in­vest­ment the­sis to sur­face gaps. The ex­er­cise is un­com­fort­able pre­cisely be­cause most lead­ers dis­cover mid-map­ping that their team’s daily mea­sure­ments have no di­rect con­nec­tion to the fi­nan­cial ob­jec­tive they were given.

Understanding why this con­di­tion ex­ists re­quires look­ing at roughly two decades of macro­eco­nomic con­text, be­cause the fi­nan­cial dys­func­tion in mod­ern soft­ware or­ga­ni­za­tions did not emerge from bad in­ten­tions or in­tel­lec­tual fail­ure. It emerged from a spe­cific en­vi­ron­ment that made fi­nan­cial dis­ci­pline in prod­uct teams eco­nom­i­cally un­nec­es­sary.

The pic­ture is not a sin­gle clean era but two dis­tinct phases. From roughly 2002 through 2011, cap­i­tal was pe­ri­od­i­cally cheap but con­di­tions were mixed. Rates fell sharply af­ter the dot-com crash and again af­ter the global fi­nan­cial cri­sis, but in both cases risk ap­petite was sup­pressed. The money was tech­ni­cally in­ex­pen­sive but in­vestors were cau­tious, mul­ti­ples were rea­son­able, and the growth-at-all-costs logic had not yet taken hold. Product or­ga­ni­za­tions dur­ing this pe­riod still op­er­ated with some resid­ual fi­nan­cial dis­ci­pline in­her­ited from the dot-com reck­on­ing.

From ap­prox­i­mately 2011 through 2022, some­thing dif­fer­ent hap­pened. Zero-rate pol­icy be­came fully nor­mal­ized, risk ap­petite re­cov­ered and then over­cor­rected, and the SaaS men­tal model crys­tal­lized into a broadly shared in­vest­ment the­sis. All three con­di­tions ar­rived si­mul­ta­ne­ously, and the re­sult was about eleven years dur­ing which soft­ware com­pa­nies could grow head­count ag­gres­sively, miss on the ma­jor­ity of their roadmap, and still look healthy on pa­per. Revenue growth for­gave an enor­mous range of pri­or­i­ti­za­tion mis­takes, and the cost of build­ing the wrong thing was largely in­vis­i­ble.

Eleven years is not a long time, but it is long enough to form the pro­fes­sional in­stincts of an en­tire gen­er­a­tion of prod­uct and en­gi­neer­ing lead­ers. The frame­works they learned, the met­rics they adopted, the plan­ning rit­u­als they prac­tice, and the de­f­i­n­i­tions of suc­cess they in­ter­nal­ized were all formed dur­ing a win­dow that was un­usu­ally short and un­usu­ally dis­torted. There is no co­hort of se­nior prod­uct lead­ers who de­vel­oped their judg­ment in con­di­tions where their teams were ex­pected to demon­strate fi­nan­cial re­turn, be­cause those con­di­tions did not ex­ist dur­ing the years when that co­hort was learn­ing the craft.

When cap­i­tal be­came ex­pen­sive again in 2022, the be­hav­ior did not au­to­mat­i­cally ad­just, be­cause the be­hav­ior was never con­nected to the fi­nan­cial logic in the first place.

There is a deeper con­se­quence of this twenty-year pe­riod that is now be­com­ing painfully vis­i­ble, and it con­cerns how the in­dus­try has thought about large en­gi­neer­ing or­ga­ni­za­tions and code­bases.

The con­ven­tional un­der­stand­ing is that a code­base rep­re­sent­ing years of en­gi­neer­ing in­vest­ment is a valu­able as­set. It en­codes busi­ness logic, cap­tures ac­cu­mu­lated de­ci­sions, and rep­re­sents the tech­ni­cal foun­da­tion on which fu­ture prod­ucts are built. A large en­gi­neer­ing or­ga­ni­za­tion is sim­i­larly un­der­stood as a source of ca­pa­bil­ity, with more en­gi­neers mean­ing more ca­pac­ity to build, main­tain, and im­prove that foun­da­tion.

While some argued that large codebases should actually be considered a liability, the industry as a whole has mostly ignored that. But this understanding is now being more closely examined. A large codebase also carries maintenance costs that grow over time as the system becomes more complex, more interconnected, and more difficult to change safely. Every engineer added to maintain it increases coordination costs, introduces new dependencies, and adds to the organizational weight that slows decision-making. The asset and the liability exist simultaneously, and for most of the past twenty years, the financial environment masked the liability side of that equation.

The ar­rival of large lan­guage mod­els has made the li­a­bil­ity vis­i­ble in a way that is dif­fi­cult to ig­nore. Recently, Nathan Cavaglione, a de­vel­oper, built a func­tional replica of ap­prox­i­mately 95% of Slack’s core prod­uct in four­teen days us­ing LLM agents. Slack was built by thou­sands of en­gi­neers over the course of more than a decade, at a cost that rep­re­sents bil­lions of dol­lars in cu­mu­la­tive en­gi­neer­ing in­vest­ment. Nathan started with­out any of that ac­cu­mu­lated com­plex­ity, with­out the or­ga­ni­za­tional weight, with­out the legacy ar­chi­tec­tural de­ci­sions, and with­out the co­or­di­na­tion costs, and ar­rived at a com­pa­ra­ble prod­uct in a pe­riod that would not con­sti­tute a sin­gle sprint in most en­ter­prise en­gi­neer­ing or­ga­ni­za­tions.

Day 14: A functional replica of Slack’s core product, built by Nathan using LLM agents.

This does not mean that Slack’s en­gi­neer­ing in­vest­ment was wasted, be­cause Slack also built en­ter­prise sales in­fra­struc­ture, com­pli­ance ca­pa­bil­i­ties, data se­cu­rity prac­tices, and or­ga­ni­za­tional re­silience that a four­teen-day pro­to­type does not in­clude. But it does mean that the as­sump­tion un­der­ly­ing large en­gi­neer­ing or­ga­ni­za­tions, which is that scale and ac­cu­mu­lated com­plex­ity rep­re­sent com­pet­i­tive moats, is no longer re­li­able in the way it once was. When the cost of build­ing a func­tional ap­prox­i­ma­tion of a so­phis­ti­cated soft­ware prod­uct can col­lapse to days of in­di­vid­ual ef­fort, the ques­tion of what a large en­gi­neer­ing team jus­ti­fies be­comes both more ur­gent and more dif­fi­cult to an­swer with the met­rics most or­ga­ni­za­tions cur­rently track.

The ob­vi­ous ob­jec­tion is that code pro­duced at that speed be­comes un­man­age­able, a li­a­bil­ity in it­self. That is a rea­son­able con­cern, but it largely ap­plies when agents pro­duce code that hu­mans then main­tain. Agentic plat­forms are be­ing it­er­ated upon quickly, and for es­tab­lished pat­terns and non-busi­ness-crit­i­cal code, which is the ma­jor­ity of what most en­gi­neer­ing or­ga­ni­za­tions ac­tu­ally main­tain, de­tailed hu­man fa­mil­iar­ity with the code­base mat­ters less than it once did. A messy code­base is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to rea­son through an un­fa­mil­iar sys­tem, that is still faster and cheaper than most de­vel­op­ment teams op­er­at­ing to­day. The li­a­bil­ity ar­gu­ment holds in a hu­man-to-hu­man or agent-to-hu­man world. In an agent-to-agent world, it largely dis­solves.

The com­pet­i­tive ad­van­tage avail­able to or­ga­ni­za­tions that take this se­ri­ously is not pri­mar­ily tech­ni­cal. It is an­a­lyt­i­cal. Companies that can clearly ar­tic­u­late what each of their teams costs, what value each team gen­er­ates, and whether that value clears a fi­nan­cially vi­able thresh­old are in a struc­turally dif­fer­ent po­si­tion than com­pa­nies that can­not. They can make build ver­sus buy de­ci­sions based on ac­tual eco­nom­ics rather than or­ga­ni­za­tional pref­er­ence. They can iden­tify when a team is work­ing on prob­lems that can­not gen­er­ate suf­fi­cient re­turn at their cost level. They can se­quence ini­tia­tives based on what value is be­ing lost each day they are de­layed, rather than on who ar­gued most per­sua­sively in the last plan­ning meet­ing.

Most or­ga­ni­za­tions can­not do this to­day. The mea­sure­ment in­fra­struc­ture does not ex­ist, the fi­nan­cial data does not flow to the peo­ple mak­ing pri­or­i­ti­za­tion de­ci­sions, and the habit of ask­ing these ques­tions has not been built. Building it is un­com­fort­able, be­cause the an­swers are some­times un­flat­ter­ing. A team that ex­am­ines its work through this lens will some­times dis­cover that it has spent a quar­ter on things that do not con­nect to fi­nan­cial out­comes in any mean­ing­ful way, and that is a dif­fi­cult find­ing to sit with.

But the al­ter­na­tive is con­tin­u­ing to run an or­ga­ni­za­tion where teams with mil­lion-euro an­nual bud­gets make daily in­vest­ment de­ci­sions with­out the fi­nan­cial con­text to know whether those de­ci­sions are gen­er­at­ing re­turn. That con­di­tion was sus­tain­able when cap­i­tal was cheap and growth for­gave every­thing. It is in­creas­ingly dif­fi­cult to sus­tain in an en­vi­ron­ment where boards ex­pect fi­nan­cial re­turns, where the cost of build­ing soft­ware is col­laps­ing due to AI, and where the ques­tion of what a team jus­ti­fies can no longer be de­ferred in­def­i­nitely.

The or­ga­ni­za­tions that de­velop the habit of ask­ing these ques­tions clearly, reg­u­larly, and with­out flinch­ing will ac­cu­mu­late an ad­van­tage that com­pounds over time. The ques­tion is sim­ply whether they will start ask­ing be­fore or af­ter the pres­sure forces them to.

...

Read the original on www.viktorcessan.com »

7 354 shares, 34 trendiness

sterlingcrispin/nothing-ever-happens

Focused async Python bot for Polymarket that buys No on standalone non-sports yes/no markets.

FOR ENTERTAINMENT ONLY. PROVIDED AS IS, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK. THE AUTHORS ARE NOT LIABLE FOR ANY CLAIMS, LOSSES, OR DAMAGES.

The bot scans standalone markets, looks for NO entries below a configured price cap, tracks open positions, exposes a dashboard, and persists live recovery state when order transmission is enabled.
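As a rough illustration of that entry rule, the core filter amounts to something like the following. The class, field, and function names here are hypothetical, invented for illustration; the repo's actual structures will differ:

```python
# Hypothetical sketch of the entry filter described above: consider "No" entries
# only on standalone (non-grouped, non-sports) yes/no markets whose No price is
# under a configured cap. Names are illustrative, not the repo's actual API.
from dataclasses import dataclass

@dataclass
class Market:
    question: str
    is_standalone: bool   # not part of a grouped/sports event
    no_price: float       # current price of the "No" outcome, in [0, 1]

def entry_candidates(markets, price_cap):
    """Return markets eligible for a No entry under the configured price cap."""
    return [m for m in markets if m.is_standalone and m.no_price < price_cap]

markets = [
    Market("Will X happen by June?", True, 0.12),
    Market("Team A vs Team B", False, 0.40),
    Market("Will Y happen this year?", True, 0.95),
]
print([m.question for m in entry_candidates(markets, price_cap=0.90)])
# -> ['Will X happen by June?']
```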

If any of those are missing, the bot uses PaperExchangeClient.

pip install -r requirements.txt

cp config.example.json config.json

cp .env.example .env

config.json is intentionally local and ignored by git.

The runtime config lives under strategies.nothing_happens. See config.example.json and .env.example.

You can point the runtime at a different config file with CONFIG_PATH=/path/to/config.json.

python -m bot.main

The dashboard binds $PORT or DASHBOARD_PORT when one is set.
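That binding rule ($PORT taking precedence when set, otherwise DASHBOARD_PORT) typically reduces to a few lines. This is a sketch of the pattern, not the repo's actual code; the function name and the 8080 fallback are assumptions:

```python
import os

def resolve_dashboard_port(default=8080):
    """Prefer $PORT (as a platform like Heroku sets it), then DASHBOARD_PORT.

    Sketch of the binding rule described above; the name and the default
    fallback value are assumptions for illustration only."""
    for var in ("PORT", "DASHBOARD_PORT"):
        value = os.environ.get(var)
        if value:
            return int(value)
    return default

os.environ.pop("PORT", None)
os.environ["DASHBOARD_PORT"] = "9100"
print(resolve_dashboard_port())  # 9100
```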

The shell helpers use either an explicit app name argument or HEROKU_APP_NAME.

export HEROKU_APP_NAME=

heroku config:set BOT_MODE=live DRY_RUN=false LIVE_TRADING_ENABLED=true -a "$HEROKU_APP_NAME"

heroku config:set PRIVATE_KEY=

Only run the web dyno. The worker entry exists only to fail fast if it is started accidentally.

python -m pytest -q

Local config, ledgers, exports, reports, and deployment artifacts are ignored by default.

...

Read the original on github.com »

8 310 shares, 22 trendiness

Make tmux Pretty and Usable

In my previous blog post I gave a quick and easy introduction to tmux and explained how to use tmux with a basic configuration.

If you’ve followed that guide you might have had a feeling that many people have when working with tmux for the first time: "These key combinations are really awkward!". Rest assured, you’re not alone. Judging from the copious blog posts and dotfiles repos on GitHub there are many people out there who feel the urge to make tmux behave a little different; to make it more comfortable to use.

And actually it’s quite easy to customize the look and feel of tmux. Let me tell you something about the basics of customizing tmux and share some of the configurations I find most useful.

Customizing tmux is as easy as editing a text file. Tmux uses a file called tmux.conf to store its configuration. If you store that file as ~/.tmux.conf (Note: there’s a period as the first character in the file name. It’s a hidden file) tmux will pick this configuration file for your current user. If you want to share a configuration for multiple users you can also put your tmux.conf into a system-wide directory. The location of this directory will be different across different operating systems. The man page (man tmux) will tell you the exact location, just have a look at the documentation for the -f parameter.

Probably the most common change among tmux users is to change the prefix from the rather awkward C-b to something that’s a little more accessible. Personally I’m using C-a instead but note that this might interfere with bash’s "go to beginning of line" command. On top of the C-a binding I’ve also remapped my Caps Lock key to act as Ctrl since I’m not using Caps Lock anyways. This allows me to nicely trigger my prefix key combo.

To change your prefix from C-b to C-a, simply add the following lines to your tmux.conf:

# remap prefix from 'C-b' to 'C-a'

unbind C-b

set-option -g prefix C-a

bind-key C-a send-prefix

Another thing I personally find quite difficult to remember is the pane splitting commands. " to split vertically and % to split horizontally just doesn’t work for my brain. I find it helpful to use characters that resemble a visual representation of the split, so I chose | and - for splitting panes horizontally and vertically:

# split panes using | and -

bind | split-window -h

bind - split-window -v

unbind '"'

unbind %

Since I’m experimenting quite often with my tmux.conf I want to reload the config easily. This is why I have a command to reload my config on r:

# reload config file (change file location to the tmux.conf you want to use)

bind r source-file ~/.tmux.conf

Switching between panes is one of the most frequent tasks when using tmux. Therefore it should be as easy as possible. I’m not quite fond of triggering the prefix key all the time. I want to be able to simply say M-<direction> to go where I want to go (remember: M is for Meta, which is usually your Alt key). With this modification I can simply press Alt-left to go to the left pane (and other directions respectively):

# switch panes using Alt-arrow without prefix

bind -n M-Left select-pane -L

bind -n M-Right select-pane -R

bind -n M-Up select-pane -U

bind -n M-Down select-pane -D

Although tmux clearly focuses on keyboard-only usage (and this is certainly the most efficient way of interacting with your terminal) it can be helpful to enable mouse interaction with tmux. This is especially helpful if you find yourself in a situation where others have to work with your tmux config and naturally don’t have a clue about your key bindings or tmux in general. Pair Programming might be one of those occasions where this happens quite frequently.

Enabling mouse mode allows you to select windows and different panes by simply clicking and to resize panes by dragging their borders around. I find it pretty convenient and it doesn’t get in my way often, so I usually enable it:

# Enable mouse control (clickable windows, panes, resizable panes)

set -g mouse on

I like to give my tmux windows custom names using the , key. This helps me naming my windows according to the context they’re focusing on. By default tmux will update the window title automatically depending on the last executed command within that window. In order to prevent tmux from overriding my wisely chosen window names I want to suppress this behavior:

# don’t rename windows automatically

set-option -g allow-rename off

Changing the colors and design of tmux is a little more complex than what I’ve presented so far. As tmux allows you to tweak the appearance of a lot of elements (e.g. the borders of panes, your statusbar and individual elements of it, messages), you’ll need to add a few options to get a consistent look and feel. You can make this as simple or as elaborate as you like. Tmux’s man page (specifically the STYLES section) contains more information about what you can tweak and how you can tweak it.

Depending on your color scheme your resulting tmux will look something like this:

# DESIGN TWEAKS

# don't do anything when a 'bell' rings

set -g visual-activity off

set -g visual-bell off

set -g visual-silence off

setw -g monitor-activity off

set -g bell-action none

# clock mode

setw -g clock-mode-colour yellow

# copy mode

setw -g mode-style 'fg=black bg=red bold'

# panes

set -g pane-border-style 'fg=red'

set -g pane-active-border-style 'fg=yellow'

# statusbar

set -g status-position bottom

set -g status-justify left

set -g status-style 'fg=red'

set -g status-left ''

set -g status-left-length 10

set -g status-right-style 'fg=black bg=yellow'

set -g status-right '%Y-%m-%d %H:%M'

set -g status-right-length 50

setw -g window-status-current-style 'fg=black bg=red'

setw -g window-status-current-format ' #I #W #F '

setw -g window-status-style 'fg=red bg=black'

setw -g window-status-format ' #I #[fg=white]#W #[fg=yellow]#F '

setw -g window-status-bell-style 'fg=yellow bg=red bold'

# messages

set -g message-style 'fg=yellow bg=red bold'

In the snippet above, I’m using your terminal’s default colors (by using the named colors, like red, yellow or black). This allows tmux to play nicely with whatever color theme you have set for your terminal. Some prefer to use a broader range of colors for their terminals and tmux color schemes. If you don’t want to use your terminal default colors but instead want to define colors from a 256 colors range, you can use colour0 to colour255 instead of red, cyan, and so on when defining your colors in your tmux.conf.
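If you want to preview which number maps to which colour before committing to a colourN value in your tmux.conf, a quick shell loop prints the whole palette (assuming your terminal supports 256 colours; the colourN names in tmux correspond to the ANSI 256-colour indices shown):

```shell
# Print the 256-colour palette with its colour numbers, 16 per row.
# colourN in tmux.conf corresponds to the ANSI 256-colour index N shown here.
for i in $(seq 0 255); do
  printf '\033[38;5;%dm%4d\033[0m' "$i" "$i"
  if [ $(( (i + 1) % 16 )) -eq 0 ]; then printf '\n'; fi
done
```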

Looking for a nice color scheme for your terminal?

If you’re looking for a nice color scheme for your terminal I recommend to check out my very own Root Loops. With Root Loops you can easily design a personal, awesome-looking terminal color scheme and stand out from all the other folks using the same boring-ass color schemes everyone else is using.

There are plenty of resources out there where you can find people presenting their tmux configurations. GitHub and other code hosting services tend to be a great source. Simply search for "tmux.conf" or repos called "dotfiles" to find a vast amount of configurations that are out there. Some people share their configuration on their blog. Reddit might have a few subreddits that could have useful inspiration, too (there’s /r/dotfiles and /r/unixporn, for example).

You can find my complete tmux.conf (along with other configuration files I’m using on my systems) on my personal dotfiles repo on GitHub.

If you want to dive deeper into how you can customize tmux, the canonical source of truth is tmux’s man page (simply type man tmux to get there). You should also take a look at the elaborate tmux wiki and see their Configuring tmux section if this blog post was too shallow for your needs. Both will contain up-to-date information about each and every tiny thing you can tweak to make your tmux experience truly yours. Have fun!

...

Read the original on hamvocke.com »

9 298 shares, 24 trendiness

US appeals court declares 158-year-old home distilling ban unconstitutional

A U.S. appeals court on Friday declared unconstitutional a nearly 158-year-old federal ban on home distilling, calling it an unnecessary and improper means for Congress to exercise its power to tax.

The 5th U.S. Circuit Court of Appeals in New Orleans ruled in favor of the nonprofit Hobby Distillers Association and four of its 1,300 members.

They argued that people should be free to distill spirits at home, whether as a hobby or for personal consumption including, in one instance, to create an apple-pie-vodka recipe.

The ban was part of a law passed during Reconstruction in July 1868, in part to thwart liquor tax evasion, and subjected violators to up to five years in prison and a $10,000 fine.

Writing for a three-judge panel, Circuit Judge Edith Hollan Jones said the ban actually reduced tax revenue by preventing distilling in the first place, unlike laws that regulated the manufacture and labeling of distilled spirits on which the government could collect taxes.

She also said that under the government’s logic, Congress could criminalize virtually any in-home activity that might escape notice from tax collectors, including remote work and home-based businesses.

"Without any limiting principle, the government’s theory would violate this court’s obligation to read the Constitution carefully to avoid creating a general federal authority akin to the police power," Jones wrote.

The U.S. Department of Justice had no immediate comment.

Another defendant, the Treasury Department’s Alcohol and Tobacco Tax and Trade Bureau, did not immediately respond to a request for comment.

Devin Watkins, a lawyer representing the Hobby Distillers Association, in an interview called the ruling an important decision about the limits of federal power.

Andrew Grossman, who argued the nonprofit’s appeal, called the decision "an important victory for individual liberty" that lets the plaintiffs "pursue their passion to distill fine beverages in their homes."

"I look forward to sampling their output," he said.

The decision upheld a July 2024 ruling by U.S. District Judge Mark Pittman in Fort Worth, Texas. He put his ruling on hold so the government could appeal.

...

Read the original on nypost.com »

10 278 shares, 22 trendiness

We May Be Living Through the Most Consequential Hundred Days in Cyber History, and Almost Nobody Has Noticed

The first four months of 2026 have produced a sequence of cyber incidents that, if any one of them had landed in 2014 or 2017, would have dominated a news cycle for a week.

A Chinese state supercomputer reportedly bled ten petabytes. Stryker was wiped across 79 countries. Lockheed Martin was hit for a reported 375 terabytes. The FBI Director’s personal inbox was dumped on the open web. The FBI’s wiretap management network was breached in a separate "major incident." Rockstar Games was breached through a SaaS analytics vendor most people have never heard of. Cisco’s private GitHub was cloned. Oracle’s legacy cloud cracked open. The Axios npm package, downloaded a hundred million times a week, was hijacked by North Korea. Mercor, the $10 billion AI training-data vendor that sits inside the data pipelines of OpenAI, Anthropic, and Meta simultaneously, was breached through the LiteLLM open source library and had 4 terabytes extracted by Lapsus$. Honda was hit twice. The new ShinyHunters/Scattered Spider/LAPSUS$ alliance breached approximately 400 organizations and exfiltrated roughly 1.5 billion Salesforce records.

Stacked on top of each other across roughly a hundred days, these events are something a historian of computing security writing in 2050 will probably file as a turning point, regardless of what else happens between now and then.

And yet, the public conversation around them has been quiet to the point of being strange. This is a curious observation more than a complaint. And the goal of what follows is to gather the events into one place, cite the publications that reported each one, and then ask, gently, why the period feels so undocumented in real time.

Every named incident below is followed by inline parenthetical citations to the publications that broke or covered it, in the same way an academic paper would.

I am not arguing that the cybersecurity community is failing. I am noting that something unusual is happening.

Strip out the noise and the 2026 wave so far breaks cleanly into four separate campaigns running in parallel against U.S. and Western targets. This convergence is the part nobody is naming out loud.

Cluster 1: Iran / Handala / Void Manticore (destructive state operations). Operating under the Handala Hack Team persona, attributed by Palo Alto Networks Unit 42 to Void Manticore, an actor linked to Iran’s Ministry of Intelligence and Security. Handala is claiming attacks against U.S. industrial, defense, and government targets and explicitly framing them as retaliation for a February 28 missile strike on a school in Minab, southern Iran, that killed at least 175 people, most of them children. Confirmed and claimed Q1 2026 victims: Stryker (200,000 devices wiped), Lockheed Martin (375 TB claim, 28 engineer doxxing), FBI Director Kash Patel (personal email dump).

Cluster 2: Scattered LAPSUS$ Hunters / SLH — the apex-predator merger (financially-motivated SaaS theft and extortion at industrial scale). This is the single largest and least-discussed organizational development in the criminal cyber landscape since the Conti collapse. In August 2025, three of the most notorious financially-motivated crews on the planet, ShinyHunters, Scattered Spider, and LAPSUS$, formally combined into a coordinated alliance widely tracked as Scattered LAPSUS$ Hunters (SLH), sometimes called the "Trinity of Chaos" (Resecurity; Cyberbit; Infosecurity Magazine; The Hacker News; Computer Weekly; ReliaQuest). Scattered Spider provides initial access through highly-effective social engineering and vishing. ShinyHunters handles exfiltration, leak-site management, and extortion. LAPSUS$ contributes its own brand of identity-system compromise. The result is an end-to-end criminal pipeline operating against the SaaS layer of the global enterprise.

The numbers from this cluster’s 2025-2026 Salesforce campaign alone are difficult to absorb. ShinyHunters has publicly claimed compromise of approximately 300 to 400 organizations, with around 100 described as high-profile, and approximately 1.5 billion Salesforce records stolen in aggregate (BankInfoSecurity, "ShinyHunters Counts 1.5 Billion Stolen Salesforce Records"; The Register; State of Surveillance, "400 Companies Breached"; Salesforce Ben). Salesforce released a security advisory on March 7, 2026 confirming that a "known threat group" was exploiting misconfigurations in its Experience Cloud product, and ShinyHunters claimed responsibility on its data leak site two days later. The named victim list reads like a roll call of global brand recognition: Google (corporate Salesforce instance, ~2.55M records of small and medium business contact data), Cisco, Adidas, Qantas (5.7M customers), Allianz Life, Farmers Insurance Group, Workday, Pandora, Chanel, TransUnion, the entire LVMH family including Louis Vuitton, Dior, Tiffany & Co., and Cartier, Air France-KLM, LastPass, Okta, AMD, Snowflake itself, Match Group (Hinge, Bumble, OkCupid), SoundCloud (29.8M users), Panera Bread (5.1M accounts), Betterment (1.4M), Harvard, the University of Pennsylvania, Crunchbase, Canada Goose, and the December 2025 Pornhub breach via the Mixpanel campaign that exposed roughly 200 million user records and 94 GB of historical analytics data (BleepingComputer on Qantas, Allianz Life, LVMH; Cybersecurity News on Google, Adidas, Louis Vuitton; Malwarebytes; Google Cloud Threat Intel; Wikipedia ShinyHunters).
Q1 2026 alone added Rockstar Games (via Anodot → Snowflake), the Cisco Trivy / Salesforce double hit, and the single most consequential AI-industry-specific incident of the quarter: the Lapsus$-claimed breach of Mercor, the $10B AI recruiting and training-data vendor that sits inside the data pipelines of OpenAI, Anthropic, and Meta simultaneously, after a LiteLLM open source supply chain compromise by the TeamPCP cluster. All catalogued in dedicated sections later in this article.

Tradecraft note: this cluster is no longer just compromising SaaS integrators to lift their OAuth tokens, although that remains part of the playbook (the 2025 Salesloft Drift / UNC6395 incident that compromised over 700 Salesforce environments including Cloudflare, Google, PagerDuty, Palo Alto Networks, Proofpoint, Tanium, Zscaler, and CyberArk is the precedent that proved the OAuth model works at scale; Unit 42 threat brief; UpGuard; Cloudflare response). The 2026 evolution is more direct: SLH operators now call employees on the phone, pretend to be IT support, walk them through "updating MFA settings" or "linking the Salesforce Data Loader app," and harvest credentials, MFA codes, and OAuth grants in real time (Google Cloud Blog, "The Cost of a Call"; Varonis; The Hacker News). In parallel, ShinyHunters has weaponized Mandiant's own AuraInspector audit tool to scan and exploit misconfigured Salesforce Experience Cloud guest user permissions across the customer base (Allure Security). Voice phishing has produced more 2026 enterprise breaches than any single technical vulnerability.

Cluster 3: North Korea / UNC1069 (open source supply chain compromise). Google Threat Intelligence Group attributed the March 31, 2026 hijack of the Axios npm package to UNC1069, a North Korea-nexus, financially motivated actor. They did not exploit a vulnerability. They built an entire fake company, a branded fake Slack workspace, and a fake Microsoft Teams meeting with fake teammates, social-engineered the lead Axios maintainer into trusting the fake organization, and used that trust to seize his npm account. Then they shipped a cross-platform RAT in a JavaScript library with roughly 100 million weekly downloads. Cisco's separate March 2026 breach via the Trivy supply chain attack, in which over 300 internal GitHub repositories were cloned, fits the same general pattern of upstream developer-trust compromise.

Cluster 4: Russia / APT28 (zero-day exploitation against Ukraine and the EU). Russia-backed APT28 began exploiting a freshly disclosed Microsoft Office vulnerability, CVE-2026-21509, within days of its January patch release. Targets included Ukrainian government bodies and over 60 European email addresses, with malicious Office documents disguised as correspondence from Ukraine's hydrometeorological center. This is the only cluster of the four that is not primarily aimed at the United States, but it shares the architecture: speed of weaponization measured in days, exploitation of trust relationships, and minimal Western public response.

All four clusters are exploiting the same structural weakness: the modern Western enterprise no longer has a defensible perimeter, only a long chain of vendor and developer trust relationships, any of which can be turned against the host. Iran is using that chain to break things. ShinyHunters is using it to extort money. North Korea is using it to seed implants into the world's developer machines. Russia is using it to read European inboxes. The chain is the same. Only the payloads differ.

Setting aside any argument about cause and effect, there is a parallel set of numbers from the AI side of the industry over the same period that is worth putting on the table. They may or may not explain the wave above. They are at minimum strange enough to be worth noting alongside it, and the public obscurity around them is itself part of the observation.

In late 2025, Anthropic published a report titled "Disrupting the first reported AI-orchestrated cyber espionage campaign." In it, the company disclosed that a Chinese state-aligned actor had used Claude to automate a spying operation against approximately 30 organizations, with AI handling an estimated 80 to 90 percent of the campaign workload and human operators intervening only sporadically (Anthropic full report PDF; Anthropic news release). That disclosure came from the model vendor itself, not a third-party threat intel report, which is unusual on its own. What is more unusual is how little subsequent discussion it generated outside specialist circles.

Around it sits a stack of measurement data from Hoxhunt, ZeroThreat, StrongestLayer, Bright Defense, and StationX that points in the same direction across 2025 and into 2026. None of these numbers, on their own, prove a causal link to any specific incident in this article. Taken together they describe a sharp shift in the ambient threat environment that has gone largely unremarked upon in mainstream coverage:

On the threat-intel side, Microsoft's tracking now formally describes two North Korean threat actor clusters, Jasper Sleet and Coral Sleet, as using AI across the attack lifecycle, from reconnaissance through impersonation through post-compromise (Dark Reading). Genians and The Record have separately documented Kimsuky, the long-running North Korean APT, using ChatGPT to forge convincing South Korean military and government identification documents for phishing lures (Genians; The Record; eSecurityPlanet). In March 2026 the U.S. Treasury's OFAC sanctioned six individuals and two entities involved in the broader DPRK IT worker fraud scheme, in which large language models are used to generate fake personas, resumes, and even interview answers to land remote engineering jobs at Fortune 500 companies (The Hacker News; TechRadar on OpenAI bans). Whether you read that as a trend or a coincidence, it is on the public record.

There is also the widely reported multi-person Microsoft Teams call in which a financial department employee was manipulated by an AI-generated deepfake of their own CFO, alongside other AI-generated "colleagues," into wiring more than 25 million U.S. dollars to Hong Kong bank accounts (Microtime). Whatever else that incident tells us, it confirms that the infrastructure to fake a convincing multi-person video call in real time exists and has been used.

From the defender side, Anthropic's internal red-team evaluation of its withheld Mythos model found that the model could complete a simulated network intrusion in 6.2 hours versus 10.4 hours for GPT-4o, and could identify exploitable flaws in 73 percent of the applications it scanned (NPR; Axios; CNN Business; Fortune). Anthropic has declined to release Mythos publicly, restricting access to approximately 40 technology companies including Microsoft and Google. OpenAI is finalizing a comparable model that will ship only to a small vetted customer set through a "Trusted Access for Cyber" program (Axios). Two leading frontier labs simultaneously holding back cyber-capable models on safety grounds is, again, not necessarily evidence of anything causal. It is, again, worth noting.

And then, on April 7, 2026, the part of this story that should anchor every other paragraph in this section finally happened, in private, at the highest possible level of the United States government, and almost nobody outside the financial press picked it up.

On that date, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent, in-person meeting in Washington with the chief executives of Goldman Sachs, Citigroup, Morgan Stanley, Bank of America, and Wells Fargo to brief them directly on the cyber risks posed by Mythos (Bloomberg, "Bessent, Powell Summon Bank CEOs to Urgent Meeting Over Anthropic's New AI Model"; Bloomberg, "Mythos: Why Anthropic's New AI Has Officials Worried"; Fortune; CNBC; CNBC / Reuters; Fox News; Yahoo Finance; TechXplore). The meeting was triggered by Anthropic's disclosure that Mythos had identified thousands of previously unknown zero-day vulnerabilities in every major operating system and every major web browser, along with a range of other critical software. Anthropic said the vulnerability-discovery capability was sufficiently dangerous that the model could only be released to a tightly controlled handful of trusted parties. Bessent and Powell, having absorbed that disclosure, decided that the heads of the largest systemically important U.S. banks needed to be told in person.

Pause and read that paragraph one more time. The Treasury Secretary and the Federal Reserve Chair do not convene the CEOs of the largest U.S. banks about a single software vendor's product. They convene them about financial stability events. The fact that this meeting happened, on this subject, at this level, is the single most authoritative signal in this entire notebook that something has shifted in the cyber threat landscape at a magnitude the federal government considers comparable in importance to a financial stability concern. Treasury and the Fed are not in the habit of raising alarms about technology vendor product releases. They raised one about this one.

This meeting also reframes the silence the article keeps returning to. The silence has not broken in mainstream public discourse. It has clearly broken in private, at the top of the U.S. government, in classified briefings and emergency convenings the public is mostly not seeing. The historian's question is no longer whether the cyber community is being quiet. The historian's question is why the public conversation is so thoroughly out of sync with what is clearly being discussed behind closed doors at the level of the Treasury and the Federal Reserve. The gap between those two layers of conversation is, in the long view, the most interesting thing in this entire chronicle.

None of the above proves that any specific incident in this article was AI-driven. The Stryker wipe was executed through Microsoft Intune, not a chatbot. The Patel email leak was a personal Gmail compromise. The Lockheed claims remain unverified. What the AI numbers do establish, with a fair amount of confidence, is that the ambient cost of running a convincing offensive operation has shifted dramatically over the same window in which the wave above unfolded. Two things changed at once. A reasonable observer can decide for themselves whether they are connected, but a reasonable observer should at least know both lists exist, side by side, and that almost nobody in mainstream coverage has put them on the same page.

That obscurity is the part of this section that matters most. It is not the AI numbers. It is the silence around them.

On March 11, 2026, Stryker Corporation, one of the largest medical device companies on the planet, watched its global operations collapse inside an afternoon (Krebs on Security; Cybersecurity Dive; HIPAA Journal; Stryker official statement). Attackers compromised a Windows domain administrator account, used it to provision a new Global Administrator inside the company's Microsoft Entra and Intune environment, and then issued mass remote-wipe and factory-reset commands across the device fleet (Lumos; The Register; Coalition). More than 200,000 systems, servers, laptops, and mobile devices were wiped within minutes. Offices in 79 countries went dark. Order processing, manufacturing, and shipping all stopped.

The Handala Hack Team claimed responsibility, claimed exfiltration of roughly 50 terabytes of data prior to the wipe, and began publishing it from infrastructure that the FBI later seized. Stryker has since recovered and reported full operational restoration, and employee lawsuits have already been filed. The downstream effect on patients is the part that has not been adequately reported: hospitals that relied on Stryker surgical hardware and the company's order and support systems had to postpone procedures while the fleet rebuilt, which means the wiper translated directly into cancelled surgeries across multiple countries in the hours and days after the event.

Who Handala actually is matters for reading this incident correctly. "Handala Hack Team" is not an independent crew. The U.S. Department of Justice formally classifies it as "a fictitious identity used by MOIS to hide its role in influence operations and psychological scaremongering campaigns." The underlying operator is assessed as Void Manticore, also tracked as Banished Kitten, Red Sandstorm, and Storm-842, an offensive unit sitting inside the Iranian Ministry of Intelligence and Security. The persona first surfaced in December 2023, immediately after October 7, and inherited the operational lineage of two earlier MOIS fronts: Homeland Justice, which ran the 2022 to 2023 Albania operations, and Karma, which Handala formally replaced. The unit it belongs to was, until early 2026, headed by Seyed Yahya Hosseini Panjaki, sanctioned by the U.S. Treasury in September 2024, then by the EU and UK, specifically for overseeing Iranian dissident assassination operations, and placed on the FBI terrorism watch list. Panjaki was killed in the opening phase of U.S. and Israeli strikes on Iranian intelligence infrastructure in early March 2026. The Stryker attack landed after his death, under the same persona. The organizational resilience is itself part of the story.

The stated motive published by Handala is retaliation for the February 28, 2026 strike on a school in Minab, southern Iran, with a claimed casualty count of more than 170 children. That framing is the group's own, and it should be read as a psychological operation as much as attribution, but it is the motive the operator chose to put on the record. On March 19, 2026, the FBI seized four Handala domains (including handala-hack.tw, hosted on a Taiwan top-level domain specifically to avoid Western takedown jurisdiction) and the State Department announced a $10 million bounty through its Rewards for Justice program. A replacement site was standing within hours. Handala publicly answered the bounty with a $50 million "counter-bounty" threat directed at Trump and Netanyahu. The infrastructure traces to Cloudzy (PONYNET), a bulletproof hosting provider that Halcyon has assessed with high confidence is a front for abrNOC, an Iranian hosting company founded in the same year by the same individual, with post-seizure failover routed through Russian DDoS provider DDOS-Guard.

Read all of that in one breath. A MOIS-operated persona whose unit head was killed three weeks earlier walked into one of the largest medical device manufacturers in the world, exfiltrated 50 TB, then pushed a destructive button that bricked 200,000 endpoints across 79 countries in minutes, postponed surgeries, stated a retaliation motive, absorbed a $10 million FBI bounty, had four of its domains seized, and was operating a replacement site the same day. The recovery worked, which is a credit to Stryker's incident response team, but the fact that the recovery worked does not erase what happened, and what happened is the most consequential wartime cyber attack on U.S. soil in the public record. Coverage outside specialist outlets was minimal.

This is actually two distinct incidents aimed at Lockheed Martin inside a few days of each other, and conflating them has caused most of the coverage confusion.

Incident one, the 375 TB claim. An entity self-identifying as APT Iran, which sits inside the broader Handala ecosystem but publishes under its own banner, claimed in March 2026 to have exfiltrated 375 terabytes of data from Lockheed Martin and listed the cache for sale on dark web infrastructure (Cybersecurity Dive; UpGuard; Cybersecurity Insiders; Hackread). The initial price was reported at roughly $400 million USD and was later raised toward $598 million. The group claims the trove includes corporate documents and technical blueprints related to the F-35 Joint Strike Fighter program. Lockheed Martin has not publicly confirmed any breach. Trusted security researchers have not verified the sample data. Iranian intelligence-linked actors are independently documented to exaggerate and to fold prior unrelated breaches and open-source material into current claims to amplify psychological reach. The 375 TB F-35 claim, to be direct about it, is widely assessed as overstated. Treat it as a claim, not a confirmed event.

Incident two, March 26, 2026, the 28 engineers. This one is the part that should be getting more attention. The Handala persona itself (distinct from the APT Iran data-sale listing) published the names, photographs, employer details, and location information of 28 senior American engineers identified as working in Israel on defense programs that specifically included the F-35, the F-22, and the THAAD missile defense system (NetCrook). The publication was accompanied by threatening phone calls to the engineers themselves and by language stating that "Handala's friends in the United States" would pay visits to their families. A 48-hour ultimatum was attached. This is doxxing as threat-to-life, executed by a MOIS-operated persona, against named Americans working on three specific weapons programs, and it landed the same week the group was absorbing FBI domain seizures and a $10 million bounty.

Whether or not the 375 TB claim is real, the doxxing of 28 named American defense personnel by an actor with confirmed state ties is not a hypothetical. This is where the silence becomes hard to explain. A MOIS front is publishing kill lists of U.S. defense engineers, tying them to the F-35, F-22, and THAAD by name, and the U.S. cybersecurity ecosystem is treating it as a Tuesday.

On March 27, 2026, the same Handala Hack Team published a tranche of more than 300 emails, photographs, and a copy of Patel's resume, all stolen from the personal Gmail account of FBI Director Kash Patel (CNN; CBS News; NBC News; Axios; PBS NewsHour; Al Jazeera; CNBC). A U.S. official familiar with the matter confirmed the authenticity of at least some of the published images. The FBI subsequently acknowledged the breach, and the State Department reissued the $10 million Rewards for Justice offer that had been announced eight days earlier against Handala.

The federal government's framing was careful and accurate as far as it goes: the compromised material is historical, dates from roughly 2011 to 2022, came from a personal Gmail account rather than any FBI system, and contains no current operational information. Patel's official inbox was not breached. The initial access vector is the part that should embarrass the discourse. Handala did not burn a zero-day. They did not spear-phish a cabinet-level official. They used credential stuffing against credentials harvested from older public breach databases, the same technique a teenager with a laptop uses to break into gaming accounts. The sitting Director of the Federal Bureau of Investigation had a password reused or reusable across a pre-government breach corpus, and a hostile state ran the same brute-force workflow that every fraud team in the world tracks hourly, and it worked.

The framing is also a deflection. The point of the operation is not to extract operational secrets. The point is to demonstrate that an Iranian intelligence-linked group can read the personal correspondence of the sitting Director of the Federal Bureau of Investigation and publish it on the open web with attribution and without consequence. This is an explicit retaliation event. It landed eight days after the FBI seized four Handala domains, in the same month the State Department put a $10 million bounty on the group, and three weeks after the U.S. and Israel killed the unit's leadership in the Minab window. Handala, for its part, answered the $10 million FBI bounty with a public $50 million counter-bounty aimed at Trump and Netanyahu. The March 25 dump of 14 GB from former Mossad Director Tamir Pardo's personal Gmail, claiming to expose assassination project details and Stuxnet oversight, was published by the same persona two days before the Patel release as a "proof of concept." The sequencing is the message, and it has been received internally even if it has not been said publicly.

The Patel personal Gmail story consumed almost all of the public oxygen in March, but it was not the most consequential FBI compromise of the quarter. That distinction belongs to a separate incident that received a fraction of the coverage and arguably represents a bigger problem.

The Federal Bureau of Investigation detected abnormal activity on an internal network on February 17, 2026, opened an inquiry, and on March 23, 2026 the Department of Justice formally classified the intrusion as a "major incident" under the 2014 federal law that requires escalated reporting and remediation (Bloomberg; Insurance Journal; GovInfoSecurity). The affected system is described in the public reporting as the network the Bureau uses to manage wiretaps and other surveillance operations, and it contains sensitive law enforcement data including electronic surveillance content and personally identifying information about subjects of FBI investigations (Bloomberg, "FBI Breach Exposes Secret Investigative Records to Intruders").

The agencies' notification to lawmakers described the threat actor's tradecraft as "sophisticated," and noted in particular that the attacker leveraged a commercial Internet Service Provider vendor's infrastructure to bypass the FBI's network security controls. That detail is the part the Bureau will have stayed up nights about. It means the access path was not a phishing email or a stolen laptop. It was an upstream telecommunications vendor whose infrastructure trust relationship with the FBI was successfully turned. That is the same architectural pattern as the SaaS supply chain pivots described elsewhere in this article, scaled up to the level of nation-state intelligence operations against a federal law enforcement system.

A historian's question worth pausing on: which of the two FBI incidents in this quarter is the one a careful person would actually want to know more about? The Patel personal Gmail leak, with its photographs from 2011 and personal correspondence from before he held office? Or the breach of the system the Bureau uses to manage federal wiretaps and which holds PII on the subjects of active FBI investigations? The answer is obvious. The relative coverage of the two stories is also obvious, and the gap between those two facts is one of the cleanest examples in this entire notebook of the silence the article keeps returning to.

On March 31, 2026, an attacker hijacked the npm account of the lead maintainer of the Axios JavaScript HTTP client library, one of the most-downloaded packages in the entire JavaScript ecosystem at roughly 100 million weekly downloads, and published two malicious versions: 1.14.1 and 0.30.4 (Huntress; The Hacker News; Bloomberg; TechCrunch; Sophos; Microsoft Security). The malicious versions sat live on the npm registry for about two to three hours before being pulled. Inside that window, every CI pipeline, every developer workstation, and every cloud build that pulled the latest minor or patch range silently installed a hidden dependency that fetched and executed a cross-platform Remote Access Trojan.
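The mechanism that made the two- to three-hour window so dangerous is ordinary semver range resolution: a dependency declared as a caret range accepts any newer patch or minor release in the same major line. The sketch below is a deliberately simplified model of that matching rule (real npm range handling also special-cases prereleases and has extra restrictions for 0.x majors); caret_match is a hypothetical illustration, not npm's implementation.

```python
# Simplified model of npm caret-range semantics: "^X.Y.Z" accepts any
# version with the same major component that is >= X.Y.Z. This is why a
# lockfile-free install of "^1.14.0" silently picked up a brand-new 1.14.1.
def caret_match(range_base: str, candidate: str) -> bool:
    base = tuple(int(part) for part in range_base.split("."))
    cand = tuple(int(part) for part in candidate.split("."))
    # Same major line, and at least the declared base version.
    return cand[0] == base[0] and cand >= base

print(caret_match("1.14.0", "1.14.1"))  # a just-published patch is in range
print(caret_match("1.14.0", "2.0.0"))   # a new major is not
```

A committed lockfile pins the exact resolved version and removes this window entirely, which is why lockfile-free CI builds were the population most exposed during those hours.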

On April 1, 2026, the Google Threat Intelligence Group publicly attributed the operation to UNC1069, a North Korea-nexus, financially motivated cluster (Google Cloud Blog; Axios). On April 2, the Axios lead maintainer Jason Saayman published a post-mortem describing what actually happened, and the tradecraft is the part that should be making everyone in the open source ecosystem rethink how trust works on a personal level.

The attacker did not exploit a CVE. The attacker built an organization. They impersonated the founder of a real company using a cloned identity and plausible outreach. They invited the maintainer into a real Slack workspace that had been carefully branded to look legitimate, with channel activity, linked social content, and what appeared to be team profiles and other open source maintainers as fake members. They moved the conversation to a Microsoft Teams meeting populated with what looked like multiple participants. By the time the attacker requested any action that touched the maintainer's npm account, the social proof was overwhelming.

This is the highest-effort npm supply chain operation publicly disclosed since the 2024 XZ Utils backdoor, and it is qualitatively different. XZ was patient identity laundering across years. The Axios attack was patient identity laundering across weeks, with a fake Slack workspace and a fake Teams meeting standing in for years of GitHub commits. The bar to compromise a heavily-used open source maintainer just dropped from "infiltrate the project for two years" to "build a convincing Slack and host one Teams call."

If you ran npm install against axios 1.14.1 or 0.30.4 in any environment, rotate every secret in that environment now and downgrade to 1.14.0 or 0.30.3. Microsoft Security, Sophos, Huntress, and Malwarebytes have all published detection guidance.
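The first step of that remediation is simply determining whether any environment ever resolved one of the two compromised versions, and the lockfile is the authoritative record of what was actually installed. The helper below is an illustrative sketch of that check, not an official detection script from any of the vendors named above: it walks the "packages" map of an npm v7+ package-lock.json (which lists every installed copy, including nested duplicates) and flags the versions called out in this section.

```python
# Illustrative lockfile scan (hypothetical helper, not vendor guidance):
# flag any installed copy of axios that resolved to a compromised version.
import json

COMPROMISED = {"axios": {"1.14.1", "0.30.4"}}

def find_compromised(lockfile_text: str) -> list[str]:
    """Return 'name@version' for each flagged entry in an npm v7+ lockfile."""
    lock = json.loads(lockfile_text)
    hits = []
    # Keys under "packages" are install paths such as "node_modules/axios"
    # or "node_modules/foo/node_modules/axios"; the package name is the
    # segment after the last "node_modules/".
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1]
        if meta.get("version") in COMPROMISED.get(name, set()):
            hits.append(f"{name}@{meta['version']}")
    return hits

# A minimal example lockfile that pulled the bad release.
example_lock = json.dumps({
    "lockfileVersion": 3,
    "packages": {
        "": {"name": "my-app"},
        "node_modules/axios": {"version": "1.14.1"},
        "node_modules/left-pad": {"version": "1.3.0"},
    },
})
print(find_compromised(example_lock))  # → ['axios@1.14.1']
```

A hit in any lockfile, current or historical, means the environment that built from it falls under the rotate-every-secret guidance above.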

Cisco took two distinct hits in roughly the same window. The first was a supply chain compromise: in March 2026, attackers used credentials stolen via the Trivy supply chain attack to breach Cisco's internal development environment and clone more than 300 GitHub repositories (BleepingComputer; SocRadar; TechCrunch). The stolen code reportedly included source for Cisco's AI-powered products and customer code belonging to banks, business process outsourcing firms, and U.S. government agencies.

The second was financial. On March 31, 2026, the same day the Axios story broke, ShinyHunters published an extortion post claiming theft of over 3 million Salesforce records from Cisco containing personal data, alongside GitHub repository contents, AWS bucket data, and other internal corporate assets (Hackread; Cybernews; SC Media). The deadline for payment was set for April 3, 2026. Cisco has not publicly confirmed the ShinyHunters claim. The Trivy-linked source code theft is on firmer reporting ground.

Two breaches in two weeks, one supply chain and one SaaS, both targeting one of the most security-mature vendors in the industry. If Cisco can be hit twice in a month through these vectors, the question is not whether your own organization can be hit through them but whether anyone is bothering to look.

Of every incident in this chronicle, Mercor is the one most people outside the AI industry have never heard of, and it is also the one most likely to turn out to matter most. Mercor is a two-year-old AI recruiting and training-data startup valued at approximately $10 billion following a $350 million Series C round led by Felicis Ventures in October 2025. Its customers include OpenAI, Anthropic, and Meta (Fortune; TechCrunch; Cybernews). Which means one 2026 supply chain compromise against Mercor touches the training data pipelines of three of the largest frontier AI labs in the world simultaneously.

The attack chain is the cleanest example in this entire article of how upstream open source trust gets turned into downstream enterprise extortion. A threat actor tracked as TeamPCP compromised LiteLLM, a widely-used open source library that developers use to plug their applications into AI services and which is downloaded millions of times per day, and planted credential-harvesting malware inside it (The Register; SecurityWeek; BankInfoSecurity). The malicious code was live for hours before being identified and removed. Mercor has said it was "one of thousands of companies affected" by the LiteLLM compromise. What makes Mercor the headline victim is not that it was uniquely vulnerable. It is that the harvested credentials led into an environment holding the AI industry's single most sensitive shared asset: training data, labeling protocols, and data selection criteria that the three largest frontier labs have each spent years and billions of dollars developing.

Lapsus$ subsequently claimed responsibility for the downstream Mercor breach on its leak site and published samples (TechCrunch; PureWL; Cybernews). The claimed haul is approximately 4 terabytes of data, broken down as roughly 211 GB of database records, 939 GB of source code, and 3 TB of storage including candidate profiles, personally identifiable information, employer data, API keys, internal Slack dumps, ticketing system exports, and, most disturbingly, videos purportedly showing conversations between Mercor's AI systems and the contractors those systems were training. That last category is the part that should be getting more attention. It is not just data about training data. It is footage of the training process itself.

Note the cluster convergence here. Lapsus$ is one of the three legs of the Scattered LAPSUS$ Hunters (SLH) alliance described earlier in this article. The Mercor breach, the Rockstar Games breach via Anodot, the Cisco Salesforce extortion on March 31, and the broader ~400-organization Salesforce mega-campaign are all, in varying combinations, operations by the same new apex-predator criminal alliance. The pattern is no longer that a handful of unrelated groups happened to have a big quarter. It is that one newly-merged criminal collective is running an industrial-scale SaaS-and-supply-chain extortion campaign across every sector of the global enterprise, and Mercor is the AI-industry-specific node of that campaign.

Business impact to date. Meta has paused its contracts with Mercor indefinitely. Five Mercor contractors have filed lawsuits alleging personal data exposure. Other large customers are reportedly reassessing the relationship (TechCrunch; Strike Graph). For a two-year-old company at a $10 billion valuation whose entire business model is being the trusted data middleware between contractors and frontier AI labs, losing the trust of one of those labs is an existential event. Losing the trust of all three would be the end.

The structural observation worth pausing on. The global frontier AI industry, in 2026, is effectively running on a shared data pipeline provided by a small number of vendors most of the public has never heard of. Mercor is one of those vendors. Its compromise demonstrates that the AI labs are not in fact the perimeter that matters. The perimeter that matters is the identity and integrity of every upstream dependency in the data pipeline, and most of those dependencies are either two-year-old startups or open source libraries maintained by a handful of developers. This is the same structural problem the rest of this article keeps circling: the modern enterprise no longer has a defensible boundary, only a chain of trust relationships, any of which can be turned. The AI industry inherited that same architecture and is learning the same lesson in real time.

On March 21, 2026, a threat actor using the handle “rose87168” began offering for sale 6 million records extracted from Oracle Cloud, with claims that more than 140,000 Oracle Cloud tenants were potentially affected (eSecurityPlanet; Cybersecurity Dive; FINRA). The breach was tied to CVE-2021-35587, a vulnerability in Oracle Access Manager and the OpenSSO Agent component of Oracle Fusion Middleware, used to compromise Oracle's Single Sign-On and LDAP systems.

Oracle's initial public response denied that its main cloud platform had been breached. Oracle subsequently acknowledged that an unauthorized party had accessed its legacy cloud environment, characterizing the affected systems as “obsolete servers” (CSO Online). The HIPAA Journal later reported that up to 80 hospitals were potentially affected by data exposure tied to the same incident, and Parexel International confirmed that a security flaw in Oracle's cloud infrastructure had affected its Oracle OCI E-Business Suite environment (HIPAA Journal).

The pattern here is the one we used to call “shadow legacy” and have apparently stopped warning about. Hyperscale cloud providers carry quiet inheritances of older platforms that customers were moved off of years ago, but whose operational shells were never actually decommissioned, and the line between “main cloud” and “legacy cloud” is meaningful in marketing copy and meaningless to an attacker who finds a working credential.

On the financially-motivated side of the wave, ShinyHunters claimed in early April 2026 to have breached Rockstar Games. Rockstar publicly confirmed a “third-party data breach” and characterized the accessed information as “limited” and “non-material” (Engadget; Tom's Hardware). ShinyHunters tells a different story.

ShinyHunters did not directly compromise Rockstar's internal infrastructure. They compromised Anodot, a third-party SaaS platform Rockstar uses for cloud cost monitoring, lifted authentication tokens from inside Anodot's environment, and used those tokens to authenticate into Rockstar's Snowflake instance (Hackread; BleepingComputer; TechRadar; Kotaku; PC Gamer). The ransom note posted to ShinyHunters' leak channel reads in part: “Rockstar Games, your Snowflake instances were compromised thanks to Anodot.com. Pay or leak. Final warning to reach out by 14 Apr 2026.”

The dead­line is two days from the date of this ar­ti­cle. Whatever ends up pub­lished, the struc­ture of the at­tack is the part worth re­mem­ber­ing: a small SaaS an­a­lyt­ics ven­dor most peo­ple have never heard of be­came the ac­cess path into one of the most valu­able cre­ative IP en­vi­ron­ments on the planet, weeks be­fore the most an­tic­i­pated game launch in in­dus­try his­tory.

This is the same play­book that pro­duced the 2024 Snowflake wave. It is not new. It has just been re­fined and aimed at higher-value tar­gets, and it is go­ing to keep work­ing un­til the SaaS-to-data-warehouse trust chain gets re-ar­chi­tected end to end.

On April 6, 2026, a cy­ber­at­tack on avi­a­tion IT sys­tems used by ma­jor European hubs took down check-in, bag­gage han­dling, and board­ing at Heathrow, Charles-de-Gaulle, Frankfurt, and Copenhagen si­mul­ta­ne­ously (The Traveler; National Today; VisaHQ; BlackFog on Collins Aerospace). More than 1,600 flights across the con­ti­nent were can­celled or de­layed on April 6 alone, with at least 13 can­cel­la­tions at Heathrow by the af­ter­noon of April 4 as the dis­rup­tion ramped up. Staff re­verted to man­ual check-in and pa­per board­ing passes, a pro­ce­dure most ground crews un­der thirty have never been trained on.

The vec­tor re­port­edly traces back to Collins Aerospace’s MUSE (Multi User System Environment) plat­form, the shared check-in and board­ing soft­ware used across many European air­line op­er­a­tions, which had al­ready ab­sorbed a sep­a­rate ran­somware at­tack in September 2025 that knocked Collins of­fline and pro­duced wide­spread Air France and KLM cus­tomer data ex­po­sure. The 2026 in­ci­dent is, in ef­fect, the sec­ond time in roughly six months that the same up­stream avi­a­tion soft­ware sup­plier has pro­duced a con­ti­nent-scale out­age.

The wider con­text, from the European Union Aviation Safety Agency: EASA doc­u­mented a 600 per­cent spike in avi­a­tion cy­ber­at­tacks be­tween 2024 and 2025, with air­ports world­wide ab­sorb­ing roughly 1,000 cy­ber­at­tacks per month by the end of that pe­riod. The April 2026 event is not an out­lier in that en­vi­ron­ment. It is the largest vis­i­ble sur­face of a much wider, much qui­eter trend in which the en­tire global avi­a­tion IT stack is start­ing to look load-bear­ing on a small num­ber of sin­gle-ven­dor de­pen­den­cies that no in­di­vid­ual air­port se­cu­rity team can de­fend in iso­la­tion.

I am putting this sec­tion in the mid­dle of the ar­ti­cle rather than at the top be­cause the sourc­ing is un­even and the ver­i­fi­ca­tion sta­tus is un­set­tled, but in the long view this is the in­ci­dent a his­to­rian will most likely cir­cle and un­der­line.

Around early February 2026, a hacker operating under the alias FlamingChina began posting samples on Telegram of what they claimed was a multi-petabyte data set exfiltrated from the National Supercomputing Center (NSCC) in Tianjin, one of the central Chinese state computing facilities, which provides infrastructure services to more than 6,000 clients including advanced science institutions and Chinese defense agencies. By April 8, 2026, mainstream Western press had picked up the story and reported the same basic claim from the actor: approximately 10 petabytes of sensitive data, equivalent to roughly 10,240 terabytes, extracted over a six-month period through a compromised VPN domain into NSCC's environment, with a botnet quietly siphoning data out without detection (CNN; Tom's Hardware; TechRadar; SC Media; Security Magazine; BGR; Tech Startups; Vision Times; Computing.co.uk; Security Affairs).

The samples that have been circulated reportedly include documents marked “secret” in Chinese, along with technical files, animated simulations, and renderings of defense equipment including bombs and missiles. The named provenance of some of the material includes the Aviation Industry Corporation of China and the National University of Defense Technology. CNN spoke with multiple cybersecurity experts who reviewed the samples and assessed them as appearing genuine, while noting that independent verification of the full dataset has not been possible. Some researchers have suggested the 10 petabyte figure may be inflated for commercial leverage on BreachForums, and the actor is reportedly offering a limited preview for thousands of dollars and full access for hundreds of thousands, payable in cryptocurrency.

Take a mo­ment with the scale. Ten petabytes is, in rough terms, equiv­a­lent to two bil­lion pho­tographs, or the en­tire tex­tual con­tent of the pub­lic web sev­eral times over. It is the con­tents of roughly ten thou­sand de­cent lap­tops. Even if the ac­tual ex­fil­tra­tion turns out to be one tenth of the claim, the re­sult­ing one petabyte event is still in a cat­e­gory by it­self, larger than es­sen­tially any sin­gle named breach in the pub­lic his­tory of cy­ber­se­cu­rity. And the tar­get is not a mar­ket­ing data­base. It is a state su­per­com­put­ing fa­cil­ity host­ing work for Chinese de­fense acad­e­mia and avi­a­tion in­dus­try pro­grams.
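Those equivalences are easy to sanity-check with back-of-the-envelope arithmetic. In the sketch below, the 5 MB-per-photograph and 1 TB-per-laptop unit sizes are my assumptions for illustration, not figures from the reporting:

```python
# Back-of-the-envelope check of the 10-petabyte scale claims.
# Assumed unit sizes (mine, not from the reporting): 5 MB per photo, 1 TB per laptop.
PB = 1024 ** 5  # bytes in a petabyte (binary convention)
TB = 1024 ** 4  # bytes in a terabyte
MB = 1024 ** 2  # bytes in a megabyte

claimed = 10 * PB

terabytes = claimed / TB       # 10 PB = 10,240 TB, matching the reported figure
photos = claimed / (5 * MB)    # roughly 2.1 billion 5 MB photographs
laptops = claimed / (1 * TB)   # roughly 10,000 laptops with 1 TB drives

print(f"{terabytes:,.0f} TB, {photos / 1e9:.1f}B photos, {laptops:,.0f} laptops")
```

Even with generous unit sizes, the claimed volume lands in the same ballpark as the article's comparisons, which is why the "even if it is one tenth of the claim" caveat still leaves a record-scale event.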

A few things make this incident historically interesting beyond the size. First, it is one of the very rare cases of a major Chinese state computing facility being publicly breached and looted from outside. The historical asymmetry of major reported breaches has run heavily in the other direction, with Chinese state actors as the named operators against Western targets. If the FlamingChina claims hold up even partially, the symmetry has shifted. Second, the access vector reported, a compromised VPN domain followed by a long-dwell botnet quietly exfiltrating over six months without detection, is the same pattern Western incident response teams describe in their worst nation-state engagements. The defenders' problems and the attackers' problems are starting to look like the same problem. Third, and most quietly, the U.S. mainstream press picked the story up for a single news cycle in early April and then mostly let it go. A potential record-breaking exfiltration event from a Chinese state supercomputer is the sort of thing that, in any prior decade, would have produced sustained reporting for weeks. In 2026 it produced a few articles, a flurry of trade press coverage, and then quiet.

The Chinese gov­ern­ment has not pub­licly ac­knowl­edged the in­ci­dent. The sam­ples re­main in cir­cu­la­tion. The claim re­mains un­ver­i­fied at full scope. The his­tor­i­cal im­por­tance re­mains, in the mean­time, sus­pended in ex­actly the kind of par­tial in­for­ma­tion state where most gen­uinely un­prece­dented events live for a while be­fore they fi­nally get named.

It would be a mis­take to read the 2026 wave as a bolt from the blue. It is more use­ful to read it as the vis­i­ble sur­face of a longer pre-po­si­tion­ing cam­paign that has been qui­etly run­ning un­der­neath the pub­lic-fac­ing in­ci­dents for years. Two Chinese state-aligned ac­tor clus­ters, Volt Typhoon and Salt Typhoon, are the rel­e­vant back­ground.

Volt Typhoon, attributed to the People's Republic of China and active since at least mid-2021, has been documented inside U.S. critical infrastructure across communications, manufacturing, utility, transportation, construction, maritime, government, IT, and education sectors (CISA AA24-038A; Microsoft Security). The U.S. Intelligence Community has assessed publicly that Volt Typhoon's targeting carries limited espionage value and is instead consistent with prepositioning to disrupt U.S. infrastructure in the event of a future crisis, particularly in Guam and near U.S. military bases in the Pacific. The International Institute for Strategic Studies published “Volt Typhoon's long shadow” in January 2026, noting that researchers warn the group remains embedded in U.S. utilities and that some compromises may never be fully discovered (IISS; The Record).

Salt Typhoon is the parallel telecom-focused cluster, attributed to China's Ministry of State Security, responsible for the high-profile compromises of multiple major U.S. telecommunications carriers that surfaced in late 2024 and continued through 2025, including reported access to lawful intercept systems used by U.S. law enforcement (Congress.gov; Wikipedia overview). Both groups are still active in 2026.

The reason these two names belong in this article, even though their public disclosures predate the 2026 incidents catalogued above, is that they describe the baseline inside which the 2026 wave is happening. The named 2026 incidents are not the entire picture. They are the visible surface. Underneath them, in U.S. utilities and telecommunications infrastructure, there are pre-positioned implants that the relevant federal agencies have publicly stated may never be fully evicted. The historian sitting in 2050 reading this period is going to want to know that the surface events of the first hundred days of 2026 occurred against a background in which the deeper infrastructure had already been quietly compromised for years. That is the kind of context that gets lost when each incident is reported as if it were the first.

Honda’s 2026 has been a slow drip rather than a sin­gle event. The re­port­ing de­scribes a se­quence of dis­tinct in­ci­dents: API flaws in Honda’s e-com­merce plat­form that ex­posed cus­tomer data, dealer panel data, and in­ter­nal doc­u­ments (BleepingComputer; SecurityWeek); a pass­word re­set flow ex­ploit that ex­posed ad­di­tional data (Cybersecurity Tribe); and a Clawson Honda deal­er­ship data breach claimed by the PLAY ran­somware group that ex­posed names, Social Security num­bers, ad­dresses, dri­ver’s li­cense data, and dates of birth, with no­ti­fi­ca­tion let­ters go­ing out as re­cently as April (Claim Depot).

None of these are in­di­vid­u­ally cat­a­strophic. Stacked to­gether they tell a fa­mil­iar story about a man­u­fac­tur­ing gi­ant whose at­tack sur­face has out­grown its se­cu­rity ma­tu­rity, and they be­long in the wave count.

The named in­ci­dents above are just the ones that broke through. The full first-quar­ter 2026 list is much longer. Brief Defense, PKWARE, Cybersecurity News, ACI Learning, and CSIS are all main­tain­ing 2026 in­ci­dent time­lines, and the pat­tern is con­sis­tent.

* January 2026: Illinois and Minnesota state sys­tems ex­posed per­sonal data on nearly one mil­lion peo­ple; the Match fam­ily of dat­ing apps was breached by ShinyHunters; Eurail con­firmed unau­tho­rized ac­cess; re­searchers found a 149-million-record data­base pub­licly ex­posed via cloud mis­con­fig­u­ra­tion; Microsoft January Patch Tuesday shipped 115 fixes in­clud­ing the Office bug APT28 be­gan ex­ploit­ing within days; Nike in­ves­ti­gated a pos­si­ble cy­ber at­tack af­ter WorldLeaks claimed 1.4 TB of in­ter­nal com­pany data on January 24; Red Hat suf­fered a pri­vate GitHub and GitLab com­pro­mise by the Crimson Collective, with roughly 570 GB ex­fil­trated from over 28,000 in­ter­nal repos­i­to­ries in­clud­ing ap­prox­i­mately 800 Customer Engagement Reports con­tain­ing in­fra­struc­ture de­tails and cre­den­tials for large en­ter­prise clients; Pickett USA breach ex­posed sen­si­tive en­gi­neer­ing data linked to U.S. util­i­ties; ShinyHunters / SLH vish­ing cam­paigns tar­get­ing en­ter­prise SSO en­vi­ron­ments in­clud­ing Okta surged in early-to-mid January.

* February 2026: BridgePay, a payments platform serving city governments, was hit by ransomware; Odido disclosed unauthorized access affecting up to 6.2 million customers; Change Healthcare, a UnitedHealth subsidiary, was hit again, this time by AlphV/BlackCat; Cisco disclosed that a critical Catalyst SD-WAN vulnerability (CVE-2026-20127, CVSS 10.0) had been actively exploited since 2023; APT28 was observed weaponizing CVE-2026-21509 against Ukrainian and EU government targets; the FBI detected abnormal activity (Feb 17) on the internal network it uses to manage wiretaps and surveillance, eventually classified as a “major incident” on March 23; the 2026 Winter Olympics opened in Milan and Cortina d'Ampezzo and pro-Russian DDoS group NoName057(16) began hitting Italian Olympic infrastructure, several national Olympic committees (Lithuania, Poland, Spain), the Cortina d'Ampezzo tourism site, and Milan Malpensa Airport; University of Mississippi Medical Center closed clinics following a ransomware attack and reverted to manual patient care; France's National Bank Account Registry (FICOBA) was hit through credential weakness exploitation; Iron Mountain, Panera Bread, SmarterTools, Step Finance, and Advantest Corporation all absorbed publicly disclosed incidents.

* March 2026: Stryker wiper event (March 11); Microsoft published the “Help on the line” report on a Teams-vishing initial access pattern (March 16); Oracle Cloud “rose87168” listing (March 21); Lockheed Martin / 28-engineer doxxing claims (March 23); European Commission Europa cloud platform breached (March 24); Kash Patel personal email dump (March 27); Cisco Trivy supply chain breach surfaces; TeamPCP compromises the LiteLLM open source library in a supply chain attack that propagates to “thousands of companies” including Mercor, the $10B AI training-data vendor whose customers include OpenAI, Anthropic, and Meta; Axios npm hijack (March 31); ShinyHunters Cisco Salesforce extortion post (March 31); Mercor discloses the security incident publicly on March 31 / April 1; FlamingChina samples from the Tianjin NSCC breach circulate widely on Telegram and BreachForums following early-February initial postings.

* April 2026 so far: Google Threat Intelligence Group attributes Axios npm compromise to UNC1069 / North Korea (April 1); Axios maintainer post-mortem published (April 2); Fortune confirms the Mercor breach publicly (April 2), with Lapsus$ claiming 4 TB of exfiltrated data including ~211 GB database records, ~939 GB source code, and ~3 TB storage covering candidate PII, employer data, API keys, Slack dumps, and videos of Mercor AI systems talking to contractors; Meta pauses all AI data training contracts with Mercor indefinitely, five Mercor contractors file lawsuits over personal data exposure; DOJ confirms the FBI internal “major incident” classification publicly (early April); a continent-wide aviation IT attack on April 6 cripples check-in, baggage, and boarding at Heathrow, Charles-de-Gaulle, Frankfurt, and Copenhagen, cancelling or delaying more than 1,600 flights in a single day, traced to the Collins Aerospace MUSE platform that had already been hit in September 2025; on April 7, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convene an emergency in-person meeting in Washington with the CEOs of Goldman Sachs, Citigroup, Morgan Stanley, Bank of America, and Wells Fargo to brief them on the cyber risks posed by Anthropic's Mythos AI model, which has identified thousands of previously-unknown zero-day vulnerabilities in every major operating system and web browser; Rockstar Games confirms third-party breach via Anodot/Snowflake (early April); Snowflake itself confirms “unusual activity” affecting more than a dozen customer accounts linked to the Anodot integration; ShinyHunters sets April 14 ransom deadline and tells reporters they had Anodot access “for some time” and had also tried (and failed) to breach Salesforce directly; FlamingChina supercomputer claims surface in mainstream Western press (CNN, Tom's Hardware, TechRadar, BGR, around April 8-10); National Public Data successor breach surfaces with roughly 2.9 billion records of personal information sold for 3.5 million dollars on the dark web; Yale New Haven Health System discloses breach affecting 5.5 million patients; HIPAA Journal links up to 80 hospitals to the Oracle Cloud incident; March 2026 ransomware activity totals 672 incidents in a single month, with Qilin, Akira, and DragonForce alone accounting for roughly 40 percent.

By the time you fin­ish count­ing the named in­ci­dents above, you are well past forty, and the count does not in­clude the 300 to 400 or­ga­ni­za­tions swept up in the SLH Salesforce mega-cam­paign or the roughly 1.5 bil­lion Salesforce records es­ti­mated stolen across that sin­gle op­er­a­tion. Nor does it in­clude the 672 ran­somware in­ci­dents that ran­somware track­ing firms recorded in March 2026 alone, with Qilin, Akira, and DragonForce ac­count­ing for about 40 per­cent of that sin­gle-month to­tal. Nor does it in­clude the dozens of smaller school dis­trict, mu­nic­i­pal, and health­care ran­somware events that have be­come so rou­tine they no longer make na­tional news. The 2025 base­line al­ready showed pub­licly dis­closed ran­somware at­tacks ris­ing 49% year-over-year to 1,174 in­ci­dents, with health­care ab­sorb­ing 22% of the to­tal. The 2026 first-quar­ter pace, on the tra­jec­tory above, is com­fort­ably on track to make 2025 look quiet.
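The pace claim can be made concrete with simple arithmetic. The projection below naively extrapolates March's single-month total across a full year; that extrapolation is my assumption for illustration, since the article reports only the one month:

```python
# Rough comparison of the reported 2026 ransomware pace against the 2025 baseline.
incidents_2025 = 1174   # publicly disclosed ransomware attacks, full-year 2025
yoy_rise = 0.49         # the 49% year-over-year rise reported for 2025
march_2026 = 672        # incidents recorded in March 2026 alone

implied_2024 = incidents_2025 / (1 + yoy_rise)  # implied 2024 baseline, ~788
annualized_2026 = march_2026 * 12               # naive projection: 8,064 (assumes March is typical)
top3_march = round(march_2026 * 0.40)           # Qilin, Akira, DragonForce: ~269 of the March total

print(f"implied 2024: {implied_2024:.0f}, projected 2026: {annualized_2026:,}, "
      f"top-3 share of March: {top3_march}")
```

On those numbers, a single month of 2026 comes to more than half the entire disclosed 2025 total, which is the sense in which the first-quarter pace makes 2025 look quiet.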

This is the part of the note­book where the his­to­ri­an’s voice has to be hon­est about what it does not fully un­der­stand. The events above are real. The vol­ume is real. The pat­tern is real. The rel­a­tive quiet­ness around them in main­stream Western pub­lic dis­course is also real, and it is gen­uinely puz­zling rather than ob­vi­ously sin­is­ter. Several plau­si­ble ex­pla­na­tions sit on the table at once, and the hon­est move is to lay them out with­out in­sist­ing on any of them.

One possibility is that attribution to a state actor has become professionally expensive. Calling a Handala wiper event an Iranian intelligence-linked destructive operation against a U.S. medical device company, or calling the FlamingChina supercomputer leak what it might be, takes on political weight that practitioners and vendors increasingly prefer to avoid. Analysis gets softened to “threat actor” or “sophisticated adversary,” and the geopolitical reading gets edited out without anyone deciding to edit it out. That softening is not a conspiracy. It is the cumulative effect of many small commercial choices that each individually seem reasonable.

A sec­ond pos­si­bil­ity is that the SaaS sup­ply chain story is un­com­fort­able for the se­cu­rity in­dus­try to dwell on, be­cause the in­dus­try sells into it. Saying out loud that the mod­ern en­ter­prise no longer has a de­fen­si­ble perime­ter, only a long chain of ven­dor trust re­la­tion­ships that can be turned at any link, is also say­ing that the se­cu­rity stack the in­dus­try shipped last quar­ter can­not stop the at­tacks the in­dus­try is sup­posed to be talk­ing about this quar­ter. That is a hard pub­lic mes­sage to de­liver from in­side a ven­dor.

A third possibility is much simpler and possibly the most powerful. The news cycle has trained the public to bounce off cyber stories. The audience has already absorbed Equifax, OPM, Yahoo, SolarWinds, NPD, and Snowflake, and the marginal shock of “another one” has flattened. When the marginal shock is flat, even genuinely unprecedented events struggle to land. Practitioners know this, so they save their breath. The silence may be less an act of suppression than an act of fatigue.

A fourth possibility is the one this notebook keeps circling back to. The parallel acceleration on the AI side of the industry is awkward to discuss in the same paragraph as the offensive incidents, because every cybersecurity vendor is currently racing to ship “AI-powered” defense. It is commercially uncomfortable to put the two lists on the same page, even if no one in particular is forbidding it. The absence of that pairing is, at minimum, a strange thing to notice in the historical record.

A historian writing in 2050 about the first hundred days of 2026 will probably find all four of these explanations partially true and none of them fully sufficient. What that historian will almost certainly notice, more than any single explanation, is the gap itself, and more specifically the layered nature of the gap. The April 7 meeting between the Treasury Secretary, the Fed Chair, and the CEOs of the largest U.S. banks proves something crucial about that layering. The silence has not fully held at the highest levels of the U.S. government. Bessent and Powell are clearly not in the dark, and neither are the people they briefed. What has held is the silence in the public discourse, in the mainstream press, in the day-to-day conversations practitioners have with their boards and their customers. The information is moving in private. It is just barely moving in public. A period that, on the evidence, looks unprecedented in the history of computing security passed through real-time public discourse without producing the kind of sustained, coherent, named conversation the events seem to deserve, and yet behind closed doors at the highest levels of financial regulation, the conversation is clearly happening. That asymmetry is the most interesting object in this entire notebook.

If you work in this field and the last hun­dred days have felt strange to you, you are not imag­in­ing it. Something gen­uinely un­usual is hap­pen­ing, and the un­usu­al­ness of how qui­etly it is hap­pen­ing may, in the long view, be the most his­tor­i­cally in­ter­est­ing layer of all. Naming the gap, even gen­tly, is a small con­tri­bu­tion to mak­ing sure the pe­riod even­tu­ally gets the doc­u­men­ta­tion it de­serves.

...

Read the original on substack.com »
