10 interesting stories served every morning and every evening.
...
Read the original on arxiv.org »
Last week, I wrote about catching a supply chain attack on a WordPress plugin called Widget Logic. A trusted name, acquired by a new owner, turned into something malicious. It happened again. This time at a much larger scale.
Ricky from Improve & Grow emailed us about an alert he saw in the WordPress dashboard for a client site. The notice was from the WordPress.org Plugins Team, warning that a plugin called Countdown Timer Ultimate contained code that could allow unauthorized third-party access.
I ran a full security audit on the site. The plugin itself had already been force-updated by WordPress.org to version 2.6.9.1, which was supposed to clean things up. But the damage was already done.
The plugin’s wpos-analytics module had phoned home to analytics.essentialplugin.com, downloaded a backdoor file called wp-comments-posts.php (designed to look like the core file wp-comments-post.php), and used it to inject a massive block of PHP into wp-config.php.
The injected code was sophisticated. It fetched spam links, redirects, and fake pages from a command-and-control server. It only showed the spam to Googlebot, making it invisible to site owners. And here is the wildest part. It resolved its C2 domain through an Ethereum smart contract, querying public blockchain RPC endpoints. Traditional domain takedowns would not work because the attacker could update the smart contract to point to a new domain at any time.
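The mechanics of that blockchain lookup are roughly sketched below. This is an illustrative Python reconstruction, not the implant's actual code (which is PHP): the RPC endpoint, contract address, and function selector are all placeholders, since the write-up does not publish the real ones.

```python
import json

# All identifiers below are placeholders -- the write-up does not publish
# the real contract address, selector, or endpoint.
RPC_URL = "https://cloudflare-eth.com"   # any public Ethereum JSON-RPC endpoint
CONTRACT = "0x" + "11" * 20              # placeholder contract address
SELECTOR = "0x20965255"                  # placeholder 4-byte function selector

def build_eth_call(contract: str, selector: str) -> str:
    """Build the JSON-RPC body an implant would POST to resolve its C2 domain."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [{"to": contract, "data": selector}, "latest"],
    })

def decode_abi_string(result_hex: str) -> str:
    """Decode an ABI-encoded string return value: a 32-byte offset,
    a 32-byte length, then the raw string bytes."""
    raw = bytes.fromhex(result_hex.removeprefix("0x"))
    offset = int.from_bytes(raw[:32], "big")
    length = int.from_bytes(raw[offset:offset + 32], "big")
    return raw[offset + 32:offset + 32 + length].decode()
```

Because the lookup is a read-only eth_call, serving it costs the attacker nothing, and repointing the stored domain is a single on-chain transaction, which is exactly why conventional domain takedowns don't bite.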
CaptainCore keeps daily restic backups. I extracted wp-config.php from 8 different backup dates and compared file sizes, binary-search style.
The injection happened on April 6, 2026, between 04:22 and 11:06 UTC. A 6-hour 44-minute window.
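That bisection can be sketched as follows. The snapshot timestamps and file sizes here are made up, and the approach assumes infection is monotone: once wp-config.php is injected, it stays injected in every later backup.

```python
def first_infected(snapshots: list[tuple[str, int]], clean_size: int) -> str:
    """Binary-search a chronologically sorted list of (timestamp, size)
    pairs for the first snapshot whose wp-config.php outgrew the clean
    baseline. Assumes infection is monotone: once dirty, always dirty."""
    lo, hi = 0, len(snapshots) - 1
    if snapshots[hi][1] <= clean_size:
        raise ValueError("no infected snapshot in range")
    while lo < hi:
        mid = (lo + hi) // 2
        if snapshots[mid][1] > clean_size:
            hi = mid          # infected here: the jump is at or before mid
        else:
            lo = mid + 1      # still clean: the jump is after mid
    return snapshots[lo][0]

# Hypothetical restic snapshots (timestamp, bytes of wp-config.php):
snaps = [
    ("2026-04-03T03:00Z", 3_241),
    ("2026-04-05T03:00Z", 3_241),
    ("2026-04-06T04:22Z", 3_241),
    ("2026-04-06T11:06Z", 19_774),
    ("2026-04-07T03:00Z", 19_774),
]
```

Each probe costs one `restic dump` of a single file, so even a long retention window narrows to a small time box in a handful of extractions.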
I traced the plugin’s history through 939 quicksave snapshots. The plugin had been on the site since January 2019. The wpos-analytics module was always there, functioning as a legitimate analytics opt-in system for years.
Then came version 2.6.7, released August 8, 2025. The changelog said, “Check compatibility with WordPress version 6.8.2.” What it actually did was add 191 lines of code, including a PHP deserialization backdoor. The class-anylc-admin.php file grew from 473 to 664 lines.
The new code introduced three things:
* A fetch_ver_info() method that calls file_get_contents() on the attacker’s server and passes the response to @unserialize()
* A version_info_clean() method that executes @$clean($this->version_cache, $this->changelog), where all three values come from the unserialized remote data
That is a textbook arbitrary function call. The remote server controls the function name, the arguments, everything. It sat dormant for 8 months before being activated on April 5-6, 2026.
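To make the vulnerable shape concrete, here is a Python analogue. This is not the plugin's actual code (the real backdoor is PHP, where a string variable in call position invokes the function it names); the dict stands in for the unserialized server response.

```python
import builtins

# Python analogue of the PHP pattern described above -- NOT the plugin's code.
# In PHP the backdoor effectively runs @$clean($cache, $changelog), where
# $clean is a string that arrived inside unserialize()d data from the
# attacker's server ("system", "exec", ...).

def version_info_clean(remote: dict):
    """Dispatch on an attacker-supplied function name -- the vulnerable shape."""
    fn = getattr(builtins, remote["clean"])   # attacker controls the callee
    return fn(remote["version_cache"])        # ...and the argument

# Harmless demo: "remote" stands in for the unserialized response.
demo = version_info_clean({"clean": "len", "version_cache": "payload"})
```

Swap "len" for something with side effects and the remote server owns the site, which is why a dormant version of this code is just as dangerous as an active one.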
This is where it gets interesting. The original plugin was built by Minesh Shah, Anoop Ranawat, and Pratik Jain, an India-based team that operated under “WP Online Support” starting around 2015. They later rebranded to “Essential Plugin” and grew the portfolio to 30+ free plugins with premium versions.
By late 2024, revenue had declined 35-45%. Minesh listed the entire business on Flippa. A buyer identified only as “Kris,” with a background in SEO, crypto, and online gambling marketing, purchased everything for six figures. Flippa even published a case study about the sale in July 2025.
The buyer’s very first SVN commit was the backdoor.
On April 7, 2026, the WordPress.org Plugins Team permanently closed every plugin from the Essential Plugin author. At least 30 plugins, all on the same day. Here are the ones I confirmed:
* SlidersPack — All in One Image Sliders — sliderspack-all-in-one-image-sliders
All permanently closed. The author search on WordPress.org returns zero results. The analytics.essentialplugin.com endpoint now returns {"message":"closed"}.
In 2017, a buyer using the alias “Daley Tias” purchased the Display Widgets plugin (200,000 installs) for $15,000 and injected payday loan spam. That buyer went on to compromise at least 9 plugins the same way.
The Essential Plugin case is the same playbook at a larger scale. 30+ plugins. Hundreds of thousands of active installations. A legitimate 8-year-old business acquired through a public marketplace and weaponized within months.
WordPress.org’s forced update added return; statements to disable the phone-home functions. That is a band-aid. The wpos-analytics module is still there with all its code. I built patched versions with the entire backdoor module stripped out.
I scanned my entire fleet and found 12 of the 26 Essential Plugin plugins installed across 22 customer sites. I patched 10 of them (one had no backdoor module, one was a different “pro” fork by the original authors). Here are the patched versions, hosted permanently on B2:
# Countdown Timer Ultimate
wp plugin install https://plugins.captaincore.io/countdown-timer-ultimate-2.6.9.1-patched.zip --force
# Popup Anything on Click
wp plugin install https://plugins.captaincore.io/popup-anything-on-click-2.9.1.1-patched.zip --force
# WP Testimonial with Widget
wp plugin install https://plugins.captaincore.io/wp-testimonial-with-widget-3.5.1-patched.zip --force
# WP Team Showcase and Slider
wp plugin install https://plugins.captaincore.io/wp-team-showcase-and-slider-2.8.6.1-patched.zip --force
# WP FAQ (sp-faq)
wp plugin install https://plugins.captaincore.io/sp-faq-3.9.5.1-patched.zip --force
# Timeline and History Slider
wp plugin install https://plugins.captaincore.io/timeline-and-history-slider-2.4.5.1-patched.zip --force
# Album and Image Gallery plus Lightbox
wp plugin install https://plugins.captaincore.io/album-and-image-gallery-plus-lightbox-2.1.8.1-patched.zip --force
# SP News and Widget
wp plugin install https://plugins.captaincore.io/sp-news-and-widget-5.0.6-patched.zip --force
# WP Blog and Widgets
wp plugin install https://plugins.captaincore.io/wp-blog-and-widgets-2.6.6.1-patched.zip --force
# Featured Post Creative
wp plugin install https://plugins.captaincore.io/featured-post-creative-1.5.7-patched.zip --force
# Post Grid and Filter Ultimate
wp plugin install https://plugins.captaincore.io/post-grid-and-filter-ultimate-1.7.4-patched.zip --force
Each patched version removes the entire wpos-analytics directory, deletes the loader function from the main plugin file, and bumps the version to -patched. The plugin itself continues to work normally.
The process is straightforward with Claude Code. Point it at this article for context, tell it which plugin you need patched, and it can strip the wpos-analytics module the same way I did. The pattern is identical across all of the Essential Plugin plugins:
* Delete the wpos-analytics/ directory from the plugin
* Remove the loader function block in the main plugin PHP file (search for "Plugin Wpos Analytics Data Starts" or wpos_analytics_anl)
* Bump the version string so the cleaned build is distinguishable (the builds above append -patched)
Two supply chain attacks in two weeks. Both followed the same pattern. Buy a trusted plugin with an established install base, inherit the WordPress.org commit access, and inject malicious code. The Flippa listing for Essential Plugin was public. The buyer’s background in SEO and gambling marketing was public. And yet the acquisition sailed through without any review from WordPress.org.
WordPress.org has no mechanism to flag or review plugin ownership transfers. There is no “change of control” notification to users. No additional code review triggered by a new committer. The Plugins Team responded quickly once the attack was discovered. But 8 months passed between the backdoor being planted and being caught.
If you manage WordPress sites, search your fleet for any of the 26 plugin slugs listed above. If you find one, patch it or remove it. And check wp-config.php.
...
Read the original on anchor.host »
Servo is now available on crates.io
Today the Servo team has released v0.1.0 of the servo crate. This is our first crates.io release of the servo crate that allows Servo to be used as a library.
We currently do not have any plans of publishing our demo browser servoshell to crates.io. In the 5 releases since our initial GitHub release in October 2025, our release process has matured, with the main “bottleneck” now being the human-written monthly blog post. Since we’re quite excited about this release, we decided to not wait for the monthly blog post to be finished, but promise to deliver the monthly update in the coming weeks.
As you can see from the version number, this release is not a 1.0 release. In fact, we still haven’t finished discussing what 1.0 means for Servo. Nevertheless, the increased version number reflects our growing confidence in Servo’s embedding API and its ability to meet some users’ needs.
In the meantime we also decided to offer a long-term support (LTS) version of Servo, since breaking changes in the regular monthly releases are expected and some embedders might prefer doing major upgrades on a scheduled half-yearly basis while still receiving security updates and (hopefully!) some migration guides. For more details on the LTS release, see the respective section in the Servo book.
...
Read the original on servo.org »
Large pull requests are hard to review, slow to merge, and prone to conflicts. Reviewers lose context, feedback quality drops, and the whole team slows down. Stacked PRs solve this by breaking big changes into a chain of small, focused pull requests that build on each other — each one independently reviewable.
A stack is a series of pull requests in the same repository where each PR targets the branch of the PR below it, forming an ordered chain that ultimately lands on your main branch.
GitHub understands stacks end-to-end: the pull request UI shows a stack map so reviewers can navigate between layers, branch protection rules are enforced against the final target branch (not just the direct base), and CI runs for every PR in the stack as if they were targeting the final branch.
The gh stack CLI handles the local workflow: creating branches, managing rebases, pushing to GitHub, and creating PRs with the correct base branches. On GitHub, the PR UI gives reviewers the context they need — a stack map for navigation, focused diffs for each layer, and proper rules enforcement.
When you’re ready to merge, you can merge all or a part of the stack. Each PR can be merged directly or through the merge queue. After a merge, the remaining PRs in the stack are automatically rebased so the lowest unmerged PR targets the base branch.
Ready to dive in? Start with the Quick Start guide or read the full overview.
...
Read the original on github.github.com »
A few weeks ago I wrote about how I thought intelligence is becoming a commodity. The idea is quite straightforward, and widespread now: when everyone races to build the best model, the models get better, but so does every other model eventually. Every dollar spent on a bigger training run makes the previous one cheaper. The distance between frontier, second-best, and open-source alternatives is collapsing fast (actually Gemma 4, Kimi K2.5 and GLM 5.1 are becoming my bedside models these days). Even more, as models become better, the amount of intelligence that can be deployed on modest local hardware increases significantly.
The irony of this situation is that this commoditisation of intelligence is benefiting the company that everyone was framing as the “AI loser”: Apple.
There’s a version of the last three years where Apple genuinely failed at AI. They had Siri before anyone had a serious voice assistant, and then watched ChatGPT eat their lunch from its very first release (even before ChatGPT had introduced its native voice interaction). Apple had no flagship frontier (or even vanity open-source) model and no $500B compute commitment with the usual suspects. Meanwhile, the rest of the AI labs and big tech companies were racing to win the next state-of-the-art benchmark by burning bags of cash.
What this also meant is that while these companies were burning money at a rate that would make a sovereign wealth fund uncomfortable, Apple was (and still is) sitting in a pile of undeployed cash (to the point of even increasing their stock buybacks) giving them optionality.
To me, OpenAI is the most paradigmatic example of this “infinite money burning machine”. OpenAI raised at a $300B valuation and then shut down Sora, the video product they’d been positioning as a creative-industry flagship, because it was running at roughly $15M a day in costs against $2.1M in daily revenue. Disney had already signed a three-year licensing deal for Sora to generate content from Marvel, Pixar, and Star Wars characters. They were finalising a $1B equity stake in OpenAI. When Sora died, so did the billion: a $1B investment evaporated because the product it was staked on couldn’t pay for itself (further shrinking the buffer that absorbs OpenAI’s daily burn).
On the infrastructure side: OpenAI signed non-binding letters of intent with Samsung and SK Hynix for up to 900,000 DRAM wafers per month, roughly 40% of global output. These were of course non-binding. Micron, reading the demand signal, shut down its 29-year-old Crucial consumer memory brand to redirect all capacity toward AI customers. Then Stargate Texas was cancelled, OpenAI and Oracle couldn’t agree terms, and the demand that had justified Micron’s entire strategic pivot simply vanished. Micron’s stock crashed.
I don’t know about you, but I don’t see these as the behaviours of someone who is winning the AI race, no matter how well their models do in benchmarks or how much they are burning on infrastructure. A small miscalculation in expected revenue, and you are out of the game (I am actually of the opinion that without some kind of bailout, OpenAI could be bankrupt in the next 18-24 months, but I am horrible at predictions).
My sense is that the labs’ bet was always that raw model capability, i.e. intelligence, along with the infrastructure required to run it, would stay scarce. Whoever secured the best model and the infrastructure to run it at scale would get the best moat. But I am afraid that having the best model in itself may not be enough moving forward: smaller, cheaper models keep catching up to what the frontier models could do only a generation earlier.
The best recent example I can think of is Gemma 4, Google’s open-weight model. It was built to run on a phone, scores 85.2% on MMLU Pro and matches Claude Sonnet 4.5 Thinking on the Arena leaderboard. 2 million downloads in its first week. Models that would have been state-of-the-art eighteen months ago now run on a laptop, and they get better every quarter.
If you haven’t tried Gemma 4 yourself I highly recommend it. I am running it on my AMD Ryzen AI Max+, and its performance in terms of tokens per second and intelligence is so good that I have already migrated some of my personal tools to use this model as the backend without visibly impacting their output. In the next few months, this trend could really change the way we access intelligence.
I feel that some of the labs see this coming. Anthropic has been particularly aggressive about it and they are releasing new (actually useful) tools every day that work like a charm with their models in order to lock users into their ecosystem. Claude Code for developers, Claude Cowork for teams, the recent Claude Managed Sessions to orchestrate agents, all designed to put Claude inside workflows people are already in.
The logic behind it: if the model itself won’t hold the moat, capture the usage layer and make switching painful. I think this is brilliant, and seeing how much Anthropic is growing in users and revenue, it seems to be paying off. The economics of their plans are still rough, though. One analysis found a max-plan subscriber consuming $27,000 worth of compute on their $200 Max subscription. The labs are subsidising the demand they’re chasing, which explains their level of burn (let’s see how long they can afford these subsidies).
Apple, by contrast, has spent almost nothing on AI infrastructure or on subsidising users’ token burn. And this may be giving them more optionality and leverage than any of the other companies that jumped head first into the AI race.
In that earlier post, I argued that if intelligence becomes abundant, context becomes the scarce resource. A model that can reason about anything but knows nothing about you or the environment it operates in is a generic tool. What makes AI genuinely useful day-to-day is reasoning plus personal context: your messages, your calendar, your code, your tools, your health data, your photos, your habits. I think here is where Anthropic is making an amazing job with their “Claude suite”.
But Apple already has all this context and access to your environment through their 2.5 billion active devices. Each one is a context mine that users have been filling for years. Health data from Apple Watch. Every photo taken on an iPhone. Notes, messages, location history, app behaviour, emails, and awareness of your environment through the pool of sensors of your device. Why build a commodity when they already have the context that can become their moat?
And they even have the ability to keep all this data on-device, which is where the “Privacy. That’s iPhone” positioning becomes something more than a PR strategy, and which could actually make a comeback to become one of their core value propositions. Apple spent years using privacy as a differentiator against the ad-driven models of Google and Meta. It worked, but it always felt a bit abstract and, honestly, fake. Now it could become really concrete. Would you hand OpenAI your medical records and fifteen years of photos to get better AI answers? Probably not. Some are, but I personally wouldn’t like Sam to have that personal data from me. Would you let a model running entirely on your device (no network request, no data leaving your phone) access all of that? That’s a different question. The on-device model gets full context because it never leaves the hardware. Apple built the reputation and the architecture for this when no one else thought it mattered.
Of course, there are still technological barriers to make this possible, but I feel like we may be getting there.
In this context, the Gemini deal, where Apple signed a $1B agreement to license Google’s frontier model for the queries that need cloud-scale reasoning, makes total sense. Apple didn’t build a frontier model. They bought access to one, at a price that’s a rounding error against OpenAI’s weekly compute bill. What they kept in-house: the context layer, the on-device stack, and the operating system that mediates everything.
Turns out Apple had another unexpected lever for AI, as shown by the Mac Mini craze after OpenClaw’s release. Apple Silicon wasn’t built specifically for AI; it was built for efficiency, for battery life, for thermal performance, for the hardware/software co-design that Apple had been running for fifteen years. But it turned out to be the perfect architecture to run local models efficiently.
The key decision is unified memory. On a conventional architecture (that of most laptops, and even traditional data center-grade GPUs) the CPU and GPU are separate chips with separate memory pools. Moving data between them is slow and power-hungry. Nvidia’s GPUs are extremely fast at matrix operations, but they sit on the other side of a PCIe bus from the CPU, and feeding them is a constant bottleneck (as discussed when presenting the difference between DRAM and HBM in this post from a few weeks ago).
Apple’s M-series and A-series chips put the CPU, GPU, and Neural Engine (their proprietary accelerator) on the same die, sharing one high-bandwidth memory pool. No bus crossing, no transfer overhead, no latency switching between CPU and GPU work. For video editing or compiling Xcode, this is a nice efficiency win. For LLM inference, this has been key.
As described also in my post about RAM memory and TurboQuant, LLM inference is currently memory-bandwidth bound, not compute bound. The bottleneck isn’t so much how fast you can multiply matrices, it’s how fast you can stream model weights from memory into the compute units, and how big of a KV cache you can store to avoid having to re-compute it. Apple’s unified pool gives every compute unit direct, high-bandwidth access to the same memory simultaneously. That’s exactly the operation inference needs.
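A back-of-envelope version of that bound: for a dense model, every generated token has to stream the full weight set from memory, so decode speed is capped at bandwidth divided by model size. The numbers below are illustrative assumptions, not measurements.

```python
def max_decode_tps(model_bytes: float, bandwidth_gb_s: float) -> float:
    """Bandwidth ceiling on decode tokens/sec for a dense model: each token
    reads all weights once, so tps <= bandwidth / model size (KV cache and
    activation traffic ignored; MoE models only read their active experts)."""
    return bandwidth_gb_s * 1e9 / model_bytes

# Illustrative: an 8B-parameter model at 4-bit quantization is ~4 GB.
weights = 8e9 * 0.5     # bytes of weights
bandwidth = 400.0       # GB/s, ballpark for higher-end unified memory
ceiling = max_decode_tps(weights, bandwidth)   # ~100 tokens/s upper bound
```

Compute throughput barely enters this estimate, which is the point: on unified memory, the whole bandwidth pool feeds whichever unit is doing the matmuls.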
This is what makes the LLM in a Flash technique work so well on Apple hardware. Someone recently ran Qwen 397B, a 209GB model, on an M3 Max Mac at ~5.7 tokens per second, using only 5.5GB of active RAM. The weights live on the SSD and stream in at ~17.5 GB/s as needed. This works because Qwen is a mixture-of-experts architecture: each token only activates a small subset of expert layers, so you only ever need a fraction of the 209GB resident in memory. The SSD throughput Apple achieves (faster than their own figures from the original LLM in a Flash paper) comes from storage architecture they built for iPhone responsiveness, not AI. Claude wrote the ~5,000 lines of Objective-C and Metal shaders to make it all work. A 400-billion-parameter model, on a consumer laptop, from 5.5GB of RAM (another win of the autoresearch flow discussed in this newsletter).
What I find more interesting about all of this is the platform dynamic that this can result in. Think about the App Store. Apple didn’t build the apps, they built the platform where apps ran best, and the ecosystem followed. Developers didn’t target iOS because Apple asked, they targeted it because the users were there, the tooling was good, the hardware was consistent. My feeling is that the same thing could happen now with local inference. MLX is already a de facto framework for on-device AI. Gemma, Qwen, Mistral, the most relevant model architectures have MLX support. Apple doesn’t need to win the model race if they manage to become the de-facto platform where the models (or the agents that use them) run. Again, a great example of this is the Mac Mini craze after OpenClaw went viral.
I keep going back and forth on this, honestly, and I still don’t know whether this was Apple’s strategy all along, or whether they didn’t feel in a position to make a bet and are just flowing as events unfold, maximising their optionality.
The hardware/software co-design strategy has been a key focus for years, and one that I’ve always agreed with myself (as an electrical engineer by training, I’ve always been into hardware/software co-design). If you can afford it, I think that’s the right approach. The privacy positioning, the on-device processing focus, the decision to build their own silicon when the rest of the industry was happy buying Nvidia and Intel: all of those were choices Apple made when they were commercially risky and the direction wasn’t obvious. It is true that they were made with cost and governance in mind, not AI, but it turned out well for them.
What Apple couldn’t have planned (or could they?) is that their unified memory architecture would be a perfect fit for LLMs, and that open-weight models would get this capable, this fast, removing the need for huge hardware investment for AI infrastructure from their side. That the model race would commoditise intelligence as quickly as it did. Or that someone would stream a 400B parameter model from an SSD and it would actually work.
So some of this is luck. But it’s the kind of luck that finds you when you built the right foundation, even if you built it for completely different reasons. They were definitely well-positioned.
The rest of the industry spent three years racing to see who could build the best model with Apple looking from the sidelines, waiting to understand how their devices and own ecosystem could fit in this future. I don’t know if this is exactly the case, but I feel this was smart. Risky but smart.
I genuinely don’t know how this plays out over the next few years. The labs are not standing still, and Apple’s AI track record (looking at you, Siri, you already suck a bit) is not exactly flawless. But it’s hard to imagine a world where 2.5 billion devices, carrying your entire personal context, running capable models locally on purpose-built silicon, with Gemini on call for the hard stuff, and incurring variable cost for inference instead of expensive CAPEX investment, would be a bad position in a future where AI is everywhere.
Whether that was strategy or fortune, I’ll leave for you to decide. And if you do, please let me know what you think about it. My TL;DR is that, to my surprise, I am still bullish about Apple and their relevance in an AI-centric future.
Disclaimer: To frame the opinion of this post, I just want to be clear that I am not one of those Apple fanboys. Proof of this is that this post was written from a Linux machine and that I don’t even own a Mac :)
...
Read the original on adlrocha.substack.com »
This post works through the financial logic of software teams, from what a team of eight engineers actually costs per month to what it needs to generate to be economically viable. It also examines why most teams have no visibility into either number, how that condition was built over two decades, and what the arrival of LLMs now means for organizations that have been treating large engineering headcount as an asset.
Software development is one of the most capital-intensive activities a modern company undertakes, and it is also one of the least understood from a financial perspective. The people making daily decisions about what to build, what to delay, and what to abandon are rarely given the financial context to understand what those decisions actually cost. This is not a coincidence. It is a structural condition that most organizations have maintained, quietly and consistently, for roughly two decades.
A software engineer in Western Europe costs somewhere between €120,000 and €150,000 per year when you account for salary, social fees, pension contributions, equipment, social activities, management overhead, and office space. Call it €130,000 as a reasonable middle estimate. A team of eight engineers therefore costs approximately €1,040,000 per year, or €87,000 per month, or roughly €4,000 for every working day.
Most engineers do not know this number. Many of their managers do not either. And in the organizations where someone does know it, the number rarely makes its way into the conversations where prioritization decisions are actually made.
This matters because every decision a team makes carries an implicit cost that compounds over time. Choosing to spend three weeks on a feature that serves 2% of users is a €60,000 decision. Delaying an operational improvement for a quarter is a decision with a calculable daily price tag. Rebuilding a platform because the current one feels embarrassing, rather than because customers are leaving, is a capital allocation choice that would look very different if the people making it were spending their own money.
Consider a team of eight engineers whose mission is to build and maintain an internal developer platform serving one hundred other engineers. This is a common organizational structure, and it is one where the financial logic is rarely examined carefully.
The team costs €87,000 per month. To justify that cost, the platform they build needs to generate at least €87,000 per month in value for the engineers who use it. The most direct way to measure that value is through time saved, since the platform’s purpose is to make other engineers more productive.
At a cost of €130,000 per year, one engineer costs approximately €10,800 per month, or around €65 per working hour. For the platform team to break even, their platform needs to save the hundred engineers they serve a combined total of 1,340 hours per month. That is 13.4 hours per engineer per month, or roughly three hours per week per person.
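The arithmetic in the last two paragraphs, spelled out. The 166 hours/month figure is an assumption (~21 working days at 8 hours); the article rounds slightly differently, to €65/hour and 1,340 hours.

```python
TEAM_SIZE = 8
COST_PER_ENGINEER_YEAR = 130_000     # EUR, fully loaded middle estimate
SERVED_ENGINEERS = 100
HOURS_PER_MONTH = 166                # assumed: ~21 working days x 8 hours

team_cost_month = TEAM_SIZE * COST_PER_ENGINEER_YEAR / 12     # ~86,700 EUR
hourly_rate = COST_PER_ENGINEER_YEAR / 12 / HOURS_PER_MONTH   # ~65 EUR/hour
break_even_hours = team_cost_month / hourly_rate              # ~1,330 hours
per_engineer_per_week = break_even_hours / SERVED_ENGINEERS / 4.33  # ~3 hours
```

Note that break-even hours reduce to TEAM_SIZE × HOURS_PER_MONTH: the platform team must save, in aggregate, as many engineer-hours as it consumes.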
Three hours per week is achievable. A well-built platform that eliminates manual deployment steps, reduces environment setup time, or removes the need for repetitive configuration work can easily clear that bar. Time saved is the most direct measure for a platform team, though value can also come from reducing outages, which carries a direct revenue impact of its own. But the question worth asking is whether anyone on that team knows this number, tracks it, or uses it to decide what to build next. In most organizations, the answer is no. The team has a roadmap driven by engineering preferences, stakeholder requests, and quarterly planning cycles, and the financial logic underlying their existence is left unexamined.
And break-even is not actually the right bar. Leah Tharin has written a sharp breakdown of the mathematics of this: a team with a 50% initiative success rate, which is already optimistic, needs its wins to cover its losses too. Leah’s calculation is growth-oriented, but even for non-growth organizations, the same investment thesis holds. Even a two-times return is not sufficient. Capital sitting in a bank carries no operational risk, no coordination costs, and no ongoing maintenance obligations. The systems a team builds will outlive the team itself, and the cost of owning, maintaining, and eventually replacing those systems is almost always larger than anticipated. The return has to cover not just the team’s current cost, but the long tail of what they leave behind.
That pushes the realistic threshold for financial viability to somewhere between three and five times annual cost. For an €87,000 per month team, that means generating between €260,000 and €435,000 in monthly value. The three hours per week calculation gets you to break-even. To clear the realistic financial bar, the platform needs to be genuinely transformative for the engineers using it, and the team needs to be ruthless about working on the highest-value problems rather than the most interesting ones.
A customer-facing product team of eight carries the same €87,000 monthly cost. The levers available to justify that cost are different, but the underlying logic is identical.
If the product has an average revenue per user of €50 per month, the team needs to generate or protect the equivalent of 1,740 users worth of value every month just to break even, and roughly 5,000 to 8,700 users worth of value to clear the three-to-five times threshold.
Churn is often the most direct lever. Consider a product with 50,000 active users losing 2% monthly to churn. That is 1,000 users per month, representing €50,000 in monthly recurring revenue walking out the door. A team that identifies the primary driver of that churn and eliminates it is generating nearly €50,000 per month in protected revenue, covering most of its break-even cost from a single initiative. But that calculation requires knowing the churn rate, understanding its causes, and connecting those causes to the team’s work, and most teams are not operating with that level of financial clarity.
Activation is another lever that is frequently underestimated. If 10,000 users sign up each month but only 30% complete the activation steps that lead to long-term retention, there are 7,000 users each month who paid acquisition costs but never converted to retained revenue. Improving the activation rate by five percentage points, from 30% to 35%, converts an additional 500 users per month. At €50 average revenue per user, that is €25,000 in additional monthly recurring revenue, representing roughly 29% of the team’s break-even threshold from one metric moving in the right direction.
Sales conversion follows the same logic. If the product has a free-to-paid conversion funnel processing 20,000 trials per month at a 4% conversion rate, that produces 800 paying customers monthly. Moving conversion from 4% to 4.5% produces 900 customers, an additional 100 paying users, and €5,000 in additional monthly revenue. Small improvements across multiple levers compound quickly, but only if the team understands which levers connect to which financial outcomes and by how much.
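The three levers above, as arithmetic (all figures taken from the examples in the text):

```python
ARPU = 50                      # EUR per user per month
TEAM_COST = 87_000             # EUR per month, team of eight

# Break-even expressed in users' worth of value
break_even_users = TEAM_COST / ARPU                    # 1,740 users

# Churn lever: 50,000 users losing 2% per month
churn_revenue_at_risk = 50_000 * 0.02 * ARPU           # ~50,000 EUR/month

# Activation lever: 10,000 signups, activation up from 30% to 35%
extra_activated = 10_000 * (0.35 - 0.30)               # ~500 users
activation_gain = extra_activated * ARPU               # ~25,000 EUR/month

# Conversion lever: 20,000 trials, conversion up from 4% to 4.5%
extra_paying = 20_000 * (0.045 - 0.04)                 # ~100 users
conversion_gain = extra_paying * ARPU                  # ~5,000 EUR/month
```

A fixed churn elimination plus the two funnel improvements together clear €80,000 of the €87,000 break-even bar, which is the compounding effect the text describes.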
Given that software teams are expensive and that their value is, at least in principle, calculable, it is worth examining why most teams do not measure anything financially meaningful. Some measure activity proxies such as velocity, tickets closed, or features shipped. Others measure sentiment proxies such as NPS, CSAT, or engagement scores. These are not degraded versions of financial measurement. They are a different category entirely, one that was designed around the goal of understanding user behavior and team throughput rather than around the goal of understanding economic return.
The problem is that activity and sentiment metrics can trend upward while financial performance deteriorates. A team can ship more features while building the wrong things. Engagement scores can rise while churn accelerates among the users who actually generate revenue. Velocity can increase while the work being completed has no measurable connection to business outcomes. These metrics feel meaningful because they correlate with outcomes in many circumstances, but correlation is not a reliable guide to prioritization when the underlying financial logic is never examined.
This is a structural condition rather than a failure of individual judgment. Organizations chose these metrics because they are easier to instrument, easier to communicate, and easier to look good on than financial metrics. A team that measures its success by features shipped will always have something to show. A team that measures its success by return generated will sometimes have to report that it does not know, or that the return was disappointing, and that kind of transparency requires an organizational culture that most companies have not deliberately built.
The matrix above is drawn from a product management training program I run called Booster, where product leaders map their actual metrics against their investment thesis to surface gaps. The exercise is uncomfortable precisely because most leaders discover mid-mapping that their team’s daily measurements have no direct connection to the financial objective they were given.
Understanding why this condition exists requires looking at roughly two decades of macroeconomic context, because the financial dysfunction in modern software organizations did not emerge from bad intentions or intellectual failure. It emerged from a specific environment that made financial discipline in product teams economically unnecessary.
The picture is not a single clean era but two distinct phases. From roughly 2002 through 2011, capital was periodically cheap but conditions were mixed. Rates fell sharply after the dot-com crash and again after the global financial crisis, but in both cases risk appetite was suppressed. The money was technically inexpensive but investors were cautious, multiples were reasonable, and the growth-at-all-costs logic had not yet taken hold. Product organizations during this period still operated with some residual financial discipline inherited from the dot-com reckoning.
From approximately 2011 through 2022, something different happened. Zero-rate policy became fully normalized, risk appetite recovered and then overcorrected, and the SaaS mental model crystallized into a broadly shared investment thesis. All three conditions arrived simultaneously, and the result was about eleven years during which software companies could grow headcount aggressively, miss on the majority of their roadmap, and still look healthy on paper. Revenue growth forgave an enormous range of prioritization mistakes, and the cost of building the wrong thing was largely invisible.
Eleven years is not a long time, but it is long enough to form the professional instincts of an entire generation of product and engineering leaders. The frameworks they learned, the metrics they adopted, the planning rituals they practice, and the definitions of success they internalized were all formed during a window that was unusually short and unusually distorted. There is no cohort of senior product leaders who developed their judgment in conditions where their teams were expected to demonstrate financial return, because those conditions did not exist during the years when that cohort was learning the craft.
When capital became expensive again in 2022, the behavior did not automatically adjust, because the behavior was never connected to the financial logic in the first place.
There is a deeper consequence of this twenty-year period that is now becoming painfully visible, and it concerns how the industry has thought about large engineering organizations and codebases.
The conventional understanding is that a codebase representing years of engineering investment is a valuable asset. It encodes business logic, captures accumulated decisions, and represents the technical foundation on which future products are built. A large engineering organization is similarly understood as a source of capability, with more engineers meaning more capacity to build, maintain, and improve that foundation.
While some have argued that large codebases should actually be considered a liability, the industry as a whole has mostly ignored that argument. But this understanding is now being examined more closely. A large codebase also carries maintenance costs that grow over time as the system becomes more complex, more interconnected, and more difficult to change safely. Every engineer added to maintain it increases coordination costs, introduces new dependencies, and adds to the organizational weight that slows decision-making. The asset and the liability exist simultaneously, and for most of the past twenty years, the financial environment masked the liability side of that equation.
The arrival of large language models has made the liability visible in a way that is difficult to ignore. Recently, Nathan Cavaglione, a developer, built a functional replica of approximately 95% of Slack’s core product in fourteen days using LLM agents. Slack was built by thousands of engineers over the course of more than a decade, at a cost that represents billions of dollars in cumulative engineering investment. Nathan started without any of that accumulated complexity, without the organizational weight, without the legacy architectural decisions, and without the coordination costs, and arrived at a comparable product in a period that would not constitute a single sprint in most enterprise engineering organizations.
Day 14: A functional replica of Slack’s core product, built by Nathan Cavaglione using LLM agents.
This does not mean that Slack’s engineering investment was wasted, because Slack also built enterprise sales infrastructure, compliance capabilities, data security practices, and organizational resilience that a fourteen-day prototype does not include. But it does mean that the assumption underlying large engineering organizations, which is that scale and accumulated complexity represent competitive moats, is no longer reliable in the way it once was. When the cost of building a functional approximation of a sophisticated software product can collapse to days of individual effort, the question of what a large engineering team justifies becomes both more urgent and more difficult to answer with the metrics most organizations currently track.
The obvious objection is that code produced at that speed becomes unmanageable, a liability in itself. That is a reasonable concern, but it largely applies when agents produce code that humans then maintain. Agentic platforms are being iterated upon quickly, and for established patterns and non-business-critical code, which is the majority of what most engineering organizations actually maintain, detailed human familiarity with the codebase matters less than it once did. A messy codebase is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today. The liability argument holds in a human-to-human or agent-to-human world. In an agent-to-agent world, it largely dissolves.
The competitive advantage available to organizations that take this seriously is not primarily technical. It is analytical. Companies that can clearly articulate what each of their teams costs, what value each team generates, and whether that value clears a financially viable threshold are in a structurally different position than companies that cannot. They can make build versus buy decisions based on actual economics rather than organizational preference. They can identify when a team is working on problems that cannot generate sufficient return at their cost level. They can sequence initiatives based on what value is being lost each day they are delayed, rather than on who argued most persuasively in the last planning meeting.
Most organizations cannot do this today. The measurement infrastructure does not exist, the financial data does not flow to the people making prioritization decisions, and the habit of asking these questions has not been built. Building it is uncomfortable, because the answers are sometimes unflattering. A team that examines its work through this lens will sometimes discover that it has spent a quarter on things that do not connect to financial outcomes in any meaningful way, and that is a difficult finding to sit with.
But the alternative is continuing to run an organization where teams with million-euro annual budgets make daily investment decisions without the financial context to know whether those decisions are generating return. That condition was sustainable when capital was cheap and growth forgave everything. It is increasingly difficult to sustain in an environment where boards expect financial returns, where the cost of building software is collapsing due to AI, and where the question of what a team justifies can no longer be deferred indefinitely.
The organizations that develop the habit of asking these questions clearly, regularly, and without flinching will accumulate an advantage that compounds over time. The question is simply whether they will start asking before or after the pressure forces them to.
...
Read the original on www.viktorcessan.com »
Focused async Python bot for Polymarket that buys No on standalone non-sports yes/no markets.
FOR ENTERTAINMENT ONLY. PROVIDED AS IS, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK. THE AUTHORS ARE NOT LIABLE FOR ANY CLAIMS, LOSSES, OR DAMAGES.
The bot scans standalone markets, looks for NO entries below a configured price cap, tracks open positions, exposes a dashboard, and persists live recovery state when order transmission is enabled.
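The core entry filter described above is easy to picture. Here is a hypothetical sketch of the "buy NO below a configured price cap" scan; the `Market` shape and field names are invented for illustration and do not match the repo's actual types:

```python
# Hypothetical sketch of the entry scan. Field names are invented
# for illustration; the real market/client types live in this repo.
from dataclasses import dataclass

@dataclass
class Market:
    question: str
    is_sports: bool
    is_standalone: bool   # not part of a grouped/multi-outcome event
    no_price: float       # current ask for the NO outcome, in 0..1

def scan_for_entries(markets: list[Market], price_cap: float) -> list[Market]:
    """Return standalone, non-sports yes/no markets whose NO side
    trades below the configured price cap."""
    return [
        m for m in markets
        if m.is_standalone
        and not m.is_sports
        and m.no_price < price_cap
    ]

markets = [
    Market("Will X happen by June?", False, True, 0.12),
    Market("Will team Y win tonight?", True, True, 0.08),   # sports: skipped
    Market("Grouped outcome market", False, False, 0.05),   # not standalone
]
candidates = scan_for_entries(markets, price_cap=0.15)  # only the first market
```

In the real bot the cap comes from the strategies.nothing_happens section of config.json rather than a literal.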
If any of those are missing, the bot uses PaperExchangeClient.
pip install -r requirements.txt
cp config.example.json config.json
cp .env.example .env
config.json is intentionally local and ignored by git.
The runtime config lives under strategies.nothing_happens. See config.example.json and .env.example.
You can point the runtime at a different config file with CONFIG_PATH=/path/to/config.json.
python -m bot.main
The dashboard binds $PORT or DASHBOARD_PORT when one is set.
The shell helpers use either an explicit app name argument or HEROKU_APP_NAME.
export HEROKU_APP_NAME=
heroku config:set BOT_MODE=live DRY_RUN=false LIVE_TRADING_ENABLED=true -a "$HEROKU_APP_NAME"
heroku config:set PRIVATE_KEY=
Only run the web dyno. The worker entry exists only to fail fast if it is started accidentally.
python -m pytest -q
Local config, ledgers, exports, reports, and deployment artifacts are ignored by default.
...
Read the original on github.com »
In my previous blog post I gave a quick and easy introduction to tmux and explained how to use tmux with a basic configuration.
If you’ve followed that guide you might have had a feeling that many people have when working with tmux for the first time: “These key combinations are really awkward!”. Rest assured, you’re not alone. Judging from the copious blog posts and dotfiles repos on GitHub, there are many people out there who feel the urge to make tmux behave a little differently; to make it more comfortable to use.
And actually it’s quite easy to customize the look and feel of tmux. Let me tell you something about the basics of customizing tmux and share some of the configurations I find most useful.
Customizing tmux is as easy as editing a text file. Tmux uses a file called tmux.conf to store its configuration. If you store that file as ~/.tmux.conf (note: there’s a period as the first character of the file name; it’s a hidden file), tmux will pick up this configuration file for your current user. If you want to share a configuration across multiple users, you can also put your tmux.conf into a system-wide directory. The location of this directory differs across operating systems. The man page (man tmux) will tell you the exact location; just have a look at the documentation for the -f parameter.
Probably the most common change among tmux users is to change the prefix from the rather awkward C-b to something that’s a little more accessible. Personally I’m using C-a instead, but note that this might interfere with bash’s “go to beginning of line” command. On top of the C-a binding I’ve also remapped my Caps Lock key to act as Ctrl since I’m not using Caps Lock anyways. This allows me to nicely trigger my prefix key combo.
To change your prefix from C-b to C-a, simply add following lines to your tmux.conf:
# remap prefix from 'C-b' to 'C-a'
unbind C-b
set-option -g prefix C-a
bind-key C-a send-prefix
Another thing I personally find quite difficult to remember is the pane splitting commands. Using " to split vertically and % to split horizontally just doesn’t work for my brain. I find it helpful to use characters that resemble a visual representation of the split, so I chose | and - for splitting panes horizontally and vertically:
# split panes using | and -
bind | split-window -h
bind - split-window -v
unbind '"'
unbind %
Since I’m experimenting quite often with my tmux.conf I want to reload the config easily. This is why I have a command to reload my config on r:
# reload config file (change the file location to the tmux.conf you want to use)
bind r source-file ~/.tmux.conf
Switching between panes is one of the most frequent tasks when using tmux. Therefore it should be as easy as possible. I’m not quite fond of triggering the prefix key all the time. I want to be able to simply press M-<arrow> to go where I want to go (remember: M is for Meta, which is usually your Alt key). With this modification I can simply press Alt-Left to go to the left pane (and the other directions respectively):
# switch panes using Alt-arrow without prefix
bind -n M-Left select-pane -L
bind -n M-Right select-pane -R
bind -n M-Up select-pane -U
bind -n M-Down select-pane -D
Although tmux clearly focuses on keyboard-only usage (and this is certainly the most efficient way of interacting with your terminal) it can be helpful to enable mouse interaction with tmux. This is especially helpful if you find yourself in a situation where others have to work with your tmux config and naturally don’t have a clue about your key bindings or tmux in general. Pair Programming might be one of those occasions where this happens quite frequently.
Enabling mouse mode allows you to select windows and different panes by simply clicking and to resize panes by dragging their borders around. I find it pretty convenient and it doesn’t get in my way often, so I usually enable it:
# Enable mouse control (clickable windows, panes, resizable panes)
set -g mouse on
I like to give my tmux windows custom names using the , key. This helps me name my windows according to the context they’re focusing on. By default tmux will update the window title automatically depending on the last executed command within that window. In order to prevent tmux from overriding my wisely chosen window names, I want to suppress this behavior:
# don’t rename windows automatically
set-option -g allow-rename off
Changing the colors and design of tmux is a little more complex than what I’ve presented so far. As tmux allows you to tweak the appearance of a lot of elements (e.g. the borders of panes, your statusbar and individual elements of it, messages), you’ll need to add a few options to get a consistent look and feel. You can make this as simple or as elaborate as you like. Tmux’s man page (specifically the STYLES section) contains more information about what you can tweak and how you can tweak it.
Depending on your color scheme your resulting tmux will look something like this:
# DESIGN TWEAKS
# don't do anything when a 'bell' rings
set -g visual-activity off
set -g visual-bell off
set -g visual-silence off
setw -g monitor-activity off
set -g bell-action none
# clock mode
setw -g clock-mode-colour yellow
# copy mode
setw -g mode-style 'fg=black bg=red bold'
# panes
set -g pane-border-style 'fg=red'
set -g pane-active-border-style 'fg=yellow'
# statusbar
set -g status-position bottom
set -g status-justify left
set -g status-style 'fg=red'
set -g status-left ''
set -g status-left-length 10
set -g status-right-style 'fg=black bg=yellow'
set -g status-right '%Y-%m-%d %H:%M '
set -g status-right-length 50
setw -g window-status-current-style 'fg=black bg=red'
setw -g window-status-current-format ' #I #W #F '
setw -g window-status-style 'fg=red bg=black'
setw -g window-status-format ' #I #[fg=white]#W #[fg=yellow]#F '
setw -g window-status-bell-style 'fg=yellow bg=red bold'
# messages
set -g message-style 'fg=yellow bg=red bold'
In the snippet above, I’m using your terminal’s default colors (by using the named colors, like red, yellow or black). This allows tmux to play nicely with whatever color theme you have set for your terminal. Some prefer to use a broader range of colors for their terminals and tmux color schemes. If you don’t want to use your terminal’s default colors but instead want to define colors from a 256-color range, you can use colour0 to colour255 instead of red, cyan, and so on when defining your colors in your tmux.conf.
Looking for a nice color scheme for your terminal?
If you’re looking for a nice color scheme for your terminal, I recommend checking out my very own Root Loops. With Root Loops you can easily design a personal, awesome-looking terminal color scheme and stand out from all the folks using the same boring-ass color schemes everyone else is using.
There are plenty of resources out there where you can find people presenting their tmux configurations. GitHub and other code hosting services tend to be a great source. Simply search for “tmux.conf” or repos called “dotfiles” to find a vast amount of configurations that are out there. Some people share their configuration on their blog. Reddit might have a few subreddits that could have useful inspiration, too (there’s /r/dotfiles and /r/unixporn, for example).
You can find my complete tmux.conf (along with other configuration files I’m using on my systems) on my personal dotfiles repo on GitHub.
If you want to dive deeper into how you can customize tmux, the canonical source of truth is tmux’s man page (simply type man tmux to get there). You should also take a look at the elaborate tmux wiki and see their Configuring tmux section if this blog post was too shallow for your needs. Both will contain up-to-date information about each and every tiny thing you can tweak to make your tmux experience truly yours. Have fun!
...
Read the original on hamvocke.com »
A U. S. appeals court on Friday declared unconstitutional a nearly 158-year-old federal ban on home distilling, calling it an unnecessary and improper means for Congress to exercise its power to tax.
The 5th U. S. Circuit Court of Appeals in New Orleans ruled in favor of the nonprofit Hobby Distillers Association and four of its 1,300 members.
They argued that people should be free to distill spirits at home, whether as a hobby or for personal consumption, including, in one instance, to create an apple-pie-vodka recipe.
The ban was part of a law passed during Reconstruction in July 1868, in part to thwart liquor tax evasion, and subjected violators to up to five years in prison and a $10,000 fine.
Writing for a three-judge panel, Circuit Judge Edith Hollan Jones said the ban actually reduced tax revenue by preventing distilling in the first place, unlike laws that regulated the manufacture and labeling of distilled spirits on which the government could collect taxes.
She also said that under the government’s logic, Congress could criminalize virtually any in-home activity that might escape notice from tax collectors, including remote work and home-based businesses.
“Without any limiting principle, the government’s theory would violate this court’s obligation to read the Constitution carefully to avoid creating a general federal authority akin to the police power,” Jones wrote.
The U. S. Department of Justice had no immediate comment.
Another defendant, the Treasury Department’s Alcohol and Tobacco Tax and Trade Bureau, did not immediately respond to a request for comment.
Devin Watkins, a lawyer representing the Hobby Distillers Association, in an interview called the ruling an important decision about the limits of federal power.
Andrew Grossman, who argued the nonprofit’s appeal, called the decision “an important victory for individual liberty” that lets the plaintiffs “pursue their passion to distill fine beverages in their homes.”
“I look forward to sampling their output,” he said.
The decision upheld a July 2024 ruling by U. S. District Judge Mark Pittman in Fort Worth, Texas. He put his ruling on hold so the government could appeal.
...
Read the original on nypost.com »
The first four months of 2026 have produced a sequence of cyber incidents that, if any one of them had landed in 2014 or 2017, would have dominated a news cycle for a week.
A Chinese state supercomputer reportedly bled ten petabytes. Stryker was wiped across 79 countries. Lockheed Martin was hit for a reported 375 terabytes. The FBI Director’s personal inbox was dumped on the open web. The FBI’s wiretap management network was breached in a separate “major incident.” Rockstar Games was breached through a SaaS analytics vendor most people have never heard of. Cisco’s private GitHub was cloned. Oracle’s legacy cloud cracked open. The Axios npm package, downloaded a hundred million times a week, was hijacked by North Korea. Mercor, the $10 billion AI training-data vendor that sits inside the data pipelines of OpenAI, Anthropic, and Meta simultaneously, was breached through the LiteLLM open source library and had 4 terabytes extracted by Lapsus$. Honda was hit twice. The new ShinyHunters/Scattered Spider/LAPSUS$ alliance breached approximately 400 organizations and exfiltrated roughly 1.5 billion Salesforce records.
Stacked on top of each other across roughly a hundred days, these events are something a historian of computing security writing in 2050 will probably file as a turning point, regardless of what else happens between now and then.
And yet, the public conversation around them has been quiet to the point of being strange. This is a curious observation more than a complaint. And the goal of what follows is to gather the events into one place, cite the publications that reported each one, and then ask, gently, why the period feels so undocumented in real time.
Every named incident below is followed by inline parenthetical citations to the publications that broke or covered it, in the same way an academic paper would.
I am not arguing that the cybersecurity community is failing. I am noting that something unusual is happening.
Strip out the noise and the 2026 wave so far breaks cleanly into four separate campaigns running in parallel against U. S. and Western targets. This convergence is the part nobody is naming out loud.
Cluster 1: Iran / Handala / Void Manticore (destructive state operations). Operating under the Handala Hack Team persona, attributed by Palo Alto Networks Unit 42 to Void Manticore, an actor linked to Iran’s Ministry of Intelligence and Security. Handala is claiming attacks against U. S. industrial, defense, and government targets and explicitly framing them as retaliation for a February 28 missile strike on a school in Minab, southern Iran, that killed at least 175 people, most of them children. Confirmed and claimed Q1 2026 victims: Stryker (200,000 devices wiped), Lockheed Martin (375 TB claim, 28 engineer doxxing), FBI Director Kash Patel (personal email dump).
Cluster 2: Scattered LAPSUS$ Hunters / SLH — the apex-predator merger (financially-motivated SaaS theft and extortion at industrial scale). This is the single largest and least-discussed organizational development in the criminal cyber landscape since the Conti collapse. In August 2025, three of the most notorious financially-motivated crews on the planet, ShinyHunters, Scattered Spider, and LAPSUS$, formally combined into a coordinated alliance widely tracked as Scattered LAPSUS$ Hunters (SLH), sometimes called “the Trinity of Chaos” (Resecurity; Cyberbit; Infosecurity Magazine; The Hacker News; Computer Weekly; ReliaQuest). Scattered Spider provides initial access through highly-effective social engineering and vishing. ShinyHunters handles exfiltration, leak-site management, and extortion. LAPSUS$ contributes its own brand of identity-system compromise. The result is an end-to-end criminal pipeline operating against the SaaS layer of the global enterprise.
The numbers from this cluster’s 2025-2026 Salesforce campaign alone are difficult to absorb. ShinyHunters has publicly claimed compromise of approximately 300 to 400 organizations, with around 100 described as high-profile, and approximately 1.5 billion Salesforce records stolen in aggregate (BankInfoSecurity, “ShinyHunters Counts 1.5 Billion Stolen Salesforce Records”; The Register; State of Surveillance, “400 Companies Breached”; Salesforce Ben). Salesforce released a security advisory on March 7, 2026 confirming that a “known threat group” was exploiting misconfigurations in its Experience Cloud product, and ShinyHunters claimed responsibility on its data leak site two days later. The named victim list reads like a roll call of global brand recognition: Google (corporate Salesforce instance, ~2.55M records of small and medium business contact data), Cisco, Adidas, Qantas (5.7M customers), Allianz Life, Farmers Insurance Group, Workday, Pandora, Chanel, TransUnion, the entire LVMH family including Louis Vuitton, Dior, Tiffany & Co., and Cartier, Air France-KLM, LastPass, Okta, AMD, Snowflake itself, Match Group (Hinge, Bumble, OkCupid), SoundCloud (29.8M users), Panera Bread (5.1M accounts), Betterment (1.4M), Harvard, the University of Pennsylvania, Crunchbase, Canada Goose, and the December 2025 Pornhub breach via the Mixpanel campaign that exposed roughly 200 million user records and 94 GB of historical analytics data (BleepingComputer on Qantas, Allianz Life, LVMH; Cybersecurity News on Google, Adidas, Louis Vuitton; Malwarebytes; Google Cloud Threat Intel; Wikipedia ShinyHunters). 
Q1 2026 alone added Rockstar Games (via Anodot → Snowflake), the Cisco Trivy / Salesforce double hit, and the single most consequential AI-industry-specific incident of the quarter: the Lapsus$-claimed breach of Mercor, the $10B AI recruiting and training-data vendor that sits inside the data pipelines of OpenAI, Anthropic, and Meta simultaneously, after a LiteLLM open source supply chain compromise by the TeamPCP cluster. All catalogued in dedicated sections later in this article.
Tradecraft note: this cluster is no longer just compromising SaaS integrators to lift their OAuth tokens, although that remains part of the playbook (the 2025 Salesloft Drift / UNC6395 incident that compromised over 700 Salesforce environments including Cloudflare, Google, PagerDuty, Palo Alto Networks, Proofpoint, Tanium, Zscaler, and CyberArk is the precedent that proved the OAuth model works at scale, Unit 42 threat brief; UpGuard; Cloudflare response). The 2026 evolution is more direct: SLH operators now call employees on the phone, pretend to be IT support, walk them through “updating MFA settings” or “linking the Salesforce Data Loader app,” and harvest credentials, MFA codes, and OAuth grants in real time (Google Cloud Blog, “The Cost of a Call”; Varonis; The Hacker News). In parallel, ShinyHunters has weaponized Mandiant’s own AuraInspector audit tool to scan and exploit misconfigured Salesforce Experience Cloud guest user permissions across the customer base (Allure Security). Voice phishing has produced more 2026 enterprise breaches than any single technical vulnerability.
Cluster 3: North Korea / UNC1069 (open source supply chain compromise). Google Threat Intelligence Group attributed the March 31, 2026 hijack of the Axios npm package to UNC1069, a North Korea-nexus financially-motivated actor. They did not exploit a vulnerability. They built an entire fake company, a branded fake Slack workspace, a fake Microsoft Teams meeting with fake teammates, social-engineered the lead Axios maintainer into trusting the fake organization, and used that trust to seize his npm account. Then they shipped a cross-platform RAT to a JavaScript library with roughly 100 million weekly downloads. Cisco’s separate March 2026 breach via the Trivy supply chain attack, in which over 300 internal GitHub repositories were cloned, fits the same general pattern of upstream developer-trust compromise.
Cluster 4: Russia / APT28 (zero-day exploitation against Ukraine and EU). Russia-backed APT28 began exploiting a freshly disclosed Microsoft Office vulnerability, CVE-2026-21509, within days of its January patch release. Targets included Ukrainian government bodies and over 60 European email addresses, with malicious Office documents disguised as correspondence from Ukraine’s hydrometeorological center. This is the only cluster of the four that is not primarily aimed at the United States, but it shares the architecture: speed of weaponization measured in days, exploitation of trust relationships, and minimal Western public response.
All four clusters are exploiting the same structural weakness: the modern Western enterprise no longer has a defensible perimeter, only a long chain of vendor and developer trust relationships, any of which can be turned against the host. Iran is using that chain to break things. ShinyHunters is using it to extort money. North Korea is using it to seed implants into the world’s developer machines. Russia is using it to read European inboxes. The chain is the same. Only the payloads differ.
Setting aside any argument about cause and effect, there is a parallel set of numbers from the AI side of the industry over the same period that is worth putting on the table. They may or may not explain the wave above. They are at minimum strange enough to be worth noting alongside it, and the public obscurity around them is itself part of the observation.
In late 2025, Anthropic published a report titled “Disrupting the first reported AI-orchestrated cyber espionage campaign.” In it, the company disclosed that a Chinese state-aligned actor had used Claude to automate a spying operation against approximately 30 organizations, with AI handling an estimated 80 to 90 percent of the campaign workload and human operators intervening only sporadically (Anthropic full report PDF; Anthropic news release). That disclosure came from the model vendor itself, not a third-party threat intel report, which is unusual on its own. What is more unusual is how little subsequent discussion it generated outside specialist circles.
Around it sits a stack of measurement data from Hoxhunt, ZeroThreat, StrongestLayer, Bright Defense, and StationX that points in the same direction across 2025 and into 2026. None of these numbers, on their own, prove a causal link to any specific incident in this article. Taken together they describe a sharp shift in the ambient threat environment that has gone largely unremarked upon in mainstream coverage:
On the threat-intel side, Microsoft’s tracking now formally describes two North Korean threat actor clusters, Jasper Sleet and Coral Sleet, as using AI across the attack lifecycle from reconnaissance through impersonation through post-compromise (Dark Reading). Genians and The Record have separately documented Kimsuky, the long-running North Korean APT, using ChatGPT to forge convincing South Korean military and government identification documents for phishing lures (Genians; The Record; eSecurityPlanet). In March 2026 the U.S. Treasury’s OFAC sanctioned six individuals and two entities involved in the broader DPRK IT worker fraud scheme, in which large language models are used to generate fake personas, resumes, and even interview answers to land remote engineering jobs at Fortune 500 companies (The Hacker News; TechRadar on OpenAI bans). Whether you read that as a trend or a coincidence, it is on the public record.
There is also the widely reported multi-person Microsoft Teams call in which a financial department employee was manipulated by an AI-generated deepfake of their own CFO, alongside other AI-generated “colleagues,” into wiring more than 25 million U.S. dollars to Hong Kong bank accounts (Microtime). Whatever else that incident tells us, it confirms that the infrastructure to fake a convincing multi-person video call in real time exists and has been used.
From the defender side, Anthropic’s internal red-team evaluation of its withheld Mythos model found that the model could complete a simulated network intrusion in 6.2 hours versus 10.4 hours for GPT-4o, and could identify exploitable flaws in 73 percent of the applications it scanned (NPR; Axios; CNN Business; Fortune). Anthropic has declined to release Mythos publicly, restricting access to approximately 40 technology companies including Microsoft and Google. OpenAI is finalizing a comparable model that will ship only to a small vetted customer set through a “Trusted Access for Cyber” program (Axios). Two leading frontier labs simultaneously holding back cyber-capable models on safety grounds is, again, not necessarily evidence of anything causal. It is, again, worth noting.
And then, on April 7, 2026, the part of this story that should anchor every other paragraph in this section finally happened, in private, at the highest possible level of the United States government, and almost nobody outside the financial press picked it up.
On that date, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent, in-person meeting in Washington with the chief executives of Goldman Sachs, Citigroup, Morgan Stanley, Bank of America, and Wells Fargo, to brief them directly on the cyber risks posed by Mythos (Bloomberg, “Bessent, Powell Summon Bank CEOs to Urgent Meeting Over Anthropic’s New AI Model”; Bloomberg, “Mythos: Why Anthropic’s New AI Has Officials Worried”; Fortune; CNBC; CNBC / Reuters; Fox News; Yahoo Finance; TechXplore). The meeting was triggered by Anthropic’s disclosure that Mythos had identified thousands of previously unknown zero-day vulnerabilities in every major operating system and every major web browser, along with a range of other critical software. Anthropic said the vulnerability-discovery capability was sufficiently dangerous that the model could only be released to a tightly controlled handful of trusted parties. Bessent and Powell, having absorbed that disclosure, decided that the heads of the largest systemically important U.S. banks needed to be told in person.
Pause and read that paragraph one more time. The Treasury Secretary and the Federal Reserve Chair do not convene the CEOs of the largest U.S. banks about a single software vendor’s product. They convene them about financial stability events. The fact that this meeting happened, on this subject, at this level, is the single most authoritative signal in this entire notebook that something has shifted in the cyber threat landscape at a magnitude the federal government considers comparable in importance to a financial stability concern. Treasury and the Fed are not in the habit of sounding alarms about technology vendor product releases. They sounded one about this.
This meeting also reframes the silence the article keeps returning to. The silence has not broken in mainstream public discourse. It has clearly broken in private, at the top of the U.S. government, in classified briefings and emergency convenings the public is mostly not seeing. The historian’s question is no longer whether the cyber community is being quiet. The historian’s question is why the public conversation is so thoroughly out of sync with what is clearly being discussed behind closed doors at the level of the Treasury and the Federal Reserve. The gap between those two layers of conversation is, in the long view, the most interesting thing in this entire chronicle.
None of the above proves that any specific incident in this article was AI-driven. The Stryker wipe was executed through Microsoft Intune, not a chatbot. The Patel email leak was a personal Gmail compromise. The Lockheed claims remain unverified. What the AI numbers do establish, with a fair amount of confidence, is that the ambient cost of running a convincing offensive operation has shifted dramatically over the same window in which the wave above unfolded. Two things changed at once. A reasonable observer can decide for themselves whether they are connected, but a reasonable observer should at least know both lists exist, side by side, and that almost nobody in mainstream coverage has put them on the same page.
That obscurity is the part of this section that matters most. It is not the AI numbers. It is the silence around them.
On March 11, 2026, Stryker Corporation, one of the largest medical device companies on the planet, watched its global operations collapse inside an afternoon (Krebs on Security; Cybersecurity Dive; HIPAA Journal; Stryker official statement). Attackers compromised a Windows domain administrator account, used it to provision a new Global Administrator inside the company’s Microsoft Entra and Intune environment, and then issued mass remote-wipe and factory-reset commands across the device fleet (Lumos; The Register; Coalition). More than 200,000 systems, servers, laptops, and mobile devices were wiped within minutes. Offices in 79 countries went dark. Order processing, manufacturing, and shipping all stopped.
The Handala Hack Team claimed responsibility, claimed exfiltration of roughly 50 terabytes of data prior to the wipe, and began publishing it from infrastructure that the FBI later seized. Stryker has since recovered and reported full operational restoration, and employee lawsuits have already been filed. The downstream effect on patients is the part that has not been adequately reported: hospitals that relied on Stryker surgical hardware and the company’s order and support systems had to postpone procedures while the fleet rebuilt, which means the wiper translated directly into cancelled surgeries across multiple countries in the hours and days after the event.
Who Handala actually is matters for reading this incident correctly. “Handala Hack Team” is not an independent crew. The U.S. Department of Justice formally classifies it as “a fictitious identity used by MOIS to hide its role in influence operations and psychological scaremongering campaigns.” The underlying operator is assessed as Void Manticore, also tracked as Banished Kitten, Red Sandstorm, and Storm-842, an offensive unit sitting inside the Iranian Ministry of Intelligence and Security. The persona first surfaced in December 2023, immediately after October 7, and inherited the operational lineage of two earlier MOIS fronts: Homeland Justice, which ran the 2022 to 2023 Albania operations, and Karma, which Handala formally replaced. The unit it belongs to was, until early 2026, headed by Seyed Yahya Hosseini Panjaki, sanctioned by the U.S. Treasury in September 2024, then by the EU and UK, specifically for overseeing Iranian dissident assassination operations, and placed on the FBI terrorism watch list. Panjaki was killed in the opening phase of U.S. and Israeli strikes on Iranian intelligence infrastructure in early March 2026. The Stryker attack landed after his death, under the same persona. The organizational resilience is itself part of the story.
The stated motive published by Handala is retaliation for the February 28, 2026 strike on a school in Minab, southern Iran, with a claimed casualty count of more than 170 children. That framing is the group’s own, and it should be read as psychological operation as much as attribution, but it is the reason the operator put on the record. On March 19, 2026, the FBI seized four Handala domains (including handala-hack.tw, hosted on a Taiwan top-level domain specifically to avoid Western takedown jurisdiction) and the State Department announced a $10 million bounty through its Rewards for Justice program. A replacement site was standing within hours. Handala publicly answered the bounty with a $50 million “counter-bounty” threat framed at Trump and Netanyahu. The infrastructure traces to Cloudzy (PONYNET), a bulletproof hosting provider that Halcyon has assessed with high confidence is a front for abrNOC, an Iranian hosting company founded in the same year by the same individual, with post-seizure failover routed through Russian DDoS provider DDOS-Guard.
Read all of that in one breath. A MOIS-operated persona whose unit head was killed three weeks earlier walked into one of the largest medical device manufacturers in the world, exfiltrated 50 TB, then pushed a destructive button that bricked 200,000 endpoints across 79 countries in minutes, postponed surgeries, stated a retaliation motive, absorbed a $10 million FBI bounty, had four of its domains seized, and was operating a replacement site the same day. The recovery worked, which is a credit to Stryker’s incident response team, but the fact that the recovery worked does not erase what happened, and what happened is the most consequential wartime cyber attack on U.S. soil in the public record. Coverage outside specialist outlets was minimal.
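For defenders, the chain Stryker absorbed has one loud observable: a brand-new Global Administrator appearing in the Entra audit trail shortly before mass device actions. A minimal sketch of that check, with the caveat that the event records and field names below are simplified stand-ins for the real Entra audit-log schema, not the actual API:

```python
# Simplified stand-ins for Entra "Add member to role" audit events.
# Real audit-log entries have a different, richer schema.
AUDIT_EVENTS = [
    {"time": "2026-03-11T13:02:00Z", "action": "Add member to role",
     "role": "Global Administrator", "target": "svc-deploy@corp.example"},
    {"time": "2026-03-10T09:15:00Z", "action": "Add member to role",
     "role": "Helpdesk Administrator", "target": "jane@corp.example"},
]

# Accounts pre-approved to hold Global Administrator (hypothetical allowlist).
APPROVED_GLOBAL_ADMINS = {"it-breakglass@corp.example"}

def suspicious_admin_grants(events, approved):
    """Flag Global Administrator grants to accounts outside the approved set."""
    return [
        e for e in events
        if e["action"] == "Add member to role"
        and e["role"] == "Global Administrator"
        and e["target"] not in approved
    ]

flagged = suspicious_admin_grants(AUDIT_EVENTS, APPROVED_GLOBAL_ADMINS)
for e in flagged:
    print(f"ALERT: {e['target']} granted Global Administrator at {e['time']}")
```

The point of the sketch is the shape of the control, not the code: a small, static allowlist of accounts permitted to hold the tenant-destroying role, and an alert on any grant outside it, fires minutes before a wipe command can be staged.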
This is actually two distinct incidents aimed at Lockheed Martin inside a few days of each other, and conflating them has caused most of the coverage confusion.
Incident one, the 375 TB claim. An entity self-identifying as APT Iran, which sits inside the broader Handala ecosystem but publishes under its own banner, claimed in or around March 2026 to have exfiltrated 375 terabytes of data from Lockheed Martin and listed the cache for sale on dark web infrastructure (Cybersecurity Dive; UpGuard; Cybersecurity Insiders; Hackread). The initial price was reported at roughly $400 million and was later raised toward $598 million. The group claims the trove includes corporate documents and technical blueprints related to the F-35 Joint Strike Fighter program. Lockheed Martin has not publicly confirmed any breach. Trusted security researchers have not verified the sample data. Iranian intelligence-linked actors are independently documented to exaggerate and to fold prior unrelated breaches and open-source material into current claims to amplify psychological reach. The 375 TB F-35 claim, to be direct about it, is widely assessed as overstated. Treat it as a claim, not a confirmed event.
Incident two, March 26, 2026, the 28 engineers. This one is the part that should be getting more attention. The Handala persona itself (distinct from the APT Iran data-sale listing) published the names, photographs, employer details, and location information of 28 senior American engineers identified as working in Israel on defense programs that specifically included the F-35, the F-22, and the THAAD missile defense system (NetCrook). The publication was accompanied by threatening phone calls to the engineers themselves and by language stating that Handala’s “friends in the United States” would pay visits to their families. A 48-hour ultimatum was attached. This is doxxing as threat-to-life, executed by a MOIS-operated persona, against named Americans working on three specific weapons programs, and it landed the same week the group was absorbing FBI domain seizures and a $10 million bounty.
Whether or not the 375 TB claim is real, the doxxing of 28 named American defense personnel by an actor with confirmed state ties is not a hypothetical. This is where the silence becomes hard to explain. A MOIS front is publishing kill lists of U.S. defense engineers, tying them to F-35, F-22, and THAAD by name, and the U.S. cybersecurity ecosystem is treating it as a Tuesday.
On March 27, 2026, the same Handala Hack Team published a tranche of material stolen from the personal Gmail account of FBI Director Kash Patel: more than 300 emails, photographs, and a copy of his resume (CNN; CBS News; NBC News; Axios; PBS NewsHour; Al Jazeera; CNBC). A U.S. official familiar with the matter confirmed the authenticity of at least some of the published images. The FBI subsequently acknowledged the breach, and the State Department reissued the $10 million Rewards for Justice offer that had been announced eight days earlier against Handala.
The federal government’s framing was careful and accurate as far as it goes: the compromised material is historical, dates from roughly 2011 to 2022, came from a personal Gmail account rather than any FBI system, and contains no current operational information. Patel’s official inbox was not breached. The initial access vector is the part that should embarrass the discourse. Handala did not burn a zero-day. They did not spear-phish a cabinet-level official. They used credential stuffing against credentials harvested from older public breach databases, the same technique a teenager with a laptop uses to break into gaming accounts. The sitting Director of the Federal Bureau of Investigation had reused a password that appeared in a pre-government breach corpus, and a hostile state ran the same credential-stuffing workflow that every fraud team in the world tracks hourly, and it worked.
The framing is also a deflection. The point of the operation is not to extract operational secrets. The point is to demonstrate that an Iranian intelligence-linked group can read the personal correspondence of the sitting Director of the Federal Bureau of Investigation and publish it on the open web with attribution and without consequence. This is an explicit retaliation event. It landed eight days after the FBI seized four Handala domains, in the same month the State Department put a $10 million bounty on the group, and three weeks after the U.S. and Israel killed the unit’s leadership in the Minab window. Handala, for its part, answered the $10 million FBI bounty with a public $50 million counter-bounty aimed at Trump and Netanyahu. The March 25 dump of 14 GB from former Mossad Director Tamir Pardo’s personal Gmail, claiming to expose assassination project details and Stuxnet oversight, was published by the same persona two days before the Patel release as a “proof of concept.” The sequencing is the message, and it has been received internally even if it has not been said publicly.
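The credential-stuffing vector behind the Patel compromise has a well-understood defensive mirror: checking whether a password already appears in a public breach corpus. Have I Been Pwned’s Pwned Passwords API does this with k-anonymity: the client sends only the first five characters of the password’s uppercase SHA-1 hex digest and matches the full suffix locally, so the password itself never leaves the machine. A sketch of the client-side half (the network response is replaced with a canned body, so nothing below actually calls the API):

```python
import hashlib

def sha1_prefix_suffix(password: str):
    """Split the uppercase SHA-1 hex digest into the 5-char prefix sent
    to the API and the 35-char suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, range_response: str) -> bool:
    """range_response stands in for the API body for one prefix: lines of
    'SUFFIX:COUNT'. The comparison happens entirely on the client."""
    _, suffix = sha1_prefix_suffix(password)
    for line in range_response.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate == suffix:
            return True
    return False

# Canned response standing in for the real API call for this prefix.
_, suffix = sha1_prefix_suffix("password123")
fake_body = f"{suffix}:12345\n" + "F" * 35 + ":1"

print(is_breached("password123", fake_body))
```

A fraud team runs exactly this lookup against every login attempt; the only novelty in the Patel case is who the victim was.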
The Patel personal Gmail story consumed almost all of the public oxygen in March, but it was not the most consequential FBI compromise of the quarter. That distinction belongs to a separate incident that received a fraction of the coverage and arguably represents a bigger problem.
The Federal Bureau of Investigation detected abnormal activity on an internal network on February 17, 2026, opened an inquiry, and on March 23, 2026 the Department of Justice formally classified the intrusion as a “major incident” under the 2014 federal law that requires escalated reporting and remediation (Bloomberg; Insurance Journal; GovInfoSecurity). The affected system is described in the public reporting as the network the Bureau uses to manage wiretaps and other surveillance operations, and it contains sensitive law enforcement data including electronic surveillance content and personally identifying information about subjects of FBI investigations (Bloomberg, “FBI Breach Exposes Secret Investigative Records to Intruders”).
The agencies’ notification to lawmakers described the threat actor’s tradecraft as “sophisticated,” and noted in particular that the attacker leveraged a commercial Internet Service Provider vendor’s infrastructure to bypass the FBI’s network security controls. That detail is the part the Bureau will have stayed up nights about. It means the access path was not a phishing email or a stolen laptop. It was an upstream telecommunications vendor whose infrastructure trust relationship with the FBI was successfully turned. That is the same architectural pattern as the SaaS supply chain pivots described elsewhere in this article, scaled up to the level of nation-state intelligence operations against a federal law enforcement system.
A historian’s question worth pausing on: which of the two FBI incidents in this quarter is the one a careful person would actually want to know more about? The Patel personal Gmail leak, with its photographs from 2011 and personal correspondence from before he held office? Or the breach of the system the Bureau uses to manage federal wiretaps and which holds PII on the subjects of active FBI investigations? The answer is obvious. The relative coverage of the two stories is also obvious, and the gap between those two facts is one of the cleanest examples in this entire notebook of the silence the article keeps returning to.
On March 31, 2026, an attacker hijacked the npm account of the lead maintainer of the Axios JavaScript HTTP client library, one of the most-downloaded packages in the entire JavaScript ecosystem at roughly 100 million weekly downloads, and published two malicious versions: 1.14.1 and 0.30.4 (Huntress; The Hacker News; Bloomberg; TechCrunch; Sophos; Microsoft Security). The malicious versions sat live on the npm registry for about two to three hours before being pulled. Inside that window, every CI pipeline, every developer workstation, and every cloud build that pulled the latest minor or patch range silently installed a hidden dependency that fetched and executed a cross-platform Remote Access Trojan.
On April 1, 2026, the Google Threat Intelligence Group publicly attributed the operation to UNC1069, a North Korea-nexus financially-motivated cluster (Google Cloud Blog; Axios). On April 2, the Axios lead maintainer Jason Saayman published a post-mortem describing what actually happened, and the tradecraft is the part that should be making everyone in the open source ecosystem rethink how trust works on a personal level.
The attacker did not exploit a CVE. The attacker built an organization. They impersonated the founder of a real company using a cloned identity and plausible outreach. They invited the maintainer into a real Slack workspace that had been carefully branded to look legitimate, with channel activity, linked social content, and what appeared to be team profiles and other open source maintainers as fake members. They moved the conversation to a Microsoft Teams meeting populated with what looked like multiple participants. By the time the attacker requested any action that touched the maintainer’s npm account, the social proof was overwhelming.
This is the highest-effort open source supply chain operation publicly disclosed since the 2024 XZ Utils backdoor, and it is qualitatively different. XZ was patient identity laundering across years. The Axios attack was patient identity laundering across weeks, with a fake Slack workspace and a fake Teams meeting standing in for years of GitHub commits. The bar to compromise a heavily used open source maintainer just dropped from “infiltrate the project for two years” to “build a convincing Slack and host one Teams call.”
If you ran npm install against axios 1.14.1 or 0.30.4 in any environment, rotate every secret in that environment now and downgrade to 1.14.0 or 0.30.3. Microsoft Security, Sophos, Huntress, and Malwarebytes have all published detection guidance.
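For teams that would rather check programmatically than by eye, a small sketch that scans an npm v2/v3-format package-lock.json for the two malicious versions. The lockfile layout assumed here is the standard "packages" map; the helper itself is illustrative, not an official detection tool:

```python
import json

# The two malicious axios versions named in the advisory coverage.
MALICIOUS_AXIOS_VERSIONS = {"1.14.1", "0.30.4"}

def affected_axios_entries(lockfile_text: str):
    """Scan a package-lock.json 'packages' map (lockfile v2/v3) for axios
    entries, top-level or nested, pinned to a known-malicious version."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        if path.endswith("node_modules/axios"):
            if meta.get("version") in MALICIOUS_AXIOS_VERSIONS:
                hits.append((path, meta["version"]))
    return hits

# Minimal example lockfile for demonstration.
sample_lock = json.dumps({
    "packages": {
        "": {"name": "example-app"},
        "node_modules/axios": {"version": "1.14.1"},
        "node_modules/left-pad": {"version": "1.3.0"},
    }
})
print(affected_axios_entries(sample_lock))
```

A hit in any lockfile means the environment that installed from it should be treated as compromised: rotate its secrets first, then repin.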
Cisco took two distinct hits in roughly the same window. The first was a supply chain compromise: in March 2026, attackers used credentials stolen via the Trivy supply chain attack to breach Cisco’s internal development environment and clone more than 300 GitHub repositories (BleepingComputer; SocRadar; TechCrunch). The stolen code reportedly included source for Cisco’s AI-powered products and customer code belonging to banks, business process outsourcing firms, and U.S. government agencies.
The second was financial. On March 31, 2026, the same day the Axios story broke, ShinyHunters published an extortion post claiming theft of over 3 million Salesforce records from Cisco containing personal data, alongside GitHub repository contents, AWS bucket data, and other internal corporate assets (Hackread; Cybernews; SC Media). The deadline for payment was set for April 3, 2026. Cisco has not publicly confirmed the ShinyHunters claim. The Trivy-linked source code theft is on firmer reporting ground.
Two breaches in two weeks, one supply chain and one SaaS, both targeting one of the most security-mature vendors in the industry. If Cisco can be hit twice in a month through these vectors, the question of whether your own organization is hit through them is mostly a question of whether anyone is bothering to look.
Of all the incidents in this chronicle, Mercor is the one most people outside the AI industry have never heard of, and it is also the one most likely to turn out to matter most. Mercor is a two-year-old AI recruiting and training-data startup valued at approximately $10 billion following a $350 million Series C round led by Felicis Ventures in October 2025. Its customers include OpenAI, Anthropic, and Meta (Fortune; TechCrunch; Cybernews). Which means one 2026 supply chain compromise against Mercor touches the training data pipelines of three of the largest frontier AI labs in the world simultaneously.
The attack chain is the cleanest example in this entire article of how upstream open source trust gets turned into downstream enterprise extortion. A threat actor tracked as TeamPCP compromised LiteLLM, a widely-used open source library that developers use to plug their applications into AI services and which is downloaded millions of times per day, and planted credential-harvesting malware inside it (The Register; SecurityWeek; BankInfoSecurity). The malicious code was live for hours before being identified and removed. Mercor has said it was “one of thousands of companies affected” by the LiteLLM compromise. What makes Mercor the headline victim is not that it was uniquely vulnerable. It is that the harvested credentials led into an environment holding the AI industry’s single most sensitive shared asset: training data, labeling protocols, and data selection criteria that the three largest frontier labs have each spent years and billions of dollars developing.
Lapsus$ subsequently claimed responsibility for the downstream Mercor breach on its leak site and published samples (TechCrunch; PureWL; Cybernews). The claimed haul is approximately 4 terabytes of data, broken down as roughly 211 GB of database records, 939 GB of source code, and 3 TB of storage including candidate profiles, personally identifiable information, employer data, API keys, internal Slack dumps, ticketing system exports, and, most disturbingly, videos purportedly showing conversations between Mercor’s AI systems and the contractors those systems were training. That last category is the part that should be getting more attention. It is not just data about training data. It is footage of the training process itself.
Note the cluster convergence here. Lapsus$ is one of the three legs of the Scattered LAPSUS$ Hunters (SLH) alliance described earlier in this article. The Mercor breach, the Rockstar Games breach via Anodot, the Cisco Salesforce extortion on March 31, and the broader ~400-organization Salesforce mega-campaign are all, in varying combinations, operations by the same new apex-predator criminal alliance. The pattern is no longer that a handful of unrelated groups happened to have a big quarter. It is that one newly-merged criminal collective is running an industrial-scale SaaS-and-supply-chain extortion campaign across every sector of the global enterprise, and Mercor is the AI-industry-specific node of that campaign.
Business impact to date. Meta has paused its contracts with Mercor indefinitely. Five Mercor contractors have filed lawsuits alleging personal data exposure. Other large customers are reportedly reassessing the relationship (TechCrunch; Strike Graph). For a two-year-old company at a $10 billion valuation whose entire business model is being the trusted data middleware between contractors and frontier AI labs, losing the trust of one of those labs is an existential event. Losing the trust of all three would be the end.
The structural observation worth pausing on. The global frontier AI industry, in 2026, is effectively running on a shared data pipeline provided by a small number of vendors most of the public has never heard of. Mercor is one of those vendors. Its compromise demonstrates that the AI labs are not in fact the perimeter that matters. The perimeter that matters is the identity and integrity of every upstream dependency in the data pipeline, and most of those dependencies are either two-year-old startups or open source libraries maintained by a handful of developers. This is the same structural problem the rest of this article keeps circling: the modern enterprise no longer has a defensible boundary, only a chain of trust relationships, any of which can be turned. The AI industry inherited that same architecture and is learning the same lesson in real time.
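One of the few levers against that architecture is artifact hash pinning: recording the digest of the exact dependency build you vetted and rejecting anything else, which is the behavior pip enforces in its --require-hashes mode. A poisoned re-release of a library like LiteLLM ships different bytes, so it fails the check even when the version number looks unchanged. A toy illustration of the underlying comparison (the artifact contents and digest below are invented for the example):

```python
import hashlib

def verify_artifact(artifact_bytes: bytes, pinned_sha256: str) -> bool:
    """Accept the artifact only if its SHA-256 matches the pinned digest."""
    return hashlib.sha256(artifact_bytes).hexdigest() == pinned_sha256

# Digest recorded when the dependency release was originally vetted.
good_artifact = b"original vetted release contents"
pinned = hashlib.sha256(good_artifact).hexdigest()

print(verify_artifact(good_artifact, pinned))               # matches the pin
print(verify_artifact(b"tampered release contents", pinned))  # fails the pin
```

Hash pinning does not stop a maintainer-account takeover, but it converts a silent poisoned install into a loud build failure, which is the difference between hours of exposure and none.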
On March 21, 2026, a threat actor using the handle “rose87168” began offering for sale 6 million records extracted from Oracle Cloud, with claims that more than 140,000 Oracle Cloud tenants were potentially affected (eSecurityPlanet; Cybersecurity Dive; FINRA). The breach was tied to CVE-2021-35587, a vulnerability in Oracle Access Manager and the OpenSSO Agent component of Oracle Fusion Middleware, used to compromise Oracle’s Single Sign-On and LDAP systems.
Oracle’s initial public response denied that its main cloud platform had been breached. Oracle subsequently acknowledged that an unauthorized party had accessed its legacy cloud environment, characterizing the affected systems as “obsolete servers” (CSO Online). The HIPAA Journal later reported that up to 80 hospitals were potentially affected by data exposure tied to the same incident, and Parexel International confirmed that a security flaw in Oracle’s cloud infrastructure had affected its Oracle OCI E-Business Suite environment (HIPAA Journal).
The pattern here is the one we used to call “shadow legacy” and have apparently stopped warning about. Hyperscale cloud providers carry quiet inheritances of older platforms that customers were moved off of years ago, but whose operational shells were never actually decommissioned, and the line between “main cloud” and “legacy cloud” is meaningful in marketing copy and meaningless to an attacker who finds a working credential.
On the financially-motivated side of the wave, ShinyHunters claimed in early April 2026 to have breached Rockstar Games. Rockstar publicly confirmed a “third-party data breach” and characterized the accessed information as “limited” and “non-material” (Engadget; Tom’s Hardware). ShinyHunters tells a different story.
ShinyHunters did not directly compromise Rockstar’s internal infrastructure. They compromised Anodot, a third-party SaaS platform Rockstar uses for cloud cost monitoring, lifted authentication tokens from inside Anodot’s environment, and used those tokens to authenticate into Rockstar’s Snowflake instance (Hackread; BleepingComputer; TechRadar; Kotaku; PC Gamer). The ransom note posted to ShinyHunters’ leak channel reads in part: “Rockstar Games, your Snowflake instances were compromised thanks to Anodot.com. Pay or leak. Final warning to reach out by 14 Apr 2026.”
The deadline is two days from the date of this article. Whatever ends up published, the structure of the attack is the part worth remembering: a small SaaS analytics vendor most people have never heard of became the access path into one of the most valuable creative IP environments on the planet, weeks before the most anticipated game launch in industry history.
This is the same playbook that produced the 2024 Snowflake wave. It is not new. It has just been refined and aimed at higher-value targets, and it is going to keep working until the SaaS-to-data-warehouse trust chain gets re-architected end to end.
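That re-architecture can start with something mundane: stolen integration tokens keep working because nobody audits how old they are. A minimal sketch of that audit, where the token inventory format and the 30-day rotation policy are both invented for the example:

```python
from datetime import datetime, timedelta

MAX_TOKEN_AGE = timedelta(days=30)  # illustrative rotation policy

def stale_tokens(inventory, now):
    """Flag third-party integration tokens issued longer ago than the
    rotation window; these are the ones a stolen copy would still open."""
    return [t["name"] for t in inventory if now - t["issued"] > MAX_TOKEN_AGE]

now = datetime(2026, 4, 12)
inventory = [
    {"name": "anodot-snowflake-reader", "issued": datetime(2025, 11, 2)},
    {"name": "ci-deploy", "issued": datetime(2026, 4, 1)},
]
print(stale_tokens(inventory, now))
```

Short-lived, regularly rotated tokens would not have prevented the Anodot compromise itself, but they would have shrunk the window in which lifted tokens could authenticate into the downstream Snowflake instance.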
On April 6, 2026, a cyberattack on aviation IT systems used by major European hubs took down check-in, baggage handling, and boarding at Heathrow, Charles de Gaulle, Frankfurt, and Copenhagen simultaneously (The Traveler; National Today; VisaHQ; BlackFog on Collins Aerospace). More than 1,600 flights across the continent were cancelled or delayed on April 6 alone, with at least 13 cancellations at Heathrow by the afternoon of April 4 as the disruption ramped up. Staff reverted to manual check-in and paper boarding passes, a procedure most ground crews under thirty have never been trained on.
The vector reportedly traces back to Collins Aerospace’s MUSE (Multi User System Environment) platform, the shared check-in and boarding software used across many European airline operations, which had already absorbed a separate ransomware attack in September 2025 that knocked Collins offline and produced widespread Air France and KLM customer data exposure. The 2026 incident is, in effect, the second time in roughly six months that the same upstream aviation software supplier has produced a continent-scale outage.
The wider context, from the European Union Aviation Safety Agency: EASA documented a 600 percent spike in aviation cyberattacks between 2024 and 2025, with airports worldwide absorbing roughly 1,000 cyberattacks per month by the end of that period. The April 2026 event is not an outlier in that environment. It is the largest visible surface of a much wider, much quieter trend in which the entire global aviation IT stack is starting to look load-bearing on a small number of single-vendor dependencies that no individual airport security team can defend in isolation.
I am putting this section in the middle of the article rather than at the top because the sourcing is uneven and the verification status is unsettled, but in the long view this is the incident a historian will most likely circle and underline.
Around early February 2026, a hacker operating under the alias FlamingChina began posting samples on Telegram of what they claimed was a multi-petabyte data set exfiltrated from the National Supercomputing Center (NSCC) in Tianjin, one of the central Chinese state computing facilities, which provides infrastructure services to more than 6,000 clients including advanced science institutions and Chinese defense agencies. By April 8, 2026, mainstream Western press had picked up the story and reported the same basic claim from the actor: approximately 10 petabytes of sensitive data, equivalent to roughly 10,240 terabytes, extracted over a six-month period through a compromised VPN domain into NSCC’s environment, with a botnet quietly siphoning data out without detection (CNN; Tom’s Hardware; TechRadar; SC Media; Security Magazine; BGR; Tech Startups; Vision Times; Computing.co.uk; Security Affairs).
The samples that have been circulated reportedly include documents marked “secret” in Chinese, along with technical files, animated simulations, and renderings of defense equipment including bombs and missiles. The named provenance of some of the material includes the Aviation Industry Corporation of China and the National University of Defense Technology. CNN spoke with multiple cybersecurity experts who reviewed the samples and assessed them as appearing genuine, while noting that independent verification of the full dataset has not been possible. Some researchers have suggested the 10 petabyte figure may be inflated for commercial leverage on BreachForums, and the actor is reportedly offering a limited preview for thousands of dollars and full access for hundreds of thousands, payable in cryptocurrency.
Take a moment with the scale. Ten petabytes is, in rough terms, equivalent to two billion photographs, or the entire textual content of the public web several times over. It is the contents of roughly ten thousand decent laptops. Even if the actual exfiltration turns out to be one tenth of the claim, the resulting one petabyte event is still in a category by itself, larger than essentially any single named breach in the public history of cybersecurity. And the target is not a marketing database. It is a state supercomputing facility hosting work for Chinese defense academia and aviation industry programs.
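The back-of-envelope comparisons above are easy to sanity-check. A minimal sketch of the arithmetic, where the ~5 MB per photo, 1 TB per laptop drive, and 182-day exfiltration window are my assumptions rather than figures from the reporting:

```python
# Rough sanity check on the claimed 10 PB exfiltration.
# Per-photo size, laptop capacity, and window length are assumptions.

PB = 10
bytes_total = PB * 1024**5            # binary petabytes, in bytes
tb_total = bytes_total / 1024**4      # terabytes (matches the reported 10,240)

photos = bytes_total / (5 * 10**6)    # assuming ~5 MB per photo
laptops = tb_total / 1                # assuming a 1 TB laptop drive

seconds = 182 * 24 * 3600             # ~six-month exfiltration window
gbit_per_s = bytes_total * 8 / seconds / 1e9  # required sustained egress

print(f"{tb_total:,.0f} TB, ~{photos/1e9:.1f}B photos, "
      f"~{laptops:,.0f} laptops, ~{gbit_per_s:.1f} Gbit/s sustained")
```

The last figure is the interesting one: moving the full claimed volume in six months would require a sustained egress on the order of 5 to 6 Gbit/s going unnoticed the entire time, which is one concrete reason some researchers suspect the 10 petabyte number is inflated.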
A few things make this incident historically interesting beyond the size. First, it is one of the very rare cases of a major Chinese state computing facility being publicly breached and looted from outside. The historical asymmetry of major reported breaches has run heavily in the other direction, with Chinese state actors as the named operators against Western targets. If the FlamingChina claims hold up even partially, the symmetry has shifted. Second, the access vector reported, a compromised VPN domain followed by a long-dwell botnet quietly exfiltrating over six months without detection, is the same pattern Western incident response teams describe in their worst nation-state engagements. The defenders’ problems and the attackers’ problems are starting to look like the same problem. Third, and most quietly, the U.S. mainstream press picked the story up for a single news cycle in early April and then mostly let it go. A potential record-breaking exfiltration event from a Chinese state supercomputer is the sort of thing that, in any prior decade, would have produced sustained reporting for weeks. In 2026 it produced a few articles, a flurry of trade press coverage, and then quiet.
The Chinese government has not publicly acknowledged the incident. The samples remain in circulation. The claim remains unverified at full scope. The historical importance remains, in the meantime, suspended in exactly the kind of partial information state where most genuinely unprecedented events live for a while before they finally get named.
It would be a mistake to read the 2026 wave as a bolt from the blue. It is more useful to read it as the visible surface of a longer pre-positioning campaign that has been quietly running underneath the public-facing incidents for years. Two Chinese state-aligned actor clusters, Volt Typhoon and Salt Typhoon, are the relevant background.
Volt Typhoon, attributed to the People’s Republic of China and active since at least mid-2021, has been documented inside U.S. critical infrastructure across communications, manufacturing, utility, transportation, construction, maritime, government, IT, and education sectors (CISA AA24-038A; Microsoft Security). The U.S. Intelligence Community has assessed publicly that Volt Typhoon’s targeting carries limited espionage value and is instead consistent with prepositioning to disrupt U.S. infrastructure in the event of a future crisis, particularly in Guam and near U.S. military bases in the Pacific. The International Institute for Strategic Studies published “Volt Typhoon’s long shadow” in January 2026, noting that researchers warn the group remains embedded in U.S. utilities and that some compromises may never be fully discovered (IISS; The Record).
Salt Typhoon is the parallel telecom-focused cluster, attributed to China’s Ministry of State Security, responsible for the high-profile compromises of multiple major U.S. telecommunications carriers that surfaced in late 2024 and continued through 2025, including reported access to lawful intercept systems used by U.S. law enforcement (Congress.gov; Wikipedia overview). Both groups are still active in 2026.
The reason these two names belong in this article, even though their public disclosures predate the 2026 incidents catalogued above, is that they describe the baseline inside which the 2026 wave is happening. The named 2026 incidents are not the entire picture. They are the visible surface. Underneath them, in U.S. utilities and telecommunications infrastructure, there are pre-positioned implants that the relevant federal agencies have publicly stated may never be fully evicted. The historian sitting in 2050 reading this period is going to want to know that the surface events of the first hundred days of 2026 occurred against a background in which the deeper infrastructure had already been quietly compromised for years. That is the kind of context that gets lost when each incident is reported as if it were the first.
Honda’s 2026 has been a slow drip rather than a single event. The reporting describes a sequence of distinct incidents: API flaws in Honda’s e-commerce platform that exposed customer data, dealer panel data, and internal documents (BleepingComputer; SecurityWeek); a password reset flow exploit that exposed additional data (Cybersecurity Tribe); and a Clawson Honda dealership data breach claimed by the PLAY ransomware group that exposed names, Social Security numbers, addresses, driver’s license data, and dates of birth, with notification letters going out as recently as April (Claim Depot).
None of these are individually catastrophic. Stacked together they tell a familiar story about a manufacturing giant whose attack surface has outgrown its security maturity, and they belong in the wave count.
The named incidents above are just the ones that broke through. The full first-quarter 2026 list is much longer. Brief Defense, PKWARE, Cybersecurity News, ACI Learning, and CSIS are all maintaining 2026 incident timelines, and the pattern is consistent.
* January 2026: Illinois and Minnesota state systems exposed personal data on nearly one million people; the Match family of dating apps was breached by ShinyHunters; Eurail confirmed unauthorized access; researchers found a 149-million-record database publicly exposed via cloud misconfiguration; Microsoft January Patch Tuesday shipped 115 fixes including the Office bug APT28 began exploiting within days; Nike investigated a possible cyber attack after WorldLeaks claimed 1.4 TB of internal company data on January 24; Red Hat suffered a private GitHub and GitLab compromise by the Crimson Collective, with roughly 570 GB exfiltrated from over 28,000 internal repositories including approximately 800 Customer Engagement Reports containing infrastructure details and credentials for large enterprise clients; Pickett USA breach exposed sensitive engineering data linked to U.S. utilities; ShinyHunters / SLH vishing campaigns targeting enterprise SSO environments including Okta surged in early-to-mid January.
* February 2026: BridgePay, a payments platform serving city governments, was hit by ransomware; Odido disclosed unauthorized access affecting up to 6.2 million customers; Change Healthcare, a UnitedHealth subsidiary, was hit again, this time by AlphV/BlackCat; Cisco disclosed that a critical Catalyst SD-WAN vulnerability (CVE-2026-20127, CVSS 10.0) had been actively exploited since 2023; APT28 was observed weaponizing CVE-2026-21509 against Ukrainian and EU government targets; the FBI detected abnormal activity (Feb 17) on the internal network it uses to manage wiretaps and surveillance, eventually classified as a “major incident” on March 23; the 2026 Winter Olympics opened in Milan and Cortina d’Ampezzo and pro-Russian DDoS group NoName057(16) began hitting Italian Olympic infrastructure, several national Olympic committees (Lithuania, Poland, Spain), the Cortina d’Ampezzo tourism site, and Milan Malpensa Airport; University of Mississippi Medical Center closed clinics following a ransomware attack and reverted to manual patient care; France’s National Bank Account Registry (FICOBA) was hit through credential weakness exploitation; Iron Mountain, Panera Bread, SmarterTools, Step Finance, and Advantest Corporation all absorbed publicly disclosed incidents.
* March 2026: Stryker wiper event (March 11); Microsoft published the “Help on the line” report on a Teams-vishing initial access pattern (March 16); Oracle Cloud “rose87168” listing (March 21); Lockheed Martin / 28-engineer doxxing claims (March 23); European Commission Europa cloud platform breached (March 24); Kash Patel personal email dump (March 27); Cisco Trivy supply chain breach surfaces; TeamPCP compromises the LiteLLM open source library in a supply chain attack that propagates to “thousands of companies” including Mercor, the $10B AI training-data vendor whose customers include OpenAI, Anthropic, and Meta; Axios npm hijack (March 31); ShinyHunters Cisco Salesforce extortion post (March 31); Mercor discloses the security incident publicly on March 31 / April 1; FlamingChina samples from the Tianjin NSCC breach circulate widely on Telegram and BreachForums following early-February initial postings.
* April 2026 so far: Google Threat Intelligence Group attributes Axios npm compromise to UNC1069 / North Korea (April 1); Axios maintainer post-mortem published (April 2); Fortune confirms the Mercor breach publicly (April 2), with Lapsus$ claiming 4 TB of exfiltrated data including ~211 GB database records, ~939 GB source code, and ~3 TB storage covering candidate PII, employer data, API keys, Slack dumps, and videos of Mercor AI systems talking to contractors; Meta pauses all AI data training contracts with Mercor indefinitely; five Mercor contractors file lawsuits over personal data exposure; DOJ confirms the FBI internal “major incident” classification publicly (early April); a continent-wide aviation IT attack on April 6 cripples check-in, baggage, and boarding at Heathrow, Charles-de-Gaulle, Frankfurt, and Copenhagen, cancelling or delaying more than 1,600 flights in a single day, traced to the Collins Aerospace MUSE platform that had already been hit in September 2025; on April 7, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convene an emergency in-person meeting in Washington with the CEOs of Goldman Sachs, Citigroup, Morgan Stanley, Bank of America, and Wells Fargo to brief them on the cyber risks posed by Anthropic’s Mythos AI model, which has identified thousands of previously unknown zero-day vulnerabilities in every major operating system and web browser; Rockstar Games confirms third-party breach via Anodot/Snowflake (early April); Snowflake itself confirms “unusual activity” affecting more than a dozen customer accounts linked to the Anodot integration; ShinyHunters sets April 14 ransom deadline and tells reporters they had Anodot access “for some time” and had also tried (and failed) to breach Salesforce directly; FlamingChina supercomputer claims surface in mainstream Western press (CNN, Tom’s Hardware, TechRadar, BGR, around April 8-10); National Public Data successor breach surfaces with roughly 2.9 billion records of personal information sold for 3.5 million dollars on the dark web; Yale New Haven Health System discloses breach affecting 5.5 million patients; HIPAA Journal links up to 80 hospitals to the Oracle Cloud incident; March 2026 ransomware activity totals 672 incidents in a single month, with Qilin, Akira, and DragonForce alone accounting for roughly 40 percent.
By the time you finish counting the named incidents above, you are well past forty, and the count does not include the 300 to 400 organizations swept up in the SLH Salesforce mega-campaign or the roughly 1.5 billion Salesforce records estimated stolen across that single operation. Nor does it include the 672 ransomware incidents that ransomware tracking firms recorded in March 2026 alone, with Qilin, Akira, and DragonForce accounting for about 40 percent of that single-month total. Nor does it include the dozens of smaller school district, municipal, and healthcare ransomware events that have become so routine they no longer make national news. The 2025 baseline already showed publicly disclosed ransomware attacks rising 49% year-over-year to 1,174 incidents, with healthcare absorbing 22% of the total. The 2026 first-quarter pace, on the trajectory above, is comfortably on track to make 2025 look quiet.
This is the part of the notebook where the historian’s voice has to be honest about what it does not fully understand. The events above are real. The volume is real. The pattern is real. The relative quietness around them in mainstream Western public discourse is also real, and it is genuinely puzzling rather than obviously sinister. Several plausible explanations sit on the table at once, and the honest move is to lay them out without insisting on any of them.
One possibility is that attribution to a state actor has become professionally expensive. Calling a Handala wiper event an Iranian intelligence-linked destructive operation against a U.S. medical device company, or calling the FlamingChina supercomputer leak what it might be, takes on political weight that practitioners and vendors increasingly prefer to avoid. Analysis gets softened to “threat actor” or “sophisticated adversary,” and the geopolitical reading gets edited out without anyone deciding to edit it out. That softening is not a conspiracy. It is the cumulative effect of many small commercial choices that each individually seem reasonable.
A second possibility is that the SaaS supply chain story is uncomfortable for the security industry to dwell on, because the industry sells into it. Saying out loud that the modern enterprise no longer has a defensible perimeter, only a long chain of vendor trust relationships that can be turned at any link, is also saying that the security stack the industry shipped last quarter cannot stop the attacks the industry is supposed to be talking about this quarter. That is a hard public message to deliver from inside a vendor.
A third possibility is much simpler and possibly the most powerful. The news cycle has trained the public to bounce off cyber stories. The audience has already absorbed Equifax, OPM, Yahoo, SolarWinds, NPD, and Snowflake, and the marginal shock of “another one” has flattened. When the marginal shock is flat, even genuinely unprecedented events struggle to land. Practitioners know this, so they save their breath. The silence may be less an act of suppression than an act of fatigue.
A fourth possibility is the one this notebook keeps circling back to. The parallel acceleration on the AI side of the industry is awkward to discuss in the same paragraph as the offensive incidents, because every cybersecurity vendor is currently racing to ship “AI-powered” defense. It is commercially uncomfortable to put the two lists on the same page, even if no one in particular is forbidding it. The absence of that pairing is, at minimum, a strange thing to notice in the historical record.
A historian writing in 2050 about the first hundred days of 2026 will probably find all four of these explanations partially true and none of them fully sufficient. What that historian will almost certainly notice, more than any single explanation, is the gap itself, and more specifically the layered nature of the gap. The April 7 meeting between the Treasury Secretary, the Fed Chair, and the CEOs of the largest U.S. banks proves something crucial about that layering. The silence has not fully held at the highest levels of the U.S. government. Bessent and Powell are clearly not in the dark, and neither are the people they briefed. What has held is the silence in the public discourse, in the mainstream press, in the day-to-day conversations practitioners have with their boards and their customers. The information is moving in private. It is just barely moving in public. A period that, on the evidence, looks unprecedented in the history of computing security passed through real-time public discourse without producing the kind of sustained, coherent, named conversation the events seem to deserve, and yet behind closed doors at the highest levels of financial regulation, the conversation is clearly happening. That asymmetry is the most interesting object in this entire notebook.
If you work in this field and the last hundred days have felt strange to you, you are not imagining it. Something genuinely unusual is happening, and the unusualness of how quietly it is happening may, in the long view, be the most historically interesting layer of all. Naming the gap, even gently, is a small contribution to making sure the period eventually gets the documentation it deserves.
...
Read the original on substack.com »