10 interesting stories served every morning and every evening.
Last week, I wrote about catching a supply chain attack on a WordPress plugin called Widget Logic. A trusted name, acquired by a new owner, turned into something malicious. It happened again. This time at a much larger scale.
Ricky from Improve & Grow emailed us about an alert he saw in the WordPress dashboard for a client site. The notice was from the WordPress.org Plugins Team, warning that a plugin called Countdown Timer Ultimate contained code that could allow unauthorized third-party access.
I ran a full security audit on the site. The plugin itself had already been force-updated by WordPress.org to version 2.6.9.1, which was supposed to clean things up. But the damage was already done.
The plugin’s wpos-analytics module had phoned home to analytics.essentialplugin.com, downloaded a backdoor file called wp-comments-posts.php (designed to look like the core file wp-comments-post.php), and used it to inject a massive block of PHP into wp-config.php.
The injected code was sophisticated. It fetched spam links, redirects, and fake pages from a command-and-control server. It only showed the spam to Googlebot, making it invisible to site owners. And here is the wildest part. It resolved its C2 domain through an Ethereum smart contract, querying public blockchain RPC endpoints. Traditional domain takedowns would not work because the attacker could update the smart contract to point to a new domain at any time.
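The resolution step is easier to picture with a short sketch. This is not the malware's code; the contract address and function selector below are placeholders, and the point is only that an `eth_call` against any public RPC endpoint returns whatever the contract currently stores:

```python
# Illustrative sketch of C2 resolution via an Ethereum smart contract.
# Contract address and selector are placeholders, not the attacker's values.
import json
import urllib.request

def build_eth_call(contract: str, selector: str) -> dict:
    """Build a standard eth_call JSON-RPC payload."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [{"to": contract, "data": selector}, "latest"],
    }

def resolve_c2(rpc_url: str, contract: str, selector: str) -> str:
    # Whoever controls the contract controls this return value, so a
    # domain takedown does nothing: the attacker updates the contract.
    req = urllib.request.Request(
        rpc_url,
        json.dumps(build_eth_call(contract, selector)).encode(),
        {"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]
```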
CaptainCore keeps daily restic backups. I extracted wp-config.php from 8 different backup dates and compared file sizes. Binary search style.
The injection happened on April 6, 2026, between 04:22 and 11:06 UTC. A 6-hour 44-minute window.
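The bisection itself is simple. A hypothetical sketch of the idea (in practice `size_of` would shell out to something like `restic dump <snapshot> wp-config.php | wc -c`):

```python
def find_first_modified(snapshots, size_of, clean_size):
    """Given snapshot ids ordered oldest to newest, binary-search for the
    first snapshot whose wp-config.php size differs from the clean size."""
    lo, hi = 0, len(snapshots) - 1
    first_bad = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if size_of(snapshots[mid]) == clean_size:
            lo = mid + 1   # still clean; the infection is later
        else:
            first_bad = snapshots[mid]
            hi = mid - 1   # infected; look for an earlier infected snapshot
    return first_bad
```

With 8 backup dates this narrows the window in three checks instead of eight.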
I traced the plugin’s history through 939 quicksave snapshots. The plugin had been on the site since January 2019. The wpos-analytics module was always there, functioning as a legitimate analytics opt-in system for years.
Then came version 2.6.7, released August 8, 2025. The changelog said, “Check compatibility with WordPress version 6.8.2.” What it actually did was add 191 lines of code, including a PHP deserialization backdoor. The class-anylc-admin.php file grew from 473 to 664 lines.
The new code introduced three things:
A fetch_ver_info() method that calls file_get_contents() on the attacker’s server and passes the response to @unserialize()
A version_info_clean() method that executes @$clean($this->version_cache, $this->changelog) where all three values come from the unserialized remote data
That is a textbook arbitrary function call. The remote server controls the function name, the arguments, everything. It sat dormant for 8 months before being activated on April 5-6, 2026.
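The danger of the pattern is easiest to see in a minimal analogue. This is not the plugin's PHP, just a Python rendering of the same shape, where the callable name and both arguments all arrive in the deserialized remote payload (the PHP version is worse, since `@$clean(...)` can name any function, including `system`):

```python
import builtins

def version_info_clean(remote: dict):
    """Python analogue of the backdoor's @$clean($cache, $changelog) call.
    Everything here, the function name and both arguments, is attacker data."""
    clean = getattr(builtins, remote["clean"])  # remote server picks the callable
    return clean(remote["version_cache"], remote["changelog"])
```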
This is where it gets interesting. The original plugin was built by Minesh Shah, Anoop Ranawat, and Pratik Jain. An India-based team that operated under “WP Online Support” starting around 2015. They later rebranded to “Essential Plugin” and grew the portfolio to 30+ free plugins with premium versions.
By late 2024, revenue had declined 35-45%. Minesh listed the entire business on Flippa. A buyer identified only as “Kris,” with a background in SEO, crypto, and online gambling marketing, purchased everything for six figures. Flippa even published a case study about the sale in July 2025.
The buyer’s very first SVN commit was the backdoor.
On April 7, 2026, the WordPress.org Plugins Team permanently closed every plugin from the Essential Plugin author. At least 30 plugins, all on the same day. Here are the ones I confirmed:
* SlidersPack — All in One Image Sliders — sliderspack-all-in-one-image-sliders
All permanently closed. The author search on WordPress.org returns zero results. The analytics.essentialplugin.com endpoint now returns {"message":"closed"}.
In 2017, a buyer using the alias “Daley Tias” purchased the Display Widgets plugin (200,000 installs) for $15,000 and injected payday loan spam. That buyer went on to compromise at least 9 plugins the same way.
The Essential Plugin case is the same playbook at a larger scale. 30+ plugins. Hundreds of thousands of active installations. A legitimate 8-year-old business acquired through a public marketplace and weaponized within months.
WordPress.org’s forced update added return; statements to disable the phone-home functions. That is a band-aid. The wpos-analytics module is still there with all its code. I built patched versions with the entire backdoor module stripped out.
I scanned my entire fleet and found 12 of the 26 Essential Plugin plugins installed across 22 customer sites. I patched 10 of them (one had no backdoor module, one was a different “pro” fork by the original authors). Here are the patched versions, hosted permanently on B2:
# Countdown Timer Ultimate
wp plugin install https://plugins.captaincore.io/countdown-timer-ultimate-2.6.9.1-patched.zip --force
# Popup Anything on Click
wp plugin install https://plugins.captaincore.io/popup-anything-on-click-2.9.1.1-patched.zip --force
# WP Testimonial with Widget
wp plugin install https://plugins.captaincore.io/wp-testimonial-with-widget-3.5.1-patched.zip --force
# WP Team Showcase and Slider
wp plugin install https://plugins.captaincore.io/wp-team-showcase-and-slider-2.8.6.1-patched.zip --force
# WP FAQ (sp-faq)
wp plugin install https://plugins.captaincore.io/sp-faq-3.9.5.1-patched.zip --force
# Timeline and History Slider
wp plugin install https://plugins.captaincore.io/timeline-and-history-slider-2.4.5.1-patched.zip --force
# Album and Image Gallery plus Lightbox
wp plugin install https://plugins.captaincore.io/album-and-image-gallery-plus-lightbox-2.1.8.1-patched.zip --force
# SP News and Widget
wp plugin install https://plugins.captaincore.io/sp-news-and-widget-5.0.6-patched.zip --force
# WP Blog and Widgets
wp plugin install https://plugins.captaincore.io/wp-blog-and-widgets-2.6.6.1-patched.zip --force
# Featured Post Creative
wp plugin install https://plugins.captaincore.io/featured-post-creative-1.5.7-patched.zip --force
# Post Grid and Filter Ultimate
wp plugin install https://plugins.captaincore.io/post-grid-and-filter-ultimate-1.7.4-patched.zip --force
Each patched version removes the entire wpos-analytics directory, deletes the loader function from the main plugin file, and bumps the version to -patched. The plugin itself continues to work normally.
The process is straightforward with Claude Code. Point it at this article for context, tell it which plugin you need patched, and it can strip the wpos-analytics module the same way I did. The pattern is identical across all of the Essential Plugin plugins:
Delete the wpos-analytics/ directory from the plugin
Remove the loader function block in the main plugin PHP file (search for “Plugin Wpos Analytics Data Starts” or wpos_analytics_anl)
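The two steps can be sketched as a small shell function. The start marker comes from the plugins themselves; the end-marker name and file layout here are assumptions, so verify them against your copy before running anything:

```shell
# Strip the wpos-analytics backdoor module from a plugin directory.
# The "Data Ends" marker is an assumption; check your plugin's main file.
strip_wpos_analytics() {
  plugin_dir="$1"   # e.g. wp-content/plugins/countdown-timer-ultimate
  main_file="$2"    # the plugin's main PHP file
  rm -rf "$plugin_dir/wpos-analytics"
  # delete everything between the analytics start/end markers
  sed -i '/Plugin Wpos Analytics Data Starts/,/Plugin Wpos Analytics Data Ends/d' "$main_file"
  # confirm nothing references the module anymore
  if grep -rq "wpos_analytics_anl" "$plugin_dir"; then
    echo "WARNING: wpos-analytics references remain in $plugin_dir" >&2
  fi
}
```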
Two supply chain attacks in two weeks. Both followed the same pattern. Buy a trusted plugin with an established install base, inherit the WordPress.org commit access, and inject malicious code. The Flippa listing for Essential Plugin was public. The buyer’s background in SEO and gambling marketing was public. And yet the acquisition sailed through without any review from WordPress.org.
WordPress.org has no mechanism to flag or review plugin ownership transfers. There is no “change of control” notification to users. No additional code review triggered by a new committer. The Plugins Team responded quickly once the attack was discovered. But 8 months passed between the backdoor being planted and being caught.
If you manage WordPress sites, search your fleet for any of the 26 plugin slugs listed above. If you find one, patch it or remove it. And check wp-config.php.
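A starting point for that search, as a sketch: the slugs below are taken from the patched-zip names earlier in this post, so the list is partial and you should extend it with the remaining Essential Plugin slugs:

```shell
# Check one site root for known-affected Essential Plugin slugs.
# Slug list is partial (derived from the patched builds above); extend it.
check_site() {
  site_root="$1"
  for slug in countdown-timer-ultimate popup-anything-on-click \
      wp-testimonial-with-widget wp-team-showcase-and-slider sp-faq \
      timeline-and-history-slider album-and-image-gallery-plus-lightbox \
      sp-news-and-widget wp-blog-and-widgets featured-post-creative \
      post-grid-and-filter-ultimate; do
    if [ -d "$site_root/wp-content/plugins/$slug" ]; then
      echo "FOUND: $slug in $site_root"
    fi
  done
  # a sudden size jump in wp-config.php is a red flag for the injection
  wc -c "$site_root/wp-config.php" 2>/dev/null || true
}
```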
...
Read the original on anchor.host »
Large pull requests are hard to review, slow to merge, and prone to conflicts. Reviewers lose context, feedback quality drops, and the whole team slows down. Stacked PRs solve this by breaking big changes into a chain of small, focused pull requests that build on each other — each one independently reviewable.
A stack is a series of pull requests in the same repository where each PR targets the branch of the PR below it, forming an ordered chain that ultimately lands on your main branch.
GitHub understands stacks end-to-end: the pull request UI shows a stack map so reviewers can navigate between layers, branch protection rules are enforced against the final target branch (not just the direct base), and CI runs for every PR in the stack as if they were targeting the final branch.
The gh stack CLI handles the local workflow: creating branches, managing rebases, pushing to GitHub, and creating PRs with the correct base branches. On GitHub, the PR UI gives reviewers the context they need — a stack map for navigation, focused diffs for each layer, and proper rules enforcement.
When you’re ready to merge, you can merge all or a part of the stack. Each PR can be merged directly or through the merge queue. After a merge, the remaining PRs in the stack are automatically rebased so the lowest unmerged PR targets the base branch.
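Underneath, the branch topology of a stack is plain git. A minimal illustration with made-up branch names (the gh stack CLI automates exactly this bookkeeping, including the rebases after a merge):

```shell
# Three stacked PRs: each branch is based on the one below it.
git init -q stack-demo && cd stack-demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"
git branch -M main
git switch -q -c feature-db main          # PR 1: targets main
git commit -q --allow-empty -m "db layer"
git switch -q -c feature-api feature-db   # PR 2: targets feature-db
git commit -q --allow-empty -m "api layer"
git switch -q -c feature-ui feature-api   # PR 3: targets feature-api
git commit -q --allow-empty -m "ui layer"
git log --oneline feature-ui              # the whole stack, newest first
```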
Ready to dive in? Start with the Quick Start guide or read the full overview.
...
Read the original on github.github.com »
The Photo page brings Hollywood’s most advanced color tools to still photography for the first time! Whether you’re a professional colorist looking to apply your skills to fashion shoots and weddings, or a photographer who wants to work beyond the limits of traditional photo applications, the Photo page unlocks the tools you need. Start with familiar photo tools including white balance, exposure and primary color adjustments, then switch to the Color page for access to the full DaVinci color grading toolset trusted by Hollywood’s best colorists! You can use DaVinci’s AI toolset as well as Resolve FX and Fusion FX. GPU acceleration lets you export faster than ever before!
For photographers, the Photo page offers a familiar set of tools alongside DaVinci’s powerful color grading capabilities. It includes native RAW support for Canon, Fujifilm, Nikon, Sony and even iPhone ProRAW. All image processing takes place at source resolution up to 32K, or over 400 megapixels, so you’re never limited to project resolution. Familiar basic adjustments including white balance, exposure, color and saturation give you a comfortable starting point. With non-destructive processing you can reframe, crop and re-interpret your original sensor data at any time. And with GPU acceleration, entire albums can be processed dramatically faster than conventional photo applications!
The Photo page Inspector gives you precise control over the transform and cropping parameters of your images. Reframe and crop non-destructively at the original source resolution and aspect ratio, so you’re never restricted to a fixed timeline size! Zoom, position, rotate and flip images with full transform controls and use the cropping parameters to trim the edges of any image with precision. Reframe a shot to improve composition, adjust for a specific ratio for print or social media use, or simply remove unwanted elements from the edges of a frame. All adjustments can be refined or reset at any time without ever affecting the original source file!
DaVinci Resolve is the world’s only post production software that lets everyone work together on the same project at the same time! Built on a powerful cloud based workflow, you can share albums, all associated metadata and tags, as well as grades and effects with colorists, photographers and retouchers anywhere in the world. Blackmagic Cloud syncing keeps every collaborator with the latest version of your image library in real time, and remote reviewers can approve grades offsite without needing to be in the same room. Hollywood colorists can even grade live fashion shoots remotely, all while the photographer is still on set!
The Photo page gives you everything you need to manage your entire image library from import to completion. You can import photos directly, from your Apple Photos library or Lightroom, and organize them with tags, ratings, favorites and keywords for fast, flexible management of even the largest libraries. It supports all standard RAW files and image types. AI IntelliSearch lets you instantly search across your entire project to find exactly what you’re looking for, from objects to people to animals! Albums allow you to build and manage collections for any project and with a single click you can switch between your photo library and your color grading workflow!
Albums are a powerful way to build and manage photo collections directly in DaVinci Resolve. You can add images manually to each album or organize by date, camera, star rating, EXIF data and more. Powerful filter and sort tools give you total control over how your collection is arranged. The thumbnail view displays each image’s graded version alongside its file name and source clip format so you can see your grades at a glance. Create multiple grade versions of any image, all referencing the original source file, so you can explore different looks without ever duplicating a file. Plus, grades applied to one photo can be instantly copied across others in the album for a fast, consistent look!
Connect Sony or Canon cameras directly to DaVinci Resolve for tethered shooting with full live view! Adjust camera settings including ISO, exposure and white balance without leaving the page and save image capture presets to establish a consistent look before you shoot. Images can be captured directly into an album, with albums created automatically during capture so your library is perfectly organized from the moment you start shooting. Grade images as they arrive using DaVinci Resolve’s extensive color toolset and use a hardware panel for hands-on creative control in a collaborative shoot. That means you can capture, grade and organize an entire shoot without leaving DaVinci Resolve!
The Photo page gives you access to over 100 GPU and CPU accelerated Resolve FX and specialty AI tools for still image work. They’re organized by category in the Open FX library and cover everything from color effects, blurs and glows to image repair, skin refinement and cinematic lighting tools. These are the same tools used by Hollywood colorists and VFX artists on the world’s biggest productions, now available for still images. To add an effect, drag it to any node. Whether you’re making subtle beauty refinements for a fashion shoot or applying dramatic film looks and atmospheric lighting effects emulating the looks of a Hollywood feature, the Photo page has the tools you need!
Magic Mask makes precise selections of subjects or backgrounds, while Depth Map generates a 3D map of your scene to separate foreground and background without manual masking. Use together to grade different depths of an image independently for results that have never before been possible for stills!
Add a realistic light source to any photo after capture with Relight FX. Relight analyzes the surfaces of faces and objects to reflect light naturally across the image. Combine with Magic Mask to light a subject independently from the background, turning flat portraits into stunning fashion images!
Face refinement automatically masks different parts of a face, saving countless hours of manual work. Sharpen eyes, remove dark circles, smooth skin, and color lips. Ultra Beauty separates skin texture from color for natural, high end results, while AI Blemish Removal handles fast skin repair!
The Film Look Creator lets you add cinematic looks that replicate film properties like halation, bloom, grain and vignetting. Adjust exposure in stops and use subtractive saturation, richness and split tone controls to achieve looks usually found on the big screen, now for your still images!
AI SuperScale uses the DaVinci AI Neural Engine to upscale low resolution images with exceptional quality. The enhanced mode is specifically designed to remove compression artifacts, making it the perfect tool for rescaling low quality photos or frame grabs up to 4x their original resolution!
UltraNR is a DaVinci AI Neural Engine driven denoise mode in the Color page’s spatial noise reduction palette. Use it to dramatically reduce digital noise from an image while maintaining image clarity. Use with spatial noise reduction to smooth out digital grain or scanner noise while keeping fine hair and eye edges sharp.
Sample an area of a scene to quickly cover up unwanted elements, like objects or even blemishes on a face. The patch replacer has a fantastic auto grading feature that will seamlessly blend the covered area with the surrounding color data. Perfect for removing sensor dust.
The Quick Export option makes it fast and easy to deliver finished images in a wide range of common formats including JPEG, PNG, HEIF and TIFF. Export either an entire album or just selected photos providing flexibility to meet your specific delivery needs. You can set the resolution, bit depth, quality and compression to ensure your images are optimized for their intended use. Whether you’re exporting standalone images for print, sharing on social media platforms or delivering graded files to a client, Quick Export has you covered. All exports preserve your original photo EXIF metadata, so camera settings, location data and other important information always travels with your files.
The Photo page uses GPU accelerated processing to deliver fast, accurate results across your entire workflow. Process hundreds of RAW files in seconds with GPU accelerated decoding and apply Resolve FX to your images in real time. GPU acceleration also means batch exports and conversions are dramatically faster than conventional photo applications. On Mac, DaVinci Resolve is optimized for Metal and Apple Silicon, taking full advantage of the latest hardware. On Windows and Linux, you get CUDA support for NVIDIA GPUs, while the Windows version also features full OpenCL support for AMD, Intel and Qualcomm GPUs. All this ensures you get high performance results on any system!
Hollywood colorists have always relied on hardware panels to work faster and more creatively and now photographers can too! The DaVinci Resolve Micro Color Panel is the perfect companion for photo grading as it is compact enough to sit next to a laptop and portable enough to take on location for shoots. It features three high quality trackballs for lift, gamma and gain adjustments, 12 primary correction knobs for contrast, saturation, hue, temperature and more. It even has a built in rechargeable battery! DaVinci Resolve color panels let you adjust multiple parameters at once, so you can create looks that are simply impossible with a mouse and keyboard.
Hollywood’s most popular solution for editing, visual effects, motion graphics, color correction and audio post production, for Mac, Windows and Linux. Now supports Blackmagic Cloud for collaboration!
The most powerful DaVinci Resolve adds DaVinci Neural Engine for automatic AI region tracking, stereoscopic tools, more Resolve FX filters, more Fairlight FX audio plugins and advanced HDR grading.
Includes large search dial in a design that includes only the specific keys needed for editing. Includes Bluetooth with battery for wireless use so it’s more portable than a full sized keyboard!
Editor panel specifically designed for multi-cam editing for news cutting and live sports replay. Includes buttons to make camera selection and editing extremely fast! Connects via Bluetooth or USB‑C.
Full sized traditional QWERTY editor keyboard in a premium metal design. Featuring a metal search dial with clutch, plus extra edit, trim and timecode keys. Can be installed inset for flush mounting.
Powerful color panel gives you all the control you need to create cinematic images. Includes controls for refined color grading including adding windows. Connects via Bluetooth or USB‑C.
Portable DaVinci color panel with 3 high resolution trackballs, 12 primary corrector knobs and LCDs with menus and buttons for switching tools, adding color nodes, HDR and secondary grading and more!
Designed in collaboration with professional Hollywood colorists, the DaVinci Resolve Advanced Panel features a massive number of controls for direct access to every DaVinci color correction feature.
Portable audio control surface includes 12 premium touch sensitive flying faders, channel LCDs for advanced processing, automation and transport controls plus HDMI for an external graphics display.
Get incredibly fast audio editing for sound engineers working on tight deadlines! Includes LCD screen, touch sensitive control knobs, built in search dial and full keyboard with multi function keys.
Used by Hollywood and broadcasters, these large consoles make it easy to mix large projects with a massive number of channels and tracks. Modular design allows customizing 2, 3, 4, or 5 bay consoles!
Fairlight studio console legs at 0º angle for when you require a flat working surface. Required for all Fairlight Studio Consoles.
Fairlight studio console legs at 8º angle for when you require a slightly angled working surface. Required for all Fairlight Studio Consoles.
Features 12 motorized faders, rotary control knobs, illuminated buttons for pan, solo, mute and call, plus bank select buttons.
12 groups of touch sensitive rotary control knobs and illuminated buttons, assignable to fader strips, single channel or master bus.
Get quick access to virtually every Fairlight feature! Includes a 12” LCD, graphical keyboard, macro keys, transport controls and more.
Features HDMI, SDI inputs for video and computer monitoring and Ethernet for graphics display of channel status and meters.
Empty 2 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 3 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 4 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 5 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Use alternative HDMI or SDI televisions and monitors when building a Fairlight studio console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 2 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 3 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 4 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 5 bay Fairlight console.
Side arm kit mounts into Fairlight console mounting bar and holds each fader, channel control and LCD monitor module.
Blank 1/3rd wide bay for building a custom console with the extra 1/3rd section. Includes blank infill panels.
Allows mounting standard 19 inch rack mount equipment in the channel control area of the Fairlight studio console.
Blank panel to fill in the channel control area of the Fairlight studio console.
Blank panel to fill in the LCD monitor area of the Fairlight studio console when you’re not using the standard Fairlight LCD monitor.
Blank panel to fill in the fader control area of the Fairlight studio console.
Adds 3 MADI I/O connections to the single MADI on the accelerator card, for a total of 256 inputs and outputs at 24 bit and 48kHz.
Add up to 2,000 tracks with real time processing of EQ, dynamics, 6 plug‑ins per track, plus MADI for extra 64 inputs and outputs.
Adds analog and digital connections, preamps for mics and instruments, sample rate conversion and sync at any standard frame rate.
...
Read the original on www.blackmagicdesign.com »
Servo is now available on crates.io
Today the Servo team has released v0.1.0 of the servo crate. This is our first crates.io release of the servo crate that allows Servo to be used as a library.
We currently do not have any plans of publishing our demo browser servoshell to crates.io. In the 5 releases since our initial GitHub release in October 2025, our release process has matured, with the main “bottleneck” now being the human-written monthly blog post. Since we’re quite excited about this release, we decided to not wait for the monthly blog post to be finished, but promise to deliver the monthly update in the coming weeks.
As you can see from the version number, this release is not a 1.0 release. In fact, we still haven’t finished discussing what 1.0 means for Servo. Nevertheless, the increased version number reflects our growing confidence in Servo’s embedding API and its ability to meet some users’ needs.
In the meantime we also decided to offer a long-term support (LTS) version of Servo, since breaking changes in the regular monthly releases are expected and some embedders might prefer doing major upgrades on a scheduled half-yearly basis while still receiving security updates and (hopefully!) some migration guides. For more details on the LTS release, see the respective section in the Servo book.
...
Read the original on servo.org »
Focused async Python bot for Polymarket that buys No on standalone non-sports yes/no markets.
FOR ENTERTAINMENT ONLY. PROVIDED AS IS, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK. THE AUTHORS ARE NOT LIABLE FOR ANY CLAIMS, LOSSES, OR DAMAGES.
The bot scans standalone markets, looks for NO entries below a configured price cap, tracks open positions, exposes a dashboard, and persists live recovery state when order transmission is enabled.
If any of those are missing, the bot uses PaperExchangeClient.
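As an illustration only (this is not the bot's actual code, and the field names are assumptions), the core entry filter amounts to something like:

```python
def eligible_no_entries(markets, price_cap):
    """Filter standalone yes/no markets whose NO price is under the cap.
    Field names here are assumptions for illustration."""
    return [
        m for m in markets
        if m.get("standalone")
        and m.get("outcome_type") == "yes_no"
        and m.get("no_price", 1.0) < price_cap
    ]
```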
pip install -r requirements.txt
cp config.example.json config.json
cp .env.example .env
config.json is intentionally local and ignored by git.
The runtime config lives under strategies.nothing_happens. See config.example.json and .env.example.
You can point the runtime at a different config file with CONFIG_PATH=/path/to/config.json.
python -m bot.main
The dashboard binds $PORT or DASHBOARD_PORT when one is set.
The shell helpers use either an explicit app name argument or HEROKU_APP_NAME.
export HEROKU_APP_NAME=
heroku config:set BOT_MODE=live DRY_RUN=false LIVE_TRADING_ENABLED=true -a "$HEROKU_APP_NAME"
heroku config:set PRIVATE_KEY=
Only run the web dyno. The worker entry exists only to fail fast if it is started accidentally.
python -m pytest -q
Local config, ledgers, exports, reports, and deployment artifacts are ignored by default.
...
Read the original on github.com »
A U.S. appeals court on Friday declared unconstitutional a nearly 158-year-old federal ban on home distilling, calling it an unnecessary and improper means for Congress to exercise its power to tax.
The 5th U.S. Circuit Court of Appeals in New Orleans ruled in favor of the nonprofit Hobby Distillers Association and four of its 1,300 members.
They argued that people should be free to distill spirits at home, whether as a hobby or for personal consumption, including, in one instance, to create an apple-pie vodka recipe.
The ban was part of a law passed during Reconstruction in July 1868, in part to thwart liquor tax evasion, and subjected violators to up to five years in prison and a $10,000 fine.
Writing for a three-judge panel, Circuit Judge Edith Hollan Jones said the ban actually reduced tax revenue by preventing distilling in the first place, unlike laws that regulated the manufacture and labeling of distilled spirits on which the government could collect taxes.
She also said that under the government’s logic, Congress could criminalize virtually any in-home activity that might escape notice from tax collectors, including remote work and home-based businesses.
“Without any limiting principle, the government’s theory would violate this court’s obligation to read the Constitution carefully to avoid creating a general federal authority akin to the police power,” Jones wrote.
The U.S. Department of Justice had no immediate comment.
Another defendant, the Treasury Department’s Alcohol and Tobacco Tax and Trade Bureau, did not immediately respond to a request for comment.
Devin Watkins, a lawyer representing the Hobby Distillers Association, in an interview called the ruling an important decision about the limits of federal power.
Andrew Grossman, who argued the nonprofit’s appeal, called the decision “an important victory for individual liberty” that lets the plaintiffs “pursue their passion to distill fine beverages in their homes.”
“I look forward to sampling their output,” he said.
The decision upheld a July 2024 ruling by U.S. District Judge Mark Pittman in Fort Worth, Texas. He put his ruling on hold so the government could appeal.
...
Read the original on nypost.com »
In my previous blog post I gave a quick and easy introduction to tmux and explained how to use tmux with a basic configuration.
If you’ve followed that guide you might have had a feeling that many people have when working with tmux for the first time: “These key combinations are really awkward!”. Rest assured, you’re not alone. Judging from the copious blog posts and dotfiles repos on GitHub, there are many people out there who feel the urge to make tmux behave a little differently, to make it more comfortable to use.
And actually it’s quite easy to customize the look and feel of tmux. Let me tell you something about the basics of customizing tmux and share some of the configurations I find most useful.
Customizing tmux is as easy as editing a text file. Tmux uses a file called tmux.conf to store its configuration. If you store that file as ~/.tmux.conf (note the period as the first character in the file name: it’s a hidden file), tmux will pick up this configuration for your current user. If you want to share a configuration across multiple users, you can also put your tmux.conf into a system-wide directory. The location of this directory differs across operating systems; the man page (man tmux) will tell you the exact location, just have a look at the documentation for the -f parameter.
Probably the most common change among tmux users is to change the prefix from the rather awkward C-b to something that’s a little more accessible. Personally I’m using C-a instead, but note that this might interfere with bash’s “go to beginning of line” command1. On top of the C-a binding I’ve also remapped my Caps Lock key to act as Ctrl since I’m not using Caps Lock anyways. This allows me to nicely trigger my prefix key combo.
To change your prefix from C-b to C-a, simply add following lines to your tmux.conf:
# remap prefix from 'C-b' to 'C-a'
unbind C-b
set-option -g prefix C-a
bind-key C-a send-prefix
Another thing I personally find quite difficult to remember is the pane splitting commands. Using " to split vertically and % to split horizontally just doesn’t work for my brain. I find it helpful to use characters that resemble a visual representation of the split, so I chose | and - for splitting panes horizontally and vertically:
# split panes using | and -
bind | split-window -h
bind - split-window -v
unbind '"'
unbind %
Since I’m experimenting quite often with my tmux.conf I want to reload the config easily. This is why I have a command to reload my config on r:
# reload config file (change file location to the tmux.conf you want to use)
bind r source-file ~/.tmux.conf
Switching between panes is one of the most frequent tasks when using tmux. Therefore it should be as easy as possible. I’m not quite fond of triggering the prefix key all the time. I want to be able to simply press M-<arrow> to go where I want to go (remember: M is for Meta, which is usually your Alt key). With this modification I can simply press Alt-left to go to the left pane (and other directions respectively):
# switch panes using Alt-arrow without prefix
bind -n M-Left select-pane -L
bind -n M-Right select-pane -R
bind -n M-Up select-pane -U
bind -n M-Down select-pane -D
Although tmux clearly focuses on keyboard-only usage (and this is certainly the most efficient way of interacting with your terminal) it can be helpful to enable mouse interaction with tmux. This is especially helpful if you find yourself in a situation where others have to work with your tmux config and naturally don’t have a clue about your key bindings or tmux in general. Pair Programming might be one of those occasions where this happens quite frequently.
Enabling mouse mode allows you to select windows and different panes by simply clicking and to resize panes by dragging their borders around. I find it pretty convenient and it doesn’t get in my way often, so I usually enable it:
# Enable mouse control (clickable windows, panes, resizable panes)
set -g mouse on
I like to give my tmux windows custom names using the , key. This helps me name my windows according to the context they’re focused on. By default tmux will update the window title automatically depending on the last executed command within that window. To prevent tmux from overriding my wisely chosen window names, I want to suppress this behavior:
# don’t rename windows automatically
set-option -g allow-rename off
Changing the colors and design of tmux is a little more complex than what I’ve presented so far. As tmux allows you to tweak the appearance of a lot of elements (e.g. the borders of panes, your statusbar and individual elements of it, messages), you’ll need to add a few options to get a consistent look and feel. You can make this as simple or as elaborate as you like. Tmux’s man page (specifically the STYLES section) contains more information about what you can tweak and how you can tweak it.
Here is the design section of my own tmux.conf as an example; how the result looks will depend on your terminal’s color scheme:
# DESIGN TWEAKS
# don't do anything when a 'bell' rings
set -g visual-activity off
set -g visual-bell off
set -g visual-silence off
setw -g monitor-activity off
set -g bell-action none
# clock mode
setw -g clock-mode-colour yellow
# copy mode
setw -g mode-style 'fg=black bg=red bold'
# panes
set -g pane-border-style 'fg=red'
set -g pane-active-border-style 'fg=yellow'
# statusbar
set -g status-position bottom
set -g status-justify left
set -g status-style 'fg=red'
set -g status-left ''
set -g status-left-length 10
set -g status-right-style 'fg=black bg=yellow'
set -g status-right '%Y-%m-%d %H:%M '
set -g status-right-length 50
setw -g window-status-current-style 'fg=black bg=red'
setw -g window-status-current-format ' #I #W #F '
setw -g window-status-style 'fg=red bg=black'
setw -g window-status-format ' #I #[fg=white]#W #[fg=yellow]#F '
setw -g window-status-bell-style 'fg=yellow bg=red bold'
# messages
set -g message-style 'fg=yellow bg=red bold'
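One detail worth knowing about the status-right line above: tmux passes that string through strftime(3), so %Y-%m-%d %H:%M expands to the current date and time. Since date(1) understands the same conversion codes, you can preview exactly what the status bar will show:

```shell
# Preview the clock portion of the status bar; date(1) uses the
# same strftime(3) codes tmux applies to status-right.
date +"%Y-%m-%d %H:%M"
```

Tweak the format string on the command line until it looks right, then copy it back into status-right.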
In the snippet above, I’m using your terminal’s default colors (by using the named colors, like red, yellow or black). This allows tmux to play nicely with whatever color theme you have set for your terminal. Some prefer to use a broader range of colors for their terminals and tmux color schemes. If you don’t want to use your terminal’s default colors but instead want to define colors from a 256-color range, you can use colour0 to colour255 instead of red, cyan, and so on when defining your colors in your tmux.conf.
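If you want to preview those 256 colours in your own terminal before settling on a colourN value, a small shell loop will print every index in its own colour. This uses standard ANSI 256-colour escape sequences and is not tmux-specific:

```shell
# Print all 256 colour indices, each rendered in its own foreground
# colour, 16 per line. Use any index as colourN in your tmux.conf.
for i in $(seq 0 255); do
  printf '\033[38;5;%dm%3d\033[0m ' "$i" "$i"
  if [ $(( (i + 1) % 16 )) -eq 0 ]; then
    printf '\n'
  fi
done
```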
Looking for a nice color scheme for your terminal?
If you’re looking for a nice color scheme for your terminal, I recommend checking out my very own Root Loops. With Root Loops you can easily design a personal, awesome-looking terminal color scheme and stand out from all the folks using the same boring-ass color schemes as everyone else.
There are plenty of resources out there where you can find people presenting their tmux configurations. GitHub and other code hosting services tend to be a great source; simply search for “tmux.conf” or repos called “dotfiles” to find a vast number of configurations. Some people share their configuration on their blog, and Reddit has a few subreddits with useful inspiration, too (there’s /r/dotfiles and /r/unixporn, for example).
You can find my complete tmux.conf (along with other configuration files I’m using on my systems) on my personal dotfiles repo on GitHub.
If you want to dive deeper into how you can customize tmux, the canonical source of truth is tmux’s man page (simply type man tmux to get there). You should also take a look at the elaborate tmux wiki and see their Configuring tmux section if this blog post was too shallow for your needs. Both will contain up-to-date information about each and every tiny thing you can tweak to make your tmux experience truly yours. Have fun!
...
Read the original on hamvocke.com »
The first four months of 2026 have produced a sequence of cyber incidents that, if any one of them had landed in 2014 or 2017, would have dominated a news cycle for a week.
A Chinese state supercomputer reportedly bled ten petabytes. Stryker was wiped across 79 countries. Lockheed Martin was hit for a reported 375 terabytes. The FBI Director’s personal inbox was dumped on the open web. The FBI’s wiretap management network was breached in a separate “major incident.” Rockstar Games was breached through a SaaS analytics vendor most people have never heard of. Cisco’s private GitHub was cloned. Oracle’s legacy cloud cracked open. The Axios npm package, downloaded a hundred million times a week, was hijacked by North Korea. Mercor, the $10 billion AI training-data vendor that sits inside the data pipelines of OpenAI, Anthropic, and Meta simultaneously, was breached through the LiteLLM open source library and had 4 terabytes extracted by Lapsus$. Honda was hit twice. The new ShinyHunters/Scattered Spider/LAPSUS$ alliance breached approximately 400 organizations and exfiltrated roughly 1.5 billion Salesforce records.
Stacked on top of each other across roughly a hundred days, these events are something a historian of computing security writing in 2050 will probably file as a turning point, regardless of what else happens between now and then.
And yet, the public conversation around them has been quiet to the point of being strange. This is a curious observation more than a complaint. And the goal of what follows is to gather the events into one place, cite the publications that reported each one, and then ask, gently, why the period feels so undocumented in real time.
Every named incident below is followed by inline parenthetical citations to the publications that broke or covered it, in the same way an academic paper would.
I am not arguing that the cybersecurity community is failing. I am noting that something unusual is happening.
Strip out the noise and the 2026 wave so far breaks cleanly into four separate campaigns running in parallel against U.S. and Western targets. This convergence is the part nobody is naming out loud.
Cluster 1: Iran / Handala / Void Manticore (destructive state operations). Operating under the Handala Hack Team persona, attributed by Palo Alto Networks Unit 42 to Void Manticore, an actor linked to Iran’s Ministry of Intelligence and Security. Handala is claiming attacks against U.S. industrial, defense, and government targets and explicitly framing them as retaliation for a February 28 missile strike on a school in Minab, southern Iran, that killed at least 175 people, most of them children. Confirmed and claimed Q1 2026 victims: Stryker (200,000 devices wiped), Lockheed Martin (375 TB claim, 28 engineer doxxing), FBI Director Kash Patel (personal email dump).
Cluster 2: Scattered LAPSUS$ Hunters / SLH — the apex-predator merger (financially-motivated SaaS theft and extortion at industrial scale). This is the single largest and least-discussed organizational development in the criminal cyber landscape since the Conti collapse. In August 2025, three of the most notorious financially-motivated crews on the planet, ShinyHunters, Scattered Spider, and LAPSUS$, formally combined into a coordinated alliance widely tracked as Scattered LAPSUS$ Hunters (SLH), sometimes called “the Trinity of Chaos” (Resecurity; Cyberbit; Infosecurity Magazine; The Hacker News; Computer Weekly; ReliaQuest). Scattered Spider provides initial access through highly-effective social engineering and vishing. ShinyHunters handles exfiltration, leak-site management, and extortion. LAPSUS$ contributes its own brand of identity-system compromise. The result is an end-to-end criminal pipeline operating against the SaaS layer of the global enterprise.
The numbers from this cluster’s 2025-2026 Salesforce campaign alone are difficult to absorb. ShinyHunters has publicly claimed compromise of approximately 300 to 400 organizations, with around 100 described as high-profile, and approximately 1.5 billion Salesforce records stolen in aggregate (BankInfoSecurity, “ShinyHunters Counts 1.5 Billion Stolen Salesforce Records”; The Register; State of Surveillance, “400 Companies Breached”; Salesforce Ben). Salesforce released a security advisory on March 7, 2026 confirming that a “known threat group” was exploiting misconfigurations in its Experience Cloud product, and ShinyHunters claimed responsibility on its data leak site two days later. The named victim list reads like a roll call of global brand recognition: Google (corporate Salesforce instance, ~2.55M records of small and medium business contact data), Cisco, Adidas, Qantas (5.7M customers), Allianz Life, Farmers Insurance Group, Workday, Pandora, Chanel, TransUnion, the entire LVMH family including Louis Vuitton, Dior, Tiffany & Co., and Cartier, Air France-KLM, LastPass, Okta, AMD, Snowflake itself, Match Group (Hinge, Bumble, OkCupid), SoundCloud (29.8M users), Panera Bread (5.1M accounts), Betterment (1.4M), Harvard, the University of Pennsylvania, Crunchbase, Canada Goose, and the December 2025 Pornhub breach via the Mixpanel campaign that exposed roughly 200 million user records and 94 GB of historical analytics data (BleepingComputer on Qantas, Allianz Life, LVMH; Cybersecurity News on Google, Adidas, Louis Vuitton; Malwarebytes; Google Cloud Threat Intel; Wikipedia ShinyHunters). 
Q1 2026 alone added Rockstar Games (via Anodot → Snowflake), the Cisco Trivy / Salesforce double hit, and the single most consequential AI-industry-specific incident of the quarter: the Lapsus$-claimed breach of Mercor, the $10B AI recruiting and training-data vendor that sits inside the data pipelines of OpenAI, Anthropic, and Meta simultaneously, after a LiteLLM open source supply chain compromise by the TeamPCP cluster. All catalogued in dedicated sections later in this article.
Tradecraft note: this cluster is no longer just compromising SaaS integrators to lift their OAuth tokens, although that remains part of the playbook (the 2025 Salesloft Drift / UNC6395 incident that compromised over 700 Salesforce environments including Cloudflare, Google, PagerDuty, Palo Alto Networks, Proofpoint, Tanium, Zscaler, and CyberArk is the precedent that proved the OAuth model works at scale, Unit 42 threat brief; UpGuard; Cloudflare response). The 2026 evolution is more direct: SLH operators now call employees on the phone, pretend to be IT support, walk them through “updating MFA settings” or “linking the Salesforce Data Loader app,” and harvest credentials, MFA codes, and OAuth grants in real time (Google Cloud Blog, “The Cost of a Call”; Varonis; The Hacker News). In parallel, ShinyHunters has weaponized Mandiant’s own AuraInspector audit tool to scan and exploit misconfigured Salesforce Experience Cloud guest user permissions across the customer base (Allure Security). Voice phishing has produced more 2026 enterprise breaches than any single technical vulnerability.
Cluster 3: North Korea / UNC1069 (open source supply chain compromise). Google Threat Intelligence Group attributed the March 31, 2026 hijack of the Axios npm package to UNC1069, a North Korea-nexus financially-motivated actor. They did not exploit a vulnerability. They built an entire fake company, a branded fake Slack workspace, a fake Microsoft Teams meeting with fake teammates, social-engineered the lead Axios maintainer into trusting the fake organization, and used that trust to seize his npm account. Then they shipped a cross-platform RAT to a JavaScript library with roughly 100 million weekly downloads. Cisco’s separate March 2026 breach via the Trivy supply chain attack, in which over 300 internal GitHub repositories were cloned, fits the same general pattern of upstream developer-trust compromise.
Cluster 4: Russia / APT28 (zero-day exploitation against Ukraine and EU). Russia-backed APT28 began exploiting a freshly disclosed Microsoft Office vulnerability, CVE-2026-21509, within days of its January patch release. Targets included Ukrainian government bodies and over 60 European email addresses, with malicious Office documents disguised as correspondence from Ukraine’s hydrometeorological center. This is the only cluster of the four that is not primarily aimed at the United States, but it shares the architecture: speed of weaponization measured in days, exploitation of trust relationships, and minimal Western public response.
All four clusters are exploiting the same structural weakness: the modern Western enterprise no longer has a defensible perimeter, only a long chain of vendor and developer trust relationships, any of which can be turned against the host. Iran is using that chain to break things. ShinyHunters is using it to extort money. North Korea is using it to seed implants into the world’s developer machines. Russia is using it to read European inboxes. The chain is the same. Only the payloads differ.
Setting aside any argument about cause and effect, there is a parallel set of numbers from the AI side of the industry over the same period that is worth putting on the table. They may or may not explain the wave above. They are at minimum strange enough to be worth noting alongside it, and the public obscurity around them is itself part of the observation.
In late 2025, Anthropic published a report titled “Disrupting the first reported AI-orchestrated cyber espionage campaign.” In it, the company disclosed that a Chinese state-aligned actor had used Claude to automate a spying operation against approximately 30 organizations, with AI handling an estimated 80 to 90 percent of the campaign workload and human operators intervening only sporadically (Anthropic full report PDF; Anthropic news release). That disclosure came from the model vendor itself, not a third-party threat intel report, which is unusual on its own. What is more unusual is how little subsequent discussion it generated outside specialist circles.
Around it sits a stack of measurement data from Hoxhunt, ZeroThreat, StrongestLayer, Bright Defense, and StationX that points in the same direction across 2025 and into 2026. None of these numbers, on their own, prove a causal link to any specific incident in this article. Taken together they describe a sharp shift in the ambient threat environment that has gone largely unremarked upon in mainstream coverage:
On the threat-intel side, Microsoft’s tracking now formally describes two North Korean threat actor clusters, Jasper Sleet and Coral Sleet, as using AI across the attack lifecycle from reconnaissance through impersonation through post-compromise (Dark Reading). Genians and The Record have separately documented Kimsuky, the long-running North Korean APT, using ChatGPT to forge convincing South Korean military and government identification documents for phishing lures (Genians; The Record; eSecurityPlanet). In March 2026 the U.S. Treasury’s OFAC sanctioned six individuals and two entities involved in the broader DPRK IT worker fraud scheme, in which large language models are used to generate fake personas, resumes, and even interview answers to land remote engineering jobs at Fortune 500 companies (The Hacker News; TechRadar on OpenAI bans). Whether you read that as a trend or a coincidence, it is on the public record.
There is also the widely reported multi-person Microsoft Teams call in which a financial department employee was manipulated by an AI-generated deepfake of their own CFO, alongside other AI-generated “colleagues,” into wiring more than 25 million U.S. dollars to Hong Kong bank accounts (Microtime). Whatever else that incident tells us, it confirms that the infrastructure to fake a convincing multi-person video call in real time exists and has been used.
From the defender side, Anthropic’s internal red-team evaluation of its withheld Mythos model found that the model could complete a simulated network intrusion in 6.2 hours versus 10.4 hours for GPT-4o, and could identify exploitable flaws in 73 percent of the applications it scanned (NPR; Axios; CNN Business; Fortune). Anthropic has declined to release Mythos publicly, restricting access to approximately 40 technology companies including Microsoft and Google. OpenAI is finalizing a comparable model that will ship only to a small vetted customer set through a “Trusted Access for Cyber” program (Axios). Two leading frontier labs simultaneously holding back cyber-capable models on safety grounds is, again, not necessarily evidence of anything causal. It is, again, worth noting.
And then, on April 7, 2026, the part of this story that should anchor every other paragraph in this section finally happened, in private, at the highest possible level of the United States government, and almost nobody outside the financial press picked it up.
On that date, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent, in-person meeting in Washington with the chief executives of Goldman Sachs, Citigroup, Morgan Stanley, Bank of America, and Wells Fargo, to brief them directly on the cyber risks posed by Mythos (Bloomberg, “Bessent, Powell Summon Bank CEOs to Urgent Meeting Over Anthropic’s New AI Model”; Bloomberg, “Mythos: Why Anthropic’s New AI Has Officials Worried”; Fortune; CNBC; CNBC / Reuters; Fox News; Yahoo Finance; TechXplore). The meeting was triggered by Anthropic’s disclosure that Mythos had identified thousands of previously unknown zero-day vulnerabilities in every major operating system and every major web browser, along with a range of other critical software. Anthropic said the vulnerability-discovery capability was sufficiently dangerous that the model could only be released to a tightly controlled handful of trusted parties. Bessent and Powell, having absorbed that disclosure, decided that the heads of the largest systemically important U.S. banks needed to be told in person.
Pause and read that paragraph one more time. The Treasury Secretary and the Federal Reserve Chair do not convene the CEOs of the largest U.S. banks about a single software vendor’s product. They convene them about financial stability events. The fact that this meeting happened, on this subject, at this level, is the single most authoritative signal in this entire notebook that something has shifted in the cyber threat landscape at a magnitude the federal government considers comparable in importance to a financial stability concern. Treasury and the Fed are not in the habit of sounding the alarm over technology vendor product releases. They sounded it over this one.
This meeting also reframes the silence the article keeps returning to. The silence has not broken in mainstream public discourse. It has clearly broken in private, at the top of the U.S. government, in classified briefings and emergency convenings the public is mostly not seeing. The historian’s question is no longer whether the cyber community is being quiet. The historian’s question is why the public conversation is so thoroughly out of sync with what is clearly being discussed behind closed doors at the level of the Treasury and the Federal Reserve. The gap between those two layers of conversation is, in the long view, the most interesting thing in this entire chronicle.
None of the above proves that any specific incident in this article was AI-driven. The Stryker wipe was executed through Microsoft Intune, not a chatbot. The Patel email leak was a personal Gmail compromise. The Lockheed claims remain unverified. What the AI numbers do establish, with a fair amount of confidence, is that the ambient cost of running a convincing offensive operation has shifted dramatically over the same window in which the wave above unfolded. Two things changed at once. A reasonable observer can decide for themselves whether they are connected, but a reasonable observer should at least know both lists exist, side by side, and that almost nobody in mainstream coverage has put them on the same page.
That obscurity is the part of this section that matters most. It is not the AI numbers. It is the silence around them.
On March 11, 2026, Stryker Corporation, one of the largest medical device companies on the planet, watched its global operations collapse inside an afternoon (Krebs on Security; Cybersecurity Dive; HIPAA Journal; Stryker official statement). Attackers compromised a Windows domain administrator account, used it to provision a new Global Administrator inside the company’s Microsoft Entra and Intune environment, and then issued mass remote-wipe and factory-reset commands across the device fleet (Lumos; The Register; Coalition). More than 200,000 systems, servers, laptops, and mobile devices were wiped within minutes. Offices in 79 countries went dark. Order processing, manufacturing, and shipping all stopped.
The Handala Hack Team claimed responsibility, claimed exfiltration of roughly 50 terabytes of data prior to the wipe, and began publishing it from infrastructure that the FBI later seized. Stryker has since recovered and reported full operational restoration, and employee lawsuits have already been filed. The downstream effect on patients is the part that has not been adequately reported: hospitals that relied on Stryker surgical hardware and the company’s order and support systems had to postpone procedures while the fleet rebuilt, which means the wiper translated directly into cancelled surgeries across multiple countries in the hours and days after the event.
Who Handala actually is matters for reading this incident correctly. “Handala Hack Team” is not an independent crew. The U.S. Department of Justice formally classifies it as “a fictitious identity used by MOIS to hide its role in influence operations and psychological scaremongering campaigns.” The underlying operator is assessed as Void Manticore, also tracked as Banished Kitten, Red Sandstorm, and Storm-842, an offensive unit sitting inside the Iranian Ministry of Intelligence and Security. The persona first surfaced in December 2023, immediately after October 7, and inherited the operational lineage of two earlier MOIS fronts: Homeland Justice, which ran the 2022 to 2023 Albania operations, and Karma, which Handala formally replaced. The unit it belongs to was, until early 2026, headed by Seyed Yahya Hosseini Panjaki, sanctioned by the U.S. Treasury in September 2024, then by the EU and UK, specifically for overseeing Iranian dissident assassination operations, and placed on the FBI terrorism watch list. Panjaki was killed in the opening phase of U.S. and Israeli strikes on Iranian intelligence infrastructure in early March 2026. The Stryker attack landed after his death, under the same persona. The organizational resilience is itself part of the story.
The stated motive published by Handala is retaliation for the February 28, 2026 strike on a school in Minab, southern Iran, with a claimed casualty count of more than 170 children. That framing is the group’s own, and it should be read as psychological operation as much as attribution, but it is the reason the operator put on the record. On March 19, 2026, the FBI seized four Handala domains (including handala-hack.tw, hosted on a Taiwan top-level domain specifically to avoid Western takedown jurisdiction) and the State Department announced a $10 million bounty through its Rewards for Justice program. A replacement site was standing within hours. Handala publicly answered the bounty with a $50 million “counter-bounty” threat framed at Trump and Netanyahu. The infrastructure traces to Cloudzy (PONYNET), a bulletproof hosting provider that Halcyon has assessed with high confidence is a front for abrNOC, an Iranian hosting company founded in the same year by the same individual, with post-seizure failover routed through Russian DDoS provider DDOS-Guard.
Read all of that in one breath. A MOIS-operated persona whose unit head was killed three weeks earlier walked into one of the largest medical device manufacturers in the world, exfiltrated 50 TB, then pushed a destructive button that bricked 200,000 endpoints across 79 countries in minutes, postponed surgeries, stated a retaliation motive, absorbed a $10 million FBI bounty, had four of its domains seized, and was operating a replacement site the same day. The recovery worked, which is a credit to Stryker’s incident response team, but the fact that the recovery worked does not erase what happened, and what happened is the most consequential wartime cyber attack on U.S. soil in the public record. Coverage outside specialist outlets was minimal.
This is actually two distinct incidents aimed at Lockheed Martin inside a few days of each other, and conflating them has caused most of the coverage confusion.
Incident one, the 375 TB claim. An entity self-identifying as APT Iran, which sits inside the broader Handala ecosystem but publishes under its own banner, claimed on or around March 2026 to have exfiltrated 375 terabytes of data from Lockheed Martin and listed the cache for sale on dark web infrastructure (Cybersecurity Dive; UpGuard; Cybersecurity Insiders; Hackread). The initial price was reported at roughly $400 million USD and was later raised toward $598 million. The group claims the trove includes corporate documents and technical blueprints related to the F-35 Joint Strike Fighter program. Lockheed Martin has not publicly confirmed any breach. Trusted security researchers have not verified the sample data. Iranian intelligence-linked actors are independently documented to exaggerate and to fold prior unrelated breaches and open-source material into current claims to amplify psychological reach. The 375 TB F-35 claim, to be direct about it, is widely assessed as overstated. Treat it as claim, not as confirmed event.
Incident two, March 26, 2026, the 28 engineers. This one is the part that should be getting more attention. The Handala persona itself (distinct from the APT Iran data-sale listing) published the names, photographs, employer details, and location information of 28 senior American engineers identified as working in Israel on defense programs that specifically included the F-35, the F-22, and the THAAD missile defense system (NetCrook). The publication was accompanied by threatening phone calls to the engineers themselves and by language stating that Handala’s “friends in the United States” would pay visits to their families. A 48 hour ultimatum was attached. This is doxxing as threat-to-life, executed by a MOIS-operated persona, against named Americans working on three specific weapons programs, and it landed the same week the group was absorbing FBI domain seizures and a $10 million bounty.
Whether or not the 375 TB claim is real, the doxxing of 28 named American defense personnel by an actor with confirmed state ties is not a hypothetical. This is where the silence becomes hard to explain. A MOIS front is publishing kill lists of U.S. defense engineers, tying them to F-35, F-22, and THAAD by name, and the U.S. cybersecurity ecosystem is treating it as a Tuesday.
On March 27, 2026, the same Handala Hack Team published a tranche of more than 300 emails, photographs, and a copy of Patel’s resume, all stolen from the personal Gmail account of FBI Director Kash Patel (CNN; CBS News; NBC News; Axios; PBS NewsHour; Al Jazeera; CNBC). A U.S. official familiar with the matter confirmed the authenticity of at least some of the published images. The FBI subsequently acknowledged the breach, and the State Department reissued the $10 million Rewards for Justice offer that had been announced eight days earlier against Handala.
The federal government’s framing was careful and accurate as far as it goes: the compromised material is historical, dates from roughly 2011 to 2022, came from a personal Gmail account rather than any FBI system, and contains no current operational information. Patel’s official inbox was not breached. The initial access vector is the part that should embarrass the discourse. Handala did not burn a zero-day. They did not spear-phish a cabinet-level official. They used credential stuffing against credentials harvested from older public breach databases, the same technique a teenager with a laptop uses to break into gaming accounts. The sitting Director of the Federal Bureau of Investigation had a password reused or reusable across a pre-government breach corpus, and a hostile state ran the same brute-force workflow that every fraud team in the world tracks hourly, and it worked.
The framing is also a deflection. The point of the operation is not to extract operational secrets. The point is to demonstrate that an Iranian intelligence-linked group can read the personal correspondence of the sitting Director of the Federal Bureau of Investigation and publish it on the open web with attribution and without consequence. This is an explicit retaliation event. It landed eight days after the FBI seized four Handala domains, in the same month the State Department put a $10 million bounty on the group, and three weeks after the U.S. and Israel killed the unit’s leadership in the Minab window. Handala, for its part, answered the $10 million FBI bounty with a public $50 million counter-bounty aimed at Trump and Netanyahu. The March 25 dump of 14 GB from former Mossad Director Tamir Pardo’s personal Gmail, claiming to expose assassination project details and Stuxnet oversight, was published by the same persona two days before the Patel release as a “proof of concept.” The sequencing is the message, and it has been received internally even if it has not been said publicly.
The Patel personal Gmail story consumed almost all of the public oxygen in March, but it was not the most consequential FBI compromise of the quarter. That distinction belongs to a separate incident that received a fraction of the coverage and arguably represents a bigger problem.
The Federal Bureau of Investigation detected abnormal activity on an internal network on February 17, 2026, opened an inquiry, and on March 23, 2026 the Department of Justice formally classified the intrusion as a “major incident” under the 2014 federal law that requires escalated reporting and remediation (Bloomberg; Insurance Journal; GovInfoSecurity). The affected system is described in the public reporting as the network the Bureau uses to manage wiretaps and other surveillance operations, and it contains sensitive law enforcement data including electronic surveillance content and personally identifying information about subjects of FBI investigations (Bloomberg, “FBI Breach Exposes Secret Investigative Records to Intruders”).
The agencies’ notification to lawmakers described the threat actor’s tradecraft as “sophisticated,” and noted in particular that the attacker leveraged a commercial Internet Service Provider vendor’s infrastructure to bypass the FBI’s network security controls. That detail is the part the Bureau will have stayed up nights about. It means the access path was not a phishing email or a stolen laptop. It was an upstream telecommunications vendor whose infrastructure trust relationship with the FBI was successfully turned. That is the same architectural pattern as the SaaS supply chain pivots described elsewhere in this article, scaled up to the level of nation-state intelligence operations against a federal law enforcement system.
A historian’s question worth pausing on: which of the two FBI incidents in this quarter is the one a careful person would actually want to know more about? The Patel personal Gmail leak, with its photographs from 2011 and personal correspondence from before he held office? Or the breach of the system the Bureau uses to manage federal wiretaps and which holds PII on the subjects of active FBI investigations? The answer is obvious. The relative coverage of the two stories is also obvious, and the gap between those two facts is one of the cleanest examples in this entire notebook of the silence the article keeps returning to.
On March 31, 2026, an attacker hijacked the npm account of the lead maintainer of the Axios JavaScript HTTP client library, one of the most-downloaded packages in the entire JavaScript ecosystem at roughly 100 million weekly downloads, and published two malicious versions: 1.14.1 and 0.30.4 (Huntress; The Hacker News; Bloomberg; TechCrunch; Sophos; Microsoft Security). The malicious versions sat live on the npm registry for about two to three hours before being pulled. Inside that window, every CI pipeline, every developer workstation, and every cloud build that pulled the latest minor or patch range silently installed a hidden dependency that fetched and executed a cross-platform Remote Access Trojan.
On April 1, 2026, the Google Threat Intelligence Group publicly attributed the operation to UNC1069, a North Korea-nexus financially-motivated cluster (Google Cloud Blog; Axios). On April 2, the Axios lead maintainer Jason Saayman published a post-mortem describing what actually happened, and the tradecraft is the part that should be making everyone in the open source ecosystem rethink how trust works on a personal level.
The attacker did not exploit a CVE. The attacker built an organization. They impersonated the founder of a real company using a cloned identity and plausible outreach. They invited the maintainer into a real Slack workspace that had been carefully branded to look legitimate, with channel activity, linked social content, and fake member profiles posing as teammates and other open source maintainers. They moved the conversation to a Microsoft Teams meeting populated with what looked like multiple participants. By the time the attacker requested any action that touched the maintainer’s npm account, the social proof was overwhelming.
This is the highest-effort npm supply chain operation publicly disclosed since the 2024 XZ Utils backdoor, and it is qualitatively different. XZ was patient identity laundering across years. The Axios attack was patient identity laundering across weeks, with a fake Slack workspace and a fake Teams meeting standing in for years of GitHub commits. The bar to compromise a heavily-used open source maintainer just dropped from “infiltrate the project for two years” to “build a convincing Slack and host one Teams call.”
If you ran npm install against axios 1.14.1 or 0.30.4 in any environment, rotate every secret in that environment now and downgrade to 1.14.0 or 0.30.3. Microsoft Security, Sophos, Huntress, and Malwarebytes have all published detection guidance.
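The first step of that remediation can be partially automated. As a minimal sketch (the function name and lockfile fragment below are mine; only the compromised version numbers come from the reporting), a script can scan a project’s `package-lock.json` for any resolved copy of the affected versions, including transitive ones nested under other dependencies:

```typescript
// Sketch: scan a parsed package-lock.json (npm lockfile v2/v3 "packages"
// map) for the compromised axios versions named in the disclosure.
const COMPROMISED = new Set(["1.14.1", "0.30.4"]);

type Lockfile = {
  packages?: Record<string, { version?: string }>;
};

function findCompromisedAxios(lock: Lockfile): string[] {
  const hits: string[] = [];
  for (const [path, meta] of Object.entries(lock.packages ?? {})) {
    // Every installed copy of axios, direct or transitive, appears at a
    // path ending in "node_modules/axios".
    if (
      path.endsWith("node_modules/axios") &&
      meta.version &&
      COMPROMISED.has(meta.version)
    ) {
      hits.push(`${path}@${meta.version}`);
    }
  }
  return hits;
}

// Example: a minimal lockfile fragment with a bad transitive copy.
const sample: Lockfile = {
  packages: {
    "node_modules/axios": { version: "1.14.0" },
    "node_modules/some-sdk/node_modules/axios": { version: "1.14.1" },
  },
};
console.log(findCompromisedAxios(sample));
// -> ["node_modules/some-sdk/node_modules/axios@1.14.1"]
```

Detection is only the first step: any environment that ever installed a flagged copy still needs its secrets rotated, since the malicious versions executed at install time.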
Cisco took two distinct hits in roughly the same window. The first was a supply chain compromise: in March 2026, attackers used credentials stolen via the Trivy supply chain attack to breach Cisco’s internal development environment and clone more than 300 GitHub repositories (BleepingComputer; SocRadar; TechCrunch). The stolen code reportedly included source for Cisco’s AI-powered products and customer code belonging to banks, business process outsourcing firms, and U.S. government agencies.
The second was financial. On March 31, 2026, the same day the Axios story broke, ShinyHunters published an extortion post claiming theft of over 3 million Salesforce records from Cisco containing personal data, alongside GitHub repository contents, AWS bucket data, and other internal corporate assets (Hackread; Cybernews; SC Media). The deadline for payment was set for April 3, 2026. Cisco has not publicly confirmed the ShinyHunters claim. The Trivy-linked source code theft is on firmer reporting ground.
Two breaches in two weeks, one supply chain and one SaaS, both targeting one of the most security-mature vendors in the industry. If Cisco can be hit twice in a month through these vectors, whether your own organization has been hit through them is mostly a question of whether anyone is bothering to look.
Of every incident in this chronicle, Mercor is the one most people outside the AI industry have never heard of, and it is also the one most likely to turn out to matter most. Mercor is a two-year-old AI recruiting and training-data startup valued at approximately $10 billion following a $350 million Series C round led by Felicis Ventures in October 2025. Its customers include OpenAI, Anthropic, and Meta (Fortune; TechCrunch; Cybernews). Which means one 2026 supply chain compromise against Mercor touches the training data pipelines of three of the largest frontier AI labs in the world simultaneously.
The attack chain is the cleanest example in this entire article of how upstream open source trust gets turned into downstream enterprise extortion. A threat actor tracked as TeamPCP compromised LiteLLM, a widely-used open source library that developers use to plug their applications into AI services and which is downloaded millions of times per day, and planted credential-harvesting malware inside it (The Register; SecurityWeek; BankInfoSecurity). The malicious code was live for hours before being identified and removed. Mercor has said it was “one of thousands of companies affected” by the LiteLLM compromise. What makes Mercor the headline victim is not that it was uniquely vulnerable. It is that the harvested credentials led into an environment holding the AI industry’s single most sensitive shared asset: training data, labeling protocols, and data selection criteria that the three largest frontier labs have each spent years and billions of dollars developing.
Lapsus$ subsequently claimed responsibility for the downstream Mercor breach on its leak site and published samples (TechCrunch; PureWL; Cybernews). The claimed haul is approximately 4 terabytes of data, broken down as roughly 211 GB of database records, 939 GB of source code, and 3 TB of storage including candidate profiles, personally identifiable information, employer data, API keys, internal Slack dumps, ticketing system exports, and, most disturbingly, videos purportedly showing conversations between Mercor’s AI systems and the contractors those systems were training. That last category is the part that should be getting more attention. It is not just data about training data. It is footage of the training process itself.
Note the cluster convergence here. Lapsus$ is one of the three legs of the Scattered LAPSUS$ Hunters (SLH) alliance described earlier in this article. The Mercor breach, the Rockstar Games breach via Anodot, the Cisco Salesforce extortion on March 31, and the broader ~400-organization Salesforce mega-campaign are all, in varying combinations, operations by the same new apex-predator criminal alliance. The pattern is no longer that a handful of unrelated groups happened to have a big quarter. It is that one newly-merged criminal collective is running an industrial-scale SaaS-and-supply-chain extortion campaign across every sector of the global enterprise, and Mercor is the AI-industry-specific node of that campaign.
Business impact to date. Meta has paused its contracts with Mercor indefinitely. Five Mercor contractors have filed lawsuits alleging personal data exposure. Other large customers are reportedly reassessing the relationship (TechCrunch; Strike Graph). For a two-year-old company at a $10 billion valuation whose entire business model is being the trusted data middleware between contractors and frontier AI labs, losing the trust of one of those labs is an existential event. Losing the trust of all three would be the end.
The structural observation worth pausing on. The global frontier AI industry, in 2026, is effectively running on a shared data pipeline provided by a small number of vendors most of the public has never heard of. Mercor is one of those vendors. Its compromise demonstrates that the AI labs are not in fact the perimeter that matters. The perimeter that matters is the identity and integrity of every upstream dependency in the data pipeline, and most of those dependencies are either two-year-old startups or open source libraries maintained by a handful of developers. This is the same structural problem the rest of this article keeps circling: the modern enterprise no longer has a defensible boundary, only a chain of trust relationships, any of which can be turned. The AI industry inherited that same architecture and is learning the same lesson in real time.
On March 21, 2026, a threat actor using the handle “rose87168” began offering for sale 6 million records extracted from Oracle Cloud, with claims that more than 140,000 Oracle Cloud tenants were potentially affected (eSecurityPlanet; Cybersecurity Dive; FINRA). The breach was tied to CVE-2021-35587, a vulnerability in Oracle Access Manager and the OpenSSO Agent component of Oracle Fusion Middleware, used to compromise Oracle’s Single Sign-On and LDAP systems.
Oracle’s initial public response denied that its main cloud platform had been breached. Oracle subsequently acknowledged that an unauthorized party had accessed its legacy cloud environment, characterizing the affected systems as “obsolete servers” (CSO Online). The HIPAA Journal later reported that up to 80 hospitals were potentially affected by data exposure tied to the same incident, and Parexel International confirmed that a security flaw in Oracle’s cloud infrastructure had affected its Oracle OCI E-Business Suite environment (HIPAA Journal).
The pattern here is the one we used to call “shadow legacy” and have apparently stopped warning about. Hyperscale cloud providers carry quiet inheritances of older platforms that customers were moved off of years ago, but whose operational shells were never actually decommissioned, and the line between “main cloud” and “legacy cloud” is meaningful in marketing copy and meaningless to an attacker who finds a working credential.
On the financially-motivated side of the wave, ShinyHunters claimed in early April 2026 to have breached Rockstar Games. Rockstar publicly confirmed a “third-party data breach” and characterized the accessed information as “limited” and “non-material” (Engadget; Tom’s Hardware). ShinyHunters tells a different story.
ShinyHunters did not directly compromise Rockstar’s internal infrastructure. They compromised Anodot, a third-party SaaS platform Rockstar uses for cloud cost monitoring, lifted authentication tokens from inside Anodot’s environment, and used those tokens to authenticate into Rockstar’s Snowflake instance (Hackread; BleepingComputer; TechRadar; Kotaku; PC Gamer). The ransom note posted to ShinyHunters’ leak channel reads in part: “Rockstar Games, your Snowflake instances were compromised thanks to Anodot.com. Pay or leak. Final warning to reach out by 14 Apr 2026.”
The deadline is two days from the date of this article. Whatever ends up published, the structure of the attack is the part worth remembering: a small SaaS analytics vendor most people have never heard of became the access path into one of the most valuable creative IP environments on the planet, weeks before the most anticipated game launch in industry history.
This is the same playbook that produced the 2024 Snowflake wave. It is not new. It has just been refined and aimed at higher-value targets, and it is going to keep working until the SaaS-to-data-warehouse trust chain gets re-architected end to end.
On April 6, 2026, a cyberattack on aviation IT systems used by major European hubs took down check-in, baggage handling, and boarding at Heathrow, Charles-de-Gaulle, Frankfurt, and Copenhagen simultaneously (The Traveler; National Today; VisaHQ; BlackFog on Collins Aerospace). More than 1,600 flights across the continent were cancelled or delayed on April 6 alone, with at least 13 cancellations at Heathrow by the afternoon of April 4 as the disruption ramped up. Staff reverted to manual check-in and paper boarding passes, a procedure most ground crews under thirty have never been trained on.
The vector reportedly traces back to Collins Aerospace’s MUSE (Multi User System Environment) platform, the shared check-in and boarding software used across many European airline operations, which had already absorbed a separate ransomware attack in September 2025 that knocked Collins offline and produced widespread Air France and KLM customer data exposure. The 2026 incident is, in effect, the second time in roughly six months that the same upstream aviation software supplier has produced a continent-scale outage.
The wider context, from the European Union Aviation Safety Agency: EASA documented a 600 percent spike in aviation cyberattacks between 2024 and 2025, with airports worldwide absorbing roughly 1,000 cyberattacks per month by the end of that period. The April 2026 event is not an outlier in that environment. It is the largest visible surface of a much wider, much quieter trend in which the entire global aviation IT stack is starting to look load-bearing on a small number of single-vendor dependencies that no individual airport security team can defend in isolation.
I am putting this section in the middle of the article rather than at the top because the sourcing is uneven and the verification status is unsettled, but in the long view this is the incident a historian will most likely circle and underline.
Around early February 2026, a hacker operating under the alias FlamingChina began posting samples on Telegram of what they claimed was a multi-petabyte data set exfiltrated from the National Supercomputing Center (NSCC) in Tianjin, one of the central Chinese state computing facilities, which provides infrastructure services to more than 6,000 clients including advanced science institutions and Chinese defense agencies. By April 8, 2026, mainstream Western press had picked up the story and reported the same basic claim from the actor: approximately 10 petabytes of sensitive data, equivalent to roughly 10,240 terabytes, extracted over a six-month period through a compromised VPN domain into NSCC’s environment, with a botnet quietly siphoning data out without detection (CNN; Tom’s Hardware; TechRadar; SC Media; Security Magazine; BGR; Tech Startups; Vision Times; Computing.co.uk; Security Affairs).
The samples that have been circulated reportedly include documents marked “secret” in Chinese, along with technical files, animated simulations, and renderings of defense equipment including bombs and missiles. The named provenance of some of the material includes the Aviation Industry Corporation of China and the National University of Defense Technology. CNN spoke with multiple cybersecurity experts who reviewed the samples and assessed them as appearing genuine, while noting that independent verification of the full dataset has not been possible. Some researchers have suggested the 10 petabyte figure may be inflated for commercial leverage on BreachForums, and the actor is reportedly offering a limited preview for thousands of dollars and full access for hundreds of thousands, payable in cryptocurrency.
Take a moment with the scale. Ten petabytes is, in rough terms, equivalent to two billion photographs, or the entire textual content of the public web several times over. It is the contents of roughly ten thousand decent laptops. Even if the actual exfiltration turns out to be one tenth of the claim, the resulting one petabyte event is still in a category by itself, larger than essentially any single named breach in the public history of cybersecurity. And the target is not a marketing database. It is a state supercomputing facility hosting work for Chinese defense academia and aviation industry programs.
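For readers who want to check those comparisons, the arithmetic is straightforward (the ~5 MB-per-photo and ~1 TB-per-laptop figures are my own working assumptions, not from the reporting):

```typescript
// Rough sanity check of the scale comparisons above.
const MB_PER_PB = 1024 ** 3;      // 1 PB = 1024^3 MB
const claimedMB = 10 * MB_PER_PB; // the claimed 10 PB, expressed in MB
const photos = claimedMB / 5;     // assuming ~5 MB per photograph
const laptops = (10 * 1024) / 1;  // 10,240 TB spread across ~1 TB laptops

console.log(photos);  // 2147483648, i.e. on the order of two billion photos
console.log(laptops); // 10240 laptops
```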
A few things make this incident historically interesting beyond the size. First, it is one of the very rare cases of a major Chinese state computing facility being publicly breached and looted from outside. The historical asymmetry of major reported breaches has run heavily in the other direction, with Chinese state actors as the named operators against Western targets. If the FlamingChina claims hold up even partially, the symmetry has shifted. Second, the access vector reported, a compromised VPN domain followed by a long-dwell botnet quietly exfiltrating over six months without detection, is the same pattern Western incident response teams describe in their worst nation-state engagements. The defenders’ problems and the attackers’ problems are starting to look like the same problem. Third, and most quietly, the U.S. mainstream press picked the story up for a single news cycle in early April and then mostly let it go. A potential record-breaking exfiltration event from a Chinese state supercomputer is the sort of thing that, in any prior decade, would have produced sustained reporting for weeks. In 2026 it produced a few articles, a flurry of trade press coverage, and then quiet.
The Chinese government has not publicly acknowledged the incident. The samples remain in circulation. The claim remains unverified at full scope. The historical importance remains, in the meantime, suspended in exactly the kind of partial information state where most genuinely unprecedented events live for a while before they finally get named.
It would be a mistake to read the 2026 wave as a bolt from the blue. It is more useful to read it as the visible surface of a longer pre-positioning campaign that has been quietly running underneath the public-facing incidents for years. Two Chinese state-aligned actor clusters, Volt Typhoon and Salt Typhoon, are the relevant background.
Volt Typhoon, attributed to the People’s Republic of China and active since at least mid-2021, has been documented inside U.S. critical infrastructure across communications, manufacturing, utility, transportation, construction, maritime, government, IT, and education sectors (CISA AA24-038A; Microsoft Security). The U.S. Intelligence Community has assessed publicly that Volt Typhoon’s targeting carries limited espionage value and is instead consistent with prepositioning to disrupt U.S. infrastructure in the event of a future crisis, particularly in Guam and near U.S. military bases in the Pacific. The International Institute for Strategic Studies published “Volt Typhoon’s long shadow” in January 2026, noting that researchers warn the group remains embedded in U.S. utilities and that some compromises may never be fully discovered (IISS; The Record).
Salt Typhoon is the parallel telecom-focused cluster, attributed to China’s Ministry of State Security, responsible for the high-profile compromises of multiple major U.S. telecommunications carriers that surfaced in late 2024 and continued through 2025, including reported access to lawful intercept systems used by U.S. law enforcement (Congress.gov; Wikipedia overview). Both groups are still active in 2026.
The reason these two names belong in this article, even though their public disclosures predate the 2026 incidents catalogued above, is that they describe the baseline inside which the 2026 wave is happening. The named 2026 incidents are not the entire picture. They are the visible surface. Underneath them, in U.S. utilities and telecommunications infrastructure, there are pre-positioned implants that the relevant federal agencies have publicly stated may never be fully evicted. The historian sitting in 2050 reading this period is going to want to know that the surface events of the first hundred days of 2026 occurred against a background in which the deeper infrastructure had already been quietly compromised for years. That is the kind of context that gets lost when each incident is reported as if it were the first.
Honda’s 2026 has been a slow drip rather than a single event. The reporting describes a sequence of distinct incidents: API flaws in Honda’s e-commerce platform that exposed customer data, dealer panel data, and internal documents (BleepingComputer; SecurityWeek); a password reset flow exploit that exposed additional data (Cybersecurity Tribe); and a Clawson Honda dealership data breach claimed by the PLAY ransomware group that exposed names, Social Security numbers, addresses, driver’s license data, and dates of birth, with notification letters going out as recently as April (Claim Depot).
None of these are individually catastrophic. Stacked together they tell a familiar story about a manufacturing giant whose attack surface has outgrown its security maturity, and they belong in the wave count.
The named incidents above are just the ones that broke through. The full first-quarter 2026 list is much longer. Brief Defense, PKWARE, Cybersecurity News, ACI Learning, and CSIS are all maintaining 2026 incident timelines, and the pattern is consistent.
* January 2026: Illinois and Minnesota state systems exposed personal data on nearly one million people; the Match family of dating apps was breached by ShinyHunters; Eurail confirmed unauthorized access; researchers found a 149-million-record database publicly exposed via cloud misconfiguration; Microsoft January Patch Tuesday shipped 115 fixes including the Office bug APT28 began exploiting within days; Nike investigated a possible cyber attack after WorldLeaks claimed 1.4 TB of internal company data on January 24; Red Hat suffered a private GitHub and GitLab compromise by the Crimson Collective, with roughly 570 GB exfiltrated from over 28,000 internal repositories including approximately 800 Customer Engagement Reports containing infrastructure details and credentials for large enterprise clients; Pickett USA breach exposed sensitive engineering data linked to U.S. utilities; ShinyHunters / SLH vishing campaigns targeting enterprise SSO environments including Okta surged in early-to-mid January.
* February 2026: BridgePay, a payments platform serving city governments, was hit by ransomware; Odido disclosed unauthorized access affecting up to 6.2 million customers; Change Healthcare, a UnitedHealth subsidiary, was hit again, this time by AlphV/BlackCat; Cisco disclosed that a critical Catalyst SD-WAN vulnerability (CVE-2026-20127, CVSS 10.0) had been actively exploited since 2023; APT28 was observed weaponizing CVE-2026-21509 against Ukrainian and EU government targets; the FBI detected abnormal activity (Feb 17) on the internal network it uses to manage wiretaps and surveillance, eventually classified as a “major incident” on March 23; the 2026 Winter Olympics opened in Milan and Cortina d’Ampezzo and pro-Russian DDoS group NoName057(16) began hitting Italian Olympic infrastructure, several national Olympic committees (Lithuania, Poland, Spain), the Cortina d’Ampezzo tourism site, and Milan Malpensa Airport; University of Mississippi Medical Center closed clinics following a ransomware attack and reverted to manual patient care; France’s National Bank Account Registry (FICOBA) was hit through credential weakness exploitation; Iron Mountain, Panera Bread, SmarterTools, Step Finance, and Advantest Corporation all absorbed publicly disclosed incidents.
* March 2026: Stryker wiper event (March 11); Microsoft published the “Help on the line” report on a Teams-vishing initial access pattern (March 16); Oracle Cloud “rose87168” listing (March 21); Lockheed Martin / 28-engineer doxxing claims (March 23); European Commission Europa cloud platform breached (March 24); Kash Patel personal email dump (March 27); Cisco Trivy supply chain breach surfaces; TeamPCP compromises the LiteLLM open source library in a supply chain attack that propagates to “thousands of companies” including Mercor, the $10B AI training-data vendor whose customers include OpenAI, Anthropic, and Meta; Axios npm hijack (March 31); ShinyHunters Cisco Salesforce extortion post (March 31); Mercor discloses the security incident publicly on March 31 / April 1; FlamingChina samples from the Tianjin NSCC breach circulate widely on Telegram and BreachForums following early-February initial postings.
* April 2026 so far: Google Threat Intelligence Group attributes Axios npm compromise to UNC1069 / North Korea (April 1); Axios maintainer post-mortem published (April 2); Fortune confirms the Mercor breach publicly (April 2), with Lapsus$ claiming 4 TB of exfiltrated data including ~211 GB database records, ~939 GB source code, and ~3 TB storage covering candidate PII, employer data, API keys, Slack dumps, and videos of Mercor AI systems talking to contractors; Meta pauses all AI data training contracts with Mercor indefinitely, five Mercor contractors file lawsuits over personal data exposure; DOJ confirms the FBI internal “major incident” classification publicly (early April); a continent-wide aviation IT attack on April 6 cripples check-in, baggage, and boarding at Heathrow, Charles-de-Gaulle, Frankfurt, and Copenhagen, cancelling or delaying more than 1,600 flights in a single day, traced to the Collins Aerospace MUSE platform that had already been hit in September 2025; on April 7, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convene an emergency in-person meeting in Washington with the CEOs of Goldman Sachs, Citigroup, Morgan Stanley, Bank of America, and Wells Fargo to brief them on the cyber risks posed by Anthropic’s Mythos AI model, which has identified thousands of previously-unknown zero-day vulnerabilities in every major operating system and web browser; Rockstar Games confirms third-party breach via Anodot/Snowflake (early April); Snowflake itself confirms “unusual activity” affecting more than a dozen customer accounts linked to the Anodot integration; ShinyHunters sets April 14 ransom deadline and tells reporters they had Anodot access “for some time” and had also tried (and failed) to breach Salesforce directly; FlamingChina supercomputer claims surface in mainstream Western press (CNN, Tom’s Hardware, TechRadar, BGR, around April 8-10); National Public Data successor breach surfaces with roughly 2.9 billion records of personal information sold for 3.5 million dollars on the dark web; Yale New Haven Health System discloses breach affecting 5.5 million patients; HIPAA Journal links up to 80 hospitals to the Oracle Cloud incident; March 2026 ransomware activity totals 672 incidents in a single month, with Qilin, Akira, and DragonForce alone accounting for roughly 40 percent.
By the time you finish counting the named incidents above, you are well past forty, and the count does not include the 300 to 400 organizations swept up in the SLH Salesforce mega-campaign or the roughly 1.5 billion Salesforce records estimated stolen across that single operation. Nor does it include the 672 ransomware incidents that ransomware tracking firms recorded in March 2026 alone, with Qilin, Akira, and DragonForce accounting for about 40 percent of that single-month total. Nor does it include the dozens of smaller school district, municipal, and healthcare ransomware events that have become so routine they no longer make national news. The 2025 baseline already showed publicly disclosed ransomware attacks rising 49% year-over-year to 1,174 incidents, with healthcare absorbing 22% of the total. The 2026 first-quarter pace, on the trajectory above, is comfortably on track to make 2025 look quiet.
This is the part of the notebook where the historian’s voice has to be honest about what it does not fully understand. The events above are real. The volume is real. The pattern is real. The relative quietness around them in mainstream Western public discourse is also real, and it is genuinely puzzling rather than obviously sinister. Several plausible explanations sit on the table at once, and the honest move is to lay them out without insisting on any of them.
One possibility is that attribution to a state actor has become professionally expensive. Calling a Handala wiper event an Iranian intelligence-linked destructive operation against a U.S. medical device company, or calling the FlamingChina supercomputer leak what it might be, takes on political weight that practitioners and vendors increasingly prefer to avoid. Analysis gets softened to “threat actor” or “sophisticated adversary,” and the geopolitical reading gets edited out without anyone deciding to edit it out. That softening is not a conspiracy. It is the cumulative effect of many small commercial choices that each individually seem reasonable.
A second possibility is that the SaaS supply chain story is uncomfortable for the security industry to dwell on, because the industry sells into it. Saying out loud that the modern enterprise no longer has a defensible perimeter, only a long chain of vendor trust relationships that can be turned at any link, is also saying that the security stack the industry shipped last quarter cannot stop the attacks the industry is supposed to be talking about this quarter. That is a hard public message to deliver from inside a vendor.
A third possibility is much simpler and possibly the most powerful. The news cycle has trained the public to bounce off cyber stories. The audience has already absorbed Equifax, OPM, Yahoo, SolarWinds, NPD, and Snowflake, and the marginal shock of “another one” has flattened. When the marginal shock is flat, even genuinely unprecedented events struggle to land. Practitioners know this, so they save their breath. The silence may be less an act of suppression than an act of fatigue.
A fourth possibility is the one this notebook keeps circling back to. The parallel acceleration on the AI side of the industry is awkward to discuss in the same paragraph as the offensive incidents, because every cybersecurity vendor is currently racing to ship “AI-powered” defense. It is commercially uncomfortable to put the two lists on the same page, even if no one in particular is forbidding it. The absence of that pairing is, at minimum, a strange thing to notice in the historical record.
A historian writing in 2050 about the first hundred days of 2026 will probably find all four of these explanations partially true and none of them fully sufficient. What that historian will almost certainly notice, more than any single explanation, is the gap itself, and more specifically the layered nature of the gap. The April 7 meeting between the Treasury Secretary, the Fed Chair, and the CEOs of the largest U.S. banks proves something crucial about that layering. The silence has not fully held at the highest levels of the U.S. government. Bessent and Powell are clearly not in the dark, and neither are the people they briefed. What has held is the silence in the public discourse, in the mainstream press, in the day-to-day conversations practitioners have with their boards and their customers. The information is moving in private. It is just barely moving in public. A period that, on the evidence, looks unprecedented in the history of computing security passed through real-time public discourse without producing the kind of sustained, coherent, named conversation the events seem to deserve, and yet behind closed doors at the highest levels of financial regulation, the conversation is clearly happening. That asymmetry is the most interesting object in this entire notebook.
If you work in this field and the last hundred days have felt strange to you, you are not imagining it. Something genuinely unusual is happening, and the unusualness of how quietly it is happening may, in the long view, be the most historically interesting layer of all. Naming the gap, even gently, is a small contribution to making sure the period eventually gets the documentation it deserves.
...
Read the original on substack.com »
Today, we are expanding our spam policies to address a deceptive practice known as "back button hijacking," which will become an explicit violation of the "malicious practices" section of our spam policies, potentially leading to spam actions.
When a user clicks the "back" button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation. It occurs when a site interferes with a user's browser navigation and prevents them from using their back button to immediately get back to the page they came from. Instead, users might be sent to pages they never visited before, be presented with unsolicited recommendations or ads, or otherwise be prevented from browsing the web normally.
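As a rough mental model (not code from the announcement), the typical hijack pushes an extra entry onto the session history when the page loads, so the first press of the back button lands somewhere the site chooses instead of the page the user came from. A minimal simulation of that history stack, with invented names standing in for the real History API:

```typescript
// Toy model of a browser session history. The real History API
// (history.pushState and the popstate event) behaves analogously;
// the class and method names here are illustrative only.
class SessionHistory {
  private stack: string[] = [];
  private index = -1;

  // A normal navigation truncates any forward entries and appends.
  navigate(url: string): void {
    this.stack = this.stack.slice(0, this.index + 1);
    this.stack.push(url);
    this.index = this.stack.length - 1;
  }

  // pushState adds a history entry without loading a new page.
  pushState(url: string): void {
    this.navigate(url);
  }

  // Pressing Back moves one entry toward the start of the stack.
  back(): string {
    if (this.index > 0) this.index--;
    return this.stack[this.index];
  }

  current(): string {
    return this.stack[this.index];
  }
}

const h = new SessionHistory();
h.navigate("https://search.example/results"); // the page the user came from
h.navigate("https://site.example/article");   // the page the user clicked

// Hijack: on load, the site silently pushes an extra entry.
h.pushState("https://site.example/ad-interstitial");

// The user presses Back expecting the search results, but the first
// press only unwinds the injected entry, keeping them on the site.
console.log(h.back()); // "https://site.example/article"
```

A second Back press is needed to actually reach the referrer, which is exactly the broken expectation the policy describes.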
Why are we taking action?
We believe that the user experience comes first. Back button hijacking interferes with the browser’s functionality, breaks the expected user journey, and results in user frustration. People report feeling manipulated and eventually less willing to visit unfamiliar sites. As we’ve stated before, inserting deceptive or manipulative pages into a user’s browser history has always been against our Google Search Essentials.
We've seen a rise in this type of behavior, which is why we're designating it an explicit violation of our malicious practices policy, which says:
Malicious practices create a mismatch between user expectations and the actual outcome,
leading to a negative and deceptive user experience, or compromised user security or privacy.
Pages that engage in back button hijacking may be subject to manual spam actions
or automated demotions, which can impact the site’s performance in Google Search results. To give site owners time to make any needed changes, we’re publishing this policy two months in advance of enforcement on June 15, 2026.
What should site owners do?
Ensure you are not doing anything to interfere with a user’s ability to navigate their browser history.
If you’re currently using any script or technique that inserts or replaces deceptive or manipulative pages into a user’s browser history that prevents them from using their back button to immediately get back to the page they came from, you are expected to remove or disable it.
Notably, some instances of back button hijacking may originate from a site's included libraries or advertising platform. We encourage site owners to thoroughly review their technical implementation and remove or disable any code, imports, or configuration responsible for back button hijacking, to ensure a helpful and non-deceptive experience for users.
If your site has been impacted by a manual action and you have fixed the issue, you can always let us know by submitting a reconsideration request
in Search Console. For questions or feedback, feel free to reach out on social media or discuss in our help community.
...
Read the original on developers.google.com »
Cloudflare has a vast API surface. We have over 100 products, and nearly 3,000 HTTP API operations.
Increasingly, agents are the primary customer of our APIs. Developers bring their coding agents to build and deploy applications, agents, and platforms to Cloudflare, configure their account, and query our APIs for analytics and logs.
We want to make every Cloudflare product available in all of the ways agents need. For example, we now make Cloudflare’s entire API available in a single Code Mode MCP server that uses less than 1,000 tokens. There’s a lot more surface area to cover, though: CLI commands. Workers Bindings — including APIs for local development and testing. SDKs across multiple languages. Our configuration file. Terraform. Developer docs. API docs and OpenAPI schemas. Agent Skills.
Today, many of our products aren’t available across every one of these interfaces. This is particularly true of our CLI — Wrangler. Many Cloudflare products have no CLI commands in Wrangler. And agents love CLIs.
So we've been rebuilding the Wrangler CLI to make it the CLI for all of Cloudflare. It provides commands for all Cloudflare products, and lets you configure them together using infrastructure-as-code.
Today we’re sharing an early version of what the next version of Wrangler will look like as a technical preview. It’s very early, but we get the best feedback when we work in public.
You can try the Technical Preview today by running npx cf. Or you can install it globally by running npm install -g cf.
Right now, cf provides commands for just a small subset of Cloudflare products. We’re already testing a version of cf that supports the entirety of the Cloudflare API surface — and we will be intentionally reviewing and tuning the commands for each product, to have output that is ergonomic for both agents and humans. To be clear, this Technical Preview is just a small piece of the future Wrangler CLI. Over the coming months we will bring this together with the parts of Wrangler you know and love.
To build this in a way that keeps in sync with the rapid pace of product development at Cloudflare, we had to create a new system that allows us to generate commands, configuration, binding APIs, and more.
We already generate the Cloudflare API SDKs, Terraform provider, and Code Mode MCP server based on the OpenAPI schema for Cloudflare API. But updating our CLI, Workers Bindings, wrangler.jsonc configuration, Agent Skills, dashboard and docs is still a manual process. This was already error-prone, required too much back and forth, and wouldn’t scale to support the whole Cloudflare API in the next version of our CLI.
To do this, we needed more than could be expressed in an OpenAPI schema. OpenAPI schemas describe REST APIs, but we have interactive CLI commands that involve multiple actions that combine both local development and API requests, Workers bindings expressed as RPC APIs, along with Agent Skills and documentation that ties this all together.
We write a lot of TypeScript at Cloudflare. It's the lingua franca of software engineering. And we keep finding that it just works better to express APIs in TypeScript — as we do with Cap'n Web, Code Mode, and the RPC system built into the Workers platform.
So we introduced a new TypeScript schema that can define the full scope of APIs, CLI commands and arguments, and context needed to generate any interface. The schema format is "just" a set of TypeScript types with conventions, linting, and guardrails to ensure consistency. But because it is our own format, it can easily be adapted to support any interface we need, today or in the future, while still being able to generate an OpenAPI schema.
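As a hypothetical sketch of what such a TypeScript-first definition might look like (the type and field names below are invented for illustration, not Cloudflare's actual schema format):

```typescript
// Illustrative guess at a TypeScript-first API schema: one resource
// definition from which a CLI command, an SDK method, or an OpenAPI
// path could be generated. None of these names are Cloudflare's.
interface Operation {
  verb: "get" | "list" | "create" | "update" | "delete";
  httpMethod: "GET" | "POST" | "PUT" | "DELETE";
  path: string; // REST path; also used to derive CLI arguments
  description: string;
}

interface ResourceSchema {
  resource: string; // machine name, e.g. "kv_namespace"
  cliNoun: string;  // CLI noun, e.g. `cf kv namespace get`
  operations: Operation[];
}

const kvNamespace: ResourceSchema = {
  resource: "kv_namespace",
  cliNoun: "kv namespace",
  operations: [
    {
      verb: "get",
      httpMethod: "GET",
      path: "/accounts/{account_id}/storage/kv/namespaces/{namespace_id}",
      description: "Get a KV namespace",
    },
  ],
};

// One schema, many outputs: here, a minimal OpenAPI paths object.
function toOpenApiPaths(schema: ResourceSchema): Record<string, object> {
  const paths: Record<string, object> = {};
  for (const op of schema.operations) {
    paths[op.path] = {
      [op.httpMethod.toLowerCase()]: { summary: op.description },
    };
  }
  return paths;
}

console.log(Object.keys(toOpenApiPaths(kvNamespace)));
```

The point of owning the format is visible even in this toy: adding a new output target (a CLI command table, an Agent Skill) is just another generator function over the same definitions.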
To date most of our focus has been at this layer — building the machine we needed, so that we can now start building the CLI and other interfaces we've wanted to provide for years. This lets us start to dream bigger about what we could standardize across Cloudflare and make better for Agents — especially when it comes to context engineering our CLI.
Agents expect CLIs to be consistent. If one command uses info as the syntax for getting information about a resource, and another uses get, the agent will expect one and call a non-existent command for the other. In a large engineering org of hundreds or thousands of people, and with many products, manually enforcing consistency through reviews is Swiss cheese. And you can enforce it at the CLI layer, but then naming differs between the CLI, REST API and SDKs, making the problem arguably worse.
One of the first things we've done is to start creating rules and guardrails, enforced at the schema layer. It's always get, never info. Always --force, never --skip-confirmations. Always --json, never --format, and always supported across commands.
Wrangler CLI is also fairly unique — it provides commands and configuration that can work with both simulated local resources and remote resources, like D1 databases, R2 storage buckets, and KV namespaces. This means consistent defaults matter even more. If an agent thinks it's modifying a remote database but is actually adding records to a local database, while the developer is using remote bindings to develop locally against a remote database, the agent won't understand why the newly added records aren't showing up when it makes a request to the local dev server. Consistent defaults, along with output that clearly signals whether commands are applied to remote or local resources, ensure agents have explicit guidance.
Today we are also releasing Local Explorer, a new feature available in open beta in both Wrangler and the Cloudflare Vite plugin.
Local Explorer lets you introspect the simulated resources that your Worker uses when you are developing locally, including KV, R2, D1, Durable Objects and Workflows. The same things you can do via the Cloudflare API and Dashboard with each of these, you can also do entirely locally, powered by the same underlying API structure.
For years we’ve made a bet on fully local development — not just for Cloudflare Workers, but for the entire platform. When you use D1, even though D1 is a hosted, serverless database product, you can run your database and communicate with it via bindings entirely locally, without any extra setup or tooling. Via Miniflare, our local development platform emulator, the Workers runtime provides the exact same APIs in local dev as in production, and uses a local SQLite database to provide the same functionality. This makes it easy to write and run tests that run fast, without the need for network access, and work offline.
But until now, working out what data was stored locally required you to reverse-engineer and introspect the contents of the .wrangler/state directory, or install third-party tools.
Now whenever you run an app with Wrangler CLI or the Cloudflare Vite plugin, you will be prompted to open the local explorer (keyboard shortcut e). This provides you with a simple, local interface to see what bindings your Worker currently has attached, and what data is stored against them.
When you build using Agents, Local Explorer is a great way to understand what the agent is doing with data, making the local development cycle much more interactive. You can turn to Local Explorer anytime you need to verify a schema, seed some test records, or just start over and DROP TABLE.
Our goal here is to provide a mirror of the Cloudflare API that only modifies local data, so that all of your local resources are available via the same APIs that you use remotely. And by making the API shape match across local and remote, when you run CLI commands in the upcoming version of the CLI and pass a --local flag, the commands just work. The only difference is that the command makes a request to this local mirror of the Cloudflare API instead.
Starting today, this API is available at /cdn-cgi/explorer/api on any Wrangler- or Vite-plugin-powered application. By pointing your agent at this address, it will find an OpenAPI specification it can use to manage your local resources for you, just by you talking to your agent.
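An agent (or a script) can discover what it is allowed to do locally by reading that spec. A sketch of the consuming side, assuming the endpoint returns a standard OpenAPI document; the sample document contents and the dev-server port below are invented, only the /cdn-cgi/explorer/api path comes from the announcement:

```typescript
// Enumerate operations from an OpenAPI document, such as one fetched
// from a local dev server, e.g.:
//   fetch("http://localhost:8787/cdn-cgi/explorer/api")
// (port and document contents here are illustrative).
type OpenApiDoc = {
  paths: Record<string, Record<string, { summary?: string }>>;
};

function listOperations(doc: OpenApiDoc): string[] {
  const ops: string[] = [];
  for (const [path, methods] of Object.entries(doc.paths)) {
    for (const method of Object.keys(methods)) {
      ops.push(`${method.toUpperCase()} ${path}`);
    }
  }
  return ops;
}

// A made-up fragment of what such a spec might contain:
const sampleDoc: OpenApiDoc = {
  paths: {
    "/kv/namespaces": { get: { summary: "List local KV namespaces" } },
    "/d1/databases": { get: { summary: "List local D1 databases" } },
  },
};

console.log(listOperations(sampleDoc));
// ["GET /kv/namespaces", "GET /d1/databases"]
```

Because the local mirror and the remote API share one shape, the same discovery code works against either endpoint.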
Now that we have built the machine, it’s time to take the best parts of Wrangler today, combine them with what’s now possible, and make Wrangler the best CLI possible for using all of Cloudflare.
You can try the technical preview today by running npx cf. Or you can install it globally by running npm install -g cf.
With this very early version, we want your feedback — not just about what the technical preview does today, but what you want from a CLI for Cloudflare’s entire platform. Tell us what you wish was an easy one-line CLI command but takes a few clicks in our dashboard today. What you wish you could configure in wrangler.jsonc — like DNS records or Cache Rules. And where you’ve seen your agents get stuck, and what commands you wish our CLI provided for your agent to use.
Jump into the Cloudflare Developers Discord and tell us what you’d like us to add first to the CLI, and stay tuned for many more updates soon.
Thanks to Emily Shen for her valuable contributions to kicking off the Local Explorer project.
...
Read the original on blog.cloudflare.com »