10 interesting stories served every morning and every evening.




1 920 shares, 53 trendiness

Someone Bought 30 WordPress Plugins and Planted a Backdoor in All of Them.

Last week, I wrote about catching a supply chain attack on a WordPress plugin called Widget Logic. A trusted name, acquired by a new owner, turned into something malicious. It happened again. This time at a much larger scale.

Ricky from Improve & Grow emailed us about an alert he saw in the WordPress dashboard for a client site. The notice was from the WordPress.org Plugins Team, warning that a plugin called Countdown Timer Ultimate contained code that could allow unauthorized third-party access.

I ran a full security audit on the site. The plugin itself had already been force-updated by WordPress.org to version 2.6.9.1, which was supposed to clean things up. But the damage was already done.

The plugin's wpos-analytics module had phoned home to analytics.essentialplugin.com, downloaded a backdoor file called wp-comments-posts.php (designed to look like the core file wp-comments-post.php), and used it to inject a massive block of PHP into wp-config.php.

The injected code was sophisticated. It fetched spam links, redirects, and fake pages from a command-and-control server. It only showed the spam to Googlebot, making it invisible to site owners. And here is the wildest part. It resolved its C2 domain through an Ethereum smart contract, querying public blockchain RPC endpoints. Traditional domain takedowns would not work because the attacker could update the smart contract to point to a new domain at any time.
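The cloaking trick is simple to illustrate. This is my own toy sketch of the idea, not the attacker's code: a page renderer that returns spam only when the visitor's user agent looks like Googlebot, so the owner never sees it in a normal browser.

```shell
# Toy illustration of user-agent cloaking (not the actual malware).
render_page() {
  ua="$1"
  case "$ua" in
    *Googlebot*) echo "spam links here" ;;   # served only to the crawler
    *)           echo "normal page" ;;       # what owners and visitors see
  esac
}

owner_view=$(render_page "Mozilla/5.0 (Macintosh; Intel Mac OS X)")
crawler_view=$(render_page "Mozilla/5.0 (compatible; Googlebot/2.1)")
```

Fetching a suspect page with curl and a Googlebot user-agent string, then again with a normal one, is a quick way to spot this kind of cloaking on a live site.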

CaptainCore keeps daily restic backups. I extracted wp-config.php from 8 different backup dates and compared file sizes. Binary search style.

The injection happened on April 6, 2026, between 04:22 and 11:06 UTC. A 6-hour 44-minute window.
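The comparison itself is mechanical. Here is a sketch with fabricated dates and byte counts standing in for real restic output (the real sizes would come from something like `restic dump <snapshot> /path/to/wp-config.php | wc -c`); it walks the snapshots to find the first backup whose wp-config.php size jumps.

```shell
# Fabricated snapshot dates and wp-config.php byte counts, for illustration only.
snapshots="2026-04-03:3105 2026-04-04:3105 2026-04-05:3105 2026-04-06:3105 2026-04-07:41872 2026-04-08:41872"

baseline=3105
first_bad=""
for entry in $snapshots; do
  day=${entry%%:*}
  size=${entry##*:}
  # The first size change marks the first backup containing the injection.
  if [ -z "$first_bad" ] && [ "$size" -ne "$baseline" ]; then
    first_bad=$day
  fi
done
echo "first modified backup: $first_bad"
```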

I traced the plugin's history through 939 quicksave snapshots. The plugin had been on the site since January 2019. The wpos-analytics module was always there, functioning as a legitimate analytics opt-in system for years.

Then came version 2.6.7, released August 8, 2025. The changelog said, "Check compatibility with WordPress version 6.8.2." What it actually did was add 191 lines of code, including a PHP deserialization backdoor. The class-anylc-admin.php file grew from 473 to 664 lines.

The new code introduced three things:

A fetch_ver_info() method that calls file_get_contents() on the attacker's server and passes the response to @unserialize()

A version_info_clean() method that executes @$clean($this->version_cache, $this->changelog) where all three values come from the unserialized remote data

That is a textbook arbitrary function call. The remote server controls the function name, the arguments, everything. It sat dormant for 8 months before being activated on April 5-6, 2026.
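That shape (remote bytes fed to unserialize(), then a variable used as a function name) is easy to grep for. A minimal detection sketch, run here against a fabricated sample file rather than the real plugin:

```shell
# Create a fabricated sample that mimics the backdoor's shape.
scan_dir=$(mktemp -d)
cat > "$scan_dir/class-sample.php" <<'PHP'
<?php
$info = @unserialize(file_get_contents($remote));
@$clean($this->version_cache, $this->changelog);
PHP

# Flag files that feed data to unserialize() or call a variable as a function.
hits=$(grep -rlE '@unserialize\(|@\$[A-Za-z_]+\(' "$scan_dir")
echo "$hits"
```

A real scan would run the same grep across wp-content/plugins and treat every hit as a file to read by hand, since both patterns also have rare legitimate uses.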

This is where it gets interesting. The original plugin was built by Minesh Shah, Anoop Ranawat, and Pratik Jain, an India-based team that operated under "WP Online Support" starting around 2015. They later rebranded to "Essential Plugin" and grew the portfolio to 30+ free plugins with premium versions.

By late 2024, revenue had declined 35-45%. Minesh listed the entire business on Flippa. A buyer identified only as "Kris," with a background in SEO, crypto, and online gambling marketing, purchased everything for six figures. Flippa even published a case study about the sale in July 2025.

The buyer's very first SVN commit was the backdoor.

On April 7, 2026, the WordPress.org Plugins Team permanently closed every plugin from the Essential Plugin author. At least 30 plugins, all on the same day. Here are the ones I confirmed:

* SlidersPack — All in One Image Sliders — sliderspack-all-in-one-image-sliders

All permanently closed. The author search on WordPress.org returns zero results. The analytics.essentialplugin.com endpoint now returns {"message":"closed"}.

In 2017, a buyer using the alias "Daley Tias" purchased the Display Widgets plugin (200,000 installs) for $15,000 and injected payday loan spam. That buyer went on to compromise at least 9 plugins the same way.

The Essential Plugin case is the same playbook at a larger scale. 30+ plugins. Hundreds of thousands of active installations. A legitimate 8-year-old business acquired through a public marketplace and weaponized within months.

WordPress.org's forced update added return; statements to disable the phone-home functions. That is a band-aid. The wpos-analytics module is still there with all its code. I built patched versions with the entire backdoor module stripped out.

I scanned my entire fleet and found 12 of the 26 Essential Plugin plugins installed across 22 customer sites. I patched 10 of them (one had no backdoor module, one was a different "pro" fork by the original authors). Here are the patched versions, hosted permanently on B2:

# Countdown Timer Ultimate

wp plugin install https://plugins.captaincore.io/countdown-timer-ultimate-2.6.9.1-patched.zip --force

# Popup Anything on Click

wp plugin install https://plugins.captaincore.io/popup-anything-on-click-2.9.1.1-patched.zip --force

# WP Testimonial with Widget

wp plugin install https://plugins.captaincore.io/wp-testimonial-with-widget-3.5.1-patched.zip --force

# WP Team Showcase and Slider

wp plugin install https://plugins.captaincore.io/wp-team-showcase-and-slider-2.8.6.1-patched.zip --force

# WP FAQ (sp-faq)

wp plugin install https://plugins.captaincore.io/sp-faq-3.9.5.1-patched.zip --force

# Timeline and History Slider

wp plugin install https://plugins.captaincore.io/timeline-and-history-slider-2.4.5.1-patched.zip --force

# Album and Image Gallery plus Lightbox

wp plugin install https://plugins.captaincore.io/album-and-image-gallery-plus-lightbox-2.1.8.1-patched.zip --force

# SP News and Widget

wp plugin install https://plugins.captaincore.io/sp-news-and-widget-5.0.6-patched.zip --force

# WP Blog and Widgets

wp plugin install https://plugins.captaincore.io/wp-blog-and-widgets-2.6.6.1-patched.zip --force

# Featured Post Creative

wp plugin install https://plugins.captaincore.io/featured-post-creative-1.5.7-patched.zip --force

# Post Grid and Filter Ultimate

wp plugin install https://plugins.captaincore.io/post-grid-and-filter-ultimate-1.7.4-patched.zip --force

Each patched version removes the entire wpos-analytics directory, deletes the loader function from the main plugin file, and bumps the version to -patched. The plugin itself continues to work normally.

The process is straightforward with Claude Code. Point it at this article for context, tell it which plugin you need patched, and it can strip the wpos-analytics module the same way I did. The pattern is identical across all of the Essential Plugin plugins:

Delete the wpos-analytics/ directory from the plugin

Remove the loader function block in the main plugin PHP file (search for "Plugin Wpos Analytics Data Starts" or wpos_analytics_anl)
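Scripted, the two steps above look roughly like this. The demo builds a throwaway fake plugin first; note the loader removal here simply drops any line mentioning the wpos_analytics_anl identifier, a cruder cut than editing the marked block by hand.

```shell
# Build a throwaway fake plugin to patch (for demonstration only).
plugin=$(mktemp -d)/fake-plugin
mkdir -p "$plugin/wpos-analytics"
echo "<?php // analytics module" > "$plugin/wpos-analytics/module.php"
printf '<?php\nreal_feature();\nwpos_analytics_anl_load();\n' > "$plugin/fake-plugin.php"

# Step 1: delete the wpos-analytics/ directory wholesale.
rm -rf "$plugin/wpos-analytics"

# Step 2: strip loader lines from the main plugin file.
grep -v 'wpos_analytics_anl' "$plugin/fake-plugin.php" > "$plugin/fake-plugin.php.new"
mv "$plugin/fake-plugin.php.new" "$plugin/fake-plugin.php"
```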

Two supply chain attacks in two weeks. Both followed the same pattern. Buy a trusted plugin with an established install base, inherit the WordPress.org commit access, and inject malicious code. The Flippa listing for Essential Plugin was public. The buyer's background in SEO and gambling marketing was public. And yet the acquisition sailed through without any review from WordPress.org.

WordPress.org has no mechanism to flag or review plugin ownership transfers. There is no "change of control" notification to users. No additional code review triggered by a new committer. The Plugins Team responded quickly once the attack was discovered. But 8 months passed between the backdoor being planted and being caught.

If you manage WordPress sites, search your fleet for any of the 26 plugin slugs listed above. If you find one, patch it or remove it. And check wp-config.php.
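A plugin directory check is enough for that search. This sketch scans a fabricated install against a few of the affected slugs; a real sweep would loop over every site root and use the full slug list.

```shell
# Fabricated WordPress install, for demonstration only.
site=$(mktemp -d)
mkdir -p "$site/wp-content/plugins/sp-faq" "$site/wp-content/plugins/akismet"

# Subset of affected slugs; extend with the complete list for a real sweep.
found=""
for slug in countdown-timer-ultimate popup-anything-on-click sp-faq; do
  if [ -d "$site/wp-content/plugins/$slug" ]; then
    found="$found $slug"
  fi
done
echo "affected:$found"
```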

...

Read the original on anchor.host »

2 697 shares, 48 trendiness

GitHub Stacked PRs

Large pull requests are hard to review, slow to merge, and prone to conflicts. Reviewers lose context, feedback quality drops, and the whole team slows down. Stacked PRs solve this by breaking big changes into a chain of small, focused pull requests that build on each other — each one independently reviewable.

A stack is a series of pull requests in the same repository where each PR targets the branch of the PR below it, forming an ordered chain that ultimately lands on your main branch.
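In plain git, that chain is just branches based on one another, with each PR's base pointed at the branch below it. A minimal local reconstruction (no GitHub or gh involved):

```shell
# Build a two-layer stack in a scratch repository.
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"

# Layer 1: a PR that would target main.
git checkout -q -b refactor-db
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "refactor storage layer"

# Layer 2: a PR that would target refactor-db, not main.
git checkout -q -b add-feature refactor-db
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "add feature on top"

# The stack's total delta over main is both layers.
stack_depth=$(git rev-list --count main..add-feature)
```

Each layer's diff against its own base stays small, which is exactly what makes the per-PR review focused.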

GitHub understands stacks end-to-end: the pull request UI shows a stack map so reviewers can navigate between layers, branch protection rules are enforced against the final target branch (not just the direct base), and CI runs for every PR in the stack as if they were targeting the final branch.

The gh stack CLI handles the local workflow: creating branches, managing rebases, pushing to GitHub, and creating PRs with the correct base branches. On GitHub, the PR UI gives reviewers the context they need — a stack map for navigation, focused diffs for each layer, and proper rules enforcement.

When you're ready to merge, you can merge all or part of the stack. Each PR can be merged directly or through the merge queue. After a merge, the remaining PRs in the stack are automatically rebased so the lowest unmerged PR targets the base branch.

Ready to dive in? Start with the Quick Start guide or read the full overview.

...

Read the original on github.github.com »

3 496 shares, 89 trendiness

DaVinci Resolve – Photo

The Photo page brings Hollywood's most advanced color tools to still photography for the first time! Whether you're a professional colorist looking to apply your skills to fashion shoots and weddings, or a photographer who wants to work beyond the limits of traditional photo applications, the Photo page unlocks the tools you need. Start with familiar photo tools including white balance, exposure and primary color adjustments, then switch to the Color page for access to the full DaVinci color grading toolset trusted by Hollywood's best colorists! You can use DaVinci's AI toolset as well as Resolve FX and Fusion FX. GPU acceleration lets you export faster than ever before!

For photographers, the Photo page offers a familiar set of tools alongside DaVinci's powerful color grading capabilities. It includes native RAW support for Canon, Fujifilm, Nikon, Sony and even iPhone ProRAW. All image processing takes place at source resolution up to 32K, or over 400 megapixels, so you're never limited to project resolution. Familiar basic adjustments including white balance, exposure, color and saturation give you a comfortable starting point. With non-destructive processing you can reframe, crop and re-interpret your original sensor data at any time. And with GPU acceleration, entire albums can be processed dramatically faster than conventional photo applications!

The Photo page Inspector gives you precise control over the transform and cropping parameters of your images. Reframe and crop non-destructively at the original source resolution and aspect ratio, so you're never restricted to a fixed timeline size! Zoom, position, rotate and flip images with full transform controls and use the cropping parameters to trim the edges of any image with precision. Reframe a shot to improve composition, adjust for a specific ratio for print or social media use, or simply remove unwanted elements from the edges of a frame. All adjustments can be refined or reset at any time without ever affecting the original source file!

DaVinci Resolve is the world's only post production software that lets everyone work together on the same project at the same time! Built on a powerful cloud based workflow, you can share albums, all associated metadata and tags, as well as grades and effects with colorists, photographers and retouchers anywhere in the world. Blackmagic Cloud syncing keeps every collaborator up to date with the latest version of your image library in real time, and remote reviewers can approve grades offsite without needing to be in the same room. Hollywood colorists can even grade live fashion shoots remotely, all while the photographer is still on set!

The Photo page gives you everything you need to manage your entire image library from import to completion. You can import photos directly, from your Apple Photos library or Lightroom, and organize them with tags, ratings, favorites and keywords for fast, flexible management of even the largest libraries. It supports all standard RAW files and image types. AI IntelliSearch lets you instantly search across your entire project to find exactly what you're looking for, from objects to people to animals! Albums allow you to build and manage collections for any project and with a single click you can switch between your photo library and your color grading workflow!

Albums are a powerful way to build and manage photo collections directly in DaVinci Resolve. You can add images manually to each album or organize by date, camera, star rating, EXIF data and more. Powerful filter and sort tools give you total control over how your collection is arranged. The thumbnail view displays each image's graded version alongside its file name and source clip format so you can see your grades at a glance. Create multiple grade versions of any image, all referencing the original source file, so you can explore different looks without ever duplicating a file. Plus, grades applied to one photo can be instantly copied across others in the album for a fast, consistent look!

Connect Sony or Canon cameras directly to DaVinci Resolve for tethered shooting with full live view! Adjust camera settings including ISO, exposure and white balance without leaving the page and save image capture presets to establish a consistent look before you shoot. Images can be captured directly into an album, with albums created automatically during capture so your library is perfectly organized from the moment you start shooting. Grade images as they arrive using DaVinci Resolve's extensive color toolset and use a hardware panel for hands-on creative control in a collaborative shoot. That means you can capture, grade and organize an entire shoot without leaving DaVinci Resolve!

The Photo page gives you access to over 100 GPU and CPU accelerated Resolve FX and specialty AI tools for still image work. They're organized by category in the Open FX library and cover everything from color effects, blurs and glows to image repair, skin refinement and cinematic lighting tools. These are the same tools used by Hollywood colorists and VFX artists on the world's biggest productions, now available for still images. To add an effect, drag it to any node. Whether you're making subtle beauty refinements for a fashion shoot or applying dramatic film looks and atmospheric lighting effects emulating the looks of a Hollywood feature, the Photo page has the tools you need!

Magic Mask makes precise selections of subjects or backgrounds, while Depth Map generates a 3D map of your scene to separate foreground and background without manual masking. Use them together to grade different depths of an image independently for results that have never before been possible for stills!

Add a realistic light source to any photo after capture with Relight FX. Relight analyzes the surfaces of faces and objects to reflect light naturally across the image. Combine it with Magic Mask to light a subject independently from the background, turning flat portraits into stunning fashion images!

Face refinement automatically masks different parts of a face, saving countless hours of manual work. Sharpen eyes, remove dark circles, smooth skin, and color lips. Ultra Beauty separates skin texture from color for natural, high end results, while AI Blemish Removal handles fast skin repair!

The Film Look Creator lets you add cinematic looks that replicate film properties like halation, bloom, grain and vignetting. Adjust exposure in stops and use subtractive saturation, richness and split tone controls to achieve looks usually found on the big screen, now for your still images!

AI SuperScale uses the DaVinci AI Neural Engine to upscale low resolution images with exceptional quality. The enhanced mode is specifically designed to remove compression artifacts, making it the perfect tool for rescaling low quality photos or frame grabs up to 4x their original resolution!

UltraNR is a DaVinci AI Neural Engine driven denoise mode in the Color page's spatial noise reduction palette. Use it to dramatically reduce digital noise from an image while maintaining image clarity, smoothing out digital grain or scanner noise while keeping fine hair and eye edges sharp.

Sample an area of a scene to quickly cover up unwanted elements, like objects or even blemishes on a face. The Patch Replacer has a fantastic auto grading feature that seamlessly blends the covered area with the surrounding color data. Perfect for removing sensor dust.

The Quick Export option makes it fast and easy to deliver finished images in a wide range of common formats including JPEG, PNG, HEIF and TIFF. Export either an entire album or just selected photos, providing flexibility to meet your specific delivery needs. You can set the resolution, bit depth, quality and compression to ensure your images are optimized for their intended use. Whether you're exporting standalone images for print, sharing on social media platforms or delivering graded files to a client, Quick Export has you covered. All exports preserve your original photo EXIF metadata, so camera settings, location data and other important information always travel with your files.

The Photo page uses GPU accelerated processing to deliver fast, accurate results across your entire workflow. Process hundreds of RAW files in seconds with GPU accelerated decoding and apply Resolve FX to your images in real time. GPU acceleration also means batch exports and conversions are dramatically faster than conventional photo applications. On Mac, DaVinci Resolve is optimized for Metal and Apple Silicon, taking full advantage of the latest hardware. On Windows and Linux, you get CUDA support for NVIDIA GPUs, while the Windows version also features full OpenCL support for AMD, Intel and Qualcomm GPUs. All this ensures you get high performance results on any system!

Hollywood colorists have always relied on hardware panels to work faster and more creatively, and now photographers can too! The DaVinci Resolve Micro Color Panel is the perfect companion for photo grading as it is compact enough to sit next to a laptop and portable enough to take on location for shoots. It features three high quality trackballs for lift, gamma and gain adjustments, plus 12 primary correction knobs for contrast, saturation, hue, temperature and more. It even has a built in rechargeable battery! DaVinci Resolve color panels let you adjust multiple parameters at once, so you can create looks that are simply impossible with a mouse and keyboard.


...

Read the original on www.blackmagicdesign.com »

4 449 shares, 22 trendiness

Servo aims to empower developers with a lightweight, high-performance alternative for embedding web technologies in applications.

Servo is now available on crates.io

Today the Servo team has released v0.1.0 of the servo crate. This is our first crates.io release of the servo crate, which allows Servo to be used as a library.

We currently do not have any plans to publish our demo browser servoshell to crates.io. In the 5 releases since our initial GitHub release in October 2025, our release process has matured, with the main "bottleneck" now being the human-written monthly blog post. Since we're quite excited about this release, we decided not to wait for the monthly blog post to be finished, but promise to deliver the monthly update in the coming weeks.

As you can see from the version number, this release is not a 1.0 release. In fact, we still haven't finished discussing what 1.0 means for Servo. Nevertheless, the increased version number reflects our growing confidence in Servo's embedding API and its ability to meet some users' needs.

In the meantime we also decided to offer a long-term support (LTS) version of Servo, since breaking changes in the regular monthly releases are expected and some embedders might prefer doing major upgrades on a scheduled half-yearly basis while still receiving security updates and (hopefully!) some migration guides. For more details on the LTS release, see the respective section in the Servo book.

...

Read the original on servo.org »

5 413 shares, 23 trendiness

sterlingcrispin/nothing-ever-happens

Focused async Python bot for Polymarket that buys No on standalone non-sports yes/no markets.

FOR ENTERTAINMENT ONLY. PROVIDED AS IS, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK. THE AUTHORS ARE NOT LIABLE FOR ANY CLAIMS, LOSSES, OR DAMAGES.

The bot scans standalone markets, looks for NO entries below a configured price cap, tracks open positions, exposes a dashboard, and persists live recovery state when order transmission is enabled.
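The entry rule itself reduces to a price comparison. A sketch with made-up market quotes, nothing read from the real API:

```shell
# market_id:no_ask pairs, fabricated for illustration.
markets="m1:0.92 m2:0.78 m3:0.85"
price_cap=0.80

picks=""
for m in $markets; do
  id=${m%%:*}
  ask=${m##*:}
  # Enter only when the NO ask is at or below the configured cap.
  if awk -v a="$ask" -v c="$price_cap" 'BEGIN { exit !(a <= c) }'; then
    picks="$picks $id"
  fi
done
echo "candidates:$picks"
```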

If any of those are missing, the bot uses PaperExchangeClient.

pip install -r requirements.txt

cp config.example.json config.json

cp .env.example .env

config.json is intentionally local and ignored by git.

The runtime config lives under strategies.nothing_happens. See config.example.json and .env.example.

You can point the runtime at a different config file with CONFIG_PATH=/path/to/config.json.

python -m bot.main

The dashboard binds $PORT or DASHBOARD_PORT when one is set.

The shell helpers use either an explicit app name argument or HEROKU_APP_NAME.

export HEROKU_APP_NAME=

heroku config:set BOT_MODE=live DRY_RUN=false LIVE_TRADING_ENABLED=true -a "$HEROKU_APP_NAME"

heroku config:set PRIVATE_KEY=

Only run the web dyno. The worker entry exists only to fail fast if it is started accidentally.

python -m pytest -q

Local config, ledgers, exports, reports, and deployment artifacts are ignored by default.

...

Read the original on github.com »

6 380 shares, 19 trendiness

US appeals court declares 158-year-old home distilling ban unconstitutional

A U.S. appeals court on Friday declared unconstitutional a nearly 158-year-old federal ban on home distilling, calling it an unnecessary and improper means for Congress to exercise its power to tax.

The 5th U.S. Circuit Court of Appeals in New Orleans ruled in favor of the nonprofit Hobby Distillers Association and four of its 1,300 members.

They argued that people should be free to distill spirits at home, whether as a hobby or for personal consumption including, in one instance, to create an apple-pie-vodka recipe.

The ban was part of a law passed during Reconstruction in July 1868, in part to thwart liquor tax evasion, and subjected violators to up to five years in prison and a $10,000 fine.

Writing for a three-judge panel, Circuit Judge Edith Hollan Jones said the ban actually reduced tax revenue by preventing distilling in the first place, unlike laws that regulated the manufacture and labeling of distilled spirits on which the government could collect taxes.

She also said that under the government's logic, Congress could criminalize virtually any in-home activity that might escape notice from tax collectors, including remote work and home-based businesses.

"Without any limiting principle, the government's theory would violate this court's obligation to read the Constitution carefully to avoid creating a general federal authority akin to the police power," Jones wrote.

The U.S. Department of Justice had no immediate comment.

Another defendant, the Treasury Department's Alcohol and Tobacco Tax and Trade Bureau, did not immediately respond to a request for comment.

Devin Watkins, a lawyer representing the Hobby Distillers Association, in an interview called the ruling an important decision about the limits of federal power.

Andrew Grossman, who argued the nonprofit's appeal, called the decision "an important victory for individual liberty" that lets the plaintiffs "pursue their passion to distill fine beverages in their homes."

"I look forward to sampling their output," he said.

The decision upheld a July 2024 ruling by U.S. District Judge Mark Pittman in Fort Worth, Texas. He put his ruling on hold so the government could appeal.

...

Read the original on nypost.com »

7 374 shares, 16 trendiness

Make tmux Pretty and Usable

In my pre­vi­ous blog post I gave a quick and easy in­tro­duc­tion to tmux and ex­plained how to use tmux with a ba­sic con­fig­u­ra­tion.

If you’ve fol­lowed that guide you might have had a feel­ing that many peo­ple have when work­ing with tmux for the first time: These key com­bi­na­tions are re­ally awk­ward!”. Rest as­sured, you’re not alone. Judging from the co­pi­ous blog posts and dot­files re­pos on GitHub there are many peo­ple out there who feel the urge to make tmux be­have a lit­tle dif­fer­ent; to make it more com­fort­able to use.

And ac­tu­ally it’s quite easy to cus­tomize the look and feel of tmux. Let me tell you some­thing about the ba­sics of cus­tomiz­ing tmux and share some of the con­fig­u­ra­tions I find most use­ful.

Customizing tmux is as easy as editing a text file. Tmux uses a file called tmux.conf to store its configuration. If you store that file as ~/.tmux.conf (note the period as the first character of the file name: it's a hidden file), tmux will use this configuration for your current user. If you want to share a configuration between multiple users you can also put your tmux.conf into a system-wide directory. The location of this directory differs across operating systems. The man page (man tmux) will tell you the exact location; just have a look at the documentation for the -f parameter.

Probably the most common change among tmux users is to change the prefix from the rather awkward C-b to something that's a little more accessible. Personally I'm using C-a instead, but note that this might interfere with bash's "go to beginning of line" command. On top of the C-a binding I've also remapped my Caps Lock key to act as Ctrl since I'm not using Caps Lock anyways. This allows me to nicely trigger my prefix key combo.

To change your prefix from C-b to C-a, simply add the following lines to your tmux.conf:

# remap prefix from 'C-b' to 'C-a'

unbind C-b

set-option -g prefix C-a

bind-key C-a send-prefix

Another thing I personally find quite difficult to remember is the pane splitting commands. " to split vertically and % to split horizontally just doesn't work for my brain. I find it helpful to use characters that resemble a visual representation of the split, so I chose | and - for splitting panes horizontally and vertically:

# split panes using | and -

bind | split-window -h

bind - split-window -v

unbind '"'

unbind %

Since I'm experimenting quite often with my tmux.conf I want to be able to reload the config easily. This is why I have a command to reload my config on prefix r:

# reload config file (change file location to the tmux.conf you want to use)

bind r source-file ~/.tmux.conf
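If you want visual feedback that the reload actually happened, tmux can chain a status-line message onto the same binding with \;. A small variation on the binding above (the message text is my own choice):

```tmux
# reload config file and confirm it in the status line
bind r source-file ~/.tmux.conf \; display-message "tmux.conf reloaded"
```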

Switching between panes is one of the most frequent tasks when using tmux. Therefore it should be as easy as possible. I'm not quite fond of triggering the prefix key all the time. I want to be able to simply say M-<arrow> to go where I want to go (remember: M is for Meta, which is usually your Alt key). With this modification I can simply press Alt-left to go to the left pane (and the other directions respectively):

# switch panes using Alt-arrow without prefix

bind -n M-Left select-pane -L

bind -n M-Right select-pane -R

bind -n M-Up select-pane -U

bind -n M-Down select-pane -D
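If you're a vim user you might prefer the home-row keys over the arrows. The same -n (no prefix) trick works for any key, so a vim-flavored variant of the bindings above could look like this (a common alternative you'll find in many dotfiles, not part of the config shown here):

```tmux
# switch panes using Alt-h/j/k/l, vim style, without prefix
bind -n M-h select-pane -L
bind -n M-j select-pane -D
bind -n M-k select-pane -U
bind -n M-l select-pane -R
```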

Although tmux clearly fo­cuses on key­board-only us­age (and this is cer­tainly the most ef­fi­cient way of in­ter­act­ing with your ter­mi­nal) it can be help­ful to en­able mouse in­ter­ac­tion with tmux. This is es­pe­cially help­ful if you find your­self in a sit­u­a­tion where oth­ers have to work with your tmux con­fig and nat­u­rally don’t have a clue about your key bind­ings or tmux in gen­eral. Pair Programming might be one of those oc­ca­sions where this hap­pens quite fre­quently.

Enabling mouse mode al­lows you to se­lect win­dows and dif­fer­ent panes by sim­ply click­ing and to re­size panes by drag­ging their bor­ders around. I find it pretty con­ve­nient and it does­n’t get in my way of­ten, so I usu­ally en­able it:

# Enable mouse control (clickable windows, panes, resizable panes)

set -g mouse on

I like to give my tmux windows custom names using the , key. This helps me name my windows according to the context they're focusing on. By default tmux will update the window title automatically depending on the last executed command within that window. In order to prevent tmux from overriding my wisely chosen window names I want to suppress this behavior:

# don't rename windows automatically

set-option -g allow-rename off

Changing the col­ors and de­sign of tmux is a lit­tle more com­plex than what I’ve pre­sented so far. As tmux al­lows you to tweak the ap­pear­ance of a lot of el­e­ments (e.g. the bor­ders of panes, your sta­tus­bar and in­di­vid­ual el­e­ments of it, mes­sages), you’ll need to add a few op­tions to get a con­sis­tent look and feel. You can make this as sim­ple or as elab­o­rate as you like. Tmux’s man page (specifically the STYLES sec­tion) con­tains more in­for­ma­tion about what you can tweak and how you can tweak it.

Depending on your color scheme your re­sult­ing tmux will look some­thing like this:

# DESIGN TWEAKS

# don't do anything when a 'bell' rings

set -g visual-activity off

set -g visual-bell off

set -g visual-silence off

setw -g monitor-activity off

set -g bell-action none

# clock mode

setw -g clock-mode-colour yellow

# copy mode

setw -g mode-style 'fg=black bg=red bold'

# panes

set -g pane-border-style 'fg=red'

set -g pane-active-border-style 'fg=yellow'

# statusbar

set -g status-position bottom

set -g status-justify left

set -g status-style 'fg=red'

set -g status-left ''

set -g status-left-length 10

set -g status-right-style 'fg=black bg=yellow'

set -g status-right '%Y-%m-%d %H:%M'

set -g status-right-length 50

setw -g window-status-current-style 'fg=black bg=red'

setw -g window-status-current-format ' #I #W #F '

setw -g window-status-style 'fg=red bg=black'

setw -g window-status-format ' #I #[fg=white]#W #[fg=yellow]#F '

setw -g window-status-bell-style 'fg=yellow bg=red bold'

# messages

set -g message-style 'fg=yellow bg=red bold'

In the snippet above, I'm using your terminal's default colors (by using the named colors, like red, yellow or black). This allows tmux to play nicely with whatever color theme you have set for your terminal. Some prefer to use a broader range of colors for their terminals and tmux color schemes. If you don't want to use your terminal's default colors but instead want to define colors from a 256 color range, you can use colour0 to colour255 instead of red, cyan, and so on when defining your colors in your tmux.conf.
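To see which colourN name maps to which actual color in your terminal, you can print all 256 of them with a short shell loop (a quick helper of my own, not tmux-specific; it uses standard ANSI 256-color escape sequences):

```shell
#!/bin/sh
# Print all 256 colours with their tmux names (colour0 .. colour255),
# eight swatches per line, using ANSI 256-colour escape sequences.
i=0
while [ "$i" -le 255 ]; do
  printf '\033[38;5;%dm colour%-3d\033[0m' "$i" "$i"
  i=$((i + 1))
  # start a new line after every eighth swatch
  [ $((i % 8)) -eq 0 ] && printf '\n'
done
```

Run it once, note the numbers you like, and plug them into the style options above as colourN.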

Looking for a nice color scheme for your ter­mi­nal?

If you're looking for a nice color scheme for your terminal I recommend checking out my very own Root Loops. With Root Loops you can easily design a personal, awesome-looking terminal color scheme and stand out from all the folks using the same boring-ass color schemes as everyone else.

There are plenty of resources out there where you can find people presenting their tmux configurations. GitHub and other code hosting services tend to be a great source. Simply search for "tmux.conf" or repos called "dotfiles" to find a vast amount of configurations that are out there. Some people share their configuration on their blog. Reddit might have a few subreddits with useful inspiration, too (there's /r/dotfiles and /r/unixporn, for example).

You can find my com­plete tmux.conf (along with other con­fig­u­ra­tion files I’m us­ing on my sys­tems) on my per­sonal dot­files repo on GitHub.

If you want to dive deeper into how you can cus­tomize tmux, the canon­i­cal source of truth is tmux’s man page (simply type man tmux to get there). You should also take a look at the elab­o­rate tmux wiki and see their Configuring tmux sec­tion if this blog post was too shal­low for your needs. Both will con­tain up-to-date in­for­ma­tion about each and every tiny thing you can tweak to make your tmux ex­pe­ri­ence truly yours. Have fun!

...

Read the original on hamvocke.com »

8 308 shares, 14 trendiness

We May Be Living Through the Most Consequential Hundred Days in Cyber History, and Almost Nobody Has Noticed

The first four months of 2026 have pro­duced a se­quence of cy­ber in­ci­dents that, if any one of them had landed in 2014 or 2017, would have dom­i­nated a news cy­cle for a week.

A Chinese state supercomputer reportedly bled ten petabytes. Stryker was wiped across 79 countries. Lockheed Martin was hit for a reported 375 terabytes. The FBI Director's personal inbox was dumped on the open web. The FBI's wiretap management network was breached in a separate "major incident." Rockstar Games was breached through a SaaS analytics vendor most people have never heard of. Cisco's private GitHub was cloned. Oracle's legacy cloud cracked open. The Axios npm package, downloaded a hundred million times a week, was hijacked by North Korea. Mercor, the $10 billion AI training-data vendor that sits inside the data pipelines of OpenAI, Anthropic, and Meta simultaneously, was breached through the LiteLLM open source library and had 4 terabytes extracted by Lapsus$. Honda was hit twice. The new ShinyHunters/Scattered Spider/LAPSUS$ alliance breached approximately 400 organizations and exfiltrated roughly 1.5 billion Salesforce records.

Stacked on top of each other across roughly a hun­dred days, these events are some­thing a his­to­rian of com­put­ing se­cu­rity writ­ing in 2050 will prob­a­bly file as a turn­ing point, re­gard­less of what else hap­pens be­tween now and then.

And yet, the pub­lic con­ver­sa­tion around them has been quiet to the point of be­ing strange. This is a cu­ri­ous ob­ser­va­tion more than a com­plaint. And the goal of what fol­lows is to gather the events into one place, cite the pub­li­ca­tions that re­ported each one, and then ask, gen­tly, why the pe­riod feels so un­doc­u­mented in real time.

Every named in­ci­dent be­low is fol­lowed by in­line par­en­thet­i­cal ci­ta­tions to the pub­li­ca­tions that broke or cov­ered it, in the same way an aca­d­e­mic pa­per would.

I am not ar­gu­ing that the cy­ber­se­cu­rity com­mu­nity is fail­ing. I am not­ing that some­thing un­usual is hap­pen­ing.

Strip out the noise and the 2026 wave so far breaks cleanly into four sep­a­rate cam­paigns run­ning in par­al­lel against U. S. and Western tar­gets. This con­ver­gence is the part no­body is nam­ing out loud.

Cluster 1: Iran / Handala / Void Manticore (destructive state op­er­a­tions). Operating un­der the Handala Hack Team per­sona, at­trib­uted by Palo Alto Networks Unit 42 to Void Manticore, an ac­tor linked to Iran’s Ministry of Intelligence and Security. Handala is claim­ing at­tacks against U. S. in­dus­trial, de­fense, and gov­ern­ment tar­gets and ex­plic­itly fram­ing them as re­tal­i­a­tion for a February 28 mis­sile strike on a school in Minab, south­ern Iran, that killed at least 175 peo­ple, most of them chil­dren. Confirmed and claimed Q1 2026 vic­tims: Stryker (200,000 de­vices wiped), Lockheed Martin (375 TB claim, 28 en­gi­neer doxxing), FBI Director Kash Patel (personal email dump).

Cluster 2: Scattered LAPSUS$ Hunters / SLH, the apex-predator merger (financially motivated SaaS theft and extortion at industrial scale). This is the single largest and least-discussed organizational development in the criminal cyber landscape since the Conti collapse. In August 2025, three of the most notorious financially motivated crews on the planet, ShinyHunters, Scattered Spider, and LAPSUS$, formally combined into a coordinated alliance widely tracked as Scattered LAPSUS$ Hunters (SLH), sometimes called the "Trinity of Chaos" (Resecurity; Cyberbit; Infosecurity Magazine; The Hacker News; Computer Weekly; ReliaQuest). Scattered Spider provides initial access through highly effective social engineering and vishing. ShinyHunters handles exfiltration, leak-site management, and extortion. LAPSUS$ contributes its own brand of identity-system compromise. The result is an end-to-end criminal pipeline operating against the SaaS layer of the global enterprise.

The numbers from this cluster's 2025-2026 Salesforce campaign alone are difficult to absorb. ShinyHunters has publicly claimed compromise of approximately 300 to 400 organizations, with around 100 described as high-profile, and approximately 1.5 billion Salesforce records stolen in aggregate (BankInfoSecurity, "ShinyHunters Counts 1.5 Billion Stolen Salesforce Records"; The Register; State of Surveillance, "400 Companies Breached"; Salesforce Ben). Salesforce released a security advisory on March 7, 2026 confirming that a "known threat group" was exploiting misconfigurations in its Experience Cloud product, and ShinyHunters claimed responsibility on its data leak site two days later. The named victim list reads like a roll call of global brand recognition: Google (corporate Salesforce instance, ~2.55M records of small and medium business contact data), Cisco, Adidas, Qantas (5.7M customers), Allianz Life, Farmers Insurance Group, Workday, Pandora, Chanel, TransUnion, the entire LVMH family including Louis Vuitton, Dior, Tiffany & Co., and Cartier, Air France-KLM, LastPass, Okta, AMD, Snowflake itself, Match Group (Hinge, Bumble, OkCupid), SoundCloud (29.8M users), Panera Bread (5.1M accounts), Betterment (1.4M), Harvard, the University of Pennsylvania, Crunchbase, Canada Goose, and the December 2025 Pornhub breach via the Mixpanel campaign that exposed roughly 200 million user records and 94 GB of historical analytics data (BleepingComputer on Qantas, Allianz Life, LVMH; Cybersecurity News on Google, Adidas, Louis Vuitton; Malwarebytes; Google Cloud Threat Intel; Wikipedia ShinyHunters).
Q1 2026 alone added Rockstar Games (via Anodot → Snowflake), the Cisco Trivy / Salesforce double hit, and the single most consequential AI-industry-specific incident of the quarter: the Lapsus$-claimed breach of Mercor, the $10B AI recruiting and training-data vendor that sits inside the data pipelines of OpenAI, Anthropic, and Meta simultaneously, after a LiteLLM open source supply chain compromise by the TeamPCP cluster. All are catalogued in dedicated sections later in this article.

Tradecraft note: this cluster is no longer just compromising SaaS integrators to lift their OAuth tokens, although that remains part of the playbook (the 2025 Salesloft Drift / UNC6395 incident that compromised over 700 Salesforce environments including Cloudflare, Google, PagerDuty, Palo Alto Networks, Proofpoint, Tanium, Zscaler, and CyberArk is the precedent that proved the OAuth model works at scale; Unit 42 threat brief; UpGuard; Cloudflare response). The 2026 evolution is more direct: SLH operators now call employees on the phone, pretend to be IT support, walk them through "updating MFA settings" or "linking the Salesforce Data Loader app," and harvest credentials, MFA codes, and OAuth grants in real time (Google Cloud Blog, "The Cost of a Call"; Varonis; The Hacker News). In parallel, ShinyHunters has weaponized Mandiant's own AuraInspector audit tool to scan and exploit misconfigured Salesforce Experience Cloud guest user permissions across the customer base (Allure Security). Voice phishing has produced more 2026 enterprise breaches than any single technical vulnerability.

Cluster 3: North Korea / UNC1069 (open source sup­ply chain com­pro­mise). Google Threat Intelligence Group at­trib­uted the March 31, 2026 hi­jack of the Axios npm pack­age to UNC1069, a North Korea-nexus fi­nan­cially-mo­ti­vated ac­tor. They did not ex­ploit a vul­ner­a­bil­ity. They built an en­tire fake com­pany, a branded fake Slack work­space, a fake Microsoft Teams meet­ing with fake team­mates, so­cial-en­gi­neered the lead Axios main­tainer into trust­ing the fake or­ga­ni­za­tion, and used that trust to seize his npm ac­count. Then they shipped a cross-plat­form RAT to a JavaScript li­brary with roughly 100 mil­lion weekly down­loads. Cisco’s sep­a­rate March 2026 breach via the Trivy sup­ply chain at­tack, in which over 300 in­ter­nal GitHub repos­i­to­ries were cloned, fits the same gen­eral pat­tern of up­stream de­vel­oper-trust com­pro­mise.

Cluster 4: Russia / APT28 (zero-day ex­ploita­tion against Ukraine and EU). Russia-backed APT28 be­gan ex­ploit­ing a freshly dis­closed Microsoft Office vul­ner­a­bil­ity, CVE-2026-21509, within days of its January patch re­lease. Targets in­cluded Ukrainian gov­ern­ment bod­ies and over 60 European email ad­dresses, with ma­li­cious Office doc­u­ments dis­guised as cor­re­spon­dence from Ukraine’s hy­drom­e­te­o­ro­log­i­cal cen­ter. This is the only clus­ter of the four that is not pri­mar­ily aimed at the United States, but it shares the ar­chi­tec­ture: speed of weaponiza­tion mea­sured in days, ex­ploita­tion of trust re­la­tion­ships, and min­i­mal Western pub­lic re­sponse.

All four clus­ters are ex­ploit­ing the same struc­tural weak­ness: the mod­ern Western en­ter­prise no longer has a de­fen­si­ble perime­ter, only a long chain of ven­dor and de­vel­oper trust re­la­tion­ships, any of which can be turned against the host. Iran is us­ing that chain to break things. ShinyHunters is us­ing it to ex­tort money. North Korea is us­ing it to seed im­plants into the world’s de­vel­oper ma­chines. Russia is us­ing it to read European in­boxes. The chain is the same. Only the pay­loads dif­fer.

Setting aside any ar­gu­ment about cause and ef­fect, there is a par­al­lel set of num­bers from the AI side of the in­dus­try over the same pe­riod that is worth putting on the table. They may or may not ex­plain the wave above. They are at min­i­mum strange enough to be worth not­ing along­side it, and the pub­lic ob­scu­rity around them is it­self part of the ob­ser­va­tion.

In late 2025, Anthropic published a report titled "Disrupting the first reported AI-orchestrated cyber espionage campaign." In it, the company disclosed that a Chinese state-aligned actor had used Claude to automate a spying operation against approximately 30 organizations, with AI handling an estimated 80 to 90 percent of the campaign workload and human operators intervening only sporadically (Anthropic full report PDF; Anthropic news release). That disclosure came from the model vendor itself, not a third-party threat intel report, which is unusual on its own. What is more unusual is how little subsequent discussion it generated outside specialist circles.

Around it sits a stack of mea­sure­ment data from Hoxhunt, ZeroThreat, StrongestLayer, Bright Defense, and StationX that points in the same di­rec­tion across 2025 and into 2026. None of these num­bers, on their own, prove a causal link to any spe­cific in­ci­dent in this ar­ti­cle. Taken to­gether they de­scribe a sharp shift in the am­bi­ent threat en­vi­ron­ment that has gone largely un­re­marked upon in main­stream cov­er­age:

On the threat-in­tel side, Microsoft’s track­ing now for­mally de­scribes two North Korean threat ac­tor clus­ters, Jasper Sleet and Coral Sleet, as us­ing AI across the at­tack life­cy­cle from re­con­nais­sance through im­per­son­ation through post-com­pro­mise (Dark Reading). Genians and The Record have sep­a­rately doc­u­mented Kimsuky, the long-run­ning North Korean APT, us­ing ChatGPT to forge con­vinc­ing South Korean mil­i­tary and gov­ern­ment iden­ti­fi­ca­tion doc­u­ments for phish­ing lures (Genians; The Record; eS­e­cu­ri­ty­Planet). In March 2026 the U. S. Treasury’s OFAC sanc­tioned six in­di­vid­u­als and two en­ti­ties in­volved in the broader DPRK IT worker fraud scheme, in which large lan­guage mod­els are used to gen­er­ate fake per­sonas, re­sumes, and even in­ter­view an­swers to land re­mote en­gi­neer­ing jobs at Fortune 500 com­pa­nies (The Hacker News; TechRadar on OpenAI bans). Whether you read that as a trend or a co­in­ci­dence, it is on the pub­lic record.

There is also the widely reported multi-person Microsoft Teams call in which a financial department employee was manipulated by an AI-generated deepfake of their own CFO, alongside other AI-generated "colleagues," into wiring more than 25 million U.S. dollars to Hong Kong bank accounts (Microtime). Whatever else that incident tells us, it confirms that the infrastructure to fake a convincing multi-person video call in real time exists and has been used.

From the defender side, Anthropic's internal red-team evaluation of its withheld Mythos model found that the model could complete a simulated network intrusion in 6.2 hours versus 10.4 hours for GPT-4o, and could identify exploitable flaws in 73 percent of the applications it scanned (NPR; Axios; CNN Business; Fortune). Anthropic has declined to release Mythos publicly, restricting access to approximately 40 technology companies including Microsoft and Google. OpenAI is finalizing a comparable model that will ship only to a small vetted customer set through a "Trusted Access for Cyber" program (Axios). Two leading frontier labs simultaneously holding back cyber-capable models on safety grounds is, again, not necessarily evidence of anything causal. It is, again, worth noting.

And then, on April 7, 2026, the part of this story that should an­chor every other para­graph in this sec­tion fi­nally hap­pened, in pri­vate, at the high­est pos­si­ble level of the United States gov­ern­ment, and al­most no­body out­side the fi­nan­cial press picked it up.

On that date, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent, in-person meeting in Washington with the chief executives of Goldman Sachs, Citigroup, Morgan Stanley, Bank of America, and Wells Fargo, to brief them directly on the cyber risks posed by Mythos (Bloomberg, "Bessent, Powell Summon Bank CEOs to Urgent Meeting Over Anthropic's New AI Model"; Bloomberg, "Mythos: Why Anthropic's New AI Has Officials Worried"; Fortune; CNBC; CNBC / Reuters; Fox News; Yahoo Finance; TechXplore). The meeting was triggered by Anthropic's disclosure that Mythos had identified thousands of previously unknown zero-day vulnerabilities in every major operating system and every major web browser, along with a range of other critical software. Anthropic said the vulnerability-discovery capability was sufficiently dangerous that the model could only be released to a tightly controlled handful of trusted parties. Bessent and Powell, having absorbed that disclosure, decided that the heads of the largest systemically important U.S. banks needed to be told in person.

Pause and read that paragraph one more time. The Treasury Secretary and the Federal Reserve Chair do not convene the CEOs of the largest U.S. banks about a single software vendor's product. They convene them about financial stability events. The fact that this meeting happened, on this subject, at this level, is the single most authoritative signal in this entire notebook that something has shifted in the cyber threat landscape at a magnitude the federal government considers comparable in importance to a financial stability concern. Treasury and the Fed are not in the habit of raising alarms about technology vendor product releases. They raised one about this one.

This meet­ing also re­frames the si­lence the ar­ti­cle keeps re­turn­ing to. The si­lence has not bro­ken in main­stream pub­lic dis­course. It has clearly bro­ken in pri­vate, at the top of the U. S. gov­ern­ment, in clas­si­fied brief­ings and emer­gency con­ven­ings the pub­lic is mostly not see­ing. The his­to­ri­an’s ques­tion is no longer whether the cy­ber com­mu­nity is be­ing quiet. The his­to­ri­an’s ques­tion is why the pub­lic con­ver­sa­tion is so thor­oughly out of sync with what is clearly be­ing dis­cussed be­hind closed doors at the level of the Treasury and the Federal Reserve. The gap be­tween those two lay­ers of con­ver­sa­tion is, in the long view, the most in­ter­est­ing thing in this en­tire chron­i­cle.

None of the above proves that any spe­cific in­ci­dent in this ar­ti­cle was AI-driven. The Stryker wipe was ex­e­cuted through Microsoft Intune, not a chat­bot. The Patel email leak was a per­sonal Gmail com­pro­mise. The Lockheed claims re­main un­ver­i­fied. What the AI num­bers do es­tab­lish, with a fair amount of con­fi­dence, is that the am­bi­ent cost of run­ning a con­vinc­ing of­fen­sive op­er­a­tion has shifted dra­mat­i­cally over the same win­dow in which the wave above un­folded. Two things changed at once. A rea­son­able ob­server can de­cide for them­selves whether they are con­nected, but a rea­son­able ob­server should at least know both lists ex­ist, side by side, and that al­most no­body in main­stream cov­er­age has put them on the same page.

That ob­scu­rity is the part of this sec­tion that mat­ters most. It is not the AI num­bers. It is the si­lence around them.

On March 11, 2026, Stryker Corporation, one of the largest med­ical de­vice com­pa­nies on the planet, watched its global op­er­a­tions col­lapse in­side an af­ter­noon (Krebs on Security; Cybersecurity Dive; HIPAA Journal; Stryker of­fi­cial state­ment). Attackers com­pro­mised a Windows do­main ad­min­is­tra­tor ac­count, used it to pro­vi­sion a new Global Administrator in­side the com­pa­ny’s Microsoft Entra and Intune en­vi­ron­ment, and then is­sued mass re­mote-wipe and fac­tory-re­set com­mands across the de­vice fleet (Lumos; The Register; Coalition). More than 200,000 sys­tems, servers, lap­tops, and mo­bile de­vices were wiped within min­utes. Offices in 79 coun­tries went dark. Order pro­cess­ing, man­u­fac­tur­ing, and ship­ping all stopped.

The Handala Hack Team claimed re­spon­si­bil­ity, claimed ex­fil­tra­tion of roughly 50 ter­abytes of data prior to the wipe, and be­gan pub­lish­ing it from in­fra­struc­ture that the FBI later seized. Stryker has since re­cov­ered and re­ported full op­er­a­tional restora­tion, and em­ployee law­suits have al­ready been filed. The down­stream ef­fect on pa­tients is the part that has not been ad­e­quately re­ported: hos­pi­tals that re­lied on Stryker sur­gi­cal hard­ware and the com­pa­ny’s or­der and sup­port sys­tems had to post­pone pro­ce­dures while the fleet re­built, which means the wiper trans­lated di­rectly into can­celled surg­eries across mul­ti­ple coun­tries in the hours and days af­ter the event.

Who Handala actually is matters for reading this incident correctly. "Handala Hack Team" is not an independent crew. The U.S. Department of Justice formally classifies it as "a fictitious identity used by MOIS to hide its role in influence operations and psychological scaremongering campaigns." The underlying operator is assessed as Void Manticore, also tracked as Banished Kitten, Red Sandstorm, and Storm-842, an offensive unit sitting inside the Iranian Ministry of Intelligence and Security. The persona first surfaced in December 2023, immediately after October 7, and inherited the operational lineage of two earlier MOIS fronts: Homeland Justice, which ran the 2022 to 2023 Albania operations, and Karma, which Handala formally replaced. The unit it belongs to was, until early 2026, headed by Seyed Yahya Hosseini Panjaki, sanctioned by the U.S. Treasury in September 2024, then by the EU and UK, specifically for overseeing Iranian dissident assassination operations, and placed on the FBI terrorism watch list. Panjaki was killed in the opening phase of U.S. and Israeli strikes on Iranian intelligence infrastructure in early March 2026. The Stryker attack landed after his death, under the same persona. The organizational resilience is itself part of the story.

The stated motive published by Handala is retaliation for the February 28, 2026 strike on a school in Minab, southern Iran, with a claimed casualty count of more than 170 children. That framing is the group's own, and it should be read as psychological operation as much as attribution, but it is the reason the operator put on the record. On March 19, 2026, the FBI seized four Handala domains (including handala-hack.tw, hosted on a Taiwan top-level domain specifically to avoid Western takedown jurisdiction) and the State Department announced a $10 million bounty through its Rewards for Justice program. A replacement site was standing within hours. Handala publicly answered the bounty with a $50 million "counter-bounty" threat aimed at Trump and Netanyahu. The infrastructure traces to Cloudzy (PONYNET), a bulletproof hosting provider that Halcyon has assessed with high confidence is a front for abrNOC, an Iranian hosting company founded in the same year by the same individual, with post-seizure failover routed through Russian DDoS provider DDOS-Guard.

Read all of that in one breath. A MOIS-operated per­sona whose unit head was killed three weeks ear­lier walked into one of the largest med­ical de­vice man­u­fac­tur­ers in the world, ex­fil­trated 50 TB, then pushed a de­struc­tive but­ton that bricked 200,000 end­points across 79 coun­tries in min­utes, post­poned surg­eries, stated a re­tal­i­a­tion mo­tive, ab­sorbed a $10 mil­lion FBI bounty, had four of its do­mains seized, and was op­er­at­ing a re­place­ment site the same day. The re­cov­ery worked, which is a credit to Stryker’s in­ci­dent re­sponse team, but the fact that the re­cov­ery worked does not erase what hap­pened, and what hap­pened is the most con­se­quen­tial wartime cy­ber at­tack on U. S. soil in the pub­lic record. Coverage out­side spe­cial­ist out­lets was min­i­mal.

This is ac­tu­ally two dis­tinct in­ci­dents aimed at Lockheed Martin in­side a few days of each other, and con­flat­ing them has caused most of the cov­er­age con­fu­sion.

Incident one, the 375 TB claim. An en­tity self-iden­ti­fy­ing as APT Iran, which sits in­side the broader Handala ecosys­tem but pub­lishes un­der its own ban­ner, claimed on or around March 2026 to have ex­fil­trated 375 ter­abytes of data from Lockheed Martin and listed the cache for sale on dark web in­fra­struc­ture (Cybersecurity Dive; UpGuard; Cybersecurity Insiders; Hackread). The ini­tial price was re­ported at roughly $400 mil­lion USD and was later raised to­ward $598 mil­lion. The group claims the trove in­cludes cor­po­rate doc­u­ments and tech­ni­cal blue­prints re­lated to the F-35 Joint Strike Fighter pro­gram. Lockheed Martin has not pub­licly con­firmed any breach. Trusted se­cu­rity re­searchers have not ver­i­fied the sam­ple data. Iranian in­tel­li­gence-linked ac­tors are in­de­pen­dently doc­u­mented to ex­ag­ger­ate and to fold prior un­re­lated breaches and open-source ma­te­r­ial into cur­rent claims to am­plify psy­cho­log­i­cal reach. The 375 TB F-35 claim, to be di­rect about it, is widely as­sessed as over­stated. Treat it as claim, not as con­firmed event.

Incident two, March 26, 2026, the 28 engineers. This one is the part that should be getting more attention. The Handala persona itself (distinct from the APT Iran data-sale listing) published the names, photographs, employer details, and location information of 28 senior American engineers identified as working in Israel on defense programs that specifically included the F-35, the F-22, and the THAAD missile defense system (NetCrook). The publication was accompanied by threatening phone calls to the engineers themselves and by language stating that Handala's "friends in the United States" would pay visits to their families. A 48-hour ultimatum was attached. This is doxxing as threat-to-life, executed by a MOIS-operated persona, against named Americans working on three specific weapons programs, and it landed the same week the group was absorbing FBI domain seizures and a $10 million bounty.

Whether or not the 375 TB claim is real, the doxxing of 28 named American defense personnel by an actor with confirmed state ties is not a hypothetical. This is where the silence becomes hard to explain. A MOIS front is publishing kill lists of U.S. defense engineers, tying them to the F-35, F-22, and THAAD by name, and the U.S. cybersecurity ecosystem is treating it as a Tuesday.

On March 27, 2026, the same Handala Hack Team published a tranche of more than 300 emails, photographs, and a copy of Patel’s resume, all stolen from the personal Gmail account of FBI Director Kash Patel (CNN; CBS News; NBC News; Axios; PBS NewsHour; Al Jazeera; CNBC). A U.S. official familiar with the matter confirmed the authenticity of at least some of the published images. The FBI subsequently acknowledged the breach, and the State Department reissued the $10 million Rewards for Justice offer that had been announced eight days earlier against Handala.

The fed­eral gov­ern­men­t’s fram­ing was care­ful and ac­cu­rate as far as it goes: the com­pro­mised ma­te­r­ial is his­tor­i­cal, dates from roughly 2011 to 2022, came from a per­sonal Gmail ac­count rather than any FBI sys­tem, and con­tains no cur­rent op­er­a­tional in­for­ma­tion. Patel’s of­fi­cial in­box was not breached. The ini­tial ac­cess vec­tor is the part that should em­bar­rass the dis­course. Handala did not burn a zero-day. They did not spear-phish a cab­i­net-level of­fi­cial. They used cre­den­tial stuff­ing against cre­den­tials har­vested from older pub­lic breach data­bases, the same tech­nique a teenager with a lap­top uses to break into gam­ing ac­counts. The sit­ting Director of the Federal Bureau of Investigation had a pass­word reused or reusable across a pre-gov­ern­ment breach cor­pus, and a hos­tile state ran the same brute-force work­flow that every fraud team in the world tracks hourly, and it worked.
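Credential stuffing works precisely because a reused password is already sitting in a public breach corpus, and the standard countermeasure is equally mechanical. A minimal sketch of the Pwned Passwords k-anonymity range check, in which only the first five hex characters of the SHA-1 hash ever leave the machine; the breach count in the simulated response below is fabricated for illustration:

```python
import hashlib

def hibp_range_query(password: str) -> tuple[str, str]:
    """Split the SHA-1 of a password into the 5-char prefix sent to the
    range API and the 35-char suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def found_in_range_response(suffix: str, response_text: str) -> int:
    """Given the body returned by GET /range/<prefix> (lines of
    'SUFFIX:COUNT'), return the breach count for our suffix, or 0."""
    for line in response_text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

prefix, suffix = hibp_range_query("password")
# A real check would fetch:
#   https://api.pwnedpasswords.com/range/<prefix>
# Simulated response body for demonstration (count is made up):
fake_response = f"{suffix}:9545824\nABCDEF0123456789ABCDEF0123456789ABC:2"
print(prefix, found_in_range_response(suffix, fake_response))
```

Any nonzero count means the password exists in the public breach corpus and is exactly the kind of credential a stuffing workflow will try first.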

The framing is also a deflection. The point of the operation is not to extract operational secrets. The point is to demonstrate that an Iranian intelligence-linked group can read the personal correspondence of the sitting Director of the Federal Bureau of Investigation and publish it on the open web with attribution and without consequence. This is an explicit retaliation event. It landed eight days after the FBI seized four Handala domains, in the same month the State Department put a $10 million bounty on the group, and three weeks after the U.S. and Israel killed the unit’s leadership in the Minab window. Handala, for its part, answered the $10 million FBI bounty with a public $50 million counter-bounty aimed at Trump and Netanyahu. The March 25 dump of 14 GB from former Mossad Director Tamir Pardo’s personal Gmail, claiming to expose assassination project details and Stuxnet oversight, was published by the same persona two days before the Patel release as a “proof of concept.” The sequencing is the message, and it has been received internally even if it has not been said publicly.

The Patel per­sonal Gmail story con­sumed al­most all of the pub­lic oxy­gen in March, but it was not the most con­se­quen­tial FBI com­pro­mise of the quar­ter. That dis­tinc­tion be­longs to a sep­a­rate in­ci­dent that re­ceived a frac­tion of the cov­er­age and ar­guably rep­re­sents a big­ger prob­lem.

The Federal Bureau of Investigation detected abnormal activity on an internal network on February 17, 2026, opened an inquiry, and on March 23, 2026 the Department of Justice formally classified the intrusion as a “major incident” under the 2014 federal law that requires escalated reporting and remediation (Bloomberg; Insurance Journal; GovInfoSecurity). The affected system is described in the public reporting as the network the Bureau uses to manage wiretaps and other surveillance operations, and it contains sensitive law enforcement data including electronic surveillance content and personally identifying information about subjects of FBI investigations (Bloomberg, “FBI Breach Exposes Secret Investigative Records to Intruders”).

The agencies’ notification to lawmakers described the threat actor’s tradecraft as “sophisticated,” and noted in particular that the attacker leveraged a commercial Internet Service Provider vendor’s infrastructure to bypass the FBI’s network security controls. That detail is the part the Bureau will have stayed up nights about. It means the access path was not a phishing email or a stolen laptop. It was an upstream telecommunications vendor whose infrastructure trust relationship with the FBI was successfully turned. That is the same architectural pattern as the SaaS supply chain pivots described elsewhere in this article, scaled up to the level of nation-state intelligence operations against a federal law enforcement system.

A his­to­ri­an’s ques­tion worth paus­ing on: which of the two FBI in­ci­dents in this quar­ter is the one a care­ful per­son would ac­tu­ally want to know more about? The Patel per­sonal Gmail leak, with its pho­tographs from 2011 and per­sonal cor­re­spon­dence from be­fore he held of­fice? Or the breach of the sys­tem the Bureau uses to man­age fed­eral wire­taps and which holds PII on the sub­jects of ac­tive FBI in­ves­ti­ga­tions? The an­swer is ob­vi­ous. The rel­a­tive cov­er­age of the two sto­ries is also ob­vi­ous, and the gap be­tween those two facts is one of the clean­est ex­am­ples in this en­tire note­book of the si­lence the ar­ti­cle keeps re­turn­ing to.

On March 31, 2026, an at­tacker hi­jacked the npm ac­count of the lead main­tainer of the Axios JavaScript HTTP client li­brary, one of the most-down­loaded pack­ages in the en­tire JavaScript ecosys­tem at roughly 100 mil­lion weekly down­loads, and pub­lished two ma­li­cious ver­sions: 1.14.1 and 0.30.4 (Huntress; The Hacker News; Bloomberg; TechCrunch; Sophos; Microsoft Security). The ma­li­cious ver­sions sat live on the npm reg­istry for about two to three hours be­fore be­ing pulled. Inside that win­dow, every CI pipeline, every de­vel­oper work­sta­tion, and every cloud build that pulled the lat­est mi­nor or patch range silently in­stalled a hid­den de­pen­dency that fetched and ex­e­cuted a cross-plat­form Remote Access Trojan.

On April 1, 2026, the Google Threat Intelligence Group pub­licly at­trib­uted the op­er­a­tion to UNC1069, a North Korea-nexus fi­nan­cially-mo­ti­vated clus­ter (Google Cloud Blog; Axios). On April 2, the Axios lead main­tainer Jason Saayman pub­lished a post-mortem de­scrib­ing what ac­tu­ally hap­pened, and the trade­craft is the part that should be mak­ing every­one in the open source ecosys­tem re­think how trust works on a per­sonal level.

The at­tacker did not ex­ploit a CVE. The at­tacker built an or­ga­ni­za­tion. They im­per­son­ated the founder of a real com­pany us­ing a cloned iden­tity and plau­si­ble out­reach. They in­vited the main­tainer into a real Slack work­space that had been care­fully branded to look le­git­i­mate, with chan­nel ac­tiv­ity, linked so­cial con­tent, and what ap­peared to be team pro­files and other open source main­tain­ers as fake mem­bers. They moved the con­ver­sa­tion to a Microsoft Teams meet­ing pop­u­lated with what looked like mul­ti­ple par­tic­i­pants. By the time the at­tacker re­quested any ac­tion that touched the main­tain­er’s npm ac­count, the so­cial proof was over­whelm­ing.

This is the highest-effort npm supply chain operation publicly disclosed since the 2024 XZ Utils backdoor, and it is qualitatively different. XZ was patient identity laundering across years. The Axios attack was patient identity laundering across weeks, with a fake Slack workspace and a fake Teams meeting standing in for years of GitHub commits. The bar to compromise a heavily-used open source maintainer just dropped from “infiltrate the project for two years” to “build a convincing Slack and host one Teams call.”

If you ran npm in­stall against ax­ios 1.14.1 or 0.30.4 in any en­vi­ron­ment, ro­tate every se­cret in that en­vi­ron­ment now and down­grade to 1.14.0 or 0.30.3. Microsoft Security, Sophos, Huntress, and Malwarebytes have all pub­lished de­tec­tion guid­ance.
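For teams auditing retroactively, the check itself is mechanical. A rough sketch of a lockfile scan for the two malicious releases named above; the lockfile shape assumed here is an npm v2/v3 `package-lock.json` with a `"packages"` map, so yarn or pnpm lockfiles would need a different parser:

```python
import json

# The two compromised axios releases named in the advisory.
MALICIOUS = {"1.14.1", "0.30.4"}

def compromised_axios_versions(lockfile_text: str) -> list[str]:
    """Return any known-malicious axios versions pinned in an npm
    package-lock.json, including nested node_modules entries."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        is_axios = path == "node_modules/axios" or path.endswith("/node_modules/axios")
        if is_axios and meta.get("version") in MALICIOUS:
            hits.append(meta["version"])
    return hits

sample = json.dumps({
    "packages": {
        "": {"name": "app"},
        "node_modules/axios": {"version": "1.14.1"},
    }
})
print(compromised_axios_versions(sample))  # -> ['1.14.1']
```

A hit in any environment means treating that environment as compromised: rotate its secrets, not just the package pin.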

Cisco took two distinct hits in roughly the same window. The first was a supply chain compromise: in March 2026, attackers used credentials stolen via the Trivy supply chain attack to breach Cisco’s internal development environment and clone more than 300 GitHub repositories (BleepingComputer; SocRadar; TechCrunch). The stolen code reportedly included source for Cisco’s AI-powered products and customer code belonging to banks, business process outsourcing firms, and U.S. government agencies.

The sec­ond was fi­nan­cial. On March 31, 2026, the same day the Axios story broke, ShinyHunters pub­lished an ex­tor­tion post claim­ing theft of over 3 mil­lion Salesforce records from Cisco con­tain­ing per­sonal data, along­side GitHub repos­i­tory con­tents, AWS bucket data, and other in­ter­nal cor­po­rate as­sets (Hackread; Cybernews; SC Media). The dead­line for pay­ment was set for April 3, 2026. Cisco has not pub­licly con­firmed the ShinyHunters claim. The Trivy-linked source code theft is on firmer re­port­ing ground.

Two breaches in two weeks, one supply chain and one SaaS, both targeting one of the most security-mature vendors in the industry. If Cisco can be hit twice in a month through these vectors, whether your own organization has been hit through them is mostly a question of whether anyone is bothering to look.

Of every in­ci­dent in this chron­i­cle, Mercor is the one most peo­ple out­side the AI in­dus­try have never heard of, and it is also the one most likely to turn out to mat­ter most. Mercor is a two-year-old AI re­cruit­ing and train­ing-data startup val­ued at ap­prox­i­mately $10 bil­lion fol­low­ing a $350 mil­lion Series C round led by Felicis Ventures in October 2025. Its cus­tomers in­clude OpenAI, Anthropic, and Meta (Fortune; TechCrunch; Cybernews). Which means one 2026 sup­ply chain com­pro­mise against Mercor touches the train­ing data pipelines of three of the largest fron­tier AI labs in the world si­mul­ta­ne­ously.

The attack chain is the cleanest example in this entire article of how upstream open source trust gets turned into downstream enterprise extortion. A threat actor tracked as TeamPCP compromised LiteLLM, a widely-used open source library that developers use to plug their applications into AI services and which is downloaded millions of times per day, and planted credential-harvesting malware inside it (The Register; SecurityWeek; BankInfoSecurity). The malicious code was live for hours before being identified and removed. Mercor has said it was “one of thousands of companies affected” by the LiteLLM compromise. What makes Mercor the headline victim is not that it was uniquely vulnerable. It is that the harvested credentials led into an environment holding the AI industry’s single most sensitive shared asset: training data, labeling protocols, and data selection criteria that the three largest frontier labs have each spent years and billions of dollars developing.

Lapsus$ sub­se­quently claimed re­spon­si­bil­ity for the down­stream Mercor breach on its leak site and pub­lished sam­ples (TechCrunch; PureWL; Cybernews). The claimed haul is ap­prox­i­mately 4 ter­abytes of data, bro­ken down as roughly 211 GB of data­base records, 939 GB of source code, and 3 TB of stor­age in­clud­ing can­di­date pro­files, per­son­ally iden­ti­fi­able in­for­ma­tion, em­ployer data, API keys, in­ter­nal Slack dumps, tick­et­ing sys­tem ex­ports, and, most dis­turbingly, videos pur­port­edly show­ing con­ver­sa­tions be­tween Mercor’s AI sys­tems and the con­trac­tors those sys­tems were train­ing. That last cat­e­gory is the part that should be get­ting more at­ten­tion. It is not just data about train­ing data. It is footage of the train­ing process it­self.

Note the clus­ter con­ver­gence here. Lapsus$ is one of the three legs of the Scattered LAPSUS$ Hunters (SLH) al­liance de­scribed ear­lier in this ar­ti­cle. The Mercor breach, the Rockstar Games breach via Anodot, the Cisco Salesforce ex­tor­tion on March 31, and the broader ~400-organization Salesforce mega-cam­paign are all, in vary­ing com­bi­na­tions, op­er­a­tions by the same new apex-preda­tor crim­i­nal al­liance. The pat­tern is no longer that a hand­ful of un­re­lated groups hap­pened to have a big quar­ter. It is that one newly-merged crim­i­nal col­lec­tive is run­ning an in­dus­trial-scale SaaS-and-supply-chain ex­tor­tion cam­paign across every sec­tor of the global en­ter­prise, and Mercor is the AI-industry-specific node of that cam­paign.

Business im­pact to date. Meta has paused its con­tracts with Mercor in­def­i­nitely. Five Mercor con­trac­tors have filed law­suits al­leg­ing per­sonal data ex­po­sure. Other large cus­tomers are re­port­edly re­assess­ing the re­la­tion­ship (TechCrunch; Strike Graph). For a two-year-old com­pany at a $10 bil­lion val­u­a­tion whose en­tire busi­ness model is be­ing the trusted data mid­dle­ware be­tween con­trac­tors and fron­tier AI labs, los­ing the trust of one of those labs is an ex­is­ten­tial event. Losing the trust of all three would be the end.

The struc­tural ob­ser­va­tion worth paus­ing on. The global fron­tier AI in­dus­try, in 2026, is ef­fec­tively run­ning on a shared data pipeline pro­vided by a small num­ber of ven­dors most of the pub­lic has never heard of. Mercor is one of those ven­dors. Its com­pro­mise demon­strates that the AI labs are not in fact the perime­ter that mat­ters. The perime­ter that mat­ters is the iden­tity and in­tegrity of every up­stream de­pen­dency in the data pipeline, and most of those de­pen­den­cies are ei­ther two-year-old star­tups or open source li­braries main­tained by a hand­ful of de­vel­op­ers. This is the same struc­tural prob­lem the rest of this ar­ti­cle keeps cir­cling: the mod­ern en­ter­prise no longer has a de­fen­si­ble bound­ary, only a chain of trust re­la­tion­ships, any of which can be turned. The AI in­dus­try in­her­ited that same ar­chi­tec­ture and is learn­ing the same les­son in real time.

On March 21, 2026, a threat actor using the handle “rose87168” began offering for sale 6 million records extracted from Oracle Cloud, with claims that more than 140,000 Oracle Cloud tenants were potentially affected (eSecurityPlanet; Cybersecurity Dive; FINRA). The breach was tied to CVE-2021-35587, a vulnerability in Oracle Access Manager and the OpenSSO Agent component of Oracle Fusion Middleware, used to compromise Oracle’s Single Sign-On and LDAP systems.

Oracle’s initial public response denied that its main cloud platform had been breached. Oracle subsequently acknowledged that an unauthorized party had accessed its legacy cloud environment, characterizing the affected systems as “obsolete servers” (CSO Online). The HIPAA Journal later reported that up to 80 hospitals were potentially affected by data exposure tied to the same incident, and Parexel International confirmed that a security flaw in Oracle’s cloud infrastructure had affected its Oracle OCI E-Business Suite environment (HIPAA Journal).

The pattern here is the one we used to call “shadow legacy” and have apparently stopped warning about. Hyperscale cloud providers carry quiet inheritances of older platforms that customers were moved off of years ago, but whose operational shells were never actually decommissioned, and the line between “main cloud” and “legacy cloud” is meaningful in marketing copy and meaningless to an attacker who finds a working credential.

On the financially-motivated side of the wave, ShinyHunters claimed in early April 2026 to have breached Rockstar Games. Rockstar publicly confirmed a “third-party data breach” and characterized the accessed information as “limited” and “non-material” (Engadget; Tom’s Hardware). ShinyHunters tells a different story.

ShinyHunters did not directly compromise Rockstar’s internal infrastructure. They compromised Anodot, a third-party SaaS platform Rockstar uses for cloud cost monitoring, lifted authentication tokens from inside Anodot’s environment, and used those tokens to authenticate into Rockstar’s Snowflake instance (Hackread; BleepingComputer; TechRadar; Kotaku; PC Gamer). The ransom note posted to ShinyHunters’ leak channel reads in part: “Rockstar Games, your Snowflake instances were compromised thanks to Anodot.com. Pay or leak. Final warning to reach out by 14 Apr 2026.”

The dead­line is two days from the date of this ar­ti­cle. Whatever ends up pub­lished, the struc­ture of the at­tack is the part worth re­mem­ber­ing: a small SaaS an­a­lyt­ics ven­dor most peo­ple have never heard of be­came the ac­cess path into one of the most valu­able cre­ative IP en­vi­ron­ments on the planet, weeks be­fore the most an­tic­i­pated game launch in in­dus­try his­tory.

This is the same play­book that pro­duced the 2024 Snowflake wave. It is not new. It has just been re­fined and aimed at higher-value tar­gets, and it is go­ing to keep work­ing un­til the SaaS-to-data-warehouse trust chain gets re-ar­chi­tected end to end.
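One generalizable defense against this playbook: a token issued to a SaaS integration should only ever authenticate from that vendor's known egress ranges, and logins from anywhere else are the replay signal. A toy anomaly check over exported login events; the event fields, the `SVC_` naming convention, and the allowlist CIDR are all illustrative assumptions, not any particular warehouse's schema:

```python
from ipaddress import ip_address, ip_network

# Hypothetical allowlist: the egress CIDR the analytics vendor's
# integration is documented to call in from (203.0.113.0/24 is a
# reserved documentation range, used here purely for illustration).
VENDOR_EGRESS = [ip_network("203.0.113.0/24")]

def suspicious_logins(events: list[dict]) -> list[dict]:
    """Flag service-account logins that originate outside the vendor's
    known egress ranges. Each event: {"user", "ip", "success"}."""
    flagged = []
    for e in events:
        if not e["user"].startswith("SVC_"):
            continue  # only integration service accounts are in scope
        src = ip_address(e["ip"])
        if not any(src in net for net in VENDOR_EGRESS):
            flagged.append(e)  # token used from an unexpected network
    return flagged

events = [
    {"user": "SVC_ANALYTICS", "ip": "203.0.113.7", "success": True},   # expected path
    {"user": "SVC_ANALYTICS", "ip": "198.51.100.9", "success": True},  # token replayed elsewhere
]
print(suspicious_logins(events))
```

A stolen token that authenticates successfully looks identical to the legitimate integration in every respect except its source network, which is why this check catches what credential validity alone cannot.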

On April 6, 2026, a cy­ber­at­tack on avi­a­tion IT sys­tems used by ma­jor European hubs took down check-in, bag­gage han­dling, and board­ing at Heathrow, Charles-de-Gaulle, Frankfurt, and Copenhagen si­mul­ta­ne­ously (The Traveler; National Today; VisaHQ; BlackFog on Collins Aerospace). More than 1,600 flights across the con­ti­nent were can­celled or de­layed on April 6 alone, with at least 13 can­cel­la­tions at Heathrow by the af­ter­noon of April 4 as the dis­rup­tion ramped up. Staff re­verted to man­ual check-in and pa­per board­ing passes, a pro­ce­dure most ground crews un­der thirty have never been trained on.

The vec­tor re­port­edly traces back to Collins Aerospace’s MUSE (Multi User System Environment) plat­form, the shared check-in and board­ing soft­ware used across many European air­line op­er­a­tions, which had al­ready ab­sorbed a sep­a­rate ran­somware at­tack in September 2025 that knocked Collins of­fline and pro­duced wide­spread Air France and KLM cus­tomer data ex­po­sure. The 2026 in­ci­dent is, in ef­fect, the sec­ond time in roughly six months that the same up­stream avi­a­tion soft­ware sup­plier has pro­duced a con­ti­nent-scale out­age.

The wider con­text, from the European Union Aviation Safety Agency: EASA doc­u­mented a 600 per­cent spike in avi­a­tion cy­ber­at­tacks be­tween 2024 and 2025, with air­ports world­wide ab­sorb­ing roughly 1,000 cy­ber­at­tacks per month by the end of that pe­riod. The April 2026 event is not an out­lier in that en­vi­ron­ment. It is the largest vis­i­ble sur­face of a much wider, much qui­eter trend in which the en­tire global avi­a­tion IT stack is start­ing to look load-bear­ing on a small num­ber of sin­gle-ven­dor de­pen­den­cies that no in­di­vid­ual air­port se­cu­rity team can de­fend in iso­la­tion.

I am putting this sec­tion in the mid­dle of the ar­ti­cle rather than at the top be­cause the sourc­ing is un­even and the ver­i­fi­ca­tion sta­tus is un­set­tled, but in the long view this is the in­ci­dent a his­to­rian will most likely cir­cle and un­der­line.

Around early February 2026, a hacker operating under the alias FlamingChina began posting samples on Telegram of what they claimed was a multi-petabyte data set exfiltrated from the National Supercomputing Center (NSCC) in Tianjin, one of the central Chinese state computing facilities, which provides infrastructure services to more than 6,000 clients including advanced science institutions and Chinese defense agencies. By April 8, 2026, mainstream Western press had picked up the story and reported the same basic claim from the actor: approximately 10 petabytes of sensitive data, equivalent to roughly 10,240 terabytes, extracted over a six-month period through a compromised VPN domain into NSCC’s environment, with a botnet quietly siphoning data out without detection (CNN; Tom’s Hardware; TechRadar; SC Media; Security Magazine; BGR; Tech Startups; Vision Times; Computing.co.uk; Security Affairs).

The samples that have been circulated reportedly include documents marked “secret” in Chinese, along with technical files, animated simulations, and renderings of defense equipment including bombs and missiles. The named provenance of some of the material includes the Aviation Industry Corporation of China and the National University of Defense Technology. CNN spoke with multiple cybersecurity experts who reviewed the samples and assessed them as appearing genuine, while noting that independent verification of the full dataset has not been possible. Some researchers have suggested the 10 petabyte figure may be inflated for commercial leverage on BreachForums, and the actor is reportedly offering a limited preview for thousands of dollars and full access for hundreds of thousands, payable in cryptocurrency.

Take a mo­ment with the scale. Ten petabytes is, in rough terms, equiv­a­lent to two bil­lion pho­tographs, or the en­tire tex­tual con­tent of the pub­lic web sev­eral times over. It is the con­tents of roughly ten thou­sand de­cent lap­tops. Even if the ac­tual ex­fil­tra­tion turns out to be one tenth of the claim, the re­sult­ing one petabyte event is still in a cat­e­gory by it­self, larger than es­sen­tially any sin­gle named breach in the pub­lic his­tory of cy­ber­se­cu­rity. And the tar­get is not a mar­ket­ing data­base. It is a state su­per­com­put­ing fa­cil­ity host­ing work for Chinese de­fense acad­e­mia and avi­a­tion in­dus­try pro­grams.
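The comparisons above are easy to sanity-check. Using binary units (the article's 10 PB = 10,240 TB equivalence implies 1 PB = 1,024 TB), the per-item sizes implied by the analogies come out plausible:

```python
PB = 1024**5  # bytes per petabyte in binary units, matching 10 PB = 10,240 TB
total_bytes = 10 * PB
assert 10 * 1024 == 10_240  # the article's TB equivalence

# Implied per-item sizes behind the article's two comparisons:
photo_mb = total_bytes / 2_000_000_000 / 1024**2  # "two billion photographs"
laptop_tb = total_bytes / 10_000 / 1024**4        # "ten thousand decent laptops"
print(round(photo_mb, 1), round(laptop_tb, 2))    # -> 5.4 1.02
```

Roughly 5 MB per photograph and 1 TB per laptop, both ordinary figures, which is what makes the aggregate number so startling.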

A few things make this incident historically interesting beyond the size. First, it is one of the very rare cases of a major Chinese state computing facility being publicly breached and looted from outside. The historical asymmetry of major reported breaches has run heavily in the other direction, with Chinese state actors as the named operators against Western targets. If the FlamingChina claims hold up even partially, the symmetry has shifted. Second, the reported access vector, a compromised VPN domain followed by a long-dwell botnet quietly exfiltrating over six months without detection, is the same pattern Western incident response teams describe in their worst nation-state engagements. The defenders’ problems and the attackers’ problems are starting to look like the same problem. Third, and most quietly, the U.S. mainstream press picked the story up for a single news cycle in early April and then mostly let it go. A potential record-breaking exfiltration event from a Chinese state supercomputer is the sort of thing that, in any prior decade, would have produced sustained reporting for weeks. In 2026 it produced a few articles, a flurry of trade press coverage, and then quiet.

The Chinese gov­ern­ment has not pub­licly ac­knowl­edged the in­ci­dent. The sam­ples re­main in cir­cu­la­tion. The claim re­mains un­ver­i­fied at full scope. The his­tor­i­cal im­por­tance re­mains, in the mean­time, sus­pended in ex­actly the kind of par­tial in­for­ma­tion state where most gen­uinely un­prece­dented events live for a while be­fore they fi­nally get named.

It would be a mis­take to read the 2026 wave as a bolt from the blue. It is more use­ful to read it as the vis­i­ble sur­face of a longer pre-po­si­tion­ing cam­paign that has been qui­etly run­ning un­der­neath the pub­lic-fac­ing in­ci­dents for years. Two Chinese state-aligned ac­tor clus­ters, Volt Typhoon and Salt Typhoon, are the rel­e­vant back­ground.

Volt Typhoon, attributed to the People’s Republic of China and active since at least mid-2021, has been documented inside U.S. critical infrastructure across the communications, manufacturing, utility, transportation, construction, maritime, government, IT, and education sectors (CISA AA24-038A; Microsoft Security). The U.S. Intelligence Community has publicly assessed that Volt Typhoon’s targeting carries limited espionage value and is instead consistent with prepositioning to disrupt U.S. infrastructure in the event of a future crisis, particularly in Guam and near U.S. military bases in the Pacific. The International Institute for Strategic Studies published “Volt Typhoon’s long shadow” in January 2026, noting that researchers warn the group remains embedded in U.S. utilities and that some compromises may never be fully discovered (IISS; The Record).

Salt Typhoon is the parallel telecom-focused cluster, attributed to China’s Ministry of State Security, responsible for the high-profile compromises of multiple major U.S. telecommunications carriers that surfaced in late 2024 and continued through 2025, including reported access to lawful intercept systems used by U.S. law enforcement (Congress.gov; Wikipedia overview). Both groups are still active in 2026.

The reason these two names belong in this article, even though their public disclosures predate the 2026 incidents catalogued above, is that they describe the baseline inside which the 2026 wave is happening. The named 2026 incidents are not the entire picture. They are the visible surface. Underneath them, in U.S. utilities and telecommunications infrastructure, there are pre-positioned implants that the relevant federal agencies have publicly stated may never be fully evicted. The historian sitting in 2050 reading this period is going to want to know that the surface events of the first hundred days of 2026 occurred against a background in which the deeper infrastructure had already been quietly compromised for years. That is the kind of context that gets lost when each incident is reported as if it were the first.

Honda’s 2026 has been a slow drip rather than a sin­gle event. The re­port­ing de­scribes a se­quence of dis­tinct in­ci­dents: API flaws in Honda’s e-com­merce plat­form that ex­posed cus­tomer data, dealer panel data, and in­ter­nal doc­u­ments (BleepingComputer; SecurityWeek); a pass­word re­set flow ex­ploit that ex­posed ad­di­tional data (Cybersecurity Tribe); and a Clawson Honda deal­er­ship data breach claimed by the PLAY ran­somware group that ex­posed names, Social Security num­bers, ad­dresses, dri­ver’s li­cense data, and dates of birth, with no­ti­fi­ca­tion let­ters go­ing out as re­cently as April (Claim Depot).

None of these are in­di­vid­u­ally cat­a­strophic. Stacked to­gether they tell a fa­mil­iar story about a man­u­fac­tur­ing gi­ant whose at­tack sur­face has out­grown its se­cu­rity ma­tu­rity, and they be­long in the wave count.

The named in­ci­dents above are just the ones that broke through. The full first-quar­ter 2026 list is much longer. Brief Defense, PKWARE, Cybersecurity News, ACI Learning, and CSIS are all main­tain­ing 2026 in­ci­dent time­lines, and the pat­tern is con­sis­tent.

* January 2026: Illinois and Minnesota state sys­tems ex­posed per­sonal data on nearly one mil­lion peo­ple; the Match fam­ily of dat­ing apps was breached by ShinyHunters; Eurail con­firmed unau­tho­rized ac­cess; re­searchers found a 149-million-record data­base pub­licly ex­posed via cloud mis­con­fig­u­ra­tion; Microsoft January Patch Tuesday shipped 115 fixes in­clud­ing the Office bug APT28 be­gan ex­ploit­ing within days; Nike in­ves­ti­gated a pos­si­ble cy­ber at­tack af­ter WorldLeaks claimed 1.4 TB of in­ter­nal com­pany data on January 24; Red Hat suf­fered a pri­vate GitHub and GitLab com­pro­mise by the Crimson Collective, with roughly 570 GB ex­fil­trated from over 28,000 in­ter­nal repos­i­to­ries in­clud­ing ap­prox­i­mately 800 Customer Engagement Reports con­tain­ing in­fra­struc­ture de­tails and cre­den­tials for large en­ter­prise clients; Pickett USA breach ex­posed sen­si­tive en­gi­neer­ing data linked to U.S. util­i­ties; ShinyHunters / SLH vish­ing cam­paigns tar­get­ing en­ter­prise SSO en­vi­ron­ments in­clud­ing Okta surged in early-to-mid January.

* February 2026: BridgePay, a payments platform serving city governments, was hit by ransomware; Odido disclosed unauthorized access affecting up to 6.2 million customers; Change Healthcare, a UnitedHealth subsidiary, was hit again, this time by AlphV/BlackCat; Cisco disclosed that a critical Catalyst SD-WAN vulnerability (CVE-2026-20127, CVSS 10.0) had been actively exploited since 2023; APT28 was observed weaponizing CVE-2026-21509 against Ukrainian and EU government targets; the FBI detected abnormal activity (Feb 17) on the internal network it uses to manage wiretaps and surveillance, eventually classified as a “major incident” on March 23; the 2026 Winter Olympics opened in Milan and Cortina d’Ampezzo and the pro-Russian DDoS group NoName057(16) began hitting Italian Olympic infrastructure, several national Olympic committees (Lithuania, Poland, Spain), the Cortina d’Ampezzo tourism site, and Milan Malpensa Airport; University of Mississippi Medical Center closed clinics following a ransomware attack and reverted to manual patient care; France’s National Bank Account Registry (FICOBA) was hit through credential weakness exploitation; Iron Mountain, Panera Bread, SmarterTools, Step Finance, and Advantest Corporation all absorbed publicly disclosed incidents.

* March 2026: Stryker wiper event (March 11); Microsoft published the “Help on the line” report on a Teams-vishing initial access pattern (March 16); Oracle Cloud “rose87168” listing (March 21); Lockheed Martin / 28-engineer doxxing claims (March 23); European Commission Europa cloud platform breached (March 24); Kash Patel personal email dump (March 27); the Cisco Trivy supply chain breach surfaces; TeamPCP compromises the LiteLLM open source library in a supply chain attack that propagates to “thousands of companies” including Mercor, the $10B AI training-data vendor whose customers include OpenAI, Anthropic, and Meta; Axios npm hijack (March 31); ShinyHunters Cisco Salesforce extortion post (March 31); Mercor discloses the security incident publicly on March 31 / April 1; FlamingChina samples from the Tianjin NSCC breach circulate widely on Telegram and BreachForums following early-February initial postings.

* April 2026 so far: Google Threat Intelligence Group attributes the Axios npm compromise to UNC1069 / North Korea (April 1); Axios maintainer post-mortem published (April 2); Fortune confirms the Mercor breach publicly (April 2), with Lapsus$ claiming 4 TB of exfiltrated data including ~211 GB database records, ~939 GB source code, and ~3 TB storage covering candidate PII, employer data, API keys, Slack dumps, and videos of Mercor AI systems talking to contractors; Meta pauses all AI data training contracts with Mercor indefinitely, and five Mercor contractors file lawsuits over personal data exposure; DOJ publicly confirms the FBI internal “major incident” classification (early April); a continent-wide aviation IT attack on April 6 cripples check-in, baggage, and boarding at Heathrow, Charles-de-Gaulle, Frankfurt, and Copenhagen, cancelling or delaying more than 1,600 flights in a single day, traced to the Collins Aerospace MUSE platform that had already been hit in September 2025; on April 7, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convene an emergency in-person meeting in Washington with the CEOs of Goldman Sachs, Citigroup, Morgan Stanley, Bank of America, and Wells Fargo to brief them on the cyber risks posed by Anthropic’s Mythos AI model, which has identified thousands of previously-unknown zero-day vulnerabilities in every major operating system and web browser; Rockstar Games confirms the third-party breach via Anodot/Snowflake (early April); Snowflake itself confirms “unusual activity” affecting more than a dozen customer accounts linked to the Anodot integration; ShinyHunters sets an April 14 ransom deadline and tells reporters they had Anodot access “for some time” and had also tried (and failed) to breach Salesforce directly; FlamingChina supercomputer claims surface in mainstream Western press (CNN, Tom’s Hardware, TechRadar, BGR, around April 8-10); a National Public Data successor breach surfaces with roughly 2.9 billion records of personal information sold for 3.5 million dollars on the dark web; Yale New Haven Health System discloses a breach affecting 5.5 million patients; HIPAA Journal links up to 80 hospitals to the Oracle Cloud incident; March 2026 ransomware activity totals 672 incidents in a single month, with Qilin, Akira, and DragonForce alone accounting for roughly 40 percent.

By the time you finish counting the named incidents above, you are well past forty, and the count does not include the 300 to 400 organizations swept up in the SLH Salesforce mega-campaign or the roughly 1.5 billion Salesforce records estimated stolen across that single operation. Nor does it include the 672 ransomware incidents that ransomware tracking firms recorded in March 2026 alone, with Qilin, Akira, and DragonForce accounting for about 40 percent of that single-month total. Nor does it include the dozens of smaller school district, municipal, and healthcare ransomware events that have become so routine they no longer make national news. The 2025 baseline already showed publicly disclosed ransomware attacks rising 49% year-over-year to 1,174 incidents, with healthcare absorbing 22% of the total. The 2026 first-quarter pace, on the trajectory above, is comfortably on track to make 2025 look quiet.

This is the part of the notebook where the historian's voice has to be honest about what it does not fully understand. The events above are real. The volume is real. The pattern is real. The relative quietness around them in mainstream Western public discourse is also real, and it is genuinely puzzling rather than obviously sinister. Several plausible explanations sit on the table at once, and the honest move is to lay them out without insisting on any of them.

One possibility is that attribution to a state actor has become professionally expensive. Calling a Handala wiper event an Iranian intelligence-linked destructive operation against a U.S. medical device company, or calling the FlamingChina supercomputer leak what it might be, takes on political weight that practitioners and vendors increasingly prefer to avoid. Analysis gets softened to "threat actor" or "sophisticated adversary," and the geopolitical reading gets edited out without anyone deciding to edit it out. That softening is not a conspiracy. It is the cumulative effect of many small commercial choices that each individually seem reasonable.

A second possibility is that the SaaS supply chain story is uncomfortable for the security industry to dwell on, because the industry sells into it. Saying out loud that the modern enterprise no longer has a defensible perimeter, only a long chain of vendor trust relationships that can be turned at any link, is also saying that the security stack the industry shipped last quarter cannot stop the attacks the industry is supposed to be talking about this quarter. That is a hard public message to deliver from inside a vendor.

A third possibility is much simpler and possibly the most powerful. The news cycle has trained the public to bounce off cyber stories. The audience has already absorbed Equifax, OPM, Yahoo, SolarWinds, NPD, and Snowflake, and the marginal shock of "another one" has flattened. When the marginal shock is flat, even genuinely unprecedented events struggle to land. Practitioners know this, so they save their breath. The silence may be less an act of suppression than an act of fatigue.

A fourth possibility is the one this notebook keeps circling back to. The parallel acceleration on the AI side of the industry is awkward to discuss in the same paragraph as the offensive incidents, because every cybersecurity vendor is currently racing to ship "AI-powered" defense. It is commercially uncomfortable to put the two lists on the same page, even if no one in particular is forbidding it. The absence of that pairing is, at minimum, a strange thing to notice in the historical record.

A historian writing in 2050 about the first hundred days of 2026 will probably find all four of these explanations partially true and none of them fully sufficient. What that historian will almost certainly notice, more than any single explanation, is the gap itself, and more specifically the layered nature of the gap. The April 7 meeting between the Treasury Secretary, the Fed Chair, and the CEOs of the largest U.S. banks proves something crucial about that layering. The silence has not fully held at the highest levels of the U.S. government. Bessent and Powell are clearly not in the dark, and neither are the people they briefed. What has held is the silence in the public discourse, in the mainstream press, in the day-to-day conversations practitioners have with their boards and their customers. The information is moving in private. It is just barely moving in public. A period that, on the evidence, looks unprecedented in the history of computing security passed through real-time public discourse without producing the kind of sustained, coherent, named conversation the events seem to deserve, and yet behind closed doors at the highest levels of financial regulation, the conversation is clearly happening. That asymmetry is the most interesting object in this entire notebook.

If you work in this field and the last hundred days have felt strange to you, you are not imagining it. Something genuinely unusual is happening, and the unusualness of how quietly it is happening may, in the long view, be the most historically interesting layer of all. Naming the gap, even gently, is a small contribution to making sure the period eventually gets the documentation it deserves.

...

Read the original on substack.com »

9 303 shares, 57 trendiness

Introducing a new spam policy for "back button hijacking"

Today, we are expanding our spam policies to address a deceptive practice known as "back button hijacking", which will become an explicit violation of the "malicious practices" spam policy, leading to potential spam actions.

When a user clicks the "back" button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation. It occurs when a site interferes with a user's browser navigation and prevents them from using their back button to immediately get back to the page they came from. Instead, users might be sent to pages they never visited before, be presented with unsolicited recommendations or ads, or otherwise just be prevented from normally browsing the web.
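The mechanic behind this usually involves the browser's History API. The sketch below models it with a hypothetical `HistoryModel` class (the class and the URLs are invented for illustration; real hijacking scripts call the browser's `history.pushState()` and listen for `popstate`): after the page pushes a duplicate entry for itself, the first press of "back" no longer leaves the site.

```typescript
// Minimal model of the browser history stack, to show the mechanic.
class HistoryModel {
  private stack: string[];
  private index = 0;
  constructor(initial: string) {
    this.stack = [initial];
  }
  // A normal navigation: drop any forward entries, append the new page.
  navigate(url: string): void {
    this.stack = this.stack.slice(0, this.index + 1);
    this.stack.push(url);
    this.index = this.stack.length - 1;
  }
  // history.pushState adds an entry WITHOUT loading a page; this is the
  // primitive that hijacking scripts abuse.
  pushState(url: string): void {
    this.navigate(url);
  }
  back(): string {
    if (this.index > 0) this.index -= 1;
    return this.stack[this.index];
  }
}

// The user arrives from search results and lands on an article.
const nav = new HistoryModel("https://search.example/results");
nav.navigate("https://site.example/article");

// Hijack: on load, the page pushes an extra entry for itself. The first
// "back" press now consumes this entry instead of leaving the site, and a
// popstate handler can redirect the user to ads from there.
nav.pushState("https://site.example/article#trap");

console.log(nav.back()); // still on site.example, not the search results
```

Removing the `pushState` call restores the expected behavior: one "back" press returns the user to the referrer.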

Why are we taking action?

We believe that the user experience comes first. Back button hijacking interferes with the browser's functionality, breaks the expected user journey, and results in user frustration. People report feeling manipulated and eventually less willing to visit unfamiliar sites. As we've stated before, inserting deceptive or manipulative pages into a user's browser history has always been against our Google Search Essentials.

We’ve seen a rise of this type of be­hav­ior, which is why we’re des­ig­nat­ing this an ex­plicit vi­o­la­tion of our ma­li­cious prac­tices

pol­icy, which says:

Malicious practices create a mismatch between user expectations and the actual outcome, leading to a negative and deceptive user experience, or compromised user security or privacy.

Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site's performance in Google Search results. To give site owners time to make any needed changes, we're publishing this policy two months in advance of enforcement on June 15, 2026.

What should site owners do?

Ensure you are not doing anything to interfere with a user's ability to navigate their browser history.

If you’re cur­rently us­ing any script or tech­nique that in­serts or re­places de­cep­tive or ma­nip­u­la­tive pages into a user’s browser his­tory that pre­vents them from us­ing their back but­ton to im­me­di­ately get back to the page they came from, you are ex­pected to re­move or dis­able it.

Notably, some instances of back button hijacking may originate from a site's included libraries or advertising platform. We encourage site owners to thoroughly review their technical implementation and remove or disable any code, imports, or configurations responsible for back button hijacking, to ensure a helpful and non-deceptive experience for users.

If your site has been impacted by a manual action and you have fixed the issue, you can always let us know by submitting a reconsideration request in Search Console. For questions or feedback, feel free to reach out on social media or discuss in our help community.

...

Read the original on developers.google.com »

10 299 shares, 16 trendiness

Building a CLI for all of Cloudflare

Cloudflare has a vast API surface. We have over 100 products, and nearly 3,000 HTTP API operations.

Increasingly, agents are the primary customer of our APIs. Developers bring their coding agents to build and deploy applications, agents, and platforms on Cloudflare, configure their account, and query our APIs for analytics and logs.

We want to make every Cloudflare product available in all of the ways agents need. For example, we now make Cloudflare's entire API available in a single Code Mode MCP server that uses less than 1,000 tokens. There's a lot more surface area to cover, though:

* CLI commands
* Workers Bindings, including APIs for local development and testing
* SDKs across multiple languages
* Our configuration file
* Terraform
* Developer docs
* API docs and OpenAPI schemas
* Agent Skills

Today, many of our products aren't available across every one of these interfaces. This is particularly true of our CLI, Wrangler. Many Cloudflare products have no CLI commands in Wrangler. And agents love CLIs.

So we’ve been re­build­ing Wrangler CLI, to make it the CLI for all of Cloudflare. It pro­vides com­mands for all Cloudflare prod­ucts, and lets you con­fig­ure them to­gether us­ing in­fra­struc­ture-as-code.

Today we’re shar­ing an early ver­sion of what the next ver­sion of Wrangler will look like as a tech­ni­cal pre­view. It’s very early, but we get the best feed­back when we work in pub­lic.

You can try the Technical Preview today by running npx cf. Or you can install it globally by running npm install -g cf.

Right now, cf provides commands for just a small subset of Cloudflare products. We're already testing a version of cf that supports the entirety of the Cloudflare API surface, and we will be intentionally reviewing and tuning the commands for each product to have output that is ergonomic for both agents and humans. To be clear, this Technical Preview is just a small piece of the future Wrangler CLI. Over the coming months we will bring this together with the parts of Wrangler you know and love.

To build this in a way that keeps in sync with the rapid pace of product development at Cloudflare, we had to create a new system that allows us to generate commands, configuration, binding APIs, and more.

We already generate the Cloudflare API SDKs, Terraform provider, and Code Mode MCP server based on the OpenAPI schema for the Cloudflare API. But updating our CLI, Workers Bindings, wrangler.jsonc configuration, Agent Skills, dashboard, and docs is still a manual process. This was already error-prone, required too much back and forth, and wouldn't scale to support the whole Cloudflare API in the next version of our CLI.

To do this, we needed more than could be expressed in an OpenAPI schema. OpenAPI schemas describe REST APIs, but we have interactive CLI commands that involve multiple actions combining both local development and API requests, Workers bindings expressed as RPC APIs, and Agent Skills and documentation that tie this all together.

We write a lot of TypeScript at Cloudflare. It's the lingua franca of software engineering. And we keep finding that it just works better to express APIs in TypeScript, as we do with Cap'n Web, Code Mode, and the RPC system built into the Workers platform.

So we introduced a new TypeScript schema that can define the full scope of APIs, CLI commands and arguments, and the context needed to generate any interface. The schema format is "just" a set of TypeScript types with conventions, linting, and guardrails to ensure consistency. But because it is our own format, it can easily be adapted to support any interface we need, today or in the future, while still being able to generate an OpenAPI schema.
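Purely as an illustration, a resource definition in a TypeScript-types schema of this kind might look like the following sketch. Every name here (`CommandDef`, `ResourceSchema`, the KV paths, the field names) is an invented assumption, not Cloudflare's actual format:

```typescript
// Hypothetical shape of a schema entry that could generate CLI commands,
// an OpenAPI schema, and a Workers binding from one definition.
interface CommandDef {
  verb: "get" | "list" | "create" | "delete"; // enforced verbs: always "get", never "info"
  httpMethod: "GET" | "POST" | "PUT" | "DELETE";
  path: string;               // REST path, used to emit the OpenAPI schema
  supportsJsonFlag: boolean;  // --json must be supported on every command
}

interface ResourceSchema {
  product: string;            // e.g. "kv"
  resource: string;           // e.g. "namespace"
  commands: CommandDef[];
  binding?: { rpcInterface: string }; // name of the generated Workers binding API
}

// Illustrative definition for a KV namespace resource.
const kvNamespace: ResourceSchema = {
  product: "kv",
  resource: "namespace",
  commands: [
    {
      verb: "get",
      httpMethod: "GET",
      path: "/accounts/{account_id}/storage/kv/namespaces/{namespace_id}",
      supportsJsonFlag: true,
    },
    {
      verb: "list",
      httpMethod: "GET",
      path: "/accounts/{account_id}/storage/kv/namespaces",
      supportsJsonFlag: true,
    },
  ],
  binding: { rpcInterface: "KVNamespace" },
};

console.log(kvNamespace.commands.every((c) => c.supportsJsonFlag)); // true
```

Because the definitions are ordinary typed values, conventions like "every command supports --json" become compile-time or lint-time checks rather than review comments.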

To date most of our focus has been at this layer: building the machine we needed, so that we can now start building the CLI and other interfaces we've wanted for years to be able to provide. This lets us start to dream bigger about what we could standardize across Cloudflare and make better for agents, especially when it comes to context engineering our CLI.

Agents expect CLIs to be consistent. If one command uses info as the syntax for getting information about a resource, and another uses get, the agent will expect one and call a non-existent command for the other. In a large engineering org of hundreds or thousands of people, and with many products, manually enforcing consistency through reviews is Swiss cheese. And you can enforce it at the CLI layer, but then naming differs between the CLI, REST API, and SDKs, making the problem arguably worse.

One of the first things we've done is to start creating rules and guardrails, enforced at the schema layer. It's always get, never info. Always --force, never --skip-confirmations. Always --json, never --format, and always supported across commands.
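Rules like these can be enforced mechanically. As a hedged sketch (the `lintCommand` function and its mapping are invented for illustration, not Cloudflare's implementation), a checker over proposed command tokens might look like:

```typescript
// Disallowed tokens mapped to their required replacements.
const forbidden: Record<string, string> = {
  "info": "get",
  "--skip-confirmations": "--force",
  "--format": "--json",
};

// Return one problem message per disallowed token found in a command line.
function lintCommand(tokens: string[]): string[] {
  const problems: string[] = [];
  for (const t of tokens) {
    if (t in forbidden) {
      problems.push(`use "${forbidden[t]}" instead of "${t}"`);
    }
  }
  return problems;
}

console.log(lintCommand(["kv", "namespace", "info", "--format"]));
// flags both "info" (use "get") and "--format" (use "--json")
console.log(lintCommand(["kv", "namespace", "get", "--json"])); // []
```

Running a check like this at the schema layer, rather than in code review, is what makes the convention hold across hundreds of generated commands.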

The Wrangler CLI is also fairly unique: it provides commands and configuration that can work with both simulated local resources and remote resources, like D1 databases, R2 storage buckets, and KV namespaces. This means consistent defaults matter even more. If an agent thinks it's modifying a remote database but is actually adding records to a local database, while the developer is using remote bindings to develop locally against a remote database, the agent won't understand why the newly-added records aren't showing up when it makes a request to the local dev server. Consistent defaults, along with output that clearly signals whether commands are applied to remote or local resources, ensure agents have explicit guidance.

Today we are also releasing Local Explorer, a new feature available in open beta in both Wrangler and the Cloudflare Vite plugin.

Local Explorer lets you introspect the simulated resources that your Worker uses when you are developing locally, including KV, R2, D1, Durable Objects, and Workflows. The same things you can do via the Cloudflare API and Dashboard with each of these, you can also do entirely locally, powered by the same underlying API structure.

For years we’ve made a bet on fully lo­cal de­vel­op­ment — not just for Cloudflare Workers, but for the en­tire plat­form. When you use D1, even though D1 is a hosted, server­less data­base prod­uct, you can run your data­base and com­mu­ni­cate with it via bind­ings en­tirely lo­cally, with­out any ex­tra setup or tool­ing. Via Miniflare, our lo­cal de­vel­op­ment plat­form em­u­la­tor, the Workers run­time pro­vides the ex­act same APIs in lo­cal dev as in pro­duc­tion, and uses a lo­cal SQLite data­base to pro­vide the same func­tion­al­ity. This makes it easy to write and run tests that run fast, with­out the need for net­work ac­cess, and work of­fline.

But until now, working out what data was stored locally required you to reverse-engineer and introspect the contents of the .wrangler/state directory, or install third-party tools.

Now whenever you run an app with the Wrangler CLI or the Cloudflare Vite plugin, you will be prompted to open the Local Explorer (keyboard shortcut e). This provides you with a simple, local interface to see what bindings your Worker currently has attached, and what data is stored against them.

When you build using agents, Local Explorer is a great way to understand what the agent is doing with data, making the local development cycle much more interactive. You can turn to Local Explorer anytime you need to verify a schema, seed some test records, or just start over and DROP TABLE.

Our goal here is to provide a mirror of the Cloudflare API that only modifies local data, so that all of your local resources are available via the same APIs that you use remotely. And by making the API shape match across local and remote, when you run CLI commands in the upcoming version of the CLI and pass a --local flag, the commands just work. The only difference is that the command makes a request to this local mirror of the Cloudflare API instead.

Starting today, this API is available at /cdn-cgi/explorer/api on any Wrangler- or Vite-plugin-powered application. By pointing your agent at this address, it will find an OpenAPI specification it can use to manage your local resources for you, just by talking to your agent.
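A script or agent tool can discover that specification from the dev server's origin. This is a hedged sketch: the /cdn-cgi/explorer/api path comes from the announcement, but the helper names and the localhost origin are assumptions (your dev server's port may differ), and the fetch is only defined, not executed, since it needs a running dev server.

```typescript
// Build the Local Explorer API address for a given dev-server origin.
function explorerUrl(origin: string): string {
  return `${origin.replace(/\/+$/, "")}/cdn-cgi/explorer/api`;
}

// Fetch the OpenAPI specification the endpoint serves. Requires a running
// `wrangler dev` or Vite dev server, so it is not invoked here.
async function loadSpec(origin: string): Promise<unknown> {
  const res = await fetch(explorerUrl(origin));
  if (!res.ok) throw new Error(`explorer API unavailable: ${res.status}`);
  return res.json();
}

console.log(explorerUrl("http://localhost:8787/"));
// http://localhost:8787/cdn-cgi/explorer/api
```

With the spec in hand, an agent can enumerate the local resource endpoints the same way it would enumerate the remote Cloudflare API.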

Now that we have built the machine, it's time to take the best parts of Wrangler today, combine them with what's now possible, and make Wrangler the best CLI possible for using all of Cloudflare.

You can try the technical preview today by running npx cf. Or you can install it globally by running npm install -g cf.

With this very early version, we want your feedback, not just about what the technical preview does today, but about what you want from a CLI for Cloudflare's entire platform. Tell us what you wish was an easy one-line CLI command but takes a few clicks in our dashboard today. What you wish you could configure in wrangler.jsonc, like DNS records or Cache Rules. And where you've seen your agents get stuck, and what commands you wish our CLI provided for your agent to use.

Jump into the Cloudflare Developers Discord and tell us what you'd like us to add first to the CLI, and stay tuned for many more updates soon.

Thanks to Emily Shen for her valuable contributions to kicking off the Local Explorer project.

...

Read the original on blog.cloudflare.com »
