10 interesting stories served every morning and every evening.




1 1,115 shares, 40 trendiness

Someone Bought 30 WordPress Plugins and Planted a Backdoor in All of Them.

Last week, I wrote about catching a supply chain attack on a WordPress plugin called Widget Logic. A trusted name, acquired by a new owner, turned into something malicious. It happened again. This time at a much larger scale.

Ricky from Improve & Grow emailed us about an alert he saw in the WordPress dashboard for a client site. The notice was from the WordPress.org Plugins Team, warning that a plugin called Countdown Timer Ultimate contained code that could allow unauthorized third-party access.

I ran a full security audit on the site. The plugin itself had already been force-updated by WordPress.org to version 2.6.9.1, which was supposed to clean things up. But the damage was already done.

The plugin’s wpos-analytics module had phoned home to analytics.essentialplugin.com, downloaded a backdoor file called wp-comments-posts.php (designed to look like the core file wp-comments-post.php), and used it to inject a massive block of PHP into wp-config.php.

The injected code was sophisticated. It fetched spam links, redirects, and fake pages from a command-and-control server. It only showed the spam to Googlebot, making it invisible to site owners. And here is the wildest part. It resolved its C2 domain through an Ethereum smart contract, querying public blockchain RPC endpoints. Traditional domain takedowns would not work because the attacker could update the smart contract to point to a new domain at any time.
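The blockchain resolution step is worth unpacking. A read-only eth_call against any public RPC node returns the current C2 domain as an ABI-encoded string. The payload shape below is standard JSON-RPC; the contract address, function selector, and example return value are illustrative assumptions, not the malware's actual values:

```python
# Sketch of blockchain-based C2 resolution: the malware posts an eth_call to a
# public RPC endpoint, then ABI-decodes the string the contract returns.
import json

def build_eth_call(contract, selector):
    """JSON-RPC payload asking a node to execute a read-only contract call."""
    return json.dumps({
        "jsonrpc": "2.0", "id": 1, "method": "eth_call",
        "params": [{"to": contract, "data": selector}, "latest"],
    })

def decode_abi_string(result_hex):
    """Decode a single ABI-encoded string return value (offset, length, bytes)."""
    raw = bytes.fromhex(result_hex.removeprefix("0x"))
    offset = int.from_bytes(raw[0:32], "big")
    length = int.from_bytes(raw[offset:offset + 32], "big")
    return raw[offset + 32:offset + 32 + length].decode()

# Hypothetical contract address and selector, for illustration only.
payload = build_eth_call("0x" + "ab" * 20, "0x12345678")

# Example: an ABI-encoded return of the string "evil.example".
encoded = (
    "0x" + (32).to_bytes(32, "big").hex()
    + (12).to_bytes(32, "big").hex()
    + b"evil.example".ljust(32, b"\0").hex())
print(decode_abi_string(encoded))  # -> evil.example
```

Because the "record" lives on-chain, there is no registrar to subpoena; the only chokepoints left are the RPC endpoints themselves.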

CaptainCore keeps daily restic backups. I extracted wp-config.php from 8 different backup dates and compared file sizes. Binary search style.
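The comparison can be framed as a bisection over backup dates: find the earliest snapshot whose wp-config.php size differs from the known-clean size. A minimal sketch of the idea, with hypothetical dates and sizes standing in for the actual restic extractions:

```python
# Bisect over dated backups to find the first one where the file size jumped.
# size_of(date) stands in for "extract wp-config.php from that snapshot and
# stat it"; the dates and byte counts below are made up for illustration.

def first_modified(dates, size_of, clean_size):
    """Return the earliest date whose wp-config.php no longer matches clean_size."""
    lo, hi = 0, len(dates) - 1
    answer = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if size_of(dates[mid]) == clean_size:
            lo = mid + 1          # still clean: injection happened later
        else:
            answer = dates[mid]   # modified: remember and search earlier
            hi = mid - 1
    return answer

sizes = {
    "2026-04-03": 3113, "2026-04-04": 3113, "2026-04-05": 3113,
    "2026-04-06": 21870, "2026-04-07": 21870, "2026-04-08": 21870,
}
dates = sorted(sizes)
print(first_modified(dates, sizes.get, 3113))  # -> 2026-04-06
```

With 8 snapshots this takes three extractions instead of eight, and the same bisection narrows the timestamp window once you switch from daily backups to intra-day logs.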

The injection happened on April 6, 2026, between 04:22 and 11:06 UTC. A 6-hour 44-minute window.

I traced the plugin’s history through 939 quicksave snapshots. The plugin had been on the site since January 2019. The wpos-analytics module was always there, functioning as a legitimate analytics opt-in system for years.

Then came version 2.6.7, released August 8, 2025. The changelog said, “Check compatibility with WordPress version 6.8.2.” What it actually did was add 191 lines of code, including a PHP deserialization backdoor. The class-anylc-admin.php file grew from 473 to 664 lines.

The new code introduced three things:

* A fetch_ver_info() method that calls file_get_contents() on the attacker’s server and passes the response to @unserialize()

* A version_info_clean() method that executes @$clean($this->version_cache, $this->changelog) where all three values come from the unserialized remote data

That is a textbook arbitrary function call. The remote server controls the function name, the arguments, everything. It sat dormant for 8 months before being activated on April 5-6, 2026.
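To see why that pattern is so dangerous, here is a Python analog (illustrative only: the plugin is PHP, and pickle stands in for unserialize()). The server-supplied blob chooses both the function and its arguments:

```python
# Simulating the backdoor pattern: a "version info" blob from a remote server
# is deserialized, and one of its fields is then *called as a function* with
# the other fields as arguments -- the server controls all three values,
# mirroring @$clean($this->version_cache, $this->changelog) in PHP.
import pickle

def version_info_clean(remote_blob):
    data = pickle.loads(remote_blob)   # analog of @unserialize()
    clean = data["clean"]              # attacker-chosen callable
    return clean(data["version_cache"], data["changelog"])

# A benign demonstration payload; a real attacker would supply something
# like PHP's system() instead of max().
blob = pickle.dumps({"clean": max, "version_cache": 3, "changelog": 7})
print(version_info_clean(blob))  # -> 7
```

Nothing in the calling code constrains what gets executed, which is why this qualifies as a full remote code execution primitive rather than a mere data leak.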

This is where it gets interesting. The original plugin was built by Minesh Shah, Anoop Ranawat, and Pratik Jain. An India-based team that operated under “WP Online Support” starting around 2015. They later rebranded to “Essential Plugin” and grew the portfolio to 30+ free plugins with premium versions.

By late 2024, revenue had declined 35-45%. Minesh listed the entire business on Flippa. A buyer identified only as “Kris,” with a background in SEO, crypto, and online gambling marketing, purchased everything for six figures. Flippa even published a case study about the sale in July 2025.

The buyer’s very first SVN commit was the backdoor.

On April 7, 2026, the WordPress.org Plugins Team permanently closed every plugin from the Essential Plugin author. At least 30 plugins, all on the same day. Here are the ones I confirmed:

* SlidersPack — All in One Image Sliders — sliderspack-all-in-one-image-sliders

All permanently closed. The author search on WordPress.org returns zero results. The analytics.essentialplugin.com endpoint now returns {"message":"closed"}.

In 2017, a buyer using the alias “Daley Tias” purchased the Display Widgets plugin (200,000 installs) for $15,000 and injected payday loan spam. That buyer went on to compromise at least 9 plugins the same way.

The Essential Plugin case is the same playbook at a larger scale. 30+ plugins. Hundreds of thousands of active installations. A legitimate 8-year-old business acquired through a public marketplace and weaponized within months.

WordPress.org’s forced update added return; statements to disable the phone-home functions. That is a band-aid. The wpos-analytics module is still there with all its code. I built patched versions with the entire backdoor module stripped out.

I scanned my entire fleet and found 12 of the 26 Essential Plugin plugins installed across 22 customer sites. I patched 10 of them (one had no backdoor module, one was a different “pro” fork by the original authors). Here are the patched versions, hosted permanently on B2:

# Countdown Timer Ultimate

wp plugin install https://plugins.captaincore.io/countdown-timer-ultimate-2.6.9.1-patched.zip --force

# Popup Anything on Click

wp plugin install https://plugins.captaincore.io/popup-anything-on-click-2.9.1.1-patched.zip --force

# WP Testimonial with Widget

wp plugin install https://plugins.captaincore.io/wp-testimonial-with-widget-3.5.1-patched.zip --force

# WP Team Showcase and Slider

wp plugin install https://plugins.captaincore.io/wp-team-showcase-and-slider-2.8.6.1-patched.zip --force

# WP FAQ (sp-faq)

wp plugin install https://plugins.captaincore.io/sp-faq-3.9.5.1-patched.zip --force

# Timeline and History Slider

wp plugin install https://plugins.captaincore.io/timeline-and-history-slider-2.4.5.1-patched.zip --force

# Album and Image Gallery plus Lightbox

wp plugin install https://plugins.captaincore.io/album-and-image-gallery-plus-lightbox-2.1.8.1-patched.zip --force

# SP News and Widget

wp plugin install https://plugins.captaincore.io/sp-news-and-widget-5.0.6-patched.zip --force

# WP Blog and Widgets

wp plugin install https://plugins.captaincore.io/wp-blog-and-widgets-2.6.6.1-patched.zip --force

# Featured Post Creative

wp plugin install https://plugins.captaincore.io/featured-post-creative-1.5.7-patched.zip --force

# Post Grid and Filter Ultimate

wp plugin install https://plugins.captaincore.io/post-grid-and-filter-ultimate-1.7.4-patched.zip --force

Each patched version removes the entire wpos-analytics directory, deletes the loader function from the main plugin file, and bumps the version to -patched. The plugin itself continues to work normally.

The process is straightforward with Claude Code. Point it at this article for context, tell it which plugin you need patched, and it can strip the wpos-analytics module the same way I did. The pattern is identical across all of the Essential Plugin plugins:

Delete the wpos-analytics/ directory from the plugin

Remove the loader function block in the main plugin PHP file (search for “Plugin Wpos Analytics Data Starts” or wpos_analytics_anl)
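The two steps can be sketched as a small script. This is a hypothetical illustration of the approach, not the exact tool I used: the start marker is quoted above, but the matching end-marker comment is an assumption for this sketch.

```python
# Strip the wpos-analytics backdoor module from an unzipped plugin directory:
#   1) delete the wpos-analytics/ subdirectory,
#   2) remove the loader block between its marker comments in the main file.
import re
import shutil
from pathlib import Path

def strip_backdoor(plugin_dir, main_file):
    root = Path(plugin_dir)
    # Step 1: remove the entire backdoor module directory.
    shutil.rmtree(root / "wpos-analytics", ignore_errors=True)
    # Step 2: cut everything between the start marker and the assumed
    # "...Data Ends" marker (hypothetical; verify against the real file).
    php = (root / main_file).read_text()
    php = re.sub(
        r"/\*\s*Plugin Wpos Analytics Data Starts\s*\*/.*?"
        r"/\*\s*Plugin Wpos Analytics Data Ends\s*\*/",
        "", php, flags=re.S)
    (root / main_file).write_text(php)
```

After running it, diff the result against the original zip to confirm nothing outside the module was touched before repackaging.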

Two supply chain attacks in two weeks. Both followed the same pattern. Buy a trusted plugin with an established install base, inherit the WordPress.org commit access, and inject malicious code. The Flippa listing for Essential Plugin was public. The buyer’s background in SEO and gambling marketing was public. And yet the acquisition sailed through without any review from WordPress.org.

WordPress.org has no mechanism to flag or review plugin ownership transfers. There is no “change of control” notification to users. No additional code review triggered by a new committer. The Plugins Team responded quickly once the attack was discovered. But 8 months passed between the backdoor being planted and being caught.

If you manage WordPress sites, search your fleet for any of the 26 plugin slugs listed above. If you find one, patch it or remove it. And check wp-config.php.
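A per-site scan can be sketched in a few lines. This is a hypothetical illustration: the slug list is abbreviated to the plugins named above, and the wp-config.php check is a crude heuristic (looking for the backdoor filename) rather than a full audit; a real check should diff against a known-clean copy.

```python
# Check one site's plugin directory for known Essential Plugin slugs and
# look for an obvious backdoor marker in wp-config.php.
from pathlib import Path

AFFECTED_SLUGS = {
    "countdown-timer-ultimate", "popup-anything-on-click",
    "wp-testimonial-with-widget", "wp-team-showcase-and-slider",
    "sp-faq", "timeline-and-history-slider",
    "album-and-image-gallery-plus-lightbox", "sp-news-and-widget",
    "wp-blog-and-widgets", "featured-post-creative",
    "post-grid-and-filter-ultimate",
    "sliderspack-all-in-one-image-sliders",
}

def scan_site(site_root):
    root = Path(site_root)
    plugins = root / "wp-content" / "plugins"
    found = sorted(
        p.name for p in plugins.iterdir()
        if p.name in AFFECTED_SLUGS) if plugins.is_dir() else []
    config = root / "wp-config.php"
    # Heuristic only: presence of the dropped filename in wp-config.php.
    suspicious = config.exists() and "wp-comments-posts.php" in config.read_text()
    return found, suspicious
```

Run it over every document root in your fleet and triage any site where either value comes back non-empty or true.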

...

Read the original on anchor.host »

2 948 shares, 63 trendiness

DaVinci Resolve – Photo

The Photo page brings Hollywood’s most advanced color tools to still photography for the first time! Whether you’re a professional colorist looking to apply your skills to fashion shoots and weddings, or a photographer who wants to work beyond the limits of traditional photo applications, the Photo page unlocks the tools you need. Start with familiar photo tools including white balance, exposure and primary color adjustments, then switch to the Color page for access to the full DaVinci color grading toolset trusted by Hollywood’s best colorists! You can use DaVinci’s AI toolset as well as Resolve FX and Fusion FX. GPU acceleration lets you export faster than ever before!

For photographers, the Photo page offers a familiar set of tools alongside DaVinci’s powerful color grading capabilities. It includes native RAW support for Canon, Fujifilm, Nikon, Sony and even iPhone ProRAW. All image processing takes place at source resolution up to 32K, or over 400 megapixels, so you’re never limited to project resolution. Familiar basic adjustments including white balance, exposure, color and saturation give you a comfortable starting point. With non-destructive processing you can reframe, crop and re-interpret your original sensor data at any time. And with GPU acceleration, entire albums can be processed dramatically faster than conventional photo applications!

The Photo page Inspector gives you precise control over the transform and cropping parameters of your images. Reframe and crop non-destructively at the original source resolution and aspect ratio, so you’re never restricted to a fixed timeline size! Zoom, position, rotate and flip images with full transform controls and use the cropping parameters to trim the edges of any image with precision. Reframe a shot to improve composition, adjust for a specific ratio for print or social media use, or simply remove unwanted elements from the edges of a frame. All adjustments can be refined or reset at any time without ever affecting the original source file!

DaVinci Resolve is the world’s only post production software that lets everyone work together on the same project at the same time! Built on a powerful cloud based workflow, you can share albums, all associated metadata and tags, as well as grades and effects with colorists, photographers and retouchers anywhere in the world. Blackmagic Cloud syncing keeps every collaborator up to date with the latest version of your image library in real time, and remote reviewers can approve grades offsite without needing to be in the same room. Hollywood colorists can even grade live fashion shoots remotely, all while the photographer is still on set!

The Photo page gives you everything you need to manage your entire image library from import to completion. You can import photos directly, from your Apple Photos library or Lightroom, and organize them with tags, ratings, favorites and keywords for fast, flexible management of even the largest libraries. It supports all standard RAW files and image types. AI IntelliSearch lets you instantly search across your entire project to find exactly what you’re looking for, from objects to people to animals! Albums allow you to build and manage collections for any project and with a single click you can switch between your photo library and your color grading workflow!

Albums are a powerful way to build and manage photo collections directly in DaVinci Resolve. You can add images manually to each album or organize by date, camera, star rating, EXIF data and more. Powerful filter and sort tools give you total control over how your collection is arranged. The thumbnail view displays each image’s graded version alongside its file name and source clip format so you can see your grades at a glance. Create multiple grade versions of any image, all referencing the original source file, so you can explore different looks without ever duplicating a file. Plus, grades applied to one photo can be instantly copied across others in the album for a fast, consistent look!

Connect Sony or Canon cameras directly to DaVinci Resolve for tethered shooting with full live view! Adjust camera settings including ISO, exposure and white balance without leaving the page and save image capture presets to establish a consistent look before you shoot. Images can be captured directly into an album, with albums created automatically during capture so your library is perfectly organized from the moment you start shooting. Grade images as they arrive using DaVinci Resolve’s extensive color toolset and use a hardware panel for hands-on creative control in a collaborative shoot. That means you can capture, grade and organize an entire shoot without leaving DaVinci Resolve!

The Photo page gives you access to over 100 GPU and CPU accelerated Resolve FX and specialty AI tools for still image work. They’re organized by category in the Open FX library and cover everything from color effects, blurs and glows to image repair, skin refinement and cinematic lighting tools. These are the same tools used by Hollywood colorists and VFX artists on the world’s biggest productions, now available for still images. To add an effect, drag it to any node. Whether you’re making subtle beauty refinements for a fashion shoot or applying dramatic film looks and atmospheric lighting effects emulating the looks of a Hollywood feature, the Photo page has the tools you need!

Magic Mask makes precise selections of subjects or backgrounds, while Depth Map generates a 3D map of your scene to separate foreground and background without manual masking. Use them together to grade different depths of an image independently for results that have never before been possible for stills!

Add a realistic light source to any photo after capture with Relight FX. Relight analyzes the surfaces of faces and objects to reflect light naturally across the image. Combine with Magic Mask to light a subject independently from the background, turning flat portraits into stunning fashion images!

Face refinement automatically masks different parts of a face, saving countless hours of manual work. Sharpen eyes, remove dark circles, smooth skin, and color lips. Ultra Beauty separates skin texture from color for natural, high end results, while AI Blemish Removal handles fast skin repair!

The Film Look Creator lets you add cinematic looks that replicate film properties like halation, bloom, grain and vignetting. Adjust exposure in stops and use subtractive saturation, richness and split tone controls to achieve looks usually found on the big screen, now for your still images!

AI SuperScale uses the DaVinci AI Neural Engine to upscale low resolution images with exceptional quality. The enhanced mode is specifically designed to remove compression artifacts, making it the perfect tool for rescaling low quality photos or frame grabs up to 4x their original resolution!

UltraNR is a DaVinci AI Neural Engine driven denoise mode in the Color page’s spatial noise reduction palette. Use it to dramatically reduce digital noise from an image while maintaining image clarity. Use with spatial noise reduction to smooth out digital grain or scanner noise while keeping fine hair and eye edges sharp.

Sample an area of a scene to quickly cover up unwanted elements, like objects or even blemishes on a face. The patch replacer has a fantastic auto grading feature that will seamlessly blend the covered area with the surrounding color data. Perfect for removing sensor dust.

The Quick Export option makes it fast and easy to deliver finished images in a wide range of common formats including JPEG, PNG, HEIF and TIFF. Export either an entire album or just selected photos, providing flexibility to meet your specific delivery needs. You can set the resolution, bit depth, quality and compression to ensure your images are optimized for their intended use. Whether you’re exporting standalone images for print, sharing on social media platforms or delivering graded files to a client, Quick Export has you covered. All exports preserve your original photo EXIF metadata, so camera settings, location data and other important information always travels with your files.

The Photo page uses GPU accelerated processing to deliver fast, accurate results across your entire workflow. Process hundreds of RAW files in seconds with GPU accelerated decoding and apply Resolve FX to your images in real time. GPU acceleration also means batch exports and conversions are dramatically faster than conventional photo applications. On Mac, DaVinci Resolve is optimized for Metal and Apple Silicon, taking full advantage of the latest hardware. On Windows and Linux, you get CUDA support for NVIDIA GPUs, while the Windows version also features full OpenCL support for AMD, Intel and Qualcomm GPUs. All this ensures you get high performance results on any system!

Hollywood colorists have always relied on hardware panels to work faster and more creatively, and now photographers can too! The DaVinci Resolve Micro Color Panel is the perfect companion for photo grading as it is compact enough to sit next to a laptop and portable enough to take on location for shoots. It features three high quality trackballs for lift, gamma and gain adjustments, plus 12 primary correction knobs for contrast, saturation, hue, temperature and more. It even has a built in rechargeable battery! DaVinci Resolve color panels let you adjust multiple parameters at once, so you can create looks that are simply impossible with a mouse and keyboard.


...

Read the original on www.blackmagicdesign.com »

3 853 shares, 34 trendiness

GitHub Stacked PRs

Large pull requests are hard to review, slow to merge, and prone to conflicts. Reviewers lose context, feedback quality drops, and the whole team slows down. Stacked PRs solve this by breaking big changes into a chain of small, focused pull requests that build on each other — each one independently reviewable.

A stack is a series of pull requests in the same repository where each PR targets the branch of the PR below it, forming an ordered chain that ultimately lands on your main branch.

GitHub understands stacks end-to-end: the pull request UI shows a stack map so reviewers can navigate between layers, branch protection rules are enforced against the final target branch (not just the direct base), and CI runs for every PR in the stack as if they were targeting the final branch.

The gh stack CLI handles the local workflow: creating branches, managing rebases, pushing to GitHub, and creating PRs with the correct base branches. On GitHub, the PR UI gives reviewers the context they need — a stack map for navigation, focused diffs for each layer, and proper rules enforcement.

When you’re ready to merge, you can merge all or part of the stack. Each PR can be merged directly or through the merge queue. After a merge, the remaining PRs in the stack are automatically rebased so the lowest unmerged PR targets the base branch.

Ready to dive in? Start with the Quick Start guide or read the full overview.

...

Read the original on github.github.com »

4 734 shares, 79 trendiness

Robert Reese's Website

TLDR: Despite claiming to back up all your data, Backblaze quietly stopped backing up OneDrive and Dropbox folders - along with potentially many other things.

For ten years I have been using Backblaze for my personal computer backup. Before 2015 I would back up files to one of two large external hard discs. I then rotated these drives between locations: first my father’s house, and after I moved to the UK, my office drawers.

In 2015 Backblaze seemed like a good bet. Unlike Crashplan their software wasn’t a bloated Java app, but they did have unlimited storage. If you could cram it into your PC they would back it up. With their yearly Hard Drive reviews making good press and a lot of personal recommendations from my friends and colleagues, their service sounded great. I installed the software, ran it for several weeks, and sure enough my data was safely stored in their cloud.

I had further reason to be impressed when several years later one of my hard drives failed. I made use of their “send me a hard drive with my stuff on it” service. A drive turned up filled with my precious data. That for me was proof that this system worked, and that it worked well.

And so I recommended Backblaze for years. “What do you do for backup?” I would extoll the virtues of Backblaze, and they made many sales from such recommendations.

There were a few things I didn’t like. The app could use a lot of memory, especially after doing a large import of photographs. The website, which I often used to restore single files or folders, was slow and clunky to use. The Windows app in particular was clunky, with an early 2000s aesthetic and cramped lists. There was the time they leaked all your filenames to Facebook, but they probably fixed that.

But no matter, small problems for the peace of mind of having all my files backed up.

Backup software is meant to back up your files. Which files? Well, the files you need. Given everyone is different, with different workflows and filetypes, the ideal thing is to back up all your files. No backup provider knows what I will need in the future. The provider must plan accordingly.

My first troubling discovery was in 2025, when I made several errors then did a push -f to GitHub and blew away the git history for a half-decade-old repo. No data was lost, but the log of changes was. No problem, I thought, I’ll just restore this from Backblaze. Sadly it was not to be. At some point Backblaze had started to ignore .git folders.

This annoyed me. Firstly, I needed that folder and Backblaze had let me down. Secondly, within the Backblaze preferences I could find no way to re-enable this. In fact, looking at the list of exclusions I could find no mention of .git whatsoever.

This made me wonder - I had checked the exclusions list when I installed Backblaze 9 years before, had I missed it? Had I missed anything else?

Well, lesson learned I guess, but then a week ago I came across this thread on reddit: “Doesn’t back up Dropbox folder??”. A user was surprised to find their Dropbox folder no longer being backed up. Alarmed, I logged into Backblaze, and lo and behold, my OneDrive folder was missing.

Backblaze has one job, and apparently they are unable to do that job. Back up my stuff. But they have decided not to.

Let’s take an aside.

A reasonable person might point out those files on OneDrive are already being backed up - by OneDrive! No. Dropbox and OneDrive are for file syncing - syncing your files to the cloud. They offer limited protection. OneDrive and Dropbox only retain deleted files for one month. Backblaze has one year file retention, or if you pay per GB, unlimited retention. While OneDrive retains version changes for longer, Dropbox only retains version changes for a month - again unless you pay for more. Your files are less secure and less backed up when you stick them in a cloud storage provider folder compared to just being on your desktop.

And that’s assuming your cloud provider is playing ball. If Microsoft or Dropbox bans your account you may find yourself with no backup whatsoever.

For me the larger issue is they never told us. My OneDrive folder sits at 383GB. You would think that having decided to no longer back this up I might get an email, an alert or some other notification. Of course not.

Nestled into their release notes under “Improvements” we see:

The Backup Client now excludes popular cloud storage providers from backup, including both mount points and cache directories. This prevents performance issues, excessive data usage, and unintended uploads from services like OneDrive, Google Drive, Dropbox, Box, iDrive, and others. This change aligns with Backblaze’s policy to back up only local and directly connected storage.

First, I would hardly call this change in policy an improvement; it’s hard to imagine anyone reading this as anything other than a downgrade in service. Second, does Backblaze believe most of its users are reading their release notes?

And if you joined today and looked at their list of file exclusions you would find no reference to Dropbox or OneDrive. No mention of Git either.

Here’s the thing: today they don’t back up Git or OneDrive. Who’s to say tomorrow they won’t add to the list. Maybe some obscure file format that’s critical to your workflow. Or they will ignore a file extension that just happens to be the same as one used by your DAW or 3D modelling software. And they won’t tell you this. They won’t even list it on their site.

By deciding not to back up everything, Backblaze has made it as if they are backing up nothing.

But re­ally this feels like a promise bro­ken. Back in 2015 their web­site proudly pro­claimed:

All user data in­cluded by de­fault No re­stric­tions on file type or size

Protect the dig­i­tal mem­o­ries and files that mat­ter most to you.

File backup is a mat­ter of trust. You are pay­ing a monthly fee so that if and when things go wrong you can get your data back. By silently chang­ing the rules, Backblaze has not sim­ply eroded my trust, but swept it away.

I wrote this to warn you - Backblaze is no longer doing their part; they are no longer backing up your data. Some of your data, sure, but not all of it.

Finally, let me leave you with Backblaze’s own words from 2015:

They promised to sim­plify backup. They suc­ceeded - they don’t even do the backup part any­more.

...

Read the original on rareese.com »

5 711 shares, 48 trendiness

Introducing a new spam policy for "back button hijacking"

Today, we are expanding our spam policies to address a deceptive practice known as “back button hijacking”, which will become an explicit violation of the “malicious practices” section of our spam policies, leading to potential spam actions.

When a user clicks the “back” button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation. It occurs when a site interferes with a user’s browser navigation and prevents them from using their back button to immediately get back to the page they came from. Instead, users might be sent to pages they never visited before, be presented with unsolicited recommendations or ads, or otherwise be prevented from normally browsing the web.

Why are we tak­ing ac­tion?

We be­lieve that the user ex­pe­ri­ence comes first. Back but­ton hi­jack­ing in­ter­feres with the browser’s func­tion­al­ity, breaks the ex­pected user jour­ney, and re­sults in user frus­tra­tion. People re­port feel­ing ma­nip­u­lated and even­tu­ally less will­ing to visit un­fa­mil­iar sites. As we’ve stated be­fore, in­sert­ing de­cep­tive or ma­nip­u­la­tive pages into a user’s browser his­tory has al­ways been against our Google Search Essentials.

We’ve seen a rise of this type of behavior, which is why we’re designating this an explicit violation of our “malicious practices” policy, which says:

Malicious practices create a mismatch between user expectations and the actual outcome, leading to a negative and deceptive user experience, or compromised user security or privacy.

Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site’s performance in Google Search results. To give site owners time to make any needed changes, we’re publishing this policy two months in advance of enforcement on June 15, 2026.

What should site own­ers do?

Ensure you are not do­ing any­thing to in­ter­fere with a user’s abil­ity to nav­i­gate their browser his­tory.

If you’re cur­rently us­ing any script or tech­nique that in­serts or re­places de­cep­tive or ma­nip­u­la­tive pages into a user’s browser his­tory that pre­vents them from us­ing their back but­ton to im­me­di­ately get back to the page they came from, you are ex­pected to re­move or dis­able it.

Notably, some in­stances of back but­ton hi­jack­ing may orig­i­nate from the site’s in­cluded li­braries or ad­ver­tis­ing plat­form. We en­cour­age site own­ers to thor­oughly re­view their tech­ni­cal im­ple­men­ta­tion and re­move or dis­able any code, im­ports or any con­fig­u­ra­tions that are re­spon­si­ble for back but­ton hi­jack­ing, to en­sure a help­ful and non-de­cep­tive ex­pe­ri­ence for users.

If your site has been impacted by a manual action and you have fixed the issue, you can always let us know by submitting a reconsideration request in Search Console. For questions or feedback, feel free to reach out on social media or discuss in our help community.

...

Read the original on developers.google.com »

6 400 shares, 63 trendiness

Steve's Jujutsu Tutorial

jj is the name of the CLI for Jujutsu. Jujutsu is a DVCS, or “distributed version control system.” You may be familiar with other DVCSes, such as git, and this tutorial assumes you’re coming to jj from git.

So why should you care about jj? Well, it has a property that’s pretty rare in the world of programming: it is both simpler and easier than git, but at the same time, it is more powerful. This is a pretty huge claim! We’re often taught, correctly, that there exist tradeoffs when we make choices. And “powerful but complex” is a very common tradeoff. That power has been worth it, and so people flocked to git over its predecessors.

What jj man­ages to do is cre­ate a DVCS that takes the best of git, the best of Mercurial (hg), and syn­the­size that into some­thing new, yet strangely fa­mil­iar. In do­ing so, it’s man­aged to have a smaller num­ber of es­sen­tial tools, but also make them more pow­er­ful, be­cause they work to­gether in a cleaner way. Furthermore, more ad­vanced jj us­age can give you ad­di­tional pow­er­ful tools in your VCS sand­box that are very dif­fi­cult with git.

I know that sounds like a huge claim, but I be­lieve that the rest of this tu­to­r­ial will show you why.

There’s one other rea­son you should be in­ter­ested in giv­ing jj a try: it has a git com­pat­i­ble back­end, and so you can use jj on your own, with­out re­quir­ing any­one else you’re work­ing with to con­vert too. This means that there’s no real down­side to giv­ing it a shot; if it’s not for you, you’re not giv­ing up all of the his­tory you wrote with it, and can go right back to git with no is­sues.

...

Read the original on steveklabnik.github.io »

7 352 shares, 18 trendiness

Lean proved this program was correct; then I found a bug.

Lean proved this pro­gram was cor­rect; then I found a bug.

I fuzzed a ver­i­fied im­ple­men­ta­tion of zlib and found a buffer over­flow in the Lean run­time.

AI agents are get­ting very good at find­ing vul­ner­a­bil­i­ties in large-scale soft­ware sys­tems.

Anthropic was apparently so spooked by the vulnerability-discovery capabilities of Mythos, they decided not to release it as it was “too dangerous” (lol). Whether you believe the hype about these latest models or not, it seems undeniable that the writing is on the wall:

The cost of dis­cov­er­ing se­cu­rity bugs is col­laps­ing, and the vast ma­jor­ity of soft­ware run­ning to­day was never built to with­stand that kind of scrutiny. We are fac­ing a loom­ing soft­ware cri­sis.

In the face of this on­com­ing tsunami, re­cently there has been in­creas­ing in­ter­est in for­mal ver­i­fi­ca­tion as a so­lu­tion. If we state and prove prop­er­ties about our code us­ing a me­chan­i­cal tool, can we build ro­bust, se­cure and sta­ble soft­ware that can over­come this on­com­ing bar­rage of at­tacks?

One re­cent de­vel­op­ment in the Lean ecosys­tem has taken steps to­wards this ques­tion. 10 agents au­tonomously built and proved an im­ple­men­ta­tion of zlib, lean-zip, an im­pres­sive land­mark re­sult. Quoting from Leo De Moura, the chief ar­chi­tect of the Lean FRO (here):

With apolo­gies for the AI-slop (Leo has a pen­chant for it, it seems), the key re­sult is that lean-zip is not just an­other im­ple­men­ta­tion of zlib. It is an im­ple­men­ta­tion that has been ver­i­fied as cor­rect end to end, guar­an­teed by Lean to be en­tirely free of im­ple­men­ta­tion bugs.

What does “verified as correct” actually look like? Here is one of the main theorems (github):

For any byte array less than 1 gigabyte, calling ZlibDecode.decompressSingle on the output of ZlibEncode.compress produces the original data. The decompress function is exactly the inverse of compression. This pair of functions is entirely correct.

I pointed a Claude agent at lean-zip over a weekend, armed with AFL++, AddressSanitizer, Valgrind, and UBSan. Over 105 million fuzzing executions, it found:

Zero mem­ory vul­ner­a­bil­i­ties in the ver­i­fied Lean ap­pli­ca­tion code.

A heap buffer over­flow in the Lean 4 run­time (lean_alloc_sarray), af­fect­ing every ver­sion of Lean to date. (bug re­port, pend­ing fix)

A de­nial-of-ser­vice in lean-zip’s archive parser, which was never ver­i­fied.

The setup for the experiment was quite simple. I took the lean-zip codebase, produced a stripped-down version, and pointed Claude at it.

In par­tic­u­lar, as part of the setup: (1) I dropped all the­o­rems and spec­i­fi­ca­tions, (2) re­moved all mark­down doc­u­men­ta­tion, and (3) stripped out lean-zip’s C FFI bind­ings to zlib which it pro­vided as an al­ter­na­tive to its na­tive im­ple­men­ta­tion. What re­mained was purely the ver­i­fied code: the na­tive Lean de­f­i­n­i­tions for DEFLATE, gzip, ZIP archive han­dling, and tar. Any bug found in this would cor­re­spond to an er­ror in the ver­i­fied code.

The idea with dropping theorems and documentation was to avoid biasing the Claude agent by revealing that the code was actually verified — I figured if it knew “the code had no bugs” then it might pre-emptively give up, while operating in the blind might let it work through the software without bias.

With the Lean implementation accessible through a CLI, I then spun up a server for the fuzzing experiments, pointed Claude at it, and let it go wild.

Over the course of a night, Claude launched 16 parallel fuzzers across the 6 attack surfaces of the library: ZIP extract, gzip decompress, raw DEFLATE inflate, tar extract, tar.gz, and compression. It built separate binaries with AddressSanitizer and UndefinedBehaviorSanitizer, ran Valgrind memcheck, used cppcheck and flawfinder for static analysis, and crafted 48 hand-written exploit files targeting known zlib CVE patterns.

Overall, this resulted in 105,823,818 fuzzing executions: 359 seed files and 16 fuzzers running for approximately 19 hours, uncovering 4 crashing inputs and 1 memory vulnerability in the code.

The most substantial finding was a heap buffer overflow! Not in lean-zip’s code, but in the Lean runtime itself.

The vul­ner­a­ble func­tion is lean_al­loc_sar­ray, which al­lo­cates all scalar ar­rays (ByteArray, FloatArray, etc.) in Lean 4:

For a ByteArray of capacity n, the allocation size is 24 + n. When n is close to SIZE_MAX (2^64 - 1 on 64-bit systems), the addition wraps around to a small number. The runtime allocates a tiny buffer of around 23 bytes, but the caller proceeds to read n bytes into it.

The overflow can be triggered through lean_io_prim_handle_read, the C function backing IO.FS.Handle.read:

A 156-byte crafted ZIP file with a ZIP64 compressedSize of 0xFFFFFFFFFFFFFFFF is sufficient to trigger it. The same pattern exists in lean_io_get_random_bytes. The bug affects every version of Lean 4 up to and including the latest nightly (v4.31.0-nightly-2026-04-11). The minimal reproducer is 5 simple lines:

def main : IO Unit := do
  IO.FS.writeFile "test.bin" "hello"
  let h ← IO.FS.Handle.mk "test.bin" .read
  let n : USize := (0 : USize) - (1 : USize)  -- SIZE_MAX
  let _ ← h.read n  -- overflows in lean_alloc_sarray

Edit: there is a pending PR to Lean to fix this.

AFL also found a denial-of-service in lean-zip’s own code. The readExact function in Archive.lean passes the compressedSize field from the ZIP central directory straight to h.read without validating it against the actual file size:

def readExact (h : IO.FS.Handle) (n : Nat) … := do
  while buf.size < n do
    let remaining := n - buf.size  -- n comes from the ZIP header
    let chunk ← h.read remaining.toUSize

A 156-byte ZIP claiming a compressedSize of several exabytes causes the process to panic with “INTERNAL PANIC: out of memory”, as h.read allocates more memory than available. This is indeed a bug: the system unzip handles this gracefully, validating header sizes against the file before allocating, while lean-zip does not and crashes with an OOM.

The OOM denial-of-service is straightforward: the archive parser was never verified. lean-zip’s proofs cover the compression and decompression pipeline (DEFLATE, Huffman, CRC32, roundtrip correctness), but Archive.lean, the module that reads ZIP headers and extracts files, has zero theorems even in the original unstripped codebase. The compressedSize field is read from an untrusted header and passed directly to an allocation without validation. The situation is reminiscent of Yang et al.’s CSmith work (PLDI 2011), which found that CompCert’s verified optimisation passes had zero bugs while its unverified front-end did. Verification works where it is applied. The archive parser was where lean-zip was not verified.

The heap buffer overflow is more fundamental. lean_alloc_sarray is a C++ function in the Lean runtime, part of the trusted computing base. Every Lean proof assumes the runtime is correct. A bug here does not just affect lean-zip. It affects every Lean 4 program that allocates a ByteArray.

The pos­i­tive re­sult here is ac­tu­ally the re­mark­able one. Across 105 mil­lion ex­e­cu­tions, the ap­pli­ca­tion code (that is, ex­clud­ing the run­time) had zero heap buffer over­flows, zero use-af­ter-free, zero stack buffer over­flows, zero un­de­fined be­hav­iour (UBSan clean), and zero out-of-bounds ar­ray reads in the Lean-generated C code. To quote Claude’s own as­sess­ment of the code­base (without know­ing it was ver­i­fied):

This is gen­uinely one of the most mem­ory-safe code­bases I’ve an­a­lyzed. The Lean type sys­tem with de­pen­dent types and well-founded re­cur­sion has elim­i­nated en­tire classes of bugs that plague C/C++ zip im­ple­men­ta­tions. The CVE classes that have plagued zlib for decades are struc­turally im­pos­si­ble in this code­base.

The two bugs that were found both sat outside the boundary of what the proofs cover. The denial-of-service was a missing specification. The heap overflow was a deeper issue in the trusted computing base, the C++ runtime that the entire proof edifice assumes is correct (and now has a PR addressing it).

Overall ver­i­fi­ca­tion re­sulted in a re­mark­ably ro­bust and rig­or­ous code­base. AFL and Claude had a re­ally hard time find­ing er­rors. But they did still find is­sues. Verification is only as strong as the ques­tions you think to ask and the foun­da­tions you choose to trust.

...

Read the original on kirancodes.me »

8 284 shares, 80 trendiness

Thousands of rare concert recordings are landing on the Internet Archive -- listen now

Chicago-based mu­sic su­per­fan Aadam Jacobs has been record­ing the con­certs he at­tends since the 1980s, amass­ing an archive of over 10,000 tapes. Now 59, Jacobs knows that these cas­settes are go­ing to de­grade over time, so he agreed to let vol­un­teers from the Internet Archive, the non­profit dig­i­tal li­brary, dig­i­tize the tapes.

So far, about 2,500 of these tapes have been posted on the Internet Archive, including some rare gems like a Nirvana performance from 1989. (The group wouldn’t break through to mainstream audiences until they released the single “Smells Like Teen Spirit” in 1991.) Within the collection, you can also find previously unknown recordings from influential artists like Sonic Youth, R.E.M., Phish, Liz Phair, Pavement, Neutral Milk Hotel, and a whole bunch of other punk groups.

For many of these record­ings, Jacobs was us­ing pretty mediocre equip­ment, but the vol­un­teer au­dio en­gi­neers work­ing with the Internet Archive have made these tapes sound great.

One vol­un­teer, Brian Emerick, dri­ves to Jacobs’ house once a month to pick up more boxes of tapes — he has to use anachro­nis­tic cas­sette decks to play the tapes, which get con­verted into dig­i­tal files. From there, other vol­un­teers clean up, or­ga­nize, and la­bel the record­ings, even track­ing down song names from for­got­ten punk bands.

Sometimes, the in­ter­net is good. And so is this Tracy Chapman record­ing from 1988.

...

Read the original on techcrunch.com »

9 253 shares, 7 trendiness

Stanford report highlights growing disconnect between AI insiders and everyone else

The opinions of AI experts and the public are increasingly diverging, according to Stanford University’s annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy.

The re­port’s find­ings fol­low grow­ing neg­a­tive sen­ti­ment about AI, with Gen Z re­port­edly lead­ing the way, ac­cord­ing to a re­cent Gallup poll. The study found that young peo­ple were grow­ing less hope­ful and more an­gry about the tech­nol­ogy, even though around half of the de­mo­graphic was us­ing AI ei­ther daily or weekly.

For some working in tech, the AI backlash has come as a surprise. AI leaders have focused on managing the possibility of Artificial General Intelligence, or AGI — a theoretical form of AI superintelligence that could perform any task a human could do and think for itself. But everyday folks are more concerned about AI’s impact on their paycheck and whether or not their power bills will go up as energy-hungry data centers are built.

The divide has been most apparent in the online reaction to the recent attacks on OpenAI CEO Sam Altman’s home. In posts on X, for instance, AI insiders voiced surprise at a series of Instagram comments that seemed to praise the attack on Altman’s home. Some of the online comments have a similar vibe to those that circulated online after the shooting of the United Healthcare CEO in 2024 and the more recent burning of a Kimberly-Clark warehouse by a worker angry about not receiving a “livable wage” — with some comments even going so far as to suggest that even more action, akin to a revolution, is needed.

Stanford’s re­port pro­vides more in­sight into where all this neg­a­tiv­ity is com­ing from, as it sum­ma­rizes data around pub­lic sen­ti­ment of AI across var­i­ous sources.

For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years.

Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same.

Plus, a majority (73%) of experts felt positive about AI’s impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it’s not surprising that only 21% of the public felt similarly.

Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic about AI’s impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years.

The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. Singapore ranked highest at 81%, per data pulled from Ipsos found in Stanford’s report.

Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation “will not go far enough,” while only 27% said it would “go too far.”

Despite the fears and con­cerns, AI did get one ac­co­lade: Globally, those who feel like AI prod­ucts and ser­vices of­fer more ben­e­fits than draw­backs slightly rose from 55% in 2024 to 59% in 2025.

But at the same time, those respondents who said that AI “makes them nervous” grew from 50% to 52% during the same period, per data cited by the report’s authors.

...

Read the original on techcrunch.com »

10 246 shares, 13 trendiness

Sometimes powerful people just do dumb shit

In June 1812, Napoleon Bonaparte marched 685,000 sol­diers into Russia - the largest mil­i­tary force ever as­sem­bled in European his­tory up to that point, and one of the largest mil­i­tary fuck­ups of all time.

He had no co­her­ent sup­ply plan for feed­ing them, he had no re­al­is­tic time­line for when, ex­actly, the Russians would agree to fight a de­ci­sive bat­tle on his terms, and he could­n’t even ar­tic­u­late a co­her­ent goal for his gam­ble, be­yond ~beat the Russians in some vague way.

He had been warned by mul­ti­ple ad­vi­sors, in­clud­ing his own for­eign min­is­ter Talleyrand, that in­vad­ing Russia was a cat­a­strophic idea - and he did it any­way.

By December, roughly 400,000 of his sol­diers were dead, mostly from star­va­tion and ex­po­sure and the con­se­quences of field surgery, and an­other 100,000 had been cap­tured. The Grande Armée, the most feared fight­ing force on the con­ti­nent, clawed its way back across the Niemen River as a frozen, shat­tered rem­nant of it­self. It was the be­gin­ning of the end for Napoleon, who would never again be able to field an army of the size // qual­ity he squan­dered on his point­less ex­cur­sion into Russia.

Napoleon was, by any reasonable accounting, a genius - a military mind who rewrote the rules of European warfare, a political operator who fought his way up from minor league Corsican nobility to Emperor of France and ruler of most of modern Europe before he turned 35, and a reformer whose ideas around the judicial system and the liberal order still echo today.

But none of that stopped him from mak­ing one of the dumb­est de­ci­sions any leader has ever made, be­cause he was ar­ro­gant, be­cause he’d got­ten away with so much for so long that he con­fused his luck for a sys­tem, and be­cause (with the ex­cep­tion of Talleyrand) most of the peo­ple around him had sim­ply stopped telling him no.

There’s a particular kind of person who can’t accept that story at face value, and you’ve met them. I am absolutely sure of it. They show up in every comment section and reply thread where someone powerful does something that looks, on its face, like a mistake - and their argument always runs the same way: you don’t understand, this is actually part of a larger plan, there’s a strategy here that you and I can’t see because we’re not operating at that anointed and elevated level…

And they are fuck­ing every­where.

When Elon Musk bought Twitter in October 2022 for $44 bil­lion, a price he him­self had tried to back out of af­ter waiv­ing due dili­gence (a de­ci­sion so baf­fling that the pre­sid­ing judge, Kathleen McCormick, openly mar­veled at it in court), the 4D chess an­a­lysts fired up im­me­di­ately. You haven’t seen the in­side of the hon­ey­comb, they in­sisted! You don’t get it! You’re not the rich­est man on earth - how could you pos­si­bly hope to process his bril­liance?

The mass lay­offs that gut­ted the com­pa­ny’s ac­ces­si­bil­ity team and its con­tent mod­er­a­tion staff were, ob­vi­ously, equally strate­gic. The ver­i­fi­ca­tion fi­asco that let some­one im­per­son­ate Eli Lilly and tank their stock price with a fake tweet about free in­sulin had to be part of The Plan™️. The ad­ver­tiser ex­o­dus that cratered the com­pa­ny’s rev­enue was just Musk shak­ing off the dead weight, build­ing some­thing new, play­ing a longer game than any of us could un­der­stand.

But a jury in 2026 found Musk li­able for de­lib­er­ately mis­lead­ing in­vestors dur­ing the ac­qui­si­tion. People he’d fired had to be re-hired weeks later be­cause no­body had both­ered to check whether they were, you know, run­ning any­thing im­por­tant be­fore they were shown the door. The com­pany lost roughly 80% of its value un­der his own­er­ship - be­cause there was no 4D chess. There was sim­ply a bil­lion­aire who’d got­ten used to be­ing the smartest per­son in every room he walked into, who did­n’t even have a Talleyrand of his own to hint that it might be a bad call, who bought a com­pany on im­pulse, and then made it worse through a se­ries of de­ci­sions that were ex­actly as bad as they looked from the out­side.

The same thing plays out with Trump; every chaotic press con­fer­ence, every con­tra­dic­tory pol­icy an­nounce­ment is im­me­di­ately re­framed by his most syco­phan­tic sup­port­ers (and, weirdly, by a cer­tain type of op­po­nent who wants to be­lieve they’re up against a mas­ter­mind).

“He knows exactly what he’s doing.”

I’ll grant you - some­times Trump does know what he’s do­ing. Sometimes a provo­ca­tion is cal­cu­lated and the out­rage does serve a pur­pose. But the 4D chess crowd can’t dis­tin­guish be­tween those mo­ments and the mo­ments where the sim­plest ex­pla­na­tion is just that a 79-year-old man with a phone, no im­pulse con­trol, and an au­di­ence of mil­lions is post­ing what­ever dumb shit he feels like post­ing at 2 AM.

The pow­er­ful don’t get to be pow­er­ful with­out be­ing spe­cial, right?

And if they’re spe­cial, if they’re smarter than all the rest of us, every­thing they do must be for a rea­son, right?

And if we can’t see that rea­son, that must be a prob­lem with us - mere mor­tals - not the di­vinely ap­pointed ti­tans, right?

The most re­cent en­try in this genre is OpenAI’s ac­qui­si­tion of TBPN, the daily tech talk show hosted by John Coogan and Jordi Hays. OpenAI re­port­edly paid in the low hun­dreds of mil­lions of dol­lars for a show with 58,000 sub­scribers on YouTube. The show re­ports to Chris Lehane, OpenAI’s chief po­lit­i­cal op­er­a­tive. And pre­dictably, the ra­tio­nal­iz­ers have lined up.

Fortune ran a piece titled “3 reasons OpenAI buying daily tech show TBPN for hundreds of millions isn’t totally crazy.” The argument boiled down to: OpenAI is buying influence, packaging distribution with narrative control, positioning itself to shape public conversation about AI at a moment when that conversation will determine the regulatory environment the company operates in.

And look, some of that might be true.

But it’s worth sit­ting with the sim­pler read for a sec­ond.

A company whose own executives told staff to stop chasing “side quests” and focus on core AI model development spent hundreds of millions of dollars on a podcast. CNBC’s headline called it “chasing vibes.”

Ben Thompson at Stratechery did the most thorough demolition job. He compared OpenAI to “the short bus at the end of the rainbow,” which is funny and also brutal and also correct. The whole Stratechery piece is worth reading because Thompson actually bothered to lay out just how incoherent OpenAI’s strategy has become — they were against ads until suddenly ads were the plan, Apple was a partner until they poached Jony Ive, and Anthropic is over there shipping models while Sam Altman is signing checks for a talk show. Thompson’s takeaway: “there just isn’t much evidence that anyone knows what they are doing or that there is any sort of overarching plan.”

The 4D chess read asks you to be­lieve that Altman - Google breath­ing down his neck, Anthropic breath­ing down his neck, Meta breath­ing down his neck - sat down and de­cided a talk show with fewer sub­scribers than most mid-tier gam­ing stream­ers was the best pos­si­ble use of hun­dreds of mil­lions of dol­lars.

The bor­ing read asks you to be­lieve a CEO did some­thing that served his ego. Pick whichever one re­quires less of a leap of faith. I know which one I’m go­ing with…

Why do peo­ple re­sist the bor­ing read? Melvin Lerner had a the­ory. He pub­lished a book in 1980 called The Belief in a Just World, and his ar­gu­ment was that most of us walk around with a bone-deep need to be­lieve that peo­ple Get What They Deserve. If some­one is rich, they must be smart. If they’re smart, their de­ci­sions must make sense. And if their de­ci­sions look dumb, well, you must be the one who’s miss­ing some­thing. It’s a warm blan­ket of a world­view. It just does­n’t sur­vive con­tact with re­al­ity.

There’s some­thing else go­ing on, too, and it’s less in­tel­lec­tual // more an­i­mal. We see pat­terns every­where. We see them when they’re not there. Kahneman built half his ca­reer on this - we are so des­per­ate to find sig­nal in the noise that we’ll con­struct en­tire nar­ra­tives out of noth­ing, and a nar­ra­tive where the pow­er­ful guy is play­ing 12 moves ahead is just a bet­ter story than one where he fucked up be­cause that’s what peo­ple do.

But the 4D chess fram­ing also flat­ters the be­liever. If you can see the hid­den strat­egy that every­one else is miss­ing, you’re the smart one, you’re the one who gets it. Which rather stops be­ing funny when you re­al­ize what it costs…when you in­sist that every ac­tion a pow­er­ful per­son takes is part of a grand strat­egy, you strip away ac­count­abil­ity and you make it im­pos­si­ble to call a bad de­ci­sion a bad de­ci­sion.

Every fail­ure be­comes a setup for a fu­ture suc­cess that never ar­rives, and every scan­dal a dis­trac­tion from a larger game that never ma­te­ri­al­izes. The goal­posts dis­ap­pear en­tirely, be­cause the frame has be­come un­fal­si­fi­able; any out­come can be ab­sorbed into the the­ory. If the plan works, it was ge­nius. If it does­n’t, the real plan has­n’t been re­vealed yet.

This is how cults of per­son­al­ity sus­tain them­selves - through in­ter­pre­ta­tion, and through a com­mu­nity of be­liev­ers who will do the in­tel­lec­tual la­bor of mak­ing sense of the non­sen­si­cal, who treat con­fu­sion as ev­i­dence of their own lim­ited un­der­stand­ing rather than ev­i­dence that the thing they’re look­ing at is, in fact, con­fused.

The higher some­one climbs, the fewer peo­ple around them will push back.

The richer they get, the more their bad ideas get funded in­stead of chal­lenged.

The more suc­cess­ful they be­come, the more they start to be­lieve that their suc­cess came from skill rather than from some volatile, un­re­peat­able cock­tail of skill, tim­ing, luck, and other peo­ple’s la­bor.

Napoleon was bril­liant. He was also sur­rounded, by 1812, by mar­shals who were tired of ar­gu­ing with him and a court that had learned it was safer to agree, and the in­va­sion of Russia was pre­cisely what hap­pens when a bril­liant per­son loses the feed­back mech­a­nisms that kept them bril­liant.

OpenAI buy­ing a pod­cast for a price that could have funded a mid-sized AI re­search lab was­n’t a strate­gic fuck­ing mas­ter­stroke.

Sometimes pow­er­ful peo­ple just do dumb shit, and some­times there is no plan.

The people who will pay the highest price for the 4D chess delusion are, ironically, the people most devoted to it; because if you can’t look at a powerful person’s decision and say “that was a bloody stupid thing to do,” you can’t learn anything from their mistakes, and you can’t see the world clearly.

But when the choice is be­tween speak­ing up and watch­ing an unchecked mega­lo­ma­niac march 685,000 sol­diers into a Russian win­ter with­out a fur coat in sight, clar­ity is the only thing worth hav­ing.

...

Read the original on www.joanwestenberg.com »
