10 interesting stories served every morning and every evening.
Last week, I wrote about catching a supply chain attack on a WordPress plugin called Widget Logic. A trusted name, acquired by a new owner, turned into something malicious. It happened again. This time at a much larger scale.
Ricky from Improve & Grow emailed us about an alert he saw in the WordPress dashboard for a client site. The notice was from the WordPress.org Plugins Team, warning that a plugin called Countdown Timer Ultimate contained code that could allow unauthorized third-party access.
I ran a full security audit on the site. The plugin itself had already been force-updated by WordPress.org to version 2.6.9.1, which was supposed to clean things up. But the damage was already done.
The plugin’s wpos-analytics module had phoned home to analytics.essentialplugin.com, downloaded a backdoor file called wp-comments-posts.php (designed to look like the core file wp-comments-post.php), and used it to inject a massive block of PHP into wp-config.php.
The injected code was sophisticated. It fetched spam links, redirects, and fake pages from a command-and-control server. It only showed the spam to Googlebot, making it invisible to site owners. And here is the wildest part. It resolved its C2 domain through an Ethereum smart contract, querying public blockchain RPC endpoints. Traditional domain takedowns would not work because the attacker could update the smart contract to point to a new domain at any time.
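To make that mechanism concrete, the lookup amounts to a single eth_call JSON-RPC request against any public Ethereum gateway, reading a string field out of a contract. A rough sketch follows; the contract address, function selector, and endpoint below are placeholders for illustration, not the actual values used by this malware.

```
# Hypothetical sketch of resolving a C2 domain from a smart contract.
# Contract address, 4-byte selector, and gateway are placeholders.
curl -s -X POST https://cloudflare-eth.com \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_call",
           "params":[{"to":"0x0000000000000000000000000000000000000000",
                      "data":"0xdeadbeef"},"latest"]}'
# The hex string in "result" decodes to the current C2 hostname, which the
# attacker can repoint at any time with a single contract update.
```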
CaptainCore keeps daily restic backups. I extracted wp-config.php from 8 different backup dates and compared file sizes. Binary search style.
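The same comparison can be reproduced with plain restic. A minimal sketch, assuming a local repository path, hypothetical snapshot IDs, and the standard WordPress layout (CaptainCore's actual tooling wraps this differently):

```
# Dump wp-config.php from a few snapshot dates and compare sizes.
# Repo path, password handling, and file path are placeholder assumptions.
export RESTIC_REPOSITORY=/backups/example-site
restic snapshots --compact

for id in a1b2c3d4 e5f6a7b8 c9d0e1f2; do   # hypothetical snapshot IDs
  restic dump "$id" /var/www/html/wp-config.php > "wp-config-$id.php"
done
wc -c wp-config-*.php   # the injected block shows up as a sudden size jump
```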
The injection happened on April 6, 2026, between 04:22 and 11:06 UTC. A 6-hour 44-minute window.
I traced the plugin’s history through 939 quicksave snapshots. The plugin had been on the site since January 2019. The wpos-analytics module was always there, functioning as a legitimate analytics opt-in system for years.
Then came version 2.6.7, released August 8, 2025. The changelog said, “Check compatibility with WordPress version 6.8.2.” What it actually did was add 191 lines of code, including a PHP deserialization backdoor. The class-anylc-admin.php file grew from 473 to 664 lines.
The new code introduced three things:
A fetch_ver_info() method that calls file_get_contents() on the attacker’s server and passes the response to @unserialize()
A version_info_clean() method that executes @$clean($this->version_cache, $this->changelog) where all three values come from the unserialized remote data
That is a textbook arbitrary function call. The remote server controls the function name, the arguments, everything. It sat dormant for 8 months before being activated on April 5-6, 2026.
This is where it gets interesting. The original plugin was built by Minesh Shah, Anoop Ranawat, and Pratik Jain. An India-based team that operated under “WP Online Support” starting around 2015. They later rebranded to “Essential Plugin” and grew the portfolio to 30+ free plugins with premium versions.
By late 2024, revenue had declined 35-45%. Minesh listed the entire business on Flippa. A buyer identified only as “Kris,” with a background in SEO, crypto, and online gambling marketing, purchased everything for six figures. Flippa even published a case study about the sale in July 2025.
The buyer’s very first SVN commit was the backdoor.
On April 7, 2026, the WordPress.org Plugins Team permanently closed every plugin from the Essential Plugin author. At least 30 plugins, all on the same day. Here are the ones I confirmed:
* SlidersPack — All in One Image Sliders — sliderspack-all-in-one-image-sliders
All permanently closed. The author search on WordPress.org returns zero results. The analytics.essentialplugin.com endpoint now returns {"message":"closed"}.
In 2017, a buyer using the alias “Daley Tias” purchased the Display Widgets plugin (200,000 installs) for $15,000 and injected payday loan spam. That buyer went on to compromise at least 9 plugins the same way.
The Essential Plugin case is the same playbook at a larger scale. 30+ plugins. Hundreds of thousands of active installations. A legitimate 8-year-old business acquired through a public marketplace and weaponized within months.
WordPress.org’s forced update added return; statements to disable the phone-home functions. That is a band-aid. The wpos-analytics module is still there with all its code. I built patched versions with the entire backdoor module stripped out.
I scanned my entire fleet and found 12 of the 26 Essential Plugin plugins installed across 22 customer sites. I patched 10 of them (one had no backdoor module, one was a different “pro” fork by the original authors). Here are the patched versions, hosted permanently on B2:
# Countdown Timer Ultimate
wp plugin install https://plugins.captaincore.io/countdown-timer-ultimate-2.6.9.1-patched.zip --force
# Popup Anything on Click
wp plugin install https://plugins.captaincore.io/popup-anything-on-click-2.9.1.1-patched.zip --force
# WP Testimonial with Widget
wp plugin install https://plugins.captaincore.io/wp-testimonial-with-widget-3.5.1-patched.zip --force
# WP Team Showcase and Slider
wp plugin install https://plugins.captaincore.io/wp-team-showcase-and-slider-2.8.6.1-patched.zip --force
# WP FAQ (sp-faq)
wp plugin install https://plugins.captaincore.io/sp-faq-3.9.5.1-patched.zip --force
# Timeline and History Slider
wp plugin install https://plugins.captaincore.io/timeline-and-history-slider-2.4.5.1-patched.zip --force
# Album and Image Gallery plus Lightbox
wp plugin install https://plugins.captaincore.io/album-and-image-gallery-plus-lightbox-2.1.8.1-patched.zip --force
# SP News and Widget
wp plugin install https://plugins.captaincore.io/sp-news-and-widget-5.0.6-patched.zip --force
# WP Blog and Widgets
wp plugin install https://plugins.captaincore.io/wp-blog-and-widgets-2.6.6.1-patched.zip --force
# Featured Post Creative
wp plugin install https://plugins.captaincore.io/featured-post-creative-1.5.7-patched.zip --force
# Post Grid and Filter Ultimate
wp plugin install https://plugins.captaincore.io/post-grid-and-filter-ultimate-1.7.4-patched.zip --force
Each patched version removes the entire wpos-analytics directory, deletes the loader function from the main plugin file, and bumps the version to -patched. The plugin itself continues to work normally.
The process is straightforward with Claude Code. Point it at this article for context, tell it which plugin you need patched, and it can strip the wpos-analytics module the same way I did. The pattern is identical across all of the Essential Plugin plugins (a rough shell sketch follows the steps below):
Delete the wpos-analytics/ directory from the plugin
Remove the loader function block in the main plugin PHP file (search for “Plugin Wpos Analytics Data Starts” or wpos_analytics_anl)
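A minimal shell sketch of those two steps, using one plugin slug as a hypothetical example; verify what you are deleting against your own copy first:

```
# Hypothetical example for one affected plugin; repeat per plugin.
cd wp-content/plugins/countdown-timer-ultimate

# 1. Delete the analytics/backdoor module.
rm -rf wpos-analytics

# 2. Locate the loader block in the main plugin file, then remove it by hand
#    (the block is delimited by the "Plugin Wpos Analytics Data Starts" marker).
grep -nE "Plugin Wpos Analytics Data Starts|wpos_analytics_anl" *.php
```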
Two supply chain attacks in two weeks. Both followed the same pattern. Buy a trusted plugin with an established install base, inherit the WordPress.org commit access, and inject malicious code. The Flippa listing for Essential Plugin was public. The buyer’s background in SEO and gambling marketing was public. And yet the acquisition sailed through without any review from WordPress.org.
WordPress.org has no mechanism to flag or review plugin ownership transfers. There is no “change of control” notification to users. No additional code review triggered by a new committer. The Plugins Team responded quickly once the attack was discovered. But 8 months passed between the backdoor being planted and being caught.
If you manage WordPress sites, search your fleet for any of the 26 plugin slugs listed above. If you find one, patch it or remove it. And check wp-config.php.
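If WP-CLI is available, a quick per-site check might look like this; only a few slugs are shown, so extend the pattern with the full list:

```
# Flag any Essential Plugin slugs installed on this site (partial list shown).
wp plugin list --field=name | grep -E \
  'countdown-timer-ultimate|popup-anything-on-click|sp-faq|sp-news-and-widget|sliderspack-all-in-one-image-sliders'

# A wp-config.php that has ballooned in size is worth diffing against a backup.
wc -c wp-config.php
```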
...
Read the original on anchor.host »
The Photo page brings Hollywood’s most advanced color tools to still photography for the first time! Whether you’re a professional colorist looking to apply your skills to fashion shoots and weddings, or a photographer who wants to work beyond the limits of traditional photo applications, the Photo page unlocks the tools you need. Start with familiar photo tools including white balance, exposure and primary color adjustments, then switch to the Color page for access to the full DaVinci color grading toolset trusted by Hollywood’s best colorists! You can use DaVinci’s AI toolset as well as Resolve FX and Fusion FX. GPU acceleration lets you export faster than ever before!
For photographers, the Photo page offers a familiar set of tools alongside DaVinci’s powerful color grading capabilities. It includes native RAW support for Canon, Fujifilm, Nikon, Sony and even iPhone ProRAW. All image processing takes place at source resolution up to 32K, or over 400 megapixels, so you’re never limited to project resolution. Familiar basic adjustments including white balance, exposure, color and saturation give you a comfortable starting point. With non-destructive processing you can reframe, crop and re-interpret your original sensor data at any time. And with GPU acceleration, entire albums can be processed dramatically faster than conventional photo applications!
The Photo page Inspector gives you precise control over the transform and cropping parameters of your images. Reframe and crop non-destructively at the original source resolution and aspect ratio, so you’re never restricted to a fixed timeline size! Zoom, position, rotate and flip images with full transform controls and use the cropping parameters to trim the edges of any image with precision. Reframe a shot to improve composition, adjust for a specific ratio for print or social media use, or simply remove unwanted elements from the edges of a frame. All adjustments can be refined or reset at any time without ever affecting the original source file!
DaVinci Resolve is the world’s only post production software that lets everyone work together on the same project at the same time! Built on a powerful cloud based workflow, you can share albums, all associated metadata and tags, as well as grades and effects with colorists, photographers and retouchers anywhere in the world. Blackmagic Cloud syncing keeps every collaborator with the latest version of your image library in real time, and remote reviewers can approve grades offsite without needing to be in the same room. Hollywood colorists can even grade live fashion shoots remotely, all while the photographer is still on set!
The Photo page gives you everything you need to manage your entire image library from import to completion. You can import photos directly, from your Apple Photos library or Lightroom, and organize them with tags, ratings, favorites and keywords for fast, flexible management of even the largest libraries. It supports all standard RAW files and image types. AI IntelliSearch lets you instantly search across your entire project to find exactly what you’re looking for, from objects to people to animals! Albums allow you to build and manage collections for any project and with a single click you can switch between your photo library and your color grading workflow!
Albums are a powerful way to build and manage photo collections directly in DaVinci Resolve. You can add images manually to each album or organize by date, camera, star rating, EXIF data and more. Powerful filter and sort tools give you total control over how your collection is arranged. The thumbnail view displays each image’s graded version alongside its file name and source clip format so you can see your grades at a glance. Create multiple grade versions of any image, all referencing the original source file, so you can explore different looks without ever duplicating a file. Plus, grades applied to one photo can be instantly copied across others in the album for a fast, consistent look!
Connect Sony or Canon cameras directly to DaVinci Resolve for tethered shooting with full live view! Adjust camera settings including ISO, exposure and white balance without leaving the page and save image capture presets to establish a consistent look before you shoot. Images can be captured directly into an album, with albums created automatically during capture so your library is perfectly organized from the moment you start shooting. Grade images as they arrive using DaVinci Resolve’s extensive color toolset and use a hardware panel for hands-on creative control in a collaborative shoot. That means you can capture, grade and organize an entire shoot without leaving DaVinci Resolve!
The Photo page gives you access to over 100 GPU and CPU accelerated Resolve FX and specialty AI tools for still image work. They’re organized by category in the Open FX library and cover everything from color effects, blurs and glows to image repair, skin refinement and cinematic lighting tools. These are the same tools used by Hollywood colorists and VFX artists on the world’s biggest productions, now available for still images. To add an effect, drag it to any node. Whether you’re making subtle beauty refinements for a fashion shoot or applying dramatic film looks and atmospheric lighting effects emulating the looks of a Hollywood feature, the Photo page has the tools you need!
Magic Mask makes precise selections of subjects or backgrounds, while Depth Map generates a 3D map of your scene to separate foreground and background without manual masking. Use together to grade different depths of an image independently for results that have never before been possible for stills!
Add a realistic light source to any photo after capture with Relight FX. Relight analyzes the surfaces of faces and objects to reflect light naturally across the image. Combine with Magic Mask to light a subject independently from the background, turning flat portraits into stunning fashion images!
Face refinement automatically masks different parts of a face, saving countless hours of manual work. Sharpen eyes, remove dark circles, smooth skin, and color lips. Ultra Beauty separates skin texture from color for natural, high end results, while AI Blemish Removal handles fast skin repair!
The Film Look Creator lets you add cinematic looks that replicate film properties like halation, bloom, grain and vignetting. Adjust exposure in stops and use subtractive saturation, richness and split tone controls to achieve looks usually found on the big screen, now for your still images!
AI SuperScale uses the DaVinci AI Neural Engine to upscale low resolution images with exceptional quality. The enhanced mode is specifically designed to remove compression artifacts, making it the perfect tool for rescaling low quality photos or frame grabs up to 4x their original resolution!
UltraNR is a DaVinci AI Neural Engine driven denoise mode in the Color page’s spatial noise reduction palette. Use it to dramatically reduce digital noise from an image while maintaining image clarity. Use with spatial noise reduction to smooth out digital grain or scanner noise while keeping fine hair and eye edges sharp.
Sample an area of a scene to quickly cover up unwanted elements, like objects or even blemishes on a face. The patch replacer has a fantastic auto grading feature that will seamlessly blend the covered area with the surrounding color data. Perfect for removing sensor dust.
The Quick Export option makes it fast and easy to deliver finished images in a wide range of common formats including JPEG, PNG, HEIF and TIFF. Export either an entire album or just selected photos providing flexibility to meet your specific delivery needs. You can set the resolution, bit depth, quality and compression to ensure your images are optimized for their intended use. Whether you’re exporting standalone images for print, sharing on social media platforms or delivering graded files to a client, Quick Export has you covered. All exports preserve your original photo EXIF metadata, so camera settings, location data and other important information always travels with your files.
The Photo page uses GPU accelerated processing to deliver fast, accurate results across your entire workflow. Process hundreds of RAW files in seconds with GPU accelerated decoding and apply Resolve FX to your images in real time. GPU acceleration also means batch exports and conversions are dramatically faster than conventional photo applications. On Mac, DaVinci Resolve is optimized for Metal and Apple Silicon, taking full advantage of the latest hardware. On Windows and Linux, you get CUDA support for NVIDIA GPUs, while the Windows version also features full OpenCL support for AMD, Intel and Qualcomm GPUs. All this ensures you get high performance results on any system!
Hollywood colorists have always relied on hardware panels to work faster and more creatively and now photographers can too! The DaVinci Resolve Micro Color Panel is the perfect companion for photo grading as it is compact enough to sit next to a laptop and portable enough to take on location for shoots. It features three high quality trackballs for lift, gamma and gain adjustments, 12 primary correction knobs for contrast, saturation, hue, temperature and more. It even has a built in rechargeable battery! DaVinci Resolve color panels let you adjust multiple parameters at once, so you can create looks that are simply impossible with a mouse and keyboard.
Hollywood’s most popular solution for editing, visual effects, motion graphics, color correction and audio post production, for Mac, Windows and Linux. Now supports Blackmagic Cloud for collaboration!
The most powerful DaVinci Resolve adds DaVinci Neural Engine for automatic AI region tracking, stereoscopic tools, more Resolve FX filters, more Fairlight FX audio plugins and advanced HDR grading.
Includes large search dial in a design that includes only the specific keys needed for editing. Includes Bluetooth with battery for wireless use so it’s more portable than a full sized keyboard!
Editor panel specifically designed for multi-cam editing for news cutting and live sports replay. Includes buttons to make camera selection and editing extremely fast! Connects via Bluetooth or USB‑C.
Full sized traditional QWERTY editor keyboard in a premium metal design. Featuring a metal search dial with clutch, plus extra edit, trim and timecode keys. Can be installed inset for flush mounting.
Powerful color panel gives you all the control you need to create cinematic images. Includes controls for refined color grading including adding windows. Connects via Bluetooth or USB‑C.
Portable DaVinci color panel with 3 high resolution trackballs, 12 primary corrector knobs and LCDs with menus and buttons for switching tools, adding color nodes, HDR and secondary grading and more!
Designed in collaboration with professional Hollywood colorists, the DaVinci Resolve Advanced Panel features a massive number of controls for direct access to every DaVinci color correction feature.
Portable audio control surface includes 12 premium touch sensitive flying faders, channel LCDs for advanced processing, automation and transport controls plus HDMI for an external graphics display.
Get incredibly fast audio editing for sound engineers working on tight deadlines! Includes LCD screen, touch sensitive control knobs, built in search dial and full keyboard with multi function keys.
Used by Hollywood and broadcasters, these large consoles make it easy to mix large projects with a massive number of channels and tracks. Modular design allows customizing 2, 3, 4, or 5 bay consoles!
Fairlight studio console legs at 0º angle for when you require a flat working surface. Required for all Fairlight Studio Consoles.
Fairlight studio console legs at 8º angle for when you require a slightly angled working surface. Required for all Fairlight Studio Consoles.
Features 12 motorized faders, rotary control knobs, illuminated buttons for pan, solo, mute and call, plus bank select buttons.
12 groups of touch sensitive rotary control knobs and illuminated buttons, assignable to fader strips, single channel or master bus.
Get quick access to virtually every Fairlight feature! Includes a 12” LCD, graphical keyboard, macro keys, transport controls and more.
Features HDMI, SDI inputs for video and computer monitoring and Ethernet for graphics display of channel status and meters.
Empty 2 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 3 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 4 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 5 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Use alternative HDMI or SDI televisions and monitors when building a Fairlight studio console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 2 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 3 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 4 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 5 bay Fairlight console.
Side arm kit mounts into Fairlight console mounting bar and holds each fader, channel control and LCD monitor module.
Blank 1/3rd wide bay for building a custom console with the extra 1/3rd section. Includes blank infill panels.
Allows mounting standard 19 inch rack mount equipment in the channel control area of the Fairlight studio console.
Blank panel to fill in the channel control area of the Fairlight studio console.
Blank panel to fill in the LCD monitor area of the Fairlight studio console when you’re not using the standard Fairlight LCD monitor.
Blank panel to fill in the fader control area of the Fairlight studio console.
Adds 3 MADI I/O connections to the single MADI on the accelerator card, for a total of 256 inputs and outputs at 24 bit and 48kHz.
Add up to 2,000 tracks with real time processing of EQ, dynamics, 6 plug‑ins per track, plus MADI for extra 64 inputs and outputs.
Adds analog and digital connections, preamps for mics and instruments, sample rate conversion and sync at any standard frame rate.
...
Read the original on www.blackmagicdesign.com »
Large pull requests are hard to review, slow to merge, and prone to conflicts. Reviewers lose context, feedback quality drops, and the whole team slows down. Stacked PRs solve this by breaking big changes into a chain of small, focused pull requests that build on each other — each one independently reviewable.
A stack is a series of pull requests in the same repository where each PR targets the branch of the PR below it, forming an ordered chain that ultimately lands on your main branch.
GitHub understands stacks end-to-end: the pull request UI shows a stack map so reviewers can navigate between layers, branch protection rules are enforced against the final target branch (not just the direct base), and CI runs for every PR in the stack as if they were targeting the final branch.
The gh stack CLI handles the local workflow: creating branches, managing rebases, pushing to GitHub, and creating PRs with the correct base branches. On GitHub, the PR UI gives reviewers the context they need — a stack map for navigation, focused diffs for each layer, and proper rules enforcement.
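Without the gh stack CLI, the same structure can be built by hand, which also shows what the tool is automating. A sketch using plain git and gh; the branch names are hypothetical:

```
# Layer 1: targets main
git switch -c refactor-db main
git commit -am "Extract database layer"
git push -u origin refactor-db
gh pr create --base main --head refactor-db --title "Extract database layer"

# Layer 2: targets layer 1, so its diff shows only the new work
git switch -c add-caching refactor-db
git commit -am "Add caching on top of the new database layer"
git push -u origin add-caching
gh pr create --base refactor-db --head add-caching --title "Add caching"
```

The key property is that each PR's diff contains only its own layer, because its base is the PR beneath it rather than main.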
When you’re ready to merge, you can merge all or a part of the stack. Each PR can be merged directly or through the merge queue. After a merge, the remaining PRs in the stack are automatically rebased so the lowest unmerged PR targets the base branch.
Ready to dive in? Start with the Quick Start guide or read the full overview.
...
Read the original on github.github.com »
TLDR: Despite claiming to backup all your data, Backblaze quietly stopped backing up OneDrive and Dropbox folders - along with potentially many other things.
For ten years I have been using Backblaze for my personal computer backup. Before 2015 I would back up files to one of two large external hard discs. I then rotated these drives between locations: first my father's house, and after I moved to the UK, my office drawers.
In 2015 Backblaze seemed like a good bet. Unlike Crashplan, their software wasn't a bloated Java app, and they still offered unlimited storage. If you could cram it into your PC they would back it up. With their yearly Hard Drive reviews making good press and a lot of personal recommendations from my friends and colleagues, their service sounded great. I installed the software, ran it for several weeks, and sure enough my data was safely stored in their cloud.
I had further reason to be impressed when several years later one of my hard drives failed. I made use of their “send me a hard drive with my stuff on it service”. A drive turned up filled with my precious data. That for me was proof that this system worked, and that it worked well.
And so I recommended Backblaze for years. What do you do for backup? I would extoll the virtues of Backblaze, and they made many sales from such recommendations.
There were a few things I didn't like. The app could use a lot of memory, especially after doing a large import of photographs. The website, which I often used to restore single files or folders, was slow and clunky to use. The Windows app in particular was clunky, with an early 2000s aesthetic and cramped lists. There was the time they leaked all your filenames to Facebook, but they probably fixed that.
But no matter, small problems for the peace of mind of having all my files backed up.
Backup software is meant to back up your files. Which files? Well the files you need. Given everyone is different, with different workflows and filetypes, the ideal thing is to back up all your files. No backup provider knows what I will need in the future. The provider must plan accordingly.
My first troubling discovery was in 2025, when I made several errors then did a push -f to GitHub and blew away the git history for a half decade old repo. No data was lost, but the log of changes was. No problem I thought, I’ll just restore this from Backblaze. Sadly it was not to be. At some point Backblaze had started to ignore .git folders.
This annoyed me. Firstly I needed that folder and Backblaze had let me down. Secondly within the Backblaze preferences I could find no way to re-enable this. In fact looking at the list of exclusions I could find no mention of .git whatsoever.
This made me wonder - I had checked the exclusions list when I installed Backblaze 9 years before, had I missed it? Had I missed anything else?
Well, lesson learned I guess, but then a week ago I came across this thread on Reddit: "Doesn't back up Dropbox folder??". A user was surprised to find their Dropbox folder no longer being backed up. Alarmed, I logged into Backblaze, and lo and behold, my OneDrive folder was missing.
Backblaze has one job, and apparently they are unable to do that job. Back up my stuff. But they have decided not to.
Let's take an aside.
A reasonable person might point out those files on OneDrive are already being backed up - by OneDrive! No. Dropbox and OneDrive are for file syncing - syncing your files to the cloud. They offer limited protection. OneDrive and Dropbox only retain deleted files for one month. Backblaze has one year file retention, or if you pay per GB, unlimited retention. While OneDrive retains version changes for longer, Dropbox only retains version changes for a month - again unless you pay for more. Your files are less secure and less backed up when you stick them in a cloud storage provider folder compared to just being on your desktop.
And that’s assuming your cloud provider is playing ball. If Microsoft or Dropbox bans your account you may find yourself with no backup whatsoever.
For me the larger issue is they never told us. My OneDrive folder sits at 383GB. You would think that having decided to no longer back this up I might get an email, an alert, or some other notification. Of course not.
Nestled into their release notes under “Improvements” we see:
The Backup Client now excludes popular cloud storage providers from backup, including both mount points and cache directories. This prevents performance issues, excessive data usage, and unintended uploads from services like OneDrive, Google Drive, Dropbox, Box, iDrive, and others. This change aligns with Backblaze’s policy to back up only local and directly connected storage.
First, I would hardly call this change in policy an improvement; it's hard to imagine anyone reading this as anything other than a downgrade in service. Secondly, does Backblaze believe most of its users are reading their release notes?
And if you joined today and looked at their list of file exclusions you would find no reference to Dropbox or OneDrive. No mention of Git either.
Here's the thing: today they don't back up Git or OneDrive. Who's to say tomorrow they won't add to the list. Maybe some obscure file format that's critical to your workflow. Or they will ignore a file extension that just happens to be the same as one used by your DAW or 3D modelling software. And they won't tell you this. They won't even list it on their site.
By deciding not to back up everything, Backblaze has made it as if they are backing up nothing.
But really this feels like a promise broken. Back in 2015 their website proudly proclaimed:
All user data included by default
No restrictions on file type or size
Protect the digital memories and files that matter most to you.
File backup is a matter of trust. You are paying a monthly fee so that if and when things go wrong you can get your data back. By silently changing the rules, Backblaze has not simply eroded my trust, but swept it away.
I wrote this to warn you - Backblaze is no longer doing their part, they are no longer backing up your data. Some of your data, sure, but not all of it.
Finally let me leave you with Backblaze’s own words from 2015:
They promised to simplify backup. They succeeded - they don’t even do the backup part anymore.
...
Read the original on rareese.com »
Today, we are expanding our spam policies to address a deceptive practice known as "back button hijacking", which will become an explicit violation of the "malicious practices" section of our spam policies, leading to potential spam actions.
When a user clicks the “back” button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation. It occurs when a site interferes with a user’s browser navigation and prevents them from using their back button to immediately get back to the page they came from. Instead, users might be sent to pages they never visited before, be presented with unsolicited recommendations or ads, or are otherwise just prevented from normally browsing the web.
Why are we taking action?
We believe that the user experience comes first. Back button hijacking interferes with the browser’s functionality, breaks the expected user journey, and results in user frustration. People report feeling manipulated and eventually less willing to visit unfamiliar sites. As we’ve stated before, inserting deceptive or manipulative pages into a user’s browser history has always been against our Google Search Essentials.
We've seen a rise of this type of behavior, which is why we're designating this an explicit violation of our malicious practices policy, which says:
Malicious practices create a mismatch between user expectations and the actual outcome, leading to a negative and deceptive user experience, or compromised user security or privacy.
Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site's performance in Google Search results. To give site owners time to make any needed changes, we're publishing this policy two months in advance of enforcement on June 15, 2026.
What should site owners do?
Ensure you are not doing anything to interfere with a user’s ability to navigate their browser history.
If you’re currently using any script or technique that inserts or replaces deceptive or manipulative pages into a user’s browser history that prevents them from using their back button to immediately get back to the page they came from, you are expected to remove or disable it.
Notably, some instances of back button hijacking may originate from the site’s included libraries or advertising platform. We encourage site owners to thoroughly review their technical implementation and remove or disable any code, imports or any configurations that are responsible for back button hijacking, to ensure a helpful and non-deceptive experience for users.
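One practical starting point is simply searching your templates, bundles, and third-party tags for History API calls, then reviewing whether any of them insert extra entries on page load. Presence of these APIs is not itself a violation; single-page apps use them legitimately. A rough sketch:

```
# Find code that manipulates browser history; review each hit for
# pushState/replaceState calls that fire on page load or on popstate.
grep -rn --include='*.js' --include='*.html' \
  -E 'history\.(pushState|replaceState)|onpopstate|popstate' .
```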
If your site has been impacted by a manual action and you have fixed the issue, you can always let us know by submitting a reconsideration request in Search Console. For questions or feedback, feel free to reach out on social media or discuss in our help community.
...
Read the original on developers.google.com »
jj is the name of the CLI for Jujutsu. Jujutsu is a DVCS, or “distributed version control system.” You may be familiar with other DVCSes, such as git, and this tutorial assumes you’re coming to jj from git.
So why should you care about jj? Well, it has a property that's pretty rare in the world of programming: it is both simpler and easier than git, but at the same time, it is more powerful. This is a pretty huge claim! We're often taught, correctly, that there exist tradeoffs when we make choices. And "powerful but complex" is a very common tradeoff. In git's case, that power has been worth the complexity, and so people flocked to git over its predecessors.
What jj manages to do is create a DVCS that takes the best of git and the best of Mercurial (hg) and synthesizes them into something new, yet strangely familiar. In doing so, it's managed to have a smaller number of essential tools, but also make them more powerful, because they work together in a cleaner way. Furthermore, more advanced jj usage can give you additional powerful tools in your VCS sandbox that are very difficult with git.
I know that sounds like a huge claim, but I believe that the rest of this tutorial will show you why.
There’s one other reason you should be interested in giving jj a try: it has a git compatible backend, and so you can use jj on your own, without requiring anyone else you’re working with to convert too. This means that there’s no real downside to giving it a shot; if it’s not for you, you’re not giving up all of the history you wrote with it, and can go right back to git with no issues.
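Trying it out can be as small as cloning an existing GitHub repository through jj's git backend and making a change; your collaborators keep using git and never notice. A minimal sketch, assuming a recent jj release and a hypothetical repository URL:

```
# Clone a git repo through jj; the underlying repository stays plain git,
# so collaborators and CI see nothing unusual.
jj git clone https://github.com/example/project.git
cd project

jj st                      # show the working-copy commit and its changes
jj describe -m "Fix typo"  # attach a message to the current change
jj new                     # start a new change on top of it
jj log                     # browse history, jj-style
```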
...
Read the original on steveklabnik.github.io »
Lean proved this program was correct; then I found a bug.
I fuzzed a verified implementation of zlib and found a buffer overflow in the Lean runtime.
AI agents are getting very good at finding vulnerabilities in large-scale software systems.
Anthropic was apparently so spooked by the vulnerability-discovery capabilities of Mythos that they decided not to release it, as it was "too dangerous" (lol). Whether you believe the hype about these latest models or not, it seems undeniable that the writing is on the wall:
The cost of discovering security bugs is collapsing, and the vast majority of software running today was never built to withstand that kind of scrutiny. We are facing a looming software crisis.
In the face of this oncoming tsunami, recently there has been increasing interest in formal verification as a solution. If we state and prove properties about our code using a mechanical tool, can we build robust, secure and stable software that can overcome this oncoming barrage of attacks?
One recent development in the Lean ecosystem has taken steps towards this question. 10 agents autonomously built and proved an implementation of zlib, lean-zip, an impressive landmark result. Quoting from Leo De Moura, the chief architect of the Lean FRO (here):
With apologies for the AI-slop (Leo has a penchant for it, it seems), the key result is that lean-zip is not just another implementation of zlib. It is an implementation that has been verified as correct end to end, guaranteed by Lean to be entirely free of implementation bugs.
What does “verified as correct” actually look like? Here is one of the main theorems (github):
For any byte array less than 1 gigabyte, calling ZlibDecode.decompressSingle on the output of ZlibEncode.compress produces the original data. The decompress function is exactly the inverse of compression. This pair of functions is entirely correct.
I pointed a Claude agent at lean-zip over a weekend, armed with AFL++, AddressSanitizer, Valgrind, and UBSan. Over 105 million fuzzing executions, it found:
Zero memory vulnerabilities in the verified Lean application code.
A heap buffer overflow in the Lean 4 runtime (lean_alloc_sarray), affecting every version of Lean to date. (bug report, pending fix)
A denial-of-service in lean-zip’s archive parser, which was never verified.
The setup for the experiment was quite simple. I took the lean-zip codebase and produced a stripped down version and pointed Claude at it.
In particular, as part of the setup: (1) I dropped all theorems and specifications, (2) removed all markdown documentation, and (3) stripped out lean-zip’s C FFI bindings to zlib which it provided as an alternative to its native implementation. What remained was purely the verified code: the native Lean definitions for DEFLATE, gzip, ZIP archive handling, and tar. Any bug found in this would correspond to an error in the verified code.
The idea with dropping theorems and documentation was to avoid biasing the Claude agent by revealing that the code was actually verified — I figured if it knew the code “had no bugs” then it might pre-emptively give up, while operating in the blind might let it work through the software without bias.
With the lean implementation accessible through a CLI, I then spun up a server for the fuzzing experiments, pointed Claude at it, and let it go wild.
Over the course of a night, Claude launched 16 parallel fuzzers across the 6 attack surfaces of the library: ZIP extract, gzip decompress, raw DEFLATE inflate, tar extract, tar.gz, and compression. It built separate binaries with AddressSanitizer and UndefinedBehaviorSanitizer, ran Valgrind memcheck, used cppcheck and flawfinder for static analysis, crafted 48 hand-written exploit files targeting known zlib CVE patterns.
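The post doesn't include the exact harness, but the shape of each fuzzer is the standard AFL++ file-fuzzing loop: seed inputs in, a CLI that parses each file, crashes out. A generic sketch, where the binary name, subcommand, seed directory, and limits are assumptions rather than the actual setup:

```
# Generic AFL++ invocation against a file-parsing CLI (names are placeholders).
# "@@" is replaced by the path of each mutated input file.
afl-fuzz -i seeds/gzip -o findings/gzip -m none -t 2000 -- \
  ./leanzip-cli gunzip @@

# Re-run any crashing inputs under the ASan-instrumented build to triage.
for f in findings/gzip/default/crashes/id*; do
  ./leanzip-cli-asan gunzip "$f"
done
```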
Overall, this resulted in 105,823,818 fuzzing executions. 359 seed files. 16 fuzzers running for approximately 19 hours uncovering 4 crashing inputs, and 1 memory vulnerability in the code.
The most substantial finding was a heap buffer overflow: not in lean-zip's code, but in the Lean runtime itself.
The vulnerable function is lean_alloc_sarray, which allocates all scalar arrays (ByteArray, FloatArray, etc.) in Lean 4:
For a ByteArray of capacity n, the allocation size is 24 + n. When n is close to SIZE_MAX (2^64 - 1 on 64-bit systems), the addition wraps around to a small number. The runtime allocates a tiny buffer of around 23 bytes, but the caller proceeds to read n bytes into it.
The overflow can be triggered through lean_io_prim_handle_read, the C function backing IO.FS.Handle.read:
A 156-byte crafted ZIP file with a ZIP64 compressedSize of 0xFFFFFFFFFFFFFFFF is sufficient to trigger it. The same pattern exists in lean_io_get_random_bytes. The bug affects every version of Lean 4 up to and including the latest nightly (v4.31.0-nightly-2026-04-11). The minimal reproducer is 5 simple lines:
def main : IO Unit := do
  IO.FS.writeFile "test.bin" "hello"
  let h ← IO.FS.Handle.mk "test.bin" .read
  let n : USize := (0 : USize) - (1 : USize) -- SIZE_MAX
  let _ ← h.read n -- overflows in lean_alloc_sarray
Edit: there is a pending PR to Lean to fix this.
AFL also found a denial-of-service in lean-zip's own code. The readExact function in Archive.lean passes the compressedSize field from the ZIP central directory straight to h.read without validating it against the actual file size:
def readExact (h : IO.FS.Handle) (n : Nat) … := do
  while buf.size < n do
    let remaining := n - buf.size      -- n comes from the ZIP header
    let chunk ← h.read remaining.toUSize
A 156-byte ZIP claiming a compressedSize of several exabytes causes the process to panic with INTERNAL PANIC: out of memory, as h.read allocates more memory than available. This is indeed a bug: the system unzip handles this gracefully, validating header sizes against the file before allocating, while lean-zip does not and crashes with an OOM.
The OOM denial-of-service is straightforward: the archive parser was never verified. lean-zip's proofs cover the compression and decompression pipeline (DEFLATE, Huffman, CRC32, roundtrip correctness), but Archive.lean, the module that reads ZIP headers and extracts files, has zero theorems even in the original unstripped codebase. The compressedSize field is read from an untrusted header and passed directly to an allocation without validation. The situation is reminiscent of Yang et al.'s CSmith work (PLDI 2011), which found that CompCert's verified optimisation passes had zero bugs while its unverified front-end did. Verification works where it is applied. The archive parser was where lean-zip was not verified.
The heap buffer overflow is more fundamental. lean_alloc_sarray is a C++ function in the Lean runtime, part of the trusted computing base. Every Lean proof assumes the runtime is correct. A bug here does not just affect lean-zip. It affects every Lean 4 program that allocates a ByteArray.
The positive result here is actually the remarkable one. Across 105 million executions, the application code (that is, excluding the runtime) had zero heap buffer overflows, zero use-after-free, zero stack buffer overflows, zero undefined behaviour (UBSan clean), and zero out-of-bounds array reads in the Lean-generated C code. To quote Claude’s own assessment of the codebase (without knowing it was verified):
This is genuinely one of the most memory-safe codebases I’ve analyzed. The Lean type system with dependent types and well-founded recursion has eliminated entire classes of bugs that plague C/C++ zip implementations. The CVE classes that have plagued zlib for decades are structurally impossible in this codebase.
The two bugs that were found both sat outside the boundary of what the proofs cover. The denial-of-service was a missing specification. The heap overflow was a deeper issue in the trusted computing base, the C++ runtime that the entire proof edifice assumes is correct (and which now has a pending PR addressing it).
Overall verification resulted in a remarkably robust and rigorous codebase. AFL and Claude had a really hard time finding errors. But they did still find issues. Verification is only as strong as the questions you think to ask and the foundations you choose to trust.
...
Read the original on kirancodes.me »
Chicago-based music superfan Aadam Jacobs has been recording the concerts he attends since the 1980s, amassing an archive of over 10,000 tapes. Now 59, Jacobs knows that these cassettes are going to degrade over time, so he agreed to let volunteers from the Internet Archive, the nonprofit digital library, digitize the tapes.
So far, about 2,500 of these tapes have been posted on the Internet Archive, including some rare gems like a Nirvana performance from 1989. (The group wouldn't break through to mainstream audiences until they released the single "Smells Like Teen Spirit" in 1991.) Within the collection, you can also find previously unknown recordings from influential artists like Sonic Youth, R.E.M., Phish, Liz Phair, Pavement, Neutral Milk Hotel, and a whole bunch of other punk groups.
For many of these recordings, Jacobs was using pretty mediocre equipment, but the volunteer audio engineers working with the Internet Archive have made these tapes sound great.
One volunteer, Brian Emerick, drives to Jacobs’ house once a month to pick up more boxes of tapes — he has to use anachronistic cassette decks to play the tapes, which get converted into digital files. From there, other volunteers clean up, organize, and label the recordings, even tracking down song names from forgotten punk bands.
Sometimes, the internet is good. And so is this Tracy Chapman recording from 1988.
...
Read the original on techcrunch.com »
AI experts' and the public's opinions on the technology are increasingly diverging, according to Stanford University's annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy.
The report’s findings follow growing negative sentiment about AI, with Gen Z reportedly leading the way, according to a recent Gallup poll. The study found that young people were growing less hopeful and more angry about the technology, even though around half of the demographic was using AI either daily or weekly.
For some working in tech, the AI backlash has come as a surprise. AI leaders have focused on managing the possibility of Artificial General Intelligence, or AGI — a theoretical form of AI superintelligence that could perform any task a human could do and think for itself. But everyday folks are more concerned about AI’s impact on their paycheck and whether or not their power bills will go up as energy-hungry data centers are built.
The divide has been most apparent in the online reaction to the recent attacks on OpenAI CEO Sam Altman's home. In posts on X, for instance, AI insiders voiced surprise at a series of Instagram comments that seemed to praise the attack on Altman's home. Some of the online comments have a similar vibe to those that circulated online after the shooting of the United Healthcare CEO in 2024 and the more recent burning of a Kimberly-Clark warehouse by a worker angry about not receiving a "livable wage" — with some comments even going so far as to suggest that even more action, akin to a revolution, is needed.
Stanford’s report provides more insight into where all this negativity is coming from, as it summarizes data around public sentiment of AI across various sources.
For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years.
Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same.
Plus, a majority (73%) of experts felt positive about AI’s impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it’s not surprising that only 21% of the public felt similarly.
Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic on AI’s impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years.
The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. Singapore ranked highest at 81%, per Ipsos data cited in Stanford's report.
Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go “too far.”
Despite the fears and concerns, AI did get one accolade: Globally, those who feel like AI products and services offer more benefits than drawbacks slightly rose from 55% in 2024 to 59% in 2025.
But at the same time, those respondents who said that AI makes them “nervous” grew from 50% to 52% during the same period, per data cited by the report’s authors.
...
Read the original on techcrunch.com »
In June 1812, Napoleon Bonaparte marched 685,000 soldiers into Russia - the largest military force ever assembled in European history up to that point, and one of the largest military fuckups of all time.
He had no coherent supply plan for feeding them, he had no realistic timeline for when, exactly, the Russians would agree to fight a decisive battle on his terms, and he couldn’t even articulate a coherent goal for his gamble, beyond ~beat the Russians in some vague way.
He had been warned by multiple advisors, including his own foreign minister Talleyrand, that invading Russia was a catastrophic idea - and he did it anyway.
By December, roughly 400,000 of his soldiers were dead, mostly from starvation and exposure and the consequences of field surgery, and another 100,000 had been captured. The Grande Armée, the most feared fighting force on the continent, clawed its way back across the Niemen River as a frozen, shattered remnant of itself. It was the beginning of the end for Napoleon, who would never again be able to field an army of the size // quality he squandered on his pointless excursion into Russia.
Napoleon was, by any reasonable accounting, a genius - a military mind who rewrote the rules of European warfare, a political operator who fought his way up from minor-league Corsican nobility to Emperor of France and ruler of most of modern Europe before he turned 35, and a reformer whose ideas around the judicial system and the liberal order still echo today.
But none of that stopped him from making one of the dumbest decisions any leader has ever made, because he was arrogant, because he’d gotten away with so much for so long that he confused his luck for a system, and because (with the exception of Talleyrand) most of the people around him had simply stopped telling him no.
There's a particular kind of person who can't accept that story at face value, and you've met them. I am absolutely sure of it. They show up in every comment section and reply thread where someone powerful does something that looks, on its face, like a mistake - and their argument always runs the same way: you don't understand, this is actually part of a larger plan, there's a strategy here that you and I can't see because we're not operating at that anointed and elevated level…
And they are fucking everywhere.
When Elon Musk bought Twitter in October 2022 for $44 billion, a price he himself had tried to back out of after waiving due diligence (a decision so baffling that the presiding judge, Kathleen McCormick, openly marveled at it in court), the 4D chess analysts fired up immediately. You haven’t seen the inside of the honeycomb, they insisted! You don’t get it! You’re not the richest man on earth - how could you possibly hope to process his brilliance?
The mass layoffs that gutted the company’s accessibility team and its content moderation staff were, obviously, equally strategic. The verification fiasco that let someone impersonate Eli Lilly and tank their stock price with a fake tweet about free insulin had to be part of The Plan™️. The advertiser exodus that cratered the company’s revenue was just Musk shaking off the dead weight, building something new, playing a longer game than any of us could understand.
But a jury in 2026 found Musk liable for deliberately misleading investors during the acquisition. People he’d fired had to be re-hired weeks later because nobody had bothered to check whether they were, you know, running anything important before they were shown the door. The company lost roughly 80% of its value under his ownership - because there was no 4D chess. There was simply a billionaire who’d gotten used to being the smartest person in every room he walked into, who didn’t even have a Talleyrand of his own to hint that it might be a bad call, who bought a company on impulse, and then made it worse through a series of decisions that were exactly as bad as they looked from the outside.
The same thing plays out with Trump; every chaotic press conference, every contradictory policy announcement is immediately reframed by his most sycophantic supporters (and, weirdly, by a certain type of opponent who wants to believe they’re up against a mastermind).
“He knows exactly what he’s doing.”
I’ll grant you - sometimes Trump does know what he’s doing. Sometimes a provocation is calculated and the outrage does serve a purpose. But the 4D chess crowd can’t distinguish between those moments and the moments where the simplest explanation is just that a 79-year-old man with a phone, no impulse control, and an audience of millions is posting whatever dumb shit he feels like posting at 2 AM.
The powerful don’t get to be powerful without being special, right?
And if they’re special, if they’re smarter than all the rest of us, everything they do must be for a reason, right?
And if we can’t see that reason, that must be a problem with us - mere mortals - not the divinely appointed titans, right?
The most recent entry in this genre is OpenAI’s acquisition of TBPN, the daily tech talk show hosted by John Coogan and Jordi Hays. OpenAI reportedly paid in the low hundreds of millions of dollars for a show with 58,000 subscribers on YouTube. The show reports to Chris Lehane, OpenAI’s chief political operative. And predictably, the rationalizers have lined up.
Fortune ran a piece titled “3 reasons OpenAI buying daily tech show TBPN for hundreds of millions isn’t totally crazy.” The argument boiled down to: OpenAI is buying influence, packaging distribution with narrative control, positioning itself to shape public conversation about AI at a moment when that conversation will determine the regulatory environment the company operates in.
And look, some of that might be true.
But it’s worth sitting with the simpler read for a second.
A company whose own executives told staff to stop chasing “side quests” and focus on core AI model development spent hundreds of millions of dollars on a podcast. CNBC’s headline called it “chasing vibes.”
Ben Thompson at Stratechery did the most thorough demolition job. He compared OpenAI to “the short bus at the end of the rainbow,” which is funny and also brutal and also correct. The whole Stratechery piece is worth reading because Thompson actually bothered to lay out just how incoherent OpenAI’s strategy has become — they were against ads until suddenly ads were the plan, Apple was a partner until they poached Jony Ive, and Anthropic is over there shipping models while Sam Altman is signing checks for a talk show. Thompson’s takeaway: “there just isn’t much evidence that anyone knows what they are doing or that there is any sort of overarching plan.”
The 4D chess read asks you to believe that Altman - Google breathing down his neck, Anthropic breathing down his neck, Meta breathing down his neck - sat down and decided a talk show with fewer subscribers than most mid-tier gaming streamers was the best possible use of hundreds of millions of dollars.
The boring read asks you to believe a CEO did something that served his ego. Pick whichever one requires less of a leap of faith. I know which one I’m going with…
Why do people resist the boring read? Melvin Lerner had a theory. He published a book in 1980 called The Belief in a Just World, and his argument was that most of us walk around with a bone-deep need to believe that people Get What They Deserve. If someone is rich, they must be smart. If they’re smart, their decisions must make sense. And if their decisions look dumb, well, you must be the one who’s missing something. It’s a warm blanket of a worldview. It just doesn’t survive contact with reality.
There’s something else going on, too, and it’s less intellectual // more animal. We see patterns everywhere. We see them when they’re not there. Kahneman built half his career on this - we are so desperate to find signal in the noise that we’ll construct entire narratives out of nothing, and a narrative where the powerful guy is playing 12 moves ahead is just a better story than one where he fucked up because that’s what people do.
But the 4D chess framing also flatters the believer. If you can see the hidden strategy that everyone else is missing, you’re the smart one, you’re the one who gets it. Which rather stops being funny when you realize what it costs…when you insist that every action a powerful person takes is part of a grand strategy, you strip away accountability and you make it impossible to call a bad decision a bad decision.
Every failure becomes a setup for a future success that never arrives, and every scandal a distraction from a larger game that never materializes. The goalposts disappear entirely, because the frame has become unfalsifiable; any outcome can be absorbed into the theory. If the plan works, it was genius. If it doesn’t, the real plan hasn’t been revealed yet.
This is how cults of personality sustain themselves - through interpretation, and through a community of believers who will do the intellectual labor of making sense of the nonsensical, who treat confusion as evidence of their own limited understanding rather than evidence that the thing they’re looking at is, in fact, confused.
The higher someone climbs, the fewer people around them will push back.
The richer they get, the more their bad ideas get funded instead of challenged.
The more successful they become, the more they start to believe that their success came from skill rather than from some volatile, unrepeatable cocktail of skill, timing, luck, and other people’s labor.
Napoleon was brilliant. He was also surrounded, by 1812, by marshals who were tired of arguing with him and a court that had learned it was safer to agree, and the invasion of Russia was precisely what happens when a brilliant person loses the feedback mechanisms that kept them brilliant.
OpenAI buying a podcast for a price that could have funded a mid-sized AI research lab wasn’t a strategic fucking masterstroke.
Sometimes powerful people just do dumb shit, and sometimes there is no plan.
The people who will pay the highest price for the 4D chess delusion are, ironically, the people most devoted to it; because if you can’t look at a powerful person’s decision and say “that was a bloody stupid thing to do,” you can’t learn anything from their mistakes, and you can’t see the world clearly.
But when the choice is between speaking up and watching an unchecked megalomaniac march 685,000 soldiers into a Russian winter without a fur coat in sight, clarity is the only thing worth having.
...
Read the original on www.joanwestenberg.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.