10 interesting stories served every morning and every evening.
The Photo page brings Hollywood’s most advanced color tools to still photography for the first time! Whether you’re a professional colorist looking to apply your skills to fashion shoots and weddings, or a photographer who wants to work beyond the limits of traditional photo applications, the Photo page unlocks the tools you need. Start with familiar photo tools including white balance, exposure and primary color adjustments, then switch to the Color page for access to the full DaVinci color grading toolset trusted by Hollywood’s best colorists! You can use DaVinci’s AI toolset as well as Resolve FX and Fusion FX. GPU acceleration lets you export faster than ever before!
For photographers, the Photo page offers a familiar set of tools alongside DaVinci’s powerful color grading capabilities. It includes native RAW support for Canon, Fujifilm, Nikon, Sony and even iPhone ProRAW. All image processing takes place at source resolution up to 32K, or over 400 megapixels, so you’re never limited to project resolution. Familiar basic adjustments including white balance, exposure, color and saturation give you a comfortable starting point. With non-destructive processing you can reframe, crop and re-interpret your original sensor data at any time. And with GPU acceleration, entire albums can be processed dramatically faster than conventional photo applications!
The Photo page Inspector gives you precise control over the transform and cropping parameters of your images. Reframe and crop non-destructively at the original source resolution and aspect ratio, so you’re never restricted to a fixed timeline size! Zoom, position, rotate and flip images with full transform controls and use the cropping parameters to trim the edges of any image with precision. Reframe a shot to improve composition, adjust for a specific ratio for print or social media use, or simply remove unwanted elements from the edges of a frame. All adjustments can be refined or reset at any time without ever affecting the original source file!
DaVinci Resolve is the world’s only post production software that lets everyone work together on the same project at the same time! Built on a powerful cloud based workflow, you can share albums, all associated metadata and tags, as well as grades and effects with colorists, photographers and retouchers anywhere in the world. Blackmagic Cloud syncing keeps every collaborator up to date with the latest version of your image library in real time, and remote reviewers can approve grades offsite without needing to be in the same room. Hollywood colorists can even grade live fashion shoots remotely, all while the photographer is still on set!
The Photo page gives you everything you need to manage your entire image library from import to completion. You can import photos directly, from your Apple Photos library, or from Lightroom, and organize them with tags, ratings, favorites and keywords for fast, flexible management of even the largest libraries. It supports all standard RAW files and image types. AI IntelliSearch lets you instantly search across your entire project to find exactly what you’re looking for, from objects to people to animals! Albums allow you to build and manage collections for any project and with a single click you can switch between your photo library and your color grading workflow!
Albums are a powerful way to build and manage photo collections directly in DaVinci Resolve. You can add images manually to each album or organize by date, camera, star rating, EXIF data and more. Powerful filter and sort tools give you total control over how your collection is arranged. The thumbnail view displays each image’s graded version alongside its file name and source clip format so you can see your grades at a glance. Create multiple grade versions of any image, all referencing the original source file, so you can explore different looks without ever duplicating a file. Plus, grades applied to one photo can be instantly copied across others in the album for a fast, consistent look!
Connect Sony or Canon cameras directly to DaVinci Resolve for tethered shooting with full live view! Adjust camera settings including ISO, exposure and white balance without leaving the page and save image capture presets to establish a consistent look before you shoot. Images can be captured directly into an album, with albums created automatically during capture so your library is perfectly organized from the moment you start shooting. Grade images as they arrive using DaVinci Resolve’s extensive color toolset and use a hardware panel for hands-on creative control in a collaborative shoot. That means you can capture, grade and organize an entire shoot without leaving DaVinci Resolve!
The Photo page gives you access to over 100 GPU and CPU accelerated Resolve FX and specialty AI tools for still image work. They’re organized by category in the Open FX library and cover everything from color effects, blurs and glows to image repair, skin refinement and cinematic lighting tools. These are the same tools used by Hollywood colorists and VFX artists on the world’s biggest productions, now available for still images. To add an effect, drag it to any node. Whether you’re making subtle beauty refinements for a fashion shoot or applying dramatic film looks and atmospheric lighting effects emulating the looks of a Hollywood feature, the Photo page has the tools you need!
Magic Mask makes precise selections of subjects or backgrounds, while Depth Map generates a 3D map of your scene to separate foreground and background without manual masking. Use them together to grade different depths of an image independently for results that have never before been possible for stills!
Add a realistic light source to any photo after capture with Relight FX. Relight analyzes the surfaces of faces and objects to reflect light naturally across the image. Combine with Magic Mask to light a subject independently from the background, turning flat portraits into stunning fashion images!
Face refinement automatically masks different parts of a face, saving countless hours of manual work. Sharpen eyes, remove dark circles, smooth skin, and color lips. Ultra Beauty separates skin texture from color for natural, high end results, while AI Blemish Removal handles fast skin repair!
The Film Look Creator lets you add cinematic looks that replicate film properties like halation, bloom, grain and vignetting. Adjust exposure in stops and use subtractive saturation, richness and split tone controls to achieve looks usually found on the big screen, now for your still images!
AI SuperScale uses the DaVinci AI Neural Engine to upscale low resolution images with exceptional quality. The enhanced mode is specifically designed to remove compression artifacts, making it the perfect tool for rescaling low quality photos or frame grabs up to 4x their original resolution!
UltraNR is a DaVinci AI Neural Engine driven denoise mode in the Color page’s spatial noise reduction palette. It dramatically reduces digital noise while maintaining image clarity, smoothing out digital grain or scanner noise while keeping fine detail like hair and eye edges sharp.
Sample an area of a scene to quickly cover up unwanted elements, like objects or even blemishes on a face. The patch replacer has a fantastic auto grading feature that will seamlessly blend the covered area with the surrounding color data. Perfect for removing sensor dust.
The Quick Export option makes it fast and easy to deliver finished images in a wide range of common formats including JPEG, PNG, HEIF and TIFF. Export either an entire album or just selected photos providing flexibility to meet your specific delivery needs. You can set the resolution, bit depth, quality and compression to ensure your images are optimized for their intended use. Whether you’re exporting standalone images for print, sharing on social media platforms or delivering graded files to a client, Quick Export has you covered. All exports preserve your original photo EXIF metadata, so camera settings, location data and other important information always travels with your files.
The Photo page uses GPU accelerated processing to deliver fast, accurate results across your entire workflow. Process hundreds of RAW files in seconds with GPU accelerated decoding and apply Resolve FX to your images in real time. GPU acceleration also means batch exports and conversions are dramatically faster than conventional photo applications. On Mac, DaVinci Resolve is optimized for Metal and Apple Silicon, taking full advantage of the latest hardware. On Windows and Linux, you get CUDA support for NVIDIA GPUs, while the Windows version also features full OpenCL support for AMD, Intel and Qualcomm GPUs. All this ensures you get high performance results on any system!
Hollywood colorists have always relied on hardware panels to work faster and more creatively and now photographers can too! The DaVinci Resolve Micro Color Panel is the perfect companion for photo grading as it is compact enough to sit next to a laptop and portable enough to take on location for shoots. It features three high quality trackballs for lift, gamma and gain adjustments, plus 12 primary correction knobs for contrast, saturation, hue, temperature and more. It even has a built in rechargeable battery! DaVinci Resolve color panels let you adjust multiple parameters at once, so you can create looks that are simply impossible with a mouse and keyboard.
Hollywood’s most popular solution for editing, visual effects, motion graphics, color correction and audio post production, for Mac, Windows and Linux. Now supports Blackmagic Cloud for collaboration!
The most powerful version of DaVinci Resolve adds the DaVinci Neural Engine for automatic AI region tracking, stereoscopic tools, more Resolve FX filters, more Fairlight FX audio plugins and advanced HDR grading.
Features a large search dial in a design that includes only the specific keys needed for editing. Bluetooth and a built in battery allow wireless use, so it’s more portable than a full sized keyboard!
Editor panel specifically designed for multi-cam editing for news cutting and live sports replay. Includes buttons to make camera selection and editing extremely fast! Connects via Bluetooth or USB‑C.
Full sized traditional QWERTY editor keyboard in a premium metal design. Featuring a metal search dial with clutch, plus extra edit, trim and timecode keys. Can be installed inset for flush mounting.
Powerful color panel gives you all the control you need to create cinematic images. Includes controls for refined color grading, such as adding windows. Connects via Bluetooth or USB‑C.
Portable DaVinci color panel with 3 high resolution trackballs, 12 primary corrector knobs and LCDs with menus and buttons for switching tools, adding color nodes, HDR and secondary grading and more!
Designed in collaboration with professional Hollywood colorists, the DaVinci Resolve Advanced Panel features a massive number of controls for direct access to every DaVinci color correction feature.
Portable audio control surface includes 12 premium touch sensitive flying faders, channel LCDs for advanced processing, automation and transport controls plus HDMI for an external graphics display.
Get incredibly fast audio editing for sound engineers working on tight deadlines! Includes LCD screen, touch sensitive control knobs, built in search dial and full keyboard with multi function keys.
Used by Hollywood and broadcasters, these large consoles make it easy to mix large projects with a massive number of channels and tracks. Modular design allows customizing 2, 3, 4, or 5 bay consoles!
Fairlight studio console legs at 0° angle for when you require a flat working surface. Required for all Fairlight Studio Consoles.
Fairlight studio console legs at 8° angle for when you require a slightly angled working surface. Required for all Fairlight Studio Consoles.
Features 12 motorized faders, rotary control knobs, illuminated buttons for pan, solo, mute and call, plus bank select buttons.
12 groups of touch sensitive rotary control knobs and illuminated buttons, assignable to fader strips, single channel or master bus.
Get quick access to virtually every Fairlight feature! Includes a 12” LCD, graphical keyboard, macro keys, transport controls and more.
Features HDMI and SDI inputs for video and computer monitoring, plus Ethernet for graphics display of channel status and meters.
Empty 2 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 3 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 4 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 5 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Use alternative HDMI or SDI televisions and monitors when building a Fairlight studio console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 2 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 3 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 4 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 5 bay Fairlight console.
Side arm kit mounts into Fairlight console mounting bar and holds each fader, channel control and LCD monitor module.
Blank 1/3rd wide bay for building a custom console with the extra 1/3rd section. Includes blank infill panels.
Allows mounting standard 19 inch rack mount equipment in the channel control area of the Fairlight studio console.
Blank panel to fill in the channel control area of the Fairlight studio console.
Blank panel to fill in the LCD monitor area of the Fairlight studio console when you’re not using the standard Fairlight LCD monitor.
Blank panel to fill in the fader control area of the Fairlight studio console.
Adds 3 MADI I/O connections to the single MADI on the accelerator card, for a total of 256 inputs and outputs at 24 bit and 48kHz.
Add up to 2,000 tracks with real time processing of EQ, dynamics, 6 plug‑ins per track, plus MADI for an extra 64 inputs and outputs.
Adds analog and digital connections, preamps for mics and instruments, sample rate conversion and sync at any standard frame rate.
...
Read the original on www.blackmagicdesign.com »
Large pull requests are hard to review, slow to merge, and prone to conflicts. Reviewers lose context, feedback quality drops, and the whole team slows down. Stacked PRs solve this by breaking big changes into a chain of small, focused pull requests that build on each other — each one independently reviewable.
A stack is a series of pull requests in the same repository where each PR targets the branch of the PR below it, forming an ordered chain that ultimately lands on your main branch.
GitHub understands stacks end-to-end: the pull request UI shows a stack map so reviewers can navigate between layers, branch protection rules are enforced against the final target branch (not just the direct base), and CI runs for every PR in the stack as if they were targeting the final branch.
While the gh stack CLI makes the local workflow seamless, it is entirely optional. You can create and manage Stacked PRs directly via the GitHub UI, the API, or your standard Git workflow. If you choose to use the CLI, it handles creating branches, managing rebases, pushing to GitHub, and creating PRs with the correct base branches. On GitHub, the PR UI gives reviewers the context they need — a stack map for navigation, focused diffs for each layer, and proper rules enforcement.
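To make the branch structure concrete, here is a minimal sketch of building a two-layer stack with plain git (the branch names are invented for illustration):

    # Layer 1 branches off main
    git switch -c refactor-db main
    git commit -am "Refactor the database layer"
    # Layer 2 branches off layer 1
    git switch -c add-endpoint refactor-db
    git commit -am "Add the new endpoint on top of the refactor"
    # Push both layers, then open PR 1 (refactor-db -> main)
    # and PR 2 (add-endpoint -> refactor-db)
    git push -u origin refactor-db add-endpoint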
When you’re ready to merge, you can merge all or part of the stack. Each PR can be merged directly or through the merge queue. If you want to merge multiple PRs at once (e.g., the bottom two PRs in a stack), simply wait for CI to pass on those specific layers, and you can merge them in a single step. After a merge, the remaining PRs in the stack are automatically rebased so the lowest unmerged PR targets the updated base branch.
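That automatic rebase mirrors what you would otherwise do by hand. With the hypothetical branch names from the sketch above, merging the bottom PR is conceptually followed by:

    # Replay the add-endpoint commits (those not in refactor-db) onto main
    git rebase --onto main refactor-db add-endpoint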
Ready to dive in? Start with the Quick Start guide or read the full overview.
...
Read the original on github.github.com »
TLDR: Despite claiming to backup all your data, Backblaze quietly stopped backing up OneDrive and Dropbox folders - along with potentially many other things.
For ten years I have been using Backblaze for my personal computer backup. Before 2015 I would back up files to one of two large external hard disks. I then rotated these drives between locations: first my father’s house, and after I moved to the UK, my office drawers.
In 2015 Backblaze seemed like a good bet. Unlike Crashplan, their software wasn’t a bloated Java app, and they had unlimited storage. If you could cram it into your PC they would back it up. With their yearly Hard Drive reviews making good press and plenty of personal recommendations from my friends and colleagues, their service sounded great. I installed the software, ran it for several weeks, and sure enough my data was safely stored in their cloud.
I had further reason to be impressed when several years later one of my hard drives failed. I made use of their “send me a hard drive with my stuff on it” service. A drive turned up filled with my precious data. That for me was proof that this system worked, and that it worked well.
And so I recommended Backblaze for years. What do you do for backup? I would extoll the virtues of Backblaze, and they made many sales from such recommendations.
There were a few things I didn’t like. The app could use a lot of memory, especially after doing a large import of photographs. The website, which I often used to restore single files or folders, was slow and clunky to use. The Windows app in particular felt dated, with an early 2000s aesthetic and cramped lists. There was the time they leaked all your filenames to Facebook, but they probably fixed that.
But no matter, small problems for the peace of mind of having all my files backed up.
Backup software is meant to back up your files. Which files? Well the files you need. Given everyone is different, with different workflows and filetypes, the ideal thing is to back up all your files. No backup provider knows what I will need in the future. The provider must plan accordingly.
My first troubling discovery was in 2025, when I made several errors then did a push -f to GitHub and blew away the git history for a half-decade-old repo. No data was lost, but the log of changes was. No problem, I thought, I’ll just restore this from Backblaze. Sadly it was not to be. At some point Backblaze had started to ignore .git folders.
This annoyed me. Firstly, I needed that folder and Backblaze had let me down. Secondly, within the Backblaze preferences I could find no way to re-enable this. In fact, looking at the list of exclusions I could find no mention of .git whatsoever.
This made me wonder - I had checked the exclusions list when I installed Backblaze 9 years before, had I missed it? Had I missed anything else?
Well lesson learned I guess, but then a week ago I came across this thread on reddit: “Doesn’t back up Dropbox folder??”. A user was surprised to find their Dropbox folder no longer being backed up. Alarmed I logged into Backblaze, and lo and behold, my OneDrive folder was missing.
Backblaze has one job, and apparently they are unable to do that job. Back up my stuff. But they have decided not to.
Let’s take an aside.
A reasonable person might point out those files on OneDrive are already being backed up - by OneDrive! No. Dropbox and OneDrive are for file syncing - syncing your files to the cloud. They offer limited protection. OneDrive and Dropbox only retain deleted files for one month. Backblaze has one year file retention, or if you pay per GB, unlimited retention. While OneDrive retains version changes for longer, Dropbox only retains version changes for a month - again unless you pay for more. Your files are less secure and less backed up when you stick them in a cloud storage provider folder compared to just being on your desktop.
And that’s assuming your cloud provider is playing ball. If Microsoft or Dropbox bans your account you may find yourself with no backup whatsoever.
For me the larger issue is they never told us. My OneDrive folder sits at 383GB. You would think that having decided to no longer back this up I might get an email, an alert or some other notification. Of course not.
Nestled into their release notes under “Improvements” we see:
The Backup Client now excludes popular cloud storage providers from backup, including both mount points and cache directories. This prevents performance issues, excessive data usage, and unintended uploads from services like OneDrive, Google Drive, Dropbox, Box, iDrive, and others. This change aligns with Backblaze’s policy to back up only local and directly connected storage.
First, I would hardly call this change in policy an improvement; it’s hard to imagine anyone reading it as anything other than a downgrade in service. Second, does Backblaze believe most of its users are reading their release notes?
And if you joined today and looked at their list of file exclusions you would find no reference to Dropbox or OneDrive. No mention of Git either.
Here’s the thing: today they don’t back up Git or OneDrive. Who’s to say tomorrow they won’t add to the list? Maybe some obscure file format that’s critical to your workflow. Or they will ignore a file extension that just happens to be the same as one used by your DAW or 3D modelling software. And they won’t tell you this. They won’t even list it on their site.
By deciding not to back up everything, Backblaze has made it as if they are backing up nothing.
But really this feels like a promise broken. Back in 2015 their website proudly proclaimed:
All user data included by default
No restrictions on file type or size
Protect the digital memories and files that matter most to you.
File backup is a matter of trust. You are paying a monthly fee so that if and when things go wrong you can get your data back. By silently changing the rules, Backblaze has not simply eroded my trust, but swept it away.
I wrote this to warn you - Backblaze is no longer doing their part, they are no longer backing up your data. Some of your data sure, but not all of it.
Finally let me leave you with Backblaze’s own words from 2015:
They promised to simplify backup. They succeeded - they don’t even do the backup part anymore.
...
Read the original on rareese.com »
Today, we are expanding our spam policies to address a deceptive practice known as “back button hijacking”, which will become an explicit violation of the “malicious practices” section of our spam policies, leading to potential spam actions.
When a user clicks the “back” button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation. It occurs when a site interferes with a user’s browser navigation and prevents them from using their back button to immediately get back to the page they came from. Instead, users might be sent to pages they never visited before, be presented with unsolicited recommendations or ads, or are otherwise just prevented from normally browsing the web.
Why are we taking action?
We believe that the user experience comes first. Back button hijacking interferes with the browser’s functionality, breaks the expected user journey, and results in user frustration. People report feeling manipulated and eventually less willing to visit unfamiliar sites. As we’ve stated before, inserting deceptive or manipulative pages into a user’s browser history has always been against our Google Search Essentials.
We’ve seen a rise of this type of behavior, which is why we’re designating this an explicit violation of our malicious practices policy, which says:
Malicious practices create a mismatch between user expectations and the actual outcome, leading to a negative and deceptive user experience, or compromised user security or privacy.
Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site’s performance in Google Search results. To give site owners time to make any needed changes, we’re publishing this policy two months in advance of enforcement on June 15, 2026.
What should site owners do?
Ensure you are not doing anything to interfere with a user’s ability to navigate their browser history.
If you’re currently using any script or technique that inserts or replaces deceptive or manipulative pages into a user’s browser history that prevents them from using their back button to immediately get back to the page they came from, you are expected to remove or disable it.
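As a concrete illustration (my own sketch, not code from Google’s post), the pattern to audit for often looks like a script that pads the browser history and then intercepts back navigation:

    // Hypothetical example of the hijacking pattern to remove.
    // Pushing a duplicate entry means the first "back" press stays on-site...
    history.pushState({}, "", location.href);
    // ...and the popstate handler then redirects to an unsolicited page.
    window.addEventListener("popstate", () => {
      window.location.href = "https://example.com/recommended-content";
    });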
Notably, some instances of back button hijacking may originate from the site’s included libraries or advertising platform. We encourage site owners to thoroughly review their technical implementation and remove or disable any code, imports or any configurations that are responsible for back button hijacking, to ensure a helpful and non-deceptive experience for users.
If your site has been impacted by a manual action and you have fixed the issue, you can always let us know by submitting a reconsideration request in Search Console. For questions or feedback, feel free to reach out on social media or discuss in our help community.
...
Read the original on developers.google.com »
jj is the name of the CLI for Jujutsu. Jujutsu is a DVCS, or “distributed version control system.” You may be familiar with other DVCSes, such as git, and this tutorial assumes you’re coming to jj from git.
So why should you care about jj? Well, it has a property that’s pretty rare in the world of programming: it is both simpler and easier than git, but at the same time, it is more powerful. This is a pretty huge claim! We’re often taught, correctly, that there exist tradeoffs when we make choices. And “powerful but complex” is a very common tradeoff. In git’s case that power has been worth the complexity, and so people flocked to it over its predecessors.
What jj manages to do is create a DVCS that takes the best of git, the best of Mercurial (hg), and synthesizes them into something new, yet strangely familiar. In doing so, it’s managed to have a smaller number of essential tools, but also to make them more powerful, because they work together in a cleaner way. Furthermore, more advanced jj usage can give you additional powerful tools in your VCS sandbox that are very difficult with git.
I know that sounds like a huge claim, but I believe that the rest of this tutorial will show you why.
There’s one other reason you should be interested in giving jj a try: it has a git-compatible backend, and so you can use jj on your own, without requiring anyone else you’re working with to convert too. This means that there’s no real downside to giving it a shot; if it’s not for you, you’re not giving up any of the history you wrote with it, and can go right back to git with no issues.
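For example, here is a minimal sketch of trying jj inside an existing git repository in colocated mode (commands follow jj’s documented workflow; verify exact flags against the jj docs for your version):

    jj git init --colocate    # add jj alongside the existing .git directory
    jj st                     # show working-copy status
    jj describe -m "fix bug"  # attach a message to the current change
    jj git push               # push back through the git-compatible backend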
...
Read the original on steveklabnik.github.io »
Chicago-based music superfan Aadam Jacobs has been recording the concerts he attends since the 1980s, amassing an archive of over 10,000 tapes. Now 59, Jacobs knows that these cassettes are going to degrade over time, so he agreed to let volunteers from the Internet Archive, the nonprofit digital library, digitize the tapes.
So far, about 2,500 of these tapes have been posted on the Internet Archive, including some rare gems like a Nirvana performance from 1989. (The group wouldn’t break through to mainstream audiences until they released the single “Smells Like Teen Spirit” in 1991.) Within the collection, you can also find previously unknown recordings from influential artists like Sonic Youth, R.E.M., Phish, Liz Phair, Pavement, Neutral Milk Hotel, and a whole bunch of other punk groups.
For many of these recordings, Jacobs was using pretty mediocre equipment, but the volunteer audio engineers working with the Internet Archive have made these tapes sound great.
One volunteer, Brian Emerick, drives to Jacobs’ house once a month to pick up more boxes of tapes — he has to use anachronistic cassette decks to play the tapes, which get converted into digital files. From there, other volunteers clean up, organize, and label the recordings, even tracking down song names from forgotten punk bands.
Sometimes, the internet is good. And so is this Tracy Chapman recording from 1988.
...
Read the original on techcrunch.com »
Lean proved this program was correct; then I found a bug.
I fuzzed a verified implementation of zlib and found a buffer overflow in the Lean runtime.
AI agents are getting very good at finding vulnerabilities in large-scale software systems.
Anthropic was apparently so spooked by the vulnerability-discovery capabilities of Mythos that they decided not to release it, as it was “too dangerous” (lol). Whether you believe the hype about these latest models or not, it seems undeniable that the writing is on the wall:
The cost of discovering security bugs is collapsing, and the vast majority of software running today was never built to withstand that kind of scrutiny. We are facing a looming software crisis.
In the face of this oncoming tsunami, recently there has been increasing interest in formal verification as a solution. If we state and prove properties about our code using a mechanical tool, can we build robust, secure and stable software that can overcome this oncoming barrage of attacks?
One recent development in the Lean ecosystem has taken steps towards this question. 10 agents autonomously built and proved an implementation of zlib, lean-zip, an impressive landmark result. Quoting from Leo De Moura, the chief architect of the Lean FRO (here):
With apologies for the AI-slop (Leo has a penchant for it, it seems), the key result is that lean-zip is not just another implementation of zlib. It is an implementation that has been verified as correct end to end, guaranteed by Lean to be entirely free of implementation bugs.
What does “verified as correct” actually look like? Here is one of the main theorems (github):
For any byte array less than 1 gigabyte, calling ZlibDecode.decompressSingle on the output of ZlibEncode.compress produces the original data. The decompress function is exactly the inverse of compression. This pair of functions is entirely correct.
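In Lean terms, a roundtrip theorem of this shape might look something like the following sketch. Only ZlibEncode.compress and ZlibDecode.decompressSingle are names taken from lean-zip; the return type, the exact size bound, and the statement itself are my assumptions, not the library’s real theorem:

    -- Hedged sketch of a roundtrip theorem, not lean-zip's actual statement.
    theorem decompress_compress (data : ByteArray)
        (h : data.size < 1024 * 1024 * 1024) :
        ZlibDecode.decompressSingle (ZlibEncode.compress data) = some data := by
      sorry -- stands in for lean-zip's machine-checked proof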
I pointed a Claude agent at lean-zip over a weekend, armed with AFL++, AddressSanitizer, Valgrind, and UBSan. Over 105 million fuzzing executions, it found:
Zero memory vulnerabilities in the verified Lean application code.
A heap buffer overflow in the Lean 4 runtime (lean_alloc_sarray), affecting every version of Lean to date. (bug report, pending fix)
A denial-of-service in lean-zip’s archive parser, which was never verified.
The setup for the experiment was quite simple. I took the lean-zip codebase, produced a stripped-down version, and pointed Claude at it. In particular, as part of the setup: (1) I dropped all theorems and specifications, (2) removed all markdown documentation, and (3) stripped out lean-zip’s C FFI bindings to zlib, which it provided as an alternative to its native implementation. What remained was purely the verified code: the native Lean definitions for DEFLATE, gzip, ZIP archive handling, and tar. Any bug found in this would correspond to an error in the verified code.
The idea with dropping theorems and documentation was to avoid biasing the Claude agent by revealing that the code was actually verified — I figured if it knew the code “had no bugs” then it might pre-emptively give up, while operating in the blind might let it work through the software without bias.
With the Lean implementation accessible through a CLI, I then spun up a server for the fuzzing experiments, pointed Claude at it, and let it go wild.
Over the course of a night, Claude launched 16 parallel fuzzers across the 6 attack surfaces of the library: ZIP extract, gzip decompress, raw DEFLATE inflate, tar extract, tar.gz, and compression. It built separate binaries with AddressSanitizer and UndefinedBehaviorSanitizer, ran Valgrind memcheck, used cppcheck and flawfinder for static analysis, crafted 48 hand-written exploit files targeting known zlib CVE patterns.
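A typical invocation for one of those fuzzers might look like the following sketch (the harness name, subcommand, and paths are my assumptions; only the afl-fuzz flags themselves are standard AFL++ usage):

    # One fuzzer instance; AFL++ substitutes each test case for @@
    afl-fuzz -i seeds/gzip -o findings/gzip -- ./lean_zip_cli gunzip @@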
Overall, this resulted in 105,823,818 fuzzing executions and 359 seed files, with 16 fuzzers running for approximately 19 hours, uncovering 4 crashing inputs and 1 memory vulnerability in the code.
The most substantial finding was a heap buffer overflow! Not in lean-zip’s code, but in the Lean runtime itself.
The vulnerable function is lean_alloc_sarray, which allocates all scalar arrays (ByteArray, FloatArray, etc.) in Lean 4. For a ByteArray of capacity n, the allocation size is 24 + n. When n is close to SIZE_MAX (2^64 - 1 on 64-bit systems), the addition wraps around to a small number. The runtime allocates a tiny buffer of around 23 bytes, but the caller proceeds to read n bytes into it.
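The shape of the bug is the classic unchecked size computation. A reduced sketch of the pattern (not the actual Lean runtime source, whose structure and field names differ):

    /* Reduced sketch of the overflow pattern, not the real lean_alloc_sarray. */
    #include <stddef.h>
    #include <stdlib.h>

    void *alloc_sarray_sketch(size_t capacity) {
        /* With capacity near SIZE_MAX, 24 + capacity wraps to a tiny value,
           so malloc returns a buffer far smaller than the caller expects. */
        size_t total = 24 + capacity; /* unchecked addition: can overflow */
        return malloc(total);
    }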
The overflow can be triggered through lean_io_prim_handle_read, the C function backing IO.FS.Handle.read. A 156-byte crafted ZIP file with a ZIP64 compressedSize of 0xFFFFFFFFFFFFFFFF is sufficient to trigger it. The same pattern exists in lean_io_get_random_bytes. The bug affects every version of Lean 4 up to and including the latest nightly (v4.31.0-nightly-2026-04-11). The minimal reproducer is 5 simple lines:
def main : IO Unit := do
  IO.FS.writeFile "test.bin" "hello"
  let h ← IO.FS.Handle.mk "test.bin" .read
  let n : USize := (0 : USize) - (1 : USize) -- SIZE_MAX
  let _ ← h.read n -- overflows in lean_alloc_sarray
Edit: there is a pending PR to Lean to fix this.
AFL also found a denial-of-service in lean-zip’s own code. The readExact function in Archive.lean passes the compressedSize field from the ZIP central directory straight to h.read without validating it against the actual file size (here):
def readExact (h : IO.FS.Handle) (n : Nat) … := do
  while buf.size < n do
    let remaining := n - buf.size
    let chunk ← h.read remaining.toUSize
    -- n comes from the ZIP header
A 156-byte ZIP claiming a compressedSize of several exabytes causes the process to panic with INTERNAL PANIC: out of memory, as h.read allocates more memory than available. This is indeed a bug: the system unzip handles this gracefully, validating header sizes against the file before allocating, while lean-zip does not and crashes with an OOM.
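One possible guard, sketched here purely as my own illustration (not lean-zip’s code, nor the pending fix): cap each read at a bounded chunk size and fail cleanly when the file ends before the header-claimed size is reached:

    -- Illustrative only: bound each chunk and detect EOF rather than
    -- trusting the header-claimed size n for a single huge allocation.
    def readExactBounded (h : IO.FS.Handle) (n : Nat) : IO ByteArray := do
      let chunkCap : USize := 65536
      let mut buf := ByteArray.empty
      while buf.size < n do
        let remaining := (n - buf.size).toUSize
        let chunk ← h.read (if remaining < chunkCap then remaining else chunkCap)
        if chunk.size == 0 then
          throw (IO.userError "unexpected end of file") -- the header lied
        buf := buf ++ chunk
      return buf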
The OOM denial-of-service is straightforward: the archive parser was never verified. lean-zip’s proofs cover the compression and decompression pipeline (DEFLATE, Huffman, CRC32, roundtrip correctness), but Archive.lean, the module that reads ZIP headers and extracts files, has zero theorems even in the original unstripped codebase. The compressedSize field is read from an untrusted header and passed directly to an allocation without validation. The situation is reminiscent of Yang et al.’s CSmith work (PLDI 2011), which found that CompCert’s verified optimisation passes had zero bugs while its unverified front-end did. Verification works where it is applied. The archive parser was where lean-zip was not verified.
The heap buffer overflow is more fundamental. lean_alloc_sarray is a C++ function in the Lean runtime, part of the trusted computing base. Every Lean proof assumes the runtime is correct. A bug here does not just affect lean-zip. It affects every Lean 4 program that allocates a ByteArray.
The positive result here is actually the remarkable one. Across 105 million executions, the application code (that is, excluding the runtime) had zero heap buffer overflows, zero use-after-free, zero stack buffer overflows, zero undefined behaviour (UBSan clean), and zero out-of-bounds array reads in the Lean-generated C code. To quote Claude’s own assessment of the codebase (without knowing it was verified):
This is genuinely one of the most memory-safe codebases I’ve analyzed. The Lean type system with dependent types and well-founded recursion has eliminated entire classes of bugs that plague C/C++ zip implementations. The CVE classes that have plagued zlib for decades are structurally impossible in this codebase.
The two bugs that were found both sat outside the boundary of what the proofs cover. The denial-of-service was a missing specification. The heap overflow was a deeper issue in the trusted computing base, the C++ runtime that the entire proof edifice assumes is correct (and which now has a PR addressing it).
Overall, verification resulted in a remarkably robust and rigorous codebase. AFL and Claude had a really hard time finding errors. But they did still find issues. Verification is only as strong as the questions you think to ask and the foundations you choose to trust.
...
Read the original on kirancodes.me »
Telefónica Audiovisual Digital, the division of the telecommunications operator that runs the Movistar Plus+ platform, obtained a new court ruling on March 23 that authorizes it to apply new blocks related not only to football, but also to other sports and even entertainment content.
Since February 2025, the internet in Spain has suffered connectivity problems whenever a major LaLiga football match is on. The clubs’ association, together with Telefónica, obtained court authorization to dynamically block IP addresses detected participating in the unauthorized broadcast of its content. Just as a street contains many homes, a single IP address hosts thousands of websites, which become inaccessible when that address is blocked. Every weekend, the system orchestrated by Javier Tebas interferes with access to numerous legitimate websites, as the Government itself has acknowledged.
Outside LaLiga competitions, during the hours of other football matches, users could use the network normally, but that will cease to be the case as soon as today. Antonio Lorenzo of ElEconomista reports the existence of a new authorization to extend the blocks.
Pending the text of the ruling, and according to the article, this time the party behind the new blocks is Telefónica alone, through its audiovisual division. The Commercial Court of Barcelona has authorized the dynamic blocking of websites that unlawfully distribute content owned by Telefónica.
The report speaks of blocks on domains, URLs and IP addresses; the latter, when it happens, affects legitimate services if the addresses belong to CDN services such as Cloudflare.
The blocks will apply “every day that live sporting events are broadcast”, starting for the first time with the Champions League knockout match between Atlético de Madrid and Barcelona played today, Tuesday, April 14. It will continue on Wednesday with Bayern Munich vs. Real Madrid. In addition, according to the newspaper, it will be repeated “at other sporting events, such as tennis or golf tournaments, both in live broadcasts and in films and series”.
The authorization includes an important novelty: it does not affect only the major operators, as happens with the LaLiga blocks, but is directed, in addition to the brands of Movistar, MásOrange, Vodafone and Digi, at “the rest of the small and medium-sized operators that offer national, regional and local network access services”. These operators will receive from Telefónica the lists of “IP addresses as well as URLs and domain names used for the unlawful broadcasts”.
...
Read the original on bandaancha.eu »
I wrote to Flock’s privacy contact to opt out of their domestic spying program:
I am a resident of California. As such, and because you are subject to the CCPA, delete all information about me, my vehicle, and other household members from all of your databases. I do not give you permission to collect or store data about me, my vehicles, or my relatives, in any future situation.
Dear [misspelled name, i.e. not copied and pasted],
Your request cannot be completed at this time.
Thank you for submitting your privacy request. At this time, we are unable to process this request for the reasons detailed below.
Flock Safety provides its services to our customers, and our customers are owners and controllers of the data Flock Safety processes on their behalf. Flock Safety processes data as a service provider and processor for our customers and as a result, we are unable to directly fulfill your request. We recommend contacting the organization that engaged Flock Safety’s services to submit your request, as they are responsible for assessing and responding to it.
Here are a few additional points about Flock Safety’s data collection and privacy practices:
* Customer Contracts: Flock Safety’s processing activity as a service provider and processor is governed by the contract we have with our customers, which captures their instructions and the limitations on how Flock Safety may process their data. Flock Safety’s customers own the data and make all decisions around how such data is used and shared.
* No Sale of Data: Because Flock Safety’s customers own the data, Flock Safety may only process the data in accordance with our customer’s instructions, as outlined in our contracts with customers. Flock Safety is not permitted to sell, publish, or exchange such data for our own commercial purposes.
* Information Collected: Where Flock Safety’s customers leverage License Plate Reader (LPR) technology, the LPRs do not process sensitive information like names or addresses. Instead, LPRs only capture images of publicly available and visible vehicle characteristics that are taken in the public view.
* Purpose: Flock Safety customers use data for security purposes, including managing public safety or responding to safety concerns and reports. Additionally, such data may be used to help solve crimes and provide objective evidence.
* Retention: By default, Flock Safety’s systems only retain data for 30 days, which means that any data collected on behalf of customers is permanently hard deleted on a rolling 30 day basis. Flock Safety customers are able to adjust this retention period based on their local laws or policies.
For more information about how Flock Safety processes data, please refer to our Privacy Policy and LPR Policy.
I think that’s legally inaccurate. They’re the entity collecting and processing my personally identifiable information, and my non-lawyer reading of the California Consumer Privacy Act (CCPA) would seem to obligate them to comply with my request. I haven’t decided to engage a lawyer yet, but neither have I ruled it out.
...
Read the original on honeypot.net »
AI experts and the public’s opinion on the technology are increasingly diverging, according to Stanford University’s annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy.
The report’s findings follow growing negative sentiment about AI, with Gen Z reportedly leading the way, according to a recent Gallup poll. The study found that young people were growing less hopeful and more angry about the technology, even though around half of the demographic was using AI either daily or weekly.
For some working in tech, the AI backlash has come as a surprise. AI leaders have focused on managing the possibility of Artificial General Intelligence, or AGI — a theoretical form of AI superintelligence that could perform any task a human could do and think for itself. But everyday folks are more concerned about AI’s impact on their paycheck and whether or not their power bills will go up as energy-hungry data centers are built.
The divide has been most apparent in the online reaction to the recent attacks on OpenAI CEO Sam Altman’s home. In posts on X, for instance, AI insiders voiced surprise at a series of Instagram comments that seemed to praise the attack on Altman’s home. Some of the online comments have a similar vibe to those that circulated online after the shooting of the UnitedHealthcare CEO in 2024 and the more recent burning of a Kimberly-Clark warehouse by a worker angry about not receiving a “livable wage”, with some comments even going so far as to suggest that even more action, akin to a revolution, is needed.
Stanford’s report provides more insight into where all this negativity is coming from, as it summarizes data around public sentiment of AI across various sources.
For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years.
Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same.
Plus, a majority (73%) of experts felt positive about AI’s impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it’s not surprising that only 21% of the public felt similarly.
Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic on AI’s impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years.
The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. Singapore ranked highest at 81%, per data pulled from Ipsos found in Stanford’s report.
Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go “too far.”
Despite the fears and concerns, AI did get one accolade: globally, the share of people who feel AI products and services offer more benefits than drawbacks rose slightly from 55% in 2024 to 59% in 2025.
But at the same time, those respondents who said that AI makes them “nervous” grew from 50% to 52% during the same period, per data cited by the report’s authors.
...
Read the original on techcrunch.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.