10 interesting stories served every morning and every evening.
Every time an application on your computer opens a network connection, it does so quietly, without asking. Little Snitch for Linux makes that activity visible and gives you the option to do something about it. You can see exactly which applications are talking to which servers, block the ones you didn’t invite, and keep an eye on traffic history and data volumes over time.
Once installed, open the user interface by running littlesnitch in a terminal, or go straight to http://localhost:3031/. You can bookmark that URL, or install it as a Progressive Web App. Any Chromium-based browser supports this natively, and Firefox users can do the same with the Progressive Web Apps extension.
The connections view is where most of the action is. It lists current and past network activity by application, shows you what’s being blocked by your rules and blocklists, and tracks data volumes and traffic history. Sorting by last activity, data volume, or name, and filtering the list to what’s relevant, makes it easy to spot anything unexpected. Blocking a connection takes a single click.
The traffic diagram at the bottom shows data volume over time. You can drag to select a time range, which zooms in and filters the connection list to show only activity from that period.
Blocklists let you cut off whole categories of unwanted traffic at once. Little Snitch downloads them from remote sources and keeps them current automatically. It accepts lists in several common formats: one domain per line, one hostname per line, /etc/hosts style (IP address followed by hostname), and CIDR network ranges. Wildcard formats, regex or glob patterns, and URL-based formats are not supported. When you have a choice, prefer domain-based lists over host-based ones; they’re handled more efficiently. Well-known sources include Hagezi, Peter Lowe, Steven Black, and oisd.nl, just to give you a starting point.
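To make the accepted formats concrete, here’s a small hand-written sketch of each (the hostnames and addresses are placeholders, not recommendations):

```
# One domain or hostname per line
ads.example.com
tracker.example.net

# /etc/hosts style: IP address followed by hostname
0.0.0.0 telemetry.example.org

# CIDR network range
203.0.113.0/24
```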
One thing to be aware of: the .lsrules format from Little Snitch on macOS is not compatible with the Linux version.
Blocklists work at the domain level, but rules let you go further. A rule can target a specific process, match particular ports or protocols, and be as broad or narrow as you need. The rules view lets you sort and filter them so you can stay on top of things as the list grows.
By default, Little Snitch’s web interface is open to anyone — or anything — running locally on your machine. A misbehaving or malicious application could, in principle, add and remove rules, tamper with blocklists, or turn the filter off entirely.
If that concerns you, Little Snitch can be configured to require authentication. See the Advanced configuration section below for details.
Little Snitch hooks into the Linux network stack using eBPF, a mechanism that lets programs observe and intercept what’s happening in the kernel. An eBPF program watches outgoing connections and feeds data to a daemon, which tracks statistics, enforces your rules, and serves the web UI.
The source code for the eBPF program and the web UI is on GitHub.
The UI deliberately exposes only the most common settings. Anything more technical can be configured through plain text files, which take effect after restarting the littlesnitch daemon.
The default configuration lives in /var/lib/littlesnitch/config/. Don’t edit those files directly — copy whichever one you want to change into /var/lib/littlesnitch/overrides/config/ and edit it there. Little Snitch will always prefer the override.
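For example, to customize the web UI settings, the workflow looks roughly like this (assuming your system runs the daemon as a systemd service named littlesnitch; adjust the restart step to however your system actually runs it):

```shell
# Copy the default into the overrides tree, then edit the copy.
sudo cp /var/lib/littlesnitch/config/web_ui.toml \
        /var/lib/littlesnitch/overrides/config/web_ui.toml
sudo "$EDITOR" /var/lib/littlesnitch/overrides/config/web_ui.toml
# Changes take effect after restarting the daemon.
sudo systemctl restart littlesnitch
```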
The files you’re most likely to care about:
web_ui.toml — network address, port, TLS, and authentication. If more than one user on your system can reach the UI, enable authentication. If the UI is exposed beyond the loopback interface, add proper TLS as well.
main.toml — what to do when a connection matches nothing. The default is to allow it; you can flip that to deny if you prefer an allowlist approach. But be careful! It’s easy to lock yourself out of the computer!
executables.toml — a set of heuristics for grouping applications sensibly. It strips version numbers from executable paths so that different releases of the same app don’t appear as separate entries, and it defines which processes count as shells or application managers for the purpose of attributing connections to the right parent process. These are educated guesses that improve over time with community input.
Both the eBPF program and the web UI can be swapped out for your own builds if you want to go that far. Source code for both is on GitHub. Again, Little Snitch prefers the version in overrides.
Little Snitch for Linux is built for privacy, not security, and that distinction matters. The macOS version can make stronger guarantees because it can rely on a far more complex implementation. On Linux, the foundation is eBPF, which is powerful but bounded: it has strict limits on storage size and program complexity. Under heavy traffic, cache tables can overflow, which makes it impossible to reliably tie every network packet to a process or a DNS name. And reconstructing which hostname was originally looked up for a given IP address requires heuristics rather than certainty. The macOS version uses deep packet inspection to do this more reliably; that’s not an option here.
For keeping tabs on what your software is up to and blocking legitimate software from phoning home, Little Snitch for Linux works well. For hardening a system against a determined adversary, it’s not the right tool.
Little Snitch for Linux has three components. The eBPF kernel program and the web UI are both released under the GNU General Public License version 2 and available on GitHub. The daemon (littlesnitch --daemon) is proprietary, but free to use and redistribute.
...
Read the original on obdev.at »
In early March, I noticed approximately $180 in unexpected charges to my Anthropic account. I’m a Claude Max subscriber, and between March 3 and 5, I received 16 separate “Extra Usage” invoices of $10–$13 each, all in quick succession. However, I wasn’t using Claude. I was away from my laptop entirely, out sailing with my parents back home in San Diego.
When I checked my usage dashboard, it showed my session at 100% despite no activity. My Claude Code session history showed two tiny sessions from March 5 totaling under 7KB (no sessions on March 3 or 4). Nothing that would explain $180 in Extra Usage charges.
This isn’t just me. Other Max plan users have reported the same issue. There are numerous GitHub issues about it (e.g. claude-code#29289 and claude-code#24727), and posts on r/ClaudeCode describing the exact same behavior: usage meters showing incorrect values and Extra Usage charges piling up erroneously.
On March 7, I sent a detailed email to Anthropic support laying out the situation with all the evidence above. Within two minutes, I received a response… from “Fin AI Agent, Anthropic’s AI Agent.” The AI agent told me to go through an in-app refund request flow. Sadly, that refund pipeline only applies to subscriptions, not to Extra Usage charges. I also wanted a human to confirm exactly what went wrong rather than just getting a refund and calling it a day.
So, naturally, I replied asking to speak to a human. The response:
Thank you for reaching out to Anthropic Support. We’ve received your request for assistance.
While we review your request, you can visit our Help Center and API documentation for self-service troubleshooting. A member of our team will be with you as soon as we can.
That was March 7. I followed up on March 17. No response. I followed up again on March 25. No response. I followed up again today, April 8, over a month later. Still nothing.
Anthropic is an AI company that builds one of the most capable AI assistants in the world. Their support system is a Fin AI chatbot that can’t actually help you, and there is seemingly no human behind it. I don’t have a problem with AI-assisted support, though I do have a problem with AI-only support that serves as a wall between customers and anyone who can actually resolve their issue.
...
Read the original on nickvecchioni.github.io »
Say you’re being handed a USB device and told to write a driver for it. Seems like a daunting task at first, right? Writing drivers means you have to write Kernel code, and writing Kernel code is hard: low level, difficult to debug, and so on.
None of this is actually true though. Writing a driver for a USB device is actually not much more difficult than writing an application that uses Sockets.
This post aims to be a high-level introduction to USB for people who may not have worked much with Hardware yet and just want to use the technology. There are amazing resources out there, such as USB in a NutShell, that go into a lot of detail about how USB precisely works (check them out if you want more information); they are however not really approachable for somebody who has never worked with USB before and doesn’t already have a background in Hardware. You don’t need to be an Embedded Systems Engineer to use USB, the same way you don’t need to be a Network Specialist to use Sockets and the Internet.
The device we’ll be using is an Android phone in Bootloader mode. The reasons for this are:
* It’s a device you can easily get your hands on
* The protocol it uses is well documented and incredibly simple
* Drivers for it are generally not pre-installed on your system so the OS will not interfere with our experiments
Getting the phone into Bootloader mode is different for every device, but usually involves holding down a combination of buttons while the phone is starting up. In my case it’s holding the volume down button while powering on the phone.
Enumeration refers to the process of the host asking the device for information about itself. This happens automatically when you plug in the device, and it’s where the OS normally decides which driver to load for the device. For most standard devices, the OS will look at the USB Device Class and load a driver that supports that class. For vendor-specific devices, you generally install a driver made by the manufacturer, which will look at the VID (Vendor ID) and PID (Product ID) instead to detect whether or not it should handle the device.
Even without a driver, plugging the phone into your computer will still make it get recognized as a USB device. That’s because the USB specification defines a standard way for devices to identify themselves to the host, more on how that exactly works in a bit though.
On Linux, we can use the handy lsusb tool to see what the device identified itself as:
Bus and Device are just identifiers for the physical USB port the device is plugged into. They will most likely differ on your system since they depend on which port you plugged the device into.
ID is the most interesting part here. The first part 18d1 is the Vendor ID (VID) and the second part 4ee0 is the Product ID (PID). These are identifiers that the device sends to the host to identify itself. The VID is assigned by the USB-IF to companies that pay them a lot of money, in this case Google, and the PID is assigned by the company to a specific product, in this case the Nexus/Pixel Bootloader.
Using the lsusb -t command we can also see the device’s USB class and what driver is currently handling it:
This shows the entire tree of USB devices connected to the system. The bottommost one in this part of the tree is our device (Bus 008, Device 014, as reported in the previous command). The Class=Vendor Specific Class part specifies that the device does not use any of the standard USB classes (e.g. HID, Mass Storage, or Audio) but instead uses a custom protocol defined by the manufacturer. The Driver=[none] part simply tells us that the OS didn’t load a driver for the device, which is good for us since we want to write our own.
We will also key off the VID and PID, since they are the only real identifying information we have. The Device Class is not very useful here, since it’s just Vendor Specific Class, which any manufacturer can use for any device. Instead of doing all of this in the Kernel though, we can write a Userspace application that does the same thing. This is much easier to write and debug (and is arguably the correct place for drivers to live anyway, but that’s a different topic). To do this, we can use the libusb library, which provides a simple API for communicating with USB devices from Userspace. It achieves this by providing a generic driver that can be loaded for any device, along with a way for Userspace applications to claim the device and talk to it directly.
The same thing we just did manually can also be done in software though. The following program initializes libusb, registers a hotplug event handler for devices matching the 18d1:4ee0 VendorId / ProductId combination and then waits for that device to be plugged into the host.
If you compile and run this, plugging in the device should result in the following output:
Congrats! You have a program now that can detect your device without ever having to touch any Kernel code at all.
Next step, getting any answer from the device. The easiest way to do that for now is by using the standardized Control endpoint. This endpoint is always on ID 0x00 and has a standardized protocol. This endpoint is also what the OS previously used to identify the device and get its VID:PID.
The way we use this endpoint is with yet another libusb function that’s made specifically to send requests to that endpoint. So we can extend our hotplug event handler using the following code:
This code will now send a GET_STATUS request to the device as soon as it’s plugged in and prints out the data it sends back to the console.
Those bytes came from the device itself! Decoding them using the specification tells us that the first byte says whether or not the device is Self-Powered (1 means it is, which makes sense: the device has a battery) and the second byte says it does not support Remote Wakeup (meaning it cannot wake up the host).
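As a sketch of that decoding (Python here for brevity; per the spec, the two bytes actually form one little-endian 16-bit status word, with Self-Powered in bit 0 and Remote Wakeup in bit 1):

```python
import struct

def decode_get_status(data: bytes) -> dict:
    """Decode the 2-byte wStatus word returned by a device GET_STATUS request.

    The two bytes form a little-endian 16-bit word:
    bit 0 = Self-Powered, bit 1 = Remote Wakeup.
    """
    (status,) = struct.unpack("<H", data)
    return {
        "self_powered": bool(status & 0x01),
        "remote_wakeup": bool(status & 0x02),
    }

# The phone answered 01 00: self-powered (it has a battery), no remote wakeup.
print(decode_get_status(bytes([0x01, 0x00])))
# → {'self_powered': True, 'remote_wakeup': False}
```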
There are a few more standardized request types (and some devices even add their own for simple things!) but the main one we (and the OS too) are interested in is the GET_DESCRIPTOR request.
Descriptors are binary structures that are generally hardcoded into the firmware of a USB device. They are what tells the host exactly what the device is, what it’s capable of and what driver it would like the OS to load. So when you plug in a device, the host simply sends multiple GET_DESCRIPTOR requests to the standardized Control Endpoint at ID 0x00 to get back a struct that gives it all the information it needs for enumeration. And the cool thing is, we can do that too!
Instead of a GET_STATUS request, we now send a GET_DESCRIPTOR request:
This now instead returns the following data:
Now to decode this data, we need to look at the USB specification on Chapter 9.6.1 Device. There we can find that the format looks as follows:
Throwing the data into ImHex and giving its Pattern Language this structure definition yields the following result:
And there we have it! idVendor and idProduct correspond to the values we found previously using lsusb.
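If you’d rather decode it in code than in ImHex, the same structure can be unpacked in a few lines (Python here for brevity; the descriptor bytes below are illustrative, with only the VID/PID taken from our device):

```python
import struct

# Field layout from the USB 2.0 spec, Chapter 9.6.1 Device
# (all multi-byte fields are little-endian).
DEVICE_DESCRIPTOR = struct.Struct("<BBHBBBBHHHBBBB")
FIELDS = (
    "bLength", "bDescriptorType", "bcdUSB", "bDeviceClass",
    "bDeviceSubClass", "bDeviceProtocol", "bMaxPacketSize0",
    "idVendor", "idProduct", "bcdDevice",
    "iManufacturer", "iProduct", "iSerialNumber", "bNumConfigurations",
)

def decode_device_descriptor(data: bytes) -> dict:
    """Unpack the 18-byte device descriptor into named fields."""
    return dict(zip(FIELDS, DEVICE_DESCRIPTOR.unpack(data[:DEVICE_DESCRIPTOR.size])))

# Illustrative descriptor: VID/PID as seen in lsusb, other values made up.
raw = struct.pack("<BBHBBBBHHHBBBB",
                  18, 1, 0x0200, 0, 0, 0, 64,
                  0x18D1, 0x4EE0, 0x0100, 1, 2, 3, 1)
desc = decode_device_descriptor(raw)
print(f"{desc['idVendor']:04x}:{desc['idProduct']:04x}")  # → 18d1:4ee0
```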
There’s more than just the device descriptor though. There’s also Configuration, Interface, Endpoint, String and a couple of other descriptors. These can all be read using the same GET_DESCRIPTOR request on the control endpoint. We could still do this all by hand but luckily for us, lsusb has an option that can do that for us already!
This output shows us a few more of the descriptors the device has. Specifically, it has a single Configuration Descriptor that contains a Interface Descriptor for the Android Fastboot interface. And that interface now contains two Endpoints. This is where the device tells the host about all the other endpoints, besides the Control endpoint, and these will be the ones we’ll be using in the next step to actually finally send data to the device’s Fastboot interface!
Let’s talk a bit more about endpoints first though. We already learned about the Control endpoint on address 0x00. Endpoints are basically the equivalent of ports that a device on the network opened for us to send data back and forth. The device specifies in its descriptors which kinds of endpoints it has and then services them in its firmware. So we don’t even need to do port scanning or know that SSH usually runs on port 22; we have a nice way of finding out what interfaces the device has, what language they speak, and how we can speak to them. Looking at the descriptors above, that Control endpoint is not listed though. Instead, there are two others with different types.
There’s exactly one per device and it’s always fixed on Endpoint Address 0x00. It’s what is used to do initial configuration and to request information about the device.
The main purpose of the Control endpoint is to solve the chicken-and-egg problem where you couldn’t communicate with a device without knowing its endpoints but to know its endpoints you’d need to communicate with it. That’s also why it doesn’t even appear in the descriptors. It’s not part of any interface but the device itself. And we know about its existence thanks to the spec, without it having to be advertised.
It’s made for setting simple configuration values or requesting small amounts of data. The function in libusb doesn’t even allow you to set the endpoint address to make a control request to, because there’s only ever one control endpoint and it’s always on address 0x00.
Bulk Endpoints are what’s used when you want to transfer larger amounts of data. They’re used when you have large amounts of non-time-sensitive data that you just want to send over the wire.
This is what’s used for things like the Mass Storage Class, CDC-ACM (Serial Port over USB) and RNDIS (Ethernet over USB).
One detail: data sent over Bulk endpoints is high bandwidth but low priority. This means Bulk data will always just fill up the remaining bandwidth. Any Interrupt and Isochronous transfers (further detail below) have a higher priority, so if you’re sending both Bulk and Isochronous data over the same connection, the bandwidth of the Bulk transmission will be lowered until the Isochronous one can transmit its data in the requested timeframe.
Interrupt Endpoints are the opposite of Bulk Endpoints. They allow you to send small amounts of data with very low latency. For example Keyboards and Mice use this transfer type under the HID Class to poll for button presses 1000+ times per second. If no button was pressed, the transfer fails immediately without sending back a full failure message (only a NAK), only when something actually changed you’ll get a description back of what happened.
The important fact here is: even though these are called Interrupt endpoints, there are no interrupts happening. The Device still does not talk to the Host without being asked. The Host just polls so frequently that it acts as if it were an interrupt.
The functions in libusb that handle interrupt transfers also abstract this behaviour away further. You can start an interrupt transfer and the function will block until the device sends back a full response.
Isochronous Endpoints are somewhat special. They’re used for bigger amounts of data that are really timing-critical. They’re mainly used for streaming interfaces such as Audio or Video, where any latency or delay will be immediately noticeable as stuttering or desyncs. In libusb, these work asynchronously: you can set up multiple transfers at once, they will be queued, and you’ll get back an event once data has arrived so you can process it and queue further requests.
This type is generally not used very often outside of the Audio and Video classes.
Besides the Transfer Type, endpoints also have a direction. Keep in mind, USB is a full master-slave oriented interface. The Host is the only one ever making any requests and the Device will never answer unless addressed by the Host. This means, the device cannot actually send any data directly to the Host. Instead the Host needs to ask the Device to please send the data over.
This is what the direction is for.
* IN endpoints are for when the Host wants to receive some data. It makes a request on an IN endpoint and waits for the device to respond back with the data.
* OUT endpoints are for when the Host wants to transmit some data. It makes a request on an OUT endpoint and then immediately transfers the data it wants to send over. The Device in this case only acknowledges (ACK) that it received the data but won’t send any additional data back.
Contrary to the transfer type, the direction is encoded in the endpoint address instead. If the topmost bit (MSB) is set to 1, it’s an IN endpoint, if it’s set to 0 it’s an OUT endpoint. (If you’re into Hardware, you might recognize this same concept from the I2C interface.)
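The address decoding is a one-liner worth spelling out (a Python sketch):

```python
def endpoint_direction(address: int) -> str:
    """The direction lives in the topmost bit (MSB) of the endpoint address:
    1 = IN (device to host), 0 = OUT (host to device)."""
    return "IN" if address & 0x80 else "OUT"

def endpoint_number(address: int) -> int:
    # The remaining 7 bits are the endpoint address itself.
    return address & 0x7F

print(endpoint_direction(0x81), endpoint_number(0x81))  # → IN 1
print(endpoint_direction(0x01), endpoint_number(0x01))  # → OUT 1
```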
* You can have a maximum of 127 custom endpoints available at once
  * because we have 7 bits available for addresses
  * because we always have the control endpoint that’s on the fixed address 0x00.
* Endpoints are entirely unidirectional. Either you’re using an endpoint to request data or to transmit data; it cannot do both at once.
  * That’s also the reason why our Fastboot interface has two Bulk endpoints: one is dedicated to listening to requests the Host sends over and the other one is for responding to those same requests.
Now that we have all this information about USB, let’s look into the Fastboot protocol. The best documentation for this is the u-boot source code together with its documentation.
According to the documentation, the protocol really is incredibly simple. The Host sends a string command and the device responds with a 4 character status code followed by some data.
Let’s update our code to do just that then:
Plugging the device in now, prints the following message to the terminal:
That seems to match the documentation!
The first 4 bytes are OKAY, specifying that the request was executed successfully. The rest of the data after that is 0.4, which corresponds to the implemented Fastboot version in the documentation: v0.4.
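Parsing such a response is trivial; a Python sketch of the framing described above:

```python
def parse_fastboot_response(raw: bytes) -> tuple[str, str]:
    """Split a Fastboot response into its 4-character status code
    (e.g. OKAY or FAIL) and the payload that follows it."""
    status = raw[:4].decode("ascii")
    payload = raw[4:].decode("ascii")
    return status, payload

# The reply from the article: status OKAY, payload "0.4".
print(parse_fastboot_response(b"OKAY0.4"))  # → ('OKAY', '0.4')
```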
And that’s it! You successfully made your first USB driver from scratch without ever touching the Kernel.
All these same principles apply to all USB drivers out there. The underlying protocol may be significantly more complex than the Fastboot protocol (I was pulling my hair out over the atrocity that is the MTP protocol) but everything around it stays identical. Not much more complex than TCP over Sockets, is it? :)
...
Read the original on werwolv.net »
All of the work we do is funded by less than 3% of our users.
We never show advertisements or sell your data. We don’t have corporate funding. We are fully funded by financial contributions from our users.
Thunderbird’s mission is to give you the best privacy-respecting, customizable email experience possible. Free for everyone to install and enjoy! Maintaining expensive servers, fixing bugs, developing new features, and hiring talented engineers are crucial for this mission.
If you get value from using Thunderbird, please help support it. We can’t do this without you.
...
Read the original on updates.thunderbird.net »
Farmers have been fighting John Deere for years over the right to repair their equipment, and this week, they finally reached a landmark settlement.
While the agricultural manufacturing giant pointed out in a statement that this is no admission of wrongdoing, it agreed to pay $99 million into a fund for farms and individuals who participated in a class action lawsuit. Specifically, that money is available to those involved who paid John Deere’s authorized dealers for large equipment repairs from January 2018. This means that plaintiffs will recover somewhere between 26% and 53% of overcharge damages, according to one of the court documents—far beyond the typical amount, which lands between 5% and 15%.
The settlement also includes an agreement by Deere to provide “the digital tools required for the maintenance, diagnosis, and repair” of tractors, combines, and other machinery for 10 years. That part is crucial, as farmers previously resorted to hacking their own equipment’s software just to get it up and running again. John Deere signed a memorandum of understanding in 2023 that partially addressed those concerns, providing third parties with the technology to diagnose and repair, as long as its intellectual property was safeguarded. Monday’s settlement seems to represent a much stronger (and legally binding) step forward.
Ripple effects of this battle have been felt far beyond the sales floors at John Deere dealers, as the price of used equipment skyrocketed in response to the infamous service difficulties. Even when the cost of older tractors doubled, farmers reasoned that they were still worth it because repairs were simpler and downtime was minimized. $60,000 for a 40-year-old machine became the norm.
A judge’s approval of the settlement is still required, though it seems likely. Still, John Deere isn’t out of the woods yet. It still faces another lawsuit from the United States Federal Trade Commission, in which the government organization accuses Deere of harmfully locking down the repair process.
It’s difficult to overstate the significance of this right-to-repair fight. While it has obvious implications for the ag industry, others like the automotive and even home appliance sectors are looking on. Any court ruling that might formally condemn John Deere of wrongdoing may set a precedent for others to follow. At a time when manufacturers want more and more control of their products after the point of sale, every little update feels incredibly high-stakes.
Got a tip or question for the author? Contact them directly: caleb@thedrive.com
...
Read the original on www.thedrive.com »
And that’s not OK. This bug is categorically distinct from hallucinations or missing permission boundaries.
Claude sometimes sends messages to itself and then thinks those messages came from the user. This is the worst bug I’ve seen from an LLM provider, but people always misunderstand what’s happening and blame LLMs, hallucinations, or lack of permission boundaries. Those are related issues, but this ‘who said what’ bug is categorically distinct.
I wrote about this in detail in The worst bug I’ve seen so far in Claude Code, where I showed two examples of Claude giving itself instructions and then believing those instructions came from me.
Claude told itself my typos were intentional and deployed anyway, then insisted I was the one who said it.
It’s not just me
Here’s a Reddit thread where Claude said “Tear down the H100 too”, and then claimed that the user had given that instruction.
From r/Anthropic — Claude gives itself a destructive instruction and blames the user.
“You shouldn’t give it that much access”
Comments on my previous post said things like “It should help you use more discipline in your DevOps.” And on the Reddit thread, many were in the class of “don’t give it nearly this much access to a production environment, especially if there’s data you want to keep.”
This isn’t the point. Yes, of course AI has risks and can behave unpredictably, but after using it for months you get a ‘feel’ for what kind of mistakes it makes, when to watch it more closely, when to give it more permissions or a longer leash.
This class of bug seems to be in the harness, not in the model itself. It’s somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that “No, you said that.”
Before, I thought it was a temporary thing — I saw it a few times in a single day, and then not again for months. But either they have a regression or it was a coincidence and it just pops up every so often, and people only notice when it gives itself permission to do something bad.
This article reached #1 on Hacker News, and it seems that this is definitely a widespread issue. Here’s another super clear example shared by nathell (full transcript).
From nathell — Claude asks itself “Shall I commit this progress?” and treats it as user approval.
Several people questioned whether this is actually a harness bug like I assumed, as people have reported similar issues using other interfaces and models, including chatgpt.com. One pattern does seem to be that it happens in the so-called “Dumb Zone” once a conversation starts approaching the limits of the context window.
...
Read the original on dwyer.co.za »
Astral builds tools that millions of developers around the world depend on and trust.
That trust includes confidence in our security posture: developers reasonably expect that our tools (and the processes that build, test, and release them) are secure. The rise of supply chain attacks, typified by the recent Trivy and LiteLLM hacks, has developers questioning whether they can trust their tools.
To that end, we want to share some of the techniques we use to secure our tools in the hope that they’re useful to:
Our users, who want to understand what we do to keep their systems secure;
Other maintainers, projects, and companies, who may benefit from some of the techniques we use;
Developers of CI/CD systems, so that projects do not need to follow non-obvious paths or avoid useful features to maintain secure and robust processes.
We sustain our development velocity on Ruff, uv, and ty through extensive CI/CD workflows that run on GitHub Actions. Without these workflows we would struggle to review, test, and release our tools at the pace and to the degree of confidence that we demand. Our CI/CD workflows are also a critical part of our security posture, in that they allow us to keep critical development and release processes away from local developer machines and inside of controlled, observable environments.
GitHub Actions is a logical choice for us because of its tight first-party integration with GitHub, along with its mature support for contributor workflows: anybody who wants to contribute can validate that their pull request is correct with the same processes we use ourselves.
Unfortunately, there’s a flipside to this: GitHub Actions has poor security defaults, and security compromises like those of Ultralytics, tj-actions, and Nx all began with well-trodden weaknesses like pwn requests.
Here are some of the things we do to secure our CI/CD processes:
We forbid many of GitHub’s most dangerous and insecure triggers, such as pull_request_target and workflow_run, across our entire GitHub organization. These triggers are almost impossible to use securely, and attackers keep finding ways to abuse them, so we simply don’t allow them.
Our experience with these triggers is that many projects think that they need them, but the overwhelming majority of their usages are better off being replaced with a less privileged trigger (such as pull_request) or removed entirely. For example, many projects use pull_request_target so that third-party contributor-triggered workflows can leave comments on PRs, but these use cases are often well served by job summaries or even just leaving the relevant information in the workflow’s logs.
Of course, some use cases do require these triggers, such as anything that really does need to leave comments on third-party issues or pull requests. In these instances we recommend leaving GitHub Actions entirely and using a GitHub App (or webhook) that listens for the relevant events and acts in an independent context. We cover this pattern in more detail under Automations below.
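As a sketch of the job-summary pattern described above (the workflow name and linter invocation are illustrative, not taken from Astral’s actual workflows):

```yaml
# Runs with the low-privilege pull_request trigger: no secrets from the
# base repository are exposed to code from a third-party pull request.
name: lint-report
on: pull_request

permissions: {}  # the workflow token starts with no permissions at all

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4  # pin to a full-length SHA in real use
      - name: Run linter
        run: ruff check . | tee lint.txt
        continue-on-error: true
      - name: Publish results as a job summary instead of a PR comment
        run: cat lint.txt >> "$GITHUB_STEP_SUMMARY"
```

The job summary appears on the workflow run page, so contributors get the same feedback a bot comment would provide, without the workflow ever holding a token that can write to the repository.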
We require all actions to be pinned to specific commits (rather than tags or branches, which are mutable). Additionally, we cross-check these commits to ensure they match an actual released repository state and are not impostor commits.
We do this in two ways: first with zizmor’s unpinned-uses and impostor-commit audits, and again with GitHub’s own “require actions to be pinned to a full-length commit SHA” policy. The former gives us a quick check that we can run locally (and prevents impostor commits), while the latter is a hard gate on workflow execution that actually ensures that all actions, including nested actions, are fully hash-pinned.
Enabling the latter is a nontrivial endeavor, since it requires indirect action usages (the actions called by the actions we call) to be hash-pinned as well. To achieve this, we coordinated with our downstreams (example) to land hash-pinning across our entire dependency graph.
Together, these checks increase our confidence in the reproducibility and hermeticity of our workflows, which in turn increases our confidence in their security (in the presence of an attacker’s ability to compromise a dependent action).
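A hash-pinned `uses:` reference looks like the following; the SHA shown is a placeholder rather than a real release commit, and the trailing tag comment is the convention that tools like pinact use to keep pins current:

```yaml
steps:
  # Full-length commit SHA instead of a mutable tag; the comment records
  # which tag the SHA is believed to correspond to.
  - uses: actions/checkout@0000000000000000000000000000000000000000 # v5.0.0

  # Running zizmor in CI catches unpinned or impostor references early;
  # invoking it via uvx is one convenient option.
  - name: Audit workflows with zizmor
    run: uvx zizmor .github/workflows/
```

The local zizmor audit and GitHub’s organization-level pinning policy then act as complementary layers, as described above.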
However, while necessary, this isn’t sufficient: hash-pinning ensures that the action’s contents are immutable, but doesn’t prevent those immutable contents from making mutable decisions (such as installing the latest version of a binary from a GitHub repository’s releases). Neither GitHub nor third-party tools are yet good at detecting these kinds of immutability gaps, so we currently rely on manual review of our action dependencies to detect this class of risks.
When manual review does identify gaps, we work with our upstreams to close them. For example, for actions that use native binaries internally, this is achieved by embedding a mapping between the download URL for the binary and a cryptographic hash. This hash in turn becomes part of the action’s immutable state. While this doesn’t ensure that the binary itself is authentic, it does ensure that an attacker cannot effectively tamper with a mutable pointer to the binary (such as a non-immutable tag or release).
We limit our workflow and job permissions in multiple places: we default to read-only permissions at the organization level, and we additionally start every workflow with permissions: {} and only broaden beyond that on a job-by-job basis.
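In workflow terms, that layered default-deny approach looks roughly like this (job names and steps are illustrative):

```yaml
# Top-level default: the workflow's GITHUB_TOKEN has no permissions.
permissions: {}

jobs:
  test:
    runs-on: ubuntu-latest
    # No permissions block: this job inherits the empty default above.
    steps:
      - run: echo "tests need no token permissions"

  report:
    runs-on: ubuntu-latest
    # Broaden only where a job actually needs it, as narrowly as possible.
    permissions:
      issues: write
    steps:
      - run: echo "this job can write to issues, and nothing else"
```

Because the organization-level default is also read-only, a job that forgets to declare permissions fails closed rather than open.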
We isolate our GitHub Actions secrets, wherever possible: instead of using organization- or repository-level secrets, we use deployment environments and environment-specific secrets. This allows us to further limit the blast radius of a potential compromise, as a compromised test or linting job won’t have access to, for example, the secrets needed to publish release artifacts.
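A sketch of environment-scoped secrets, with hypothetical job, script, and secret names:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    # Ordinary jobs see no deployment-environment secrets at all.
    steps:
      - run: cargo test

  publish:
    runs-on: ubuntu-latest
    needs: test
    # Binding the job to the environment is what grants access to its
    # secrets; the environment can also require manual approval first.
    environment: release
    steps:
      - run: ./publish.sh
        env:
          RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
```

A compromised `test` job here simply has nothing to exfiltrate: `secrets.RELEASE_TOKEN` only resolves inside jobs bound to the `release` environment.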
To do these things, we leverage GitHub’s own settings, as well as tools like zizmor (for static analysis) and pinact (for automatic pinning).
Beyond our CI/CD processes, we also take a number of steps to limit both the likelihood and the impact of account and repository compromises within the Astral organization:
We limit the number of accounts with admin- and other highly-privileged roles, with most organization members only having read and write access to the repositories they need to work on. This reduces the number of accounts that an attacker can compromise to gain access to our organization-level controls.
We enforce strong 2FA methods for all members of the Astral organization, beyond GitHub’s default of requiring any 2FA method. In effect, this requires all Astral organization members to have a 2FA method that’s no weaker than TOTP. If and when GitHub allows us to enforce only 2FA methods that are phishing-resistant (such as WebAuthn and Passkeys only), we will do so.
We impose branch protection rules on an org-wide basis: changes to main cannot be force-pushed and must always go through a pull request. We also forbid the creation of particular branch patterns (like advisory-* and internal-*) to prevent premature disclosure of security work.
We impose tag protection rules that prevent release tags from being created until a release deployment succeeds, with the release deployment itself gated on a manual approval by at least one other team member. We also prevent the updating or deletion of tags, making them effectively immutable once created. On top of that we layer a branch restriction: release deployments may only be created against main, preventing an attacker from using an unrelated first-party branch to attempt to bypass our controls.
Finally, we ban repository admins from bypassing all of the above protections. All of our protections are enforced at the organization level, meaning that an attacker who manages to compromise an account that has admin access to a specific repository still won’t be able to disable our controls.
To help others implement these kinds of branch and tag controls, we’re sharing a gist that shows some of the rulesets we use. These rulesets are specific to our GitHub organization and repositories, but you can use them as a starting point for your own policies!
There are certain things that GitHub Actions can do, but can’t do securely, such as leaving comments on third-party issues and pull requests. Most of the time it’s better to just forgo these features, but in some cases they’re a valuable part of our workflows.
In these latter cases, we use astral-sh-bot to safely isolate these tasks outside of GitHub Actions: GitHub sends us the same event data that GitHub Actions would have received (since GitHub Actions consumes the same webhook payloads as GitHub Apps do), but with much more control and much less implicit state.
However, there’s still a catch with GitHub Apps: an app doesn’t eliminate the sensitive credentials needed for an operation, it just moves them into an environment that doesn’t mix code and data as pervasively as GitHub Actions does. For example, an app won’t be susceptible to a template injection attack the way a workflow would be, but it could still contain SQLi, prompt injection, or other weaknesses that allow an attacker to abuse the app’s credentials. Consequently, it’s essential to treat GitHub App development with the same security mindset as any other software development. This also extends to untrusted code: using a GitHub App does not make it safe to run untrusted code, it just makes it harder to do so unexpectedly. If your processes need to run untrusted code, they must use pull_request or another “safe” trigger that doesn’t provide any privileged credentials to third-party pull requests.
With all that said, we’ve found that the GitHub App pattern works well for us, and we recommend it to other maintainers and projects who have similar needs. The main downside to it comes in the form of complexity: it requires developing and hosting a GitHub App, rather than writing a workflow that GitHub orchestrates for you. We’ve found that frameworks like Gidgethub make the development process for GitHub Apps relatively straightforward, but that hosting remains a burden in terms of time and cost.
It’s an unfortunate reality that there still aren’t great GitHub App options for one-person and hobbyist open source projects; it’s our hope that usability enhancements in this space can be led by companies and larger projects that have the resources needed to paper over GitHub Actions’ shortcomings as a platform.
We recommend this tutorial by Mariatta as a good introduction to building GitHub Apps in Python. We also plan to open source astral-sh-bot in the future.
So far, we’ve covered aspects that tie closely to GitHub, as the source host for Astral’s tools. But many of our users install our tools via other mechanisms, such as PyPI, Homebrew, and our Docker images. These distribution channels add another “link” to the metaphorical supply chain, and require discrete consideration:
Where possible, we use Trusted Publishing to publish to registries (like PyPI, crates.io, and npm). This technique eliminates the need for long-lived registry credentials, in turn ameliorating one of the most common sources of package takeover (credential compromise in CI/CD platforms).
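A minimal Trusted Publishing job for PyPI might look like the following (in a real workflow the actions would also be hash-pinned, per the earlier section):

```yaml
jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: release   # keep publishing behind an approval-gated environment
    permissions:
      id-token: write      # mint a short-lived OIDC token; no stored API token
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      # Exchanges the workflow's OIDC identity for a short-lived
      # PyPI publish token scoped to the configured project.
      - uses: pypa/gh-action-pypi-publish@release/v1
```

The registry verifies the workflow’s identity claims (repository, workflow, environment) against the trusted publisher it has on file, so there is no secret to leak or rotate.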
Where possible (currently for our binary and Docker image releases), we generate Sigstore-based attestations. These attestations establish a cryptographically verifiable link between the released artifact and the workflow that produced it, in turn allowing users to verify that their build of uv, Ruff, or ty came from our actual release processes. You can see our recent attestations for uv as an example of this.1
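Generating such an attestation in a workflow is a single step with GitHub’s first-party action (shown here with a tag reference for brevity; pin to a SHA in practice):

```yaml
permissions:
  id-token: write       # needed for Sigstore signing via OIDC
  attestations: write   # needed to store the attestation with GitHub

steps:
  - name: Attest build provenance
    uses: actions/attest-build-provenance@v2
    with:
      subject-path: dist/*
```

Users can then check a downloaded artifact against the recorded workflow identity with GitHub’s `gh attestation verify` CLI.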
We use GitHub’s immutable releases feature to prevent the post-hoc modification of the builds we publish on GitHub. This addresses a common attacker pivoting technique where previously published builds are replaced with malicious builds. A variant of this technique was used in the recent Trivy attack, with the attacker force-pushing over previous tags to introduce compromised versions of the trivy-action and setup-trivy actions.
We do not use caching to improve build times during releases, to prevent an attacker from compromising our builds via a GitHub Actions cache poisoning attack.
To reduce the risk of an attacker publishing a new malicious version of our tools, we use a stack of protections on our release processes:
Our release process is isolated within a dedicated GitHub deployment environment. This means that jobs that don’t run in the release environment (such as tests and linters) don’t have access to our release secrets.
In order to activate the release environment, the activating job must be approved by at least one other privileged member of the Astral organization. This mitigates the risk of a single rogue or compromised account being able to publish a malicious release (or exfiltrate release secrets); the attacker needs to compromise at least two distinct accounts, both with strong 2FA.
In repositories (like uv) where we have a large number of release jobs, we use a distinct release-gate environment to work around the fact that GitHub triggers approvals for every job that uses the release environment. This retains the two-person approval requirement, with one additional hop: a small, minimally-privileged GitHub App mediates the approval from release-gate to release via a deployment protection rule.
Finally, we use a tag protection ruleset to prevent the creation of a release’s tag until the release deployment succeeds. This prevents an attacker from bypassing the normal release process to create a tag and release directly.
For users who install uv via our standalone installer, we enforce the integrity of the installed binaries via checksums embedded directly into the installer’s source code.2
Our release processes also involve “knock-on” changes, like updating our public documentation, version manifests, and the official pre-commit hooks. These are privileged operations that we protect through dedicated bot accounts and fine-grained PATs issued through those accounts.
Going forwards, we’re also looking at adding codesigning with official developer certificates on macOS and Windows.
Last but not least is the question of dependencies. Like almost all modern software, our tools depend on an ecosystem of third-party dependencies (both direct and transitive), each of which is in an implicit position of trust. Here are some of the things we do to measure and mitigate upstream risk:
We use dependency management tools like Dependabot and Renovate to keep our dependencies updated, and to notify us when our dependencies contain known vulnerabilities.
In general, we employ cooldowns in conjunction with the above to avoid updating dependencies immediately after a new release, as this is when temporarily compromised dependencies are most likely to affect us.
Both Dependabot and Renovate support cooldowns, and uv also has built-in support. We’ve found Renovate’s ability to configure cooldowns on a per-group basis to be particularly useful, as it allows us to relax the cooldown requirement for our own (first-party) dependencies while keeping it in place for most third-party dependencies.
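As an illustration, Dependabot’s cooldown support is configured in `.github/dependabot.yml`; the ecosystem and the seven-day window below are example values, not necessarily the ones Astral uses:

```yaml
version: 2
updates:
  - package-ecosystem: "cargo"
    directory: "/"
    schedule:
      interval: "weekly"
    cooldown:
      # Wait this many days after a version is published before
      # proposing it, so short-lived compromised releases are likely
      # to be yanked before an update PR is ever opened.
      default-days: 7
```

Renovate’s equivalent (`minimumReleaseAge`) can additionally be set per package group, which is what enables the first-party/third-party split described above.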
We maintain social connections with many of our upstream dependencies, and we perform both regular and security contributions with them (including fixes to their own CI/CD and release processes). For example, here’s a recent contribution we made to apache/opendal-reqsign to help them ratchet down their CI/CD security.
Separately, we maintain social connections with adjacent projects and working groups in the ecosystem, including the Python Packaging Authority and the Python Security Response Team. These connections have proven invaluable for sharing information, such as when a report against pip also affects uv (or vice versa), or when a security release for CPython will require a release of python-build-standalone.
We’re conservative about adding new dependencies, and we look to eliminate dependencies where practical and minimally disruptive to our users. Over the coming release cycles, we hope to remove some dependencies related to support for rarely used compression schemes, as part of a larger effort to align ourselves with Python packaging standards.
More generally, we’re also conservative about what our dependencies bring in: we try to avoid dependencies that introduce binary blobs, and we carefully review our dependencies’ features to disable functionality that we don’t need or desire.
Finally, we contribute financially (in the form of our OSS Fund) to the sustainability of projects that we depend on or that push the OSS ecosystem as a whole forwards.
Open source security is a hard problem, in part because it’s really many problems (some technical, some social) masquerading as one. We’ve covered many of the techniques we use to tackle this problem, but this post is by no means an exhaustive list. It’s also not a static list: attackers are dynamic participants in the security process, and defenses necessarily evolve in response to their changing techniques.
With that in mind, we’d like to recall some of the points mentioned above that deserve the most attention:
Respect the limits of CI/CD: it’s extremely tempting to do everything in CI/CD, but there are some things that CI/CD (and particularly GitHub Actions) just can’t do securely. For these things, it’s often better to forgo them entirely, or isolate them outside of CI/CD with a GitHub App or similar.
With that said, it’s important to not overcorrect and throw CI/CD away entirely: as mentioned above, CI/CD is a critical part of our security posture and probably yours too! It’s unfortunate that securing GitHub Actions is so difficult, but we consider it worth the effort relative to the velocity and security risks that would come with not using hosted CI/CD at all.
In particular, we strongly recommend using CI/CD for release processes, rather than relying on local developer machines, particularly when those release processes can be secured with misuse- and disclosure-resistant credential schemes like Trusted Publishing.
Isolate and eliminate long-lived credentials: the single most common form of post-compromise spread is the abuse of long-lived credentials. Wherever possible, eliminate these credentials entirely (for example, with Trusted Publishing or other OIDC-based authentication mechanisms).
Where elimination isn’t possible, isolate these credentials to the smallest possible scope: put them in specific deployment environments with additional activation requirements, and only issue credentials with the minimum necessary permissions to accomplish a given task.
Strengthen release processes: if you’re on GitHub, use deployment environments, approvals, tag and branch rulesets, and immutable releases to reduce the degrees of freedom the attacker has in the event of an account takeover or repository compromise.
Maintain awareness of your dependencies: maintaining awareness of the overall health of your dependency tree is critical to understanding your own risk profile. Use both tools and elbow grease to keep your dependencies secure, and to help them keep their own processes and dependencies secure too.
Finally, we’re still evaluating many of the techniques mentioned above, and will almost certainly be tweaking (and strengthening) them over the coming weeks and months as we learn more about their limitations and how they interact with our development processes. That’s to say that this post represents a point in time, not the final word on how we think about security for our open source tools.
...
Read the original on astral.sh »
As I type these words, I worry over the day when I will no longer be commissioned to write them. The day, to be specific, that The American Scholar asks Claude (the moniker for Anthropic’s AI) and not Robert (the name of Max and Roslyn Zaretsky’s son) to create an essay on, say, AI and the future of work.
Not surprisingly, I am not alone in this worry: not many subjects stir greater fear and dread among Americans than the seemingly irresistible rise of AI. According to a recent Pew Research Center survey, 64 percent of the public believes that AI will translate into fewer jobs. Small wonder, then, that only 17 percent of the same respondents expect that AI, even when humanized by names like Claude, will make their future brighter.
Were he alive today, Paul Lafargue would be among that 17 percent, and his voice would be both loud and funny. Born in Cuba in 1842 to parents of mixed race—part Jewish and part Creole—Lafargue was married to Laura Marx, one of Karl Marx’s four daughters. Even before this marriage, though, Lafargue, who had studied medicine in Paris, had thrown over a secure future as a doctor to devote (and pauperize) himself and his family to working on behalf of the shining (and classless) future glimpsed by his father-in-law.
Knocking out polemical and theoretical essays while striving to launch France’s first workers’ party, the Parti ouvrier français, Lafargue was a well-known figure on the radical left in fin-de-siècle Paris. Predictably, his activities also made him well-known to the French police, who repeatedly arrested him, including on one evening in 1883 when he was taking home a salad to his wife. (He managed to find a passerby to deliver the salad before the police hauled him away.)
Making wine from this bunch of grapes, Lafargue used his time behind bars at Saint Pélagie—a forbidding Parisian prison where many of the century’s most notorious writers, artists, and thinkers found themselves from time to time—to draft his most famous work, Le Droit à la paresse, or The Right to Be Lazy, translated into English by Alex Andriesse. Though he dashed off this pamphlet nearly 150 years ago, Lafargue asked questions that remain most pertinent to our current anxieties over the future of work.
During Lafargue’s own lifetime, the nature of work was undergoing a traumatic transformation. The seismic effect of the first and second industrial revolutions, as well as the quickening pace of globalization, proved an extinction event for traditional forms of production. “The gods and kings of the past,” declared the historian Eric Hobsbawm, “were powerless before the businessmen and steam engines of the present.” As factory workers and unskilled laborers replaced ateliers and artisans, the former struggled to organize themselves, a struggle into which Lafargue threw himself body and soul.
Or, perhaps, not his entire soul. His essay’s title reveals a dramatic divergence of goals he and union leaders held. He bemoans the demand of workers for shorter workdays (which often lasted as long as 12 hours), insisting that curtailing work hours did not represent victory but defeat: “Shame on the proletariat, only slaves would have been capable of such baseness” to have sought such an outcome. On the contrary, he declaims, workers should oppose the very notion of work.
If you are puzzled, don’t worry—so, too, were nearly all of Lafargue’s contemporaries on the left. How could they not be? Here was a committed Marxist—and the great man’s son-in-law, to boot—asserting that workers, rather than strike for the right to work, should instead protest for the right to be lazy. Machines, he believed, could become “humanity’s savior, the god who will redeem man from the sordidae artes [manual labor] and give him leisure and liberty.”
And yet, Lafargue exclaims, “the blind passion and perverse murderousness of work have transformed the machine from an instrument of emancipation into an instrument that enslaves free beings.” The reason workers spend so many hours shackled to their machines, he contended, was not from economic necessity. Instead, it was imposed upon them by their superiors, the captains of industry and finance, who were wedded to “the dogma of work and diabolically drilled the vice of work into the heads of workers.”
Of course, Lafargue never called for the eradication of work. The necessities of life, after all, would always require the labor of women and men to produce and provide. But he did press for the rationalization of work. Given the efficiency of machines, fewer hours were needed to provide the necessities of life. Maintaining the same excessive number of work hours inevitably flooded the market with superfluities and fueled the era’s repeated economic crises, stretching from 1873 to the end of the century.
The dramatic reduction of time at work would be a boon not just to the well-being of the economy, Lafargue concluded, but also to the well-being of both workers and owners, who would have more time to … well, to do what?
Karl Marx had an answer of sorts, suggesting that we would “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticism after dinner, just as I have a mind.” But Lafargue instead conjured a Rabelaisian future in which former workers would eat and drink their fill on holidays while their former taskmasters would entertain them by performing parodies of their now defunct roles as generals and industrialists. Et le voilà, Lafargue concludes, in this world turned upside down, “social discord will vanish.”
Though his tongue was firmly in cheek, Lafargue did imagine that these machines—perhaps the forerunners of the “machines of loving grace” invoked by Dario Amodei, the CEO of Anthropic—would lead us to a paradise we had lost. A paradise bathed in otium, the Latin word that can be translated as “idleness” as well as “laziness.” When Lafargue praises la paresse, he means not the latter, but the former. He makes this clear by quoting, at the start of his essay, a line from Virgil’s Eclogues that celebrates the pleasures of otium.
Although Lafargue does not flesh out his notion of a future filled with idleness, my guess is that he meant it would be devoted not to the pleasure of a particular hobby or specific activity, painting a landscape or swinging a golf club. Instead, it would be a life given over, quite simply, to the pleasure of faisant rien, or doing nothing. As the Czech playwright Karel Capek wrote in an essay called “In Praise of Idleness,” this state is defined as “the absence of everything by which a person is occupied, diverted, distracted, interested, employed, annoyed, pleased, attracted, involved, entertained, bored, enchanted, fatigued, absorbed, or confused.” In a word, idling is the sentiment of being.
But even idlers, try as they might, cannot ignore the passage of time. In 1911, a dozen years before Capek published his essay, Paul Lafargue and his wife committed suicide—he was 69; she was 66. His reason, it seems to me, dovetailed with his philosophy: “I am killing myself before pitiless old age, which gradually deprives me one by one of the pleasures and joys of existence.” It might repay us to take a moment, not just from our jobs but also from our leisures, to make some to-do about doing nothing.
...
Read the original on theamericanscholar.org »
Before you read on: Pope Leo XIV has asked Americans to contact their members of Congress and demand an end to the war in Iran. Answer the pope’s call in one click at standwithpopeleo.com, an app we built to make it as easy as possible.
[UPDATE at 4:33 PM EDT: Letters from Leo can now independently confirm The Free Press report that the meeting took place — and that some Vatican officials were so alarmed by the Pentagon’s tactics that they shelved plans for Pope Leo XIV to visit the United States later this year.
Other officials in the Vatican saw the Pentagon’s reference to an Avignon papacy as a threat to use military force against the Holy See.]
In January, behind closed doors at the Pentagon, Under Secretary of War for Policy Elbridge Colby summoned Cardinal Christophe Pierre — Pope Leo XIV’s then-ambassador to the United States — and delivered a lecture.
America, Colby and his colleagues told the cardinal, has the military power to do whatever it wants in the world. The Catholic Church had better take its side.
As tempers rose, an unidentified U.S. official reached for a fourteenth-century weapon and invoked the Avignon Papacy, the period when the French Crown used military force to bend the bishop of Rome to its will.
That scene, broken this week by Mattia Ferraresi in an extraordinary piece of journalism for The Free Press, may be the most remarkable moment in the long and knotted history of the American republic’s relationship with the Catholic Church.
There is no public record of any Vatican official ever taking a meeting at the Pentagon, and certainly none of a senior U.S. official threatening the Vicar of Christ on Earth with the prospect of an American Babylonian Captivity.
The reporting also confirms — with fresh sources and new color — what I first reported in February: that the Vatican declined the Trump-Vance White House’s invitation to host Pope Leo XIV for America’s 250th anniversary in 2026.
Ferraresi obtained accounts from Vatican and U.S. officials briefed on the Pentagon meeting. According to his sources, Colby’s team picked apart the pope’s January state-of-the-world address line by line and read it as a hostile message aimed directly at the administration.
What enraged them most was Leo’s declaration that “a diplomacy that promotes dialogue and seeks consensus among all parties is being replaced by a diplomacy based on force.”
The Pentagon read that sentence as a frontal challenge to the so-called “Donroe Doctrine” — Trump’s update of Monroe, asserting unchallenged American dominion over the Western Hemisphere.
The cardinal sat through the lecture in silence. The Holy See has not, since that day, given an inch.
Ferraresi’s reporting also adds vital color to the collapse of the 250th anniversary visit. JD Vance personally extended the invitation in May 2025, just two weeks after Leo’s election in the conclave.
According to a senior Vatican official quoted in the piece, the Holy See initially considered the request, then postponed it indefinitely because of foreign policy disagreements, the rising opposition of American bishops to the Trump-Vance mass deportation regime, and a refusal to become a partisan trophy in the 2026 midterms.
“The administration tried every possible way to have the Pope in the U.S. in 2026,” one Vatican official told The Free Press.
Instead, on July 4, 2026, the first American pope will travel to Lampedusa, the Italian island where North African migrants wash ashore by the thousands. Robert Francis Prevost is too deliberate a man to have chosen that date by accident.
The Pentagon meeting also clarifies the moral intensity of Leo’s public posture over the last six weeks.
After Colby’s lecture, the pope did not retreat into Vatican diplomacy. He pressed harder.
...
Read the original on www.thelettersfromleo.com »