10 interesting stories served every morning and every evening.
Every time an application on your computer opens a network connection, it does so quietly, without asking. Little Snitch for Linux makes that activity visible and gives you the option to do something about it. You can see exactly which applications are talking to which servers, block the ones you didn’t invite, and keep an eye on traffic history and data volumes over time.
Once installed, open the user interface by running littlesnitch in a terminal, or go straight to http://localhost:3031/. You can bookmark that URL, or install it as a Progressive Web App. Any Chromium-based browser supports this natively, and Firefox users can do the same with the Progressive Web Apps extension.
The connections view is where most of the action is. It lists current and past network activity by application, shows you what’s being blocked by your rules and blocklists, and tracks data volumes and traffic history. Sorting by last activity, data volume, or name, and filtering the list to what’s relevant, makes it easy to spot anything unexpected. Blocking a connection takes a single click.
The traffic diagram at the bottom shows data volume over time. You can drag to select a time range, which zooms in and filters the connection list to show only activity from that period.
Blocklists let you cut off whole categories of unwanted traffic at once. Little Snitch downloads them from remote sources and keeps them current automatically. It accepts lists in several common formats: one domain per line, one hostname per line, /etc/hosts style (IP address followed by hostname), and CIDR network ranges. Wildcard formats, regex or glob patterns, and URL-based formats are not supported. When you have a choice, prefer domain-based lists over host-based ones; they're handled more efficiently. Well-known sources include Hagezi, Peter Lowe, Steven Black, and oisd.nl, to give you a starting point.
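For illustration, entries in the supported formats might look like this (the domains are placeholders):

```
# one domain or hostname per line
ads.example.com
tracker.example.net

# /etc/hosts style: IP address followed by hostname
0.0.0.0 telemetry.example.org

# CIDR network range
203.0.113.0/24
```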
One thing to be aware of: the .lsrules format from Little Snitch on macOS is not compatible with the Linux version.
Blocklists work at the domain level, but rules let you go further. A rule can target a specific process, match particular ports or protocols, and be as broad or narrow as you need. The rules view lets you sort and filter them so you can stay on top of things as the list grows.
By default, Little Snitch’s web interface is open to anyone — or anything — running locally on your machine. A misbehaving or malicious application could, in principle, add and remove rules, tamper with blocklists, or turn the filter off entirely.
If that concerns you, Little Snitch can be configured to require authentication. See the Advanced configuration section below for details.
Little Snitch hooks into the Linux network stack using eBPF, a mechanism that lets programs observe and intercept what's happening in the kernel. An eBPF program watches outgoing connections and feeds data to a daemon, which tracks statistics, enforces your rules, and serves the web UI.
The source code for the eBPF program and the web UI is on GitHub.
The UI deliberately exposes only the most common settings. Anything more technical can be configured through plain text files, which take effect after restarting the littlesnitch daemon.
The default configuration lives in /var/lib/littlesnitch/config/. Don’t edit those files directly — copy whichever one you want to change into /var/lib/littlesnitch/overrides/config/ and edit it there. Little Snitch will always prefer the override.
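The copy-then-edit workflow can be sketched as follows, here demonstrated in temporary stand-in directories so it runs anywhere; on a real system the paths are the two directories above (writing there needs root), and the `port` key is a hypothetical stand-in for whatever setting you actually change:

```shell
# Stand-in directories mimicking the real layout.
base=$(mktemp -d)
config="$base/config"
overrides="$base/overrides/config"
mkdir -p "$config" "$overrides"
printf 'port = 3031\n' > "$config/web_ui.toml"   # hypothetical setting

# Copy the stock file into overrides and edit the copy, never the original:
cp "$config/web_ui.toml" "$overrides/web_ui.toml"
printf 'port = 8443\n' > "$overrides/web_ui.toml"

# Little Snitch prefers the override; restart the daemon to apply it.
cat "$overrides/web_ui.toml"
```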
The files you’re most likely to care about:
web_ui.toml — network address, port, TLS, and authentication. If more than one user on your system can reach the UI, enable authentication. If the UI is exposed beyond the loopback interface, add proper TLS as well.
main.toml — what to do when a connection matches nothing. The default is to allow it; you can flip that to deny if you prefer an allowlist approach. But be careful! It’s easy to lock yourself out of the computer!
executables.toml — a set of heuristics for grouping applications sensibly. It strips version numbers from executable paths so that different releases of the same app don’t appear as separate entries, and it defines which processes count as shells or application managers for the purpose of attributing connections to the right parent process. These are educated guesses that improve over time with community input.
Both the eBPF program and the web UI can be swapped out for your own builds if you want to go that far. Source code for both is on GitHub. Again, Little Snitch prefers the version in overrides.
Little Snitch for Linux is built for privacy, not security, and that distinction matters. The macOS version can make stronger guarantees because its filter can afford far more complexity. On Linux, the foundation is eBPF, which is powerful but bounded: it has strict limits on storage size and program complexity. Under heavy traffic, cache tables can overflow, which makes it impossible to reliably tie every network packet to a process or a DNS name. And reconstructing which hostname was originally looked up for a given IP address requires heuristics rather than certainty. The macOS version uses deep packet inspection to do this more reliably. That's not an option here.
For keeping tabs on what your software is up to and blocking legitimate software from phoning home, Little Snitch for Linux works well. For hardening a system against a determined adversary, it’s not the right tool.
Little Snitch for Linux has three components. The eBPF kernel program and the web UI are both released under the GNU General Public License version 2 and available on GitHub. The daemon (littlesnitch --daemon) is proprietary, but free to use and redistribute.
...
Read the original on obdev.at »
After almost twenty years on the platform, EFF is logging off of X. This isn’t a decision we made lightly, but it might be overdue. The math hasn’t worked out for a while now.
We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago.
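The per-post arithmetic can be checked in a few lines; the midpoints of the quoted ranges are my assumption:

```python
# Midpoints of the figures quoted above (assumed, since ranges are given).
posts_per_day_2018 = 7.5            # "five to ten times a day"
impressions_per_month_2018 = 75e6   # "between 50 and 100 million" per month

per_post_2018 = (impressions_per_month_2018 * 12) / (posts_per_day_2018 * 365)
per_post_2025 = 13e6 / 1500         # "1,500 posts earned roughly 13 million"

ratio = per_post_2025 / per_post_2018
print(f"{per_post_2018:,.0f} vs {per_post_2025:,.0f} per post -> {ratio:.1%}")
# → 328,767 vs 8,667 per post -> 2.6%
```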
When Elon Musk acquired Twitter in October 2022, EFF was clear about what needed fixing.
* Greater user control: Giving users and third-party developers the means to control the user experience through filters and
Twitter was never a utopia. We’ve criticized the platform for about as long as it’s been around. Still, Twitter did deserve recognition from time to time for vociferously fighting for its users’ rights. That changed. Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes. Many users left. Today we’re joining them.
Yes. And we understand why that looks contradictory. Let us explain.
EFF exists to protect people’s digital rights. Not just the people who already value our work, have opted out of surveillance, or have already migrated to the fediverse. The people who need us most are often the ones most embedded in the walled gardens of the mainstream platforms and subjected to their corporate surveillance.
Young people, people of color, queer folks, activists, and organizers use Instagram, TikTok, and Facebook every day. These platforms host mutual aid networks and serve as hubs for political organizing, cultural expression, and community care. Just deleting the apps isn’t always a realistic or accessible option, and neither is pushing every user to the fediverse when there are circumstances like:
* You own a small business that depends on Instagram for customers.
* Your abortion fund uses TikTok to spread crucial information.
* You’re isolated and rely on online spaces to connect with your community.
Our presence on Facebook, Instagram, YouTube, and TikTok is not an endorsement. We’ve spent years exposing how these platforms suppress marginalized voices, enable invasive behavioral advertising, and flag posts about abortion as dangerous. We’ve also taken action in court, in legislatures, and through direct engagement with their staff to push them to change poor policies and practices.
We stay because the people on those platforms deserve access to information, too. We stay because some of our most-read posts are the ones criticizing the very platform we’re posting on. We stay because the fewer steps between you and the resources you need to protect yourself, the better.
When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.
EFF takes on big fights, and we win. We do that by putting our time, skills, and our members’ support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we’re here to help you take back control.
...
Read the original on www.eff.org »
All of the work we do is funded by less than 3% of our users.
We never show advertisements or sell your data. We don’t have corporate funding. We are fully funded by financial contributions from our users.
Thunderbird’s mission is to give you the best privacy-respecting, customizable email experience possible. Free for everyone to install and enjoy! Maintaining expensive servers, fixing bugs, developing new features, and hiring talented engineers are crucial for this mission.
If you get value from using Thunderbird, please help support it. We can’t do this without you.
...
Read the original on updates.thunderbird.net »
Before you read on: Pope Leo XIV has asked Americans to contact their members of Congress and demand an end to the war in Iran. Answer the pope’s call in one click at standwithpopeleo.com, an app we built to make it as easy as possible.
[UPDATE at 4:33 PM EDT: Letters from Leo can now independently confirm The Free Press report that the meeting took place — and that some Vatican officials were so alarmed by the Pentagon’s tactics that they shelved plans for Pope Leo XIV to visit the United States later this year.
Other officials in the Vatican saw the Pentagon’s reference to an Avignon papacy as a threat to use military force against the Holy See.]
In January, behind closed doors at the Pentagon, Under Secretary of War for Policy Elbridge Colby summoned Cardinal Christophe Pierre — Pope Leo XIV’s then-ambassador to the United States — and delivered a lecture.
America, Colby and his colleagues told the cardinal, has the military power to do whatever it wants in the world. The Catholic Church had better take its side.
As tempers rose, an unidentified U.S. official reached for a fourteenth-century weapon and invoked the Avignon Papacy, the period when the French Crown used military force to bend the bishop of Rome to its will.
That scene, broken this week by Mattia Ferraresi in an extraordinary piece of journalism for The Free Press, may be the most remarkable moment in the long and knotted history of the American republic’s relationship with the Catholic Church.
There is no public record of any Vatican official ever taking a meeting at the Pentagon, and certainly none of a senior U.S. official threatening the Vicar of Christ on Earth with the prospect of an American Babylonian Captivity.
The reporting also confirms — with fresh sources and new color — what I first reported in February: that the Vatican declined the Trump-Vance White House’s invitation to host Pope Leo XIV for America’s 250th anniversary in 2026.
Ferraresi obtained accounts from Vatican and U.S. officials briefed on the Pentagon meeting. According to his sources, Colby's team picked apart the pope's January state-of-the-world address line by line and read it as a hostile message aimed directly at the administration.
What enraged them most was Leo’s declaration that “a diplomacy that promotes dialogue and seeks consensus among all parties is being replaced by a diplomacy based on force.”
The Pentagon read that sentence as a frontal challenge to the so-called “Donroe Doctrine” — Trump’s update of Monroe, asserting unchallenged American dominion over the Western Hemisphere.
The cardinal sat through the lecture in silence. The Holy See has not, since that day, given an inch.
Ferraresi’s reporting also adds vital color to the collapse of the 250th anniversary visit. JD Vance personally extended the invitation in May 2025, just two weeks after Leo’s election in the conclave.
According to a senior Vatican official quoted in the piece, the Holy See initially considered the request, then postponed it indefinitely because of foreign policy disagreements, the rising opposition of American bishops to the Trump-Vance mass deportation regime, and a refusal to become a partisan trophy in the 2026 midterms.
“The administration tried every possible way to have the Pope in the U.S. in 2026,” one Vatican official told The Free Press.
Instead, on July 4, 2026, the first American pope will travel to Lampedusa, the Italian island where North African migrants wash ashore by the thousands. Robert Francis Prevost is too deliberate a man to have chosen that date by accident.
The Pentagon meeting also clarifies the moral intensity of Leo’s public posture over the last six weeks.
After Colby’s lecture, the pope did not retreat into Vatican diplomacy. He pressed harder.
...
Read the original on www.thelettersfromleo.com »
And that’s not OK. This bug is categorically distinct from hallucinations or missing permission boundaries.
Claude sometimes sends messages to itself and then thinks those messages came from the user. This is the worst bug I’ve seen from an LLM provider, but people always misunderstand what’s happening and blame LLMs, hallucinations, or lack of permission boundaries. Those are related issues, but this ‘who said what’ bug is categorically distinct.
I wrote about this in detail in The worst bug I’ve seen so far in Claude Code, where I showed two examples of Claude giving itself instructions and then believing those instructions came from me.
Claude told itself my typos were intentional and deployed anyway, then insisted I was the one who said it.
It’s not just me
Here’s a Reddit thread where Claude said “Tear down the H100 too”, and then claimed that the user had given that instruction.
From r/Anthropic — Claude gives itself a destructive instruction and blames the user.
“You shouldn’t give it that much access”
Comments on my previous post were things like “It should help you use more discipline in your DevOps.” And on the Reddit thread, many in the class of “don’t give it nearly this much access to a production environment, especially if there’s data you want to keep.”
This isn’t the point. Yes, of course AI has risks and can behave unpredictably, but after using it for months you get a ‘feel’ for what kind of mistakes it makes, when to watch it more closely, when to give it more permissions or a longer leash.
This class of bug seems to be in the harness, not in the model itself. It’s somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that “No, you said that.”
Before, I thought it was a temporary thing — I saw it a few times in a single day, and then not again for months. But either they have a regression or it was a coincidence and it just pops up every so often, and people only notice when it gives itself permission to do something bad.
This article reached #1 on Hacker News, and it seems this is indeed a widespread issue. Here's another super clear example shared by nathell (full transcript).
From nathell — Claude asks itself “Shall I commit this progress?” and treats it as user approval.
Several people questioned whether this is actually a harness bug like I assumed, as people have reported similar issues using other interfaces and models, including chatgpt.com. One pattern does seem to be that it happens in the so-called “Dumb Zone” once a conversation starts approaching the limits of the context window.
...
Read the original on dwyer.co.za »
Farmers have been fighting John Deere for years over the right to repair their equipment, and this week, they finally reached a landmark settlement.
While the agricultural manufacturing giant pointed out in a statement that this is no admission of wrongdoing, it agreed to pay $99 million into a fund for farms and individuals who participated in a class action lawsuit. Specifically, that money is available to those involved who paid John Deere's authorized dealers for large equipment repairs since January 2018. This means that plaintiffs will recover somewhere between 26% and 53% of overcharge damages, according to one of the court documents, far beyond the typical recovery of 5% to 15%.
The settlement also includes an agreement by Deere to provide “the digital tools required for the maintenance, diagnosis, and repair” of tractors, combines, and other machinery for 10 years. That part is crucial, as farmers previously resorted to hacking their own equipment’s software just to get it up and running again. John Deere signed a memorandum of understanding in 2023 that partially addressed those concerns, providing third parties with the technology to diagnose and repair, as long as its intellectual property was safeguarded. Monday’s settlement seems to represent a much stronger (and legally binding) step forward.
Ripple effects of this battle have been felt far beyond the sales floors at John Deere dealers, as the price of used equipment skyrocketed in response to the infamous service difficulties. Even when the cost of older tractors doubled, farmers reasoned that they were still worth it because repairs were simpler and downtime was minimized. $60,000 for a 40-year-old machine became the norm.
A judge’s approval of the settlement is still required, though it seems likely. Still, John Deere isn’t out of the woods yet. It still faces another lawsuit from the United States Federal Trade Commission, in which the government organization accuses Deere of harmfully locking down the repair process.
It's difficult to overstate the significance of this right-to-repair fight. While it has obvious implications for the ag industry, others like the automotive and even home appliance sectors are looking on. Any court ruling that formally finds John Deere liable for wrongdoing could set a precedent for others to follow. At a time when manufacturers want more and more control of their products after the point of sale, every little update feels incredibly high-stakes.
Got a tip or question for the author? Contact them directly: caleb@thedrive.com
...
Read the original on www.thedrive.com »
Astral builds tools that millions of developers around the world depend on and trust.
That trust includes confidence in our security posture: developers reasonably expect that our tools (and the processes that build, test, and release them) are secure. The rise of supply chain attacks, typified by the recent Trivy and LiteLLM hacks, has developers questioning whether they can trust their tools.
To that end, we want to share some of the techniques we use to secure our tools in the hope that they’re useful to:
Our users, who want to understand what we do to keep their systems secure;
Other maintainers, projects, and companies, who may benefit from some of the techniques we use;
Developers of CI/CD systems, so that projects do not need to follow non-obvious paths or avoid useful features to maintain secure and robust processes.
We sustain our development velocity on Ruff, uv, and ty through extensive CI/CD workflows that run on GitHub Actions. Without these workflows we would struggle to review, test, and release our tools at the pace and to the degree of confidence that we demand. Our CI/CD workflows are also a critical part of our security posture, in that they allow us to keep critical development and release processes away from local developer machines and inside of controlled, observable environments.
GitHub Actions is a logical choice for us because of its tight first-party integration with GitHub, along with its mature support for contributor workflows: anybody who wants to contribute can validate that their pull request is correct with the same processes we use ourselves.
Unfortunately, there’s a flipside to this: GitHub Actions has poor security defaults, and security compromises like those of Ultralytics, tj-actions, and Nx all began with well-trodden weaknesses like pwn requests.
Here are some of the things we do to secure our CI/CD processes:
We forbid many of GitHub’s most dangerous and insecure triggers, such as pull_request_target and
workflow_run, across our entire GitHub organization. These triggers are almost impossible to use securely and attackers keep finding ways to abuse them, so we simply don’t allow them.
Our experience with these triggers is that many projects think that they need them, but the overwhelming majority of their usages are better off being replaced with a less privileged trigger (such as pull_request) or removed entirely. For example, many projects use pull_request_target
so that third-party contributor-triggered workflows can leave comments on PRs, but these use cases are often well served by job summaries or even just leaving the relevant information in the workflow’s logs.
Of course, there are some use cases that do require these triggers, such as anything that does
really need to leave comments on third-party issues or pull requests. In these instances we recommend leaving GitHub Actions entirely and using a GitHub App (or webhook) that listens for the relevant events and acts in an independent context. We cover this pattern in more detail under
Automations below.
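For the common case, a contributor-facing workflow can be sketched like this, using the unprivileged pull_request trigger and a job summary in place of a PR comment (workflow, step, and script names are illustrative):

```yaml
name: ci
on: pull_request            # unprivileged: fork PRs get no secrets

permissions: {}              # start from zero; broaden per job

jobs:
  lint:
    runs-on: ubuntu-latest
    permissions:
      contents: read         # just enough to check out the code
    steps:
      - uses: actions/checkout@v4   # in practice, pin to a full commit SHA
      - run: ./scripts/lint.sh      # illustrative lint step
      - name: Summarize
        # Job summaries cover most "comment on the PR" use cases.
        run: echo "Lint passed for $GITHUB_SHA" >> "$GITHUB_STEP_SUMMARY"
```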
We require all actions to be pinned to specific commits (rather than tags or branches, which are mutable). Additionally, we cross-check these commits to ensure they match an actual released repository state and are not impostor commits.
We do this in two ways: first with zizmor’s unpinned-uses and impostor-commit audits, and again with GitHub’s own “require actions to be pinned to a full-length commit SHA” policy. The former gives us a quick check that we can run locally (and prevents impostor commits), while the latter is a hard gate on workflow execution that actually ensures that all actions, including nested actions, are fully hash-pinned.
Enabling the latter is a nontrivial endeavor, since it requires indirect action usages (the actions called by the actions we call) to be hash-pinned as well. To achieve this, we coordinated with our downstreams (example) to land hash-pinning across our entire dependency graph.
Together, these checks increase our confidence in the reproducibility and hermeticity of our workflows, which in turn increases our confidence in their security (in the presence of an attacker’s ability to compromise a dependent action).
However, while necessary, this isn’t sufficient: hash-pinning ensures that the action’s
contents are immutable, but doesn’t prevent those immutable contents from making mutable decisions (such as installing the latest version of a binary from a GitHub repository’s releases). Neither GitHub nor third-party tools perform well at detecting these kinds of immutability gaps yet, so we currently rely on manual review of our action dependencies to detect this class of risks.
When manual review does identify gaps, we work with our upstreams to close them. For example, for actions that use native binaries internally, this is achieved by embedding a mapping between the download URL for the binary and a cryptographic hash. This hash in turn becomes part of the action’s immutable state. While this doesn’t ensure that the binary itself is authentic, it does ensure that an attacker cannot effectively tamper with a mutable pointer to the binary (such as a non-immutable tag or release).
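In workflow terms, a fully hash-pinned action reference looks like this (the SHA below is an obviously fake placeholder, not a real release commit):

```yaml
steps:
  # The trailing comment records the tag the SHA corresponds to;
  # tools like pinact keep the SHA and the comment in sync on updates.
  - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # vX.Y.Z
```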
We limit our workflow and job permissions in multiple places: we default to read-only permissions at the organization level, and we additionally start every workflow with permissions: {} and only broaden beyond that on a job-by-job basis.
We isolate our GitHub Actions secrets, wherever possible: instead of using organization- or repository-level secrets, we use deployment environments and environment-specific secrets. This allows us to further limit the blast radius of a potential compromise, as a compromised test or linting job won’t have access to, for example, the secrets needed to publish release artifacts.
To do these things, we leverage GitHub’s own settings, as well as tools like zizmor (for static analysis) and pinact (for automatic pinning).
Beyond our CI/CD processes, we also take a number of steps to limit both the likelihood and the impact of account and repository compromises within the Astral organization:
We limit the number of accounts with admin- and other highly-privileged roles, with most organization members only having read and write access to the repositories they need to work on. This reduces the number of accounts that an attacker can compromise to gain access to our organization-level controls.
We enforce strong 2FA methods for all members of the Astral organization, beyond GitHub’s default of requiring any 2FA method. In effect, this requires all Astral organization members to have a 2FA method that’s no weaker than TOTP. If and when GitHub allows us to enforce only 2FA methods that are phishing-resistant (such as WebAuthn and Passkeys only), we will do so.
We impose branch protection rules on an org-wide basis: changes to main cannot be force-pushed and must always go through a pull request. We also forbid the creation of particular branch patterns (like advisory-* and internal-*) to prevent premature disclosure of security work.
We impose tag protection rules that prevent release tags from being created until a
release deployment succeeds, with the release deployment itself being gated on a manual approval by at least one other team member. We also prevent the updating or deletion of tags, making them effectively immutable once created. On top of that we layer a branch restriction: release deployments may only be created against main, preventing an attacker from using an unrelated first-party branch to attempt to bypass our controls.
Finally, we ban repository admins from bypassing all of the above protections. All of our protections are enforced at the organization level, meaning that an attacker who manages to compromise an account that has admin access to a specific repository still won’t be able to disable our controls.
To help others implement these kinds of branch and tag controls, we’re sharing a gist that shows some of the rulesets we use. These rulesets are specific to our GitHub organization and repositories, but you can use them as a starting point for your own policies!
There are certain things that GitHub Actions can do, but can’t do securely, such as leaving comments on third-party issues and pull requests. Most of the time it’s better to just forgo these features, but in some cases they’re a valuable part of our workflows.
In these latter cases, we use astral-sh-bot to safely isolate these tasks outside of GitHub Actions: GitHub sends us the same event data that GitHub Actions would have received (since GitHub Actions consumes the same webhook payloads as GitHub Apps do), but with much more control and much less implicit state.
However, there’s still a catch with GitHub Apps: an app doesn’t eliminate any sensitive credentials needed for an operation, it just moves them into an environment that doesn’t mix code and data as pervasively as GitHub Actions does. For example, an app won’t be susceptible to a
template injection attack like a workflow would be, but could still contain SQLi, prompt injection, or other weaknesses that allow an attacker to abuse the app’s credentials. Consequently, it’s essential to treat GitHub App development with the same security mindset as any other software development. This also extends to untrusted code: using a GitHub App does not make it safe to run untrusted code, it just makes it harder to do so unexpectedly. If your processes need
to run untrusted code, they must use pull_request or another “safe” trigger that doesn’t provide any privileged credentials to third-party pull requests.
With all that said, we’ve found that the GitHub App pattern works well for us, and we recommend it to other maintainers and projects who have similar needs. The main downside to it comes in the form of complexity: it requires developing and hosting a GitHub App, rather than writing a workflow that GitHub orchestrates for you. We’ve found that frameworks like Gidgethub make the development process for GitHub Apps relatively straightforward, but that hosting remains a burden in terms of time and cost.
It’s an unfortunate reality that there still aren’t great GitHub App options for one-person and hobbyist open source projects; it’s our hope that usability enhancements in this space can be led by companies and larger projects that have the resources needed to paper over GitHub Actions’ shortcomings as a platform.
We recommend this tutorial by Mariatta as a good introduction to building GitHub Apps in Python. We also plan to open source astral-sh-bot in the future.
So far, we’ve covered aspects that tie closely to GitHub, as the source host for Astral’s tools. But many of our users install our tools via other mechanisms, such as PyPI, Homebrew, and our
Docker images. These distribution channels add another “link” to the metaphorical supply chain, and require discrete consideration:
Where possible, we use Trusted Publishing to publish to registries (like PyPI, crates.io, and
NPM). This technique eliminates the need for long-lived registry credentials, in turn ameliorating one of the most common sources of package takeover (credential compromise in CI/CD platforms).
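For PyPI, a Trusted Publishing job can be sketched as follows; the environment name and artifact layout are assumptions, and the key point is the OIDC id-token permission standing in for a long-lived API token:

```yaml
jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: release        # environment-scoped, approval-gated
    permissions:
      id-token: write           # mint a short-lived OIDC token for PyPI
    steps:
      - uses: actions/download-artifact@v4   # in practice, hash-pinned
        with:
          name: dist
          path: dist/
      - uses: pypa/gh-action-pypi-publish@release/v1  # in practice, hash-pinned
```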
Where possible (currently our binary and Docker images releases), we generate Sigstore-based attestations. These attestations establish a cryptographically verifiable link between the released artifact and the workflow that produced it, in turn allowing users to verify that their build of uv, Ruff, or ty came from our actual release processes. You can see our recent
attestations for uv as an example of this.1
We use GitHub’s immutable releases feature to prevent the post-hoc modification of the builds we publish on GitHub. This addresses a common attacker pivoting technique where previously published builds are replaced with malicious builds. A variant of this technique was used in the recent Trivy attack, with the attacker force-pushing over previous tags to introduce compromised versions of the trivy-action and setup-trivy actions.
We do not use caching to improve build times during releases, to prevent an attacker from compromising our builds via a GitHub Actions cache poisoning attack.
To reduce the risk of an attacker publishing a new malicious version of our tools, we use a stack of protections on our release processes:
Our release process is isolated within a dedicated GitHub deployment environment. This means that jobs that don’t run in the release environment (such as tests and linters) don’t have access to our release secrets.
In order to activate the release environment, the activating job must be approved by at least one other privileged member of the Astral organization. This mitigates the risk of a single rogue or compromised account being able to publish a malicious release (or exfiltrate release secrets); the attacker needs to compromise at least two distinct accounts, both with strong 2FA.
In repositories (like uv) where we have a large number of release jobs, we use a distinct
release-gate environment to work around the fact that GitHub triggers approvals for every job that uses the release environment. This retains the two-person approval requirement, with one additional hop: a small, minimally-privileged GitHub App mediates the approval from
release-gate to release via a deployment protection rule.
Finally, we use a tag protection ruleset to prevent the creation of a release’s tag until the release deployment succeeds. This prevents an attacker from bypassing the normal release process to create a tag and release directly.
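The mediation step can be sketched roughly as follows. This is a hypothetical illustration, not the actual app: GitHub's `deployment_protection_rule` webhook delivers a callback URL for the gated run, and the mediating app approves the environment by POSTing back to it. The payload fields (`environment`, `deployment_callback_url`) and the callback body shape follow GitHub's documentation for custom deployment protection rules; the example values are made up.

```python
def build_approval(payload: dict, comment: str) -> tuple[str, dict]:
    """Given a `deployment_protection_rule` webhook payload, return the
    (url, body) pair for the REST callback that approves the environment."""
    # The webhook delivers the callback URL for this specific workflow run.
    url = payload["deployment_callback_url"]
    body = {
        "environment_name": payload["environment"],
        "state": "approved",  # or "rejected"
        "comment": comment,
    }
    return url, body

# Abridged example payload, with illustrative repo and run id:
example = {
    "environment": "release",
    "deployment_callback_url": (
        "https://api.github.com/repos/OWNER/REPO"
        "/actions/runs/123/deployment_protection_rule"
    ),
}
url, body = build_approval(example, "approved via release-gate")
```

The actual POST would be authenticated as the GitHub App installation; only the request construction is shown here.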
* For users who install uv via our standalone installer, we enforce the integrity of the installed
binaries via checksums embedded directly into the installer’s source code2.
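A minimal sketch of that embedded-checksum idea, with illustrative filenames and hashes rather than the actual installer code:

```python
import hashlib
import hmac

# Checksums pinned at installer build time; in a real installer these
# would be literal hex digests embedded in the generated script.
PINNED_SHA256 = {
    "uv-x86_64-unknown-linux-gnu.tar.gz":
        hashlib.sha256(b"example artifact bytes").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Compare a downloaded artifact against its embedded checksum."""
    expected = PINNED_SHA256.get(name)
    if expected is None:
        return False  # unknown artifact: refuse to install
    actual = hashlib.sha256(data).hexdigest()
    # Constant-time comparison is cheap insurance here.
    return hmac.compare_digest(actual, expected)
```

Because the expected hash ships inside the installer itself, an attacker who replaces only the hosted artifact cannot pass verification.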
Our release processes also involve “knock-on” changes, like updating our public documentation, version manifests, and the official pre-commit hooks. These are privileged operations that we protect through dedicated bot accounts and fine-grained PATs issued through those accounts.
Going forwards, we’re also looking at adding codesigning with official developer certificates on macOS and Windows.
Last but not least is the question of dependencies. Like almost all modern software, our tools depend on an ecosystem of third-party dependencies (both direct and transitive), each of which is in an implicit position of trust. Here are some of the things we do to measure and mitigate upstream risk:
We use dependency management tools like Dependabot and Renovate to keep our dependencies updated, and to notify us when our dependencies contain known vulnerabilities.
In general, we employ cooldowns in conjunction with the above to avoid updating dependencies immediately after a new release, as this is when temporarily compromised dependencies are most likely to affect us.
Both Dependabot and Renovate support cooldowns, and uv also has built-in support. We’ve found Renovate’s ability to configure cooldowns on a per-group basis to be particularly useful, as it allows us to relax the cooldown requirement for our own (first-party) dependencies while keeping it in place for most third-party dependencies.
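A cooldown check of this kind can be sketched as follows; the 14-day default, the package names, and the per-package override are illustrative, not the actual Renovate configuration:

```python
from datetime import datetime, timedelta, timezone

# Default cooldown before a newly published version is eligible.
DEFAULT_COOLDOWN = timedelta(days=14)

# Per-package overrides, e.g. relaxing the cooldown for trusted
# first-party packages (names illustrative).
OVERRIDES = {"ruff": timedelta(days=0)}

def passes_cooldown(package, published, now=None):
    """True if the release is old enough to be picked up."""
    now = now or datetime.now(timezone.utc)
    cooldown = OVERRIDES.get(package, DEFAULT_COOLDOWN)
    return now - published >= cooldown
```

The point of the window is simply that compromised releases are usually detected and yanked within days, so waiting out the window sidesteps most short-lived attacks.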
We maintain social connections with many of our upstream dependencies, and we make both regular and security-focused contributions to them (including fixes to their own CI/CD and release processes). For example, here’s a recent contribution we made to apache/opendal-reqsign to help them ratchet down their CI/CD security.
Separately, we maintain social connections with adjacent projects and working groups in the ecosystem, including the Python Packaging Authority and the Python Security Response Team. These connections have proven invaluable for sharing information, such as when a report against pip also affects uv (or vice versa), or when a security release for CPython will require a release of python-build-standalone.
We’re conservative about adding new dependencies, and we look to eliminate dependencies where practical and minimally disruptive to our users. Over the coming release cycles, we hope to remove some dependencies related to support for rarely used compression schemes, as part of a larger effort to align ourselves with Python packaging standards.
More generally, we’re also conservative about what our dependencies bring in: we try to avoid dependencies that introduce binary blobs, and we carefully review our dependencies’ features to disable functionality that we don’t need or desire.
Finally, we contribute financially (in the form of our OSS Fund) to the sustainability of projects that we depend on or that push the OSS ecosystem as a whole forwards.
Open source security is a hard problem, in part because it’s really many problems (some technical, some social) masquerading as one. We’ve covered many of the techniques we use to tackle this problem, but this post is by no means an exhaustive list. It’s also not a static list: attackers are dynamic participants in the security process, and defenses necessarily evolve in response to their changing techniques.
With that in mind, we’d like to recall some of the points mentioned above that deserve the most attention:
Respect the limits of CI/CD: it’s extremely tempting to do everything in CI/CD, but there are some things that CI/CD (and particularly GitHub Actions) just can’t do securely. For these things, it’s often better to forgo them entirely, or isolate them outside of CI/CD with a GitHub App or similar.
With that said, it’s important to not overcorrect and throw CI/CD away entirely: as mentioned above, CI/CD is a critical part of our security posture and probably yours too! It’s unfortunate that securing GitHub Actions is so difficult, but we consider it worth the effort relative to the velocity and security risks that would come with not using hosted CI/CD at all.
In particular, we strongly recommend using CI/CD for release processes, rather than relying on local developer machines, particularly when those release processes can be secured with misuse- and disclosure-resistant credential schemes like Trusted Publishing.
Isolate and eliminate long-lived credentials: the single most common form of post-compromise spread is the abuse of long-lived credentials. Wherever possible, eliminate these credentials entirely (for example, with Trusted Publishing or other OIDC-based authentication mechanisms).
Where elimination isn’t possible, isolate these credentials to the smallest possible scope: put them in specific deployment environments with additional activation requirements, and only issue credentials with the minimum necessary permissions to accomplish a given task.
Strengthen release processes: if you’re on GitHub, use deployment environments, approvals, tag and branch rulesets, and immutable releases to reduce the degrees of freedom the attacker has in the event of an account takeover or repository compromise.
Maintain awareness of your dependencies: maintaining awareness of the overall health of your dependency tree is critical to understanding your own risk profile. Use both tools and elbow grease to keep your dependencies secure, and to help them keep their own processes and dependencies secure too.
Finally, we’re still evaluating many of the techniques mentioned above, and will almost certainly be tweaking (and strengthening) them over the coming weeks and months as we learn more about their limitations and how they interact with our development processes. That’s to say that this post represents a point in time, not the final word on how we think about security for our open source tools.
...
Read the original on astral.sh »
As I type these words, I worry over the day when I will no longer be commissioned to write them. The day, to be specific, that The American Scholar asks Claude (the moniker for Anthropic’s AI) and not Robert (the name of Max and Roslyn Zaretsky’s son) to create an essay on, say, AI and the future of work.
Not surprisingly, I am not alone in worrying: Not many subjects stir greater fear and dread among Americans than the seemingly irresistible rise of AI. According to a recent Pew Research Center survey, 64 percent of the public believes that AI will translate into fewer jobs. Small wonder, then, that only 17 percent of the same respondents expect that AI, even when humanized by names like Claude, will make their future brighter.
Were he alive today, Paul Lafargue would be among that 17 percent, and his voice would be both loud and funny. Born in Cuba in 1842 to parents of mixed race—part Jewish and part Creole—Lafargue was married to Laura Marx, one of Karl Marx’s four daughters. Even before this marriage, though, Lafargue, who had studied medicine in Paris, had thrown over a secure future as a doctor to devote (and pauperize) himself and his family to working on behalf of the shining (and classless) future glimpsed by his father-in-law.
Knocking out polemical and theoretical essays while striving to launch France’s first workers’ party, the Parti ouvrier français, Lafargue was a well-known figure on the radical left in fin-de-siècle Paris. Predictably, his activities also made him well-known to the French police, who repeatedly arrested him, including on one evening in 1883 when he was taking home a salad to his wife. (He managed to find a passerby to deliver the salad before the police hauled him away.)
Making wine from this bunch of grapes, Lafargue used his time behind bars at Saint Pélagie—a forbidding Parisian prison where many of the century’s most notorious writers, artists, and thinkers found themselves from time to time—to draft his most famous work, Le Droit à la paresse, or The Right to Be Lazy, translated into English by Alex Andriesse. Though he dashed off this pamphlet nearly 150 years ago, Lafargue asked questions that remain most pertinent to our current anxieties over the future of work.
During Lafargue’s own lifetime, the nature of work was undergoing a traumatic transformation. The seismic effect of the first and second industrial revolutions, as well as the quickening pace of globalization, proved an extinction event for traditional forms of production. “The gods and kings of the past,” declared the historian Eric Hobsbawm, “were powerless before the businessmen and steam engines of the present.” As factory workers and unskilled laborers replaced ateliers and artisans, the former struggled to organize themselves, a struggle into which Lafargue threw himself body and soul.
Or, perhaps, not his entire soul. His essay’s title reveals a dramatic divergence of goals he and union leaders held. He bemoans the demand of workers for shorter workdays (which often lasted as long as 12 hours), insisting that curtailing work hours did not represent victory but defeat: “Shame on the proletariat, only slaves would have been capable of such baseness” to have sought such an outcome. On the contrary, he declaims, workers should oppose the very notion of work.
If you are puzzled, don’t worry—so, too, were nearly all of Lafargue’s contemporaries on the left. How could they not be? Here was a committed Marxist—and the great man’s son-in-law, to boot—asserting that workers, rather than strike for the right to work, should instead protest for the right to be lazy. Machines, he believed, could become “humanity’s savior, the god who will redeem man from the sordidae artes [manual labor] and give him leisure and liberty.”
And yet, Lafargue exclaims, “the blind passion and perverse murderousness of work have transformed the machine from an instrument of emancipation into an instrument that enslaves free beings.” The reason workers spend so many hours shackled to their machines, he contended, was not from economic necessity. Instead, it was imposed upon them by their superiors, the captains of industry and finance, who were wedded to “the dogma of work and diabolically drilled the vice of work into the heads of workers.”
Of course, Lafargue never called for the eradication of work. The necessities of life, after all, would always require the labor of women and men to produce and provide. But he did press for the rationalization of work. Given the efficiency of machines, fewer hours were needed to provide the necessities of life. Maintaining the same excessive number of work hours inevitably flooded the market with superfluities and fed the era’s repeated economic crises, which stretched from 1873 to the end of the century.
The dramatic reduction of time at work would be a boon not just to the well-being of the economy, Lafargue concluded, but also to the well-being of both workers and owners, who would have more time to … well, to do what?
Karl Marx had an answer of sorts, suggesting that we would “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind.” But Lafargue instead conjured a Rabelaisian future in which former workers would eat and drink their fill on holidays while their former taskmasters would entertain them by performing parodies of their now defunct roles as generals and industrialists. Et le voilà, Lafargue concludes, in this world turned upside down, “social discord will vanish.”
Though his tongue was firmly in cheek, Lafargue did imagine that these machines—perhaps the forerunners of the “machines of loving grace” invoked by Dario Amodei, the CEO of Anthropic—would lead us to a paradise we had lost. A paradise bathed in otium, the Latin word that can be translated as “idleness” as well as “laziness.” When Lafargue praises la paresse, he means not the latter, but the former. He makes this clear by quoting, at the start of his essay, a line from Virgil’s Eclogues that celebrates the pleasures of otium.
Although Lafargue does not flesh out his notion of a future filled with idleness, my guess is that he meant it would be devoted not to the pleasure of a particular hobby or specific activity, painting a landscape or swinging a golf club. Instead, it would be a life given over, quite simply, to the pleasure of faisant rien or doing nothing. As the Czech playwright Karel Capek wrote in an essay called “In Praise of Idleness,” this state is defined as “the absence of everything by which a person is occupied, diverted, distracted, interested, employed, annoyed, pleased, attracted, involved, entertained, bored, enchanted, fatigued, absorbed, or confused.” In a word, idling is the sentiment of being.
But even idlers, try as they might, cannot ignore the passage of time. In 1911, a dozen years before Capek published his essay, Paul Lafargue and his wife committed suicide—he was 69; she was 66. His reason, it seems to me, dovetailed with his philosophy: “I am killing myself before pitiless old age, which gradually deprives me one by one of the pleasures and joys of existence.” It might repay us to take a moment, not just from our jobs but also from our leisures, to make some to-do about doing nothing.
...
Read the original on theamericanscholar.org »
I’ve been working on Pizza Legacy, an open-source reimplementation of the 1994 DOS game . The game has a close-zoom street view of the cities, and when you scroll around it you can see a steady stream of cars driving through the streets. Maybe 20 or 30 tiny sprites at a time, but they navigate the road network, queue behind each other at intersections, and generally look like a living city. Yes, it was a bit buggy because sometimes they would drive through each other, but it was good enough to just give some sense of life to the map. All that on a 25 MHz 386 CPU.
The first thing I implemented in 2010 when I started this project was that close zoom level, but it took 14 years before I finally had the cars driving around on it in a way I was happy with. I made multiple attempts over the years, but every time I got stuck building an overly complicated system that was hard to reason about and no fun to work on.
One attempt in 2017 involved each tile keeping track of which positions were occupied, and every car had to ask the grid for permission before moving, reserving and freeing slots as it went. It basically turned into a shared locking system just to move a few pixels, with cars and tiles constantly trying to stay in sync.
All the while I had this nagging thought in the back of my mind: the original ran this on a 25 MHz CPU, so why were my versions always so complicated?
Finally I went to the assembly (which I had spent many years slowly understanding and documenting) to figure out what the original was doing, with the help of LLMs, which were (a couple of years ago) this new and exciting technology that could understand assembly better than I could.
Now that I finally have it working I can see where I went wrong: I went into it with a brain full of modern concepts: scene graphs, path finding, collision detection, and of course plenty of CPU to run it all!
First, let’s look at what a city actually looks like:
As you can see there are two-lane roads, T-junctions, intersections, and corners. In the game, maps are made up of a grid of 160 by 120 tiles, where each tile is one of the tiles from landsym.vga:
The original landsym.vga file with added borders between tiles and text to indicate the row and column offset. Byte 0x54 means column 5, row 4 (roof tile of a depot).
Back to the traffic; the key insight that makes it possible to run this system on such a slow CPU: cars don’t need to know where they’re going. Each road tile type carries its own direction. Road tile 0x16 is the bottom part of a horizontal road, meaning that cars can only drive from left to right on these roads. Similarly road tile 0x06 is just for right to left traffic, then 0x26 and 0x36 are the same but for vertical traffic.
This means the city is basically just a bunch of one-way roads: once a car knows which tile it sits on, it can keep going.
Corners work the same way: 0x56 (CORNER_SW in my enum) is the corner that allows the car either to keep going west or to turn south. When a car hits a corner it flips a coin: 50% chance of going straight on, 50% chance of taking the turn. The maps have been designed in such a way that the roads always make sense, which means that next to the CORNER_SW there is another tile that either carries south-to-north traffic (so we have to go south) or is another corner tile that allows either a turn or straight on.
There is one extra rule to keep traffic looking natural: if you just took a left turn, the next corner forces you straight on; no two consecutive left turns.
Valid directions per tile type indicated with arrows.
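The routing rule described above can be sketched like this. Tile and direction names are illustrative, only CORNER_SW is shown (the real game covers every corner and junction type), and the left-turn test assumes screen coordinates with y growing downward:

```python
import random

# Facing each heading, which direction is a left turn (screen coords, y down).
LEFT_OF = {"E": "N", "N": "W", "W": "S", "S": "E"}

# For each corner tile: (straight-on heading, turn heading).
CORNERS = {"CORNER_SW": ("W", "S")}

def decide_direction(tile, heading, just_turned_left, rng=random.random):
    """Return (new heading, whether this move was a left turn)."""
    if tile not in CORNERS:
        return heading, False          # straight road: keep going
    straight, turn = CORNERS[tile]
    turning_left = LEFT_OF[heading] == turn
    if just_turned_left and turning_left:
        return straight, False         # no two consecutive left turns
    if rng() < 0.5:
        return turn, turning_left      # coin flip: take the turn
    return straight, False             # coin flip: straight on
```

The caller remembers the returned left-turn flag and feeds it back in at the next corner.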
Cars move one pixel per frame. Each tick the main loop checks if a car is blocked, and if not, increments or decrements its screen coordinate by one depending on direction. East adds 1 to X. North subtracts 1 from Y.
There’s a second progress counter, counting down from 16. When it hits zero it resets to 16 and the game runs the tile-boundary logic: look up the next tile, decide the new direction, and update the sprite frame (to visually turn the car in the new direction). Since each tile is 16 pixels wide and tall, this runs exactly once per tile crossed. The per-pixel move happens every tick; the heavier tile logic runs only 1/16th as often.
When a car first spawns, progress is set to a random value between 1 and 16. That staggers all the cars so their tile-boundary checks don’t all land on the same frame, spreading the work out evenly.
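The per-tick loop can be sketched as follows, again assuming screen coordinates with y growing downward (the tile lookup itself is abstracted into a callback):

```python
import random

# One pixel of movement per heading per tick.
STEP = {"E": (1, 0), "W": (-1, 0), "N": (0, -1), "S": (0, 1)}

class Car:
    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading
        self.wait = 0
        # Random initial progress staggers the tile-boundary checks
        # across cars so they don't all land on the same frame.
        self.progress = random.randint(1, 16)

    def tick(self, on_tile_boundary):
        if self.wait:                  # blocked: sit still this tick
            self.wait -= 1
            return
        dx, dy = STEP[self.heading]
        self.x += dx                   # the cheap per-pixel move
        self.y += dy
        self.progress -= 1
        if self.progress == 0:         # crossed into the next tile
            self.progress = 16
            on_tile_boundary(self)     # heavier logic: tile lookup,
                                       # direction choice, sprite frame
```

With 16-pixel tiles, `on_tile_boundary` fires exactly once per tile crossed.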
Unlike my various attempts at fancy collision detection, the original uses a straightforward pairwise check: for each car, walk the whole car list and ask “would these two overlap next tick?” If yes, set a wait counter of 10 ticks on the blocked car and move on to the next car.
But the collision detection code is written to bail out as fast as possible. The very first thing it does is extract the other car’s direction; because roads are one-way, east and west never share a road, so an east car and a west car can never collide. That pair returns immediately, no coordinate reads at all. Same for east and south, west and north, and so on.
With say 25 cars in a typical city view there are 625 pairwise calls per frame. About half of those return in just a few CPU instructions on the direction check alone. Most of the rest fail the lane check (same-direction cars have to be on the same road, which is one equality comparison). The pairs that actually reach any coordinate arithmetic are usually single digits.
When a car does get blocked, the 10-tick wait creates natural traffic jams: cars bunch up, the front one eventually finds the way clear, and the queue drains. The system has some quirks; some combinations are never checked (an eastbound car is never tested against a southbound one, for example), which might be behind some of the bugginess you see when it runs for a while on a map with a lot of intersections. But given that the point is not an accurate driving simulation, just some movement on the screen, it works perfectly well and very efficiently.
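The early-exit structure can be sketched like this; the exact set of direction pairs the original tests is simplified here to same-direction pairs only, and the 16-pixel footprint is an assumption:

```python
# Direction pairs that can collide at all. One-way roads mean opposite
# and (in this simplification) crossing traffic never shares a lane.
CAN_COLLIDE = {("E", "E"), ("W", "W"), ("N", "N"), ("S", "S")}

CAR_SIZE = 16  # sprite footprint in pixels (illustrative)

def would_collide(a, b):
    # Cheapest check first: most pairs exit on the direction test alone,
    # before any coordinate is even read.
    if (a["dir"], b["dir"]) not in CAN_COLLIDE:
        return False
    # Lane check: same-direction cars must be on the same road,
    # which is a single equality comparison.
    horizontal = a["dir"] in ("E", "W")
    if horizontal and a["y"] != b["y"]:
        return False
    if not horizontal and a["x"] != b["x"]:
        return False
    # Only the surviving pairs reach any coordinate arithmetic.
    axis = "x" if horizontal else "y"
    return abs(a[axis] - b[axis]) < CAR_SIZE
```

With ~25 cars, the quadratic pairwise loop stays cheap precisely because almost every call returns at one of the first two tests.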
When you enter the close-zoom view, the game scans all 132 tiles in the viewport (12 columns by 11 rows), and for each road tile it rolls against the district’s traffic density to decide whether to spawn a car there, so higher-traffic districts are busier. Corner tiles are excluded from spawn points, so cars only appear on straight road tiles.
Cars that drive off the edge of the screen are respawned as a new, randomly colored car facing the other direction, on the tile going the other direction. This means the game doesn’t have to worry about managing the car population: every time a car drives off going east, it spawns a new car on the lane below going west, and so on.
Pay attention to the cars driving off the map at the edges, notice they are replaced by cars driving the opposite direction.
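A sketch of the edge-respawn rule; the lane offset, view dimensions, and color list are assumptions for illustration, not values from the original:

```python
import random

VIEW_W, VIEW_H = 192, 176       # 12 x 11 tiles of 16 px (viewport size)
OPPOSITE = {"E": "W", "W": "E", "N": "S", "S": "N"}
LANE_OFFSET = 16                # the paired one-way lane, one tile over
COLORS = ["red", "blue", "green", "white"]

def respawn_if_offscreen(car):
    """Replace a car that left the view with one going the other way."""
    if 0 <= car["x"] < VIEW_W and 0 <= car["y"] < VIEW_H:
        return car                       # still on screen, unchanged
    new = dict(car)
    new["dir"] = OPPOSITE[car["dir"]]
    new["color"] = random.choice(COLORS)
    if car["dir"] in ("E", "W"):
        # Re-enter from the edge the old car left through, on the
        # opposite lane (below for east exits, above for west exits).
        new["x"] = VIEW_W - 1 if new["dir"] == "W" else 0
        new["y"] = car["y"] + (LANE_OFFSET if car["dir"] == "E" else -LANE_OFFSET)
    else:
        new["y"] = VIEW_H - 1 if new["dir"] == "S" else 0
        new["x"] = car["x"] + (LANE_OFFSET if car["dir"] == "N" else -LANE_OFFSET)
    return new
```

The net effect is a constant car count with no separate spawning bookkeeping.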
When you scroll, the newly exposed strip of tiles gets the same treatment: each tile has a chance of having a car spawned on it.
Looking back at my failed attempts, I was designing for problems that the original just didn’t consider. Cars don’t need pathfinding because the map tells them where they can go. Collision detection is cheap because the early-exit logic makes most pairs basically free. There’s no velocity or physics because 1 pixel per tick is enough to look convincing. When you’re about to hit something, you just pause for 10 ticks, and when you have to make a turn you just travel half the width of the tile and then turn; it works on every tile in any direction.
I reimplemented it following the assembly pretty closely, so just a couple of switch statements with different routing options per tile type, you can see the decide_desired_direction method in Car.cpp.
...
Read the original on pizzalegacy.nl »