10 interesting stories served every morning and every evening.




1 1,233 shares, 62 trendiness

Little Snitch for Linux

Every time an application on your computer opens a network connection, it does so quietly, without asking. Little Snitch for Linux makes that activity visible and gives you the option to do something about it. You can see exactly which applications are talking to which servers, block the ones you didn't invite, and keep an eye on traffic history and data volumes over time.

Once installed, open the user interface by running littlesnitch in a terminal, or go straight to http://localhost:3031/. You can bookmark that URL, or install it as a Progressive Web App. Any Chromium-based browser supports this natively, and Firefox users can do the same with the Progressive Web Apps extension.

The connections view is where most of the action is. It lists current and past network activity by application, shows you what's being blocked by your rules and blocklists, and tracks data volumes and traffic history. Sorting by last activity, data volume, or name, and filtering the list to what's relevant, makes it easy to spot anything unexpected. Blocking a connection takes a single click.

The traffic diagram at the bottom shows data volume over time. You can drag to select a time range, which zooms in and filters the connection list to show only activity from that period.

Blocklists let you cut off whole categories of unwanted traffic at once. Little Snitch downloads them from remote sources and keeps them current automatically. It accepts lists in several common formats: one domain per line, one hostname per line, /etc/hosts style (IP address followed by hostname), and CIDR network ranges. Wildcard formats, regex or glob patterns, and URL-based formats are not supported. When you have a choice, prefer domain-based lists over host-based ones; they're handled more efficiently. Well-known list publishers include Hagezi, Peter Lowe, Steven Black, and oisd.nl, just to give you a starting point.
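For illustration, here is one hypothetical entry in each of the four accepted formats (the hosts, domains, and address range are placeholders, and the comment lines are just labels):

```
# One domain per line
tracker.example.com

# One hostname per line
telemetry.example.net

# /etc/hosts style: IP address followed by hostname
0.0.0.0 ads.example.org

# CIDR network range
203.0.113.0/24
```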

One thing to be aware of: the .lsrules format from Little Snitch on macOS is not compatible with the Linux version.

Blocklists work at the domain level, but rules let you go further. A rule can target a specific process, match particular ports or protocols, and be as broad or narrow as you need. The rules view lets you sort and filter them so you can stay on top of things as the list grows.

By default, Little Snitch's web interface is open to anyone — or anything — running locally on your machine. A misbehaving or malicious application could, in principle, add and remove rules, tamper with blocklists, or turn the filter off entirely.

If that concerns you, Little Snitch can be configured to require authentication. See the Advanced configuration section below for details.

Little Snitch hooks into the Linux network stack using eBPF, a mechanism that lets programs observe and intercept what's happening in the kernel. An eBPF program watches outgoing connections and feeds data to a daemon, which tracks statistics, preconditions your rules, and serves the web UI.
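To make that concrete, here is a minimal, hypothetical sketch of the kind of hook involved. It is not Little Snitch's actual program, just an illustration of how an eBPF program attached at the cgroup/connect4 hook can observe (and even veto) outgoing IPv4 connections:

```c
// Hypothetical illustration, not Little Snitch's code: this program runs
// on every outgoing IPv4 connect() in the cgroup it is attached to.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("cgroup/connect4")
int observe_connect(struct bpf_sock_addr *ctx)
{
    __u32 dst_ip   = ctx->user_ip4;                    // network byte order
    __u16 dst_port = bpf_ntohs((__u16)ctx->user_port);

    // A real daemon would consume events via a ring buffer; the kernel
    // trace log is enough to show the idea.
    bpf_printk("connect -> 0x%x port %u", bpf_ntohl(dst_ip), dst_port);

    return 1; // 1 = allow the connection, 0 = reject it
}

char LICENSE[] SEC("license") = "GPL";
```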

The source code for the eBPF program and the web UI is on GitHub.

The UI deliberately exposes only the most common settings. Anything more technical can be configured through plain text files, which take effect after restarting the littlesnitch daemon.

The default configuration lives in /var/lib/littlesnitch/config/. Don't edit those files directly — copy whichever one you want to change into /var/lib/littlesnitch/overrides/config/ and edit it there. Little Snitch will always prefer the override.
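In practice (as root, and using web_ui.toml from the list below as an example) that looks something like:

```
mkdir -p /var/lib/littlesnitch/overrides/config
cp /var/lib/littlesnitch/config/web_ui.toml \
   /var/lib/littlesnitch/overrides/config/web_ui.toml
# edit the copy, then restart the littlesnitch daemon
```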

The files you’re most likely to care about:

web_ui.toml — network address, port, TLS, and authentication. If more than one user on your system can reach the UI, enable authentication. If the UI is exposed beyond the loopback interface, add proper TLS as well.

main.toml — what to do when a connection matches nothing. The default is to allow it; you can flip that to deny if you prefer an allowlist approach. But be careful! It's easy to lock yourself out of the computer!

executables.toml — a set of heuristics for grouping applications sensibly. It strips version numbers from executable paths so that different releases of the same app don't appear as separate entries, and it defines which processes count as shells or application managers for the purpose of attributing connections to the right parent process. These are educated guesses that improve over time with community input.

Both the eBPF program and the web UI can be swapped out for your own builds if you want to go that far. Source code for both is on GitHub. Again, Little Snitch prefers the version in overrides.

Little Snitch for Linux is built for privacy, not security, and that distinction matters. The macOS version can make stronger guarantees because it can afford far more complexity. On Linux, the foundation is eBPF, which is powerful but bounded: it has strict limits on storage size and program complexity. Under heavy traffic, cache tables can overflow, which makes it impossible to reliably tie every network packet to a process or a DNS name. And reconstructing which hostname was originally looked up for a given IP address requires heuristics rather than certainty. The macOS version uses deep packet inspection to do this more reliably. That's not an option here.

For keeping tabs on what your software is up to and blocking legitimate software from phoning home, Little Snitch for Linux works well. For hardening a system against a determined adversary, it's not the right tool.

Little Snitch for Linux has three components. The eBPF kernel program and the web UI are both released under the GNU General Public License version 2 and available on GitHub. The daemon (littlesnitch --daemon) is proprietary, but free to use and redistribute.

...

Read the original on obdev.at »

2 771 shares, 211 trendiness

EFF is Leaving X

After almost twenty years on the platform, EFF is logging off of X. This isn't a decision we made lightly, but it might be overdue. The math hasn't worked out for a while now.

We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago.

When Elon Musk acquired Twitter in October 2022, EFF was clear about what needed fixing.

* Greater user control: Giving users and third-party developers the means to control the user experience through filters and ...

Twitter was never a utopia. We've criticized the platform for about as long as it's been around. Still, Twitter did deserve recognition from time to time for vociferously fighting for its users' rights. That changed. Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes. Many users left. Today we're joining them.

Yes. And we understand why that looks contradictory. Let us explain.

EFF exists to protect people's digital rights. Not just the people who already value our work, have opted out of surveillance, or have already migrated to the fediverse. The people who need us most are often the ones most embedded in the walled gardens of the mainstream platforms and subjected to their corporate surveillance.

Young people, people of color, queer folks, activists, and organizers use Instagram, TikTok, and Facebook every day. These platforms host mutual aid networks and serve as hubs for political organizing, cultural expression, and community care. Just deleting the apps isn't always a realistic or accessible option, and neither is pushing every user to the fediverse when there are circumstances like:

* You own a small business that depends on Instagram for customers.

* Your abortion fund uses TikTok to spread crucial information.

* You're isolated and rely on online spaces to connect with your community.

Our presence on Facebook, Instagram, YouTube, and TikTok is not an endorsement. We've spent years exposing how these platforms suppress marginalized voices, enable invasive behavioral advertising, and flag posts about abortion as dangerous. We've also taken action in court, in legislatures, and through direct engagement with their staff to push them to change poor policies and practices.

We stay because the people on those platforms deserve access to information, too. We stay because some of our most-read posts are the ones criticizing the very platform we're posting on. We stay because the fewer steps between you and the resources you need to protect yourself, the better.

When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our members' support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we're here to help you take back control.

...

Read the original on www.eff.org »

3 440 shares, 16 trendiness

Is Hormuz Open Yet?

...

Read the original on www.ishormuzopenyet.com »

4 438 shares, 39 trendiness

Help Keep Thunderbird Alive!

All of the work we do is funded by less than 3% of our users.

We never show advertisements or sell your data. We don't have corporate funding. We are fully funded by financial contributions from our users.

Thunderbird's mission is to give you the best privacy-respecting, customizable email experience possible. Free for everyone to install and enjoy! Maintaining expensive servers, fixing bugs, developing new features, and hiring talented engineers are crucial for this mission.

If you get value from using Thunderbird, please help support it. We can't do this without you.

...

Read the original on updates.thunderbird.net »

5 390 shares, 53 trendiness

The Pentagon Threatened Pope Leo XIV’s Ambassador With the Avignon Papacy


Before you read on: Pope Leo XIV has asked Americans to contact their members of Congress and demand an end to the war in Iran. Answer the pope's call in one click at standwithpopeleo.com, an app we built to make it as easy as possible.

[UPDATE at 4:33 PM EDT: Letters from Leo can now independently confirm The Free Press report that the meeting took place — and that some Vatican officials were so alarmed by the Pentagon's tactics that they shelved plans for Pope Leo XIV to visit the United States later this year.

Other officials in the Vatican saw the Pentagon's reference to an Avignon papacy as a threat to use military force against the Holy See.]

In January, behind closed doors at the Pentagon, Under Secretary of War for Policy Elbridge Colby summoned Cardinal Christophe Pierre — Pope Leo XIV's then-ambassador to the United States — and delivered a lecture.

America, Colby and his colleagues told the cardinal, has the military power to do whatever it wants in the world. The Catholic Church had better take its side.

As tempers rose, an unidentified U.S. official reached for a fourteenth-century weapon and invoked the Avignon Papacy, the period when the French Crown used military force to bend the bishop of Rome to its will.

That scene, broken this week by Mattia Ferraresi in an extraordinary piece of journalism for The Free Press, may be the most remarkable moment in the long and knotted history of the American republic's relationship with the Catholic Church.

There is no public record of any Vatican official ever taking a meeting at the Pentagon, and certainly none of a senior U.S. official threatening the Vicar of Christ on Earth with the prospect of an American Babylonian Captivity.

The reporting also confirms — with fresh sources and new color — what I first reported in February: that the Vatican declined the Trump-Vance White House's invitation to host Pope Leo XIV for America's 250th anniversary in 2026.

Ferraresi obtained accounts from Vatican and U.S. officials briefed on the Pentagon meeting. According to his sources, Colby's team picked apart the pope's January state-of-the-world address line by line and read it as a hostile message aimed directly at the administration.

What enraged them most was Leo's declaration that "a diplomacy that promotes dialogue and seeks consensus among all parties is being replaced by a diplomacy based on force."

The Pentagon read that sentence as a frontal challenge to the so-called "Donroe Doctrine" — Trump's update of Monroe, asserting unchallenged American dominion over the Western Hemisphere.

The cardinal sat through the lecture in silence. The Holy See has not, since that day, given an inch.

Ferraresi's reporting also adds vital color to the collapse of the 250th anniversary visit. JD Vance personally extended the invitation in May 2025, just two weeks after Leo's election in the conclave.

According to a senior Vatican official quoted in the piece, the Holy See initially considered the request, then postponed it indefinitely because of foreign policy disagreements, the rising opposition of American bishops to the Trump-Vance mass deportation regime, and a refusal to become a partisan trophy in the 2026 midterms.

"The administration tried every possible way to have the Pope in the U.S. in 2026," one Vatican official told The Free Press.

Instead, on July 4, 2026, the first American pope will travel to Lampedusa, the Italian island where North African migrants wash ashore by the thousands. Robert Francis Prevost is too deliberate a man to have chosen that date by accident.

The Pentagon meeting also clarifies the moral intensity of Leo's public posture over the last six weeks.

After Colby's lecture, the pope did not retreat into Vatican diplomacy. He pressed harder.

...

Read the original on www.thelettersfromleo.com »

6 386 shares, 32 trendiness

Claude mixes up who said what, and that's not OK

And that's not OK. This bug is categorically distinct from hallucinations or missing permission boundaries.

Claude sometimes sends messages to itself and then thinks those messages came from the user. This is the worst bug I've seen from an LLM provider, but people always misunderstand what's happening and blame LLMs, hallucinations, or lack of permission boundaries. Those are related issues, but this 'who said what' bug is categorically distinct.

I wrote about this in detail in The worst bug I've seen so far in Claude Code, where I showed two examples of Claude giving itself instructions and then believing those instructions came from me.

Claude told itself my typos were intentional and deployed anyway, then insisted I was the one who said it.

It’s not just me

Here's a Reddit thread where Claude said "Tear down the H100 too", and then claimed that the user had given that instruction.

From r/Anthropic — Claude gives itself a destructive instruction and blames the user.

"You shouldn't give it that much access"

Comments on my previous post were things like "It should help you use more discipline in your DevOps." And on the Reddit thread, many in the class of "don't give it nearly this much access to a production environment, especially if there's data you want to keep."

This isn't the point. Yes, of course AI has risks and can behave unpredictably, but after using it for months you get a 'feel' for what kind of mistakes it makes, when to watch it more closely, when to give it more permissions or a longer leash.

This class of bug seems to be in the harness, not in the model itself. It's somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that "No, you said that."

Before, I thought it was a temporary thing — I saw it a few times in a single day, and then not again for months. But either they have a regression or it was a coincidence and it just pops up every so often, and people only notice when it gives itself permission to do something bad.

This article reached #1 on Hacker News, and the response made clear that this is a widespread issue. Here's another super clear example shared by nathell (full transcript).

From nathell — Claude asks itself "Shall I commit this progress?" and treats it as user approval.

Several people questioned whether this is actually a harness bug like I assumed, as people have reported similar issues using other interfaces and models, including chatgpt.com. One pattern does seem to be that it happens in the so-called "Dumb Zone" once a conversation starts approaching the limits of the context window.

...

Read the original on dwyer.co.za »

7 369 shares, 14 trendiness

John Deere to Pay $99 Million in Monumental Right-to-Repair Settlement

Farmers have been fighting John Deere for years over the right to repair their equipment, and this week, they finally reached a landmark settlement.

While the agricultural manufacturing giant pointed out in a statement that this is no admission of wrongdoing, it agreed to pay $99 million into a fund for farms and individuals who participated in a class action lawsuit. Specifically, that money is available to those involved who paid John Deere's authorized dealers for large equipment repairs from January 2018. This means that plaintiffs will recover somewhere between 26% and 53% of overcharge damages, according to one of the court documents—far beyond the typical amount, which lands between 5% and 15%.

The settlement also includes an agreement by Deere to provide the digital tools "required for the maintenance, diagnosis, and repair" of tractors, combines, and other machinery for 10 years. That part is crucial, as farmers previously resorted to hacking their own equipment's software just to get it up and running again. John Deere signed a memorandum of understanding in 2023 that partially addressed those concerns, providing third parties with the technology to diagnose and repair, as long as its intellectual property was safeguarded. Monday's settlement seems to represent a much stronger (and legally binding) step forward.

Ripple effects of this battle have been felt far beyond the sales floors at John Deere dealers, as the price of used equipment skyrocketed in response to the infamous service difficulties. Even when the cost of older tractors doubled, farmers reasoned that they were still worth it because repairs were simpler and downtime was minimized. $60,000 for a 40-year-old machine became the norm.

A judge's approval of the settlement is still required, though it seems likely. Still, John Deere isn't out of the woods yet. It still faces another lawsuit from the United States Federal Trade Commission, in which the government organization accuses Deere of harmfully locking down the repair process.

It's difficult to overstate the significance of this right-to-repair fight. While it has obvious implications for the ag industry, others like the automotive and even home appliance sectors are looking on. Any court ruling that might formally condemn John Deere for wrongdoing may set a precedent for others to follow. At a time when manufacturers want more and more control of their products after the point of sale, every little update feels incredibly high-stakes.


...

Read the original on www.thedrive.com »

8 335 shares, 20 trendiness

Open source security at Astral

Astral builds tools that millions of developers around the world depend on and trust.

That trust includes confidence in our security posture: developers reasonably expect that our tools (and the processes that build, test, and release them) are secure. The rise of supply chain attacks, typified by the recent Trivy and LiteLLM hacks, has developers questioning whether they can trust their tools.

To that end, we want to share some of the techniques we use to secure our tools in the hope that they're useful to:

* Our users, who want to understand what we do to keep their systems secure;

* Other maintainers, projects, and companies, who may benefit from some of the techniques we use;

* Developers of CI/CD systems, so that projects do not need to follow non-obvious paths or avoid useful features to maintain secure and robust processes.

We sustain our development velocity on Ruff, uv, and ty through extensive CI/CD workflows that run on GitHub Actions. Without these workflows we would struggle to review, test, and release our tools at the pace and to the degree of confidence that we demand. Our CI/CD workflows are also a critical part of our security posture, in that they allow us to keep critical development and release processes away from local developer machines and inside of controlled, observable environments.

GitHub Actions is a logical choice for us because of its tight first-party integration with GitHub, along with its mature support for contributor workflows: anybody who wants to contribute can validate that their pull request is correct with the same processes we use ourselves.

Unfortunately, there's a flipside to this: GitHub Actions has poor security defaults, and security compromises like those of Ultralytics, tj-actions, and Nx all began with well-trodden weaknesses like pwn requests.

Here are some of the things we do to secure our CI/CD processes:

We forbid many of GitHub's most dangerous and insecure triggers, such as pull_request_target and workflow_run, across our entire GitHub organization. These triggers are almost impossible to use securely and attackers keep finding ways to abuse them, so we simply don't allow them.

Our experience with these triggers is that many projects think that they need them, but the overwhelming majority of their usages are better off being replaced with a less privileged trigger (such as pull_request) or removed entirely. For example, many projects use pull_request_target so that third-party contributor-triggered workflows can leave comments on PRs, but these use cases are often well served by job summaries or even just leaving the relevant information in the workflow's logs.

Of course, there are some use cases that do require these triggers, such as anything that really does need to leave comments on third-party issues or pull requests. In these instances we recommend leaving GitHub Actions entirely and using a GitHub App (or webhook) that listens for the relevant events and acts in an independent context. We cover this pattern in more detail under Automations below.

We require all actions to be pinned to specific commits (rather than tags or branches, which are mutable). Additionally, we cross-check these commits to ensure they match an actual released repository state and are not impostor commits.

We do this in two ways: first with zizmor's unpinned-uses and impostor-commit audits, and again with GitHub's own "require actions to be pinned to a full-length commit SHA" policy. The former gives us a quick check that we can run locally (and prevents impostor commits), while the latter is a hard gate on workflow execution that actually ensures that all actions, including nested actions, are fully hash-pinned.
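In workflow terms, the difference looks like this (the SHA below is a placeholder, not a real checkout release):

```yaml
steps:
  # Mutable reference: the v4 tag can be re-pointed after the fact.
  # - uses: actions/checkout@v4

  # Immutable reference: a full-length commit SHA, with the human-readable
  # version kept in a trailing comment for reviewers and update tooling.
  - uses: actions/checkout@1111111111111111111111111111111111111111 # v4 (placeholder SHA)
```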

Enabling the latter is a nontrivial endeavor, since it requires indirect action usages (the actions called by the actions we call) to be hash-pinned as well. To achieve this, we coordinated with our downstreams (example) to land hash-pinning across our entire dependency graph.

Together, these checks increase our confidence in the reproducibility and hermeticity of our workflows, which in turn increases our confidence in their security (in the presence of an attacker's ability to compromise a dependent action).

However, while necessary, this isn't sufficient: hash-pinning ensures that the action's contents are immutable, but doesn't prevent those immutable contents from making mutable decisions (such as installing the latest version of a binary from a GitHub repository's releases). Neither GitHub nor third-party tools perform well at detecting these kinds of immutability gaps yet, so we currently rely on manual review of our action dependencies to detect this class of risks.

When manual review does identify gaps, we work with our upstreams to close them. For example, for actions that use native binaries internally, this is achieved by embedding a mapping between the download URL for the binary and a cryptographic hash. This hash in turn becomes part of the action's immutable state. While this doesn't ensure that the binary itself is authentic, it does ensure that an attacker cannot effectively tamper with a mutable pointer to the binary (such as a non-immutable tag or release).

We limit our workflow and job permissions in multiple places: we default to read-only permissions at the organization level, and we additionally start every workflow with permissions: {} and only broaden beyond that on a job-by-job basis.
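As a sketch, that layering looks like this in a workflow file:

```yaml
# Workflow level: start from nothing.
permissions: {}

jobs:
  test:
    runs-on: ubuntu-latest
    # Job level: grant only what this particular job actually needs.
    permissions:
      contents: read
    steps:
      - run: echo "this job can read the repo and nothing more"
```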

We isolate our GitHub Actions secrets, wherever possible: instead of using organization- or repository-level secrets, we use deployment environments and environment-specific secrets. This allows us to further limit the blast radius of a potential compromise, as a compromised test or linting job won't have access to, for example, the secrets needed to publish release artifacts.
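Scoping a job to an environment is a single line; its secrets (and any approval rules) then live on the environment rather than the repository:

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: release  # secrets scoped here are invisible to other jobs
```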

To do these things, we leverage GitHub's own settings, as well as tools like zizmor (for static analysis) and pinact (for automatic pinning).

Beyond our CI/CD processes, we also take a number of steps to limit both the likelihood and the impact of account and repository compromises within the Astral organization:

We limit the number of accounts with admin and other highly privileged roles, with most organization members only having read and write access to the repositories they need to work on. This reduces the number of accounts that an attacker can compromise to gain access to our organization-level controls.

We enforce strong 2FA methods for all members of the Astral organization, beyond GitHub's default of requiring any 2FA method. In effect, this requires all Astral organization members to have a 2FA method that's no weaker than TOTP. If and when GitHub allows us to enforce only 2FA methods that are phishing-resistant (such as WebAuthn and Passkeys only), we will do so.

We impose branch protection rules on an org-wide basis: changes to main cannot be force-pushed and must always go through a pull request. We also forbid the creation of particular branch patterns (like advisory-* and internal-*) to prevent premature disclosure of security work.

We impose tag protection rules that prevent release tags from being created until a release deployment succeeds, with the release deployment itself being gated on a manual approval by at least one other team member. We also prevent the updating or deletion of tags, making them effectively immutable once created. On top of that we layer a branch restriction: release deployments may only be created against main, preventing an attacker from using an unrelated first-party branch to attempt to bypass our controls.

Finally, we ban repository admins from bypassing all of the above protections. All of our protections are enforced at the organization level, meaning that an attacker who manages to compromise an account that has admin access to a specific repository still won't be able to disable our controls.

To help others implement these kinds of branch and tag controls, we're sharing a gist that shows some of the rulesets we use. These rulesets are specific to our GitHub organization and repositories, but you can use them as a starting point for your own policies!

There are certain things that GitHub Actions can do, but can't do securely, such as leaving comments on third-party issues and pull requests. Most of the time it's better to just forgo these features, but in some cases they're a valuable part of our workflows.

In these latter cases, we use astral-sh-bot to safely isolate these tasks outside of GitHub Actions: GitHub sends us the same event data that GitHub Actions would have received (since GitHub Actions consumes the same webhook payloads as GitHub Apps do), but with much more control and much less implicit state.

However, there's still a catch with GitHub Apps: an app doesn't eliminate any sensitive credentials needed for an operation, it just moves them into an environment that doesn't mix code and data as pervasively as GitHub Actions does. For example, an app won't be susceptible to a template injection attack like a workflow would be, but could still contain SQLi, prompt injection, or other weaknesses that allow an attacker to abuse the app's credentials. Consequently, it's essential to treat GitHub App development with the same security mindset as any other software development. This also extends to untrusted code: using a GitHub App does not make it safe to run untrusted code, it just makes it harder to do so unexpectedly. If your processes need to run untrusted code, they must use pull_request or another "safe" trigger that doesn't provide any privileged credentials to third-party pull requests.

With all that said, we've found that the GitHub App pattern works well for us, and we recommend it to other maintainers and projects who have similar needs. The main downside to it comes in the form of complexity: it requires developing and hosting a GitHub App, rather than writing a workflow that GitHub orchestrates for you. We've found that frameworks like Gidgethub make the development process for GitHub Apps relatively straightforward, but that hosting remains a burden in terms of time and cost.

It's an unfortunate reality that there still aren't great GitHub App options for one-person and hobbyist open source projects; it's our hope that usability enhancements in this space can be led by companies and larger projects that have the resources needed to paper over GitHub Actions' shortcomings as a platform.

We recommend this tutorial by Mariatta as a good introduction to building GitHub Apps in Python. We also plan to open source astral-sh-bot in the future.

So far, we've covered aspects that tie closely to GitHub, as the source host for Astral's tools. But many of our users install our tools via other mechanisms, such as PyPI, Homebrew, and our Docker images. These distribution channels add another "link" to the metaphorical supply chain, and require discrete consideration:

Where possible, we use Trusted Publishing to publish to registries (like PyPI, crates.io, and NPM). This technique eliminates the need for long-lived registry credentials, in turn ameliorating one of the most common sources of package takeover (credential compromise in CI/CD platforms).
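For PyPI, the moving parts are an OIDC token permission and the official publish action, with no stored API token anywhere. A minimal sketch (SHA placeholder again, and the artifact-building steps are omitted):

```yaml
jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: release
    permissions:
      id-token: write  # lets the job mint a short-lived OIDC token
    steps:
      # ...download the built distributions here...
      # The publish action exchanges the OIDC token for a scoped,
      # short-lived PyPI upload credential.
      - uses: pypa/gh-action-pypi-publish@2222222222222222222222222222222222222222 # release/v1 (placeholder SHA)
```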

Where possible (currently our binary and Docker image releases), we generate Sigstore-based attestations. These attestations establish a cryptographically verifiable link between the released artifact and the workflow that produced it, in turn allowing users to verify that their build of uv, Ruff, or ty came from our actual release processes. You can see our recent attestations for uv as an example of this.

We use GitHub's immutable releases feature to prevent the post-hoc modification of the builds we publish on GitHub. This addresses a common attacker pivoting technique where previously published builds are replaced with malicious builds. A variant of this technique was used in the recent Trivy attack, with the attacker force-pushing over previous tags to introduce compromised versions of the trivy-action and setup-trivy actions.

We do not use caching to improve build times during releases, to prevent an attacker from compromising our builds via a GitHub Actions cache poisoning attack.

* To reduce the risk of an attacker publishing a new malicious version of our tools, we use a stack of protections on our release processes:

* Our release process is isolated within a dedicated GitHub deployment environment. This means that jobs that don't run in the release environment (such as tests and linters) don't have access to our release secrets.

* In order to activate the release environment, the activating job must be approved by at least one other privileged member of the Astral organization. This mitigates the risk of a single rogue or compromised account being able to publish a malicious release (or exfiltrate release secrets); the attacker needs to compromise at least two distinct accounts, both with strong 2FA.

* In repositories (like uv) where we have a large number of release jobs, we use a distinct release-gate environment to work around the fact that GitHub triggers approvals for every job that uses the release environment. This retains the two-person approval requirement, with one additional hop: a small, minimally-privileged GitHub App mediates the approval from release-gate to release via a deployment protection rule.

* Finally, we use a tag protection ruleset to prevent the creation of a release's tag until the release deployment succeeds. This prevents an attacker from bypassing the normal release process to create a tag and release directly.


* For users who install uv via our standalone installer, we enforce the integrity of the installed binaries via checksums embedded directly into the installer's source code.

Our release processes also involve "knock-on" changes, like updating our public documentation, version manifests, and the official pre-commit hooks. These are privileged operations that we protect through dedicated bot accounts and fine-grained PATs issued through those accounts.

Going forwards, we're also looking at adding codesigning with official developer certificates on macOS and Windows.

Last but not least is the question of dependencies. Like almost all modern software, our tools depend on an ecosystem of third-party dependencies (both direct and transitive), each of which is in an implicit position of trust. Here are some of the things we do to measure and mitigate upstream risk:

We use dependency management tools like Dependabot and Renovate to keep our dependencies updated, and to notify us when our dependencies contain known vulnerabilities.

In general, we employ cooldowns in conjunction with the above to avoid updating dependencies immediately after a new release, as this is when temporarily compromised dependencies are most likely to affect us.

Both Dependabot and Renovate support cooldowns, and uv also has built-in support. We've found Renovate's ability to configure cooldowns on a per-group basis to be particularly useful, as it allows us to relax the cooldown requirement for our own (first-party) dependencies while keeping it in place for most third-party dependencies.
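A minimal sketch of that kind of per-group rule in renovate.json; the seven-day window and the package pattern are illustrative assumptions, not Astral's actual policy:

```json
{
  "packageRules": [
    {
      "description": "Let third-party releases age before updating",
      "matchPackagePatterns": ["*"],
      "minimumReleaseAge": "7 days"
    },
    {
      "description": "No cooldown for our own first-party packages (assumed pattern)",
      "matchPackagePatterns": ["^(uv|ruff)$"],
      "minimumReleaseAge": "0 days"
    }
  ]
}
```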

We maintain social connections with many of our upstream dependencies, and we make both regular and security-focused contributions to them (including fixes to their own CI/CD and release processes). For example, here's a recent contribution we made to apache/opendal-reqsign to help them ratchet down their CI/CD security.

Separately, we maintain social connections with adjacent projects and working groups in the ecosystem, including the Python Packaging Authority and the Python Security Response Team. These connections have proven invaluable for sharing information, such as when a report against pip also affects uv (or vice versa), or when a security release for CPython will require a release of python-build-standalone.

We're conservative about adding new dependencies, and we look to eliminate dependencies where practical and minimally disruptive to our users. Over the coming release cycles, we hope to remove some dependencies related to support for rarely used compression schemes, as part of a larger effort to align ourselves with Python packaging standards.

More generally, we're also conservative about what our dependencies bring in: we try to avoid dependencies that introduce binary blobs, and we carefully review our dependencies' features to disable functionality that we don't need or desire.

Finally, we contribute financially (in the form of our OSS Fund) to the sustainability of projects that we depend on or that push the OSS ecosystem as a whole forwards.

Open source security is a hard problem, in part because it's really many problems (some technical, some social) masquerading as one. We've covered many of the techniques we use to tackle this problem, but this post is by no means an exhaustive list. It's also not a static list: attackers are dynamic participants in the security process, and defenses necessarily evolve in response to their changing techniques.

With that in mind, we'd like to recall some of the points mentioned above that deserve the most attention:

Respect the limits of CI/CD: it's extremely tempting to do everything in CI/CD, but there are some things that CI/CD (and particularly GitHub Actions) just can't do securely. For these things, it's often better to forgo them entirely, or isolate them outside of CI/CD with a GitHub App or similar.

With that said, it's important to not overcorrect and throw CI/CD away entirely: as mentioned above, CI/CD is a critical part of our security posture and probably yours too! It's unfortunate that securing GitHub Actions is so difficult, but we consider it worth the effort relative to the velocity and security risks that would come with not using hosted CI/CD at all.

In particular, we strongly recommend using CI/CD for release processes, rather than relying on local developer machines, particularly when those release processes can be secured with misuse- and disclosure-resistant credential schemes like Trusted Publishing.

Isolate and eliminate long-lived credentials: the single most common form of post-compromise spread is the abuse of long-lived credentials. Wherever possible, eliminate these credentials entirely (for example, with Trusted Publishing or other OIDC-based authentication mechanisms).

Where elimination isn't possible, isolate these credentials to the smallest possible scope: put them in specific deployment environments with additional activation requirements, and only issue credentials with the minimum necessary permissions to accomplish a given task.

Strengthen release processes: if you're on GitHub, use deployment environments, approvals, tag and branch rulesets, and immutable releases to reduce the degrees of freedom the attacker has in the event of an account takeover or repository compromise.

Maintain awareness of your dependencies: maintaining awareness of the overall health of your dependency tree is critical to understanding your own risk profile. Use both tools and elbow grease to keep your dependencies secure, and to help them keep their own processes and dependencies secure too.

Finally, we're still evaluating many of the techniques mentioned above, and will almost certainly be tweaking (and strengthening) them over the coming weeks and months as we learn more about their limitations and how they interact with our development processes. That's to say that this post represents a point in time, not the final word on how we think about security for our open source tools.

...

Read the original on astral.sh »

9 293 shares, 16 trendiness

The Importance of Being Idle

As I type these words, I worry over the day when I will no longer be commissioned to write them. The day, to be specific, that The American Scholar asks Claude (the moniker for Anthropic's AI) and not Robert (the name of Max and Roslyn Zaretsky's son) to create an essay on, say, AI and the future of work.

Not surprisingly, I am not alone in this worry: not many subjects stir greater fear and dread among Americans than the seemingly irresistible rise of AI. According to a recent Pew Research Center survey, 64 percent of the public believes that AI will translate into fewer jobs. Small wonder, then, that only 17 percent of the same respondents expect that AI, even when humanized by names like Claude, will make their future brighter.

Were he alive today, Paul Lafargue would be among that 17 percent, and his voice would be both loud and funny. Born in Cuba in 1842 to parents of mixed race—part Jewish and part Creole—Lafargue was married to Laura Marx, one of Karl Marx's four daughters. Even before this marriage, though, Lafargue, who had studied medicine in Paris, had thrown over a secure future as a doctor to devote (and pauperize) himself and his family to working on behalf of the shining (and classless) future glimpsed by his father-in-law.

Knocking out polemical and theoretical essays while striving to launch France's first workers' party, the Parti ouvrier français, Lafargue was a well-known figure on the radical left in fin-de-siècle Paris. Predictably, his activities also made him well-known to the French police, who repeatedly arrested him, including on one evening in 1883 when he was taking home a salad to his wife. (He managed to find a passerby to deliver the salad before the police hauled him away.)

Making wine from this bunch of grapes, Lafargue used his time behind bars at Saint Pélagie—a forbidding Parisian prison where many of the century's most notorious writers, artists, and thinkers found themselves from time to time—to draft his most famous work, Le Droit à la paresse, or The Right to Be Lazy, translated into English by Alex Andriesse. Though he dashed off this pamphlet nearly 150 years ago, Lafargue asked questions that remain most pertinent to our current anxieties over the future of work.

During Lafargue's own lifetime, the nature of work was undergoing a traumatic transformation. The seismic effect of the first and second industrial revolutions, as well as the quickening pace of globalization, proved an extinction event for traditional forms of production. "The gods and kings of the past," declared the historian Eric Hobsbawm, "were powerless before the businessmen and steam engines of the present." As factory workers and unskilled laborers replaced ateliers and artisans, the former struggled to organize themselves, a struggle into which Lafargue threw himself body and soul.

Or, perhaps, not his entire soul. His essay's title reveals a dramatic divergence of goals he and union leaders held. He bemoans the demand of workers for shorter workdays (which often lasted as long as 12 hours), insisting that curtailing work hours did not represent victory but defeat: "Shame on the proletariat, only slaves would have been capable of such baseness" to have sought such an outcome. On the contrary, he declaims, workers should oppose the very notion of work.

If you are puzzled, don't worry—so, too, were nearly all of Lafargue's contemporaries on the left. How could they not be? Here was a committed Marxist—and the great man's son-in-law, to boot—asserting that workers, rather than strike for the right to work, should instead protest for the right to be lazy. Machines, he believed, could become humanity's savior, "the god who will redeem man from the sordidae artes [manual labor] and give him leisure and liberty."

And yet, Lafargue exclaims, "the blind passion and perverse murderousness of work have transformed the machine from an instrument of emancipation into an instrument that enslaves free beings." The reason workers spend so many hours shackled to their machines, he contended, was not from economic necessity. Instead, it was imposed upon them by their superiors, the captains of industry and finance, who were wedded to the dogma of work and "diabolically drilled the vice of work into the heads of workers."

Of course, Lafargue never called for the eradication of work. The necessities of life, after all, would always require the labor of women and men to produce and provide. But he did press for the rationalization of work. Given the efficiency of machines, fewer hours were needed to provide the necessities of life. Maintaining the same excessive number of work hours inevitably flooded the market with superfluities and fed the era's repeated economic crises stretching from 1873 to the end of the century.

The dramatic reduction of time at work would be a boon not just to the well-being of the economy, Lafargue concluded, but also to the well-being of both workers and owners, who would have more time to … well, to do what?

Karl Marx had an answer of sorts, suggesting that we would "hunt in the morning, fish in the afternoon, rear cattle in the evening, criticism after dinner, just as I have a mind." But Lafargue instead conjured a Rabelaisian future in which former workers would eat and drink their fill on holidays while their former taskmasters would entertain them by performing parodies of their now defunct roles as generals and industrialists. Et le voilà, Lafargue concludes, in this world turned upside down, "social discord will vanish."

Though his tongue was firmly in cheek, Lafargue did imagine that these machines—perhaps the forerunners of the "machines of loving grace" invoked by Dario Amodei, the CEO of Anthropic—would lead us to a paradise we had lost. A paradise bathed in otium, the Latin word that can be translated as "idleness" as well as "laziness." When Lafargue praises la paresse, he means not the latter, but the former. He makes this clear by quoting, at the start of his essay, a line from Virgil's Eclogues that celebrates the pleasures of otium.

Although Lafargue does not flesh out his notion of a future filled with idleness, my guess is that he meant it would be devoted not to the pleasure of doing a particular hobby or specific activity, painting a landscape or swinging a golf club. Instead, it would be a life given over, quite simply, to the pleasure of faisant rien, or doing nothing. As the Czech playwright Karel Capek wrote in an essay called "In Praise of Idleness," this state is defined as "the absence of everything by which a person is occupied, diverted, distracted, interested, employed, annoyed, pleased, attracted, involved, entertained, bored, enchanted, fatigued, absorbed, or confused." In a word, idling is the sentiment of being.

But even idlers, try as they might, cannot ignore the passage of time. In 1911, a dozen years before Capek published his essay, Paul Lafargue and his wife committed suicide—he was 69; she was 66. His reason, it seems to me, dovetailed with his philosophy: "I am killing myself before pitiless old age, which gradually deprives me one by one of the pleasures and joys of existence." It might repay us to take a moment, not just from our jobs but also from our leisures, to make some to-do about doing nothing.

...

Read the original on theamericanscholar.org »

10 249 shares, 30 trendiness

How Pizza Tycoon simulated traffic on a 25 MHz CPU — Pizza Legacy Blog

I've been working on Pizza Legacy, an open-source reimplementation of the 1994 DOS game Pizza Tycoon. The game has a close-zoom street view of the cities, and when you scroll around it you can see a steady stream of cars driving through the streets. Maybe 20 or 30 tiny sprites at a time, but they navigate the road network, queue behind each other at intersections, and generally look like a living city. Yes, it was a bit buggy because sometimes they would drive through each other, but it was good enough to just give some sense of life to the map. All that on a 25 MHz 386 CPU.

The first thing I implemented in 2010 when I started this project was that close zoom level, but it took 14 years before I finally had the cars driving around on it in a way that I was happy about; I had multiple attempts over the years, but every time I ran into problems I got stuck building an overly complicated system that was hard to reason about and no fun to work on.

One attempt in 2017 involved each tile keeping track of which positions were occupied, and every car had to ask the grid for permission before moving, reserving and freeing slots as it went. It basically turned into a shared locking system just to move a few pixels, with cars and tiles constantly trying to stay in sync.

All the while I had this nagging thought in the back of my mind: the original ran this on a 25 MHz CPU, so why were my versions always so complicated?

Finally I went to the assembly (which I had spent many years slowly understanding better and documenting) to figure out what the original was doing, with the help of LLMs, which were (a couple of years ago) this new and exciting technology that could better understand assembly than I could.

Now that I finally have it working I can see where I went wrong: I went into it with a brain full of modern concepts: scene graphs, path finding, collision detection, and of course plenty of CPU to run it all!

First, let's look at what a city actually looks like:

As you can see there are two-lane roads, T-junctions, intersections, and corners. In Pizza Tycoon, maps are made up of a grid of 160 by 120 tiles, where each tile is one of the tiles from landsym.vga:

The original landsym.vga file with added borders between tiles and text to indicate the row and column offset. Byte 0x54 means column 5, row 4 (roof tile of a depot).

Back to the traffic; the key insight that makes it possible to run this system on such a slow CPU: cars don't need to know where they're going. Each road tile type carries its own direction. Road tile 0x16 is the bottom part of a horizontal road, meaning that cars can only drive from left to right on these roads. Similarly road tile 0x06 is just for right-to-left traffic, then 0x26 and 0x36 are the same but for vertical traffic.

This means the city is basically just a bunch of one-way roads; once a car knows which tile it sits on, it can keep going.

Corners work the same way: 0x56 (CORNER_SW in my enum) is the corner that allows the car to either keep going west, or turn south. When a car hits a corner it flips a coin: 50% chance of going straight on, 50% chance of taking the turn. The maps have been designed in such a way that the roads always make sense, which means that next to the CORNER_SW there is another tile that is either a south to north traffic tile (so we have to go south) or it's another edge tile that allows either a turn or straight on.

There is one extra rule to keep traffic looking natural: if you just took a left turn, the next corner forces you straight on; no two consecutive left turns.
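As a rough sketch in C++ (the tile values and CORNER_SW come from the post; everything else, including which vertical tile is which and the left-turn bookkeeping, is my guess rather than the game's actual logic):

```cpp
enum Direction { EAST, WEST, NORTH, SOUTH };

struct Car {
    Direction dir;
    bool just_turned_left; // set when the previous corner was a left turn
};

// Runs once per tile boundary: the tile itself tells the car what it may do.
Direction decide_desired_direction(Car &car, unsigned char tile, int coin_flip) {
    switch (tile) {
    case 0x16: return EAST;  // bottom lane of a horizontal road
    case 0x06: return WEST;  // top lane
    case 0x26: return NORTH; // vertical lanes; which is which is assumed here
    case 0x36: return SOUTH;
    case 0x56: // CORNER_SW: keep going west, or turn south (a left turn)
        if (car.just_turned_left || coin_flip == 0) {
            car.just_turned_left = false;
            return WEST;  // forced (or 50% chance) straight on
        }
        car.just_turned_left = true;
        return SOUTH;     // 50% chance: take the turn
    default:
        return car.dir;   // other tiles: keep the current heading
    }
}
```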

Valid directions per tile type indicated with arrows.

Cars move one pixel per frame. Each tick the main loop checks if a car is blocked, and if not, increments or decrements its screen coordinate by one depending on direction. East adds 1 to X. North subtracts 1 from Y.

There's a second progress counter, counting down from 16 to 1. When it hits zero it resets to 16 and the game runs the tile-boundary logic: look up the next tile, decide the new direction, update the sprite frame (to visually turn the car in the new direction). Since each tile is 16 pixels wide and tall, this runs exactly once per tile crossed. The per-pixel move happens every tick; the heavier tile logic runs only 1/16th as often.

When a car first spawns, progress is set to a random value between 1 and 16. That staggers all the cars so their tile-boundary checks don't all land on the same frame, spreading the work out evenly.
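Put together, the per-tick loop is only a few operations per car. This is an illustrative reconstruction, not the original assembly; is_blocked and enter_next_tile are hypothetical stand-ins for the collision check and tile-boundary logic described here:

```cpp
struct CarState {
    int x, y;     // screen position in pixels
    int dx, dy;   // unit step: EAST = (+1,0), NORTH = (0,-1), ...
    int progress; // 16..1 across a 16-pixel tile; random 1..16 at spawn
    int wait;     // >0 while stalled behind another car
};

bool is_blocked(const CarState &car); // pairwise check, sketched below
void enter_next_tile(CarState &car);  // routing + sprite update

void tick(CarState &car) {
    if (car.wait > 0) { car.wait--; return; } // stalled: just sit
    if (is_blocked(car)) { car.wait = 10; return; }

    car.x += car.dx; // the cheap per-pixel move, every tick
    car.y += car.dy;

    if (--car.progress == 0) { // tile boundary: once per 16 ticks
        car.progress = 16;
        enter_next_tile(car);
    }
}
```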

Unlike my various attempts at fancy collision detection, the original uses a straightforward pairwise check: for each car, walk the whole car list and ask "would these two overlap next tick?" If yes, set a wait counter of 10 ticks on the blocked car and move on to the next car.

But the collision detection code is written to bail out as fast as possible. The very first thing it does is extract the other car's direction; because roads are one-way, east and west never share a road, so an east car and a west car can never collide. That pair returns immediately, no coordinate reads at all. Same for east and south, west and north, and so on.

With, say, 25 cars in a typical city view there are 625 pairwise calls per frame. About half of those return in just a few CPU instructions on the direction check alone. Most of the rest fail the lane check (same-direction cars have to be on the same road, which is one equality comparison). The pairs that actually reach any coordinate arithmetic are usually single digits.

When a car does get blocked, the 10-tick wait creates natural traffic jams: cars bunch up, the front one eventually finds the way clear, the queue drains. There are some bugs in the system (especially when you let it run for a while and there are a lot of intersections), but given that the point of this is not to run an accurate driving simulation but just to show some movement on the screen, it works perfectly well and very efficiently. The collision detection system has some quirks; some combinations are never checked (e.g. an eastbound car is never checked against a southbound car), which might be the reason behind some of the bugginess.
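In code, the early-exit ordering looks roughly like this (my reconstruction, reusing CarState from the earlier sketch; the lane test and the final overlap helper are simplified stand-ins):

```cpp
// The rare, "expensive" test; only a handful of pairs reach it per frame.
bool would_overlap_next_tick(const CarState &a, const CarState &b);

bool may_collide(const CarState &a, const CarState &b) {
    // One-way roads: opposite directions never share a road, so these
    // pairs return after a couple of instructions, no coordinate reads.
    if (a.dx != 0 && a.dx == -b.dx) return false; // east vs west
    if (a.dy != 0 && a.dy == -b.dy) return false; // north vs south

    // Same direction: must be on the same lane (one equality comparison).
    if (a.dx == b.dx && a.dy == b.dy) {
        bool same_lane = (a.dx != 0) ? (a.y == b.y) : (a.x == b.x);
        if (!same_lane) return false;
    }

    // Only the surviving pairs pay for real coordinate arithmetic.
    return would_overlap_next_tick(a, b);
}

// Caller: if (may_collide(car, other)) car.wait = 10;
```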

When you enter the close-zoom view, the game scans all 132 tiles in the viewport (12 columns by 11 rows), and for each road tile it rolls against the district's traffic density to decide whether to spawn a car there, so higher-traffic districts are busier. Corner tiles are excluded from spawn points, so cars only appear on straight road tiles.

Cars that drive off the edge of the screen are respawned as a new car in a random color, facing the other direction, on the tile going the other direction. This means that the game doesn't have to worry about respawning cars beyond this: every time one car drives off going east, it spawns a new car below going west, and so on.

Pay attention to the cars driving off the map at the edges; notice they are replaced by cars driving the opposite direction.

When you scroll, the newly exposed strip of tiles gets the same treatment, with a chance of cars being spawned on them.

Looking back at my failed attempts, I was designing for problems that the original just didn't consider. Cars don't need pathfinding because the map tells them where they can go. Collision detection was cheap because the early-exit logic makes most pairs basically free. There's no velocity or physics because 1 pixel per tick is enough to look convincing. When you're about to hit something, just pause for 10 ticks; and when you have to make a turn, you just travel half the width of the tile and then make your turn, which works on every tile in any direction.

I reimplemented it following the assembly pretty closely, so it's just a couple of switch statements with different routing options per tile type; you can see the decide_desired_direction method in Car.cpp.

...

Read the original on pizzalegacy.nl »
