10 interesting stories served every morning and every evening.




1 1,175 shares, 70 trendiness

Little Snitch for Linux

Every time an ap­pli­ca­tion on your com­puter opens a net­work con­nec­tion, it does so qui­etly, with­out ask­ing. Little Snitch for Linux makes that ac­tiv­ity vis­i­ble and gives you the op­tion to do some­thing about it. You can see ex­actly which ap­pli­ca­tions are talk­ing to which servers, block the ones you did­n’t in­vite, and keep an eye on traf­fic his­tory and data vol­umes over time.

Once installed, open the user interface by running littlesnitch in a terminal, or go straight to http://localhost:3031/. You can bookmark that URL, or install it as a Progressive Web App. Any Chromium-based browser supports this natively, and Firefox users can do the same with the Progressive Web Apps extension.

The con­nec­tions view is where most of the ac­tion is. It lists cur­rent and past net­work ac­tiv­ity by ap­pli­ca­tion, shows you what’s be­ing blocked by your rules and block­lists, and tracks data vol­umes and traf­fic his­tory. Sorting by last ac­tiv­ity, data vol­ume, or name, and fil­ter­ing the list to what’s rel­e­vant, makes it easy to spot any­thing un­ex­pected. Blocking a con­nec­tion takes a sin­gle click.

The traf­fic di­a­gram at the bot­tom shows data vol­ume over time. You can drag to se­lect a time range, which zooms in and fil­ters the con­nec­tion list to show only ac­tiv­ity from that pe­riod.

Blocklists let you cut off whole categories of unwanted traffic at once. Little Snitch downloads them from remote sources and keeps them current automatically. It accepts lists in several common formats: one domain per line, one hostname per line, /etc/hosts style (IP address followed by hostname), and CIDR network ranges. Wildcard formats, regex or glob patterns, and URL-based formats are not supported. When you have a choice, prefer domain-based lists over host-based ones; they're handled more efficiently. Well-known sources include Hagezi, Peter Lowe, Steven Black, and oisd.nl, just to give you a starting point.
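To make the accepted formats concrete, here is what entries in each style look like (all domains and addresses below are placeholders):

```text
# Domain-based list (preferred): one domain per line
ads.example.com
tracker.example.net

# Hostname list: one hostname per line
metrics.example.org

# /etc/hosts style: IP address followed by hostname
0.0.0.0 telemetry.example.com

# CIDR network range
203.0.113.0/24
```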

One thing to be aware of: the .lsrules for­mat from Little Snitch on ma­cOS is not com­pat­i­ble with the Linux ver­sion.

Blocklists work at the do­main level, but rules let you go fur­ther. A rule can tar­get a spe­cific process, match par­tic­u­lar ports or pro­to­cols, and be as broad or nar­row as you need. The rules view lets you sort and fil­ter them so you can stay on top of things as the list grows.

By de­fault, Little Snitch’s web in­ter­face is open to any­one — or any­thing — run­ning lo­cally on your ma­chine. A mis­be­hav­ing or ma­li­cious ap­pli­ca­tion could, in prin­ci­ple, add and re­move rules, tam­per with block­lists, or turn the fil­ter off en­tirely.

If that con­cerns you, Little Snitch can be con­fig­ured to re­quire au­then­ti­ca­tion. See the Advanced con­fig­u­ra­tion sec­tion be­low for de­tails.

Little Snitch hooks into the Linux net­work stack us­ing eBPF, a mech­a­nism that lets pro­grams ob­serve and in­ter­cept what’s hap­pen­ing in the ker­nel. An eBPF pro­gram watches out­go­ing con­nec­tions and feeds data to a dae­mon, which tracks sta­tis­tics, pre­con­di­tions your rules, and serves the web UI.

The source code for the eBPF pro­gram and the web UI is on GitHub.

The UI deliberately exposes only the most common settings. Anything more technical can be configured through plain text files, which take effect after restarting the littlesnitch daemon.

The de­fault con­fig­u­ra­tion lives in /var/lib/littlesnitch/config/. Don’t edit those files di­rectly — copy whichever one you want to change into /var/lib/littlesnitch/overrides/config/ and edit it there. Little Snitch will al­ways pre­fer the over­ride.

The files you’re most likely to care about:

web_ui.toml — network address, port, TLS, and authentication. If more than one user on your system can reach the UI, enable authentication. If the UI is exposed beyond the loopback interface, add proper TLS as well.
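As a rough sketch of what such an override could contain — the key names below are hypothetical, so copy the shipped default from /var/lib/littlesnitch/config/ and edit the real keys it contains:

```toml
# /var/lib/littlesnitch/overrides/config/web_ui.toml
# NOTE: key names here are illustrative, not the actual schema.

[web_ui]
listen_address = "127.0.0.1"   # keep on loopback unless TLS is set up
port = 3031
authentication = true          # recommended on multi-user systems
```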

main.toml — what to do when a con­nec­tion matches noth­ing. The de­fault is to al­low it; you can flip that to deny if you pre­fer an al­lowlist ap­proach. But be care­ful! It’s easy to lock your­self out of the com­puter!

executables.toml — a set of heuristics for grouping applications sensibly. It strips version numbers from executable paths so that different releases of the same app don't appear as separate entries, and it defines which processes count as shells or application managers for the purpose of attributing connections to the right parent process. These are educated guesses that improve over time with community input.

Both the eBPF pro­gram and the web UI can be swapped out for your own builds if you want to go that far. Source code for both is on GitHub. Again, Little Snitch prefers the ver­sion in over­rides.

Little Snitch for Linux is built for pri­vacy, not se­cu­rity, and that dis­tinc­tion mat­ters. The ma­cOS ver­sion can make stronger guar­an­tees be­cause it can have more com­plex­ity. On Linux, the foun­da­tion is eBPF, which is pow­er­ful but bounded: it has strict lim­its on stor­age size and pro­gram com­plex­ity. Under heavy traf­fic, cache ta­bles can over­flow, which makes it im­pos­si­ble to re­li­ably tie every net­work packet to a process or a DNS name. And re­con­struct­ing which host­name was orig­i­nally looked up for a given IP ad­dress re­quires heuris­tics rather than cer­tainty. The ma­cOS ver­sion uses deep packet in­spec­tion to do this more re­li­ably. That’s not an op­tion here.

For keep­ing tabs on what your soft­ware is up to and block­ing le­git­i­mate soft­ware from phon­ing home, Little Snitch for Linux works well. For hard­en­ing a sys­tem against a de­ter­mined ad­ver­sary, it’s not the right tool.

Little Snitch for Linux has three components. The eBPF kernel program and the web UI are both released under the GNU General Public License version 2 and available on GitHub. The daemon (littlesnitch --daemon) is proprietary, but free to use and redistribute.

...

Read the original on obdev.at »

2 434 shares, 18 trendiness

Is Hormuz Open Yet?

...

Read the original on www.ishormuzopenyet.com »

3 400 shares, 15 trendiness

I’ve been waiting over a month for Anthropic support to respond to my billing issue

In early March, I noticed approximately $180 in unexpected charges to my Anthropic account. I'm a Claude Max subscriber, and between March 3 and 5, I received 16 separate "Extra Usage" invoices ranging from $10-$13 each, all in quick succession. However, I wasn't using Claude. I was away from my laptop entirely, out sailing with my parents back home in San Diego.

When I checked my usage dashboard, it showed my session at 100% despite no activity. My Claude Code session history showed two tiny sessions from March 5 totaling under 7KB (no sessions on March 3 or March 4). Nothing that would explain $180 in Extra Usage charges.

This is­n’t just me. Other Max plan users have re­ported the same is­sue. There are nu­mer­ous GitHub is­sues about it (e.g. claude-code#29289 and claude-code#24727), and posts on r/​Claude­Code de­scrib­ing the ex­act same be­hav­ior: us­age me­ters show­ing in­cor­rect val­ues and Extra Usage charges pil­ing up er­ro­neously.

On March 7, I sent a detailed email to Anthropic support laying out the situation with all the evidence above. Within two minutes, I received a response… from "Fin AI Agent, Anthropic's AI Agent." The AI agent told me to go through an in-app refund request flow. Sadly, this refund pipeline is only applicable to subscriptions, not to Extra Usage charges. I also wanted to confirm with a human exactly what went wrong rather than just getting a refund and calling it a day.

So, nat­u­rally, I replied ask­ing to speak to a hu­man. The re­sponse:

Thank you for reach­ing out to Anthropic Support. We’ve re­ceived your re­quest for as­sis­tance.

While we re­view your re­quest, you can visit our Help Center and API doc­u­men­ta­tion for self-ser­vice trou­bleshoot­ing. A mem­ber of our team will be with you as soon as we can.

That was March 7. I fol­lowed up on March 17. No re­sponse. I fol­lowed up again on March 25. No re­sponse. I fol­lowed up again to­day, April 8, over a month later. Still noth­ing.

Anthropic is an AI com­pany that builds one of the most ca­pa­ble AI as­sis­tants in the world. Their sup­port sys­tem is a Fin AI chat­bot that can’t ac­tu­ally help you, and there is seem­ingly no hu­man be­hind it. I don’t have a prob­lem with AI-assisted sup­port, though I do have a prob­lem with AI-only sup­port that serves as a wall be­tween cus­tomers and any­one who can ac­tu­ally re­solve their is­sue.

...

Read the original on nickvecchioni.github.io »

4 381 shares, 16 trendiness

USB for Software Developers

Say you’re be­ing handed a USB de­vice and told to write a dri­ver for it. Seems like a daunt­ing task at first, right? Writing dri­vers means you have to write Kernel code, and writ­ing Kernel code is hard, low level, hard to de­bug and so on.

None of this is ac­tu­ally true though. Writing a dri­ver for a USB de­vice is ac­tu­ally not much more dif­fi­cult than writ­ing an ap­pli­ca­tion that uses Sockets.

This post aims to be a high level in­tro­duc­tion to us­ing USB for peo­ple who may not have worked with Hardware too much yet and just want to use the tech­nol­ogy. There are amaz­ing re­sources out there such as USB in a NutShell that go into a lot of de­tail about how USB pre­cisely works (check them out if you want more in­for­ma­tion), they are how­ever not re­ally ap­proach­able for some­body who has never worked with USB be­fore and does­n’t have a cer­tain back­ground in Hardware. You don’t need to be an Embedded Systems Engineer to use USB the same way you don’t need to be a Network Specialist to use Sockets and the Internet.

The device we'll be using is an Android phone in Bootloader mode. The reason for this is that:

* It’s a de­vice you can eas­ily get your hands on

* The pro­to­col it uses is well doc­u­mented and in­cred­i­bly sim­ple

* Drivers for it are gen­er­ally not pre-in­stalled on your sys­tem so the OS will not in­ter­fere with our ex­per­i­ments

Getting the phone into Bootloader mode is different for every device, but usually involves holding down a combination of buttons while the phone is starting up. In my case it's holding the volume down button while powering on the phone.

Enumeration refers to the process of the host asking the device for information about itself. This happens automatically when you plug in the device, and it's where the OS normally decides which driver to load for the device. For most standard devices, the OS will look at the USB Device Class and load a driver that supports that class. For vendor-specific devices, you generally install a driver made by the manufacturer, which will look at the VID (Vendor ID) and PID (Product ID) instead to detect whether or not it should handle the device.

Even with­out a dri­ver, plug­ging the phone into your com­puter will still make it get rec­og­nized as a USB de­vice. That’s be­cause the USB spec­i­fi­ca­tion de­fines a stan­dard way for de­vices to iden­tify them­selves to the host, more on how that ex­actly works in a bit though.

On Linux, we can use the handy lsusb tool to see what the de­vice iden­ti­fied it­self as:
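The output itself didn't survive extraction here; based on the bus, device, and ID values discussed in the following paragraphs, it would have been a line along these lines:

```text
Bus 008 Device 014: ID 18d1:4ee0 Google Inc.
```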

Bus and Device are just iden­ti­fiers for the phys­i­cal USB port the de­vice is plugged into. They will most likely dif­fer on your sys­tem since they de­pend on which port you plugged the de­vice into.

ID is the most in­ter­est­ing part here. The first part 18d1 is the Vendor ID (VID) and the sec­ond part 4ee0 is the Product ID (PID). These are iden­ti­fiers that the de­vice sends to the host to iden­tify it­self. The VID is as­signed by the USB-IF to com­pa­nies that pay them a lot of money, in this case Google, and the PID is as­signed by the com­pany to a spe­cific prod­uct, in this case the Nexus/Pixel Bootloader.

Using the lsusb -t com­mand we can also see the de­vice’s USB class and what dri­ver is cur­rently han­dling it:

This shows the entire tree of USB devices connected to the system. The bottommost one in this part of the tree is our device (Bus 008, Device 014 as reported in the previous command). The Class=Vendor Specific Class part specifies that the device does not use any of the standard USB classes (e.g. HID, Mass Storage or Audio) but instead uses a custom protocol defined by the manufacturer. The Driver=[none] part simply tells us that the OS didn't load a driver for the device, which is good for us since we want to write our own.

We will also go af­ter the VID and PID since they are the only real iden­ti­fy­ing in­for­ma­tion we have. The Device Class is not very use­ful for it here since it’s just Vendor Specific Class which any man­u­fac­turer can use for any de­vice. Instead of do­ing all of this in the Kernel though, we can write a Userspace ap­pli­ca­tion that does the same thing. This is much eas­ier to write and de­bug (and is ar­guably the cor­rect place for dri­vers to live any­way but that’s a dif­fer­ent topic). To do this, we can use the libusb li­brary which pro­vides a sim­ple API for com­mu­ni­cat­ing with USB de­vices from Userspace. It achieves this by pro­vid­ing a generic dri­ver that can be loaded for any de­vice and then pro­vides a way for Userspace ap­pli­ca­tions to claim the de­vice and talk to it di­rectly.

The same thing we just did man­u­ally can also be done in soft­ware though. The fol­low­ing pro­gram ini­tial­izes libusb, reg­is­ters a hot­plug event han­dler for de­vices match­ing the 18d1:4ee0 VendorId / ProductId com­bi­na­tion and then waits for that de­vice to be plugged into the host.
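The article's program didn't survive extraction, but a minimal sketch of what it describes might look like this (error handling trimmed; requires libusb-1.0 with hotplug support):

```c
#include <stdio.h>
#include <libusb-1.0/libusb.h>

/* Called by libusb when a matching device (18d1:4ee0) is plugged in. */
static int hotplug_cb(libusb_context *ctx, libusb_device *dev,
                      libusb_hotplug_event event, void *user_data) {
    (void)ctx; (void)dev; (void)event; (void)user_data;
    printf("Device 18d1:4ee0 arrived!\n");
    return 0; /* keep the callback registered */
}

int main(void) {
    libusb_context *ctx;
    if (libusb_init(&ctx) != 0)
        return 1;

    /* Register for arrival events of VID 0x18d1 / PID 0x4ee0. */
    libusb_hotplug_callback_handle handle;
    int rc = libusb_hotplug_register_callback(
        ctx, LIBUSB_HOTPLUG_EVENT_DEVICE_ARRIVED, 0,
        0x18d1, 0x4ee0, LIBUSB_HOTPLUG_MATCH_ANY,
        hotplug_cb, NULL, &handle);
    if (rc != LIBUSB_SUCCESS) {
        libusb_exit(ctx);
        return 1;
    }

    /* Block and let libusb dispatch events (Ctrl-C to quit). */
    while (1)
        libusb_handle_events_completed(ctx, NULL);
}
```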

If you com­pile and run this, plug­ging in the de­vice should re­sult in the fol­low­ing out­put:

Congrats! You have a pro­gram now that can de­tect your de­vice with­out ever hav­ing to touch any Kernel code at all.

Next step, get­ting any an­swer from the de­vice. The eas­i­est way to do that for now is by us­ing the stan­dard­ized Control end­point. This end­point is al­ways on ID 0x00 and has a stan­dard­ized pro­to­col. This end­point is also what the OS pre­vi­ously used to iden­tify the de­vice and get its VID:PID.

The way we use this end­point is with yet an­other libusb func­tion that’s made specif­i­cally to send re­quests to that end­point. So we can ex­tend our hot­plug event han­dler us­ing the fol­low­ing code:
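The original snippet isn't reproduced here, but the call inside the hotplug handler could look roughly like this, assuming the matched device was opened with libusb_open() into `handle`:

```c
/* Sketch: issue a standard GET_STATUS request on the Control endpoint. */
unsigned char status[2];
int n = libusb_control_transfer(
    handle,
    LIBUSB_ENDPOINT_IN | LIBUSB_REQUEST_TYPE_STANDARD | LIBUSB_RECIPIENT_DEVICE,
    LIBUSB_REQUEST_GET_STATUS,
    0, 0,                 /* wValue, wIndex */
    status, sizeof status,
    1000 /* ms timeout */);
if (n == 2) {
    printf("Self-Powered: %d, Remote Wakeup: %d\n",
           status[0] & 0x01,          /* bit 0 of the status word */
           (status[0] >> 1) & 0x01);  /* bit 1 */
}
```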

This code will now send a GET_STATUS request to the device as soon as it's plugged in and print out the data it sends back to the console.

Those bytes came from the device itself! Decoding them using the specification: the two bytes form a little-endian status word. Bit 0 tells us whether the device is Self-Powered (1 means it is, which makes sense, the device has a battery) and bit 1 tells us it does not support Remote Wakeup (meaning it cannot wake up the host).

There are a few more stan­dard­ized re­quest types (and some de­vices even add their own for sim­ple things!) but the main one we (and the OS too) are in­ter­ested in is the GET_DESCRIPTOR re­quest.

Descriptors are bi­nary struc­tures that are gen­er­ally hard­coded into the firmware of a USB de­vice. They are what tells the host ex­actly what the de­vice is, what it’s ca­pa­ble of and what dri­ver it would like the OS to load. So when you plug in a de­vice, the host sim­ply sends mul­ti­ple GET_DESCRIPTOR re­quests to the stan­dard­ized Control Endpoint at ID 0x00 to get back a struct that gives it all the in­for­ma­tion it needs for enu­mer­a­tion. And the cool thing is, we can do that too!

Instead of a GET_STATUS re­quest, we now send a GET_DESCRIPTOR re­quest:
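With libusb this can be as simple as the convenience wrapper for descriptor reads (again a sketch; 18 bytes is the size of a device descriptor per the spec):

```c
/* Sketch: fetch the 18-byte Device Descriptor over the Control endpoint. */
unsigned char desc[18];
int n = libusb_get_descriptor(handle, LIBUSB_DT_DEVICE, 0,
                              desc, sizeof desc);
if (n == (int)sizeof desc)
    printf("Got a full device descriptor (%d bytes)\n", n);
```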

This now in­stead re­turns the fol­low­ing data:

Now to decode this data, we need to look at the USB specification, Chapter 9.6.1 Device. There we can find that the format looks as follows:

Throwing the data into ImHex and giv­ing its Pattern Language this struc­ture de­f­i­n­i­tion yields the fol­low­ing re­sult:

And there we have it! id­Ven­dor and id­Prod­uct cor­re­spond to the val­ues we found pre­vi­ously us­ing lsusb.

There’s more than just the de­vice de­scrip­tor though. There’s also Configuration, Interface, Endpoint, String and a cou­ple of other de­scrip­tors. These can all be read us­ing the same GET_DESCRIPTOR re­quest on the con­trol end­point. We could still do this all by hand but luck­ily for us, lsusb has an op­tion that can do that for us al­ready!

This output shows us a few more of the descriptors the device has. Specifically, it has a single Configuration Descriptor that contains an Interface Descriptor for the Android Fastboot interface. And that interface now contains two Endpoints. This is where the device tells the host about all the other endpoints, besides the Control endpoint, and these will be the ones we'll be using in the next step to actually, finally, send data to the device's Fastboot interface!

Let's talk a bit more about endpoints first though. We already learned about the Control endpoint on address 0x00. Endpoints are basically the equivalent of ports that a device on the network opened for us to send data back and forth. The device specifies in its descriptor which kinds of endpoints it has and then services these in its firmware. So we don't even need to do port scanning or know that SSH usually runs on port 22; we have a nice way of finding out what interfaces the device has, what language they speak, and how we can speak to them. Looking at the descriptors above, that control descriptor is not there though. Instead, there are two others with different types.

There's exactly one per device and it's always fixed at Endpoint Address 0x00. It's used to do initial configuration and to request information about the device.

The main pur­pose of the Control end­point is to solve the chicken-and-egg prob­lem where you could­n’t com­mu­ni­cate with a de­vice with­out know­ing its end­points but to know its end­points you’d need to com­mu­ni­cate with it. That’s also why it does­n’t even ap­pear in the de­scrip­tors. It’s not part of any in­ter­face but the de­vice it­self. And we know about its ex­is­tence thanks to the spec, with­out it hav­ing to be ad­ver­tised.

It's made for setting simple configuration values or requesting small amounts of data. The function in libusb doesn't even allow you to set the endpoint address when making a control request, because there's only ever one control endpoint and it's always on address 0x00.

Bulk Endpoints are what’s used when you want to trans­fer larger amounts of data. They’re used when you have large amounts of non-time-sen­si­tive data that you just want to send over the wire.

This is what’s used for things like the Mass Storage Class, CDC-ACM (Serial Port over USB) and RNDIS (Ethernet over USB).

One de­tail: Data sent over Bulk end­points is high band­width but low pri­or­ity. This means, Bulk data will al­ways just fill up the re­main­ing band­width. Any Interrupt and Isochronous trans­fers (further de­tail be­low) have a higher pri­or­ity so if you’re send­ing both Bulk and Isochronous data over the same con­nec­tion, the band­width of the Bulk trans­mis­sion will be low­ered un­til the Isochronous one can trans­mit its data in the re­quested time­frame.

Interrupt Endpoints are the opposite of Bulk Endpoints. They allow you to send small amounts of data with very low latency. For example, Keyboards and Mice use this transfer type under the HID Class to poll for button presses 1000+ times per second. If no button was pressed, the transfer fails immediately without sending back a full failure message (only a NAK); only when something actually changed do you get back a description of what happened.

The im­por­tant fact here is, even though these are called in­ter­rupt end­points, there’s no in­ter­rupts hap­pen­ing. The Device still does not talk to the Host with­out be­ing asked. The Host just polls so fre­quently that it acts as if it’s an in­ter­rupt.

The func­tions in libusb that han­dle in­ter­rupt trans­fers also ab­stract this be­hav­iour away fur­ther. You can start an in­ter­rupt trans­fer and the func­tion will block un­til the de­vice sends back a full re­sponse.

Isochronous Endpoints are somewhat special. They're used for bigger amounts of data that are really timing critical. They're mainly used for streaming interfaces such as Audio or Video, where any latency or delay will be immediately noticeable through stuttering or desyncs. In libusb, these work asynchronously: you can set up multiple transfers at once, they will be queued, and you'll get back an event once data has arrived so you can process it and queue further requests.

This type is gen­er­ally not used very of­ten out­side of the Audio and Video classes.

Besides the Transfer Type, end­points also have a di­rec­tion. Keep in mind, USB is a full mas­ter-slave ori­ented in­ter­face. The Host is the only one ever mak­ing any re­quests and the Device will never an­swer un­less ad­dressed by the Host. This means, the de­vice can­not ac­tu­ally send any data di­rectly to the Host. Instead the Host needs to ask the Device to please send the data over.

This is what the di­rec­tion is for.

* IN end­points are for when the Host wants to re­ceive some data. It makes a re­quest on an IN end­point and waits for the de­vice to re­spond back with the data.

* OUT end­points are for when the Host wants to trans­mit some data. It makes a re­quest on an OUT end­point and then im­me­di­ately trans­fers the data it wants to send over. The Device in this case only ac­knowl­edges (ACK) that it re­ceived the data but won’t send any ad­di­tional data back.

Contrary to the trans­fer type, the di­rec­tion is en­coded in the end­point ad­dress in­stead. If the top­most bit (MSB) is set to 1, it’s an IN end­point, if it’s set to 0 it’s an OUT end­point. (If you’re into Hardware, you might rec­og­nize this same con­cept from the I2C in­ter­face.)

* You can have a maximum of custom endpoints available at once

  * because we have 7 bits available for addresses

  * because we always have the control endpoint that's on the fixed address 0x00.

* Endpoints are entirely unidirectional. Either you're using an endpoint to request data or to transmit data; it cannot do both at once.

  * That's also the reason why our Fastboot interface has two Bulk endpoints: one is dedicated to listening to requests the Host sends over and the other one is for responding to those same requests

Now that we have all this information about USB, let's look into the Fastboot protocol. The best documentation for this is the u-boot source code along with its documentation.

According to the doc­u­men­ta­tion, the pro­to­col re­ally is in­cred­i­bly sim­ple. The Host sends a string com­mand and the de­vice re­sponds with a 4 char­ac­ter sta­tus code fol­lowed by some data.

Let’s up­date our code to do just that then:
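The updated code isn't reproduced here, but the Bulk-endpoint exchange it describes might look roughly like this. The endpoint addresses 0x01 and 0x81 and interface number 0 are assumptions for illustration; the real values come from the Endpoint and Interface Descriptors we read earlier:

```c
/* Sketch: send one Fastboot command and read the reply over Bulk endpoints. */
unsigned char cmd[] = "getvar:version";
unsigned char resp[64];
int sent, received;

libusb_claim_interface(handle, 0);       /* interface number: assumed 0 */
libusb_bulk_transfer(handle, 0x01,       /* OUT endpoint (assumed address) */
                     cmd, sizeof cmd - 1, &sent, 1000);
libusb_bulk_transfer(handle, 0x81,       /* IN endpoint (assumed address) */
                     resp, sizeof resp, &received, 1000);
printf("%.*s\n", received, resp);        /* status code plus payload */
```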

Plugging the device in now prints the following message to the terminal:

That seems to match the doc­u­men­ta­tion!

The first 4 bytes are OKAY, specifying that the request was executed successfully. The rest of the data after that is 0.4, which corresponds to the implemented Fastboot version in the documentation: v0.4.

And that’s it! You suc­cess­fully made your first USB dri­ver from scratch with­out ever touch­ing the Kernel.

All these same prin­ci­ples ap­ply to all USB dri­vers out there. The un­der­ly­ing pro­to­col may be sig­nif­i­cantly more com­plex than the fast­boot pro­to­col (I was pulling my hair out be­fore over the atroc­ity that the MTP pro­to­col is) but every­thing around it stays iden­ti­cal. Not much more com­plex than TCP over sock­ets, is it? :)

...

Read the original on werwolv.net »

5 368 shares, 46 trendiness

Help Keep Thunderbird Alive!

All of the work we do is funded by less than 3% of our users.

We never show ad­ver­tise­ments or sell your data. We don’t have cor­po­rate fund­ing. We are fully funded by fi­nan­cial con­tri­bu­tions from our users.

Thunderbird’s mis­sion is to give you the best pri­vacy-re­spect­ing, cus­tomiz­able email ex­pe­ri­ence pos­si­ble. Free for every­one to in­stall and en­joy! Maintaining ex­pen­sive servers, fix­ing bugs, de­vel­op­ing new fea­tures, and hir­ing tal­ented en­gi­neers are cru­cial for this mis­sion.

If you get value from us­ing Thunderbird, please help sup­port it. We can’t do this with­out you.

...

Read the original on updates.thunderbird.net »

6 357 shares, 16 trendiness

John Deere to Pay $99 Million in Monumental Right-to-Repair Settlement

Farmers have been fight­ing John Deere for years over the right to re­pair their equip­ment, and this week, they fi­nally reached a land­mark set­tle­ment.

While the agri­cul­tural man­u­fac­tur­ing gi­ant pointed out in a state­ment that this is no ad­mis­sion of wrong­do­ing, it agreed to pay $99 mil­lion into a fund for farms and in­di­vid­u­als who par­tic­i­pated in a class ac­tion law­suit. Specifically, that money is avail­able to those in­volved who paid John Deere’s au­tho­rized deal­ers for large equip­ment re­pairs from January 2018. This means that plain­tiffs will re­cover some­where be­tween 26% and 53% of over­charge dam­ages, ac­cord­ing to one of the court doc­u­ments—far be­yond the typ­i­cal amount, which lands be­tween 5% and 15%.

The set­tle­ment also in­cludes an agree­ment by Deere to pro­vide the dig­i­tal tools ​required for the main­te­nance, di­ag­no­sis, and re­pair” of trac­tors, com­bines, and other ma­chin­ery for 10 years. That part is cru­cial, as farm­ers pre­vi­ously re­sorted to hack­ing their own equip­men­t’s soft­ware just to get it up and run­ning again. John Deere signed a mem­o­ran­dum of un­der­stand­ing in 2023 that par­tially ad­dressed those con­cerns, pro­vid­ing third par­ties with the tech­nol­ogy to di­ag­nose and re­pair, as long as its in­tel­lec­tual prop­erty was safe­guarded. Monday’s set­tle­ment seems to rep­re­sent a much stronger (and legally bind­ing) step for­ward.

Ripple ef­fects of this bat­tle have been felt far be­yond the sales floors at John Deere deal­ers, as the price of used equip­ment sky­rock­eted in re­sponse to the in­fa­mous ser­vice dif­fi­cul­ties. Even when the cost of older trac­tors dou­bled, farm­ers rea­soned that they were still worth it be­cause re­pairs were sim­pler and down­time was min­i­mized. $60,000 for a 40-year-old ma­chine be­came the norm.

A judge’s ap­proval of the set­tle­ment is still re­quired, though it seems likely. Still, John Deere is­n’t out of the woods yet. It still faces an­other law­suit from the United States Federal Trade Commission, in which the gov­ern­ment or­ga­ni­za­tion ac­cuses Deere of harm­fully lock­ing down the re­pair process.

It’s dif­fi­cult to over­state the sig­nif­i­cance of this right-to-re­pair fight. While it has ob­vi­ous im­pli­ca­tions for the ag in­dus­try, oth­ers like the au­to­mo­tive and even home ap­pli­ance sec­tors are look­ing on. Any court rul­ing that might for­mally con­demn John Deere of wrong­do­ing may set a prece­dent for oth­ers to fol­low. At a time when man­u­fac­tur­ers want more and more con­trol of their prod­ucts af­ter the point of sale, every lit­tle up­date feels in­cred­i­bly high-stakes.

Got a tip or ques­tion for the au­thor? Contact them di­rectly: caleb@thedrive.com

...

Read the original on www.thedrive.com »

7 331 shares, 39 trendiness

Claude mixes up who said what, and that's not OK

And that’s not OK. This bug is cat­e­gor­i­cally dis­tinct from hal­lu­ci­na­tions or miss­ing per­mis­sion bound­aries.

Claude some­times sends mes­sages to it­self and then thinks those mes­sages came from the user. This is the worst bug I’ve seen from an LLM provider, but peo­ple al­ways mis­un­der­stand what’s hap­pen­ing and blame LLMs, hal­lu­ci­na­tions, or lack of per­mis­sion bound­aries. Those are re­lated is­sues, but this who said what’ bug is cat­e­gor­i­cally dis­tinct.

I wrote about this in de­tail in The worst bug I’ve seen so far in Claude Code, where I showed two ex­am­ples of Claude giv­ing it­self in­struc­tions and then be­liev­ing those in­struc­tions came from me.

Claude told it­self my ty­pos were in­ten­tional and de­ployed any­way, then in­sisted I was the one who said it.

It’s not just me

Here's a Reddit thread where Claude said "Tear down the H100 too", and then claimed that the user had given that instruction.

From r/​An­thropic — Claude gives it­self a de­struc­tive in­struc­tion and blames the user.

You should­n’t give it that much ac­cess”

Comments on my previous post were things like "It should help you use more discipline in your DevOps." And on the Reddit thread, many in the class of "don't give it nearly this much access to a production environment, especially if there's data you want to keep."

This isn't the point. Yes, of course AI has risks and can behave unpredictably, but after using it for months you get a "feel" for what kind of mistakes it makes, when to watch it more closely, and when to give it more permissions or a longer leash.

This class of bug seems to be in the harness, not in the model itself. It's somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that "No, you said that."

Before, I thought it was a tem­po­rary thing — I saw it a few times in a sin­gle day, and then not again for months. But ei­ther they have a re­gres­sion or it was a co­in­ci­dence and it just pops up every so of­ten, and peo­ple only no­tice when it gives it­self per­mis­sion to do some­thing bad.

This ar­ti­cle reached #1 on Hacker News, and it seems that this is def­i­nitely a wide­spread is­sue. Here’s an­other su­per clear ex­am­ple shared by nathell (full tran­script).

From nathell — Claude asks it­self Shall I com­mit this progress?” and treats it as user ap­proval.

Several people questioned whether this is actually a harness bug like I assumed, as people have reported similar issues using other interfaces and models, including chatgpt.com. One pattern does seem to be that it happens in the so-called "Dumb Zone" once a conversation starts approaching the limits of the context window.

...

Read the original on dwyer.co.za »

8 308 shares, 22 trendiness

Open source security at Astral

Astral builds tools that mil­lions of de­vel­op­ers around the world de­pend on and trust.

That trust in­cludes con­fi­dence in our se­cu­rity pos­ture: de­vel­op­ers rea­son­ably ex­pect that our tools (and the processes that build, test, and re­lease them) are se­cure. The rise of sup­ply chain at­tacks, typ­i­fied by the re­cent Trivy and LiteLLM hacks, has de­vel­op­ers ques­tion­ing whether they can trust their tools.

To that end, we want to share some of the tech­niques we use to se­cure our tools in the hope that they’re use­ful to:

Our users, who want to un­der­stand what we do to keep their sys­tems se­cure;

Other main­tain­ers, pro­jects, and com­pa­nies, who may ben­e­fit from some of the tech­niques we use;

Developers of CI/CD systems, so that projects do not need to follow non-obvious paths or avoid useful features to maintain secure and robust processes.

We sus­tain our de­vel­op­ment ve­loc­ity on Ruff, uv, and ty through ex­ten­sive CI/CD work­flows that run on GitHub Actions. Without these work­flows we would strug­gle to re­view, test, and re­lease our tools at the pace and to the de­gree of con­fi­dence that we de­mand. Our CI/CD work­flows are also a crit­i­cal part of our se­cu­rity pos­ture, in that they al­low us to keep crit­i­cal de­vel­op­ment and re­lease processes away from lo­cal de­vel­oper ma­chines and in­side of con­trolled, ob­serv­able en­vi­ron­ments.

GitHub Actions is a log­i­cal choice for us be­cause of its tight first-party in­te­gra­tion with GitHub, along with its ma­ture sup­port for con­trib­u­tor work­flows: any­body who wants to con­tribute can val­i­date that their pull re­quest is cor­rect with the same processes we use our­selves.

Unfortunately, there’s a flip­side to this: GitHub Actions has poor se­cu­rity de­faults, and se­cu­rity com­pro­mises like those of Ultralytics, tj-ac­tions, and Nx all be­gan with well-trod­den weak­nesses like pwn re­quests.

Here are some of the things we do to se­cure our CI/CD processes:

We forbid many of GitHub's most dangerous and insecure triggers, such as pull_request_target and workflow_run, across our entire GitHub organization. These triggers are almost impossible to use securely and attackers keep finding ways to abuse them, so we simply don't allow them.

Our experience with these triggers is that many projects think that they need them, but the overwhelming majority of their usages are better off being replaced with a less privileged trigger (such as pull_request) or removed entirely. For example, many projects use pull_request_target so that third-party contributor-triggered workflows can leave comments on PRs, but these use cases are often well served by job summaries or even just leaving the relevant information in the workflow's logs.
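As a sketch of the less privileged alternative, the hypothetical workflow below runs on the ordinary pull_request trigger and surfaces its results through a job summary rather than a PR comment, so it never needs a privileged token. The workflow name, step names, and commit SHA are all illustrative placeholders, not taken from Astral's repositories:

```yaml
# Illustrative workflow: report results on fork PRs without pull_request_target.
name: lint-report
on: pull_request    # fork PRs get a read-only token under this trigger

permissions: {}     # writing a job summary needs no token permissions at all

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      # Placeholder SHA — pin to the actual audited release commit.
      - uses: actions/checkout@0000000000000000000000000000000000000000 # vN (placeholder)
      - name: Publish results as a job summary
        run: |
          echo "## Lint results" >> "$GITHUB_STEP_SUMMARY"
          echo "See the logs above for details." >> "$GITHUB_STEP_SUMMARY"
```

The summary renders on the workflow run page, which is often all a contributor needs to see.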

Of course, there are some use cases that do re­quire these trig­gers, such as any­thing that does

re­ally need to leave com­ments on third-party is­sues or pull re­quests. In these in­stances we rec­om­mend leav­ing GitHub Actions en­tirely and us­ing a GitHub App (or web­hook) that lis­tens for the rel­e­vant events and acts in an in­de­pen­dent con­text. We cover this pat­tern in more de­tail un­der

Automations be­low.

We re­quire all ac­tions to be pinned to spe­cific com­mits (rather than tags or branches, which are mu­ta­ble). Additionally, we cross-check these com­mits to en­sure they match an ac­tual re­leased repos­i­tory state and are not im­pos­tor com­mits.

We do this in two ways: first with zizmor's unpinned-uses and impostor-commit audits, and again with GitHub's own “require actions to be pinned to a full-length commit SHA” policy. The former gives us a quick check that we can run locally (and prevents impostor commits), while the latter is a hard gate on workflow execution that actually ensures that all actions, including nested actions, are fully hash-pinned.
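For illustration (not taken from Astral's workflows), the difference between a mutable reference and a hash-pinned one looks like this; the 40-zero SHA is a stand-in for whatever commit a tool like pinact actually resolves:

```yaml
# Tag-pinned (mutable): whoever can move the v4 tag controls what runs here.
- uses: actions/cache@v4

# Hash-pinned (immutable): the action's contents are fixed to one commit.
# Placeholder SHA — the trailing comment records the human-readable version.
- uses: actions/cache@0000000000000000000000000000000000000000 # v4 (placeholder)
```

The trailing version comment is a common convention that lets automated updaters bump both the SHA and the comment together.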

Enabling the lat­ter is a non­triv­ial en­deavor, since it re­quires in­di­rect ac­tion us­ages (the ac­tions called by the ac­tions we call) to be hash-pinned as well. To achieve this, we co­or­di­nated with our down­streams (example) to land hash-pin­ning across our en­tire de­pen­dency graph.

Together, these checks in­crease our con­fi­dence in the re­pro­ducibil­ity and her­metic­ity of our work­flows, which in turn in­creases our con­fi­dence in their se­cu­rity (in the pres­ence of an at­tack­er’s abil­ity to com­pro­mise a de­pen­dent ac­tion).

However, while necessary, this isn't sufficient: hash-pinning ensures that the action's contents are immutable, but doesn't prevent those immutable contents from making mutable decisions (such as installing the latest version of a binary from a GitHub repository's releases). Neither GitHub nor third-party tools perform well at detecting these kinds of immutability gaps yet, so we currently rely on manual review of our action dependencies to detect this class of risks.

When man­ual re­view does iden­tify gaps, we work with our up­streams to close them. For ex­am­ple, for ac­tions that use na­tive bi­na­ries in­ter­nally, this is achieved by em­bed­ding a map­ping be­tween the down­load URL for the bi­nary and a cryp­to­graphic hash. This hash in turn be­comes part of the ac­tion’s im­mutable state. While this does­n’t en­sure that the bi­nary it­self is au­then­tic, it does en­sure that an at­tacker can­not ef­fec­tively tam­per with a mu­ta­ble pointer to the bi­nary (such as a non-im­mutable tag or re­lease).

We limit our work­flow and job per­mis­sions in mul­ti­ple places: we de­fault to read-only per­mis­sions at the or­ga­ni­za­tion level, and we ad­di­tion­ally start every work­flow with per­mis­sions: {} and only broaden be­yond that on a job-by-job ba­sis.

We iso­late our GitHub Actions se­crets, wher­ever pos­si­ble: in­stead of us­ing or­ga­ni­za­tion- or repos­i­tory-level se­crets, we use de­ploy­ment en­vi­ron­ments and en­vi­ron­ment-spe­cific se­crets. This al­lows us to fur­ther limit the blast ra­dius of a po­ten­tial com­pro­mise, as a com­pro­mised test or lint­ing job won’t have ac­cess to, for ex­am­ple, the se­crets needed to pub­lish re­lease ar­ti­facts.
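A minimal sketch of how these two layers can combine in one workflow, with hypothetical job names; the read-only organization default and the release environment's secrets and approval rules live in GitHub's settings, not in this file:

```yaml
permissions: {}    # workflow-level default: the GITHUB_TOKEN can do nothing

jobs:
  test:
    runs-on: ubuntu-latest
    # No environment here: this job can never read release secrets,
    # so a compromised test dependency has nothing to exfiltrate.
    steps:
      - run: echo "tests run with no privileged credentials"

  publish:
    runs-on: ubuntu-latest
    environment: release   # secrets scoped to this deployment environment
    permissions:
      contents: read       # broadened only as far as this one job requires
    steps:
      - run: echo "only this job sees the release environment's secrets"
```

Under this layout, a compromise of the test job's toolchain yields neither write permissions nor publishing credentials.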

To do these things, we lever­age GitHub’s own set­tings, as well as tools like ziz­mor (for sta­tic analy­sis) and pin­act (for au­to­matic pin­ning).

Beyond our CI/CD processes, we also take a num­ber of steps to limit both the like­li­hood and the im­pact of ac­count and repos­i­tory com­pro­mises within the Astral or­ga­ni­za­tion:

We limit the num­ber of ac­counts with ad­min- and other highly-priv­i­leged roles, with most or­ga­ni­za­tion mem­bers only hav­ing read and write ac­cess to the repos­i­to­ries they need to work on. This re­duces the num­ber of ac­counts that an at­tacker can com­pro­mise to gain ac­cess to our or­ga­ni­za­tion-level con­trols.

We en­force strong 2FA meth­ods for all mem­bers of the Astral or­ga­ni­za­tion, be­yond GitHub’s de­fault of re­quir­ing any 2FA method. In ef­fect, this re­quires all Astral or­ga­ni­za­tion mem­bers to have a 2FA method that’s no weaker than TOTP. If and when GitHub al­lows us to en­force only 2FA meth­ods that are phish­ing-re­sis­tant (such as WebAuthn and Passkeys only), we will do so.

We im­pose branch pro­tec­tion rules on an org-wide ba­sis: changes to main can­not be force-pushed and must al­ways go through a pull re­quest. We also for­bid the cre­ation of par­tic­u­lar branch pat­terns (like ad­vi­sory-* and in­ter­nal-*) to pre­vent pre­ma­ture dis­clo­sure of se­cu­rity work.

We impose tag protection rules that prevent release tags from being created until a release deployment succeeds, with the release deployment itself being gated on a manual approval by at least one other team member. We also prevent the updating or deletion of tags, making them effectively immutable once created. On top of that we layer a branch restriction: release deployments may only be created against main, preventing an attacker from using an unrelated first-party branch to attempt to bypass our controls.

Finally, we ban repos­i­tory ad­mins from by­pass­ing all of the above pro­tec­tions. All of our pro­tec­tions are en­forced at the or­ga­ni­za­tion level, mean­ing that an at­tacker who man­ages to com­pro­mise an ac­count that has ad­min ac­cess to a spe­cific repos­i­tory still won’t be able to dis­able our con­trols.

To help oth­ers im­ple­ment these kinds of branch and tag con­trols, we’re shar­ing a gist that shows some of the rule­sets we use. These rule­sets are spe­cific to our GitHub or­ga­ni­za­tion and repos­i­to­ries, but you can use them as a start­ing point for your own poli­cies!

There are cer­tain things that GitHub Actions can do, but can’t do se­curely, such as leav­ing com­ments on third-party is­sues and pull re­quests. Most of the time it’s bet­ter to just forgo these fea­tures, but in some cases they’re a valu­able part of our work­flows.

In these lat­ter cases, we use as­tral-sh-bot to safely iso­late these tasks out­side of GitHub Actions: GitHub sends us the same event data that GitHub Actions would have re­ceived (since GitHub Actions con­sumes the same web­hook pay­loads as GitHub Apps do), but with much more con­trol and much less im­plicit state.

However, there's still a catch with GitHub Apps: an app doesn't eliminate any sensitive credentials needed for an operation, it just moves them into an environment that doesn't mix code and data as pervasively as GitHub Actions does. For example, an app won't be susceptible to a template injection attack like a workflow would be, but could still contain SQLi, prompt injection, or other weaknesses that allow an attacker to abuse the app's credentials. Consequently, it's essential to treat GitHub App development with the same security mindset as any other software development. This also extends to untrusted code: using a GitHub App does not make it safe to run untrusted code, it just makes it harder to do so unexpectedly. If your processes need to run untrusted code, they must use pull_request or another “safe” trigger that doesn't provide any privileged credentials to third-party pull requests.

With all that said, we’ve found that the GitHub App pat­tern works well for us, and we rec­om­mend it to other main­tain­ers and pro­jects who have sim­i­lar needs. The main down­side to it comes in the form of com­plex­ity: it re­quires de­vel­op­ing and host­ing a GitHub App, rather than writ­ing a work­flow that GitHub or­ches­trates for you. We’ve found that frame­works like Gidgethub make the de­vel­op­ment process for GitHub Apps rel­a­tively straight­for­ward, but that host­ing re­mains a bur­den in terms of time and cost.

It’s an un­for­tu­nate re­al­ity that there still aren’t great GitHub App op­tions for one-per­son and hob­by­ist open source pro­jects; it’s our hope that us­abil­ity en­hance­ments in this space can be led by com­pa­nies and larger pro­jects that have the re­sources needed to pa­per over GitHub Actions’ short­com­ings as a plat­form.

We rec­om­mend this tu­to­r­ial by Mariatta as a good in­tro­duc­tion to build­ing GitHub Apps in Python. We also plan to open source as­tral-sh-bot in the fu­ture.

So far, we've covered aspects that tie closely to GitHub, as the source host for Astral's tools. But many of our users install our tools via other mechanisms, such as PyPI, Homebrew, and our Docker images. These distribution channels add another “link” to the metaphorical supply chain, and require discrete consideration:

Where possible, we use Trusted Publishing to publish to registries (like PyPI, crates.io, and NPM). This technique eliminates the need for long-lived registry credentials, in turn ameliorating one of the most common sources of package takeover (credential compromise in CI/CD platforms).
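As one possible shape of such a job on PyPI (the job and environment names are assumptions, and in practice the action reference would be hash-pinned), a Trusted Publishing step needs only the OIDC permission rather than any stored token:

```yaml
jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: release     # pair with an environment for approval gating
    permissions:
      id-token: write        # lets the job mint a short-lived OIDC token
    steps:
      # No password or API token configured: PyPI verifies the workflow's
      # OIDC identity against the trusted publisher registered for the project.
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Because the credential is minted per-run and expires quickly, there is nothing long-lived for an attacker to steal from repository settings.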

Where possible (currently our binary and Docker image releases), we generate Sigstore-based attestations. These attestations establish a cryptographically verifiable link between the released artifact and the workflow that produced it, in turn allowing users to verify that their build of uv, Ruff, or ty came from our actual release processes. You can see our recent attestations for uv as an example of this.1
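A sketch of how such an attestation can be generated with GitHub's own tooling; the subject-path value is hypothetical, and the action reference would be hash-pinned in a real workflow:

```yaml
permissions:
  id-token: write        # sign via Sigstore's keyless OIDC flow
  attestations: write    # store the provenance attestation with GitHub
  contents: read

steps:
  # Attest every artifact in dist/ (path shown for illustration only).
  - uses: actions/attest-build-provenance@v2
    with:
      subject-path: "dist/*"
```

Consumers can then check an artifact against its attestation (for example with the `gh attestation verify` CLI command) before trusting it.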

We use GitHub’s im­mutable re­leases fea­ture to pre­vent the post-hoc mod­i­fi­ca­tion of the builds we pub­lish on GitHub. This ad­dresses a com­mon at­tacker piv­ot­ing tech­nique where pre­vi­ously pub­lished builds are re­placed with ma­li­cious builds. A vari­ant of this tech­nique was used in the re­cent Trivy at­tack, with the at­tacker force-push­ing over pre­vi­ous tags to in­tro­duce com­pro­mised ver­sions of the trivy-ac­tion and setup-trivy ac­tions.

We do not use caching to im­prove build times dur­ing re­leases, to pre­vent an at­tacker from com­pro­mis­ing our builds via a GitHub Actions cache poi­son­ing at­tack.

* To reduce the risk of an attacker publishing a new malicious version of our tools, we use a stack of protections on our release processes:

Our re­lease process is iso­lated within a ded­i­cated GitHub de­ploy­ment en­vi­ron­ment. This means that jobs that don’t run in the re­lease en­vi­ron­ment (such as tests and lin­ters) don’t have ac­cess to our re­lease se­crets.

In or­der to ac­ti­vate the re­lease en­vi­ron­ment, the ac­ti­vat­ing job must be ap­proved by at least one other priv­i­leged mem­ber of the Astral or­ga­ni­za­tion. This mit­i­gates the risk of a sin­gle rogue or com­pro­mised ac­count be­ing able to pub­lish a ma­li­cious re­lease (or ex­fil­trate re­lease se­crets); the at­tacker needs to com­pro­mise at least two dis­tinct ac­counts, both with strong 2FA.

In repositories (like uv) where we have a large number of release jobs, we use a distinct release-gate environment to work around the fact that GitHub triggers approvals for every job that uses the release environment. This retains the two-person approval requirement, with one additional hop: a small, minimally-privileged GitHub App mediates the approval from release-gate to release via a deployment protection rule.

Finally, we use a tag pro­tec­tion rule­set to pre­vent the cre­ation of a re­lease’s tag un­til the re­lease de­ploy­ment suc­ceeds. This pre­vents an at­tacker from by­pass­ing the nor­mal re­lease process to cre­ate a tag and re­lease di­rectly.

* For users who install uv via our standalone installer, we enforce the integrity of the installed binaries via checksums embedded directly into the installer's source code.2

Our release processes also involve “knock-on” changes, like updating our public documentation, version manifests, and the official pre-commit hooks. These are privileged operations that we protect through dedicated bot accounts and fine-grained PATs issued through those accounts.

Going for­wards, we’re also look­ing at adding code­sign­ing with of­fi­cial de­vel­oper cer­tifi­cates on ma­cOS and Windows.

Last but not least is the ques­tion of de­pen­den­cies. Like al­most all mod­ern soft­ware, our tools de­pend on an ecosys­tem of third-party de­pen­den­cies (both di­rect and tran­si­tive), each of which is in an im­plicit po­si­tion of trust. Here are some of the things we do to mea­sure and mit­i­gate up­stream risk:

We use de­pen­dency man­age­ment tools like Dependabot and Renovate to keep our de­pen­den­cies up­dated, and to no­tify us when our de­pen­den­cies con­tain known vul­ner­a­bil­i­ties.

In gen­eral, we em­ploy cooldowns in con­junc­tion with the above to avoid up­dat­ing de­pen­den­cies im­me­di­ately af­ter a new re­lease, as this is when tem­porar­ily com­pro­mised de­pen­den­cies are most likely to af­fect us.

Both Dependabot and Renovate sup­port cooldowns, and uv also has built-in sup­port. We’ve found Renovate’s abil­ity to con­fig­ure cooldowns on a per-group ba­sis to be par­tic­u­larly use­ful, as it al­lows us to re­lax the cooldown re­quire­ment for our own (first-party) de­pen­den­cies while keep­ing it in place for most third-party de­pen­den­cies.
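For Dependabot, a cooldown can be sketched roughly as below; the cooldown keys are relatively new, so treat this as an assumption to check against the current documentation rather than a verbatim recipe:

```yaml
# .github/dependabot.yml — illustrative cooldown configuration.
version: 2
updates:
  - package-ecosystem: "cargo"
    directory: "/"
    schedule:
      interval: "weekly"
    cooldown:
      default-days: 7    # skip versions released within the last week
```

The idea is simply that a freshly published version sits out one cycle, giving the ecosystem time to detect a compromised release before it reaches your tree.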

We maintain social connections with many of our upstream dependencies, and we make both regular and security-focused contributions to them (including fixes to their own CI/CD and release processes). For example, here's a recent contribution we made to apache/opendal-reqsign to help them ratchet down their CI/CD security.

Separately, we main­tain so­cial con­nec­tions with ad­ja­cent pro­jects and work­ing groups in the ecosys­tem, in­clud­ing the Python Packaging Authority and the Python Security Response Team. These con­nec­tions have proven in­valu­able for shar­ing in­for­ma­tion, such as when a re­port against pip also af­fects uv (or vice versa), or when a se­cu­rity re­lease for CPython will re­quire a re­lease of python-build-stand­alone.

We’re con­ser­v­a­tive about adding new de­pen­den­cies, and we look to elim­i­nate de­pen­den­cies where prac­ti­cal and min­i­mally dis­rup­tive to our users. Over the com­ing re­lease cy­cles, we hope to re­move some de­pen­den­cies re­lated to sup­port for rarely used com­pres­sion schemes, as part of a larger ef­fort to align our­selves with Python pack­ag­ing stan­dards.

More gen­er­ally, we’re also con­ser­v­a­tive about what our de­pen­den­cies bring in: we try to avoid de­pen­den­cies that in­tro­duce bi­nary blobs, and we care­fully re­view our de­pen­den­cies’ fea­tures to dis­able func­tion­al­ity that we don’t need or de­sire.

Finally, we con­tribute fi­nan­cially (in the form of our OSS Fund) to the sus­tain­abil­ity of pro­jects that we de­pend on or that push the OSS ecosys­tem as a whole for­wards.

Open source se­cu­rity is a hard prob­lem, in part be­cause it’s re­ally many prob­lems (some tech­ni­cal, some so­cial) mas­querad­ing as one. We’ve cov­ered many of the tech­niques we use to tackle this prob­lem, but this post is by no means an ex­haus­tive list. It’s also not a sta­tic list: at­tack­ers are dy­namic par­tic­i­pants in the se­cu­rity process, and de­fenses nec­es­sar­ily evolve in re­sponse to their chang­ing tech­niques.

With that in mind, we’d like to re­call some of the points men­tioned above that de­serve the most at­ten­tion:

Respect the lim­its of CI/CD: it’s ex­tremely tempt­ing to do every­thing in CI/CD, but there are some things that CI/CD (and par­tic­u­larly GitHub Actions) just can’t do se­curely. For these things, it’s of­ten bet­ter to forgo them en­tirely, or iso­late them out­side of CI/CD with a GitHub App or sim­i­lar.

With that said, it’s im­por­tant to not over­cor­rect and throw CI/CD away en­tirely: as men­tioned above, CI/CD is a crit­i­cal part of our se­cu­rity pos­ture and prob­a­bly yours too! It’s un­for­tu­nate that se­cur­ing GitHub Actions is so dif­fi­cult, but we con­sider it worth the ef­fort rel­a­tive to the ve­loc­ity and se­cu­rity risks that would come with not us­ing hosted CI/CD at all.

In par­tic­u­lar, we strongly rec­om­mend us­ing CI/CD for re­lease processes, rather than re­ly­ing on lo­cal de­vel­oper ma­chines, par­tic­u­larly when those re­lease processes can be se­cured with mis­use- and dis­clo­sure-re­sis­tant cre­den­tial schemes like Trusted Publishing.

Isolate and elim­i­nate long-lived cre­den­tials: the sin­gle most com­mon form of post-com­pro­mise spread is the abuse of long-lived cre­den­tials. Wherever pos­si­ble, elim­i­nate these cre­den­tials en­tirely (for ex­am­ple, with Trusted Publishing or other OIDC-based au­then­ti­ca­tion mech­a­nisms).

Where elim­i­na­tion is­n’t pos­si­ble, iso­late these cre­den­tials to the small­est pos­si­ble scope: put them in spe­cific de­ploy­ment en­vi­ron­ments with ad­di­tional ac­ti­va­tion re­quire­ments, and only is­sue cre­den­tials with the min­i­mum nec­es­sary per­mis­sions to ac­com­plish a given task.

Strengthen re­lease processes: if you’re on GitHub, use de­ploy­ment en­vi­ron­ments, ap­provals, tag and branch rule­sets, and im­mutable re­leases to re­duce the de­grees of free­dom the at­tacker has in the event of an ac­count takeover or repos­i­tory com­pro­mise.

Maintain aware­ness of your de­pen­den­cies: main­tain­ing aware­ness of the over­all health of your de­pen­dency tree is crit­i­cal to un­der­stand­ing your own risk pro­file. Use both tools and el­bow grease to keep your de­pen­den­cies se­cure, and to help them keep their own processes and de­pen­den­cies se­cure too.

Finally, we’re still eval­u­at­ing many of the tech­niques men­tioned above, and will al­most cer­tainly be tweak­ing (and strength­en­ing) them over the com­ing weeks and months as we learn more about their lim­i­ta­tions and how they in­ter­act with our de­vel­op­ment processes. That’s to say that this post rep­re­sents a point in time, not the fi­nal word on how we think about se­cu­rity for our open source tools.

...

Read the original on astral.sh »

9 277 shares, 18 trendiness

The Importance of Being Idle

As I type these words, I worry over the day when I will no longer be com­mis­sioned to write them. The day, to be spe­cific, that The American Scholar asks Claude (the moniker for Anthropic’s AI) and not Robert (the name of Max and Roslyn Zaretsky’s son) to cre­ate an es­say on, say, AI and the fu­ture of work.

Not surprisingly, I am not alone in this worry: few subjects stir greater fear and dread among Americans than the seemingly irresistible rise of AI. According to a recent Pew Research Center survey, 64 percent of the public believes that AI will translate into fewer jobs. Small wonder, then, that only 17 percent of the same respondents expect that AI, even when humanized by names like Claude, will make their future brighter.

Were he alive to­day, Paul Lafargue would be among that 17 per­cent, and his voice would be both loud and funny. Born in Cuba in 1842 to par­ents of mixed race—part Jewish and part Creole—Lafargue was mar­ried to Laura Marx, one of Karl Marx’s four daugh­ters. Even be­fore this mar­riage, though, Lafargue, who had stud­ied med­i­cine in Paris, had thrown over a se­cure fu­ture as a doc­tor to de­vote (and pau­per­ize) him­self and his fam­ily to work­ing on be­half of the shin­ing (and class­less) fu­ture glimpsed by his fa­ther-in-law.

Knocking out polem­i­cal and the­o­ret­i­cal es­says while striv­ing to launch France’s first work­ers’ party, the Parti ou­vrier français, Lafargue was a well-known fig­ure on the rad­i­cal left in fin-de-siè­cle Paris. Predictably, his ac­tiv­i­ties also made him well-known to the French po­lice, who re­peat­edly ar­rested him, in­clud­ing on one evening in 1883 when he was tak­ing home a salad to his wife. (He man­aged to find a passerby to de­liver the salad be­fore the po­lice hauled him away.)

Making wine from this bunch of grapes, Lafargue used his time be­hind bars at Saint Pélagie—a for­bid­ding Parisian prison where many of the cen­tu­ry’s most no­to­ri­ous writ­ers, artists, and thinkers found them­selves from time to time—to draft his most fa­mous work, Le Droit à la pa­resse, or The Right to Be Lazy, trans­lated into English by Alex Andriesse. Though he dashed off this pam­phlet nearly 150 years ago, Lafargue asked ques­tions that re­main most per­ti­nent to our cur­rent anx­i­eties over the fu­ture of work.

During Lafargue's own lifetime, the nature of work was undergoing a traumatic transformation. The seismic effect of the first and second industrial revolutions, as well as the quickening pace of globalization, proved an extinction event for traditional forms of production. “The gods and kings of the past,” declared the historian Eric Hobsbawm, “were powerless before the businessmen and steam engines of the present.” As factory workers and unskilled laborers replaced ateliers and artisans, the former struggled to organize themselves, a struggle into which Lafargue threw himself body and soul.

Or, perhaps, not his entire soul. His essay's title reveals a dramatic divergence between his goals and those of union leaders. He bemoans the workers' demand for shorter workdays (which often lasted as long as 12 hours), insisting that curtailing work hours represented not victory but defeat: “Shame on the proletariat,” for only slaves would have been capable of such “baseness” as to have sought such an outcome. On the contrary, he declaims, workers should oppose the very notion of work.

If you are puzzled, don't worry—so, too, were nearly all of Lafargue's contemporaries on the left. How could they not be? Here was a committed Marxist—and the great man's son-in-law, to boot—asserting that workers, rather than strike for the right to work, should instead protest for the right to be lazy. Machines, he believed, could become humanity's savior, “the god who will redeem man from the sordidae artes [manual labor] and give him leisure and liberty.”

And yet, Lafargue exclaims, “the blind passion and perverse murderousness of work have transformed the machine from an instrument of emancipation into an instrument that enslaves free beings.” The reason workers spend so many hours shackled to their machines, he contended, was not economic necessity. Instead, it was imposed upon them by their superiors, the captains of industry and finance, who were wedded to the dogma of work and diabolically “drilled the vice of work into the heads of workers.”

Of course, Lafargue never called for the eradication of work. The necessities of life, after all, would always require the labor of women and men to produce and provide. But he did press for the rationalization of work. Given the efficiency of machines, fewer hours were needed to provide the necessities of life. Maintaining the same excessive number of work hours inevitably flooded the market with superfluities, fueling the era's repeated economic crises stretching from 1873 to the end of the century.

The dra­matic re­duc­tion of time at work would be a boon not just to the well-be­ing of the econ­omy, Lafargue con­cluded, but also to the well-be­ing of both work­ers and own­ers, who would have more time to … well, to do what?

Karl Marx had an answer of sorts, suggesting that we would “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticism after dinner, just as I have a mind.” But Lafargue instead conjured a Rabelaisian future in which former workers would eat and drink their fill on holidays while their former taskmasters would entertain them by performing parodies of their now defunct roles as generals and industrialists. Et le voilà, Lafargue concludes, in this world turned upside down, “social discord will vanish.”

Though his tongue was firmly in cheek, Lafargue did imagine that these machines—perhaps the forerunners of the “machines of loving grace” invoked by Dario Amodei, the CEO of Anthropic—would lead us to a paradise we had lost. A paradise bathed in otium, the Latin word that can be translated as “idleness” as well as “laziness.” When Lafargue praises la paresse, he means not the latter, but the former. He makes this clear by quoting, at the start of his essay, a line from Virgil's Eclogues that celebrates the pleasures of otium.

Although Lafargue does not flesh out his notion of a future filled with idleness, my guess is that he meant it would be devoted not to the pleasure of a particular hobby or specific activity, painting a landscape or swinging a golf club. Instead, it would be a life given over, quite simply, to the pleasure of faisant rien, or doing nothing. As the Czech playwright Karel Capek wrote in an essay called “In Praise of Idleness,” this state is defined as “the absence of everything by which a person is occupied, diverted, distracted, interested, employed, annoyed, pleased, attracted, involved, entertained, bored, enchanted, fatigued, absorbed, or confused.” In a word, idling is the sentiment of being.

But even idlers, try as they might, cannot ignore the passage of time. In 1911, a dozen years before Capek published his essay, Paul Lafargue and his wife committed suicide—he was 69; she was 66. His reason, it seems to me, dovetailed with his philosophy: “I am killing myself before pitiless old age, which gradually deprives me one by one of the pleasures and joys of existence.” It might repay us to take a moment, not just from our jobs but also from our leisures, to make some to-do about doing nothing.

...

Read the original on theamericanscholar.org »

10 228 shares, 0 trendiness

The Pentagon Threatened Pope Leo XIV’s Ambassador With the Avignon Papacy

Before you read on: Pope Leo XIV has asked Americans to con­tact their mem­bers of Congress and de­mand an end to the war in Iran. Answer the pope’s call in one click at stand­with­popeleo.com, an app we built to make it as easy as pos­si­ble.

[UPDATE at 4:33 PM EDT: Letters from Leo can now in­de­pen­dently con­firm The Free Press re­port that the meet­ing took place — and that some Vatican of­fi­cials were so alarmed by the Pentagon’s tac­tics that they shelved plans for Pope Leo XIV to visit the United States later this year.

Other of­fi­cials in the Vatican saw the Pentagon’s ref­er­ence to an Avignon pa­pacy as a threat to use mil­i­tary force against the Holy See.]

In January, behind closed doors at the Pentagon, Under Secretary of War for Policy Elbridge Colby summoned Cardinal Christophe Pierre — Pope Leo XIV's then-ambassador to the United States — and delivered a lecture.

America, Colby and his col­leagues told the car­di­nal, has the mil­i­tary power to do what­ever it wants in the world. The Catholic Church had bet­ter take its side.

As tem­pers rose, an uniden­ti­fied U. S. of­fi­cial reached for a four­teenth-cen­tury weapon and in­voked the Avignon Papacy, the pe­riod when the French Crown used mil­i­tary force to bend the bishop of Rome to its will.

That scene, bro­ken this week by Mattia Ferraresi in an ex­tra­or­di­nary piece of jour­nal­ism for The Free Press, may be the most re­mark­able mo­ment in the long and knot­ted his­tory of the American re­pub­lic’s re­la­tion­ship with the Catholic Church.

There is no pub­lic record of any Vatican of­fi­cial ever tak­ing a meet­ing at the Pentagon, and cer­tainly none of a se­nior U. S. of­fi­cial threat­en­ing the Vicar of Christ on Earth with the prospect of an American Babylonian Captivity.

The re­port­ing also con­firms — with fresh sources and new color — what I first re­ported in February: that the Vatican de­clined the Trump-Vance White House’s in­vi­ta­tion to host Pope Leo XIV for America’s 250th an­niver­sary in 2026.

Ferraresi ob­tained ac­counts from Vatican and U.S. of­fi­cials briefed on the Pentagon meet­ing. According to his sources, Colby's team picked apart the pope's January state-of-the-world ad­dress line by line and read it as a hos­tile mes­sage aimed di­rectly at the ad­min­is­tra­tion.

What en­raged them most was Leo's de­c­la­ra­tion that “a diplo­macy that pro­motes di­a­logue and seeks con­sen­sus among all par­ties is be­ing re­placed by a diplo­macy based on force.”

The Pentagon read that sen­tence as a frontal chal­lenge to the so-called “Donroe Doctrine” — Trump's up­date of Monroe, as­sert­ing un­chal­lenged American do­min­ion over the Western Hemisphere.

The car­di­nal sat through the lec­ture in si­lence. The Holy See has not, since that day, given an inch.

Ferraresi’s re­port­ing also adds vi­tal color to the col­lapse of the 250th an­niver­sary visit. JD Vance per­son­ally ex­tended the in­vi­ta­tion in May 2025, just two weeks af­ter Leo’s elec­tion in the con­clave.

According to a se­nior Vatican of­fi­cial quoted in the piece, the Holy See ini­tially con­sid­ered the re­quest, then post­poned it in­def­i­nitely be­cause of for­eign pol­icy dis­agree­ments, the ris­ing op­po­si­tion of American bish­ops to the Trump-Vance mass de­por­ta­tion regime, and a re­fusal to be­come a par­ti­san tro­phy in the 2026 midterms.

“The ad­min­is­tra­tion tried every pos­si­ble way to have the Pope in the U.S. in 2026,” one Vatican of­fi­cial told The Free Press.

Instead, on July 4, 2026, the first American pope will travel to Lampedusa, the Italian is­land where North African mi­grants wash ashore by the thou­sands. Robert Francis Prevost is too de­lib­er­ate a man to have cho­sen that date by ac­ci­dent.

The Pentagon meet­ing also clar­i­fies the moral in­ten­sity of Leo’s pub­lic pos­ture over the last six weeks.

After Colby’s lec­ture, the pope did not re­treat into Vatican diplo­macy. He pressed harder.

...

Read the original on www.thelettersfromleo.com »
