10 interesting stories served every morning and every evening.




1 1,215 shares, 86 trendiness

EFF is Leaving X

After al­most twenty years on the plat­form, EFF is log­ging off of X. This is­n’t a de­ci­sion we made lightly, but it might be over­due. The math has­n’t worked out for a while now.

We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets gar­nered some­where be­tween 50 and 100 mil­lion im­pres­sions per month. By 2024, our 2,500 X posts gen­er­ated around 2 mil­lion im­pres­sions each month. Last year, our 1,500 posts earned roughly 13 mil­lion im­pres­sions for the en­tire year. To put it bluntly, an X post to­day re­ceives less than 3% of the views a sin­gle tweet de­liv­ered seven years ago.
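The arithmetic behind that comparison can be checked directly. A quick sketch, using midpoints where the article gives a range:

```python
# Rough per-post impression math from the figures above (midpoints used
# where the article quotes a range; all numbers are approximations).

# 2018: 5-10 tweets/day at 50-100M impressions/month
posts_2018_per_month = 7.5 * 30               # ~225 posts/month
impressions_2018_per_month = 75_000_000       # midpoint of 50-100M
per_post_2018 = impressions_2018_per_month / posts_2018_per_month  # ~333k

# Last year: ~1,500 posts, ~13M impressions for the whole year
per_post_now = 13_000_000 / 1_500             # ~8.7k

ratio = per_post_now / per_post_2018
print(f"{ratio:.1%}")  # → 2.6%, i.e. under the 3% the article claims
```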

When Elon Musk ac­quired Twitter in October 2022, EFF was clear about what needed fix­ing.

* Greater user con­trol: Giving users and third-party de­vel­op­ers the means to con­trol the user ex­pe­ri­ence through fil­ters and

Twitter was never a utopia. We’ve crit­i­cized the plat­form for about as long as it’s been around. Still, Twitter did de­serve recog­ni­tion from time to time for vo­cif­er­ously fight­ing for its users’ rights. That changed. Musk fired the en­tire hu­man rights team and laid off staffers in coun­tries where the com­pany pre­vi­ously fought off cen­sor­ship de­mands from re­pres­sive regimes. Many users left. Today we’re join­ing them.

Yes. And we un­der­stand why that looks con­tra­dic­tory. Let us ex­plain.

EFF ex­ists to pro­tect peo­ple’s dig­i­tal rights. Not just the peo­ple who al­ready value our work, have opted out of sur­veil­lance, or have al­ready mi­grated to the fe­di­verse. The peo­ple who need us most are of­ten the ones most em­bed­ded in the walled gar­dens of the main­stream plat­forms and sub­jected to their cor­po­rate sur­veil­lance.

Young peo­ple, peo­ple of color, queer folks, ac­tivists, and or­ga­niz­ers use Instagram, TikTok, and Facebook every day. These plat­forms host mu­tual aid net­works and serve as hubs for po­lit­i­cal or­ga­niz­ing, cul­tural ex­pres­sion, and com­mu­nity care. Just delet­ing the apps is­n’t al­ways a re­al­is­tic or ac­ces­si­ble op­tion, and nei­ther is push­ing every user to the fe­di­verse when there are cir­cum­stances like:

* You own a small busi­ness that de­pends on Instagram for cus­tomers.

* Your abor­tion fund uses TikTok to spread cru­cial in­for­ma­tion.

* You’re iso­lated and rely on on­line spaces to con­nect with your com­mu­nity.

Our pres­ence on Facebook, Instagram, YouTube, and TikTok is not an en­dorse­ment. We’ve spent years ex­pos­ing how these plat­forms sup­press mar­gin­al­ized voices, en­able in­va­sive be­hav­ioral ad­ver­tis­ing, and flag posts about abor­tion as dan­ger­ous. We’ve also taken ac­tion in court, in leg­is­la­tures, and through di­rect en­gage­ment with their staff to push them to change poor poli­cies and prac­tices.

We stay be­cause the peo­ple on those plat­forms de­serve ac­cess to in­for­ma­tion, too. We stay be­cause some of our most-read posts are the ones crit­i­ciz­ing the very plat­form we’re post­ing on. We stay be­cause the fewer steps be­tween you and the re­sources you need to pro­tect your­self, the bet­ter.

When you go on­line, your rights should go with you. X is no longer where the fight is hap­pen­ing. The plat­form Musk took over was im­per­fect but im­pact­ful. What ex­ists to­day is some­thing else: di­min­ished, and in­creas­ingly de min­imis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our mem­bers’ sup­port where they will ef­fect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you fol­low us there and keep sup­port­ing the work we do. Our work pro­tect­ing dig­i­tal rights is needed more than ever be­fore, and we’re here to help you take back con­trol.

...

Read the original on www.eff.org »

2 529 shares, 26 trendiness

Help Keep Thunderbird Alive!

All of the work we do is funded by less than 3% of our users.

We never show ad­ver­tise­ments or sell your data. We don’t have cor­po­rate fund­ing. We are fully funded by fi­nan­cial con­tri­bu­tions from our users.

Thunderbird’s mis­sion is to give you the best pri­vacy-re­spect­ing, cus­tomiz­able email ex­pe­ri­ence pos­si­ble. Free for every­one to in­stall and en­joy! Maintaining ex­pen­sive servers, fix­ing bugs, de­vel­op­ing new fea­tures, and hir­ing tal­ented en­gi­neers are cru­cial for this mis­sion.

If you get value from us­ing Thunderbird, please help sup­port it. We can’t do this with­out you.

...

Read the original on updates.thunderbird.net »

3 429 shares, 18 trendiness

The Pentagon Threatened Pope Leo XIV’s Ambassador With the Avignon Papacy

Thank you for read­ing! Letters from Leo is a reader-sup­ported pub­li­ca­tion. To re­ceive new posts and sup­port my work, con­sider be­com­ing a free or paid sub­scriber.

Before you read on: Pope Leo XIV has asked Americans to con­tact their mem­bers of Congress and de­mand an end to the war in Iran. Answer the pope’s call in one click at stand­with­popeleo.com, an app we built to make it as easy as pos­si­ble.

[UPDATE at 4:33 PM EDT: Letters from Leo can now in­de­pen­dently con­firm The Free Press re­port that the meet­ing took place — and that some Vatican of­fi­cials were so alarmed by the Pentagon’s tac­tics that they shelved plans for Pope Leo XIV to visit the United States later this year.

Other of­fi­cials in the Vatican saw the Pentagon’s ref­er­ence to an Avignon pa­pacy as a threat to use mil­i­tary force against the Holy See.]

In January, behind closed doors at the Pentagon, Under Secretary of War for Policy Elbridge Colby summoned Cardinal Christophe Pierre — Pope Leo XIV’s then-ambassador to the United States — and delivered a lecture.

America, Colby and his col­leagues told the car­di­nal, has the mil­i­tary power to do what­ever it wants in the world. The Catholic Church had bet­ter take its side.

As tempers rose, an unidentified U.S. official reached for a fourteenth-century weapon and invoked the Avignon Papacy, the period when the French Crown used military force to bend the bishop of Rome to its will.

That scene, bro­ken this week by Mattia Ferraresi in an ex­tra­or­di­nary piece of jour­nal­ism for The Free Press, may be the most re­mark­able mo­ment in the long and knot­ted his­tory of the American re­pub­lic’s re­la­tion­ship with the Catholic Church.

There is no public record of any Vatican official ever taking a meeting at the Pentagon, and certainly none of a senior U.S. official threatening the Vicar of Christ on Earth with the prospect of an American Babylonian Captivity.

The re­port­ing also con­firms — with fresh sources and new color — what I first re­ported in February: that the Vatican de­clined the Trump-Vance White House’s in­vi­ta­tion to host Pope Leo XIV for America’s 250th an­niver­sary in 2026.

Ferraresi obtained accounts from Vatican and U.S. officials briefed on the Pentagon meeting. According to his sources, Colby’s team picked apart the pope’s January state-of-the-world address line by line and read it as a hostile message aimed directly at the administration.

What enraged them most was Leo’s declaration that “a diplomacy that promotes dialogue and seeks consensus among all parties is being replaced by a diplomacy based on force.”

The Pentagon read that sentence as a frontal challenge to the so-called “Donroe Doctrine” — Trump’s update of Monroe, asserting unchallenged American dominion over the Western Hemisphere.

The car­di­nal sat through the lec­ture in si­lence. The Holy See has not, since that day, given an inch.

Ferraresi’s re­port­ing also adds vi­tal color to the col­lapse of the 250th an­niver­sary visit. JD Vance per­son­ally ex­tended the in­vi­ta­tion in May 2025, just two weeks af­ter Leo’s elec­tion in the con­clave.

According to a se­nior Vatican of­fi­cial quoted in the piece, the Holy See ini­tially con­sid­ered the re­quest, then post­poned it in­def­i­nitely be­cause of for­eign pol­icy dis­agree­ments, the ris­ing op­po­si­tion of American bish­ops to the Trump-Vance mass de­por­ta­tion regime, and a re­fusal to be­come a par­ti­san tro­phy in the 2026 midterms.

“The administration tried every possible way to have the Pope in the U.S. in 2026,” one Vatican official told The Free Press.

Instead, on July 4, 2026, the first American pope will travel to Lampedusa, the Italian is­land where North African mi­grants wash ashore by the thou­sands. Robert Francis Prevost is too de­lib­er­ate a man to have cho­sen that date by ac­ci­dent.

The Pentagon meet­ing also clar­i­fies the moral in­ten­sity of Leo’s pub­lic pos­ture over the last six weeks.

After Colby’s lec­ture, the pope did not re­treat into Vatican diplo­macy. He pressed harder.

...

Read the original on www.thelettersfromleo.com »

4 414 shares, 19 trendiness

Claude mixes up who said what, and that's not OK

Claude sometimes sends messages to itself and then thinks those messages came from the user. This is the worst bug I’ve seen from an LLM provider, but people always misunderstand what’s happening and blame LLMs, hallucinations, or lack of permission boundaries. Those are related issues, but this ‘who said what’ bug is categorically distinct.

I wrote about this in de­tail in The worst bug I’ve seen so far in Claude Code, where I showed two ex­am­ples of Claude giv­ing it­self in­struc­tions and then be­liev­ing those in­struc­tions came from me.

Claude told it­self my ty­pos were in­ten­tional and de­ployed any­way, then in­sisted I was the one who said it.

It’s not just me

Here’s a Reddit thread where Claude said “Tear down the H100 too”, and then claimed that the user had given that instruction.

From r/​An­thropic — Claude gives it­self a de­struc­tive in­struc­tion and blames the user.

“You shouldn’t give it that much access”

Comments on my previous post were things like “It should help you use more discipline in your DevOps.” And on the Reddit thread, many in the class of “don’t give it nearly this much access to a production environment, especially if there’s data you want to keep.”

This isn’t the point. Yes, of course AI has risks and can behave unpredictably, but after using it for months you get a ‘feel’ for what kind of mistakes it makes, when to watch it more closely, and when to give it more permissions or a longer leash.

This class of bug seems to be in the harness, not in the model itself. It’s somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that “No, you said that.”

Before, I thought it was a tem­po­rary thing — I saw it a few times in a sin­gle day, and then not again for months. But ei­ther they have a re­gres­sion or it was a co­in­ci­dence and it just pops up every so of­ten, and peo­ple only no­tice when it gives it­self per­mis­sion to do some­thing bad.

This ar­ti­cle reached #1 on Hacker News, and it seems that this is def­i­nitely a wide­spread is­sue. Here’s an­other su­per clear ex­am­ple shared by nathell (full tran­script).

From nathell — Claude asks itself “Shall I commit this progress?” and treats it as user approval.

Several people questioned whether this is actually a harness bug like I assumed, as people have reported similar issues using other interfaces and models, including chatgpt.com. One pattern does seem to be that it happens in the so-called “Dumb Zone” once a conversation starts approaching the limits of the context window.

...

Read the original on dwyer.co.za »

5 380 shares, 42 trendiness

Native Instant Space Switching on MacOS


The worst part about the MacOS win­dow man­age­ment sit­u­a­tion is the in­abil­ity to in­stantly switch spaces, and that Apple has con­tin­u­ously ig­nored re­quests to dis­able the nau­se­at­ing switch­ing an­i­ma­tion. Sure, it’s not that long, but I switch spaces of­ten enough to the point where it be­comes very no­tice­able and dri­ves me in­sane.

I believe I have found the best solution for instant space switching!

But before I show you: of course, other people share the same sentiment. I claim that none of the surveyed contemporary solutions, except for what I bring up at the end of this article, suffice for what I want:

Enable Reduce Motion in the Accessibility settings. This is always the default answer to this question online, and I’m sick of it! It doesn’t even solve the problem, but rather replaces it with an equally useless fade-in animation. It also has the side effect of activating the prefers-reduced-motion media query on web browsers.

Install the yabai tiling win­dow man­ager and use its in­stant space switcher.

And to be fair, it works pretty well. There are only two problems: for one, yabai does this by binary patching a part of the operating system, which is only possible after disabling System Integrity Protection at your own discretion. For the second, installing yabai forces you to learn and use it as your tiling window manager1. I personally use PaperWM.spoon as my window manager, and the two are incompatible when installed together.

Use a third-party vir­tual space man­ager fa­cade, hid­ing and show­ing win­dows as needed when switch­ing spaces.

Some pop­u­lar op­tions are FlashSpace and AeroSpace vir­tual work­spaces. I ac­tu­ally of­fer no crit­i­cism other than that they are not na­tive to MacOS, and feel un­nec­es­sary given that all we want to do is dis­able an an­i­ma­tion.

Pay for a license for BetterTouchTool. Enable “Move Right Space (Without Animation)” and “Move Left Space (Without Animation)”.

Without fur­ther ado, I man­aged to find InstantSpaceSwitcher by ju­r­plel on GitHub. It is a sim­ple menu bar ap­pli­ca­tion that achieves in­stant space switch­ing while of­fer­ing none of the afore­men­tioned draw­backs.

InstantSpaceSwitcher does not require disabling System Integrity Protection; it works by simulating a trackpad swipe with a large velocity. It additionally allows you to instantly jump to a space by number. The last thing it provides is a command line interface.

The installation instructions are not listed in the README, so here they are:

$ git clone https://​github.com/​ju­r­plel/​In­stantSpaceSwitcher

$ cd InstantSpaceSwitcher

$ ./build.sh

InstantSpaceSwitcher should now be avail­able as a na­tive ap­pli­ca­tion.

After run­ning the above, the com­mand line in­ter­face is avail­able at:

$ .build/release/ISSCli --help

Usage: .build/release/ISSCli [left|right|index

Did I men­tion that the repos­i­tory lit­er­ally has one star on GitHub (me)? I want more peo­ple to dis­cover InstantSpaceSwitcher and con­sider it trust­wor­thy; hence, please con­sider giv­ing it a star if you find it help­ful.


...

Read the original on arhan.sh »

6 355 shares, 14 trendiness

Open source security at Astral

Astral builds tools that mil­lions of de­vel­op­ers around the world de­pend on and trust.

That trust in­cludes con­fi­dence in our se­cu­rity pos­ture: de­vel­op­ers rea­son­ably ex­pect that our tools (and the processes that build, test, and re­lease them) are se­cure. The rise of sup­ply chain at­tacks, typ­i­fied by the re­cent Trivy and LiteLLM hacks, has de­vel­op­ers ques­tion­ing whether they can trust their tools.

To that end, we want to share some of the tech­niques we use to se­cure our tools in the hope that they’re use­ful to:

Our users, who want to un­der­stand what we do to keep their sys­tems se­cure;

Other main­tain­ers, pro­jects, and com­pa­nies, who may ben­e­fit from some of the tech­niques we use;

Developers of CI/CD systems, so that projects do not need to follow non-obvious paths or avoid useful features to maintain secure and robust processes.

We sus­tain our de­vel­op­ment ve­loc­ity on Ruff, uv, and ty through ex­ten­sive CI/CD work­flows that run on GitHub Actions. Without these work­flows we would strug­gle to re­view, test, and re­lease our tools at the pace and to the de­gree of con­fi­dence that we de­mand. Our CI/CD work­flows are also a crit­i­cal part of our se­cu­rity pos­ture, in that they al­low us to keep crit­i­cal de­vel­op­ment and re­lease processes away from lo­cal de­vel­oper ma­chines and in­side of con­trolled, ob­serv­able en­vi­ron­ments.

GitHub Actions is a log­i­cal choice for us be­cause of its tight first-party in­te­gra­tion with GitHub, along with its ma­ture sup­port for con­trib­u­tor work­flows: any­body who wants to con­tribute can val­i­date that their pull re­quest is cor­rect with the same processes we use our­selves.

Unfortunately, there’s a flip­side to this: GitHub Actions has poor se­cu­rity de­faults, and se­cu­rity com­pro­mises like those of Ultralytics, tj-ac­tions, and Nx all be­gan with well-trod­den weak­nesses like pwn re­quests.

Here are some of the things we do to se­cure our CI/CD processes:

We forbid many of GitHub’s most dangerous and insecure triggers, such as pull_request_target and workflow_run, across our entire GitHub organization. These triggers are almost impossible to use securely and attackers keep finding ways to abuse them, so we simply don’t allow them.

Our experience with these triggers is that many projects think that they need them, but the overwhelming majority of their usages are better off being replaced with a less privileged trigger (such as pull_request) or removed entirely. For example, many projects use pull_request_target so that third-party contributor-triggered workflows can leave comments on PRs, but these use cases are often well served by job summaries or even just leaving the relevant information in the workflow’s logs.

Of course, there are some use cases that do require these triggers, such as anything that does really need to leave comments on third-party issues or pull requests. In these instances we recommend leaving GitHub Actions entirely and using a GitHub App (or webhook) that listens for the relevant events and acts in an independent context. We cover this pattern in more detail under Automations below.

We re­quire all ac­tions to be pinned to spe­cific com­mits (rather than tags or branches, which are mu­ta­ble). Additionally, we cross-check these com­mits to en­sure they match an ac­tual re­leased repos­i­tory state and are not im­pos­tor com­mits.

We do this in two ways: first with ziz­mor’s un­pinned-uses and im­pos­tor-com­mit au­dits, and again with GitHub’s own require ac­tions to be pinned to a full-length com­mit SHA pol­icy. The for­mer gives us a quick check that we can run lo­cally (and pre­vents im­pos­tor com­mits), while the lat­ter is a hard gate on work­flow ex­e­cu­tion that ac­tu­ally en­sures that all ac­tions, in­clud­ing nested ac­tions, are fully hash-pinned.

Enabling the lat­ter is a non­triv­ial en­deavor, since it re­quires in­di­rect ac­tion us­ages (the ac­tions called by the ac­tions we call) to be hash-pinned as well. To achieve this, we co­or­di­nated with our down­streams (example) to land hash-pin­ning across our en­tire de­pen­dency graph.

Together, these checks in­crease our con­fi­dence in the re­pro­ducibil­ity and her­metic­ity of our work­flows, which in turn in­creases our con­fi­dence in their se­cu­rity (in the pres­ence of an at­tack­er’s abil­ity to com­pro­mise a de­pen­dent ac­tion).
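To make the unpinned-uses idea concrete, here is a minimal sketch of such a check — not zizmor’s actual implementation, just an illustration of the rule it enforces — that flags `uses:` references which aren’t pinned to a full 40-character commit SHA:

```python
import re

# Illustrative mini-check (zizmor's real audits are far more thorough):
# a `uses:` reference is considered pinned only if it ends in a full
# 40-character hex commit SHA. Tags and branches are mutable, so anything
# else gets flagged.
PINNED = re.compile(r"@[0-9a-f]{40}\b")

def unpinned_uses(workflow_text: str) -> list[str]:
    hits = []
    for line in workflow_text.splitlines():
        m = re.search(r"uses:\s*(\S+)", line)
        # Local composite actions (./path) have no ref to pin, so skip them.
        if m and not m.group(1).startswith("./") and not PINNED.search(m.group(1)):
            hits.append(m.group(1))
    return hits

example = """
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
"""
print(unpinned_uses(example))  # → ['actions/checkout@v4']
```

The tag-pinned checkout is flagged while the SHA-pinned step passes; the SHA shown here is an arbitrary example, not a real release commit.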

However, while necessary, this isn’t sufficient: hash-pinning ensures that the action’s contents are immutable, but doesn’t prevent those immutable contents from making mutable decisions (such as installing the latest version of a binary from a GitHub repository’s releases). Neither GitHub nor third-party tools perform well at detecting these kinds of immutability gaps yet, so we currently rely on manual review of our action dependencies to detect this class of risks.

When man­ual re­view does iden­tify gaps, we work with our up­streams to close them. For ex­am­ple, for ac­tions that use na­tive bi­na­ries in­ter­nally, this is achieved by em­bed­ding a map­ping be­tween the down­load URL for the bi­nary and a cryp­to­graphic hash. This hash in turn be­comes part of the ac­tion’s im­mutable state. While this does­n’t en­sure that the bi­nary it­self is au­then­tic, it does en­sure that an at­tacker can­not ef­fec­tively tam­per with a mu­ta­ble pointer to the bi­nary (such as a non-im­mutable tag or re­lease).
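The embedded-mapping pattern can be sketched as follows. This is a hypothetical illustration (the names, URL, and digest are invented; real actions implement this in their own languages), but it shows why the attacker can’t swap the artifact without also being caught by the digest:

```python
import hashlib

# Sketch of the URL -> hash mapping pattern: because the mapping lives in
# the action's own (hash-pinned, immutable) source, replacing the artifact
# behind the URL cannot also replace the expected digest.
EXPECTED_SHA256 = {
    # Hypothetical entry; real actions embed a digest per release asset.
    "https://example.com/tool-v1.2.3-linux-x64.tar.gz":
        hashlib.sha256(b"tool bytes").hexdigest(),
}

def verify_download(url: str, data: bytes) -> None:
    """Raise if the downloaded bytes don't match the embedded digest."""
    expected = EXPECTED_SHA256[url]
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected:
        raise RuntimeError(f"checksum mismatch for {url}")

verify_download("https://example.com/tool-v1.2.3-linux-x64.tar.gz", b"tool bytes")
print("ok")  # tampered bytes would raise RuntimeError instead
```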

We limit our work­flow and job per­mis­sions in mul­ti­ple places: we de­fault to read-only per­mis­sions at the or­ga­ni­za­tion level, and we ad­di­tion­ally start every work­flow with per­mis­sions: {} and only broaden be­yond that on a job-by-job ba­sis.

We iso­late our GitHub Actions se­crets, wher­ever pos­si­ble: in­stead of us­ing or­ga­ni­za­tion- or repos­i­tory-level se­crets, we use de­ploy­ment en­vi­ron­ments and en­vi­ron­ment-spe­cific se­crets. This al­lows us to fur­ther limit the blast ra­dius of a po­ten­tial com­pro­mise, as a com­pro­mised test or lint­ing job won’t have ac­cess to, for ex­am­ple, the se­crets needed to pub­lish re­lease ar­ti­facts.

To do these things, we lever­age GitHub’s own set­tings, as well as tools like ziz­mor (for sta­tic analy­sis) and pin­act (for au­to­matic pin­ning).

Beyond our CI/CD processes, we also take a num­ber of steps to limit both the like­li­hood and the im­pact of ac­count and repos­i­tory com­pro­mises within the Astral or­ga­ni­za­tion:

We limit the num­ber of ac­counts with ad­min- and other highly-priv­i­leged roles, with most or­ga­ni­za­tion mem­bers only hav­ing read and write ac­cess to the repos­i­to­ries they need to work on. This re­duces the num­ber of ac­counts that an at­tacker can com­pro­mise to gain ac­cess to our or­ga­ni­za­tion-level con­trols.

We en­force strong 2FA meth­ods for all mem­bers of the Astral or­ga­ni­za­tion, be­yond GitHub’s de­fault of re­quir­ing any 2FA method. In ef­fect, this re­quires all Astral or­ga­ni­za­tion mem­bers to have a 2FA method that’s no weaker than TOTP. If and when GitHub al­lows us to en­force only 2FA meth­ods that are phish­ing-re­sis­tant (such as WebAuthn and Passkeys only), we will do so.

We im­pose branch pro­tec­tion rules on an org-wide ba­sis: changes to main can­not be force-pushed and must al­ways go through a pull re­quest. We also for­bid the cre­ation of par­tic­u­lar branch pat­terns (like ad­vi­sory-* and in­ter­nal-*) to pre­vent pre­ma­ture dis­clo­sure of se­cu­rity work.

We impose tag protection rules that prevent release tags from being created until a release deployment succeeds, with the release deployment itself being gated on a manual approval by at least one other team member. We also prevent the updating or deletion of tags, making them effectively immutable once created. On top of that we layer a branch restriction: release deployments may only be created against main, preventing an attacker from using an unrelated first-party branch to attempt to bypass our controls.

Finally, we ban repos­i­tory ad­mins from by­pass­ing all of the above pro­tec­tions. All of our pro­tec­tions are en­forced at the or­ga­ni­za­tion level, mean­ing that an at­tacker who man­ages to com­pro­mise an ac­count that has ad­min ac­cess to a spe­cific repos­i­tory still won’t be able to dis­able our con­trols.

To help oth­ers im­ple­ment these kinds of branch and tag con­trols, we’re shar­ing a gist that shows some of the rule­sets we use. These rule­sets are spe­cific to our GitHub or­ga­ni­za­tion and repos­i­to­ries, but you can use them as a start­ing point for your own poli­cies!

There are cer­tain things that GitHub Actions can do, but can’t do se­curely, such as leav­ing com­ments on third-party is­sues and pull re­quests. Most of the time it’s bet­ter to just forgo these fea­tures, but in some cases they’re a valu­able part of our work­flows.

In these lat­ter cases, we use as­tral-sh-bot to safely iso­late these tasks out­side of GitHub Actions: GitHub sends us the same event data that GitHub Actions would have re­ceived (since GitHub Actions con­sumes the same web­hook pay­loads as GitHub Apps do), but with much more con­trol and much less im­plicit state.

However, there’s still a catch with GitHub Apps: an app doesn’t eliminate any sensitive credentials needed for an operation, it just moves them into an environment that doesn’t mix code and data as pervasively as GitHub Actions does. For example, an app won’t be susceptible to a template injection attack like a workflow would be, but could still contain SQLi, prompt injection, or other weaknesses that allow an attacker to abuse the app’s credentials. Consequently, it’s essential to treat GitHub App development with the same security mindset as any other software development. This also extends to untrusted code: using a GitHub App does not make it safe to run untrusted code, it just makes it harder to do so unexpectedly. If your processes need to run untrusted code, they must use pull_request or another “safe” trigger that doesn’t provide any privileged credentials to third-party pull requests.

With all that said, we’ve found that the GitHub App pat­tern works well for us, and we rec­om­mend it to other main­tain­ers and pro­jects who have sim­i­lar needs. The main down­side to it comes in the form of com­plex­ity: it re­quires de­vel­op­ing and host­ing a GitHub App, rather than writ­ing a work­flow that GitHub or­ches­trates for you. We’ve found that frame­works like Gidgethub make the de­vel­op­ment process for GitHub Apps rel­a­tively straight­for­ward, but that host­ing re­mains a bur­den in terms of time and cost.

It’s an un­for­tu­nate re­al­ity that there still aren’t great GitHub App op­tions for one-per­son and hob­by­ist open source pro­jects; it’s our hope that us­abil­ity en­hance­ments in this space can be led by com­pa­nies and larger pro­jects that have the re­sources needed to pa­per over GitHub Actions’ short­com­ings as a plat­form.

We rec­om­mend this tu­to­r­ial by Mariatta as a good in­tro­duc­tion to build­ing GitHub Apps in Python. We also plan to open source as­tral-sh-bot in the fu­ture.

So far, we’ve covered aspects that tie closely to GitHub, as the source host for Astral’s tools. But many of our users install our tools via other mechanisms, such as PyPI, Homebrew, and our Docker images. These distribution channels add another “link” to the metaphorical supply chain, and require discrete consideration:

Where possible, we use Trusted Publishing to publish to registries (like PyPI, crates.io, and NPM). This technique eliminates the need for long-lived registry credentials, in turn ameliorating one of the most common sources of package takeover (credential compromise in CI/CD platforms).

Where possible (currently our binary and Docker image releases), we generate Sigstore-based attestations. These attestations establish a cryptographically verifiable link between the released artifact and the workflow that produced it, in turn allowing users to verify that their build of uv, Ruff, or ty came from our actual release processes. You can see our recent attestations for uv as an example of this.1

We use GitHub’s im­mutable re­leases fea­ture to pre­vent the post-hoc mod­i­fi­ca­tion of the builds we pub­lish on GitHub. This ad­dresses a com­mon at­tacker piv­ot­ing tech­nique where pre­vi­ously pub­lished builds are re­placed with ma­li­cious builds. A vari­ant of this tech­nique was used in the re­cent Trivy at­tack, with the at­tacker force-push­ing over pre­vi­ous tags to in­tro­duce com­pro­mised ver­sions of the trivy-ac­tion and setup-trivy ac­tions.

We do not use caching to im­prove build times dur­ing re­leases, to pre­vent an at­tacker from com­pro­mis­ing our builds via a GitHub Actions cache poi­son­ing at­tack.

* To reduce the risk of an attacker publishing a new malicious version of our tools, we use a stack of protections on our release processes:

Our re­lease process is iso­lated within a ded­i­cated GitHub de­ploy­ment en­vi­ron­ment. This means that jobs that don’t run in the re­lease en­vi­ron­ment (such as tests and lin­ters) don’t have ac­cess to our re­lease se­crets.

In or­der to ac­ti­vate the re­lease en­vi­ron­ment, the ac­ti­vat­ing job must be ap­proved by at least one other priv­i­leged mem­ber of the Astral or­ga­ni­za­tion. This mit­i­gates the risk of a sin­gle rogue or com­pro­mised ac­count be­ing able to pub­lish a ma­li­cious re­lease (or ex­fil­trate re­lease se­crets); the at­tacker needs to com­pro­mise at least two dis­tinct ac­counts, both with strong 2FA.

In repositories (like uv) where we have a large number of release jobs, we use a distinct release-gate environment to work around the fact that GitHub triggers approvals for every job that uses the release environment. This retains the two-person approval requirement, with one additional hop: a small, minimally-privileged GitHub App mediates the approval from release-gate to release via a deployment protection rule.

Finally, we use a tag pro­tec­tion rule­set to pre­vent the cre­ation of a re­lease’s tag un­til the re­lease de­ploy­ment suc­ceeds. This pre­vents an at­tacker from by­pass­ing the nor­mal re­lease process to cre­ate a tag and re­lease di­rectly.


* For users who install uv via our standalone installer, we enforce the integrity of the installed binaries via checksums embedded directly into the installer’s source code2.

Our release processes also involve “knock-on” changes, like updating our public documentation, version manifests, and the official pre-commit hooks. These are privileged operations that we protect through dedicated bot accounts and fine-grained PATs issued through those accounts.

Going for­wards, we’re also look­ing at adding code­sign­ing with of­fi­cial de­vel­oper cer­tifi­cates on ma­cOS and Windows.

Last but not least is the ques­tion of de­pen­den­cies. Like al­most all mod­ern soft­ware, our tools de­pend on an ecosys­tem of third-party de­pen­den­cies (both di­rect and tran­si­tive), each of which is in an im­plicit po­si­tion of trust. Here are some of the things we do to mea­sure and mit­i­gate up­stream risk:

We use de­pen­dency man­age­ment tools like Dependabot and Renovate to keep our de­pen­den­cies up­dated, and to no­tify us when our de­pen­den­cies con­tain known vul­ner­a­bil­i­ties.

In gen­eral, we em­ploy cooldowns in con­junc­tion with the above to avoid up­dat­ing de­pen­den­cies im­me­di­ately af­ter a new re­lease, as this is when tem­porar­ily com­pro­mised de­pen­den­cies are most likely to af­fect us.

Both Dependabot and Renovate sup­port cooldowns, and uv also has built-in sup­port. We’ve found Renovate’s abil­ity to con­fig­ure cooldowns on a per-group ba­sis to be par­tic­u­larly use­ful, as it al­lows us to re­lax the cooldown re­quire­ment for our own (first-party) de­pen­den­cies while keep­ing it in place for most third-party de­pen­den­cies.
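The cooldown idea itself is simple. An illustrative sketch (my own, not Renovate's or Dependabot's actual logic) of per-group windows:

```python
from datetime import datetime, timedelta

# Illustrative: only accept an update once the release has aged past its
# group's cooldown window, with a relaxed window for first-party packages.
COOLDOWNS = {
    "first-party": timedelta(days=0),
    "third-party": timedelta(days=7),
}

def update_allowed(group: str, released_at: datetime, now: datetime) -> bool:
    cooldown = COOLDOWNS.get(group, COOLDOWNS["third-party"])
    return now - released_at >= cooldown
```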

We main­tain so­cial con­nec­tions with many of our up­stream de­pen­den­cies, and we per­form both reg­u­lar and se­cu­rity con­tri­bu­tions with them (including fixes to their own CI/CD and re­lease processes). For ex­am­ple, here’s a re­cent con­tri­bu­tion we made to apache/​open­dal-re­qsign to help them ratchet down their CI/CD se­cu­rity.

Separately, we main­tain so­cial con­nec­tions with ad­ja­cent pro­jects and work­ing groups in the ecosys­tem, in­clud­ing the Python Packaging Authority and the Python Security Response Team. These con­nec­tions have proven in­valu­able for shar­ing in­for­ma­tion, such as when a re­port against pip also af­fects uv (or vice versa), or when a se­cu­rity re­lease for CPython will re­quire a re­lease of python-build-stand­alone.

We’re con­ser­v­a­tive about adding new de­pen­den­cies, and we look to elim­i­nate de­pen­den­cies where prac­ti­cal and min­i­mally dis­rup­tive to our users. Over the com­ing re­lease cy­cles, we hope to re­move some de­pen­den­cies re­lated to sup­port for rarely used com­pres­sion schemes, as part of a larger ef­fort to align our­selves with Python pack­ag­ing stan­dards.

More gen­er­ally, we’re also con­ser­v­a­tive about what our de­pen­den­cies bring in: we try to avoid de­pen­den­cies that in­tro­duce bi­nary blobs, and we care­fully re­view our de­pen­den­cies’ fea­tures to dis­able func­tion­al­ity that we don’t need or de­sire.

Finally, we con­tribute fi­nan­cially (in the form of our OSS Fund) to the sus­tain­abil­ity of pro­jects that we de­pend on or that push the OSS ecosys­tem as a whole for­wards.

Open source se­cu­rity is a hard prob­lem, in part be­cause it’s re­ally many prob­lems (some tech­ni­cal, some so­cial) mas­querad­ing as one. We’ve cov­ered many of the tech­niques we use to tackle this prob­lem, but this post is by no means an ex­haus­tive list. It’s also not a sta­tic list: at­tack­ers are dy­namic par­tic­i­pants in the se­cu­rity process, and de­fenses nec­es­sar­ily evolve in re­sponse to their chang­ing tech­niques.

With that in mind, we’d like to re­call some of the points men­tioned above that de­serve the most at­ten­tion:

Respect the lim­its of CI/CD: it’s ex­tremely tempt­ing to do every­thing in CI/CD, but there are some things that CI/CD (and par­tic­u­larly GitHub Actions) just can’t do se­curely. For these things, it’s of­ten bet­ter to forgo them en­tirely, or iso­late them out­side of CI/CD with a GitHub App or sim­i­lar.

With that said, it’s im­por­tant to not over­cor­rect and throw CI/CD away en­tirely: as men­tioned above, CI/CD is a crit­i­cal part of our se­cu­rity pos­ture and prob­a­bly yours too! It’s un­for­tu­nate that se­cur­ing GitHub Actions is so dif­fi­cult, but we con­sider it worth the ef­fort rel­a­tive to the ve­loc­ity and se­cu­rity risks that would come with not us­ing hosted CI/CD at all.

In par­tic­u­lar, we strongly rec­om­mend us­ing CI/CD for re­lease processes, rather than re­ly­ing on lo­cal de­vel­oper ma­chines, par­tic­u­larly when those re­lease processes can be se­cured with mis­use- and dis­clo­sure-re­sis­tant cre­den­tial schemes like Trusted Publishing.

Isolate and elim­i­nate long-lived cre­den­tials: the sin­gle most com­mon form of post-com­pro­mise spread is the abuse of long-lived cre­den­tials. Wherever pos­si­ble, elim­i­nate these cre­den­tials en­tirely (for ex­am­ple, with Trusted Publishing or other OIDC-based au­then­ti­ca­tion mech­a­nisms).

Where elim­i­na­tion is­n’t pos­si­ble, iso­late these cre­den­tials to the small­est pos­si­ble scope: put them in spe­cific de­ploy­ment en­vi­ron­ments with ad­di­tional ac­ti­va­tion re­quire­ments, and only is­sue cre­den­tials with the min­i­mum nec­es­sary per­mis­sions to ac­com­plish a given task.

Strengthen re­lease processes: if you’re on GitHub, use de­ploy­ment en­vi­ron­ments, ap­provals, tag and branch rule­sets, and im­mutable re­leases to re­duce the de­grees of free­dom the at­tacker has in the event of an ac­count takeover or repos­i­tory com­pro­mise.

Maintain aware­ness of your de­pen­den­cies: main­tain­ing aware­ness of the over­all health of your de­pen­dency tree is crit­i­cal to un­der­stand­ing your own risk pro­file. Use both tools and el­bow grease to keep your de­pen­den­cies se­cure, and to help them keep their own processes and de­pen­den­cies se­cure too.

Finally, we’re still eval­u­at­ing many of the tech­niques men­tioned above, and will al­most cer­tainly be tweak­ing (and strength­en­ing) them over the com­ing weeks and months as we learn more about their lim­i­ta­tions and how they in­ter­act with our de­vel­op­ment processes. That’s to say that this post rep­re­sents a point in time, not the fi­nal word on how we think about se­cu­rity for our open source tools.

...

Read the original on astral.sh »

7 316 shares, 16 trendiness

Reallocating $100/Month Claude Code spend to Zed and OpenRouter

I've been disappointed to find that I'm hitting Claude limits faster than before. For context, I use both Claude Code and the Claude desktop app for work and pay $100/month for the privilege of hitting limits. I'm not the only one (this was AMD's senior director of AI); there are numerous other reports across Reddit and Twitter.

My usage pattern is “bursty”, so I'm not using the windows all the time throughout the day, but I find it incredibly frustrating to hit a limit midway through a coding session.

This article covers how I'm reallocating that spend to other tools and models while gaining more flexibility at the same time.

I like options, and while Opus is undoubtedly the market leader for agentic coding, there are other models I like to use to balance cost and speed depending on the complexity of the task at hand. I'm looking at how I can use different models with an agent harness.

You don't realise how slow/laggy VSCode and all of its forks are until you try out Zed. The built-in agent harness is basic but nice, with the ability to follow the agent around as it modifies files and to add new profiles to customise the agent's behaviour. Like Cursor, it shows the context usage and the rules being applied to the current session. If you continue to use Claude Code or other tools like Mistral Vibe, Zed integrates them directly into the editor using the Agent Client Protocol (ACP) - see supported agents.

The biggest dis­ad­van­tage is def­i­nitely the lack of ex­ten­sions com­pared to VSCode but there are enough to cover com­mon lan­guages and com­mon tasks.

Zed do offer usage-based pricing once you have used up the credits they provide; however, their token prices are higher than going directly to the API yourself. This is why I prefer to use the OpenRouter integration in Zed instead. A nice side benefit is that you get the more native context window sizes. For some reason Zed limits the Gemini 3.1 context to 200k tokens in their native integration, but with OpenRouter you can make use of the full 1M. Their docs say this may change in the future.

The largest selection of models and providers that I know of is OpenRouter, and it's easy enough to sign up, pre-pay some credits, and get an API key.

I don't like that I have a set window of Anthropic credits. If I use it up, I have to wait for it to reset (or pay). But when I'm not using it, I'm missing out on that window of opportunity. Instead, I can top up my OpenRouter credits, which only expire after 365 days if unused. Then I can use the credits when I'm working and save/roll them over when I'm not.

To minimise data-exposure risk, I have chosen not to consent to OpenRouter being able to “use inputs/outputs to improve the product” (though you get a 1% discount if you do), and I have enabled “Zero Data Retention (ZDR) Endpoints Only” in my Workspace Guardrail settings. You do lose out on some models here - for example, qwen/qwen3.6-plus, which is only hosted on Alibaba Cloud - however, that's a small price I'm willing to pay.

Cursor was (or still semi-is) my preferred editor. As a VSCode fork, all extensions are available. They were an early adopter of the plan mode -> agent mode workflow, and now support a new debug mode, a more advanced print-style debugger that the agent can also interact with.

Cursor also supports different types of rule application, something I personally love and am surprised other agent harnesses haven't adopted. Most agent harnesses take an “apply intelligently” approach, trying to let the AI decide when to include a rule based on its description. But Cursor also offers the ability to apply a rule only to specific files. I have rules that only apply to *.py files, or even **/models.py, etc. I am able to make the most of my context window by explicitly setting those rules to be added only for certain filepath patterns. It guarantees their usage.

Choosing Cursor, you get API-rate pricing above the usage included in your plan (and you can cap this so your total spend is limited to $100), but you are still paying a minimum of $20/month, which does not roll over to the next month.

I know - I said I'm redirecting funds away from Anthropic, but it is possible to continue using the Claude Code agent harness with other models (or even Opus, should you want to). We might want to do this because Claude Code is undeniably a great harness; however, we need to configure Claude Code to use OpenRouter rather than the Anthropic API.

First, log out of Claude Code if you have been using it before.

Next, set some environment variables to configure the OpenRouter endpoints and which models you want to use for “Opus”, “Sonnet”, “Haiku” and “SubAgents” (I recommend setting these in your ~/.zshrc or ~/.bashrc file so they persist):

export OPENROUTER_API_KEY=""
export ANTHROPIC_BASE_URL="https://openrouter.ai/api"
export ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY"
export ANTHROPIC_API_KEY="" # Important: Must be explicitly empty
# Set these models to whichever model you would like to use on OpenRouter
export ANTHROPIC_DEFAULT_OPUS_MODEL="anthropic/claude-opus-4.6"
export ANTHROPIC_DEFAULT_SONNET_MODEL="anthropic/claude-sonnet-4.6"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="anthropic/claude-haiku-4.5"
export CLAUDE_CODE_SUBAGENT_MODEL="anthropic/claude-opus-4.6"

Verify that Claude Code is using your new config (you may need to restart your terminal or source ~/.zshrc):
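As an aside, here's an illustrative sanity check (my own, not part of Claude Code) of the routing assumptions those variables encode:

```python
# Illustrative: requests will route through OpenRouter only if the base URL
# override is in place, an auth token is set, and ANTHROPIC_API_KEY is
# explicitly empty so it can't take precedence over the token.
def openrouter_config_ok(env: dict) -> bool:
    return (
        env.get("ANTHROPIC_BASE_URL", "").startswith("https://openrouter.ai")
        and bool(env.get("ANTHROPIC_AUTH_TOKEN"))
        and env.get("ANTHROPIC_API_KEY") == ""
    )
```

You could run this against `dict(os.environ)` after sourcing your shell config.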

There are a multitude of other coding agent harnesses that can be used from the command line with OpenRouter. I've tried a few, but none have stuck; here's the list for you to try, and my brief thoughts on them:

* OpenCode - Typescript - The one I use the most. Good support for a lot of things. Very popular.

* Crush - Go - I want to like it. It has a distinct style choice (that I don't mind). It's performant. But it's a pain to configure custom models (all manual), so it's annoying when trying out new ones.

Even for popular tools that typically limit you to their own models, like Gemini CLI, there are often forks which attempt to make them OpenRouter-compatible. This is worth checking if you like a particular harness but want to try other models.

I'm now a happy subscriber to Zed at a reasonable $10/month. I actually also maintain my Cursor subscription for $20/month, as I want to see where they go with their new Cursor 3 agent orchestrator. The other $70 gets automatically added to my OpenRouter credits each month, and those don't get lost - they roll over, waiting for me to use them.

If you’re reg­u­larly hit­ting Claude lim­its and want to give other mod­els a shot (but you can still use Opus when you need to), I highly rec­om­mend giv­ing it a try. You can get started with Zed for free and load up OpenRouter with $20 worth of cred­its with­out any sub­scrip­tion.

...

Read the original on braw.dev »

8 301 shares, 17 trendiness

FreeBSD Laptop Compatibility

Top Laptops for use with FreeBSD

Each laptop is scored based on an aggregate of:

* how many laptop components are detected, where each fully auto-detected component adds a point

* whether devices have degraded functionality, reducing the score by 0.5-1.5 based on severity and how important the device is to the laptop experience (wi-fi/graphics weighted more)

* user-provided comments about test results, and how involved setup is for that device
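The rubric above can be read as a simple aggregate. A sketch of the described weighting (my own reading, not the Foundation's actual code):

```python
# One point per fully auto-detected component, minus a 0.5-1.5 penalty per
# degraded device depending on severity and importance. The third factor
# (user comments, setup effort) is qualitative and omitted here.
def laptop_score(detected_components: int, degradation_penalties: list) -> float:
    assert all(0.5 <= p <= 1.5 for p in degradation_penalties)
    return detected_components - sum(degradation_penalties)
```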

...

Read the original on freebsdfoundation.github.io »

9 269 shares, 14 trendiness

How Pizza Tycoon simulated traffic on a 25 MHz CPU — Pizza Legacy Blog

I've been working on Pizza Legacy, an open-source reimplementation of the 1994 DOS game Pizza Tycoon. The game has a close-zoom street view of the cities, and when you scroll around it you can see a steady stream of cars driving through the streets. Maybe 20 or 30 tiny sprites at a time, but they navigate the road network, queue behind each other at intersections, and generally look like a living city. Yes, it was a bit buggy, because sometimes they would drive through each other, but it was good enough to give some sense of life to the map. All that on a 25 MHz 386 CPU.

The first thing I im­ple­mented in 2010 when I started this pro­ject was that close zoom level, but it took 14 years be­fore I fi­nally had the cars dri­ving around on it, in a way that I was happy about; I had mul­ti­ple at­tempts over the years but every time I ran into prob­lems I got stuck build­ing an overly com­pli­cated sys­tem that was hard to rea­son about and no fun to work on.

One at­tempt in 2017 in­volved each tile keep­ing track of which po­si­tions were oc­cu­pied, and every car had to ask the grid for per­mis­sion be­fore mov­ing, re­serv­ing and free­ing slots as it went. It ba­si­cally turned into a shared lock­ing sys­tem just to move a few pix­els, with cars and tiles con­stantly try­ing to stay in sync.

All the while I had this nag­ging thought in the back of my mind: the orig­i­nal ran this on a 25 MHz CPU, so why were my ver­sions al­ways so com­pli­cated?

Finally I went to the as­sem­bly (which I had spent many years slowly un­der­stand­ing bet­ter and doc­u­ment­ing) to fig­ure out what the orig­i­nal was do­ing, with the help of LLMs which were (a cou­ple of years ago) this new and ex­cit­ing tech­nol­ogy that could bet­ter un­der­stand as­sem­bly than I could.

Now that I fi­nally have it work­ing I can see where I went wrong: I went into it with a brain full of mod­ern con­cepts: scene graphs, path find­ing, col­li­sion de­tec­tion, and of course plenty of CPU to run it all!

First, let’s look at what a city ac­tu­ally looks like:

As you can see there are two-lane roads, T-junctions, intersections, and corners. Maps are made up of a grid of 160 by 120 tiles, where each tile is one of the tiles from landsym.vga:

The orig­i­nal landsym.vga file with added bor­ders be­tween tiles and text to in­di­cate the row and col­umn off­set. Byte 0x54 means col­umn 5, row 4 (roof tile of a de­pot).

Back to the traf­fic; the key in­sight that makes it pos­si­ble to run this sys­tem on such a slow CPU: cars don’t need to know where they’re go­ing. Each road tile type car­ries its own di­rec­tion. Road tile 0x16 is the bot­tom part of a hor­i­zon­tal road, mean­ing that cars can only drive from left to right on these roads. Similarly road tile 0x06 is just for right to left traf­fic, then 0x26 and 0x36 are the same but for ver­ti­cal traf­fic.

This means the city is ba­si­cally just a bunch of one-way roads, once a car knows which tile it sits on, it can keep go­ing.

Corners work the same way: 0x56 (CORNER_SW in my enum) is the corner that allows the car to either keep going west or turn south. When a car hits a corner it flips a coin: 50% chance of going straight on, 50% chance of taking the turn. The maps have been designed in such a way that the roads always make sense, which means that next to the CORNER_SW there is another tile that is either south-to-north traffic (so we have to go south) or another edge tile that allows either a turn or straight on.

There is one extra rule to keep traffic looking natural: if you just took a left turn, the next corner forces you straight on - no two consecutive left turns.

Valid di­rec­tions per tile type in­di­cated with ar­rows.
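The corner rule above reads naturally as a two-branch decision. A free rendering in Python (my own sketch of the described behaviour, not the game's assembly):

```python
import random

# A corner tile offers two exits: continue straight, or take the turn.
# Normally it's a coin flip; but if the car's previous corner was a left
# turn, the next corner forces it straight on (no two consecutive lefts).
def corner_decision(straight: str, turn: str,
                    just_turned_left: bool, rng: random.Random) -> str:
    if just_turned_left:
        return straight
    return turn if rng.random() < 0.5 else straight
```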

Cars move one pixel per frame. Each tick the main loop checks if a car is blocked, and if not, in­cre­ments or decre­ments its screen co­or­di­nate by one de­pend­ing on di­rec­tion. East adds 1 to X. North sub­tracts 1 from Y.

There’s a sec­ond progress counter, count­ing down from 16 to 1. When it hits zero it re­sets to 16 and the game runs the tile-bound­ary logic: look up the next tile, de­cide the new di­rec­tion, up­date the sprite frame (to vi­su­ally turn the car in the new di­rec­tion). Since each tile is 16 pix­els wide and tall, this runs ex­actly once per tile crossed. The per-pixel move hap­pens every tick; the heav­ier tile logic runs only 1/16th as of­ten.

When a car first spawns, progress is set to a ran­dom value be­tween 1 and 16. That stag­gers all the cars so their tile-bound­ary checks don’t all land on the same frame, spread­ing the work out evenly.
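Put together, the per-tick movement looks something like this (field names are mine, following the description above, not the original's data layout):

```python
# One pixel per tick, plus a 16-step progress counter: the heavier
# tile-boundary logic fires exactly once per 16-pixel tile crossed.
STEP = {"E": (1, 0), "W": (-1, 0), "S": (0, 1), "N": (0, -1)}

def tick(car: dict) -> bool:
    """Advance the car one pixel; return True on a tile boundary."""
    dx, dy = STEP[car["dir"]]
    car["x"] += dx
    car["y"] += dy
    car["progress"] -= 1
    if car["progress"] == 0:
        car["progress"] = 16   # reset; tile is 16 pixels wide and tall
        return True            # run direction/sprite logic this tick
    return False
```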

Unlike my various attempts at fancy collision detection, the original uses a straightforward pairwise check: for each car, walk the whole car list and ask “would these two overlap next tick?” If yes, set a wait counter of 10 ticks on the blocked car and move on to the next car.

But the col­li­sion de­tec­tion code is writ­ten to bail out as fast as pos­si­ble. The very first thing it does is ex­tract the other car’s di­rec­tion; be­cause roads are one-way, east and west never share a road, so an east car and a west car can never col­lide. That pair re­turns im­me­di­ately, no co­or­di­nate reads at all. Same for east and south, west and north, and so on.

With say 25 cars in a typ­i­cal city view there are 625 pair­wise calls per frame. About half of those re­turn in just a few CPU in­struc­tions on the di­rec­tion check alone. Most of the rest fail the lane check (same-direction cars have to be on the same road, which is one equal­ity com­par­i­son). The pairs that ac­tu­ally reach any co­or­di­nate arith­metic are usu­ally sin­gle dig­its.
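The early-exit structure might look like this (the pair table and the 16-pixel overlap test are my simplification of the quirky original, not a faithful port):

```python
# Reject cheap cases first: one-way roads mean opposite directions never
# share a road, so those pairs return before any coordinate is read.
NEVER_COLLIDE = {("E", "W"), ("W", "E"), ("N", "S"), ("S", "N")}

def may_collide(a: dict, b: dict) -> bool:
    if (a["dir"], b["dir"]) in NEVER_COLLIDE:
        return False                # direction check alone rejects the pair
    if a["dir"] == b["dir"] and a["road"] != b["road"]:
        return False                # same direction, different road
    # Only the few remaining pairs reach coordinate arithmetic.
    return abs(a["x"] - b["x"]) < 16 and abs(a["y"] - b["y"]) < 16
```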

When a car does get blocked, the 10-tick wait creates natural traffic jams: cars bunch up, the front one eventually finds the way clear, and the queue drains. There are some bugs in the system (especially when you let it run for a while and there are a lot of intersections), but given that the point is not to run an accurate driving simulation but just to show some movement on the screen, it works perfectly well and very efficiently. The collision detection system has some quirks: some combinations are never checked (e.g. an eastbound car never intersects with a southbound car), which might be the reason behind some of the bugginess.

When you en­ter the close-zoom view, the game scans all 132 tiles in the view­port (12 columns by 11 rows), and for each road tile it rolls against the dis­tric­t’s traf­fic den­sity to de­cide whether to spawn a car there, so higher-traf­fic dis­tricts are busier. Corner tiles are ex­cluded from spawn points, so cars only ap­pear on straight road tiles.
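A sketch of that spawn pass (the 0-255 density scale is my assumption; the article only says the game rolls against the district's density):

```python
import random

# For each straight road tile in the 12x11 viewport, roll against the
# district's traffic density; corner tiles never serve as spawn points.
def spawn_points(tiles: list, density: int, rng: random.Random) -> list:
    return [t for t in tiles
            if t["road"] and not t["corner"] and rng.randrange(256) < density]
```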

Cars that drive off the edge of the screen are respawned as a new (random) color car facing the other direction, on the tile going the other direction. This means the game doesn't have to worry about respawning cars: every time a car drives off going east, it spawns a new car below going west, and so on.
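The edge-respawn trick can be sketched as follows (field names and the +1 lane offset are illustrative, not the game's actual layout):

```python
import random

# A car leaving the view eastbound comes back as a fresh, randomly-colored
# westbound car on the paired lane, with a staggered progress counter.
OPPOSITE = {"E": "W", "W": "E", "N": "S", "S": "N"}

def respawn(car: dict, rng: random.Random) -> dict:
    return {
        "dir": OPPOSITE[car["dir"]],
        "lane": car["lane"] + 1,        # the adjacent lane going the other way
        "color": rng.randrange(8),      # new random sprite color
        "progress": rng.randint(1, 16), # stagger the tile-boundary checks
    }
```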

Pay at­ten­tion to the cars dri­ving off the map at the edges, no­tice they are re­placed by cars dri­ving the op­po­site di­rec­tion.

When you scroll, the newly ex­posed strip of tiles gets the same treat­ment of hav­ing a chance of hav­ing cars spawned on them.

Looking back at my failed attempts, I was designing for problems that the original just didn't consider. Cars don't need pathfinding because the map tells them where they can go. Collision detection was cheap because the early-exit logic makes most pairs basically free. There's no velocity or physics because 1 pixel per tick is enough to look convincing. When you're about to hit something, just pause for 10 ticks; when you have to make a turn, you just travel half the width of the tile and then turn, which works on every tile in any direction.

I reimplemented it following the assembly pretty closely, so it's just a couple of switch statements with different routing options per tile type; you can see the decide_desired_direction method in Car.cpp.

...

Read the original on pizzalegacy.nl »

10 265 shares, 27 trendiness

Lzon.ca. A personal blog, by a programmer and IT expert.

I'd like to tell the story of a job I just completed for a customer, so that I can make a point about how I feel Microsoft and other large technology companies are actively hostile to their users.

I re­ceived a call from my neigh­bour ask­ing if I would be will­ing to help her hus­band with an is­sue he’d been hav­ing with his lap­top. As the proud new owner of my own IT ser­vices com­pany, I of course agreed to take a look.

I spoke with my neigh­bour’s hus­band, and im­me­di­ately saw that he was not tech lit­er­ate. I learned to iden­tify the type while do­ing IT work for my pre­vi­ous em­ployer. This made un­der­stand­ing his prob­lem dif­fi­cult, but through con­ver­sa­tion we did man­age to come to an un­der­stand­ing about what the real is­sue was that he was ex­pe­ri­enc­ing.

What he was seeing was that he was no longer receiving email in Outlook, and that there was an error message claiming he had run out of ‘available storage’, or some other similar nonsense. He is a very light email user, and he knows it. He was confused as to why he'd run out of storage. I was confused as well, at first.

Through investigation I discovered that the Outlook email service uses Onedrive for storage of all messages and attachments. He had 5 GB of available storage, the amount given with his free account. This still didn't explain why he was seeing that error message; there was no way he had consumed 5 GB of storage with just his email use.

Unsurprisingly, his Onedrive storage wasn't filled by his email; it was filled by the personal files from his Windows 11 desktop. Did he configure Windows to save those files to his Onedrive directory instead of his local home directory? Of course not; that was done by default. Did he even know this was happening? Also no. He had no idea until he saw that error message, which oh-so-helpfully offered to ‘solve’ his problem with a subscription to additional paid storage capacity on the account.

He did man­age to loosely un­der­stand what was hap­pen­ing, enough at least to start delet­ing files from his com­puter to try and make the er­ror mes­sage go away. I was never able to con­firm with him, but I sus­pect that he deleted files (including fam­ily pho­tos) for which he had no other backup.

I will be blunt, this in­fu­ri­ates me. This was­n’t the first time I’ve seen this. I saw it many times while work­ing for my pre­vi­ous em­ployer. Microsoft has in­ten­tion­ally bro­ken a fun­da­men­tal as­sump­tion about how files are stored on a com­puter run­ning Windows. They do this with­out ask­ing the user, and with­out ad­e­quately ex­plain­ing what they have done. Microsoft is very ob­vi­ously em­ploy­ing dark pat­terns in or­der to goad its users into pay­ing for Onedrive stor­age.

I'm a computer nerd, and if you are reading this you probably are as well. We can change that setting ourselves without much thought, and we probably have backups of our important data in case recovery is necessary. But many people are extremely utilitarian about their computer use. They use their computers only to the degree that they must to serve their other interests in life. They also trust that their property, the device that cost them hundreds of dollars, isn't trying to cheat them like some back-alley con artist.

This is­n’t a game. My cus­tomer is­n’t a num­ber on a spread­sheet, merely an in­cre­ment to­wards reach­ing some use­less KPI. He deleted fam­ily pho­tos to try and get that er­ror mes­sage to go away, so that he could just re­ceive emails again. He may not un­der­stand what hap­pened, but he’s not stu­pid. He sus­pected that this was a scam to get him to pay for some­thing he did­n’t need, he just did­n’t un­der­stand how the scam worked.

First and fore­most, I per­formed a com­plete backup of his data. I took every­thing that I could find lo­cally on the ma­chine, as well as every­thing from the Onedrive ac­count, in­clud­ing the Trash. It was­n’t much, only a few gi­ga­bytes, which I trans­ferred to a sep­a­rate USB drive.

I care­fully trans­ferred all files out of the Onedrive di­rec­tory struc­ture and back into his home folder. The Windows file ex­plorer did not make this easy or in­tu­itive.

I proceeded to delete everything from the Onedrive account through the web interface. I noticed that deleting files merely moved them into the Trash, which was still counted towards total storage usage. I assumed this was yet another subtle dark pattern.

I al­luded to chang­ing set­tings as a way to solve this. The ap­proach we of­ten took at my pre­vi­ous em­ployer was to sim­ply dis­able Onedrive in the Windows startup list. That could have worked in this case but I had a bet­ter idea. Remove Onedrive en­tirely.

I have mus­cle mem­ory at this point for how to do it, if you were won­der­ing this is the pro­ce­dure I used:

Open an admin Terminal and load up Chris Titus' winutil.

This en­tirely re­moves the Onedrive ap­pli­ca­tion from Windows, in­clud­ing all in­te­gra­tions into other pro­grams, such as the file ex­plorer.

I then proceeded to delete everything from the Onedrive account, including the Trash. The error messages finally went away in Outlook and he was able to receive email messages again.

I may be preaching to the choir, but regardless I want to use this post as my opportunity to make these points in my own way. Microsoft is actively hostile towards its users. They have become a basket-case of an organisation, where chasing irrelevant KPIs has become more important than product quality, or even baseline respect for their users. The exact same can be said, to varying degrees, of every other large consumer-tech company.

I see this as the re­sult of bad in­cen­tive struc­tures. A toxic game the­ory that has been al­lowed to play out over many years with­out proper scrutiny. The lefty in me might think that this is a man­i­fes­ta­tion of Late Capitalism. If so then it feels like we’re about 30 sec­onds away from mid­night.

I think a lot about the pos­si­ble ways to tweak said in­cen­tive struc­tures, to build a choice ar­chi­tec­ture that can pre­vent even the first step in the process that led to this.

Days like to­day, when I’m think­ing about the real ac­tual ways that this non­sense im­pacts real ac­tual peo­ple, I can’t ig­nore the hu­mans in this loop. People need to ac­tu­ally take re­spon­si­bil­ity for their choices, not just turn their brain off when the num­ber looks right in the spread­sheet.

If you en­joyed this post, let me know! Email me at mail@lzon.ca, or reach out through one of my so­cial ac­counts linked on the home­page.

...

Read the original on lzon.ca »
