10 interesting stories served every morning and every evening.




1 574 shares, 24 trendiness

Finding and Fixing Ghostty's Largest Memory Leak

A few months ago, users started reporting that Ghostty was consuming absurd amounts of memory, with one user reporting 37 GB after 10 days of uptime. Today, I’m happy to say the fix has been found and merged. This post is an overview of what caused the leak, a look at some of Ghostty’s internals, and some brief descriptions of how we tracked it down.1

The leak had been present since at least Ghostty 1.0, but only recently did popular CLI applications (particularly Claude Code) start producing the right conditions to trigger it at scale. The limited conditions that triggered the leak are what made it particularly tricky to diagnose.

The fix is merged, is available in tip/nightly releases, and will be part of the tagged 1.3 release in March.

To understand the bug, we first need to understand how Ghostty manages terminal memory. Ghostty uses a data structure called the PageList to store terminal content. The PageList is a doubly-linked list of memory pages that store the terminal content (characters, styles, hyperlinks, etc.).

The underlying “pages” are not single virtual memory pages, but they are a contiguous block of memory aligned to page boundaries and composed of an even multiple of system pages.2

These pages are allocated using mmap. mmap isn’t particularly fast, so to avoid constant syscalls, we use a memory pool. When we need a new page, we pull from the pool. When we’re done with a page, we return it to the pool for reuse.

The pool uses a standard size for pages. Think of it like buying standard-sized shipping boxes: most things people ship fit in a standard box, and having a standard box comes with various efficiencies.

But sometimes terminals need more memory than a standard page provides. If a set of lines has many emoji, styles, or hyperlinks, we need a larger page. In these cases, we allocate a non-standard page directly with mmap, bypassing the pool entirely. This is typically a rare scenario.

When we “free” a page, we apply some simple logic:

If the page is ≤ standard size: return it to the pool

If the page is > standard size: call munmap to free it
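As a rough illustration of the scheme described above, here is a minimal C sketch. Ghostty itself is written in Zig, and the names, the pool structure, and the 64 KB standard size here are assumptions for the example, not Ghostty’s actual internals.

```c
#include <stdlib.h>
#include <sys/mman.h>

#define STD_PAGE_SIZE (64 * 1024)        /* hypothetical standard page size */

typedef struct page {
    struct page *next;                   /* pool free-list linkage */
    size_t       size;                   /* size recorded in the page metadata */
    void        *mem;                    /* mmap'd backing memory */
} page_t;

static page_t *pool;                     /* free-list of standard-sized pages */

/* Allocate a page: standard sizes come from the pool when possible;
 * larger ("non-standard") pages bypass the pool entirely. */
static page_t *page_alloc(size_t size) {
    if (size <= STD_PAGE_SIZE && pool) {
        page_t *p = pool;
        pool = p->next;
        return p;                        /* reuse a pooled standard page */
    }
    size_t alloc_size = size <= STD_PAGE_SIZE ? STD_PAGE_SIZE : size;
    page_t *p = malloc(sizeof *p);
    p->size = alloc_size;
    p->mem  = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p;
}

/* Free a page: the decision is driven entirely by the recorded size. */
static void page_free(page_t *p) {
    if (p->size <= STD_PAGE_SIZE) {      /* standard size: back to the pool */
        p->next = pool;
        pool = p;
    } else {                             /* non-standard: actually unmap it */
        munmap(p->mem, p->size);
        free(p);
    }
}
```

The key detail for the bug described below is that page_free trusts p->size: if the metadata claims a standard size, the backing memory is assumed to be a pooled, standard-sized mapping.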

This is the core background for terminal memory management in Ghostty, and the idea itself is sound. A logic bug around an optimization is what produced the leak, as we’ll see next.

There’s one more background detail we need to cover to understand the bug: scrollback pruning.

Ghostty has a scrollback-limit configuration that caps how much history is retained. When you hit this limit, we delete the oldest pages in the scrollback buffer to free up memory.

But this often happens in a super hot path (e.g. when outputting large amounts of data quickly), and allocating and freeing memory pages is expensive, even with the pool. Therefore, we have an optimization: reuse the oldest page as the newest page when we reach the limit.

This optimization works great. It requires zero allocations and uses only some quick pointer manipulations to move the page from the front to the back of the list. We do some metadata cleanup to “clear” the page but otherwise leave the previous memory intact.

It’s fast and empirically speeds up scrollback-heavy workloads significantly.

During the scrollback pruning optimization, we always resized our page back to standard size. But we didn’t resize the underlying memory allocation itself; we only noted the resize in the metadata. The underlying memory was still the large non-standard mmap allocation, but now the PageList thought it was standard sized.

Eventually, we’d free the page under various circumstances (e.g. when the user closes the terminal, but also other times). At that point, we’d see the page memory was within the standard size, assume it was part of the pool, and never call munmap on it. A classic leak.
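Continuing the toy sketch from earlier, the pruning optimization effectively did the following; again, this is a hypothetical C rendering rather than Ghostty’s real Zig code.

```c
/* Scrollback pruning fast path (buggy version): reuse the oldest page
 * as the newest page without any allocation. */
static void page_reuse_for_scrollback(page_t *p) {
    /* ... unlink from the front of the PageList, wipe row metadata ... */
    p->size = STD_PAGE_SIZE;   /* metadata-only "resize" back to standard */
    /* BUG: the backing mmap region is untouched and may still be a large,
     * non-standard mapping that now looks pool-sized to page_free(). */
    /* ... relink at the back of the PageList ... */
}
```

Once p->size claims “standard”, a later page_free() routes the page into the pool and the oversized mapping is never munmap’d.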

This all seems pretty obvious, but the issue is that non-standard pages are rare by design. The goal of our design and optimizations is that standard pages are the common case and provide a fast path. Only very specific scenarios produce non-standard pages, and they’re usually not produced in large quantities.

But the rise of Claude Code changed this. For some reason, Claude Code’s CLI produces a lot of multi-codepoint grapheme outputs which force Ghostty to regularly use non-standard pages. Additionally, Claude Code uses the primary screen and produces a significant amount of scrollback output. Together, these things created the perfect storm to trigger the leak in huge quantities.

The fix is conceptually simple: never reuse non-standard pages. If we encounter a non-standard page during scrollback pruning, we destroy it properly (calling munmap) and allocate a fresh standard-sized page from the pool.

The core of the fix is in the snippet below, but some extra work was needed to fix up some other bits of accounting we have:
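The real change lives in Ghostty’s Zig PageList code; the hypothetical C sketch below, continuing the toy example above, only shows the shape of that logic.

```c
/* Fixed pruning path: only genuinely standard pages are recycled in place.
 * A non-standard page is destroyed for real and replaced with a fresh
 * standard-sized page from the pool. */
static page_t *page_prune_oldest(page_t *oldest) {
    if (oldest->size <= STD_PAGE_SIZE) {
        page_reuse_for_scrollback(oldest);    /* safe: really standard-sized */
        return oldest;
    }
    munmap(oldest->mem, oldest->size);        /* release the large mapping */
    free(oldest);
    return page_alloc(STD_PAGE_SIZE);         /* fresh pooled page instead */
}
```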

We could’ve also reused the non-standard page and just retained the large memory size, but until we have data that shows otherwise, we’re still operating under the assumption that standard pages are the common case and it makes sense to reset back to a standard pooled page.

Other users have recommended more complex strategies (e.g. maintaining some metrics on how often non-standard pages are used and adjusting our assumptions accordingly), but more research is needed before making those changes. This change is simple, fixes the bug, and aligns with our current assumptions.

As part of the fix, I added support for virtual memory tags on macOS provided by the Mach kernel. This lets us tag our PageList memory allocations with a specific identifier that shows up in various tooling.

Now when debugging memory on macOS, Ghostty’s PageList memory shows up with a specific tag instead of being lumped in with everything else. This made it trivial to identify the leak, associate it with the PageList, and also verify that the fix worked by observing the tagged memory being properly freed.
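On macOS, an anonymous mmap region can carry a Mach VM tag by passing the tag (via VM_MAKE_TAG) in place of the file descriptor. A minimal C illustration follows; the choice of the application-specific tag slot is an assumption for the example, not necessarily the tag Ghostty uses.

```c
#include <stddef.h>
#include <sys/mman.h>
#include <mach/vm_statistics.h>   /* VM_MAKE_TAG, VM_MEMORY_APPLICATION_SPECIFIC_* */

/* Allocate an anonymous region with a Mach VM tag so that tools like
 * vmmap, footprint, and Instruments attribute it to this subsystem
 * instead of lumping it in with generic anonymous memory. */
static void *tagged_mmap(size_t size) {
    int tag = VM_MAKE_TAG(VM_MEMORY_APPLICATION_SPECIFIC_1);  /* illustrative tag slot */
    return mmap(NULL, size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANON, tag, 0);
}
```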

We do a lot of work in the Ghostty project to find and prevent memory leaks:

* In debug builds and unit tests, we use leak-detecting Zig allocators.

* The CI runs Valgrind on our full unit test suite on every commit to find more than just leaks, such as undefined memory usage.

* We regularly run the macOS GUI via macOS Instruments to look for leaks, particularly in the Swift codebase.

* We run every GTK-related PR using Valgrind (the full GUI) to look for leaks in the GTK codepath that isn’t unit tested.

This has worked really well to date, but unfortunately it didn’t catch this particular leak because it only triggers under very specific conditions that our tests didn’t reproduce. The merged PR includes a test that does reproduce the leak to prevent regressions in the future.

This was the largest known memory leak in Ghostty to date, and the only reported leak that has been confirmed by more than a single user. We’ll continue to monitor and address memory reports as they come in, but remember that reproduction is the key to diagnosing and fixing memory leaks!

Big thanks to @grishy who finally got me a reliable reproduction so I could analyze the issue myself. Their own analysis reached the same conclusion as mine, and the reproduction let me verify both our understandings independently.

Thanks also to everyone who reported this issue with detailed diagnostics. The community’s analysis, especially around the footprint output and VM region counting, gave me important clues that pointed toward the PageList as the culprit.

...

Read the original on mitchellh.com »

2 512 shares, 69 trendiness

I dumped Windows 11 for Linux, and you should too

There. That’s out of the way. I recently installed Linux on my main desktop computer and work laptop, overwriting the Windows partition completely. Essentially, I deleted the primary operating system from the two computers I use the most, day in and day out, instead trusting all of my personal and work computing needs to the Open Source community. This has been a growing trend, and I hopped on the bandwagon, but for good reasons. Some of those reasons might pertain to you and convince you to finally make the jump as well. Here’s my experience.

It’s no secret that Windows 11 harvests data like a pumpkin farmer in October, and there is no easy way (and sometimes no way at all) to stop it. The operating system itself acts exactly like what was called “spyware” a decade or so ago, pulling every piece of data it can about its current user. This data includes (but is far from limited to) hardware information, specific apps and software used, usage trends, and more. With the advent of AI, Microsoft made headlines with Copilot, an artificial assistant designed to help users by capturing their data with tools like Recall. It turns out that Copilot has largely been a flop and helps Microsoft (and data thieves) more than its users.

Why are so many articles and YouTube videos lately regaling readers and watchers with the harrowing tales of techies switching from Windows to Linux? Anyone who has read one of those articles or watched one of those videos will know it boils down to two main issues: telemetry and poor software stability.

After dealing with these issues and trying to solve them with workarounds, I dual-booted a Linux partition for a few weeks. After a Windows update (that I didn’t choose to do) wiped that partition and, consequently, the Linux installation, I decided to go whole-hog: I deleted Windows 11 and used the entire drive for Linux.

The other main reason folks uninstall Windows is due to the overall poor software experience. Windows 11 has multiple settings modules to handle the same task (such as setting up networking or adding devices), and none of them seem to talk to each other. Additionally, each new update (which will eventually be forced upon you) seems to bring more bugs than fixes. Personally, I encountered 2-3 full system crashes a week when I ran Windows 11, and my hardware is fairly decent: AMD Ryzen 7 6800H, 32 GB of RAM, and a 1 TB PCIe NVMe drive. Still, a few times a week, my computer would freeze for a few seconds, the displays would go dark, and the PC would either restart or hang indefinitely.


The first question often asked of Windows refugees migrating to Linux is, “Why Linux?” It’s a good question, and one that needs to be asked before dumping Windows for anything else. Personally, I tried macOS first. The experience was smooth and easy but ultimately felt restrictive (installing from third-party developers, anyone?). Additionally, the only Apple computer I have is a 2014 MacBook Air. As such, the latest version of macOS I could actually run is 11 (Big Sur), which was released in 2020. Overall system operation was quite sluggish on the older hardware, and I knew that time would inevitably take its toll on the software experience — apps would soon be out of date and I wouldn’t be able to update them. I also tried the OpenCore Legacy Patcher to push the laptop to macOS 13. While performance improved, key features like iMessage and Continuity Camera were either buggy or flat out refused to work. It felt like my laptop was running in mud with its hands tied behind its back. Plus, I needed something for my desktop. Not wanting to drop a mortgage payment or two on new hardware, I opted for Linux.

Linux promised me the potential of what I wanted - high hardware compatibility with full software freedom. The operating system can run on pretty much anything, and it grants users a huge amount of control over their system. I tried out a few distributions, or distros, of Linux. A distro is like a “flavor” of Linux, and each one has unique factors (e.g., app/package management, bundled user interface). With most distros, these differences are largely irrelevant; most distros offer the same main packages as others.

...

Read the original on www.notebookcheck.net »

3 454 shares, 19 trendiness

Trails

Self-deception as strategy: the best liars believe themselves.

...

Read the original on trails.pieterma.es »

4 416 shares, 13 trendiness

Code And Let Live

The state of the art in agent isolation is a read-only sandbox. At Fly.io, we’ve been selling that story for years, and we’re calling it: ephemeral sandboxes are obsolete. Stop killing your sandboxes every time you use them.

My argument won’t make sense without showing you something new we’ve built. We’re all adults here, this is a company, we talk about what we do. Here goes.

So, I want to run some code. So what I do is, I run sprite create. While it operates, I’ll explain what’s happening behind the—


✓ Created demo-123 sprite in 1.0s

● Connecting to console…

sprite@sprite:~#

That’s a root shell on a Linux computer we now own. It came online in about the same amount of time it would take to ssh into a host that already existed. We call these things “Sprites”.


sudo apt-get install -y ffmpeg >/dev/null 2>&1

Unlike creating the Sprite in the first place, installing ffmpeg with apt-get is dog slow. Let’s try not to have to do that again:


sprite@sprite:~# sprite-env checkpoints create

{"type":"complete","data":"Checkpoint v1 created successfully",
"time":"2025-12-22T22:50:48.60423809Z"}

This completes instantly. Didn’t even bother to measure.

I step away to get coffee. Time passes. The Sprite, noticing my inactivity, goes to sleep. I meet an old friend from high school at the coffee shop. End up spending the day together. More time passes. Days even. Returning later:


> $ sprite console

sprite@sprite:~# ffmpeg

ffmpeg version 7.1.1-1ubuntu1.3 Copyright (c) 2000-2025 the FFmpeg developers

Use -h to get full help or, even better, run 'man ffmpeg'

sprite@sprite:~#

Everything’s where I left it. Sprites are durable. 100GB capacity to start, no ceremony. Maybe I’ll keep it around a few more days, maybe a few months, doesn’t matter, just works.

Say I get an application up on its legs. Install more packages. Then: disaster. Maybe an ill-advised global pip3 install. Or rm -rf $HMOE/bin. Or dd if=/dev/random of=/dev/vdb. Whatever it was, everything’s broken. So:


> $ sprite checkpoint restore v1

Restoring from checkpoint v1…

Container components started successfully

Restore from v1 complete

> $ sprite console

sprite@sprite:~#

Sprites have first-class checkpoint and restore. You can’t see it in text, but that restore took about one second. It’s fast enough to use casually, interactively. Not an escape hatch. Rather: an intended part of the ordinary course of using a Sprite. Like git, but for the whole system.

If you’re asking how this is any different from an EC2 instance, good. That’s what we’re going for, except:

I can casually create hundreds of them (without needing a Docker container), each appearing in 1-2 seconds.

They go idle and stop metering automatically, so it’s cheap to have lots of them. I use dozens.

They’re hooked up to our Anycast network, so I can get an HTTPS URL.

Despite all that, they’re fully durable. They don’t die until I tell them to.

This combination of attributes isn’t common enough to already have a name, so we decided we get to name them “Sprites”. Sprites are like BIC disposable cloud computers.

That’s what we built. You can go try it yourself. We wrote another 1000 words about how they work, but I cut them out because I want to stop talking about our products now and get to my point.

For years, we’ve been trying to serve two very different users with the same abstraction. It hasn’t worked.

Professional software developers are trained to build stateless instances. Stateless deployments, where persistent data is confined to database servers, buy you simplicity, flexible scale-out, and reduced failure blast radius. It’s a good idea, so popular that most places you can run code in the cloud look like stateless containers. Fly Machines, our flagship offering, look like stateless containers.

The problem is that Claude isn’t a pro developer. Claude is a hyper-productive five-year-old savant. It’s uncannily smart, wants to stick its finger in every available electrical socket, and works best when you find a way to let it zap itself.

If you force an agent to, it’ll work around containerization and do work. But you’re not helping the agent in any way by doing that. They don’t want containers. They don’t want “sandboxes”. They want computers.

Someone asked me about this the other day and wanted to know if I was saying that agents needed sound cards and USB ports. And, maybe? I don’t know. Not today.

In a moment, I’ll explain why. But first I probably need to explain what the hell I mean by a “computer”. I think we all agree:

* A computer doesn’t necessarily vanish after a single job is completed, and

Since current agent sandboxes have neither of these, I can stop the definition right there and get back to my point.

Start here: with an actual computer, Claude doesn’t have to rebuild my entire development environment every time I pick up a PR.

This seems superficial, but rebuilding stuff like node_modules is such a monumental pain in the ass that the industry is spending tens of millions of dollars figuring out how to snapshot and restore ephemeral sandboxes.

I’m not saying those problems are intractable. I’m saying they’re unnecessary. Instead of figuring them out, just use an actual computer. Work out a PR, review and push it, then just start on the next one. Without rebooting.

People will rationalize why it’s a good thing that they start from a new build environment every time they start a changeset. Stockholm Syndrome. When you start a feature branch on your own, do you create an entirely new development environment to do it?

The reason agents waste all this effort is that nobody saw them coming. Read-only ephemeral sandboxes were the only tool we had hanging on the wall to help use them sanely.

Have you ever had to set up actual infrastructure to give an agent access to realistic data? People do this. Because they know they’re dealing with a clean slate every time they prompt their agent, they arrange for S3 buckets, Redis servers, or even RDS instances outside the sandbox for their agents to talk to. They’re building infrastructure to work around the fact that they can’t just write a file and trust it to stay put. Gross.

Ephemerality means time limits. Providers design sandbox systems to handle the expected workloads agents generate. Most things agents do today don’t take much time; in fact, they’re often limited only by the rate at which frontier models can crunch tokens. Test suites run quickly. The 99th percentile sandboxed agent run probably needs less than 15 minutes.

But there are feature requests where compute and network time swamp token crunching. I built the documentation site for the Sprites API by having a Claude Sprite interact with the code and our API, building and testing examples for the API one at a time. There are APIs where the client interaction time alone would blow sandbox budgets.

You see the limits of the current approach in how people round-trip state through “plan files”, which are ostensibly prose but often really just egregiously-encoded key-value stores.

An agent running on an actual computer can exploit the whole lifecycle of the application. We saw this when Chris McCord built Phoenix.new. The agent behind a Phoenix.new app runs on a Fly Machine where it can see the app logs from the Phoenix app it generated. When users do things that generate exceptions, Phoenix.new notices and gets to work figuring out what happened.

It took way too much work for Chris to set that up, and he was able to do it in part because he wrote his own agent. You can do it with Claude today with an MCP server or some other arrangement to haul logs over. But all you really need is to just not shoot your sandbox in the head when the agent finishes writing code.

Here’s where I lose you. I know this because it’s also where I lose my team, most of whom don’t believe me about this.

The nature of software development is changing out from under us, and I think we’re kidding ourselves that it’s going to end with just a reconfiguration of how professional developers ship software.

I have kids. They have devices. I wanted some control over them. So I did what many of you would do in my situation: I vibe-coded an MDM.

I built this thing with Claude. It’s a SQLite-backed Go application running on a Sprite. The Anycast URL my Sprite exports works as an MDM registration URL. Claude also worked out all the APNS Push Certificate drama for me. It all just works.

“Editing PHP files over FTP: we weren’t wrong, just ahead of our time!”

I’ve been running this for a month now, still on a Sprite, and see no reason ever to stop. It is a piece of software that solves an important real-world problem for me. It might evolve as my needs change, and I tell Claude to change it. Or it might not. For this app, dev is prod, prod is dev.

For reasons we’ll get into when we write up how we built these things, you wouldn’t want to ship an app to millions of people on a Sprite. But most apps don’t want to serve millions of people. The most important day-to-day apps disproportionately won’t have million-person audiences. There are some important million-person apps, but most of them just destroy civil society, melt our brains, and arrange chauffeurs for individual cheeseburgers.

Applications that solve real problems for people will be owned by the people they solve problems for. And for the most part, they won’t need a professional guild of software developers to gatekeep feature development for them. They’ll just ask for things and get them.

The problem we’re all working on is bigger than safely accelerating pro software developers. Sandboxes are holding us back.

Obviously, I’m trying to sell you something here. But that doesn’t make me wrong. The argument I’m making is the reason we built the specific thing I’m selling.

It took us a long time to get here. We spent years kidding ourselves. We built a platform for horizontal-scaling production applications with micro-VMs that boot so quickly that, if you hold them in exactly the right way, you can do a pretty decent code sandbox with them. But it’s always been a square peg, round hole situation.

We have a lot to say about how Sprites work. They’re related to Fly Machines but sharply different in important ways. They have an entirely new storage stack. They’re orchestrated differently. No Dockerfiles.

But for now, I just want you to think about what I’m saying here. Whether or not you ever boot a Sprite, ask: if you could run a coding agent anywhere, would you want it to look more like a read-only sandbox in a K8s cluster in the cloud, or like an entire EC2 instance you could summon in the snap of a finger?

I think the answer is obvious. The age of sandboxes is over. The time of the disposable computer has come.

...

Read the original on fly.io »

5 354 shares, 31 trendiness

Don't fall into the anti-AI hype

I love writing software, line by line. It could be said that my career was a continuous effort to create software well written, minimal, where the human touch was the fundamental feature. I also hope for a society where the last are not forgotten. Moreover, I don’t want AI to economically succeed, I don’t care if the current economic system is subverted (I could be very happy, honestly, if it goes in the direction of a massive redistribution of wealth). But, I would not respect myself and my intelligence if my idea of software and society would impair my vision: facts are facts, and AI is going to change programming forever.

In 2020 I left my job in order to write a novel about AI, universal basic income, a society that adapted to the automation of work facing many challenges. At the very end of 2024 I opened a YouTube channel focused on AI, its use in coding tasks, its potential social and economical effects. But while I recognized what was going to happen very early, I thought that we had more time before programming would be completely reshaped, at least a few years. I no longer believe this is the case. Recently, state of the art LLMs are able to complete large subtasks or medium size projects alone, almost unassisted, given a good set of hints about what the end result should be. The degree of success you’ll get is related to the kind of programming you do (the more isolated, and the more textually representable, the better: system programming is particularly apt), and to your ability to create a mental representation of the problem to communicate to the LLM. But, in general, it is now clear that for most projects, writing the code yourself is no longer sensible, if not to have fun.

In the past week, just prompting, and inspecting the code to provide guidance from time to time, I did the following four tasks, in hours instead of weeks:

1. I modified my linenoise library to support UTF-8, and created a framework for line editing testing that uses an emulated terminal that is able to report what is getting displayed in each character cell. Something that I always wanted to do, but it was hard to justify the work needed just to test a side project of mine. But if you can just describe your idea, and it materializes in the code, things are very different.

2. I fixed transient failures in the Redis test. This is very annoying work, timing related issues, TCP deadlock conditions, and so forth. Claude Code iterated for all the time needed to reproduce it, inspected the state of the processes to understand what was happening, and fixed the bugs.

3. Yesterday I wanted a pure C library that would be able to do the inference of BERT like embedding models. Claude Code created it in 5 minutes. Same output and same speed (15% slower) than PyTorch. 700 lines of code. A Python tool to convert the GTE-small model.

4. In the past weeks I operated changes to Redis Streams internals. I had a design document for the work I did. I tried to give it to Claude Code and it reproduced my work in, like, 20 minutes or less (mostly because I’m slow at checking and authorizing to run the commands needed).

It is simply impossible not to see the reality of what is happening. Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too). It does not matter if AI companies will not be able to get their money back and the stock market will crash. All that is irrelevant, in the long run. It does not matter if this or the other CEO of some unicorn is telling you something that is off putting, or absurd. Programming changed forever, anyway.

How do I feel about all the code I wrote that was ingested by LLMs? I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge. LLMs are going to help us to write better software, faster, and will allow small teams to have a chance to compete with bigger companies. The same thing open source software did in the 90s.

However, this technology is far too important to be in the hands of a few companies. For now, you can do the pre-training better or not, you can do reinforcement learning in a much more effective way than others, but the open models, especially the ones produced in China, continue to compete (even if they are behind) with frontier models of closed labs. There is a sufficient democratization of AI, so far, even if imperfect. But: it is absolutely not obvious that it will be like that forever. I’m scared about the centralization. At the same time, I believe neural networks, at scale, are simply able to do incredible things, and that there is not enough “magic” inside current frontier AI for the other labs and teams not to catch up (otherwise it would be very hard to explain, for instance, why OpenAI, Anthropic and Google are so near in their results, for years now).

As a programmer, I want to write more open source than ever, now. I want to improve certain repositories of mine abandoned for time concerns. I want to apply AI to my Redis workflow. Improve the Vector Sets implementation and then other data structures, like I’m doing with Streams now.

But I’m worried for the folks that will get fired. It is not clear what the dynamic at play will be: will companies try to have more people, and to build more? Or will they try to cut salary costs, having fewer programmers that are better at prompting? And, there are other sectors where humans will become completely replaceable, I fear.

What is the social solution, then? Innovation can’t be taken back, after all. I believe we should vote for governments that recognize what is happening, and are willing to support those who will remain jobless. And, the more people get fired, the more political pressure there will be to vote for those who will guarantee a certain degree of protection. But I also look forward to the good AI could bring: new progress in science, that could help lower the suffering of the human condition, which is not always happy.

Anyway, back to programming. I have a single suggestion for you, my friend. Whatever you believe about what the Right Thing should be, you can’t control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools, with care, with weeks of work, not in a five minutes test where you can just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.

Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you, when you coded till night to see your project working? It was building. And now you can build more and better, if you find your way to use AI effectively. The fun is still there, untouched.


...

Read the original on antirez.com »

6 290 shares, 12 trendiness

Private equity firms acquired more than 500 autism centers in past decade, study shows

PROVIDENCE, R.I. [Brown University] — Private equity firms acquired more than 500 autism therapy centers across the U.S. over the past decade, with nearly 80% of acquisitions occurring over a four-year span.

That’s according to a new study from researchers at Brown University’s Center for Advancing Health Policy through Research.

Study author Yashaswini Singh, a health economist at Brown’s School of Public Health, said the work highlights how financial firms are rapidly moving into a sensitive area of health care with little public scrutiny or data on where this is happening or why.

“The big takeaway is that there is yet another segment of health care that has emerged as potentially profitable to private equity investors, and it is very distinct from where we have traditionally known investors to go, so the potential for harm can be a lot more serious,” Singh said. “We’re also dealing with children who are largely insured by Medicaid programs, so if private equity increases the intensity of care, what we’re looking at are impacts to state Medicaid budgets down the road.”

The findings were published in JAMA Pediatrics and offer one of the first national assessments of private equity’s growing role in autism therapies and services. Autism diagnoses among U.S. children have risen sharply in recent years, nearly tripling between 2011 and 2022, and autism has been in the national spotlight amid political debate claiming links between autism and childhood vaccines.

The findings suggest that investment has been concentrated in states with higher rates of autism diagnoses among children and states that have fewer limits on insurance coverage.

The researchers identified a total of 574 autism therapy centers owned by private equity firms as of 2024, spanning 42 states. Most of those centers were acquired between 2018 and 2022, the result of 142 separate deals. The largest concentrations of centers were in California (97), Texas (81), Colorado (38), Illinois (36) and Florida (36). Sixteen states had one or no private equity-owned clinics at the end of 2024.

States in the top third for childhood autism prevalence were 24% more likely to have private equity-owned clinics than others, according to the study.

The scale and speed of acquisitions underscore the growing trend of private equity’s entry into the market, the researchers say. According to Singh, the team was prompted to investigate that trend after hearing anecdotal reports from families and health providers about changes following private equity takeovers.

The primary concern is that private equity firms may prioritize financial gains over families, said Daniel Arnold, a senior research scientist at the School of Public Health.

“It’s all about the financial incentives,” Arnold said. “I worry about the same types of revenue-generating strategies seen in other private equity-backed settings. I worry about children receiving more than the clinically appropriate amount of services and worsening disparities in terms of which children have access to services.”

To establish a baseline of where private equity firms are investing and why, the team used a mix of proprietary databases, public press releases and manual verification of archived websites to track changes in ownership. Unlike public companies, private equity firms and private practices are not required to disclose acquisitions, making data collection challenging and labor-intensive.

The team is now seeking federal funding to examine how private equity ownership affects outcomes, including changes in therapy intensity, medication use, diagnosis age or how long children stay in treatment. They seek to determine whether these investments are helping to meet real needs or are primarily a way to make money.

“Private investors making a little bit of money while expanding access is not a bad thing, per se,” Singh said. “But we need to understand how much of a bad thing this is and how much of a good thing this is. This is a first step in that direction.”

This study received funding from the National Institute on Aging (R01AG073286) and the National Institute on Mental Health (R01MH132128).

...

Read the original on www.brown.edu »

7 231 shares, 34 trendiness

2025 in retrospect & happy new year 2026! – Gentoo Linux

Happy New Year 2026! Once again, a lot has happened in Gentoo over the past months. New developers, more binary packages, GnuPG alternatives support, Gentoo for WSL, improved Rust bootstrap, better NGINX packaging, … As always, here we’re going to revisit all the exciting news from our favourite Linux distribution.

Gentoo currently consists of 31663 ebuilds for 19174 different packages. For amd64 (x86-64), there are 89 GBytes of binary packages available on the mirrors. Gentoo each week builds 154 distinct installation stages for different processor architectures and system configurations, with an overwhelming part of these fully up-to-date.

The number of commits to the main ::gentoo repository has remained at an overall high level in 2025, with a slight decrease from 123942 to 112927. The number of commits by external contributors was 9396, now across 377 unique external authors.

GURU, our user-curated repository with a trusted user model, as entry point for potential developers, has shown a decrease in activity. We have had 5813 commits in 2025, compared to 7517 in 2024. The number of contributors to GURU has increased, from 241 in 2024 to 264 in 2025. Please join us there and help packaging the latest and greatest software. That’s the ideal preparation for becoming a Gentoo developer!

Activity has slowed down somewhat on the Gentoo bugtracker bugs.gentoo.org, where we’ve had 20763 bug reports created in 2025, compared to 26123 in 2024. The number of resolved bugs shows the same trend, with 22395 in 2025 compared to 25946 in 2024. The current values are closer to those of 2023 - but clearly this year we fixed more than we broke!

In 2025 we have gained four new Gentoo developers. They are, in chronological order:

Let’s now look at the major improvements and news of 2025 in Gentoo.

RISC-V bootable QCOW2: Same as for amd64 and arm64, also for RISC-V we now have ready-made bootable disk images in QCOW2 format available for download on our mirrors, in a console and a cloud-init variant. The disk images use the rv64gc instruction set and the lp64d ABI, and can be booted via the standard RISC-V UEFI support.

Gentoo for WSL: We now publish weekly Gentoo images for Windows Subsystem for Linux (WSL), based on the amd64 stages; see our mirrors. While these images are not present in the Microsoft store yet, that’s something we intend to fix soon.

hppa and sparc destabilized: Since we do not have hardware readily available anymore and these architectures mostly fill a retrocomputing niche, stable keywords have been dropped for both hppa (PA-RISC) and sparc. The architectures will remain supported with testing keywords.

musl with locales: Localization support via the package sys-apps/musl-locales has been added by default to the Gentoo stages based on the lightweight musl C library.

GPG alternatives: Given the unfortunate fracturing of the GnuPG / OpenPGP / LibrePGP ecosystem due to competing standards, we now provide an alternatives mechanism to choose the system gpg provider and ease compatibility testing. At the moment, the original, unmodified GnuPG, the FreePG fork/patchset as also used in many other Linux distributions (Fedora, Debian, Arch, …), and the re-implementation Sequoia-PGP with Chameleon are available. In practice, implementation details vary between the providers, and while GnuPG and FreePG are fully supported, you may still encounter difficulties when selecting Sequoia-PGP/Chameleon.

zlib-ng support: We have introduced initial support for using zlib-ng and minizip-ng in compatibility mode in place of the reference zlib libraries.

System-wide jobserver: We have created steve, an implementation of a token-accounting system-wide jobserver, and introduced experimental global jobserver support in Portage. Thanks to that, it is now possible to globally control the concurrently running build job count, correctly accounting for parallel emerge jobs, make and ninja jobs, and other clients supporting the jobserver protocol.

NGINX rework: The packaging of the NGINX web server and reverse proxy in Gentoo has undergone a major improvement, including also the splitting off of several third-party modules into separate packages.

C++ based Rust bootstrap: We have added a bootstrap path for Rust from C++ using Mutabah’s Rust compiler mrustc, which alleviates the need for pre-built binaries and makes it significantly easier to support more configurations.

Ada and D bootstrap: Similarly, Ada and D support in gcc now have clean bootstrap paths, which makes enabling these in the compiler as easy as switching the useflags on gcc and running emerge.

FlexiBLAS: Gentoo has adopted the new FlexiBLAS wrapper library as the primary way of switching implementations of the BLAS numerical algorithm library at runtime. This automatically also provides ABI stability for linking programs and bundles the specific treatment of different BLAS variants in one place.

Python: In the meantime the default Python version in Gentoo has reached Python 3.13. Additionally, we also have Python 3.14 available stable - fully up to date with upstream.

KDE upgrades: As of end of 2025, in Gentoo stable we have KDE Gear 25.08.3, KDE Frameworks 6.20.0, and KDE Plasma 6.5.4. As always, Gentoo testing follows the newest upstream releases (and using the KDE overlay you can even install from git sources).

Additional build server: A second dedicated build server, hosted at Hetzner Germany, has been added to speed up the generation of installation stages, iso and qcow2 images, and binary packages.

Documentation: Documentation work has made constant progress on wiki.gentoo.org. The Gentoo Handbook had some particularly useful updates, and the documentation received lots of improvements and additions from the many active volunteers. There are currently 9,647 pages on the wiki, and there have been 766,731 edits since the project started. Please help Gentoo by contributing to documentation!

* Income: The Gentoo Foundation took in $12,066 in fiscal year 2025 (ending 2025/06/30); the dominant part (over 80%) consists of individual cash donations from the community. On the SPI side, we received $8,471 in the same period as fiscal year 2025; also here, this is all from small individual cash donations.

* Expenses: Our expenses in 2025 were: program services (e.g. hosting costs) $8,332, management & general (accounting) $1,724, fundraising $905, and non-operating (depreciation expenses) $10,075.

* Balance: We have $104,831 in the bank as of July 1, 2025 (which is when our fiscal year 2026 starts for accounting purposes). The Gentoo Foundation FY2025 financial statement is available on the Gentoo Wiki.

* Transition to SPI: The Foundation encourages donors to ensure their ongoing contributions are going to SPI - more than 40 donors had not responded to requests to move the recurring donations by the end of the year. Expenses will be moved to the SPI structure as ongoing income permits.

As every year, we would like to thank all Gentoo developers and all who have submitted contributions for their relentless everyday Gentoo work. If you are interested and would like to help, please join us to make Gentoo even better! As a volunteer project, Gentoo could not exist without its community.

...

Read the original on www.gentoo.org »

8 225 shares, 13 trendiness

Alienchow

Thanks HN folks for all the comments. To clarify a bit, the cables are pulled through PVC conduits under the flooring before being buried in cement. Currently the hypothesis for why the cable disintegrated so quickly is hydrolysis. Singapore is extremely humid after all. A second possibility is that I keep the leftover wall paints (Nippon Paint Vinilex 5000) in the same room and have noticed that much of the solvents have evaporated. It is possible that the solvents in the air might have caused the cable to fail in 3 years. The other ends of the cables don’t feel as sticky and crumbly despite being out in the open exposed to the humidity. My guess is that the paint solvent got to it.

Some other learnings from this. Buried cabling should always be permanently fixed and attached to a patch panel instead of dangling in the open. That was the original plan but I figured it wouldn’t be an issue. I was wrong. Always measure the exact length of buried fibre cabling, as they aren’t meant to be stored in loops.

This morning I woke up and headed to my bomb shelter to grab the bike pump to inflate the tyres on my children’s bikes. The handle got slightly tangled up in the fibre optic cables so I lifted up the cables to free the pump.

Like cookie crumbs, the fibre cable’s sleeve jackets crumbled in my hands.

Before I could even utter “Oh fuck no”, another section of the cable exploded outwards with thin metal wires jutting out from what seems to be like strands of white plastic threads, which I assume is the Kevlar sheath. I think I must have stood in my pseudo server room in shock for a whole minute, unable to move or process what had happened. A main component of why I was in sheer horror was the fact that I had stupidly buried all of these cables under my cement flooring in PVC trunking from my shelter to all of the rooms in the flat. If this cable fails, the connection from the server room to a specific room would be permanently severed. The room for this particular cable turned out to be my home office where my homelab MS-A2 resided.

I had purchased these cables from FS.com roughly 3.5 years ago in 2022. Because I was burying the cables underground permanently, I opted to get the MiLiTaRy GrAdE armoured fibre cables for this purpose.

The cables had been kept spooled up with a radius of around 5cm for 3 whole years, lightly tied together with hook and loop cable fasteners and hung on laundry hooks in the shelter all this time.

The destroyed cable is the only one that I had unravelled recently to patch into my UDM to enable an SFP+ connection to my office space. As it turns out, armoured cables in this specific instance aren’t really meant for movement; it’s likely more of a bury and forget purpose. In hindsight I should’ve connected all of the cables to a fibre patch panel on the wall so that they would never move, then connected the patch panel to my UDM with easily replaceable LSZH cables.

But it’s too late now; all I can do is salvage the situation. I headed out and purchased 3M self-bonding rubber electrical tape 23, and Temflex 160 vinyl electrical tape. The idea I had was to use the compression properties of the stretched rubber tape to hold the corrugated metal sheath and wire mesh in place, before wrapping a second vinyl protection layer outside with the 160.

However, the wrapping process itself requires me to slowly shift the cable around to hook onto higher ground to prevent kinks. The action itself triggered more jacket failures. Some of the failures actually forced the cable into a sharp right angle, which I am almost certain has caused kinks and cracks in the inner fibre strand. RIP.

At this point, I’m looking at rebuilding the entire sleeve jacket of anything that’s exposed and movable with electrical tape. What I had previously thought was a good idea to keep about 5-10m of slack to allow me to easily move my server rack around is now causing me more problems, as good electrical tape ain’t cheap. I have to essentially repair around 10 metres of jacket without accidentally destroying parts inside trunking that I am unable to reach. This is assuming that the 4 other untouched cables wouldn’t spontaneously crumble as well. Based on how they felt in my hand, I think it is an inevitable outcome.

I’m pretty certain that datacentre technicians reading this by chance would mock my idiotic setup and I would be inclined to join in. This is not a good day.

On the dim side of things, at least it seems like fibre optic cables are pretty hardy. My MS-A2 SFP+ connection is still working and speedtest-cli is reporting around 4000/3000 Mbps up/down speeds to my ISP (10G fibre internet plan). The UDM is seeing 6000/7000, so the fibre cable is definitely compromised. :(

...

Read the original on alienchow.dev »

9 216 shares, 11 trendiness

OlaProeis/Ferrite: A fast, lightweight text editor for Markdown, JSON, YAML, and TOML files. Built with Rust and egui for a native, responsive experience.

A fast, lightweight text editor for Markdown, JSON, YAML, and TOML files. Built with Rust and egui for a native, responsive experience.

Platform Note: Ferrite has been primarily developed and tested on Windows. While it should work on Linux and macOS, these platforms have not been extensively tested. If you encounter issues, please report them.

🤖 AI Disclosure: This project is 100% AI-generated code. All Rust code, documentation, and configuration was written by Claude (Anthropic) via Cursor with MCP tools. My role is product direction, testing, and learning to orchestrate AI-assisted development effectively. The code is reviewed and tested, not blindly accepted — but I want to be transparent about the development process. This project is partly a learning exercise in exploring how far AI-assisted development can go.

* Tree Viewer - Hierarchical view for JSON/YAML/TOML with inline editing, expand/collapse, and path copying

* Syntax Highlighting - Full-file syntax highlighting for 40+ languages (Rust, Python, JavaScript, Go, etc.)

* Code Folding - Fold detection with gutter indicators (▶/▼) for headings, code blocks, and lists (text hiding deferred to v0.3.0)

* Minimap - VS Code-style navigation panel with click-to-jump and search highlights

Native rendering of 11 diagram types directly in the preview:

✨ v0.2.2 Released: Stability & CLI improvements! CJK font support, undo/redo fixes, command-line file opening (ferrite file.md), configurable log level, and default view mode setting. See CHANGELOG.md for full details.

* Export Options - Export to HTML with themed styling, or copy as HTML

* Formatting Toolbar - Quick access to bold, italic, headings, lists, links, and more

Download the latest release for your platform from GitHub Releases.

# Download the .deb file, then install with:

sudo apt install ./ferrite-editor_amd64.deb

# Or using dpkg:

sudo dpkg -i ferrite-editor_amd64.deb

Ferrite is available on the AUR:

You can install it using your AUR helper of choice.

# Release package

yay -Sy ferrite

# Binary package

yay -Sy ferrite-bin

tar -xzf ferrite-linux-x64.tar.gz

./ferrite

# Ubuntu/Debian

sudo apt install build-essential pkg-config libgtk-3-dev libxcb-shape0-dev libxcb-xfixes0-dev

# Fedora

sudo dnf install gcc pkg-config gtk3-devel libxcb-devel

# Arch

sudo pacman -S base-devel pkg-config gtk3 libxcb

xcode-select --install

# Clone the repository

git clone https://github.com/OlaProeis/Ferrite.git

cd Ferrite

# Build release version (optimized)

cargo build --release

# The binary will be at:

# Windows: target/release/ferrite.exe

# Linux/macOS: target/release/ferrite

# Run from source

cargo run --release

# Or run the binary directly

./target/release/ferrite

# Open a specific file

./target/release/ferrite path/to/file.md

# Open multiple files as tabs

./target/release/ferrite file1.md file2.md

# Open a folder as workspace

./target/release/ferrite path/to/folder/

# Show version

./target/release/ferrite --version

# Show help

./target/release/ferrite --help

Toggle between modes using the toolbar buttons or keyboard shortcuts.

Workspace settings are stored in .ferrite/ within the workspace folder.

Access settings via Ctrl+, or the gear icon. Configure:

See ROADMAP.md for planned features and known issues.

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

# Fork and clone

git clone https://github.com/YOUR_USERNAME/Ferrite.git

cd Ferrite

# Create a feature branch

git checkout -b feature/your-feature

# Make changes, then verify

cargo fmt

cargo clippy

cargo test

cargo build

# Commit and push

git commit -m "feat: your feature description"

git push origin feature/your-feature

This project is licensed under the MIT License - see the LICENSE file for details.

...

Read the original on github.com »

10 215 shares, 0 trendiness

Video filmed by ICE agent who shot Minneapolis woman emerges

A video filmed by the US immigration agent who fatally shot a woman in Minneapolis on Wednesday has emerged, showing the moments before gunfire rang out. The 47-second clip, obtained by Minnesota-based conservative news outlet Alpha News, shows Renee Nicole Good sitting behind the wheel of her car and speaking to the officer. US Vice-President JD Vance shared the footage on social media, commenting that the agent had acted in self-defence. Local officials have insisted the woman posed no danger. Good’s wife has paid tribute to the 37-year-old, saying the pair had been trying to support their neighbours when she was shot. Her death has sparked protests across the US.

President Donald Trump’s administration says Good tried to run over the US Immigration and Customs Enforcement (ICE) officer in an act of “domestic terrorism” after blocking the road and impeding the agency’s work. Democratic Minneapolis Mayor Jacob Frey has described that account as “garbage” based on the video footage. The BBC has asked the homeland security department and the White House for comment on the new video that emerged on Friday. The footage starts with the officer getting out of his car and filming Good’s vehicle and registration plate while he walks around the Honda SUV. A dog is in the backseat. Good says: “That’s fine dude. I’m not mad at you.” Her wife, Becca Good, is standing on the street filming the interaction with her mobile phone. She tells the ICE agent: “That’s OK, we don’t change our plates every morning just so you know. It will be the same plate when you come talk to us later.” She adds: “You want to come at us? You want to come at us? I say go and get yourself some lunch, big boy.”

Another agent approaches Good on the driver’s side and uses an expletive as he says: “Get out of the car.” The agent filming the clip moves in front of Good’s car as she reverses. In a chaotic few seconds, she turns the wheel to the right and pulls forwards. The camera jerks up to the sky. “Woah, woah!” a voice says, as bangs are heard. In the final part of the video, the car is seen veering down the road. The ICE agent swears. Other clips previously released from the scene show the maroon SUV crashed into the side of the road after Good was shot by the agent. The officer appears to stay on his feet, and is later seen in other videos walking toward the crashed car. Federal officials say the agent was injured and treated in hospital. The FBI is investigating the incident.

The officer who fired on Good is Jonathan Ross, a veteran ICE agent who was previously injured in the line of duty when he was struck by a car. When asked about the video at the White House on Friday, Trump said: “You have agitators and we will always be protecting ICE, and we’re always going to be protecting our border patrol and our law enforcement.” Vance reposted the video on X on Friday, and defended the agent’s actions, saying: “The reality is that his life was endangered and he fired in self-defence.” White House spokeswoman Karoline Leavitt also shared the video, saying the media had smeared an ICE agent who had “properly defended himself from being run over”. Good’s wife told local media the pair had gone to the scene of immigration enforcement activity to support neighbours. “We had whistles,” Becca Good said. “They had guns.” When speaking about Good - a mother-of-three, including a six-year-old son - she said “kindness radiated out of her”. “We were raising our son to believe that no matter where you come from or what you look like, all of us deserve compassion and kindness,” she added.

Demonstrators turned out for a third night of protests on Friday over the killing of Good. The Minneapolis Police Department told BBC News that at least 30 people were detained, cited and released after protests in the downtown area. Photos showed protesters gathered outside a hotel in the city, believed to be where some ICE agents were staying. Minnesota’s Department of Public Safety said it assisted police officers with arresting people suspected of unlawful assembly, after receiving “information that demonstrations were no longer peaceful and reports of damage to property” near the Canopy Hotel in the city’s downtown. Minnesota Governor Tim Walz earlier said he had activated the state’s National Guard to help with security around the protests.

On Friday, Minnesota officials said they would open an inquiry into the shooting after saying they had been frozen out of the federal investigation. Trump was asked by a reporter whether the FBI should share its findings with Minnesota, and said: “Well normally I would, but they’re crooked officials.” The announcement by Hennepin County’s top prosecutor Mary Moriarty and Minnesota’s Democratic Attorney General Keith Ellison came a day after the Minnesota Bureau of Criminal Apprehension said the FBI had initially pledged a joint investigation, then reversed course. One federal agency that is not looking into the shooting is the US justice department’s Civil Rights Division, which has in the past investigated alleged excessive use of force by law enforcement. But prosecutors have advised its criminal section that there will be no investigation in this case, sources told the BBC’s US partner, CBS News. Walz, a Democrat, has accused the Trump administration of blocking state officials, but Vance said it was a federal matter.

...

Read the original on www.bbc.com »
