10 interesting stories served every morning and every evening.




1 748 shares, 162 trendiness

The struggle of resizing windows on macOS Tahoe

A lot has al­ready been said about the ab­surdly large cor­ner ra­dius of win­dows on ma­cOS Tahoe. People are call­ing the way it looks com­i­cal, like a child’s toy, or down­right in­sane.

Setting all the aes­thetic is­sues aside — which are to some ex­tent a mat­ter of taste — it also comes at a cost in terms of us­abil­ity.

Since upgrading to macOS Tahoe, I've noticed that my attempts to resize a window quite often fail.

This never hap­pened to me be­fore in al­most 40 years of us­ing com­put­ers. So why all of a sud­den?

It turns out that my ini­tial click in the win­dow cor­ner in­stinc­tively hap­pens in an area where the win­dow does­n’t re­spond to it. The win­dow ex­pects this click to hap­pen in an area of 19 × 19 pix­els, lo­cated near the win­dow cor­ner.

If the win­dow had no rounded cor­ners at all, 62% of that area would lie in­side the win­dow:

But due to the huge cor­ner ra­dius in Tahoe, most of it — about 75% — now lies out­side the win­dow:
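The mismatch is easy to sanity-check numerically. Below is a rough model with assumed geometry: a 19 × 19 px grab area overhanging the corner by 4 px, and an illustrative 26 px corner radius (neither the offset nor the radius comes from Apple documentation). Under those assumptions it lands close to the fractions quoted above:

```python
# Back-of-the-envelope check of the resize hit-target geometry.
# Assumptions (not from Apple docs): the 19x19 px grab area extends
# 4 px outside the corner, and the corner radius is 26 px.

def inside_fraction(radius: int, box: int = 19, overhang: int = 4) -> float:
    """Fraction of the grab area lying inside a window whose corner
    sits at the origin and occupies the first quadrant."""
    hits = 0
    for x in range(-overhang, box - overhang):
        for y in range(-overhang, box - overhang):
            if x < 0 or y < 0:
                continue  # outside the window rectangle entirely
            if x < radius and y < radius:
                # Inside the corner square: must also be inside the arc.
                if (x - radius) ** 2 + (y - radius) ** 2 > radius ** 2:
                    continue
            hits += 1
    return hits / box**2

print(f"square corner: {inside_fraction(0):.0%}")   # ~62% inside
print(f"radius 26 px:  {inside_fraction(26):.0%}")  # ~24% inside
```

With a square corner about 62% of the grab area lies inside the window; with the large radius only about a quarter does, matching the article's observation.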

Living on this planet for quite a few decades, I have learned that it rarely works to grab things if you don’t ac­tu­ally touch them:

So I in­stinc­tively try to grab the win­dow cor­ner in­side the win­dow, typ­i­cally some­where in that green area, near the blue dot:

And I as­sume that most peo­ple would also in­tu­itively ex­pect to be able to grab the cor­ner there. But no, that’s al­ready out­side the ac­cepted tar­get area:

So, for ex­am­ple, grab­bing it here does not work:

But guess what — grab­bing it here does:

So in the end, the most re­li­able way to re­size a win­dow in Tahoe is to grab it out­side the cor­ner — a ges­ture that feels un­nat­ural and un­in­tu­itive, and is there­fore in­evitably er­ror-prone.

...

Read the original on noheger.at »

2 726 shares, 52 trendiness

I dumped Windows 11 for Linux, and you should too

There. That’s out of the way. I re­cently in­stalled Linux on my main desk­top com­puter and work lap­top, over­writ­ing the Windows par­ti­tion com­pletely. Essentially, I deleted the pri­mary op­er­at­ing sys­tem from the two com­put­ers I use the most, day in and day out, in­stead trust­ing all of my per­sonal and work com­put­ing needs to the Open Source com­mu­nity. This has been a grow­ing trend, and I hopped on the band­wagon, but for good rea­sons. Some of those rea­sons might per­tain to you and con­vince you to fi­nally make the jump as well. Here’s my ex­pe­ri­ence.

It’s no secret that Windows 11 harvests data like a pumpkin farmer in October, and there is no easy way (and sometimes no way at all) to stop it. The operating system itself acts exactly like what was called “spyware” a decade or so ago, pulling every piece of data it can about its current user. This data includes (but is far from limited to) hardware information, specific apps and software used, usage trends, and more. With the advent of AI, Microsoft made headlines with Copilot, an artificial assistant designed to help users by capturing their data with tools like Recall. It turns out that Copilot has largely been a flop and helps Microsoft (and data thieves) more than its users.

Why are so many ar­ti­cles and YouTube videos lately re­gal­ing read­ers and watch­ers with the har­row­ing tales of techies switch­ing from Windows to Linux? Anyone who has read one of those ar­ti­cles or watched one of those videos will know it boils down to two main is­sues: teleme­try and poor soft­ware sta­bil­ity.

After deal­ing with these is­sues and try­ing to solve them with workarounds, I dual-booted a Linux par­ti­tion for a few weeks. After a Windows up­date (that I did­n’t choose to do) wiped that par­ti­tion and, con­se­quently, the Linux in­stal­la­tion, I de­cided to go whole-hog: I deleted Windows 11 and used the en­tire drive for Linux.

The other main rea­son folks unin­stall Windows is due to the over­all poor soft­ware ex­pe­ri­ence. Windows 11 has mul­ti­ple set­tings mod­ules to han­dle the same task (such as set­ting up net­work­ing or adding de­vices), and none of them seem to talk to each other. Additionally, each new up­date (which will even­tu­ally be forced upon you) seems to bring more bugs than fixes. Personally, I en­coun­tered 2-3 full sys­tem crashes a week when I ran Windows 11, and my hard­ware is fairly de­cent: AMD Ryzen 7 6800H, 32 GB of RAM, and a 1 TB PCIe NVMe drive. Still, a few times a week, my com­puter would freeze for a few sec­onds, the dis­plays would go dark, and the PC would ei­ther restart or hang in­def­i­nitely.


The first question often asked of Windows refugees migrating to Linux is, “Why Linux?” It’s a good question, and one that needs to be asked before dumping Windows for anything else. Personally, I tried macOS first. The experience was smooth and easy but ultimately felt restrictive (installing from third-party developers, anyone?). Additionally, the only Apple computer I have is a 2014 MacBook Air. As such, the latest version of macOS I could actually run is 11 (Big Sur), which was released in 2020. Overall system operation was quite sluggish on the older hardware, and I knew that time would inevitably take its toll on the software experience — apps would soon be out of date and I wouldn’t be able to update them. I also tried the OpenCore Legacy Patcher to push the laptop to macOS 13. While performance improved, key features like iMessage and Continuity Camera were either buggy or flat out refused to work. It felt like my laptop was running in mud with its hands tied behind its back. Plus, I needed something for my desktop. Not wanting to drop a mortgage payment or two on new hardware, I opted for Linux.

Linux promised me the potential of what I wanted - high hardware compatibility with full software freedom. The operating system can run on pretty much anything, and it grants users a huge amount of control over their system. I tried out a few distributions, or distros, of Linux. A distro is like a “flavor” of Linux, and each one has unique factors (e.g., app/package management, bundled user interface). With most distros, these differences are largely irrelevant; most distros offer the same main packages as others.

...

Read the original on www.notebookcheck.net »

3 558 shares, 32 trendiness

Don't fall into the anti-AI hype

I love writing software, line by line. It could be said that my career was a continuous effort to create well-written, minimal software, where the human touch was the fundamental feature. I also hope for a society where the last are not forgotten. Moreover, I don’t want AI to economically succeed, and I don’t care if the current economic system is subverted (I could be very happy, honestly, if it goes in the direction of a massive redistribution of wealth). But I would not respect myself and my intelligence if my idea of software and society impaired my vision: facts are facts, and AI is going to change programming forever.

In 2020 I left my job in order to write a novel about AI, universal basic income, and a society that adapted to the automation of work while facing many challenges. At the very end of 2024 I opened a YouTube channel focused on AI, its use in coding tasks, and its potential social and economic effects. But while I recognized very early what was going to happen, I thought that we had more time before programming would be completely reshaped, at least a few years. I no longer believe this is the case. Recently, state-of-the-art LLMs have become able to complete large subtasks or medium-size projects alone, almost unassisted, given a good set of hints about what the end result should be. The degree of success you’ll get is related to the kind of programming you do (the more isolated, and the more textually representable, the better: system programming is particularly apt), and to your ability to create a mental representation of the problem to communicate to the LLM. But, in general, it is now clear that for most projects, writing the code yourself is no longer sensible, if not to have fun.

In the past week, just prompting, and inspecting the code to provide guidance from time to time, I completed the following four tasks in hours instead of weeks:

1. I mod­i­fied my linenoise li­brary to sup­port UTF-8, and cre­ated a frame­work for line edit­ing test­ing that uses an em­u­lated ter­mi­nal that is able to re­port what is get­ting dis­played in each char­ac­ter cell. Something that I al­ways wanted to do, but it was hard to jus­tify the work needed just to test a side pro­ject of mine. But if you can just de­scribe your idea, and it ma­te­ri­al­izes in the code, things are very dif­fer­ent.

2. I fixed tran­sient fail­ures in the Redis test. This is very an­noy­ing work, tim­ing re­lated is­sues, TCP dead­lock con­di­tions, and so forth. Claude Code it­er­ated for all the time needed to re­pro­duce it, in­spected the state of the processes to un­der­stand what was hap­pen­ing, and fixed the bugs.

3. Yesterday I wanted a pure C library able to do inference of BERT-like embedding models. Claude Code created it in 5 minutes. Same output, and nearly the same speed (15% slower), as PyTorch. 700 lines of code. Plus a Python tool to convert the GTE-small model.

4. In the past weeks I made changes to the Redis Streams internals. I had a design document for the work I did. I gave it to Claude Code, and it reproduced my work in about 20 minutes or less (mostly because I’m slow at checking and authorizing the commands it needed to run).

It is simply impossible not to see the reality of what is happening. Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too). It does not matter if AI companies will not be able to get their money back and the stock market will crash. All that is irrelevant, in the long run. It does not matter if this or the other CEO of some unicorn is telling you something that is off-putting, or absurd. Programming changed forever, anyway.

How do I feel, about all the code I wrote that was in­gested by LLMs? I feel great to be part of that, be­cause I see this as a con­tin­u­a­tion of what I tried to do all my life: de­moc­ra­tiz­ing code, sys­tems, knowl­edge. LLMs are go­ing to help us to write bet­ter soft­ware, faster, and will al­low small teams to have a chance to com­pete with big­ger com­pa­nies. The same thing open source soft­ware did in the 90s.

However, this technology is far too important to be in the hands of a few companies. For now, you can do the pre-training better or not, you can do reinforcement learning in a much more effective way than others, but the open models, especially the ones produced in China, continue to compete (even if they are behind) with frontier models of closed labs. There is a sufficient democratization of AI, so far, even if imperfect. But: it is absolutely not obvious that it will be like that forever. I’m scared about the centralization. At the same time, I believe neural networks, at scale, are simply able to do incredible things, and that there is not enough “magic” inside current frontier AI for the other labs and teams not to catch up (otherwise it would be very hard to explain, for instance, why OpenAI, Anthropic and Google are so near in their results, for years now).

As a pro­gram­mer, I want to write more open source than ever, now. I want to im­prove cer­tain repos­i­to­ries of mine aban­doned for time con­cerns. I want to ap­ply AI to my Redis work­flow. Improve the Vector Sets im­ple­men­ta­tion and then other data struc­tures, like I’m do­ing with Streams now.

But I’m wor­ried for the folks that will get fired. It is not clear what the dy­namic at play will be: will com­pa­nies try to have more peo­ple, and to build more? Or will they try to cut salary costs, hav­ing fewer pro­gram­mers that are bet­ter at prompt­ing? And, there are other sec­tors where hu­mans will be­come com­pletely re­place­able, I fear.

What is the so­cial so­lu­tion, then? Innovation can’t be taken back af­ter all. I be­lieve we should vote for gov­ern­ments that rec­og­nize what is hap­pen­ing, and are will­ing to sup­port those who will re­main job­less. And, the more peo­ple get fired, the more po­lit­i­cal pres­sure there will be to vote for those who will guar­an­tee a cer­tain de­gree of pro­tec­tion. But I also look for­ward to the good AI could bring: new progress in sci­ence, that could help lower the suf­fer­ing of the hu­man con­di­tion, which is not al­ways happy.

Anyway, back to programming. I have a single suggestion for you, my friend. Whatever you believe the Right Thing should be, you can’t control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools, with care, over weeks of work, not in a five-minute test where you just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.

Yes, maybe you think that you worked so hard to learn cod­ing, and now ma­chines are do­ing it for you. But what was the fire in­side you, when you coded till night to see your pro­ject work­ing? It was build­ing. And now you can build more and bet­ter, if you find your way to use AI ef­fec­tively. The fun is still there, un­touched.


...

Read the original on antirez.com »

4 294 shares, 39 trendiness

icloud-photos-downloader/icloud_photos_downloader: A command-line tool to download photos from iCloud

* A com­mand-line tool to down­load all your iCloud pho­tos.

* Works on Linux, Windows, and ma­cOS; lap­top, desk­top, and NAS

* Available as an ex­e­cutable for di­rect down­load­ing and through pack­age man­agers/​ecosys­tems (Docker, PyPI, AUR, npm)

* Developed and main­tained by vol­un­teers (we are al­ways look­ing for help).

See Documentation for more de­tails. Also, check Issues

We aim to re­lease new ver­sions once a week (Friday), if there is some­thing worth de­liv­er­ing.

To make iCloud Photo Downloader work, en­sure the iCloud ac­count is con­fig­ured with the fol­low­ing set­tings, oth­er­wise Apple Servers will re­turn an ACCESS_DENIED er­ror:

* Enable Access iCloud Data on the Web: On your iPhone / iPad, en­able Settings > Apple ID > iCloud > Access iCloud Data on the Web

There are three ways to run icloudpd:

* Download the executable for your platform from the GitHub Release and run it

* Use a package manager to install, update, and, in some cases, run it (Docker, PyPI, AUR, npm)

* Build and run from the source

See Documentation for more de­tails

* Three modes of operation:

  * Copy - download new photos from iCloud (default mode)

  * Sync - download new photos from iCloud and delete local files that were removed in iCloud (--auto-delete option)

  * Move - download new photos from iCloud and delete photos in iCloud (--keep-icloud-recent-days option)

* Support for Live Photos (image and video as sep­a­rate files) and RAW im­ages (including RAW+JPEG)

* Automatic de-du­pli­ca­tion of pho­tos with the same name

* One-time download, and an option to monitor for iCloud changes continuously (--watch-with-interval option)

* … and many more (use the --help option to get the full list)

Some changes are added to the ex­per­i­men­tal mode be­fore they grad­u­ate into the main pack­age. Details

To keep your iCloud photo col­lec­tion syn­chro­nized to your lo­cal sys­tem:

To in­de­pen­dently cre­ate and au­tho­rize a ses­sion (and com­plete 2SA/2FA val­i­da­tion if needed) on your lo­cal sys­tem:
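As a sketch of both invocations, using flags from the icloudpd documentation (the target directory and Apple ID below are placeholders; verify against `icloudpd --help` for your version):

```shell
# Keep a local folder in sync, re-checking iCloud every hour
# (directory and Apple ID are placeholders)
icloudpd --directory /data/photos --username user@example.com --watch-with-interval 3600

# Only create and authorize a session, completing 2SA/2FA if prompted
icloudpd --username user@example.com --auth-only
```

The `--auth-only` run is useful on headless machines (e.g. a NAS) so the interactive 2FA prompt can be completed once before scheduling unattended syncs.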

Want to con­tribute to iCloud Photos Downloader? Awesome! Check out the con­tribut­ing guide­lines to get in­volved.

...

Read the original on github.com »

5 291 shares, 21 trendiness

2025 in retrospect & happy new year 2026! – Gentoo Linux

Happy New Year 2026! Once again, a lot has happened in Gentoo over the past months. New developers, more binary packages, GnuPG alternatives support, Gentoo for WSL, improved Rust bootstrap, better NGINX packaging, … As always, here we’re going to revisit all the exciting news from our favourite Linux distribution.

Gentoo currently consists of 31663 ebuilds for 19174 different packages. For amd64 (x86-64), there are 89 GBytes of binary packages available on the mirrors. Each week Gentoo builds 154 distinct installation stages for different processor architectures and system configurations, with an overwhelming part of these fully up-to-date.

The number of commits to the main ::gentoo repository has remained at an overall high level in 2025, with a slight decrease from 123942 to 112927. The number of commits by external contributors was 9396, now across 377 unique external authors.

GURU, our user-cu­rated repos­i­tory with a trusted user model, as en­try point for po­ten­tial de­vel­op­ers, has shown a de­crease in ac­tiv­ity. We have had 5813 com­mits in 2025, com­pared to 7517 in 2024. The num­ber of con­trib­u­tors to GURU has in­creased, from 241 in 2024 to 264 in 2025. Please join us there and help pack­ag­ing the lat­est and great­est soft­ware. That’s the ideal prepa­ra­tion for be­com­ing a Gentoo de­vel­oper!

Activity has slowed down some­what on the Gentoo bug­tracker bugs.gen­too.org, where we’ve had 20763 bug re­ports cre­ated in 2025, com­pared to 26123 in 2024. The num­ber of re­solved bugs shows the same trend, with 22395 in 2025 com­pared to 25946 in 2024. The cur­rent val­ues are closer to those of 2023 - but clearly this year we fixed more than we broke!

In 2025 we have gained four new Gentoo de­vel­op­ers. They are in chrono­log­i­cal or­der:

Let’s now look at the ma­jor im­prove­ments and news of 2025 in Gentoo.

RISC-V bootable QCOW2: Same as for amd64 and arm64, also for RISC-V we now have ready-made bootable disk images in QCOW2 format available for download on our mirrors, in a console and a cloud-init variant. The disk images use the rv64gc instruction set and the lp64d ABI, and can be booted via the standard RISC-V UEFI support.

Gentoo for WSL: We now publish weekly Gentoo images for the Windows Subsystem for Linux (WSL), based on the amd64 stages; see our mirrors. While these images are not present in the Microsoft Store yet, that’s something we intend to fix soon.

hppa and sparc desta­bi­lized: Since we do not have hard­ware read­ily avail­able any­more and these ar­chi­tec­tures mostly fill a retro­com­put­ing niche, sta­ble key­words have been dropped for both hppa (PA-RISC) and sparc. The ar­chi­tec­tures will re­main sup­ported with test­ing key­words.

musl with locales: Localization support via the package sys-apps/musl-locales has been added by default to the Gentoo stages based on the lightweight musl C library.

GPG alternatives: Given the unfortunate fracturing of the GnuPG / OpenPGP / LibrePGP ecosystem due to competing standards, we now provide an alternatives mechanism to choose the system gpg provider and ease compatibility testing. At the moment, the original, unmodified GnuPG, the FreePG fork/patchset as also used in many other Linux distributions (Fedora, Debian, Arch, …), and the re-implementation Sequoia-PGP with Chameleon are available. In practice, implementation details vary between the providers, and while GnuPG and FreePG are fully supported, you may still encounter difficulties when selecting Sequoia-PGP/Chameleon.

zlib-ng support: We have introduced initial support for using zlib-ng and minizip-ng in compatibility mode in place of the reference zlib libraries.

System-wide job­server: We have cre­ated steve, an im­ple­men­ta­tion of a to­ken-ac­count­ing sys­tem-wide job­server, and in­tro­duced ex­per­i­men­tal global job­server sup­port in Portage. Thanks to that, it is now pos­si­ble to glob­ally con­trol the con­cur­rently run­ning build job count, cor­rectly ac­count­ing for par­al­lel emerge jobs, make and ninja jobs, and other clients sup­port­ing the job­server pro­to­col.
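For context, the jobserver protocol involved here is the same token-pipe scheme GNU make clients speak: a pipe is pre-loaded with one byte per job slot, and a client must hold a byte for the whole time it runs a job. A minimal Python sketch of that accounting (illustrative only, not steve's or Portage's actual implementation):

```python
import os

# A jobserver is essentially a pipe holding one byte per allowed job slot.
def make_jobserver(slots: int):
    r, w = os.pipe()
    os.write(w, b"+" * slots)  # pre-load the tokens
    return r, w

def run_job(r: int, w: int, job) -> None:
    token = os.read(r, 1)   # blocks until a slot is free
    try:
        job()               # at most `slots` jobs hold tokens at once
    finally:
        os.write(w, token)  # return the slot for other clients

r, w = make_jobserver(2)
done = []
for i in range(5):
    run_job(r, w, lambda i=i: done.append(i))
print(done)  # → [0, 1, 2, 3, 4]
```

Because every client (emerge, make, ninja, …) draws from the same token pool, the total number of concurrent build jobs stays bounded no matter how the jobs are nested.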

NGINX re­work: The pack­ag­ing of the NGINX web server and re­verse proxy in Gentoo has un­der­gone a ma­jor im­prove­ment, in­clud­ing also the split­ting off of sev­eral third-party mod­ules into sep­a­rate pack­ages.

C++ based Rust bootstrap: We have added a bootstrap path for Rust from C++ using Mutabah’s Rust compiler mrustc, which alleviates the need for pre-built binaries and makes it significantly easier to support more configurations.

Ada and D boot­strap: Similarly, Ada and D sup­port in gcc now have clean boot­strap paths, which makes en­abling these in the com­piler as easy as switch­ing the use­flags on gcc and run­ning emerge.

FlexiBLAS: Gentoo has adopted the new FlexiBLAS wrapper library as the primary way of switching implementations of the BLAS numerical algorithm library at runtime. This automatically also provides ABI stability for linking programs and bundles the specific treatment of different BLAS variants in one place.
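In practice the switch is a runtime knob. Assuming a FlexiBLAS installation, selection works roughly like this (`my_numeric_app` and the backend name are placeholders; available backends depend on what is installed):

```shell
# Inspect the backends FlexiBLAS knows about
flexiblas list

# Run the same binary against a different BLAS, no relinking required
FLEXIBLAS=OPENBLAS ./my_numeric_app
```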

Python: In the mean­time the de­fault Python ver­sion in Gentoo has reached Python 3.13. Additionally we have also Python 3.14 avail­able sta­ble - fully up to date with up­stream.

KDE up­grades: As of end of 2025, in Gentoo sta­ble we have KDE Gear 25.08.3, KDE Frameworks 6.20.0, and KDE Plasma 6.5.4. As al­ways, Gentoo test­ing fol­lows the newest up­stream re­leases (and us­ing the KDE over­lay you can even in­stall from git sources).

Additional build server: A sec­ond ded­i­cated build server, hosted at Hetzner Germany, has been added to speed up the gen­er­a­tion of in­stal­la­tion stages, iso and qcow2 im­ages, and bi­nary pack­ages.

Documentation: Documentation work has made constant progress on wiki.gentoo.org. The Gentoo Handbook had some particularly useful updates, and the documentation received lots of improvements and additions from the many active volunteers. There are currently 9,647 pages on the wiki, and there have been 766,731 edits since the project started. Please help Gentoo by contributing to documentation!

* Income: The Gentoo Foundation took in $12,066 in fiscal year 2025 (ending 2025/06/30); the dominant part (over 80%) consists of individual cash donations from the community. On the SPI side, we received $8,471 in the same period as fiscal year 2025; also here, this is all from small individual cash donations.

* Expenses: Our expenses in 2025 were: program services (e.g. hosting costs) $8,332, management & general (accounting) $1,724, fundraising $905, and non-operating (depreciation expenses) $10,075.

* Balance: We have $104,831 in the bank as of July 1, 2025 (which is when our fiscal year 2026 starts for accounting purposes). The Gentoo Foundation FY2025 financial statement is available on the Gentoo Wiki.

* Transition to SPI: The Foundation encourages donors to ensure their ongoing contributions are going to SPI - more than 40 donors had not responded to requests to move the recurring donations by the end of the year. Expenses will be moved to the SPI structure as ongoing income permits.

As every year, we would like to thank all Gentoo developers and all who have submitted contributions for their relentless everyday Gentoo work. If you are interested and would like to help, please join us to make Gentoo even better! As a volunteer project, Gentoo could not exist without its community.

...

Read the original on www.gentoo.org »

6 256 shares, 19 trendiness

“Food JPEGs” in Super Smash Bros & Kirby Air Riders

Have you ever noticed that the food graphics in Super Smash Bros. and Kirby Air Riders are flat, “billboarded” stock images of food?

This artistic decision from director Masahiro Sakurai has persisted through 8 games over nearly 25 years. I’ve seen a few folks online remarking about the “JPEG or PNG”-like quality of the images in the most recent release.

While re­search­ing every game with this art style and all 150+ unique food im­ages I ended up fix­ing wikis, re­view­ing a sea­sonal KitKat fla­vor, and pre­serv­ing an un­cat­a­logued im­age of tem­pura soba.

Masahiro Sakurai is the di­rec­tor for every game on this list, so clearly this is his artis­tic de­ci­sion.

Super Smash Bros. Melee was the first game to contain this food art style, published in 2001. This style was then repeated in Kirby Air Ride (2003), Super Smash Bros. Brawl (2008), Super Smash Bros. for 3DS and Wii U (2014), Super Smash Bros. Ultimate (2018), and most recently in Kirby Air Riders (2025).

Credit to Nintendo, HAL Laboratory, SORA Ltd., and Bandai Namco Studios as developers and publishers of these games. Artwork was sourced from The Spriters Resource.

Where it all began! Super Smash Bros. Melee for the GameCube started off with 28 distinct food items, often found in “Party Balls”. Each type of food had a different “nutritional value” and “yumminess quotient” according to the in-game trophy dedicated to the food items.

Melee included many foods specific to Japanese cuisine, such as unagi (eel), omurice, soba, dango, and gyūdon. I do distinctly remember growing up as a “culinarily sheltered” kid in the midwestern United States and not understanding what many of these food items were.

The original stock images of Super Smash Bros. Melee and the next game, Kirby Air Ride, have been partially discovered and documented by a group called “Render96”. The stock images are from a company called “Sozaijiten”. Many of the food images come from Material Dictionary CDs (Vegetables & Fruits), (Food & Dishes), and (Cooking Japanese, Western, & Chinese). The apple stock image in particular was re-used all the way through Super Smash Bros. Ultimate (2018). The burger, milk, dango, and donut are still missing their primary source.

Kirby Air Ride for the GameCube had sig­nif­i­cantly fewer dis­tinct food items (12) com­pared to Melee and main­tained many of the same food stock im­ages from Melee, in­clud­ing the ap­ple, burger, chicken, curry, omurice, oni­giri, and ra­men. Nigiri was in­cluded, but the im­age was changed from a sushi board to a plate.

The stock im­ages had their sat­u­ra­tion in­creased and the black bor­ders around the im­ages are thicker, some­times 2-3 pix­els in­stead of only 1 pixel for Melee.

I paid $50 plus ship­ping on eBay for this PNG. This is the clos­est I’ll get to NFTs.

While researching the foods in Kirby Air Ride I discovered a wiki description of a “tempura soba” item that I’d never heard of and wasn’t included in the Spriters Resource spritesheets for Kirby Air Ride. Turns out that this item was changed to a “hotdog” in the NTSC-U and PAL releases of Kirby Air Ride.

I was unable to find a non-blurry image of the tempura soba sprite online, so of course I had to preserve this sprite myself. I purchased a Japanese copy of Kirby Air Ride, dumped the ROM using the FlippyDrive Disc Backup Utility, and ran the ROM using Dolphin with “Dump Textures” mode enabled to archive the sprite directly from the game.

Kirby Air Ride cover art­work (left: JP, right: US, PAL). Images from the GameTDB.

In the process I also learned that the cover of Kirby Air Ride changed be­tween the Japanese and in­ter­na­tional re­leases. The Japanese cover art fea­tures a smil­ing happy Kirby where the in­ter­na­tional cover has Kirby with a fur­rowed brow and se­ri­ous look.

Super Smash Bros. Brawl for the Wii has only one more food item com­pared to Melee (29) and in­tro­duces 11 new food items in­clud­ing bread, cake, candy, choco­late, cookie, melon soda, par­fait, peaches, pie, pineap­ple, and steak.

About half of the Japanese-specific foods from both Melee and Kirby Air Ride were re­placed: curry, omurice, oni­giri, and ra­men.

The art is less saturated and more “realistic”, which is in line with the rest of the game’s art direction. The images lost their black outline, likely to draw less attention to the “arcade-y” feel that the previous titles had with food items.

Super Smash Bros. Wii U and 3DS have the same to­tal num­ber of food items as Brawl (29). These games change the food art style com­pletely, again! It’s brighter, sat­u­rated, and looks de­li­cious.

The soda item was changed from a melon cream soda to a dark cola with lemon. The omurice was changed to a pair of fried eggs with bacon. These games are also the only ones without the “burger” food item.

Super Smash Bros. for 3DS uses the same food artwork as Super Smash Bros. for Wii U, downscaled to 64x64 pixels from 256x256 pixels with some minor editing.

Super Smash Bros. Wii U and 3DS added the “Mont Blanc” food item, which is a French dessert that is popular in Japan. I’ve seen multiple guides and wikis mistakenly label this food item as “noodles” due to the “vermicelli” shape of the puréed chestnuts. Yummy!

While researching and writing this blog post I happened across “Mont Blanc”-flavored KitKats. These are apparently a limited-time flavor for autumn. The KitKats are creamy and have plenty of chestnut flavor, but they are very sweet (apparently Mont Blanc is quite sweet, too, so this is to be expected).

Super Smash Bros. Ultimate uses the same 29 foods from the Wii U and 3DS and adds 9 more foods for a to­tal of 38. Many of the newly added foods are call-backs to food items in pre­vi­ous ti­tles, be­low high­lighted in pink.

The 9 new foods in Ultimate are burg­ers, cheese, corn­dogs, donuts, dumplings, daisies, pizza, pineap­ple, and steak.

It’s clear that the “Sozaijiten” stock images were still in use even in 2018: 17 years later! The apple, cheese, and chicken stock images for Super Smash Bros. Melee match the stock images used in Ultimate.

Kirby Air Riders, released for the Switch 2, has the most foods of any game with this art style, with 45 distinct food items.

Massive thank-you to Charles Bernardo for send­ing me care­fully cropped im­ages of the food in Kirby Air Riders.

Kirby Air Riders is the first game in this series to use completely new models for all food items: not even the apple or cheese are the same as in any previous game. Kirby Air Riders is also the first game in this series not to have a “roast chicken” item, breaking from an established video-game food trope.

Kirby Air Riders adds a new food-centric mode called “Gourmet Race” where riders earn points by consuming food as quickly as possible in a small arena. Gourmet Race introduces a new food concept: “Large Foods”. Large food items are worth 15 points instead of 1 point per food item. There are 14 large food items, some presenting as “upgraded” versions of regular-sized foods.

The large food items are: a bunch of 12 ba­nanas in­stead of 3, a bread-bas­ket, a dou­ble cheese­burger, a whole cake in­stead of a slice, donuts, a fruit bas­ket, a board of ni­giri in­stead of a plate, fruit par­fait, pizza, pop­corn, salad, rain­bow shave ice in­stead of blue only, a tem­pura bowl, and a whole wa­ter­melon in­stead of a slice.

Prior to this ar­ti­cle there was not yet a com­plete list of foods in Kirby Air Riders doc­u­mented on a wiki or spritesheet. I added this list to the Kirby wiki, but I’ve also in­cluded the list be­low:

There are 16 to­tal food items that only ap­pear in a sin­gle ti­tle across the 25-year span of games. Kirby Air Riders and Super Smash Bros. Melee have by far the most unique food items with 8 and 5 re­spec­tively.

Finally, here is a table with every im­age so you can com­pare how each changed across dif­fer­ent ti­tles:

Wow, you made it to the end!

Share your thoughts with me on Mastodon, email, or Bluesky.

Check out this list of cool stuff I found on the in­ter­net.

Follow this blog on RSS or the email newslet­ter.

Go out­side (best op­tion)

...

Read the original on sethmlarson.dev »

7 243 shares, 23 trendiness

Meta Announces Nuclear Energy Projects, Unlocking Up to 6.6 GW to Power American Leadership in AI Innovation


Today, we’re an­nounc­ing land­mark agree­ments that will ex­tend and ex­pand the op­er­a­tion of three nu­clear power plants, boost the de­vel­op­ment of new ad­vanced nu­clear tech­nol­ogy, and fos­ter job growth in sev­eral American com­mu­ni­ties.

Supporting nu­clear en­ergy de­vel­op­ment strength­ens our coun­try’s en­ergy in­fra­struc­ture and helps cre­ate a more re­li­able elec­tric grid, which are key to pow­er­ing the econ­omy and se­cur­ing America’s en­ergy in­de­pen­dence and global lead­er­ship in AI.

Our agree­ments with Vistra, TerraPower, and Oklo — and the one we signed with Constellation Energy last year — make Meta one of the most sig­nif­i­cant cor­po­rate pur­chasers of nu­clear en­ergy in American his­tory.

At Meta, we’re fo­cused on build­ing per­sonal su­per­in­tel­li­gence for every­one, and de­liv­er­ing the app ex­pe­ri­ences and com­put­ing de­vices that will im­prove the lives of bil­lions of peo­ple around the world. Our in­dus­try-lead­ing data cen­ters are the back­bone of these break­throughs — they pro­vide the in­fra­struc­ture that dri­ves in­no­va­tion and brings trans­for­ma­tive tech­nolo­gies to life. Innovation at this scale re­quires more elec­tric­ity, and that’s where nu­clear en­ergy comes in. It pro­vides clean, re­li­able, and firm elec­tric­ity that helps power America’s econ­omy and com­mu­ni­ties.

That’s why to­day, we’re proud to an­nounce agree­ments with three com­pa­nies — fol­low­ing our nu­clear RFP process — that will help add clean, re­li­able en­ergy to elec­tric grids, pre­serve con­tin­ued in­vest­ment in op­er­at­ing nu­clear power plants, and sup­port the nu­clear fuel sup­ply chain, American jobs, and AI in­no­va­tion.

Our com­mit­ments to Oklo and TerraPower sup­port the next gen­er­a­tion of American de­vel­op­ers cre­at­ing safer, ad­vanced nu­clear re­ac­tors and ac­cel­er­at­ing the de­vel­op­ment of nu­clear tech­nolo­gies. Through our part­ner­ship with Vistra, we’re pro­vid­ing fi­nan­cial sup­port for op­er­at­ing nu­clear power plants, ex­tend­ing the op­er­a­tional lifes­pan, and in­creas­ing en­ergy pro­duc­tion at the Perry and Davis-Besse plants in Ohio and the Beaver Valley plant in Pennsylvania. The pro­jects we’re an­nounc­ing to­day will de­liver power to the grids that sup­port our op­er­a­tions, in­clud­ing our Prometheus su­per­clus­ter in New Albany, Ohio.

These pro­jects are ex­pected to pro­vide thou­sands of con­struc­tion jobs and hun­dreds of long-term op­er­a­tional jobs, sup­port­ing up to 6.6 GW of new and ex­ist­ing clean en­ergy by 2035. Importantly, these pro­jects add re­li­able and firm power to the grid, re­in­force America’s nu­clear sup­ply chain, and sup­port new and ex­ist­ing jobs to build and op­er­ate American power plants.

This work builds on our on­go­ing col­lab­o­ra­tion with elec­tric util­ity com­pa­nies and power providers to plan for and meet our en­ergy needs years in ad­vance of our data cen­ters be­com­ing op­er­a­tional. We pay the full costs for en­ergy used by our data cen­ters so con­sumers don’t bear these ex­penses, and we sup­port the broader grid through our en­ergy agree­ments.

“Our agreements with Vistra, TerraPower, Oklo, and Constellation make Meta one of the most significant corporate purchasers of nuclear energy in American history. State-of-the-art data centers and AI infrastructure are essential to securing America’s position as a global leader in AI. Nuclear energy will help power our AI future, strengthen our country’s energy infrastructure, and provide clean, reliable electricity for everyone. These projects are going to create thousands of skilled jobs in Ohio and Pennsylvania, add new energy to the grid, extend the life of three existing nuclear plants, and accelerate new reactor technologies.”

As the de­mand for re­li­able, scal­able, and clean en­ergy con­tin­ues to rise, ad­vanced nu­clear tech­nol­ogy has the po­ten­tial to be­come a key part of the so­lu­tion. The lat­est gen­er­a­tion of ad­vanced nu­clear re­ac­tors are de­signed to be safer — de­liv­er­ing re­li­able base­load power that can be ef­fi­ciently added to ex­ist­ing grids, which makes them ideal for sup­port­ing America’s evolv­ing power needs. Our agree­ments with Oklo and TerraPower will help ad­vance this next gen­er­a­tion of en­ergy tech­nol­ogy.

These agree­ments also mean that Oklo and TerraPower have greater busi­ness cer­tainty, can raise cap­i­tal to move for­ward with these pro­jects, and ul­ti­mately add more en­ergy ca­pac­ity to the grid. Over time, this will be an im­por­tant tool in en­sur­ing that grids main­tain re­li­a­bil­ity for all cus­tomers and en­sure sta­ble whole­sale elec­tric­ity prices.

Our agree­ment with TerraPower will pro­vide fund­ing that sup­ports the de­vel­op­ment of two new Natrium® units ca­pa­ble of gen­er­at­ing up to 690 MW of firm power with de­liv­ery as early as 2032. The agree­ment also pro­vides Meta with rights for en­ergy from up to six other Natrium units ca­pa­ble of pro­duc­ing 2.1 GW and tar­geted for de­liv­ery by 2035. At a to­tal of eight po­ten­tial units, with 2.8 GW of base­load en­ergy gen­er­a­tion ca­pac­ity and an ad­di­tional 1.2 GW of built-in stor­age, this agree­ment is Meta’s largest sup­port of ad­vanced nu­clear tech­nolo­gies to date.

“To successfully address growing energy demand, we must deploy gigawatts of advanced nuclear energy in the 2030s. This agreement with Meta is designed to support the rapid deployment of our Natrium technology that provides the reliable, flexible, and carbon-free power our country needs,” said Chris Levesque, TerraPower president and CEO. “With our first Natrium plant under development, we have completed our design, established our supply chain, and cleared key regulatory milestones. These successes mean our TerraPower team is well positioned to deliver on this historic multi-unit delivery agreement.”

Our part­ner­ship with Oklo helps ad­vance the de­vel­op­ment of en­tirely new nu­clear en­ergy in Pike County, Ohio. This ad­vanced nu­clear tech­nol­ogy cam­pus — which may come on­line as early as 2030 — is poised to add up to 1.2 GW of clean base­load power di­rectly into the PJM mar­ket and sup­port our op­er­a­tions in the re­gion.

This agree­ment lays the foun­da­tion for con­struct­ing mul­ti­ple Oklo Aurora Powerhouse re­ac­tors, which is ex­pected to cre­ate thou­sands of con­struc­tion and long-term op­er­a­tions jobs and gen­er­ate new lo­cal and state tax rev­enue through ma­jor in­vest­ments in en­ergy in­fra­struc­ture. Oklo Aurora pow­er­houses are based on proven fast-re­ac­tor de­signs with in­her­ently safe sys­tems ca­pa­ble of us­ing both fresh and re­pur­posed fuel.

“Meta’s funding commitment in support of early procurement and development activity is a major step in moving advanced nuclear forward,” said Jacob DeWitte, Oklo’s co-founder and CEO. “Two years ago, Oklo shared its vision to build a new generation of advanced nuclear powerhouses in Ohio. Today, that vision is becoming a reality through the support of a multi-year effort with Meta to deliver clean energy and create long-term, high-quality jobs in Ohio.”

Many American nu­clear power plants need long-term sup­port and re­quire on­go­ing in­vest­ment to main­tain best-in-class safety and re­li­a­bil­ity in op­er­a­tions. For ex­am­ple, our first nu­clear en­ergy agree­ment helped ex­tend the life of a nu­clear en­ergy plant in Clinton, Illinois for 20 more years.

Through ad­di­tional 20-year nu­clear en­ergy agree­ments, we will pur­chase more than 2.1 GW of en­ergy from two op­er­at­ing Vistra nu­clear power plants in Ohio (Perry and Davis-Besse), in ad­di­tion to the en­ergy from ex­pan­sions (uprates) at these two Ohio plants and a third Vistra nu­clear plant in Pennsylvania (Beaver Valley). All three plants are lo­cated in and will con­tinue to de­liver power into the PJM grid re­gion, and these ex­pan­sions will be the largest nu­clear up­rates sup­ported by a cor­po­rate cus­tomer in the US.

Meta’s com­mit­ments en­sure that these fa­cil­i­ties can con­tinue pro­vid­ing re­li­able power to the re­gional elec­tric­ity grid. The new ad­di­tional up­rate ca­pac­ity at each of them, to­tal­ing 433 MW, is ex­pected to come on­line in the early 2030s — sup­port­ing the grow­ing needs in the PJM grid re­gion in the fu­ture. This means con­sumers will ben­e­fit from a larger sup­ply of re­li­able, al­ways-ready power through Meta-supported up­rates to the Vistra fa­cil­i­ties.

“This is an exciting collaboration for us at Vistra. We are focused on meeting customer needs, and providing reliable, carbon-free nuclear power is something we’re proud to offer Meta,” said Jim Burke, president and CEO of Vistra. “This agreement is beneficial in many ways — it powers American innovation and AI technology, while allowing us to extend the operational life of these plants, boost the capacity of the nuclear reactors to support the grid, protect existing jobs while creating new ones, and continue investing in the communities where our plants are located. Partnerships like ours are key in moving America forward in both AI and energy leadership.”

Today’s an­nounce­ments are the re­sult of a thor­ough nu­clear RFP process where we learned how we could im­prove our sup­port of nu­clear pro­jects’ de­vel­op­ment life­cy­cles and iden­tify spe­cific part­ner com­pa­nies to help scale and ac­cel­er­ate the build­out of new nu­clear en­ergy pro­duc­tion. For more than a decade, we’ve worked with in­no­v­a­tive part­ners to back clean en­ergy pro­jects that sup­port the grid — adding nearly 28 GW of new en­ergy to grids across 27 states. We’re proud to in­clude Oklo, TerraPower, and Vistra on that list and sup­port their work to boost America’s en­ergy lead­er­ship.


...

Read the original on about.fb.com »

8 241 shares, 10 trendiness

Alienchow

Thanks HN folks for all the com­ments. To clar­ify a bit, the ca­bles are pulled through PVC con­duits un­der the floor­ing be­fore be­ing buried in ce­ment. Currently the hy­poth­e­sis for why the ca­ble dis­in­te­grated so quickly is hy­drol­y­sis. Singapore is ex­tremely hu­mid af­ter all. A sec­ond pos­si­bil­ity is that I keep the left­over wall paints (Nippon Paint Vinilex 5000) in the same room and have no­ticed that much of the sol­vents have evap­o­rated. It is pos­si­ble that the sol­vents in the air might have caused the ca­ble to fail in 3 years. The other ends of the ca­bles don’t feel as sticky and crumbly de­spite be­ing out in the open ex­posed to the hu­mid­ity. My guess is that the paint sol­vent got to it.

Some other learn­ings from this. Buried ca­bling should al­ways be per­ma­nently fixed and at­tached to a patch panel in­stead of dan­gling in the open. That was the orig­i­nal plan but I fig­ured it would­n’t be an is­sue. I was wrong. Always mea­sure ex­act length of buried fi­bre ca­bling as they aren’t meant to be stored in loops.

This morn­ing I woke up and headed to my bomb shel­ter to grab the bike pump to in­flate the tyres on my chil­dren’s bikes. The han­dle got slightly tan­gled up in the fi­bre op­tic ca­bles so I lifted up the ca­bles to free the pump.

Like cookie crumbs the fi­bre ca­ble’s sleeve jack­ets crum­bled in my hands.

Before I could even utter “Oh fuck no”, another section of the cable exploded outwards with thin metal wires jutting out from what seemed to be strands of white plastic threads, which I assume is the Kevlar sheath. I think I must have stood in my pseudo server room in shock for a whole minute, unable to move or process what had happened. A big part of my sheer horror was the fact that I had stupidly buried all of these cables under my cement flooring in PVC trunking from my shelter to all of the rooms in the flat. If this cable fails, the connection from the server room to a specific room would be permanently severed. The room for this particular cable turned out to be my home office, where my homelab MS-A2 resided.

I had pur­chased these ca­bles from FS.com roughly 3.5 years ago in 2022. Because I was bury­ing the ca­bles un­der­ground per­ma­nently, I opted to get the MiLiTaRy GrAdE ar­moured fi­bre ca­bles for this pur­pose.

The ca­bles had been kept spooled up with a ra­dius of around 5cm for 3 whole years, lightly tied to­gether with hook and loop ca­ble fas­ten­ers and hung on laun­dry hooks in the shel­ter all this time.

The de­stroyed ca­ble is the only one that I had un­rav­elled re­cently to patch into my UDM to en­able SFP+ con­nec­tion to my of­fice space. As it turns out, ar­moured ca­bles in this spe­cific in­stance aren’t re­ally meant for move­ment, it’s likely more of a bury and for­get pur­pose. In hind­sight I should’ve con­nected all of the ca­bles to a fi­bre patch panel on the wall so that they would never move, then con­nect the patch panel to my UDM with eas­ily re­place­able LSZH ca­bles.

But it’s too late now, all I can do is to sal­vage the sit­u­a­tion. I headed out and pur­chased 3M self-bond­ing rub­ber elec­tri­cal tape 23, and Temflex 160 vinyl elec­tri­cal tape. The idea I had was to use the com­pres­sion prop­er­ties of the stretched rub­ber tape to hold the cor­ru­gated metal sheath and wire mesh in place, be­fore wrap­ping a sec­ond vinyl pro­tec­tion layer out­side with the 160.

However, the wrap­ping process it­self re­quires me to slowly shift the ca­ble around to hook onto higher ground to pre­vent kinks. The ac­tion it­self trig­gered more jacket fail­ures. Some of the fail­ures ac­tu­ally forced the ca­ble in a sharp right an­gle, which I am al­most cer­tain has caused kinks and cracks in the in­ner fi­bre strand. RIP.

At this point, I’m look­ing at re­build­ing the en­tire sleeve jacket of any­thing that’s ex­posed and mov­able with elec­tri­cal tape. What I had pre­vi­ously thought was a good idea to keep about 5-10m of slack to al­low me to eas­ily move my server rack around is now caus­ing me more prob­lems as good elec­tri­cal tape ain’t cheap. I have to es­sen­tially re­pair around 10 me­tres of jacket with­out ac­ci­den­tally de­stroy­ing parts in­side trunk­ing that I am un­able to reach. This is as­sum­ing that the 4 other un­touched ca­bles would­n’t spon­ta­neously crum­ble as well. Based on how they felt in my hand, I think it is an in­evitable out­come.

I’m pretty cer­tain that dat­a­cen­tre tech­ni­cians read­ing this by chance would mock my id­i­otic setup and I would be in­clined to join in. This is not a good day.

On the dim side of things, at least it seems like fi­bre op­tic ca­bles are pretty hardy. My MS-A2 SFP+ con­nec­tion is still work­ing and speedtest-cli is re­port­ing around 4000/3000 Mbps up/​down speeds to my ISP (10G fi­bre in­ter­net plan). UDM is see­ing 6000/7000, so the fi­bre ca­ble is def­i­nitely com­pro­mised. :(

...

Read the original on alienchow.dev »

9 236 shares, 84 trendiness

2026 is the Year of Self-hosting

I have flirted with self-host­ing at home for years. I al­ways bounced off it - too much time spent con­fig­ur­ing in­stead of us­ing. It just was­n’t fun.

That changed re­cently. The rea­son is sim­ple: CLI agents like Claude Code make self-host­ing on a cheapo home server dra­mat­i­cally eas­ier and ac­tu­ally fun.

This is the first time I would rec­om­mend it to normie/​soft­ware-lit­er­ate peo­ple who never re­ally wanted to sign up to be­come a sysad­min and stress about up­time of core per­sonal ser­vices.

The last one is the real un­lock.

Instead of Googling “docker compose vaultwarden caddy reverse proxy” and stitching together five blog posts from 2021, I just let Claude figure it out (it’s up to you how much you care to really understand the technical details!).

Fits in one hand. Check that cen­tral cool­ing unit!

I pre­vi­ously ran my Plex server on an M1 Mac mini, which was great, but as I wanted to add more ser­vices I found my­self run­ning a lot of re­source-hun­gry VMs (via UTM) and it was get­ting com­pli­cated any­time the Mac re­booted. So, I picked up a Beelink Mini N150. It is small, quiet, and just barely sips power. I paid around $379 for the de­vice and an­other few hun­dred USD for 8TB in NVMe SSD. It’s pretty wild how ac­ces­si­ble these mini PCs have be­come in re­cent years!

This is the en­tire work­flow:

This is the part that sur­prised me. I’ve been us­ing Claude Code and other agen­tic CLIs for my day-to-day de­vel­op­ment, but as oth­ers are re­al­iz­ing, they are gen­er­al­ized com­puter agents and na­tive to the ter­mi­nal.

I in­stalled Claude Code di­rectly on the Linux box. Then I asked it things like:

* Keep my Docker im­ages up to date

* Restart on boot so I never have to futz with it af­ter an out­age

Claude Code run­ning di­rectly on the server. Just de­scribe what you want.

I did­n’t copy-paste YAML from the in­ter­net or have to do deep googling. I just asked.
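To give a flavor of what comes out of that, here is a minimal docker-compose sketch of the pattern (the service names, ports, and the Watchtower updater are illustrative assumptions, not the author's exact files):

```yaml
# docker-compose.yml — illustrative sketch, not an exact setup
services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped        # comes back up on reboot without manual futzing
    ports:
      - "8080:80"                  # Vaultwarden listens on 80 inside the container
    volumes:
      - ./vaultwarden-data:/data   # keep the vault on the host disk

  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --cleanup             # keep images up to date, prune old ones
```

The `restart: unless-stopped` policy is what handles the "restart on boot" ask, and Watchtower is one common way to keep images updated; an agent may well choose a different combination.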

I fo­cused on things I al­ready used, but wanted more con­trol over - ef­fec­tively start­ing to knock down the walled gar­den around my core ser­vices like pass­words, pho­tos, me­dia.

Each one lives in its own con­tainer.

I can ac­cess every­thing from my phone, lap­top, and tablet like it is lo­cal.

Uptime Kuma keep­ing an eye on every­thing.

Automatic alerts via email give me peace of mind.

When some­thing goes down, I get an email. When it comes back up, an­other email. No pager duty, no com­plex alert­ing rules. Just a sim­ple ping that tells me if I need to care.

Vaultwarden was kinda the okay, this can work” mo­ment.

It is a Bitwarden-compatible server writ­ten in Rust. Lightweight, re­li­able, and you can use the ex­ist­ing Bitwarden clients (like na­tive apps and browser ex­ten­sions). You can even set it as the de­fault pass­word man­ager on iOS, at the OS level!

Once that was run­ning, I ex­ported my pass­words from iCloud/​Key­chain, im­ported them eas­ily into Vaultwarden, and haven’t looked back since.

That alone jus­ti­fied the box.

Immich is a se­ri­ous Google Photos re­place­ment. I thought I’d have to com­pro­mise and flinched a bit when I in­stalled it. But nope, it’s good. Mobile apps. Face recog­ni­tion via a lo­cal (but slow) ma­chine learn­ing thread. Timeline and map view. Automatic up­loads from your photo roll.

Immich. This is not a com­pro­mise. This is bet­ter.

This is the kind of thing that used to feel frag­ile and half-baked when self-hosted. It does not any­more.

I took a bet on ReadDeck. The UI is gen­uinely good. Clean ty­pog­ra­phy, nice read­ing ex­pe­ri­ence, good mo­bile sup­port. It al­ways re­mem­bers where I stopped read­ing and takes me right there. I even set up a short­cut that al­lows me to save an ar­ti­cle for later right from mo­bile Firefox. Awesome.

This is ex­actly the kind of thing self-host­ing is per­fect for. A small, per­sonal tool that you ac­tu­ally use every day.

Lazydocker is a ter­mi­nal UI for Docker. It shows you all your con­tain­ers, logs, stats, and lets you restart or shell into any­thing with a few key­strokes.

I have been a huge fan of Lazygit for some time. I think it’s one of the best UIs I’ve ever used. So I was ex­cited to learn that Lazydocker is ba­si­cally that, but for mon­i­tor­ing Docker con­tain­ers. No mem­o­riz­ing docker ps flags or grep­ping through logs. Just SSH in, type lazy­docker, and every­thing is right there.

You feel like a su­per­hero af­ter you ssh in and see this

For a fuller pic­ture, Glances shows every­thing at once: CPU, mem­ory, disk, net­work, and all run­ning con­tain­ers.

Glances show­ing the whole pic­ture. 13 con­tain­ers, 6% CPU, 32% mem­ory. This lit­tle box barely breaks a sweat.

That is 13 ser­vices run­ning on a $379 mini PC, us­ing about 4 GB of RAM and al­most no CPU. The N150 is not a pow­er­house, but it does not need to be.

This does not feel like “running a server.”

The feel­ing of own­er­ship is pow­er­ful, but a bit hard to de­scribe. I think you just have to try it, and I hope you get a strong feel­ing of in­de­pen­dence like I have.

When some­thing breaks, I SSH in, ask the agent what is wrong, and fix it. When I want to add some­thing new, I de­scribe it in plain English.

I am spend­ing time us­ing soft­ware, learn­ing, and hav­ing fun - in­stead of main­tain­ing it and stress­ing out about it.

This is for peo­ple who:

* Do not want to be­come in­fra ex­perts

If that is you, I re­ally think this is the year to try self-host­ing.

For the first time, I would say this is not just vi­able. It is fun.

Follow me on Twitter for more.

...

Read the original on fulghum.io »

10 226 shares, 9 trendiness

OlaProeis/Ferrite: A fast, lightweight text editor for Markdown, JSON, YAML, and TOML files. Built with Rust and egui for a native, responsive experience.

A fast, light­weight text ed­i­tor for Markdown, JSON, YAML, and TOML files. Built with Rust and egui for a na­tive, re­spon­sive ex­pe­ri­ence.

Platform Note: Ferrite has been pri­mar­ily de­vel­oped and tested on Windows. While it should work on Linux and ma­cOS, these plat­forms have not been ex­ten­sively tested. If you en­counter is­sues, please re­port them.

🤖 AI Disclosure: This pro­ject is 100% AI-generated code. All Rust code, doc­u­men­ta­tion, and con­fig­u­ra­tion was writ­ten by Claude (Anthropic) via Cursor with MCP tools. My role is prod­uct di­rec­tion, test­ing, and learn­ing to or­ches­trate AI-assisted de­vel­op­ment ef­fec­tively. The code is re­viewed and tested, not blindly ac­cepted — but I want to be trans­par­ent about the de­vel­op­ment process. This pro­ject is partly a learn­ing ex­er­cise in ex­plor­ing how far AI-assisted de­vel­op­ment can go.

* Tree Viewer - Hierarchical view for JSON/YAML/TOML with in­line edit­ing, ex­pand/​col­lapse, and path copy­ing

* Syntax Highlighting - Full-file syn­tax high­light­ing for 40+ lan­guages (Rust, Python, JavaScript, Go, etc.)

* Code Folding - Fold de­tec­tion with gut­ter in­di­ca­tors (▶/▼) for head­ings, code blocks, and lists (text hid­ing de­ferred to v0.3.0)

* Minimap - VS Code-style nav­i­ga­tion panel with click-to-jump and search high­lights

Native ren­der­ing of 11 di­a­gram types di­rectly in the pre­view:

✨ v0.2.2 Released: Stability & CLI im­prove­ments! CJK font sup­port, undo/​redo fixes, com­mand-line file open­ing (ferrite file.md), con­fig­urable log level, and de­fault view mode set­ting. See CHANGELOG.md for full de­tails.

* Export Options - Export to HTML with themed styling, or copy as HTML

* Formatting Toolbar - Quick ac­cess to bold, italic, head­ings, lists, links, and more

Download the lat­est re­lease for your plat­form from GitHub Releases.

# Download the .deb file, then in­stall with:

sudo apt in­stall ./ferrite-editor_amd64.deb

# Or us­ing dpkg:

sudo dpkg -i fer­rite-ed­i­tor_amd64.deb

Ferrite is avail­able on the AUR:

You can in­stall it us­ing your AUR helper of choice.

# Release pack­age

yay -Sy fer­rite

# Binary pack­age

yay -Sy fer­rite-bin

tar -xzf fer­rite-linux-x64.tar.gz

./ferrite

# Ubuntu/Debian

sudo apt in­stall build-es­sen­tial pkg-con­fig libgtk-3-dev libxcb-shape0-dev libxcb-xfix­es0-dev

# Fedora

sudo dnf in­stall gcc pkg-con­fig gtk3-de­vel libxcb-de­vel

# Arch

sudo pac­man -S base-de­vel pkg-con­fig gtk3 libxcb

xcode-select --install

# Clone the repos­i­tory

git clone https://​github.com/​OlaProeis/​Fer­rite.git

cd Ferrite

# Build re­lease ver­sion (optimized)

cargo build –release

# The bi­nary will be at:

# Windows: tar­get/​re­lease/​fer­rite.exe

# Linux/macOS: tar­get/​re­lease/​fer­rite

# Run from source

cargo run --release

# Or run the bi­nary di­rectly

./target/release/ferrite

# Open a spe­cific file

./target/release/ferrite path/​to/​file.md

# Open mul­ti­ple files as tabs

./target/release/ferrite file1.md file2.md

# Open a folder as work­space

./target/release/ferrite path/​to/​folder/

# Show ver­sion

./target/release/ferrite --version

# Show help

./target/release/ferrite --help

Toggle be­tween modes us­ing the tool­bar but­tons or key­board short­cuts.

Workspace set­tings are stored in .ferrite/ within the work­space folder.

Access set­tings via Ctrl+, or the gear icon. Configure:

See ROADMAP.md for planned fea­tures and known is­sues.

Contributions are wel­come! Please see CONTRIBUTING.md for guide­lines.

# Fork and clone

git clone https://​github.com/​YOUR_USER­NAME/​Fer­rite.git

cd Ferrite

# Create a fea­ture branch

git check­out -b fea­ture/​your-fea­ture

# Make changes, then ver­ify

cargo fmt

cargo clippy

cargo test

cargo build

# Commit and push

git com­mit -m feat: your fea­ture de­scrip­tion”

git push ori­gin fea­ture/​your-fea­ture

This pro­ject is li­censed un­der the MIT License - see the LICENSE file for de­tails.

...

Read the original on github.com »
