10 interesting stories served every morning and every evening.
A lot has already been said about the absurdly large corner radius of windows on macOS Tahoe. People are calling the way it looks comical, like a child’s toy, or downright insane.
Setting all the aesthetic issues aside — which are to some extent a matter of taste — it also comes at a cost in terms of usability.
Since upgrading to macOS Tahoe, I’ve noticed that my attempts to resize a window quite often fail.
This never happened to me before in almost 40 years of using computers. So why all of a sudden?
It turns out that my initial click in the window corner instinctively lands in an area where the window doesn’t respond to it. The window expects this click to happen within a 19 × 19 pixel area located near the window corner.
If the window had no rounded corners at all, 62% of that area would lie inside the window:
But due to the huge corner radius in Tahoe, most of it — about 75% — now lies outside the window:
Living on this planet for quite a few decades, I have learned that it rarely works to grab things if you don’t actually touch them:
So I instinctively try to grab the window corner inside the window, typically somewhere in that green area, near the blue dot:
And I assume that most people would also intuitively expect to be able to grab the corner there. But no, that’s already outside the accepted target area:
So, for example, grabbing it here does not work:
But guess what — grabbing it here does:
So in the end, the most reliable way to resize a window in Tahoe is to grab it outside the corner — a gesture that feels unnatural and unintuitive, and is therefore inevitably error-prone.
...
Read the original on noheger.at »
There. That’s out of the way. I recently installed Linux on my main desktop computer and work laptop, overwriting the Windows partition completely. Essentially, I deleted the primary operating system from the two computers I use the most, day in and day out, instead trusting all of my personal and work computing needs to the Open Source community. This has been a growing trend, and I hopped on the bandwagon, but for good reasons. Some of those reasons might pertain to you and convince you to finally make the jump as well. Here’s my experience.
It’s no secret that Windows 11 harvests data like a pumpkin farmer in October, and there is no easy way (and sometimes no way at all) to stop it. The operating system itself acts exactly like what was called “spyware” a decade or so ago, pulling every piece of data it can about its current user. This data includes (but is far from limited to) hardware information, specific apps and software used, usage trends, and more. With the advent of AI, Microsoft made headlines with Copilot, an AI assistant designed to help users by capturing their data with tools like Recall. It turns out that Copilot has largely been a flop and helps Microsoft (and data thieves) more than its users.
Why are so many articles and YouTube videos lately regaling readers and watchers with the harrowing tales of techies switching from Windows to Linux? Anyone who has read one of those articles or watched one of those videos will know it boils down to two main issues: telemetry and poor software stability.
After dealing with these issues and trying to solve them with workarounds, I dual-booted a Linux partition for a few weeks. After a Windows update (that I didn’t choose to do) wiped that partition and, consequently, the Linux installation, I decided to go whole-hog: I deleted Windows 11 and used the entire drive for Linux.
The other main reason folks uninstall Windows is due to the overall poor software experience. Windows 11 has multiple settings modules to handle the same task (such as setting up networking or adding devices), and none of them seem to talk to each other. Additionally, each new update (which will eventually be forced upon you) seems to bring more bugs than fixes. Personally, I encountered 2-3 full system crashes a week when I ran Windows 11, and my hardware is fairly decent: AMD Ryzen 7 6800H, 32 GB of RAM, and a 1 TB PCIe NVMe drive. Still, a few times a week, my computer would freeze for a few seconds, the displays would go dark, and the PC would either restart or hang indefinitely.
The first question often asked of Windows refugees migrating to Linux is, “Why Linux?” It’s a good question, and one that needs to be asked before dumping Windows for anything else. Personally, I tried macOS first. The experience was smooth and easy but ultimately felt restrictive (installing from third-party developers, anyone?). Additionally, the only Apple computer I have is a 2014 MacBook Air. As such, the latest version of macOS I could actually run is 11 (Big Sur), which was released in 2020. Overall system operation was quite sluggish on the older hardware, and I knew that time would inevitably take its toll on the software experience — apps would soon be out of date and I wouldn’t be able to update them. I also tried the OpenCore Legacy Patcher to push the laptop to macOS 13. While performance improved, key features like iMessage and Continuity Camera were either buggy or flat out refused to work. It felt like my laptop was running in mud with its hands tied behind its back. Plus, I needed something for my desktop. Not wanting to drop a mortgage payment or two on new hardware, I opted for Linux.
Linux promised me the potential of what I wanted - high hardware compatibility with full software freedom. The operating system can run on pretty much anything, and it grants users a huge amount of control over their system. I tried out a few distributions, or distros, of Linux. A distro is like a “flavor” of Linux, and each one has unique factors (e.g., app/package management, bundled user interface). In practice, these differences are largely irrelevant; most distros offer the same main packages.
...
Read the original on www.notebookcheck.net »
I love writing software, line by line. It could be said that my career has been a continuous effort to create well-written, minimal software, where the human touch was the fundamental feature. I also hope for a society where the last are not forgotten. Moreover, I don’t want AI to economically succeed, and I don’t care if the current economic system is subverted (I could be very happy, honestly, if it goes in the direction of a massive redistribution of wealth). But I would not respect myself and my intelligence if my ideas about software and society impaired my vision: facts are facts, and AI is going to change programming forever.
In 2020 I left my job in order to write a novel about AI, universal basic income, and a society adapting to the automation of work while facing many challenges. At the very end of 2024 I opened a YouTube channel focused on AI, its use in coding tasks, and its potential social and economic effects. But while I recognized what was going to happen very early, I thought that we had more time before programming would be completely reshaped, at least a few years. I no longer believe this is the case. Recently, state-of-the-art LLMs have become able to complete large subtasks or medium-sized projects alone, almost unassisted, given a good set of hints about what the end result should be. The degree of success you’ll get is related to the kind of programming you do (the more isolated, and the more textually representable, the better: system programming is particularly apt), and to your ability to create a mental representation of the problem to communicate to the LLM. But, in general, it is now clear that for most projects, writing the code yourself is no longer sensible, if not to have fun.
In the past week, just by prompting and inspecting the code to provide guidance from time to time, I completed the following four tasks in hours instead of weeks:
1. I modified my linenoise library to support UTF-8, and created a framework for testing line editing that uses an emulated terminal able to report what is displayed in each character cell. Something I always wanted to do, but it was hard to justify the work needed just to test a side project of mine. But if you can just describe your idea, and it materializes in the code, things are very different.
2. I fixed transient failures in the Redis tests. This is very annoying work: timing-related issues, TCP deadlock conditions, and so forth. Claude Code iterated for all the time needed to reproduce them, inspected the state of the processes to understand what was happening, and fixed the bugs.
3. Yesterday I wanted a pure C library able to do inference with BERT-like embedding models. Claude Code created it in 5 minutes. Same output as PyTorch, and nearly the same speed (about 15% slower). 700 lines of code. Plus a Python tool to convert the GTE-small model.
4. In the past weeks I made changes to the Redis Streams internals. I had a design document for the work I did. I gave it to Claude Code, and it reproduced my work in, like, 20 minutes or less (mostly because I’m slow at checking and authorizing the commands it needed to run).
It is simply impossible not to see the reality of what is happening. Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too). It does not matter if AI companies will not be able to get their money back and the stock market will crash. All that is irrelevant, in the long run. It does not matter if this or that CEO of some unicorn is telling you something that is off-putting, or absurd. Programming changed forever, anyway.
How do I feel, about all the code I wrote that was ingested by LLMs? I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge. LLMs are going to help us to write better software, faster, and will allow small teams to have a chance to compete with bigger companies. The same thing open source software did in the 90s.
However, this technology is far too important to be in the hands of a few companies. For now, one lab may do pre-training better than another, or run reinforcement learning in a much more effective way, but the open models, especially the ones produced in China, continue to compete (even if they are behind) with the frontier models of the closed labs. There is a sufficient democratization of AI, so far, even if imperfect. But it is absolutely not obvious that it will stay like that forever. I’m scared about the centralization. At the same time, I believe neural networks, at scale, are simply able to do incredible things, and that there is not enough “magic” inside current frontier AI for the other labs and teams not to catch up (otherwise it would be very hard to explain, for instance, why OpenAI, Anthropic, and Google have been so close in their results for years now).
As a programmer, I want to write more open source than ever, now. I want to improve certain repositories of mine that were abandoned due to time constraints. I want to apply AI to my Redis workflow: improve the Vector Sets implementation and then other data structures, like I’m doing with Streams now.
But I’m worried for the folks that will get fired. It is not clear what the dynamic at play will be: will companies try to have more people, and to build more? Or will they try to cut salary costs, having fewer programmers that are better at prompting? And, there are other sectors where humans will become completely replaceable, I fear.
What is the social solution, then? Innovation can’t be taken back after all. I believe we should vote for governments that recognize what is happening, and are willing to support those who will remain jobless. And, the more people get fired, the more political pressure there will be to vote for those who will guarantee a certain degree of protection. But I also look forward to the good AI could bring: new progress in science, that could help lower the suffering of the human condition, which is not always happy.
Anyway, back to programming. I have a single suggestion for you, my friend. Whatever you believe the Right Thing should be, you can’t control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools, with care, over weeks of work, not in a five-minute test where you can just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.
Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you, when you coded till night to see your project working? It was building. And now you can build more and better, if you find your way to use AI effectively. The fun is still there, untouched.
...
Read the original on antirez.com »
* A command-line tool to download all your iCloud photos.
* Works on Linux, Windows, and macOS; laptop, desktop, and NAS
* Available as an executable for direct downloading and through package managers/ecosystems (Docker, PyPI, AUR, npm)
* Developed and maintained by volunteers (we are always looking for help).
See Documentation for more details. Also, check Issues
We aim to release new versions once a week (Friday), if there is something worth delivering.
To make iCloud Photo Downloader work, ensure the iCloud account is configured with the following settings, otherwise Apple Servers will return an ACCESS_DENIED error:
* Enable Access iCloud Data on the Web: On your iPhone / iPad, enable Settings > Apple ID > iCloud > Access iCloud Data on the Web
There are three ways to run icloudpd:
Download executable for your platform from the GitHub Release and run it
Use package manager to install, update, and, in some cases, run (Docker, PyPI, AUR, npm)
Build and run from the source
See Documentation for more details
* Three modes of operation:
  * Copy - download new photos from iCloud (default mode)
  * Sync - download new photos from iCloud and delete local files that were removed in iCloud (--auto-delete option)
  * Move - download new photos from iCloud and delete photos in iCloud (--keep-icloud-recent-days option)
* Support for Live Photos (image and video as separate files) and RAW images (including RAW+JPEG)
* Automatic de-duplication of photos with the same name
* One time download and an option to monitor for iCloud changes continuously (--watch-with-interval option)
* … and many more (use --help option to get full list)
Some changes are added to the experimental mode before they graduate into the main package. Details
To keep your iCloud photo collection synchronized to your local system:
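The exact command did not survive extraction here; a typical invocation, with placeholder directory, e-mail address, and polling interval, looks roughly like this:

# download and keep watching for changes once an hour (values are examples)
icloudpd --directory /data --username my@email.address --watch-with-interval 3600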
To independently create and authorize a session (and complete 2SA/2FA validation if needed) on your local system:
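As a rough sketch rather than the documented command, the standalone authentication run is typically along these lines (assuming the --auth-only option described in the project documentation; the address is a placeholder):

icloudpd --username my@email.address --auth-only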
Want to contribute to iCloud Photos Downloader? Awesome! Check out the contributing guidelines to get involved.
...
Read the original on github.com »
Happy New Year 2026! Once again, a lot has happened in Gentoo over the past months. New developers, more binary packages, GnuPG alternatives support, Gentoo for WSL, improved Rust bootstrap, better NGINX packaging, … As always here
we’re going to revisit all the exciting news from our favourite Linux distribution.
Gentoo currently consists of 31663 ebuilds for 19174 different packages. For amd64 (x86-64), there are 89 GBytes of binary packages available on the mirrors. Gentoo each week builds 154
distinct installation stages for different processor architectures and system configurations, with an overwhelming part of these fully up-to-date.
The number of commits to the main ::gentoo repository
has remained at an overall high level in 2025, with a slight decrease from 123942 to 112927. The number of commits by external contributors was 9396, now across 377 unique external authors.
GURU, our user-curated repository with a trusted user model, serving as an entry point for potential developers, has shown a decrease in activity. We have had 5813 commits in 2025, compared to 7517 in 2024. The number of contributors to GURU has increased, from 241 in 2024 to 264 in 2025. Please join us there and help package the latest and greatest software. That’s the ideal preparation for becoming a Gentoo developer!
Activity has slowed down somewhat on the Gentoo bugtracker bugs.gentoo.org, where we’ve had 20763 bug reports created in 2025, compared to 26123 in 2024. The number of resolved bugs shows the same trend, with 22395 in 2025 compared to 25946 in 2024. The current values are closer to those of 2023 - but clearly this year we fixed more than we broke!
In 2025 we have gained four new Gentoo developers. They are in chronological order:
Let’s now look at the major improvements and news of 2025 in Gentoo.
RISC-V bootable QCOW2: Same as for amd64 and arm64, also for RISC-V we now have ready-made bootable disk images in QCOW2 format
available for download on our mirrors in a console and a cloud-init variant. The disk images use the rv64gc instruction set and the lp64d ABI, and can be booted via the standard RISC-V UEFI support.
Gentoo for WSL: We now publish weekly Gentoo images for Windows
Subsystem for Linux (WSL), based on the amd64 stages,
see our mirrors. While these images are not present in the Microsoft store yet, that’s something we intend to fix soon.
hppa and sparc destabilized: Since we do not have hardware readily available anymore and these architectures mostly fill a retrocomputing niche, stable keywords have been dropped for both hppa (PA-RISC) and sparc. The architectures will remain supported with testing keywords.
musl with locales: Localization support via the package
sys-apps/musl-locales has been added by default to the Gentoo stages based on the lightweight musl C library.
GPG alternatives: Given the unfortunate fracturing of the GnuPG / OpenPGP / LibrePGP ecosystem due to competing standards, we now provide an alternatives mechanism to choose the system gpg provider and ease compatibility testing. At the moment, three providers are available: the original, unmodified GnuPG; the FreePG fork/patchset as also used in many other Linux distributions (Fedora, Debian, Arch, …); and the re-implementation Sequoia-PGP with Chameleon. In practice, implementation details vary between the providers, and while GnuPG and FreePG are fully supported, you may still encounter difficulties when selecting Sequoia-PGP/Chameleon.
zlib-ng support: We have introduced initial support for using zlib-ng and
minizip-ng in compatibility mode in place of the reference zlib libraries.
System-wide jobserver: We have created steve, an implementation of a token-accounting system-wide jobserver, and introduced experimental global jobserver support in Portage. Thanks to that, it is now possible to globally control the concurrently running build job count, correctly accounting for parallel emerge jobs, make and ninja jobs, and other clients supporting the jobserver protocol.
NGINX rework: The packaging of the NGINX web server and reverse proxy in Gentoo has undergone a major improvement, including also the splitting off of several third-party modules into separate packages.
C++ based Rust bootstrap: We have added a bootstrap path for Rust from C++ using
Mutabah’s Rust compiler mrustc, which alleviates the need for pre-built binaries and makes it significantly easier to support more configurations.
Ada and D bootstrap: Similarly, Ada and D support in gcc now have clean bootstrap paths, which makes enabling these in the compiler as easy as switching the useflags on gcc and running emerge.
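As a rough sketch of what that looks like in practice (the USE flag names and package atom below are assumptions based on common Gentoo conventions, not taken from the announcement):

# enable the Ada and D frontends on gcc, then rebuild the compiler
echo "sys-devel/gcc ada d" >> /etc/portage/package.use/gcc
emerge --ask --oneshot sys-devel/gcc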
FlexiBLAS: Gentoo has adopted the new FlexiBLAS wrapper
library as the primary way of switching implementations of the BLAS numerical algorithm library at runtime. This automatically also provides ABI stability for linking programs and bundles the specific treatment of different BLAS variants in one place.
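For illustration, switching the active backend is typically done with FlexiBLAS’s own command-line tool; a minimal sketch, assuming an OpenBLAS backend is installed:

flexiblas list                 # show the BLAS backends FlexiBLAS knows about
flexiblas default OPENBLAS     # make OpenBLAS the default at runtime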
Python: In the meantime the default Python version in Gentoo has reached Python 3.13. Additionally, Python 3.14 is also available as stable, fully up to date with upstream.
KDE upgrades: As of end of 2025, in Gentoo stable we have KDE Gear 25.08.3, KDE Frameworks 6.20.0, and KDE Plasma 6.5.4. As always, Gentoo testing follows the newest upstream releases (and using the KDE overlay you can even install from git sources).
Additional build server: A second dedicated build server, hosted at Hetzner Germany, has been added to speed up the generation of installation stages, iso and qcow2 images, and binary packages.
Documentation: Documentation work has made constant progress on wiki.gentoo.org. The Gentoo Handbook had some particularly useful updates, and the documentation received lots of improvements and additions from the many active volunteers. There are currently 9,647 pages on the wiki, and there have been 766,731 edits since the project started. Please help
Gentoo by contributing to documentation!
* Income: The Gentoo Foundation took in $12,066 in fiscal year 2025 (ending 2025/06/30); the dominant part
(over 80%) consists of individual cash donations from the community. On the SPI side, we received $8,471
in the same period as fiscal year 2025; also here, this is all from small individual cash donations.
* Expenses: Our expenses in 2025 were: program services (e.g. hosting costs) $8,332; management & general (accounting) $1,724; fundraising $905; and non-operating (depreciation expenses) $10,075.
* Balance: We have $104,831 in the bank as of July 1, 2025 (which is when our fiscal year 2026 starts for accounting
purposes). The Gentoo Foundation FY2025 financial statement
is available on the Gentoo Wiki.
* Transition to SPI: The Foundation encourages donors to ensure their ongoing contributions are going to
SPI - more than 40 donors had not responded to requests to move the recurring donations
by the end of the year. Expenses will be moved to the SPI structure as ongoing income permits.
As every year, we would like to thank all Gentoo developers and all who have submitted contributions
for their relentless everyday Gentoo work. If you are interested and would like to help, please join us to make Gentoo even better! As a volunteer project, Gentoo could not exist without its community.
...
Read the original on www.gentoo.org »
Have you ever noticed that the food graphics in Super Smash Bros. and Kirby Air Riders are flat “billboarded” stock images of food?
This artistic decision from director Masahiro Sakurai has persisted through 8 games over nearly 25 years. I’ve seen a few folks online remarking on the “JPEG” or “PNG”-like quality of the images in the most recent release, Kirby Air Riders.
While researching every game with this art style and all 150+ unique food images I ended up fixing wikis, reviewing a seasonal KitKat flavor, and preserving an uncatalogued image of tempura soba.
Masahiro Sakurai is the director for every game on this list, so clearly this is his artistic decision.
Super Smash Bros. Melee was the first game to contain this food art style, published in 2001. This style was then repeated in Kirby Air Ride (2003), Super Smash Bros. Brawl (2008),
Super Smash Bros. for 3DS and Wii U (2014), Super Smash Bros.
Ultimate (2018), and most recently in Kirby Air Riders (2025).
Credit to Nintendo, HAL Laboratories, SORA Ltd., and Bandai Namco Studios as developers and publishers of these games. Artwork was sourced from the Spriters Resource.
Where it all began! Super Smash Bros. Melee for the GameCube started off with 28 distinct food items, often found in “Party Balls”. Each type of food had a different “nutritional value” and “yumminess quotient” according to the in-game trophy dedicated to the food items.
Melee included many foods specific to Japanese cuisine, such as unagi (eel), omurice, soba, dango, and gyūdon. I do distinctly remember growing up as a “culinarily sheltered” kid in the midwest United States and not understanding what many of these food items were.
The original stock images of Super Smash Bros. Melee and the next game, Kirby Air Ride, have been partially discovered and documented by a group called “Render96”. The stock images are from a company called “Sozaijiten”. Many of the food images come from the Material Dictionary CD series: Vegetables & Fruits, Food & Dishes, and Cooking Japanese, Western, & Chinese. The apple stock image in particular was re-used all the way through Super Smash Bros. Ultimate (2018). The burger, milk, dango, and donut are still missing their primary source.
Kirby Air Ride for the GameCube had significantly fewer distinct food items (12) compared to Melee and maintained many of the same food stock images from Melee, including the apple, burger, chicken, curry, omurice, onigiri, and ramen. Nigiri was included, but the image was changed from a sushi board to a plate.
The stock images had their saturation increased and the black borders around the images are thicker, sometimes 2-3 pixels instead of only 1 pixel for Melee.
I paid $50 plus shipping on eBay for this PNG. This is the closest I’ll get to NFTs.
While researching the foods in Kirby Air Ride I discovered a wiki description of a “tempura soba” item that I’d never heard of and wasn’t included in the Spriters Resource spritesheets
for Kirby Air Ride. Turns out that this item was changed to a “hotdog” in the NTSC-U and PAL releases of Kirby Air Ride.
I was unable to find a non-blurry image of the tempura soba sprite online, so of course I had to preserve this sprite myself. I purchased
a Japanese copy of Kirby Air Ride, dumped the ROM using the FlippyDrive Disc Backup Utility, and ran the ROM using Dolphin with “Dump Textures” mode enabled to archive the sprite directly from the game.
Kirby Air Ride cover artwork (left: JP, right: US, PAL). Images from the GameTDB.
In the process I also learned that the cover of Kirby Air Ride changed between the Japanese and international releases. The Japanese cover art features a smiling happy Kirby where the international cover has Kirby with a furrowed brow and serious look.
Super Smash Bros. Brawl for the Wii has only one more food item compared to Melee (29) and introduces 11 new food items including bread, cake, candy, chocolate, cookie, melon soda, parfait, peaches, pie, pineapple, and steak.
About half of the Japanese-specific foods from both Melee and Kirby Air Ride were replaced: curry, omurice, onigiri, and ramen.
The art is less saturated and more “realistic” which is in-line with the rest of the game’s art direction. The images lost their black outline, likely to draw less attention to the “arcade-y” feel that the previous titles had with food items.
Super Smash Bros. Wii U and 3DS have the same total number of food items as Brawl (29). These games change the food art style completely, again! It’s brighter, saturated, and looks delicious.
The soda item was changed from a melon cream soda to a dark cola with lemon. The omurice was changed to a pair of fried eggs with bacon. These games are also the only ones without the “burger” food item.
Super Smash Bros. for 3DS uses the same food artwork as Super Smash Bros. for Wii U, downscaled to 64x64 pixels from 256x256 pixels with some minor editing.
Super Smash Bros. Wii U and 3DS added the “Mont Blanc” food item, which is a French dessert that is popular in Japan. I’ve seen multiple guides and wikis mistakenly label this food item as “noodles” due to the “vermicelli” shape of the puréed chestnuts. Yummy!
While researching and writing this blog post I happened across “Mont Blanc”-flavored KitKats. These are apparently a limited-time flavor for autumn. The KitKats are creamy and have plenty of chestnut flavor, but they are very sweet (apparently Mont Blanc is quite sweet, too, so this is to be expected).
Super Smash Bros. Ultimate uses the same 29 foods from the Wii U and 3DS and adds 9 more foods for a total of 38. Many of the newly added foods are call-backs to food items in previous titles, below highlighted in pink.
The 9 new foods in Ultimate are burgers, cheese, corndogs, donuts, dumplings, daisies, pizza, pineapple, and steak.
It’s clear that the “Sozaijiten” stock images were still in use even in 2018: 17 years later! The apple, cheese, and chicken stock images for Super Smash Bros. Melee match the stock images used in Ultimate.
Kirby Air Riders, released for the Switch 2, has the most foods of any game with this art style, with 45 distinct food items.
Massive thank-you to Charles Bernardo for sending me carefully cropped images of the food in Kirby Air Riders.
Kirby Air Riders is the first game in this series to use completely new models for all food items: not even the apple or cheese are the same from any previous game. Kirby Air Riders is also the first game in this series not to have a “roast chicken” item, breaking from an established video-game food trope.
Kirby Air Riders adds a new food-centric mode called “Gourmet Race”, where riders earn points by consuming food as quickly as possible in a small arena. Gourmet Race introduces a new food concept: “Large Foods”. Large food items are worth 15 points instead of 1 point per food item. There are 14 large food items, some presenting as “upgraded” versions of regular-sized foods.
The large food items are: a bunch of 12 bananas instead of 3, a bread-basket, a double cheeseburger, a whole cake instead of a slice, donuts, a fruit basket, a board of nigiri instead of a plate, fruit parfait, pizza, popcorn, salad, rainbow shave ice instead of blue only, a tempura bowl, and a whole watermelon instead of a slice.
Prior to this article there was not yet a complete list of foods in Kirby Air Riders documented on a wiki or spritesheet. I added this list to the Kirby wiki, but I’ve also included the list below:
There are 16 total food items that only appear in a single title across the 25-year span of games. Kirby Air Riders and Super Smash Bros. Melee have by far the most unique food items with 8 and 5 respectively.
Finally, here is a table with every image so you can compare how each changed across different titles:
Wow, you made it to the end!
Share your thoughts with me on Mastodon, email, or Bluesky.
Check out this list of cool stuff I found on the internet.
Follow this blog on RSS or the email newsletter.
Go outside (best option)
...
Read the original on sethmlarson.dev »
Meta Announces Nuclear Energy Projects, Unlocking Up to 6.6 GW to Power American Leadership in AI Innovation
Today, we’re announcing landmark agreements that will extend and expand the operation of three nuclear power plants, boost the development of new advanced nuclear technology, and foster job growth in several American communities.
Supporting nuclear energy development strengthens our country’s energy infrastructure and helps create a more reliable electric grid, which are key to powering the economy and securing America’s energy independence and global leadership in AI.
Our agreements with Vistra, TerraPower, and Oklo — and the one we signed with Constellation Energy last year — make Meta one of the most significant corporate purchasers of nuclear energy in American history.
At Meta, we’re focused on building personal superintelligence for everyone, and delivering the app experiences and computing devices that will improve the lives of billions of people around the world. Our industry-leading data centers are the backbone of these breakthroughs — they provide the infrastructure that drives innovation and brings transformative technologies to life. Innovation at this scale requires more electricity, and that’s where nuclear energy comes in. It provides clean, reliable, and firm electricity that helps power America’s economy and communities.
That’s why today, we’re proud to announce agreements with three companies — following our nuclear RFP process — that will help add clean, reliable energy to electric grids, preserve continued investment in operating nuclear power plants, and support the nuclear fuel supply chain, American jobs, and AI innovation.
Our commitments to Oklo and TerraPower support the next generation of American developers creating safer, advanced nuclear reactors and accelerating the development of nuclear technologies. Through our partnership with Vistra, we’re providing financial support for operating nuclear power plants, extending the operational lifespan, and increasing energy production at the Perry and Davis-Besse plants in Ohio and the Beaver Valley plant in Pennsylvania. The projects we’re announcing today will deliver power to the grids that support our operations, including our Prometheus supercluster in New Albany, Ohio.
These projects are expected to provide thousands of construction jobs and hundreds of long-term operational jobs, supporting up to 6.6 GW of new and existing clean energy by 2035. Importantly, these projects add reliable and firm power to the grid, reinforce America’s nuclear supply chain, and support new and existing jobs to build and operate American power plants.
This work builds on our ongoing collaboration with electric utility companies and power providers to plan for and meet our energy needs years in advance of our data centers becoming operational. We pay the full costs for energy used by our data centers so consumers don’t bear these expenses, and we support the broader grid through our energy agreements.
“Our agreements with Vistra, TerraPower, Oklo, and Constellation make Meta one of the most significant corporate purchasers of nuclear energy in American history. State-of-the-art data centers and AI infrastructure are essential to securing America’s position as a global leader in AI. Nuclear energy will help power our AI future, strengthen our country’s energy infrastructure, and provide clean, reliable electricity for everyone. These projects are going to create thousands of skilled jobs in Ohio and Pennsylvania, add new energy to the grid, extend the life of three existing nuclear plants, and accelerate new reactor technologies.”
As the demand for reliable, scalable, and clean energy continues to rise, advanced nuclear technology has the potential to become a key part of the solution. The latest generation of advanced nuclear reactors are designed to be safer — delivering reliable baseload power that can be efficiently added to existing grids, which makes them ideal for supporting America’s evolving power needs. Our agreements with Oklo and TerraPower will help advance this next generation of energy technology.
These agreements also mean that Oklo and TerraPower have greater business certainty, can raise capital to move forward with these projects, and ultimately add more energy capacity to the grid. Over time, this will be an important tool in ensuring that grids maintain reliability for all customers and ensure stable wholesale electricity prices.
Our agreement with TerraPower will provide funding that supports the development of two new Natrium® units capable of generating up to 690 MW of firm power with delivery as early as 2032. The agreement also provides Meta with rights for energy from up to six other Natrium units capable of producing 2.1 GW and targeted for delivery by 2035. At a total of eight potential units, with 2.8 GW of baseload energy generation capacity and an additional 1.2 GW of built-in storage, this agreement is Meta’s largest support of advanced nuclear technologies to date.
“To successfully address growing energy demand, we must deploy gigawatts of advanced nuclear energy in the 2030s. This agreement with Meta is designed to support the rapid deployment of our Natrium technology that provides the reliable, flexible, and carbon-free power our country needs,” said Chris Levesque, TerraPower president and CEO. “With our first Natrium plant under development, we have completed our design, established our supply chain, and cleared key regulatory milestones. These successes mean our TerraPower team is well positioned to deliver on this historic multi-unit delivery agreement.”
Our partnership with Oklo helps advance the development of entirely new nuclear energy in Pike County, Ohio. This advanced nuclear technology campus — which may come online as early as 2030 — is poised to add up to 1.2 GW of clean baseload power directly into the PJM market and support our operations in the region.
This agreement lays the foundation for constructing multiple Oklo Aurora Powerhouse reactors, which is expected to create thousands of construction and long-term operations jobs and generate new local and state tax revenue through major investments in energy infrastructure. Oklo Aurora powerhouses are based on proven fast-reactor designs with inherently safe systems capable of using both fresh and repurposed fuel.
“Meta’s funding commitment in support of early procurement and development activity is a major step in moving advanced nuclear forward,” said Jacob DeWitte, Oklo’s co-founder and CEO. “Two years ago, Oklo shared its vision to build a new generation of advanced nuclear powerhouses in Ohio. Today, that vision is becoming a reality through the support of a multi-year effort with Meta; to deliver clean energy and create long-term, high-quality jobs in Ohio.”
Many American nuclear power plants need long-term support and require ongoing investment to maintain best-in-class safety and reliability in operations. For example, our first nuclear energy agreement helped extend the life of a nuclear energy plant in Clinton, Illinois for 20 more years.
Through additional 20-year nuclear energy agreements, we will purchase more than 2.1 GW of energy from two operating Vistra nuclear power plants in Ohio (Perry and Davis-Besse), in addition to the energy from expansions (uprates) at these two Ohio plants and a third Vistra nuclear plant in Pennsylvania (Beaver Valley). All three plants are located in and will continue to deliver power into the PJM grid region, and these expansions will be the largest nuclear uprates supported by a corporate customer in the US.
Meta’s commitments ensure that these facilities can continue providing reliable power to the regional electricity grid. The new additional uprate capacity at each of them, totaling 433 MW, is expected to come online in the early 2030s — supporting the growing needs in the PJM grid region in the future. This means consumers will benefit from a larger supply of reliable, always-ready power through Meta-supported uprates to the Vistra facilities.
“This is an exciting collaboration for us at Vistra. We are focused on meeting customer needs, and providing reliable, carbon-free nuclear power is something we’re proud to offer Meta,” said Jim Burke, president and CEO of Vistra. “This agreement is beneficial in many ways — it powers American innovation and AI technology, while allowing us to extend the operational life of these plants, boost the capacity of the nuclear reactors to support the grid, protect existing jobs while creating new ones, and continue investing in the communities where our plants are located. Partnerships like ours are key in moving America forward in both AI and energy leadership.”
Today’s announcements are the result of a thorough nuclear RFP process where we learned how we could improve our support of nuclear projects’ development lifecycles and identify specific partner companies to help scale and accelerate the buildout of new nuclear energy production. For more than a decade, we’ve worked with innovative partners to back clean energy projects that support the grid — adding nearly 28 GW of new energy to grids across 27 states. We’re proud to include Oklo, TerraPower, and Vistra on that list and support their work to boost America’s energy leadership.
...
Read the original on about.fb.com »
Thanks HN folks for all the comments. To clarify a bit, the cables are pulled through PVC conduits under the flooring before being buried in cement. Currently the hypothesis for why the cable disintegrated so quickly is hydrolysis. Singapore is extremely humid after all. A second possibility is that I keep the leftover wall paints (Nippon Paint Vinilex 5000) in the same room and have noticed that much of the solvents have evaporated. It is possible that the solvents in the air might have caused the cable to fail in 3 years. The other ends of the cables don’t feel as sticky and crumbly despite being out in the open exposed to the humidity. My guess is that the paint solvent got to it.
Some other learnings from this. Buried cabling should always be permanently fixed and attached to a patch panel instead of dangling in the open. That was the original plan but I figured it wouldn’t be an issue. I was wrong. Always measure the exact length of buried fibre cabling, as these cables aren’t meant to be stored in loops.
This morning I woke up and headed to my bomb shelter to grab the bike pump to inflate the tyres on my children’s bikes. The handle got slightly tangled up in the fibre optic cables so I lifted up the cables to free the pump.
Like cookie crumbs the fibre cable’s sleeve jackets crumbled in my hands.
Before I could even utter “Oh fuck no”, another section of the cable exploded outwards with thin metal wires jutting out from what seem to be strands of white plastic threads, which I assume is the Kevlar sheath. I think I must have stood in my pseudo server room in shock for a whole minute, unable to move or process what had happened. A main component of why I was in sheer horror was the fact that I had stupidly buried all of these cables under my cement flooring in PVC trunking from my shelter to all of the rooms in the flat. If this cable fails, the connection from the server room to a specific room would be permanently severed. The room for this particular cable turned out to be my home office where my homelab MS-A2 resided.
I had purchased these cables from FS.com roughly 3.5 years ago in 2022. Because I was burying the cables underground permanently, I opted to get the MiLiTaRy GrAdE armoured fibre cables for this purpose.
The cables had been kept spooled up with a radius of around 5cm for 3 whole years, lightly tied together with hook and loop cable fasteners and hung on laundry hooks in the shelter all this time.
The destroyed cable is the only one that I had unravelled recently to patch into my UDM to enable SFP+ connection to my office space. As it turns out, armoured cables in this specific instance aren’t really meant for movement, it’s likely more of a bury and forget purpose. In hindsight I should’ve connected all of the cables to a fibre patch panel on the wall so that they would never move, then connect the patch panel to my UDM with easily replaceable LSZH cables.
But it’s too late now, all I can do is to salvage the situation. I headed out and purchased 3M self-bonding rubber electrical tape 23, and Temflex 160 vinyl electrical tape. The idea I had was to use the compression properties of the stretched rubber tape to hold the corrugated metal sheath and wire mesh in place, before wrapping a second vinyl protection layer outside with the 160.
However, the wrapping process itself requires me to slowly shift the cable around to hook onto higher ground to prevent kinks. The action itself triggered more jacket failures. Some of the failures actually forced the cable in a sharp right angle, which I am almost certain has caused kinks and cracks in the inner fibre strand. RIP.
At this point, I’m looking at rebuilding the entire sleeve jacket of anything that’s exposed and movable with electrical tape. What I had previously thought was a good idea to keep about 5-10m of slack to allow me to easily move my server rack around is now causing me more problems as good electrical tape ain’t cheap. I have to essentially repair around 10 metres of jacket without accidentally destroying parts inside trunking that I am unable to reach. This is assuming that the 4 other untouched cables wouldn’t spontaneously crumble as well. Based on how they felt in my hand, I think it is an inevitable outcome.
I’m pretty certain that datacentre technicians reading this by chance would mock my idiotic setup and I would be inclined to join in. This is not a good day.
On the dim side of things, at least it seems like fibre optic cables are pretty hardy. My MS-A2 SFP+ connection is still working and speedtest-cli is reporting around 4000/3000 Mbps up/down speeds to my ISP (10G fibre internet plan). UDM is seeing 6000/7000, so the fibre cable is definitely compromised. :(
...
Read the original on alienchow.dev »
I have flirted with self-hosting at home for years. I always bounced off it - too much time spent configuring instead of using. It just wasn’t fun.
That changed recently. The reason is simple: CLI agents like Claude Code make self-hosting on a cheapo home server dramatically easier and actually fun.
This is the first time I would recommend it to normie/software-literate people who never really wanted to sign up to become a sysadmin and stress about uptime of core personal services.
The last one is the real unlock.
Instead of Googling “docker compose vaultwarden caddy reverse proxy” and stitching together five blog posts from 2021, I just let Claude figure it out (it’s up to you how much you care to really understand the technical details!).
Fits in one hand. Check that central cooling unit!
I previously ran my Plex server on an M1 Mac mini, which was great, but as I wanted to add more services I found myself running a lot of resource-hungry VMs (via UTM) and it was getting complicated anytime the Mac rebooted. So, I picked up a Beelink Mini N150. It is small, quiet, and just barely sips power. I paid around $379 for the device and another few hundred USD for 8TB in NVMe SSD. It’s pretty wild how accessible these mini PCs have become in recent years!
This is the entire workflow:
This is the part that surprised me. I’ve been using Claude Code and other agentic CLIs for my day-to-day development, but as others are realizing, they are generalized computer agents and native to the terminal.
I installed Claude Code directly on the Linux box. Then I asked it things like:
* Keep my Docker images up to date
* Restart on boot so I never have to futz with it after an outage
Claude Code running directly on the server. Just describe what you want.
I didn’t copy-paste YAML from the internet or have to do deep googling. I just asked.
I focused on things I already used, but wanted more control over - effectively starting to knock down the walled garden around my core services like passwords, photos, media.
Each one lives in its own container.
I can access everything from my phone, laptop, and tablet like it is local.
Uptime Kuma keeping an eye on everything.
Automatic alerts via email give me peace of mind.
When something goes down, I get an email. When it comes back up, another email. No pager duty, no complex alerting rules. Just a simple ping that tells me if I need to care.
Vaultwarden was kinda the “okay, this can work” moment.
It is a Bitwarden-compatible server written in Rust. Lightweight, reliable, and you can use the existing Bitwarden clients (like native apps and browser extensions). You can even set it as the default password manager on iOS, at the OS level!
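For context, standing up a container like this is usually a one-liner against the upstream image. A minimal sketch, not the author’s exact setup: the data path and host port are placeholders, and you would normally put a reverse proxy with HTTPS in front of it.

# run the upstream Vaultwarden image with persistent storage and auto-restart
docker run -d --name vaultwarden \
  --restart unless-stopped \
  -v /srv/vaultwarden:/data \
  -p 8080:80 \
  vaultwarden/server:latest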
Once that was running, I exported my passwords from iCloud/Keychain, imported them easily into Vaultwarden, and haven’t looked back since.
That alone justified the box.
Immich is a serious Google Photos replacement. I thought I’d have to compromise and flinched a bit when I installed it. But nope, it’s good. Mobile apps. Face recognition via a local (but slow) machine learning thread. Timeline and map view. Automatic uploads from your photo roll.
Immich. This is not a compromise. This is better.
This is the kind of thing that used to feel fragile and half-baked when self-hosted. It does not anymore.
I took a bet on ReadDeck. The UI is genuinely good. Clean typography, nice reading experience, good mobile support. It always remembers where I stopped reading and takes me right there. I even set up a shortcut that allows me to save an article for later right from mobile Firefox. Awesome.
This is exactly the kind of thing self-hosting is perfect for. A small, personal tool that you actually use every day.
Lazydocker is a terminal UI for Docker. It shows you all your containers, logs, stats, and lets you restart or shell into anything with a few keystrokes.
I have been a huge fan of Lazygit for some time. I think it’s one of the best UIs I’ve ever used. So I was excited to learn that Lazydocker is basically that, but for monitoring Docker containers. No memorizing docker ps flags or grepping through logs. Just SSH in, type lazydocker, and everything is right there.
You feel like a superhero after you ssh in and see this
For a fuller picture, Glances shows everything at once: CPU, memory, disk, network, and all running containers.
Glances showing the whole picture. 13 containers, 6% CPU, 32% memory. This little box barely breaks a sweat.
That is 13 services running on a $379 mini PC, using about 4 GB of RAM and almost no CPU. The N150 is not a powerhouse, but it does not need to be.
This does not feel like “running a server.”
The feeling of ownership is powerful, but a bit hard to describe. I think you just have to try it, and I hope you get a strong feeling of independence like I have.
When something breaks, I SSH in, ask the agent what is wrong, and fix it. When I want to add something new, I describe it in plain English.
I am spending time using software, learning, and having fun - instead of maintaining it and stressing out about it.
This is for people who:
* Do not want to become infra experts
If that is you, I really think this is the year to try self-hosting.
For the first time, I would say this is not just viable. It is fun.
Follow me on Twitter for more.
...
Read the original on fulghum.io »
A fast, lightweight text editor for Markdown, JSON, YAML, and TOML files. Built with Rust and egui for a native, responsive experience.
Platform Note: Ferrite has been primarily developed and tested on Windows. While it should work on Linux and macOS, these platforms have not been extensively tested. If you encounter issues, please report them.
🤖 AI Disclosure: This project is 100% AI-generated code. All Rust code, documentation, and configuration was written by Claude (Anthropic) via Cursor with MCP tools. My role is product direction, testing, and learning to orchestrate AI-assisted development effectively. The code is reviewed and tested, not blindly accepted — but I want to be transparent about the development process. This project is partly a learning exercise in exploring how far AI-assisted development can go.
* Tree Viewer - Hierarchical view for JSON/YAML/TOML with inline editing, expand/collapse, and path copying
* Syntax Highlighting - Full-file syntax highlighting for 40+ languages (Rust, Python, JavaScript, Go, etc.)
* Code Folding - Fold detection with gutter indicators (▶/▼) for headings, code blocks, and lists (text hiding deferred to v0.3.0)
* Minimap - VS Code-style navigation panel with click-to-jump and search highlights
Native rendering of 11 diagram types directly in the preview:
✨ v0.2.2 Released: Stability & CLI improvements! CJK font support, undo/redo fixes, command-line file opening (ferrite file.md), configurable log level, and default view mode setting. See CHANGELOG.md for full details.
* Export Options - Export to HTML with themed styling, or copy as HTML
* Formatting Toolbar - Quick access to bold, italic, headings, lists, links, and more
Download the latest release for your platform from GitHub Releases.
# Download the .deb file, then install with:
sudo apt install ./ferrite-editor_amd64.deb
# Or using dpkg:
sudo dpkg -i ferrite-editor_amd64.deb
Ferrite is available on the AUR:
You can install it using your AUR helper of choice.
# Release package
yay -Sy ferrite
# Binary package
yay -Sy ferrite-bin
tar -xzf ferrite-linux-x64.tar.gz
./ferrite
# Ubuntu/Debian
sudo apt install build-essential pkg-config libgtk-3-dev libxcb-shape0-dev libxcb-xfixes0-dev
# Fedora
sudo dnf install gcc pkg-config gtk3-devel libxcb-devel
# Arch
sudo pacman -S base-devel pkg-config gtk3 libxcb
xcode-select --install
# Clone the repository
git clone https://github.com/OlaProeis/Ferrite.git
cd Ferrite
# Build release version (optimized)
cargo build --release
# The binary will be at:
# Windows: target/release/ferrite.exe
# Linux/macOS: target/release/ferrite
# Run from source
cargo run --release
# Or run the binary directly
./target/release/ferrite
# Open a specific file
./target/release/ferrite path/to/file.md
# Open multiple files as tabs
./target/release/ferrite file1.md file2.md
# Open a folder as workspace
./target/release/ferrite path/to/folder/
# Show version
./target/release/ferrite --version
# Show help
./target/release/ferrite --help
Toggle between modes using the toolbar buttons or keyboard shortcuts.
Workspace settings are stored in .ferrite/ within the workspace folder.
Access settings via Ctrl+, or the gear icon. Configure:
See ROADMAP.md for planned features and known issues.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
# Fork and clone
git clone https://github.com/YOUR_USERNAME/Ferrite.git
cd Ferrite
# Create a feature branch
git checkout -b feature/your-feature
# Make changes, then verify
cargo fmt
cargo clippy
cargo test
cargo build
# Commit and push
git commit -m "feat: your feature description"
git push origin feature/your-feature
This project is licensed under the MIT License - see the LICENSE file for details.
...
Read the original on github.com »