10 interesting stories served every morning and every evening.
A lot has already been said about the absurdly large corner radius of windows on macOS Tahoe. People are calling the way it looks comical, like a child’s toy, or downright insane.
Setting all the aesthetic issues aside — which are to some extent a matter of taste — it also comes at a cost in terms of usability.
Since upgrading to macOS Tahoe, I’ve noticed that quite often my attempts to resize a window are failing.
This never happened to me before in almost 40 years of using computers. So why all of a sudden?
It turns out that my initial click in the window corner instinctively happens in an area where the window doesn’t respond to it. The window expects this click to happen in an area of 19 × 19 pixels, located near the window corner.
If the window had no rounded corners at all, 62% of that area would lie inside the window:
But due to the huge corner radius in Tahoe, most of it — about 75% — now lies outside the window:
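The percentages above are easy to sanity-check numerically. Below is a quick sketch (not the article's method): the 19 × 19 target size and the 62% figure come from the article, while the target placement (15 px inside each window edge, 4 px outside, which reproduces the 62%) and the 26 px corner radius are my assumptions.

```python
# Estimate what fraction of the 19 x 19 px resize target lies inside the
# window for a given corner radius, by sampling a fine grid of points.
# Assumptions: the window corner sits at (0, 0) with the window occupying
# x <= 0, y <= 0, and the target square spans [-15, 4] on both axes.

def inside_fraction(radius, size=19, inset=15, step=0.05):
    """Fraction of the target square that overlaps the (rounded) window."""
    cx = cy = -radius                # center of the rounded-corner arc
    n = int(round(size / step))
    hits = 0
    for i in range(n):
        x = -inset + (i + 0.5) * step
        for j in range(n):
            y = -inset + (j + 0.5) * step
            if x > 0 or y > 0:
                continue             # outside the window's bounding box
            if x > cx and y > cy:
                # corner region: the point must fall within the arc
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                    hits += 1
            else:
                hits += 1
    return hits / (n * n)

print(f"square corners:        {inside_fraction(0):.0%}")
print(f"radius 26 px (assumed): {inside_fraction(26):.0%}")
```

With a square corner this yields about 62%, matching the article; with an assumed radius in the mid-20s the inside fraction drops to roughly a quarter, in line with the "about 75% outside" claim.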
Living on this planet for quite a few decades, I have learned that it rarely works to grab things if you don’t actually touch them:
So I instinctively try to grab the window corner inside the window, typically somewhere in that green area, near the blue dot:
And I assume that most people would also intuitively expect to be able to grab the corner there. But no, that’s already outside the accepted target area:
So, for example, grabbing it here does not work:
But guess what — grabbing it here does:
So in the end, the most reliable way to resize a window in Tahoe is to grab it outside the corner — a gesture that feels unnatural and unintuitive, and is therefore inevitably error-prone.
...
Read the original on noheger.at »
I love writing software, line by line. You could say my career has been a continuous effort to create well-written, minimal software, where the human touch was the fundamental feature. I also hope for a society where the last are not forgotten. Moreover, I don’t want AI to succeed economically; I don’t care if the current economic system is subverted (I could be very happy, honestly, if it goes in the direction of a massive redistribution of wealth). But I would not respect myself and my intelligence if my ideas about software and society were to impair my vision: facts are facts, and AI is going to change programming forever.
In 2020 I left my job in order to write a novel about AI, universal basic income, and a society adapting to the automation of work while facing many challenges. At the very end of 2024 I opened a YouTube channel focused on AI, its use in coding tasks, and its potential social and economic effects. But while I recognized very early what was going to happen, I thought we had more time before programming would be completely reshaped, at least a few years. I no longer believe this is the case. Recently, state-of-the-art LLMs have become able to complete large subtasks or medium-sized projects alone, almost unassisted, given a good set of hints about what the end result should be. The degree of success you’ll get depends on the kind of programming you do (the more isolated, and the more textually representable, the better: systems programming is particularly apt), and on your ability to build a mental representation of the problem to communicate to the LLM. But, in general, it is now clear that for most projects, writing the code yourself is no longer sensible, except for fun.
In the past week, just by prompting and occasionally inspecting the code to provide guidance, I completed the following four tasks in hours instead of weeks:
1. I modified my linenoise library to support UTF-8, and created a framework for testing line editing that uses an emulated terminal able to report what is displayed in each character cell. This is something I always wanted to do, but it was hard to justify the work needed just to test a side project of mine. But if you can just describe your idea and it materializes into code, things are very different.
2. I fixed transient failures in the Redis test suite. This is very annoying work: timing-related issues, TCP deadlock conditions, and so forth. Claude Code iterated for as long as needed to reproduce them, inspected the state of the processes to understand what was happening, and fixed the bugs.
3. Yesterday I wanted a pure C library able to run inference for BERT-like embedding models. Claude Code created it in 5 minutes: the same output as PyTorch at nearly the same speed (15% slower), in 700 lines of code, plus a Python tool to convert the GTE-small model.
4. In the past weeks I made changes to the Redis Streams internals, for which I had written a design document. I gave it to Claude Code, and it reproduced my work in 20 minutes or less (mostly because I’m slow at checking and authorizing the commands it needed to run).
It is simply impossible not to see the reality of what is happening. Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too). It does not matter if AI companies will not be able to get their money back and the stock market will crash. All that is irrelevant, in the long run. It does not matter if this or that CEO of some unicorn is telling you something off-putting or absurd. Programming changed forever, anyway.
How do I feel, about all the code I wrote that was ingested by LLMs? I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge. LLMs are going to help us to write better software, faster, and will allow small teams to have a chance to compete with bigger companies. The same thing open source software did in the 90s.
However, this technology is far too important to be in the hands of a few companies. For now, you can do the pre-training better or not, you can do reinforcement learning in a much more effective way than others, but the open models, especially the ones produced in China, continue to compete (even if they are behind) with frontier models of closed labs. There is a sufficient democratization of AI, so far, even if imperfect. But: it is absolutely not obvious that it will be like that forever. I’m scared about the centralization. At the same time, I believe neural networks, at scale, are simply able to do incredible things, and that there is not enough “magic” inside current frontier AI for the other labs and teams not to catch up (otherwise it would be very hard to explain, for instance, why OpenAI, Anthropic and Google are so near in their results, for years now).
As a programmer, I want to write more open source than ever, now. I want to improve certain repositories of mine that were abandoned for lack of time. I want to apply AI to my Redis workflow, improve the Vector Sets implementation, and then other data structures, like I’m doing with Streams now.
But I’m worried about the folks who will get fired. It is not clear what dynamic will be at play: will companies try to have more people, and to build more? Or will they try to cut salary costs, with fewer programmers who are better at prompting? And there are other sectors where humans will become completely replaceable, I fear.
What is the social solution, then? Innovation can’t be taken back after all. I believe we should vote for governments that recognize what is happening, and are willing to support those who will remain jobless. And, the more people get fired, the more political pressure there will be to vote for those who will guarantee a certain degree of protection. But I also look forward to the good AI could bring: new progress in science, that could help lower the suffering of the human condition, which is not always happy.
Anyway, back to programming. I have a single suggestion for you, my friend. Whatever you believe about what the Right Thing should be, you can’t control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools, with care, with weeks of work, not in a five-minute test where you can just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.
Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you, when you coded till night to see your project working? It was building. And now you can build more and better, if you find your way to use AI effectively. The fun is still there, untouched.
...
Read the original on antirez.com »
There. That’s out of the way. I recently installed Linux on my main desktop computer and work laptop, overwriting the Windows partition completely. Essentially, I deleted the primary operating system from the two computers I use the most, day in and day out, instead trusting all of my personal and work computing needs to the Open Source community. This has been a growing trend, and I hopped on the bandwagon, but for good reasons. Some of those reasons might pertain to you and convince you to finally make the jump as well. Here’s my experience.
It’s no secret that Windows 11 harvests data like a pumpkin farmer in October, and there is no easy way (and sometimes no way at all) to stop it. The operating system itself acts exactly like what was called “spyware” a decade or so ago, pulling every piece of data it can about its current user. This data includes (but is far from limited to) hardware information, specific apps and software used, usage trends, and more. With the advent of AI, Microsoft made headlines with Copilot, an AI assistant designed to help users, partly by capturing their data with tools like Recall. It turns out that Copilot has largely been a flop and helps Microsoft (and data thieves) more than its users.
Why are so many articles and YouTube videos lately regaling readers and watchers with the harrowing tales of techies switching from Windows to Linux? Anyone who has read one of those articles or watched one of those videos will know it boils down to two main issues: telemetry and poor software stability.
After dealing with these issues and trying to solve them with workarounds, I dual-booted a Linux partition for a few weeks. After a Windows update (that I didn’t choose to do) wiped that partition and, consequently, the Linux installation, I decided to go whole-hog: I deleted Windows 11 and used the entire drive for Linux.
The other main reason folks uninstall Windows is due to the overall poor software experience. Windows 11 has multiple settings modules to handle the same task (such as setting up networking or adding devices), and none of them seem to talk to each other. Additionally, each new update (which will eventually be forced upon you) seems to bring more bugs than fixes. Personally, I encountered 2-3 full system crashes a week when I ran Windows 11, and my hardware is fairly decent: AMD Ryzen 7 6800H, 32 GB of RAM, and a 1 TB PCIe NVMe drive. Still, a few times a week, my computer would freeze for a few seconds, the displays would go dark, and the PC would either restart or hang indefinitely.
The first question often asked of Windows refugees migrating to Linux is, “Why Linux?” It’s a good question, and one that needs to be asked before dumping Windows for anything else. Personally, I tried macOS first. The experience was smooth and easy but ultimately felt restrictive (installing from third-party developers, anyone?). Additionally, the only Apple computer I have is a 2014 MacBook Air. As such, the latest version of macOS I could actually run is 11 (Big Sur), which was released in 2020. Overall system operation was quite sluggish on the older hardware, and I knew that time would inevitably take its toll on the software experience — apps would soon be out of date and I wouldn’t be able to update them. I also tried the OpenCore Legacy Patcher to push the laptop to macOS 13. While performance improved, key features like iMessage and Continuity Camera were either buggy or flat out refused to work. It felt like my laptop was running in mud with its hands tied behind its back. Plus, I needed something for my desktop. Not wanting to drop a mortgage payment or two on new hardware, I opted for Linux.
Linux promised me the potential of what I wanted - high hardware compatibility with full software freedom. The operating system can run on pretty much anything, and it grants users a huge amount of control over their system. I tried out a few distributions, or distros, of Linux. A distro is like a “flavor” of Linux, and each one differs in things like app/package management and the bundled user interface. For most users these differences are largely superficial; the major distros all offer the same core packages.
...
Read the original on www.notebookcheck.net »
On Friday, the Department of Justice served the Federal Reserve with grand jury subpoenas, threatening a criminal indictment related to my testimony before the Senate Banking Committee last June. That testimony concerned in part a multi-year project to renovate historic Federal Reserve office buildings.
I have deep respect for the rule of law and for accountability in our democracy. No one—certainly not the chair of the Federal Reserve—is above the law. But this unprecedented action should be seen in the broader context of the administration’s threats and ongoing pressure.
This new threat is not about my testimony last June or about the renovation of the Federal Reserve buildings. It is not about Congress’s oversight role; the Fed through testimony and other public disclosures made every effort to keep Congress informed about the renovation project. Those are pretexts. The threat of criminal charges is a consequence of the Federal Reserve setting interest rates based on our best assessment of what will serve the public, rather than following the preferences of the President.
This is about whether the Fed will be able to continue to set interest rates based on evidence and economic conditions—or whether instead monetary policy will be directed by political pressure or intimidation.
I have served at the Federal Reserve under four administrations, Republicans and Democrats alike. In every case, I have carried out my duties without political fear or favor, focused solely on our mandate of price stability and maximum employment. Public service sometimes requires standing firm in the face of threats. I will continue to do the job the Senate confirmed me to do, with integrity and a commitment to serving the American people.
...
Read the original on www.federalreserve.gov »
I have flirted with self-hosting at home for years. I always bounced off it - too much time spent configuring instead of using. It just wasn’t fun.
That changed recently. The reason is simple: CLI agents like Claude Code make self-hosting on a cheapo home server dramatically easier and actually fun.
This is the first time I would recommend it to normie/software-literate people who never really wanted to sign up to become a sysadmin and stress about uptime of core personal services.
The last one is the real unlock.
Instead of Googling “docker compose vaultwarden caddy reverse proxy” and stitching together five blog posts from 2021, I just let Claude figure it out (it’s up to you how much you care to really understand the technical details!).
Fits in one hand. Check that central cooling unit!
I previously ran my Plex server on an M1 Mac mini, which was great, but as I wanted to add more services I found myself running a lot of resource-hungry VMs (via UTM) and it was getting complicated anytime the Mac rebooted. So, I picked up a Beelink Mini N150. It is small, quiet, and just barely sips power. I paid around $379 for the device and another few hundred USD for 8TB in NVMe SSD. It’s pretty wild how accessible these mini PCs have become in recent years!
This is the entire workflow:
This is the part that surprised me. I’ve been using Claude Code and other agentic CLIs for my day-to-day development, but as others are realizing, they are generalized computer agents, native to the terminal.
I installed Claude Code directly on the Linux box. Then I asked it things like:
* Keep my Docker images up to date
* Restart on boot so I never have to futz with it after an outage
Claude Code running directly on the server. Just describe what you want.
I didn’t copy-paste YAML from the internet or have to do deep googling. I just asked.
I focused on things I already used, but wanted more control over - effectively starting to knock down the walled garden around my core services like passwords, photos, media.
Each one lives in its own container.
I can access everything from my phone, laptop, and tablet like it is local.
Uptime Kuma keeping an eye on everything.
Automatic alerts via email give me peace of mind.
When something goes down, I get an email. When it comes back up, another email. No pager duty, no complex alerting rules. Just a simple ping that tells me if I need to care.
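The up/down-change idea is simple enough to sketch. This is not the author's setup (Uptime Kuma handles it for him); it is a minimal illustration where the service names and URLs are placeholders, and a real version would wrap `notify` around smtplib and run from cron or a systemd timer.

```python
# Minimal sketch of "email me only when a service's state flips":
# probe each service, remember the last state, and notify on changes.
import urllib.request

def is_up(url, timeout=5):
    """True if the service answers at all (any HTTP status counts)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True          # it responded, even if with an error code
    except Exception:
        return False

def poll(services, state, notify, probe=is_up):
    """Probe every service once; call notify() only when its state changes."""
    for name, url in services.items():
        up = probe(url)
        if state.get(name) != up:
            notify(f"{name} is {'up' if up else 'DOWN'}")
        state[name] = up

# Dry run with a stubbed probe, so the sketch is self-contained:
state, events = {}, []
poll({"vaultwarden": "http://homeserver:8080"}, state, events.append,
     probe=lambda url: True)
poll({"vaultwarden": "http://homeserver:8080"}, state, events.append,
     probe=lambda url: False)
print(events)    # one "up" event, then one "DOWN" event
```

Repeated polls with an unchanged state produce no events at all, which is exactly the "no complex alerting rules" behavior described above.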
Vaultwarden was kinda the “okay, this can work” moment.
It is a Bitwarden-compatible server written in Rust. Lightweight, reliable, and you can use the existing Bitwarden clients (like native apps and browser extensions). You can even set it as the default password manager on iOS, at the OS level!
Once that was running, I exported my passwords from iCloud/Keychain, imported them easily into Vaultwarden, and haven’t looked back since.
That alone justified the box.
Immich is a serious Google Photos replacement. I thought I’d have to compromise and flinched a bit when I installed it. But nope, it’s good. Mobile apps. Face recognition via a local (but slow) machine learning thread. Timeline and map view. Automatic uploads from your photo roll.
Immich. This is not a compromise. This is better.
This is the kind of thing that used to feel fragile and half-baked when self-hosted. It does not anymore.
I took a bet on ReadDeck. The UI is genuinely good. Clean typography, nice reading experience, good mobile support. It always remembers where I stopped reading and takes me right there. I even set up a shortcut that allows me to save an article for later right from mobile Firefox. Awesome.
This is exactly the kind of thing self-hosting is perfect for. A small, personal tool that you actually use every day.
Lazydocker is a terminal UI for Docker. It shows you all your containers, logs, stats, and lets you restart or shell into anything with a few keystrokes.
I have been a huge fan of Lazygit for some time. I think it’s one of the best UIs I’ve ever used. So I was excited to learn that Lazydocker is basically that, but for monitoring Docker containers. No memorizing docker ps flags or grepping through logs. Just SSH in, type lazydocker, and everything is right there.
You feel like a superhero after you ssh in and see this
For a fuller picture, Glances shows everything at once: CPU, memory, disk, network, and all running containers.
Glances showing the whole picture. 13 containers, 6% CPU, 32% memory. This little box barely breaks a sweat.
That is 13 services running on a $379 mini PC, using about 4 GB of RAM and almost no CPU. The N150 is not a powerhouse, but it does not need to be.
This does not feel like “running a server.”
The feeling of ownership is powerful, but a bit hard to describe. I think you just have to try it, and I hope you get a strong feeling of independence like I have.
When something breaks, I SSH in, ask the agent what is wrong, and fix it. When I want to add something new, I describe it in plain English.
I am spending time using software, learning, and having fun - instead of maintaining it and stressing out about it.
This is for people who:
* Do not want to become infra experts
If that is you, I really think this is the year to try self-hosting.
For the first time, I would say this is not just viable. It is fun.
Follow me on Twitter for more.
...
Read the original on fulghum.io »
* A command-line tool to download all your iCloud photos.
* Works on Linux, Windows, and macOS; laptop, desktop, and NAS
* Available as an executable for direct downloading and through package managers/ecosystems (Docker, PyPI, AUR, npm)
* Developed and maintained by volunteers (we are always looking for help).
See the Documentation for more details. Also, check the Issues page.
We aim to release new versions once a week (Friday), if there is something worth delivering.
To make iCloud Photo Downloader work, ensure the iCloud account is configured with the following settings, otherwise Apple Servers will return an ACCESS_DENIED error:
* Enable Access iCloud Data on the Web: On your iPhone / iPad, enable Settings > Apple ID > iCloud > Access iCloud Data on the Web
There are three ways to run icloudpd:
* Download the executable for your platform from the GitHub Releases page and run it
* Use a package manager to install, update, and, in some cases, run it (Docker, PyPI, AUR, npm)
* Build and run from source
See Documentation for more details
* Three modes of operation:
  * Sync - download new photos from iCloud and delete local files that were removed in iCloud (--auto-delete option)
  * Move - download new photos from iCloud and delete photos in iCloud (--keep-icloud-recent-days option)
* Support for Live Photos (image and video as separate files) and RAW images (including RAW+JPEG)
* Automatic de-duplication of photos with the same name
* One time download and an option to monitor for iCloud changes continuously (--watch-with-interval option)
* … and many more (use the --help option to get the full list)
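The name-based de-duplication mentioned above can be illustrated with a small sketch. This shows one common collision strategy (deriving a new name from the file size); it is for illustration only and is not necessarily icloudpd's exact scheme, and `local_name` is a hypothetical helper, not part of the tool.

```python
# Illustrative sketch: choose a local filename for a download without
# clobbering a *different* file that happens to share the same name.
from pathlib import Path

def local_name(directory, filename, size):
    """Return a path for the download; rename on a size mismatch."""
    path = Path(directory) / filename
    if not path.exists() or path.stat().st_size == size:
        return path          # new file, or the same file already downloaded
    # Different file under the same name: append the size to disambiguate.
    return path.with_name(f"{path.stem}-{size}{path.suffix}")
```

For example, if `IMG_0001.JPG` already exists on disk with a different size, a second photo with the same name would land as `IMG_0001-2048.JPG` (for a 2048-byte file).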
Some changes are added to the experimental mode before they graduate into the main package. Details
To keep your iCloud photo collection synchronized to your local system:
To independently create and authorize a session (and complete 2SA/2FA validation if needed) on your local system:
Want to contribute to iCloud Photos Downloader? Awesome! Check out the contributing guidelines to get involved.
...
Read the original on github.com »
Happy New Year 2026! Once again, a lot has happened in Gentoo over the past months. New developers, more binary packages, GnuPG alternatives support, Gentoo for WSL, improved Rust bootstrap, better NGINX packaging, … As always, here we’re going to revisit all the exciting news from our favourite Linux distribution.
Gentoo currently consists of 31663 ebuilds for 19174 different packages. For amd64 (x86-64), there are 89 GBytes of binary packages available on the mirrors. Each week Gentoo builds 154 distinct installation stages for different processor architectures and system configurations, with an overwhelming part of these fully up-to-date.

The number of commits to the main ::gentoo repository has remained at an overall high level in 2025, with a slight decrease from 123942 to 112927. The number of commits by external contributors was 9396, now across 377 unique external authors.
GURU, our user-curated repository with a trusted user model, serving as an entry point for potential developers, has shown a decrease in activity. We have had 5813 commits in 2025, compared to 7517 in 2024. The number of contributors to GURU has increased, from 241 in 2024 to 264 in 2025. Please join us there and help package the latest and greatest software. That’s the ideal preparation for becoming a Gentoo developer!
Activity has slowed down somewhat on the Gentoo bugtracker bugs.gentoo.org, where we’ve had 20763 bug reports created in 2025, compared to 26123 in 2024. The number of resolved bugs shows the same trend, with 22395 in 2025 compared to 25946 in 2024. The current values are closer to those of 2023 - but clearly this year we fixed more than we broke!
In 2025 we have gained four new Gentoo developers. They are in chronological order:
Let’s now look at the major improvements and news of 2025 in Gentoo.
RISC-V bootable QCOW2: As for amd64 and arm64, we now have ready-made bootable RISC-V disk images in QCOW2 format available for download on our mirrors, in a console and a cloud-init variant. The disk images use the rv64gc instruction set and the lp64d ABI, and can be booted via the standard RISC-V UEFI support.

Gentoo for WSL: We now publish weekly Gentoo images for the Windows Subsystem for Linux (WSL), based on the amd64 stages; see our mirrors. While these images are not present in the Microsoft Store yet, that’s something we intend to fix soon.
hppa and sparc destabilized: Since we do not have hardware readily available anymore and these architectures mostly fill a retrocomputing niche, stable keywords have been dropped for both hppa (PA-RISC) and sparc. The architectures will remain supported with testing keywords.
musl with locales: Localization support via the package sys-apps/musl-locales has been added by default to the Gentoo stages based on the lightweight musl C library.

GPG alternatives: Given the unfortunate fracturing of the GnuPG / OpenPGP / LibrePGP ecosystem due to competing standards, we now provide an alternatives mechanism to choose the system gpg provider and ease compatibility testing. At the moment, three providers are available: the original, unmodified GnuPG; the FreePG fork/patchset as also used in many other Linux distributions (Fedora, Debian, Arch, …); and the re-implementation Sequoia-PGP with Chameleon. In practice, implementation details vary between the providers, and while GnuPG and FreePG are fully supported, you may still encounter difficulties when selecting Sequoia-PGP/Chameleon.
zlib-ng support: We have introduced initial support for using zlib-ng and minizip-ng in compatibility mode in place of the reference zlib libraries.
System-wide jobserver: We have created steve, an implementation of a token-accounting system-wide jobserver, and introduced experimental global jobserver support in Portage. Thanks to that, it is now possible to globally control the concurrently running build job count, correctly accounting for parallel emerge jobs, make and ninja jobs, and other clients supporting the jobserver protocol.
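The jobserver idea can be shown in miniature: a pipe preloaded with one token per allowed job, where every client reads a token before starting a build job and writes it back when done, so total concurrency stays bounded system-wide. This sketches the classic GNU-make-style protocol for illustration; Gentoo's steve is its own token-accounting implementation, and the class below is not its API.

```python
# Minimal sketch of a jobserver token pool built on a POSIX pipe.
import os

class Jobserver:
    def __init__(self, slots):
        self.r, self.w = os.pipe()
        os.write(self.w, b"+" * slots)   # preload one token per job slot

    def acquire(self):
        return os.read(self.r, 1)        # blocks while all slots are in use

    def release(self, token=b"+"):
        os.write(self.w, token)          # return the token to the pool
```

With, say, 4 slots, at most four build jobs run at once no matter how many emerge, make, or ninja processes share the pipe, which is exactly the accounting problem the global jobserver solves.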
NGINX rework: The packaging of the NGINX web server and reverse proxy in Gentoo has undergone a major improvement, including also the splitting off of several third-party modules into separate packages.
C++ based Rust bootstrap: We have added a bootstrap path for Rust from C++ using Mutabah’s Rust compiler mrustc, which alleviates the need for pre-built binaries and makes it significantly easier to support more configurations.
Ada and D bootstrap: Similarly, Ada and D support in gcc now have clean bootstrap paths, which makes enabling these in the compiler as easy as switching the useflags on gcc and running emerge.
FlexiBLAS: Gentoo has adopted the new FlexiBLAS wrapper library as the primary way of switching implementations of the BLAS numerical algorithm library at runtime. This automatically also provides ABI stability for linking programs and bundles the specific treatment of different BLAS variants in one place.
Python: In the meantime the default Python version in Gentoo has reached Python 3.13. Additionally, Python 3.14 is also available as stable, fully up to date with upstream.
KDE upgrades: As of end of 2025, in Gentoo stable we have KDE Gear 25.08.3, KDE Frameworks 6.20.0, and KDE Plasma 6.5.4. As always, Gentoo testing follows the newest upstream releases (and using the KDE overlay you can even install from git sources).
Additional build server: A second dedicated build server, hosted at Hetzner Germany, has been added to speed up the generation of installation stages, iso and qcow2 images, and binary packages.
Documentation: Documentation work has made constant progress on wiki.gentoo.org. The Gentoo Handbook had some particularly useful updates, and the documentation received lots of improvements and additions from the many active volunteers. There are currently 9,647 pages on the wiki, and there have been 766,731 edits since the project started. Please help Gentoo by contributing to documentation!
* Income: The Gentoo Foundation took in $12,066 in fiscal year 2025 (ending 2025/06/30); the dominant part (over 80%) consists of individual cash donations from the community. On the SPI side, we received $8,471 in the same period; also here, this is all from small individual cash donations.
* Expenses: Our expenses in 2025 were: program services (e.g. hosting costs) $8,332, management & general (accounting) $1,724, fundraising $905, and non-operating (depreciation expenses) $10,075.
* Balance: We have $104,831 in the bank as of July 1, 2025 (which is when our fiscal year 2026 starts for accounting purposes). The Gentoo Foundation FY2025 financial statement is available on the Gentoo Wiki.
* Transition to SPI: The Foundation encourages donors to ensure their ongoing contributions are going to SPI; more than 40 donors had not responded to requests to move their recurring donations by the end of the year. Expenses will be moved to the SPI structure as ongoing income permits.
As every year, we would like to thank all Gentoo developers and all who have submitted contributions
for their relentless everyday Gentoo work. If you are interested and would like to help, please join us to make Gentoo even better! As a volunteer project, Gentoo could not exist without its community.
...
Read the original on www.gentoo.org »
Have you ever noticed that the food graphics in Super Smash Bros. and Kirby Air Riders are flat, “billboarded” stock images of food?
This artistic decision from director Masahiro Sakurai has persisted through 8 games over nearly 25 years. I’ve seen a few folks online remarking about the “JPEG”- or “PNG”-like quality of the images in the most recent release.
While researching every game with this art style and all 150+ unique food images I ended up fixing wikis, reviewing a seasonal KitKat flavor, and preserving an uncatalogued image of tempura soba.
Masahiro Sakurai is the director for every game on this list, so clearly this is his artistic decision.
Super Smash Bros. Melee was the first game to contain this food art style, published in 2001. This style was then repeated in Kirby Air Ride (2003), Super Smash Bros. Brawl (2008),
Super Smash Bros. for 3DS and Wii U (2014), Super Smash Bros.
Ultimate (2018), and most recently in Kirby Air Riders (2025).
Credit to Nintendo, HAL Laboratory, SORA Ltd., and Bandai Namco Studios as developers and publishers of these games. Artwork was sourced from the Spriters Resource.
Where it all began! Super Smash Bros. Melee for the GameCube started off with 28 distinct food items, often found in “Party Balls”. Each type of food had a different “nutritional value” and “yumminess quotient” according to the in-game trophy dedicated to the food items.
Melee included many foods specific to Japanese cuisine, such as unagi (eel), omurice, soba, dango, and gyūdon. I do distinctly remember growing up as a “culinarily sheltered” kid in the midwest United States and not understanding what many of these food items were.
The original stock images of Super Smash Bros. Melee and the next game, Kirby Air Ride, have been partially discovered and documented by a group called “Render96”. The stock images are from a company called “Sozaijiten”. Many of the food images come from the Material Dictionary CDs Vegetables & Fruits, Food & Dishes, and Cooking Japanese, Western, & Chinese. The apple stock image in particular was re-used all the way through Super Smash Bros. Ultimate (2018). The burger, milk, dango, and donut are still missing their primary source.
Kirby Air Ride for the GameCube had significantly fewer distinct food items (12) compared to Melee and maintained many of the same food stock images from Melee, including the apple, burger, chicken, curry, omurice, onigiri, and ramen. Nigiri was included, but the image was changed from a sushi board to a plate.
The stock images had their saturation increased and the black borders around the images are thicker, sometimes 2-3 pixels instead of only 1 pixel for Melee.
I paid $50 plus shipping on eBay for this PNG. This is the closest I’ll get to NFTs.
While researching the foods in Kirby Air Ride I discovered a wiki description of a “tempura soba” item that I’d never heard of and that wasn’t included in the Spriters Resource spritesheets for Kirby Air Ride. It turns out that this item was changed to a “hotdog” in the NTSC-U and PAL releases of Kirby Air Ride.
I was unable to find a non-blurry image of the tempura soba sprite online, so of course I had to preserve this sprite myself. I purchased
a Japanese copy of Kirby Air Ride, dumped the ROM using the FlippyDrive Disc Backup Utility, and ran the ROM using Dolphin with “Dump Textures” mode enabled to archive the sprite directly from the game.
Kirby Air Ride cover artwork (left: JP, right: US, PAL). Images from the GameTDB.
In the process I also learned that the cover of Kirby Air Ride changed between the Japanese and international releases. The Japanese cover art features a smiling, happy Kirby, while the international cover has Kirby with a furrowed brow and a serious look.
Super Smash Bros. Brawl for the Wii has only one more food item compared to Melee (29) and introduces 11 new food items including bread, cake, candy, chocolate, cookie, melon soda, parfait, peaches, pie, pineapple, and steak.
About half of the Japanese-specific foods from both Melee and Kirby Air Ride were replaced: curry, omurice, onigiri, and ramen.
The art is less saturated and more “realistic” which is in-line with the rest of the game’s art direction. The images lost their black outline, likely to draw less attention to the “arcade-y” feel that the previous titles had with food items.
Super Smash Bros. Wii U and 3DS have the same total number of food items as Brawl (29). These games change the food art style completely, again! It’s brighter, more saturated, and looks delicious.
The soda item was changed from a melon cream soda to a dark cola with lemon. The omurice was changed to a pair of fried eggs with bacon. These games are also the only ones without the “burger” food item.
Super Smash Bros. for 3DS uses the same food artwork as Super Smash Bros. for Wii U, downscaled to 64x64 pixels from 256x256 pixels with some minor editing.
Super Smash Bros. Wii U and 3DS added the “Mont Blanc” food item, which is a French dessert that is popular in Japan. I’ve seen multiple guides and wikis mistakenly label this food item as “noodles” due to the “vermicelli” shape of the puréed chestnuts. Yummy!
While researching and writing this blog post I happened across “Mont Blanc”-flavored KitKats. These are apparently a limited-time flavor for autumn. The KitKats are creamy and have plenty of chestnut flavor, but they are very sweet (apparently Mont Blanc is quite sweet, too, so this is to be expected).
Super Smash Bros. Ultimate uses the same 29 foods from the Wii U and 3DS and adds 9 more foods for a total of 38. Many of the newly added foods are call-backs to food items in previous titles, below highlighted in pink.
The 9 new foods in Ultimate are burgers, cheese, corndogs, donuts, dumplings, daisies, pizza, pineapple, and steak.
It’s clear that the “Sozaijiten” stock images were still in use even in 2018: 17 years later! The apple, cheese, and chicken stock images for Super Smash Bros. Melee match the stock images used in Ultimate.
Kirby Air Riders, released for the Switch 2, has the most foods of any game with this art style, with 45 distinct food items.
Massive thank-you to Charles Bernardo for sending me carefully cropped images of the food in Kirby Air Riders.
Kirby Air Riders is the first game in this series to use completely new models for all food items: not even the apple or cheese are the same from any previous game. Kirby Air Riders is also the first game in this series not to have a “roast chicken” item, breaking from an established video-game food trope.
Kirby Air Riders adds a new food-centric mode called “Gourmet Race”, where riders earn points by consuming food as quickly as possible in a small arena. Gourmet Race introduces a new food concept: “Large Foods”. Large food items are worth 15 points instead of 1 point per food item. There are 14 large food items, some presenting as “upgraded” versions of regular-sized foods.
The large food items are: a bunch of 12 bananas instead of 3, a bread-basket, a double cheeseburger, a whole cake instead of a slice, donuts, a fruit basket, a board of nigiri instead of a plate, fruit parfait, pizza, popcorn, salad, rainbow shave ice instead of blue only, a tempura bowl, and a whole watermelon instead of a slice.
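Under that scoring, a rider's total is simple arithmetic; a minimal sketch (the function name and point constants as variables are my own framing, only the 1-point/15-point values come from the game):

```python
REGULAR_POINTS = 1   # a normal food item
LARGE_POINTS = 15    # a "Large Food" item

def gourmet_race_score(regular_eaten: int, large_eaten: int) -> int:
    """Total points for a rider who ate the given numbers of items."""
    return regular_eaten * REGULAR_POINTS + large_eaten * LARGE_POINTS

# A rider who eats 10 regular foods and 2 large foods scores 40 points,
# so one Large Food outweighs a whole plate of regular items.
print(gourmet_race_score(10, 2))  # 40
```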
Prior to this article there was not yet a complete list of foods in Kirby Air Riders documented on a wiki or spritesheet. I added this list to the Kirby wiki, but I’ve also included the list below:
There are 16 total food items that only appear in a single title across the 25-year span of games. Kirby Air Riders and Super Smash Bros. Melee have by far the most unique food items with 8 and 5 respectively.
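Tallies like this can be computed mechanically with a single counting pass over per-title food lists. The sketch below uses a deliberately small, incomplete subset of the real lists (so its numbers will not match the article's totals); the approach is what matters:

```python
from collections import Counter

# Illustrative, incomplete food sets per title -- not the full lists.
foods_by_title = {
    "Melee": {"apple", "burger", "unagi", "omurice", "ramen"},
    "Kirby Air Ride": {"apple", "burger", "ramen", "hotdog"},
    "Ultimate": {"apple", "burger", "pizza", "dumplings"},
}

# Count how many titles each food appears in.
appearances = Counter(
    food for foods in foods_by_title.values() for food in foods
)

# Foods that show up in exactly one title are that title's "unique" items.
unique_foods = sorted(f for f, n in appearances.items() if n == 1)
print(unique_foods)  # ['dumplings', 'hotdog', 'omurice', 'pizza', 'unagi']
```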
Finally, here is a table with every image so you can compare how each changed across different titles:
Wow, you made it to the end!
Share your thoughts with me on Mastodon, email, or Bluesky.
Check out this list of cool stuff I found on the internet.
Follow this blog on RSS or the email newsletter.
Go outside (best option)
...
Read the original on sethmlarson.dev »