10 interesting stories served every morning and every evening.
Go full --yolo. We’ve got you. LLMs are probabilistic: a 1% chance of disaster makes it a matter of when, not if. Safehouse makes this a 0% chance — enforced by the kernel.

Safehouse denies write access outside your project directory. The kernel blocks the syscall before any file is touched. Agents work perfectly in their sandboxes, but can’t impact anything outside them.

Agents inherit your full user permissions. Safehouse flips this — nothing is accessible unless explicitly granted. Download a single shell script, make it executable, and run your agent inside it. No build step, no dependencies — just Bash and macOS.

Safehouse automatically grants read/write access to the selected workdir (git root by default) and read access to your installed toolchains. Most of your home directory — SSH keys, other repos, personal files — is denied by the kernel.

See it fail — proof the sandbox works. Try reading something sensitive inside Safehouse. The kernel blocks it before the process ever sees the data.

# Try to read your SSH private key — denied by the kernel
safehouse cat ~/.ssh/id_ed25519
# cat: /Users/you/.ssh/id_ed25519: Operation not permitted
# Try to list another repo — invisible
safehouse ls ~/other-project
# ls: /Users/you/other-project: Operation not permitted
# But your current project works fine
safehouse ls .
# README.md src/ package.json …

Add these to your shell config and every agent runs inside Safehouse automatically — you don’t have to remember. To run without the sandbox, use `command claude` to bypass the function.

# ~/.zshrc or ~/.bashrc
safe() { safehouse --add-dirs-ro=~/mywork "$@"; }
# Sandboxed — the default. Just type the command name.
claude() { safe claude --dangerously-skip-permissions "$@"; }
codex() { safe codex --dangerously-bypass-approvals-and-sandbox "$@"; }
amp() { safe amp --dangerously-allow-all "$@"; }
gemini() { NO_BROWSER=true safe gemini --yolo "$@"; }
# Unsandboxed — bypass the function with `command`
# command claude — plain interactive session

Generate your own profile with an LLM

Use a ready-made prompt that tells Claude, Codex, Gemini, or another model to inspect the real Safehouse profile templates, ask about your home directory and toolchain, and generate a least-privilege `sandbox-exec` profile for your setup. The guide also tells the LLM to ask about global dotfiles, suggest a durable profile path like ~/.config/sandbox-exec.profile, offer a wrapper that grants the current working directory, and add shell shortcuts for your preferred agents.

Open the copy-paste prompt
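To give a rough sense of what such a generated profile and wrapper can look like, here is a minimal, illustrative sketch. The rule set and the `sandboxed` wrapper name are assumptions for illustration, not Safehouse’s actual templates, and a real least-privilege profile needs considerably more allow rules before most tools will run.

# Hypothetical profile (illustration only, not Safehouse's template)
cat > ~/.config/sandbox-exec.profile <<'EOF'
(version 1)
(deny default)
(allow process-fork)
(allow process-exec*)
(allow file-read* (subpath "/usr") (subpath "/System") (subpath "/opt/homebrew"))
(allow file-read* (subpath (param "WORKDIR")))
(allow file-write* (subpath (param "WORKDIR")) (subpath "/private/tmp"))
EOF

# Wrapper that grants the current working directory at launch time
sandboxed() { sandbox-exec -f ~/.config/sandbox-exec.profile -D WORKDIR="$PWD" "$@"; }
# e.g. sandboxed ls ~/.ssh   # should be denied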
...
Read the original on agent-safehouse.dev »
Ireland today (June 20) became the 15th coal-free country in Europe, having ended coal power generation at its 915 MW Moneypoint coal plant in County Clare. Initially commissioned in the mid-1980s by ESB, Moneypoint was intended to help Ireland offset the impact of the oil crises in the 1970s by providing a dependable source of energy.
But with Ireland now generating far more renewable energy, coal burning is no longer an urgent need. Data from energy think tank Ember shows Ireland generated 37% (11.4 TWh) of its electricity from wind in 2024. Solar is nowhere near wind’s level of generation (0.97 TWh in 2024), but it has been continuously breaking generation records in recent months, and local stakeholders are confident this positive trend will continue.
Following the closure, the Moneypoint plant will continue to serve a limited backup role, burning heavy fuel oil under emergency instruction from Ireland’s transmission system operator EirGrid until 2029.
This strategy is in line with previous plans made by EirGrid and ESB to exit coal-fired generation by the end of 2025, which stipulated that Moneypoint would no longer be active in the wholesale electricity market.
“Ireland has quietly rewritten its energy story, replacing toxic coal with homegrown renewable power,” said Alexandru Mustață, campaigner on coal and gas at Europe’s Beyond Fossil Fuels.
“But this isn’t ‘job done’. The government’s priority now must be building a power system for a renewable future; one with the storage, flexibility, and grid infrastructure needed to run fully on clean, domestic renewable electricity,” Mustață warned.
Jerry Mac Evilly, Campaigns Director at Friends of the Earth Ireland, appealed to the government to ensure oil backup at Moneypoint is kept to an absolute minimum and ultimately decommissioned. He also appealed for the government to prevent further development of data centers, which he said are increasing Ireland’s reliance on fossil gas.
“We also can’t ignore that the government is targeting the installation of at least 2 GW of gas power plants with no strategy to reduce Ireland’s dangerous gas dependency,” he added.
On a broader level, Ireland’s step to close coal power generation at Moneypoint sets a precedent for further European countries’ coal exits to come, says Beyond Fossil Fuels. The group tracks European countries’ progress on their commitments to switching from fossil fuels to renewable energy. So far, 23 European countries have committed to coal phase-outs. Italy is expected to complete its mainland coal phase-out this summer with the upcoming closure of its last two big coal power plants, while mainland Spain is also expecting to declare itself coal-free this summer.
...
Read the original on www.pv-magazine.com »
FontCrafter turns your handwriting into a real, installable font — entirely in your browser. No accounts, no uploads to servers, no cost.
Still have questions? Here’s our FAQ.
It’s easier than you think. Print, write, scan — done.
Your handwriting becomes an installable font (OTF, TTF, WOFF2, Base64)
Natural variation — your letters won’t look robotic or identical every time
Connected letter pairs (ff, th, st, etc.) that flow like real handwriting
Works in Word, Pages, Photoshop, websites — everywhere fonts are used
No account, no server, 100% private — everything happens in your browser
If you found this useful, I’d appreciate donations & patrons (to keep iterating)!
Download and print the template — US Letter or A4.
Print at 100% scale (no “fit to page”). Use white, unlined paper.
Fill in every box with a felt-tip pen. All 3 rows for each character.
Ballpoints are too faint; thick markers bleed. Keep strokes inside the boxes with breathing room from edges.
How to use the three rows: Row 1 is always uppercase. Row 2 can be a second version of your uppercase or lowercase. Row 3 can also be uppercase or lowercase.
Scan or photograph the sheet, then drag & drop that photo file below.
Lay the sheet flat on a table with even lighting — no shadows, no curl. A phone camera works great if the sheet is flat and well-lit.
Drop your completed (scanned) image below. JPG, PNG, or high-res photo — make sure the page is flat and evenly lit. The processing happens on your end. No servers involved. Nothing is saved or stored remotely.
Drop your filled-in scan here, or click to browse
Not happy with a character? Touch it up in any image editor, or use correction tape and re-scan.
Ensure All Four Crosshair Markers Are Visible and Continue →
Characters with green borders were detected cleanly. Click any character to deselect it — a replacement from another row will be used. Small imperfections are fine — they give your font personality.
Name your font and choose how your three rows should be used. Enable ligatures for natural-looking connected letter pairs.
What do you want to call this font?
What did you put in each row?
I wrote uppercase in all three rows
I wrote uppercase in Row 1, lowercase in Row 2, uppercase in Row 3
I wrote uppercase in Row 1, lowercase in Row 2, lowercase in Row 3
Row 2 will be used as your lowercase. Adjust how much to shrink it — set to 1.00 if you already wrote Row 2 smaller than Row 1.
Allow certain characters to dip below the baseline (e.g. g, j, p, q, y, or a slashed zero).
Separate with spaces. Add or remove characters as needed for your handwriting.
Controls how far descender tails drop below the line. Slide left for deeper, right for shallower.
Cleans up tiny ink specks that bled through from adjacent cells. Won’t affect dots on letters like i, j, or punctuation marks.
Evens out letters that were drawn at different sizes.
Adds 100+ derived glyphs from your handwriting. Uncheck if you only want your handwritten alphanumeric characters.
Auto-generates diacritics (accents, tildes, umlauts) from your base letters. Covers French, German, Spanish, Portuguese, Scandinavian, and more.
Ligatures are letter pairs that connect naturally in handwriting — like ff, fi, fl, th, and st. Auto-generate is recommended — it’s instant and produces natural-looking connections from your existing characters.
Kerning adjusts spacing between specific letter pairs — like AV, To, and WA — so characters with complementary shapes sit together naturally instead of having uniform gaps.
See how your font looks with sample text, or type anything you like below.
THE QUICK BROWN FOX JUMPS OVER A LAZY DOG.
sphinx of black quartz, judge my vow?
Both Fickle Dwarves Can Jinx My Pig Quiz!
• $11.38 + tax & a 5.69% tip = more than $20.74
• (I love Star Wars) [Yes] {Maybe} <OK>
• That’s what I said! “Really?”
• arcade.pirillo.com * chris.pirillo.com
• He scored 7/8 on the quiz — not bad~
• Order #4053: 2x @$16.99 each | Total: $33.98
• Is it _really_ 100^2 = 10,000‽
• “Yes,” she said, ‘it’s a go;’ then walked away.
Your font includes contextual alternates (calt) — consecutive characters will automatically cycle between your 3 handwriting variants. This works in apps with OpenType support but may not appear in this preview.
Click here to try it for yourself…
If you found this useful, I’d appreciate donations & patrons (to keep iterating)!
OTF for desktop apps, TTF for universal compatibility, WOFF2 for websites, Base64 for CSS embedding.
I confirm that the handwriting used to generate this font is my own or I have explicit permission from the handwriting’s owner to create and use this font. I understand that I am solely responsible for how I use the generated font files, and I release FontCrafter, Chris Pirillo, LockerGnome, and arcade.pirillo.com from any liability related to the font’s creation, distribution, or use.
...
Read the original on arcade.pirillo.com »
Literate programming is the idea that code should be intermingled with prose such that an uninformed reader could read a code base as a narrative, and come away with an understanding of how it works and what it does.
Although I have long been intrigued by this idea and have found uses for it in a couple of different cases, in practice literate programming turns into a chore of maintaining two parallel narratives: the code itself, and the prose. This has obviously limited its adoption.
Today, literate programming is most commonly encountered as Jupyter notebooks in the data science community, where explanations live alongside calculations and their results in a web browser.
Frequent readers of this blog will be aware that Emacs Org Mode supports polyglot literate programming through its org-babel package, allowing execution of arbitrary languages with results captured back into the document, but this has remained a niche pattern for nerds like me.
Even for someone as enthusiastic about this pattern as I am, it becomes cumbersome to use Org as the source of truth for larger software projects, as the source code essentially becomes a compiled output, and after every edit in the Org file, the code must be re-extracted and placed into its destination (“tangled”, in Org Mode parlance). Obviously this can be automated, but it’s easy to get into annoying situations where you or your agent has edited the real source and it gets overwritten on the next tangle.
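For readers who haven’t seen org-babel in action, a source block looks something like this minimal sketch (the file path and commands are placeholders): pressing C-c C-c inside the block executes it and captures the output below it, while M-x org-babel-tangle writes the code out to the :tangle destination.

#+begin_src bash :tangle scripts/build.sh :results output
  # Build the project and report the artifact size
  make build
  du -h dist/app
#+end_src

#+RESULTS:
: (captured output appears here)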
That said, I have had enough success with using literate programming for bookkeeping personal configuration that I have not been able to fully give up on the idea, even before the advent of LLMs.
For example: before coding agents, I had been adapting a pattern of using Org Mode for manual testing and note-taking. Instead of working on the command line, I would write my commands into my editor and execute them there, editing each one in place until the step was correct, so that when I was done I would have a document explaining exactly the steps that were taken, without extra note-taking afterwards. Combining the act of creating the note and running the test gives you the notes for free when the test is completed.
This is even more exciting now that we have coding agents. Claude and Kimi and friends all have a great grasp of Org Mode syntax; it’s a forgiving markup language, and they are quite good at those. All the documentation is available online and was probably in the training data, and while a big downside of Org Mode is just how much syntax there is, that’s no problem at all for a language model.
Now when I want to test a feature, I ask the clanker to write me a runbook in Org. Then I can review it — the prose explains the model’s reflection of the intent for each step, and the code blocks are interactively executable once I am done reviewing, either one at a time or the whole file like a script. The results will be stored in the document, under the code, like a Jupyter notebook.
I can edit the prose and ask the model to update the code, or edit the code and have the model reflect the meaning upon the text. Or ask the agent to change both simultaneously. The problem of maintaining the parallel systems disappears.
The agent is told to handle tangling, and the problem of extraction goes away. The agent can be instructed with an AGENTS.md file to treat the Org Mode file as the source of truth, to always explain in prose what is going on, and to tangle before execution. The agent is very good at all of these things, and it never gets tired of re-explaining something in prose after a tweak to the code.
The fundamental extra labor of literate programming, which I believe is why it is not widely practiced, is eliminated by the agent, and doing so exercises the capabilities the large language model is best at: translation and summarization.
As a benefit, the code base can now be exported into many formats for comfortable reading. This is especially important if the primary role of engineers is shifting from writing to reading.
I don’t have data to support this, but I also suspect that literate programming will improve the quality of generated code, because the prose explaining the intent of each code block will appear in context alongside the code itself.
I have not personally had the opportunity to try this pattern yet on a larger, more serious codebase. So far, I have only been using this workflow for testing and for documenting manual processes, but I am thrilled by its application there.
I also recognize that the Org format is a limiting factor, due to its tight integration with Emacs. However, I have long believed that Org should escape Emacs. I would promote something like Markdown instead, but Markdown lacks the ability to include metadata. As usual in my posts about Emacs, though, it’s not Emacs’s specific implementation of the idea that excites me, even if in this case Org’s implementation of literate programming does.
It is the idea itself that is exciting to me, not the tool.
With agents, does it become practical to have large codebases that can be read like a narrative, whose prose is kept in sync with changes to the code by tireless machines?
...
Read the original on silly.business »
AngstromIO is one of the smallest devboards out there, barely longer than a USB-C connector, based on the ATtiny1616 MCU. Two GPIOs as well as the I2C lines are broken out. I made a dual CH340 programming board too, for both UPDI programming and debugging (one-way serial communication). I also designed a breadboard-friendly experimentation board for the CH32V003, with a 4-by-5 charlieplexed LED matrix.
While the AngstromIO is a tiny yet powerful devboard that could be embedded in any space-constrained project, the CH32 devboard is more of an experimentation board, for me to learn how to program this awesome chip in MounriverStudio and how to drive a charlieplexed matrix. The programmer is an all-in-one module that makes debugging with the serial monitor while programming easy: one board for both.
* One of the smallest devboards: 8.9mm by 9mm, USB-C included
* ATtiny1616 MCU, 16 KB flash, low power, Arduino compatible (for basic libraries at least)
* Pins broken out: SCL, SDA, PB2 (TX), PA3, +5V, GND, and UPDI for programming
* Dual CH340E setup:
  * One for programming (set as SerialUPDI programmer)
  * One for debugging (Serial Communication, USB to UART)
* 2 USB-C for data transfer, only the USB-C for Serial provides 5V to the board
* USB-C for power, the CH32 runs at 3.3V but PC6 and PC5 are 5V tolerant
Arduino compatible, some libraries may not work, but some have been arranged/made by SpenceKonde like Wire (I2C) and tinyNeoPixel (for more information, see: https://github.com/SpenceKonde/megaTinyCore/tree/master/megaavr/libraries)
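As a rough illustration of the programming flow, flashing the ATtiny1616 over SerialUPDI with pymcuprog looks roughly like this; the serial port and hex file name are placeholders, and megaTinyCore can also drive the same process from the Arduino IDE.

# Assumes pymcuprog is installed (pip install pymcuprog) and the programming-side
# CH340E enumerates as /dev/ttyUSB0 (placeholder; check your system)
pymcuprog ping -t uart -u /dev/ttyUSB0 -d attiny1616
pymcuprog write -t uart -u /dev/ttyUSB0 -d attiny1616 -f firmware.hex --erase --verify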
PCB designed in EasyEDA Pro, 2 layers, 1.0mm thick, purple soldermask. All 3 designs panelized into one PCB.
🚧 to be continued…
...
Read the original on github.com »
The US economy has proven more resilient than many feared, but one Wall Street veteran says that a recession is only being held back by tech.
Jim Paulsen, a markets strategist with more than 40 years of experience on Wall Street, argued that most of the economy is already in a recession. Tech spending has an outsized influence on economic growth, bolstering the data while the “old” economy struggles.
“Technology may be the tail wagging the dog, but the rest is a recession by any other name,” Paulsen wrote in a Thursday Substack post.
Real private GDP rose 2.3% in 2025, Paulsen said, but nearly all of this economic growth is tied to what he calls “new era” growth.
“Excluding new era investment, the other 89% of real private spending rose by only 1% with no job creation,” the strategist wrote.
Real GDP is generally considered to be a good measure of economic growth, but in recent years, the metric has seen several distortions from factors like government spending, tax changes, and tariff-fueled supply and demand volatility. Focusing on private real GDP strips out some of these distortions.
“Do we really need to continue focusing mostly on inflation when 89% of the private economy is in a recession and the 11% which is booming — new era pursuits — are by their very nature ‘disinflationary’?”
The strategist focused on business spending on information processing equipment and intellectual property, using it as a measure of “new era” spending. This would include big tech’s mega spending on AI.
Paulsen found that the new era subsection has grown nearly 2.5 times as fast as traditional private-sector spending. This gap has only widened in more recent years.
New-era private spending grew 14% in 2025, compared to 1% growth in private spending, excluding the tech-focused subset.
“Overall, new era pursuits have grown rapidly and their influence on the overall U.S. economy has become outsized relative to old era activities,” Paulsen wrote.
Paulsen compared the widening gap to the stock market narrative of the Magnificent Seven vs. the other 493 stocks in the S&P 500.
The stock market lately has seen gains broaden, with AI disruption fears and the war in Iran supporting a broader rotation out of former tech leaders.
The dynamic between the new era economy and everything else shows a deeper bifurcation.
“When the President, the Federal Reserve Chairman, economists, financial pundits, and journalists imply overall real GDP growth currently remains okay, it misses the fact that, similar to the stock market, although the aggregate growth number appears satisfactory, the great bulk of the economy — 89% — is NOT doing okay!”
This dynamic could explain the mixed sentiment among economists as well as the discrepancy between the American public’s negative view of the economy and what economic data signals.
...
Read the original on www.businessinsider.com »
Social media was once hailed as the great democratizer of information. It promised a world where anyone could share their voice, connect with like-minded communities, and discover authentic human perspectives. But in 2025, the landscape looks very different. Social platforms are drowning in a flood of Generative AI (GenAI) content—articles, images, videos, and even entire conversations churned out at industrial scale. The result is a noisy ocean of sameness, where genuine human voices struggle to be heard.
For years, one could track the slow decline of social platforms: from algorithmic manipulation and advertising overload, to bot armies, political polarization, and the prioritization of engagement over truth. Yet, the explosion of AI-generated content has dramatically accelerated this downfall. Instead of scrolling for connection, users now find themselves wading through endless streams of repetitive, shallow, or outright meaningless material. This is not just a temporary nuisance. It signals the death of social media as we know it. And in this environment, an old technology—long thought to be obsolete—emerges as a savior: RSS, or Really Simple Syndication. RSS offers a way out of the algorithmic maze, a return to authenticity, and a chance to reclaim control over the information we consume.
This article explores why AI-generated content is destroying social media, how RSS provides a viable and elegant solution, and why now might be the perfect moment to revive this forgotten technology.
Generative AI has lowered the barrier to content creation to near zero. What once required hours of writing, designing, or editing can now be produced in seconds. Tools like ChatGPT, Midjourney, and Runway generate polished text, stunning visuals, and slick videos instantly. This is revolutionary in some ways, but devastating in others. The problem lies in scale. When anyone—or any bot—can publish unlimited content at no cost, the supply of information skyrockets, but human attention remains finite. The inevitable result is oversaturation, an endless flood of low-value material that drowns out everything else.
AI-generated articles and posts often sound competent, but they rarely sound alive. They mimic human style but lack human depth. After reading a dozen AI-written articles, a pattern emerges: similar phrases, repetitive structures, and predictable conclusions. The internet is filling up with machine-generated déjà vu. For readers, this creates fatigue from encountering the same types of content over and over, and it erodes trust as it becomes difficult to distinguish genuine human thought from automated output.
Compounding this problem is the way social media algorithms function. They don’t care about authenticity; they care about engagement. AI-generated content, designed for clicks and shares, fits neatly into their goals. As more content is produced, algorithms amplify the ones that trigger emotions or quick interactions, even if they are shallow, manipulative, or misleading. AI makes it cheaper to produce clickbait, and social media ensures it spreads faster than ever. For creators, this is crushing. Human-made articles, videos, or posts now compete against endless waves of machine-made content. Audiences can barely tell the difference, and many no longer care. The result is suffocation of authentic voices. For users, the platforms feel less personal, less inspiring, and less trustworthy. Social media promised community, but what we now get is content sludge.
The decay of social media did not begin with AI. For over a decade, cracks have been visible. Platforms increasingly prioritized advertisers over users, forcing ad overload onto feeds. Algorithms replaced user choice with automated curation. Fake news and bots undermined trust. Excessive use was linked to anxiety, depression, and polarization. Yet now, the flood of AI content accelerates this process. If every scroll reveals machine-written posts, AI art clones, or synthetic influencers, the value of the platform erodes further. People log in less, engagement drops, and the feedback loop worsens.
At the heart of this decline lies the collapse of authenticity. The core appeal of social media was its human connection, the spontaneity of real people sharing real moments. But with AI content dominating feeds, that connection is diluted. Imagine logging into Instagram and seeing 90 percent AI-generated fashion shoots, travel photos, or influencer avatars. Or scrolling Twitter/X and finding endless AI-written hot takes. The magic of human unpredictability vanishes. Platforms are also trapped by their reliance on advertising. Ads depend on user trust and engagement, but as feeds become indistinguishable from AI sludge, users disengage and advertisers see diminishing returns. The result is a death spiral: platforms push harder for monetization while driving users further away. This is the trajectory we are on, and it points toward the death of social media as we knew it.
RSS, or Really Simple Syndication, is an old technology from the early web. At its core, it is a feed format that allows websites to publish updates which users can subscribe to. Instead of visiting each site individually, you can aggregate updates into a single reader app. Think of it as an inbox for the internet. While RSS was once popular in the blogging era of the 2000s, it faded as social media took over. But now, in the age of AI content floods, it might be the perfect antidote.
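To make that concrete, here is a minimal sketch of what subscribing boils down to: the site publishes an XML feed at a well-known URL (the URL below is a placeholder; most blogs expose /feed, /rss.xml, or /atom.xml), and your reader fetches it on a schedule. Even a shell one-liner can pull out the latest item titles.

curl -s https://example.com/feed.xml \
  | grep -o '<title>[^<]*</title>' \
  | sed 's/<[^>]*>//g' \
  | head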
The benefits of RSS are striking. With RSS, you subscribe directly to websites, blogs, or news outlets, meaning there is no middleman algorithm deciding what you see. By curating your own sources, you can focus on human-written blogs, company press releases, or trusted outlets. Most feeds deliver clean content without injected ads or surveillance. Updates appear instantly in your reader, often faster than social media platforms surface them. Above all, RSS returns control to the user. You decide which voices matter, with no algorithm filtering or manipulating your feed.
Unlike social feeds, where AI-generated spam floods in, RSS allows you to choose human sources deliberately. If you want updates from your favorite indie journalist, you simply add their blog feed. If you want company news straight from the source, you subscribe to their press release feed. The result is a curated, personalized feed of authentic voices. In other words, RSS restores the signal-to-noise ratio.
Despite its age, RSS never disappeared. Many apps and services still support it, some paid, some free, some open source. During my own exploration, I tested multiple RSS reader apps. Most worked, but one stood out: Feeder.
Feeder is free, lightweight, and refreshingly simple. Unlike bloated apps or paid services, it focuses on what matters. There are no distracting ads, only content. It works across phone and desktop, providing seamless use. Transparency is another key advantage—it is open source on GitHub, meaning you can audit what it does. Notifications, search, folders, and syncing all work smoothly. It doesn’t try to reinvent RSS; it just makes it work. In a time where transparency and trust matter, Feeder’s open-source nature is a massive plus.
Of course, Feeder is not the only option. Other strong contenders include Feedly, Inoreader, and NetNewsWire. But Feeder captures the spirit of RSS: independence, simplicity, and user control.
AI-driven social media thrives on removing choice. It decides what you see, when you see it, and how you feel about it. RSS flips the script. You decide your sources. You build your own information ecosystem. Instead of waiting for an article to appear on Twitter/X or Facebook—filtered by algorithms, distorted by ads—you get it straight from the source, with no delays, no manipulation, and no man in the middle.
By subscribing directly to human creators—bloggers, journalists, thinkers—you bypass the noise and support authenticity. These voices are harder to find on AI-clogged platforms, but with RSS, they come straight to you. With RSS, there is no hidden data mining, no invisible AI influence. Just a feed of your chosen content, delivered locally. In a digital world plagued by hidden algorithms, this transparency is refreshing.
Social media will not vanish overnight, but its role is changing. For many, it will become background noise, a chaotic carnival of AI bots shouting into the void. Platforms may survive as entertainment machines, but their credibility as sources of authentic human insight is gone. The early web was decentralized, built on blogs, forums, and personal sites. Social media centralized everything, but at great cost. RSS offers a way back: decentralized, user-driven, and authentic. Instead of one feed controlled by one corporation, each person builds their own.
In the age of infinite AI content, curation becomes the most valuable act. RSS empowers individuals to curate their own feeds. Instead of passively scrolling, we actively select. Instead of drowning, we choose.
Social media as we knew it is dying—not because humans lost interest in sharing, but because machines overwhelmed the system. Generative AI has flooded platforms with repetitive, shallow, and manipulative content. Algorithms amplify the noise. Authenticity evaporates.
But out of this collapse emerges an opportunity: the rebirth of RSS. This old, simple technology empowers us to reclaim control, cut out middlemen, and reconnect with genuine human voices. Tools like Feeder make it accessible again, providing clean, ad-free, transparent feeds of the content that matters most.
The future of information consumption may not be algorithm-driven feeds but self-curated, human-centered inboxes of authenticity. The death of social media could be the start of something better: a renaissance of the open web. RSS isn’t just nostalgia. It’s a survival tool in the age of AI content floods. And the sooner we embrace it, the sooner we can rediscover the joy of consuming information that feels truly human.
By the way, here is my own RSS blog feed!
...
Read the original on www.smartlab.at »
2025 was a pretty busy year for single board computers. I had 15 boards released in 2025 come through the bench from 8 different manufacturers, spanning SoCs from Rockchip, Broadcom, Qualcomm, MediaTek, Allwinner, StarFive, CIX, and Texas Instruments. Prices have ranged from $42 all the way up to $590, and the variety on offer has been genuinely impressive. We’ve had RISC-V boards, Qualcomm entering the SBC space (in a big way), a new-ish SoC vendor in CIX turning heads, an Arduino SBC of all things, and Raspberry Pi iterating on their keyboard form factor.
Also, my friend Meco of sbcwiki has a great series called “State of Embedded” that shares some insight into the scene, too, and you can read his Q4 roundup on the matter if you want a bit of a dive into things in general, rather than specific SBCs.
All of the boards in this article have been benchmarked and are available to compare on sbc.compare, so if you want to dig into the raw numbers yourself, head over there. I’ll be linking to each board’s page throughout this article so you can see the full data for anything that catches your eye.
Before we get into it though, a quick note on pricing. The prices listed throughout this article are what the boards were retailing at when I tested them. As many of you will be aware, LPDDR4 and LPDDR5 memory didn’t escape RAMageddon and costs have been climbing since late 2025, with manufacturers pivoting production towards more profitable (and AI-hungry) memory types. This has already hit Raspberry Pi pricing and it’s affecting other boards too. Some of the prices you see here may have gone up, or the boards may not be available at all right now. I’ll try to note where I’m aware of changes, but do check current pricing before purchasing and comparing.
Six boards came in under $50 this year, and they’re a surprisingly varied bunch. You’ve got RISC-V, ARM, even a decades-old Texas Instruments SoC making an appearance. If you’re looking to tinker without a significant outlay, 2025 has given you plenty to choose from.
So to kick things off, I’ll be honest, the BeagleBone Green Eco is a bit of an oddity on this list. It’s running a TI Sitara AM3358, a single-core Cortex-A8, with 512MB of DDR3L. In 2025. BeagleBoard have always had a strong following in the industrial and education spaces though, and this is clearly where the Green Eco is aimed. It’s not here to compete on raw performance and it knows it. If you need something reliable, well-documented, and with a long history of community support for embedded applications, BeagleBoard have that covered. For general-purpose SBC tinkering though, you’re better off looking elsewhere in this tier.
StarFive’s VisionFive 2 Lite is a trimmed-down version of the VisionFive 2 that I reviewed a while back, running the JH7110S variant of the SoC. With Geekbench 6 scores of 59 single-core and 180 multi-core, the numbers aren’t going to set the world alight, but sadly, that’s the state of RISC-V in 2025 (at least at these price points). If you’re interested in the architecture and want a cheap way in, $43 for 4GB of RAM isn’t bad. Just don’t go in expecting 2025 ARM-level performance.
This one caught me off guard. Arduino, the company most of us associate with microcontrollers and blinking LEDs, have released an SBC. The UNO Q runs a Qualcomm QRB2210 with 2GB of LPDDR4X, and whilst the Geekbench scores of 190 SC / 527 MC are modest, I think the story here is less about the performance and more about what Arduino are trying to do. It feels like they’re testing the waters in the SBC market, and having Qualcomm silicon in an Arduino product is something that we should have seen coming given Qualcomm’s acquisition of them. Whether there’s a meaningful audience for it beyond curiosity remains to be seen, however, as it was a rather confusing product. My X thread below details it fairly well (feel free to follow whilst you’re there, too!)
Orange Pi have thrown two RISC-V boards into the ring this year. The Orange Pi RV ($50) runs the StarFive JH7110, the same chip as the VisionFive 2, pulling in Geekbench scores of 74 SC / 220 MC. The Orange Pi RV2 ($46) is the more interesting of the pair, using the less common Ky X1 SoC (which seems to just be a clone of the Spacemit K1?) and managing 118 SC / 528 MC in Geekbench. Both come with 4GB of LPDDR4/4X.
It’s good to see Orange Pi investing in RISC-V alongside their ARM lineup to keep things fresh and fun. The RV2’s Ky X1 results are a step in the right direction, though we’re still a long way from RISC-V boards being competitive with similarly priced ARM options on raw compute. The software ecosystems for both chips are still maturing too, so if you’re picking one of these up, you’ll want to be comfortable with a bit of tinkering to get things working the way you want, but the software wasn’t completely terrible, which is nice.
The Radxa Cubie A7A is the standout of the budget tier. An Allwinner A733 with 6GB of LPDDR5 for $45 is a strong proposition, and the Geekbench scores of 641 SC / 1,545 MC put it comfortably ahead of everything else under $50. For context, that multi-core score isn’t far off some of the Rockchip RK3576 boards in the next tier up that cost $60+. If you’re after the best bang for your buck in 2025, the Cubie A7A makes a compelling case for itself.
The $50 to $100 bracket is where most people tend to shop, and this year it’s been dominated by a few interesting trends. Rockchip’s RK3576 has turned up on multiple boards, Qualcomm has made a very strong entrance, and there’s a networking-focused option that doesn’t care about your Geekbench score.
The Radxa ROCK 4D is one of three RK3576-based boards I’ve tested this year (the others being the ArmSoM CM5 and the DShanPi A1). With 8GB of LPDDR5 at $60, it’s the cheapest way into RK3576 territory. Geekbench results of 319 SC / 1,332 MC are right in line with what we’ve seen from this SoC across all three boards. You can compare all three RK3576 boards on sbc.compare if you’re curious how they stack up against each other.
At this price for 8GB of RAM, it’s solid value. The usual Rockchip software story applies though, so do factor that in. Armbian have a great alternative to the Radxa OS builds, and that’s what I’ve used in testing, so maybe it’s less of an issue on that front.
If I had to pick the most interesting board of the year, the Radxa Dragon Q6A would be right up there. A Qualcomm QCS6490 with 6GB of LPDDR5 for $70, and Geekbench scores of 1,180 SC / 3,215 MC. Read those numbers again. That single-core score is in Raspberry Pi 5 territory, at $70, from a Qualcomm SoC, on a Radxa board. If someone had told me that a year ago, I’d have been extremely sceptical. It even dips its toes into Windows on ARM and can offer respectable gaming performance!
Qualcomm entering the SBC market is one of the bigger stories of 2025 in my view. The performance-per-dollar here is genuinely impressive. The big question mark is the software ecosystem. Qualcomm’s Linux support has historically been a little hit and miss, though they seem to be really trying with these SoCs, and mainline support is there or thereabouts. It also has great Armbian support. Regardless, the hardware is there, and I’m here for it.
The ArmSoM CM5 is another RK3576 board, this time in a compute module form factor with 8GB of LPDDR5 at $95. Geekbench scores of 326 SC / 1,366 MC are essentially identical to the ROCK 4D, which makes sense given they share the same SoC. The price premium over the ROCK 4D comes down to the compute module form factor, which serves a different purpose. If you need something that slots into a carrier board for a specific deployment, the CM5 fills that role. If you’re after a general-purpose dev board and don’t need the modularity, the ROCK 4D at $60 is likely the better buy. I’ve reviewed ArmSoM’s AIM7 previously and came away impressed with their documentation and approach, so they’re a brand worth keeping an eye on.
The Banana Pi R4 is a different beast entirely. Running a MediaTek MT7988A with 8GB of LPDDR4 at $99, its Geekbench scores of 305 SC / 889 MC look underwhelming on paper. But judging this board on CPU benchmarks alone would be missing the point (and to be honest, that stands true for most of the boards in this piece.) The MT7988A is a networking SoC, and the R4 exists for OpenWrt, router, and network appliance use cases. If you’re looking for something to replace ageing network hardware or build a custom router, this is the board to look at. If you want a general-purpose SBC, it’s not.
I’m using the R4 in the office for benchmarking; with its 10Gbit SFP+ ports, it’s connected to my big ol’ UniFi Pro 48 PoE switch and acts as a perfect iPerf3 endpoint for testing!
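For anyone curious, the iPerf3 side of that setup is as simple as it gets; the address below is a placeholder for whatever the R4 gets on your network, and the run length and stream count are just reasonable defaults.

# On the Banana Pi R4 (server side)
iperf3 -s

# On the board under test (client side), pointing at the R4's address
iperf3 -c 192.168.1.2 -t 30 -P 4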
Five boards landed above the $100 mark this year, ranging from $199 to $590. This is where we see some serious silicon, a newer SoC vendor making waves, and one board that’s in a category of its own.
I’ve already reviewed the Pi 500+ in full, so I won’t rehash everything here. The short version: 16GB of LPDDR4X, NVMe storage, mechanical keyboard switches, and it’s what the original Pi 500 should have been from the start. Geekbench scores of 892 SC / 2,121 MC are in line with the BCM2712’s known performance. At $200 it’s not cheap, but when you break down what you’re getting (the NVMe, the doubled RAM, the mechanical keyboard upgrade), the value is there. It’s the most refined Raspberry Pi product to date in my opinion, though 2025 wasn’t a revolutionary year for Pi. More of an iterative one.
The ArmSoM AIM7 packs an RK3588 with 8GB of LPDDR4X into a Jetson Nano-compatible form factor. Geekbench scores of 828 SC / 3,186 MC. I covered this one in depth in my full AIM7 review and the summary hasn’t changed: if you specifically need a Jetson Nano replacement with modern performance, the AIM7 does exactly what it says on the tin. If you don’t need that form factor compatibility, there are more flexible RK3588 options out there for less money. As mentioned earlier, ArmSoM continue to impress on the documentation and overall quality front though.
I’m covering these two together because they share the same story: the CIX P1. This is a brand new SoC from a vendor that wasn’t on anyone’s radar a year ago, and both the Radxa Orion O6N ($199) and the Orange Pi 6 Plus ($260) pair it with 32GB of LPDDR5.
The numbers speak for themselves. The O6N pulls in 1,327 SC / 6,954 MC on Geekbench, whilst the 6 Plus hits 1,328 SC / 7,130 MC. Those multi-core scores are in a completely different league to anything else on this list. You can compare the two CIX P1 boards directly on sbc.compare and the results are near-identical, as you’d expect from the same SoC.
So is the CIX P1 the most exciting new SoC to arrive in the SBC space this year? Based on raw performance, I’d say yes. Having two manufacturers already shipping boards with it suggests there’s confidence in the platform. The $61 price gap between the O6N and the 6 Plus is worth investigating if you’re considering one, as the silicon is the same and the performance is effectively identical.
Software maturity is the usual question mark with a new SoC vendor, but the early signs are encouraging. There are power consumption quirks, along with a few other annoyances, but in terms of raw performance, they take the top spots. Other SBCs (if I can call them that, I guess?) with the CIX P1 include the Radxa Orion O6 (the mITX brother of the O6N) and the Minisforum MS-R1, though the former is a bit harder to get right now, and the latter came at a pretty unfortunate time. It’s a polished unit; it’s just sadly quite expensive compared to the alternatives.
Finally, the Fogwise Airbox Q900 is the outlier on this list. A Qualcomm IQ-9075 with 36GB of LPDDR5 at $590 isn’t really a traditional SBC purchase. Geekbench scores of 1,111 SC / 5,638 MC are strong but actually fall behind the CIX P1 boards that cost a third of the price. The Airbox’s value proposition is in its Qualcomm AI stack and the specific edge deployment use cases it’s designed for, not general-purpose compute. If you know you need it, you know you need it. For the rest of us, it’s interesting to see where the high end of the SBC market is heading, but $590 is a lot of money for a single board computer.
I can’t write a 2025 SBC roundup without addressing the memory situation. LPDDR4 and LPDDR5 prices have been climbing since the second half of the year, driven by manufacturers shifting production towards higher-margin memory types (the AI boom has a lot to answer for here). We’ve already seen Raspberry Pi raise prices across their lineup, and they’re unlikely to be the last.
For the boards in this article, the prices I’ve listed are from when I tested them. Some may have gone up since, others may be temporarily out of stock. If you’re looking to pick something up, do check current pricing on the relevant retailer or AliExpress listing before ordering. The situation is supposedly temporary (come on bubble.. pop..), but whether that means months or years is anyone’s guess at this point.
The silver lining, if there is one, is that it makes performance-per-dollar comparisons all the more relevant. If you’re going to pay more for a board, you want to make sure you’re getting the most out of that money. That’s exactly the kind of comparison you can run on sbc.compare, so do make use of it!
Looking back at 15 boards across a year, a few themes stand out.
Two boards, monster multi-core numbers, and a new vendor that brute-forced their way to the top. If the software ecosystem matures alongside the hardware, CIX could become a serious player. The fact that Radxa, Minisforum, and Orange Pi are all already shipping boards with it says something about the confidence in the platform.
The Dragon Q6A’s single-core performance at $70 is a shot across the bow for anyone who thought ARM SBCs were a two-horse race between Broadcom and Rockchip. The Arduino UNO Q and Fogwise Airbox Q900 round out a spread from budget to premium. Software support will be the deciding factor, as it always is, but the silicon is competitive.
We even have Indian manufacturer Vicharak teasing a modular SBC that utilises the same Qualcomm SoC as the Dragon Q6A from Radxa. I’ve been speaking with one of their hardware team about it, and it seems like it’s getting close to reality, which I’m pretty excited about!
Four boards this year (VisionFive 2 Lite, Orange Pi RV, Orange Pi RV2, plus the Ky X1 showing up in the RV2) and the performance gap to ARM remains significant. It’s getting closer, and the fact that we’re seeing this many RISC-V boards in a single year is progress in itself, but we’re not at the point where I’d recommend one over an ARM board for anything other than architecture-specific interest. I’m not a hater, though, don’t get me wrong. I love the fast progress and the competition it’s slowly starting to offer. 2026 has already shown some promising new releases (with the Spacemit K3-based boards like the Milk-V Jupiter) so keep your eyes peeled.
Three boards this year (ROCK 4D, ArmSoM CM5, DShanPi A1), all performing within a few percent of each other. The software story is the same as it’s been for Rockchip, and that’s promising hardware that needs the kernel and driver support to catch up. That said, Collabora’s recent progress on upstream video decoder support for the RK3576 and RK3588 is encouraging, and it only builds on the great work they’ve already done in the area.
From $42 to $590, the definition of “single board computer” is stretching. That’s not a bad thing. Having options at every price point means there’s something for every use case, whether you’re building a network appliance for $99 or deploying edge AI at $590.
The 500+ is a great product and my pick if you want something that just works, but 2025 wasn’t a Pi-defined year the way 2023 and 2024 were. The competition has stepped up, and Raspberry Pi are likely biding their time to hit a sweet spot for interest (and likely costs!) before releasing the Raspberry Pi 6. Will that be 2026? Unless RAM pricing comes crashing down, I’d be surprised, but maybe it’ll launch with 1GB/2GB RAM options first and spread out over time. I’d hope not, as that would hamstring the SoC I imagine, but let’s see.
If you want to explore all of the boards in this article (and the 80+ others I’ve benchmarked), head over to sbc.compare and have a dig around. If there’s a specific comparison you’d like to see, or a board you think I should get on the bench for 2026, let me know in the comments, or shout at your favourite SBC vendor to reach out ;-)
I hope 2026 is everything you want it to be!
...
Read the original on bret.dk »
FFmpeg is truly a multi-tool for media processing. As an industry-standard tool it supports a wide variety of audio and video codecs and container formats. It can also orchestrate complex chains of filters for media editing and manipulation. For the people who use our apps, FFmpeg plays an important role in enabling new video experiences and improving the reliability of existing ones.
Meta executes ffmpeg (the main CLI application) and ffprobe (a utility for obtaining media file properties) binaries tens of billions of times a day, introducing unique challenges when dealing with media files. FFmpeg can easily perform transcoding and editing on individual files, but our workflows have additional requirements to meet our needs. For many years we had to rely on our own internally developed fork of FFmpeg to provide features that have only recently been added to FFmpeg, such as threaded multi-lane encoding and real-time quality metric computation.
Over time, our internal fork came to diverge significantly from the upstream version of FFmpeg. At the same time, new versions of FFmpeg brought support for new codecs and file formats, and reliability improvements, all of which allowed us to ingest more diverse video content from users without disruptions. This necessitated that we support both recent open-source versions of FFmpeg alongside our internal fork. Not only did this create a gradually divergent feature set, it also created challenges around safely rebasing our internal changes to avoid regressions.
As our internal fork became increasingly outdated, we collaborated with FFmpeg developers, FFlabs, and VideoLAN to develop features in FFmpeg that allowed us to fully deprecate our internal fork and rely exclusively on the upstream version for our use cases. Using upstreamed patches and refactorings we’ve been able to fill two important gaps that we had previously relied on our internal fork to fill: threaded, multi-lane transcoding and real-time quality metrics.
When a user uploads a video through one of our apps, we generate a set of encodings to support Dynamic Adaptive Streaming over HTTP (DASH) playback. DASH playback allows the app’s video player to dynamically choose an encoding based on signals such as network conditions. These encodings can differ in resolution, codec, framerate, and visual quality level but they are created from the same source encoding, and the player can seamlessly switch between them in real time.
In a very simple system, separate FFmpeg command lines can generate the encodings for each lane one by one, in serial. This could be optimized by running each command in parallel, but this quickly becomes inefficient due to the duplicate work done by each process.
To work around this, multiple outputs can be generated within a single FFmpeg command line, decoding the frames of a video once and sending them to each output’s encoder instance. This eliminates a lot of overhead by deduplicating the video decoding and the process startup time incurred by each command line. Given that we process over 1 billion video uploads daily, each requiring multiple FFmpeg executions, reductions in per-process compute usage yield significant efficiency gains.
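As an illustrative sketch of that idea (the lane resolutions, bitrates, codec, and file names below are assumptions, not Meta’s actual encoding ladder), a single command line can decode once, split the frames, and feed several encoder instances:

# Decode once, split, and produce three DASH-style lanes in one process
ffmpeg -i input.mp4 \
  -filter_complex "[0:v]split=3[a][b][c];[a]scale=1920:-2[hi];[b]scale=1280:-2[mid];[c]scale=640:-2[lo]" \
  -map "[hi]"  -c:v libx264 -b:v 5M    lane_1080p.mp4 \
  -map "[mid]" -c:v libx264 -b:v 2500k lane_720p.mp4 \
  -map "[lo]"  -c:v libx264 -b:v 800k  lane_360p.mp4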
Our internal FFmpeg fork provided an additional optimization to this: parallelized video encoding. While individual video encoders are often internally multi-threaded, previous FFmpeg versions executed each encoder in serial for a given frame when multiple encoders were in use. By running all encoder instances in parallel, better parallelism can be obtained overall.
Thanks to contributions from FFmpeg developers, including those at FFlabs and VideoLAN, more efficient threading was implemented starting with FFmpeg 6.0, with the finishing touches landing in 8.0. This was directly influenced by the design of our internal fork and was one of the main features we had relied on it to provide. This development led to the most complex refactoring of FFmpeg in decades and has enabled more efficient encodings for all FFmpeg users.
To fully migrate off of our internal fork we needed one more feature implemented upstream: real-time quality metrics.
Visual quality metrics, which give a numeric representation of the perceived visual quality of media, can be used to quantify the quality loss incurred from compression. These metrics are categorized as reference or no-reference metrics, where the former compares a reference encoding to some other distorted encoding.
FFmpeg can compute various visual quality metrics such as PSNR, SSIM, and VMAF using two existing encodings in a separate command line after encoding has finished. This is okay for offline or VOD use cases, but not for livestreaming where we might want to compute quality metrics in real time.
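The separate-command-line approach described here looks roughly like the following (file names are placeholders): the distorted encoding is compared frame by frame against the reference, and the filter writes per-frame statistics to a log.

# Offline quality check after encoding has finished
ffmpeg -i distorted.mp4 -i reference.mp4 -lavfi psnr=stats_file=psnr.log -f null -

# SSIM works the same way; VMAF uses the libvmaf filter if FFmpeg was built with it
ffmpeg -i distorted.mp4 -i reference.mp4 -lavfi ssim -f null -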
To do this, we need to insert a video decoder after each video encoder used by each output lane. These provide bitmaps for each frame in the video after compression has been applied so that we can compare against the frames before compression. In the end, we can produce a quality metric for each encoded lane in real time using a single FFmpeg command line.
Thanks to “in-loop” decoding, which was enabled by FFmpeg developers including those from FFlabs and VideoLAN, beginning with FFmpeg 7.0, we no longer have to rely on our internal FFmpeg fork for this capability.
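The upstream mechanism behind this is FFmpeg’s loopback decoders, added in 7.0. The sketch below is my reading of that feature rather than Meta’s pipeline, and the option placement is an assumption based on the FFmpeg CLI documentation, so treat it as a starting point and check the current docs.

# Encode one lane, decode its output again via a loopback decoder (-dec 0:0),
# and compare the decoded frames against the source in the same process
ffmpeg -i input.mp4 \
  -map 0:v -c:v libx264 -b:v 2500k lane_720p.mp4 \
  -dec 0:0 \
  -filter_complex "[dec:0][0:v]psnr=stats_file=live_psnr.log[qm]" \
  -map "[qm]" -f null -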
Things like real-time quality metrics while transcoding and more efficient threading can bring efficiency gains to a variety of FFmpeg-based pipelines both in and outside of Meta, and we strive to enable these developments upstream to benefit the FFmpeg community and wider industry. However, there are some patches we’ve developed internally that don’t make sense to contribute upstream. These are highly specific to our infrastructure and don’t generalize well.
FFmpeg supports hardware-accelerated decoding, encoding, and filtering with devices such as NVIDIA’s NVDEC and NVENC, AMD’s Unified Video Decoder (UVD), and Intel’s Quick Sync Video (QSV). Each device is supported through an implementation of standard APIs in FFmpeg, allowing for easier integration and minimizing the need for device-specific command line flags. We’ve added support for the Meta Scalable Video Processor (MSVP), our custom ASIC for video transcoding, through these same APIs, enabling the use of common tooling across different hardware platforms with minimal platform-specific quirks.
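The practical upshot of those shared APIs is that switching hardware back ends is mostly a matter of swapping the hwaccel device and encoder name, as in this generic sketch using NVIDIA’s NVDEC/NVENC (the MSVP equivalents are internal to Meta and not shown here):

# Hardware decode and encode on an NVIDIA GPU; other back ends follow the same shape
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
  -c:v h264_nvenc -b:v 4M output.mp4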
As MSVP is only used within Meta’s own infrastructure, it would create a challenge for FFmpeg developers to support it without access to the hardware for testing and validation. In this case, it makes sense to keep patches like this internal since they wouldn’t provide benefit externally. We’ve taken on the responsibility of rebasing our internal patches onto more recent FFmpeg versions over time, utilizing extensive validation to ensure robustness and correctness during upgrades.
With more efficient multi-lane encoding and real-time quality metrics, we were able to fully deprecate our internal FFmpeg fork for all VOD and livestreaming pipelines. And thanks to standardized hardware APIs in FFmpeg, we’ve been able to support our MSVP ASIC alongside software-based pipelines with minimal friction.
FFmpeg has withstood the test of time with over 25 years of active development. Developments that improve resource utilization, add support for new codecs and features, and increase reliability enable robust support for a wider range of media. For people on our platforms, this means enabling new experiences and improving the reliability of existing ones. We plan to continue investing in FFmpeg in partnership with open source developers, bringing benefits to Meta, the wider industry, and people who use our products.
We would like to acknowledge contributions from the open source community, our partners in FFlabs and VideoLAN, and many Meta engineers, including Max Bykov, Jordi Cenzano Ferret, Tim Harris, Colleen Henry, Mark Shwartzman, Haixia Shi, Cosmin Stejerean, Hassene Tmar, and Victor Loh.
...
Read the original on engineering.fb.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.