10 interesting stories served every morning and every evening.




1 769 shares, 31 trendiness

Agent Safehouse

Go full --yolo. We’ve got you. LLMs are probabilistic — a 1% chance of disaster makes it a matter of when, not if. Safehouse makes this a 0% chance — enforced by the kernel. Safehouse denies write access outside your project directory. The kernel blocks the syscall before any file is touched. All agents work perfectly in their sandboxes, but can’t impact anything outside them.

Agents inherit your full user permissions. Safehouse flips this — nothing is accessible unless explicitly granted.

Download a single shell script, make it executable, and run your agent inside it. No build step, no dependencies — just Bash and macOS.

Safehouse automatically grants read/write access to the selected workdir (git root by default) and read access to your installed toolchains. Most of your home directory — SSH keys, other repos, personal files — is denied by the kernel.

See it fail — proof the sandbox works. Try reading something sensitive inside Safehouse. The kernel blocks it before the process ever sees the data.

# Try to read your SSH private key — denied by the kernel

safehouse cat ~/.ssh/id_ed25519

# cat: /Users/you/.ssh/id_ed25519: Operation not permitted

# Try to list another repo — invisible

safehouse ls ~/other-project

# ls: /Users/you/other-project: Operation not permitted

# But your current project works fine

safehouse ls .

# README.md src/ package.json …

Add these to your shell config and every agent runs inside Safehouse automatically — you don’t have to remember. To run without the sandbox, use `command claude` to bypass the function.

# ~/.zshrc or ~/.bashrc

safe() { safehouse --add-dirs-ro=~/mywork "$@"; }

# Sandboxed — the default. Just type the command name.

claude() { safe claude --dangerously-skip-permissions "$@"; }

codex() { safe codex --dangerously-bypass-approvals-and-sandbox "$@"; }

amp() { safe amp --dangerously-allow-all "$@"; }

gemini() { NO_BROWSER=true safe gemini --yolo "$@"; }

# Unsandboxed — bypass the function with `command`

# command claude — plain interactive session

Generate your own profile with an LLM. Use a ready-made prompt that tells Claude, Codex, Gemini, or another model to inspect the real Safehouse profile templates, ask about your home directory and toolchain, and generate a least-privilege `sandbox-exec` profile for your setup. The guide also tells the LLM to ask about global dotfiles, suggest a durable profile path like ~/.config/sandbox-exec.profile, offer a wrapper that grants the current working directory, and add shell shortcuts for your preferred agents.
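For orientation, the kind of file such a prompt generates is a macOS `sandbox-exec` policy. A minimal sketch — the paths and the exact rule set here are hypothetical illustrations, not Safehouse’s actual template:

```scheme
;; Hypothetical minimal profile: deny everything by default,
;; then grant read-only system access and read/write on one project.
(version 1)
(deny default)
(allow process-exec process-fork)
(allow file-read* (subpath "/usr/lib") (subpath "/System") (subpath "/bin"))
(allow file-read* file-write* (subpath "/Users/you/myproject"))
```

You would run a command under a profile like this with something along the lines of `sandbox-exec -f myprofile.sb claude`.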

...

Read the original on agent-safehouse.dev »

2 712 shares, 80 trendiness

Ireland shuts last coal plant, becomes 15th coal-free country in Europe

Ireland today (June 20) became the 15th coal-free country in Europe, having ended coal power generation at its 915 MW Moneypoint coal plant in County Clare. Initially commissioned in the mid-1980s by ESB, Moneypoint was intended to help Ireland offset the impact of the oil crises in the 1970s by providing a dependable source of energy.

But with Ireland now generating far more renewable energy, coal burning is no longer an urgent need. Data from energy think tank Ember shows Ireland generated 37% (11.4 TWh) of its electricity from wind in 2024. Solar is nowhere near wind levels of generation (0.97 TWh in 2024), but it has been continuously breaking generation records in recent months, and local stakeholders are confident this positive trend will continue.

Following the closure, the Moneypoint plant will continue to serve a limited backup role, burning heavy fuel oil under emergency instruction from Ireland’s transmission system operator EirGrid until 2029.

This strategy is in line with previous plans made by EirGrid and ESB to exit coal-fired generation by the end of 2025, which stipulated that Moneypoint would no longer be active in the wholesale electricity market.

“Ireland has quietly rewritten its energy story, replacing toxic coal with homegrown renewable power,” said Alexandru Mustață, campaigner on coal and gas at Europe’s Beyond Fossil Fuels.

“But this isn’t ‘job done’. The government’s priority now must be building a power system for a renewable future; one with the storage, flexibility, and grid infrastructure needed to run fully on clean, domestic renewable electricity,” Mustață warned.

Jerry Mac Evilly, Campaigns Director at Friends of the Earth Ireland, appealed to the government to ensure oil backup at Moneypoint is kept to an absolute minimum and ultimately decommissioned. He also appealed for the government to prevent further development of data centers, which he said are increasing Ireland’s reliance on fossil gas.

“We also can’t ignore that the government is targeting the installation of at least 2 GW of gas power plants with no strategy to reduce Ireland’s dangerous gas dependency,” he added.

On a broader level, Ireland’s step to close coal power generation at Moneypoint sets a precedent for further European countries’ coal exits to come, says Beyond Fossil Fuels. The group tracks European countries’ progress on their commitments to switching from fossil fuels to renewable energy. So far, 23 European countries have committed to coal phase-outs. Italy is expected to complete its mainland coal phase-out this summer with the upcoming closure of its last two big coal power plants, while mainland Spain is also expecting to declare itself coal-free this summer.

...

Read the original on www.pv-magazine.com »

3 595 shares, 25 trendiness

- YouTube

...

Read the original on www.youtube.com »

4 355 shares, 38 trendiness

Create Your Handwriting Font for Free

FontCrafter turns your handwriting into a real, installable font — entirely in your browser. No accounts, no uploads to servers, no cost.

Still have questions? Here’s our FAQ.

It’s easier than you think. Print, write, scan — done.

Your handwriting becomes an installable font (OTF, TTF, WOFF2, Base64)

Natural variation — your letters won’t look robotic or identical every time

Connected letter pairs (ff, th, st, etc.) that flow like real handwriting

Works in Word, Pages, Photoshop, websites — everywhere fonts are used

No account, no server, 100% private — everything happens in your browser

If you found this useful, I’d appreciate donations & patrons (to keep iterating)!

Download and print the template — US Letter or A4.

Print at 100% scale (no “fit to page”). Use white, unlined paper.

Fill in every box with a felt-tip pen. All 3 rows for each character.

Ballpoints are too faint; thick markers bleed. Keep strokes inside the boxes with breathing room from edges.

How to use the three rows: Row 1 is always uppercase. Row 2 can be a second version of your uppercase or lowercase. Row 3 can also be uppercase or lowercase.

Scan or photograph the sheet, then drag & drop that photo file below.

Lay the sheet flat on a table with even lighting — no shadows, no curl. A phone camera works great if the sheet is flat and well-lit.

Drop your completed (scanned) image below. JPG, PNG, or high-res photo — make sure the page is flat and evenly lit. The processing happens on your end. No servers involved. Nothing is saved or stored remotely.

Drop your filled-in scan here, or click to browse

Not happy with a character? Touch it up in any image editor, or use correction tape and re-scan.

Ensure All Four Crosshair Markers Are Visible and Continue →

Characters with green borders were detected cleanly. Click any character to deselect it — a replacement from another row will be used. Small imperfections are fine — they give your font personality.

Name your font and choose how your three rows should be used. Enable ligatures for natural-looking connected letter pairs.

What do you want to call this font?

What did you put in each row?

I wrote uppercase in all three rows

I wrote uppercase in Row 1, lowercase in Row 2, uppercase in Row 3

I wrote uppercase in Row 1, lowercase in Row 2, lowercase in Row 3

Row 2 will be used as your lowercase. Adjust how much to shrink it — set to 1.00 if you already wrote Row 2 smaller than Row 1.

Allow certain characters to dip below the baseline (e.g. g, j, p, q, y, or a slashed zero).

Separate with spaces. Add or remove characters as needed for your handwriting.

Controls how far descender tails drop below the line. Slide left for deeper, right for shallower.

Cleans up tiny ink specks that bled through from adjacent cells. Won’t affect dots on letters like i, j, or punctuation marks.

Evens out letters that were drawn at different sizes.

Adds 100+ derived glyphs from your handwriting. Uncheck if you only want your handwritten alphanumeric characters.

Auto-generates diacritics (accents, tildes, umlauts) from your base letters. Covers French, German, Spanish, Portuguese, Scandinavian, and more.

Ligatures are letter pairs that connect naturally in handwriting — like ff, fi, fl, th, and st. Auto-generate is recommended — it’s instant and produces natural-looking connections from your existing characters.

Kerning adjusts spacing between specific letter pairs — like AV, To, and WA — so characters with complementary shapes sit together naturally instead of having uniform gaps.

See how your font looks with sample text, or type anything you like below.

THE QUICK BROWN FOX JUMPS OVER A LAZY DOG.

sphinx of black quartz, judge my vow?

Both Fickle Dwarves Can Jinx My Pig Quiz!

• $11.38 + tax & a 5.69% tip = more than $20.74

• (I love Star Wars) [Yes] {Maybe} <OK>

• “That’s what I said! Really?”

• arcade.pirillo.com * chris.pirillo.com

• He scored 7/8 on the quiz — not bad~

• Order #4053: 2x @$16.99 each | Total: $33.98

• Is it _really_ 100^2 = 10,000‽

“Yes,” she said, “it’s a ‘go;’” then walked away.

Your font includes contextual alternates (calt) — consecutive characters will automatically cycle between your 3 handwriting variants. This works in apps with OpenType support but may not appear in this preview.

Click here to try it for yourself…

If you found this useful, I’d appreciate donations & patrons (to keep iterating)!

OTF for desktop apps, TTF for universal compatibility, WOFF2 for websites, Base64 for CSS embedding.
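For the Base64 option, the font is embedded directly in a stylesheet as a data URI rather than served as a separate file. A minimal sketch — the family name and the truncated Base64 payload are placeholders:

```css
/* Hypothetical @font-face rule using a Base64-embedded WOFF2 */
@font-face {
  font-family: "MyHandwriting";
  src: url("data:font/woff2;base64,d09GMgABA…") format("woff2");
  font-display: swap; /* show a fallback font while the data URI decodes */
}

body {
  font-family: "MyHandwriting", cursive;
}
```

The trade-off of embedding is no extra network request, at the cost of a larger stylesheet.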

I confirm that the handwriting used to generate this font is my own or I have explicit permission from the handwriting’s owner to create and use this font. I understand that I am solely responsible for how I use the generated font files, and I release FontCrafter, Chris Pirillo, LockerGnome, and arcade.pirillo.com from any liability related to the font’s creation, distribution, or use.

...

Read the original on arcade.pirillo.com »

5 279 shares, 11 trendiness

We Should Revisit Literate Programming in the Agent Era

Literate programming is the idea that code should be intermingled with prose such that an uninformed reader could read a code base as a narrative, and come away with an understanding of how it works and what it does.

Although I have long been intrigued by this idea, and have found uses for it in a couple of different cases, I have found that in practice literate programming turns into a chore of maintaining two parallel narratives: the code itself, and the prose. This has obviously limited its adoption.

Historically, literate programming is most commonly found in practice as Jupyter notebooks in the data science community, where explanations live alongside calculations and their results in a web browser.

Frequent readers of this blog will be aware that Emacs Org Mode supports polyglot literate programming through its org-babel package, allowing execution of arbitrary languages with results captured back into the document, but this has remained a niche pattern for nerds like me.

Even for someone as enthusiastic about this pattern as I am, it becomes cumbersome to use Org as the source of truth for larger software projects, as the source code essentially becomes a compiled output, and after every edit in the Org file, the code must be re-extracted and placed into its destination (“tangled”, in Org Mode parlance). Obviously this can be automated, but it’s easy to get into annoying situations where you or your agent has edited the real source and it gets overwritten on the next tangle.
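To make the mechanics concrete, here is a sketch of an org-babel source block (the file name is hypothetical): the :tangle header tells Org where the code is extracted to, and results of in-editor execution are captured below the block.

```org
#+begin_src python :tangle src/hello.py :results output
print("hello from the org file")
#+end_src

#+RESULTS:
: hello from the org file
```

Running `org-babel-tangle` (C-c C-v t) writes the block out to src/hello.py; C-c C-c executes it in place and inserts the #+RESULTS: block.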

That said, I have had enough success with using literate programming to keep track of personal configuration that I have not been able to fully give up on the idea, even before the advent of LLMs.

For example: before coding agents, I had been adapting a pattern of using Org Mode for manual testing and note-taking. Instead of working on the command line, I would write commands into my editor, edit them in place until each step was correct, and run them there, so that when I was done I would have a document explaining exactly the steps that were taken, without extra note-taking afterwards. Combining the act of creating the note and running the test gives you the notes for free when the test is completed.

This is even more exciting now that we have coding agents. Claude and Kimi and friends all have a great grasp of Org Mode syntax; it’s a forgiving markup language and they are quite good at those. All the documentation is available online and was probably in the training data, and while a big downside of Org Mode is just how much syntax there is, that’s no problem at all for a language model.

Now when I want to test a feature, I ask the clanker to write me a runbook in Org. Then I can review it — the prose explains the model’s reflection of the intent for each step, and the code blocks are interactively executable once I am done reviewing, either one at a time or the whole file like a script. The results will be stored in the document, under the code, like a Jupyter notebook.

I can edit the prose and ask the model to update the code, or edit the code and have the model reflect the meaning upon the text. Or ask the agent to change both simultaneously. The problem of maintaining the parallel systems disappears.

The agent is told to handle tangling, and the problem of extraction goes away. The agent can be instructed with an AGENTS.md file to treat the Org Mode file as the source of truth, to always explain in prose what is going on, and to tangle before execution. The agent is very good at all of these things, and it never gets tired of re-explaining something in prose after a tweak to the code.

The fundamental extra labor of literate programming, which I believe is why it is not widely practiced, is eliminated by the agent, and it utilizes the capabilities the large language model is best at: translation and summarization.

As a benefit, the code base can now be exported into many formats for comfortable reading. This is especially important if the primary role of engineers is shifting from writing to reading.

I don’t have data to support this, but I also suspect that literate programming will improve the quality of generated code, because the prose explaining the intent of each code block will appear in context alongside the code itself.

I have not personally had the opportunity to try this pattern yet on a larger, more serious codebase. So far, I have only been using this workflow for testing and for documenting manual processes, but I am thrilled by its application there.

I also recognize that the Org format is a limiting factor, due to its tight integration with Emacs. However, I have long believed that Org should escape Emacs. I would promote something like Markdown instead, but Markdown lacks the ability to include metadata. As usual in my posts about Emacs, it’s not Emacs’s specific implementation of the idea that excites me, nor, in this case, Org’s implementation of literate programming.

It is the idea itself that is exciting to me, not the tool.

With agents, does it become practical to have large codebases that can be read like a narrative, whose prose is kept in sync with changes to the code by tireless machines?

...

Read the original on silly.business »

6 251 shares, 12 trendiness

Dieu-de-l-elec/AngstromIO-devboard: AngstromIO, one of the smallest devboards out there, barely longer than a USB C connector, based on the Attiny1616 MCU. And a dual CH340 board for programming and debugging, and another devboard, based on the CH32V003

AngstromIO is one of the smallest devboards out there, barely longer than a USB-C connector, based on the Attiny1616 MCU. 2 GPIOs as well as I2C lines are broken out. I made a dual CH340 programming board too, both for UPDI programming and debugging (one-way Serial Communication). I also designed a breadboard-friendly experimentation board for the CH32V003, with a 4 by 5 charlieplexed LED matrix.

While the AngstromIO is a tiny yet powerful devboard that could be embedded in any space-constrained project, the CH32 devboard is more an experimentation board, for me to learn how to program this awesome chip in MounRiver Studio and how to drive a charlieplexed matrix. The Programmer is an all-in-one module that makes debugging with the Serial monitor while programming easy: one board for both.

* One of the smallest devboards: 8.9 mm by 9 mm, USB-C included

* Attiny1616 MCU, 16 KB flash, low power, Arduino compatible (for basic libraries at least)

* Pins broken out: SCL, SDA, PB2 (TX), PA3, +5V, GND, and UPDI for programming

* Dual CH340E setup:

One for programming (set as SerialUPDI programmer),

One for debugging (Serial Communication, USB to UART)


* 2 USB-C ports for data transfer; only the USB-C for Serial provides 5V to the board

* USB-C for power; the CH32 runs at 3.3V, but PC6 and PC5 are 5V tolerant

Arduino compatible; some libraries may not work, but some have been arranged/made by SpenceKonde, like Wire (I2C) and tinyNeoPixel (for more information, see: https://github.com/SpenceKonde/megaTinyCore/tree/master/megaavr/libraries)

PCB designed in EasyEDA Pro, 2 layers, 1.0 mm thick, purple soldermask. All 3 designs panelized into one PCB.

🚧 to be con­tin­ued…

...

Read the original on github.com »

7 224 shares, 10 trendiness

Why a 40-year Wall Street veteran says that most of the US economy is in a recession

The US economy has proven more resilient than many feared, but one Wall Street veteran says that a recession is only being held back by tech.

Jim Paulsen, a markets strategist with more than 40 years of experience on Wall Street, argued that most of the economy is already in a recession. Tech spending has an outsized influence on economic growth, bolstering the data while the “old” economy struggles.

“Technology may be the tail wagging the dog, but the rest is a recession by any other name,” Paulsen wrote in a Thursday Substack post.

Real private GDP rose 2.3% in 2025, Paulsen said, but nearly all of this economic growth is tied to what he calls “new era” growth.

“Excluding new era investment, the other 89% of real private spending rose by only 1% with no job creation,” the strategist wrote.


Real GDP is generally considered to be a good measure of economic growth, but in recent years, the metric has seen several distortions from factors like government spending, tax changes, and tariff-fueled supply and demand volatility. Focusing on private real GDP strips out some of these distortions.

“Do we really need to continue focusing mostly on inflation when 89% of the private economy is in a recession and the 11% which is booming — new era pursuits — are by their very nature ‘disinflationary’?”

The strategist focused on business spending on information processing equipment and intellectual property, using it as a measure of “new era” spending. This would include big tech’s mega spending on AI.

Paulsen found that the new era subsection has grown nearly 2.5 times as fast as traditional private-sector spending. This gap has only widened in more recent years.

New-era private spending grew 14% in 2025, compared to 1% growth in private spending excluding the tech-focused subset.
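Those two growth rates are consistent with the headline 2.3% figure: weighting them by the article’s 89%/11% split of private spending roughly reproduces the aggregate. A quick sanity check — the weights are the article’s, the arithmetic is mine:

```shell
# Weighted average: 89% of private spending growing ~1%,
# the 11% "new era" slice growing ~14%.
awk 'BEGIN { printf "%.2f%%\n", 0.89 * 1 + 0.11 * 14 }'
# prints 2.43%, close to the reported 2.3% aggregate
```

The small gap to the reported 2.3% is expected, since the 1%, 14%, and 89% figures are themselves rounded.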

“Overall, new era pursuits have grown rapidly and their influence on the overall U.S. economy has become outsized relative to old era activities,” Paulsen wrote.

Paulsen compared the widening gap to the stock market narrative of the Magnificent Seven vs. the other 493 stocks in the S&P 500.

The stock market lately has seen gains broaden, with AI disruption fears and the war in Iran supporting a broader rotation out of former tech leaders.

The dynamic between the new era economy and everything else shows a deeper bifurcation.

“When the President, the Federal Reserve Chairman, economists, financial pundits, and journalists imply overall real GDP growth currently remains okay, it misses the fact that, similar to the stock market, although the aggregate growth number appears satisfactory, the great bulk of the economy — 89% — is NOT doing okay!”

This dynamic could explain the mixed sentiment among economists, as well as the discrepancy between the American public’s negative view of the economy and what economic data signals.

...

Read the original on www.businessinsider.com »

8 223 shares, 13 trendiness

The Death of Social Media is the Renaissance of RSS

Social media was once hailed as the great democratizer of information. It promised a world where anyone could share their voice, connect with like-minded communities, and discover authentic human perspectives. But in 2025, the landscape looks very different. Social platforms are drowning in a flood of Generative AI (GenAI) content — articles, images, videos, and even entire conversations churned out at industrial scale. The result is a noisy ocean of sameness, where genuine human voices struggle to be heard.

For years, one could track the slow decline of social platforms: from algorithmic manipulation and advertising overload, to bot armies, political polarization, and the prioritization of engagement over truth. Yet the explosion of AI-generated content has dramatically accelerated this downfall. Instead of scrolling for connection, users now find themselves wading through endless streams of repetitive, shallow, or outright meaningless material. This is not just a temporary nuisance. It signals the death of social media as we know it. And in this environment, an old technology — long thought to be obsolete — emerges as a savior: RSS, or Really Simple Syndication. RSS offers a way out of the algorithmic maze, a return to authenticity, and a chance to reclaim control over the information we consume.

This article explores why AI-generated content is destroying social media, how RSS provides a viable and elegant solution, and why now might be the perfect moment to revive this forgotten technology.

Generative AI has lowered the barrier to content creation to near zero. What once required hours of writing, designing, or editing can now be produced in seconds. Tools like ChatGPT, Midjourney, and Runway generate polished text, stunning visuals, and slick videos instantly. This is revolutionary in some ways, but devastating in others. The problem lies in scale. When anyone — or any bot — can publish unlimited content at no cost, the supply of information skyrockets, but human attention remains finite. The inevitable result is oversaturation, an endless flood of low-value material that drowns out everything else.

AI-generated articles and posts often sound competent, but they rarely sound alive. They mimic human style but lack human depth. After reading a dozen AI-written articles, a pattern emerges: similar phrases, repetitive structures, and predictable conclusions. The internet is filling up with machine-generated déjà vu. For readers, this creates fatigue from encountering the same types of content over and over, along with an erosion of trust as it becomes difficult to distinguish genuine human thought from automated output.

Compounding this problem is the way social media algorithms function. They don’t care about authenticity; they care about engagement. AI-generated content, designed for clicks and shares, fits neatly into their goals. As more content is produced, algorithms amplify the pieces that trigger emotions or quick interactions, even if they are shallow, manipulative, or misleading. AI makes it cheaper to produce clickbait, and social media ensures it spreads faster than ever. For creators, this is crushing. Human-made articles, videos, or posts now compete against endless waves of machine-made content. Audiences can barely tell the difference, and many no longer care. The result is the suffocation of authentic voices. For users, the platforms feel less personal, less inspiring, and less trustworthy. Social media promised community, but what we now get is content sludge.

The decay of social media did not begin with AI. For over a decade, cracks have been visible. Platforms increasingly prioritized advertisers over users, forcing ad overload onto feeds. Algorithms replaced user choice with automated curation. Fake news and bots undermined trust. Excessive use was linked to anxiety, depression, and polarization. Yet now, the flood of AI content accelerates this process. If every scroll reveals machine-written posts, AI art clones, or synthetic influencers, the value of the platform erodes further. People log in less, engagement drops, and the feedback loop worsens.

At the heart of this decline lies the collapse of authenticity. The core appeal of social media was its human connection, the spontaneity of real people sharing real moments. But with AI content dominating feeds, that connection is diluted. Imagine logging into Instagram and seeing 90 percent AI-generated fashion shoots, travel photos, or influencer avatars. Or scrolling Twitter/X and finding endless AI-written hot takes. The magic of human unpredictability vanishes. Platforms are also trapped by their reliance on advertising. Ads depend on user trust and engagement, but as feeds become indistinguishable from AI sludge, users disengage and advertisers see diminishing returns. The result is a death spiral: platforms push harder for monetization while driving users further away. This is the trajectory we are on, and it points toward the death of social media as we knew it.

RSS, or Really Simple Syndication, is an old technology from the early web. At its core, it is a feed format that allows websites to publish updates which users can subscribe to. Instead of visiting each site individually, you can aggregate updates into a single reader app. Think of it as an inbox for the internet. While RSS was once popular in the blogging era of the 2000s, it faded as social media took over. But now, in the age of AI content floods, it might be the perfect antidote.
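Concretely, a feed is just an XML file the site serves at a fixed URL. A minimal, hypothetical RSS 2.0 feed looks something like this (the domain and items are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>A human-written blog</description>
    <item>
      <title>My latest post</title>
      <link>https://example.com/posts/latest</link>
      <pubDate>Mon, 01 Sep 2025 08:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```

A reader app simply polls this file and shows new item entries in order — no algorithm sits in between.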

The benefits of RSS are striking. With RSS, you subscribe directly to websites, blogs, or news outlets, meaning there is no middleman algorithm deciding what you see. By curating your own sources, you can focus on human-written blogs, company press releases, or trusted outlets. Most feeds deliver clean content without injected ads or surveillance. Updates appear instantly in your reader, often faster than social media platforms surface them. Above all, RSS returns control to the user. You decide which voices matter, with no algorithm filtering or manipulating your feed.

Unlike social feeds, where AI-generated spam floods in, RSS allows you to choose human sources deliberately. If you want updates from your favorite indie journalist, you simply add their blog feed. If you want company news straight from the source, you subscribe to their press release feed. The result is a curated, personalized feed of authentic voices. In other words, RSS restores the signal-to-noise ratio.

Despite its age, RSS never disappeared. Many apps and services still support it — some paid, some free, some open source. During my own exploration, I tested multiple RSS reader apps. Most worked, but one stood out: Feeder.

Feeder is free, lightweight, and refreshingly simple. Unlike bloated apps or paid services, it focuses on what matters. There are no distracting ads, only content. It works across phone and desktop, providing seamless use. Transparency is another key advantage — it is open source on GitHub, meaning you can audit what it does. Notifications, search, folders, and syncing all work smoothly. It doesn’t try to reinvent RSS; it just makes it work. In a time where transparency and trust matter, Feeder’s open-source nature is a massive plus.

Of course, Feeder is not the only option. Other strong contenders include Feedly, Inoreader, and NetNewsWire. But Feeder captures the spirit of RSS: independence, simplicity, and user control.

AI-driven social media thrives on removing choice. It decides what you see, when you see it, and how you feel about it. RSS flips the script. You decide your sources. You build your own information ecosystem. Instead of waiting for an article to appear on Twitter/X or Facebook — filtered by algorithms, distorted by ads — you get it straight from the source, with no delays, no manipulation, and no man in the middle.

By subscribing directly to human creators — bloggers, journalists, thinkers — you bypass the noise and support authenticity. These voices are harder to find on AI-clogged platforms, but with RSS, they come straight to you. With RSS, there is no hidden data mining, no invisible AI influence. Just a feed of your chosen content, delivered locally. In a digital world plagued by hidden algorithms, this transparency is refreshing.

Social media will not vanish overnight, but its role is changing. For many, it will become background noise, a chaotic carnival of AI bots shouting into the void. Platforms may survive as entertainment machines, but their credibility as sources of authentic human insight is gone. The early web was decentralized, built on blogs, forums, and personal sites. Social media centralized everything, but at great cost. RSS offers a way back: decentralized, user-driven, and authentic. Instead of one feed controlled by one corporation, each person builds their own.

In the age of infinite AI content, curation becomes the most valuable act. RSS empowers individuals to curate their own feeds. Instead of passively scrolling, we actively select. Instead of drowning, we choose.

Social media as we knew it is dying — not because humans lost interest in sharing, but because machines overwhelmed the system. Generative AI has flooded platforms with repetitive, shallow, and manipulative content. Algorithms amplify the noise. Authenticity evaporates.

But out of this collapse emerges an opportunity: the rebirth of RSS. This old, simple technology empowers us to reclaim control, cut out middlemen, and reconnect with genuine human voices. Tools like Feeder make it accessible again, providing clean, ad-free, transparent feeds of the content that matters most.

The future of information consumption may not be algorithm-driven feeds but self-curated, human-centered inboxes of authenticity. The death of social media could be the start of something better: a renaissance of the open web. RSS isn’t just nostalgia. It’s a survival tool in the age of AI content floods. And the sooner we embrace it, the sooner we can rediscover the joy of consuming information that feels truly human.

By the way, here is my own RSS blog feed!

...

Read the original on www.smartlab.at »

9 215 shares, 8 trendiness

Every Single Board Computer I Tested in 2025

2025 was a pretty busy year for sin­gle board com­put­ers. I had 15 boards re­leased in 2025 come through the bench from 8 dif­fer­ent man­u­fac­tur­ers, span­ning SoCs from Rockchip, Broadcom, Qualcomm, MediaTek, Allwinner, StarFive, CIX, and Texas Instruments. Prices have ranged from $42 all the way up to $590, and the va­ri­ety on of­fer has been gen­uinely im­pres­sive. We’ve had RISC-V boards, Qualcomm en­ter­ing the SBC space (in a big way), a new-ish SoC ven­dor in CIX turn­ing heads, an Arduino SBC of all things, and Raspberry Pi it­er­at­ing on their key­board form fac­tor.

Also, my friend Meco of sbcwiki has a great series called “State of Embedded” that shares some insight into the scene, too, and you can read his Q4 roundup on the matter if you want a bit of a dive into things in general, rather than specific SBCs.

All of the boards in this ar­ti­cle have been bench­marked and are avail­able to com­pare on sbc.com­pare, so if you want to dig into the raw num­bers your­self, head over there. I’ll be link­ing to each board’s page through­out this ar­ti­cle so you can see the full data for any­thing that catches your eye.

Before we get into it though, a quick note on pric­ing. The prices listed through­out this ar­ti­cle are what the boards were re­tail­ing at when I tested them. As many of you will be aware, LPDDR4 and LPDDR5 mem­ory did­n’t es­cape RAMageddon and costs have been climb­ing since late 2025, with man­u­fac­tur­ers piv­ot­ing pro­duc­tion to­wards more prof­itable (and AI-hungry) mem­ory types. This has al­ready hit Raspberry Pi pric­ing and it’s af­fect­ing other boards too. Some of the prices you see here may have gone up, or the boards may not be avail­able at all right now. I’ll try to note where I’m aware of changes, but do check cur­rent pric­ing be­fore pur­chas­ing and com­par­ing.

Six boards came in un­der $50 this year, and they’re a sur­pris­ingly var­ied bunch. You’ve got RISC-V, ARM, even a decades-old Texas Instruments SoC mak­ing an ap­pear­ance. If you’re look­ing to tin­ker with­out a sig­nif­i­cant out­lay, 2025 has given you plenty to choose from.

So to kick things off, I’ll be hon­est, the BeagleBone Green Eco is a bit of an odd­ity on this list. It’s run­ning a TI Sitara AM3358, a sin­gle-core Cortex-A8, with 512MB of DDR3L. In 2025. BeagleBoard have al­ways had a strong fol­low­ing in the in­dus­trial and ed­u­ca­tion spaces though, and this is clearly where the Green Eco is aimed. It’s not here to com­pete on raw per­for­mance and it knows it. If you need some­thing re­li­able, well-doc­u­mented, and with a long his­tory of com­mu­nity sup­port for em­bed­ded ap­pli­ca­tions, BeagleBoard have that cov­ered. For gen­eral-pur­pose SBC tin­ker­ing though, you’re bet­ter off look­ing else­where in this tier.

StarFive’s VisionFive 2 Lite is a trimmed-down ver­sion of the VisionFive 2 that I re­viewed a while back, run­ning the JH7110S vari­ant of the SoC. With Geekbench 6 scores of 59 sin­gle-core and 180 multi-core, the num­bers aren’t go­ing to set the world alight, but sadly, that’s the state of RISC-V in 2025 (at least at these price points). If you’re in­ter­ested in the ar­chi­tec­ture and want a cheap way in, $43 for 4GB of RAM is­n’t bad. Just don’t go in ex­pect­ing 2025 ARM-level per­for­mance.

This one caught me off guard. Arduino, the company most of us associate with microcontrollers and blinking LEDs, have released an SBC. The UNO Q runs a Qualcomm QRB2210 with 2GB of LPDDR4X, and whilst the Geekbench scores of 190 SC / 527 MC are modest, I think the story here is less about the performance and more about what Arduino are trying to do. It feels like they're testing the waters in the SBC market, and having Qualcomm silicon in an Arduino product is something we should have seen coming, given Qualcomm's acquisition of them. Whether there's a meaningful audience for it beyond curiosity remains to be seen, however, as it was a rather confusing product; my X thread details it fairly well (feel free to follow whilst you're there, too!)

Orange Pi have thrown two RISC-V boards into the ring this year. The Orange Pi RV ($50) runs the StarFive JH7110, the same chip as the VisionFive 2, pulling in Geekbench scores of 74 SC / 220 MC. The Orange Pi RV2 ($46) is the more in­ter­est­ing of the pair, us­ing the less com­mon Ky X1 SoC (which seems to just be a clone of the Spacemit K1?) and man­ag­ing 118 SC / 528 MC in Geekbench. Both come with 4GB of LPDDR4/4X.

It's good to see Orange Pi investing in RISC-V alongside their ARM lineup to keep things fresh and fun. The RV2's Ky X1 results are a step in the right direction, though we're still a long way from RISC-V boards being competitive with similarly priced ARM options on raw compute. The software ecosystems for both chips are still maturing too, so if you're picking one of these up, you'll want to be comfortable with a bit of tinkering to get things working the way you want, but the software wasn't completely terrible, which is nice.

The Radxa Cubie A7A is the stand­out of the bud­get tier. An Allwinner A733 with 6GB of LPDDR5 for $45 is a strong propo­si­tion, and the Geekbench scores of 641 SC / 1,545 MC put it com­fort­ably ahead of every­thing else un­der $50. For con­text, that multi-core score is­n’t far off some of the Rockchip RK3576 boards in the next tier up that cost $60+. If you’re af­ter the best bang for your buck in 2025, the Cubie A7A makes a com­pelling case for it­self.

The $50 to $100 bracket is where most peo­ple tend to shop, and this year it’s been dom­i­nated by a few in­ter­est­ing trends. Rockchip’s RK3576 has turned up on mul­ti­ple boards, Qualcomm has made a very strong en­trance, and there’s a net­work­ing-fo­cused op­tion that does­n’t care about your Geekbench score.

The Radxa ROCK 4D is one of three RK3576-based boards I’ve tested this year (the oth­ers be­ing the ArmSoM CM5 and the DShanPi A1). With 8GB of LPDDR5 at $60, it’s the cheap­est way into RK3576 ter­ri­tory. Geekbench re­sults of 319 SC / 1,332 MC are right in line with what we’ve seen from this SoC across all three boards. You can com­pare all three RK3576 boards on sbc.com­pare if you’re cu­ri­ous how they stack up against each other.

At this price for 8GB of RAM, it’s solid value. The usual Rockchip soft­ware story ap­plies though, so do fac­tor that in. Armbian have a great al­ter­na­tive to the Radxa OS builds, and that’s what I’ve used in test­ing, so maybe it’s less of an is­sue on that front.

If I had to pick the most in­ter­est­ing board of the year, the Radxa Dragon Q6A would be right up there. A Qualcomm QCS6490 with 6GB of LPDDR5 for $70, and Geekbench scores of 1,180 SC / 3,215 MC. Read those num­bers again. That sin­gle-core score is in Raspberry Pi 5 ter­ri­tory, at $70, from a Qualcomm SoC, on a Radxa board. If some­one had told me that a year ago, I’d have been ex­tremely scep­ti­cal. It even dips its toes into Windows on ARM and can of­fer re­spectable gam­ing per­for­mance!

Qualcomm entering the SBC market is one of the bigger stories of 2025 in my view. The performance-per-dollar here is genuinely impressive. The big question mark is the software ecosystem: Qualcomm's Linux support has historically been a little hit and miss, though they seem to be really trying with these SoCs, and mainline support is there or thereabouts. It also has great Armbian support. Regardless, the hardware is there, and I'm here for it.

The ArmSoM CM5 is an­other RK3576 board, this time in a com­pute mod­ule form fac­tor with 8GB of LPDDR5 at $95. Geekbench scores of 326 SC / 1,366 MC are es­sen­tially iden­ti­cal to the ROCK 4D, which makes sense given they share the same SoC. The price pre­mium over the ROCK 4D comes down to the com­pute mod­ule form fac­tor, which serves a dif­fer­ent pur­pose. If you need some­thing that slots into a car­rier board for a spe­cific de­ploy­ment, the CM5 fills that role. If you’re af­ter a gen­eral-pur­pose dev board and don’t need the mod­u­lar­ity, the ROCK 4D at $60 is likely the bet­ter buy. I’ve re­viewed ArmSoM’s AIM7 pre­vi­ously and came away im­pressed with their doc­u­men­ta­tion and ap­proach, so they’re a brand worth keep­ing an eye on.

The Banana Pi R4 is a dif­fer­ent beast en­tirely. Running a MediaTek MT7988A with 8GB of LPDDR4 at $99, its Geekbench scores of 305 SC / 889 MC look un­der­whelm­ing on pa­per. But judg­ing this board on CPU bench­marks alone would be miss­ing the point (and to be hon­est, that stands true for most of the boards in this piece.) The MT7988A is a net­work­ing SoC, and the R4 ex­ists for OpenWrt, router, and net­work ap­pli­ance use cases. If you’re look­ing for some­thing to re­place age­ing net­work hard­ware or build a cus­tom router, this is the board to look at. If you want a gen­eral-pur­pose SBC, it’s not.

I'm using the R4 in the office for benchmarking: with its 10Gbit SFP+ ports connected to my big ol' UniFi Pro 48 PoE switch, it acts as a perfect iPerf3 endpoint for testing!

Five boards landed above the $100 mark this year, rang­ing from $199 to $590. This is where we see some se­ri­ous sil­i­con, a newer SoC ven­dor mak­ing waves, and one board that’s in a cat­e­gory of its own.

I've already reviewed the Pi 500+ in full, so I won't rehash everything here. The short version: 16GB of LPDDR4X, NVMe storage, mechanical keyboard switches, and it's what the original Pi 500 should have been from the start. Geekbench scores of 892 SC / 2,121 MC are in line with the BCM2712's known performance. At $200 it's not cheap, but when you break down what you're getting (the NVMe, the doubled RAM, the mechanical keyboard upgrade), the value is there. It's the most refined Raspberry Pi product to date in my opinion, though 2025 wasn't a revolutionary year for Pi. More of an iterative one.

The ArmSoM AIM7 packs an RK3588 with 8GB of LPDDR4X into a Jetson Nano-compatible form fac­tor. Geekbench scores of 828 SC / 3,186 MC. I cov­ered this one in depth in my full AIM7 re­view and the sum­mary has­n’t changed: if you specif­i­cally need a Jetson Nano re­place­ment with mod­ern per­for­mance, the AIM7 does ex­actly what it says on the tin. If you don’t need that form fac­tor com­pat­i­bil­ity, there are more flex­i­ble RK3588 op­tions out there for less money. As men­tioned ear­lier, ArmSoM con­tinue to im­press on the doc­u­men­ta­tion and over­all qual­ity front though.

I’m cov­er­ing these two to­gether be­cause they share the same story: the CIX P1. This is a brand new SoC from a ven­dor that was­n’t on any­one’s radar a year ago, and both the Radxa Orion O6N ($199) and the Orange Pi 6 Plus ($260) pair it with 32GB of LPDDR5.

The num­bers speak for them­selves. The O6N pulls in 1,327 SC / 6,954 MC on Geekbench, whilst the 6 Plus hits 1,328 SC / 7,130 MC. Those multi-core scores are in a com­pletely dif­fer­ent league to any­thing else on this list. You can com­pare the two CIX P1 boards di­rectly on sbc.com­pare and the re­sults are near-iden­ti­cal, as you’d ex­pect from the same SoC.

So is the CIX P1 the most ex­cit­ing new SoC to ar­rive in the SBC space this year? Based on raw per­for­mance, I’d say yes. Having two man­u­fac­tur­ers al­ready ship­ping boards with it sug­gests there’s con­fi­dence in the plat­form. The $61 price gap be­tween the O6N and the 6 Plus is worth in­ves­ti­gat­ing if you’re con­sid­er­ing one, as the sil­i­con is the same and the per­for­mance is ef­fec­tively iden­ti­cal.

Software maturity is the usual question mark with a new SoC vendor, but the early signs are encouraging. There are power-consumption quirks, along with a few other annoyances, but in terms of raw performance they take the top spots. Other SBCs (if I can call them that, I guess?) with the CIX P1 include the Radxa Orion O6 (the mITX brother of the O6N) and the Minisforum MS-R1, though the former is a bit harder to get right now, and the latter came at a pretty unfortunate time. It's a polished unit; it's just sadly quite expensive compared to the alternatives.

Finally, the Fogwise Airbox Q900 is the out­lier on this list. A Qualcomm IQ-9075 with 36GB of LPDDR5 at $590 is­n’t re­ally a tra­di­tional SBC pur­chase. Geekbench scores of 1,111 SC / 5,638 MC are strong but ac­tu­ally fall be­hind the CIX P1 boards that cost a third of the price. The Airbox’s value propo­si­tion is in its Qualcomm AI stack and the spe­cific edge de­ploy­ment use cases it’s de­signed for, not gen­eral-pur­pose com­pute. If you know you need it, you know you need it. For the rest of us, it’s in­ter­est­ing to see where the high end of the SBC mar­ket is head­ing, but $590 is a lot of money for a sin­gle board com­puter.

I can’t write a 2025 SBC roundup with­out ad­dress­ing the mem­ory sit­u­a­tion. LPDDR4 and LPDDR5 prices have been climb­ing since the sec­ond half of the year, dri­ven by man­u­fac­tur­ers shift­ing pro­duc­tion to­wards higher-mar­gin mem­ory types (the AI boom has a lot to an­swer for here). We’ve al­ready seen Raspberry Pi raise prices across their lineup, and they’re un­likely to be the last.

For the boards in this article, the prices I've listed are from when I tested them. Some may have gone up since, others may be temporarily out of stock. If you're looking to pick something up, do check current pricing on the relevant retailer or AliExpress listing before ordering. The situation is supposedly temporary (come on bubble.. pop..), but whether that means months or years is anyone's guess at this point.

The sil­ver lin­ing, if there is one, is that it makes per­for­mance-per-dol­lar com­par­isons all the more rel­e­vant. If you’re go­ing to pay more for a board, you want to make sure you’re get­ting the most out of that money. That’s ex­actly the kind of com­par­i­son you can run on sbc.com­pare, so do make use of it!
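As a quick illustration of that kind of comparison, here's a throwaway awk one-liner over a few of the figures quoted in this article (Geekbench 6 multi-core score divided by the price at time of testing — purely illustrative, and remember those prices may have moved since):

```shell
# Multi-core Geekbench 6 points per dollar, using scores and prices
# quoted in this roundup. Illustrative only; check current pricing.
awk -F, '{ printf "%-12s %6.1f MC points per $\n", $1, $2/$3 }' <<'EOF'
Cubie A7A,1545,45
ROCK 4D,1332,60
Dragon Q6A,3215,70
Orion O6N,6954,199
EOF
```

Even this crude ratio shows why the Cubie A7A and Dragon Q6A stand out in their tiers — though sbc.compare lets you do this properly across the full benchmark set.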

Looking back at 15 boards across a year, a few themes stand out.

Two boards, monster multi-core numbers, and a new vendor that brute-forced their way to the top. If the software ecosystem matures alongside the hardware, CIX could become a serious player. The fact that Radxa, Minisforum, and Orange Pi are all already shipping boards with it says something about the confidence in the platform.

The Dragon Q6A's single-core performance at $70 is a shot across the bow for anyone who thought ARM SBCs were a two-horse race between Broadcom and Rockchip. The Arduino UNO Q and Fogwise Airbox Q900 round out a spread from budget to premium. Software support will be the deciding factor, as it always is, but the silicon is competitive.

We even have Indian manufacturer Vicharak teasing a modular SBC that utilises the same Qualcomm SoC as the Dragon Q6A from Radxa. I've been speaking with one of their hardware team about it, and it seems to be getting close to reality; I'm pretty excited about that!

Four boards this year (VisionFive 2 Lite, Orange Pi RV, Orange Pi RV2, plus the Ky X1 show­ing up in the RV2) and the per­for­mance gap to ARM re­mains sig­nif­i­cant. It’s get­ting closer, and the fact that we’re see­ing this many RISC-V boards in a sin­gle year is progress in it­self, but we’re not at the point where I’d rec­om­mend one over an ARM board for any­thing other than ar­chi­tec­ture-spe­cific in­ter­est. I’m not a hater, though, don’t get me wrong. I love the fast progress and the com­pe­ti­tion it’s slowly start­ing to of­fer. 2026 has al­ready shown some promis­ing new re­leases (with the Spacemit K3-based boards like the Milk-V Jupiter) so keep your eyes peeled.

Three boards this year (ROCK 4D, ArmSoM CM5, DShanPi A1), all per­form­ing within a few per­cent of each other. The soft­ware story is the same as it’s been for Rockchip, and that’s promis­ing hard­ware that needs the ker­nel and dri­ver sup­port to catch up. That said, Collabora’s re­cent progress on up­stream video de­coder sup­port for the RK3576 and RK3588 is en­cour­ag­ing, and it only builds on the great work they’ve al­ready done in the area.

From $42 to $590, the definition of “single board computer” is stretching. That's not a bad thing. Having options at every price point means there's something for every use case, whether you're building a network appliance for $99 or deploying edge AI at $590.

The 500+ is a great prod­uct and my pick if you want some­thing that just works, but 2025 was­n’t a Pi-defined year the way 2023 and 2024 were. The com­pe­ti­tion has stepped up, and Raspberry Pi are likely bid­ing their time to hit a sweet spot for in­ter­est (and likely costs!) be­fore re­leas­ing the Raspberry Pi 6. Will that be 2026? Unless RAM pric­ing comes crash­ing down, I’d be sur­prised, but maybe it’ll launch with 1/2GB RAM op­tions first and spread out over time. I’d hope not, it would ham­string the SoC I imag­ine, but let’s see.

If you want to ex­plore all of the boards in this ar­ti­cle (and the 80+ oth­ers I’ve bench­marked), head over to sbc.com­pare and have a dig around. If there’s a spe­cific com­par­i­son you’d like to see, or a board you think I should get on the bench for 2026, let me know in the com­ments, or shout at your favourite SBC ven­dor to reach out ;-)

I hope 2026 is every­thing you want it to be!

...

Read the original on bret.dk »

10 183 shares, 28 trendiness

Media Processing at Scale

FFmpeg is truly a multi-tool for me­dia pro­cess­ing. As an in­dus­try-stan­dard tool it sup­ports a wide va­ri­ety of au­dio and video codecs and con­tainer for­mats. It can also or­ches­trate com­plex chains of fil­ters for me­dia edit­ing and ma­nip­u­la­tion. For the peo­ple who use our apps, FFmpeg plays an im­por­tant role in en­abling new video ex­pe­ri­ences and im­prov­ing the re­li­a­bil­ity of ex­ist­ing ones.

Meta ex­e­cutes ffm­peg (the main CLI ap­pli­ca­tion) and ff­probe (a util­ity for ob­tain­ing me­dia file prop­er­ties) bi­na­ries tens of bil­lions of times a day, in­tro­duc­ing unique chal­lenges when deal­ing with me­dia files. FFmpeg can eas­ily per­form transcod­ing and edit­ing on in­di­vid­ual files, but our work­flows have ad­di­tional re­quire­ments to meet our needs. For many years we had to rely on our own in­ter­nally de­vel­oped fork of FFmpeg to pro­vide fea­tures that have only re­cently been added to FFmpeg, such as threaded multi-lane en­cod­ing and real-time qual­ity met­ric com­pu­ta­tion.

Over time, our in­ter­nal fork came to di­verge sig­nif­i­cantly from the up­stream ver­sion of FFmpeg. At the same time, new ver­sions of FFmpeg brought sup­port for new codecs and file for­mats, and re­li­a­bil­ity im­prove­ments, all of which al­lowed us to in­gest more di­verse video con­tent from users with­out dis­rup­tions. This ne­ces­si­tated that we sup­port both re­cent open-source ver­sions of FFmpeg along­side our in­ter­nal fork. Not only did this cre­ate a grad­u­ally di­ver­gent fea­ture set, it also cre­ated chal­lenges around safely re­bas­ing our in­ter­nal changes to avoid re­gres­sions.

As our internal fork became increasingly outdated, we collaborated with FFmpeg developers, FFlabs, and VideoLAN to develop features in FFmpeg that allowed us to fully deprecate our internal fork and rely exclusively on the upstream version for our use cases. Using upstreamed patches and refactorings, we've been able to close two important gaps that we had previously relied on our internal fork to fill: threaded, multi-lane transcoding and real-time quality metrics.

When a user up­loads a video through one of our apps, we gen­er­ate a set of en­cod­ings to sup­port Dynamic Adaptive Streaming over HTTP (DASH) play­back. DASH play­back al­lows the ap­p’s video player to dy­nam­i­cally choose an en­cod­ing based on sig­nals such as net­work con­di­tions. These en­cod­ings can dif­fer in res­o­lu­tion, codec, fram­er­ate, and vi­sual qual­ity level but they are cre­ated from the same source en­cod­ing, and the player can seam­lessly switch be­tween them in real time.

In a very simple system, separate FFmpeg command lines can generate the encodings for each lane one by one, in serial. This could be optimized by running each command in parallel, but that quickly becomes inefficient due to the duplicate work done by each process.

To work around this, mul­ti­ple out­puts could be gen­er­ated within a sin­gle FFmpeg com­mand line, de­cod­ing the frames of a video once and send­ing them to each out­put’s en­coder in­stance. This elim­i­nates a lot of over­head by dedu­pli­cat­ing the video de­cod­ing and process startup time over­head in­curred by each com­mand line. Given that we process over 1 bil­lion video up­loads daily, each re­quir­ing mul­ti­ple FFmpeg ex­e­cu­tions, re­duc­tions in per-process com­pute us­age yield sig­nif­i­cant ef­fi­ciency gains.
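The single-process, multi-output pattern described above can be sketched with stock FFmpeg. This is a generic illustration, not Meta's actual pipeline: the input is a synthetic lavfi test clip so the example is self-contained, and the mpeg4 encoder is used only because it ships in every FFmpeg build.

```shell
# Make a short synthetic source clip (no external media needed)
ffmpeg -y -loglevel error -f lavfi \
  -i testsrc=duration=1:size=640x360:rate=24 \
  -c:v mpeg4 src.mp4

# One process, one decode: split the decoded frames and feed each
# resolution lane its own encoder instance
ffmpeg -y -loglevel error -i src.mp4 \
  -filter_complex "[0:v]split=2[a][b];[a]scale=-2:360[v360];[b]scale=-2:180[v180]" \
  -map "[v360]" -c:v mpeg4 lane_360p.mp4 \
  -map "[v180]" -c:v mpeg4 lane_180p.mp4
```

Compared with running two separate commands, the source here is demuxed and decoded exactly once, which is where the per-process savings described above come from.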

Our in­ter­nal FFmpeg fork pro­vided an ad­di­tional op­ti­miza­tion to this: par­al­lelized video en­cod­ing. While in­di­vid­ual video en­coders are of­ten in­ter­nally multi-threaded, pre­vi­ous FFmpeg ver­sions ex­e­cuted each en­coder in se­r­ial for a given frame when mul­ti­ple en­coders were in use. By run­ning all en­coder in­stances in par­al­lel, bet­ter par­al­lelism can be ob­tained over­all.

Thanks to con­tri­bu­tions from FFmpeg de­vel­op­ers, in­clud­ing those at FFlabs and VideoLAN, more ef­fi­cient thread­ing was im­ple­mented start­ing with FFmpeg 6.0, with the fin­ish­ing touches land­ing in 8.0. This was di­rectly in­flu­enced by the de­sign of our in­ter­nal fork and was one of the main fea­tures we had re­lied on it to pro­vide. This de­vel­op­ment led to the most com­plex refac­tor­ing of FFmpeg in decades and has en­abled more ef­fi­cient en­cod­ings for all FFmpeg users.

To fully mi­grate off of our in­ter­nal fork we needed one more fea­ture im­ple­mented up­stream: real-time qual­ity met­rics.

Visual qual­ity met­rics, which give a nu­meric rep­re­sen­ta­tion of the per­ceived vi­sual qual­ity of me­dia, can be used to quan­tify the qual­ity loss in­curred from com­pres­sion. These met­rics are cat­e­go­rized as ref­er­ence or no-ref­er­ence met­rics, where the for­mer com­pares a ref­er­ence en­cod­ing to some other dis­torted en­cod­ing.

FFmpeg can com­pute var­i­ous vi­sual qual­ity met­rics such as PSNR, SSIM, and VMAF us­ing two ex­ist­ing en­cod­ings in a sep­a­rate com­mand line af­ter en­cod­ing has fin­ished. This is okay for of­fline or VOD use cases, but not for livestream­ing where we might want to com­pute qual­ity met­rics in real time.
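That offline flow — comparing two finished encodings in a separate pass — looks roughly like the following with stock FFmpeg's psnr filter. Synthetic clips stand in for real encodings here; a VMAF comparison would use the libvmaf filter instead, where compiled in.

```shell
# Build a reference clip and a more heavily compressed copy of it
ffmpeg -y -loglevel error -f lavfi \
  -i testsrc=duration=1:size=320x240:rate=24 -c:v mpeg4 -q:v 2 ref.mp4
ffmpeg -y -loglevel error -i ref.mp4 -c:v mpeg4 -q:v 31 dist.mp4

# Separate metric pass after encoding: decode both files, compare
# frame by frame with the psnr filter, and discard the output frames,
# keeping only the metric summary printed to the log
ffmpeg -loglevel info -i dist.mp4 -i ref.mp4 \
  -lavfi "[0:v][1:v]psnr" -f null - 2>&1 | grep 'PSNR'
```

Note that this requires both encodings to exist in full before the metric pass can start, which is exactly what makes it unsuitable for livestreaming.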

To do this, we need to in­sert a video de­coder af­ter each video en­coder used by each out­put lane. These pro­vide bitmaps for each frame in the video af­ter com­pres­sion has been ap­plied so that we can com­pare against the frames be­fore com­pres­sion. In the end, we can pro­duce a qual­ity met­ric for each en­coded lane in real time us­ing a sin­gle FFmpeg com­mand line.

Thanks to “in-loop” decoding, which was enabled by FFmpeg developers including those from FFlabs and VideoLAN, beginning with FFmpeg 7.0, we no longer have to rely on our internal FFmpeg fork for this capability.

Things like real-time qual­ity met­rics while transcod­ing and more ef­fi­cient thread­ing can bring ef­fi­ciency gains to a va­ri­ety of FFmpeg-based pipelines both in and out­side of Meta, and we strive to en­able these de­vel­op­ments up­stream to ben­e­fit the FFmpeg com­mu­nity and wider in­dus­try. However, there are some patches we’ve de­vel­oped in­ter­nally that don’t make sense to con­tribute up­stream. These are highly spe­cific to our in­fra­struc­ture and don’t gen­er­al­ize well.

FFmpeg supports hardware-accelerated decoding, encoding, and filtering with devices such as NVIDIA's NVDEC and NVENC, AMD's Unified Video Decoder (UVD), and Intel's Quick Sync Video (QSV). Each device is supported through an implementation of standard APIs in FFmpeg, allowing for easier integration and minimizing the need for device-specific command-line flags. We've added support for the Meta Scalable Video Processor (MSVP), our custom ASIC for video transcoding, through these same APIs, enabling the use of common tooling across different hardware platforms with minimal platform-specific quirks.

As MSVP is only used within Meta’s own in­fra­struc­ture, it would cre­ate a chal­lenge for FFmpeg de­vel­op­ers to sup­port it with­out ac­cess to the hard­ware for test­ing and val­i­da­tion. In this case, it makes sense to keep patches like this in­ter­nal since they would­n’t pro­vide ben­e­fit ex­ter­nally. We’ve taken on the re­spon­si­bil­ity of re­bas­ing our in­ter­nal patches onto more re­cent FFmpeg ver­sions over time, uti­liz­ing ex­ten­sive val­i­da­tion to en­sure ro­bust­ness and cor­rect­ness dur­ing up­grades.

With more ef­fi­cient multi-lane en­cod­ing and real-time qual­ity met­rics, we were able to fully dep­re­cate our in­ter­nal FFmpeg fork for all VOD and livestream­ing pipelines. And thanks to stan­dard­ized hard­ware APIs in FFmpeg, we’ve been able to sup­port our MSVP ASIC along­side soft­ware-based pipelines with min­i­mal fric­tion.

FFmpeg has with­stood the test of time with over 25 years of ac­tive de­vel­op­ment. Developments that im­prove re­source uti­liza­tion, add sup­port for new codecs and fea­tures, and in­crease re­li­a­bil­ity en­able ro­bust sup­port for a wider range of me­dia. For peo­ple on our plat­forms, this means en­abling new ex­pe­ri­ences and im­prov­ing the re­li­a­bil­ity of ex­ist­ing ones. We plan to con­tinue in­vest­ing in FFmpeg in part­ner­ship with open source de­vel­op­ers, bring­ing ben­e­fits to Meta, the wider in­dus­try, and peo­ple who use our prod­ucts.

We would like to ac­knowl­edge con­tri­bu­tions from the open source com­mu­nity, our part­ners in FFlabs and VideoLAN, and many Meta en­gi­neers, in­clud­ing Max Bykov, Jordi Cenzano Ferret, Tim Harris, Colleen Henry, Mark Shwartzman, Haixia Shi, Cosmin Stejerean, Hassene Tmar, and Victor Loh.

...

Read the original on engineering.fb.com »
