10 interesting stories served every morning and every evening.




1 895 shares, 49 trendiness

Ireland shuts last coal plant, becomes 15th coal-free country in Europe

Ireland today (June 20) became the 15th coal-free country in Europe, having ended coal power generation at its 915 MW Moneypoint coal plant in County Clare. Initially commissioned in the mid-1980s by ESB, Moneypoint was intended to help Ireland offset the impact of the oil crises in the 1970s by providing a dependable source of energy.

But with Ireland now generating far more renewable energy, coal burning is no longer an urgent need. Data from energy think tank Ember shows Ireland generated 37% (11.4 TWh) of its electricity from wind in 2024. Solar generation is nowhere near wind levels (0.97 TWh in 2024), but it has been continuously breaking records in recent months, and local stakeholders are confident this positive trend will continue.

Following the closure, the Moneypoint plant will continue to serve a limited backup role, burning heavy fuel oil under emergency instruction from Ireland’s transmission system operator EirGrid until 2029.

This strategy is in line with previous plans made by EirGrid and ESB to exit coal-fired generation by the end of 2025, which stipulated that Moneypoint would no longer be active in the wholesale electricity market.

“Ireland has quietly rewritten its energy story, replacing toxic coal with homegrown renewable power,” said Alexandru Mustață, campaigner on coal and gas at Europe’s Beyond Fossil Fuels.

“But this isn’t ‘job done’. The government’s priority now must be building a power system for a renewable future; one with the storage, flexibility, and grid infrastructure needed to run fully on clean, domestic renewable electricity,” Mustață warned.

Jerry Mac Evilly, Campaigns Director at Friends of the Earth Ireland, appealed to the government to ensure oil backup at Moneypoint is kept to an absolute minimum and ultimately decommissioned. He also urged the government to prevent further development of data centers, which he said are increasing Ireland’s reliance on fossil gas.

“We also can’t ignore that the government is targeting the installation of at least 2 GW of gas power plants with no strategy to reduce Ireland’s dangerous gas dependency,” he added.

On a broader level, says Beyond Fossil Fuels, Ireland’s closure of coal power generation at Moneypoint sets a precedent for further European coal exits to come. The group tracks European countries’ progress on their commitments to switching from fossil fuels to renewable energy. So far, 23 European countries have committed to coal phase-outs. Italy is expected to complete its mainland coal phase-out this summer with the upcoming closure of its last two big coal power plants, while mainland Spain also expects to declare itself coal-free this summer.

...

Read the original on www.pv-magazine.com »

2 439 shares, 25 trendiness

Create Your Handwriting Font for Free

FontCrafter turns your handwriting into a real, installable font — entirely in your browser. No accounts, no uploads to servers, no cost.

Still have questions? Here’s our FAQ.

It’s easier than you think. Print, write, scan — done.

Your handwriting becomes an installable font (OTF, TTF, WOFF2, Base64)

Natural variation — your letters won’t look robotic or identical every time

Connected letter pairs (ff, th, st, etc.) that flow like real handwriting

Works in Word, Pages, Photoshop, websites — everywhere fonts are used

No account, no server, 100% private — everything happens in your browser

If you found this useful, I’d appreciate donations & patrons (to keep iterating)!

Download and print the template — US Letter or A4.

Print at 100% scale (no “fit to page”). Use white, unlined paper.

Fill in every box with a felt-tip pen. All 3 rows for each character.

Ballpoints are too faint; thick markers bleed. Keep strokes inside the boxes with breathing room from the edges.

How to use the three rows: Row 1 is always uppercase. Row 2 can be a second version of your uppercase, or lowercase. Row 3 can also be uppercase or lowercase.

Scan or photograph the sheet, then drag & drop that photo file below.

Lay the sheet flat on a table with even lighting — no shadows, no curl. A phone camera works great if the sheet is flat and well-lit.

Drop your completed (scanned) image below. JPG, PNG, or high-res photo — make sure the page is flat and evenly lit. The processing happens on your end. No servers involved. Nothing is saved or stored remotely.

Drop your filled-in scan here, or click to browse

Not happy with a character? Touch it up in any image editor, or use correction tape and re-scan.

Ensure All Four Crosshair Markers Are Visible and Continue →

Characters with green borders were detected cleanly. Click any character to deselect it — a replacement from another row will be used. Small imperfections are fine — they give your font personality.

Name your font and choose how your three rows should be used. Enable ligatures for natural-looking connected letter pairs.

What do you want to call this font?

What did you put in each row?

I wrote uppercase in all three rows

I wrote uppercase in Row 1, lowercase in Row 2, uppercase in Row 3

I wrote uppercase in Row 1, lowercase in Row 2, lowercase in Row 3

Row 2 will be used as your lowercase. Adjust how much to shrink it — set to 1.00 if you already wrote Row 2 smaller than Row 1.

Allow certain characters to dip below the baseline (e.g. g, j, p, q, y, or a slashed zero).

Separate with spaces. Add or remove characters as needed for your handwriting.

Controls how far descender tails drop below the line. Slide left for deeper, right for shallower.

Cleans up tiny ink specks that bled through from adjacent cells. Won’t affect dots on letters like i, j, or punctuation marks.

Evens out letters that were drawn at different sizes.

Adds 100+ derived glyphs from your handwriting. Uncheck if you only want your handwritten alphanumeric characters.

Auto-generates diacritics (accents, tildes, umlauts) from your base letters. Covers French, German, Spanish, Portuguese, Scandinavian, and more.

Ligatures are letter pairs that connect naturally in handwriting — like ff, fi, fl, th, and st. Auto-generate is recommended — it’s instant and produces natural-looking connections from your existing characters.

Kerning adjusts spacing between specific letter pairs — like AV, To, and WA — so characters with complementary shapes sit together naturally instead of having uniform gaps.

See how your font looks with sample text, or type anything you like below.

THE QUICK BROWN FOX JUMPS OVER A LAZY DOG.

sphinx of black quartz, judge my vow?

Both Fickle Dwarves Can Jinx My Pig Quiz!

• $11.38 + tax & a 5.69% tip = more than $20.74

• (I love Star Wars) [Yes] {Maybe} <OK>

• “That’s what I said!” “Really?”

• arcade.pirillo.com * chris.pirillo.com

• He scored 7/8 on the quiz — not bad~

• Order #4053: 2x @$16.99 each | Total: $33.98

• Is it _really_ 100^2 = 10,000‽

“Yes,” she said, “it’s ‘a go;’” then walked away.

Your font includes contextual alternates (calt) — consecutive characters will automatically cycle between your 3 handwriting variants. This works in apps with OpenType support but may not appear in this preview.

Click here to try it for yourself…


OTF for desktop apps, TTF for universal compatibility, WOFF2 for websites, Base64 for CSS embedding.

I confirm that the handwriting used to generate this font is my own or I have explicit permission from the handwriting’s owner to create and use this font. I understand that I am solely responsible for how I use the generated font files, and I release FontCrafter, Chris Pirillo, LockerGnome, and arcade.pirillo.com from any liability related to the font’s creation, distribution, or use.

...

Read the original on arcade.pirillo.com »

3 417 shares, 39 trendiness

Building a Procedural Hex Map with Wave Function Collapse

I’ve been obsessed with procedural maps since I was a kid rolling dice on the random dungeon tables in the AD&D Dungeon Master’s Guide. There was something magical about it — you didn’t design the dungeon, you discovered it, one room at a time, and the dice decided whether you got a treasure chamber or a dead end full of rats.

Years later, I decided to build my own map generator. It creates little medieval island worlds — with roads, rivers, coastlines, cliffs, forests, and villages — entirely procedurally. Built with Three.js WebGPU and TSL shaders, about 4,100 hex cells across 19 grids, generated in ~20 seconds.

If you’ve ever played Carcassonne, you already understand WFC. You have a stack of tiles and place them so everything lines up. Each tile has edges — grass, road, city. Adjacent tiles must have matching edges. A road edge must connect to another road edge. Grass must meet grass. The only difference is that the computer does it faster, and complains less when it gets stuck.
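The edge-matching rule above can be sketched in a few lines. This is an illustrative sketch, not the project’s actual code: a hex tile is modeled as six edge labels, one per side, and a neighbor is compatible when the two touching edges carry the same label.

```python
# Minimal sketch of WFC edge matching on a hex tile (illustrative names,
# not the article's implementation). A tile is six edge labels, sides 0..5;
# side `s` of one tile touches side (s + 3) % 6 of its neighbor.

def compatible(tile_a, tile_b, side):
    """True if tile_b may sit on `side` of tile_a."""
    return tile_a[side] == tile_b[(side + 3) % 6]

grass_road = ("grass", "road", "grass", "grass", "road", "grass")
all_grass = ("grass",) * 6

# A road edge must meet a road edge; grass must meet grass.
print(compatible(grass_road, grass_road, 1))  # True: road meets road
print(compatible(grass_road, all_grass, 1))   # False: road meets grass
```

The solver’s job is then just to keep placing tiles that pass this check on all six sides at once.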

The twist: hex tiles have 6 edges instead of 4. That’s 50% more constraints per tile, and the combinatorial explosion is real. Square WFC is well-trodden territory. Hex WFC is… less so.

WFC is reliable for small grids. But as the grid gets bigger, the chance of painting yourself into a dead end goes up fast. A 217-cell hex grid almost never fails. A 4123-cell grid fails regularly.

The solution: modular WFC. Instead of one giant solve, the map is split into 19 hexagonal grids arranged in two rings around a center — about 4,100 cells total. Each grid is solved independently, but it has to match whatever tiles were already placed in neighboring grids. Those border tiles become fixed constraints.

And sometimes those constraints are simply incompatible. No amount of backtracking inside the current grid can fix a problem that was baked in by a neighbor. This is where I spent a lot of dev time.

Here’s the dirty secret of WFC: it fails. A lot. You make a series of random choices, propagate constraints, and eventually back yourself into a corner where some cell has zero valid options left. Congratulations, the puzzle is unsolvable.

The textbook solution is backtracking — undo your last decision and try a different tile. My solver tracks every possibility it removes during propagation (a “trail” of deltas), so it can rewind cheaply without copying the entire grid state. It’ll try up to 500 backtracks before giving up.
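The trail-of-deltas idea looks roughly like this. A hedged sketch with made-up names (not the project’s code): every option removed during propagation is logged, so a failed guess is undone by popping the trail back to a saved mark instead of copying the grid.

```python
# Sketch of trail-based backtracking: log each (cell, option) removal so a
# contradiction can be rewound cheaply. Names are illustrative.

def propagate(domains, removals, trail):
    """Apply removals, logging each one; False on contradiction."""
    for cell, option in removals:
        if option in domains[cell]:
            domains[cell].remove(option)
            trail.append((cell, option))
            if not domains[cell]:
                return False  # zero options left: dead end
    return True

def rewind(domains, trail, mark):
    """Undo everything recorded after `mark`."""
    while len(trail) > mark:
        cell, option = trail.pop()
        domains[cell].add(option)

domains = {0: {"grass", "road"}, 1: {"grass", "road"}}
trail = []
mark = len(trail)
ok = propagate(domains, [(1, "grass"), (1, "road")], trail)  # empties cell 1
rewind(domains, trail, mark)  # cheap undo, no grid copy
print(ok, domains[1])  # False {'grass', 'road'} (order may vary)
```

The real solver does the same thing per backtrack attempt, just with far larger domains and up to 500 retries.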

But backtracking alone isn’t enough. The real problem is cross-grid boundaries.

After many failed approaches, I landed on a layered recovery system:

Layer 1: Unfixing. During the initial constraint propagation, if a neighbor cell creates a contradiction, the solver converts it from a fixed constraint back into a solvable cell. Its own neighbors (two cells out — “anchors”) become the new constraints. This is cheap and handles easy cases.

Layer 2: Local-WFC. If the main solve fails, the solver runs a mini-WFC on a small radius-2 region around the problem area — re-solving 19 cells in the overlap area to create a more compatible boundary. Up to 5 attempts, each targeting a different problem cell. Local-WFC was the breakthrough. Instead of trying to solve the impossible, go back and change the problem.

Layer 3: Drop and hide. Last resort. Drop the offending neighbor cell entirely and place mountain tiles to cover the seams. Mountains are great — their cliff edges match anything, and they look intentional. Nobody questions a mountain.

This map isn’t flat — it has 5 levels of elevation. Ocean and Grass start at level 0, but slopes and cliffs can move up or down a level. Low slopes go up 1 level, high slopes go up 2 levels. A road tile at level 3 needs to connect to another road tile at level 3, or a slope tile that transitions between levels. Get it wrong and you end up with roads that dead-end into cliff faces or rivers flowing uphill into the sky. The elevation axis turns a 2D constraint problem into a 3D one, and it’s where a lot of the tile variety (and a lot of the solver failures) comes from.
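One way to picture how elevation folds into the constraint problem: each edge label carries a terrain type and a level, and both must agree across a seam. The article doesn’t specify its exact encoding, so this is an assumed scheme for illustration only.

```python
# Illustrative elevation-aware edge labels (assumed encoding, not the
# article's): an edge is (terrain_kind, level), and a seam is valid only
# when both components match.

def edges_match(edge_a, edge_b):
    kind_a, level_a = edge_a
    kind_b, level_b = edge_b
    return kind_a == kind_b and level_a == level_b

print(edges_match(("road", 3), ("road", 3)))  # True: same road, same level
print(edges_match(("road", 3), ("road", 2)))  # False: road would dead-end into a cliff
```

Slope tiles would then expose different levels on different sides, which is exactly what makes them the only legal transitions between terraces.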

The better approach: cube coordinates (q, r, s where s = -q-r). It’s a 3D coordinate system for the three hex axes. Neighbor finding becomes trivial — just add or subtract 1 from two coordinates.
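Concretely, neighbor finding in cube coordinates is just six fixed offsets. This sketch uses the standard direction vectors popularized by Red Blob Games, not code from this project:

```python
# Cube-coordinate hex neighbors: (q, r, s) with q + r + s == 0.
# Each direction adds +1 to one axis and -1 to another.
CUBE_DIRECTIONS = [
    (+1, -1, 0), (+1, 0, -1), (0, +1, -1),
    (-1, +1, 0), (-1, 0, +1), (0, -1, +1),
]

def neighbors(q, r, s):
    assert q + r + s == 0, "not a valid cube coordinate"
    return [(q + dq, r + dr, s + ds) for dq, dr, ds in CUBE_DIRECTIONS]

print(neighbors(0, 0, 0))  # the six cells around the origin
# Every neighbor still satisfies the q + r + s == 0 invariant:
print(all(sum(n) == 0 for n in neighbors(2, -1, -1)))  # True
```

The invariant q + r + s = 0 is what keeps distance and rotation math simple compared to offset coordinates.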

The good news is that WFC doesn’t really care about geometry. It’s concerned with which edges match which — it’s essentially a graph problem. The hex coordinates only matter for rendering and for the multi-grid layout, where the 19 grids are themselves arranged as a hex-of-hexes with their own offset positions.

If you’ve ever worked with hex grids, you owe Amit Patel at Red Blob Games a debt of gratitude. His hex grid guide is the definitive reference.

Trees, Buildings, and Why Not Everything Should Be WFC

Early on, I tried using WFC for tree and building placement. Bad idea. WFC is great at local edge matching but terrible at large-scale patterns. You’d get trees scattered randomly instead of clustered into forests, or buildings spread evenly instead of gathered into villages.

The solution: good old Perlin noise. A global noise field determines tree density and building placement, completely separate from WFC. Areas where the noise is above a threshold get trees; slightly different noise drives buildings. This gives you organic clustering — forests, clearings, villages — that WFC could never produce. I also used some additional logic to place buildings at the ends of roads, ports and windmills on coasts, henges on hilltops, etc.
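The threshold idea can be sketched as follows. To keep the sketch dependency-free, a hashed pseudo-noise function stands in for real Perlin noise, and the threshold value is an assumed tuning parameter, not one from the article:

```python
# Sketch of noise-thresholded decoration placement (illustrative; a hashed
# value field stands in for Perlin noise, and the threshold is made up).
import math

def value_noise(x, y, seed=0):
    """Deterministic pseudo-noise in [0, 1) -- a stand-in for Perlin."""
    h = math.sin(x * 12.9898 + y * 78.233 + seed * 37.719) * 43758.5453
    return h - math.floor(h)

TREE_THRESHOLD = 0.6  # hypothetical tuning value

def place_trees(cells, seed=0):
    """Return the subset of hex cells dense enough to get a tree."""
    return [c for c in cells if value_noise(*c, seed) >= TREE_THRESHOLD]

grid = [(q, r) for q in range(8) for r in range(8)]
forest = place_trees(grid)
print(len(forest), "of", len(grid), "cells get trees")
```

With smooth noise (unlike this hash stand-in), above-threshold cells naturally clump into contiguous forests, which is exactly the large-scale structure WFC can’t express.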

WFC handles the terrain. Noise handles the decorations. Each tool does what it’s good at.

I wanted that ‘Zelda: The Wind Waker’ cartoon shimmer on the water surface. Originally I tried generating caustics procedurally with four layers of Voronoi noise. This turned out to be very GPU-heavy and did not look great. The solution was sampling a small scrolling caustic texture with a simple noise mask, which looks way better and is super cheap. Sometimes the easy solution is the correct solution.

Waves are sine bands that radiate outward from coastlines, inspired by Bad North’s gorgeous shoreline effect. To know how “far from the coast” each pixel is, the system renders a coast mask — a top-down orthographic render of the entire map with white for land and black for water — then dilates and blurs it into a gradient. The wave shader reads this gradient to place animated sine bands at regular distance intervals, with noise to break up the pattern.

This worked great on straight coastlines. In concave coves and inlets, the wave lines got thick and ugly. The blur-based gradient spreads the same value range over a wider physical area in coves, stretching the wave bands out. Things I tried:

Screen-space derivatives to detect gradient stretching — worked at one zoom level, broke at others.

Texture-space gradient magnitude to detect opposing coast edges canceling out — only detected narrow rivers, not actual problem coves.

The fundamental issue: blur encodes “how much land is nearby,” not “how far is the nearest coast edge.” These are different questions, and no amount of post-processing the blur can extract true distance.

The solve was a CPU-side “surroundedness” probe that checks each water cell’s neighbors to detect coves, writing a separate mask texture that thins the waves in enclosed areas. It’s kind of a hack, but it works, and the wave bands thin out nicely in the coves.
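The probe can be sketched like this. The encoding is illustrative (the article doesn’t give its exact scheme): for each water cell, count how many of its six neighbors are land, and treat heavily enclosed cells as coves that get a thinning value in the mask.

```python
# Sketch of the CPU-side "surroundedness" probe (assumed encoding): the
# fraction of a water cell's six hex neighbors that are land. High values
# mean an enclosed cove, where the wave bands should be thinned.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def surroundedness(cell, land):
    """Fraction of a water cell's six neighbors that are land cells."""
    q, r = cell
    hits = sum((q + dq, r + dr) in land for dq, dr in HEX_DIRS)
    return hits / 6

land = {(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1)}  # 5 of 6 neighbors of (0,0)
print(surroundedness((0, 0), land))  # ~0.83: enclosed cove -> thin the waves
print(surroundedness((5, 5), land))  # 0.0: open sea -> full-width waves
```

Writing these per-cell values into a small texture gives the wave shader a second channel to modulate band width, independent of the blur gradient.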

No dice required this time — but the feeling is the same. You hit a button, the map builds itself, and you discover what the algorithm decided to put there. It’s super satisfying to see the road and river systems matching up perfectly. Every time it’s different, and every time I find myself exploring for a while. The kid rolling dice on the dungeon tables would be into this.

...

Read the original on felixturner.github.io »

4 378 shares, 29 trendiness

AI reimplementation and the erosion of copyleft

Last week, Dan Blanchard, the maintainer of chardet—a Python library for detecting text encodings used by roughly 130 million projects a month—released a new version. Version 7.0 is 48 times faster than its predecessor, supports multiple cores, and was redesigned from the ground up. Anthropic’s Claude is listed as a contributor. The license changed from LGPL to MIT.

Blanchard’s account is that he never looked at the existing source code directly. He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch. The resulting code shares less than 1.3% similarity with any prior version, as measured by JPlag. His conclusion: this is an independent new work, and he is under no obligation to carry forward the LGPL. Mark Pilgrim, the library’s original author, opened a GitHub issue to object. The LGPL requires that modifications be distributed under the same license, and a reimplementation produced with ample exposure to the original codebase cannot, in Pilgrim’s view, pass as a clean-room effort.

The dispute drew responses from two prominent figures in the open source world. Armin Ronacher, the creator of Flask, welcomed the relicensing. Salvatore Sanfilippo (antirez), the creator of Redis, published a broader defense of AI reimplementation, grounding it in copyright law and the history of the GNU project. Both conclude, by different routes, that what Blanchard did is legitimate. I respect both writers, and I think both are wrong—or more precisely, both are evading the question that actually matters.

That question is this: does legal mean legitimate? Neither piece answers it. Both move from “this is legally permissible” to “this is therefore fine,” without pausing at the gap between those two claims. Law sets a floor; clearing it does not mean the conduct is right. That gap is where this essay begins.

Antirez builds his case on history. When the GNU project reimplemented the UNIX userspace, it was lawful. So was Linux. Copyright law prohibits copying “protected expressions”—the actual code, its structure, its specific mechanisms—but it does not protect ideas or behavior. AI-assisted reimplementation occupies the same legal ground. Therefore, it is lawful.

The legal analysis is largely correct, and I am not disputing it. The problem lies in what antirez does next: he presents the legal conclusion as if it were also a social one, and uses a historical analogy that, examined more carefully, argues against his own position.

When GNU reimplemented the UNIX userspace, the vector ran from proprietary to free. Stallman was using the limits of copyright law to turn proprietary software into free software. The ethical force of that project did not come from its legal permissibility—it came from the direction it was moving, from the fact that it was expanding the commons. That is why people cheered.

The vector in the chardet case runs the other way. Software protected by a copyleft license—one that guarantees users the right to study, modify, and redistribute derivative works under the same terms—has been reimplemented under a permissive license that carries no such guarantee. This is not a reimplementation that expands the commons. It is one that removes the fencing that protected the commons. Derivative works built on chardet 7.0 are under no obligation to share their source code. That obligation, which applied to a library downloaded 130 million times a month, is now gone.

Antirez does not address this directional difference. He invokes the GNU precedent, but that precedent is a counterexample to his conclusion, not a supporting one.

Ronacher’s argument is different. He discloses upfront that he has a stake in the outcome: “I personally have a horse in the race here because I too wanted chardet to be under a non-GPL license for many years. So consider me a very biased person in that regard.” He goes on to write that he considers the GPL to run “against that spirit by restricting what can be done with it”—the spirit being that society is better off when we share.

This claim rests on a fundamental misreading of what the GPL does.

Start with what the GPL actually prohibits. It does not prohibit keeping source code private. It imposes no constraint on privately modifying GPL software and using it yourself. The GPL’s conditions are triggered only by distribution. If you distribute modified code, or offer it as a networked service, you must make the source available under the same terms. This is not a restriction on sharing. It is a condition placed on sharing: if you share, you must share in kind.

The requirement that improvements be returned to the commons is not a mechanism that suppresses sharing. It is a mechanism that makes sharing recursive and self-reinforcing. The claim that imposing contribution obligations on users of a commons undermines sharing culture does not hold together logically.

The contrast with the MIT license clarifies the point. Under MIT, anyone may take code, improve it, and close it off into a proprietary product. You can receive from the commons without giving back. If Ronacher calls this structure “more share-friendly,” he is using a concept of sharing with a specific directionality built in: sharing flows toward whoever has more capital and more engineers to take advantage of it.

The historical record bears this out. In the 1990s, companies routinely absorbed GPL code into proprietary products—not because they had chosen permissive licenses, but because copyleft enforcement was slack. The strengthening of copyleft mechanisms closed that gap. For individual developers and small projects without the resources to compete on anything but reciprocity, copyleft was what made the exchange approximately fair.

The creator of Flask knows this distinction. If he elides it anyway, the argument is not naïve—it is convenient.

The most interesting moment in Ronacher’s piece is not the argument but a detail he mentions in passing: Vercel reimplemented GNU Bash using AI and published it, then got visibly upset when Cloudflare reimplemented Next.js the same way.

Ronacher notes this as an irony and moves on. But the irony cuts deeper than he lets on. Next.js is MIT licensed. Cloudflare’s vinext did not violate any license—it did exactly what Ronacher calls a contribution to the culture of openness, applied to a permissively licensed codebase. Vercel’s reaction had nothing to do with license infringement; it was purely competitive and territorial. The implicit position is: reimplementing GPL software as MIT is a victory for sharing, but having our own MIT software reimplemented by a competitor is cause for outrage. This is what the claim that permissive licensing is “more share-friendly” than copyleft looks like in practice. The spirit of sharing, it turns out, runs in one direction only: outward from oneself.

Ronacher registers the contradiction and does not stop. “This development plays into my worldview,” he writes. When you present evidence that cuts against your own position, acknowledge it, and then proceed to your original conclusion unchanged, that is a signal that the conclusion preceded the argument.

Back to the question posed at the start. Is legal the same as legitimate?

Antirez closes his careful legal analysis as though it settles the matter. Ronacher acknowledges that “there is an obvious moral question here, but that isn’t necessarily what I’m interested in.” Both pieces treat legal permissibility as a proxy for social legitimacy. But law only says what conduct it will not prevent—it does not certify that conduct as right. Aggressive tax minimization that never crosses into illegality may still be widely regarded as antisocial. A pharmaceutical company that legally acquires a patent on a long-generic drug and raises the price a hundredfold has done something legal, but not, therefore, something fine. Legality is a necessary condition; it is not a sufficient one.

In the chardet case, the distinction is sharper still. What the LGPL protected was not Blanchard’s labor alone. It was a social compact agreed to by everyone who contributed to the library over twelve years. The terms of that compact were: if you take this and build on it, you share back under the same terms. This compact operated as a legal instrument, yes, but it was also the foundation of trust that made contribution rational. The fact that a reimplementation may qualify legally as a new work, and the fact that it breaks faith with the original contributors, are separate questions. If a court eventually rules in Blanchard’s favor, that ruling will tell us what the law permits. It will not tell us that the act was right.

Zoë Kooyman, executive director of the FSF, put it plainly: “Refusing to grant others the rights you yourself received as a user is highly antisocial, no matter what method you use.”

Reading this debate, I keep returning to a question about position. From where are these two writers looking at the situation?

Antirez created Redis. Ronacher created Flask. Both are figures at the center of the open source ecosystem, with large audiences and well-established reputations. For them, the falling cost of AI reimplementation means something specific: it is easier to reimplement things they want in a different form. Ronacher says explicitly that he had begun reimplementing GNU Readline precisely because of its copyleft terms.

For the people who have spent years contributing to a library like chardet, the same shift in costs means something else entirely: the copyleft protection around their contributions can be removed. The two writers are speaking from the former position to people in the latter, telling them that this was always lawful, that historical precedent supports it, and that the appropriate response is adaptation.

When positional asymmetry of this kind is ignored, and the argument is presented as universal analysis, what you get is not analysis but rationalization. Both writers arrive at conclusions that align precisely with their own interests. Readers should hold that fact in mind.

Bruce Perens, who wrote the original Open Source Definition, said: “The entire economics of software development are dead, gone, over, kaput!” He meant it as an alarm. Antirez, from a similar assessment of the situation, draws the conclusion: adapt. Ronacher says he finds the direction exciting.

None of the three responses addresses the central question. When copyleft becomes technically easier to circumvent, does that make it less necessary, or more?

I think more. What the GPL protected was not the scarcity of code but the freedom of users. The fact that producing code has become cheaper does not make it acceptable to use that code as a vehicle for eroding freedom. If anything, as the friction of reimplementation disappears, so does the friction of stripping copyleft from anything left exposed. The erosion of enforcement capacity is a legal problem. It does not touch the underlying normative judgment.

That judgment is this: those who take from the commons owe something back to the commons. The principle does not change depending on whether a reimplementation takes five years or five days. No court ruling on AI-generated code will alter its social weight.

This is where law and community norms diverge. Law is made slowly, after the fact, reflecting existing power arrangements. The norms that open source communities built over decades did not wait for court approval. People chose the GPL when the law offered them no guarantee of its enforcement, because it expressed the values of the communities they wanted to belong to. Those values do not expire when the law changes.

In previous writing, I argued for a training copyleft (TGPL) as the next step in this line of development. The chardet situation suggests the argument has to go further: to a specification copyleft covering the layer below source code. If source code can now be generated from a specification, the specification is where the essential intellectual content of a GPL project resides. Blanchard’s own claim—that he worked only from the test suite and API without reading the source—is, paradoxically, an argument for protecting that test suite and API specification under copyleft terms.

The history of the GPL is the history of licensing tools evolving in response to new forms of exploitation: GPLv2 to GPLv3, then AGPL. What drove each evolution was not a court ruling but a community reaching a value judgment first and then seeking legal instruments to express it. The same sequence is available now. Whatever courts eventually decide about AI reimplementation, the question we need to answer first is not a legal one. It is a social one. Do those who take from the commons owe something back? I think they do. That judgment does not require a verdict.

What makes the pieces by antirez and Ronacher worth reading is not that they are right. It is that they make visible, with unusual clarity, what they are choosing not to see. When legality is used as a substitute for a value judgment, the question that actually matters gets buried in the footnotes of a law it has already outgrown.

...

Read the original on writings.hongminhee.org »

5 367 shares, 32 trendiness

Judge dismisses red-light camera ticket, rules law is unconstitutional

A Broward County judge has dismissed a red-light camera ticket, ruling that the state law used to issue the citation improperly shifts the burden of proof onto vehicle owners.

In a 21-page order signed March 3, Judge Steven P. DeLuca granted a defendant’s motion to dismiss a photo-enforced traffic citation issued under Florida’s red-light camera law.

The case involved a Sunrise red-light camera citation issued to a registered vehicle owner after automated cameras captured a vehicle entering an intersection against a red signal. The defendant argued the statute unconstitutionally requires the registered owner to prove they were not driving — instead of requiring the government to prove who was behind the wheel.

In the order, the court found that red-light camera cases, although labeled as civil infractions, function as “quasi-criminal” proceedings because they can result in monetary penalties, a formal finding of guilt, and consequences tied to a driver’s record.

Under Florida Statute 316.0083, once a cam­era cap­tures a vi­o­la­tion, the reg­is­tered owner is pre­sumed re­spon­si­ble un­less they sub­mit an af­fi­davit iden­ti­fy­ing an­other dri­ver. The court ruled that the frame­work im­prop­erly shifts the bur­den of proof away from the state.

Because traffic infractions that move to county court must be proven "beyond a reasonable doubt," the judge wrote that the statute's presumption violates constitutional due process protections.

As a re­sult, the ci­ta­tion in this case was for­mally dis­missed.


Joel Mumford is an attorney with The Ticket Clinic. He said if the case is quasi-criminal, which means almost criminal or criminal-like, the state has to follow procedural due process.

"The state or the agencies that issue the tickets, it's their burden to prove all the elements of the crime beyond and to the extent of each and every reasonable doubt. And the first element, which should be, who's driving the car," Mumford said. "The statute in Florida presumes that the registered owner is the driver of the car."

Mumford said even though the court order only applies in Broward County, it could open the door for challenges in other parts of the state, with other counties potentially following suit.

"What could probably happen is the court could then get an appeal, and then if it goes up to the district court of appeal level and they make a decision on it, depending on what that says, that could then be applied to the entire state, if there's no similar appellate cases throughout the state. So that could make it statewide," he said.

Drivers in Boynton Beach want to see the red-light cam­eras gone.

Fifteen red-light cam­era sys­tems are al­ready run­ning at seven in­ter­sec­tions in Boynton Beach alone. Congress Avenue and Gateway Boulevard is just one of them.

"I've been ticketed here twice, and it's ridiculous because it's just not fair," said one driver who didn't want to be identified. "The person that does the determination when you ran the light, it's just random. Whoever they want to pick, they pick you to say, okay, you're gonna pay the ticket."

That driver had to pay his $158 ticket, but he's hopeful Palm Beach County can soon follow suit. He wants red-light cameras gone, so future violations could be thrown out.

"I think they need to outlaw it and get rid of it," he said.

Advocacy group StopTheCams, which has long op­posed au­to­mated traf­fic en­force­ment, called the rul­ing a ma­jor vic­tory. In a press re­lease, the group said the de­ci­sion con­firms what crit­ics have ar­gued for years — that red-light cam­era laws pun­ish ve­hi­cle own­ers with­out re­quir­ing proof they com­mit­ted the vi­o­la­tion.

Supporters of red-light cam­eras ar­gue the sys­tems im­prove safety by de­ter­ring dan­ger­ous dri­ving at in­ter­sec­tions. Florida’s red-light cam­era law, known as the Mark Wandall Traffic Safety Act, al­lows lo­cal gov­ern­ments to use au­to­mated en­force­ment sys­tems.

It re­mains un­clear whether the rul­ing will be ap­pealed or how broadly it could af­fect sim­i­lar cases statewide. For now, the de­ci­sion ap­plies to this spe­cific case in Broward County, but le­gal ob­servers say it could fuel re­newed chal­lenges to Florida’s red-light cam­era en­force­ment sys­tem.

...

Read the original on cbs12.com »

6 325 shares, 29 trendiness

A New Chapter for Bluesky

After sev­eral in­tense and in­cred­i­ble years build­ing Bluesky from the ground up, I’ve de­cided to step back as CEO and tran­si­tion to a new role as Bluesky’s Chief Innovation Officer.

In 2019, I set out to build an open pro­to­col for so­cial me­dia, with the goal of en­abling the de­vel­op­ment of a new gen­er­a­tion of ap­pli­ca­tions de­signed to em­power users. The Bluesky app started off as a ref­er­ence client for the pro­to­col, but has since grown into a thriv­ing plat­form with over 40 mil­lion users. Last year, we grew a world-class team, ex­panded the AT Protocol ecosys­tem, and proved that a val­ues-dri­ven so­cial net­work could thrive at scale.

As Bluesky ma­tures, the com­pany needs a sea­soned op­er­a­tor fo­cused on scal­ing and ex­e­cu­tion, while I re­turn to what I do best: build­ing new things. As part of this tran­si­tion, Toni Schneider, for­mer CEO of Automattic and part­ner at True Ventures, will join our team as in­terim CEO, while our board runs a search for a per­ma­nent chief ex­ec­u­tive.

Toni be­lieves deeply in the Bluesky mis­sion, and has been an ad­vi­sor to the com­pany and me per­son­ally for over a year. Both Automattic and True Ventures are also in­vestors in Bluesky, and sup­port the de­vel­op­ment of a more open, user-dri­ven in­ter­net. Automattic, the com­pany be­hind WordPress.com, has built their busi­ness on open source soft­ware, work­ing to make the web a bet­ter and more par­tic­i­pa­tory place. Toni was a key part of guid­ing that mis­sion, and is some­one who leads with gen­uine cu­rios­ity, takes the long view, and deeply un­der­stands what it means to build a com­pany around a mis­sion. I am con­fi­dent that he is the right per­son to lead us into this next chap­ter.

Scaling up this com­pany has been a learn­ing ex­pe­ri­ence un­like any­thing else. I’ve grown a lot as a leader and had the priv­i­lege of as­sem­bling the best team I’ve ever worked with. As we’ve grown, I’ve found that peo­ple thrive when they’re in a role where their pas­sions over­lap with their strengths. This is as true for me as it is for our team. I’m most en­er­gized by ex­plor­ing new ideas, bring­ing a vi­sion to life, and help­ing peo­ple dis­cover their strengths. Transitioning to a more fo­cused role where I can do what brings me en­ergy is my way of putting that be­lief into prac­tice.

I’m ex­cited to dig into the next fron­tier of what de­cen­tral­ized so­cial can be, while bring­ing Toni in to sup­port our team as an ex­pe­ri­enced op­er­a­tor and leader. The work that got us here was just the be­gin­ning, and I’m grate­ful to keep build­ing along­side this team and com­mu­nity.

...

Read the original on bsky.social »

7 264 shares, 37 trendiness

Oracle is building yesterday’s data centers with tomorrow’s debt

Artificial in­tel­li­gence chips are get­ting up­graded more quickly than data cen­ters can be built, a mar­ket re­al­ity that ex­poses a key risk to the AI trade and Oracle’s debt-fu­eled ex­pan­sion.

OpenAI is no longer plan­ning to ex­pand its part­ner­ship with Oracle in Abilene, Texas, home to the Stargate data cen­ter, be­cause it wants clus­ters with newer gen­er­a­tions of Nvidia graph­ics pro­cess­ing units, ac­cord­ing to a per­son fa­mil­iar with the mat­ter.

The cur­rent Abilene site is ex­pected to use Nvidia’s Blackwell proces­sors, and the power is­n’t pro­jected to come on­line for a year. By then, OpenAI is hop­ing to have ex­panded ac­cess to Nvidia’s next-gen­er­a­tion chips in big­ger clus­ters else­where, said the per­son, who asked not to be named due to con­fi­den­tial­ity.

Bloomberg was first to report on the companies ending their plans for expansion in Abilene. In a post on X on Sunday, Oracle called news reports about the activity "false and incorrect," but the post only said existing projects are on track and didn't address expansion plans.

Oracle se­cured the site, or­dered the hard­ware, and spent bil­lions of dol­lars on con­struc­tion and staff, with the ex­pec­ta­tion of go­ing big­ger.

It’s a log­i­cal de­ci­sion for OpenAI, which does­n’t want older chips. Nvidia used to re­lease a new gen­er­a­tion of data cen­ter proces­sors every two years. Now, CEO Jensen Huang has the com­pany ship­ping one every year, and each gen­er­a­tion of­fers a leap in ca­pa­bil­ity. Vera Rubin, un­veiled at CES in January and al­ready in pro­duc­tion, de­liv­ers five times the in­fer­ence per­for­mance of Blackwell.

For the com­pa­nies build­ing fron­tier mod­els, the small­est im­prove­ment in per­for­mance could equate to huge gaps in model bench­marks and rank­ings, which are closely fol­lowed by de­vel­op­ers and trans­late di­rectly to us­age, rev­enue, and val­u­a­tion.

That all points to a big­ger prob­lem at play. For in­fra­struc­ture com­pa­nies, se­cur­ing a site, con­nect­ing power and stand­ing up a fa­cil­ity takes 12 to 24 months at min­i­mum. But cus­tomers want the lat­est and great­est, and they’re track­ing the yearly chip up­grades.

Oracle’s added chal­lenge is that it’s the only hy­per­scaler fund­ing its build­out pri­mar­ily with debt, to the tune of $100 bil­lion and count­ing. Google, Amazon and Microsoft, by con­trast, are lean­ing on their enor­mous cash-gen­er­at­ing busi­nesses.

Meanwhile, Oracle part­ner Blue Owl is de­clin­ing to fund an ad­di­tional fa­cil­ity, and plans to cut up to 30,000 jobs.

Oracle reports fiscal third-quarter results on Tuesday, and investors will be paying close attention to how the company addresses a $50 billion capital expenditure plan with negative free cash flow, and whether the financing pipeline can hold up.

The stock is down 23% so far this year and has lost over half its value since peak­ing in September.

Beyond Oracle, GPU de­pre­ci­a­tion is a risk for the broader mar­ket and could have ram­i­fi­ca­tions across the AI land­scape. Every in­fra­struc­ture deal signed to­day may re­sult in a com­mit­ment to out­dated hard­ware be­fore the power is even con­nected.

...

Read the original on www.cnbc.com »

8 262 shares, 25 trendiness

JSLinux

...

Read the original on bellard.org »

9 235 shares, 16 trendiness

Media Processing at Scale

FFmpeg is truly a multi-tool for me­dia pro­cess­ing. As an in­dus­try-stan­dard tool it sup­ports a wide va­ri­ety of au­dio and video codecs and con­tainer for­mats. It can also or­ches­trate com­plex chains of fil­ters for me­dia edit­ing and ma­nip­u­la­tion. For the peo­ple who use our apps, FFmpeg plays an im­por­tant role in en­abling new video ex­pe­ri­ences and im­prov­ing the re­li­a­bil­ity of ex­ist­ing ones.

Meta ex­e­cutes ffm­peg (the main CLI ap­pli­ca­tion) and ff­probe (a util­ity for ob­tain­ing me­dia file prop­er­ties) bi­na­ries tens of bil­lions of times a day, in­tro­duc­ing unique chal­lenges when deal­ing with me­dia files. FFmpeg can eas­ily per­form transcod­ing and edit­ing on in­di­vid­ual files, but our work­flows have ad­di­tional re­quire­ments to meet our needs. For many years we had to rely on our own in­ter­nally de­vel­oped fork of FFmpeg to pro­vide fea­tures that have only re­cently been added to FFmpeg, such as threaded multi-lane en­cod­ing and real-time qual­ity met­ric com­pu­ta­tion.

Over time, our in­ter­nal fork came to di­verge sig­nif­i­cantly from the up­stream ver­sion of FFmpeg. At the same time, new ver­sions of FFmpeg brought sup­port for new codecs and file for­mats, and re­li­a­bil­ity im­prove­ments, all of which al­lowed us to in­gest more di­verse video con­tent from users with­out dis­rup­tions. This ne­ces­si­tated that we sup­port both re­cent open-source ver­sions of FFmpeg along­side our in­ter­nal fork. Not only did this cre­ate a grad­u­ally di­ver­gent fea­ture set, it also cre­ated chal­lenges around safely re­bas­ing our in­ter­nal changes to avoid re­gres­sions.

As our in­ter­nal fork be­came in­creas­ingly out­dated, we col­lab­o­rated with FFmpeg de­vel­op­ers, FFlabs, and VideoLAN to de­velop fea­tures in FFmpeg that al­lowed us to fully dep­re­cate our in­ter­nal fork and rely ex­clu­sively on the up­stream ver­sion for our use cases. Using up­streamed patches and refac­tor­ings we’ve been able to fill two im­por­tant gaps that we had pre­vi­ously re­lied on our in­ter­nal fork to fill: threaded, multi-lane transcod­ing and real-time qual­ity met­rics.

When a user up­loads a video through one of our apps, we gen­er­ate a set of en­cod­ings to sup­port Dynamic Adaptive Streaming over HTTP (DASH) play­back. DASH play­back al­lows the ap­p’s video player to dy­nam­i­cally choose an en­cod­ing based on sig­nals such as net­work con­di­tions. These en­cod­ings can dif­fer in res­o­lu­tion, codec, fram­er­ate, and vi­sual qual­ity level but they are cre­ated from the same source en­cod­ing, and the player can seam­lessly switch be­tween them in real time.

In a very simple system, separate FFmpeg command lines can generate the encodings for each lane one by one, in serial. This could be optimized by running each command in parallel, but that quickly becomes inefficient due to the duplicate work done by each process.

To work around this, mul­ti­ple out­puts could be gen­er­ated within a sin­gle FFmpeg com­mand line, de­cod­ing the frames of a video once and send­ing them to each out­put’s en­coder in­stance. This elim­i­nates a lot of over­head by dedu­pli­cat­ing the video de­cod­ing and process startup time over­head in­curred by each com­mand line. Given that we process over 1 bil­lion video up­loads daily, each re­quir­ing mul­ti­ple FFmpeg ex­e­cu­tions, re­duc­tions in per-process com­pute us­age yield sig­nif­i­cant ef­fi­ciency gains.
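The single-command, multi-output approach can be sketched as follows. This is an illustrative Python snippet that assembles one ffmpeg argv with three output lanes; the filenames, codec, resolutions, and bitrates are hypothetical examples, not Meta's actual settings.

```python
# Sketch: assemble one ffmpeg command line that decodes the source once
# and encodes several lanes. Options placed before each output file apply
# to that output, so each lane gets its own scale and bitrate.

def build_ladder_cmd(source, lanes):
    """lanes: list of (height, bitrate, outfile) tuples."""
    cmd = ["ffmpeg", "-i", source]
    for height, bitrate, outfile in lanes:
        cmd += [
            "-map", "0:v:0",              # reuse the single decoded stream
            "-c:v", "libx264",
            "-vf", f"scale=-2:{height}",  # set height, keep aspect ratio
            "-b:v", bitrate,
            outfile,
        ]
    return cmd

cmd = build_ladder_cmd("source.mp4", [
    (1080, "5M", "out_1080.mp4"),
    (720, "2.5M", "out_720.mp4"),
    (360, "800k", "out_360.mp4"),
])
print(" ".join(cmd))
```

The point of the shape is visible in the argv itself: `-i` appears once (one decode), while the encoder options repeat per output lane.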

Our in­ter­nal FFmpeg fork pro­vided an ad­di­tional op­ti­miza­tion to this: par­al­lelized video en­cod­ing. While in­di­vid­ual video en­coders are of­ten in­ter­nally multi-threaded, pre­vi­ous FFmpeg ver­sions ex­e­cuted each en­coder in se­r­ial for a given frame when mul­ti­ple en­coders were in use. By run­ning all en­coder in­stances in par­al­lel, bet­ter par­al­lelism can be ob­tained over­all.

Thanks to con­tri­bu­tions from FFmpeg de­vel­op­ers, in­clud­ing those at FFlabs and VideoLAN, more ef­fi­cient thread­ing was im­ple­mented start­ing with FFmpeg 6.0, with the fin­ish­ing touches land­ing in 8.0. This was di­rectly in­flu­enced by the de­sign of our in­ter­nal fork and was one of the main fea­tures we had re­lied on it to pro­vide. This de­vel­op­ment led to the most com­plex refac­tor­ing of FFmpeg in decades and has en­abled more ef­fi­cient en­cod­ings for all FFmpeg users.

To fully mi­grate off of our in­ter­nal fork we needed one more fea­ture im­ple­mented up­stream: real-time qual­ity met­rics.

Visual qual­ity met­rics, which give a nu­meric rep­re­sen­ta­tion of the per­ceived vi­sual qual­ity of me­dia, can be used to quan­tify the qual­ity loss in­curred from com­pres­sion. These met­rics are cat­e­go­rized as ref­er­ence or no-ref­er­ence met­rics, where the for­mer com­pares a ref­er­ence en­cod­ing to some other dis­torted en­cod­ing.

FFmpeg can com­pute var­i­ous vi­sual qual­ity met­rics such as PSNR, SSIM, and VMAF us­ing two ex­ist­ing en­cod­ings in a sep­a­rate com­mand line af­ter en­cod­ing has fin­ished. This is okay for of­fline or VOD use cases, but not for livestream­ing where we might want to com­pute qual­ity met­rics in real time.

To do this, we need to in­sert a video de­coder af­ter each video en­coder used by each out­put lane. These pro­vide bitmaps for each frame in the video af­ter com­pres­sion has been ap­plied so that we can com­pare against the frames be­fore com­pres­sion. In the end, we can pro­duce a qual­ity met­ric for each en­coded lane in real time us­ing a sin­gle FFmpeg com­mand line.

Thanks to in-loop” de­cod­ing, which was en­abled by FFmpeg de­vel­op­ers in­clud­ing those from FFlabs and VideoLAN, be­gin­ning with FFmpeg 7.0, we no longer have to rely on our in­ter­nal FFmpeg fork for this ca­pa­bil­ity.

Things like real-time qual­ity met­rics while transcod­ing and more ef­fi­cient thread­ing can bring ef­fi­ciency gains to a va­ri­ety of FFmpeg-based pipelines both in and out­side of Meta, and we strive to en­able these de­vel­op­ments up­stream to ben­e­fit the FFmpeg com­mu­nity and wider in­dus­try. However, there are some patches we’ve de­vel­oped in­ter­nally that don’t make sense to con­tribute up­stream. These are highly spe­cific to our in­fra­struc­ture and don’t gen­er­al­ize well.

FFmpeg supports hardware-accelerated decoding, encoding, and filtering with devices such as NVIDIA's NVDEC and NVENC, AMD's Unified Video Decoder (UVD), and Intel's Quick Sync Video (QSV). Each device is supported through an implementation of standard APIs in FFmpeg, allowing for easier integration and minimizing the need for device-specific command line flags. We've added support for the Meta Scalable Video Processor (MSVP), our custom ASIC for video transcoding, through these same APIs, enabling the use of common tooling across different hardware platforms with minimal platform-specific quirks.
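On the command line, those standard APIs mean the pipeline shape stays the same and only the device selection changes. A sketch using NVIDIA's publicly available encoder as the example (MSVP itself is internal to Meta; filenames are hypothetical):

```python
# Sketch: the same encode command with a hardware path selected via
# FFmpeg's standard flags. Swapping the encoder name (h264_nvenc here,
# or a QSV encoder elsewhere) is the only per-device change.

def with_hw_encoder(source, outfile, encoder="h264_nvenc", hwaccel="cuda"):
    return [
        "ffmpeg",
        "-hwaccel", hwaccel,   # hardware-accelerated decode
        "-i", source,
        "-c:v", encoder,       # hardware encoder instead of libx264
        outfile,
    ]

print(" ".join(with_hw_encoder("source.mp4", "out.mp4")))
```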

As MSVP is only used within Meta’s own in­fra­struc­ture, it would cre­ate a chal­lenge for FFmpeg de­vel­op­ers to sup­port it with­out ac­cess to the hard­ware for test­ing and val­i­da­tion. In this case, it makes sense to keep patches like this in­ter­nal since they would­n’t pro­vide ben­e­fit ex­ter­nally. We’ve taken on the re­spon­si­bil­ity of re­bas­ing our in­ter­nal patches onto more re­cent FFmpeg ver­sions over time, uti­liz­ing ex­ten­sive val­i­da­tion to en­sure ro­bust­ness and cor­rect­ness dur­ing up­grades.

With more ef­fi­cient multi-lane en­cod­ing and real-time qual­ity met­rics, we were able to fully dep­re­cate our in­ter­nal FFmpeg fork for all VOD and livestream­ing pipelines. And thanks to stan­dard­ized hard­ware APIs in FFmpeg, we’ve been able to sup­port our MSVP ASIC along­side soft­ware-based pipelines with min­i­mal fric­tion.

FFmpeg has with­stood the test of time with over 25 years of ac­tive de­vel­op­ment. Developments that im­prove re­source uti­liza­tion, add sup­port for new codecs and fea­tures, and in­crease re­li­a­bil­ity en­able ro­bust sup­port for a wider range of me­dia. For peo­ple on our plat­forms, this means en­abling new ex­pe­ri­ences and im­prov­ing the re­li­a­bil­ity of ex­ist­ing ones. We plan to con­tinue in­vest­ing in FFmpeg in part­ner­ship with open source de­vel­op­ers, bring­ing ben­e­fits to Meta, the wider in­dus­try, and peo­ple who use our prod­ucts.

We would like to ac­knowl­edge con­tri­bu­tions from the open source com­mu­nity, our part­ners in FFlabs and VideoLAN, and many Meta en­gi­neers, in­clud­ing Max Bykov, Jordi Cenzano Ferret, Tim Harris, Colleen Henry, Mark Shwartzman, Haixia Shi, Cosmin Stejerean, Hassene Tmar, and Victor Loh.

...

Read the original on engineering.fb.com »

10 233 shares, 9 trendiness

The Death of Social Media is the Renaissance of RSS

Social me­dia was once hailed as the great de­moc­ra­tizer of in­for­ma­tion. It promised a world where any­one could share their voice, con­nect with like-minded com­mu­ni­ties, and dis­cover au­then­tic hu­man per­spec­tives. But in 2025, the land­scape looks very dif­fer­ent. Social plat­forms are drown­ing in a flood of Generative AI (GenAI) con­tent—ar­ti­cles, im­ages, videos, and even en­tire con­ver­sa­tions churned out at in­dus­trial scale. The re­sult is a noisy ocean of same­ness, where gen­uine hu­man voices strug­gle to be heard.

For years, one could track the slow de­cline of so­cial plat­forms: from al­go­rith­mic ma­nip­u­la­tion and ad­ver­tis­ing over­load, to bot armies, po­lit­i­cal po­lar­iza­tion, and the pri­or­i­ti­za­tion of en­gage­ment over truth. Yet, the ex­plo­sion of AI-generated con­tent has dra­mat­i­cally ac­cel­er­ated this down­fall. Instead of scrolling for con­nec­tion, users now find them­selves wad­ing through end­less streams of repet­i­tive, shal­low, or out­right mean­ing­less ma­te­r­ial. This is not just a tem­po­rary nui­sance. It sig­nals the death of so­cial me­dia as we know it. And in this en­vi­ron­ment, an old tech­nol­ogy—long thought to be ob­so­lete—emerges as a sav­ior: RSS, or Really Simple Syndication. RSS of­fers a way out of the al­go­rith­mic maze, a re­turn to au­then­tic­ity, and a chance to re­claim con­trol over the in­for­ma­tion we con­sume.

This ar­ti­cle ex­plores why AI-generated con­tent is de­stroy­ing so­cial me­dia, how RSS pro­vides a vi­able and el­e­gant so­lu­tion, and why now might be the per­fect mo­ment to re­vive this for­got­ten tech­nol­ogy.

Generative AI has low­ered the bar­rier to con­tent cre­ation to near zero. What once re­quired hours of writ­ing, de­sign­ing, or edit­ing can now be pro­duced in sec­onds. Tools like ChatGPT, Midjourney, and Runway gen­er­ate pol­ished text, stun­ning vi­su­als, and slick videos in­stantly. This is rev­o­lu­tion­ary in some ways, but dev­as­tat­ing in oth­ers. The prob­lem lies in scale. When any­one—or any bot—can pub­lish un­lim­ited con­tent at no cost, the sup­ply of in­for­ma­tion sky­rock­ets, but hu­man at­ten­tion re­mains fi­nite. The in­evitable re­sult is over­sat­u­ra­tion, an end­less flood of low-value ma­te­r­ial that drowns out every­thing else.

AI-generated ar­ti­cles and posts of­ten sound com­pe­tent, but they rarely sound alive. They mimic hu­man style but lack hu­man depth. After read­ing a dozen AI-written ar­ti­cles, a pat­tern emerges: sim­i­lar phrases, repet­i­tive struc­tures, and pre­dictable con­clu­sions. The in­ter­net is fill­ing up with ma­chine-gen­er­ated déjà vu. For read­ers, this cre­ates fa­tigue in en­coun­ter­ing the same types of con­tent over and over, along with the ero­sion of trust as it be­comes dif­fi­cult to dis­tin­guish gen­uine hu­man thought from au­to­mated out­put.

Compounding this prob­lem is the way so­cial me­dia al­go­rithms func­tion. They don’t care about au­then­tic­ity; they care about en­gage­ment. AI-generated con­tent, de­signed for clicks and shares, fits neatly into their goals. As more con­tent is pro­duced, al­go­rithms am­plify the ones that trig­ger emo­tions or quick in­ter­ac­tions, even if they are shal­low, ma­nip­u­la­tive, or mis­lead­ing. AI makes it cheaper to pro­duce click­bait, and so­cial me­dia en­sures it spreads faster than ever. For cre­ators, this is crush­ing. Human-made ar­ti­cles, videos, or posts now com­pete against end­less waves of ma­chine-made con­tent. Audiences can barely tell the dif­fer­ence, and many no longer care. The re­sult is suf­fo­ca­tion of au­then­tic voices. For users, the plat­forms feel less per­sonal, less in­spir­ing, and less trust­wor­thy. Social me­dia promised com­mu­nity, but what we now get is con­tent sludge.

The de­cay of so­cial me­dia did not be­gin with AI. For over a decade, cracks have been vis­i­ble. Platforms in­creas­ingly pri­or­i­tized ad­ver­tis­ers over users, forc­ing ad over­load onto feeds. Algorithms re­placed user choice with au­to­mated cu­ra­tion. Fake news and bots un­der­mined trust. Excessive use was linked to anx­i­ety, de­pres­sion, and po­lar­iza­tion. Yet now, the flood of AI con­tent ac­cel­er­ates this process. If every scroll re­veals ma­chine-writ­ten posts, AI art clones, or syn­thetic in­flu­encers, the value of the plat­form erodes fur­ther. People log in less, en­gage­ment drops, and the feed­back loop wors­ens.

At the heart of this de­cline lies the col­lapse of au­then­tic­ity. The core ap­peal of so­cial me­dia was its hu­man con­nec­tion, the spon­tane­ity of real peo­ple shar­ing real mo­ments. But with AI con­tent dom­i­nat­ing feeds, that con­nec­tion is di­luted. Imagine log­ging into Instagram and see­ing 90 per­cent AI-generated fash­ion shoots, travel pho­tos, or in­flu­encer avatars. Or scrolling Twitter/X and find­ing end­less AI-written hot takes. The magic of hu­man un­pre­dictabil­ity van­ishes. Platforms are also trapped by their re­liance on ad­ver­tis­ing. Ads de­pend on user trust and en­gage­ment, but as feeds be­come in­dis­tin­guish­able from AI sludge, users dis­en­gage and ad­ver­tis­ers see di­min­ish­ing re­turns. The re­sult is a death spi­ral: plat­forms push harder for mon­e­ti­za­tion while dri­ving users fur­ther away. This is the tra­jec­tory we are on, and it points to­ward the death of so­cial me­dia as we knew it.

RSS, or Really Simple Syndication, is an old tech­nol­ogy from the early web. At its core, it is a feed for­mat that al­lows web­sites to pub­lish up­dates which users can sub­scribe to. Instead of vis­it­ing each site in­di­vid­u­ally, you can ag­gre­gate up­dates into a sin­gle reader app. Think of it as an in­box for the in­ter­net. While RSS was once pop­u­lar in the blog­ging era of the 2000s, it faded as so­cial me­dia took over. But now, in the age of AI con­tent floods, it might be the per­fect an­ti­dote.
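Mechanically, an RSS feed is just an XML document: a channel containing item entries, which a reader app fetches on a schedule. A minimal sketch of reading one with Python's standard library (the feed content below is a made-up example):

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS 2.0 document: a channel with two items.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com</link>
    <item>
      <title>First post</title>
      <link>https://example.com/first</link>
    </item>
    <item>
      <title>Second post</title>
      <link>https://example.com/second</link>
    </item>
  </channel>
</rss>"""

# A reader aggregates many such feeds into one inbox; extracting the
# entries from each is a couple of ElementTree queries.
root = ET.fromstring(FEED)
items = [(i.findtext("title"), i.findtext("link"))
         for i in root.iter("item")]
print(items)
```

There is no ranking step anywhere in that loop, which is the whole point: what appears is exactly what the subscribed sites published, in order.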

The ben­e­fits of RSS are strik­ing. With RSS, you sub­scribe di­rectly to web­sites, blogs, or news out­lets, mean­ing there is no mid­dle­man al­go­rithm de­cid­ing what you see. By cu­rat­ing your own sources, you can fo­cus on hu­man-writ­ten blogs, com­pany press re­leases, or trusted out­lets. Most feeds de­liver clean con­tent with­out in­jected ads or sur­veil­lance. Updates ap­pear in­stantly in your reader, of­ten faster than so­cial me­dia plat­forms sur­face them. Above all, RSS re­turns con­trol to the user. You de­cide which voices mat­ter, with no al­go­rithm fil­ter­ing or ma­nip­u­lat­ing your feed.

Unlike so­cial feeds, where AI-generated spam floods in, RSS al­lows you to choose hu­man sources de­lib­er­ately. If you want up­dates from your fa­vorite in­die jour­nal­ist, you sim­ply add their blog feed. If you want com­pany news straight from the source, you sub­scribe to their press re­lease feed. The re­sult is a cu­rated, per­son­al­ized feed of au­then­tic voices. In other words, RSS re­stores the sig­nal-to-noise ra­tio.

Despite its age, RSS never dis­ap­peared. Many apps and ser­vices still sup­port it, some paid, some free, some open source. During my own ex­plo­ration, I tested mul­ti­ple RSS reader apps. Most worked, but one stood out: Feeder.

Feeder is free, light­weight, and re­fresh­ingly sim­ple. Unlike bloated apps or paid ser­vices, it fo­cuses on what mat­ters. There are no dis­tract­ing ads, only con­tent. It works across phone and desk­top, pro­vid­ing seam­less use. Transparency is an­other key ad­van­tage—it is open source on GitHub, mean­ing you can au­dit what it does. Notifications, search, fold­ers, and sync­ing all work smoothly. It does­n’t try to rein­vent RSS; it just makes it work. In a time where trans­parency and trust mat­ter, Feeder’s open-source na­ture is a mas­sive plus.

Of course, Feeder is not the only op­tion. Other strong con­tenders in­clude Feedly, Inoreader, and NetNewsWire. But Feeder cap­tures the spirit of RSS: in­de­pen­dence, sim­plic­ity, and user con­trol.

AI-driven so­cial me­dia thrives on re­mov­ing choice. It de­cides what you see, when you see it, and how you feel about it. RSS flips the script. You de­cide your sources. You build your own in­for­ma­tion ecosys­tem. Instead of wait­ing for an ar­ti­cle to ap­pear on Twitter/X or Facebook—filtered by al­go­rithms, dis­torted by ads—you get it straight from the source, with no de­lays, no ma­nip­u­la­tion, and no man in the mid­dle.

By sub­scrib­ing di­rectly to hu­man cre­ators—blog­gers, jour­nal­ists, thinkers—you by­pass the noise and sup­port au­then­tic­ity. These voices are harder to find on AI-clogged plat­forms, but with RSS, they come straight to you. With RSS, there is no hid­den data min­ing, no in­vis­i­ble AI in­flu­ence. Just a feed of your cho­sen con­tent, de­liv­ered lo­cally. In a dig­i­tal world plagued by hid­den al­go­rithms, this trans­parency is re­fresh­ing.

Social me­dia will not van­ish overnight, but its role is chang­ing. For many, it will be­come back­ground noise, a chaotic car­ni­val of AI bots shout­ing into the void. Platforms may sur­vive as en­ter­tain­ment ma­chines, but their cred­i­bil­ity as sources of au­then­tic hu­man in­sight is gone. The early web was de­cen­tral­ized, built on blogs, fo­rums, and per­sonal sites. Social me­dia cen­tral­ized every­thing, but at great cost. RSS of­fers a way back: de­cen­tral­ized, user-dri­ven, and au­then­tic. Instead of one feed con­trolled by one cor­po­ra­tion, each per­son builds their own.

In the age of in­fi­nite AI con­tent, cu­ra­tion be­comes the most valu­able act. RSS em­pow­ers in­di­vid­u­als to cu­rate their own feeds. Instead of pas­sively scrolling, we ac­tively se­lect. Instead of drown­ing, we choose.

Social me­dia as we knew it is dy­ing—not be­cause hu­mans lost in­ter­est in shar­ing, but be­cause ma­chines over­whelmed the sys­tem. Generative AI has flooded plat­forms with repet­i­tive, shal­low, and ma­nip­u­la­tive con­tent. Algorithms am­plify the noise. Authenticity evap­o­rates.

But out of this col­lapse emerges an op­por­tu­nity: the re­birth of RSS. This old, sim­ple tech­nol­ogy em­pow­ers us to re­claim con­trol, cut out mid­dle­men, and re­con­nect with gen­uine hu­man voices. Tools like Feeder make it ac­ces­si­ble again, pro­vid­ing clean, ad-free, trans­par­ent feeds of the con­tent that mat­ters most.

The fu­ture of in­for­ma­tion con­sump­tion may not be al­go­rithm-dri­ven feeds but self-cu­rated, hu­man-cen­tered in­boxes of au­then­tic­ity. The death of so­cial me­dia could be the start of some­thing bet­ter: a re­nais­sance of the open web. RSS is­n’t just nos­tal­gia. It’s a sur­vival tool in the age of AI con­tent floods. And the sooner we em­brace it, the sooner we can re­dis­cover the joy of con­sum­ing in­for­ma­tion that feels truly hu­man.

By the way, here is my own RSS blog feed!

...

Read the original on www.smartlab.at »
