10 interesting stories served every morning and every evening.




1 962 shares, 38 trendiness

Ireland shuts last coal plant, becomes 15th coal-free country in Europe

Ireland to­day (June 20) be­came the 15th coal-free coun­try in Europe, hav­ing ended coal power gen­er­a­tion at its 915 MW Moneypoint coal plant in County Clare. Initially com­mis­sioned in the mid-1980s by ESB, Moneypoint was in­tended to help Ireland off­set the im­pact of the oil crises in the 1970s by pro­vid­ing a de­pend­able source of en­ergy.

But with Ireland now generating far more renewable energy, coal burning is no longer an urgent need. Data from energy think tank Ember shows Ireland generated 37% (11.4 TWh) of its electricity from wind in 2024. Solar is nowhere near wind levels of generation (0.97 TWh in 2024), but it has been continuously breaking generation records in recent months, and local stakeholders are confident this positive trend will continue.

Following the clo­sure, the Moneypoint plant will con­tinue to serve a lim­ited backup role, burn­ing heavy fuel oil un­der emer­gency in­struc­tion from Ireland’s trans­mis­sion sys­tem op­er­a­tor EirGrid un­til 2029.

This strat­egy is in line with pre­vi­ous plans made by EirGrid and ESB to exit coal-fired gen­er­a­tion by the end of 2025, which stip­u­lated that Moneypoint would no longer be ac­tive in the whole­sale elec­tric­ity mar­ket.

“Ireland has quietly rewritten its energy story, replacing toxic coal with homegrown renewable power,” said Alexandru Mustață, campaigner on coal and gas at Europe’s Beyond Fossil Fuels.

“But this isn’t ‘job done’. The government’s priority now must be building a power system for a renewable future; one with the storage, flexibility, and grid infrastructure needed to run fully on clean, domestic renewable electricity,” Mustață warned.

Jerry Mac Evilly, Campaigns Director at Friends of the Earth Ireland, ap­pealed to the gov­ern­ment to en­sure oil backup at Moneypoint is kept to an ab­solute min­i­mum and ul­ti­mately de­com­mis­sioned. He also ap­pealed for the gov­ern­ment to pre­vent fur­ther de­vel­op­ment of data cen­ters, which he said are in­creas­ing Ireland’s re­liance on fos­sil gas.

“We also can’t ignore that the government is targeting the installation of at least 2 GW of gas power plants with no strategy to reduce Ireland’s dangerous gas dependency,” he added.

On a broader level, Ireland’s step to close coal power gen­er­a­tion at Moneypoint sets a prece­dent for fur­ther European coun­tries’ coal ex­its to come, says Beyond Fossil Fuels. The group tracks European coun­tries’ progress on their com­mit­ments to switch­ing from fos­sil fu­els to re­new­able en­ergy. So far, 23 European coun­tries have com­mit­ted to coal phase-outs. Italy is ex­pected to com­plete its main­land coal phase-out this sum­mer with the up­com­ing clo­sure of its last two big coal power plants, while main­land Spain is also ex­pect­ing to de­clare it­self coal-free this sum­mer.

...

Read the original on www.pv-magazine.com »

2 488 shares, 29 trendiness

Building a Procedural Hex Map with Wave Function Collapse

I’ve been ob­sessed with pro­ce­dural maps since I was a kid rolling dice on the ran­dom dun­geon ta­bles in the AD&D Dungeon Master’s Guide. There was some­thing mag­i­cal about it — you did­n’t de­sign the dun­geon, you dis­cov­ered it, one room at a time, and the dice de­cided whether you got a trea­sure cham­ber or a dead end full of rats.

Years later, I de­cided to build my own map gen­er­a­tor. It cre­ates lit­tle me­dieval is­land worlds — with roads, rivers, coast­lines, cliffs, forests, and vil­lages — en­tirely pro­ce­du­rally. Built with Three.js WebGPU and TSL shaders, about 4,100 hex cells across 19 grids, gen­er­ated in ~20 sec­onds.

If you’ve ever played Carcassonne, you al­ready un­der­stand WFC. You have a stack of tiles and place them so every­thing lines up. Each tile has edges — grass, road, city. Adjacent tiles must have match­ing edges. A road edge must con­nect to an­other road edge. Grass must meet grass. The only dif­fer­ence is that the com­puter does it faster, and com­plains less when it gets stuck.

The twist: hex tiles have 6 edges in­stead of 4. That’s 50% more con­straints per tile, and the com­bi­na­to­r­ial ex­plo­sion is real. Square WFC is well-trod­den ter­ri­tory. Hex WFC is… less so.
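The Carcassonne-style matching rule is easy to state in code. A minimal sketch with a hypothetical two-tile set (tile names and edge labels are illustrative, not the author's actual data):

```python
# Each tile lists its 6 edge types in clockwise neighbor order.
# Two tiles may sit next to each other only if the touching edges
# share the same type.
TILES = {
    "grass":   ["grass"] * 6,
    "road_ew": ["grass", "road", "grass", "grass", "road", "grass"],
}

def compatible(tile_a: str, dir_a: int, tile_b: str) -> bool:
    """Can tile_b sit in direction dir_a from tile_a?
    The touching edges are edge dir_a of A and edge (dir_a + 3) % 6 of B."""
    return TILES[tile_a][dir_a] == TILES[tile_b][(dir_a + 3) % 6]
```

With six directions instead of four, the opposite-edge index is (d + 3) % 6 rather than (d + 2) % 4, and every extra edge multiplies the number of pairings the solver has to check.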

WFC is re­li­able for small grids. But as the grid gets big­ger, the chance of paint­ing your­self into a dead end goes up fast. A 217-cell hex grid al­most never fails. A 4123-cell grid fails reg­u­larly.

The so­lu­tion: mod­u­lar WFC. Instead of one gi­ant solve, the map is split into 19 hexag­o­nal grids arranged in two rings around a cen­ter — about 4,100 cells to­tal. Each grid is solved in­de­pen­dently, but it has to match what­ever tiles were al­ready placed in neigh­bor­ing grids. Those bor­der tiles be­come fixed con­straints.
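The solve order can be sketched like this (function and key names are hypothetical, not the project's actual API): each grid is solved on its own, seeded with whatever tiles already exist along shared borders.

```python
def solve_all(grids, solve_one):
    """Solve the grids in order (center first, then the rings).
    solve_one(grid, fixed) runs an ordinary WFC pass on one grid with
    some cells pre-collapsed, and returns {cell: tile} for that grid."""
    placed = {}  # global cell -> tile, accumulated across grids
    for grid in grids:
        # Border cells already decided by a neighboring grid enter
        # this solve as fixed constraints.
        fixed = {c: placed[c] for c in grid["border_cells"] if c in placed}
        placed.update(solve_one(grid, fixed))
    return placed
```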

And some­times those con­straints are sim­ply in­com­pat­i­ble. No amount of back­track­ing in­side the cur­rent grid can fix a prob­lem that was baked in by a neigh­bor. This is where I spent a lot of dev time.

Here’s the dirty se­cret of WFC: it fails. A lot. You make a se­ries of ran­dom choices, prop­a­gate con­straints, and even­tu­ally back your­self into a cor­ner where some cell has zero valid op­tions left. Congratulations, the puz­zle is un­solv­able.

The textbook solution is backtracking — undo your last decision and try a different tile. My solver tracks every possibility it removes during propagation (a “trail” of deltas), so it can rewind cheaply without copying the entire grid state. It’ll try up to 500 backtracks before giving up.
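The delta-trail idea can be sketched like this (a guess at the shape of such a solver, not the author's actual code): every removed possibility is logged, so undoing a bad guess is just replaying the log backwards.

```python
class Trail:
    """Log of (cell, option) removals, with one mark per decision point."""

    def __init__(self):
        self.deltas = []   # every (cell, option) removed by propagation
        self.marks = []    # log lengths at each decision point

    def mark(self):
        # Call before each guess; remembers how long the log was.
        self.marks.append(len(self.deltas))

    def record(self, cell, option):
        # Call whenever propagation removes an option from a cell's domain.
        self.deltas.append((cell, option))

    def rewind(self, domains):
        # Undo every removal since the last mark; no grid copy needed.
        start = self.marks.pop()
        while len(self.deltas) > start:
            cell, option = self.deltas.pop()
            domains[cell].add(option)
```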

But back­track­ing alone is­n’t enough. The real prob­lem is cross-grid bound­aries.

After many failed ap­proaches, I landed on a lay­ered re­cov­ery sys­tem:

Layer 1: Unfixing. During the initial constraint propagation, if a neighbor cell creates a contradiction, the solver converts it from a fixed constraint back into a solvable cell. Its own neighbors (two cells out — “anchors”) become the new constraints. This is cheap and handles easy cases.

Layer 2: Local-WFC. If the main solve fails, the solver runs a mini-WFC on a small ra­dius-2 re­gion around the prob­lem area — re-solv­ing 19 cells in the over­lap area to cre­ate a more com­pat­i­ble bound­ary. Up to 5 at­tempts, each tar­get­ing a dif­fer­ent prob­lem cell. Local-WFC was the break­through. Instead of try­ing to solve the im­pos­si­ble, go back and change the prob­lem.

Layer 3: Drop and hide. Last re­sort. Drop the of­fend­ing neigh­bor cell en­tirely and place moun­tain tiles to cover the seams. Mountains are great — their cliff edges match any­thing, and they look in­ten­tional. Nobody ques­tions a moun­tain.

This map is­n’t flat — it has 5 lev­els of el­e­va­tion. Ocean and Grass start at level 0, but slopes and cliffs can move up or down a level. Low slopes go up 1 level, high slopes go up 2 lev­els. A road tile at level 3 needs to con­nect to an­other road tile at level 3, or a slope tile that tran­si­tions be­tween lev­els. Get it wrong and you end up with roads that dead-end into cliff faces or rivers flow­ing up­hill into the sky. The el­e­va­tion axis turns a 2D con­straint prob­lem into a 3D one, and it’s where a lot of the tile va­ri­ety (and a lot of the solver fail­ures) comes from.
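With elevation in play, an edge effectively carries two values: its terrain type and its level, and both must agree across a border. A toy illustration (made-up tiles, not the real tile set):

```python
# An edge is (terrain, level). Matching requires both components to
# agree, which is how a slope tile works: its low edge might be
# ("road", 2) and its high edge ("road", 3), making it the only legal
# bridge between the two levels.

def edges_match(edge_a, edge_b) -> bool:
    terrain_a, level_a = edge_a
    terrain_b, level_b = edge_b
    return terrain_a == terrain_b and level_a == level_b
```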

The bet­ter ap­proach: cube co­or­di­nates (q, r, s where s = -q-r). It’s a 3D co­or­di­nate sys­tem for the three hex axes. Neighbor find­ing be­comes triv­ial — just add or sub­tract 1 from two co­or­di­nates.
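In cube coordinates the invariant q + r + s = 0 always holds, and each of the six neighbors is one fixed offset away. A sketch of the standard formulation:

```python
# The six unit offsets; each sums to zero, so neighbor moves preserve
# the q + r + s == 0 invariant.
CUBE_DIRECTIONS = [
    (1, -1, 0), (1, 0, -1), (0, 1, -1),
    (-1, 1, 0), (-1, 0, 1), (0, -1, 1),
]

def neighbor(cell, direction):
    """Return the hex one step away in the given direction (0-5)."""
    q, r, s = cell
    dq, dr, ds = CUBE_DIRECTIONS[direction]
    return (q + dq, r + dr, s + ds)
```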

The good news is that WFC does­n’t re­ally care about geom­e­try. It’s con­cerned with which edges match which — it’s es­sen­tially a graph prob­lem. The hex co­or­di­nates only mat­ter for ren­der­ing and for the multi-grid lay­out, where the 19 grids are them­selves arranged as a hex-of-hexes with their own off­set po­si­tions.

If you’ve ever worked with hex grids, you owe Amit Patel at Red Blob Games a debt of grat­i­tude. His hex grid guide is the de­fin­i­tive ref­er­ence.

Trees, Buildings, and Why Not Everything Should Be WFC

Early on, I tried us­ing WFC for tree and build­ing place­ment. Bad idea. WFC is great at lo­cal edge match­ing but ter­ri­ble at large-scale pat­terns. You’d get trees scat­tered ran­domly in­stead of clus­tered into forests, or build­ings spread evenly in­stead of gath­ered into vil­lages.

The solution: good old Perlin noise. A global noise field determines tree density and building placement, completely separate from WFC. Areas where the noise is above a threshold get trees; a slightly different noise field drives buildings. This gives you organic clustering — forests, clearings, villages — that WFC could never produce. I also used some additional logic to place buildings at the ends of roads, ports and windmills on coasts, henges on hilltops, etc.
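The thresholding itself is the simple part. A sketch using a cheap hash-based stand-in for Perlin noise (a real Perlin field is smooth, which is what produces the clustering; the threshold value here is made up):

```python
import math

def pseudo_noise(x: float, y: float) -> float:
    """Deterministic value in [0, 1); a stand-in for a real Perlin field."""
    n = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
    return n - math.floor(n)

TREE_THRESHOLD = 0.6   # illustrative cutoff

def place_tree(x: float, y: float) -> bool:
    # Above the threshold: forest. Below: clearing.
    return pseudo_noise(x, y) > TREE_THRESHOLD
```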

WFC han­dles the ter­rain. Noise han­dles the dec­o­ra­tions. Each tool does what it’s good at.

I wanted that ‘Zelda: The Wind Waker’ cartoon shimmer on the water surface. Originally I tried generating caustics procedurally with four layers of Voronoi noise. This turned out to be very GPU-heavy and did not look great. The solution was sampling a small scrolling caustic texture with a simple noise mask, which looks way better and is super cheap. Sometimes the easy solution is the correct solution.

Waves are sine bands that radiate outward from coastlines, inspired by Bad North’s gorgeous shoreline effect. To know “how far from the coast” each pixel is, the system renders a coast mask — a top-down orthographic render of the entire map with white for land and black for water — then dilates and blurs it into a gradient. The wave shader reads this gradient to place animated sine bands at regular distance intervals, with noise to break up the pattern.
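Turning the gradient into moving bands is essentially one sine per pixel. A rough CPU-side sketch of the idea (the real effect lives in a TSL shader; the names and constants here are illustrative):

```python
import math

def wave_intensity(gradient: float, t: float,
                   bands: float = 8.0, speed: float = 0.5) -> float:
    """gradient: 0..1 distance-from-coast value from the blurred mask.
    Returns a 0..1 brightness; bands repeat at regular gradient
    intervals and scroll outward as time t advances."""
    phase = gradient * bands * 2.0 * math.pi - t * speed
    return 0.5 + 0.5 * math.sin(phase)
```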

This worked great on straight coast­lines. In con­cave coves and in­lets, the wave lines got thick and ugly. The blur-based gra­di­ent spreads the same value range over a wider phys­i­cal area in coves, stretch­ing the wave bands out.

I tried screen-space derivatives to detect gradient stretching: they worked at one zoom level and broke at others.

I also tried texture-space gradient magnitude to detect opposing coast edges canceling out: it only detected narrow rivers, not the actual problem coves.

The fundamental issue: blur encodes “how much land is nearby,” not “how far is the nearest coast edge.” These are different questions, and no amount of post-processing the blur can extract true distance.

The solution was a CPU-side “surroundedness” probe that checks each water cell’s neighbors to detect coves, writing a separate mask texture that thins the waves in enclosed areas. It’s kind of a hack, but it works, and the wave bands thin out nicely in coves.
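The probe itself can be as simple as counting land neighbors. A sketch in cube coordinates (the threshold and names are illustrative):

```python
# The six cube-coordinate neighbor offsets for a hex cell.
HEX_DIRS = [(1, -1, 0), (1, 0, -1), (0, 1, -1),
            (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

def surroundedness(cell, land) -> float:
    """Fraction of a water cell's six neighbors that are land cells."""
    q, r, s = cell
    return sum(((q + dq, r + dr, s + ds) in land)
               for dq, dr, ds in HEX_DIRS) / 6.0

def in_cove(cell, land, threshold=0.5) -> bool:
    # Mostly enclosed by land: thin the wave bands here.
    return surroundedness(cell, land) >= threshold
```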

No dice re­quired this time — but the feel­ing is the same. You hit a but­ton, the map builds it­self, and you dis­cover what the al­go­rithm de­cided to put there. It’s su­per sat­is­fy­ing to see the road and river sys­tems match­ing up per­fectly. Every time it’s dif­fer­ent, and every time I find my­self ex­plor­ing for a while. The kid rolling dice on the dun­geon ta­bles would be into this.

...

Read the original on felixturner.github.io »

3 458 shares, 23 trendiness

AI reimplementation and the erosion of copyleft

Last week, Dan Blanchard, the main­tainer of chardet—a Python li­brary for de­tect­ing text en­cod­ings used by roughly 130 mil­lion pro­jects a month—re­leased a new ver­sion. Version 7.0 is 48 times faster than its pre­de­ces­sor, sup­ports mul­ti­ple cores, and was re­designed from the ground up. Anthropic’s Claude is listed as a con­trib­u­tor. The li­cense changed from LGPL to MIT.

Blanchard’s account is that he never looked at the existing source code directly. He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch. The resulting code shares less than 1.3% similarity with any prior version, as measured by JPlag. His conclusion: this is an independent new work, and he is under no obligation to carry forward the LGPL.

Mark Pilgrim, the library’s original author, opened a GitHub issue to object. The LGPL requires that modifications be distributed under the same license, and a reimplementation produced with ample exposure to the original codebase cannot, in Pilgrim’s view, pass as a clean-room effort.

The dispute drew responses from two prominent figures in the open source world. Armin Ronacher, the creator of Flask, welcomed the relicensing. Salvatore Sanfilippo (antirez), the creator of Redis, published a broader defense of AI reimplementation, grounding it in copyright law and the history of the GNU project. Both conclude, by different routes, that what Blanchard did is legitimate. I respect both writers, and I think both are wrong—or more precisely, both are evading the question that actually matters.

That question is this: does legal mean legitimate? Neither piece answers it. Both move from “this is legally permissible” to “this is therefore fine,” without pausing at the gap between those two claims. Law sets a floor; clearing it does not mean the conduct is right. That gap is where this essay begins.

Antirez builds his case on history. When the GNU project reimplemented the UNIX userspace, it was lawful. So was Linux. Copyright law prohibits copying “protected expressions”—the actual code, its structure, its specific mechanisms—but it does not protect ideas or behavior. AI-assisted reimplementation occupies the same legal ground. Therefore, it is lawful.

The le­gal analy­sis is largely cor­rect, and I am not dis­put­ing it. The prob­lem lies in what an­ti­rez does next: he pre­sents the le­gal con­clu­sion as if it were also a so­cial one, and uses a his­tor­i­cal anal­ogy that, ex­am­ined more care­fully, ar­gues against his own po­si­tion.

When GNU reim­ple­mented the UNIX user­space, the vec­tor ran from pro­pri­etary to free. Stallman was us­ing the lim­its of copy­right law to turn pro­pri­etary soft­ware into free soft­ware. The eth­i­cal force of that pro­ject did not come from its le­gal per­mis­si­bil­ity—it came from the di­rec­tion it was mov­ing, from the fact that it was ex­pand­ing the com­mons. That is why peo­ple cheered.

The vec­tor in the chardet case runs the other way. Software pro­tected by a copy­left li­cense—one that guar­an­tees users the right to study, mod­ify, and re­dis­trib­ute de­riv­a­tive works un­der the same terms—has been reim­ple­mented un­der a per­mis­sive li­cense that car­ries no such guar­an­tee. This is not a reim­ple­men­ta­tion that ex­pands the com­mons. It is one that re­moves the fenc­ing that pro­tected the com­mons. Derivative works built on chardet 7.0 are un­der no oblig­a­tion to share their source code. That oblig­a­tion, which ap­plied to a li­brary down­loaded 130 mil­lion times a month, is now gone.

Antirez does not ad­dress this di­rec­tional dif­fer­ence. He in­vokes the GNU prece­dent, but that prece­dent is a coun­terex­am­ple to his con­clu­sion, not a sup­port­ing one.

Ronacher’s argument is different. He discloses upfront that he has a stake in the outcome: “I personally have a horse in the race here because I too wanted chardet to be under a non-GPL license for many years. So consider me a very biased person in that regard.” He goes on to write that he considers the GPL “to run against that spirit by restricting what can be done with it”—the spirit being that society is better off when we share.

This claim rests on a fun­da­men­tal mis­read­ing of what the GPL does.

Start with what the GPL actually prohibits. It does not prohibit keeping source code private. It imposes no constraint on privately modifying GPL software and using it yourself. The GPL’s conditions are triggered only by distribution. If you distribute modified code, or offer it as a networked service, you must make the source available under the same terms. This is not a restriction on sharing. It is a condition placed on sharing: if you share, you must share in kind.

The re­quire­ment that im­prove­ments be re­turned to the com­mons is not a mech­a­nism that sup­presses shar­ing. It is a mech­a­nism that makes shar­ing re­cur­sive and self-re­in­forc­ing. The claim that im­pos­ing con­tri­bu­tion oblig­a­tions on users of a com­mons un­der­mines shar­ing cul­ture does not hold to­gether log­i­cally.

The contrast with the MIT license clarifies the point. Under MIT, anyone may take code, improve it, and close it off into a proprietary product. You can receive from the commons without giving back. If Ronacher calls this structure “more share-friendly,” he is using a concept of sharing with a specific directionality built in: sharing flows toward whoever has more capital and more engineers to take advantage of it.

The his­tor­i­cal record bears this out. In the 1990s, com­pa­nies rou­tinely ab­sorbed GPL code into pro­pri­etary prod­ucts—not be­cause they had cho­sen per­mis­sive li­censes, but be­cause copy­left en­force­ment was slack. The strength­en­ing of copy­left mech­a­nisms closed that gap. For in­di­vid­ual de­vel­op­ers and small pro­jects with­out the re­sources to com­pete on any­thing but rec­i­proc­ity, copy­left was what made the ex­change ap­prox­i­mately fair.

The cre­ator of Flask knows this dis­tinc­tion. If he elides it any­way, the ar­gu­ment is not naïve—it is con­ve­nient.

The most interesting moment in Ronacher’s piece is not the argument but a detail he mentions in passing: Vercel reimplemented GNU Bash using AI and published it, then got visibly upset when Cloudflare reimplemented Next.js the same way.

Ronacher notes this as an irony and moves on. But the irony cuts deeper than he lets on. Next.js is MIT li­censed. Cloudflare’s vinext did not vi­o­late any li­cense—it did ex­actly what Ronacher calls a con­tri­bu­tion to the cul­ture of open­ness, ap­plied to a per­mis­sively li­censed code­base. Vercel’s re­ac­tion had noth­ing to do with li­cense in­fringe­ment; it was purely com­pet­i­tive and ter­ri­to­r­ial. The im­plicit po­si­tion is: reim­ple­ment­ing GPL soft­ware as MIT is a vic­tory for shar­ing, but hav­ing our own MIT soft­ware reim­ple­mented by a com­peti­tor is cause for out­rage. This is what the claim that per­mis­sive li­cens­ing is more share-friendly” than copy­left looks like in prac­tice. The spirit of shar­ing, it turns out, runs in one di­rec­tion only: out­ward from one­self.

Ronacher registers the contradiction and does not stop. “This development plays into my worldview,” he writes. When you present evidence that cuts against your own position, acknowledge it, and then proceed to your original conclusion unchanged, that is a signal that the conclusion preceded the argument.

Back to the ques­tion posed at the start. Is le­gal the same as le­git­i­mate?

Antirez closes his careful legal analysis as though it settles the matter. Ronacher acknowledges that there is “an obvious moral question here, but that isn’t necessarily what I’m interested in.” Both pieces treat legal permissibility as a proxy for social legitimacy. But law only says what conduct it will not prevent—it does not certify that conduct as right. Aggressive tax minimization that never crosses into illegality may still be widely regarded as antisocial. A pharmaceutical company that legally acquires a patent on a long-generic drug and raises the price a hundredfold has done something legal, but not therefore fine. Legality is a necessary condition; it is not a sufficient one.

In the chardet case, the dis­tinc­tion is sharper still. What the LGPL pro­tected was not Blanchard’s la­bor alone. It was a so­cial com­pact agreed to by every­one who con­tributed to the li­brary over twelve years. The terms of that com­pact were: if you take this and build on it, you share back un­der the same terms. This com­pact op­er­ated as a le­gal in­stru­ment, yes, but it was also the foun­da­tion of trust that made con­tri­bu­tion ra­tio­nal. The fact that a reim­ple­men­ta­tion may qual­ify legally as a new work, and the fact that it breaks faith with the orig­i­nal con­trib­u­tors, are sep­a­rate ques­tions. If a court even­tu­ally rules in Blanchard’s fa­vor, that rul­ing will tell us what the law per­mits. It will not tell us that the act was right.

Zoë Kooyman, executive director of the FSF, put it plainly: “Refusing to grant others the rights you yourself received as a user is highly antisocial, no matter what method you use.”

Reading this de­bate, I keep re­turn­ing to a ques­tion about po­si­tion. From where are these two writ­ers look­ing at the sit­u­a­tion?

Antirez created Redis. Ronacher created Flask. Both are figures at the center of the open source ecosystem, with large audiences and well-established reputations. For them, the falling cost of AI reimplementation means something specific: it is easier to reimplement things they want in a different form. Ronacher says explicitly that he had begun reimplementing GNU Readline precisely because of its copyleft terms.

For the peo­ple who have spent years con­tribut­ing to a li­brary like chardet, the same shift in costs means some­thing else en­tirely: the copy­left pro­tec­tion around their con­tri­bu­tions can be re­moved. The two writ­ers are speak­ing from the for­mer po­si­tion to peo­ple in the lat­ter, telling them that this was al­ways law­ful, that his­tor­i­cal prece­dent sup­ports it, and that the ap­pro­pri­ate re­sponse is adap­ta­tion.

When po­si­tional asym­me­try of this kind is ig­nored, and the ar­gu­ment is pre­sented as uni­ver­sal analy­sis, what you get is not analy­sis but ra­tio­nal­iza­tion. Both writ­ers ar­rive at con­clu­sions that align pre­cisely with their own in­ter­ests. Readers should hold that fact in mind.

Bruce Perens, who wrote the original Open Source Definition, said: “The entire economics of software development are dead, gone, over, kaput!” He meant it as an alarm. Antirez, from a similar assessment of the situation, draws the conclusion: adapt. Ronacher says he finds the direction exciting.

None of the three re­sponses ad­dresses the cen­tral ques­tion. When copy­left be­comes tech­ni­cally eas­ier to cir­cum­vent, does that make it less nec­es­sary, or more?

I think more. What the GPL pro­tected was not the scarcity of code but the free­dom of users. The fact that pro­duc­ing code has be­come cheaper does not make it ac­cept­able to use that code as a ve­hi­cle for erod­ing free­dom. If any­thing, as the fric­tion of reim­ple­men­ta­tion dis­ap­pears, so does the fric­tion of strip­ping copy­left from any­thing left ex­posed. The ero­sion of en­force­ment ca­pac­ity is a le­gal prob­lem. It does not touch the un­der­ly­ing nor­ma­tive judg­ment.

That judg­ment is this: those who take from the com­mons owe some­thing back to the com­mons. The prin­ci­ple does not change de­pend­ing on whether a reim­ple­men­ta­tion takes five years or five days. No court rul­ing on AI-generated code will al­ter its so­cial weight.

This is where law and com­mu­nity norms di­verge. Law is made slowly, af­ter the fact, re­flect­ing ex­ist­ing power arrange­ments. The norms that open source com­mu­ni­ties built over decades did not wait for court ap­proval. People chose the GPL when the law of­fered them no guar­an­tee of its en­force­ment, be­cause it ex­pressed the val­ues of the com­mu­ni­ties they wanted to be­long to. Those val­ues do not ex­pire when the law changes.

In pre­vi­ous writ­ing, I ar­gued for a train­ing copy­left (TGPL) as the next step in this line of de­vel­op­ment. The chardet sit­u­a­tion sug­gests the ar­gu­ment has to go fur­ther: to a spec­i­fi­ca­tion copy­left cov­er­ing the layer be­low source code. If source code can now be gen­er­ated from a spec­i­fi­ca­tion, the spec­i­fi­ca­tion is where the es­sen­tial in­tel­lec­tual con­tent of a GPL pro­ject re­sides. Blanchard’s own claim—that he worked only from the test suite and API with­out read­ing the source—is, para­dox­i­cally, an ar­gu­ment for pro­tect­ing that test suite and API spec­i­fi­ca­tion un­der copy­left terms.

The his­tory of the GPL is the his­tory of li­cens­ing tools evolv­ing in re­sponse to new forms of ex­ploita­tion: GPLv2 to GPLv3, then AGPL. What drove each evo­lu­tion was not a court rul­ing but a com­mu­nity reach­ing a value judg­ment first and then seek­ing le­gal in­stru­ments to ex­press it. The same se­quence is avail­able now. Whatever courts even­tu­ally de­cide about AI reim­ple­men­ta­tion, the ques­tion we need to an­swer first is not a le­gal one. It is a so­cial one. Do those who take from the com­mons owe some­thing back? I think they do. That judg­ment does not re­quire a ver­dict.

What makes the pieces by an­ti­rez and Ronacher worth read­ing is not that they are right. It is that they make vis­i­ble, with un­usual clar­ity, what they are choos­ing not to see. When le­gal­ity is used as a sub­sti­tute for a value judg­ment, the ques­tion that ac­tu­ally mat­ters gets buried in the foot­notes of a law it has al­ready out­grown.

...

Read the original on writings.hongminhee.org »

4 427 shares, 23 trendiness

Judge dismisses red-light camera ticket, rules law is unconstitutional

A Broward County judge has dis­missed a red-light cam­era ticket, rul­ing that the state law used to is­sue the ci­ta­tion im­prop­erly shifts the bur­den of proof onto ve­hi­cle own­ers.

In a 21-page or­der signed March 3, Judge Steven P. DeLuca granted a de­fen­dan­t’s mo­tion to dis­miss a photo-en­forced traf­fic ci­ta­tion is­sued un­der Florida’s red-light cam­era law.

The case in­volved a Sunrise red-light cam­era ci­ta­tion is­sued to a reg­is­tered ve­hi­cle owner af­ter au­to­mated cam­eras cap­tured a ve­hi­cle en­ter­ing an in­ter­sec­tion against a red sig­nal. The de­fen­dant ar­gued the statute un­con­sti­tu­tion­ally re­quires the reg­is­tered owner to prove they were not dri­ving — in­stead of re­quir­ing the gov­ern­ment to prove who was be­hind the wheel.

In the order, the court found that red-light camera cases, although labeled as civil infractions, function as “quasi-criminal” proceedings because they can result in monetary penalties, a formal finding of guilt, and consequences tied to a driver’s record.

Under Florida Statute 316.0083, once a cam­era cap­tures a vi­o­la­tion, the reg­is­tered owner is pre­sumed re­spon­si­ble un­less they sub­mit an af­fi­davit iden­ti­fy­ing an­other dri­ver. The court ruled that the frame­work im­prop­erly shifts the bur­den of proof away from the state.

Because traffic infractions that move to county court must be proven “beyond a reasonable doubt,” the judge wrote that the statute’s presumption violates constitutional due process protections.

As a re­sult, the ci­ta­tion in this case was for­mally dis­missed.

Click here to view the PDF file

Joel Mumford is an attorney with The Ticket Clinic. He said that if the case is quasi-criminal, meaning almost criminal or criminal-like, the state has to follow procedural due process.

“The state or the agencies that issue the tickets, it’s their burden to prove all the elements of the crime beyond and to the extent of each and every reasonable doubt. And the first element, which should be, who’s driving the car,” Mumford said. “The statute in Florida presumes that the registered owner is the driver of the car.”

Mumford said even though the court order only applies in Broward County, it could open the door for challenges in other parts of the state, with other counties following suit.

“What could probably happen is the court could then get an appeal, and then if it goes up to the district court of appeal level and they make a decision on it, depending on what that says, that could then be applied to the entire state, if there’s no similar appellate cases throughout the state. So that could make it statewide,” he said.

Drivers in Boynton Beach want to see the red-light cam­eras gone.

Fifteen red-light cam­era sys­tems are al­ready run­ning at seven in­ter­sec­tions in Boynton Beach alone. Congress Avenue and Gateway Boulevard is just one of them.

“I’ve been ticketed here twice, and it’s ridiculous because it’s just not fair,” said one driver who didn’t want to be identified. “The person that does the determination when you ran the light, it’s just random. Whoever they want to pick, they pick you to say, okay, you’re gonna pay the ticket.”

That driver had to pay his $158 ticket, but he’s hopeful Palm Beach County can soon follow suit. He wants red-light cameras gone, so future violations could be thrown out.

“I think they need to outlaw it and get rid of it,” he said.

Advocacy group StopTheCams, which has long op­posed au­to­mated traf­fic en­force­ment, called the rul­ing a ma­jor vic­tory. In a press re­lease, the group said the de­ci­sion con­firms what crit­ics have ar­gued for years — that red-light cam­era laws pun­ish ve­hi­cle own­ers with­out re­quir­ing proof they com­mit­ted the vi­o­la­tion.

Supporters of red-light cam­eras ar­gue the sys­tems im­prove safety by de­ter­ring dan­ger­ous dri­ving at in­ter­sec­tions. Florida’s red-light cam­era law, known as the Mark Wandall Traffic Safety Act, al­lows lo­cal gov­ern­ments to use au­to­mated en­force­ment sys­tems.

It re­mains un­clear whether the rul­ing will be ap­pealed or how broadly it could af­fect sim­i­lar cases statewide. For now, the de­ci­sion ap­plies to this spe­cific case in Broward County, but le­gal ob­servers say it could fuel re­newed chal­lenges to Florida’s red-light cam­era en­force­ment sys­tem.

...

Read the original on cbs12.com »

5 370 shares, 19 trendiness

A New Chapter for Bluesky

After sev­eral in­tense and in­cred­i­ble years build­ing Bluesky from the ground up, I’ve de­cided to step back as CEO and tran­si­tion to a new role as Bluesky’s Chief Innovation Officer.

In 2019, I set out to build an open pro­to­col for so­cial me­dia, with the goal of en­abling the de­vel­op­ment of a new gen­er­a­tion of ap­pli­ca­tions de­signed to em­power users. The Bluesky app started off as a ref­er­ence client for the pro­to­col, but has since grown into a thriv­ing plat­form with over 40 mil­lion users. Last year, we grew a world-class team, ex­panded the AT Protocol ecosys­tem, and proved that a val­ues-dri­ven so­cial net­work could thrive at scale.

As Bluesky ma­tures, the com­pany needs a sea­soned op­er­a­tor fo­cused on scal­ing and ex­e­cu­tion, while I re­turn to what I do best: build­ing new things. As part of this tran­si­tion, Toni Schneider, for­mer CEO of Automattic and part­ner at True Ventures, will join our team as in­terim CEO, while our board runs a search for a per­ma­nent chief ex­ec­u­tive.

Toni be­lieves deeply in the Bluesky mis­sion, and has been an ad­vi­sor to the com­pany and me per­son­ally for over a year. Both Automattic and True Ventures are also in­vestors in Bluesky, and sup­port the de­vel­op­ment of a more open, user-dri­ven in­ter­net. Automattic, the com­pany be­hind WordPress.com, has built their busi­ness on open source soft­ware, work­ing to make the web a bet­ter and more par­tic­i­pa­tory place. Toni was a key part of guid­ing that mis­sion, and is some­one who leads with gen­uine cu­rios­ity, takes the long view, and deeply un­der­stands what it means to build a com­pany around a mis­sion. I am con­fi­dent that he is the right per­son to lead us into this next chap­ter.

Scaling up this com­pany has been a learn­ing ex­pe­ri­ence un­like any­thing else. I’ve grown a lot as a leader and had the priv­i­lege of as­sem­bling the best team I’ve ever worked with. As we’ve grown, I’ve found that peo­ple thrive when they’re in a role where their pas­sions over­lap with their strengths. This is as true for me as it is for our team. I’m most en­er­gized by ex­plor­ing new ideas, bring­ing a vi­sion to life, and help­ing peo­ple dis­cover their strengths. Transitioning to a more fo­cused role where I can do what brings me en­ergy is my way of putting that be­lief into prac­tice.

I’m ex­cited to dig into the next fron­tier of what de­cen­tral­ized so­cial can be, while bring­ing Toni in to sup­port our team as an ex­pe­ri­enced op­er­a­tor and leader. The work that got us here was just the be­gin­ning, and I’m grate­ful to keep build­ing along­side this team and com­mu­nity.

...

Read the original on bsky.social »

6 352 shares, 26 trendiness

Oracle is building yesterday’s data centers with tomorrow’s debt

Artificial in­tel­li­gence chips are get­ting up­graded more quickly than data cen­ters can be built, a mar­ket re­al­ity that ex­poses a key risk to the AI trade and Oracle’s debt-fu­eled ex­pan­sion.

OpenAI is no longer plan­ning to ex­pand its part­ner­ship with Oracle in Abilene, Texas, home to the Stargate data cen­ter, be­cause it wants clus­ters with newer gen­er­a­tions of Nvidia graph­ics pro­cess­ing units, ac­cord­ing to a per­son fa­mil­iar with the mat­ter.

The cur­rent Abilene site is ex­pected to use Nvidia’s Blackwell proces­sors, and the power is­n’t pro­jected to come on­line for a year. By then, OpenAI is hop­ing to have ex­panded ac­cess to Nvidia’s next-gen­er­a­tion chips in big­ger clus­ters else­where, said the per­son, who asked not to be named due to con­fi­den­tial­ity.

Bloomberg was first to report on the companies ending their plans for expansion in Abilene. In a post on X on Sunday, Oracle called news reports about the activity “false and incorrect,” but the post only said existing projects are on track and didn’t address expansion plans.

Oracle se­cured the site, or­dered the hard­ware, and spent bil­lions of dol­lars on con­struc­tion and staff, with the ex­pec­ta­tion of go­ing big­ger.

It’s a log­i­cal de­ci­sion for OpenAI, which does­n’t want older chips. Nvidia used to re­lease a new gen­er­a­tion of data cen­ter proces­sors every two years. Now, CEO Jensen Huang has the com­pany ship­ping one every year, and each gen­er­a­tion of­fers a leap in ca­pa­bil­ity. Vera Rubin, un­veiled at CES in January and al­ready in pro­duc­tion, de­liv­ers five times the in­fer­ence per­for­mance of Blackwell.

For the com­pa­nies build­ing fron­tier mod­els, the small­est im­prove­ment in per­for­mance could equate to huge gaps in model bench­marks and rank­ings, which are closely fol­lowed by de­vel­op­ers and trans­late di­rectly to us­age, rev­enue, and val­u­a­tion.

That all points to a big­ger prob­lem at play. For in­fra­struc­ture com­pa­nies, se­cur­ing a site, con­nect­ing power and stand­ing up a fa­cil­ity takes 12 to 24 months at min­i­mum. But cus­tomers want the lat­est and great­est, and they’re track­ing the yearly chip up­grades.

Oracle’s added chal­lenge is that it’s the only hy­per­scaler fund­ing its build­out pri­mar­ily with debt, to the tune of $100 bil­lion and count­ing. Google, Amazon and Microsoft, by con­trast, are lean­ing on their enor­mous cash-gen­er­at­ing busi­nesses.

Meanwhile, Oracle part­ner Blue Owl is de­clin­ing to fund an ad­di­tional fa­cil­ity, and plans to cut up to 30,000 jobs.

Oracle reports fiscal third-quarter results on Tuesday, and investors will be paying close attention to how the company addresses a $50 billion capital expenditure plan with negative free cash flow, and whether the financing pipeline can hold up.

The stock is down 23% so far this year and has lost over half its value since peak­ing in September.

Beyond Oracle, GPU de­pre­ci­a­tion is a risk for the broader mar­ket and could have ram­i­fi­ca­tions across the AI land­scape. Every in­fra­struc­ture deal signed to­day may re­sult in a com­mit­ment to out­dated hard­ware be­fore the power is even con­nected.

...

Read the original on www.cnbc.com »

7 317 shares, 18 trendiness

JSLinux

...

Read the original on bellard.org »

8 252 shares, 12 trendiness

Media Processing at Scale

FFmpeg is truly a multi-tool for me­dia pro­cess­ing. As an in­dus­try-stan­dard tool it sup­ports a wide va­ri­ety of au­dio and video codecs and con­tainer for­mats. It can also or­ches­trate com­plex chains of fil­ters for me­dia edit­ing and ma­nip­u­la­tion. For the peo­ple who use our apps, FFmpeg plays an im­por­tant role in en­abling new video ex­pe­ri­ences and im­prov­ing the re­li­a­bil­ity of ex­ist­ing ones.

Meta ex­e­cutes ffm­peg (the main CLI ap­pli­ca­tion) and ff­probe (a util­ity for ob­tain­ing me­dia file prop­er­ties) bi­na­ries tens of bil­lions of times a day, in­tro­duc­ing unique chal­lenges when deal­ing with me­dia files. FFmpeg can eas­ily per­form transcod­ing and edit­ing on in­di­vid­ual files, but our work­flows have ad­di­tional re­quire­ments to meet our needs. For many years we had to rely on our own in­ter­nally de­vel­oped fork of FFmpeg to pro­vide fea­tures that have only re­cently been added to FFmpeg, such as threaded multi-lane en­cod­ing and real-time qual­ity met­ric com­pu­ta­tion.

Over time, our in­ter­nal fork came to di­verge sig­nif­i­cantly from the up­stream ver­sion of FFmpeg. At the same time, new ver­sions of FFmpeg brought sup­port for new codecs and file for­mats, and re­li­a­bil­ity im­prove­ments, all of which al­lowed us to in­gest more di­verse video con­tent from users with­out dis­rup­tions. This ne­ces­si­tated that we sup­port both re­cent open-source ver­sions of FFmpeg along­side our in­ter­nal fork. Not only did this cre­ate a grad­u­ally di­ver­gent fea­ture set, it also cre­ated chal­lenges around safely re­bas­ing our in­ter­nal changes to avoid re­gres­sions.

As our in­ter­nal fork be­came in­creas­ingly out­dated, we col­lab­o­rated with FFmpeg de­vel­op­ers, FFlabs, and VideoLAN to de­velop fea­tures in FFmpeg that al­lowed us to fully dep­re­cate our in­ter­nal fork and rely ex­clu­sively on the up­stream ver­sion for our use cases. Using up­streamed patches and refac­tor­ings we’ve been able to fill two im­por­tant gaps that we had pre­vi­ously re­lied on our in­ter­nal fork to fill: threaded, multi-lane transcod­ing and real-time qual­ity met­rics.

When a user up­loads a video through one of our apps, we gen­er­ate a set of en­cod­ings to sup­port Dynamic Adaptive Streaming over HTTP (DASH) play­back. DASH play­back al­lows the ap­p’s video player to dy­nam­i­cally choose an en­cod­ing based on sig­nals such as net­work con­di­tions. These en­cod­ings can dif­fer in res­o­lu­tion, codec, fram­er­ate, and vi­sual qual­ity level but they are cre­ated from the same source en­cod­ing, and the player can seam­lessly switch be­tween them in real time.

In a very simple system, separate FFmpeg command lines can generate the encodings for each lane one by one, in serial. This could be optimized by running each command in parallel, but that quickly becomes inefficient due to the duplicate work done by each process.

To work around this, mul­ti­ple out­puts could be gen­er­ated within a sin­gle FFmpeg com­mand line, de­cod­ing the frames of a video once and send­ing them to each out­put’s en­coder in­stance. This elim­i­nates a lot of over­head by dedu­pli­cat­ing the video de­cod­ing and process startup time over­head in­curred by each com­mand line. Given that we process over 1 bil­lion video up­loads daily, each re­quir­ing mul­ti­ple FFmpeg ex­e­cu­tions, re­duc­tions in per-process com­pute us­age yield sig­nif­i­cant ef­fi­ciency gains.
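As an illustration of the single-command, multi-output pattern, here is a sketch of a three-lane encode. The filenames, resolutions, bitrates, and codec choices are assumptions for the example, not Meta’s actual settings:

```shell
# Hypothetical multi-lane ABR encode: the input is demuxed and decoded
# once, and the decoded frames are scaled and encoded independently for
# each output lane.
ffmpeg -i source.mp4 \
  -map 0:v -vf scale=1920:1080 -c:v libx264 -b:v 5000k lane_1080.mp4 \
  -map 0:v -vf scale=1280:720  -c:v libx264 -b:v 2500k lane_720.mp4 \
  -map 0:v -vf scale=640:360   -c:v libx264 -b:v 800k  lane_360.mp4
```

Compared with three separate invocations, this deduplicates the demux/decode work and the process startup cost, at the price of tying all three encodes to one process.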

Our in­ter­nal FFmpeg fork pro­vided an ad­di­tional op­ti­miza­tion to this: par­al­lelized video en­cod­ing. While in­di­vid­ual video en­coders are of­ten in­ter­nally multi-threaded, pre­vi­ous FFmpeg ver­sions ex­e­cuted each en­coder in se­r­ial for a given frame when mul­ti­ple en­coders were in use. By run­ning all en­coder in­stances in par­al­lel, bet­ter par­al­lelism can be ob­tained over­all.

Thanks to con­tri­bu­tions from FFmpeg de­vel­op­ers, in­clud­ing those at FFlabs and VideoLAN, more ef­fi­cient thread­ing was im­ple­mented start­ing with FFmpeg 6.0, with the fin­ish­ing touches land­ing in 8.0. This was di­rectly in­flu­enced by the de­sign of our in­ter­nal fork and was one of the main fea­tures we had re­lied on it to pro­vide. This de­vel­op­ment led to the most com­plex refac­tor­ing of FFmpeg in decades and has en­abled more ef­fi­cient en­cod­ings for all FFmpeg users.

To fully mi­grate off of our in­ter­nal fork we needed one more fea­ture im­ple­mented up­stream: real-time qual­ity met­rics.

Visual qual­ity met­rics, which give a nu­meric rep­re­sen­ta­tion of the per­ceived vi­sual qual­ity of me­dia, can be used to quan­tify the qual­ity loss in­curred from com­pres­sion. These met­rics are cat­e­go­rized as ref­er­ence or no-ref­er­ence met­rics, where the for­mer com­pares a ref­er­ence en­cod­ing to some other dis­torted en­cod­ing.

FFmpeg can com­pute var­i­ous vi­sual qual­ity met­rics such as PSNR, SSIM, and VMAF us­ing two ex­ist­ing en­cod­ings in a sep­a­rate com­mand line af­ter en­cod­ing has fin­ished. This is okay for of­fline or VOD use cases, but not for livestream­ing where we might want to com­pute qual­ity met­rics in real time.
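The offline flow looks roughly like this (filenames assumed): FFmpeg’s built-in ssim filter takes the distorted and reference streams as a pair, decodes both in a second pass, and prints an aggregate score when the run finishes.

```shell
# Post-hoc quality measurement: compare an already-produced encoding
# against its reference in a separate command line, after encoding.
ffmpeg -i distorted.mp4 -i reference.mp4 \
  -filter_complex "[0:v][1:v]ssim" -f null -
```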

To do this, we need to in­sert a video de­coder af­ter each video en­coder used by each out­put lane. These pro­vide bitmaps for each frame in the video af­ter com­pres­sion has been ap­plied so that we can com­pare against the frames be­fore com­pres­sion. In the end, we can pro­duce a qual­ity met­ric for each en­coded lane in real time us­ing a sin­gle FFmpeg com­mand line.

Thanks to “in-loop” decoding, which was enabled by FFmpeg developers including those from FFlabs and VideoLAN, beginning with FFmpeg 7.0, we no longer have to rely on our internal FFmpeg fork for this capability.

Things like real-time qual­ity met­rics while transcod­ing and more ef­fi­cient thread­ing can bring ef­fi­ciency gains to a va­ri­ety of FFmpeg-based pipelines both in and out­side of Meta, and we strive to en­able these de­vel­op­ments up­stream to ben­e­fit the FFmpeg com­mu­nity and wider in­dus­try. However, there are some patches we’ve de­vel­oped in­ter­nally that don’t make sense to con­tribute up­stream. These are highly spe­cific to our in­fra­struc­ture and don’t gen­er­al­ize well.

FFmpeg supports hardware-accelerated decoding, encoding, and filtering with devices such as NVIDIA’s NVDEC and NVENC, AMD’s Unified Video Decoder (UVD), and Intel’s Quick Sync Video (QSV). Each device is supported through an implementation of standard APIs in FFmpeg, allowing for easier integration and minimizing the need for device-specific command line flags. We’ve added support for the Meta Scalable Video Processor (MSVP), our custom ASIC for video transcoding, through these same APIs, enabling the use of common tooling across different hardware platforms with minimal platform-specific quirks.
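To show what that device-agnostic plumbing looks like in practice, here is the stock hwaccel pattern with NVIDIA’s CUDA backend (MSVP itself is not in upstream FFmpeg, so this substitutes a publicly available device; the filenames are assumptions):

```shell
# Hardware-accelerated transcode through FFmpeg's generic hwaccel APIs.
# Swapping the device type and encoder name switches vendors without
# changing the overall command structure.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
  -c:v h264_nvenc -b:v 4000k output.mp4
```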

As MSVP is only used within Meta’s own in­fra­struc­ture, it would cre­ate a chal­lenge for FFmpeg de­vel­op­ers to sup­port it with­out ac­cess to the hard­ware for test­ing and val­i­da­tion. In this case, it makes sense to keep patches like this in­ter­nal since they would­n’t pro­vide ben­e­fit ex­ter­nally. We’ve taken on the re­spon­si­bil­ity of re­bas­ing our in­ter­nal patches onto more re­cent FFmpeg ver­sions over time, uti­liz­ing ex­ten­sive val­i­da­tion to en­sure ro­bust­ness and cor­rect­ness dur­ing up­grades.

With more ef­fi­cient multi-lane en­cod­ing and real-time qual­ity met­rics, we were able to fully dep­re­cate our in­ter­nal FFmpeg fork for all VOD and livestream­ing pipelines. And thanks to stan­dard­ized hard­ware APIs in FFmpeg, we’ve been able to sup­port our MSVP ASIC along­side soft­ware-based pipelines with min­i­mal fric­tion.

FFmpeg has with­stood the test of time with over 25 years of ac­tive de­vel­op­ment. Developments that im­prove re­source uti­liza­tion, add sup­port for new codecs and fea­tures, and in­crease re­li­a­bil­ity en­able ro­bust sup­port for a wider range of me­dia. For peo­ple on our plat­forms, this means en­abling new ex­pe­ri­ences and im­prov­ing the re­li­a­bil­ity of ex­ist­ing ones. We plan to con­tinue in­vest­ing in FFmpeg in part­ner­ship with open source de­vel­op­ers, bring­ing ben­e­fits to Meta, the wider in­dus­try, and peo­ple who use our prod­ucts.

We would like to ac­knowl­edge con­tri­bu­tions from the open source com­mu­nity, our part­ners in FFlabs and VideoLAN, and many Meta en­gi­neers, in­clud­ing Max Bykov, Jordi Cenzano Ferret, Tim Harris, Colleen Henry, Mark Shwartzman, Haixia Shi, Cosmin Stejerean, Hassene Tmar, and Victor Loh.

...

Read the original on engineering.fb.com »

9 249 shares, 25 trendiness

Two Years of Emacs Solo: 35 Modules, Zero External Packages, and a Full Refactor

I’ve been maintaining Emacs Solo for a while now, and I think it’s time to talk about what happened in this latest cycle as the project reaches its two-year mark.

For those who haven’t seen it before, Emacs Solo is my daily-driver Emacs configuration with one strict rule: no external packages. Everything is either built into Emacs or written from scratch by me in the lisp/ directory. No package-install, no straight.el, no use-package :ensure t pointing at ELPA or MELPA. Just Emacs and Elisp. I’m keeping this post text only, but if you’d like to check how Emacs Solo looks and feels, the repository has screenshots and more details.

Why? Partly be­cause I wanted to un­der­stand what Emacs ac­tu­ally gives you out of the box. Partly be­cause I wanted my con­fig to sur­vive with­out break­age across Emacs re­leases. Partly be­cause I was tired of deal­ing with pack­age repos­i­to­ries, mir­rors go­ing down in the mid­dle of the work­day, na­tive com­pi­la­tion hic­cups, and the in­evitable down­time when some­thing changed some­where up­stream and my job sud­denly be­came de­bug­ging my very long (at the time) con­fig in­stead of do­ing ac­tual work. And partly, hon­estly, be­cause it’s a lot of fun!

This post cov­ers the re­cent refac­tor, walks through every sec­tion of the core con­fig, in­tro­duces all 35 self-con­tained ex­tra mod­ules I’ve writ­ten, and shares some thoughts on what I’ve learned.

Now, I’ll be the first to admit: this config is long. But there’s a principle behind it. I only add features when they are not already in Emacs core, and when I do, I try to build them myself. That means the code is sketchy sometimes, sure, but it’s in my control. I wrote it, I understand it, and when it breaks, I know exactly where to look. The refactor I’m about to describe makes this distinction crystal clear: what is “Emacs core being tweaked” versus what is “a really hacky outsider I built in because I didn’t want to live without it”.

The sin­gle biggest change in this cy­cle was ar­chi­tec­tural. Emacs Solo used to be one big init.el with every­thing crammed to­gether. That worked, but it had prob­lems:

— It was hard to nav­i­gate (even with out­line-mode)

— If some­one wanted just one piece, say my Eshell con­fig or my VC ex­ten­sions, they had to dig through thou­sands of lines

— It was dif­fi­cult to tell where configuring built-in Emacs” ended and my own hacky reim­ple­men­ta­tions” be­gan

The so­lu­tion was clean and sim­ple: split the con­fig into two lay­ers.

This file configures only built-in Emacs packages and features. Every use-package block in here has :ensure nil, because it’s pointing at something that ships with Emacs. This is pure, standard Emacs customization.

The idea is that anyone can read init.el, find a section they like, and copy-paste it directly into their own config. No dependencies. No setup. It just works, because it’s configuring things Emacs already has.

These are my own im­ple­men­ta­tions: re­place­ments for pop­u­lar ex­ter­nal pack­ages, reimag­ined as small, fo­cused Elisp files. Each one is a proper pro­vide/​re­quire mod­ule. They live un­der lisp/ and are loaded at the bot­tom of init.el via a sim­ple block:
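In outline, such a loader block adds lisp/ to the load-path and requires each module in turn (the module name below is an illustrative placeholder, not one of the real file names):

```elisp
;; Make the lisp/ directory visible to `require', then load each module.
;; The module name here is a placeholder for illustration only.
(add-to-list 'load-path (expand-file-name "lisp" user-emacs-directory))
(require 'emacs-solo-some-module)
```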

If you don’t want one of them, just com­ment out the re­quire line. If you want to use one in your own con­fig, just copy the .el file into your own lisp/ di­rec­tory and re­quire it. That’s it.

This sep­a­ra­tion made the whole pro­ject dra­mat­i­cally eas­ier to main­tain, un­der­stand, and share.

The init.el file is or­ga­nized into clearly la­beled sec­tions (using out­line-mode-friendly head­ers, so you can fold and nav­i­gate them in­side Emacs). Here’s every built-in pack­age and fea­ture it touches, and why.

The emacs use-pack­age block is the largest sin­gle sec­tion. It sets up sen­si­ble de­faults that most peo­ple would want:

— Window layout commands bound under C-x w (these are upcoming Emacs 31 features: window-layout-transpose, window-layout-rotate-clockwise, window-layout-flip-leftright, window-layout-flip-topdown)

— Disabling C-z (suspend) be­cause ac­ci­den­tally sus­pend­ing Emacs in a ter­mi­nal is never fun

— Sensible file handling: backups and auto-saves in a cache/ directory, recentf for recent files, clean buffer naming with uniquify

— Tree-sitter auto-in­stall and auto-mode (treesit-auto-install-grammar t and treesit-en­abled-modes t, both Emacs 31)

— delete-pair-push-mark, kill-re­gion-dwim, ibuffer-hu­man-read­able-size, all the small qual­ity-of-life set­tings com­ing in Emacs 31

A full ab­brev-mode setup with a cus­tom place­holder sys­tem. You de­fine ab­bre­vi­a­tions with ###1###, ###2### mark­ers, and when the ab­bre­vi­a­tion ex­pands, it prompts you to fill in each place­holder in­ter­ac­tively. The ###@### marker tells it where to leave point af­ter ex­pan­sion. I wrote a whole ar­ti­cle about it.

Configures auth-source to use ~/.authinfo.gpg for cre­den­tial stor­age. Simple but es­sen­tial if you use Gnus, ERC, or any net­work-fac­ing Emacs fea­ture.

Makes buffers au­to­mat­i­cally re­fresh when files change on disk. Essential for any Git work­flow.

Configuration file mode set­tings and a com­pi­la­tion-mode setup with ANSI color sup­port, so com­piler out­put ac­tu­ally looks read­able.

Custom win­dow man­age­ment be­yond the de­faults, be­cause Emacs win­dow man­age­ment out of the box is pow­er­ful but needs a lit­tle nudg­ing.

Tab-bar con­fig­u­ra­tion for work­space man­age­ment. Emacs has had tabs since ver­sion 27, and they’re gen­uinely use­ful once you con­fig­ure them prop­erly.

Two IRC clients, both built into Emacs, both con­fig­ured. ERC gets the big­ger treat­ment: log­ging, scroll­to­bot­tom, fill, match high­light­ing, and even in­line im­age sup­port (via one of the ex­tra mod­ules). The Emacs 31 cy­cle brought nice im­prove­ments here too, in­clud­ing a fix for the scroll­to­bot­tom/​fill-wrap de­pen­dency is­sue.

This is where Emacs Solo’s com­ple­tion story lives. Instead of reach­ing for Vertico, Consult, or Helm, I use icom­plete-ver­ti­cal-mode, which is built into Emacs. With the right set­tings it’s sur­pris­ingly ca­pa­ble:
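A representative configuration (my assumption of typical icomplete-vertical settings, not necessarily the exact ones used here) looks like:

```elisp
;; Built-in vertical minibuffer completion: no Vertico required.
(icomplete-vertical-mode 1)
(setq icomplete-show-matches-on-no-input t  ; list candidates immediately
      icomplete-compute-delay 0             ; update without delay
      icomplete-scroll t)                   ; scroll instead of rotating
```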

I’ve also been con­tribut­ing patches up­stream to im­prove icom­plete’s ver­ti­cal ren­der­ing with pre­fix in­di­ca­tors. Some of those fea­tures are al­ready land­ing in Emacs 31, which means the poly­fill code I carry to­day will even­tu­ally be­come un­nec­es­sary.

A heavily customized Dired setup. Custom listing switches, human-readable sizes, integration with system openers (open on macOS, xdg-open on Linux), and the dired-hide-details-hide-absolute-location option from Emacs 31.

Writable Dired, so you can re­name files by edit­ing the buffer di­rectly.

This one I’m par­tic­u­larly proud of. Emacs Solo’s Eshell con­fig­u­ra­tion in­cludes:

— Shared his­tory across all Eshell buffers: Every Eshell in­stance reads from and writes to a merged his­tory, so you never lose a com­mand just be­cause you ran it in a dif­fer­ent buffer

— Custom prompts: Multiple prompt styles you can tog­gle be­tween with C-c t (full vs. min­i­mal) and C-c T (lighter vs. heav­ier full prompt)
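The shared-history behavior described above can be sketched with the stock em-hist machinery (this is my assumption of the approach, not the author’s exact code): flush each buffer’s history to the shared file after every command, then re-read the merged file.

```elisp
;; Every Eshell buffer writes to `eshell-history-file-name' after each
;; command and re-reads it afterwards, so commands entered in one buffer
;; become visible in all the others.
(setq eshell-history-size 10000
      eshell-hist-ignoredups t)
(add-hook 'eshell-pre-command-hook #'eshell-save-some-history)
(add-hook 'eshell-post-command-hook
          (lambda () (eshell-read-history eshell-history-file-name)))
```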

This is one of the largest sec­tions and one I’m most in­vested in. Emacs’s built-in vc is an in­cred­i­ble piece of soft­ware that most peo­ple over­look in fa­vor of Magit. I’m not say­ing it re­places Magit en­tirely, but with the right con­fig­u­ra­tion it cov­ers 95% of daily Git op­er­a­tions:

— Git add/​re­set from vc-dir: S to stage, U to un­stage, di­rectly in the vc-dir buffer. Admittedly, I al­most never use this be­cause I’m now used to the Emacs-style VC work­flow: C-x v D or C-x v =, then killing what I don’t want, split­ting what is­n’t ready yet, and fin­ish­ing with C-c C-c. Amending with C-c C-e is awe­some. Still use­ful once or twice a se­mes­ter.

— Browse re­mote: C-x v B opens your repos­i­tory on GitHub/GitLab in a browser; with a pre­fix ar­gu­ment it jumps to the cur­rent file and line

— Jump to cur­rent hunk: C-x v = opens the diff buffer scrolled to the hunk con­tain­ing your cur­rent line

— Switch between modified files: C-x C-g lets you completing-read through all modified/untracked files in the current repo

Merge con­flict res­o­lu­tion and diff view­ing. Ediff con­fig­ured to split win­dows sanely (side by side, not in a new frame).

Documentation at point, with el­doc-help-at-pt (Emacs 31) for show­ing docs au­to­mat­i­cally.

The LSP client that ships with Emacs. Configured with:

— Custom server programs, including rassumfrassum for multiplexing TypeScript + ESLint + Tailwind (I wrote a whole post about that)

— Keybindings un­der C-c l for code ac­tions, re­name, for­mat, and in­lay hints

— Automatic enabling for all prog-mode buffers except emacs-lisp-mode and lisp-mode

Diagnostics, spell check­ing, and white­space vi­su­al­iza­tion. All built-in, all con­fig­ured.

The Emacs news­reader and email client. Configured for IMAP/SMTP us­age.

RSS/Atom feed reader built into Emacs. Customized with some extras I built myself for dealing with YouTube feeds: thumbnails, transcripts, sending to AI for a quick summary, and so on.

Org-mode con­fig­u­ra­tion, be­cause of course.

File tree nav­i­ga­tion in a side win­dow. With Emacs 31, speed­bar gained speed­bar-win­dow sup­port, so it can live in­side your ex­ist­ing frame in­stead of spawn­ing a new one.

Buffer name dis­am­bigua­tion when you have mul­ti­ple files with the same name open.

Quick web searches from the minibuffer. Configured with use­ful search en­gines.

Specific con­fig­u­ra­tions for every lan­guage I work with, or­ga­nized into three ar­eas:

Common Lisp: in­fe­rior-lisp and lisp-mode with cus­tom REPL in­ter­ac­tion, eval­u­a­tion com­mands, and a poor man’s SLIME/SLY setup that ac­tu­ally works quite well for ba­sic Common Lisp de­vel­op­ment.

Non-Tree-sitter: sass-mode for when tree-sit­ter gram­mars aren’t avail­able.

Tree-sitter modes: ruby-ts-mode, js-ts-mode, json-ts-mode, typescript-ts-mode, bash-ts-mode, rust-ts-mode, toml-ts-mode, markdown-ts-mode (Emacs 31), yaml-ts-mode, dockerfile-ts-mode, go-ts-mode. Each one configured with tree-sitter grammar sources (which Emacs 31 is starting to define internally, so those definitions will eventually become unnecessary).

This is where the fun re­ally is. Each of these is a com­plete, stand­alone Elisp file that reim­ple­ments func­tion­al­ity you’d nor­mally get from an ex­ter­nal pack­age. They’re all in lisp/ and can be used in­de­pen­dently.

I call them “hacky reimplementations” in the spirit of Emacs Solo: they’re not trying to be feature-complete replacements for their MELPA counterparts. They’re trying to be small, understandable, and good enough for daily use while keeping the config self-contained.

Custom color themes based on Modus. Provides sev­eral theme vari­ants: Catppuccin Mocha, Crafters (the de­fault), Matrix, and GITS. All built on top of Emacs’s built-in Modus themes by over­rid­ing faces, so you get the ac­ces­si­bil­ity and com­plete­ness of Modus with dif­fer­ent aes­thet­ics.

Custom mode-line for­mat and con­fig­u­ra­tion. A hand-crafted mode-line that shows ex­actly what I want: buffer state in­di­ca­tors, file name, ma­jor mode, Git branch, line/​col­umn, and noth­ing else. No doom-mod­e­line, no tele­phone-line, just for­mat strings and faces.

Enhanced nav­i­ga­tion and win­dow move­ment com­mands. Extra com­mands for mov­ing be­tween win­dows, re­siz­ing splits, and nav­i­gat­ing buffers more ef­fi­ciently.

Configurable format-on-save with a formatter registry. You register formatters by file extension (e.g., prettier for .tsx, black for .py), and the module automatically hooks into after-save-hook to format the buffer. All controllable via a defcustom, so you can toggle it on and off globally.
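The registry idea can be sketched like this (variable and function names are hypothetical, not the module’s actual API):

```elisp
;; Map file extensions to external formatter commands, run the matching
;; one after each save, then refresh the buffer from disk.
(defvar my/formatters '(("py" . "black") ("tsx" . "prettier --write")))

(defun my/format-on-save ()
  (when buffer-file-name
    (let ((cmd (cdr (assoc (file-name-extension buffer-file-name)
                           my/formatters))))
      (when cmd
        (shell-command
         (concat cmd " " (shell-quote-argument buffer-file-name)))
        ;; Reload the formatter's changes without prompting.
        (revert-buffer :ignore-auto :noconfirm)))))

(add-hook 'after-save-hook #'my/format-on-save)
```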

Frame trans­parency for GUI and ter­mi­nal. Toggle trans­parency on your Emacs frame. Works on both graph­i­cal and ter­mi­nal Emacs, us­ing the ap­pro­pri­ate mech­a­nism for each.

Sync shell PATH into Emacs. The clas­sic ma­cOS prob­lem: GUI Emacs does­n’t in­herit your shel­l’s PATH. This mod­ule solves it the same way exec-path-from-shell does, but in about 20 lines in­stead of a full pack­age.
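The core of the technique fits in a few lines. This is a sketch of the same idea, not the author’s code: ask a login shell for its $PATH and mirror it into Emacs.

```elisp
(require 'subr-x)  ; for `string-trim'

;; Spawn a login shell, capture its $PATH, and copy it into both the
;; PATH environment variable and `exec-path'.
(let ((path (string-trim
             (shell-command-to-string
              "$SHELL -l -c 'printf %s \"$PATH\"'"))))
  (setenv "PATH" path)
  (setq exec-path
        (append (split-string path path-separator)
                (list exec-directory))))
```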

Rainbow col­or­ing for match­ing de­lim­iters. Colorizes nested paren­the­ses, brack­ets, and braces in dif­fer­ent col­ors so you can vi­su­ally match nest­ing lev­els. Essential for any Lisp, and help­ful every­where else.

Interactive project finder and switcher. A completing-read interface for finding and switching between projects, building on Emacs’s built-in project.el.

Vim-like key­bind­ings and text ob­jects for Viper. If you use Emacs’s built-in viper-mode (the Vim em­u­la­tion layer), this ex­tends it with text ob­jects and ad­di­tional Vim-like com­mands. No Evil needed.

Highlight TODO and similar keywords in comments. Makes TODO, FIXME, HACK, NOTE, and similar keywords stand out in source code comments with distinctive faces. A small thing that makes a big difference.

Git diff gutter indicators in buffers. Shows added, modified, and deleted line indicators in the margin, like diff-hl or git-gutter. Pure Elisp, using vc-git under the hood.

Quick win­dow switch­ing with la­bels. When you have three or more win­dows, this over­lays sin­gle-char­ac­ter la­bels on each win­dow so you can jump to any one with a sin­gle key­stroke. A min­i­mal reim­ple­men­ta­tion of the pop­u­lar ace-win­dow pack­age.

Centered doc­u­ment lay­out mode. Centers your text in the win­dow with wide mar­gins, like olivetti-mode. Great for prose writ­ing, Org doc­u­ments, or any time you want a dis­trac­tion-free cen­tered lay­out.

Upload text and files to 0x0.st. Select a re­gion or a file and up­load it to the 0x0.st paste ser­vice. The URL is copied to your kill ring. Quick and use­ful for shar­ing snip­pets.

Edit files as root via TRAMP. Reopen the current file with root privileges using TRAMP’s /sudo:: prefix. A reimplementation of the

...

Read the original on www.rahuljuliato.com »

10 240 shares, 15 trendiness

In memoriam, Tony Hoare

This blog covers education, higher education, the Limousin and Périgord regions, Nantes and its surroundings, and society; it discusses what we are reading, asks questions, awaits answers, and informs. Let us hope nobody says “quoras auras-tu’chabat de platussar?” (“when will you stop producing blablaware?”, or “When will you stop quacking?”) (companion blog: http://quoras-tu-chabat.hautetfort.com/)

...

Read the original on lefenetrou.blogspot.com »
