10 interesting stories served every morning and every evening.
Ireland today (June 20) became the 15th coal-free country in Europe, having ended coal-fired generation at its 915 MW Moneypoint plant in County Clare. Commissioned in the mid-1980s by ESB, Moneypoint was intended to give Ireland a dependable source of energy and offset the impact of the 1970s oil crises.
But with Ireland now generating far more renewable energy, burning coal is no longer a pressing need. Data from energy think tank Ember shows Ireland generated 37% (11.4 TWh) of its electricity from wind in 2024. Solar is nowhere near wind’s level of generation (0.97 TWh in 2024), but it has been breaking generation records for months, and local stakeholders are confident this positive trend will continue.
Following the closure, the Moneypoint plant will continue to serve a limited backup role, burning heavy fuel oil under emergency instruction from Ireland’s transmission system operator EirGrid until 2029.
This strategy is in line with previous plans made by EirGrid and ESB to exit coal-fired generation by the end of 2025, which stipulated that Moneypoint would no longer be active in the wholesale electricity market.
“Ireland has quietly rewritten its energy story, replacing toxic coal with homegrown renewable power,” said Alexandru Mustață, campaigner on coal and gas at Europe’s Beyond Fossil Fuels.
“But this isn’t ‘job done’. The government’s priority now must be building a power system for a renewable future; one with the storage, flexibility, and grid infrastructure needed to run fully on clean, domestic renewable electricity,” Mustață warned.
Jerry Mac Evilly, Campaigns Director at Friends of the Earth Ireland, appealed to the government to ensure oil backup at Moneypoint is kept to an absolute minimum and ultimately decommissioned. He also appealed for the government to prevent further development of data centers, which he said are increasing Ireland’s reliance on fossil gas.
“We also can’t ignore that the government is targeting the installation of at least 2 GW of gas power plants with no strategy to reduce Ireland’s dangerous gas dependency,” he added.
On a broader level, Ireland’s closure of coal generation at Moneypoint sets a precedent for the European coal exits still to come, says Beyond Fossil Fuels. The group tracks European countries’ progress on their commitments to switch from fossil fuels to renewable energy. So far, 23 European countries have committed to coal phase-outs. Italy is expected to complete its mainland coal phase-out this summer with the upcoming closure of its last two big coal power plants, while mainland Spain also expects to declare itself coal-free this summer.
...
Read the original on www.pv-magazine.com »
I’ve been obsessed with procedural maps since I was a kid rolling dice on the random dungeon tables in the AD&D Dungeon Master’s Guide. There was something magical about it — you didn’t design the dungeon, you discovered it, one room at a time, and the dice decided whether you got a treasure chamber or a dead end full of rats.
Years later, I decided to build my own map generator. It creates little medieval island worlds — with roads, rivers, coastlines, cliffs, forests, and villages — entirely procedurally. Built with Three.js WebGPU and TSL shaders, about 4,100 hex cells across 19 grids, generated in ~20 seconds.
If you’ve ever played Carcassonne, you already understand WFC. You have a stack of tiles and place them so everything lines up. Each tile has edges — grass, road, city. Adjacent tiles must have matching edges. A road edge must connect to another road edge. Grass must meet grass. The only difference is that the computer does it faster, and complains less when it gets stuck.
The twist: hex tiles have 6 edges instead of 4. That’s 50% more constraints per tile, and the combinatorial explosion is real. Square WFC is well-trodden territory. Hex WFC is… less so.
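In code, the whole matching rule boils down to one comparison. Here is a minimal sketch using my own tile encoding (not the project's): a tile is a tuple of six edge labels, edge i faces neighbor i, and the neighbor's facing edge is (i + 3) % 6, i.e. the opposite side of the hex.

```python
# Hex edge compatibility: edge i of one tile must match edge (i + 3) % 6
# of the tile sitting in direction i.
GRASS, ROAD = "grass", "road"

def compatible(tile_a, dir_a, tile_b):
    """Can tile_b be placed in direction dir_a from tile_a?"""
    return tile_a[dir_a] == tile_b[(dir_a + 3) % 6]

meadow = (GRASS,) * 6
# A road crossing the tile, entering at edge 1 and leaving at edge 4.
road_ew = (GRASS, ROAD, GRASS, GRASS, ROAD, GRASS)
```

With this encoding, `road_ew` chains with itself along the road axis, but a meadow cannot sit on either road end, which is exactly the constraint the solver propagates.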
WFC is reliable for small grids. But as the grid gets bigger, the chance of painting yourself into a dead end goes up fast. A 217-cell hex grid almost never fails. A 4123-cell grid fails regularly.
The solution: modular WFC. Instead of one giant solve, the map is split into 19 hexagonal grids arranged in two rings around a center — about 4,100 cells total. Each grid is solved independently, but it has to match whatever tiles were already placed in neighboring grids. Those border tiles become fixed constraints.
And sometimes those constraints are simply incompatible. No amount of backtracking inside the current grid can fix a problem that was baked in by a neighbor. This is where I spent a lot of dev time.
Here’s the dirty secret of WFC: it fails. A lot. You make a series of random choices, propagate constraints, and eventually back yourself into a corner where some cell has zero valid options left. Congratulations, the puzzle is unsolvable.
The textbook solution is backtracking — undo your last decision and try a different tile. My solver tracks every possibility it removes during propagation (a “trail” of deltas), so it can rewind cheaply without copying the entire grid state. It’ll try up to 500 backtracks before giving up.
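The delta trail can be sketched in a few lines (names and structure are hypothetical, not the project's actual code): record every option removed during propagation, mark the trail at each decision, and rewind by re-adding options back to the last mark.

```python
# Sketch of a delta trail for cheap backtracking: instead of copying the
# whole grid state per decision, log each removal and undo them in reverse.
class Trail:
    def __init__(self):
        self.deltas = []  # (cell, removed_option) in removal order
        self.marks = []   # trail lengths at each decision point

    def mark(self):
        # Called just before the solver makes a random tile choice.
        self.marks.append(len(self.deltas))

    def remove(self, domains, cell, option):
        # Remove an option during propagation, logging it for rewind.
        domains[cell].discard(option)
        self.deltas.append((cell, option))

    def rewind(self, domains):
        # Undo every removal since the last decision point.
        start = self.marks.pop()
        while len(self.deltas) > start:
            cell, option = self.deltas.pop()
            domains[cell].add(option)
```

Rewinding costs only the number of removals since the last decision, which is what makes trying hundreds of backtracks affordable.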
But backtracking alone isn’t enough. The real problem is cross-grid boundaries.
After many failed approaches, I landed on a layered recovery system:
Layer 1: Unfixing. During the initial constraint propagation, if a neighbor cell creates a contradiction, the solver converts it from a fixed constraint back into a solvable cell. Its own neighbors (two cells out — “anchors”) become the new constraints. This is cheap and handles easy cases.
Layer 2: Local-WFC. If the main solve fails, the solver runs a mini-WFC on a small radius-2 region around the problem area — re-solving 19 cells in the overlap area to create a more compatible boundary. Up to 5 attempts, each targeting a different problem cell. Local-WFC was the breakthrough. Instead of trying to solve the impossible, go back and change the problem.
Layer 3: Drop and hide. Last resort. Drop the offending neighbor cell entirely and place mountain tiles to cover the seams. Mountains are great — their cliff edges match anything, and they look intentional. Nobody questions a mountain.
This map isn’t flat — it has 5 levels of elevation. Ocean and Grass start at level 0, but slopes and cliffs can move up or down a level. Low slopes go up 1 level, high slopes go up 2 levels. A road tile at level 3 needs to connect to another road tile at level 3, or a slope tile that transitions between levels. Get it wrong and you end up with roads that dead-end into cliff faces or rivers flowing uphill into the sky. The elevation axis turns a 2D constraint problem into a 3D one, and it’s where a lot of the tile variety (and a lot of the solver failures) comes from.
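One way to picture the 3D-ification, as a hedged sketch (the real tile format is surely richer): each edge now carries a level alongside its terrain type, and both components must agree across a shared edge.

```python
# Elevation-aware matching: an edge is a (terrain, level) pair, and two
# tiles connect only if both terrain and level agree.
def edges_match(edge_a, edge_b):
    terrain_a, level_a = edge_a
    terrain_b, level_b = edge_b
    return terrain_a == terrain_b and level_a == level_b

# A low slope bridges levels: its downhill edge sits at level 0 and its
# uphill edge at level 1, so it can join flat terrain on both sides.
downhill, uphill = ("grass", 0), ("grass", 1)
```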
Hex coordinate math is famously fiddly in the usual offset schemes. The better approach: cube coordinates (q, r, s where s = -q-r). It’s a 3D coordinate system for the three hex axes. Neighbor finding becomes trivial — just add or subtract 1 from two coordinates.
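That neighbor rule is one line per direction, along these lines:

```python
# Cube coordinates: every cell satisfies q + r + s == 0, and each of the
# six neighbors is one step along two of the three axes.
CUBE_DIRECTIONS = [
    (1, -1, 0), (1, 0, -1), (0, 1, -1),
    (-1, 1, 0), (-1, 0, 1), (0, -1, 1),
]

def neighbors(q, r, s):
    assert q + r + s == 0, "not a valid cube coordinate"
    return [(q + dq, r + dr, s + ds) for dq, dr, ds in CUBE_DIRECTIONS]
```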
The good news is that WFC doesn’t really care about geometry. It’s concerned with which edges match which — it’s essentially a graph problem. The hex coordinates only matter for rendering and for the multi-grid layout, where the 19 grids are themselves arranged as a hex-of-hexes with their own offset positions.
If you’ve ever worked with hex grids, you owe Amit Patel at Red Blob Games a debt of gratitude. His hex grid guide is the definitive reference.
Trees, Buildings, and Why Not Everything Should Be WFC
Early on, I tried using WFC for tree and building placement. Bad idea. WFC is great at local edge matching but terrible at large-scale patterns. You’d get trees scattered randomly instead of clustered into forests, or buildings spread evenly instead of gathered into villages.
The solution: good old Perlin noise. A global noise field determines tree density and building placement, completely separate from WFC. Areas where the noise is above a threshold get trees; slightly different noise drives buildings. This gives you organic clustering — forests, clearings, villages — that WFC could never produce. I also used some additional logic to place buildings at the end of roads, ports and windmills on coasts, henges on hilltops etc.
WFC handles the terrain. Noise handles the decorations. Each tool does what it’s good at.
I wanted that ‘Zelda: The Wind Waker’ cartoon shimmer on the water surface. Originally I tried generating caustics procedurally with four layers of Voronoi noise. This turned out to be very GPU heavy and did not look great. The solution was sampling a small scrolling caustic texture with a simple noise mask, which looks way better and is super cheap. Sometimes the easy solution is the correct solution.
Waves are sine bands that radiate outward from coastlines, inspired by Bad North’s gorgeous shoreline effect. To know “how far from the coast” each pixel is, the system renders a coast mask — a top-down orthographic render of the entire map with white for land and black for water — then dilates and blurs it into a gradient. The wave shader reads this gradient to place animated sine bands at regular distance intervals, with noise to break up the pattern.
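A rough sketch of what the wave shader computes per pixel (written in Python rather than TSL, with made-up parameter values):

```python
import math

def wave_intensity(coast_gradient, t, bands=4.0, speed=0.5):
    # coast_gradient: 0.0 at the shoreline rising to 1.0 in open water,
    # read from the blurred coast mask. Crests sit at regular gradient
    # intervals and scroll as t advances.
    phase = coast_gradient * bands * 2.0 * math.pi - t * speed
    band = max(0.0, math.sin(phase)) ** 8     # sharpen the sine into thin bands
    falloff = max(0.0, 1.0 - coast_gradient)  # fade out in open water
    return band * falloff
```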
This worked great on straight coastlines. In concave coves and inlets, the wave lines got thick and ugly. The blur-based gradient spreads the same value range over a wider physical area in coves, stretching the wave bands out.
Two fixes that didn’t work: screen-space derivatives to detect gradient stretching (worked at one zoom level, broke at others), and texture-space gradient magnitude to detect opposing coast edges canceling out (it only caught narrow rivers, not the actual problem coves).
The fundamental issue: blur encodes “how much land is nearby,” not “how far is the nearest coast edge.” These are different questions, and no amount of post-processing the blur can extract true distance.
The fix was a CPU-side “surroundedness” probe that checks each water cell’s neighbors to detect coves, writing a separate mask texture that thins the waves in enclosed areas. It’s kind of a hack, but it works, and the wave bands thin out nicely inside coves.
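The probe itself can be sketched as a neighbor count (function names are mine, and the real version presumably works on the hex grid's own adjacency):

```python
def surroundedness(cell, is_land, neighbors_of):
    # Fraction of a water cell's six neighbors that are land:
    # 0.0 for open ocean, 1.0 for a fully enclosed pocket.
    land = sum(1 for n in neighbors_of(cell) if is_land(n))
    return land / 6.0

def wave_thinning(cell, is_land, neighbors_of, cove_threshold=0.5):
    # 1.0 leaves the waves untouched; values toward 0.0 thin them out.
    s = surroundedness(cell, is_land, neighbors_of)
    # Once more than half the ring is land, scale the waves down linearly.
    return 1.0 - max(0.0, (s - cove_threshold) / (1.0 - cove_threshold))
```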
No dice required this time — but the feeling is the same. You hit a button, the map builds itself, and you discover what the algorithm decided to put there. It’s super satisfying to see the road and river systems matching up perfectly. Every time it’s different, and every time I find myself exploring for a while. The kid rolling dice on the dungeon tables would be into this.
...
Read the original on felixturner.github.io »
Last week, Dan Blanchard, the maintainer of chardet—a Python library for detecting text encodings, downloaded roughly 130 million times a month—released a new version. Version 7.0 is 48 times faster than its predecessor, supports multiple cores, and was redesigned from the ground up. Anthropic’s Claude is listed as a contributor. The license changed from LGPL to MIT.
Blanchard’s account is that he never looked at the existing source code directly. He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch. The resulting code shares less than 1.3% similarity with any prior version, as measured by JPlag. His conclusion: this is an independent new work, and he is under no obligation to carry forward the LGPL. Mark Pilgrim, the library’s original author, opened a GitHub issue to object. The LGPL requires that modifications be distributed under the same license, and a reimplementation produced with ample exposure to the original codebase cannot, in Pilgrim’s view, pass as a clean-room effort.
The dispute drew responses from two prominent figures in the open source world. Armin Ronacher, the creator of Flask, welcomed the relicensing. Salvatore Sanfilippo (antirez), the creator of Redis, published a broader defense of AI reimplementation, grounding it in copyright law and the history of the GNU project. Both conclude, by different routes, that what Blanchard did is legitimate. I respect both writers, and I think both are wrong—or more precisely, both are evading the question that actually matters.
That question is this: does legal mean legitimate? Neither piece answers it. Both move from “this is legally permissible” to “this is therefore fine,” without pausing at the gap between those two claims. Law sets a floor; clearing it does not mean the conduct is right. That gap is where this essay begins.
Antirez builds his case on history. When the GNU project reimplemented the UNIX userspace, it was lawful. So was Linux. Copyright law prohibits copying “protected expressions”—the actual code, its structure, its specific mechanisms—but it does not protect ideas or behavior. AI-assisted reimplementation occupies the same legal ground. Therefore, it is lawful.
The legal analysis is largely correct, and I am not disputing it. The problem lies in what antirez does next: he presents the legal conclusion as if it were also a social one, and uses a historical analogy that, examined more carefully, argues against his own position.
When GNU reimplemented the UNIX userspace, the vector ran from proprietary to free. Stallman was using the limits of copyright law to turn proprietary software into free software. The ethical force of that project did not come from its legal permissibility—it came from the direction it was moving, from the fact that it was expanding the commons. That is why people cheered.
The vector in the chardet case runs the other way. Software protected by a copyleft license—one that guarantees users the right to study, modify, and redistribute derivative works under the same terms—has been reimplemented under a permissive license that carries no such guarantee. This is not a reimplementation that expands the commons. It is one that removes the fencing that protected the commons. Derivative works built on chardet 7.0 are under no obligation to share their source code. That obligation, which applied to a library downloaded 130 million times a month, is now gone.
Antirez does not address this directional difference. He invokes the GNU precedent, but that precedent is a counterexample to his conclusion, not a supporting one.
Ronacher’s argument is different. He discloses upfront that he has a stake in the outcome: “I personally have a horse in the race here because I too wanted chardet to be under a non-GPL license for many years. So consider me a very biased person in that regard.” He goes on to write that he considers “the GPL to run against that spirit by restricting what can be done with it”—the spirit being that society is better off when we share.
This claim rests on a fundamental misreading of what the GPL does.
Start with what the GPL actually prohibits. It does not prohibit keeping source code private. It imposes no constraint on privately modifying GPL software and using it yourself. The GPL’s conditions are triggered only by distribution. If you distribute modified code, or offer it as a networked service, you must make the source available under the same terms. This is not a restriction on sharing. It is a condition placed on sharing: if you share, you must share in kind.
The requirement that improvements be returned to the commons is not a mechanism that suppresses sharing. It is a mechanism that makes sharing recursive and self-reinforcing. The claim that imposing contribution obligations on users of a commons undermines sharing culture does not hold together logically.
The contrast with the MIT license clarifies the point. Under MIT, anyone may take code, improve it, and close it off into a proprietary product. You can receive from the commons without giving back. If Ronacher calls this structure “more share-friendly,” he is using a concept of sharing with a specific directionality built in: sharing flows toward whoever has more capital and more engineers to take advantage of it.
The historical record bears this out. In the 1990s, companies routinely absorbed GPL code into proprietary products—not because they had chosen permissive licenses, but because copyleft enforcement was slack. The strengthening of copyleft mechanisms closed that gap. For individual developers and small projects without the resources to compete on anything but reciprocity, copyleft was what made the exchange approximately fair.
The creator of Flask knows this distinction. If he elides it anyway, the argument is not naïve—it is convenient.
The most interesting moment in Ronacher’s piece is not the argument but a detail he mentions in passing: Vercel reimplemented GNU Bash using AI and published it, then got visibly upset when Cloudflare reimplemented Next.js the same way.
Ronacher notes this as an irony and moves on. But the irony cuts deeper than he lets on. Next.js is MIT licensed. Cloudflare’s vinext did not violate any license—it did exactly what Ronacher calls a contribution to the culture of openness, applied to a permissively licensed codebase. Vercel’s reaction had nothing to do with license infringement; it was purely competitive and territorial. The implicit position is: reimplementing GPL software as MIT is a victory for sharing, but having our own MIT software reimplemented by a competitor is cause for outrage. This is what the claim that permissive licensing is “more share-friendly” than copyleft looks like in practice. The spirit of sharing, it turns out, runs in one direction only: outward from oneself.
Ronacher registers the contradiction and does not stop. “This development plays into my worldview,” he writes. When you present evidence that cuts against your own position, acknowledge it, and then proceed to your original conclusion unchanged, that is a signal that the conclusion preceded the argument.
Back to the question posed at the start. Is legal the same as legitimate?
Antirez closes his careful legal analysis as though it settles the matter. Ronacher acknowledges that “there is an obvious moral question here, but that isn’t necessarily what I’m interested in.” Both pieces treat legal permissibility as a proxy for social legitimacy. But law only says what conduct it will not prevent—it does not certify that conduct as right. Aggressive tax minimization that never crosses into illegality may still be widely regarded as antisocial. A pharmaceutical company that legally acquires a patent on a long-generic drug and raises the price a hundredfold has not done something legal and therefore fine. Legality is a necessary condition; it is not a sufficient one.
In the chardet case, the distinction is sharper still. What the LGPL protected was not Blanchard’s labor alone. It was a social compact agreed to by everyone who contributed to the library over twelve years. The terms of that compact were: if you take this and build on it, you share back under the same terms. This compact operated as a legal instrument, yes, but it was also the foundation of trust that made contribution rational. The fact that a reimplementation may qualify legally as a new work, and the fact that it breaks faith with the original contributors, are separate questions. If a court eventually rules in Blanchard’s favor, that ruling will tell us what the law permits. It will not tell us that the act was right.
Zoë Kooyman, executive director of the FSF, put it plainly: “Refusing to grant others the rights you yourself received as a user is highly antisocial, no matter what method you use.”
Reading this debate, I keep returning to a question about position. From where are these two writers looking at the situation?
Antirez created Redis. Ronacher created Flask. Both are figures at the center of the open source ecosystem, with large audiences and well-established reputations. For them, falling costs of AI reimplementation means something specific: it is easier to reimplement things they want in a different form. Ronacher says explicitly that he had begun reimplementing GNU Readline precisely because of its copyleft terms.
For the people who have spent years contributing to a library like chardet, the same shift in costs means something else entirely: the copyleft protection around their contributions can be removed. The two writers are speaking from the former position to people in the latter, telling them that this was always lawful, that historical precedent supports it, and that the appropriate response is adaptation.
When positional asymmetry of this kind is ignored, and the argument is presented as universal analysis, what you get is not analysis but rationalization. Both writers arrive at conclusions that align precisely with their own interests. Readers should hold that fact in mind.
Bruce Perens, who wrote the original Open Source Definition, said: “The entire economics of software development are dead, gone, over, kaput!” He meant it as an alarm. Antirez, from a similar assessment of the situation, draws the conclusion: adapt. Ronacher says he finds the direction exciting.
None of the three responses addresses the central question. When copyleft becomes technically easier to circumvent, does that make it less necessary, or more?
I think more. What the GPL protected was not the scarcity of code but the freedom of users. The fact that producing code has become cheaper does not make it acceptable to use that code as a vehicle for eroding freedom. If anything, as the friction of reimplementation disappears, so does the friction of stripping copyleft from anything left exposed. The erosion of enforcement capacity is a legal problem. It does not touch the underlying normative judgment.
That judgment is this: those who take from the commons owe something back to the commons. The principle does not change depending on whether a reimplementation takes five years or five days. No court ruling on AI-generated code will alter its social weight.
This is where law and community norms diverge. Law is made slowly, after the fact, reflecting existing power arrangements. The norms that open source communities built over decades did not wait for court approval. People chose the GPL when the law offered them no guarantee of its enforcement, because it expressed the values of the communities they wanted to belong to. Those values do not expire when the law changes.
In previous writing, I argued for a training copyleft (TGPL) as the next step in this line of development. The chardet situation suggests the argument has to go further: to a specification copyleft covering the layer below source code. If source code can now be generated from a specification, the specification is where the essential intellectual content of a GPL project resides. Blanchard’s own claim—that he worked only from the test suite and API without reading the source—is, paradoxically, an argument for protecting that test suite and API specification under copyleft terms.
The history of the GPL is the history of licensing tools evolving in response to new forms of exploitation: GPLv2 to GPLv3, then AGPL. What drove each evolution was not a court ruling but a community reaching a value judgment first and then seeking legal instruments to express it. The same sequence is available now. Whatever courts eventually decide about AI reimplementation, the question we need to answer first is not a legal one. It is a social one. Do those who take from the commons owe something back? I think they do. That judgment does not require a verdict.
What makes the pieces by antirez and Ronacher worth reading is not that they are right. It is that they make visible, with unusual clarity, what they are choosing not to see. When legality is used as a substitute for a value judgment, the question that actually matters gets buried in the footnotes of a law it has already outgrown.
...
Read the original on writings.hongminhee.org »
FontCrafter turns your handwriting into a real, installable font — entirely in your browser. No accounts, no uploads to servers, no cost.
Still have questions? Here’s our FAQ.
It’s easier than you think. Print, write, scan — done.
Your handwriting becomes an installable font (OTF, TTF, WOFF2, Base64)
Natural variation — your letters won’t look robotic or identical every time
Connected letter pairs (ff, th, st, etc.) that flow like real handwriting
Optional drop shadows and ink texture effects baked right into the font
No account, no server, 100% private — everything happens in your browser
If you found this useful, I’d appreciate donations & patrons (to keep iterating)!
Download and print the template — US Letter or A4.
Print at 100% scale (no “fit to page”). Use white, unlined paper.
Fill in every box with a felt-tip pen. All 3 rows for each character.
Ballpoints are too faint; thick markers bleed. Keep strokes inside the boxes with breathing room from edges.
How to use the three rows: Row 1 is always uppercase. Row 2 can be a second version of your uppercase or lowercase. Row 3 can also be uppercase or lowercase.
Scan or photograph the sheet, then drag & drop that photo file below.
Lay the sheet flat on a table with even lighting — no shadows, no curl. A phone camera works great if the sheet is flat and well-lit.
Drop your completed (scanned) image below. JPG, PNG, or high-res photo — make sure the page is flat and evenly lit. The processing happens on your end. No servers involved. Nothing is saved or stored remotely.
Not happy with a character? Touch it up in any image editor, or use correction tape and re-scan.
Before continuing, make sure all four crosshair markers are visible in your scan.
Characters with green borders were detected cleanly. Click any character to deselect it — a replacement from another row will be used. Small imperfections are fine — they give your font personality.
Name your font and choose how your three rows should be used. Enable ligatures for natural-looking connected letter pairs.
What do you want to call this font?
What did you put in each row?
I wrote uppercase in all three rows
I wrote uppercase in Row 1, lowercase in Row 2, uppercase in Row 3
I wrote uppercase in Row 1, lowercase in Row 2, lowercase in Row 3
Row 2 will be used as your lowercase. Adjust how much to shrink it — set to 1.00 if you already wrote Row 2 smaller than Row 1.
Allow certain characters to dip below the baseline (e.g. g, j, p, q, y, or a slashed zero).
Separate with spaces. Add or remove characters as needed for your handwriting.
Controls how far descender tails drop below the line. Slide left for deeper, right for shallower.
Cleans up tiny ink specks that bled through from adjacent cells. Won’t affect dots on letters like i, j, or punctuation marks.
Evens out letters that were drawn at different sizes.
Adds 100+ derived glyphs from your handwriting. Uncheck if you only want your handwritten alphanumeric characters.
Auto-generates diacritics (accents, tildes, umlauts) from your base letters. Covers French, German, Spanish, Portuguese, Scandinavian, and more.
Ligatures are letter pairs that connect naturally in handwriting — like ff, fi, fl, th, and st. Auto-generate is recommended — it’s instant and produces natural-looking connections from your existing characters.
Kerning adjusts spacing between specific letter pairs — like AV, To, and WA — so characters with complementary shapes sit together naturally instead of having uniform gaps.
See how your font looks with sample text, or type anything you like below.
THE QUICK BROWN FOX JUMPS OVER A LAZY DOG.
sphinx of black quartz, judge my vow?
Both Fickle Dwarves Can Jinx My Pig Quiz!
• $11.38 + tax & a 5.69% tip = more than $20.74
• (I love Star Wars) [Yes] {Maybe} <OK>
• That’s what I said! “Really?”
• arcade.pirillo.com * chris.pirillo.com
• He scored 7/8 on the quiz — not bad~
• Order #4053: 2x @$16.99 each | Total: $33.98
• Is it _really_ 100^2 = 10,000‽
• “Yes,” she said, ‘it’s a go;’ then walked away.
Your font includes contextual alternates (calt) — consecutive characters will automatically cycle between your 3 handwriting variants. This works in apps with OpenType support but may not appear in this preview.
OTF for desktop apps, TTF for universal compatibility, WOFF2 for websites, Base64 for CSS embedding.
I confirm that the handwriting used to generate this font is my own or I have explicit permission from the handwriting’s owner to create and use this font. I understand that I am solely responsible for how I use the generated font files, and I release FontCrafter, Chris Pirillo, LockerGnome, and arcade.pirillo.com from any liability related to the font’s creation, distribution, or use.
...
Read the original on arcade.pirillo.com »
A Broward County judge has dismissed a red-light camera ticket, ruling that the state law used to issue the citation improperly shifts the burden of proof onto vehicle owners.
In a 21-page order signed March 3, Judge Steven P. DeLuca granted a defendant’s motion to dismiss a photo-enforced traffic citation issued under Florida’s red-light camera law.
The case involved a Sunrise red-light camera citation issued to a registered vehicle owner after automated cameras captured a vehicle entering an intersection against a red signal. The defendant argued the statute unconstitutionally requires the registered owner to prove they were not driving — instead of requiring the government to prove who was behind the wheel.
In the order, the court found that red-light camera cases, although labeled as civil infractions, function as “quasi-criminal” proceedings because they can result in monetary penalties, a formal finding of guilt, and consequences tied to a driver’s record.
Under Florida Statute 316.0083, once a camera captures a violation, the registered owner is presumed responsible unless they submit an affidavit identifying another driver. The court ruled that the framework improperly shifts the burden of proof away from the state.
Because traffic infractions that move to county court must be proven “beyond a reasonable doubt,” the judge wrote that the statute’s presumption violates constitutional due process protections.
As a result, the citation in this case was formally dismissed.
Click here to view the PDF file
Joel Mumford is an attorney with The Ticket Clinic. He said if the case is quasi criminal, which means almost criminal or criminal-like, the state has to follow procedural due process.
“The state or the agencies that issue the tickets, it’s their burden to prove all the elements of the crime beyond and to the extent of each and every reasonable doubt. And the first element, which should be, who’s driving the car,” Mumford said. “The statute in Florida presumes that the registered owner is the driver of the car.”
Mumford said even though the court order only applies in Broward County, it could open the door for challenges in other parts of the state, with other counties following suit.
“What could probably happen is they the court could then get an appeal, and then if it goes up to the district court of appeal level and then make a decision on it, depending on what that says, that could then be applied to the entire state, if there’s no similar appellate cases throughout the state. So that could make it statewide,” he said.
Drivers in Boynton Beach want to see the red-light cameras gone.
Fifteen red-light camera systems are already running at seven intersections in Boynton Beach alone. Congress Avenue and Gateway Boulevard is just one of them.
“I’ve been ticketed here twice, and it’s ridiculous because it’s just not fair,” said one driver who didn’t want to be identified. “The person that does the determination when you ran the light, it’s just random. Whoever they want to pick, pick you to say, okay, you’re gonna pay the ticket.”
That driver had to pay his $158 ticket, but he’s hopeful Palm Beach County can soon follow suit. He wants red-light cameras gone so future violations can be thrown out.
“I think they need to outlaw it and get rid of it,” he said.
Advocacy group StopTheCams, which has long opposed automated traffic enforcement, called the ruling a major victory. In a press release, the group said the decision confirms what critics have argued for years — that red-light camera laws punish vehicle owners without requiring proof they committed the violation.
Supporters of red-light cameras argue the systems improve safety by deterring dangerous driving at intersections. Florida’s red-light camera law, known as the Mark Wandall Traffic Safety Act, allows local governments to use automated enforcement systems.
It remains unclear whether the ruling will be appealed or how broadly it could affect similar cases statewide. For now, the decision applies to this specific case in Broward County, but legal observers say it could fuel renewed challenges to Florida’s red-light camera enforcement system.
...
Read the original on cbs12.com »
After several intense and incredible years building Bluesky from the ground up, I’ve decided to step back as CEO and transition to a new role as Bluesky’s Chief Innovation Officer.
In 2019, I set out to build an open protocol for social media, with the goal of enabling the development of a new generation of applications designed to empower users. The Bluesky app started off as a reference client for the protocol, but has since grown into a thriving platform with over 40 million users. Last year, we grew a world-class team, expanded the AT Protocol ecosystem, and proved that a values-driven social network could thrive at scale.
As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things. As part of this transition, Toni Schneider, former CEO of Automattic and partner at True Ventures, will join our team as interim CEO, while our board runs a search for a permanent chief executive.
Toni believes deeply in the Bluesky mission, and has been an advisor to the company and me personally for over a year. Both Automattic and True Ventures are also investors in Bluesky, and support the development of a more open, user-driven internet. Automattic, the company behind WordPress.com, has built its business on open source software, working to make the web a better and more participatory place. Toni was a key part of guiding that mission, and is someone who leads with genuine curiosity, takes the long view, and deeply understands what it means to build a company around a mission. I am confident that he is the right person to lead us into this next chapter.
Scaling up this company has been a learning experience unlike anything else. I’ve grown a lot as a leader and had the privilege of assembling the best team I’ve ever worked with. As we’ve grown, I’ve found that people thrive when they’re in a role where their passions overlap with their strengths. This is as true for me as it is for our team. I’m most energized by exploring new ideas, bringing a vision to life, and helping people discover their strengths. Transitioning to a more focused role where I can do what brings me energy is my way of putting that belief into practice.
I’m excited to dig into the next frontier of what decentralized social can be, while bringing Toni in to support our team as an experienced operator and leader. The work that got us here was just the beginning, and I’m grateful to keep building alongside this team and community.
...
Read the original on bsky.social »
Artificial intelligence chips are getting upgraded more quickly than data centers can be built, a market reality that exposes a key risk to the AI trade and Oracle’s debt-fueled expansion.
OpenAI is no longer planning to expand its partnership with Oracle in Abilene, Texas, home to the Stargate data center, because it wants clusters with newer generations of Nvidia graphics processing units, according to a person familiar with the matter.
The current Abilene site is expected to use Nvidia’s Blackwell processors, and the power isn’t projected to come online for a year. By then, OpenAI is hoping to have expanded access to Nvidia’s next-generation chips in bigger clusters elsewhere, said the person, who asked not to be named due to confidentiality.
Bloomberg was first to report on the companies ending their plans for expansion in Abilene. In a post on X on Sunday, Oracle called news reports about the activity “false and incorrect,” but the post only said existing projects are on track and didn’t address expansion plans.
Oracle secured the site, ordered the hardware, and spent billions of dollars on construction and staff, with the expectation of going bigger.
It’s a logical decision for OpenAI, which doesn’t want older chips. Nvidia used to release a new generation of data center processors every two years. Now, CEO Jensen Huang has the company shipping one every year, and each generation offers a leap in capability. Vera Rubin, unveiled at CES in January and already in production, delivers five times the inference performance of Blackwell.
For the companies building frontier models, the smallest improvement in performance could equate to huge gaps in model benchmarks and rankings, which are closely followed by developers and translate directly to usage, revenue, and valuation.
That all points to a bigger problem at play. For infrastructure companies, securing a site, connecting power and standing up a facility takes 12 to 24 months at minimum. But customers want the latest and greatest, and they’re tracking the yearly chip upgrades.
Oracle’s added challenge is that it’s the only hyperscaler funding its buildout primarily with debt, to the tune of $100 billion and counting. Google, Amazon and Microsoft, by contrast, are leaning on their enormous cash-generating businesses.
Meanwhile, Oracle partner Blue Owl is declining to fund an additional facility, and plans to cut up to 30,000 jobs.
Oracle reports fiscal third-quarter results on Tuesday, and investors will be paying close attention to how the company addresses a $50 billion capital expenditure plan with negative free cash flow, and whether the financing pipeline can hold up.
The stock is down 23% so far this year and has lost over half its value since peaking in September.
Beyond Oracle, GPU depreciation is a risk for the broader market and could have ramifications across the AI landscape. Every infrastructure deal signed today may result in a commitment to outdated hardware before the power is even connected.
...
Read the original on www.cnbc.com »
FFmpeg is truly a multi-tool for media processing. As an industry-standard tool it supports a wide variety of audio and video codecs and container formats. It can also orchestrate complex chains of filters for media editing and manipulation. For the people who use our apps, FFmpeg plays an important role in enabling new video experiences and improving the reliability of existing ones.
Meta executes ffmpeg (the main CLI application) and ffprobe (a utility for obtaining media file properties) binaries tens of billions of times a day, introducing unique challenges when dealing with media files. FFmpeg can easily perform transcoding and editing on individual files, but our workflows have additional requirements to meet our needs. For many years we had to rely on our own internally developed fork of FFmpeg to provide features that have only recently been added to FFmpeg, such as threaded multi-lane encoding and real-time quality metric computation.
Over time, our internal fork came to diverge significantly from the upstream version of FFmpeg. At the same time, new versions of FFmpeg brought support for new codecs and file formats, and reliability improvements, all of which allowed us to ingest more diverse video content from users without disruptions. This necessitated that we support both recent open-source versions of FFmpeg alongside our internal fork. Not only did this create a gradually divergent feature set, it also created challenges around safely rebasing our internal changes to avoid regressions.
As our internal fork became increasingly outdated, we collaborated with FFmpeg developers, FFlabs, and VideoLAN to develop features in FFmpeg that allowed us to fully deprecate our internal fork and rely exclusively on the upstream version for our use cases. Using upstreamed patches and refactorings we’ve been able to fill two important gaps that we had previously relied on our internal fork to fill: threaded, multi-lane transcoding and real-time quality metrics.
When a user uploads a video through one of our apps, we generate a set of encodings to support Dynamic Adaptive Streaming over HTTP (DASH) playback. DASH playback allows the app’s video player to dynamically choose an encoding based on signals such as network conditions. These encodings can differ in resolution, codec, framerate, and visual quality level but they are created from the same source encoding, and the player can seamlessly switch between them in real time.
In a very simple system, separate FFmpeg command lines can generate the encodings for each lane one by one, in serial. This could be optimized by running each command in parallel, but that quickly becomes inefficient due to the duplicate work done by each process.
To work around this, multiple outputs could be generated within a single FFmpeg command line, decoding the frames of a video once and sending them to each output’s encoder instance. This eliminates a lot of overhead by deduplicating the video decoding and process startup time overhead incurred by each command line. Given that we process over 1 billion video uploads daily, each requiring multiple FFmpeg executions, reductions in per-process compute usage yield significant efficiency gains.
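A minimal sketch of this pattern (filenames, codec, and bitrates here are illustrative, not Meta's actual settings; the first command just synthesizes a short test clip so the example is self-contained):

```shell
# Synthesize a short source clip using ffmpeg's built-in test pattern.
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=1280x720:rate=30 source.mp4

# Decode the source once and feed two encoder instances in a single
# invocation; the options preceding each output file apply only to it.
ffmpeg -y -i source.mp4 \
  -map 0:v -c:v libx264 -b:v 2500k -s 1280x720 lane_720p.mp4 \
  -map 0:v -c:v libx264 -b:v 800k  -s 640x360  lane_360p.mp4
```

Compared with running two separate commands, the source here is demuxed and decoded once, and process startup costs are paid once rather than per lane.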
Our internal FFmpeg fork provided an additional optimization to this: parallelized video encoding. While individual video encoders are often internally multi-threaded, previous FFmpeg versions executed each encoder in serial for a given frame when multiple encoders were in use. By running all encoder instances in parallel, better parallelism can be obtained overall.
Thanks to contributions from FFmpeg developers, including those at FFlabs and VideoLAN, more efficient threading was implemented starting with FFmpeg 6.0, with the finishing touches landing in 8.0. This was directly influenced by the design of our internal fork and was one of the main features we had relied on it to provide. This development led to the most complex refactoring of FFmpeg in decades and has enabled more efficient encodings for all FFmpeg users.
To fully migrate off of our internal fork we needed one more feature implemented upstream: real-time quality metrics.
Visual quality metrics, which give a numeric representation of the perceived visual quality of media, can be used to quantify the quality loss incurred from compression. These metrics are categorized as reference or no-reference metrics, where the former compares a reference encoding to some other distorted encoding.
FFmpeg can compute various visual quality metrics such as PSNR, SSIM, and VMAF using two existing encodings in a separate command line after encoding has finished. This is okay for offline or VOD use cases, but not for livestreaming where we might want to compute quality metrics in real time.
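The offline, after-the-fact approach looks roughly like this (filenames illustrative; the first two commands synthesize a reference and a heavily compressed distorted copy so the comparison has something to chew on):

```shell
# Synthesize a reference clip, then a low-bitrate "distorted" re-encode.
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=640x360:rate=30 ref.mp4
ffmpeg -y -i ref.mp4 -b:v 200k dist.mp4

# Compare distorted vs. reference in a separate pass; PSNR stats are
# logged to stderr and no output file is written.
ffmpeg -i dist.mp4 -i ref.mp4 -lavfi "[0:v][1:v]psnr" -f null -
```

Both inputs must share the same resolution (insert a scale filter before the metric when comparing lanes of different sizes), and the ssim and libvmaf filters follow the same two-input pattern.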
To do this, we need to insert a video decoder after each video encoder used by each output lane. These provide bitmaps for each frame in the video after compression has been applied so that we can compare against the frames before compression. In the end, we can produce a quality metric for each encoded lane in real time using a single FFmpeg command line.
Thanks to “in-loop” decoding, which was enabled by FFmpeg developers including those from FFlabs and VideoLAN, beginning with FFmpeg 7.0, we no longer have to rely on our internal FFmpeg fork for this capability.
Things like real-time quality metrics while transcoding and more efficient threading can bring efficiency gains to a variety of FFmpeg-based pipelines both in and outside of Meta, and we strive to enable these developments upstream to benefit the FFmpeg community and wider industry. However, there are some patches we’ve developed internally that don’t make sense to contribute upstream. These are highly specific to our infrastructure and don’t generalize well.
FFmpeg supports hardware-accelerated decoding, encoding, and filtering with devices such as NVIDIA’s NVDEC and NVENC, AMD’s Unified Video Decoder (UVD), and Intel’s Quick Sync Video (QSV). Each device is supported through an implementation of standard APIs in FFmpeg, allowing for easier integration and minimizing the need for device-specific command line flags. We’ve added support for the Meta Scalable Video Processor (MSVP), our custom ASIC for video transcoding, through these same APIs, enabling the use of common tooling across different hardware platforms with minimal platform-specific quirks.
As MSVP is only used within Meta’s own infrastructure, it would create a challenge for FFmpeg developers to support it without access to the hardware for testing and validation. In this case, it makes sense to keep patches like this internal since they wouldn’t provide benefit externally. We’ve taken on the responsibility of rebasing our internal patches onto more recent FFmpeg versions over time, utilizing extensive validation to ensure robustness and correctness during upgrades.
With more efficient multi-lane encoding and real-time quality metrics, we were able to fully deprecate our internal FFmpeg fork for all VOD and livestreaming pipelines. And thanks to standardized hardware APIs in FFmpeg, we’ve been able to support our MSVP ASIC alongside software-based pipelines with minimal friction.
FFmpeg has withstood the test of time with over 25 years of active development. Developments that improve resource utilization, add support for new codecs and features, and increase reliability enable robust support for a wider range of media. For people on our platforms, this means enabling new experiences and improving the reliability of existing ones. We plan to continue investing in FFmpeg in partnership with open source developers, bringing benefits to Meta, the wider industry, and people who use our products.
We would like to acknowledge contributions from the open source community, our partners in FFlabs and VideoLAN, and many Meta engineers, including Max Bykov, Jordi Cenzano Ferret, Tim Harris, Colleen Henry, Mark Shwartzman, Haixia Shi, Cosmin Stejerean, Hassene Tmar, and Victor Loh.
...
Read the original on engineering.fb.com »
I’ve been maintaining Emacs Solo
for a while now, and I think it’s time to talk about what happened in this latest cycle as the project reaches its two-year mark.
For those who haven’t seen it before, Emacs Solo is my daily-driver Emacs configuration with one strict rule: no external packages. Everything is either built into Emacs or written from scratch by me in the lisp/ directory. No package-install, no straight.el, no
use-package :ensure t pointing at ELPA or MELPA. Just Emacs and Elisp. I’m keeping this post text only, but if you’d like to check how
Emacs Solo looks and feels, the repository has screenshots and more details.
Why? Partly because I wanted to understand what Emacs actually gives you out of the box. Partly because I wanted my config to survive without breakage across Emacs releases. Partly because I was tired of dealing with package repositories, mirrors going down in the middle of the workday, native compilation hiccups, and the inevitable downtime when something changed somewhere upstream and my job suddenly became debugging my very long (at the time) config instead of doing actual work. And partly, honestly, because it’s a lot of fun!
This post covers the recent refactor, walks through every section of the core config, introduces all 35 self-contained extra modules I’ve written, and shares some thoughts on what I’ve learned.
Now, I’ll be the first to admit: this config is long. But there’s a principle behind it. I only add features when they are not already in Emacs core, and when I do, I try to build them myself. That means the code is sketchy sometimes, sure, but it’s in my control. I wrote it, I understand it, and when it breaks, I know exactly where to look. The refactor I’m about to describe makes this distinction crystal clear: what is “Emacs core being tweaked” versus what is “a really hacky outsider I built in because I didn’t want to live without it”.
The single biggest change in this cycle was architectural. Emacs Solo used to be one big init.el with everything crammed together. That worked, but it had problems:
— It was hard to navigate (even with outline-mode)
— If someone wanted just one piece, say my Eshell config or my VC extensions, they had to dig through thousands of lines
— It was difficult to tell where “configuring built-in Emacs” ended and “my own hacky reimplementations” began
The solution was clean and simple: split the config into two layers.
This file configures only built-in Emacs packages and features. Every
use-package block in here has :ensure nil, because it’s pointing at something that ships with Emacs. This is pure, standard Emacs customization.
The idea is that anyone can read init.el, find a section they like, and
copy-paste it directly into their own config. No dependencies. No setup. It just works, because it’s configuring things Emacs already has.
These are my own implementations: replacements for popular external packages, reimagined as small, focused Elisp files. Each one is a proper provide/require module. They live under lisp/ and are loaded at the bottom of init.el via a simple block:
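Sketched from context, the block is along these lines (the module names shown are illustrative stand-ins, not the actual list of 35):

```elisp
;; Make the lisp/ directory visible to `require', then load each module.
(add-to-list 'load-path (expand-file-name "lisp" user-emacs-directory))

(require 'emacs-solo-modeline)   ; hypothetical module name
(require 'emacs-solo-transparency) ; hypothetical module name
;; ... one `require' per module; comment a line out to disable that module.
```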
If you don’t want one of them, just comment out the require line. If you want to use one in your own config, just copy the .el file into your own lisp/ directory and require it. That’s it.
This separation made the whole project dramatically easier to maintain, understand, and share.
The init.el file is organized into clearly labeled sections (using outline-mode-friendly headers, so you can fold and navigate them inside Emacs). Here’s every built-in package and feature it touches, and why.
The emacs use-package block is the largest single section. It sets up sensible defaults that most people would want:
— Window layout commands bound under C-x w (these are upcoming
Emacs 31 features: window-layout-transpose,
window-layout-rotate-clockwise, window-layout-flip-leftright,
window-layout-flip-topdown)
— Disabling C-z (suspend) because accidentally suspending Emacs in a terminal is never fun
— Sensible file handling: backups and auto-saves in a cache/
directory, recentf for recent files, clean buffer naming with
uniquify
— Tree-sitter auto-install and auto-mode (treesit-auto-install-grammar t and treesit-enabled-modes t, both Emacs 31)
— delete-pair-push-mark, kill-region-dwim, ibuffer-human-readable-size, all the small quality-of-life settings coming in Emacs 31
A full abbrev-mode setup with a custom placeholder system. You define abbreviations with ###1###, ###2### markers, and when the abbreviation expands, it prompts you to fill in each placeholder interactively. The ###@### marker tells it where to leave point after expansion. I wrote a whole article about it.
Configures auth-source to use ~/.authinfo.gpg for credential storage. Simple but essential if you use Gnus, ERC, or any network-facing Emacs feature.
Makes buffers automatically refresh when files change on disk. Essential for any Git workflow.
Configuration file mode settings and a compilation-mode setup with ANSI color support, so compiler output actually looks readable.
Custom window management beyond the defaults, because Emacs window management out of the box is powerful but needs a little nudging.
Tab-bar configuration for workspace management. Emacs has had tabs since version 27, and they’re genuinely useful once you configure them properly.
Two IRC clients, both built into Emacs, both configured. ERC gets the bigger treatment: logging, scrolltobottom, fill, match highlighting, and even inline image support (via one of the extra modules). The Emacs 31 cycle brought nice improvements here too, including a fix for the scrolltobottom/fill-wrap dependency issue.
This is where Emacs Solo’s completion story lives. Instead of reaching for Vertico, Consult, or Helm, I use icomplete-vertical-mode, which is built into Emacs. With the right settings it’s surprisingly capable:
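The kind of settings involved look like this (a sketch with values of my choosing, not the exact config; all the variables and commands below are built into Emacs):

```elisp
;; Vertical minibuffer completion with instant candidate display.
(icomplete-vertical-mode 1)
(setq icomplete-compute-delay 0            ; show candidates immediately
      icomplete-show-matches-on-no-input t ; list candidates before typing
      icomplete-prospects-height 10)       ; rows of candidates to display
;; Make candidate navigation feel like other vertical completion UIs.
(define-key icomplete-minibuffer-map (kbd "C-n") #'icomplete-forward-completions)
(define-key icomplete-minibuffer-map (kbd "C-p") #'icomplete-backward-completions)
```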
I’ve also been contributing patches upstream to improve icomplete’s vertical rendering with prefix indicators. Some of those features are already landing in Emacs 31, which means the polyfill code I carry today will eventually become unnecessary.
A heavily customized Dired setup. Custom listing switches, human readable sizes, integration with system openers (open on macOS,
xdg-open on Linux), and the dired-hide-details-hide-absolute-location
option from Emacs 31.
Writable Dired, so you can rename files by editing the buffer directly.
This one I’m particularly proud of. Emacs Solo’s Eshell configuration includes:
— Shared history across all Eshell buffers: Every Eshell instance reads from and writes to a merged history, so you never lose a command just because you ran it in a different buffer
— Custom prompts: Multiple prompt styles you can toggle between with C-c t (full vs. minimal) and C-c T (lighter vs. heavier full prompt)
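The shared-history behavior can be approximated with Eshell's built-in history hooks (a sketch of the approach, not the exact implementation):

```elisp
;; Write each command to the shared history file as it runs, and re-read
;; the file after every command so all Eshell buffers stay in sync.
(setq eshell-history-size 10000
      eshell-hist-ignoredups t)
(add-hook 'eshell-pre-command-hook #'eshell-save-some-history)
(add-hook 'eshell-post-command-hook
          (lambda () (eshell-read-history eshell-history-file-name)))
```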
This is one of the largest sections and one I’m most invested in. Emacs’s built-in vc is an incredible piece of software that most people overlook in favor of Magit. I’m not saying it replaces Magit entirely, but with the right configuration it covers 95% of daily Git operations:
— Git add/reset from vc-dir: S to stage, U to unstage, directly in the vc-dir buffer. Admittedly, I almost never use this because I’m now used to the Emacs-style VC workflow: C-x v D or C-x v =, then killing what I don’t want, splitting what isn’t ready yet, and finishing with C-c C-c. Amending with C-c C-e is awesome. Still useful once or twice a semester.
— Browse remote: C-x v B opens your repository on GitHub/GitLab in a browser; with a prefix argument it jumps to the current file and line
— Jump to current hunk: C-x v = opens the diff buffer scrolled to the hunk containing your current line
— Switch between modified files: C-x C-g lets you
completing-read through all modified/untracked files in the current repo
Merge conflict resolution and diff viewing. Ediff configured to split windows sanely (side by side, not in a new frame).
Documentation at point, with eldoc-help-at-pt (Emacs 31) for showing docs automatically.
The LSP client that ships with Emacs. Configured with:
— Custom server programs, including rassumfrassum for multiplexing TypeScript + ESLint + Tailwind (I wrote a whole post
about that)
— Keybindings under C-c l for code actions, rename, format, and inlay hints
— Automatic enabling for all prog-mode buffers except
emacs-lisp-mode and lisp-mode
Diagnostics, spell checking, and whitespace visualization. All built-in, all configured.
The Emacs newsreader and email client. Configured for IMAP/SMTP usage.
RSS/Atom feed reader built into Emacs. Customized with some extras I built myself for dealing with YouTube feeds: thumbnails, transcripts, sending to AI for a quick summary, and so on.
Org-mode configuration, because of course.
File tree navigation in a side window. With Emacs 31, speedbar gained speedbar-window support, so it can live inside your existing frame instead of spawning a new one.
Buffer name disambiguation when you have multiple files with the same name open.
Quick web searches from the minibuffer. Configured with useful search engines.
Specific configurations for every language I work with, organized into three areas:
Common Lisp: inferior-lisp and lisp-mode with custom REPL interaction, evaluation commands, and a poor man’s SLIME/SLY setup that actually works quite well for basic Common Lisp development.
Non-Tree-sitter: sass-mode for when tree-sitter grammars aren’t available.
Tree-sitter modes: ruby-ts-mode, js-ts-mode,
json-ts-mode, typescript-ts-mode, bash-ts-mode,
rust-ts-mode, toml-ts-mode, markdown-ts-mode (Emacs 31),
yaml-ts-mode, dockerfile-ts-mode, go-ts-mode. Each one configured with tree-sitter grammar sources (which Emacs 31 is starting to define internally, so those definitions will eventually become unnecessary).
This is where the fun really is. Each of these is a complete, standalone Elisp file that reimplements functionality you’d normally get from an external package. They’re all in lisp/ and can be used independently.
I call them “hacky reimplementations” in the spirit of Emacs Solo: they’re not trying to be feature-complete replacements for their MELPA counterparts. They’re trying to be small, understandable,
and good enough for daily use while keeping the config self-contained.
Custom color themes based on Modus. Provides several theme variants: Catppuccin Mocha, Crafters (the default), Matrix, and GITS. All built on top of Emacs’s built-in Modus themes by overriding faces, so you get the accessibility and completeness of Modus with different aesthetics.
Custom mode-line format and configuration. A hand-crafted mode-line that shows exactly what I want: buffer state indicators, file name, major mode, Git branch, line/column, and nothing else. No doom-modeline, no telephone-line, just format strings and faces.
Enhanced navigation and window movement commands. Extra commands for moving between windows, resizing splits, and navigating buffers more efficiently.
Configurable format-on-save with a formatter registry. You register formatters by file extension (e.g., prettier for .tsx,
black for .py), and the module automatically hooks into
after-save-hook to format the buffer. All controllable via a
defcustom, so you can toggle it on and off globally.
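A minimal version of such a registry might look like this (variable names and formatter commands are illustrative; it assumes formatters that read stdin and write the result to stdout):

```elisp
(defcustom my/format-on-save t
  "When non-nil, run the registered formatter after saving."
  :type 'boolean :group 'convenience)

(defvar my/formatters
  '(("py"  . "black -q -")
    ("tsx" . "prettier --parser typescript"))
  "Alist mapping file extensions to stdin/stdout formatter commands.")

(defun my/format-buffer-on-save ()
  "Pipe the buffer through its registered formatter, if any."
  (when my/format-on-save
    (let* ((ext (and buffer-file-name (file-name-extension buffer-file-name)))
           (cmd (cdr (assoc ext my/formatters))))
      (when cmd
        (let ((p (point)))
          ;; Replace the buffer contents with the formatter's output.
          (shell-command-on-region (point-min) (point-max) cmd nil t)
          (goto-char p)
          ;; Re-save the formatted text without re-triggering this hook.
          (let ((after-save-hook nil)) (save-buffer)))))))

(add-hook 'after-save-hook #'my/format-buffer-on-save)
```

A real implementation would also check the formatter's exit status before replacing the buffer, so a crashing formatter doesn't clobber your file with its error output.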
Frame transparency for GUI and terminal. Toggle transparency on your Emacs frame. Works on both graphical and terminal Emacs, using the appropriate mechanism for each.
Sync shell PATH into Emacs. The classic macOS problem: GUI Emacs doesn’t inherit your shell’s PATH. This module solves it the same way exec-path-from-shell does, but in about 20 lines instead of a full package.
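The whole trick does fit in a handful of lines; a sketch of the approach (function name is my own):

```elisp
(defun my/sync-path-from-shell ()
  "Copy PATH from a login shell into GUI Emacs."
  (interactive)
  ;; Ask the user's login shell what PATH it actually exports.
  (let ((path (string-trim
               (shell-command-to-string
                (format "%s -l -c 'printf %%s \"$PATH\"'"
                        (or (getenv "SHELL") "/bin/sh"))))))
    (setenv "PATH" path)
    (setq exec-path
          (append (split-string path path-separator)
                  (list exec-directory)))))
```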
Rainbow coloring for matching delimiters. Colorizes nested parentheses, brackets, and braces in different colors so you can visually match nesting levels. Essential for any Lisp, and helpful everywhere else.
Interactive project finder and switcher. A completing-read
interface for finding and switching between projects, building on Emacs’s built-in project.el.
Vim-like keybindings and text objects for Viper. If you use Emacs’s built-in viper-mode (the Vim emulation layer), this extends it with text objects and additional Vim-like commands. No Evil needed.
Highlight TODO and similar keywords in comments. Makes TODO,
FIXME, HACK, NOTE, and similar keywords stand out in source code comments with distinctive faces. A small thing that makes a big difference.
Git diff gutter indicators in buffers. Shows added, modified, and deleted line indicators in the margin, like diff-hl or
git-gutter. Pure Elisp, using vc-git under the hood.
Quick window switching with labels. When you have three or more windows, this overlays single-character labels on each window so you can jump to any one with a single keystroke. A minimal reimplementation of the popular ace-window package.
Centered document layout mode. Centers your text in the window with wide margins, like olivetti-mode. Great for prose writing, Org documents, or any time you want a distraction-free centered layout.
Upload text and files to 0x0.st. Select a region or a file and upload it to the 0x0.st paste service. The URL is copied to your kill ring. Quick and useful for sharing snippets.
Edit files as root via TRAMP. Reopen the current file with root privileges using TRAMP’s /sudo:: prefix. A reimplementation of the
...
Read the original on www.rahuljuliato.com »