10 interesting stories served every morning and every evening.
Ireland today (June 20) became the 15th coal-free country in Europe, having ended coal power generation at its 915 MW Moneypoint coal plant in County Clare. Initially commissioned in the mid-1980s by ESB, Moneypoint was intended to help Ireland offset the impact of the oil crises in the 1970s by providing a dependable source of energy.
But with Ireland now generating far more renewable energy, coal burning is no longer an urgent need. Data from energy think tank Ember shows Ireland generated 37% (11.4 TWh) of its electricity from wind in 2024. Solar is nowhere near wind's level of generation (0.97 TWh in 2024), but it has been breaking generation records for months, and local stakeholders are confident this positive trend will continue.
Following the closure, the Moneypoint plant will continue to serve a limited backup role, burning heavy fuel oil under emergency instruction from Ireland’s transmission system operator EirGrid until 2029.
This strategy is in line with previous plans made by EirGrid and ESB to exit coal-fired generation by the end of 2025, which stipulated that Moneypoint would no longer be active in the wholesale electricity market.
“Ireland has quietly rewritten its energy story, replacing toxic coal with homegrown renewable power,” said Alexandru Mustață, campaigner on coal and gas at Europe’s Beyond Fossil Fuels.
“But this isn’t ‘job done’. The government’s priority now must be building a power system for a renewable future; one with the storage, flexibility, and grid infrastructure needed to run fully on clean, domestic renewable electricity,” Mustață warned.
Jerry Mac Evilly, Campaigns Director at Friends of the Earth Ireland, appealed to the government to ensure oil backup at Moneypoint is kept to an absolute minimum and ultimately decommissioned. He also appealed for the government to prevent further development of data centers, which he said are increasing Ireland’s reliance on fossil gas.
“We also can’t ignore that the government is targeting the installation of at least 2 GW of gas power plants with no strategy to reduce Ireland’s dangerous gas dependency,” he added.
On a broader level, Ireland's move to end coal power generation at Moneypoint sets a precedent for further European coal exits, says Beyond Fossil Fuels. The group tracks European countries' progress on their commitments to switch from fossil fuels to renewable energy. So far, 23 European countries have committed to coal phase-outs. Italy is expected to complete its mainland coal phase-out this summer with the upcoming closure of its last two large coal power plants, and mainland Spain also expects to declare itself coal-free this summer.
...
Read the original on www.pv-magazine.com »
FontCrafter turns your handwriting into a real, installable font — entirely in your browser. No accounts, no uploads to servers, no cost.
Still have questions? Here’s our FAQ.
It’s easier than you think. Print, write, scan — done.
Your handwriting becomes an installable font (OTF, TTF, WOFF2, Base64)
Natural variation — your letters won’t look robotic or identical every time
Connected letter pairs (ff, th, st, etc.) that flow like real handwriting
Works in Word, Pages, Photoshop, websites — everywhere fonts are used
No account, no server, 100% private — everything happens in your browser
If you found this useful, I’d appreciate donations & patrons (to keep iterating)!
Download and print the template — US Letter or A4.
Print at 100% scale (no “fit to page”). Use white, unlined paper.
Fill in every box with a felt-tip pen. All 3 rows for each character.
Ballpoints are too faint; thick markers bleed. Keep strokes inside the boxes with breathing room from edges.
How to use the three rows: Row 1 is always uppercase. Row 2 can be a second version of your uppercase or lowercase. Row 3 can also be uppercase or lowercase.
Scan or photograph the sheet, then drag & drop that photo file below.
Lay the sheet flat on a table with even lighting — no shadows, no curl. A phone camera works great if the sheet is flat and well-lit.
Drop your completed (scanned) image below. JPG, PNG, or high-res photo — make sure the page is flat and evenly lit. The processing happens on your end. No servers involved. Nothing is saved or stored remotely.
Drop your filled-in scan here, or click to browse
Not happy with a character? Touch it up in any image editor, or use correction tape and re-scan.
Ensure All Four Crosshair Markers Are Visible and Continue →
Characters with green borders were detected cleanly. Click any character to deselect it — a replacement from another row will be used. Small imperfections are fine — they give your font personality.
Name your font and choose how your three rows should be used. Enable ligatures for natural-looking connected letter pairs.
What do you want to call this font?
What did you put in each row?
I wrote uppercase in all three rows
I wrote uppercase in Row 1, lowercase in Row 2, uppercase in Row 3
I wrote uppercase in Row 1, lowercase in Row 2, lowercase in Row 3
Row 2 will be used as your lowercase. Adjust how much to shrink it — set to 1.00 if you already wrote Row 2 smaller than Row 1.
Allow certain characters to dip below the baseline (e.g. g, j, p, q, y, or a slashed zero).
Separate with spaces. Add or remove characters as needed for your handwriting.
Controls how far descender tails drop below the line. Slide left for deeper, right for shallower.
Cleans up tiny ink specks that bled through from adjacent cells. Won’t affect dots on letters like i, j, or punctuation marks.
Evens out letters that were drawn at different sizes.
Adds 100+ derived glyphs from your handwriting. Uncheck if you only want your handwritten alphanumeric characters.
Auto-generates diacritics (accents, tildes, umlauts) from your base letters. Covers French, German, Spanish, Portuguese, Scandinavian, and more.
Ligatures are letter pairs that connect naturally in handwriting — like ff, fi, fl, th, and st. Auto-generate is recommended — it’s instant and produces natural-looking connections from your existing characters.
Kerning adjusts spacing between specific letter pairs — like AV, To, and WA — so characters with complementary shapes sit together naturally instead of having uniform gaps.
See how your font looks with sample text, or type anything you like below.
THE QUICK BROWN FOX JUMPS OVER A LAZY DOG.
sphinx of black quartz, judge my vow?
Both Fickle Dwarves Can Jinx My Pig Quiz!
• $11.38 + tax & a 5.69% tip = more than $20.74
• (I love Star Wars) [Yes] {Maybe} <OK>
• That’s what I said! “Really?”
• arcade.pirillo.com * chris.pirillo.com
• He scored 7/8 on the quiz — not bad~
• Order #4053: 2x @$16.99 each | Total: $33.98
• Is it _really_ 100^2 = 10,000‽
• “Yes,” she said, ‘it’s a go;’ then walked away.
Your font includes contextual alternates (calt) — consecutive characters will automatically cycle between your 3 handwriting variants. This works in apps with OpenType support but may not appear in this preview.
Click here to try it for yourself…
OTF for desktop apps, TTF for universal compatibility, WOFF2 for websites, Base64 for CSS embedding.
I confirm that the handwriting used to generate this font is my own or I have explicit permission from the handwriting’s owner to create and use this font. I understand that I am solely responsible for how I use the generated font files, and I release FontCrafter, Chris Pirillo, LockerGnome, and arcade.pirillo.com from any liability related to the font’s creation, distribution, or use.
...
Read the original on arcade.pirillo.com »
I’ve been obsessed with procedural maps since I was a kid rolling dice on the random dungeon tables in the AD&D Dungeon Master’s Guide. There was something magical about it — you didn’t design the dungeon, you discovered it, one room at a time, and the dice decided whether you got a treasure chamber or a dead end full of rats.
Years later, I decided to build my own map generator. It creates little medieval island worlds — with roads, rivers, coastlines, cliffs, forests, and villages — entirely procedurally. Built with Three.js WebGPU and TSL shaders, about 4,100 hex cells across 19 grids, generated in ~20 seconds.
If you’ve ever played Carcassonne, you already understand WFC. You have a stack of tiles and place them so everything lines up. Each tile has edges — grass, road, city. Adjacent tiles must have matching edges. A road edge must connect to another road edge. Grass must meet grass. The only difference is that the computer does it faster, and complains less when it gets stuck.
The twist: hex tiles have 6 edges instead of 4. That’s 50% more constraints per tile, and the combinatorial explosion is real. Square WFC is well-trodden territory. Hex WFC is… less so.
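The matching rule itself is tiny; the tile names and edge labels below are made up for illustration. On a hex, edge i of one tile must line up with the opposite edge, (i + 3) mod 6, of its neighbor:

```python
def compatible(tile_a, tile_b, direction):
    """True if tile_b can sit in `direction` (0-5) from tile_a.

    Edge `direction` of tile_a must match the opposite edge,
    (direction + 3) % 6, of tile_b.
    """
    return tile_a["edges"][direction] == tile_b["edges"][(direction + 3) % 6]

# Illustrative tiles: six edge labels, one per hex side.
grass_road = {"edges": ["grass", "road", "grass", "grass", "road", "grass"]}
plain      = {"edges": ["grass"] * 6}

print(compatible(grass_road, plain, 0))  # grass meets grass -> True
print(compatible(grass_road, plain, 1))  # road meets grass  -> False
```

A WFC solver is essentially this check, run inside a propagation loop over every cell's remaining candidate tiles.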
WFC is reliable for small grids. But as the grid gets bigger, the chance of painting yourself into a dead end goes up fast. A 217-cell hex grid almost never fails. A 4123-cell grid fails regularly.
The solution: modular WFC. Instead of one giant solve, the map is split into 19 hexagonal grids arranged in two rings around a center — about 4,100 cells total. Each grid is solved independently, but it has to match whatever tiles were already placed in neighboring grids. Those border tiles become fixed constraints.
And sometimes those constraints are simply incompatible. No amount of backtracking inside the current grid can fix a problem that was baked in by a neighbor. This is where I spent a lot of dev time.
Here’s the dirty secret of WFC: it fails. A lot. You make a series of random choices, propagate constraints, and eventually back yourself into a corner where some cell has zero valid options left. Congratulations, the puzzle is unsolvable.
The textbook solution is backtracking — undo your last decision and try a different tile. My solver tracks every possibility it removes during propagation (a “trail” of deltas), so it can rewind cheaply without copying the entire grid state. It’ll try up to 500 backtracks before giving up.
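A minimal sketch of the trail idea, with illustrative names (not the project's actual code): record every possibility removed during propagation as a (cell, tile) delta, and rewind by re-adding deltas back to the last decision point instead of copying the grid:

```python
class Solver:
    def __init__(self, domains):
        # domains: cell -> set of still-possible tiles
        self.domains = {c: set(d) for c, d in domains.items()}
        self.trail = []   # (cell, tile) deltas: possibilities removed
        self.marks = []   # trail length at each decision point

    def checkpoint(self):
        # called before each random tile choice
        self.marks.append(len(self.trail))

    def remove(self, cell, tile):
        # called during constraint propagation
        if tile in self.domains[cell]:
            self.domains[cell].discard(tile)
            self.trail.append((cell, tile))

    def backtrack(self):
        # rewind to the last checkpoint by replaying deltas in reverse
        mark = self.marks.pop()
        while len(self.trail) > mark:
            cell, tile = self.trail.pop()
            self.domains[cell].add(tile)

solver = Solver({"a": {"road", "grass"}, "b": {"grass"}})
solver.checkpoint()
solver.remove("a", "road")              # propagation prunes a possibility
assert solver.domains["a"] == {"grass"}
solver.backtrack()                      # cheap rewind, no grid copy
assert solver.domains["a"] == {"road", "grass"}
```

The win is that each undo costs only as many operations as were made since the checkpoint, which is what makes hundreds of backtracks affordable.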
But backtracking alone isn’t enough. The real problem is cross-grid boundaries.
After many failed approaches, I landed on a layered recovery system:
Layer 1: Unfixing. During the initial constraint propagation, if a neighbor cell creates a contradiction, the solver converts it from a fixed constraint back into a solvable cell. Its own neighbors (two cells out — “anchors”) become the new constraints. This is cheap and handles easy cases.
Layer 2: Local-WFC. If the main solve fails, the solver runs a mini-WFC on a small radius-2 region around the problem area — re-solving 19 cells in the overlap area to create a more compatible boundary. Up to 5 attempts, each targeting a different problem cell. Local-WFC was the breakthrough. Instead of trying to solve the impossible, go back and change the problem.
Layer 3: Drop and hide. Last resort. Drop the offending neighbor cell entirely and place mountain tiles to cover the seams. Mountains are great — their cliff edges match anything, and they look intentional. Nobody questions a mountain.
This map isn’t flat — it has 5 levels of elevation. Ocean and Grass start at level 0, but slopes and cliffs can move up or down a level. Low slopes go up 1 level, high slopes go up 2 levels. A road tile at level 3 needs to connect to another road tile at level 3, or a slope tile that transitions between levels. Get it wrong and you end up with roads that dead-end into cliff faces or rivers flowing uphill into the sky. The elevation axis turns a 2D constraint problem into a 3D one, and it’s where a lot of the tile variety (and a lot of the solver failures) comes from.
For hex math, the better approach is cube coordinates (q, r, s, where s = -q - r): a 3D coordinate system for the three hex axes. Neighbor finding becomes trivial — just add 1 to one coordinate and subtract 1 from another.
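The neighbor rule is short enough to show directly; this is the standard cube-coordinate scheme popularized by Red Blob Games, not code from the project:

```python
# Cube coordinates: q + r + s == 0 always holds, and each of the six hex
# neighbors is a fixed offset that adds 1 to one axis and subtracts 1
# from another.
CUBE_DIRECTIONS = [
    (+1, -1, 0), (+1, 0, -1), (0, +1, -1),
    (-1, +1, 0), (-1, 0, +1), (0, -1, +1),
]

def neighbors(q, r, s):
    assert q + r + s == 0, "invalid cube coordinate"
    return [(q + dq, r + dr, s + ds) for dq, dr, ds in CUBE_DIRECTIONS]

print(neighbors(2, -1, -1))  # six cells, each still summing to zero
```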
The good news is that WFC doesn’t really care about geometry. It’s concerned with which edges match which — it’s essentially a graph problem. The hex coordinates only matter for rendering and for the multi-grid layout, where the 19 grids are themselves arranged as a hex-of-hexes with their own offset positions.
If you’ve ever worked with hex grids, you owe Amit Patel at Red Blob Games a debt of gratitude. His hex grid guide is the definitive reference.
Trees, Buildings, and Why Not Everything Should Be WFC
Early on, I tried using WFC for tree and building placement. Bad idea. WFC is great at local edge matching but terrible at large-scale patterns. You’d get trees scattered randomly instead of clustered into forests, or buildings spread evenly instead of gathered into villages.
The solution: good old Perlin noise. A global noise field determines tree density and building placement, completely separate from WFC. Areas where the noise is above a threshold get trees; slightly different noise drives buildings. This gives you organic clustering — forests, clearings, villages — that WFC could never produce. I also used some additional logic to place buildings at the end of roads, ports and windmills on coasts, henges on hilltops etc.
WFC handles the terrain. Noise handles the decorations. Each tool does what it’s good at.
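The thresholding pattern can be sketched in a few lines. This uses a smooth toy field as a stand-in for Perlin noise (not the project's actual noise), but the placement logic is the same: sample a smooth field and keep cells above a threshold, which naturally produces contiguous clumps rather than uniform scatter:

```python
import math

def smooth_field(x, y):
    """A smooth toy field in [0, 1], standing in for Perlin noise."""
    return 0.5 + 0.5 * math.sin(x * 0.35) * math.sin(y * 0.27)

TREE_THRESHOLD = 0.7

def has_tree(x, y):
    # Because the field is smooth, above-threshold cells come in
    # contiguous patches -- forests -- with clearings between them.
    return smooth_field(x, y) > TREE_THRESHOLD

forest = [(x, y) for x in range(30) for y in range(30) if has_tree(x, y)]
```

Lowering the threshold grows the forests; a second field with a different threshold can drive building density the same way.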
I wanted that ‘Zelda: The Wind Waker’ cartoon shimmer on the water surface. Originally I tried generating caustics procedurally with four layers of Voronoi noise. This turned out to be very GPU heavy and did not look great. The solution was sampling a small scrolling caustic texture with a simple noise mask, which looks way better and is super cheap. Sometimes the easy solution is the correct solution.
Waves are sine bands that radiate outward from coastlines, inspired by Bad North’s gorgeous shoreline effect. To know “how far from the coast” each pixel is, the system renders a coast mask — a top down orthographic render of the entire map with white for land and black for water — then dilates and blurs it into a gradient. The wave shader reads this gradient to place animated sine bands at regular distance intervals, with noise to break up the pattern.
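The band placement can be illustrated in 1D; the spacing and sharpness values below are arbitrary, not the project's:

```python
import math

def wave_intensity(dist_from_coast, band_spacing=4.0, sharpness=8.0):
    """Bright sine bands repeating every `band_spacing` units of coast
    distance, raised to a power to sharpen them into thin lines."""
    phase = (dist_from_coast / band_spacing) * 2.0 * math.pi
    s = 0.5 + 0.5 * math.cos(phase)  # peaks at each band
    return s ** sharpness

# Peaks at distance 0, 4, 8, ...; near zero between the bands.
profile = [round(wave_intensity(d), 2) for d in range(13)]
```

The real shader reads the blurred coast mask for `dist_from_coast` and adds noise to break up the bands, but the band math is just this.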
This worked great on straight coastlines. In concave coves and inlets, the wave lines got thick and ugly. The blur-based gradient spreads the same value range over a wider physical area in coves, stretching the wave bands out.
Two fixes that didn't work:
• Screen-space derivatives to detect gradient stretching — worked at one zoom level, broke at others.
• Texture-space gradient magnitude to detect opposing coast edges canceling out — only detected narrow rivers, not actual problem coves.
The fundamental issue: blur encodes “how much land is nearby,” not “how far is the nearest coast edge.” These are different questions, and no amount of post-processing the blur can extract true distance.
The fix was a CPU-side “surroundedness” probe that checks each water cell’s neighbors to detect coves, writing a separate mask texture that thins the waves in enclosed areas. It’s kind of a hack, but it works, and the wave bands thin out nicely in coves.
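A toy version of the probe, on a square grid for brevity (the real map is hexagonal); the grid layout is illustrative:

```python
def surroundedness(grid, cell, neighbors_fn):
    """Fraction of a water cell's neighbors that are land.
    High values mean the cell sits in an enclosed cove."""
    ns = neighbors_fn(cell)
    land = sum(1 for n in ns if grid.get(n) == "land")
    return land / len(ns)

def square_neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# A tiny cove: water at (1, 1) hemmed in by land on three sides.
grid = {(0, 1): "land", (2, 1): "land", (1, 0): "land",
        (1, 2): "water", (1, 1): "water"}
print(surroundedness(grid, (1, 1), square_neighbors))  # 0.75
```

Writing these scores into a mask texture gives the wave shader a per-cell "this is a cove" signal that the blurred coast gradient couldn't provide.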
No dice required this time — but the feeling is the same. You hit a button, the map builds itself, and you discover what the algorithm decided to put there. It’s super satisfying to see the road and river systems matching up perfectly. Every time it’s different, and every time I find myself exploring for a while. The kid rolling dice on the dungeon tables would be into this.
...
Read the original on felixturner.github.io »
Last week, Dan Blanchard, the maintainer of chardet—a Python library for detecting text encodings used by roughly 130 million projects a month—released a new version. Version 7.0 is 48 times faster than its predecessor, supports multiple cores, and was redesigned from the ground up. Anthropic’s Claude is listed as a contributor. The license changed from LGPL to MIT.
Blanchard’s account is that he never looked at the existing source code directly. He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch. The resulting code shares less than 1.3% similarity with any prior version, as measured by JPlag. His conclusion: this is an independent new work, and he is under no obligation to carry forward the LGPL. Mark Pilgrim, the library’s original author,
opened a GitHub issue to object. The LGPL requires that modifications be distributed under the same license, and a reimplementation produced with ample exposure to the original codebase cannot, in Pilgrim’s view, pass as a clean-room effort.
The dispute drew responses from two prominent figures in the open source world. Armin Ronacher, the creator of Flask, welcomed the relicensing. Salvatore Sanfilippo (antirez), the creator of Redis,
published a broader defense of AI reimplementation, grounding it in copyright law and the history of the GNU project. Both conclude, by different routes, that what Blanchard did is legitimate. I respect both writers, and I think both are wrong—or more precisely, both are evading the question that actually matters.
That question is this: does legal mean legitimate? Neither piece answers it. Both move from “this is legally permissible” to “this is therefore fine,” without pausing at the gap between those two claims. Law sets a floor; clearing it does not mean the conduct is right. That gap is where this essay begins.
Antirez builds his case on history. When the GNU project reimplemented the UNIX userspace, it was lawful. So was Linux. Copyright law prohibits copying “protected expressions”—the actual code, its structure, its specific mechanisms—but it does not protect ideas or behavior. AI-assisted reimplementation occupies the same legal ground. Therefore, it is lawful.
The legal analysis is largely correct, and I am not disputing it. The problem lies in what antirez does next: he presents the legal conclusion as if it were also a social one, and uses a historical analogy that, examined more carefully, argues against his own position.
When GNU reimplemented the UNIX userspace, the vector ran from proprietary to free. Stallman was using the limits of copyright law to turn proprietary software into free software. The ethical force of that project did not come from its legal permissibility—it came from the direction it was moving, from the fact that it was expanding the commons. That is why people cheered.
The vector in the chardet case runs the other way. Software protected by a copyleft license—one that guarantees users the right to study, modify, and redistribute derivative works under the same terms—has been reimplemented under a permissive license that carries no such guarantee. This is not a reimplementation that expands the commons. It is one that removes the fencing that protected the commons. Derivative works built on chardet 7.0 are under no obligation to share their source code. That obligation, which applied to a library downloaded 130 million times a month, is now gone.
Antirez does not address this directional difference. He invokes the GNU precedent, but that precedent is a counterexample to his conclusion, not a supporting one.
Ronacher’s argument is different. He discloses upfront that he has a stake in the outcome: “I personally have a horse in the race here because I too wanted chardet to be under a non-GPL license for many years. So consider me a very biased person in that regard.” He goes on to write that he considers “the GPL to run against that spirit by restricting what can be done with it”—the spirit being that society is better off when we share.
This claim rests on a fundamental misreading of what the GPL does.
Start with what the GPL actually prohibits. It does not prohibit keeping source code private. It imposes no constraint on privately modifying GPL software and using it yourself. The GPL’s conditions are triggered only by distribution. If you distribute modified code, or offer it as a networked service, you must make the source available under the same terms. This is not a restriction on sharing. It is a condition placed on sharing: if you share, you must share in kind.
The requirement that improvements be returned to the commons is not a mechanism that suppresses sharing. It is a mechanism that makes sharing recursive and self-reinforcing. The claim that imposing contribution obligations on users of a commons undermines sharing culture does not hold together logically.
The contrast with the MIT license clarifies the point. Under MIT, anyone may take code, improve it, and close it off into a proprietary product. You can receive from the commons without giving back. If Ronacher calls this structure “more share-friendly,” he is using a concept of sharing with a specific directionality built in: sharing flows toward whoever has more capital and more engineers to take advantage of it.
The historical record bears this out. In the 1990s, companies routinely absorbed GPL code into proprietary products—not because they had chosen permissive licenses, but because copyleft enforcement was slack. The strengthening of copyleft mechanisms closed that gap. For individual developers and small projects without the resources to compete on anything but reciprocity, copyleft was what made the exchange approximately fair.
The creator of Flask knows this distinction. If he elides it anyway, the argument is not naïve—it is convenient.
The most interesting moment in Ronacher’s piece is not the argument but a detail he mentions in passing: Vercel reimplemented GNU Bash using AI and published it, then got visibly upset when Cloudflare
reimplemented Next.js the same way.
Ronacher notes this as an irony and moves on. But the irony cuts deeper than he lets on. Next.js is MIT licensed. Cloudflare’s vinext did not violate any license—it did exactly what Ronacher calls a contribution to the culture of openness, applied to a permissively licensed codebase. Vercel’s reaction had nothing to do with license infringement; it was purely competitive and territorial. The implicit position is: reimplementing GPL software as MIT is a victory for sharing, but having our own MIT software reimplemented by a competitor is cause for outrage. This is what the claim that permissive licensing is “more share-friendly” than copyleft looks like in practice. The spirit of sharing, it turns out, runs in one direction only: outward from oneself.
Ronacher registers the contradiction and does not stop. “This development plays into my worldview,” he writes. When you present evidence that cuts against your own position, acknowledge it, and then proceed to your original conclusion unchanged, that is a signal that the conclusion preceded the argument.
Back to the question posed at the start. Is legal the same as legitimate?
Antirez closes his careful legal analysis as though it settles the matter. Ronacher acknowledges that “there is an obvious moral question here, but that isn’t necessarily what I’m interested in.” Both pieces treat legal permissibility as a proxy for social legitimacy. But law only says what conduct it will not prevent—it does not certify that conduct as right. Aggressive tax minimization that never crosses into illegality may still be widely regarded as antisocial. A pharmaceutical company that legally acquires a patent on a long-generic drug and raises the price a hundredfold has not done something legal and therefore fine. Legality is a necessary condition; it is not a sufficient one.
In the chardet case, the distinction is sharper still. What the LGPL protected was not Blanchard’s labor alone. It was a social compact agreed to by everyone who contributed to the library over twelve years. The terms of that compact were: if you take this and build on it, you share back under the same terms. This compact operated as a legal instrument, yes, but it was also the foundation of trust that made contribution rational. The fact that a reimplementation may qualify legally as a new work, and the fact that it breaks faith with the original contributors, are separate questions. If a court eventually rules in Blanchard’s favor, that ruling will tell us what the law permits. It will not tell us that the act was right.
Zoë Kooyman, executive director of the FSF, put it plainly: “Refusing to grant others the rights you yourself received as a user is highly antisocial, no matter what method you use.”
Reading this debate, I keep returning to a question about position. From where are these two writers looking at the situation?
Antirez created Redis. Ronacher created Flask. Both are figures at the center of the open source ecosystem, with large audiences and well-established reputations. For them, the falling cost of AI reimplementation means something specific: it is easier to reimplement things they want in a different form. Ronacher says explicitly that he had begun reimplementing GNU Readline precisely because of its copyleft terms.
For the people who have spent years contributing to a library like chardet, the same shift in costs means something else entirely: the copyleft protection around their contributions can be removed. The two writers are speaking from the former position to people in the latter, telling them that this was always lawful, that historical precedent supports it, and that the appropriate response is adaptation.
When positional asymmetry of this kind is ignored, and the argument is presented as universal analysis, what you get is not analysis but rationalization. Both writers arrive at conclusions that align precisely with their own interests. Readers should hold that fact in mind.
Bruce Perens, who wrote the original Open Source Definition, said: “The entire economics of software development are dead, gone, over, kaput!” He meant it as an alarm. Antirez, from a similar assessment of the situation, draws the conclusion: adapt. Ronacher says he finds the direction exciting.
None of the three responses addresses the central question. When copyleft becomes technically easier to circumvent, does that make it less necessary, or more?
I think more. What the GPL protected was not the scarcity of code but the freedom of users. The fact that producing code has become cheaper does not make it acceptable to use that code as a vehicle for eroding freedom. If anything, as the friction of reimplementation disappears, so does the friction of stripping copyleft from anything left exposed. The erosion of enforcement capacity is a legal problem. It does not touch the underlying normative judgment.
That judgment is this: those who take from the commons owe something back to the commons. The principle does not change depending on whether a reimplementation takes five years or five days. No court ruling on AI-generated code will alter its social weight.
This is where law and community norms diverge. Law is made slowly, after the fact, reflecting existing power arrangements. The norms that open source communities built over decades did not wait for court approval. People chose the GPL when the law offered them no guarantee of its enforcement, because it expressed the values of the communities they wanted to belong to. Those values do not expire when the law changes.
In previous writing, I argued for a training copyleft (TGPL) as the next step in this line of development. The chardet situation suggests the argument has to go further: to a specification copyleft covering the layer below source code. If source code can now be generated from a specification, the specification is where the essential intellectual content of a GPL project resides. Blanchard’s own claim—that he worked only from the test suite and API without reading the source—is, paradoxically, an argument for protecting that test suite and API specification under copyleft terms.
The history of the GPL is the history of licensing tools evolving in response to new forms of exploitation: GPLv2 to GPLv3, then AGPL. What drove each evolution was not a court ruling but a community reaching a value judgment first and then seeking legal instruments to express it. The same sequence is available now. Whatever courts eventually decide about AI reimplementation, the question we need to answer first is not a legal one. It is a social one. Do those who take from the commons owe something back? I think they do. That judgment does not require a verdict.
What makes the pieces by antirez and Ronacher worth reading is not that they are right. It is that they make visible, with unusual clarity, what they are choosing not to see. When legality is used as a substitute for a value judgment, the question that actually matters gets buried in the footnotes of a law it has already outgrown.
...
Read the original on writings.hongminhee.org »
A Broward County judge has dismissed a red-light camera ticket, ruling that the state law used to issue the citation improperly shifts the burden of proof onto vehicle owners.
In a 21-page order signed March 3, Judge Steven P. DeLuca granted a defendant’s motion to dismiss a photo-enforced traffic citation issued under Florida’s red-light camera law.
The case involved a Sunrise red-light camera citation issued to a registered vehicle owner after automated cameras captured a vehicle entering an intersection against a red signal. The defendant argued the statute unconstitutionally requires the registered owner to prove they were not driving — instead of requiring the government to prove who was behind the wheel.
In the order, the court found that red-light camera cases, although labeled as civil infractions, function as “quasi-criminal” proceedings because they can result in monetary penalties, a formal finding of guilt, and consequences tied to a driver’s record.
Under Florida Statute 316.0083, once a camera captures a violation, the registered owner is presumed responsible unless they submit an affidavit identifying another driver. The court ruled that the framework improperly shifts the burden of proof away from the state.
Because traffic infractions that move to county court must be proven “beyond a reasonable doubt,” the judge wrote that the statute’s presumption violates constitutional due process protections.
As a result, the citation in this case was formally dismissed.
Click here to view the PDF file
Joel Mumford is an attorney with The Ticket Clinic. He said that if the case is quasi-criminal, meaning almost criminal or criminal-like, the state has to follow procedural due process.
“The state or the agencies that issue the tickets, it’s their burden to prove all the elements of the crime beyond and to the extent of each and every reasonable doubt. And the first element, which should be, who’s driving the car,” Mumford said. “The statute in Florida presumes that the registered owner is the driver of the car.”
Mumford said that even though the court order only applies in Broward County, it could open the door for challenges elsewhere in the state, with other counties following suit.
“What could probably happen is they the court could then get an appeal, and then if it goes up to the district court of appeal level and then make a decision on it, depending on what that says, that could then be applied to the entire state, if there’s no similar appellate cases throughout the state. So that could make it statewide,” he said.
Drivers in Boynton Beach want to see the red-light cameras gone.
Fifteen red-light camera systems are already running at seven intersections in Boynton Beach alone. Congress Avenue and Gateway Boulevard is just one of them.
“I’ve been ticketed here twice, and it’s ridiculous because they it’s just not fair,” said one driver, who didn’t want to be identified. “The person that does the determination when you ran the light, it’s just a random. Whoever they want to pick, pick you to say, okay, you’re gonna pay the ticket.”
That driver had to pay his $158 ticket, but he’s hopeful Palm Beach County can soon follow suit. He wants red-light cameras gone so future violations could be thrown out.
“I think they need to outlaw it and get rid of it,” he said.
Advocacy group StopTheCams, which has long opposed automated traffic enforcement, called the ruling a major victory. In a press release, the group said the decision confirms what critics have argued for years — that red-light camera laws punish vehicle owners without requiring proof they committed the violation.
Supporters of red-light cameras argue the systems improve safety by deterring dangerous driving at intersections. Florida’s red-light camera law, known as the Mark Wandall Traffic Safety Act, allows local governments to use automated enforcement systems.
It remains unclear whether the ruling will be appealed or how broadly it could affect similar cases statewide. For now, the decision applies to this specific case in Broward County, but legal observers say it could fuel renewed challenges to Florida’s red-light camera enforcement system.
...
Read the original on cbs12.com »
After several intense and incredible years building Bluesky from the ground up, I’ve decided to step back as CEO and transition to a new role as Bluesky’s Chief Innovation Officer.
In 2019, I set out to build an open protocol for social media, with the goal of enabling the development of a new generation of applications designed to empower users. The Bluesky app started off as a reference client for the protocol, but has since grown into a thriving platform with over 40 million users. Last year, we grew a world-class team, expanded the AT Protocol ecosystem, and proved that a values-driven social network could thrive at scale.
As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things. As part of this transition, Toni Schneider, former CEO of Automattic and partner at True Ventures, will join our team as interim CEO, while our board runs a search for a permanent chief executive.
Toni believes deeply in the Bluesky mission, and has been an advisor to the company and me personally for over a year. Both Automattic and True Ventures are also investors in Bluesky, and support the development of a more open, user-driven internet. Automattic, the company behind WordPress.com, has built its business on open source software, working to make the web a better and more participatory place. Toni was a key part of guiding that mission, and is someone who leads with genuine curiosity, takes the long view, and deeply understands what it means to build a company around a mission. I am confident that he is the right person to lead us into this next chapter.
Scaling up this company has been a learning experience unlike anything else. I’ve grown a lot as a leader and had the privilege of assembling the best team I’ve ever worked with. As we’ve grown, I’ve found that people thrive when they’re in a role where their passions overlap with their strengths. This is as true for me as it is for our team. I’m most energized by exploring new ideas, bringing a vision to life, and helping people discover their strengths. Transitioning to a more focused role where I can do what brings me energy is my way of putting that belief into practice.
I’m excited to dig into the next frontier of what decentralized social can be, while bringing Toni in to support our team as an experienced operator and leader. The work that got us here was just the beginning, and I’m grateful to keep building alongside this team and community.
...
Read the original on bsky.social »
Artificial intelligence chips are getting upgraded more quickly than data centers can be built, a market reality that exposes a key risk to the AI trade and Oracle’s debt-fueled expansion.
OpenAI is no longer planning to expand its partnership with Oracle in Abilene, Texas, home to the Stargate data center, because it wants clusters with newer generations of Nvidia graphics processing units, according to a person familiar with the matter.
The current Abilene site is expected to use Nvidia’s Blackwell processors, and the power isn’t projected to come online for a year. By then, OpenAI is hoping to have expanded access to Nvidia’s next-generation chips in bigger clusters elsewhere, said the person, who asked not to be named due to confidentiality.
Bloomberg was first to report on the companies ending their plans for expansion in Abilene. In a post on X on Sunday, Oracle called news reports about the activity “false and incorrect,” but the post only said existing projects are on track and didn’t address expansion plans.
Oracle secured the site, ordered the hardware, and spent billions of dollars on construction and staff, with the expectation of going bigger.
It’s a logical decision for OpenAI, which doesn’t want older chips. Nvidia used to release a new generation of data center processors every two years. Now, CEO Jensen Huang has the company shipping one every year, and each generation offers a leap in capability. Vera Rubin, unveiled at CES in January and already in production, delivers five times the inference performance of Blackwell.
For the companies building frontier models, the smallest improvement in performance could equate to huge gaps in model benchmarks and rankings, which are closely followed by developers and translate directly to usage, revenue, and valuation.
That all points to a bigger problem at play. For infrastructure companies, securing a site, connecting power and standing up a facility takes 12 to 24 months at minimum. But customers want the latest and greatest, and they’re tracking the yearly chip upgrades.
Oracle’s added challenge is that it’s the only hyperscaler funding its buildout primarily with debt, to the tune of $100 billion and counting. Google, Amazon and Microsoft, by contrast, are leaning on their enormous cash-generating businesses.
Meanwhile, Oracle partner Blue Owl is declining to fund an additional facility, and Oracle reportedly plans to cut up to 30,000 jobs.
Oracle reports fiscal third-quarter results on Tuesday, and investors will be paying close attention to how the company addresses a $50 billion capital expenditure plan with negative free cash flow, and whether the financing pipeline can hold up.
The stock is down 23% so far this year and has lost over half its value since peaking in September.
Beyond Oracle, GPU depreciation is a risk for the broader market and could have ramifications across the AI landscape. Every infrastructure deal signed today may result in a commitment to outdated hardware before the power is even connected.
...
Read the original on www.cnbc.com »
FFmpeg is truly a multi-tool for media processing. As an industry-standard tool it supports a wide variety of audio and video codecs and container formats. It can also orchestrate complex chains of filters for media editing and manipulation. For the people who use our apps, FFmpeg plays an important role in enabling new video experiences and improving the reliability of existing ones.
Meta executes ffmpeg (the main CLI application) and ffprobe (a utility for obtaining media file properties) binaries tens of billions of times a day, introducing unique challenges when dealing with media files. FFmpeg can easily perform transcoding and editing on individual files, but our workflows have additional requirements to meet our needs. For many years we had to rely on our own internally developed fork of FFmpeg to provide features that have only recently been added to FFmpeg, such as threaded multi-lane encoding and real-time quality metric computation.
Over time, our internal fork came to diverge significantly from the upstream version of FFmpeg. At the same time, new versions of FFmpeg brought support for new codecs and file formats, and reliability improvements, all of which allowed us to ingest more diverse video content from users without disruptions. This necessitated that we support both recent open-source versions of FFmpeg alongside our internal fork. Not only did this create a gradually divergent feature set, it also created challenges around safely rebasing our internal changes to avoid regressions.
As our internal fork became increasingly outdated, we collaborated with FFmpeg developers, FFlabs, and VideoLAN to develop features in FFmpeg that allowed us to fully deprecate our internal fork and rely exclusively on the upstream version for our use cases. Using upstreamed patches and refactorings we’ve been able to fill two important gaps that we had previously relied on our internal fork to fill: threaded, multi-lane transcoding and real-time quality metrics.
When a user uploads a video through one of our apps, we generate a set of encodings to support Dynamic Adaptive Streaming over HTTP (DASH) playback. DASH playback allows the app’s video player to dynamically choose an encoding based on signals such as network conditions. These encodings can differ in resolution, codec, framerate, and visual quality level but they are created from the same source encoding, and the player can seamlessly switch between them in real time.
In a very simple system, separate FFmpeg command lines can generate the encodings for each lane one by one, in serial. This could be optimized by running each command in parallel, but this quickly becomes inefficient due to the duplicate work done by each process.
To work around this, multiple outputs could be generated within a single FFmpeg command line, decoding the frames of a video once and sending them to each output’s encoder instance. This eliminates a lot of overhead by deduplicating the video decoding and process startup costs incurred by each command line. Given that we process over 1 billion video uploads daily, each requiring multiple FFmpeg executions, reductions in per-process compute usage yield significant efficiency gains.
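As a rough sketch of what such a multi-output invocation looks like, the helper below builds one ffmpeg command line that decodes the source once and attaches a separately scaled encoder to each output lane. The filenames and lane specs are invented for illustration; real production pipelines involve far more options.

```python
def build_multilane_cmd(src, lanes):
    """Build a single ffmpeg argv that decodes `src` once and
    produces one encoded output per (height, bitrate, filename) lane."""
    cmd = ["ffmpeg", "-i", src]
    for height, bitrate, out in lanes:
        cmd += [
            "-map", "0:v:0",              # reuse the same decoded video stream
            "-vf", f"scale=-2:{height}",  # per-lane resolution (width kept even)
            "-c:v", "libx264",
            "-b:v", bitrate,
            out,
        ]
    return cmd

# Hypothetical DASH-style lane ladder.
lanes = [(1080, "5M", "out_1080.mp4"),
         (720, "3M", "out_720.mp4"),
         (360, "800k", "out_360.mp4")]
print(" ".join(build_multilane_cmd("input.mp4", lanes)))
```

Because each output file gets its own `-map`/`-vf`/`-c:v` group, a single process feeds one decoder into three encoder instances instead of decoding the source three times.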
Our internal FFmpeg fork provided an additional optimization to this: parallelized video encoding. While individual video encoders are often internally multi-threaded, previous FFmpeg versions executed each encoder in serial for a given frame when multiple encoders were in use. By running all encoder instances in parallel, better parallelism can be obtained overall.
Thanks to contributions from FFmpeg developers, including those at FFlabs and VideoLAN, more efficient threading was implemented starting with FFmpeg 6.0, with the finishing touches landing in 8.0. This was directly influenced by the design of our internal fork and was one of the main features we had relied on it to provide. This development led to the most complex refactoring of FFmpeg in decades and has enabled more efficient encodings for all FFmpeg users.
To fully migrate off of our internal fork we needed one more feature implemented upstream: real-time quality metrics.
Visual quality metrics, which give a numeric representation of the perceived visual quality of media, can be used to quantify the quality loss incurred from compression. These metrics are categorized as reference or no-reference metrics, where the former compares a reference encoding to some other distorted encoding.
FFmpeg can compute various visual quality metrics such as PSNR, SSIM, and VMAF using two existing encodings in a separate command line after encoding has finished. This is okay for offline or VOD use cases, but not for livestreaming where we might want to compute quality metrics in real time.
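For the offline case described above, the two-encoding comparison can be sketched as a separate command line (filenames hypothetical): FFmpeg's `ssim` and `psnr` filters take the distorted and reference streams as inputs and write their scores to the log, with the output itself discarded.

```python
def build_metric_cmd(distorted, reference, metric="ssim"):
    """Build an ffmpeg argv that compares a distorted encoding against
    its reference using a quality-metric filter (e.g. ssim or psnr)."""
    return [
        "ffmpeg",
        "-i", distorted,
        "-i", reference,
        "-lavfi", f"[0:v][1:v]{metric}",  # feed both video streams to the filter
        "-f", "null", "-",                # discard frames; scores go to the log
    ]

print(" ".join(build_metric_cmd("encoded.mp4", "reference.mp4", "psnr")))
```

This is the post-hoc workflow the article contrasts with livestreaming, where a second pass over finished encodings is not an option.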
To do this, we need to insert a video decoder after each video encoder used by each output lane. These provide bitmaps for each frame in the video after compression has been applied so that we can compare against the frames before compression. In the end, we can produce a quality metric for each encoded lane in real time using a single FFmpeg command line.
Thanks to “in-loop” decoding, which was enabled by FFmpeg developers including those from FFlabs and VideoLAN, beginning with FFmpeg 7.0, we no longer have to rely on our internal FFmpeg fork for this capability.
Things like real-time quality metrics while transcoding and more efficient threading can bring efficiency gains to a variety of FFmpeg-based pipelines both in and outside of Meta, and we strive to enable these developments upstream to benefit the FFmpeg community and wider industry. However, there are some patches we’ve developed internally that don’t make sense to contribute upstream. These are highly specific to our infrastructure and don’t generalize well.
FFmpeg supports hardware-accelerated decoding, encoding, and filtering with devices such as NVIDIA’s NVDEC and NVENC, AMD’s Unified Video Decoder (UVD), and Intel’s Quick Sync Video (QSV). Each device is supported through an implementation of standard APIs in FFmpeg, allowing for easier integration and minimizing the need for device-specific command line flags. We’ve added support for the Meta Scalable Video Processor (MSVP), our custom ASIC for video transcoding, through these same APIs, enabling the use of common tooling across different hardware platforms with minimal platform-specific quirks.
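A minimal sketch of how those common APIs keep invocations uniform across vendors: only the device flag and the encoder name change between platforms (filenames here are hypothetical).

```python
def build_hw_cmd(src, out, hw="cuda"):
    """Build an ffmpeg argv for a hardware-accelerated transcode.
    The structure is identical across devices; only the -hwaccel
    device and the vendor-specific encoder name differ."""
    encoders = {"cuda": "h264_nvenc",  # NVIDIA NVENC
                "qsv": "h264_qsv"}     # Intel Quick Sync Video
    return ["ffmpeg",
            "-hwaccel", hw,            # hardware-accelerated decode
            "-i", src,
            "-c:v", encoders[hw],      # hardware encoder for this device
            out]

print(" ".join(build_hw_cmd("input.mp4", "output.mp4", "cuda")))
```

Supporting a new device such as MSVP behind these same APIs means the surrounding tooling stays the same and only the device/encoder mapping grows.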
As MSVP is only used within Meta’s own infrastructure, it would create a challenge for FFmpeg developers to support it without access to the hardware for testing and validation. In this case, it makes sense to keep patches like this internal since they wouldn’t provide benefit externally. We’ve taken on the responsibility of rebasing our internal patches onto more recent FFmpeg versions over time, utilizing extensive validation to ensure robustness and correctness during upgrades.
With more efficient multi-lane encoding and real-time quality metrics, we were able to fully deprecate our internal FFmpeg fork for all VOD and livestreaming pipelines. And thanks to standardized hardware APIs in FFmpeg, we’ve been able to support our MSVP ASIC alongside software-based pipelines with minimal friction.
FFmpeg has withstood the test of time with over 25 years of active development. Developments that improve resource utilization, add support for new codecs and features, and increase reliability enable robust support for a wider range of media. For people on our platforms, this means enabling new experiences and improving the reliability of existing ones. We plan to continue investing in FFmpeg in partnership with open source developers, bringing benefits to Meta, the wider industry, and people who use our products.
We would like to acknowledge contributions from the open source community, our partners in FFlabs and VideoLAN, and many Meta engineers, including Max Bykov, Jordi Cenzano Ferret, Tim Harris, Colleen Henry, Mark Shwartzman, Haixia Shi, Cosmin Stejerean, Hassene Tmar, and Victor Loh.
...
Read the original on engineering.fb.com »
Social media was once hailed as the great democratizer of information. It promised a world where anyone could share their voice, connect with like-minded communities, and discover authentic human perspectives. But in 2025, the landscape looks very different. Social platforms are drowning in a flood of Generative AI (GenAI) content—articles, images, videos, and even entire conversations churned out at industrial scale. The result is a noisy ocean of sameness, where genuine human voices struggle to be heard.
For years, one could track the slow decline of social platforms: from algorithmic manipulation and advertising overload, to bot armies, political polarization, and the prioritization of engagement over truth. Yet, the explosion of AI-generated content has dramatically accelerated this downfall. Instead of scrolling for connection, users now find themselves wading through endless streams of repetitive, shallow, or outright meaningless material. This is not just a temporary nuisance. It signals the death of social media as we know it. And in this environment, an old technology—long thought to be obsolete—emerges as a savior: RSS, or Really Simple Syndication. RSS offers a way out of the algorithmic maze, a return to authenticity, and a chance to reclaim control over the information we consume.
This article explores why AI-generated content is destroying social media, how RSS provides a viable and elegant solution, and why now might be the perfect moment to revive this forgotten technology.
Generative AI has lowered the barrier to content creation to near zero. What once required hours of writing, designing, or editing can now be produced in seconds. Tools like ChatGPT, Midjourney, and Runway generate polished text, stunning visuals, and slick videos instantly. This is revolutionary in some ways, but devastating in others. The problem lies in scale. When anyone—or any bot—can publish unlimited content at no cost, the supply of information skyrockets, but human attention remains finite. The inevitable result is oversaturation, an endless flood of low-value material that drowns out everything else.
AI-generated articles and posts often sound competent, but they rarely sound alive. They mimic human style but lack human depth. After reading a dozen AI-written articles, a pattern emerges: similar phrases, repetitive structures, and predictable conclusions. The internet is filling up with machine-generated déjà vu. For readers, this creates fatigue from encountering the same types of content over and over, along with an erosion of trust as it becomes difficult to distinguish genuine human thought from automated output.
Compounding this problem is the way social media algorithms function. They don’t care about authenticity; they care about engagement. AI-generated content, designed for clicks and shares, fits neatly into their goals. As more content is produced, algorithms amplify the pieces that trigger emotions or quick interactions, even if they are shallow, manipulative, or misleading. AI makes it cheaper to produce clickbait, and social media ensures it spreads faster than ever. For creators, this is crushing. Human-made articles, videos, or posts now compete against endless waves of machine-made content. Audiences can barely tell the difference, and many no longer care. The result is suffocation of authentic voices. For users, the platforms feel less personal, less inspiring, and less trustworthy. Social media promised community, but what we now get is content sludge.
The decay of social media did not begin with AI. For over a decade, cracks have been visible. Platforms increasingly prioritized advertisers over users, forcing ad overload onto feeds. Algorithms replaced user choice with automated curation. Fake news and bots undermined trust. Excessive use was linked to anxiety, depression, and polarization. Yet now, the flood of AI content accelerates this process. If every scroll reveals machine-written posts, AI art clones, or synthetic influencers, the value of the platform erodes further. People log in less, engagement drops, and the feedback loop worsens.
At the heart of this decline lies the collapse of authenticity. The core appeal of social media was its human connection, the spontaneity of real people sharing real moments. But with AI content dominating feeds, that connection is diluted. Imagine logging into Instagram and seeing 90 percent AI-generated fashion shoots, travel photos, or influencer avatars. Or scrolling Twitter/X and finding endless AI-written hot takes. The magic of human unpredictability vanishes. Platforms are also trapped by their reliance on advertising. Ads depend on user trust and engagement, but as feeds become indistinguishable from AI sludge, users disengage and advertisers see diminishing returns. The result is a death spiral: platforms push harder for monetization while driving users further away. This is the trajectory we are on, and it points toward the death of social media as we knew it.
RSS, or Really Simple Syndication, is an old technology from the early web. At its core, it is a feed format that allows websites to publish updates which users can subscribe to. Instead of visiting each site individually, you can aggregate updates into a single reader app. Think of it as an inbox for the internet. While RSS was once popular in the blogging era of the 2000s, it faded as social media took over. But now, in the age of AI content floods, it might be the perfect antidote.
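A minimal sketch of what a reader app does under the hood: fetch a feed's XML and pull out the items. The tiny RSS 2.0 feed below is invented for illustration.

```python
import xml.etree.ElementTree as ET

# A hand-written minimal RSS 2.0 feed, purely for illustration.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com</link>
    <item>
      <title>Hello, RSS</title>
      <link>https://example.com/hello</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def parse_items(xml_text):
    """Return (title, link) for every <item> in an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(parse_items(FEED))  # [('Hello, RSS', 'https://example.com/hello')]
```

A real reader adds HTTP polling and deduplication on top, but the core is exactly this: no ranking, no recommendation, just the items each subscribed site chose to publish.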
The benefits of RSS are striking. With RSS, you subscribe directly to websites, blogs, or news outlets, meaning there is no middleman algorithm deciding what you see. By curating your own sources, you can focus on human-written blogs, company press releases, or trusted outlets. Most feeds deliver clean content without injected ads or surveillance. Updates appear instantly in your reader, often faster than social media platforms surface them. Above all, RSS returns control to the user. You decide which voices matter, with no algorithm filtering or manipulating your feed.
Unlike social feeds, where AI-generated spam floods in, RSS allows you to choose human sources deliberately. If you want updates from your favorite indie journalist, you simply add their blog feed. If you want company news straight from the source, you subscribe to their press release feed. The result is a curated, personalized feed of authentic voices. In other words, RSS restores the signal-to-noise ratio.
Despite its age, RSS never disappeared. Many apps and services still support it, some paid, some free, some open source. During my own exploration, I tested multiple RSS reader apps. Most worked, but one stood out: Feeder.
Feeder is free, lightweight, and refreshingly simple. Unlike bloated apps or paid services, it focuses on what matters. There are no distracting ads, only content. It works across phone and desktop, providing seamless use. Transparency is another key advantage—it is open source on GitHub, meaning you can audit what it does. Notifications, search, folders, and syncing all work smoothly. It doesn’t try to reinvent RSS; it just makes it work. In a time where transparency and trust matter, Feeder’s open-source nature is a massive plus.
Of course, Feeder is not the only option. Other strong contenders include Feedly, Inoreader, and NetNewsWire. But Feeder captures the spirit of RSS: independence, simplicity, and user control.
AI-driven social media thrives on removing choice. It decides what you see, when you see it, and how you feel about it. RSS flips the script. You decide your sources. You build your own information ecosystem. Instead of waiting for an article to appear on Twitter/X or Facebook—filtered by algorithms, distorted by ads—you get it straight from the source, with no delays, no manipulation, and no man in the middle.
By subscribing directly to human creators—bloggers, journalists, thinkers—you bypass the noise and support authenticity. These voices are harder to find on AI-clogged platforms, but with RSS, they come straight to you. With RSS, there is no hidden data mining, no invisible AI influence. Just a feed of your chosen content, delivered locally. In a digital world plagued by hidden algorithms, this transparency is refreshing.
Social media will not vanish overnight, but its role is changing. For many, it will become background noise, a chaotic carnival of AI bots shouting into the void. Platforms may survive as entertainment machines, but their credibility as sources of authentic human insight is gone. The early web was decentralized, built on blogs, forums, and personal sites. Social media centralized everything, but at great cost. RSS offers a way back: decentralized, user-driven, and authentic. Instead of one feed controlled by one corporation, each person builds their own.
In the age of infinite AI content, curation becomes the most valuable act. RSS empowers individuals to curate their own feeds. Instead of passively scrolling, we actively select. Instead of drowning, we choose.
Social media as we knew it is dying—not because humans lost interest in sharing, but because machines overwhelmed the system. Generative AI has flooded platforms with repetitive, shallow, and manipulative content. Algorithms amplify the noise. Authenticity evaporates.
But out of this collapse emerges an opportunity: the rebirth of RSS. This old, simple technology empowers us to reclaim control, cut out middlemen, and reconnect with genuine human voices. Tools like Feeder make it accessible again, providing clean, ad-free, transparent feeds of the content that matters most.
The future of information consumption may not be algorithm-driven feeds but self-curated, human-centered inboxes of authenticity. The death of social media could be the start of something better: a renaissance of the open web. RSS isn’t just nostalgia. It’s a survival tool in the age of AI content floods. And the sooner we embrace it, the sooner we can rediscover the joy of consuming information that feels truly human.
By the way, here is my own RSS blog feed!
...
Read the original on www.smartlab.at »