10 interesting stories served every morning and every evening.




1 2,268 shares, 95 trendiness

What an unprocessed photo looks like

(Photography)

Here’s a photo of a Christmas tree, as my cam­er­a’s sen­sor sees it:

This is because while the camera's analog-to-digital converter (ADC) can theoretically output values from 0 to 16382, the data doesn't cover that whole range:

The real range of ADC values is ~2110 to ~13600. Let's set those values as the black and white points of the image:
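The levels adjustment is nothing fancy: just a linear rescale of the used slice of the ADC range onto 0 to 1. A minimal Python/numpy sketch of the idea (the raw array and these exact black and white points are illustrative assumptions based on the approximate figures above):

```python
import numpy as np

BLACK_POINT = 2110    # darkest value the sensor actually recorded (approximate)
WHITE_POINT = 13600   # brightest value the sensor actually recorded (approximate)

def set_levels(raw: np.ndarray) -> np.ndarray:
    """Linearly map the used portion of the ADC range onto 0.0-1.0."""
    scaled = (raw.astype(np.float64) - BLACK_POINT) / (WHITE_POINT - BLACK_POINT)
    return np.clip(scaled, 0.0, 1.0)   # clamp any stray values outside the measured range
```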

Much better, but it's still more monochromatic than I remember the tree being. Camera sensors aren't actually able to see color: they only measure how much light hit each pixel.

In a color cam­era, the sen­sor is cov­ered by a grid of al­ter­nat­ing color fil­ters:

Let’s color each pixel the same as the fil­ter it’s look­ing through:

This version is more colorful, but each pixel only has one of its three RGB components.

To fix this, I just averaged the values of each pixel with its neighbors:

Applying this process to the whole photo gives the lights some color:
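The article doesn't show code for this step, but a rough sketch of that neighbor-averaging demosaic might look like the following. It assumes the levelled sensor data is in a 2-D array called mosaic and a same-shaped bayer array records which filter (0 = red, 1 = green, 2 = blue) covers each pixel; both names are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import convolve

def naive_demosaic(mosaic: np.ndarray, bayer: np.ndarray) -> np.ndarray:
    """Fill in the two missing colors at each pixel by averaging its 3x3 neighborhood."""
    kernel = np.ones((3, 3))
    rgb = np.zeros(mosaic.shape + (3,))
    for channel in range(3):
        mask = (bayer == channel).astype(np.float64)
        total = convolve(mosaic * mask, kernel, mode="mirror")  # sum of nearby samples of this color
        count = convolve(mask, kernel, mode="mirror")           # how many such samples there were
        rgb[..., channel] = total / np.maximum(count, 1.0)      # their average
    return rgb
```

Real raw converters use considerably smarter interpolation than this simple averaging.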

However, the im­age is still very dark. This is be­cause mon­i­tors don’t have as much dy­namic range as the hu­man eye, or a cam­era sen­sor: Even if you are us­ing an OLED, the screen still has some am­bi­ent light re­flect­ing off of it and lim­it­ing how black it can get.

There’s also an­other, sneakier fac­tor caus­ing this:

Our per­cep­tion of bright­ness is non-lin­ear.

If brightness values are quantized linearly, most of the ADC bins end up spent on nearly identical shades of white while every other tone is crammed into the bottom of the range. Because this is an inefficient use of memory, most color spaces assign extra bins to darker colors:

As a result, if the linear data is displayed directly, it will appear much darker than it should be.
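To make that concrete, here is a small gamma-encoding sketch. The 2.2 exponent is a common display convention and an assumption here, not necessarily the curve the author used:

```python
import numpy as np

def encode_gamma(linear: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Apply a simple gamma curve so dark linear values get far more of the output range."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# A pixel at 10% of full linear brightness lands at roughly 35% of the display range:
# encode_gamma(np.array([0.1]))  ->  array([0.351...])
```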

Both prob­lems can be solved by ap­ply­ing a non-lin­ear curve to each color chan­nel to brighten up the dark ar­eas… but this does­n’t quite work out:

Some of this green cast is caused by the cam­era sen­sor be­ing in­trin­si­cally more sen­si­tive to green light, but some of it is my fault: There are twice as many green pix­els in the fil­ter ma­trix. When com­bined with my rather naive demo­saic­ing, this re­sulted in the green chan­nel be­ing boosted even higher.

In either case, it can be fixed with proper white balance: equalize the channels by multiplying each one by a constant.

However, be­cause the im­age is now non-lin­ear, I have to go back a step to do this. Here’s the dark im­age from be­fore with all the val­ues tem­porar­ily scaled up so I can see the prob­lem:

… here’s that im­age with the green taken down to match the other chan­nels:

… and af­ter re-ap­ply­ing the curve:
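Put together, the order of operations is: multiply each channel by its gain while the data is still linear, then re-apply the curve. A rough sketch with made-up gain values (halving green is purely illustrative, not the author's measured correction):

```python
import numpy as np

WB_GAINS = np.array([1.0, 0.5, 1.0])   # illustrative: pull the over-boosted green channel back down

def white_balance_then_curve(linear_rgb: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Equalize the channels in linear space, then brighten the shadows with the curve."""
    balanced = np.clip(linear_rgb * WB_GAINS, 0.0, 1.0)   # per-channel constants on linear data
    return balanced ** (1.0 / gamma)                       # same kind of curve as in the earlier sketch
```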

This is re­ally just the bare min­i­mum: I haven’t done any color cal­i­bra­tion, the white bal­ance is­n’t per­fect, the black points are too high, there’s lots of noise that needs to be cleaned up…

Additionally, applying the curve to each color channel accidentally desaturated the highlights. This effect looks rather good — and is what we've come to expect from film — but it has de-yellowed the star. It's possible to separate the luminance and curve it while preserving color. On its own, this would make the LED Christmas lights into an oversaturated mess, but combining both methods can produce nice results.

For com­par­i­son, here’s the im­age my cam­era pro­duced from the same data:

Far from being an “unedited” photo, there's a huge amount of math that's gone into making an image that nicely represents what the subject looks like in person.

There's nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn't done under the hood. The edited image isn't “faker” than the original: they are different renditions of the same data.

In the end, repli­cat­ing hu­man per­cep­tion is hard, and it’s made harder when con­strained to the lim­i­ta­tions of dis­play tech­nol­ogy or printed im­ages. There’s noth­ing wrong with tweak­ing the im­age when the au­to­mated al­go­rithms make the wrong call.

...

Read the original on maurycyz.com »

2 1,103 shares, 40 trendiness

Linux is good now, and if you want to feel like you actually own your PC, make 2026 the year of Linux on (your) desktop

I’m all-in, baby. I’m com­mit­ted. If up­grad­ing any dis­tinct com­po­nent of my PC did­n’t re­quire me tak­ing out a loan right now, I’d be se­ri­ously con­sid­er­ing switch­ing my GPU over to some kind of AMD thing just to make my life slightly, slightly eas­ier.

I’ve had it with Windows and as­cended to the sun­lit up­lands of Linux, where the trees heave with open-source fruits and men with large beards grep things with their minds.

I’m not alone. In last mon­th’s Steam hard­ware sur­vey, the num­ber of Linux users hit a new all-time high for the sec­ond month run­ning, reach­ing the heady sum­mit of a whop­ping, ah, 3.2% of over­all Steam users. Hey, we’re beat­ing Mac play­ers.

I think that num­ber will only grow as the new year goes by. More and more of us are get­ting sick of Windows, sure—the AI guff, the con­stant up­selling on Office subs, the mid­dle taskbar*—but also, all my ex­pe­ri­ence goof­ing about with Linux this year has dis­pelled a lot of the, frankly, er­ro­neous ideas I had about it. It’s re­ally not hard! Really! I know Linux guys have been say­ing this for three decades, but it’s true now!

As I’ve al­ready writ­ten about, the bulk of my Linux-futzing time this year has been spent in Bazzite, a dis­tro tai­lor-made for gam­ing and also tai­lor-made to stop id­iots (me) from do­ing some­thing likely to det­o­nate their boot drive.

I grew up thinking of Linux as ‘the command-line OS that lets you delete your bootloader’ and, well, I suppose that's not untrue, but I've been consistently impressed at how simple Bazzite has been to run on my PC, even with my persnickety Nvidia GPU.

Everything I’ve played this year has been as easy—if not eas­ier—to run on a free OS put to­gether by a gag­gle of pas­sion­ate nerds as it is on Windows, the OS made by one of the most valu­able cor­po­ra­tions on planet Earth. I’ve never had to dip into the com­mand line (which is, to be frank, a shame, as the com­mand line is ob­jec­tively cool).

But to be hon­est, it’s not as if the Bazzite team has mirac­u­lously made Linux pleas­ant to use af­ter decades of it seem­ing dif­fi­cult and es­o­teric to normie com­puter users. I think main­stream Linux dis­tros are just, well, sort of good now. Apart from my gam­ing PC, I also have an old lap­top con­verted into a me­dia server that lives un­der­neath my tele­vi­sion. It runs Debian 13 (which I up­dated to from Debian 12 ear­lier in the year) and re­quires es­sen­tially zero in­put from me at all.

What’s more, the only soft­ware I have on there is soft­ware I ac­tu­ally want on there. Oh for a ver­sion of Windows that let me do some­thing as zany as, I don’t know, unin­stall Edge.

That's the true nub of it, I think. The stats can say what they like (and they do! We've all heard tales of Windows games actually running better on Linux via Valve's Proton compatibility layer), but the heart of my fatigue with Windows is that, for every new worthless AI gadget Microsoft crams into it and for every time the OS inexplicably boots to a white screen and implores me to “finish setting up” my PC with an Office 365 subscription, the real problem is a feeling that my computer isn't mine, that I am somehow renting this thing I put together with my own two hands from an AI corporation in Redmond.

That’s fine for con­soles. Indeed, part of the whole pitch of an Xbox or PlayStation is the no­tion that you are hand­ing off a lot of re­spon­si­bil­ity for your de­vice to Sony and Microsoft’s teams of techs, but my PC? That I built? Get your grubby mitts off it.

Are there issues? Sure. HDR's still a crapshoot (plus ça change) and, as you've no doubt heard, a lot of live-service games have anticheat software that won't play nicely with Linux. But I think both of these issues are gradually ticking toward their solutions, particularly with Valve making its own push into the living room.

So I say make 2026 the year you give Linux a try, if you haven’t al­ready. At the very least, you can stick it on a sep­a­rate boot drive and have a noo­dle about with it. I sus­pect you’ll find the open (source) wa­ter is a lot more hos­pitable than you might think.

*I’m ac­tu­ally fine with the mid­dle taskbar. I’m sorry.

...

Read the original on www.pcgamer.com »

3 1,079 shares, 41 trendiness

I Was Kidnapped by Deutsche Bahn and All I Got Was 1.50 EUR

If you live in Germany, you have been treated like live­stock by Deutsche Bahn (DB). Almost all of my friends have a story: they trav­eled with DB, got thrown out in the mid­dle of the night in some cow vil­lage, and had to wait hours for the next train.

I have some­thing bet­ter. I was kid­napped.

I am tak­ing the RE5 (ID 28521) to my grand­moth­er’s house in Meckenheim. Scheduled de­par­ture: 15:32. Scheduled ar­rival in Bonn: 15:54. From there, the S23 to Meckenheim. A jour­ney of 35 kilo­me­ters, or, in DB units, some­where be­tween forty-five min­utes and the heat death of the uni­verse.

I wanted to ar­rive early to spend more time with her. My fa­ther, who lives near Troisdorf, was sup­posed to join us later.

I board the train. It is twenty minutes late. I consider this early. At least the train showed up. In DB's official statistics, a train counts as “on time” if it's less than six minutes late. Cancelled trains are not counted at all. If a train doesn't exist, it cannot be late.

The train starts moving. The driver announces there are “issues around Bonn.” He does not specify what kind. No one asks. We have learned not to ask. He suggests we exit at Cologne South and take the subway, or continue to Troisdorf and catch a bus from there.

I de­cide to con­tinue to Troisdorf. My fa­ther can just pick me up there and we drive to­gether. The plan adapts.

The driver announces the full detour: from Cologne South to Troisdorf to Neuwied to Koblenz. The entire left bank of the Rhine is unavailable. Only then do I notice: the driver has been speaking German only. If you were a tourist who got on in Cologne to visit Brühl, thirteen minutes away, you were about to have a very confusing Christmas in Troisdorf.

A woman near me is holding chocolates and flowers. She is on the phone with her mother. “Sorry Mama, I'll be late.” Pause. “Deutsche Bahn.” Pause. Her mother understood.

Twenty min­utes later. We are ap­proach­ing Troisdorf. I stand up. I gather my things. My fa­ther texts me: he is at the sta­tion, wait­ing.

The driver comes back on: “Hello everyone. Apparently we were not registered at Troisdorf station, so we are on the wrong tracks. We cannot stop.”

He says this the way someone might say “the coffee machine is broken.”

I watch Troisdorf slide past the win­dow. Somewhere in the park­ing lot out­side the sta­tion, my fa­ther is sit­ting in his car, watch­ing his son pass by as live­stock.

I was try­ing to travel 35 kilo­me­ters. I was now 63 kilo­me­ters from my grand­moth­er’s house. Further away than when I started.

There are fifteen stations between Troisdorf and Neuwied. We pass all of them.

At some point you stop be­ing a pas­sen­ger and start be­ing cargo. A cow trans­porter. Mooohhhhh. A cow trans­porter go­ing to a cow vil­lage. (Germany has a word for this: Kuhdorf. The cows are metaphor­i­cal. Usually.) I reached this point around Oberkassel.

DB once op­er­ated a bus to Llucalcari, a Mallorcan vil­lage of sev­en­teen peo­ple. I wanted to take it home.

An English speaker near the doors is getting agitated. “What is happening? Why didn't we stop?”

“We are not registered for this track.”

“But where will we stop?”

“Fifty-five minutes.” He said it again, quieter. “I am being kidnapped.”

My seatmate, who had not looked up from his book in forty minutes, turned a page. “Deutsche Bahn.”

I had been kid­napped at a loss.

...

Read the original on www.theocharis.dev »

4 993 shares, 38 trendiness

Google is dead. Where do we go now?

It’s anec­do­tal, I know, but my main en­ter­tain­ment busi­ness rev­enue is down 50% over the past 3 months. Our main paid source of leads was Google Ads, which have served us well over the past 10 years or so — I think I know what I am do­ing in ad­words by now.

Once per month I check the an­a­lyt­ics, up­dat­ing key­words and tweak­ing ad cam­paigns. Over the past year we in­creased our bud­get, and then I started look­ing at it once per week, run­ning si­mul­ta­ne­ous cam­paigns with dif­fer­ent set­tings, just try­ing to get SOMETHING.

Last month Google gave us a bonus — free money! This was 5x our monthly ad spend, to spend just when we needed it most — over the December hol­i­days. I added an­other new cam­paign, up­dated the bud­gets for the ex­ist­ing ones. Still no change. The last week there was money to burn, left over from un­used ad spend. I in­creased our bud­get to 10x. ZERO RETURN.

The money ran out. I am not putting more in. Where do we go from here?

Research shows that many young peo­ple are get­ting their in­for­ma­tion from short video plat­forms like TikTok and Instagram. We are try­ing ads on there.

Our cus­tomer base is com­prised of 50% re­turn­ing cus­tomers (I am proud of that sta­tis­tic!). We have an email newslet­ter, we started send­ing them reg­u­larly over the past 2 months. Remember us?

We also plan to do some ac­tual phys­i­cal ad­ver­tis­ing — I am go­ing to a mar­ket next week­end, do­ing a free show or two, hand­ing out cards.

Also, we are branch­ing out — I have some pro­jects I want to make, re­lated to the Magic Poi pro­ject, and hope­fully sell. We or­dered sup­plies last week.

Right now, though — I'm broke. Anyone need a website or IoT project built? I am AI-assisted, very fast!

...

Read the original on www.circusscientist.com »

5 925 shares, 36 trendiness

POSSE

...

Read the original on indieweb.org »

6 844 shares, 35 trendiness

The year in LLMs

This is the third in my an­nual se­ries re­view­ing every­thing that hap­pened in the LLM space over the past 12 months. For pre­vi­ous years see Stuff we fig­ured out about AI in 2023 and Things we learned about LLMs in 2024.

It’s been a year filled with a lot of dif­fer­ent trends.

OpenAI kicked off the “reasoning” aka inference-scaling aka Reinforcement Learning from Verifiable Rewards (RLVR) revolution in September 2024 with o1 and o1-mini. They doubled down on that with o3, o3-mini and o4-mini in the opening months of 2025, and reasoning has since become a signature feature of models from nearly every other major AI lab.

My favourite ex­pla­na­tion of the sig­nif­i­cance of this trick comes from Andrej Karpathy:

By train­ing LLMs against au­to­mat­i­cally ver­i­fi­able re­wards across a num­ber of en­vi­ron­ments (e.g. think math/​code puz­zles), the LLMs spon­ta­neously de­velop strate­gies that look like reasoning” to hu­mans—they learn to break down prob­lem solv­ing into in­ter­me­di­ate cal­cu­la­tions and they learn a num­ber of prob­lem solv­ing strate­gies for go­ing back and forth to fig­ure things out (see DeepSeek R1 pa­per for ex­am­ples). […]

Running RLVR turned out to of­fer high ca­pa­bil­ity/$, which gob­bled up the com­pute that was orig­i­nally in­tended for pre­train­ing. Therefore, most of the ca­pa­bil­ity progress of 2025 was de­fined by the LLM labs chew­ing through the over­hang of this new stage and over­all we saw ~similar sized LLMs but a lot longer RL runs.

Every no­table AI lab re­leased at least one rea­son­ing model in 2025. Some labs re­leased hy­brids that could be run in rea­son­ing or non-rea­son­ing modes. Many API mod­els now in­clude di­als for in­creas­ing or de­creas­ing the amount of rea­son­ing ap­plied to a given prompt.

It took me a while to un­der­stand what rea­son­ing was use­ful for. Initial demos showed it solv­ing math­e­mat­i­cal logic puz­zles and count­ing the Rs in straw­berry—two things I did­n’t find my­self need­ing in my day-to-day model us­age.

It turned out that the real un­lock of rea­son­ing was in dri­ving tools. Reasoning mod­els with ac­cess to tools can plan out multi-step tasks, ex­e­cute on them and con­tinue to rea­son about the re­sults such that they can up­date their plans to bet­ter achieve the de­sired goal.

A no­table re­sult is that AI as­sisted search ac­tu­ally works now. Hooking up search en­gines to LLMs had ques­tion­able re­sults be­fore, but now I find even my more com­plex re­search ques­tions can of­ten be an­swered by GPT-5 Thinking in ChatGPT.

Reasoning mod­els are also ex­cep­tional at pro­duc­ing and de­bug­ging code. The rea­son­ing trick means they can start with an er­ror and step through many dif­fer­ent lay­ers of the code­base to find the root cause. I’ve found even the gnarli­est of bugs can be di­ag­nosed by a good rea­soner with the abil­ity to read and ex­e­cute code against even large and com­plex code­bases.

Combine rea­son­ing with tool-use and you get…

I started the year making a prediction that agents were not going to happen. Throughout 2024 everyone was talking about agents, but there were few to no examples of them working, further confused by the fact that everyone using the term “agent” appeared to be working from a slightly different definition from everyone else.

By September I’d got fed up of avoid­ing the term my­self due to the lack of a clear de­f­i­n­i­tion and de­cided to treat them as an LLM that runs tools in a loop to achieve a goal. This un­blocked me for hav­ing pro­duc­tive con­ver­sa­tions about them, al­ways my goal for any piece of ter­mi­nol­ogy like that.
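That definition is compact enough to sketch. Here is a deliberately schematic loop, not any particular vendor's API; call_model and TOOLS are hypothetical placeholders:

```python
def run_agent(goal: str, max_steps: int = 20) -> str:
    """An LLM that runs tools in a loop to achieve a goal (schematic sketch, not a real API)."""
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_model(messages)                 # hypothetical: ask the LLM what to do next
        if reply.tool_call is None:                  # no tool requested means the model is done
            return reply.text
        tool = TOOLS[reply.tool_call.name]           # hypothetical registry of available tools
        result = tool(**reply.tool_call.arguments)
        messages.append({"role": "tool", "content": str(result)})  # feed the result back in
    return "Stopped after reaching the step limit."
```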

I did­n’t think agents would hap­pen be­cause I did­n’t think the gulli­bil­ity prob­lem could be solved, and I thought the idea of re­plac­ing hu­man staff mem­bers with LLMs was still laugh­able sci­ence fic­tion.

I was half right in my prediction: the science fiction version of a magic computer assistant that does anything you ask of it (Her) didn't materialize…

But if you de­fine agents as LLM sys­tems that can per­form use­ful work via tool calls over mul­ti­ple steps then agents are here and they are prov­ing to be ex­tra­or­di­nar­ily use­ful.

The two break­out cat­e­gories for agents have been for cod­ing and for search.

The Deep Research pattern—where you challenge an LLM to gather information and it churns away for 15+ minutes building you a detailed report—was popular in the first half of the year but has fallen out of fashion now that GPT-5 Thinking (and Google's “AI mode”, a significantly better product than their terrible “AI overviews”) can produce comparable results in a fraction of the time. I consider this to be an agent pattern, and one that works really well.

The coding agents” pat­tern is a much big­ger deal.

The most im­pact­ful event of 2025 hap­pened in February, with the quiet re­lease of Claude Code.

I say quiet be­cause it did­n’t even get its own blog post! Anthropic bun­dled the Claude Code re­lease in as the sec­ond item in their post an­nounc­ing Claude 3.7 Sonnet.

Claude Code is the most promi­nent ex­am­ple of what I call cod­ing agents—LLM sys­tems that can write code, ex­e­cute that code, in­spect the re­sults and then it­er­ate fur­ther.

The major labs all put out their own CLI coding agents in 2025.

Vendor-independent op­tions in­clude GitHub Copilot CLI, Amp, OpenCode, OpenHands CLI, and Pi. IDEs such as Zed, VS Code and Cursor in­vested a lot of ef­fort in cod­ing agent in­te­gra­tion as well.

My first ex­po­sure to the cod­ing agent pat­tern was OpenAI’s ChatGPT Code Interpreter in early 2023—a sys­tem baked into ChatGPT that al­lowed it to run Python code in a Kubernetes sand­box.

I was delighted this year when Anthropic finally released their equivalent in September, albeit under the baffling initial name of “Create and edit files with Claude”.

In October they re­pur­posed that con­tainer sand­box in­fra­struc­ture to launch Claude Code for web, which I’ve been us­ing on an al­most daily ba­sis ever since.

Claude Code for web is what I call an asynchronous coding agent—a system you can prompt and forget, and it will work away on the problem and file a Pull Request once it's done. OpenAI “Codex cloud” (renamed to “Codex web” in the last week) launched earlier, in May 2025. Gemini's entry in this category is called Jules, also launched in May.

I love the asyn­chro­nous cod­ing agent cat­e­gory. They’re a great an­swer to the se­cu­rity chal­lenges of run­ning ar­bi­trary code ex­e­cu­tion on a per­sonal lap­top and it’s re­ally fun be­ing able to fire off mul­ti­ple tasks at once—of­ten from my phone—and get de­cent re­sults a few min­utes later.

I wrote more about how I’m us­ing these in Code re­search pro­jects with async cod­ing agents like Claude Code and Codex and Embracing the par­al­lel cod­ing agent lifestyle.

In 2024 I spent a lot of time hack­ing on my LLM com­mand-line tool for ac­cess­ing LLMs from the ter­mi­nal, all the time think­ing that it was weird that so few peo­ple were tak­ing CLI ac­cess to mod­els se­ri­ously—they felt like such a nat­ural fit for Unix mech­a­nisms like pipes.

Maybe the ter­mi­nal was just too weird and niche to ever be­come a main­stream tool for ac­cess­ing LLMs?

Claude Code and friends have con­clu­sively demon­strated that de­vel­op­ers will em­brace LLMs on the com­mand line, given pow­er­ful enough mod­els and the right har­ness.

It helps that ter­mi­nal com­mands with ob­scure syn­tax like sed and ffm­peg and bash it­self are no longer a bar­rier to en­try when an LLM can spit out the right com­mand for you.

As of December 2nd, Anthropic credit Claude Code with $1bn in run-rate revenue! I did not expect a CLI tool to reach anything close to those numbers.

With hind­sight, maybe I should have pro­moted LLM from a side-pro­ject to a key fo­cus!

The de­fault set­ting for most cod­ing agents is to ask the user for con­fir­ma­tion for al­most every ac­tion they take. In a world where an agent mis­take could wipe your home folder or a ma­li­cious prompt in­jec­tion at­tack could steal your cre­den­tials this de­fault makes to­tal sense.

Anyone who's tried running their agent with automatic confirmation (aka YOLO mode—Codex CLI even aliases --dangerously-bypass-approvals-and-sandbox to --yolo) has experienced the trade-off: using an agent without the safety wheels feels like a completely different product.

A big ben­e­fit of asyn­chro­nous cod­ing agents like Claude Code for web and Codex Cloud is that they can run in YOLO mode by de­fault, since there’s no per­sonal com­puter to dam­age.

I run in YOLO mode all the time, de­spite be­ing deeply aware of the risks in­volved. It has­n’t burned me yet…

One of my favourite pieces on LLM se­cu­rity this year is The Normalization of Deviance in AI by se­cu­rity re­searcher Johann Rehberger.

Johann describes the “Normalization of Deviance” phenomenon, where repeated exposure to risky behaviour without negative consequences leads people and organizations to accept that risky behaviour as normal.

This was orig­i­nally de­scribed by so­ci­ol­o­gist Diane Vaughan as part of her work to un­der­stand the 1986 Space Shuttle Challenger dis­as­ter, caused by a faulty O-ring that en­gi­neers had known about for years. Plenty of suc­cess­ful launches led NASA cul­ture to stop tak­ing that risk se­ri­ously.

Johann ar­gues that the longer we get away with run­ning these sys­tems in fun­da­men­tally in­se­cure ways, the closer we are get­ting to a Challenger dis­as­ter of our own.

ChatGPT Plus’s orig­i­nal $20/month price turned out to be a snap de­ci­sion by Nick Turley based on a Google Form poll on Discord. That price point has stuck firmly ever since.

This year a new pric­ing prece­dent has emerged: the Claude Pro Max 20x plan, at $200/month.

OpenAI have a sim­i­lar $200 plan called ChatGPT Pro. Gemini have Google AI Ultra at $249/month with a $124.99/month 3-month start­ing dis­count.

These plans ap­pear to be dri­ving some se­ri­ous rev­enue, though none of the labs have shared fig­ures that break down their sub­scribers by tier.

I’ve per­son­ally paid $100/month for Claude in the past and will up­grade to the $200/month plan once my cur­rent batch of free al­lowance (from pre­view­ing one of their mod­els—thanks, Anthropic) runs out. I’ve heard from plenty of other peo­ple who are happy to pay these prices too.

You have to use mod­els a lot in or­der to spend $200 of API cred­its, so you would think it would make eco­nomic sense for most peo­ple to pay by the to­ken in­stead. It turns out tools like Claude Code and Codex CLI can burn through enor­mous amounts of to­kens once you start set­ting them more chal­leng­ing tasks, to the point that $200/month of­fers a sub­stan­tial dis­count.

2024 saw some early signs of life from the Chinese AI labs mainly in the form of Qwen 2.5 and early DeepSeek. They were neat mod­els but did­n’t feel world-beat­ing.

This changed dra­mat­i­cally in 2025. My ai-in-china tag has 67 posts from 2025 alone, and I missed a bunch of key re­leases to­wards the end of the year (GLM-4.7 and MiniMax-M2.1 in par­tic­u­lar.)

GLM-4.7, Kimi K2 Thinking, MiMo-V2-Flash, DeepSeek V3.2, MiniMax-M2.1 are all Chinese open weight mod­els. The high­est non-Chi­nese model in that chart is OpenAI’s gpt-oss-120B (high), which comes in sixth place.

The Chinese model rev­o­lu­tion re­ally kicked off on Christmas day 2024 with the re­lease of DeepSeek 3, sup­pos­edly trained for around $5.5m. DeepSeek fol­lowed that on 20th January with DeepSeek R1 which promptly trig­gered a ma­jor AI/semiconductor sell­off: NVIDIA lost ~$593bn in mar­ket cap as in­vestors pan­icked that AI maybe was­n’t an American mo­nop­oly af­ter all.

The panic did­n’t last—NVIDIA quickly re­cov­ered and to­day are up sig­nif­i­cantly from their pre-DeepSeek R1 lev­els. It was still a re­mark­able mo­ment. Who knew an open weight model re­lease could have that kind of im­pact?

DeepSeek were quickly joined by an im­pres­sive ros­ter of Chinese AI labs. I’ve been pay­ing at­ten­tion to these ones in par­tic­u­lar:

Most of these mod­els aren’t just open weight, they are fully open source un­der OSI-approved li­censes: Qwen use Apache 2.0 for most of their mod­els, DeepSeek and Z.ai use MIT.

Some of them are com­pet­i­tive with Claude 4 Sonnet and GPT-5!

Sadly none of the Chinese labs have re­leased their full train­ing data or the code they used to train their mod­els, but they have been putting out de­tailed re­search pa­pers that have helped push for­ward the state of the art, es­pe­cially when it comes to ef­fi­cient train­ing and in­fer­ence.

One of the most interesting recent charts about LLMs is “Time-horizon of software engineering tasks different LLMs can complete 50% of the time” from METR:

The chart shows tasks that take hu­mans up to 5 hours, and plots the evo­lu­tion of mod­els that can achieve the same goals work­ing in­de­pen­dently. As you can see, 2025 saw some enor­mous leaps for­ward here with GPT-5, GPT-5.1 Codex Max and Claude Opus 4.5 able to per­form tasks that take hu­mans mul­ti­ple hours—2024’s best mod­els tapped out at un­der 30 min­utes.

METR conclude that “the length of tasks AI can do is doubling every 7 months”. I'm not convinced that pattern will continue to hold, but it's an eye-catching way of illustrating current trends in agent capabilities.

The most suc­cess­ful con­sumer prod­uct launch of all time hap­pened in March, and the prod­uct did­n’t even have a name.

One of the signature features of GPT-4o in May 2024 was meant to be its multimodal output—the “o” stood for “omni”—and OpenAI's launch announcement included numerous “coming soon” features where the model output images in addition to text.

Then… noth­ing. The im­age out­put fea­ture failed to ma­te­ri­al­ize.

In March we fi­nally got to see what this could do—al­beit in a shape that felt more like the ex­ist­ing DALL-E. OpenAI made this new im­age gen­er­a­tion avail­able in ChatGPT with the key fea­ture that you could up­load your own im­ages and use prompts to tell it how to mod­ify them.

This new fea­ture was re­spon­si­ble for 100 mil­lion ChatGPT signups in a week. At peak they saw 1 mil­lion ac­count cre­ations in a sin­gle hour!

Tricks like “ghiblification”—modifying a photo to look like a frame from a Studio Ghibli movie—went viral time and time again.

OpenAI released an API version of the model called “gpt-image-1”, later joined by a cheaper gpt-image-1-mini in October and a much improved gpt-image-1.5 on December 16th.

The most no­table open weight com­peti­tor to this came from Qwen with their Qwen-Image gen­er­a­tion model on August 4th fol­lowed by Qwen-Image-Edit on August 19th. This one can run on (well equipped) con­sumer hard­ware! They fol­lowed with Qwen-Image-Edit-2511 in November and Qwen-Image-2512 on 30th December, nei­ther of which I’ve tried yet.

The even big­ger news in im­age gen­er­a­tion came from Google with their Nano Banana mod­els, avail­able via Gemini.

Google previewed an early version of this in March under the name “Gemini 2.0 Flash native image generation”. The really good one landed on August 26th, when they started cautiously embracing the codename “Nano Banana” in public (the API model was called “Gemini 2.5 Flash Image”).

Nano Banana caught peo­ple’s at­ten­tion be­cause it could gen­er­ate use­ful text! It was also clearly the best model at fol­low­ing im­age edit­ing in­struc­tions.

In November Google fully embraced the “Nano Banana” name with the release of Nano Banana Pro. This one doesn't just generate text, it can output genuinely useful detailed infographics and other text- and information-heavy images. It's now a professional-grade tool.

Max Woolf pub­lished the most com­pre­hen­sive guide to Nano Banana prompt­ing, and fol­lowed that up with an es­sen­tial guide to Nano Banana Pro in December.

I’ve mainly been us­ing it to add kākāpō par­rots to my pho­tos.

Given how incredibly popular these image tools are, it's a little surprising that Anthropic haven't released or integrated anything similar into Claude. I see this as further evidence that they're focused on AI tools for professional work, but Nano Banana Pro is rapidly proving itself to be of value to anyone whose work involves creating presentations or other visual materials.

In July rea­son­ing mod­els from both OpenAI and Google Gemini achieved gold medal per­for­mance in the International Math Olympiad, a pres­ti­gious math­e­mat­i­cal com­pe­ti­tion held an­nu­ally (bar 1980) since 1959.

This was no­table be­cause the IMO poses chal­lenges that are de­signed specif­i­cally for that com­pe­ti­tion. There’s no chance any of these were al­ready in the train­ing data!

It’s also no­table be­cause nei­ther of the mod­els had ac­cess to tools—their so­lu­tions were gen­er­ated purely from their in­ter­nal knowl­edge and to­ken-based rea­son­ing ca­pa­bil­i­ties.

Turns out suf­fi­ciently ad­vanced LLMs can do math af­ter all!

In September OpenAI and Gemini pulled off a sim­i­lar feat for the International Collegiate Programming Contest (ICPC)—again no­table for hav­ing novel, pre­vi­ously un­pub­lished prob­lems. This time the mod­els had ac­cess to a code ex­e­cu­tion en­vi­ron­ment but oth­er­wise no in­ter­net ac­cess.

I don’t be­lieve the ex­act mod­els used for these com­pe­ti­tions have been re­leased pub­licly, but Gemini’s Deep Think and OpenAI’s GPT-5 Pro should pro­vide close ap­prox­i­ma­tions.

With hind­sight, 2024 was the year of Llama. Meta’s Llama mod­els were by far the most pop­u­lar open weight mod­els—the orig­i­nal Llama kicked off the open weight rev­o­lu­tion back in 2023 and the Llama 3 se­ries, in par­tic­u­lar the 3.1 and 3.2 dot-re­leases, were huge leaps for­ward in open weight ca­pa­bil­ity.

Llama 4 had high ex­pec­ta­tions, and when it landed in April it was… kind of dis­ap­point­ing.

There was a mi­nor scan­dal where the model tested on LMArena turned out not to be the model that was re­leased, but my main com­plaint was that the mod­els were too big. The neat­est thing about pre­vi­ous Llama re­leases was that they of­ten in­cluded sizes you could run on a lap­top. The Llama 4 Scout and Maverick mod­els were 109B and 400B, so big that even quan­ti­za­tion would­n’t get them run­ning on my 64GB Mac.

They were trained us­ing the 2T Llama 4 Behemoth which seems to have been for­got­ten now—it cer­tainly was­n’t re­leased.

It says a lot that none of the most pop­u­lar mod­els listed by LM Studio are from Meta, and the most pop­u­lar on Ollama is still Llama 3.1, which is low on the charts there too.

Meta’s AI news this year mainly in­volved in­ter­nal pol­i­tics and vast amounts of money spent hir­ing tal­ent for their new Superintelligence Labs. It’s not clear if there are any fu­ture Llama re­leases in the pipeline or if they’ve moved away from open weight model re­leases to fo­cus on other things.

Last year OpenAI re­mained the undis­puted leader in LLMs, es­pe­cially given o1 and the pre­view of their o3 rea­son­ing mod­els.

This year the rest of the in­dus­try caught up.

OpenAI still have top tier mod­els, but they’re be­ing chal­lenged across the board.

In im­age mod­els they’re still be­ing beaten by Nano Banana Pro. For code a lot of de­vel­op­ers rate Opus 4.5 very slightly ahead of GPT-5.2 Codex. In open weight mod­els their gpt-oss mod­els, while great, are falling be­hind the Chinese AI labs. Their lead in au­dio is un­der threat from the Gemini Live API.

Where OpenAI are win­ning is in con­sumer mind­share. Nobody knows what an LLM is but al­most every­one has heard of ChatGPT. Their con­sumer apps still dwarf Gemini and Claude in terms of user num­bers.

Their biggest risk here is Gemini. In December OpenAI de­clared a Code Red in re­sponse to Gemini 3, de­lay­ing work on new ini­tia­tives to fo­cus on the com­pe­ti­tion with their key prod­ucts.

Google posted their own victorious 2025 recap here. 2025 saw Gemini 2.0, Gemini 2.5 and then Gemini 3.0—each model family supporting audio/video/image/text input of 1,000,000+ tokens, priced competitively and proving more capable than the last.

They also shipped Gemini CLI (their open source com­mand-line cod­ing agent, since forked by Qwen for Qwen Code), Jules (their asyn­chro­nous cod­ing agent), con­stant im­prove­ments to AI Studio, the Nano Banana im­age mod­els, Veo 3 for video gen­er­a­tion, the promis­ing Gemma 3 fam­ily of open weight mod­els and a stream of smaller fea­tures.

Google's biggest advantage lies under the hood. Almost every other AI lab trains with NVIDIA GPUs, which are sold at a margin that props up NVIDIA's multi-trillion dollar valuation.

Google use their own in-house hard­ware, TPUs, which they’ve demon­strated this year work ex­cep­tion­ally well for both train­ing and in­fer­ence of their mod­els.

...

Read the original on simonwillison.net »

7 796 shares, 25 trendiness

GOG is getting acquired by its original co-founder: What it means for you

Today, Michał Kiciński, one of the co-founders of CD PROJEKT, and the co-founder of GOG, has ac­quired GOG from CD PROJEKT.

We be­lieve the games that shaped us de­serve to stay alive: easy to find, buy, down­load, and play for­ever. But time is an­noy­ingly good at eras­ing them. Rights get tan­gled, com­pat­i­bil­ity breaks, builds dis­ap­pear, and a nos­tal­gic evening of­ten turns into a trou­bleshoot­ing ses­sion. That’s the dif­fer­ence be­tween I’m play­ing to­day” (the game lives on) and I’ll play some­day” (the game dies).

As Michał put it: “GOG stands for freedom, independence, and genuine control.”

GOG has al­ways been built on strong val­ues and clear prin­ci­ples. When Marcin Iwiński and Michał Kiciński first came up with the idea for GOG in 2007, the vi­sion was sim­ple: bring clas­sic games back to play­ers, and make sure that once you buy a game, it truly be­longs to you, for­ever. In a mar­ket in­creas­ingly de­fined by manda­tory clients and closed ecosys­tems, that phi­los­o­phy feels more rel­e­vant than ever.

This new chap­ter is about dou­bling down on that vi­sion. We want to do more to pre­serve the clas­sics of the past, cel­e­brate stand­out games of to­day, and help shape the clas­sics of to­mor­row, in­clud­ing new games with real retro spirit.

First of all, DRM-free is more cen­tral to GOG than ever. Your li­brary stays yours to en­joy: same ac­cess, same of­fline in­stallers, same sense of own­er­ship. Your data stays with GOG, and GOG GALAXY re­mains op­tional.

We’ll keep our re­la­tion­ship with CD PROJEKT. CD PROJEKT RED games will con­tinue to be avail­able on GOG, and up­com­ing ti­tles from the stu­dio will also be re­leased on the plat­form.

If you’re a GOG Patron, or you do­nate to sup­port the Preservation Program, those funds stay within GOG. Your sup­port has been huge this year, and we think that with your help, we can un­der­take even more am­bi­tious res­cue mis­sions in 2026 and 2027. We’ll have more to say about that some­time in 2026.

GOG will re­main  in­de­pen­dent in its op­er­a­tions. We will con­tinue build­ing a plat­form that’s eth­i­cal, non-preda­tory, and made to last, while help­ing in­die de­vel­op­ers reach the world. We’re also com­mit­ted to giv­ing the com­mu­nity a stronger voice, with new ini­tia­tives planned for 2026.

Thanks for be­ing the rea­son this all mat­ters.

A lot of com­pa­nies sell games. Fewer do the unglam­orous work of mak­ing sure the games that shaped peo­ple’s lives don’t qui­etly rot into in­com­pat­i­bil­ity.

Thanks for car­ing about this mis­sion with us. We’ll keep you posted as we ship, and in the mean­time, you can dig into the full FAQ for the de­tailed an­swers.

...

Read the original on www.gog.com »

8 775 shares, 27 trendiness

2026 will be my year of the Linux desktop

TL;DR: 2026 is go­ing to be The Year of The Linux Desktop for me. I haven’t booted into Windows in over 3 months on my tower and I’m start­ing to re­al­ize that it’s not worth wast­ing the space for. I plan to unify my three SSDs and turn them all into btrfs dri­ves on Fedora.

I’ve been merely tol­er­at­ing Windows 11 for a while but re­cently it’s got­ten to the point where it’s just ab­solutely in­tol­er­a­ble. Somehow Linux on the desk­top has got­ten so much bet­ter by not even do­ing any­thing dif­fer­ently. Microsoft has man­aged to ac­tively sab­o­tage the desk­top ex­pe­ri­ence through years of ac­tive dis­re­gard and spite against their users. They’ve man­aged to take some of their most rev­o­lu­tion­ary tech­no­log­i­cal in­no­va­tions (the NT ker­nel’s hy­brid de­sign al­low­ing it to restart dri­vers, NTFS, ReFS, WSL, Hyper-V, etc.) then just shat all over them with start menus made with React Native, con­trol-alt-delete menus that are ac­tu­ally just we­b­views, and forc­ing Copilot down every­one’s throats to the point that I’ve ac­ci­den­tally got­ten stuck in Copilot in a hand­held gam­ing PC and had to hard re­boot the de­vice to get out of it. It’s as if the in­ter­nal teams at Microsoft have had decades of lead time in shoot­ing each other in the head with pre­dictable re­sults.

To be hon­est, I’ve had enough. I’m go­ing to go with Fedora on my tower and Bazzite (or SteamOS) on my hand­helds.

I think that Linux on the desk­top is ready for the masses now, not be­cause it’s ad­vanced in a huge leap/​bound. It’s ready for the masses to use be­cause Windows has got­ten so much ac­tively worse that con­tin­u­ing to use it is an ac­tive detri­ment to user ex­pe­ri­ence and sta­bil­ity. Not to men­tion with the price of ram lately, you need every gi­ga­byte you can get and desk­top Linux lets you waste less of it on su­per­flu­ous bull­shit that very few peo­ple ac­tu­ally want.

At the very least, when some­thing goes wrong on Linux you have log mes­sages that can let you know what went wrong so you can search for it.

Facts and cir­cum­stances may have changed since pub­li­ca­tion. Please con­tact me be­fore jump­ing to con­clu­sions if some­thing seems wrong or un­clear.

...

Read the original on xeiaso.net »

9 768 shares, 32 trendiness

Life in a Secret Chinese Nuclear City That Was Never on the Map


...

Read the original on substack.com »

10 727 shares, 29 trendiness

A website to destroy all websites.

Well, the Internet mostly feels bad these days. We were given this vast, holy realm of self-discovery and joy and philosophy and community; a thousand thousand acres of digital landscape, on which to grow our forests and grasslands of imagination, plant our gardens of learning, explore the caves of our making. We were given the chance to know anything about anything, to be our own Prometheus, to make wishes and to grant them.

But that's not what we use the Internet for anymore. These days, instead of using it to make ourselves, most of us are using it to waste ourselves: we're doom-scrolling brain-rot on the attention-farm, we're getting slop from the feed.

Instead of turning freely in the HTTP meadows we grow for each other, we go to work: we break our backs at the foundry of algorithmic content as this earnest, naïve, human endeavoring to connect our lives with others is corrupted. Our powerful drive to learn about ourselves, each other, and our world, is broken into scant remnants — hollow, clutching phantasms of Content Creation, speed-cut vertical video, listicle thought-leadership, ragebait and the thread emoji.

It used to feel way better to Go Online, and some of us will remember.

We used to be able to learn about our hobbies and interests from hundreds of experts on a wealth of websites whose only shared motivation was their passion. Some of those venerable old educational blogs, forums, and wikis still stand, though most have been bulldozed.

Now, Learning On The Internet often means fighting ads and endless assaults on one's attention — it means watching part-1-part-2-part-3 short-form video clips, taped together by action movie psychology hacks, narrated gracelessly by TTS AI voices. We're down from a thousand and one websites to three, and each of those remaining monolith websites is just a soullessly-regurgitated, compression-down-scaled, AI-up-scaled version of the next.

We used to make lasting friendships with folks all over the world on shared interest and good humor. But now those social networks, once hand-built and hand-tended, vibrant and organic, are unceremoniously swallowed by social media networks, pens built for trapping us and our little piggy attentions, turning us all into clout-chasers & content-creators, and removing us from what meaningful intimacy & community felt like.

Even coding for the web used to be different: One could Learn To Code™ to express oneself creatively, imbue one's online presence with passion and meaning, and for some of us, build a real career. These days, however, we write increasing amounts of complicated, insecure code to express less and less meaning, in order to infinitely generate shareholder value. We don't think about the art of our craft and the discipline of its application, we think about throughput and scale.

you are not im­mune to nos­tal­gia.

To be very clear: I’m not try­ing to Good Old Days the in­ter­net. None of this is meant to make you feel nos­tal­gic — the Internet used to be slow and less pop­u­lated and less di­verse, and its ac­cess was lim­ited to those of a cer­tain class. The Web For All is a marked im­prove­ment, wide­spread global in­ter­net ac­cess is a marked im­prove­ment, and what I’m ask­ing you to con­sider is what it used to feel like to use these tools, and what we’ve lost in the Big Tech, Web 2.0 and web3 de­vour­ing of the Net.

The onset of the automobile was a revelation for access and personal liberty. With the advent of cars, members of society could travel farther, get more done in their day, and bend their limited time more to their creative will!

But as time wore on and the industrialization & proliferation of the automobile progressed, its marginal utility diminished — the industry started to offer society fewer & fewer benefits, and take more & more in exchange. In American cities, for example: though at first the automobile enabled humans to travel further distances, it now demanded that humans travel those distances, and demanded infrastructure be created & maintained to enable it. Many now must use an automobile to get everything done in their town in a day, and must pay & take time for that automobile's fueling & maintenance.

Further than that, the automobile asks all of us to chip in tax revenue to protect its infrastructure, but only certain classes can afford an automobile with which to use that infrastructure, and those classes who can't afford to do so are relegated to underfunded public transit systems.

No longer a tool to serve our societies, our societies now serve the automobile.

no longer a tool to serve our so­ci­eties,

our so­ci­eties now serve the au­to­mo­bile.

Let's revisit the markers of a decaying Net I mentioned before, with convivial tooling in mind:

Monolithic plat­forms like YouTube, TikTok, Medium, and Substack draw a ton of cre­ators and ed­u­ca­tors be­cause of the promise of mon­e­ti­za­tion and large au­di­ences, but they’ve shown time and time again how the lack of own­er­ship cre­ates a prob­lem. When those plat­forms fail, when they change their rules, when they de­mand cre­ators move or cre­ate a par­tic­u­lar way to main­tain their ac­cess to those au­di­ences, they pit cre­ators or their au­di­ences against the loss of the other. Without ad­her­ing to the al­go­rith­m’s re­quire­ments, writ­ers may not write an im­pact­ful doc­u­ment, and with­out by­pass­ing a pay­wall, read­ers can’t read it.

When those promises of ex­or­bi­tant wealth and a life of deca­dence through per-click mon­e­ti­za­tion ul­ti­mately dry up (or come with a steep moral or cre­ative cost), cre­ators and learn­ers must look for new so­lu­tions for how ed­u­ca­tional con­tent is shared on the Internet. The most self-ev­i­dent, con­vivial an­swer is an old one: blogs. HTML is free to ac­cess by de­fault, RSS has worked for about 130 years[ci­ta­tion needed], and com­bined with web­men­tions, it’s never been eas­ier to read new ideas, ex­per­i­ment with ideas, and build upon & grow those ideas with other strong thinkers on the web, own­ing that con­tent all along.

Connecting with friends on the Web

Social media apps have imprisoned us all in this weird content prison — in order to connect with friends we're sort of forced to create or be vanished by capricious black box algorithms, and all that we do create is, as we've already alluded to, subsequently owned by whatever platform we've created it on. If Instagram goes away overnight, or decides to pivot catastrophically, your stories and your network of friends goes with it.

With the advent and development of tools & methodologies like POSSE (Publish On your Own Site, Syndicate Elsewhere), ActivityPub, microformats, and ATProto, it's becoming quite achievable to generate your own social network, interoperable with other networks like Bluesky or Mastodon. That network, designed for ownership and decentralization, is durable, designed around storytelling instead of engagement, and free of the whims of weird tech billionaires.

With some ba­sic HTML knowl­edge and get­ting-stuff-on­line knowl­edge, a hand­ful of scrappy pro­to­cols, and a free af­ter­noon or two, one can build their own home to post bangers for the tight homies, make friends, and snipe those new friends with those hits of dopamine they so fiendishly rely on.

Lastly, con­sider the dis­ci­pline of web en­gi­neer­ing:

We have been asked to build the same B2B SaaS web­site with the same fea­ture­set n^∞ times, and our an­swers for the op­ti­mal way to do that are in­creas­ingly lim­ited. We’ve penned all of our markup into JavaScript tem­plates just in case a prod­uct man­ager needs the wrap­per com­po­nent to post JSON some­where down the line, and we’ve whit­tled away at style code un­til it’s just a mech­a­nism for de­ploy­ing one of two bor­der-ra­dius-drop-shadow com­bos to divs. It’s an in­dus­trial, pro­duc­tion-minded way of ap­proach­ing a dis­ci­pline that has all the hall­marks of be­ing a great craft, and that’s un­der­stand­ably unin­spir­ing to many of us.

Yet our young React shep­herds have no need to fear: there are count­less more col­ors than blur­ple out there, and count­less more fonts than Inter. HTML and CSS are bet­ter and more gen­er­a­tive tech­nolo­gies than they’ve ever been: Thanks to the tire­less work of the CSS work­ing groups and browser im­ple­menters, etc, there is an un­be­liev­able amount of cre­ative ex­pres­sion pos­si­ble with ba­sic web tools in a text ed­i­tor. Even JavaScript is more pro­gres­sively-en­hance­able than ever, and en­ables in­ter­fac­ing with a rapidly-grow­ing num­ber of ex­cit­ing browser APIs (still fuck Brendan Eich though). ${new Date.getCurrentYear()} is a ver­i­ta­ble re­nais­sance of web code, and it asks of au­thors only cu­rios­ity and a drive to ex­per­i­ment.

Sunrise on the Matterhorn, (after 1875)

You're not crazy. The internet does feel genuinely so awful right now, and for about a thousand and one reasons. But the path back to feeling like you have some control is to un-spin yourself from the Five Apps of the Apocalypse and reclaim the Internet as a set of tools you use to build something you can own & be proud of — or in most of our cases, be deeply ashamed of. Godspeed and good luck.

That's all for me. If you find any issues with this post, please reach out to me by email. Thanks eternally for your time and patience, and thanks for reading. Find me here online at one of my personal websites like henry.codes or strange.website or stillness.digital or strangersbyspring.com, or sometimes on Bluesky and Mastodon.

As ever, unionize, free Palestine, trans rights are human rights, fix your heart or die.

...

Read the original on henry.codes »

