10 interesting stories served every morning and every evening.

Mercedes-Benz commits to bringing back physical buttons

www.drive.com.au


Another brand backflips and admits that touch-sensitive buttons for frequently used controls were a mistake, but only after a nudge from customers.


Mercedes-Benz joins the growing list of manufacturers listening to customers and admitting that touch-sensitive controls and burying functions in menus were mistakes.

The German brand remains committed to offering large screens in its models, but has listened to its customers and will offer physical buttons for key functions in future.

This sets it partly apart from Audi and Volkswagen, which have chosen to reduce the size of their infotainment screens to make room for the returning physical controls.

The upcoming GLC and C-Class will be offered with the 39.1-inch 'MBUX Hyperscreen' that covers almost the entire width of the dashboard, but with physical buttons in front of the dual wireless chargers, along with physical buttons and switches returning to the steering wheel.

Mercedes-Benz sales boss Mathias Geisen, speaking to Autocar, said the brand has changed course: "Customers told us two years ago, 'guys, nice idea, but it just doesn't work for us', so we changed that and made it more analogue."

Physical buttons, switches, and dials will continue to be incorporated into upcoming models, as the brand plans to blend its screens with the required physical controls.

He also explained: "I'm a big believer in screens, because I really believe if you want to connect, you have to make the magic work behind the screen.

"But in our future products, you will see more hard keys for specific functions that customers want direct access to.

"When we do car research clinics, customers are very clear: 'We love the big screens, but we want to have [hard controls for] specific functionalities.'"

The brand will also offer a customisable wallpaper element for the near metre-wide seamless touchscreen, a choice its sales boss admits was made because phones are such a huge part of people's lives and customers are used to that level of technology.

"If you want to connect to the customer, you've got to find a way to translate this digital experience from your phone to the customer."

The new-generation GLC SUV will showcase the brand's new MB.EA electric vehicle platform when it arrives in the fourth quarter of 2026 (October to December), and the platform will be shared with the upcoming C-Class, due early next year.


AI outperforms doctors in Harvard trial of emergency triage diagnoses

www.theguardian.com

From George Clooney in ER to Noah Wyle in The Pitt, emergency department doctors have long been popular heroes. But will it soon be time to hang up the scrubs?

A groundbreaking Harvard study has found that AI systems outperformed human doctors in high-pressure emergency medicine triage, diagnosing more accurately in the potentially life-and-death moments when people are first rushed to hospital.

The results were described by independent experts as showing "a genuine step forward" in the clinical reasoning of AIs and came as part of trials that tested the responses of hundreds of doctors against an AI.

The authors said the results, published in the journal Science, showed large language models (LLMs) "have eclipsed most benchmarks of clinical reasoning".

One experiment focused on 76 patients who arrived at the emergency room of a Boston hospital. An AI and a pair of human doctors were each given the same standard electronic health record to read, typically including vital-sign data, demographic information and a few sentences from a nurse about why the patient was there. The AI identified the exact or very close diagnosis in 67% of cases, beating the human doctors, who were right only 50-55% of the time.

It showed the AIs' advantage was particularly pronounced in triage circumstances requiring rapid decisions with minimal information. The diagnostic accuracy of the AI, OpenAI's o1 reasoning model, rose to 82% when more detail was available, compared with the 70-79% accuracy achieved by the expert humans, though this difference was not statistically significant.

It also outperformed a larger cohort of human doctors when asked to provide longer-term treatment plans, such as antibiotic regimens or end-of-life planning. The AI and 46 doctors were asked to examine five clinical case studies, and the computer made significantly better plans, scoring 89% compared with 34% for humans using conventional resources such as search engines.

But it is not curtains for emergency doctors yet, the researchers said. The study only tested humans against AIs looking at patient data that can be communicated via text. The AI's reading of signals such as the patient's level of distress and their visual appearance was not tested. That means the AI was performing more like a clinician producing a second opinion based on paperwork.

"I don't think our findings mean that AI replaces doctors," said Arjun Manrai, one of the lead authors of the study, who heads an AI lab at Harvard Medical School. "I think it does mean that we're witnessing a really profound change in technology that will reshape medicine."

Dr Adam Rodman, another lead author and a doctor at Boston's Beth Israel Deaconess medical centre, where the study took place, said AI LLMs were among "the most impactful technologies in decades". Over the next decade, he said, AI would not replace physicians but join them in "a new triadic care model ... the doctor, the patient, and an artificial intelligence system".

In one case in the Harvard study, a patient presented with a blood clot in the lungs and worsening symptoms. Human doctors thought the anticoagulants were failing, but the AI noticed something the humans did not: the patient's history of lupus meant this might be causing the inflammation of the lungs. The AI was proved correct.

Nearly one in five US physicians is already using AI to assist diagnosis, according to research published last month. In the UK, 16% of doctors are using the tech daily and a further 15% weekly, with "clinical decision-making" being one of the most common uses, according to a recent Royal College of Physicians survey.

The UK doctors' biggest concerns were AI error and liability risks. Billions are being invested in AI healthcare companies, but questions remain about the consequences of AI error.

"There is not a formal framework right now for accountability," said Rodman, who also stressed patients ultimately want humans "to guide them through life or death decisions [and] to guide them through challenging treatment decisions".

Prof Ewen Harrison, co-director of the University of Edinburgh's centre for medical informatics, said the study was important and showed that "these systems are no longer just passing medical exams or solving artificial test cases. They are starting to look like useful second-opinion tools for clinicians, particularly when it is important to consider a wider range of possible diagnoses and avoid missing something important."

Dr Wei Xing, an assistant professor at the University of Sheffield's school of mathematical and physical sciences, said some of the other findings suggested doctors may unconsciously defer to the AI's answer rather than thinking independently.

"This tendency could grow more significant as AI becomes more routinely used in clinical settings," he said. He also highlighted the lack of information about which patients the AI was worse at diagnosing, and whether it struggled more with elderly patients or non-English speakers.

He said: "It does not demonstrate that AI is safe for routine clinical use, nor that the public should turn to freely available AI tools as a substitute for medical advice."

GitHub - aattaran/deepclaude: Use Claude Code's autonomous agent loop with DeepSeek V4 Pro, OpenRouter, or any Anthropic-compatible backend. Same UX, 17x cheaper.

github.com

Use Claude Code's autonomous agent loop with DeepSeek V4 Pro, OpenRouter, or any Anthropic-compatible backend. Same UX, 17x cheaper.

What this does

Claude Code is the best autonomous coding agent, but it costs $200/month with usage caps. DeepSeek V4 Pro scores 96.4% on LiveCodeBench and costs $0.87/M output tokens.

deepclaude swaps the brain while keeping the body:

Your terminal
 +-- Claude Code CLI (tool loop, file editing, bash, git - unchanged)
 +-- API calls -> DeepSeek V4 Pro ($0.87/M) instead of Anthropic ($15/M)

Everything works: file reading, editing, bash execution, subagent spawning, autonomous multi-step coding loops. The only difference is which model thinks.

Quick start (2 minutes)

1. Get a DeepSeek API key

Sign up at platform.deepseek.com, add $5 credit, copy your API key.

2. Set environment variables

Windows (PowerShell):

setx DEEPSEEK_API_KEY "sk-your-key-here"

macOS/Linux:

echo 'export DEEPSEEK_API_KEY="sk-your-key-here"' >> ~/.bashrc
source ~/.bashrc

3. Install

Windows:

# Copy the script to a directory in your PATH
Copy-Item deepclaude.ps1 "$env:USERPROFILE\.local\bin\deepclaude.ps1"

# Or add the repo directory to PATH
setx PATH "$env:PATH;C:\path\to\deepclaude"

macOS/Linux:

chmod +x deepclaude.sh
sudo ln -s "$(pwd)/deepclaude.sh" /usr/local/bin/deepclaude

4. Use it

deepclaude                        # Launch Claude Code with DeepSeek V4 Pro
deepclaude --status               # Show available backends and keys
deepclaude --backend or           # Use OpenRouter (cheapest, $0.44/M input)
deepclaude --backend fw           # Use Fireworks AI (fastest, US servers)
deepclaude --backend anthropic    # Normal Claude Code (when you need Opus)
deepclaude --cost                 # Show pricing comparison
deepclaude --benchmark            # Latency test across all providers
deepclaude --switch ds            # Switch backend mid-session (no restart)

How it works

Claude Code reads these environment variables to determine where to send API calls:
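(The variable table did not survive extraction. As an illustration only, the overrides typically look like the following; the exact variable names, endpoint URL, and model name here are assumptions, so check the repo before relying on them.)

```shell
# Hypothetical sketch -- names, URL, and model are assumed, not from this README.
export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"  # send API calls here instead of Anthropic
export ANTHROPIC_AUTH_TOKEN="$DEEPSEEK_API_KEY"                 # authenticate against the chosen backend
export ANTHROPIC_MODEL="deepseek-chat"                          # model served by that backend
```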

deepclaude sets these per-session (not permanently), launches Claude Code, then restores your original settings on exit.

Supported backends

Setup per backend

DeepSeek (default - just needs DEEPSEEK_API_KEY):

setx DEEPSEEK_API_KEY "sk-…"       # Windows
export DEEPSEEK_API_KEY="sk-…"     # macOS/Linux

OpenRouter (optional):

setx OPENROUTER_API_KEY "sk-or-…"    # Windows
export OPENROUTER_API_KEY="sk-or-…"  # macOS/Linux

Fireworks AI (optional):

setx FIREWORKS_API_KEY "fw_…"      # Windows
export FIREWORKS_API_KEY="fw_…"    # macOS/Linux

Cost com­par­i­son

DeepSeek's automatic context caching makes agent loops extremely cheap - after the first request, the system prompt and file context are cached at $0.004/M (vs $0.44/M uncached).
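Taken at face value, those two rates make the savings easy to estimate. A rough sketch (the rates come from the paragraph above; the request and token counts are made-up examples, and the "only the first request is uncached" assumption is a simplification):

```python
def session_cost(requests: int, ctx_tokens: int,
                 cached_rate: float = 0.004, uncached_rate: float = 0.44) -> float:
    """Rough input-token cost in dollars for one agent session.

    Assumes the first request pays the uncached rate for the full context
    and every later request hits the cache. Rates are $/million tokens.
    """
    first = ctx_tokens / 1e6 * uncached_rate          # first request: nothing cached yet
    rest = (requests - 1) * ctx_tokens / 1e6 * cached_rate  # later requests: cache hits
    return first + rest

# 50 requests over a 100k-token context:
with_caching = session_cost(50, 100_000)        # about $0.06
without_caching = 50 * 100_000 / 1e6 * 0.44     # about $2.20
```

Under these assumptions a long agent loop pays the uncached rate essentially once, which is why the caching matters far more than the headline per-token price.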

What works and what doesn't

Works

File reading, writing, editing (Read/Write/Edit tools)

Bash/PowerShell execution

Glob and Grep search

Multi-step autonomous tool loops

Subagent spawning

Git operations

Project initialization (/init)

Thinking mode (enabled by default)

Doesn't work or degraded

Intelligence difference

Routine tasks (80% of work): DeepSeek V4 Pro is comparable to Claude Opus

Complex reasoning (20%): Claude Opus is stronger - switch with --backend anthropic

Live switching (no restart)

Switch between Anthropic and DeepSeek mid-session, from inside Claude Code itself. No restart, no terminal commands. Just type a slash command.

In Claude Code ter­mi­nal:

In Claude Code VS Code ex­ten­sion:

How it works

The proxy runs on localhost:3200 and intercepts all API calls. A control endpoint (/_proxy/mode) lets you switch the active backend instantly:

Claude Code -> localhost:3200 (proxy)
 |
 +-- /_proxy/mode   POST -> switch backend
 +-- /_proxy/status GET  -> current backend + uptime
 +-- /_proxy/cost   GET  -> token usage + cost savings
 |
 +-- /v1/messages -> active backend (DeepSeek/OpenRouter/Anthropic)
 +-- everything else -> Anthropic (passthrough)
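The control-endpoint idea above can be sketched in a few lines. This is an illustrative toy, not the repo's actual implementation: a localhost HTTP server that remembers the active backend and switches it on POST /_proxy/mode. The forwarding of /v1/messages to the chosen backend is omitted for brevity.

```python
# Minimal sketch of the /_proxy/mode control endpoint (illustration only).
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

BACKENDS = {"deepseek", "openrouter", "anthropic"}
state = {"backend": "deepseek"}  # default backend

class ControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/_proxy/status":
            body = json.dumps(state).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def do_POST(self):
        if self.path != "/_proxy/mode":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        params = parse_qs(self.rfile.read(length).decode())
        backend = params.get("backend", [""])[0]
        if backend in BACKENDS:
            state["backend"] = backend  # the real proxy would reroute /v1/messages here
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_error(400, "unknown backend")

    def log_message(self, *args):
        pass  # keep the terminal quiet

def serve(port=3200):
    # port=0 picks a free port; the real proxy listens on 3200
    server = HTTPServer(("127.0.0.1", port), ControlHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With something like this running, the curl commands shown in the slash-command files below are all it takes to flip backends mid-session.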

Option 1: Slash com­mands (recommended)

Add these files to ~/.claude/commands/:

deepseek.md:

Switch the model proxy to DeepSeek. Run this command silently and report the result:

curl -sX POST http://127.0.0.1:3200/_proxy/mode -d "backend=deepseek"

If successful, say: "Switched to DeepSeek."

anthropic.md:

Switch the model proxy back to Anthropic. Run this command silently and report the result:

curl -sX POST http://127.0.0.1:3200/_proxy/mode -d "backend=anthropic"

If successful, say: "Switched to Anthropic."

openrouter.md:

Switch the model proxy to OpenRouter. Run this command silently and report the result:

curl -sX POST http://127.0.0.1:3200/_proxy/mode -d "backend=openrouter"

If successful, say: "Switched to OpenRouter."

Then type /deepseek, /anthropic, or /openrouter in any Claude Code ses­sion to switch in­stantly.

Option 2: CLI flag

deepclaude --switch deepseek    # or: ds, or, fw, anthropic
deepclaude -s anthropic

Option 3: VS Code keyboard shortcuts

Add to .vscode/tasks.json:

{


Agentic Coding is a Trap | Lars Faye

larsfaye.com

"AI does the coding, and the human in the loop is the 'orchestrator'"


This is the sentiment being hyped around the industry currently: traditional coding is all but dead, and Spec Driven Development (SDD) is the future. You generate a plan, and disconnect from writing any code. The agents know better, and handle all the implementation. You are there as the expert, to provide "good taste", review the outputs, and constantly steer the agent(s) to execute the plan that you meticulously put together.

The workflow takes many shapes at this point, but in general, it is a process where someone defines the project's requirements (simultaneously at a micro and macro level), generates a plan, and then pulls the slot machine lever over and over, iterating and reiterating with often multiple agent instances until it's done. All the while, a growing distance opens between the "orchestrator" and the code that is being generated and committed.

Coding agents are helpful and powerful, but there are already some quantifiable trade-offs that need to be discussed:

An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism.

Atrophying skills for a wide swath of the population.

Vendor lock-in for individuals and entire teams (Claude Code outages have already brought entire teams to a standstill).

Fluctuating and increasing costs to access the tools. An employee's cost is fixed; tokens are a constantly moving target.

Being successful with this approach to coding agents hinges on a rather crucial element: only a skilled developer who's thinking critically, and comfortable operating at the architectural level, can spot issues in the thousands of lines of generated code before they become a problem.

Yet, in an ironic twist of fate, it's the individual's critical thinking skills and cognitive clarity that AI tooling has now been shown to impact negatively.

Not Just Another "Abstraction"

A common refrain we hear in the community is that programmers are just "moving up the stack" and into a different type of abstraction. Whether or not these tools are really an abstraction layer in the first place is not a settled matter; a higher level of ambiguity is not a higher level of abstraction.

If we put that to the side, though, it is true that programmers tend to be wary of new languages and new ways of programming. When FORTRAN was released, programmers were skeptical of it, too. They had similar claims: it was likely to introduce more bugs and instability, and writing assembly directly was more efficient. Later, there would be discourse around compilers introducing too much "magic" into the process. These were normative arguments around a fear of what might be lost if these new technologies were embraced.

The difference with what is happening today is that those previous fears were speculative and theoretical. In just the few short years that AI tooling has existed, we are already seeing significant impacts. These aren't just junior developers, but even those with a decade (or more) of experience:

Junior developers are faced with an even steeper climb, as we truncate their ability to work with code and replace it with reviewing generated code. Reviewing code is important, but it's only 50% of the learning process, at best. Without the friction and challenges that come with working with code directly, their ability to learn is seriously diminished.

Studying this phenomenon takes time, so anecdotal evidence is important to gather to get a real-time view of the situation. But it has also been studied, and there are numerous reports reinforcing that this is a real phenomenon.

It actually is different this time.

When a C++ developer moved to Java or Python, they didn't complain of brain fog. When a sysadmin moved to AWS, they didn't feel like they were losing their ability to understand networking.

A senior engineer losing their coding edge and becoming "rusty" over time as they move into managerial roles and practice coding less is not a new phenomenon. This was the natural progression of expertise: an engineer with decades of coding, friction, and experience logged had the time to solidify those skills into wisdom. And they could apply that wisdom when their job became less about syntax and more about higher-level architectural decisions. Those individuals are not only exceedingly rare, but you won't get the next wave of seniors if we're all abdicating the friction of writing, problem-solving, and debugging.

What is happening right now is a trend where developers who've never had that longevity, or the 30+ years of friction that led to that deep understanding, are being moved into higher-level workflows requiring the same skills to manage AI agents that the senior engineer took decades to obtain.

However, senior engineers aren't immune, either. Simon Willison, a developer with nearly 30 years of experience, has reported not having "a firm mental model of what the applications can do and how they work, which means each additional feature becomes harder to reason about".

The "Skilled" Orchestrator Problem

Buried in a recent study by Anthropic was a surprisingly honest moment about the risks of engaging with coding agents on a regular basis:

"One reason that the atrophy of coding skills is concerning is the 'paradox of supervision' ... effectively using Claude requires supervision, and supervising Claude requires the very coding skills that may atrophy from AI overuse."


Sandor Nyako, Director of Software Engineering at LinkedIn, who oversees 50 engineers, has noticed it proliferating throughout the organization and has asked his team not to use the tools "for tasks that require critical thinking or problem-solving."

"To grow skills, people need to go through hardship. They need to develop the muscle to think through problems," he said. "How would someone question if AI is accurate if they don't have critical thinking?"


There is also the question of what constitutes "overuse". We already have evidence, both data-driven and anecdotal, that these skills can atrophy and dissipate rather quickly (within months, in some cases).

This is the contradiction that has many AI boosters talking out of both sides of their mouths: the use of coding agents is actively diminishing the very skills needed to effectively manage the coding agents.

LLMs accelerate the wrong parts.

Contrary to the current narrative being espoused, we didn't necessarily need to write code faster. Especially not code we didn't fully understand, and particularly not in swaths so huge that we couldn't review them in reasonable time frames.

Before AI, a (good) developer's priority list might look like:

Understanding of the code and its relation to the codebase

Whether the code is aligned with documented and efficient standards

As few lines of code as needed to accomplish the goal (while maintaining readability)

Turnaround time

Agentic coding, and LLMs in general, completely invert this list.

Their capabilities and usage tend to focus on speed, by increasing the amount of code that can be generated in a given time frame. Speed is a natural byproduct of high aptitude; when it's forced, it leads to lower accuracy. The integration of these tools doesn't tend to focus much on deeper understanding or conciseness.

Can they be used that way? Yes, with determination, they certainly can be.

Are they? No, not really; forced mandates and hype around token usage across organizations are demonstrating as much.

Coding === Planning

There is a divide between developers that isn't highlighted as much: some of us plan, and think, better with code. Thinking and working in code isn't just meaningless drudgery; it forces you to think about things on a technical level, involving everything from security to performance to user experience to maintainability.

In a recent interview discussing "Spec Driven Development", Dax, the creator of OpenCode (an open-source coding agent, no less), was quoted saying:

"When working on something new or something challenging, me typing out code is the process by which I figure out what we should even be doing.

"I have a really tough time just sitting there, writing out a giant spec on exactly how the feature should work. I like writing out types. I like writing out how some of the functions might play together. I like playing with folder structure to see what the different concepts should be. And this is all stuff that I think most people—most programmers—have always done. I don't really see a good reason why I would stop that personally, because it's how I figure out what to do."


What you say is often not what you mean, and LLMs fill in ambiguity with assumptions (or hallucinations), which leads to more review, more agent revisions, more tokens burned, and more disconnection from what is being created. Inversely, you can marvel at the most beautiful, unambiguous, perfectly structured prompt you've ever written, and the LLM can still output a hallucinated method, because it is fundamentally a next-token-prediction engine, not a compiler. You cannot replace a deterministic system with a probabilistic one and expect zero ambiguity.

Even the most AI-enthusiastic senior developers are starting to see this disconnection as a looming and growing issue.

Vendor Lock-In

When I was browsing LinkedIn during the Claude outage that occurred a while ago, I noticed numerous posts highlighting that certain developers and engineering teams were at a standstill. Their workflows, their own coding abilities, had already reached a point where they were largely dependent on these vendors. What used to be a skill they could execute with just a keyboard and a text editor suddenly required a subscription to an AI model provider.

You can't predict your token cost.

Model providers are heavily subsidized, and the models themselves are built on shifting sands. Every new model release follows the same pattern: high benchmarks, followed by hype, followed by the reality of usage and everyone complaining that the model has been "nerfed" and is burning through 2x-3x as many tokens to get the same job done.

You know how much your employees cost; you have no idea what your token costs will be day to day, month to month, year to year. If your entire team is using agentic coding as the default, your expense account will need to remain highly nimble. As Primeagen said recently: when you use these fully agentic workflows, the model providers "essentially own you".

It's not unreasonable to play this pattern forward, where we could be creating an industry in which you need to pay for token consumption to accomplish something that used to be the product of your own critical thinking and problem-solving abilities. This would resemble a type of "vendor lock-in", but for an entire industry's skillset (and I'm sure the model providers are gleefully rubbing their hands in anticipation). The financial, and intellectual, rug-pull could come at any moment, and local LLMs are nowhere near ready to scale to absorb that level of usage.

This isn't theoretical conjecture; it's being reported on right now. Even the model providers themselves are bringing it to light. Yet another Anthropic study showed a precipitous 47% drop-off in debugging skills:

"Incorporating AI aggressively into the workplace—especially in software engineering—inevitably comes with trade-offs ... developers may lean on AI to deliver quick results at the expense of building critical skills—most notably, the ability to debug when things go wrong."


There's a way to avoid all of this, of course. LLMs are a powerhouse technological advancement, and when used responsibly, they can be a stellar tool for learning and upskilling. They enable me to dive deeper and wider into concepts and techniques, expanding understanding and enabling exploration of new ideas that used to be more arduous and time-consuming to experiment with. This is where I think they will offer the industry the most long-term value.

My Approach: Demote AI's Role

I'm certainly not advocating for typing all code out manually. Programmers have always been looking for ways to create code without having to write code. This is why we have Emmet, autocomplete, and snippets in the first place. Even COBOL was designed to encapsulate more instructions with less writing by using "English-like" words such as MOVE and WRITE. jQuery's motto was "write less, do more". LLMs are another addition to this array of code-generation tools.

What I am advocating for, though, is leveraging LLMs and coding agents as secondary processes, in a way that doesn't sacrifice the individual's skills at the altar of productivity. You can flip the script and lean on them to brainstorm the planning parts of the process while staying actively engaged throughout implementation, delegating to them on an as-needed basis. You can leverage the productivity gains and mitigate the comprehension debt.

My daily workflow:

I use LLMs to help generate specs and plans, while I facilitate the implementation. This is an inversion of the "orchestration" workflow; I am still manually coding anywhere from 20% to 100%, depending on the task.

I very often write pseudo-code when I engage with the models, closing the distance between the request and the generated code.

I use the models as delegation utilities for ad-hoc code generation and interactive documentation, as well as research tools, so that I can constantly ask questions, iterate, refactor, and gain clarity around my approaches.

I never generate more than I can review in a sitting. If it's too much to review, I slow down and split the task up, manually refactoring where needed to ensure a comprehensive understanding of the end result.

I never ask an LLM or agent to implement something I've never done before or couldn't do on my own, except perhaps purely for educational or tutorial purposes (and often discarded afterwards).

If I had to TL;DR this list, it would be: use them like the Ship's Computer, not Data. (Any Star Trek fans should get the reference.)

I'm not going faster, but I'm doing better-quality work.

The productivity gains from these models are real, and so are the friction and understanding that come from engaging with the work on a tangible and frequent basis.

Despite the countless failed attempts to democratize coding without understanding coding, we're faced with the reality that you cannot understand code without engaging with it. And it's become clear that if you don't keep engaging with and writing it, you can lose touch with that understanding, which will in turn make you a less capable orchestrator in the first place, rendering this phase of AI coding a strange and needlessly stressful interlude.

Perhaps I am worrying too much, but history contains lessons.

This all feels similar, though, like another large experiment we're running on ourselves. We've been through a similar period with the introduction of social media, without understanding the long-term implications, and we're now faced with attention deficits (among many other issues) on a wide scale.

This time, we're gambling with something much riskier.

"People who go all in on AI agents now are guaranteeing their obsolescence. If you outsource all your thinking to computers, you stop upskilling, learning, and becoming more competent." — Jeremy Howard, creator of fast.ai


Why TUIs are back by Alcides Fonseca

wiki.alcidesfonseca.com

Terminal User Interfaces (TUIs) are making a comeback. DHH’s Omarchy is made of three types of user interface: TUIs, for immediate feedback and bonus geek points; web apps, because 37signals (his company) sells SaaS web applications; and the unavoidable GNOME-style native applications that really do not fit the style of the distro.

The same pattern occurred around ten years ago in code editors. We moved from native editors like BBEdit, TextMate (also promoted by DHH), Notepad++ and Sublime Text to Electron-powered apps like Atom, VSCode and all its forks. The hardcore moved to vim or emacs, trading immediate feedback and higher usability for the steepest learning curve I’ve seen.

Windows

The lesson is clear: native applications are losing. Windows is the GUI-library version of the standards joke: because one API doesn’t succeed, they make up another one, just for that one to fail within the sea of alternatives that already exist.

MFC (1992) wrapped Win32 in C++. If Win32 was inelegant, MFC was Win32 wearing a tuxedo made of other tuxedos. Then came OLE. COM. ActiveX. None of these were really GUI frameworks — they were component architectures — but they infected every corner of Windows development and introduced a level of cognitive complexity that makes Kierkegaard read like Hemingway.


— Jeffrey Snover, in Microsoft hasn’t had a coherent GUI strategy since Petzold

Since then, Microsoft has gone through WinForms, WPF, Silverlight, WinUI and MAUI without success. Many enterprise and personal desktop applications still rely on Electron, and the last memory I have of coherent visual integration across the whole OS is Windows 98 or 2000.

It turns out that it’s a lot of work to recreate one’s OS and UI APIs every few years. Coupled with the intermittent attempts at sandboxing and deprecating “too powerful” functionality, the result is that each new layer has gaps, where you can’t do certain things which were possible in the previous framework.


— Domenic Denicola, in Windows Native App Development Is a Mess

Linux

The UI inconsistency in Linux was created by design. Different teams wanted different outcomes, and they had the freedom to pursue them. GTK and Qt became the two reigning frameworks. While Qt is best known for it, both aimed to support cross-platform native development (once upon a time, I successfully compiled gedit on Windows, learning a lot about C compilation, makefiles and environment variables in the process), but they are only widely used in Linux land. Luckily, applications made in the different toolkits can look okay-ish next to each other, something the different frameworks on Windows fail to achieve. How many engineer-hours does it take to redo the Windows Control Panel?

Given the difficulty of testing the million different combinations of distros, desktop environments and hardware in general, most companies do not bother with a native Linux application — they either address it using Electron (minting the lock-down), or they let the open-source community solve it themselves (when they have open APIs).

macOS

Apple used to be a one-book religion. Apple’s Human Interface Guidelines used to be cited by every user-interface course around the world. Xerox PARC and Apple were the two institutions that studied what it means to have a good human interface. Fast forward a few decades, and Apple is doing its worst to break all the guidelines and consistency it was known for.

Now, Apple has been ignoring Fitts’ law, making resizing windows near-impossible (even after trying to fix it) and adding icons to every single menu. macOS is no longer the safe haven where designers can work peacefully.

Electron

Everyone knows that the user experience of Electron apps sucks. The most popular complaint is memory consumption, which to be fair has been decreasing over the last decade, but my main complaint (as I usually drive a 64GB-RAM MacBook Pro) is the lack of visual consistency and of keyboard-driven workflows. Looking at my dock, I have 8 native apps (TextMate and macOS system utilities) and 6 Electron apps (Slack, Discord, Mattermost, VSCode, Cursor, Plexamp). And that’s from someone who really wishes he could avoid having any Electron app at all.

Let us take the example of Cursor (the same would be true of VSCode). If you are in the agent panel requesting your next feature, can you move to the agent list in the side panel with just the keyboard? Can you archive it? These are actions that should be the same across every macOS application, and even when shortcuts exist, they are not announced in the menus. Over the last decade, developers have been forgetting to add menu items for the same actions available in their application (mostly because the application is HTML inside its sandbox). For the record, Slack does this better than the others, but it’s not perfect.

Restarting from scratch

Together with Dart, Google wanted to design a new operating system (Fuchsia), without all the legacy of Android, for new devices. It wanted a fresh UI toolkit (Flutter), but Google gave up on the project before a real product was launched. It’s one of those situations where having a monopoly (or a large enough slice of the market) is required to succeed.

Meanwhile, Zed did the same thing in Rust: they designed their own cross-platform GPU-renderer library (GPUI). Despite its speed, it lacks integration with the host OS out of the box, requiring developers to add the right bindings. Personally, I would rather have a slow renderer that integrated with my OS than the extra speed.

TUIs

TUIs are fast, easy to automate (RIP Automator) and work reasonably well across operating systems. You can even run them remotely without any headache-inducing X forwarding. When the native UI toolkits fail, we go back to basics. Claude Code and Codex have been very successful on the command line: you focus on the interaction and forget about the operating system around you. You can even drive code and apps on cloud machines, or remote into your GPU-powered machine from your iPad. TUIs are filling the void left by Apple and Microsoft in a post-apocalyptic world where every application looks different — which is fine if you are doing art (including computer games), but not if your goal is to get out of the user’s way and let them do their job.
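Part of why TUIs travel so well is that the whole “UI” is just ANSI escape sequences written to stdout, which every terminal — local, over SSH, or inside a multiplexer — already understands. A minimal, hypothetical sketch (not taken from any of the tools named above; all names are illustrative):

```rust
// A TUI needs no window system or toolkit: the interface is bytes on
// stdout. SGR escape sequences (ESC [ ... m) handle all the styling.

const BOLD: &str = "\x1b[1m";
const INVERT: &str = "\x1b[7m"; // reverse video, classic status-bar look
const RESET: &str = "\x1b[0m";

/// Render a one-line status bar: `left` and `right` pushed to the
/// edges, padded with spaces to `width` visible columns.
fn status_bar(left: &str, right: &str, width: usize) -> String {
    let pad = width.saturating_sub(left.len() + right.len());
    format!("{INVERT}{left}{}{right}{RESET}", " ".repeat(pad))
}

fn main() {
    // The same bytes render identically on a local terminal, over SSH,
    // or in tmux — which is the portability argument in a nutshell.
    println!("{BOLD}my-tool 0.1{RESET}");
    println!("{}", status_bar(" NORMAL ", " 12:3 ", 40));
}
```

Because the output is plain bytes, the same program is trivially scriptable and runs unchanged on a cloud machine, with none of the per-platform toolkit work discussed above.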

What’s next

A checkbox is also part of an interface. You’re using it to interact with a system by inputting data. Interfaces are better the less thinking they require: whether the interface is a steering wheel or an online form, if you have to spend any amount of time figuring out how to use it, that’s bad. As you interact with many things, you want homogeneous interfaces that give you consistent experiences. If you learn that Command + C is the keyboard shortcut for copy, you want that to work everywhere. You don’t want to have to remember to use CTRL + Shift + C in certain circumstances or right-click → copy in others, that’d be annoying.


— John Loeber in Bring Back Idiomatic Design

We need to go back to basics. Every developer should learn the theory of what makes a good user interface (software or not!), like Nielsen, Norman or Johnson, and stop treating UI design as a soft skill that does not matter in the software engineering curriculum. In any course, if the UI does not make sense, the project should be failed. And in the HCI course, we should aim for perfect UIs. It takes work, but that work is mostly about understanding what we need. The programming is already being automated.

Operating system and toolkit authors should drive this investment. They should focus on making accessible toolkits that developers want to use, and on lowering the barrier to entry, making those platforms last as long as possible. I do not necessarily argue for cross-platform support, but having one such solution would help reduce the dependency on Electron and TUIs.

Spirit 2.0 — The Airline Owned by the People, for the People

letsbuyspiritair.com

A desktop made for one

isene.org

For the first time in twenty-five years, I’m sitting in front of a computer where almost every program I touch was designed by me. One tool at a time, the off-the-shelf option got swapped out for something a little closer to how my hands wanted to work. (I wrote about the start of this a couple of weeks ago — that post laid out the early swaps; this one is the view from the other side of the journey.)

It’s been a crazy few weeks guiding Claude Code in between all the other stuff I’m doing in life. I direct CC, and it works while I do other stuff. I get a second or a few between tasks, and I respond. Then off it goes, adding features or hunting bugs.

Two suites in a happy marriage: CHasm, the bedrock — pure x86_64 assembly, no libc, the layer that paints pixels and reads keys. Fe₂O₃, the application layer in Rust, sitting on a small shared TUI library called crust.

The CHasm layer (assembly)

The Fe₂O₃ layer (Rust on crust)

What’s left? WeeChat for IRC and other chats. Firefox — the only GUI program I still use regularly. That’s it. Everything else is mine.

The vim line

Let me get a bit sentimental about vim, because vim was the one I thought I’d never replace.

I started using it in 2001. For twenty-five years, every email I wrote went through vim. Every article. Every blog post. Every line of code, every HyperList, and every book. It was the one tool I would have called part of how I think. The muscle memory was so deep that I’d open random text fields in browsers and end up typing :w.

Then in three days I had scribe and stopped using vim.

The first commit landed at 00:09 on May 1st. By this afternoon (May 3rd), vim was replaced. Twenty-five years of muscle memory rerouted in seventy-two hours.

Vim is wonderful, but scribe is mine. It’s modal like vim, but missing the ninety percent of features I never used, and carrying the handful of writer-shaped tweaks I always wished vim had. Soft-wrap by default. Reading mode with Limelight-style focus. AI at the prompt without leaving the buffer. HyperList editing with full syntax highlighting and the encryption format the Ruby HyperList app uses. Persistent registers shared across concurrent sessions is a cool feature. None of it is revolutionary, but all of it is shaped to my exact workflow. And whenever I think of an enhancement I want, it’s just minutes away. It used to mean waiting months, years or forever for some developer to have the same idea and introduce it into the tool I use.
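The “persistent registers shared across sessions” idea can be sketched in a few lines: each session serializes its named registers to a shared file and reloads that file before reading. This is NOT scribe’s actual implementation — the file name, format and function names below are all hypothetical — just a minimal illustration of the mechanism:

```rust
// Hypothetical sketch: registers persisted to a shared file so that
// concurrent editor sessions see each other's yanks.
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::Path;

/// Save registers as simple `name<TAB>contents` lines. Assumes the
/// contents contain no tabs or newlines; a real editor would escape them.
fn save_registers(path: &Path, regs: &HashMap<String, String>) -> io::Result<()> {
    let mut out = String::new();
    for (name, contents) in regs {
        out.push_str(&format!("{name}\t{contents}\n"));
    }
    fs::write(path, out)
}

/// Reload registers from disk; any session calling this before a paste
/// sees the latest state written by any other session.
fn load_registers(path: &Path) -> io::Result<HashMap<String, String>> {
    let mut regs = HashMap::new();
    for line in fs::read_to_string(path)?.lines() {
        if let Some((name, contents)) = line.split_once('\t') {
            regs.insert(name.to_string(), contents.to_string());
        }
    }
    Ok(regs)
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("scribe_registers.tsv");
    let mut regs = HashMap::new();
    regs.insert("q".to_string(), "a yanked line".to_string());
    save_registers(&path, &regs)?; // "session 1" yanks and persists
    let shared = load_registers(&path)?; // "session 2" reloads before paste
    println!("register q = {:?}", shared.get("q"));
    Ok(())
}
```

A yank in one session ends with `save_registers`; a paste in another starts with `load_registers`, so both operate on the same named registers without any IPC beyond the filesystem.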

Why this is possible now

It used to be that writing your own editor, your own file manager, your own window manager was a project of years. I know: it took me a few years to get RTFM right. A serious undertaking with a serious cost. The economics of it didn’t work for most people, even programmers. You’d touch a piece of it, get most of the way, run out of weekend, and go back to the off-the-shelf tool.

That barrier is much lower now. With Rust, CC as the workhorse, and the fact that the hard problems of TUI programming have been documented to death, the cost of “build the tool you actually want” has fallen by orders of magnitude.

I don’t think this is a story about AI or about Rust specifically. Both helped. But the deeper point is that the gap between “I wish my editor did X” and “okay, here’s an editor that does X” is now small enough to fit inside a few evenings of focused work.

I’m not selling anything

I should say what this post is not.

It’s not an invitation to use my software. Honestly, please don’t. None of it is built for you. It’s built for me — for the way I hold my hands, the way I think about email, the way I want my calendar to render. I’m sure other people would find a hundred sharp edges I’ve never noticed because they happen to align perfectly with what I do.

It’s also not a request for kudos. The code isn’t novel, nor are the ideas. There’s nothing here that hasn’t been done before by someone with more taste, discipline or talent.

What I want to do is show one specific thing: it is now genuinely feasible to make a desktop computing environment that fits one person, instead of a configuration of someone else’s tools. This is no longer a heroic, decade-long undertaking. It is an actual, weekend-by-weekend, “this thing in my life now does exactly what I want” replacement.

The joy of an audience of one

The best part of building for myself: the relief of not having to care.

I don’t have to think about configurability for someone with different preferences. And I don’t have to support corner cases I’d never personally hit. Nor do I have to write documentation for users who don’t exist. No more arguing on issue trackers about whether a default is the right default — of course it’s the right default, it’s the one I want.

The editor’s \? cheatsheet shows the keys I memorised, in the order I prefer, with the bindings I think are sensible. Arrogance? Nope, it’s design without committee. The audience is one person. Decisions take seconds.

It turns out an enormous amount of software complexity comes from accommodating users who aren’t you. Strip that out and what’s left is small, fast, exactly-shaped, and a quiet pleasure to use.

So

If you’ve ever caught yourself thinking “I wish my editor / file manager / status bar / shell just did this one thing differently” and you’ve been told the answer is to write a plugin, learn an obscure config language, or accept the way it is, then consider that another option is more available than it used to be: Build Your Own Software (BYOS).

You probably won’t replace your whole desktop. I didn’t plan to either. But the satisfaction of having even one tool in your daily workflow that fits you exactly is worth a weekend.

I’m a rabbit in spring :)

Denuvo has been cracked in all single-player games it previously protected — 2K Games and Denuvo reportedly retaliate with mandatory 14-day online checks

www.tomshardware.com

We’ve reported previously on the feats of the skull-and-bones community against Denuvo’s DRM. The cat-and-mouse game has essentially come to a head for now, as the pirate crew has “officially” reported that, as of yesterday, there were zero games with Denuvo that haven’t been cracked or bypassed.

This development should be of little surprise to those following this story, but here’s a quick recap: in late 2025, the MKDev collective and the prolific DenuvOwO came up with a hypervisor-based bypass (HVB) that installs a kernel-level driver to intercept and respond to Denuvo’s checks. While that’s not an actual crack, it’s good enough for piracy work, as the saying goes. Simultaneously, voices38, a well-known cracker, fully stripped a few choice titles of Denuvo entirely, including recent releases like Resident Evil: Requiem.



10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.


Visit pancik.com for more.