10 interesting stories served every morning and every evening.

Thienan Tran

thienantran.com

Talking to 35 Strangers at the Gym

Background

A cou­ple months ago, I was the Wizard of Loneliness. I had grad­u­ated from col­lege al­most two years prior and, while I had luck­ily found a job, I was un­suc­cess­ful in find­ing friends.

Each night, I would look up "how to make friends after college" and find the same advice every time: "do your hobby with other people, frequently".

On paper, the gym seemed like the perfect opportunity to meet people since I would go there nearly every day; however, according to Reddit, there are a number of people who want to be left alone and can be irritated if you interrupt their workout to talk.

I am deeply afraid of ir­ri­tat­ing some­one or be­ing in awk­ward sit­u­a­tions. Here’s a list of things that I did as a re­sult of that fear:

Hesitated for a cou­ple min­utes be­fore wak­ing up my room­mate when the fire alarm went off

Pretended I did­n’t know a child­hood friend when they said hi be­cause I did­n’t know how to act around peo­ple I used to know

Ignored peo­ple I knew from class in­stead of say­ing hi be­cause I did­n’t know for sure if they re­mem­bered me even though the class had only 10 peo­ple in it

So you can un­der­stand when I say that walk­ing up to some­one and start­ing a con­ver­sa­tion with them at the gym of all places is kinda ter­ri­fy­ing for me.

Unfortunately, there was no other good op­tion. My other hobby is pro­gram­ming, but the Syracuse Development group only meets up once a month, and ac­tiv­i­ties sug­gested by r/​Syra­cuse like vol­ley­ball and trivia night re­quire you to al­ready have friends. I did­n’t have a choice. If I wanted friends, I would have to put in the work at the gym.

Problem Statement

I am lonely and have no friends.

Procedure

I de­cided to run a lit­tle ex­per­i­ment to find some friends.

Each day, for one month, I picked out one per­son to ap­proach. Usually it would be some­one I saw fre­quently at the gym.

Then, I would ap­proach them, wave or tap them on the shoul­der to get their at­ten­tion, and then give them my open­ing line.

Initially, my opening line for everyone was "Hey, I see you here all the time. You're pretty strong. What's your split?" After a week or so, I began customizing the opening line per person based on what I found interesting about them.

For instance, someone was wearing a Boston hat and I was curious whether they went to school in Boston like I did, so I asked them about it. After the opening line, I tried to talk to them for 5-10 minutes until they let me go. I tried not to be the one to end it because I have a habit of ending conversations early.

Results

Here’s the raw data. I split it up by week and put it into these col­lapsi­ble things be­cause it takes up a lot of space. Click on each week to see the data for that week.

Description is a short de­scrip­tion of the per­son.

Length is how long the conversation was. A short conversation is 0-2 minutes, a medium conversation is 5-7 minutes, and a long conversation is 10+ minutes.

Notes are just any­thing in­ter­est­ing about the con­ver­sa­tion or the per­son I was talk­ing to.

Aftermath is what hap­pened af­ter that con­ver­sa­tion.

Reflection

The first couple days were extremely difficult. I had been conditioned to believe that initiating a conversation with a stranger was weird, and it was tough to break free from that. As a result, for the first few people, I would always make a detour at the last second, e.g. a trip to the water fountain. I chickened out! The solution was to approach the person as quickly as possible so that I didn't have time to think about running away.

Luckily, the first few people were receptive. I got a rush of dopamine whenever someone responded positively to my conversation, so talking to new people became strangely addictive. I kept talking to more and more new people each day until I talked to a whopping seven (SEVENNN) new people in one day (this is why Week 3 has a lot of entries). It was crazy.

People did­n’t al­ways re­spond pos­i­tively though. In Week 1 and Week 2, I came across a num­ber of peo­ple who were re­ally short with their re­sponses and did­n’t try to con­tinue the con­ver­sa­tion. They gave off the vibe that they did­n’t want to talk to me. It was re­ally awk­ward and al­most made me end the ex­per­i­ment.

But over time, I came to ac­cept that it’s ok if they did­n’t want to talk to me. That’s just one of the things you have to ex­pect when you do some­thing like this.

And be­ing in an awk­ward sit­u­a­tion is ac­tu­ally not that bad. It sucks in the mo­ment, but then you just take a few min­utes to calm down and then you move on with your life. You’re ok.

However, I did end up pulling back in Week 4 and Week 5. I felt like con­stantly talk­ing to more new peo­ple was pro­duc­ing di­min­ish­ing re­turns. I had al­ready es­tab­lished a con­nec­tion with many peo­ple at the gym, so it was a bet­ter use of my lim­ited time (remember I still have to work out!) to nur­ture those ex­ist­ing con­nec­tions into mean­ing­ful ones.

I ended up prioritizing the 5-6 people I see and say "hi" to each day.

One of these people is someone I will refer to as "the other Asian guy". I got a lot closer to him than expected. We realized we had the same workout routine, so we became gym buddies and started working out together. A few weeks later, he invited me to his apartment, where he cooked me a smash burger. His girlfriend showed me graphic pictures of what she was learning in PA school, too. Then, we watched a movie with their cat. I'm really grateful that they were kind enough to have me over as a guest.

Also, some­thing new hap­pened: in­stead of scar­ing peo­ple away, I had a pos­i­tive im­pact on some­one.

These texts were from one of the peo­ple I pri­or­i­tized, the male SU stu­dent. He had re­cently moved to Syracuse and was strug­gling to make new friends. He re­lated to a cou­ple of my videos where I talked about the same strug­gles and was su­per ap­pre­cia­tive that I talked to him that day. The fol­low­ing week, we tried out Kofta Burger af­ter a rec­om­men­da­tion from my friend who lives down­town.

The burger was de­li­cious and we had a great time.

Despite my suc­cesses, my work is­n’t done. I re­al­ized near the end of the month that what I truly wanted was to con­sis­tently hang out with peo­ple on the week­ends. Unfortunately, most of the friends I’ve made are busy on the week­end. They’re tak­ing trips to visit loved ones, go­ing to the bar (I’m not that into drink­ing), or run­ning er­rands, so it’s hard to plan any­thing.

But I guess that’s a bet­ter prob­lem to have than eter­nal lone­li­ness.

A few months ago, I was googling "how to make friends after college" every night. Now I have people to text, people to wave to at the gym, and people who notice when I don't show up for a few days. AND I became a more resilient person who is unafraid to do hard and scary things.

No more Wizard of Loneliness for me!

GitHub - aattaran/deepclaude: Use Claude Code's autonomous agent loop with DeepSeek V4 Pro, OpenRouter, or any Anthropic-compatible backend. Same UX, 17x cheaper.

github.com

What this does

Claude Code is the best au­tonomous cod­ing agent — but it costs $200/month with us­age caps. DeepSeek V4 Pro scores 96.4% on LiveCodeBench and costs $0.87/M out­put to­kens.

deep­claude swaps the brain while keep­ing the body:

Your terminal

+-- Claude Code CLI (tool loop, file editing, bash, git - unchanged)

    +-- API calls -> DeepSeek V4 Pro ($0.87/M) instead of Anthropic ($15/M)

Everything works: file read­ing, edit­ing, bash ex­e­cu­tion, sub­agent spawn­ing, au­tonomous multi-step cod­ing loops. The only dif­fer­ence is which model thinks.

Quick start (2 min­utes)

1. Get a DeepSeek API key

Sign up at plat­form.deepseek.com, add $5 credit, copy your API key.

2. Set en­vi­ron­ment vari­ables

Windows (PowerShell):

setx DEEPSEEK_API_KEY "sk-your-key-here"

ma­cOS/​Linux:

echo 'export DEEPSEEK_API_KEY="sk-your-key-here"' >> ~/.bashrc

source ~/.bashrc

3. Install

Windows:

# Copy the script to a directory in your PATH

Copy-Item deepclaude.ps1 "$env:USERPROFILE\.local\bin\deepclaude.ps1"

# Or add the repo directory to PATH

setx PATH "$env:PATH;C:\path\to\deepclaude"

ma­cOS/​Linux:

chmod +x deepclaude.sh

sudo ln -s "$(pwd)/deepclaude.sh" /usr/local/bin/deepclaude

4. Use it

deepclaude # Launch Claude Code with DeepSeek V4 Pro

deepclaude --status # Show available backends and keys

deepclaude --backend or # Use OpenRouter (cheapest, $0.44/M input)

deepclaude --backend fw # Use Fireworks AI (fastest, US servers)

deepclaude --backend anthropic # Normal Claude Code (when you need Opus)

deepclaude --cost # Show pricing comparison

deepclaude --benchmark # Latency test across all providers

deepclaude --switch ds # Switch backend mid-session (no restart)

How it works

Claude Code reads these en­vi­ron­ment vari­ables to de­ter­mine where to send API calls:

deep­claude sets these per-ses­sion (not per­ma­nently), launches Claude Code, then re­stores your orig­i­nal set­tings on exit.
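As a minimal sketch of that per-session behavior (ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN are Claude Code's standard backend override variables; the DeepSeek endpoint URL here is an assumption, not taken from the repo):

```shell
# Minimal sketch of the per-session swap. `env` scopes the overrides to a
# single command, so the parent shell's settings are untouched and nothing
# needs restoring afterwards - which is the effect deepclaude automates.
with_deepseek() {
  env ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic" \
      ANTHROPIC_AUTH_TOKEN="${DEEPSEEK_API_KEY:?set DEEPSEEK_API_KEY first}" \
      "$@"
}

# Usage: with_deepseek claude    # launch Claude Code against DeepSeek
```

The real script adds backend selection and status commands on top of this, but the core mechanism is just these two variables.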

Supported back­ends

Setup per back­end

DeepSeek (default - just needs DEEPSEEK_API_KEY):

setx DEEPSEEK_API_KEY "sk-…" # Windows

export DEEPSEEK_API_KEY="sk-…" # macOS/Linux

OpenRouter (optional):

setx OPENROUTER_API_KEY "sk-or-…" # Windows

export OPENROUTER_API_KEY="sk-or-…" # macOS/Linux

Fireworks AI (optional):

setx FIREWORKS_API_KEY "fw_…" # Windows

export FIREWORKS_API_KEY="fw_…" # macOS/Linux

Cost com­par­i­son

DeepSeek’s au­to­matic con­text caching makes agent loops ex­tremely cheap - af­ter the first re­quest, the sys­tem prompt and file con­text are cached at $0.004/M (vs $0.44/M un­cached).
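To put those caching rates in perspective, a back-of-envelope sketch (the 2M-token context size is a made-up example; the per-million rates are the ones quoted above):

```shell
# Rough cost of one agent-loop request before and after the cache warms up.
awk 'BEGIN {
  tokens_m = 2.0                    # example: 2M input tokens of prompt + file context
  uncached = 0.44                   # $/M input tokens, first request
  cached   = 0.004                  # $/M input tokens, cache hit
  printf "first request: $%.3f\n", tokens_m * uncached
  printf "cached request: $%.3f\n", tokens_m * cached
}'
```

At these rates the cached portion is roughly 110x cheaper, which is why long tool loops that re-send the same context stay affordable.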

What works and what does­n’t

Works

File read­ing, writ­ing, edit­ing (Read/Write/Edit tools)

Bash/PowerShell ex­e­cu­tion

Glob and Grep search

Multi-step au­tonomous tool loops

Subagent spawn­ing

Git op­er­a­tions

Project ini­tial­iza­tion (/init)

Thinking mode (enabled by de­fault)

Doesn’t work or de­graded

Intelligence dif­fer­ence

Routine tasks (80% of work): DeepSeek V4 Pro is com­pa­ra­ble to Claude Opus

Complex reasoning (20%): Claude Opus is stronger - switch with --backend anthropic

Live switch­ing (no restart)

Switch be­tween Anthropic and DeepSeek mid-ses­sion - from in­side Claude Code it­self. No restart, no ter­mi­nal com­mands. Just type a slash com­mand.

In Claude Code ter­mi­nal:

In Claude Code VS Code ex­ten­sion:

How it works

The proxy runs on lo­cal­host:3200 and in­ter­cepts all API calls. A con­trol end­point (/_proxy/mode) lets you switch the ac­tive back­end in­stantly:

Claude Code -> localhost:3200 (proxy)

|

+-- /_proxy/mode POST -> switch backend

+-- /_proxy/status GET -> current backend + uptime

+-- /_proxy/cost GET -> token usage + cost savings

|

+-- /v1/messages -> active backend (DeepSeek/OpenRouter/Anthropic)

+-- everything else -> Anthropic (passthrough)

Option 1: Slash com­mands (recommended)

Add these files to ~/.claude/commands/:

deepseek.md:

Switch the model proxy to DeepSeek. Run this com­mand silently and re­port the re­sult:

curl -sX POST http://127.0.0.1:3200/_proxy/mode -d "backend=deepseek"

If successful, say: "Switched to DeepSeek."

an­thropic.md:

Switch the model proxy back to Anthropic. Run this com­mand silently and re­port the re­sult:

curl -sX POST http://127.0.0.1:3200/_proxy/mode -d "backend=anthropic"

If successful, say: "Switched to Anthropic."

open­router.md:

Switch the model proxy to OpenRouter. Run this com­mand silently and re­port the re­sult:

curl -sX POST http://127.0.0.1:3200/_proxy/mode -d "backend=openrouter"

If successful, say: "Switched to OpenRouter."

Then type /deepseek, /anthropic, or /openrouter in any Claude Code ses­sion to switch in­stantly.
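The command files above can also be created from the shell; this sketch writes the deepseek.md variant verbatim (repeat for the other two):

```shell
# Write the /deepseek slash command described above into Claude Code's
# user-level command directory. The file contents are the prompt text verbatim.
mkdir -p ~/.claude/commands
cat > ~/.claude/commands/deepseek.md <<'EOF'
Switch the model proxy to DeepSeek. Run this command silently and report the result:

curl -sX POST http://127.0.0.1:3200/_proxy/mode -d "backend=deepseek"

If successful, say: "Switched to DeepSeek."
EOF
```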

Option 2: CLI flag

deepclaude --switch deepseek # or: ds, or, fw, anthropic

deepclaude -s anthropic

Option 3: VS Code key­board short­cuts

Add to .vscode/tasks.json:

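As a hypothetical sketch of such a task (the label and structure are assumptions; the curl call is the same one used by the slash commands):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "deepclaude: switch to DeepSeek",
      "type": "shell",
      "command": "curl -sX POST http://127.0.0.1:3200/_proxy/mode -d \"backend=deepseek\""
    }
  ]
}
```

A task like this can then be bound to a keyboard shortcut via the workbench.action.tasks.runTask command in keybindings.json.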

GameStop makes $55.5bn takeover offer for eBay

www.bbc.co.uk

"eBay should be worth - and will be worth - a lot more money," Cohen told the Wall Street Journal. "It could be a legit competitor to Amazon," he added.

Under the proposed deal to buy eBay, Cohen would become the chief executive of the new firm and receive no salary or bonuses, being "compensated solely based on the performance of the combined company".

GameStop, which cur­rently has a stock mar­ket val­u­a­tion of around $11.9bn, said it has a com­mit­ment let­ter from TD Securities to pro­vide around $20bn in debt to help fi­nance the takeover.

Cohen said he planned to cut costs at eBay by $2bn within a year of a deal be­ing com­pleted.

This would mainly fall across eBay's sales and marketing division, which GameStop said had failed to attract more users to "a marketplace with near-universal brand recognition".

The proposal does not sound like "a terribly good offer" as it would saddle eBay with GameStop's debt, said Sucharita Kodali, a retail analyst at research firm Forrester.

It makes sense for GameStop be­cause it could lift its val­u­a­tion by be­ing linked with a larger com­pany like eBay, she told the BBC.

"The truth is, we are not necessarily putting two strong companies together," Kodali added.

Shares in eBay rose by 5% on Monday in New York, while GameStop fell by more than 9%.

GameStop's shops would give eBay a national network for its "live commerce" and other business operations, Cohen said.

Cohen, who be­came the GameStop boss in 2023, has crit­i­cised its slow shift into e-com­merce.

Spirit 2.0 — The Airline Owned by the People, for the People

letsbuyspiritair.com

Trademark Violation: Fake Notepad++ for Mac

notepad-plus-plus.org

2026-05-01

Several users have re­cently re­ported a web­site pre­tend­ing to of­fer an of­fi­cial ma­cOS ver­sion of Notepad++:

notepad-plus-plus-mac.org

Let me be blunt:

This site has ab­solutely noth­ing to do with Notepad++.

It’s not au­tho­rized, not en­dorsed, and not af­fil­i­ated with the pro­ject in any way.

The owner is using the Notepad++ trademark (the name) without permission.

This is mis­lead­ing, in­ap­pro­pri­ate, and frankly dis­re­spect­ful to both the pro­ject and its users. It has al­ready fooled peo­ple - in­clud­ing tech me­dia - into be­liev­ing this is an of­fi­cial re­lease.

To be crys­tal clear:

Notepad++ has never re­leased a ma­cOS ver­sion.

Anyone claim­ing oth­er­wise is sim­ply rid­ing on the Notepad++ name.

As mentioned in my GitHub post, I have already contacted the owner of the fake "official" website, and I am still waiting for a reply.

In the meantime, if you see someone posting "Notepad++ is finally on Mac!" on Reddit, Twitter, Mastodon, Discord, StackOverflow, or any tech blogs/forums, please reply with:

"This is not an official Notepad++ release. It's an unauthorized project misusing the Notepad++ trademark.", and include a link to this announcement.

Thank you to the users who raised the alarm. Your vig­i­lance helps pro­tect the pro­ject from peo­ple who think they can bor­row the Notepad++ iden­tity as they please.

– Don Ho

Removable batteries in smartphones will be mandatory starting in 2027

www.ecopv-eu.com

Starting in 2027, there will be a no­tice­able change for smart­phones in the EU: The re­mov­able bat­tery is mak­ing a come­back. What used to be stan­dard is re­turn­ing due to le­gal re­quire­ments for new mod­els.

What ex­actly can we ex­pect?

Starting February 18, 2027, new smart­phones and tablets must be de­signed so that end users can re­move and re­place the bat­tery them­selves us­ing stan­dard tools. Adhesive bonds that re­quire heat to be re­moved will then be largely pro­hib­ited.

Specifically, this means the fol­low­ing for new mod­els start­ing in February 2027:

Easy re­place­ment: Batteries must be re­place­able us­ing stan­dard tools (e.g., screw­drivers).

No bar­ri­ers: The use of ad­he­sives that can only be re­moved with heat or sol­vents is pro­hib­ited.

Tools: If a spe­cial tool is re­quired for re­place­ment, the man­u­fac­turer must pro­vide it free of charge.

Spare parts guar­an­tee: Replacement bat­ter­ies must be avail­able to end users at a rea­son­able price for at least 5 years.

Why is the EU in­tro­duc­ing this?

The main dri­ver is the tran­si­tion to a true cir­cu­lar econ­omy. Currently, smart­phones are of­ten re­placed as soon as bat­tery per­for­mance de­clines, which wastes enor­mous amounts of re­sources.

Waste pre­ven­tion: Millions of tons of elec­tronic waste are gen­er­ated in the EU every year. Easily re­place­able bat­ter­ies sig­nif­i­cantly ex­tend the lifes­pan of de­vices.

Cost sav­ings: Many users shy away from ex­pen­sive re­pairs or buy­ing new de­vices. The EU es­ti­mates that con­sumers could save tens of bil­lions of eu­ros in to­tal by 2030 thanks to longer us­age cy­cles.

Resource con­ser­va­tion: Batteries con­tain valu­able raw ma­te­ri­als such as lithium and cobalt. If they are eas­ily re­mov­able, they can be sorted by type and re­cy­cled more ef­fi­ciently.

Fire safety: Batteries that are per­ma­nently glued in place are of­ten dam­aged dur­ing shred­ding, which re­peat­edly leads to dan­ger­ous fires in sort­ing fa­cil­i­ties. Clean re­moval sig­nif­i­cantly in­creases safety in the re­cy­cling process.

What does this mean for users?

DIY re­pairs: Instead of pay­ing a lot of money to visit a re­pair ser­vice, you sim­ply buy the re­place­ment part and swap it out your­self.

Higher re­sale value: Used cell phones can be resold much more eas­ily and for a higher price with a brand-new bat­tery.

Longer soft­ware sup­port: Since the hard­ware lasts longer, there is also in­creased pres­sure on man­u­fac­tur­ers to of­fer se­cu­rity up­dates for a longer pe­riod.

Will this make smart­phones thicker or less wa­ter­proof?

That is the key chal­lenge for de­sign­ers.

Modern de­vices are of­ten bonded to­gether to make them par­tic­u­larly thin and wa­ter­proof.

Removable bat­ter­ies make this de­sign more dif­fi­cult, but not im­pos­si­ble.

Manufacturers are al­ready work­ing on so­lu­tions, such as:

new seals in­stead of ad­he­sive,

more ro­bust cas­ings with screw mech­a­nisms,

mod­u­lar in­ter­nal struc­tures.

Many users fear that cell phones will break im­me­di­ately if they get wet in the rain or fall into wa­ter. That’s not true: It is en­tirely fea­si­ble to make smart­phones wa­ter­proof de­spite hav­ing a re­mov­able bat­tery. The prin­ci­ple is sim­i­lar to that of rugged out­door phones. A rub­ber gas­ket run­ning around the bat­tery cover, which is pressed into place by screws or a se­cure clip, en­sures that the in­te­rior of the hous­ing is sealed.

It is there­fore quite pos­si­ble that smart­phones will be­come slightly thicker, but sig­nif­i­cant in­creases are un­likely, as de­sign re­mains a key sell­ing point.

Are there any ex­cep­tions to the re­place­ment re­quire­ment?

Yes, but only in spe­cific cases:

Specialized hard­ware: Devices used in highly spe­cial­ized fields (e.g., med­ical di­ag­nos­tics or ex­plo­sion-proof in­dus­trial cell phones) are also ex­empt if a re­place­able bat­tery would com­pro­mise safety.

Extremely long lifespan: To avoid the replacement requirement, a battery would have to be extremely durable. The battery must retain at least 80% of its original capacity after 1,000 charge cycles. That is significantly more than many batteries on the market today can achieve (often around 500-800 cycles).

Simultaneous wa­ter pro­tec­tion: In ad­di­tion to dura­bil­ity, the de­vice must be wa­ter- and dust-tight ac­cord­ing to IP67.

Another innovation: the "battery passport"

In ad­di­tion, the EU is in­tro­duc­ing a dig­i­tal bat­tery pass­port.

Users and recycling facilities can access important data via a printed QR code. It stores information about the battery's carbon footprint, the proportion of recycled materials, its chemical composition, and its "state of health". This represents a huge step forward, particularly for the second-hand market and professional recyclers.

Conclusion

The new EU regulation marks the end of the "disposable" era for smartphones. Starting in 2027, users will benefit from longer device lifespans, easier repairs, and lower costs.

Even though manufacturers will have to adapt their designs to preserve water resistance and aesthetics, the benefits for the environment and consumers (including less electronic waste and greater transparency) outweigh these changes.

Contact us for com­pre­hen­sive ad­vice on your com­pli­ance is­sues re­lat­ing to elec­tri­cal and elec­tronic equip­ment, pack­ag­ing, bat­ter­ies, and PV pan­els.

www.ecopv-eu.com/en/contact/ | E-Mail: info@ecopv-eu.com

Supported over 20,000 cus­tomers with EPR com­pli­ance

Rated 5.0 on Google

Contact

We look for­ward to your mes­sage!

info@ecopv-eu.com

+49 6196 5835357

Frankfurter Str. 70-72, 65760 Eschborn

Incident with Issues and Webhooks

www.githubstatus.com

Agentic Coding is a Trap | Lars Faye

larsfaye.com

"AI does the coding, and the human in the loop is the orchestrator"

This is the sentiment being hyped up around the industry currently: traditional coding is all but dead, and Spec Driven Development (SDD) is the future. You generate a plan and disconnect from writing any code. The agents know better and handle all the implementation. You are there as the expert, to provide "good taste", review the outputs, and constantly steer the agent(s) to execute the plan that you meticulously put together.

The work­flow takes many shapes at this point, but in gen­eral, it is a process where some­one de­fines the pro­jec­t’s re­quire­ments (simultaneously at a mi­cro and macro level), gen­er­ates a plan, and then pulls the slot ma­chine lever over and over, it­er­at­ing and re­it­er­at­ing with of­ten mul­ti­ple agent in­stances un­til it’s done. All the while, putting a grow­ing dis­tance be­tween the orchestrator” and the code that is be­ing gen­er­ated and com­mit­ted.

Coding Agents are helpful, and powerful, but there are already some quantifiable trade-offs that need to be discussed:

An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism.

Atrophying skills for a wide swath of the pop­u­la­tion.

Vendor lock-in for in­di­vid­u­als and en­tire teams (Claude Code out­ages have al­ready had en­tire teams at a stand-still).

Fluctuating and in­creas­ing costs to ac­cess the tools. An em­ploy­ee’s cost is fixed; to­kens are a con­stantly mov­ing tar­get.

Being suc­cess­ful with this ap­proach to cod­ing agents hinges on a rather cru­cial el­e­ment: only a skilled de­vel­oper who’s think­ing crit­i­cally, and com­fort­able op­er­at­ing at the ar­chi­tec­tural level, can spot is­sues in the thou­sands of lines of gen­er­ated code, be­fore they be­come a prob­lem.

Yet, in an ironic twist of fate, it’s the in­di­vid­u­al’s crit­i­cal think­ing skills and cog­ni­tive clar­ity that AI tool­ing has now been proven to im­pact neg­a­tively.

Not Just Another "Abstraction"

A common refrain we hear in the community is that programmers are just "moving up the stack" and into a different type of abstraction. Whether or not these tools are really an abstraction layer in the first place is not a settled matter; a higher level of ambiguity is not a higher level of abstraction.

If we put that to the side though, it is true that programmers tend to be wary of new languages and new ways of programming. When FORTRAN was released, programmers were skeptical of it, too. They had similar claims: it was likely to introduce more bugs and instability, and writing assembly directly was more efficient. Later, there would be discourse around the integration of compilers introducing too much "magic" into the process. These were normative arguments around a fear of what might be lost if these new technologies were embraced.

The difference with what is happening today is that those previous fears were speculative and theoretical. In just the few short years that AI tooling has existed, we are already seeing significant impacts. And it isn't just junior developers who are affected, but even those with a decade (or more) of experience:

Junior de­vel­op­ers are faced with an even steeper climb, as we trun­cate their abil­ity to work with code and re­place it with re­view­ing gen­er­ated code. Reviewing code is im­por­tant, but it’s only 50% of the learn­ing process, at best. Without the fric­tion and chal­lenges that come with work­ing with code di­rectly, their abil­ity to learn is se­ri­ously di­min­ished.

Studying this phe­nom­e­non takes time, so anec­do­tal ev­i­dence is im­por­tant to gather to get a real-time view of the sit­u­a­tion. But it has also been stud­ied, and there are nu­mer­ous re­ports re­in­forc­ing that this is a real phe­nom­e­non.

It ac­tu­ally is dif­fer­ent this time.

When a C++ de­vel­oper moved to Java or Python, they did­n’t com­plain of brain fog. When a sysad­min moved to AWS, they did­n’t feel like they were los­ing their abil­ity to un­der­stand net­work­ing.

A Senior Engineer losing their coding edge and becoming "rusty" over time as they move into managerial roles and practice coding less is not a new phenomenon. This was the natural progression of expertise: an engineer who had decades of coding, friction, and experience logged would have the time and experience to solidify those skills and wisdom. And they could apply that wisdom when their job became less about syntax, and more about higher-level architectural decisions. Those individuals are not only exceedingly rare, but you won't get the next wave of seniors if we're all abdicating the friction of writing, problem-solving, and debugging.

What is hap­pen­ing right now is a trend where de­vel­op­ers, who’ve never had that longevity or the 30+ years of fric­tion that led to that deep un­der­stand­ing, are be­ing moved into higher-level work­flows re­quir­ing the same skills to man­age the AI agents that the se­nior en­gi­neer took decades to ob­tain.

However, Senior Engineers aren't immune, either. Simon Willison, a developer with nearly 30 years of experience, has reported "not having a firm mental model of what the applications can do and how they work, which means each additional feature becomes harder to reason about".

The "Skilled" Orchestrator Problem

Buried in a re­cent study by Anthropic was a sur­pris­ingly hon­est mo­ment when speak­ing about the risks of en­gag­ing with cod­ing agents on a reg­u­lar ba­sis:

One reason that the atrophy of coding skills is concerning is the "paradox of supervision" … effectively using Claude requires supervision, and supervising Claude requires the very coding skills that may atrophy from AI overuse.

Sandor Nyako, Director of Software Engineering at LinkedIn who oversees 50 engineers, has noticed it proliferating throughout the organization and requested his team not to use them for "tasks that require critical thinking or problem-solving".

"To grow skills, people need to go through hardship. They need to develop the muscle to think through problems," he said. "How would someone question if AI is accurate if they don't have critical thinking?"

There is also the question of what constitutes "overuse". We already have evidence, both data-driven and anecdotal, that these skills can atrophy and dissipate rather quickly (within months in some cases).

This is the con­tra­dic­tion that has many AI boost­ers talk­ing out of both sides of their mouths: The use of cod­ing agents is ac­tively di­min­ish­ing the very skills needed to ef­fec­tively man­age the cod­ing agents.

LLMs ac­cel­er­ate the wrong parts.

Contrary to the cur­rent nar­ra­tive that is be­ing es­poused, we did­n’t nec­es­sar­ily need to write code faster. Especially code we did­n’t fully un­der­stand, and par­tic­u­larly in huge swaths that we could­n’t re­view in rea­son­able time frames.

Before AI, a (good) de­vel­op­er’s pri­or­ity list might look like:

Understanding of the code and its re­la­tion to the code­base

If the code is aligned with the doc­u­mented and ef­fi­cient stan­dards

As few lines of code as needed to ac­com­plish the goal (while main­tain­ing read­abil­ity)

Turnaround time

Agentic cod­ing, and LLMs in gen­eral, com­pletely in­vert this list.

Their ca­pa­bil­i­ties and us­age tend to fo­cus on speed by in­creas­ing the amount of code that can be gen­er­ated in a spec­i­fied time frame. Speed is a nat­ural byprod­uct of high ap­ti­tude. When it’s forced, it al­ways leads to lower ac­cu­racy. The in­te­gra­tion of these tools does­n’t tend to fo­cus much on deeper un­der­stand­ing or con­cise­ness.

Can they be used that way? Yes, with de­ter­mi­na­tion, they cer­tainly can be.

Are they? No, not really; forced mandates and hype around token usage across organizations demonstrate as much.

Coding === Planning

There is a di­vide be­tween de­vel­op­ers that is­n’t high­lighted as much: Some of us plan, and think, bet­ter with code. Thinking and work­ing in code is­n’t just mean­ing­less drudgery; it forces you to think about things on a tech­ni­cal level that in­volves every­thing from se­cu­rity to per­for­mance to user ex­pe­ri­ence to main­tain­abil­ity.

In a re­cent in­ter­view dis­cussing Spec Driven Development”, Dax, the cre­ator of OpenCode (an open-source cod­ing agent, no less) was quoted say­ing:

"When working on something new or something challenging, me typing out code is the process by which I figure out what we should even be doing.

I have a re­ally tough time just sit­ting there, writ­ing out a gi­ant spec on ex­actly how the fea­ture should work. I like writ­ing out types. I like writ­ing out how some of the func­tions might play to­gether. I like play­ing with folder struc­ture to see what the dif­fer­ent con­cepts should be. And this is all stuff that I think most peo­ple—most pro­gram­mers—have al­ways done. I don’t re­ally see a good rea­son why I would stop that per­son­ally, be­cause it’s how I fig­ure out what to do.”

What you say is often not what you mean, and LLMs fill in ambiguity with assumptions (or hallucinations), which leads to more review, more agent revisions, more tokens burned, and more disconnection from what is being created. Conversely, you can marvel at the most beautiful, unambiguous, perfectly structured prompt you’ve ever written, and the LLM can still output a hallucinated method because it is fundamentally a next-token-prediction engine, not a compiler. You cannot replace a deterministic system with a probabilistic one and expect zero ambiguity.

Even the most AI-enthusiastic se­nior de­vel­op­ers are start­ing to see this dis­con­nec­tion as a loom­ing and grow­ing is­sue.

Vendor Lock-In

When I was brows­ing LinkedIn dur­ing the Claude out­age that oc­curred a bit ago, I no­ticed nu­mer­ous posts high­light­ing that cer­tain de­vel­op­ers and en­gi­neer­ing teams were at a stand­still. Their work­flows, their own cod­ing abil­i­ties, had al­ready reached a point where they were largely de­pen­dent on these ven­dors. What used to be a skill that they could ex­e­cute with just a key­board and text ed­i­tor sud­denly re­quired a sub­scrip­tion to an AI model provider.

You can’t pre­dict your to­ken cost.

Model providers are heavily subsidized, and the models themselves are built on shifting sands. Every new model release follows the same pattern of high benchmarks, followed by hype, followed by the reality of usage and everyone complaining of them being “nerfed” and burning through 2x-3x as many tokens to get the same job done.

You know how much your employees cost; you have no idea how much your token costs will be day to day, month to month, year to year. If your entire team is using agentic coding as the default, your expense account will need to remain highly nimble. As Primeagen said recently: when you use these fully agentic workflows, the model providers essentially “own you”.

It’s not unreasonable to play this pattern forward, where we could be creating an industry where you need to pay for token consumption to accomplish something that used to be the product of your own critical thinking and problem-solving abilities. This would resemble a type of “vendor lock-in”, but for an entire industry skillset (and I’m sure the model providers are gleefully rubbing their hands in anticipation of that). The financial, and intellectual, rug-pull could come at any moment, and local LLMs are nowhere near ready to scale to absorb that level of usage.

This is­n’t the­o­ret­i­cal con­jec­ture; it’s be­ing re­ported on right now. Even the model providers them­selves are bring­ing it to light. Yet an­other Anthropic study showed a pre­cip­i­tous 47% drop-off in de­bug­ging skills:

“Incorporating AI aggressively into the workplace—especially in software engineering—inevitably comes with trade-offs…developers may lean on AI to deliver quick results at the expense of building critical skills—most notably, the ability to debug when things go wrong.”

There’s a way to avoid all of this, of course. LLMs are a pow­er­house tech­no­log­i­cal ad­vance­ment, and when used re­spon­si­bly, they can be a stel­lar tool for learn­ing and up­skilling. They en­able me to dive deeper and wider into con­cepts and tech­niques, ex­pand­ing un­der­stand­ing and en­abling ex­plo­ration of new ideas that used to be more ar­du­ous and time con­sum­ing to ex­per­i­ment with. This is where I think they will of­fer the in­dus­try the most long-term value.

My Approach: Demote AI’s role

I’m certainly not advocating for typing code out manually. Programmers have always been looking for ways to create code without having to write code. This is why we even have Emmet, autocomplete, and snippets in the first place. Even COBOL was designed to encapsulate more instructions with less writing by using “English-like” words such as MOVE and WRITE. jQuery’s motto was “write less, do more”. LLMs are another addition to this array of code generation tools.

What I am ad­vo­cat­ing for, though, is lever­ag­ing LLMs and cod­ing agents as sec­ondary processes. A way that does­n’t sac­ri­fice the in­di­vid­u­al’s skills at the al­tar of pro­duc­tiv­ity. You can flip the script and lean on them to brain­storm the plan­ning parts of the process while stay­ing ac­tively en­gaged through­out im­ple­men­ta­tion, del­e­gat­ing to them on an as-needed ba­sis. You can lever­age the pro­duc­tiv­ity gains, and mit­i­gate the com­pre­hen­sion debt.

My daily work­flow:

I use LLMs to help generate specs and plans, while I facilitate the implementation. This is an inversion of the “orchestration” workflow; I am still manually coding anywhere from 20% to 100%, depending on the task.

I very often write pseudo-code when I engage with the models, closing the distance between the request and the generated code.

I use the mod­els as del­e­ga­tion util­i­ties for ad-hoc code gen­er­a­tion and in­ter­ac­tive doc­u­men­ta­tion, as well as re­search tools so that I can con­stantly ask ques­tions, it­er­ate, refac­tor, and gain clar­ity around my ap­proaches.

I never gen­er­ate more than I can re­view in a sit­ting. If it’s too much to re­view, I slow down and split the task up, man­u­ally refac­tor­ing where needed to en­sure a com­pre­hen­sive un­der­stand­ing of the end re­sult.

I never ask an LLM or agent to im­ple­ment some­thing that I’ve never done be­fore or could­n’t do on my own, ex­cept per­haps purely for ed­u­ca­tional or tu­to­r­ial pur­poses (and of­ten dis­carded af­ter­wards).

If I had to TL;DR this list, it would be: Use them like the Ship’s Computer, not Data. (Any Star Trek fans should get the reference.)

I’m not go­ing faster, but I’m do­ing bet­ter qual­ity work.

The pro­duc­tiv­ity gains from these mod­els are real, and so is the fric­tion and un­der­stand­ing that come from en­gag­ing with the work on a tan­gi­ble and fre­quent ba­sis.

Despite countless failed attempts to democratize coding without understanding coding, we’re faced with the reality that you cannot understand code without engaging with it. And it’s become clear that if you don’t keep engaging with and writing it, you can lose touch with that understanding, which will in turn make you a less capable orchestrator in the first place, rendering this phase of AI coding a strange and needlessly stressful interlude.

Perhaps I am wor­ry­ing too much, but his­tory con­tains lessons.

This all feels like another large experiment we’re running on ourselves. We went through a similar period when social media was introduced without understanding the long-term implications, and we’re now faced with widespread attention deficits (among many other issues).

This time, we’re gam­bling with some­thing much riskier.

“People who go all in on AI agents now are guaranteeing their obsolescence. If you outsource all your thinking to computers, you stop upskilling, learning, and becoming more competent.” — Jeremy Howard, creator of fast.ai

Using “underdrawings” for accurate text and numbers

samcollins.blog

I dis­cov­ered a tech­nique for gen­er­at­ing re­li­able text and num­bers in AI gen­er­ated im­ages.

For example, the following image is considered impossible with state-of-the-art image models. But I made this with Gemini 3.0 Pro (plus one extra step I’m going to explain below).

The Underdrawing Method

I’m totally naming it like it’s a thing, but it does seem to be a thing. Here’s a simple A/B test showing the results without and with this method.

Make an im­age of a game board with 50 step­ping stones arranged in a spi­ral, wind­ing counter-clock­wise in­ward from start at the out­side (1) to fin­ish at the cen­tre (50). Each stone is clearly num­bered con­sec­u­tively from 1 to 50. Style: clay­ma­tion dio­rama, stu­dio-lit, candy-bright, soft bokeh back­ground.

❌ Gemini 3 Pro (without un­der­draw­ing)

As ex­pected. Impressive at first glance but falls apart once you start read­ing.

❌ ChatGPT Images 2 (without un­der­draw­ing)

I was so impressed with the ChatGPT-Images-2 release that I expected it to get this. Very surprising to see it fail similarly to Gemini.

✅ Gemini 3.0 Pro (with the un­der­draw­ing method)

Bingo. Correct numbers, correct count and sequencing of stones, correct spiral shape.

So how does it work?

I came up with this pat­tern while try­ing to fig­ure out how to gen­er­ate an im­age of a 100-step ad­ven­ture board for my kid.

Use de­ter­min­is­tic and gen­er­a­tive ma­chines for what they’re good at

SVG/HTML makes dry vi­su­als but with ex­cel­lent math and pre­ci­sion

Image Gen mod­els make stun­ning vi­su­als but with un­re­li­able math and text

Give it an outline. Ask it to “paint on top”.

Layer 1: The “underdrawing” (deterministic): Lay out the numbers and text in the correct positions and orientations in whatever language/format you prefer (SVG, Python, Mermaid); you just need to export an image of it with the pixels of the numbers/text.

Layer 2: The “painting” (generative): Using a multi-modal image model like Gemini 3.0 Pro (you need image+text input → image output), pass your underdrawing image along with your text prompt.

Example

Step 1 of 2: gen­er­ate the num­bers/​text out­line with SVG

Make an SVG of 50 step­ping stones arranged in a spi­ral, wind­ing counter-clock­wise in­ward from start at the out­side (1) to fin­ish at the cen­tre (50), each stone num­bered con­sec­u­tively from 1 to 50. Each stone is a dif­fer­ent shape: cir­cle, square, tri­an­gle, hexa­gon.
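The underdrawing in step 1 doesn’t have to come from a prompt at all; since this layer only needs correct math, you can generate it deterministically. Here’s a minimal Python sketch that lays 50 numbered stones on a counter-clockwise inward spiral (the canvas size, number of turns, and stone radius are my own assumed parameters, not from the original post):

```python
import math

def spiral_underdrawing(n=50, size=800, turns=4):
    """Return SVG markup: n numbered stones on a counter-clockwise
    spiral winding inward from 1 (outside) to n (centre)."""
    cx = cy = size / 2
    r_max, r_min = size / 2 - 40, 30
    parts = [
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">',
        f'<rect width="{size}" height="{size}" fill="white"/>',
    ]
    for i in range(n):
        t = i / (n - 1)                   # 0 at the outside, 1 at the centre
        angle = -t * turns * 2 * math.pi  # negative sign: counter-clockwise on screen
        r = r_max - t * (r_max - r_min)   # radius shrinks as the path winds inward
        x = cx + r * math.cos(angle)
        y = cy + r * math.sin(angle)
        parts.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="22" fill="none" stroke="black"/>')
        parts.append(
            f'<text x="{x:.1f}" y="{y + 6:.1f}" font-size="18" '
            f'text-anchor="middle">{i + 1}</text>'
        )
    parts.append("</svg>")
    return "\n".join(parts)

with open("underdrawing.svg", "w") as f:
    f.write(spiral_underdrawing())
```

Rasterize the SVG to a PNG (a browser screenshot works), and that image becomes the underdrawing you pass to the model in step 2.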

Step 2 of 2: Use the un­der­draw­ing to do im­age-to-im­age gen­er­a­tion

Transform this im­age into a pho­tographed clay­ma­tion dio­rama of as­sorted ar­ti­san choco­lates and can­dies, arranged in a spi­ral path wind­ing counter-clock­wise in­ward from start (1) at the out­side to fin­ish (50) at the cen­tre, viewed from a low-an­gle tilted per­spec­tive.

That’s it

It isn’t hard. By now, Claude Code or Codex can do every step of that for you.

Note: it’s good, but it won’t be per­fect every time. Thank you for the re­al­ity check, 71.

Days Without GitHub Incident

www.dayswithoutgithubincident.com
