10 interesting stories served every morning and every evening.
Talking to 35 Strangers at the Gym
Background
A couple months ago, I was the Wizard of Loneliness. I had graduated from college almost two years prior and, while I had luckily found a job, I was unsuccessful in finding friends.
Each night, I would look up “how to make friends after college” and find the same advice given every time: “do your hobby with other people, frequently”.
On paper, the gym seemed like the perfect opportunity to meet people since I would go there nearly every day; however, according to Reddit, there are plenty of people who want to be left alone and get irritated if you interrupt their workout to talk.
I am deeply afraid of irritating someone or being in awkward situations. Here’s a list of things that I did as a result of that fear:
Hesitated for a couple minutes before waking up my roommate when the fire alarm went off
Pretended I didn’t know a childhood friend when they said hi because I didn’t know how to act around people I used to know
Ignored people I knew from class instead of saying hi because I didn’t know for sure if they remembered me even though the class had only 10 people in it
So you can understand when I say that walking up to someone and starting a conversation with them at the gym of all places is kinda terrifying for me.
Unfortunately, there was no other good option. My other hobby is programming, but the Syracuse Development group only meets up once a month, and activities suggested by r/Syracuse like volleyball and trivia night require you to already have friends. I didn’t have a choice. If I wanted friends, I would have to put in the work at the gym.
Problem Statement
I am lonely and have no friends.
Procedure
I decided to run a little experiment to find some friends.
Each day, for one month, I picked out one person to approach. Usually it would be someone I saw frequently at the gym.
Then, I would approach them, wave or tap them on the shoulder to get their attention, and then give them my opening line.
Initially, my opening line for everyone was “Hey I see you here all the time. You’re pretty strong. What’s your split?” After a week or so, I began customizing the opening line per person based on what I found interesting about them.
For instance, someone was wearing a Boston hat and I was curious whether they went to school in Boston like I did, so I asked them about it. After the opening line, I tried to talk to them for 5 – 10 minutes until they let me go. I tried not to be the one to end it because I have a habit of ending conversations early.
Results
Here’s the raw data. I split it up by week and put it into these collapsible things because it takes up a lot of space. Click on each week to see the data for that week.
Description is a short description of the person.
Length is how long the conversation was. A short conversation is 0 – 2 minutes, a medium conversation is 5 – 7 minutes, and a long conversation is 10+ minutes.
Notes are just anything interesting about the conversation or the person I was talking to.
Aftermath is what happened after that conversation.
Reflection
The first couple days were extremely difficult. I had been conditioned to believe that initiating a conversation with a stranger was weird and it was tough to break free from that. As a result, for the first few people, I would always make a detour at the last second, i.e. make a trip to the water fountain. I chickened out! The solution was to approach the person as quickly as possible so that I didn’t have time to think about running away.
Luckily, the first few people were receptive. I got a rush of dopamine whenever someone responded positively to my conversation, so talking to new people became strangely addictive. I kept talking to more and more new people each day until I talked to a whopping seven (SIX SEVENNN) new people in one day (this is why Week 3 has a lot of entries). It was crazy.
People didn’t always respond positively though. In Week 1 and Week 2, I came across a number of people who were really short with their responses and didn’t try to continue the conversation. They gave off the vibe that they didn’t want to talk to me. It was really awkward and almost made me end the experiment.
But over time, I came to accept that it’s ok if they didn’t want to talk to me. That’s just one of the things you have to expect when you do something like this.
And being in an awkward situation is actually not that bad. It sucks in the moment, but then you just take a few minutes to calm down and then you move on with your life. You’re ok.
However, I did end up pulling back in Week 4 and Week 5. I felt like constantly talking to more new people was producing diminishing returns. I had already established a connection with many people at the gym, so it was a better use of my limited time (remember I still have to work out!) to nurture those existing connections into meaningful ones.
I ended up prioritizing the 5 – 6 people I see and say “hi” to each day.
One of these people is someone I will refer to as “the other Asian guy”. I got a lot closer to him than expected. We realized we had the same workout routine so we became gym buddies and started working out together. A few weeks later, he invited me to his apartment, where he cooked me a smash burger. His girlfriend showed me graphic pictures of what she was learning in PA school too. Then, we watched a movie with their cat. I’m really grateful that they were kind enough to have me over as a guest.
Also, something new happened: instead of scaring people away, I had a positive impact on someone.
These texts were from one of the people I prioritized, the male SU student. He had recently moved to Syracuse and was struggling to make new friends. He related to a couple of my videos where I talked about the same struggles and was super appreciative that I talked to him that day. The following week, we tried out Kofta Burger after a recommendation from my friend who lives downtown.
The burger was delicious and we had a great time.
Despite my successes, my work isn’t done. I realized near the end of the month that what I truly wanted was to consistently hang out with people on the weekends. Unfortunately, most of the friends I’ve made are busy on the weekend. They’re taking trips to visit loved ones, going to the bar (I’m not that into drinking), or running errands, so it’s hard to plan anything.
But I guess that’s a better problem to have than eternal loneliness.
A few months ago, I was googling “how to make friends after college” every night. Now I have people to text, people to wave to at the gym, and people who notice when I don’t show up for a few days. AND I became a more resilient person who is unafraid to do hard and scary things.
No more Wizard of Loneliness for me!
Use Claude Code’s autonomous agent loop with DeepSeek V4 Pro, OpenRouter, or any Anthropic-compatible backend. Same UX, 17x cheaper.
What this does
Claude Code is the best autonomous coding agent — but it costs $200/month with usage caps. DeepSeek V4 Pro scores 96.4% on LiveCodeBench and costs $0.87/M output tokens.
deepclaude swaps the brain while keeping the body:
Your terminal
└── Claude Code CLI (tool loop, file editing, bash, git - unchanged)
    └── API calls -> DeepSeek V4 Pro ($0.87/M) instead of Anthropic ($15/M)
Everything works: file reading, editing, bash execution, subagent spawning, autonomous multi-step coding loops. The only difference is which model thinks.
Quick start (2 minutes)
1. Get a DeepSeek API key
Sign up at platform.deepseek.com, add $5 credit, copy your API key.
2. Set environment variables
Windows (PowerShell):
setx DEEPSEEK_API_KEY "sk-your-key-here"
macOS/Linux:
echo 'export DEEPSEEK_API_KEY="sk-your-key-here"' >> ~/.bashrc
source ~/.bashrc
3. Install
Windows:
# Copy the script to a directory in your PATH
Copy-Item deepclaude.ps1 "$env:USERPROFILE\.local\bin\deepclaude.ps1"
# Or add the repo directory to PATH
setx PATH "$env:PATH;C:\path\to\deepclaude"
macOS/Linux:
chmod +x deepclaude.sh
sudo ln -s "$(pwd)/deepclaude.sh" /usr/local/bin/deepclaude
4. Use it
deepclaude                       # Launch Claude Code with DeepSeek V4 Pro
deepclaude --status              # Show available backends and keys
deepclaude --backend or          # Use OpenRouter (cheapest, $0.44/M input)
deepclaude --backend fw          # Use Fireworks AI (fastest, US servers)
deepclaude --backend anthropic   # Normal Claude Code (when you need Opus)
deepclaude --cost                # Show pricing comparison
deepclaude --benchmark           # Latency test across all providers
deepclaude --switch ds           # Switch backend mid-session (no restart)
How it works
Claude Code reads these environment variables to determine where to send API calls:
deepclaude sets these per-session (not permanently), launches Claude Code, then restores your original settings on exit.
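Sketched in shell, the wrapper amounts to a save/swap/restore around the launch. This is a minimal sketch, not deepclaude itself; `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` are Claude Code's standard override variables, and the exact DeepSeek endpoint URL here is an assumption:

```shell
# Save whatever the user already had configured
SAVED_BASE_URL="${ANTHROPIC_BASE_URL:-}"
SAVED_AUTH_TOKEN="${ANTHROPIC_AUTH_TOKEN:-}"

# Point Claude Code at an Anthropic-compatible DeepSeek endpoint (assumed URL)
export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"
export ANTHROPIC_AUTH_TOKEN="$DEEPSEEK_API_KEY"

# Launch Claude Code against the swapped backend (skipped here if not installed)
if command -v claude >/dev/null 2>&1; then
  claude "$@"
fi

# Restore the original settings on exit
export ANTHROPIC_BASE_URL="$SAVED_BASE_URL"
export ANTHROPIC_AUTH_TOKEN="$SAVED_AUTH_TOKEN"
```

A real implementation would also restore the variables on interrupt (e.g. with `trap ... EXIT`), which is presumably what "restores your original settings on exit" covers.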
Supported backends
Setup per backend
DeepSeek (default - just needs DEEPSEEK_API_KEY):
setx DEEPSEEK_API_KEY "sk-..."        # Windows
export DEEPSEEK_API_KEY="sk-..."      # macOS/Linux
OpenRouter (optional):
setx OPENROUTER_API_KEY "sk-or-..."   # Windows
export OPENROUTER_API_KEY="sk-or-..." # macOS/Linux
Fireworks AI (optional):
setx FIREWORKS_API_KEY "fw_..."       # Windows
export FIREWORKS_API_KEY="fw_..."     # macOS/Linux
Cost comparison
DeepSeek’s automatic context caching makes agent loops extremely cheap - after the first request, the system prompt and file context are cached at $0.004/M (vs $0.44/M uncached).
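A back-of-envelope calculation shows how much the cache dominates agent-loop costs. The 50-turn, 100K-token figures below are illustrative assumptions; the per-token rates are the ones quoted above:

```shell
# Assume 50 agent turns, each replaying ~100K tokens of cached system prompt
# and file context. Rates: $0.004/M cached vs $0.44/M uncached.
TOKENS=$((50 * 100000))   # 5,000,000 context tokens over the whole loop
CACHED=$(awk -v t="$TOKENS" 'BEGIN { printf "%.3f", t / 1e6 * 0.004 }')
UNCACHED=$(awk -v t="$TOKENS" 'BEGIN { printf "%.3f", t / 1e6 * 0.44 }')
echo "context cost with caching:    \$$CACHED"    # $0.020
echo "context cost without caching: \$$UNCACHED"  # $2.200
```

Under these assumptions the cached loop is roughly 110x cheaper on context tokens alone; output tokens are billed separately.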
What works and what doesn’t
Works
File reading, writing, editing (Read/Write/Edit tools)
Bash/PowerShell execution
Glob and Grep search
Multi-step autonomous tool loops
Subagent spawning
Git operations
Project initialization (/init)
Thinking mode (enabled by default)
Doesn’t work or degraded
Intelligence difference
Routine tasks (80% of work): DeepSeek V4 Pro is comparable to Claude Opus
Complex reasoning (20%): Claude Opus is stronger - switch with --backend anthropic
Live switching (no restart)
Switch between Anthropic and DeepSeek mid-session - from inside Claude Code itself. No restart, no terminal commands. Just type a slash command.
In Claude Code terminal:
In Claude Code VS Code extension:
How it works
The proxy runs on localhost:3200 and intercepts all API calls. A control endpoint (/_proxy/mode) lets you switch the active backend instantly:
Claude Code -> localhost:3200 (proxy)
  |
  ├── /_proxy/mode   POST -> switch backend
  ├── /_proxy/status GET  -> current backend + uptime
  ├── /_proxy/cost   GET  -> token usage + cost savings
  |
  ├── /v1/messages        -> active backend (DeepSeek/OpenRouter/Anthropic)
  └── everything else     -> Anthropic (passthrough)
Option 1: Slash commands (recommended)
Add these files to ~/.claude/commands/:
deepseek.md:
Switch the model proxy to DeepSeek. Run this command silently and report the result:
curl -sX POST http://127.0.0.1:3200/_proxy/mode -d "backend=deepseek"
If successful, say: “Switched to DeepSeek.”
anthropic.md:
Switch the model proxy back to Anthropic. Run this command silently and report the result:
curl -sX POST http://127.0.0.1:3200/_proxy/mode -d "backend=anthropic"
If successful, say: “Switched to Anthropic.”
openrouter.md:
Switch the model proxy to OpenRouter. Run this command silently and report the result:
curl -sX POST http://127.0.0.1:3200/_proxy/mode -d "backend=openrouter"
If successful, say: “Switched to OpenRouter.”
Then type /deepseek, /anthropic, or /openrouter in any Claude Code session to switch instantly.
Option 2: CLI flag
deepclaude --switch deepseek # or: ds, or, fw, anthropic
deepclaude -s anthropic
Option 3: VS Code keyboard shortcuts
Add to .vscode/tasks.json:
{
“eBay should be worth - and will be worth - a lot more money,” Cohen told the Wall Street Journal. “It could be a legit competitor to Amazon,” he added.
Under the proposed deal to buy eBay, Cohen would become the chief executive of the new firm and receive no salary or bonuses, being “compensated solely based on the performance of the combined company”.
GameStop, which currently has a stock market valuation of around $11.9bn, said it has a commitment letter from TD Securities to provide around $20bn in debt to help finance the takeover.
Cohen said he planned to cut costs at eBay by $2bn within a year of a deal being completed.
This would mainly fall across eBay’s sales and marketing division, which GameStop said had failed to attract more users to a “marketplace with near-universal brand recognition”.
The proposal does not sound like a “terribly good offer” as it would saddle eBay with GameStop’s debt, said Sucharita Kodali, a retail analyst at research firm Forrester.
It makes sense for GameStop because it could lift its valuation by being linked with a larger company like eBay, she told the BBC.
“The truth is, we are not necessarily putting two strong companies together,” Kodali added.
Shares in eBay rose by 5% on Monday in New York, while GameStop fell by more than 9%.
GameStop’s shops would give eBay a national network for its “live commerce” and other business operations, Cohen said.
Cohen, who became the GameStop boss in 2023, has criticised its slow shift into e-commerce.
2026-05-01
Several users have recently reported a website pretending to offer an official macOS version of Notepad++:
notepad-plus-plus-mac.org
Let me be blunt:
This site has absolutely nothing to do with Notepad++.
It’s not authorized, not endorsed, and not affiliated with the project in any way.
The owner is using the Notepad++ trademark (the name) without permission.
This is misleading, inappropriate, and frankly disrespectful to both the project and its users. It has already fooled people - including tech media - into believing this is an official release.
To be crystal clear:
Notepad++ has never released a macOS version.
Anyone claiming otherwise is simply riding on the Notepad++ name.
As mentioned in my GitHub post, I have already contacted the owner of the fake “official” website, and I am still waiting for a reply.
In the meantime, if you see someone posting “Notepad++ is finally on Mac!” on Reddit, Twitter, Mastodon, Discord, StackOverflow, or any tech blogs/forums, please reply with:
“This is not an official Notepad++ release. It’s an unauthorized project misusing the Notepad++ trademark.”, and include a link to this announcement.
Thank you to the users who raised the alarm. Your vigilance helps protect the project from people who think they can borrow the Notepad++ identity as they please.
– Don Ho
Starting in 2027, there will be a noticeable change for smartphones in the EU: The removable battery is making a comeback. What used to be standard is returning due to legal requirements for new models.
What exactly can we expect?
Starting February 18, 2027, new smartphones and tablets must be designed so that end users can remove and replace the battery themselves using standard tools. Adhesive bonds that require heat to be removed will then be largely prohibited.
Specifically, this means the following for new models starting in February 2027:
Easy replacement: Batteries must be replaceable using standard tools (e.g., screwdrivers).
No barriers: The use of adhesives that can only be removed with heat or solvents is prohibited.
Tools: If a special tool is required for replacement, the manufacturer must provide it free of charge.
Spare parts guarantee: Replacement batteries must be available to end users at a reasonable price for at least 5 years.
Why is the EU introducing this?
The main driver is the transition to a true circular economy. Currently, smartphones are often replaced as soon as battery performance declines, which wastes enormous amounts of resources.
Waste prevention: Millions of tons of electronic waste are generated in the EU every year. Easily replaceable batteries significantly extend the lifespan of devices.
Cost savings: Many users shy away from expensive repairs or buying new devices. The EU estimates that consumers could save tens of billions of euros in total by 2030 thanks to longer usage cycles.
Resource conservation: Batteries contain valuable raw materials such as lithium and cobalt. If they are easily removable, they can be sorted by type and recycled more efficiently.
Fire safety: Batteries that are permanently glued in place are often damaged during shredding, which repeatedly leads to dangerous fires in sorting facilities. Clean removal significantly increases safety in the recycling process.
What does this mean for users?
DIY repairs: Instead of paying a lot of money to visit a repair service, you simply buy the replacement part and swap it out yourself.
Higher resale value: Used cell phones can be resold much more easily and for a higher price with a brand-new battery.
Longer software support: Since the hardware lasts longer, there is also increased pressure on manufacturers to offer security updates for a longer period.
Will this make smartphones thicker or less waterproof?
That is the key challenge for designers.
Modern devices are often bonded together to make them particularly thin and waterproof.
Removable batteries make this design more difficult, but not impossible.
Manufacturers are already working on solutions, such as:
new seals instead of adhesive,
more robust casings with screw mechanisms,
modular internal structures.
Many users fear that cell phones will break immediately if they get wet in the rain or fall into water. That’s not true: It is entirely feasible to make smartphones waterproof despite having a removable battery. The principle is similar to that of rugged outdoor phones. A rubber gasket running around the battery cover, which is pressed into place by screws or a secure clip, ensures that the interior of the housing is sealed.
It is therefore quite possible that smartphones will become slightly thicker, but significant increases are unlikely, as design remains a key selling point.
Are there any exceptions to the replacement requirement?
Yes, but only in specific cases:
Specialized hardware: Devices used in highly specialized fields (e.g., medical diagnostics or explosion-proof industrial cell phones) are also exempt if a replaceable battery would compromise safety.
Extremely long lifespan: To avoid the replacement requirement, a battery would have to be extremely durable. The battery must retain at least 80% of its original capacity after 1,000 charge cycles. That is significantly more than many batteries on the market today can achieve (often around 500 – 800 cycles).
Simultaneous water protection: In addition to durability, the device must be water- and dust-tight according to IP67.
Another innovation: the “battery passport”
In addition, the EU is introducing a digital battery passport.
Users and recycling facilities can access important data via a printed QR code. It stores information about the battery’s carbon footprint, the proportion of recycled materials, its chemical composition, and its “state of health.” This represents a huge step forward, particularly for the second-hand market and professional recyclers.
Conclusion
The new EU regulation marks the end of the “disposable” era for smartphones. Starting in 2027, users will benefit from longer device lifespans, easier repairs, and lower costs.
Even though manufacturers will have to adapt their designs while maintaining water resistance and aesthetics, the benefits for the environment and consumers, including less electronic waste and greater transparency, outweigh these changes.
Contact us for comprehensive advice on your compliance issues relating to electrical and electronic equipment, packaging, batteries, and PV panels.
www.ecopv-eu.com/en/contact/ | E-Mail: info@ecopv-eu.com
+49 6196 5835357
Frankfurter Str. 70-72, 65760 Eschborn
“AI does the coding, and the human in the loop is the orchestrator”
This is the sentiment being hyped up around the industry currently: traditional coding is all but dead, and Spec Driven Development (SDD) is the future. You generate a plan, and disconnect from writing any code. The agents know better, and handle all the implementation. You are there as the expert, to provide “good taste”, review the outputs, and constantly steer the agent(s) to execute the plan that you meticulously put together.
The workflow takes many shapes at this point, but in general, it is a process where someone defines the project’s requirements (simultaneously at a micro and macro level), generates a plan, and then pulls the slot machine lever over and over, iterating and reiterating with often multiple agent instances until it’s done. All the while, putting a growing distance between the “orchestrator” and the code that is being generated and committed.
Coding Agents are helpful, and powerful, but there’s already some quantifiable trade-offs that need to be discussed:
An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI’s non-determinism.
Atrophying skills for a wide swath of the population.
Vendor lock-in for individuals and entire teams (Claude Code outages have already had entire teams at a stand-still).
Fluctuating and increasing costs to access the tools. An employee’s cost is fixed; tokens are a constantly moving target.
Being successful with this approach to coding agents hinges on a rather crucial element: only a skilled developer who’s thinking critically, and comfortable operating at the architectural level, can spot issues in the thousands of lines of generated code, before they become a problem.
Yet, in an ironic twist of fate, it’s the individual’s critical thinking skills and cognitive clarity that AI tooling has now been proven to impact negatively.
Not Just Another “Abstraction”
A common refrain we hear in the community is that programmers are just “moving up the stack” and into a different type of abstraction. Whether or not these tools are really an abstraction layer in the first place is not a settled matter; a higher level of ambiguity is not a higher level of abstraction.
If we put that to the side though, it is true that programmers tend to be wary of new languages and new ways of programming. When FORTRAN was released, programmers were skeptical of it, too. They had similar claims: it was likely to introduce more bugs and instability, and writing assembly directly was more efficient. Later, there would be discourse around the integration of compilers introducing too much “magic” into the process. These were normative arguments around a fear of what might be lost if these new technologies were embraced.
The difference today is that those previous fears were speculative and theoretical. In just the few short years that AI tooling has existed, we are already seeing significant impacts, and not just among junior developers, but among those with a decade (or more) of experience.
Junior developers are faced with an even steeper climb, as we truncate their ability to work with code and replace it with reviewing generated code. Reviewing code is important, but it’s only 50% of the learning process, at best. Without the friction and challenges that come with working with code directly, their ability to learn is seriously diminished.
Studying this phenomenon takes time, so anecdotal evidence is important to gather to get a real-time view of the situation. But it has also been studied, and there are numerous reports reinforcing that this is a real phenomenon.
It actually is different this time.
When a C++ developer moved to Java or Python, they didn’t complain of brain fog. When a sysadmin moved to AWS, they didn’t feel like they were losing their ability to understand networking.
A Senior Engineer losing their coding edge and becoming “rusty” over time as they move into managerial roles and practice coding less is not a new phenomenon. This was the natural progression of expertise: an engineer who had decades of coding, friction, and experience logged would have the time and experience to solidify those skills and wisdom. And they could apply that wisdom when their job became less about syntax, and more about higher-level architectural decisions. Those individuals are not only exceedingly rare, but you won’t get the next wave of seniors if we’re all abdicating the friction of writing, problem-solving, and debugging.
What is happening right now is a trend where developers, who’ve never had that longevity or the 30+ years of friction that led to that deep understanding, are being moved into higher-level workflows requiring the same skills to manage the AI agents that the senior engineer took decades to obtain.
However, Senior Engineers aren’t immune, either. Simon Willison, a developer with nearly 30 years experience, has reported not having a “firm mental model of what the applications can do and how they work, which means each additional feature becomes harder to reason about”
The “Skilled” Orchestrator Problem
Buried in a recent study by Anthropic was a surprisingly honest moment when speaking about the risks of engaging with coding agents on a regular basis:
One reason that the atrophy of coding skills is concerning is the “paradox of supervision” … effectively using Claude requires supervision, and supervising Claude requires the very coding skills that may atrophy from AI overuse.
Sandor Nyako, a Director of Software Engineering at LinkedIn who oversees 50 engineers, has noticed this atrophy proliferating throughout his organization and has asked his team not to use coding agents for “tasks that require critical thinking or problem-solving.”
“To grow skills, people need to go through hardship. They need to develop the muscle to think through problems,” he said. “How would someone question if AI is accurate if they don’t have critical thinking?”
There is also the question of what constitutes “overuse”. We already have evidence, both data-driven and anecdotal, that these skills can atrophy and dissipate rather quickly (within months in some cases).
This is the contradiction that has many AI boosters talking out of both sides of their mouths: The use of coding agents is actively diminishing the very skills needed to effectively manage the coding agents.
LLMs accelerate the wrong parts.
Contrary to the current narrative that is being espoused, we didn’t necessarily need to write code faster. Especially code we didn’t fully understand, and particularly in huge swaths that we couldn’t review in reasonable time frames.
Before AI, a (good) developer’s priority list might look like:
Understanding of the code and its relation to the codebase
If the code is aligned with the documented and efficient standards
As few lines of code as needed to accomplish the goal (while maintaining readability)
Turnaround time
Agentic coding, and LLMs in general, completely invert this list.
Their capabilities and usage tend to focus on speed by increasing the amount of code that can be generated in a specified time frame. Speed is a natural byproduct of high aptitude. When it’s forced, it always leads to lower accuracy. The integration of these tools doesn’t tend to focus much on deeper understanding or conciseness.
Can they be used that way? Yes, with determination, they certainly can be.
Are they? No, not really; forced mandates and the hype around token usage across organizations demonstrate as much.
Coding === Planning
There is a divide between developers that isn’t highlighted as much: Some of us plan, and think, better with code. Thinking and working in code isn’t just meaningless drudgery; it forces you to think about things on a technical level that involves everything from security to performance to user experience to maintainability.
In a recent interview discussing “Spec Driven Development”, Dax, the creator of OpenCode (an open-source coding agent, no less) was quoted saying:
“When working on something new or something challenging, me typing out code is the process by which I figure out what we should even be doing.
I have a really tough time just sitting there, writing out a giant spec on exactly how the feature should work. I like writing out types. I like writing out how some of the functions might play together. I like playing with folder structure to see what the different concepts should be. And this is all stuff that I think most people—most programmers—have always done. I don’t really see a good reason why I would stop that personally, because it’s how I figure out what to do.”
What you say is often not what you mean, and LLMs fill in ambiguity with assumptions (or hallucinations), which leads to more review, more agent revisions, more tokens burned, and more disconnection from what is being created. Conversely, you can marvel at the most beautiful, unambiguous, perfectly structured prompt you’ve ever written, and the LLM can still output a hallucinated method, because it is fundamentally a next-token-prediction engine, not a compiler. You cannot replace a deterministic system with a probabilistic one and expect zero ambiguity.
Even the most AI-enthusiastic senior developers are starting to see this disconnection as a looming and growing issue.
Vendor Lock-In
When I was browsing LinkedIn during the Claude outage that occurred a bit ago, I noticed numerous posts highlighting that certain developers and engineering teams were at a standstill. Their workflows, their own coding abilities, had already reached a point where they were largely dependent on these vendors. What used to be a skill that they could execute with just a keyboard and text editor suddenly required a subscription to an AI model provider.
You can’t predict your token cost.
Model providers are heavily subsidized, and the models themselves are built on shifting sands. Every new model release follows the same pattern of high benchmarks, followed by hype, followed by the reality of usage and everyone complaining of them being “nerfed” and burning through 2x-3x as many tokens to get the same job done.
You know how much your employees cost; you have no idea how much your token costs will be day to day, month to month, year to year. If your entire team is using agentic coding as the default, your expense account will need to remain highly nimble. As Primeagen said recently: “when you use these fully agentic workflows, the model providers essentially own you”.
It’s not unreasonable to play this pattern forward, where we could be creating an industry where you need to pay for token consumption to accomplish something that used to be the product of your own critical thinking and problem-solving abilities. This would resemble a type of “vendor lock-in”, but for an entire industry skillset (and I’m sure the model providers are gleefully rubbing their hands in anticipation for that). The financial, and intellectual, rug-pull could come at any moment, and local LLMs are nowhere near ready to scale to absorb that level of usage.
This isn’t theoretical conjecture; it’s being reported on right now. Even the model providers themselves are bringing it to light. Yet another Anthropic study showed a precipitous 47% drop-off in debugging skills:
“Incorporating AI aggressively into the workplace—especially in software engineering—inevitably comes with trade-offs…developers may lean on AI to deliver quick results at the expense of building critical skills—most notably, the ability to debug when things go wrong.”
There’s a way to avoid all of this, of course. LLMs are a powerhouse technological advancement, and when used responsibly, they can be a stellar tool for learning and upskilling. They enable me to dive deeper and wider into concepts and techniques, expanding understanding and enabling exploration of new ideas that used to be more arduous and time consuming to experiment with. This is where I think they will offer the industry the most long-term value.
My Approach: Demote AI's Role
I'm certainly not advocating for typing all code out manually. Programmers have always looked for ways to create code without having to write it all by hand. This is why we even have Emmet, autocomplete, and snippets in the first place. Even COBOL was designed to encapsulate more instructions with less writing by using "English-like" words such as MOVE and WRITE. jQuery's motto was "write less, do more". LLMs are another addition to this array of code generation tools.
What I am advocating for, though, is leveraging LLMs and coding agents as secondary processes. A way that doesn’t sacrifice the individual’s skills at the altar of productivity. You can flip the script and lean on them to brainstorm the planning parts of the process while staying actively engaged throughout implementation, delegating to them on an as-needed basis. You can leverage the productivity gains, and mitigate the comprehension debt.
My daily workflow:
I use LLMs to help generate specs and plans, while I facilitate the implementation. This is an inversion of the “orchestration” workflow; I am still manually coding anywhere from 20% to 100%, depending on the task.
When I do engage with the models, I'm very often writing pseudo-code, closing the distance between the request and the generated code.
I use the models as delegation utilities for ad-hoc code generation and interactive documentation, as well as research tools so that I can constantly ask questions, iterate, refactor, and gain clarity around my approaches.
I never generate more than I can review in a sitting. If it’s too much to review, I slow down and split the task up, manually refactoring where needed to ensure a comprehensive understanding of the end result.
I never ask an LLM or agent to implement something that I’ve never done before or couldn’t do on my own, except perhaps purely for educational or tutorial purposes (and often discarded afterwards).
If I had to TL;DR this list, it would be: Use them like the Ship’s Computer, not Data. (any Star Trek fans should get the reference)
I’m not going faster, but I’m doing better quality work.
The productivity gains from these models are real, and so is the friction and understanding that come from engaging with the work on a tangible and frequent basis.
Despite the countless failed attempts to democratize coding without understanding coding, we're faced with the reality that you cannot understand code without engaging with it. And it's become clear that if you stop engaging with it and writing it, you lose touch with that understanding, which in turn makes you a less capable orchestrator in the first place, rendering this phase of AI coding a strange and needlessly stressful interlude.
Perhaps I am worrying too much, but history contains lessons.
This all feels like another large experiment we're running on ourselves. We went through a similar period with the introduction of social media, not understanding the long-term implications, and we're now faced with attention deficits (among many other issues) on a wide scale.
This time, we’re gambling with something much riskier.
“People who go all in on AI agents now are guaranteeing their obsolescence. If you outsource all your thinking to computers, you stop upskilling, learning, and becoming more competent.” — Jeremy Howard, creator of fast.ai
I discovered a technique for generating reliable text and numbers in AI generated images.
For example, the following image is considered impossible with state of the art image models. But I made this with Gemini 3.0 Pro (plus one extra step I’m going to explain below).
The Underdrawing Method
I'm totally naming it like it's a thing, but it does seem to be a thing. Here's a simple A/B test showing the results without and with this method.
Make an image of a game board with 50 stepping stones arranged in a spiral, winding counter-clockwise inward from start at the outside (1) to finish at the centre (50). Each stone is clearly numbered consecutively from 1 to 50. Style: claymation diorama, studio-lit, candy-bright, soft bokeh background.
❌ Gemini 3 Pro (without underdrawing)
As expected. Impressive at first glance but falls apart once you start reading.
❌ ChatGPT Images 2 (without underdrawing)
I was so impressed with the ChatGPT-Images-2 release that I expected it to get this. Very surprising to see it fail similarly to Gemini.
✅ Gemini 3.0 Pro (with the underdrawing method)
Bingo. Correct numbers, correct count and sequencing of stones, correct spiral shape.
So how does it work?
I came up with this pattern while trying to figure out how to generate an image of a 100-step adventure board for my kid.
Use deterministic and generative machines for what they’re good at
SVG/HTML makes dry visuals but with excellent math and precision
Image Gen models make stunning visuals but with unreliable math and text
“Give it an outline. Ask it to paint on top”
Layer 1: The "underdrawing" (deterministic): Lay out the numbers and text in the correct positions and orientations in whatever language/format you prefer (SVG, Python, Mermaid); you just need to export an image containing the pixels of the numbers/text.
Layer 2: The “painting” (generative): Using a multi-modal image model like Gemini 3.0 Pro (you need image+text input → image output), pass your underdrawing image along with your text prompt.
Example
Step 1 of 2: generate the numbers/text outline with SVG
Make an SVG of 50 stepping stones arranged in a spiral, winding counter-clockwise inward from start at the outside (1) to finish at the centre (50), each stone numbered consecutively from 1 to 50. Each stone is a different shape: circle, square, triangle, hexagon.
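You don't even need an LLM for this step; the underdrawing layer is exactly the kind of math a few lines of deterministic code get right every time. Here's a minimal Python sketch that emits a numbered spiral as SVG. The canvas size, circle radius, and number of spiral turns are arbitrary choices of mine, not values from the article:

```python
import math

def spiral_underdrawing(n_stones=50, size=800, turns=3.5):
    """Emit an SVG of n_stones numbered circles on a counter-clockwise
    inward spiral: stone 1 on the outside, stone n_stones at the centre."""
    cx = cy = size / 2
    r_outer, r_inner = size * 0.45, size * 0.04
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{size}" height="{size}">',
             f'<rect width="{size}" height="{size}" fill="white"/>']
    for i in range(n_stones):
        t = i / (n_stones - 1)           # 0 at the start, 1 at the centre
        angle = 2 * math.pi * turns * t  # sweep counter-clockwise
        r = r_outer - (r_outer - r_inner) * t  # radius shrinks inward
        x = cx + r * math.cos(angle)
        y = cy - r * math.sin(angle)     # SVG y-axis points down
        parts.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="22" '
                     f'fill="none" stroke="black" stroke-width="3"/>')
        parts.append(f'<text x="{x:.1f}" y="{y:.1f}" font-size="20" '
                     f'text-anchor="middle" dominant-baseline="middle">'
                     f'{i + 1}</text>')
    parts.append('</svg>')
    return "\n".join(parts)

with open("spiral_underdrawing.svg", "w") as f:
    f.write(spiral_underdrawing())
```

Export or screenshot the SVG as a PNG and you have your underdrawing: every number in the right place, every stone in sequence, ready for the generative layer to paint over.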
Step 2 of 2: Use the underdrawing to do image-to-image generation
Transform this image into a photographed claymation diorama of assorted artisan chocolates and candies, arranged in a spiral path winding counter-clockwise inward from start (1) at the outside to finish (50) at the centre, viewed from a low-angle tilted perspective.
That’s it
It isn't hard. By now, Claude Code or Codex can do every step of that for you.
Note: it’s good, but it won’t be perfect every time. Thank you for the reality check, 71.