10 interesting stories served every morning and every evening.




1 461 shares, 15 trendiness

We mourn our craft

I didn't ask for this and neither did you.

I didn't ask for a robot to consume every blog post and piece of code I ever wrote and parrot it back so that some hack could make money off of it.

I didn't ask for the role of a programmer to be reduced to that of a glorified TSA agent, reviewing code to make sure the AI didn't smuggle something dangerous into production.

And yet here we are. The worst fact about these tools is that they work. They can write code better than you or I can, and if you don't believe me, wait six months.

You could abstain out of moral principle. And that's fine, especially if you're at the tail end of your career. And if you're at the beginning of your career, you don't need me to explain any of this to you, because you already use Warp and Cursor and Claude, with ChatGPT as your therapist and pair programmer and maybe even your lover. This post is for the 40-somethings in my audience who don't realize this fact yet.

So as a senior, you could abstain. But then your junior colleagues will eventually code circles around you, because they're wearing bazooka-powered jetpacks and you're still riding around on a fixie bike. Eventually your boss will start asking why you're getting paid twice your zoomer colleagues' salary to produce a tenth of the code.

Ultimately if you have a mortgage and a car payment and a family you love, you're going to make your decision. It's maybe not the decision that your younger, more idealistic self would want you to make, but it does keep your car and your house and your family safe inside it.

Someday years from now we will look back on the era when we were the last generation to code by hand. We'll laugh and explain to our grandkids how silly it was that we typed out JavaScript syntax with our fingers. But secretly we'll miss it.

We'll miss the feeling of holding code in our hands and molding it like clay in the caress of a master sculptor. We'll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM. We'll miss creating something we feel proud of, something true and right and good. We'll miss the satisfaction of the artist's signature at the bottom of the oil painting, the GitHub repo saying "I made this."

I don't celebrate the new world, but I also don't resist it. The sun rises, the sun sets, I orbit helplessly around it, and my protests can't stop it. It doesn't care; it continues its arc across the sky regardless, moving but unmoved.

If you would like to grieve, I invite you to grieve with me. We are the last of our kind, and those who follow us won't understand our sorrow. Our craft, as we have practiced it, will end up like some blacksmith's tool in an archeological dig, a curio for future generations. It cannot be helped, it is the nature of all things to pass to dust, and yet still we can mourn. Now is the time to mourn the passing of our craft.

...

Read the original on nolanlawson.com »

2 277 shares, 39 trendiness

Open Source — DoNotNotify

We're excited to announce that DoNotNotify has been open sourced. The full source code for the app is now publicly available for anyone to view, study, and contribute to.

You can find the source code on GitHub:

...

Read the original on donotnotify.com »

3 275 shares, 21 trendiness

localgpt-app/localgpt

A local device focused AI assistant built in Rust — persistent memory, autonomous tasks, ~27MB binary. Inspired by and compatible with OpenClaw.

* Local device focused — runs entirely on your machine, your memory data stays yours

* Autonomous heartbeat — delegate tasks and let it work in the background

# Full install (includes desktop GUI)

cargo install localgpt

# Headless (no desktop GUI — for servers, Docker, CI)

cargo install localgpt --no-default-features

# Initialize configuration

localgpt config init

# Start interactive chat

localgpt chat

# Ask a single question

localgpt ask "What is the meaning of life?"

# Run as a daemon with heartbeat, HTTP API and web UI

localgpt daemon start

LocalGPT uses plain markdown files as its memory:

Files are indexed with SQLite FTS5 for fast keyword search, and sqlite-vec for semantic search with local embeddings
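As a rough, generic sketch of how FTS5-style keyword search over markdown memory files works (this is not LocalGPT's actual schema or code; the table, columns, and sample rows below are made up, and the sqlite-vec semantic side is not shown):

import sqlite3

# In-memory database for the sketch; a real tool would index files on disk.
conn = sqlite3.connect(":memory:")
# FTS5 virtual table: one row per markdown file, indexed for keyword search.
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(path, body)")
conn.executemany(
    "INSERT INTO notes (path, body) VALUES (?, ?)",
    [
        ("memory/2026-01-01.md", "Discussed heartbeat intervals and daemon mode."),
        ("memory/projects.md", "Workspace layout and config notes."),
    ],
)
# MATCH runs a full-text keyword query; ORDER BY rank sorts by relevance.
for (path,) in conn.execute(
    "SELECT path FROM notes WHERE notes MATCH ? ORDER BY rank", ("heartbeat",)
):
    print(path)  # prints memory/2026-01-01.md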

[agent]

default_model = "claude-cli/opus"

[providers.anthropic]

api_key = "${ANTHROPIC_API_KEY}"

[heartbeat]

enabled = true

interval = "30m"

active_hours = { start = "09:00", end = "22:00" }

[memory]

workspace = "~/.localgpt/workspace"

# Chat

localgpt chat # Interactive chat

localgpt chat --session

When the daemon is running:

Why I Built LocalGPT in 4 Nights — the full story with commit-by-commit breakdown.

...

Read the original on github.com »

4 256 shares, 11 trendiness

StrongDM Software Factory

We built a Software Factory: non-interactive development where specs + scenarios drive agents that write code, run harnesses, and converge without human review.

The narrative form is included below. If you'd prefer to work from first principles, I offer a few constraints & guidelines that, applied iteratively, will accelerate any team toward the same intuitions, convictions, and ultimately a factory of your own. In kōan or mantra form:

* Why am I doing this? (implied: the model should be doing this instead)

* Code must not be written by humans

* Code must not be reviewed by humans

* If you haven't spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement

On July 14th, 2025, Jay Taylor and Navan Chauhan joined me (Justin McCarthy, co-founder, CTO) in founding the StrongDM AI team.

The catalyst was a transition observed in late 2024: with the second revision of Claude 3.5 (October 2024), long-horizon agentic coding workflows began to compound correctness rather than error.

By December of 2024, the model's long-horizon coding performance was unmistakable via Cursor's YOLO mode.

Prior to this model improvement, iterative application of LLMs to coding tasks would accumulate errors of all imaginable varieties (misunderstandings, hallucinations, syntax, version, DRY violations, library incompatibility, etc). The app or product would decay and ultimately "collapse": death by a thousand cuts, etc.

Together with YOLO mode, the updated model from Anthropic provided the first glimmer of what we now refer to internally as non-interactive development or grown software.

In the first hour of the first day of our AI team, we established a charter which set us on a path toward a series of findings (which we refer to as our "unlocks"). In retrospect, the most important line in the charter document was the following:

Initially it was just a hunch. An experiment. How far could we get, without writing any code by hand?

Not very far! At least: not very far, until we added tests. However, the agent, obsessed with the immediate task, soon began to take shortcuts: return true is a great way to pass narrowly written tests, but probably won't generalize to the software you want.

Tests were not enough. How about integration tests? Regression tests? End-to-end tests? Behavior tests?

One recurring theme of the agentic moment: we need new language. For example, the word "test" has proven insufficient and ambiguous. A test, stored in the codebase, can be lazily rewritten to match the code. The code could be rewritten to trivially pass the test.

We repurposed the word scenario to represent an end-to-end "user story", often stored outside the codebase (similar to a "holdout" set in model training), which could be intuitively understood and flexibly validated by an LLM.

Because much of the software we grow itself has an agentic component, we transitioned from boolean definitions of success ("the test suite is green") to a probabilistic and empirical one. We use the term satisfaction to quantify this validation: of all the observed trajectories through all the scenarios, what fraction of them likely satisfy the user?
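A minimal sketch of how a satisfaction number like this could be computed, assuming a hypothetical LLM-as-judge call (judge_satisfied below is a made-up placeholder, not StrongDM's implementation):

from dataclasses import dataclass

@dataclass
class Trajectory:
    scenario: str    # the end-to-end user story being exercised
    transcript: str  # what the agent actually did while attempting it

def judge_satisfied(t: Trajectory) -> bool:
    # Placeholder for an LLM-as-judge call: would this trajectory satisfy the user?
    raise NotImplementedError

def satisfaction(trajectories: list[Trajectory]) -> float:
    # Fraction of all observed trajectories, across all scenarios, judged satisfying.
    if not trajectories:
        return 0.0
    return sum(judge_satisfied(t) for t in trajectories) / len(trajectories)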

In previous regimes, a team might rely on integration tests, regression tests, UI automation to answer "is it working?"

We noticed two limitations of previously reliable techniques:

* Tests are too rigid - we were coding with agents, but we're also building with LLMs and agent loops as design primitives; evaluating success often required LLM-as-judge

* Tests can be reward hacked - we needed validation that was less vulnerable to the model cheating

The Digital Twin Universe is our answer: behavioral clones of the third-party services our software depends on. We built twins of Okta, Jira, Slack, Google Docs, Google Drive, and Google Sheets, replicating their APIs, edge cases, and observable behaviors.

With the DTU, we can validate at volumes and rates far exceeding production limits. We can test failure modes that would be dangerous or impossible against live services. We can run thousands of scenarios per hour without hitting rate limits, triggering abuse detection, or accumulating API costs.
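As a toy illustration of the idea (a made-up ticket service, not one of the actual twins of Okta, Jira, Slack, or the Google apps), an in-memory twin simply replicates an API's observable behavior, including its edge cases, so scenarios can hammer it at arbitrary volume:

class FakeTicketService:
    # Toy in-memory twin of a hypothetical ticketing API.
    def __init__(self) -> None:
        self._tickets: dict[int, dict] = {}
        self._next_id = 1

    def create_ticket(self, title: str) -> dict:
        # Replicate an edge case a real service might enforce.
        if not title.strip():
            return {"error": "title must not be empty", "status": 400}
        ticket = {"id": self._next_id, "title": title, "state": "open"}
        self._tickets[self._next_id] = ticket
        self._next_id += 1
        return {"ticket": ticket, "status": 201}

    def close_ticket(self, ticket_id: int) -> dict:
        if ticket_id not in self._tickets:
            return {"error": "not found", "status": 404}
        self._tickets[ticket_id]["state"] = "closed"
        return {"ticket": self._tickets[ticket_id], "status": 200}

# Scenarios can run against the twin without rate limits, abuse detection, or API costs.
svc = FakeTicketService()
assert svc.create_ticket("")["status"] == 400
opened = svc.create_ticket("demo")["ticket"]
assert svc.close_ticket(opened["id"])["status"] == 200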

Our success with DTU illustrates one of the many ways in which the Agentic Moment has profoundly changed the economics of software. Creating a high fidelity clone of a significant SaaS application was always possible, but never economically feasible. Generations of engineers may have wanted a full in-memory replica of their CRM to test against, but self-censored the proposal to build it. They didn't even bring it to their manager, because they knew the answer would be no.

Those of us building software factories must practice a deliberate naivete: finding and removing the habits, conventions, and constraints of Software 1.0. The DTU is our proof that what was unthinkable six months ago is now routine.

* Principles: what we believe is true about building software with agents

* Products: tools we use daily and believe others will benefit from

Thank you for reading. We wish you the best of luck constructing your own Software Factory.

...

Read the original on factory.strongdm.ai »

5 228 shares, 10 trendiness

Stories From 25 Years of Software Development

Last year, I completed 20 years in professional software development. I wanted to write a post to mark the occasion back then, but couldn't find the time. This post is my attempt to make up for that omission. In fact, I have been involved in software development for a little longer than 20 years. Although I had my first taste of computer programming as a child, it was only when I entered university about 25 years ago that I seriously got into software development. So I'll start my stories from there. These stories are less about software and more about people. Unlike many posts of this kind, this one offers no wisdom or lessons. It only offers a collection of stories. I hope you'll like at least a few of them.

The first story takes place in 2001, shortly after I joined university. One evening, I went to the university computer laboratory to browse the Web. Out of curiosity, I typed susam.com into the address bar and landed on its home page. I remember the text and banner looking much larger back then. Display resolutions were lower, so they covered almost half the screen. I knew very little about the Internet then and I was just trying to make sense of it. I remember wondering what it would take to create my own website, perhaps at susam.com. That's when an older student who had been watching me browse over my shoulder approached and asked if I had created the website. I told him I hadn't and that I had no idea how websites were made. He asked me to move aside, took my seat and clicked View > Source in Internet Explorer. He then explained how websites are made of HTML pages and how those pages are simply text instructions.

Next, he opened Notepad and wrote a simple HTML page that looked something like this:
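(A rough reconstruction; the original snippet is not preserved here and the wording below is made up.)

<HTML>
<BODY>
<FONT FACE="ARIAL" SIZE="5" COLOR="RED">WELCOME TO MY HOME PAGE</FONT>
</BODY>
</HTML>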

Yes, we had a FONT tag back then and it was common practice to write HTML tags in uppercase. He then opened the page in a web browser and showed how it rendered. After that, he demonstrated a few more features such as changing the font face and size, centring the text and altering the page's background colour. Although the tutorial lasted only about ten minutes, it made the World Wide Web feel far less mysterious and much more fascinating.

That person had an ulterior motive though. After the tutorial, he never returned the seat to me. He just continued browsing the Web and waited for me to leave. I was too timid to ask for my seat back. Seats were limited, so I returned to my dorm room both disappointed that I couldn't continue browsing that day and excited about all the websites I might create with this newfound knowledge. I could never register susam.com for myself though. That domain was always used by some business selling Turkish cuisines. Eventually, I managed to get the next best thing: a .net domain of my own. That brief encounter in the university laboratory set me on a lifelong path of creating and maintaining personal websites.

The second story also comes from my university days. One afternoon, I was hanging out with my mates in the computer laboratory. In front of me was an MS-DOS machine powered by an Intel 8086 microprocessor, on which I was writing a lift control program in assembly. In those days, it was considered important to deliberately practise solving made-up problems as a way of honing our programming skills. As I worked on my program, my mind drifted to a small detail about the 8086 microprocessor that we had recently learnt in a lecture. Our professor had explained that, when the 8086 microprocessor is reset, execution begins with CS:IP set to FFFF:0000. So I murmured to anyone who cared to listen, 'I wonder if the system will reboot if I jump to FFFF:0000.' I then opened DEBUG.EXE and jumped to that address.

C:\>DEBUG

-G =FFFF:0000

The machine rebooted instantly. One of my friends, who topped the class every semester, had been watching over my shoulder. As soon as the machine restarted, he exclaimed, 'How did you do that?' I explained that the reset vector is located at physical address FFFF0 and that the CS:IP value FFFF:0000 maps to that address in real mode. After that, I went back to working on my lift control program and didn't think much more about the incident.
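(In real mode, a physical address is segment × 16 + offset, so CS:IP = FFFF:0000 resolves to 0xFFFF × 0x10 + 0x0000 = 0xFFFF0, which is exactly the reset vector.)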

About a week later, the same friend came to my dorm room. He sat down with a grave look on his face and asked, 'How did you know to do that? How did it occur to you to jump to the reset vector?' I must have said something like, 'It just occurred to me. I remembered that detail from the lecture and wanted to try it out.' He then said, 'I want to be able to think like that. I come top of the class every semester, but I don't think the way you do. I would never have thought of taking a small detail like that and testing it myself.' I replied that I was just curious to see whether what we had learnt actually worked in practice. He responded, 'And that's exactly it. It would never occur to me to try something like that. I feel disappointed that I keep coming top of the class, yet I am not curious in the same way you are. I've decided I don't want to top the class anymore. I just want to explore and experiment with what we learn, the way you do.'

That was all he said before getting up and heading back to his dorm room. I didn't take it very seriously at the time. I couldn't imagine why someone would willingly give up the accomplishment of coming first every year. But he kept his word. He never topped the class again. He still ranked highly, often within the top ten, but he kept his promise of never finishing first again. To this day, I feel a mix of embarrassment and pride whenever I recall that incident. With a single jump to the processor's reset entry point, I had somehow inspired someone to step back from academic competition in order to have more fun with learning. Of course, there is no reason one cannot do both. But in the end, that was his decision, not mine.

In my first job after university, I was assigned to a technical support team where part of my work involved running an installer to deploy a specific component of an e-banking product for customers, usually large banks. As I learnt to use the installer, I realised how fragile it was. The installer, written in Python, often failed because of incorrect assumptions about the target environment and almost always required some manual intervention to complete successfully. During my first week on the project, I spent much of my time stabilising the installer and writing a step-by-step user guide explaining how to use it. The result was well received by both my seniors and management. To my surprise, the user guide received more praise than the improvements I made to the installer itself. While the first few weeks were productive, I soon realised I would not find the work fulfilling for long. I wrote to management a few times to ask whether I could transfer to a team where I could work on something more substantial.

My emails were initially met with resistance. After several rounds of discussion, however, someone who had heard about my situation reached out and suggested a team whose manager might be interested in interviewing me. The team was based in a different city. I was young and willing to relocate wherever I could find good work, so I immediately agreed to the interview.

This was in 2006, when video conferencing was not yet common. On the day of the interview, the hiring manager called me on my office desk phone. He began by introducing the team, which was called Archie, short for architecture. The team developed and maintained the web framework and core architectural components on which the entire e-banking product was built. The product had existed long before open source frameworks such as Spring or Django came into existence, so features such as API routing, authentication and authorisation layers, cookie management, etc. were all implemented in-house as Java Servlets and JavaServer Pages (JSP). Since the software was used in banking environments, it also had to pass strict security testing and regular audits to minimise the risk of serious flaws.

The interview began well. He asked several questions related to software security, such as what SQL injection is and how it can be prevented or how one might design a web framework that mitigates cross-site scripting attacks. He also asked programming questions, most of which I answered pretty well. Towards the end, however, he asked how we could prevent MITM attacks. I had never heard the term, so I admitted that I did not know what MITM meant. He then asked, 'Man in the middle?' but I still had no idea what that meant or whether it was even a software engineering concept. He replied, 'Learn everything you can about PKI and MITM. We need to build a digital signatures feature for one of our corporate banking products. That's the first thing we'll work on.'

Over the next few weeks, I studied RFCs and documentation related to public key infrastructure, public key cryptography standards and related topics. At first, the material felt intimidating, but after spending time each evening reading whatever relevant literature I could find, things gradually began to make sense. Concepts that initially seemed complex and overwhelming eventually felt intuitive and elegant. I relocated to the new city a few weeks later and delivered the digital signatures feature about a month after joining the team. We used the open source Bouncy Castle library to implement the feature. After that project, I worked on other parts of the product too. The most rewarding part was knowing that the code I was writing became part of a mature product used by hundreds of banks and millions of users. It was especially satisfying to see the work pass security testing and audits and be considered ready for release.

That was my first real engineering job. My manager also turned out to be an excellent mentor. Working with him helped me develop new skills and his encouragement gave me confidence that stayed with me for years. Nearly two decades have passed since then, yet the product is still in service and continues to be actively developed. In fact, in my current phase of life I sometimes encounter it as a customer. Occasionally, I open the browser's developer tools to view the page source where I can still see traces of the HTML generated by code I wrote almost twenty years ago.

Around 2007 or 2008, I began working on a proof of concept for developing widgets for an OpenTV set-top box. The work involved writing code in a heavily trimmed-down version of C. One afternoon, while making good progress on a few widgets, I noticed that they would occasionally crash at random. I tried tracking down the bugs, but I was finding it surprisingly difficult to understand my own code. I had managed to produce some truly spaghetti code full of dubious pointer operations that were almost certainly responsible for the crashes, yet I could not pinpoint where exactly things were going wrong.

Ours was a small team of four people, each working on an independent proof of concept. The most senior person on the team acted as our lead and architect. Later that afternoon, I showed him my progress and explained that I was still trying to hunt down the bugs causing the widgets to crash. He asked whether he could look at the code. After going through it briefly and probably realising that it was a bit of a mess, he asked me to send him the code as a tarball, which I promptly did.

He then went back to his desk to study the code. I remember thinking that there was no way he was going to find the problem anytime soon. I had been debugging it for hours and barely understood what I had written myself; it was the worst spaghetti code I had ever produced. With little hope of a quick solution, I went back to debugging on my own.

Barely five minutes later, he came back to my desk and asked me to open a specific file. He then showed me exactly where the pointer bug was. It had taken him only a few minutes not only to read my tangled code but also to understand it well enough to identify the fault and point it out. As soon as I fixed that line, the crashes disappeared. I was genuinely in awe of his skill.

I have always loved computing and programming, so I had assumed I was already fairly good at it. That incident, however, made me realise how much further I still had to go before I could consider myself a good software developer. I did improve significantly in the years that followed and today I am far better at managing software complexity than I was back then.

In another project from that period, we worked on another set-top box platform that supported Java Micro Edition (Java ME) for widget development. One day, the same architect from the previous story asked whether I could add animations to the widgets. I told him that I believed it should be possible, though I'd need to test it to be sure. Before continuing with the story, I need to explain how the different stakeholders in the project were organised.

Our small team effectively played the role of the software vendor. The final product going to market would carry the brand of a major telecom carrier, offering direct-to-home (DTH) television services, with the set-top box being one of the products sold to customers. The set-top box was manufactured by another company. So the project was a partnership between three parties: our company as the software vendor, the telecom carrier and the set-top box manufacturer. The telecom carrier wanted to know whether widgets could be animated on screen with smooth slide-in and slide-out effects. That was why the architect approached me to ask whether it could be done.

I began working on animating the widgets. Meanwhile, the architect and a few senior colleagues attended a business meeting with all the partners present. During the meeting, he explained that we were evaluating whether widget animations could be supported. The set-top box manufacturer immediately dismissed the idea, saying, 'That's impossible. Our set-top box does not support animation.' When the architect returned and shared this with us, I replied, 'I do not understand. If I can draw a widget, I can animate it too. All it takes is clearing the widget and redrawing it at slightly different positions repeatedly. In fact, I already have a working version.' I then showed a demo of the animated widgets running on the emulator.

The following week, the architect attended another partners' meeting where he shared updates about our animated widgets. I was not personally present, so what follows is second-hand information passed on by those who were there. I learnt that the set-top box company reacted angrily. For some reason, they were unhappy that we had managed to achieve results using their set-top box and APIs that they had officially described as impossible. They demanded that we stop work on animation immediately, arguing that our work could not be allowed to contradict their official position. At that point, the telecom carrier's representative intervened and bluntly told the set-top box representative to just shut up. If the set-top box guy was furious, the telecom guy was even more so, 'You guys told us animation was not possible and these people are showing that it is! You manufacture the set-top box. How can you not know what it is capable of?'

Meanwhile, I continued working on the proof of concept. It worked very well in the emulator, but I did not yet have access to the actual hardware. The device was still in the process of being shipped to us, so all my early proof-of-concepts ran on the emulator. The following week, the architect planned to travel to the set-top box company's office to test my widgets on the real hardware.

At the time, I was quite proud of demonstrating results that even the hardware maker believed were impossible. When the architect eventually travelled to test the widgets on the actual device, a problem emerged. What looked like buttery smooth animation on the emulator appeared noticeably choppy on a real television. Over the next few weeks, I experimented with frame rates, buffering strategies and optimising the computation done in the rendering loop. Each week, the architect travelled for testing and returned with the same report: the animation had improved somewhat, but it still remained choppy. The modest embedded hardware simply could not keep up with the required computation and rendering. In the end, the telecom carrier decided that no animation was better than poor animation and dropped the idea altogether. So in the end, the set-top box developers turned out to be correct after all.

Back in 2009, after completing about a year at RSA Security, I began looking for work that felt more intellectually stimulating, especially projects involving mathematics and algorithms. I spoke with a few senior leaders about this, but nothing materialised for some time. Then one day, Dr Burt Kaliski, Chief Scientist at RSA Laboratories, asked to meet me to discuss my career aspirations. I have written about this in more detail in another post here: Good Blessings. I will summarise what followed.

Dr Kaliski met me and offered a few suggestions about the kinds of teams I might approach to find more interesting work. I followed his advice and eventually joined a team that turned out to be an excellent fit. I remained with that team for the next six years. During that time, I worked on parser generators, formal language specification and implementation, as well as indexing and querying engines of a petabyte-scale database. I learnt something new almost every day during those six years. It remains one of the most enjoyable periods of my career. I have especially fond memories of working on parser generators alongside remarkably skilled engineers from whom I learnt a lot.

Years later, I reflected on how that brief meeting with Dr Kaliski had altered the trajectory of my career. I realised I was not sure whether I had properly expressed my gratitude to him for the role he had played in shaping my path. So I wrote to thank him and explain how much that single conversation had influenced my life. A few days later, Dr Kaliski replied, saying he was glad to know that the steps I took afterwards had worked out well. Before ending his message, he wrote a heart-warming note.

This story comes from 2019. By then, I was no longer a twenty-something engineer just starting out. I was now a middle-aged staff engineer with years of experience building both low-level networking systems and database systems. Most of my work up to that point had been in C and C++. I was now entering a new phase of my career where I would be leading the development of microservices written in Go and Python. Like many people in this profession, computing has long been one of my favourite hobbies. So although my professional work for the previous decade had focused on C and C++, I had plenty of hobby projects in other languages, including Python and Go. As a result, switching gears from systems programming to application development was a smooth transition for me. I cannot even say that I missed working in C and C++. After all, who wants to spend their days occasionally chasing memory bugs in core dumps when you could be building features and delivering real value to customers?

In October 2019, during Cybersecurity Awareness Month, a Capture the Flag (CTF) event was organised at our office. The contest featured all kinds of technical puzzles, ranging from SQL injection challenges to insecure cryptography problems. Some challenges also involved reversing binaries and exploiting stack overflow issues.

I am usually rather intimidated by such contests. The whole idea of competitive problem-solving under time pressure tends to make me nervous. But one of my colleagues persuaded me to participate in the CTF. And, somewhat to my surprise, I turned out to be rather good at it. Within about eight hours, I had solved roughly 90% of the puzzles. I finished at the top of the scoreboard.

In my younger days, I was generally known to be a good problem solver. I was often consulted when thorny problems needed solving and I usually managed to deliver results. I also enjoyed solving puzzles. I had a knack for them and happily spent hours, sometimes days, working through obscure mathematical or technical puzzles and sharing detailed write-ups with friends of the nerd variety. Seen in that light, my performance at the CTF probably should not have surprised me. Still, I was very pleased. It was reassuring to know that I could still rely on my systems programming experience to solve obscure challenges.

During the course of the contest, my performance became something of a talking point in the office. Colleagues occasionally stopped by my desk to appreciate my progress in the CTF. Two much younger colleagues, both engineers I admired for their skill and professionalism, were discussing the results nearby. They were speaking softly, but I could still overhear parts of their conversation. Curious, I leaned slightly and listened a bit more carefully. I wanted to know what these two people, whom I admired a lot, thought about my performance.

One of them remarked on how well I was doing in the contest. The other replied, 'Of course he is doing well. He has more than ten years of experience in C.' At that moment, I realised that no matter how well I solved those puzzles, the result would naturally be credited to experience. In my younger days, when I solved tricky problems like these, people would sometimes call me smart. Now people simply saw it as a consequence of my experience. Not that I particularly care for labels such as 'smart' anyway, but it did make me realise how things had changed. I was now simply the person with many years of experience. Solving technical puzzles that involved disassembling binaries, tracing execution paths and reconstructing program logic was expected rather than remarkable.

I continue to sharpen my technical skills to this day. While my technical results may now simply be attributed to experience, I hope I can continue to make a good impression through my professionalism, ethics and kindness towards the people I work with. If those leave a lasting impression, that is good enough for me.

...

Read the original on susam.net »

6 213 shares, 10 trendiness

Jonathan Whiting

I am an unusual beast. All my solo project games I've been making recently have been written in 'vanilla' C. Nobody does this. So I think it might be interesting to explain why I do.

Dry programming language opinions incoming, you have been warned.

There are some things which are non-negotiable. First off, it has to be reliable. I can't afford to spend my time dealing with bugs I didn't cause myself.

A lot of my games were written for flash, and now flash is dying. I do not want to spend my time porting old games to new platforms, I want to make new games. I need a platform that I am confident will be around for a while.

Similarly I want to avoid tying myself to a particular OS, and ideally I'd like to have the option of developing for consoles. So it's important that my programming language is portable, and that it has good portable library support.

The strongest thing on my desired, but not required, list is simplicity. I find looking up language features, and quirky 'clever' APIs incredibly tiring. The ideal language would be one I can memorize, and then never have to look things up.

Dealing with bugs is a huge creative drain. I want to produce fewer bugs, so I want strict typing, strong warning messages and static code analysis. I want bugs to be easier to find, so I want good debuggers and dynamic analysis.

I'm not interested in high-def realism, but I do still care a bit about performance. Having more cycles available broadens the palette of things you can do. It's particularly interesting to explore what is possible with modern, powerful computers if you aren't pursuing fidelity.

Even more than that I care about the speed of the compiler. I am not a zen master of focus, and waiting 10+ seconds is wasteful, yes, but more importantly it breaks my flow. I flick over to Twitter and suddenly 5+ minutes are gone.

I am not an OOP convert. I've spent most of my professional life working with classes and objects, but the more time I spend, the less I understand why you'd want to combine code and data so rigidly. I want to handle data as data and write the code that best fits a particular situation.

C++ is still the most common language for writing games, and not without reason. I still do almost all of my contract work in it. I dislike it intensely.

C++ covers my needs, but fails my wants badly. It is desperately complicated. Despite decent tooling it's easy to create insidious bugs. It is also slow to compile compared to C. It is high performance, and it offers features that C doesn't have; but features I don't want, and at a great complexity cost.

C# and Java have similar issues. They are verbose and complex beasts, and I am searching for a concise, simple creature. They both do a lot to railroad a programmer into a strongly OOP style that I am opposed to. As per most higher level languages they have a tendency to hide away complexity in a way that doesn't actually prevent it from biting you.

I like Go a lot. In many ways it is C revisited, taking into account what has been learnt in the long years since it was released. I would like to use it, but there are big roadblocks that prevent me. The stop-the-world garbage collection is a big pain for games, stopping the world is something you can't really afford to do. The library support for games is quite poor, and though you can wrap C libs without much trouble, doing so adds a lot of busy work. It is niche enough that I worry a little about long term relevance.

It would be nice to make things for the web, but it feels like a terrifyingly fast moving environment. It is particularly scary with the death of flash. I really dislike javascript, it is so loose that I marvel that people are able to write big chunks of software in it. I have no interest in trying.

Haxe feels much more promising than most alternatives. If I do web stuff again I'll be diving in here. There is some good library support. I am a little concerned by its relative youth, will it last? I don't have much else to say about it though, I've only dabbled with the surface.

Some people just say screw it, I'll write my own language, the language I want to use. I admire this, and sometimes I toy with the idea of doing the same. It feels like too much to throw away all existing library support, and taking full responsibility for future compatibility. It is also very difficult, and when it comes down to it I would rather be making games than programming languages.

C is dangerous, but it is reliable. A very sharp knife that can cut fingers as well as veg, but so simple it's not too hard to learn to use it carefully.

It is fast, and when it comes to compilation I can't think of anything faster.

It can be made to run on just about anything. Usually this is relatively easy. It is hard to imagine a time when this won't be the case.

The library and tooling support is strong and ongoing.

I say this with some sadness, but it is still the language for me.

I absolutely DO NOT mean to say "hey, you should use C too". I fully appreciate my preferences here are pretty specific and unusual. I have also already written more 'vanilla' C code than most, and this certainly is part of my comfort.

So yeah, that’s it :-)

...

Read the original on jonathanwhiting.com »

7 203 shares, 6 trendiness

Drivers over 70 to face eye tests every three years

Rebecca Guy, senior policy manager at the Royal Society for the Prevention of Accidents, said: "Regular vision checks are a sensible way to reduce risk as we age, but the priority must be a system that supports people to drive safely for as long as possible, while ensuring timely action is taken when health or eyesight could put them or others in danger."

...

Read the original on www.bbc.com »

8 192 shares, 20 trendiness

The world heard JD Vance being booed at the Olympics. Except for viewers in the US

The modern Olympics sell themselves on a simple premise: the whole world, watching the same moment, at the same time. On Friday night in Milan, that illusion fractured in real time.

When Team USA entered the San Siro during the parade of nations, the speed skater Erin Jackson led the delegation into a wall of cheers. Moments later, when cameras cut to US vice-president JD Vance and second lady Usha Vance, large sections of the crowd responded with boos. Not subtle ones, but audible and sustained ones. Canadian viewers heard them. Journalists seated in the press tribunes in the upper deck, myself included, clearly heard them. But as I quickly realized from a group chat with friends back home, American viewers watching NBC did not.

On its own, the situation might once have passed unnoticed. But the defining feature of the modern sports media landscape is that no single broadcaster controls the moment any more. CBC carried it. The BBC liveblogged it. Fans clipped it. Within minutes, multiple versions of the same happening were circulating online — some with boos, some without — turning what might once have been a routine production call into a case study in information asymmetry.

For its part, NBC has denied editing the crowd audio, although it is difficult to resolve why the boos so audible in the stadium and on other broadcasts were absent for US viewers. But in a broader sense, it is becoming harder, not easier, to curate reality when the rest of the world is holding up its own camera angles. And that raises an uncomfortable question as the United States moves toward hosting two of the largest sporting events on the planet: the 2026 men's World Cup and the 2028 Los Angeles Olympics.

If a US administration figure is booed at the Olympics in Los Angeles, or a World Cup match in New Jersey or Dallas, will American domestic broadcasts simply mute or avoid mentioning the crowd audio? If so, what happens when the world feed, or a foreign broadcaster, shows something else entirely? What happens when 40,000 phones in the stadium upload their own version in real time?

The risk is not just that viewers will see through it. It is that attempts to manage the narrative will make American broadcasters look less credible, not more. Because the audience now assumes there is always another angle. Every time a broadcaster makes that trade — credibility for insulation — it is a trade audiences eventually notice.

There is also a deeper structural pressure behind decisions like this. The Trump era has been defined in part by sustained hostility toward media institutions. Broadcasters do not operate in a vacuum; they operate inside regulatory environments, political climates and corporate risk calculations. When presidents and their allies openly threaten or target networks, it is naive to pretend that has no downstream effect on editorial choices — especially in high-stakes live broadcasts tied to billion-dollar rights deals.

But there is a difference between contextual pressure and visible reality distortion. When global audiences can compare feeds in real time, the latter begins to resemble something else entirely: not editorial judgment, but narrative management. Which is why comparisons to Soviet-style state-controlled broadcasting models — once breathless rhetorical exaggerations — are starting to feel less hyperbolic.

The irony is that the Olympics themselves are built around the idea that sport can exist alongside political tension without pretending it does not exist. The International Olympic Committee's own language — athletes should not be punished for governments' actions — implicitly acknowledges that governments are part of the Olympic theater whether organizers like it or not.

Friday night illustrated that perfectly. American athletes were cheered, their enormous contingent given one of the most full-throated receptions of the night. The political emissaries were not universally welcomed. Both things can be true at once. Crowd dissent is not a failure of the Olympic ideal. In open societies, it is part of how public sentiment is expressed. Attempting to erase one side of that equation risks flattening reality into something audiences no longer trust. And if Milan was a warning shot, Los Angeles is the main event.

Since Donald Trump's first term, American political coverage around sport has fixated on the micro-moments: Was the president booed or cheered? Did the broadcast show it? Did he attend or skip events likely to produce hostile crowds? The discourse has often felt like a Rorschach test, filtered through partisan interpretation and selective clips.

The LA Olympics will be something else entirely. There is no hiding from an opening ceremony for Trump. No ducking a stadium when the Olympic Charter requires the host country's head of state to officially declare the Games open. No controlling how 200 international broadcasters carry the moment.

If Trump is still in the White House on 14 July 2028, one month after his 82nd birthday and in the thick of another heated US presidential campaign, he will stand in front of a global television audience as a key part of the opening ceremony. He will do so in California, in a political environment far less friendly than many domestic sporting venues he has appeared in over the past decade. And he will do it in a city synonymous with the political opposition, potentially in the back yard of the Democratic presidential candidate.

There will be some cheers. There will almost certainly be boos. There will be everything in between. And there will be no way to make them disappear. The real risk for American broadcasters is not that dissent will be visible. It is that audiences will start assuming anything they do not show is being hidden. In an era when trust in institutions is already fragile, that is a dangerous place to operate from.

The Olympics have always been political, whether through boycotts, protests, symbolic gestures or crowd reactions. What has changed is not the politics. It is the impossibility of containing the optics.

Milan may ultimately be remembered as a small moment — a few seconds of crowd noise during a long ceremony. But it also felt like a preview of the next phase of global sport broadcasting: one where narrative control is shared, contested and instantly verifiable. The world is watching. And this time, it is also recording.

...

Read the original on www.theguardian.com »

9 184 shares, 14 trendiness

Beyond agentic coding

I'm generally pretty pro-AI with one major exception: agentic coding. My consistent impression is that agentic coding does not actually improve productivity and deteriorates the user's comfort and familiarity with the codebase. I formed that impression from:

Every time I use agentic coding tools I'm consistently unimpressed with the quality of the results.

I allow interview candidates to use agentic coding tools and candidates who do so consistently performed worse than other candidates, failing to complete the challenge or producing incorrect results. This was a huge surprise to me at first because I expected agentic coding to confer an unfair advantage but … nope!

Studies like the Becker study and Shen study show that users of agentic coding perform no better and sometimes worse when you measure productivity in terms of fixed outcomes rather than code velocity/volume.

I don't believe agentic coding is a lost cause, but I do believe agentic coding in its present incarnation is doing more harm than good to software development. I also believe it is still worthwhile to push on the inadequacies of agentic coding so that it empowers developers and improves code quality.

However, in this post I'm taking a different tack: I want to present other ways to leverage AI for software development. I believe that agentic coding has so captured the cultural imagination that people are sleeping on other good and underexplored solutions to AI-assisted software development.

I like to design tools and interfaces from first principles rather than reacting to industry trends/hype and I've accrued quite a few general design principles from over a decade of working in DevProd and also an even longer history of open source projects and contributions.

One of those design principles is my personal "master cue", which is:

A good tool or interface should keep the user in a flow state as long as possible

This principle isn't even specific to AI-assisted software development, and yet still highlights why agentic coding sometimes misses the mark. Both studies and developer testimonials show that agentic coding breaks flow and keeps developers in an idle/interruptible holding pattern more than ordinary coding.

For example, the Becker study took screen recordings and saw that idle time approximately doubled.

I believe we can improve AI-assisted coding tools (agentic or not) if we set our north star to "preserve flow state".

Calm technology is a design discipline that promotes flow state in tools that we build. The design principles most relevant to coding are:

tools should minimize demands on our attention

Interruptions and intrusions on our attention break us out of flow state.

tools should be built to be "pass-through"

A tool is not meant to be the object of our attention; rather the tool should reveal the true object of our attention (the thing the tool acts upon), rather than obscuring it. The more we use the tool the more the tool fades into the background of our awareness while still supporting our work.

tools should create and enhance calm (thus the name: calm technology)

Engineers already use "calm" tools and interfaces as part of our work and here are a couple of examples you're probably already familiar with:

IDEs (like VSCode) can support inlay hints that sprinkle the code with useful annotations for the reader, such as inferred type annotations.

These types of inlay hints embody calm design principles because:

they minimize demands on our attention

They exist on the periphery of our attention, available for us if we're interested but unobtrusive if we're not interested.

they are built to be "pass-through"

They don't replace or substitute the code that we are editing. They enhance the code editing experience but the user is still in direct contact with the edited code. The more we use type hints the more they fade into the background of our awareness and the more the code remains the focus of our attention.

They promote a sense of calm by informing our understanding of the code passively. As one of the Calm Technology principles puts it: "Technology can communicate, but doesn't need to speak".

Tools like VSCode or GitHub's pull request viewer let you preview changes to the file tree at a glance.

You might think to yourself "this is a very uninteresting thing to use as an example" but that's exactly the point. The best tools (designed with the principles of calm technology) are pervasive and boring things that we take for granted (like light switches) and that have faded so strongly into the background of our attention that we forget they even exist as a part of our daily workflow (also like light switches).

They're there if we need the information, but easy to ignore (or even forget they exist) if we don't use them.

they are built to be "pass-through"

When we interact with the file tree viewer we are interacting directly with the filesystem and the interaction between the representation (the viewer) and the reality (the filesystem) feels direct, snappy, and precise. The more we use the viewer the more the representation becomes indistinguishable from the reality in our minds.

We do not need to constantly interact with the file tree to gather up-to-date information about our project structure. It passively updates in the background as we make changes to the project and those updates are unobtrusive and not attention-grabbing.

We can think about the limitations of chat-based agentic coding tools through this same lens:

they place high demands on our attention

The user has to either sit and wait for the agent to report back or do something else and run the LLM in a semi-autonomous manner. However, even semi-autonomous sessions prevent the user from entering flow state because they have to remain interruptible.

they are not built to be "pass-through"

Chat agents are a highly mediated interface to the code which is indirect (we interact more with the agent than the code), slow (we spend a lot of time waiting), and imprecise (English is a dull interface).

The user needs to constantly stimulate the chat to gather new information or update their understanding of the code (the chat agent doesn't inform the user's understanding passively or quietly). Chat agents are also fine-tuned to maximize engagement.

One of the earliest examples of an AI coding assistant that begins to model calm design principles is the OG AI-assistant: GitHub Copilot's support for inline suggestions, with some caveats I'll go into.

This does one thing really well:

it's built to be "pass-through"

The user is still interacting directly with the code and the suggestions are reasonably snappy. The user can also ignore or type through the suggestion.

However, by default these inline suggestions violate other calm technology principles:

By default Copilot presents the suggestions quite frequently and the user has to pause what they're doing to examine the output of the suggestion. After enough times the user begins to condition themselves into regularly pausing and waiting for a suggestion which breaks them out of a flow state. Now instead of being proactive the user's been conditioned by the tool to be reactive.

GitHub Copilot's inline suggestion interface is visually busy and intrusive. Even if the user ignores every suggestion the effect is still disruptive: suggestions appear on the user's screen in the center of their visual focus and the user has to decide on the spot whether to accept or ignore them before proceeding further. The user also can't easily passively absorb information presented in this way: understanding each suggestion requires the user's focused attention.

… buuuut these issues are partially fixable by disabling the automatic suggestions and requiring them to be explicitly triggered by Alt + \. However, unfortunately that also disables the next feature, which I like even more:

Next edit suggestions (also from GitHub Copilot)

Next edit suggestions are a related GitHub Copilot feature that display related follow-up edits throughout the file/project and let the user cycle between them and possibly accept each suggested change. They behave like a super-charged "find and replace".

These suggestions do an amazing job of keeping the user in a flow state:

they minimize demand on the user's attention

The cognitive load on the user is smaller than inline suggestions because the suggestions are more likely to be bite-sized (and therefore easier for a human to review and accept).

Just like inline suggestions, next edit suggestions still keep the user in close contact with the code they are modifying.

Suggestions are presented in an unobtrusive way: they aren't dumped in the dead center of the user's attention and they don't demand immediate review. They exist on the periphery of the user's attention as code suggestions that the user can ignore or focus on at their leisure.

I believe there is a lot of untapped potential in AI-assisted coding tools and in this section I'll sketch a few small examples of how we can embody calm technology design principles in building the next generation of coding tools.

You could browse a project by a tree of semantic facets. For example, if you were editing the Haskell implementation of Dhall, the tree viewer might look like this prototype I hacked up:

[Video: a prototype tree viewer browsing the Dhall codebase by semantic facet]

The goal here is to not only provide a quick way to explore the project by intent, but to also improve the user's understanding of the project the more they use the feature. "String interpolation regression" is so much more informative than dhall/tests/format/issue2078A.dhall.

Also, the above video is based on a real tool and not just a mock. You can find the code I used to generate that tree of semantic facets here and I'll write up another post soon walking through how that code works.
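
To make the idea a bit more concrete, here is a small sketch of the kind of data structure such a facet-tree viewer could be built around. This is my own illustration, not the linked implementation, and all of the names are hypothetical:

```haskell
{-# LANGUAGE OverloadedStrings #-}

module FacetTree where

import Data.Text (Text)

-- One node in a tree of semantic facets: a human-readable intent plus
-- either nested sub-facets or the concrete files that realize the intent.
data Facet = Facet
  { facetLabel :: Text        -- e.g. "String interpolation regression"
  , subFacets  :: [Facet]     -- nested facets; empty for a leaf
  , facetFiles :: [FilePath]  -- files the leaf points back to
  } deriving (Show)

-- A tiny hand-written example of what a generated tree might contain.
example :: Facet
example =
  Facet "Formatting tests"
    [ Facet "String interpolation regression"
        []
        [ "dhall/tests/format/issue2078A.dhall" ]
    ]
    []
```

A viewer would then display facetLabel instead of raw file paths, which is exactly the intent-over-file-name benefit described above.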

You could take an editor session, a diff, or a pull request and automatically split it into a series of more focused commits that are easier for people to review. This is one of the cases where the AI can reduce human review labor (most agentic coding tools create more human review labor).

There is some prior art here but this is still a nascent area of development.
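
To sketch the shape such a tool might take (again my own illustration, not the prior art alluded to above): label each hunk with a topic, then group hunks by topic so that each commit tells one story.

```haskell
module SplitCommits where

import Data.List (nub)
import Data.Text (Text)

-- Illustrative types only.
data Hunk   = Hunk   { hunkFile :: FilePath, hunkPatch :: Text }
data Commit = Commit { commitMessage :: Text, commitHunks :: [Hunk] }

-- Hypothetical step: ask a model for a short topic label describing a hunk.
labelHunk :: Hunk -> IO Text
labelHunk _ = error "sketch only: an LLM call would go here"

-- Group hunks by topic, producing one small, reviewable commit per topic.
splitIntoCommits :: [Hunk] -> IO [Commit]
splitIntoCommits hunks = do
  labeled <- traverse (\hunk -> (,) <$> labelHunk hunk <*> pure hunk) hunks
  let topics = nub (map fst labeled)
  pure
    [ Commit topic [ hunk | (t, hunk) <- labeled, t == topic ]
    | topic <- topics
    ]
```

The interesting design question is where the human reviews the result: a calm version of this would present the proposed split quietly, much like the facet tree above, rather than demanding an interactive chat session.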

You could add two new tools to the user's toolbar or context menu: "Focus on…" and "Edit as…".

"Focus on…" would allow the user to specify what they're interested in changing and present only files and lines of code related to their specified interest. For example, if they want to focus on "command line options" then only related files and lines of code would be shown in the editor and other lines of code would be hidden/collapsed/folded. This would basically be like "Zen mode" but for editing a feature domain of interest.

"Edit as…" would allow the user to edit the file or selected code as if it were a different programming language or file format. For example, someone who was new to Haskell could edit a Haskell file "as Python" and then after finishing their edits the AI attempts to back-propagate their changes to Haskell. Or someone modifying a command-line parser could edit the file as YAML and be presented with a simplified YAML representation of the command line options which they could modify to add new options.
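
As a rough sketch of the interfaces these two tools imply (hypothetical names and types of my own, not any real editor API):

```haskell
module CalmTools where

import Data.Text (Text)

-- Hypothetical, illustrative types.
type Intent       = Text  -- e.g. "command line options"
type TargetFormat = Text  -- e.g. "Python" or "YAML"

-- A region of code to keep visible; everything else gets folded away.
data SourceSpan = SourceSpan
  { spanFile  :: FilePath
  , spanStart :: Int
  , spanEnd   :: Int
  } deriving (Show)

-- "Focus on…": given an intent, return only the spans related to it so the
-- editor can hide everything else ("Zen mode" for a feature domain).
focusOn :: Intent -> [FilePath] -> IO [SourceSpan]
focusOn _intent _projectFiles =
  error "sketch only: an LLM-backed relevance search would go here"

-- "Edit as…": render a file in another language or format, let the user edit
-- that rendering, then back-propagate their edits to the original file.
editAs :: TargetFormat -> FilePath -> IO ()
editAs _format _file =
  error "sketch only: round-tripping through the LLM would go here"
```

Both are deliberately pass-through: the user still reads and edits code (or a projection of it) directly, and the AI does its work on the periphery.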

This is obviously not a comprehensive list of ideas, but I wrote this to encourage people to think of more innovative ways to incorporate AI into people's workflows besides just building yet another chatbot. I strongly believe that chat is the least interesting interface to LLMs and AI-assisted software development is no exception to this.

Copyright © 2026 Gabriella Gonzalez. This work is licensed under CC BY-SA 4.0

...

Read the original on haskellforall.com »

10 172 shares, 7 trendiness

If Windows can lock me out of Notepad, is Windows too reliant on the cloud?

If I can lose access to something as simple as Notepad, then Microsoft has probably taken cloud integrations too far.

A couple of weeks ago, I found that I couldn't open Notepad on my desktop PC. It wasn't because Windows 11 had crashed, but rather, Microsoft told me it wasn't "available in (my) account". It turned out that an error (0x803f8001) with Microsoft Store's licensing service stopped me from opening a few first-party apps, including the Snipping Tool.

Yes, even the app I usually use to screenshot error messages was busted. Ironic. Now, I'm usually a fairly level-headed Windows enthusiast who can relate to users who both love and loathe Microsoft's operating system, but I couldn't open Notepad.exe — are we serious?

You've probably all seen the memes: it's called "This PC" now, and not "My Computer" anymore. It's usually easy to laugh off as a disgruntled conspiracy, but I can see why it trends when the themes of Software as a Service (SaaS) are creeping into the most basic Windows apps.

After all, Notepad is supposed to be the absolute barebones, most ultra-basic app in the entire OS. Well, it was, before Microsoft added Copilot and users started looking for a way to disable the unusual AI addition. Sure, you can still type C:\Windows\notepad.exe into 'Run' with Windows + R for a legacy fallback, but many perhaps wouldn't know about it.

I'm still a Windows guy, and I always will be. Nevertheless, I can't ignore that Windows 11 regularly feels less like an operating system and more like a thin client; just a connection to Microsoft's cloud with fewer options for you to act as the administrator of your own PC.

To be clear, I don't have major problems with the default, out-of-box experience (OOBE) of Windows 11. In fact, it doesn't take me long to make changes when installing fresh copies on new desktop builds. Default pins on the Start menu don't matter because I barely use it, and disabling ads is straightforward enough. The major points pretty much boil down to:

* Uninstalling OneDrive: The web app is fine for manual backups, but I definitely don't want my files automatically synced.

* Creating a local account: Microsoft keeps making it harder, but I'll always use workarounds.

After that, I don't take issue with the normal desktop — unless something unexpectedly breaks. Our Editor-in-Chief, Daniel Rubino, said it best, "People don't hate change. They hate surprise." It was certainly a surprise to lose access to my plain text editor, loaded up with more than what an extended (Windows + V) clipboard would be useful for. Nobody asked for this.

So, is the solution to look for open-source Notepad clones? Maybe for some enthusiasts, but that's just another app to add to a growing Winget list, and I'd rather Microsoft stay true to its word about walking back Windows 11's AI overload. I can't abide comments on social platforms suggesting people just use a "debloater" on a new Windows PC, either — we shouldn't have to.

That, and I generally avoid recommending Windows debloat scripts from GitHub to anyone in the first place. Windows can be adjusted to your liking if you follow the right guides, and while you can inspect open-source code for yourself and generally trust some well-respected coders on that platform, it's a strange solution that needn't exist.

I'm not naive enough to think Windows is Microsoft's top priority. Cloud computing and Microsoft 365 are far more valuable than a consumer-level operating system, though Microsoft does have a staggering lead over the competition — one that would be absurd to jeopardize.

Still, my problems with Notepad and Snipping Tool are a raindrop in the Pacific Ocean of Microsoft's broader plans, but I don't want first-party apps asking for authentication from its servers — nor do I want our readers to download the first debloat script they find on the web.

There are justifications for Microsoft adding elements of its cloud business to Windows, but I wish it wouldn't force it in a way that locks people into an online-only experience. My PC should be entirely functional without an Internet connection — especially when I need a few scribbles from Notepad.

AI is undoubtedly the future, at least in some capacity. Even if Satya Nadella says artificial intelligence needs to prove its worth, there's no believable chance that it's going away, especially now that Copilot is so deeply ingrained in practically everything Microsoft owns.

Still, if online-only services are all active by default and Windows 12 is ultimately an agentic AI OS, I wouldn't be surprised if more people stick with a debloated Windows 11, just as others did with Windows 10. Do you think the next version of Windows will return some control back to the user, or will it be even more Internet-dependent?

...

Read the original on www.windowscentral.com »
