10 interesting stories served every morning and every evening.
I didn’t ask for this and neither did you.
I didn’t ask for a robot to consume every blog post and piece of code I ever wrote and parrot it back so that some hack could make money off of it.
I didn’t ask for the role of a programmer to be reduced to that of a glorified TSA agent, reviewing code to make sure the AI didn’t smuggle something dangerous into production.
And yet here we are. The worst fact about these tools is that they work. They can write code better than you or I can, and if you don’t believe me, wait six months.
You could abstain out of moral principle. And that’s fine, especially if you’re at the tail end of your career. And if you’re at the beginning of your career, you don’t need me to explain any of this to you, because you already use Warp and Cursor and Claude, with ChatGPT as your therapist and pair programmer and maybe even your lover. This post is for the 40-somethings in my audience who don’t realize this fact yet.
So as a senior, you could abstain. But then your junior colleagues will eventually code circles around you, because they’re wearing bazooka-powered jetpacks and you’re still riding around on a fixie bike. Eventually your boss will start asking why you’re getting paid twice your zoomer colleagues’ salary to produce a tenth of the code.
Ultimately if you have a mortgage and a car payment and a family you love, you’re going to make your decision. It’s maybe not the decision that your younger, more idealistic self would want you to make, but it does keep your car and your house and your family safe inside it.
Someday years from now we will look back on the era when we were the last generation to code by hand. We’ll laugh and explain to our grandkids how silly it was that we typed out JavaScript syntax with our fingers. But secretly we’ll miss it.
We’ll miss the feeling of holding code in our hands and molding it like clay in the caress of a master sculptor. We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM. We’ll miss creating something we feel proud of, something true and right and good. We’ll miss the satisfaction of the artist’s signature at the bottom of the oil painting, the GitHub repo saying “I made this.”
I don’t celebrate the new world, but I also don’t resist it. The sun rises, the sun sets, I orbit helplessly around it, and my protests can’t stop it. It doesn’t care; it continues its arc across the sky regardless, moving but unmoved.
If you would like to grieve, I invite you to grieve with me. We are the last of our kind, and those who follow us won’t understand our sorrow. Our craft, as we have practiced it, will end up like some blacksmith’s tool in an archeological dig, a curio for future generations. It cannot be helped, it is the nature of all things to pass to dust, and yet still we can mourn. Now is the time to mourn the passing of our craft.
...
Read the original on nolanlawson.com »
We’re excited to announce that DoNotNotify has been open sourced. The full source code for the app is now publicly available for anyone to view, study, and contribute to.
You can find the source code on GitHub:
...
Read the original on donotnotify.com »
A local device focused AI assistant built in Rust — persistent memory, autonomous tasks, ~27MB binary. Inspired by and compatible with OpenClaw.
* Local device focused — runs entirely on your machine, your memory data stays yours
* Autonomous heartbeat — delegate tasks and let it work in the background
# Full install (includes desktop GUI)
cargo install localgpt
# Headless (no desktop GUI — for servers, Docker, CI)
cargo install localgpt --no-default-features
# Initialize configuration
localgpt config init
# Start interactive chat
localgpt chat
# Ask a single question
localgpt ask "What is the meaning of life?"
# Run as a daemon with heartbeat, HTTP API and web ui
localgpt daemon start
LocalGPT uses plain markdown files as its memory. Files are indexed with SQLite FTS5 for fast keyword search, and sqlite-vec for semantic search with local embeddings.
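As a rough sketch of the mechanics (illustrative only: the table name, columns and contents here are invented rather than LocalGPT's actual schema, and the sqlite-vec/embedding side is omitted), keyword indexing with FTS5 via rusqlite looks something like this:

use rusqlite::{Connection, Result};

fn main() -> Result<()> {
    let conn = Connection::open("memory.db")?;
    // Hypothetical full-text index over the markdown workspace.
    conn.execute_batch(
        "CREATE VIRTUAL TABLE IF NOT EXISTS notes USING fts5(path, body);",
    )?;
    conn.execute(
        "INSERT INTO notes (path, body) VALUES (?1, ?2)",
        ("MEMORY.md", "2026-02-14: heartbeat summarized the inbox"),
    )?;
    // FTS5 MATCH provides the fast keyword search; rank orders by BM25.
    let mut stmt = conn.prepare(
        "SELECT path FROM notes WHERE notes MATCH ?1 ORDER BY rank",
    )?;
    let hits: Vec<String> = stmt
        .query_map(["heartbeat"], |row| row.get(0))?
        .collect::<Result<_>>()?;
    println!("{hits:?}");
    Ok(())
}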
[agent]
default_model = "claude-cli/opus"
[providers.anthropic]
api_key = "${ANTHROPIC_API_KEY}"
[heartbeat]
enabled = true
interval = "30m"
active_hours = { start = "09:00", end = "22:00" }
[memory]
workspace = "~/.localgpt/workspace"
# Chat
localgpt chat # Interactive chat
localgpt chat --session
When the daemon is running:
Why I Built LocalGPT in 4 Nights — the full story with commit-by-commit breakdown.
...
Read the original on github.com »
We built a Software Factory: non-interactive development where specs + scenarios drive agents that write code, run harnesses, and converge without human review.
The narrative form is included below. If you’d prefer to work from first principles, I offer a few constraints & guidelines that, applied iteratively, will accelerate any team toward the same intuitions, convictions, and ultimately a factory of your own. In kōan or mantra form:
* Why am I doing this? (implied: the model should be doing this instead)
* Code must not be written by humans
* Code must not be reviewed by humans
* If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
On July 14th, 2025, Jay Taylor and Navan Chauhan joined me (Justin McCarthy, co-founder, CTO) in founding the StrongDM AI team.
The catalyst was a transition observed in late 2024: with the second revision of Claude 3.5 (October 2024), long-horizon agentic coding workflows began to compound correctness rather than error.
By December of 2024, the model’s long-horizon coding performance was unmistakable via Cursor’s YOLO mode.
Prior to this model improvement, iterative application of LLMs to coding tasks would accumulate errors of all imaginable varieties (misunderstandings, hallucinations, syntax errors, DRY violations, library version incompatibilities, and so on). The app or product would decay and ultimately “collapse”: death by a thousand cuts.
Together with YOLO mode, the updated model from Anthropic provided the first glimmer of what we now refer to internally as non-interactive development or grown software.
In the first hour of the first day of our AI team, we established a charter which set us on a path toward a series of findings (which we refer to as our “unlocks”). In retrospect, the most important line in the charter document was the following:
Initially it was just a hunch. An experiment. How far could we get, without writing any code by hand?
Not very far! At least: not very far, until we added tests. However, the agent, obsessed with the immediate task, soon began to take shortcuts: return true is a great way to pass narrowly written tests, but probably won’t generalize to the software you want.
Tests were not enough. How about integration tests? Regression tests? End-to-end tests? Behavior tests?
One recurring theme of the agentic moment: we need new language. For example, the word “test” has proven insufficient and ambiguous. A test, stored in the codebase, can be lazily rewritten to match the code. The code could be rewritten to trivially pass the test.
We repurposed the word scenario to represent an end-to-end “user story”, often stored outside the codebase (similar to a “holdout” set in model training), which could be intuitively understood and flexibly validated by an LLM.
Because much of the software we grow itself has an agentic component, we transitioned from boolean definitions of success (“the test suite is green”) to a probabilistic and empirical one. We use the term satisfaction to quantify this validation: of all the observed trajectories through all the scenarios, what fraction of them likely satisfy the user?
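Read literally, that is:

satisfaction = (trajectories judged to satisfy the user) / (all observed trajectories)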
In previous regimes, a team might rely on integration tests, regression tests, UI automation to answer “is it working?”
We noticed two limitations of previously reliable techniques:
* Tests are too rigid: we were coding with agents, but we’re also building with LLMs and agent loops as design primitives; evaluating success often required LLM-as-judge

* Tests can be reward hacked: we needed validation that was less vulnerable to the model cheating
The Digital Twin Universe is our answer: behavioral clones of the third-party services our software depends on. We built twins of Okta, Jira, Slack, Google Docs, Google Drive, and Google Sheets, replicating their APIs, edge cases, and observable behaviors.
With the DTU, we can validate at volumes and rates far exceeding production limits. We can test failure modes that would be dangerous or impossible against live services. We can run thousands of scenarios per hour without hitting rate limits, triggering abuse detection, or accumulating API costs.
Our success with DTU illustrates one of the many ways in which the Agentic Moment has profoundly changed the economics of software. Creating a high fidelity clone of a significant SaaS application was always possible, but never economically feasible. Generations of engineers may have wanted a full in-memory replica of their CRM to test against, but self-censored the proposal to build it. They didn’t even bring it to their manager, because they knew the answer would be no.
Those of us building software factories must practice a deliberate naivete: finding and removing the habits, conventions, and constraints of Software 1.0. The DTU is our proof that what was unthinkable six months ago is now routine.
* Principles: what we believe is true about building software with agents
* Products: tools we use daily and believe others will benefit from
Thank you for reading. We wish you the best of luck constructing your own Software Factory.
...
Read the original on factory.strongdm.ai »
Last year, I completed 20 years in professional software development. I wanted to write a post to mark the occasion back then, but couldn’t find the time. This post is my attempt to make up for that omission. In fact, I have been involved in software development for a little longer than 20 years. Although I had my first taste of computer programming as a child, it was only when I entered university about 25 years ago that I seriously got into software development. So I’ll start my stories from there. These stories are less about software and more about people. Unlike many posts of this kind, this one offers no wisdom or lessons. It only offers a collection of stories. I hope you’ll like at least a few of them.
The first story takes place in 2001, shortly after I joined university. One evening, I went to the university computer laboratory to browse the Web. Out of curiosity, I typed susam.com into the address bar and landed on its
home page. I remember the text and banner looking much larger back then. Display resolutions were lower, so they covered almost half the screen. I knew very little about the Internet then and I was just trying to make sense of it. I remember wondering what it would take to create my own website, perhaps at susam.com. That’s when an older student who had been watching me browse over my shoulder approached and asked if I had created the website. I told him I hadn’t and that I had no idea how websites were made. He asked me to move aside, took my seat and clicked View > Source in Internet Explorer. He then explained how websites are made of HTML pages and how those pages are simply text instructions.
Next, he opened Notepad and wrote a simple HTML page that looked something like this:
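<!-- A plausible reconstruction from the description that follows; the original snippet was not preserved. -->
<HTML>
<BODY>
<FONT COLOR="RED">HELLO WORLD</FONT>
</BODY>
</HTML>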
Yes, we had a FONT tag back then and it was common practice to write HTML tags in uppercase. He then opened the page in a web browser and showed how it rendered. After that, he demonstrated a few more features such as changing the font face and size, centring the text and altering the page’s background colour. Although the tutorial lasted only about ten minutes, it made the World Wide Web feel far less mysterious and much more fascinating.
That person had an ulterior motive though. After the tutorial, he never returned the seat to me. He just continued browsing the Web and waited for me to leave. I was too timid to ask for my seat back. Seats were limited, so I returned to my dorm room both disappointed that I couldn’t continue browsing that day and excited about all the websites I might create with this newfound knowledge. I could never register susam.com for myself though. That domain was always used by some business selling Turkish cuisine. Eventually, I managed to get the next best thing: a .net domain of my own. That brief encounter in the university laboratory set me on a lifelong path of creating and maintaining personal websites.
The second story also comes from my university days. One afternoon, I was hanging out with my mates in the computer laboratory. In front of me was an MS-DOS machine powered by an Intel 8086 microprocessor, on which I was writing a lift control program in assembly. In those days, it was considered important to deliberately practise solving made-up problems as a way of honing our programming skills. As I worked on my program, my mind drifted to a small detail about the 8086 microprocessor that we had recently learnt in a lecture. Our professor had explained that, when the 8086 microprocessor is reset, execution begins with CS:IP set to FFFF:0000. So I murmured to anyone who cared to listen, ‘I wonder if the system will reboot if I jump to FFFF:0000.’ I then opened DEBUG.EXE and jumped to that address.
C:\>DEBUG
-G =FFFF:0000
The machine rebooted instantly. One of my friends, who topped the class every semester, had been watching over my shoulder. As soon as the machine restarted, he exclaimed, ‘How did you do that?’ I explained that the reset vector is located at physical address FFFF0 and that the CS:IP value FFFF:0000 maps to that address in real mode. After that, I went back to working on my lift control program and didn’t think much more about the incident.
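(The arithmetic, for anyone rusty on real mode: physical address = segment * 16 + offset, so FFFF:0000 works out to 0xFFFF * 0x10 + 0x0000 = 0xFFFF0, the reset vector.)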
About a week later, the same friend came to my dorm room. He sat down with a grave look on his face and asked, ‘How did you know to do that? How did it occur to you to jump to the reset vector?’ I must have said something like, ‘It just occurred to me. I remembered that detail from the lecture and wanted to try it out.’ He then said, ‘I want to be able to think like that. I come top of the class every semester, but I don’t think the way you do. I would never have thought of taking a small detail like that and testing it myself.’ I replied that I was just curious to see whether what we had learnt actually worked in practice. He responded, ‘And that’s exactly it. It would never occur to me to try something like that. I feel disappointed that I keep coming top of the class, yet I am not curious in the same way you are. I’ve decided I don’t want to top the class anymore. I just want to explore and experiment with what we learn, the way you do.’
That was all he said before getting up and heading back to his dorm room. I didn’t take it very seriously at the time. I couldn’t imagine why someone would willingly give up the accomplishment of coming first every year. But he kept his word. He never topped the class again. He still ranked highly, often within the top ten, but he kept his promise of never finishing first again. To this day, I feel a mix of embarrassment and pride whenever I recall that incident. With a single jump to the processor’s reset entry point, I had somehow inspired someone to step back from academic competition in order to have more fun with learning. Of course, there is no reason one cannot do both. But in the end, that was his decision, not mine.
In my first job after university, I was assigned to a technical support team where part of my work involved running an installer to deploy a specific component of an e-banking product for customers, usually large banks. As I learnt to use the installer, I realised how fragile it was. The installer, written in Python, often failed because of incorrect assumptions about the target environment and almost always required some manual intervention to complete successfully. During my first week on the project, I spent much of my time stabilising the installer and writing a step-by-step user guide explaining how to use it. The result was well received by both my seniors and management. To my surprise, the user guide received more praise than the improvements I made to the installer itself. While the first few weeks were productive, I soon realised I would not find the work fulfilling for long. I wrote to management a few times to ask whether I could transfer to a team where I could work on something more substantial.
My emails were initially met with resistance. After several rounds of discussion, however, someone who had heard about my situation reached out and suggested a team whose manager might be interested in interviewing me. The team was based in a different city. I was young and willing to relocate wherever I could find good work, so I immediately agreed to the interview.
This was in 2006, when video conferencing was not yet common. On the day of the interview, the hiring manager called me on my office desk phone. He began by introducing the team, which was called Archie, short for architecture. The team developed and maintained the web framework and core architectural components on which the entire e-banking product was built. The product had existed long before open source frameworks such as Spring or Django came into existence, so features such as API routing, authentication and authorisation layers, cookie management, etc. were all implemented in-house as Java Servlets and JavaServer Pages (JSP). Since the software was used in banking environments, it also had to pass strict security testing and regular audits to minimise the risk of serious flaws.
The interview began well. He asked several questions related to software security, such as what SQL injection is and how it can be prevented or how one might design a web framework that mitigates cross-site scripting attacks. He also asked programming questions, most of which I answered pretty well. Towards the end, however, he asked how we could prevent MITM attacks. I had never heard the term, so I admitted that I did not know what MITM meant. He then asked, ‘Man in the middle?’ but I still had no idea what that meant or whether it was even a software engineering concept. He replied, ‘Learn everything you can about PKI and MITM. We need to build a digital signatures feature for one of our corporate banking products. That’s the first thing we’ll work on.’
Over the next few weeks, I studied RFCs and documentation related to public key infrastructure, public key cryptography standards and related topics. At first, the material felt intimidating, but after spending time each evening reading whatever relevant literature I could find, things gradually began to make sense. Concepts that initially seemed complex and overwhelming eventually felt intuitive and elegant. I relocated to the new city a few weeks later and delivered the digital signatures feature about a month after joining the team. We used the open source Bouncy Castle library to implement the feature. After that project, I worked on other parts of the product too. The most rewarding part was knowing that the code I was writing became part of a mature product used by hundreds of banks and millions of users. It was especially satisfying to see the work pass security testing and audits and be considered ready for release.
That was my first real engineering job. My manager also turned out to be an excellent mentor. Working with him helped me develop new skills and his encouragement gave me confidence that stayed with me for years. Nearly two decades have passed since then, yet the product is still in service and continues to be actively developed. In fact, in my current phase of life I sometimes encounter it as a customer. Occasionally, I open the browser’s developer tools to view the page source where I can still see traces of the HTML generated by code I wrote almost twenty years ago.
Around 2007 or 2008, I began working on a proof of concept for developing widgets for an OpenTV set-top box. The work involved writing code in a heavily trimmed-down version of C. One afternoon, while making good progress on a few widgets, I noticed that they would occasionally crash at random. I tried tracking down the bugs, but I was finding it surprisingly difficult to understand my own code. I had managed to produce some truly spaghetti code full of dubious pointer operations that were almost certainly responsible for the crashes, yet I could not pinpoint where exactly things were going wrong.
Ours was a small team of four people, each working on an independent proof of concept. The most senior person on the team acted as our lead and architect. Later that afternoon, I showed him my progress and explained that I was still trying to hunt down the bugs causing the widgets to crash. He asked whether he could look at the code. After going through it briefly and probably realising that it was a bit of a mess, he asked me to send him the code as a tarball, which I promptly did.
He then went back to his desk to study the code. I remember thinking that there was no way he was going to find the problem anytime soon. I had been debugging it for hours and barely understood what I had written myself; it was the worst spaghetti code I had ever produced. With little hope of a quick solution, I went back to debugging on my own.
Barely five minutes later, he came back to my desk and asked me to open a specific file. He then showed me exactly where the pointer bug was. It had taken him only a few minutes not only to read my tangled code but also to understand it well enough to identify the fault and point it out. As soon as I fixed that line, the crashes disappeared. I was genuinely in awe of his skill.
I have always loved computing and programming, so I had assumed I was already fairly good at it. That incident, however, made me realise how much further I still had to go before I could consider myself a good software developer. I did improve significantly in the years that followed and today I am far better at managing software complexity than I was back then.
In another project from that period, we worked on another set-top box platform that supported Java Micro Edition (Java ME) for widget development. One day, the same architect from the previous story asked whether I could add animations to the widgets. I told him that I believed it should be possible, though I’d need to test it to be sure. Before continuing with the story, I need to explain how the different stakeholders in the project were organised.
Our small team effectively played the role of the software vendor. The final product going to market would carry the brand of a major telecom carrier, offering direct-to-home (DTH) television services, with the set-top box being one of the products sold to customers. The set-top box was manufactured by another company. So the project was a partnership between three parties: our company as the software vendor, the telecom carrier and the set-top box manufacturer. The telecom carrier wanted to know whether widgets could be animated on screen with smooth slide-in and slide-out effects. That was why the architect approached me to ask whether it could be done.
I began working on animating the widgets. Meanwhile, the architect and a few senior colleagues attended a business meeting with all the partners present. During the meeting, he explained that we were evaluating whether widget animations could be supported. The set-top box manufacturer immediately dismissed the idea, saying, ‘That’s impossible. Our set-top box does not support animation.’ When the architect returned and shared this with us, I replied, ‘I do not understand. If I can draw a widget, I can animate it too. All it takes is clearing the widget and redrawing it at slightly different positions repeatedly. In fact, I already have a working version.’ I then showed a demo of the animated widgets running on the emulator.
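The trick is easy to sketch. Here is a minimal, hypothetical Java ME fragment of that clear-and-redraw loop (class name, coordinates and timings are all invented; the real widgets were more involved):

import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;

// Hypothetical sketch, not the original widget code.
class SlidingWidget extends Canvas implements Runnable {
    private int x = -80; // widget starts off-screen to the left

    public void run() {
        while (x < 20) {          // slide in toward the target position
            x += 4;               // move a few pixels per frame
            repaint();            // request a clear and redraw
            serviceRepaints();    // block until the repaint completes
            try { Thread.sleep(40); } catch (InterruptedException e) { return; }
        }
    }

    protected void paint(Graphics g) {
        g.setColor(0xFFFFFF);                      // clear the screen
        g.fillRect(0, 0, getWidth(), getHeight());
        g.setColor(0x2040FF);                      // redraw the widget
        g.fillRect(x, 40, 80, 30);                 // at its new position
    }
}

Starting it with new Thread(widget).start() produces the slide-in effect: clear, redraw a few pixels over, repeat.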
The following week, the architect attended another partners’ meeting where he shared updates about our animated widgets. I was not personally present, so what follows is second-hand information passed on by those who were there. I learnt that the set-top box company reacted angrily. For some reason, they were unhappy that we had managed to achieve results using their set-top box and APIs that they had officially described as impossible. They demanded that we stop work on animation immediately, arguing that our work could not be allowed to contradict their official position. At that point, the telecom carrier’s representative intervened and bluntly told the set-top box representative to just shut up. If the set-top box guy was furious, the telecom guy was even more so, ‘You guys told us animation was not possible and these people are showing that it is! You manufacture the set-top box. How can you not know what it is capable of?’
Meanwhile, I continued working on the proof of concept. It worked very well in the emulator, but I did not yet have access to the actual hardware. The device was still in the process of being shipped to us, so all my early proof-of-concepts ran on the emulator. The following week, the architect planned to travel to the set-top box company’s office to test my widgets on the real hardware.
At the time, I was quite proud of demonstrating results that even the hardware maker believed were impossible. When the architect eventually travelled to test the widgets on the actual device, a problem emerged. What looked like buttery smooth animation on the emulator appeared noticeably choppy on a real television. Over the next few weeks, I experimented with frame rates, buffering strategies and optimising the computation done in the rendering loop. Each week, the architect travelled for testing and returned with the same report: the animation had improved somewhat, but it still remained choppy. The modest embedded hardware simply could not keep up with the required computation and rendering. In the end, the telecom carrier decided that no animation was better than poor animation and dropped the idea altogether. So the set-top box developers turned out to be correct after all.
Back in 2009, after completing about a year at RSA Security, I began looking for work that felt more intellectually stimulating, especially projects involving mathematics and algorithms. I spoke with a few senior leaders about this, but nothing materialised for some time. Then one day, Dr Burt Kaliski, Chief Scientist at RSA Laboratories, asked to meet me to discuss my career aspirations. I have written about this in more detail in another post here: Good Blessings. I will summarise what followed.
Dr Kaliski met me and offered a few suggestions about the kinds of teams I might approach to find more interesting work. I followed his advice and eventually joined a team that turned out to be an excellent fit. I remained with that team for the next six years. During that time, I worked on parser generators, formal language specification and implementation, as well as indexing and querying engines of a petabyte-scale database. I learnt something new almost every day during those six years. It remains one of the most enjoyable periods of my career. I have especially fond memories of working on parser generators alongside remarkably skilled engineers from whom I learnt a lot.
Years later, I reflected on how that brief meeting with Dr Kaliski had altered the trajectory of my career. I realised I was not sure whether I had properly expressed my gratitude to him for the role he had played in shaping my path. So I wrote to thank him and explain how much that single conversation had influenced my life. A few days later, Dr Kaliski replied, saying he was glad to know that the steps I took afterwards had worked out well. Before ending his message, he wrote this heart-warming note:
This story comes from 2019. By then, I was no longer a twenty-something engineer just starting out. I was now a middle-aged staff engineer with years of experience building both low-level networking systems and database systems. Most of my work up to that point had been in C and C++. I was now entering a new phase of my career where I would be leading the development of microservices written in Go and Python. Like many people in this profession, computing has long been one of my favourite hobbies. So although my professional work for the previous decade had focused on C and C++, I had plenty of hobby projects in other languages, including Python and Go. As a result, switching gears from systems programming to application development was a smooth transition for me. I cannot even say that I missed working in C and C++. After all, who wants to spend their days occasionally chasing memory bugs in core dumps when you could be building features and delivering real value to customers?
In October 2019, during Cybersecurity Awareness Month, a Capture the Flag (CTF) event was organised at our office. The contest featured all kinds of technical puzzles, ranging from SQL injection challenges to insecure cryptography problems. Some challenges also involved reversing binaries and exploiting stack overflow issues.
I am usually rather intimidated by such contests. The whole idea of competitive problem-solving under time pressure tends to make me nervous. But one of my colleagues persuaded me to participate in the CTF. And, somewhat to my surprise, I turned out to be rather good at it. Within about eight hours, I had solved roughly 90% of the puzzles. I finished at the top of the scoreboard.
In my younger days, I was generally known to be a good problem solver. I was often consulted when thorny problems needed solving and I usually managed to deliver results. I also enjoyed solving puzzles. I had a knack for them and happily spent hours, sometimes days, working through obscure mathematical or technical puzzles and sharing detailed write-ups with friends of the nerd variety. Seen in that light, my performance at the CTF probably should not have surprised me. Still, I was very pleased. It was reassuring to know that I could still rely on my systems programming experience to solve obscure challenges.
During the course of the contest, my performance became something of a talking point in the office. Colleagues occasionally stopped by my desk to appreciate my progress in the CTF. Two much younger colleagues, both engineers I admired for their skill and professionalism, were discussing the results nearby. They were speaking softly, but I could still overhear parts of their conversation. Curious, I leaned slightly and listened a bit more carefully. I wanted to know what these two people, whom I admired a lot, thought about my performance.
One of them remarked on how well I was doing in the contest. The other replied, ‘Of course he is doing well. He has more than ten years of experience in C.’ At that moment, I realised that no matter how well I solved those puzzles, the result would naturally be credited to experience. In my younger days, when I solved tricky problems like these, people would sometimes call me smart. Now people simply saw it as a consequence of my experience. Not that I particularly care for labels such as ‘smart’ anyway, but it did make me realise how things had changed. I was now simply the person with many years of experience. Solving technical puzzles that involved disassembling binaries, tracing execution paths and reconstructing program logic was expected rather than remarkable.
I continue to sharpen my technical skills to this day. While my technical results may now simply be attributed to experience, I hope I can continue to make a good impression through my professionalism, ethics and kindness towards the people I work with. If those leave a lasting impression, that is good enough for me.
...
Read the original on susam.net »
I am an unusual beast. All my solo project games I’ve been making recently have been written in ‘vanilla’ C. Nobody does this. So I think it might be interesting to explain why I do.
Dry programming language opinions incoming, you have been warned.
There are some things that are non-negotiable. First off, it has to be reliable. I can’t afford to spend my time dealing with bugs I didn’t cause myself.
A lot of my games were written for flash, and now flash is dying. I do not
want to spend my time porting old games to new platforms, I want to make new games. I need a platform that I am confident will be around for a while.
Similarly I want to avoid tying myself to a particular OS, and ideally I’d like to have the option of developing for consoles. So it’s important that my programming language is portable, and that it has good portable library support.
The strongest thing on my desired, but not required, list is simplicity. I find looking up language features and quirky ‘clever’ APIs incredibly tiring. The ideal language would be one I can memorize, and then never have to look things up.
Dealing with bugs is a huge creative drain. I want to produce fewer bugs, so I want strict typing, strong warning messages and static code analysis. I want bugs to be easier to find, so I want good debuggers and dynamic analysis.
I’m not interested in high-def realism, but I do still care a bit about performance. Having more cycles available broadens the palette of things you can do. It’s particularly interesting to explore what is possible with modern, powerful computers if you aren’t pursuing fidelity.
Even more than that I care about the speed of the compiler. I am not a zen master of focus, and waiting 10+ seconds is wasteful, yes, but more importantly it breaks my flow. I flick over to Twitter and suddenly 5+ minutes are gone.
I am not an OOP convert. I’ve spent most of my professional life working with classes and objects, but the more time I spend, the less I understand why you’d want to combine code and data so rigidly. I want to handle data as data and write the code that best fits a particular situation.
C++ is still the most common language for writing games, and not without reason. I still do almost all of my contract work in it. I dislike it intensely.
C++ covers my needs, but fails my wants badly. It is desperately complicated. Despite decent tooling it’s easy to create insidious bugs. It is also slow to compile compared to C. It is high performance, and it offers features that C doesn’t have; but features I don’t want, and at a great complexity cost.
C# and Java have similar issues. They are verbose and complex beasts, and I am searching for a concise, simple creature. They both do a lot to railroad a programmer into a strongly OOP style that I am opposed to. As per most higher level languages they have a tendency to hide away complexity in a way that doesn’t actually prevent it from biting you.
I like Go a lot. In many ways it is C revisited, taking into account what has been learnt in the long years since C was released. I would like to use it, but there are big roadblocks that prevent me. The stop-the-world garbage collection is a big pain for games; stopping the world is something you can’t really afford to do. The library support for games is quite poor, and though you can wrap C libs without much trouble, doing so adds a lot of busy work. It is niche enough that I worry a little about long term relevance.
It would be nice to make things for the web, but it feels like a terrifyingly fast-moving environment. It is particularly scary with the death of flash. I really dislike JavaScript; it is so loose that I marvel that people are able to write big chunks of software in it. I have no interest in trying.
Haxe feels much more promising than most alternatives. If I do web stuff again I’ll be diving in here. There is some good library support. I am a little concerned by its relative youth: will it last? I don’t have much else to say about it though; I’ve only scratched the surface.
Some people just say screw it, I’ll write my own language, the language I want to use. I admire this, and sometimes I toy with the idea of doing the same. It feels like too much to throw away all existing library support and take full responsibility for future compatibility. It is also very difficult, and when it comes down to it I would rather be making games than programming languages.
C is dangerous, but it is reliable. A very sharp knife that can cut fingers as well as veg, but so simple it’s not too hard to learn to use it carefully.
It is fast, and when it comes to compilation I can’t think of anything faster.
It can be made to run on just about anything. Usually this is relatively easy. It is hard to imagine a time when this won’t be the case.
The library and tooling support is strong and ongoing.
I say this with some sadness, but it is still the language for me.
I absolutely DO NOT mean to say “hey, you should use C too”. I fully appreciate that preferences here are pretty specific and unusual. I have also already written more ‘vanilla’ C code than most, and this is certainly part of my comfort.
So yeah, that’s it :-)
...
Read the original on jonathanwhiting.com »
Rebecca Guy, senior policy manager at the Royal Society for the Prevention of Accidents, said: “Regular vision checks are a sensible way to reduce risk as we age, but the priority must be a system that supports people to drive safely for as long as possible, while ensuring timely action is taken when health or eyesight could put them or others in danger.”
...
Read the original on www.bbc.com »
The modern Olympics sell themselves on a simple premise: the whole world, watching the same moment, at the same time. On Friday night in Milan, that illusion fractured in real time.
When Team USA entered the San Siro during the parade of nations, the speed skater Erin Jackson led the delegation into a wall of cheers. Moments later, when cameras cut to US vice-president JD Vance and second lady Usha Vance, large sections of the crowd responded with boos. Not subtle ones, but audible and sustained ones. Canadian viewers heard them. Journalists seated in the press tribunes in the upper deck, myself included, clearly heard them. But as I quickly realized from a group chat with friends back home, American viewers watching NBC did not.
On its own, the situation might once have passed unnoticed. But the defining feature of the modern sports media landscape is that no single broadcaster controls the moment any more. CBC carried it. The BBC liveblogged it. Fans clipped it. Within minutes, multiple versions of the same happening were circulating online — some with boos, some without — turning what might once have been a routine production call into a case study in information asymmetry.
For its part, NBC has denied editing the crowd audio, although it is difficult to resolve why the boos so audible in the stadium and on other broadcasts were absent for US viewers. But in a broader sense, it is becoming harder, not easier, to curate reality when the rest of the world is holding up its own camera angles. And that raises an uncomfortable question as the United States moves toward hosting two of the largest sporting events on the planet: the 2026 men’s World Cup and the 2028 Los Angeles Olympics.
If a US administration figure is booed at the Olympics in Los Angeles, or a World Cup match in New Jersey or Dallas, will American domestic broadcasts simply mute or avoid mentioning the crowd audio? If so, what happens when the world feed, or a foreign broadcaster, shows something else entirely? What happens when 40,000 phones in the stadium upload their own version in real time?
The risk is not just that viewers will see through it. It is that attempts to manage the narrative will make American broadcasters look less credible, not more. Because the audience now assumes there is always another angle. Every time a broadcaster makes that trade — credibility for insulation — it is a trade audiences eventually notice.
There is also a deeper structural pressure behind decisions like this. The Trump era has been defined in part by sustained hostility toward media institutions. Broadcasters do not operate in a vacuum; they operate inside regulatory environments, political climates and corporate risk calculations. When presidents and their allies openly threaten or target networks, it is naive to pretend that has no downstream effect on editorial choices — especially in high-stakes live broadcasts tied to billion-dollar rights deals.
But there is a difference between contextual pressure and visible reality distortion. When global audiences can compare feeds in real time, the latter begins to resemble something else entirely: not editorial judgment, but narrative management. Which is why comparisons to Soviet-style state-controlled broadcasting models — once breathless rhetorical exaggerations — are starting to feel less hyperbolic.
The irony is that the Olympics themselves are built around the idea that sport can exist alongside political tension without pretending it does not exist. The International Olympic Committee’s own language — athletes should not be punished for governments’ actions — implicitly acknowledges that governments are part of the Olympic theater whether organizers like it or not.
Friday night illustrated that perfectly. American athletes were cheered, their enormous contingent given one of the most full-throated receptions of the night. The political emissaries were not universally welcomed. Both things can be true at once. Crowd dissent is not a failure of the Olympic ideal. In open societies, it is part of how public sentiment is expressed. Attempting to erase one side of that equation risks flattening reality into something audiences no longer trust. And if Milan was a warning shot, Los Angeles is the main event.
Since Donald Trump’s first term, American political coverage around sport has fixated on the micro-moments: Was the president booed or cheered? Did the broadcast show it? Did he attend or skip events likely to produce hostile crowds? The discourse has often felt like a Rorschach test, filtered through partisan interpretation and selective clips.
The LA Olympics will be something else entirely. There is no hiding from an opening ceremony for Trump. No ducking a stadium when the Olympic Charter requires the host country’s head of state to officially declare the Games open. No controlling how 200 international broadcasters carry the moment.
If Trump is still in the White House on 14 July 2028, one month after his 82nd birthday and in the thick of another heated US presidential campaign, he will stand in front of a global television audience as a key part of the opening ceremony. He will do so in California, in a political environment far less friendly than many domestic sporting venues he has appeared in over the past decade. And he will do it in a city synonymous with the political opposition, potentially in the back yard of the Democratic presidential candidate.
There will be some cheers. There will almost certainly be boos. There will be everything in between. And there will be no way to make them disappear. The real risk for American broadcasters is not that dissent will be visible. It is that audiences will start assuming anything they do not show is being hidden. In an era when trust in institutions is already fragile, that is a dangerous place to operate from.
The Olympics have always been political, whether through boycotts, protests, symbolic gestures or crowd reactions. What has changed is not the politics. It is the impossibility of containing the optics.
Milan may ultimately be remembered as a small moment — a few seconds of crowd noise during a long ceremony. But it also felt like a preview of the next phase of global sport broadcasting: one where narrative control is shared, contested and instantly verifiable. The world is watching. And this time, it is also recording.
...
Read the original on www.theguardian.com »
I’m generally pretty pro-AI with one major exception: agentic coding. My consistent impression is that agentic coding does not actually improve productivity and deteriorates the user’s comfort and familiarity with the codebase. I formed that impression from:
* Every time I use agentic coding tools I’m consistently unimpressed with the quality of the results.

* I allow interview candidates to use agentic coding tools, and candidates who did so consistently performed worse than other candidates, failing to complete the challenge or producing incorrect results. This was a huge surprise to me at first because I expected agentic coding to confer an unfair advantage but … nope!

* Studies like the Becker study and the Shen study show that users of agentic coding perform no better and sometimes worse when you measure productivity in terms of fixed outcomes rather than code velocity/volume.
I don’t believe agentic coding is a lost cause, but I do believe agentic coding in its present incarnation is doing more harm than good to software development. I also believe it is still worthwhile to push on the inadequacies of agentic coding so that it empowers developers and improves code quality.
However, in this post I’m taking a different tack: I want to present other ways to leverage AI for software development. I believe that agentic coding has so captured the cultural imagination that people are sleeping on other good and underexplored solutions to AI-assisted software development.
I like to design tools and interfaces from first principles rather than reacting to industry trends/hype and I’ve accrued quite a few general design principles from over a decade of working in DevProd and also an even longer history of open source projects and contributions.
One of those design principles is my personal “master cue”, which is:
A good tool or interface should keep the user in a flow state as long as possible
This principle isn’t even specific to AI-assisted software development, and yet still highlights why agentic coding sometimes misses the mark. Both studies and developer testimonials show that agentic coding breaks flow and keeps developers in an idle/interruptible holding pattern more than ordinary coding.
For example, the Becker study took screen recordings and saw that idle time approximately doubled.
I believe we can improve AI-assisted coding tools (agentic or not) if we set our north star to “preserve flow state”.
Calm technology is a design discipline that promotes flow state in tools that we build. The design principles most relevant to coding are:
tools should minimize demands on our attention
Interruptions and intrusions on our attention break us out of flow state.
tools should be built to be “pass-through”
A tool is not meant to be the object of our attention; rather the tool should reveal the true object of our attention (the thing the tool acts upon), rather than obscuring it. The more we use the tool the more the tool fades into the background of our awareness while still supporting our work.
tools should create and enhance calm (thus the name: calm technology)
Engineers already use “calm” tools and interfaces as part of our work and here are a couple of examples you’re probably already familiar with:
IDEs (like VSCode) can support inlay hints that sprinkle the code with useful annotations for the reader, such as inferred type annotations.
These types of inlay hints embody calm design principles because:
they minimize demands on our attention
They exist on the periphery of our attention, available for us if we’re interested but unobtrusive if we’re not interested.
they are built to be “pass-through”
They don’t replace or substitute the code that we are editing. They enhance the code editing experience but the user is still in direct contact with the edited code. The more we use type hints the more they fade into the background of our awareness and the more the code remains the focus of our attention.
They promote a sense of calm by informing our understanding of the code passively. As one of the Calm Technology principles puts it: “Technology can communicate, but doesn’t need to speak”.
Tools like VSCode or GitHub’s pull request viewer let you preview changes to the file tree at a glance.
You might think to yourself “this is a very uninteresting thing to use as an example” but that’s exactly the point. The best tools (designed with the principles of calm technology) are pervasive and boring things that we take for granted (like light switches) and that have faded so strongly into the background of our attention that we forget they even exist as a part of our daily workflow (also like light switches).
They’re there if we need the information, but easy to ignore (or even forget they exist) if we don’t use them.
they are built to be “pass-through”
When we interact with the file tree viewer we are interacting directly with the filesystem and the interaction between the representation (the viewer) and the reality (the filesystem) feels direct, snappy, and precise. The more we use the viewer the more the representation becomes indistinguishable from the reality in our minds.
We do not need to constantly interact with the file tree to gather up-to-date information about our project structure. It passively updates in the background as we make changes to the project and those updates are unobtrusive and not attention-grabbing.
We can think about the limitations of chat-based agentic coding tools through this same lens:
they place high demands on our attention
The user has to either sit and wait for the agent to report back or do something else and run the LLM in a semi-autonomous manner. However, even semi-autonomous sessions prevent the user from entering flow state because they have to remain interruptible.
they are not built to be “pass-through”
Chat agents are a highly mediated interface to the code which is indirect (we interact more with the agent than the code), slow (we spend a lot of time waiting), and imprecise (English is a dull interface).
The user needs to constantly stimulate the chat to gather new information or update their understanding of the code (the chat agent doesn’t inform the user’s understanding passively or quietly). Chat agents are also fine-tuned to maximize engagement.
One of the earliest examples of an AI coding assistant that begins to model calm design principles is the OG AI-assistant: GitHub Copilot’s support for inline suggestions, with some caveats I’ll go into.
This does one thing really well:
it’s built to be “pass-through”
The user is still interacting directly with the code and the suggestions are reasonably snappy. The user can also ignore or type through the suggestion.
However, by default these inline suggestions violate other calm technology principles:
By default Copilot presents the suggestions quite frequently and the user has to pause what they’re doing to examine the output of the suggestion. After enough times the user begins to condition themselves into regularly pausing and waiting for a suggestion which breaks them out of a flow state. Now instead of being proactive the user’s been conditioned by the tool to be reactive.
GitHub Copilot’s inline suggestion interface is visually busy and intrusive. Even if the user ignores every suggestion the effect is still disruptive: suggestions appear on the user’s screen in the center of their visual focus and the user has to decide on the spot whether to accept or ignore them before proceeding further. The user also can’t easily passively absorb information presented in this way: understanding each suggestion requires the user’s focused attention.
… buuuuut these issues are partially fixable by disabling the automatic suggestions and requiring them to be explicitly triggered by Alt + \. However, unfortunately that also disables the next feature, which I like even more:
Next edit suggestions (also from GitHub Copilot)
Next edit suggestions are a related GitHub Copilot feature that displays related follow-up edits throughout the file/project and lets the user cycle between them and possibly accept each suggested change. They behave like a “super-charged find and replace”.
These suggestions do an amazing job of keeping the user in a flow state:
they minimize demand on the user’s attention
The cognitive load on the user is smaller than inline suggestions because the suggestions are more likely to be bite-sized (and therefore easier for a human to review and accept).
Just like inline suggestions, next edit suggestions still keep the user in close contact with the code they are modifying.
Suggestions are presented in an unobtrusive way: they aren’t dumped in the dead center of the user’s attention and they don’t demand immediate review. They exist on the periphery of the user’s attention as code suggestions that the user can ignore or focus on at their leisure.
I believe there is a lot of untapped potential in AI-assisted coding tools and in this section I’ll sketch a few small examples of how we can embody calm technology design principles in building the next generation of coding tools.
You could browse a project by a tree of semantic facets. For example, if you were editing the Haskell implementation of Dhall, the tree viewer might look like the prototype I hacked up.
The goal here is to not only provide a quick way to explore the project by intent, but to also improve the user’s understanding of the project the more they use the feature. “String interpolation regression” is so much more informative than dhall/tests/format/issue2078A.dhall.
Also, the above video is based on a real tool and not just a mock. You can find the code I used to generate that tree of semantics facets here and I’ll write up another post soon walking through how that code works.
You could take an editor session, a diff, or a pull request and automatically split it into a series of more focused commits that are easier for people to review. This is one of the cases where the AI can reduce human review labor (most agentic coding tools create more human review labor).
There is some prior art here but this is still a nascent area of development.
You could add two new tools to the user’s toolbar or context menu: “Focus on…” and “Edit as…”.
“Focus on…” would allow the user to specify what they’re interested in changing and present only files and lines of code related to their specified interest. For example, if they want to focus on “command line options” then only related files and lines of code would be shown in the editor and other lines of code would be hidden/collapsed/folded. This would basically be like “Zen mode” but for editing a feature domain of interest.
“Edit as…” would allow the user to edit the file or selected code as if it were a different programming language or file format. For example, someone who was new to Haskell could edit a Haskell file “as Python” and then after finishing their edits the AI attempts to back-propagate their changes to Haskell. Or someone modifying a command-line parser could edit the file “as YAML” and be presented with a simplified YAML representation of the command line options which they could modify to add new options.
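To make “Edit as…” concrete: a command-line parser viewed “as YAML” might present something like this (an entirely invented example, not output from any real tool):

# Hypothetical "Edit as YAML" view of a CLI parser (invented example)
options:
  - name: --output
    short: -o
    type: path
    description: where to write the result
  - name: --verbose
    type: flag
    description: print progress information

The user adds a new entry to that list, and the AI back-propagates the change into the real parser code.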
This is obviously not a comprehensive list of ideas, but I wrote this to encourage people to think of more innovative ways to incorporate AI into people’s workflows besides just building yet another chatbot. I strongly believe that chat is the least interesting interface to LLMs and AI-assisted software development is no exception to this.
Copyright © 2026 Gabriella Gonzalez. This work is licensed under CC BY-SA 4.0
...
Read the original on haskellforall.com »
If I can lose access to something as simple as Notepad, then Microsoft has probably taken cloud integrations too far.
A couple of weeks ago, I found that I couldn’t open Notepad on my desktop PC. It wasn’t because Windows 11 had crashed, but rather, Microsoft told me it wasn’t “available in (my) account”. It turned out that an error (0x803f8001) with Microsoft Store’s licensing service stopped me from opening a few first-party apps, including the Snipping Tool.
Yes, even the app I usually use to screenshot error messages was busted. Ironic. Now, I’m usually a fairly level-headed Windows enthusiast who can relate to users who both love and loathe Microsoft’s operating system, but I couldn’t open Notepad.exe — are we serious?
You’ve probably all seen the memes: it’s called “This PC” now, and not “My Computer” anymore. It’s usually easy to laugh off as a disgruntled conspiracy, but I can see why it trends when the themes of Software as a Service (SaaS) are creeping into the most basic Windows apps.
After all, Notepad is supposed to be the absolute barebones, most ultra-basic app in the entire OS. Well, it was, before Microsoft added Copilot and users started looking for a way to disable the unusual AI addition. Sure, you can still type C:\Windows\notepad.exe into ‘Run’ with Windows + R for a legacy fallback, but many perhaps wouldn’t know about it.
I’m still a Windows guy, and I always will be. Nevertheless, I can’t ignore that Windows 11 regularly feels less like an operating system and more like a thin client; just a connection to Microsoft’s cloud with fewer options for you to act as the administrator of your own PC.
To be clear, I don’t have major problems with the default, out-of-box experience (OOBE) of Windows 11. In fact, it doesn’t take me long to make changes when installing fresh copies on new desktop builds. Default pins on the Start menu don’t matter because I barely use it, and disabling ads is straightforward enough. The major points pretty much boil down to:
* Uninstalling OneDrive: The web app is fine for manual backups, but I definitely don’t want my files automatically synced.
* Creating a local account: Microsoft keeps making it harder, but I’ll always use workarounds.
After that, I don’t take issue with the normal desktop — unless something unexpectedly breaks. Our Editor-in-Chief, Daniel Rubino, said it best, “People don’t hate change. They hate surprise.” It was certainly a surprise to lose access to my plain text editor, loaded up with more than what an extended (Windows + V) clipboard would be useful for. Nobody asked for this.
So, is the solution to look for open-source Notepad clones? Maybe for some enthusiasts, but that’s just another app to add to a growing Winget list, and I’d rather Microsoft stay true to its word about walking back Windows 11’s AI overload. I can’t abide comments on social platforms suggesting people “just use a debloater” on a new Windows PC, either — we shouldn’t have to.
That, and I generally avoid recommending Windows debloat scripts from GitHub to anyone in the first place. Windows can be adjusted to your liking if you follow the right guides, and while you can inspect open-source code for yourself and generally trust some well-respected coders on that platform, it’s a strange solution that needn’t exist.
I’m not naive enough to think Windows is Microsoft’s top priority. Cloud computing and Microsoft 365 are far more valuable than a consumer-level operating system, though Microsoft does have a staggering lead over the competition — one that would be absurd to jeopardize.
Still, my problems with Notepad and Snipping Tool are a raindrop in the Pacific Ocean of Microsoft’s broader plans, but I don’t want first-party apps asking for authentication from its servers — nor do I want our readers to download the first debloat script they find on the web.
There are justifications for Microsoft adding elements of its cloud business to Windows, but I wish it wouldn’t force it in a way that locks people into an online-only experience. My PC should be entirely functional without an Internet connection — especially when I need a few scribbles from Notepad.
AI is undoubtedly the future, at least in some capacity. Even if Satya Nadella says artificial intelligence needs to prove its worth, there’s no believable chance that it’s going away, especially now that Copilot is so deeply ingrained in practically everything Microsoft owns.
Still, if online-only services are all active by default and Windows 12 is ultimately an agentic AI OS, I wouldn’t be surprised if more people stick with a debloated Windows 11, just as others did with Windows 10. Do you think the next version of Windows will return some control back to the user, or will it be even more Internet-dependent?
Join us on Reddit at r/WindowsCentral to share your insights and discuss our latest news, reviews, and more.
...
Read the original on www.windowscentral.com »