10 interesting stories served every morning and every evening.
With this release, we’re showing what happens when you push modern web standards — HTML, CSS, and JS — to their peak:
This entire app is lighter than a React/ShadCN button:
See benchmark and details here ›
Here’s the same app, now with a Rust computation engine and Event Sourcing for instant search and other operations over 150,000 records — far past the point where the JS version of the engine choked on recursive calls over the records.
This demo is here ›
Nue crushes HMR and build speed records and sets you up with a millisecond feedback loop for your everyday VSCode/Sublime file-save operations:
Immediate feedback for design and component updates, preserving app state
This is a game-changer for Rust, Go, and JS engineers stuck wrestling with React idioms instead of leaning on timeless software patterns. Nue emphasizes a model-first approach, delivering modular design with simple, testable functions, true static typing, and minimal dependencies. Nue is a liberating experience for system devs whose skills can finally shine in a separated model layer.
This is an important shift for design engineers bogged down by React patterns and 40,000+ line design systems. Build radically simpler systems with modern CSS (@layers, variables, calc()) and take control of your typography and whitespace.
This is a wake-up call for UX engineers tangled in React hooks and utility class walls instead of owning the user experience. Build apps as light as a React button to push the web — and your skills — forward.
Nue is a web framework focused on web standards, currently in active development. We aim to reveal the hidden complexity that’s become normalized in modern web development. When a single button outweighs an entire application, something’s fundamentally broken.
Nue drives the inevitable shift. We’re rebuilding tools and frameworks from the ground up with a cleaner, more robust architecture. Our goal is to restore the joy of web development for all key skill sets: frontend architects, design engineers, and UX engineers.
...
Read the original on nuejs.org »
...
Read the original on oldvcr.blogspot.com »
The UN reports that less than 25% of global e-waste is properly collected and recycled.
...
Read the original on techoreon.com »
...
Read the original on secondlifestorage.com »
A decorated World War Two code breaker who spent her youth deciphering enemy messages at Bletchley Park has died at the age of 101. Charlotte “Betty” Webb MBE - who was among the last surviving Bletchley code breakers - died on Monday night, the Women’s Royal Army Corps Association confirmed. Mrs Webb, from Wythall in Worcestershire, joined operations at the Buckinghamshire base at the age of 18, later going on to help with Japanese codes at The Pentagon in the US. She was awarded France’s highest honour - the Légion d’Honneur - in 2021. The Women’s Royal Army Corps Association described Mrs Webb as a woman who “inspired women in the Army for decades”.
Bletchley Park Trust CEO Iain Standen said Mrs Webb will not only be remembered for her work but “also for her efforts to ensure that the story of what she and her colleagues achieved is not forgotten”. “Betty’s passion for preserving the history and legacy of Bletchley Park has undoubtedly inspired many people to engage with the story and visit the site,” he said in a statement. Tributes to Mrs Webb have begun to be posted on social media, including one from historian and author Dr Tessa Dunlop, who said she was with her in her final hours. Describing Mrs Webb as “the very best”, she said on X: “She is one of the most remarkable women I have ever known.”
Mrs Webb told the BBC in 2020 that she had “never heard of Bletchley”, Britain’s wartime code-breaking centre, before starting work there as a member of the ATS, the Auxiliary Territorial Service. She had been studying at a college near Shrewsbury, Shropshire, when she volunteered as she said she and others on the course felt they “ought to be serving our country rather than just making sausage rolls”. Her mother had taught her to speak German as a child, and ahead of her posting she remembered being “taken into the mansion [at Bletchley] to read the Official Secrets Act”. “I realised that from then on there was no way that I was going to be able to tell even my parents where I was and what I was doing until 1975 [when restrictions were lifted],” she recalled. She would tell the family with whom she lodged that she was a secretary.
When WW2 ended in Europe in May 1945, she went to work at the Pentagon after spending four years at Bletchley, which with its analysis of German communications had served as a vital cog in the Allies’ war machine. At the Pentagon she would paraphrase and transcribe already-decoded Japanese messages. She said she was the only member of the ATS to be sent to Washington, describing it as a “tremendous honour”. Mrs Webb, in 2020, recalled she had had no idea the Americans planned to end the conflict by dropping atomic weapons on Japanese cities, describing the weapons’ power as “utterly awful”. After the Allies’ final victory, it took Mrs Webb several months to organise return passage to the UK, where she worked as a secretary at a school in Shropshire. The head teacher there had also worked at Bletchley so knew of her professionalism, whereas other would-be employers, she recalled, were left stumped by her being unable to explain - due to secrecy requirements - her previous duties. More than half a century later, in 2021, Mrs Webb was one of 6,000 British citizens to receive the Légion d’Honneur, following a decision by President François Hollande in 2014 to recognise British veterans who helped liberate France.
In 2023, she and her niece were among 2,200 people from 203 countries invited to Westminster Abbey to see King Charles III’s coronation. The same year she celebrated her 100th birthday at Bletchley Park with a party. She and her guests were treated to a fly-past by a Lancaster bomber. She said at the time: “It was for me - it’s unbelievable isn’t it? Little me.”
...
Read the original on www.bbc.com »
If someone had told me a few months ago I’d be playing with .NET again after a 15+ year hiatus I probably would have laughed at this. Early on in my career I played with .NET and Java, and even though .NET had done some things better than Java (as it had the opportunity to learn from some early Java mistakes), I quickly settled on Java as it was a truly portable environment.
I guess everyone who reads my blog knows that in the past few years I’ve been playing on and off with OCaml and I think it’s safe to say that it has become one of my favorite programming languages - alongside the likes of Ruby and Clojure. My work with OCaml drew my attention recently to F#, an ML targeting .NET, developed by Microsoft. The functional counterpart of the (mostly) object-oriented C#. The newest ML language created…
Unfortunately, no one can be told what the Matrix is. You have to see it for yourself.
Before we start discussing F#, I guess we should first answer the question “What is F#?”. I’ll borrow a bit from the official page to answer it.
F# is a universal programming language for writing succinct, robust and performant code.
F# allows you to write uncluttered, self-documenting code, where your focus remains on your problem domain, rather than the details of programming.
It does this without compromising on speed and compatibility - it is open-source, cross-platform and interoperable.
Trivia: F# is the language that made the pipeline operator (|>) popular.
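It lets you thread a value through a chain of functions so the code reads top to bottom rather than inside out. A tiny illustrative example:

```fsharp
// Square the even numbers from 1 to 10 and sum them, step by step.
[1 .. 10]
|> List.filter (fun n -> n % 2 = 0)
|> List.map (fun n -> n * n)
|> List.sum
|> printfn "Sum of squared evens: %d"
```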
A full set of features is documented in the F# language guide.
F# 1.0 was officially released in May 2005 by Microsoft Research. It was initially developed by Don Syme at Microsoft Research in Cambridge and evolved from an earlier research project called “Caml.NET,” which aimed to bring OCaml to the .NET platform. F# was officially moved from Microsoft Research to Microsoft (as part of their developer tooling division) in 2010 (timed with the release of F# 2.0).
F# has been steadily evolving since those early days, and the most recent release, F# 9.0, arrived in November 2024. It seems only appropriate that F# would come to my attention in the year of its 20th birthday!
There were several reasons why I wanted to try out F#:
* .NET became open-source and portable a few years ago and I wanted to check the progress on that front
* I was curious if F# offers any advantages over OCaml
* I’ve heard good things about the F# tooling (e.g. Rider and Ionide)
* I like playing with new programming languages
Below you’ll find my initial impressions for several areas.
As F# is a member of the ML family of languages, the syntax won’t surprise anyone familiar with OCaml. As relatively few people are familiar with OCaml, though, I’ll mention that Haskell programmers will also feel right at home with the syntax. And Lispers.
For everyone else - it’d be fairly easy to pick up the basics.
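A minimal, purely illustrative snippet (a record, a function, some pattern matching):

```fsharp
// A record, a function and a match expression: the ML basics.
type Person = { Name: string; Age: int }

let greet person =
    match person.Age with
    | a when a < 18 -> $"Hi {person.Name}!"
    | _ -> $"Hello, {person.Name}."

greet { Name = "Ada"; Age = 36 } |> printfn "%s"
```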
Nothing shocking here, right?
Here’s another slightly more involved example:
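Something along these lines, a small made-up sales report script:

```fsharp
// Sales.fsx - a tiny ad-hoc script: group some records and print a summary.
type Sale = { Item: string; Quantity: int; Price: decimal }

let sales =
    [ { Item = "apples"; Quantity = 3; Price = 1.20m }
      { Item = "pears";  Quantity = 2; Price = 1.50m }
      { Item = "apples"; Quantity = 5; Price = 1.20m } ]

sales
|> List.groupBy (fun s -> s.Item)
|> List.map (fun (item, group) ->
    item, group |> List.sumBy (fun s -> decimal s.Quantity * s.Price))
|> List.iter (fun (item, total) -> printfn $"{item}: {total}")
```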
Why don’t you try saving the snippet above in a file called Sales.fsx and running it like this:
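```sh
# Requires the .NET SDK; "dotnet fsi" runs F# script (.fsx) files directly.
dotnet fsi Sales.fsx
```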
Now you know that F# is a great choice for ad-hoc scripts! Also, running dotnet fsi by itself will pop an F# REPL where you can explore the language at your leisure.
I’m not going to go into great detail here, as much of what I wrote about OCaml here applies to F# as well. I’d also suggest this quick tour of F# to get a better feel for its syntax.
Tip: Check out the F# cheatsheet if you’d like to see a quick syntax reference.
One thing that made a good impression on me is the language designers’ focus on making F# approachable to newcomers, by providing a lot of small quality-of-life improvements for them. Below are a few examples that probably don’t mean much to you, but would mean something to people familiar with OCaml:
I guess some of those might be controversial, depending on whether you’re an ML language purist or not, but in my book anything that makes ML more popular is a good thing.
Did I also mention it’s easy to work with unicode strings and regular expressions?
Often people say that F# is mostly a staging ground for future C# features, and perhaps that’s true. I haven’t observed both languages long enough to have my own opinion on the subject, but I was impressed to learn that async/await (of C# and later JavaScript fame) originated in… F# 2.0.
It all changed in 2012 when C#5 launched with the introduction of what has now become the popularized async/await keyword pairing. This feature allowed you to write code with all the benefits of hand-written asynchronous code, such as not blocking the UI when a long-running process started, yet read like normal synchronous code. This async/await pattern has now found its way into many modern programming languages such as Python, JS, Swift, Rust, and even C++.
F#’s approach to asynchronous programming is a little different from async/await but achieves the same goal (in fact, async/await is a cut-down version of F#’s approach, which was introduced a few years previously, in F# 2).
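For a feel of what that looks like, here is a small illustrative sketch using F#’s async computation expression (the URL is just an example):

```fsharp
open System.Net.Http

// An async workflow: let! awaits without blocking, much like await in C#/JS.
let fetchLength (url: string) =
    async {
        use client = new HttpClient()
        let! body = client.GetStringAsync(url) |> Async.AwaitTask
        return body.Length
    }

fetchLength "https://fsharp.org" |> Async.RunSynchronously |> printfn "%d characters"
```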
Time will tell what will happen, but I think it’s unlikely that C# will ever be able to fully replace F#.
I’ve also found this encouraging comment from 2022 suggesting that Microsoft might be willing to invest more in F#:
Some good news for you. After 10 years of F# being developed by 2.5 people internally and some random community efforts, Microsoft has finally decided to properly invest in F# and created a full-fledged team in Prague this summer. I’m a dev in this team, just like you I was an F# fan for many years so I am happy things got finally moving here.
Looking at the changes in F# 8.0 and F# 9.0, it seems the new full-fledged team has done some great work!
It’s hard to assess the ecosystem around F# after such a brief period, but overall it seems to me that there are fairly few “native” F# libraries and frameworks out there and most people rely heavily on the core .NET APIs and many third-party libraries and frameworks geared towards C#. That’s a pretty common setup when it comes to hosted languages in general, so nothing surprising here as well.
If you’ve ever used another hosted language (e.g. Scala, Clojure, Groovy) then you probably know what to expect.
Awesome F# keeps track of popular F# libraries, tools and frameworks. I’ll highlight here the web development and data science libraries:
* Giraffe: A lightweight library for building web applications using ASP.NET Core. It provides a functional approach to web development.
* Suave: A simple and lightweight web server library with combinators for routing and task composition. (Giraffe was inspired by Suave)
* Saturn: Built on top of Giraffe and ASP.NET Core, it offers an MVC-style framework inspired by Ruby on Rails and Elixir’s Phoenix.
* Bolero: A framework for building client-side applications in F# using WebAssembly and Blazor.
* Fable: A compiler that translates F# code into JavaScript, enabling integration with popular JavaScript ecosystems like React or Node.js.
* Elmish: A model-view-update (MVU) architecture for building web UIs in F#, often used with Fable.
* SAFE Stack: An end-to-end, functional-first stack for building cloud-ready web applications. It combines technologies like Saturn, Azure, Fable, and Elmish for a type-safe development experience.
* Deedle: A library for data manipulation and exploratory analysis, similar to pandas in Python.
* FsLab: A collection of libraries tailored for data science, including visualization and statistical tools.
I haven’t played much with any of them at this point yet, so I’ll reserve any feedback and recommendations for some point in the future.
The official documentation is pretty good, although I find it kind of weird that some of it is hosted on Microsoft’s site and the rest is on https://fsharp.org/ (the site of the F# Software Foundation).
I really liked the following parts of the documentation:
https://fsharpforfunandprofit.com/ is another good learning resource. (even if it seems a bit dated)
F# has a somewhat troubled dev tooling story, as historically support for F# was great only in Visual Studio, and somewhat subpar elsewhere. Fortunately, the tooling story has improved a lot in the past decade:
In 2014 a technical breakthrough was made with the creation of the FSharp.Compiler.Service (FCS) package by Tomas Petricek, Ryan Riley, and Dave Thomas with many later contributors. This contains the core implementation of the F# compiler, editor tooling and scripting engine in the form of a single library and can be used to make F# tooling for a wide range of situations. This has allowed F# to be delivered into many more editors, scripting and documentation tools and allowed the development of alternative backends for F#. Key editor community-based tooling includes Ionide, by Krzysztof Cieślak and contributors, used for rich editing support in the cross-platform VSCode editor, with over 1M downloads at time of writing.
I’ve played with the F# plugins for several editors:
Overall, Rider and VS Code provide the most (and the most polished) features, but the other options were quite usable as well. That’s largely due to the fact that the F# LSP server fsautocomplete (naming is hard!) is quite robust and any editor with good LSP support gets a lot of functionality for free.
Still, I’ll mention that I found the tooling lacking in some regards:
* fsharp-mode doesn’t use TreeSitter (yet) and doesn’t seem to be very actively developed (looking at the code - it seems it was derived from caml-mode)
* Zed’s support for F# is quite spartan
* In VS Code, shockingly, expanding and shrinking the selection is broken, which is quite odd for what is supposed to be the flagship editor for F#
I’m really struggling with VS Code’s keybindings (too many modifier keys and function keys for my taste) and editing model, so I’ll likely stick with Emacs going forward. Or I’ll finally spend more quality time with neovim!
It seems that everyone is using the same code formatter (Fantomas), including the F# team, which is great! The linter story in F# is not as great (it seems the only popular linter, FSharpLint, is abandonware these days), but when your compiler is so good, you don’t really need a linter as much.
Oh, well… It seems that Microsoft are not really particularly invested in supporting the tooling for F#, as pretty much all the major projects in this space are community-driven.
Using AI coding agents (e.g. Copilot) with F# worked pretty well, but I didn’t spend much time on this front.
At the end of the day any editor will likely do, as long as you’re using LSP.
By the way, I had an interesting observation while programming in F# (and OCaml for that matter) - that when you’re working with a language with a really good type system you don’t really need that much from your editor. Most of the time I’m perfectly happy with just some inline type information (e.g. something like CodeLenses), auto-completion and the ability to easily send code to fsi. Simplicity continues to be the ultimate sophistication…
Other tools that should be on your radar are:
* Paket - Paket is a dependency manager for .NET projects. Think of it as something like bundler, npm or pip, but for .NET’s NuGet package ecosystem. (See the sketch after this list.)
* FAKE - A DSL for build tasks and more, where you can use F# to specify the tasks. Somewhat similar to Ruby’s rake. Some people claim that’s the easiest way to sneak F# into an existing .NET project.
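To give a rough idea of Paket, a minimal paket.dependencies file looks something like this (the packages listed are just examples):

```
source https://api.nuget.org/v3/index.json

nuget FSharp.Data
nuget Newtonsoft.Json
```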
Given the depth and breadth of .NET - I guess the sky is the limit for you!
Seems to me that F# will be a particularly good fit for data analysis and manipulation, because of features like type providers.
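As a rough sketch of why that is, here’s what the CsvProvider from the community FSharp.Data package looks like, with a made-up inline sample standing in for a real data source:

```fsharp
#r "nuget: FSharp.Data"
open FSharp.Data

// The provider inspects the sample at compile time and generates a typed schema,
// so Item and Amount below are statically checked properties, not stringly-typed lookups.
type Sales = CsvProvider<"Item,Amount\napples,12.50\npears,8.00">

Sales.GetSample().Rows
|> Seq.sumBy (fun row -> row.Amount)
|> printfn "Total: %M"
```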
Probably a good fit for backend services and even full-stack apps, although I haven’t really played with the F# first solutions in this space yet.
Fable and Elmish make F# a viable option for client-side programming and might offer another easy way to sneak F# into your day-to-day work.
Note: Historically, Fable has been used to target JavaScript but since Fable 4, you can also target other languages such as TypeScript, Rust, Python, and more.
Here’s how easy it is to transpile an F# codebase into something else:
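Roughly like this, as I understand it (Fable ships as a .NET tool; the project path and target below are placeholders, and --lang selects the output language, with JavaScript as the default):

```sh
dotnet tool install -g fable
dotnet fable src/App.fsproj --lang typescript
```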
My initial impression of the community is that it’s fairly small, perhaps even smaller than that of OCaml. The F# Reddit and Discord (the one listed on Reddit) seem like the most active places for F# conversations. There’s supposed to be some F# Slack as well, but I couldn’t get an invite for it. (seems the automated process for issuing those invites has been broken for a while)
I’m still not sure what role Microsoft plays in the community, as I haven’t seen much from them overall.
For me, a small community is not really a problem, as long as the community is vibrant and active. Also - I’ve noticed I always feel more connected to smaller communities. Moving from Java to Ruby back in the day felt like night and day as far as community engagement and sense of belonging go.
I didn’t find many books and community sites/blogs dedicated to F#, but I didn’t really expect to in the first place.
The most notable community initiatives I discovered were:
* Amplifying F# - an effort to promote F# and to get more businesses involved with it
* F# for Fun and Profit - a collection of tutorials and essays on F#
* F# Lab - the community-driven toolkit for data science in F#
* F# Weekly - a weekly newsletter about the latest developments in the world of F#
Seems to me that more can be done to promote the language and engage new programmers and businesses with it, although that’s never easy 20 years into the existence of some project. I continue to be somewhat puzzled as to why Microsoft doesn’t market F# more, as I think it could be a great marketing vehicle for them.
All in all - I don’t feel qualified to comment much on the F# community at this point.
Depending on the type of person you are, you may or may not care about a programming language’s “popularity”. People often ask me why I spend a lot of time with languages that are unlikely to ever result in job opportunities for me, e.g.:
Professional opportunities are important, of course, but so are:
* having fun (and the F in F# stands for “fun”)
* challenging yourself to think and work differently
That being said, F# is not a popular language by most conventional metrics. It’s not highly ranked on TIOBE, StackOverflow or most job boards. But it’s also not less popular than most “mainstream” functional programming languages. The sad reality is that functional programming is still not mainstream and perhaps it will never be.
A few more resources on the subject:
* How Popular is F# in 2024
* Here’s also a video for the article above
...
Read the original on batsov.com »
Get the hell out of the LLM as soon as possible
Don’t let an LLM make decisions or execute business logic: they suck at that. I build NPCs for an online game, and I get asked a lot “How did you get ChatGPT to do that?” The answer is invariably: “I didn’t, and also you shouldn’t”.
In most applications, the LLM should be only the user interface between the user and an API into your application logic. The LLM shouldn’t be executing any logic. Get the hell out of the LLM as soon as possible, and stay out as long as you can.
This is best illustrated by a contrived example: you want to write a chess-playing bot you access over WhatsApp. The user sends a description of what they want to do (“use my bishop to take the knight”), and the bot plays against them.
Could you get the LLM to be in charge of maintaining the state of the chess board and playing convincingly? Possibly, maybe. Would you? Hell no, for some intuitive reasons:
Performance: It’s impressive that LLMs might be able to play chess at all, but they suck at it (as of 2025-04-01). A specialized chess engine is always going to be a faster, better, cheaper chess player. Even modern chess engines like Stockfish that incorporate neural networks are still purpose-built specialized systems with well-defined inputs and evaluation functions - not general-purpose language models trying to maintain game state through text.
Debugging and adjusting: It’s impossible to reason about and debug why the LLM made a given decision, which means it’s very hard to change how it makes those decisions if you need to tweak them. You don’t understand the journey it took through the high-dimensional semantic space to get to your answer, and it’s really poor at explaining it too. Even purpose-built neural networks like those in chess engines can be challenging for observability, and a general LLM is a nightmare, despite Anthropic’s great strides in this area
And the rest…: testing LLM outputs is much harder than unit-testing known code-paths; LLMs are much worse at math than your CPU; LLMs are insufficiently good at picking random numbers; version-control and auditing becomes much harder; monitoring and observability gets painful; state management through natural language is fragile; you’re at the mercy of API rate limits and costs; and security boundaries become fuzzy when everything flows through prompts.
The chess example illustrates the fundamental problem with using LLMs for core application logic, but this principle extends far beyond games. In any domain where precision, reliability, and efficiency matter, you should follow the same approach:
The user says they want to attack player X with their vorpal sword? The LLM shouldn’t be the system figuring out if the user has a vorpal sword, or what the results of that would be: the LLM is responsible only for translating the free text the user gave you into an API call and translating the result into text for the user
You’re building a negotiation agent that should respond to user offers? The LLM isn’t in charge of the negotiation, just in charge of packaging it up, passing it off to the negotiating engine, and telling the user about the result
You need to make a random choice about how to respond to the user? The LLM doesn’t get to choose
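To make the division of labour concrete, here’s a minimal F# sketch of the pattern; callLlm is a hypothetical stand-in for whatever model API you use, and the game logic is deliberately dumb:

```fsharp
// The LLM only translates between free text and a typed command; all rules live in code.
type Command =
    | Attack of target: string * weapon: string
    | CheckInventory
    | Unknown of raw: string

// Hypothetical stand-in: send a prompt to your model provider, get its text reply back.
let callLlm (prompt: string) : string =
    failwith "wire up your LLM provider here"

// LLM as input parser: free text in, structured command out.
let parseCommand (userText: string) : Command =
    let reply = callLlm $"Translate this into 'verb|target|weapon': {userText}"
    match reply.Split('|') with
    | [| "attack"; target; weapon |] -> Attack(target, weapon)
    | [| "inventory" |] -> CheckInventory
    | _ -> Unknown reply

// Deterministic, testable business logic: no LLM involved in deciding outcomes.
let execute (command: Command) : string =
    match command with
    | Attack (target, weapon) -> $"hit {target} with {weapon}"  // would call the real game engine
    | CheckInventory -> "inventory: sword, rope"
    | Unknown raw -> $"could not interpret: {raw}"

// LLM as output formatter: turn the engine's result back into prose for the player.
let respond (engineResult: string) : string =
    callLlm $"Describe this game result to the player in one sentence: {engineResult}"
```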
Reminder of what LLMs are good at
While I’ve focused on what LLMs shouldn’t do, it’s equally important to understand their strengths so you can leverage them appropriately:
LLMs excel at transformation and at categorization, and have a pretty good grounding in “how the world works”, and this is where in your process you should be deploying them.
The LLM is good at taking “hit the orc with my sword” and turning it into attack(target=“orc”, weapon=“sword”). Or taking {“error”: “insufficient_funds”} and turning it into “You don’t have enough gold for that.”
The LLM is good at figuring out what the hell the user is trying to do and routing it to the right part of your system. Is this a combat command? An inventory check? A request for help?
Finally, the LLM is good at knowing about human concepts, and knowing that a “blade” is probably a sword and “smash” probably means attack.
Notice that all these strengths involve transformation, interpretation, or communication—not complex decision-making or maintaining critical application state. By restricting LLMs to these roles, you get their benefits without the pitfalls described earlier.
What LLMs can and can’t do is ever-shifting and reminds me of the “God of the gaps”, a term from theology where each mysterious phenomenon was once explained by divine intervention—until science filled that gap. Likewise, people constantly identify new “human-only” tasks to claim that LLMs aren’t truly intelligent or capable. Then, just a few months later, a new model emerges that handles those tasks just fine, forcing everyone to move the goalposts again, examples passim. It’s a constantly evolving target, and what seems out of reach today may be solved sooner than we expect.
And so like in our chess example, we will probably soon end up with LLMs that can handle all of our above examples reasonably well. I suspect however that most of the drawbacks won’t go away: the non-LLM logic that you hand off to is going to be easier to reason about, easier to maintain, cheaper to run, and more easily version-controlled.
Even as LLMs continue to improve, the fundamental architectural principle remains: use LLMs for what they’re best at—the interface layer—and rely on purpose-built systems for your core logic. If your team promises to deliver (or buy!) ‘Agentic AI’, then everyone needs to have a shared understanding of what that means; you don’t want to be the one left trying to explain the mismatch to stakeholders six months later. There’s no current (2025-03-30) widely accepted definition, so if you’re using the term, be clear on what you mean, and if someone else is using the term, it’s worth figuring out which one they mean.
...
Read the original on example.com »
The findings could help to explain the species’ fascinating flocking behaviour
Quantum entanglement is a fascinating phenomenon where two particles’ states are tied to each other, no matter how far apart the particles are. In 2022, the Nobel Prize in Physics was awarded to Alain Aspect, John F. Clauser and Anton Zeilinger for groundbreaking experiments involving entangled photons. These experiments confirmed the predictions for the manifestation of entanglement that had been made by the late CERN theorist John Bell. This phenomenon has so far been observed in a wide variety of systems, such as in top quarks at CERN’s Large Hadron Collider (LHC) in 2024. Entanglement has also found several important societal applications, such as quantum cryptography and quantum computing. Now, it also explains the famous herd mentality of sheep.
A flock of sheep (Ovis aries) has roamed the CERN site during the spring and summer months for over 40 years. Along with the CERN shepherd, they help to maintain the vast expanses of grassland around the LHC and are part of the Organization’s long-standing efforts to protect the site’s biodiversity. In addition, their flocking behaviour has been of great interest to CERN’s physicists. It is well known that sheep behave like particles: their stochastic behaviour has been studied by zoologists and physicists alike, who noticed that a flock’s ability to quickly change phase is similar to that of atoms in a solid and a liquid. Known as the Lamb Shift, this can cause them to get themselves into bizarre situations, such as walking in a circle for days on end.
Now, new research has shed light on the reason for these extraordinary abilities. Scientists at CERN have found evidence of quantum entanglement in sheep. Using sophisticated modelling techniques and specialised trackers, the findings show that the brains of individual sheep in a flock are quantum-entangled in such a way that the sheep can move and vocalise simultaneously, no matter how far apart they are. The evidence has several ramifications for ovine research and has set the baa for a new branch of quantum physics.
“The fact that we were having our lunch next to the flock was a shear coincidence,” says Mary Little, leader of the HERD collaboration, describing how the project came about. “When we saw and herd their behaviour, we wanted to investigate the movement of the flock using the technology at our disposal at the Laboratory.”
Observing the sheep’s ability to simultaneously move and vocalise together caused one main question to aries: since the sheep behave like subatomic particles, could quantum effects be the reason for their behaviour?
“Obviously, we couldn’t put them all in a box and see if they were dead or alive,” said Beau Peep, a researcher on the project. “However, by assuming that the sheep were spherical, we were able to model their behaviour in almost the exact same way as we model subatomic particles.”
Using sophisticated trackers, akin to those in the LHC experiments, the physicists were able to locate the precise particles in the sheep’s brains that might be the cause of this entanglement. Dubbed “moutons” and represented by the Greek letter lambda, λ, these particles are leptons and are close relatives of the muon, but fluffier.
The statistical significance of the findings is 4 sigma, which is enough to show evidence of the phenomenon. However, it does not quite pass the baa to be classed as an observation.
“More research is needed to fully confirm that this was indeed an observation of ovine entanglement or a statistical fluctuation,” says Ewen Woolly, spokesperson for the HERD collaboration. “This may be difficult, as we have found that the research makes physicists become inexplicably drowsy.”
“While entanglement is now the leading theory for this phenomenon, we have to take everything into account,” adds Dolly Shepherd, a CERN theorist. “Who knows, maybe further variables are hidden beneath their fleeces. Wolves, for example.”
...
Read the original on home.cern »
Conversational interfaces are a bit of a meme. Every couple of years a shiny new AI development emerges and people in tech go “This is it! The next computing paradigm is here! We’ll only use natural language going forward!”. But then nothing actually changes and we continue using computers the way we always have, until the debate resurfaces a few years later.
We’ve gone through this cycle a couple of times now: Virtual assistants (Siri), smart speakers (Alexa, Google Home), chatbots (“conversational commerce”), AirPods-as-a-platform, and, most recently, large language models.
I’m not entirely sure where this obsession with conversational interfaces comes from. Perhaps it’s a type of anemoia, a nostalgia for a future we saw in Star Trek that never became reality. Or maybe it’s simply that people look at the term “natural language” and think “well, if it’s natural then it must be the logical end state”.
I’m here to tell you that it’s not.
When people say “natural language” what they mean is written or verbal communication. Natural language is a way to exchange ideas and knowledge between humans. In other words, it’s a data transfer mechanism.
Data transfer mechanisms have two critical factors: speed and lossiness.
Speed determines how quickly data is transferred from the sender to the receiver, while lossiness refers to how accurately the data is transferred. In an ideal state, you want data transfer to happen at maximum speed (instant) and with perfect fidelity (lossless), but these two attributes are often a bit of a trade-off.
Let’s look at how well natural language does on the speed dimension:
The first thing I should note is that these data points are very, very simplified averages. The important part to take away from this table is not the accuracy of individual numbers, but the overall pattern: We are significantly faster at receiving data (reading, listening) than sending it (writing, speaking). This is why we can listen to podcasts at 2x speed, but not record them at 2x speed.
To put the writing and speaking speeds into perspective, we form thoughts at 1,000-3,000 words per minute. Natural language might be natural, but it’s a bottleneck.
And yet, if you think about your day-to-day interactions with other humans, most communication feels really fast and efficient. That’s because natural language is only one of many data transfer mechanisms available to us.
For example, instead of saying “I think what you just said is a great idea”, I can just give you a thumbs up. Or nod my head. Or simply smile.
Gestures and facial expressions are effectively data compression techniques. They encode information in a more compact, but lossier, form to make it faster and more convenient to transmit.
Natural language is great for data transfer that requires high fidelity (or as a data storage mechanism for async communication), but whenever possible we switch to other modes of communication that are faster and more effortless. Speed and convenience always wins.
My favorite example of truly effortless communication is a memory I have of my grandparents. At the breakfast table, my grandmother never had to ask for the butter — my grandfather always seemed to pass it to her automatically, because after 50+ years of marriage he just sensed that she was about to ask for it. It was like they were communicating telepathically.
*That* is the type of relationship I want to have with my computer!
Similar to human-to-human communication, there are different data transfer mechanisms to exchange information between humans and computers. In the early days of computing, users interacted with computers through a command line. These text-based commands were effectively a natural language interface, but required precise syntax and a deep understanding of the system.
The introduction of the GUI primarily solved a discovery problem: Instead of having to memorize exact text commands, you could now navigate and perform tasks through visual elements like menus and buttons. This didn’t just make things easier to discover, but also more convenient: It’s faster to click a button than to type a long text command.
Today, we live in a productivity equilibrium that combines graphical interfaces with keyboard-based commands.
We still use our mouse to navigate and tell our computers what to do next, but routine actions are typically communicated in the form of quick-fire keyboard presses: ⌘b to format text as bold, ⌘t to open a new tab, ⌘c/v to quickly copy things from one place to another, etc.
These shortcuts are not natural language though. They are another form of data compression. Like a thumbs up or a nod, they help us to communicate faster.
Modern productivity tools take these data compression shortcuts to the next level. In tools like Linear, Raycast or Superhuman every single command is just a keystroke away. Once you’ve built the muscle memory, the data input feels completely effortless. It’s almost like being handed the butter at the breakfast table without having to ask for it.
Touch-based interfaces are considered the third pivotal milestone in the evolution of human computer interaction, but they have always been more of an augmentation of desktop computing rather than a replacement for it. Smartphones are great for “away from keyboard” workflows, but important productivity work still happens on desktop.
That’s because text is not a mobile-native input mechanism. A physical keyboard can feel like a natural extension of your mind and body, but typing on a phone is always a little awkward — and it shows in data transfer speeds: Average typing speeds on mobile are just 36 words-per-minute, notably slower than the ~60 words-per-minute on desktop.
We’ve been able to replace natural language with mobile-specific data compression algorithms like emojis or Snapchat selfies, but we’ve never found a mobile equivalent for keyboard shortcuts. Guess why we still don’t have a truly mobile-first productivity app after almost 20 years since the introduction of the iPhone?
“But what about speech-to-text,” you might say, pointing to reports about increasing usage of voice messaging. It’s true that speaking (150wpm) is indeed a faster data transfer mechanism than typing (60wpm), but that doesn’t automatically make it a better method to interact with computers.
We keep telling ourselves that previous voice interfaces like Alexa or Siri didn’t succeed because the underlying AI wasn’t smart enough, but that’s only half of the story. The core problem was never the quality of the output function, but the inconvenience of the input function: A natural language prompt like “Hey Google, what’s the weather in San Francisco today?” just takes 10x longer than simply tapping the weather app on your homescreen.
LLMs don’t solve this problem. The quality of their output is improving at an astonishing rate, but the input modality is a step backwards from what we already have. Why should I have to describe my desired action using natural language, when I could simply press a button or keyboard shortcut? Just pass me the goddamn butter.
None of this is to say that LLMs aren’t great. I love LLMs. I use them all the time. In fact, I wrote this very essay with the help of an LLM.
Instead of drafting a first version with pen and paper (my preferred writing tools), I spent an entire hour walking outside, talking to ChatGPT in Advanced Voice Mode. We went through all the fuzzy ideas in my head, clarified and organized them, explored some additional talking points, and eventually pulled everything together into a first outline.
This wasn’t just a one-sided “Hey, can you write a few paragraphs about x” prompt. It felt like a genuine, in-depth conversation and exchange of ideas with a true thought partner. Even weeks later, I’m still amazed at how well it worked. It was one of those rare, magical moments where software makes you feel like you’re living in the future.
In contrast to typical human-to-computer commands, however, this workflow is not defined by speed. Like writing, my ChatGPT conversation is a thinking process — not an interaction that happens post-thought.
It should also be noted that ChatGPT does not substitute any existing software workflows in this example. It’s a completely new use case.
This brings me to my core thesis: The inconvenience and inferior data transfer speeds of conversational interfaces make them an unlikely replacement for existing computing paradigms — but what if they complement them?
The most convincing conversational UI I have seen to date was at a hackathon where a team turned Amazon Alexa into an in-game voice assistant for StarCraft II. Rather than replacing mouse and keyboard, voice acted as an additional input mechanism. It increased the bandwidth of the data transfer.
You could see the same pattern work for any type of knowledge work, where voice commands are available while you are busy doing other things. We will not replace Figma, Notion, or Excel with a chat interface. It’s not going to happen. Neither will we forever continue the status quo, where we constantly have to switch back and forth between these tools and an LLM.
Instead, AI should function as an always-on command meta-layer that spans across all tools. Users should be able to trigger actions from anywhere with simple voice prompts without having to interrupt whatever they are currently doing with mouse and keyboard.
For this future to become an actual reality, AI needs to work at the OS level. It’s not meant to be an interface for a single tool, but an interface across tools. Kevin Kwok famously wrote that “productivity and collaboration shouldn’t be two separate workflows”. And while he was referring to human-to-human collaboration, the statement is even more true in a world of human-to-AI collaboration, where the lines between productivity and coordination are becoming increasingly more blurry.
The second thing we need to figure out is how we can compress voice input to make it faster to transmit. What’s the voice equivalent of a thumbs-up or a keyboard shortcut? Can I prompt Claude faster with simple sounds and whistles? Should ChatGPT have access to my camera so it can change its answers in realtime based on my facial expressions?
Even as a secondary interface, speed and convenience is all that matters.
I admit that the title of this essay is a bit misleading (made you click though, didn’t it?). This isn’t really a case against conversational interfaces, it’s a case against zero-sum thinking.
We spend too much time thinking about AI as a substitute (for interfaces, workflows, and jobs) and too little time about AI as a complement. Progress rarely follows a simple path of replacement. It unlocks new, previously unimaginable things rather than merely displacing what came before.
The same is true here. The future isn’t about replacing existing computing paradigms with chat interfaces, but about enhancing them to make human-computer interaction feel effortless — like the silent exchange of butter at a well-worn breakfast table.
Thanks to Blake Robbins, Chris Paik, Jackson Dahl, Johannes Schickling, Jordan Singer, and signüll for reading drafts of this post.
...
Read the original on julian.digital »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.