10 interesting stories served every morning and every evening.




1 674 shares, 35 trendiness

Apps lighter than a React button

In this release, we're showing what happens when you push modern web standards — HTML, CSS, and JS — to their peak:

This entire app is lighter than a React/ShadCN button:

See benchmark and details here ›

Here's the same app, now with a Rust computation engine and Event Sourcing for instant search and other operations over 150,000 records — far past the point where the JS version of the engine choked on recursive calls over the records.

This demo is here ›

Nue crushes HMR and build speed records and sets you up with a millisecond feedback loop for your everyday VSCode/Sublime file-save operations:

Immediate feedback for design and component updates, preserving app state

This is a game-changer for Rust, Go, and JS engineers stuck wrestling with React idioms instead of leaning on timeless software patterns. Nue emphasizes a model-first approach, delivering modular design with simple, testable functions, true static typing, and minimal dependencies. Nue is a liberating experience for system devs whose skills can finally shine in a separated model layer.

This is an important shift for design engineers bogged down by React patterns and 40,000+ line design systems. Build radically simpler systems with modern CSS (@layers, variables, calc()) and take control of your typography and whitespace.

This is a wake-up call for UX engineers tangled in React hooks and utility-class walls instead of owning the user experience. Build apps as light as a React button to push the web — and your skills — forward.

Nue is a web framework focused on web standards, currently in active development. We aim to reveal the hidden complexity that's become normalized in modern web development. When a single button outweighs an entire application, something's fundamentally broken.

Nue drives the inevitable shift. We're rebuilding tools and frameworks from the ground up with a cleaner, more robust architecture. Our goal is to restore the joy of web development for all key skill sets: frontend architects, design engineers, and UX engineers.

...

Read the original on nuejs.org »

2 427 shares, 30 trendiness

The April Fools joke that might have got me fired

...

Read the original on oldvcr.blogspot.com »

3 425 shares, 23 trendiness

Fluentsubs

...

Read the original on app.fluentsubs.com »

4 343 shares, 40 trendiness

A Man Powers Home for 8 Years Using 1,000 Old Laptop Batteries

The UN reports that less than 25% of global e-waste is properly collected and recycled.

...

Read the original on techoreon.com »

5 343 shares, 3 trendiness

Glubux's Powerwall


...

Read the original on secondlifestorage.com »

6 324 shares, 26 trendiness

Bletchley Park code breaker Betty Webb dies aged 101

A decorated World War Two code breaker who spent her youth deciphering enemy messages at Bletchley Park has died at the age of 101. Charlotte "Betty" Webb MBE - who was among the last surviving Bletchley code breakers - died on Monday night, the Women's Royal Army Corps Association confirmed. Mrs Webb, from Wythall in Worcestershire, joined operations at the Buckinghamshire base at the age of 18, later going on to help with Japanese codes at The Pentagon in the US. She was awarded France's highest honour - the Légion d'Honneur - in 2021. The Women's Royal Army Corps Association described Mrs Webb as a woman who "inspired women in the Army for decades".

Bletchley Park Trust CEO Iain Standen said Mrs Webb will not only be remembered for her work but also for her efforts to ensure that the story of what she and her colleagues achieved is not forgotten. "Betty's passion for preserving the history and legacy of Bletchley Park has undoubtedly inspired many people to engage with the story and visit the site," he said in a statement. Tributes to Mrs Webb have begun to be posted on social media, including one from historian and author Dr Tessa Dunlop, who said she was with her in her final hours. Describing Mrs Webb as "the very best", she said on X: "She is one of the most remarkable women I have ever known."

Mrs Webb told the BBC in 2020 that she had "never heard of Bletchley", Britain's wartime code-breaking centre, before starting work there as a member of the ATS, the Auxiliary Territorial Service. She had been studying at a college near Shrewsbury, Shropshire, when she volunteered, as she said she and others on the course felt they "ought to be serving our country rather than just making sausage rolls". Her mother had taught her to speak German as a child, and ahead of her posting she remembered being "taken into the mansion [at Bletchley] to read the Official Secrets Act". "I realised that from then on there was no way that I was going to be able to tell even my parents where I was and what I was doing until 1975 [when restrictions were lifted]," she recalled. She would tell the family with whom she lodged that she was a secretary.

When WW2 ended in Europe in May 1945, she went to work at the Pentagon after spending four years at Bletchley, which with its analysis of German communications had served as a vital cog in the Allies' war machine. At the Pentagon she would paraphrase and transcribe already-decoded Japanese messages. She said she was the only member of the ATS to be sent to Washington, describing it as "a tremendous honour". Mrs Webb, in 2020, recalled she had had no idea the Americans planned to end the conflict by dropping atomic weapons on Japanese cities, describing the weapons' power as "utterly awful". After the Allies' final victory, it took Mrs Webb several months to organise return passage to the UK, where she worked as a secretary at a school in Shropshire. The head teacher there had also worked at Bletchley so knew of her professionalism, whereas other would-be employers, she recalled, were left stumped by her being unable to explain - due to secrecy requirements - her previous duties. More than half a century later, in 2021, Mrs Webb was one of 6,000 British citizens to receive the Légion d'Honneur, following a decision by President François Hollande in 2014 to recognise British veterans who helped liberate France.

In 2023, she and her niece were among 2,200 people from 203 countries invited to Westminster Abbey to see King Charles III's coronation. The same year she celebrated her 100th birthday at Bletchley Park with a party. She and her guests were treated to a fly-past by a Lancaster bomber. She said at the time: "It was for me - it's unbelievable, isn't it? Little me."

...

Read the original on www.bbc.com »

7 309 shares, 19 trendiness

Why F#?

If someone had told me a few months ago that I'd be playing with .NET again after a 15+ year hiatus, I probably would have laughed at it. Early on in my career I played with .NET and Java, and even though .NET had done some things better than Java (as it had the opportunity to learn from some early Java mistakes), I quickly settled on Java as it was a truly portable environment.

I guess everyone who reads my blog knows that in the past few years I've been playing on and off with OCaml, and I think it's safe to say that it has become one of my favorite programming languages - alongside the likes of Ruby and Clojure. My work with OCaml recently drew my attention to F#, an ML targeting .NET, developed by Microsoft. The functional counterpart of the (mostly) object-oriented C#. The newest ML language created…

Unfortunately, no one can be told what the Matrix is. You have to see it for yourself.

Before we start discussing F#, I guess we should first answer the question "What is F#?". I'll borrow a bit from the official page to answer it.

F# is a universal programming language for writing succinct, robust and performant code.

F# allows you to write uncluttered, self-documenting code, where your focus remains on your problem domain, rather than the details of programming.

It does this without compromising on speed and compatibility - it is open-source, cross-platform and interoperable.

Trivia: F# is the language that made the pipeline operator (|>) popular.

A full set of features is documented in the F# language guide.

F# 1.0 was officially released in May 2005 by Microsoft Research. It was initially developed by Don Syme at Microsoft Research in Cambridge and evolved from an earlier research project called "Caml.NET", which aimed to bring OCaml to the .NET platform. F# was officially moved from Microsoft Research to Microsoft (as part of their developer tooling division) in 2010 (timed with the release of F# 2.0).

F# has been steadily evolving since those early days, and the most recent release, F# 9.0, arrived in November 2024. It seems only appropriate that F# would come to my attention in the year of its 20th birthday!

There were several reasons why I wanted to try out F#:

* .NET became open-source and portable a few years ago and I wanted to check the progress on that front

* I was curious if F# offers any advantages over OCaml

* I've heard good things about the F# tooling (e.g. Rider and Ionide)

* I like playing with new programming languages

Below you'll find my initial impressions for several areas.

As a member of the ML family of languages, the syntax won't surprise anyone familiar with OCaml. As quite few people are familiar with OCaml, though, I'll mention that Haskell programmers will also feel right at home with the syntax. And Lispers.

For everyone else - it'd be fairly easy to pick up the basics.

Nothing shocking here, right?

Here's another slightly more involved example:

Why don't you try saving the snippet above in a file called Sales.fsx and running it like this:

Now you know that F# is a great choice for ad-hoc scripts! Also, running dotnet fsi by itself will pop open an F# REPL where you can explore the language at your leisure.

I'm not going to go into great detail here, as much of what I wrote about OCaml here applies to F# as well. I'd also suggest this quick tour of F# to get a better feel for its syntax.

Tip: Check out the F# cheatsheet if you'd like a quick syntax reference.

One thing that made a good impression on me is the language designers' focus on making F# approachable to newcomers, by providing a lot of small quality-of-life improvements for them. Below are a few examples that probably don't mean much to you, but would mean something to people familiar with OCaml:

I guess some of those might be controversial, depending on whether you're an ML language purist or not, but in my book anything that makes ML more popular is a good thing.

Did I also mention it's easy to work with Unicode strings and regular expressions?

Often people say that F# is mostly a staging ground for future C# features, and perhaps that's true. I haven't observed both languages long enough to have my own opinion on the subject, but I was impressed to learn that async/await (of C# and later JavaScript fame) originated in… F# 2.0.

It all changed in 2012 when C# 5 launched with the introduction of what has now become the popularized async/await keyword pairing. This feature allowed you to write code with all the benefits of hand-written asynchronous code, such as not blocking the UI when a long-running process started, yet read like normal synchronous code. This async/await pattern has now found its way into many modern programming languages such as Python, JS, Swift, Rust, and even C++.

F#'s approach to asynchronous programming is a little different from async/await but achieves the same goal (in fact, async/await is a cut-down version of F#'s approach, which was introduced a few years previously, in F# 2).

Time will tell what will happen, but I think it's unlikely that C# will ever be able to fully replace F#.

I've also found this encouraging comment from 2022 suggesting that Microsoft might be willing to invest more in F#:

Some good news for you. After 10 years of F# being developed by 2.5 people internally and some random community efforts, Microsoft has finally decided to properly invest in F# and created a full-fledged team in Prague this summer. I'm a dev in this team, just like you I was an F# fan for many years so I am happy things got finally moving here.

Looking at the changes in F# 8.0 and F# 9.0, it seems the new full-fledged team has done some great work!

It's hard to assess the ecosystem around F# after such a brief period, but overall it seems to me that there are fairly few "native" F# libraries and frameworks out there, and most people rely heavily on the core .NET APIs and the many third-party libraries and frameworks geared towards C#. That's a pretty common setup when it comes to hosted languages in general, so nothing surprising here either.

If you've ever used another hosted language (e.g. Scala, Clojure, Groovy) then you probably know what to expect.

Awesome F# keeps track of popular F# libraries, tools and frameworks. I'll highlight here the web development and data science libraries:

* Giraffe: A lightweight library for building web applications using ASP.NET Core. It provides a functional approach to web development.

* Suave: A simple and lightweight web server library with combinators for routing and task composition. (Giraffe was inspired by Suave)

* Saturn: Built on top of Giraffe and ASP.NET Core, it offers an MVC-style framework inspired by Ruby on Rails and Elixir's Phoenix.

* Bolero: A framework for building client-side applications in F# using WebAssembly and Blazor.

* Fable: A compiler that translates F# code into JavaScript, enabling integration with popular JavaScript ecosystems like React or Node.js.

* Elmish: A model-view-update (MVU) architecture for building web UIs in F#, often used with Fable.

* SAFE Stack: An end-to-end, functional-first stack for building cloud-ready web applications. It combines technologies like Saturn, Azure, Fable, and Elmish for a type-safe development experience.

* Deedle: A library for data manipulation and exploratory analysis, similar to pandas in Python.

* FsLab: A collection of libraries tailored for data science, including visualization and statistical tools.

I haven't played much with any of them at this point, so I'll reserve any feedback and recommendations for some point in the future.

The official documentation is pretty good, although I find it kind of weird that some of it is hosted on Microsoft's site and the rest is on https://fsharp.org/ (the site of the F# Software Foundation).

I really liked the following parts of the documentation:

https://fsharpforfunandprofit.com/ is another good learning resource (even if it seems a bit dated).

F# has a somewhat troubled dev tooling story, as historically support for F# was great only in Visual Studio and somewhat subpar elsewhere. Fortunately, the tooling story has improved a lot in the past decade:

In 2014 a technical breakthrough was made with the creation of the FSharp.Compiler.Service (FCS) package by Tomas Petricek, Ryan Riley, and Dave Thomas, with many later contributors. This contains the core implementation of the F# compiler, editor tooling and scripting engine in the form of a single library, and can be used to make F# tooling for a wide range of situations. This has allowed F# to be delivered into many more editors, scripting and documentation tools, and has allowed the development of alternative backends for F#. Key editor community-based tooling includes Ionide, by Krzysztof Cieślak and contributors, used for rich editing support in the cross-platform VSCode editor, with over 1M downloads at time of writing.

I've played with the F# plugins for several editors:

Overall, Rider and VS Code provide the most (and the most polished) features, but the other options were quite usable as well. That's largely due to the fact that the F# LSP server fsautocomplete (naming is hard!) is quite robust, and any editor with good LSP support gets a lot of functionality for free.

Still, I'll mention that I found the tooling lacking in some regards:

* fsharp-mode doesn't use Tree-sitter (yet) and doesn't seem to be very actively developed (looking at the code, it seems it was derived from caml-mode)

* Zed's support for F# is quite spartan

* In VS Code, shockingly, expanding and shrinking the selection is broken, which is quite odd for what is supposed to be the flagship editor for F#

I'm really struggling with VS Code's keybindings (too many modifier keys and function keys for my taste) and editing model, so I'll likely stick with Emacs going forward. Or I'll finally spend more quality time with Neovim!

It seems that everyone is using the same code formatter (Fantomas), including the F# team, which is great! The linter story in F# is not as great (it seems the only popular linter, FSharpLint, is abandonware these days), but when your compiler is this good, you don't really need a linter as much.

Oh, well… It seems that Microsoft are not really particularly invested in supporting the tooling for F#, as pretty much all the major projects in this space are community-driven.

Using AI coding agents (e.g. Copilot) with F# worked pretty well, but I didn't spend much time on this front.

At the end of the day any editor will likely do, as long as you're using LSP.

By the way, I had an interesting observation while programming in F# (and OCaml, for that matter): when you're working with a language with a really good type system, you don't need that much from your editor. Most of the time I'm perfectly happy with just some inline type information (e.g. something like CodeLenses), auto-completion and the ability to easily send code to fsi. Simplicity continues to be the ultimate sophistication…

Other tools that should be on your radar are:

* Paket - a dependency manager for .NET projects. Think of it as something like bundler, npm or pip, but for .NET's NuGet package ecosystem.

* FAKE - a DSL for build tasks and more, where you can use F# to specify the tasks. Somewhat similar to Ruby's rake. Some people claim it's the easiest way to sneak F# into an existing .NET project.

Given the depth and breadth of .NET - I guess the sky is the limit for you!

Seems to me that F# will be a particularly good fit for data analysis and manipulation, because of features like type providers.

It's probably a good fit for backend services and even full-stack apps, although I haven't really played with the F#-first solutions in this space yet.

Fable and Elmish make F# a viable option for client-side programming and might offer another easy way to sneak F# into your day-to-day work.

Note: Historically, Fable has been used to target JavaScript, but since Fable 4 you can also target other languages such as TypeScript, Rust, Python, and more.

Here's how easy it is to transpile an F# codebase into something else:

My initial impression of the community is that it's fairly small, perhaps even smaller than that of OCaml. The F# Reddit and Discord (the one listed on Reddit) seem like the most active places for F# conversations. There's supposed to be an F# Slack as well, but I couldn't get an invite for it (it seems the automated process for issuing those invites has been broken for a while).

I'm still not sure what role Microsoft plays in the community, as I haven't seen much from them overall.

For me, a small community is not really a problem, as long as it is vibrant and active. Also, I've noticed I always feel more connected to smaller communities. Moving from Java to Ruby back in the day felt like night and day as far as community engagement and a sense of belonging go.

I didn't find many books and community sites/blogs dedicated to F#, but I didn't really expect to in the first place.

The most notable community initiatives I discovered were:

* Amplifying F# - an effort to promote F# and to get more businesses involved with it

* F# for Fun and Profit - a collection of tutorials and essays on F#

* F# Lab - a community-driven toolkit for data science in F#

* F# Weekly - a weekly newsletter about the latest developments in the world of F#

It seems to me that more can be done to promote the language and engage new programmers and businesses with it, although that's never easy 20 years into a project's existence. I continue to be somewhat puzzled as to why Microsoft doesn't market F# more, as I think it could be a great marketing vehicle for them.

All in all, I don't feel qualified to comment much on the F# community at this point.

Depending on the type of person you are, you may or may not care about a programming language's "popularity". People often ask me why I spend a lot of time on languages that are unlikely to ever result in job opportunities for me, e.g.:

Professional opportunities are important, of course, but so are:

* having fun (and the F in F# stands for "fun")

* challenging yourself to think and work differently

That being said, F# is not a popular language by most conventional metrics. It's not highly ranked on TIOBE, StackOverflow or most job boards. But it's also no less popular than most "mainstream" functional programming languages. The sad reality is that functional programming is still not mainstream, and perhaps it never will be.

A few more resources on the subject:

* How Popular is F# in 2024

* Here's also a video for the article above

...

Read the original on batsov.com »

8 308 shares, 11 trendiness

Get the hell out of the LLM as soon as possible

Don't let an LLM make decisions or execute business logic: they suck at that. I build NPCs for an online game, and I get asked a lot "How did you get ChatGPT to do that?" The answer is invariably: "I didn't, and also you shouldn't".

In most applications, the LLM should be only the user interface between the user and an API into your application logic. The LLM shouldn't be executing any logic. Get the hell out of the LLM as soon as possible, and stay out as long as you can.

This is best illustrated by a contrived example: you want to write a chess-playing bot you access over WhatsApp. The user sends a description of what they want to do ("use my bishop to take the knight"), and the bot plays against them.

Could you get the LLM to be in charge of maintaining the state of the chess board and playing convincingly? Possibly, maybe. Would you? Hell no, for some intuitive reasons:

Performance: It's impressive that LLMs might be able to play chess at all, but they suck at it (as of 2025-04-01). A specialized chess engine is always going to be a faster, better, cheaper chess player. Even modern chess engines like Stockfish that incorporate neural networks are still purpose-built specialized systems with well-defined inputs and evaluation functions - not general-purpose language models trying to maintain game state through text.

Debugging and adjusting: It's impossible to reason about and debug why the LLM made a given decision, which means it's very hard to change how it makes those decisions if you need to tweak them. You don't understand the journey it took through the high-dimensional semantic space to get to your answer, and it's really poor at explaining it too. Even purpose-built neural networks like those in chess engines can be challenging for observability, and a general LLM is a nightmare, despite Anthropic's great strides in this area.

And the rest…: testing LLM outputs is much harder than unit-testing known code paths; LLMs are much worse at math than your CPU; LLMs are insufficiently good at picking random numbers; version control and auditing become much harder; monitoring and observability get painful; state management through natural language is fragile; you're at the mercy of API rate limits and costs; and security boundaries become fuzzy when everything flows through prompts.

The chess example illustrates the fundamental problem with using LLMs for core application logic, but this principle extends far beyond games. In any domain where precision, reliability, and efficiency matter, you should follow the same approach:

The user says they want to attack player X with their vorpal sword? The LLM shouldn't be the system figuring out if the user has a vorpal sword, or what the results of that would be: the LLM is responsible only for translating the user's free text into an API call, and for translating the result back into text for the user.
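That division of labour can be sketched in a few lines. Everything here is hypothetical (the intent shape, the `attack` API, the inventory), and the LLM call is stubbed with a canned answer so the sketch runs deterministically:

```python
# Sketch of the boundary being argued for: the LLM only turns free text into
# a structured call; all game rules live in ordinary, testable code.

def llm_extract_intent(text: str) -> dict:
    """Stand-in for a real LLM structured-output call (stubbed here)."""
    return {"action": "attack", "target": "player_x", "weapon": "vorpal sword"}

INVENTORY = {"hero": ["dagger", "rope"]}  # note: no vorpal sword

def attack(player: str, target: str, weapon: str) -> dict:
    # Deterministic business logic: easy to unit-test, debug and version-control.
    if weapon not in INVENTORY.get(player, []):
        return {"error": "missing_item", "item": weapon}
    return {"ok": True, "target": target}

def handle(player: str, user_text: str) -> dict:
    intent = llm_extract_intent(user_text)  # LLM: free text -> structure
    return attack(player, intent["target"], intent["weapon"])  # plain code decides

result = handle("hero", "attack player X with my vorpal sword")
print(result)  # the engine, not the LLM, discovers the missing sword
```

The point of the shape is that `attack` can be unit-tested and audited without an LLM anywhere in the loop.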

You're building a negotiation agent that should respond to user offers? The LLM isn't in charge of the negotiation, just in charge of packaging it up, passing it off to the negotiating engine, and telling the user about the result.

You need to make a random choice about how to respond to the user? The LLM doesn't get to choose.
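That last rule is trivial to honour in plain code. A hedged sketch (the response list is made up) where a seedable RNG makes the choice, so behaviour stays uniform, reproducible, and auditable:

```python
import random

# Plain code makes the random choice the LLM is bad at; a fixed seed makes the
# behaviour reproducible in tests, which no "pick one at random" prompt is.
RESPONSES = ["taunt", "flee", "parley"]  # hypothetical options

def pick_response(rng: random.Random) -> str:
    return rng.choice(RESPONSES)  # uniform over RESPONSES, and auditable

# Same seed, same choice - easy to pin down in a test.
assert pick_response(random.Random(42)) == pick_response(random.Random(42))
```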

Reminder of what LLMs are good at

While I've focused on what LLMs shouldn't do, it's equally important to understand their strengths so you can leverage them appropriately:

LLMs excel at transformation and at categorization, and have a pretty good grounding in "how the world works", and this is where in your process you should be deploying them.

The LLM is good at taking "hit the orc with my sword" and turning it into attack(target="orc", weapon="sword"). Or taking {"error": "insufficient_funds"} and turning it into "You don't have enough gold for that."

The LLM is good at figuring out what the hell the user is trying to do and routing it to the right part of your system. Is this a combat command? An inventory check? A request for help?
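A routing layer like that stays in plain code too. In the sketch below the classifier is a keyword stub standing in for an LLM categorization call, and the handler names are invented; only the dispatch table is the point:

```python
# Intent routing: classification (LLM territory) is separated from dispatch,
# which is an ordinary lookup table that is trivial to test and extend.

def classify(text: str) -> str:
    """Stand-in for an LLM categorization call, stubbed with keywords."""
    lowered = text.lower()
    if any(word in lowered for word in ("attack", "hit", "smash")):
        return "combat"
    if any(word in lowered for word in ("inventory", "carrying")):
        return "inventory"
    return "help"

ROUTES = {
    "combat": lambda text: "routed to combat engine",
    "inventory": lambda text: "routed to inventory service",
    "help": lambda text: "routed to help system",
}

def route(text: str) -> str:
    return ROUTES[classify(text)](text)

print(route("smash the door"))       # -> routed to combat engine
print(route("what am I carrying?"))  # -> routed to inventory service
```

Swapping the keyword stub for a real LLM call changes nothing downstream: the routes themselves remain deterministic code.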

Finally, the LLM is good at knowing about human concepts, and knowing that a "blade" is probably a sword and "smash" probably means attack.

Notice that all these strengths involve transformation, interpretation, or communication — not complex decision-making or maintaining critical application state. By restricting LLMs to these roles, you get their benefits without the pitfalls described earlier.

What LLMs can and can't do is ever-shifting and reminds me of the "God of the gaps", a term from theology where each mysterious phenomenon was once explained by divine intervention — until science filled that gap. Likewise, people constantly identify new "human-only" tasks to claim that LLMs aren't truly intelligent or capable. Then, just a few months later, a new model emerges that handles those tasks just fine, forcing everyone to move the goalposts again, examples passim. It's a constantly evolving target, and what seems out of reach today may be solved sooner than we expect.

And so, as in our chess example, we will probably soon end up with LLMs that can handle all of the above examples reasonably well. I suspect, however, that most of the drawbacks won't go away: the non-LLM logic you hand off to is going to be easier to reason about, easier to maintain, cheaper to run, and more easily version-controlled.

Even as LLMs continue to improve, the fundamental architectural principle remains: use LLMs for what they're best at — the interface layer — and rely on purpose-built systems for your core logic. If your team promises to deliver (or buy!) Agentic AI, then everyone needs to have a shared understanding of what that means; you don't want to be the one left trying to explain the mismatch to stakeholders six months later. There's no current (2025-03-30) widely accepted definition, so if you're using the term, be clear on what you mean, and if someone else is using the term, it's worth figuring out which one they mean.

...

Read the original on example.com »

9 288 shares, 21 trendiness

CERN scientists find evidence of quantum entanglement in sheep

The findings could help to explain the species' fascinating flocking behaviour

Quantum entanglement is a fascinating phenomenon where two particles' states are tied to each other, no matter how far apart the particles are. In 2022, the Nobel Prize in Physics was awarded to Alain Aspect, John F. Clauser and Anton Zeilinger for groundbreaking experiments involving entangled photons. These experiments confirmed the predictions for the manifestation of entanglement that had been made by the late CERN theorist John Bell. This phenomenon has so far been observed in a wide variety of systems, such as in top quarks at CERN's Large Hadron Collider (LHC) in 2024. Entanglement has also found several important societal applications, such as quantum cryptography and quantum computing. Now, it also explains the famous herd mentality of sheep.

A flock of sheep (Ovis aries) has roamed the CERN site during the spring and summer months for over 40 years. Along with the CERN shepherd, they help to maintain the vast expanses of grassland around the LHC and are part of the Organization's long-standing efforts to protect the site's biodiversity. In addition, their flocking behaviour has been of great interest to CERN's physicists. It is well known that sheep behave like particles: their stochastic behaviour has been studied by zoologists and physicists alike, who noticed that a flock's ability to quickly change phase is similar to that of atoms in a solid and a liquid. Known as the Lamb Shift, this can cause them to get themselves into bizarre situations, such as walking in a circle for days on end.

Now, new research has shed light on the reason for these extraordinary abilities. Scientists at CERN have found evidence of quantum entanglement in sheep. Using sophisticated modelling techniques and specialised trackers, the findings show that the brains of individual sheep in a flock are quantum-entangled in such a way that the sheep can move and vocalise simultaneously, no matter how far apart they are. The evidence has several ramifications for ovine research and has set the baa for a new branch of quantum physics.

The fact that we were hav­ing our lunch next to the flock was a shear co­in­ci­dence,” says Mary Little, leader of the HERD col­lab­o­ra­tion, de­scrib­ing how the pro­ject came about. When we saw and herd their be­hav­iour, we wanted to in­ves­ti­gate the move­ment of the flock us­ing the tech­nol­ogy at our dis­posal at the Laboratory.”

Observing the sheep’s abil­ity to si­mul­ta­ne­ously move and vo­calise to­gether caused one main ques­tion to aries: since the sheep be­have like sub­atomic par­ti­cles, could quan­tum ef­fects be the rea­son for their be­hav­iour?

“Obviously, we couldn’t put them all in a box and see if they were dead or alive,” said Beau Peep, a researcher on the project. “However, by assuming that the sheep were spherical, we were able to model their behaviour in almost the exact same way as we model subatomic particles.”

Using sophisticated trackers, akin to those in the LHC experiments, the physicists were able to locate the precise particles in the sheep’s brains that might be the cause of this entanglement. Dubbed “moutons” and represented by the Greek letter lambda, λ, these particles are leptons and are close relatives of the muon, but fluffier.

The sta­tis­ti­cal sig­nif­i­cance of the find­ings is 4 sigma, which is enough to show ev­i­dence of the phe­nom­e­non. However, it does not quite pass the baa to be classed as an ob­ser­va­tion.

“More research is needed to confirm whether this was indeed an observation of ovine entanglement or a statistical fluctuation,” says Ewen Woolly, spokesperson for the HERD collaboration. “This may be difficult, as we have found that the research makes physicists become inexplicably drowsy.”

“While entanglement is now the leading theory for this phenomenon, we have to take everything into account,” adds Dolly Shepherd, a CERN theorist. “Who knows, maybe further variables are hidden beneath their fleeces. Wolves, for example.”

...

Read the original on home.cern »

10 264 shares, 12 trendiness

The case against conversational interfaces

Conversational interfaces are a bit of a meme. Every couple of years a shiny new AI development emerges and people in tech go “This is it! The next computing paradigm is here! We’ll only use natural language going forward!”. But then nothing actually changes and we continue using computers the way we always have, until the debate resurfaces a few years later.

We’ve gone through this cy­cle a cou­ple of times now: Virtual as­sis­tants (Siri), smart speak­ers (Alexa, Google Home), chat­bots (“conversational com­merce”), AirPods-as-a-platform, and, most re­cently, large lan­guage mod­els.

I’m not entirely sure where this obsession with conversational interfaces comes from. Perhaps it’s a type of anemoia, a nostalgia for a future we saw in Star Trek that never became reality. Or maybe it’s simply that people look at the term “natural language” and think “well, if it’s natural then it must be the logical end state”.

I’m here to tell you that it’s not.

When people say “natural language” what they mean is written or verbal communication. Natural language is a way to exchange ideas and knowledge between humans. In other words, it’s a data transfer mechanism.

Data trans­fer mech­a­nisms have two crit­i­cal fac­tors: speed and lossi­ness.

Speed de­ter­mines how quickly data is trans­ferred from the sender to the re­ceiver, while lossi­ness refers to how ac­cu­rately the data is trans­ferred. In an ideal state, you want data trans­fer to hap­pen at max­i­mum speed (instant) and with per­fect fi­delity (lossless), but these two at­trib­utes are of­ten a bit of a trade-off.

Let’s look at how well nat­ural lan­guage does on the speed di­men­sion:

The first thing I should note is that these data points are very, very sim­pli­fied av­er­ages. The im­por­tant part to take away from this table is not the ac­cu­racy of in­di­vid­ual num­bers, but the over­all pat­tern: We are sig­nif­i­cantly faster at re­ceiv­ing data (reading, lis­ten­ing) than send­ing it (writing, speak­ing). This is why we can lis­ten to pod­casts at 2x speed, but not record them at 2x speed.

To put the writ­ing and speak­ing speeds into per­spec­tive, we form thoughts at 1,000-3,000 words per minute. Natural lan­guage might be nat­ural, but it’s a bot­tle­neck.
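To make the bottleneck concrete, here is a back-of-the-envelope sketch in Python using only the rough rates quoted in this essay (the thought-speed midpoint is my own simplification of the 1,000–3,000 wpm range; all figures are illustrative averages, not measurements):

```python
# Rough words-per-minute rates quoted in the essay (illustrative averages).
RATES_WPM = {
    "thought": 2000,          # essay cites 1,000-3,000 wpm; midpoint used here
    "speaking": 150,
    "typing (desktop)": 60,
    "typing (mobile)": 36,
}

def transfer_seconds(words: int, wpm: float) -> float:
    """Time in seconds to push `words` through a channel running at `wpm`."""
    return words / wpm * 60

for channel, wpm in RATES_WPM.items():
    print(f"{channel:>18}: {transfer_seconds(150, wpm):6.1f} s")
```

A 150-word idea formed in under five seconds takes a full minute to say and two and a half minutes to type — which is the bottleneck the essay is pointing at.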

And yet, if you think about your day-to-day in­ter­ac­tions with other hu­mans, most com­mu­ni­ca­tion feels re­ally fast and ef­fi­cient. That’s be­cause nat­ural lan­guage is only one of many data trans­fer mech­a­nisms avail­able to us.

For example, instead of saying “I think what you just said is a great idea”, I can just give you a thumbs up. Or nod my head. Or simply smile.

Gestures and fa­cial ex­pres­sions are ef­fec­tively data com­pres­sion tech­niques. They en­code in­for­ma­tion in a more com­pact, but lossier, form to make it faster and more con­ve­nient to trans­mit.

Natural language is great for data transfer that requires high fidelity (or as a data storage mechanism for async communication), but whenever possible we switch to other modes of communication that are faster and more effortless. Speed and convenience always win.

My fa­vorite ex­am­ple of truly ef­fort­less com­mu­ni­ca­tion is a mem­ory I have of my grand­par­ents. At the break­fast table, my grand­mother never had to ask for the but­ter — my grand­fa­ther al­ways seemed to pass it to her au­to­mat­i­cally, be­cause af­ter 50+ years of mar­riage he just sensed that she was about to ask for it. It was like they were com­mu­ni­cat­ing tele­path­i­cally.

*That* is the type of re­la­tion­ship I want to have with my com­puter!

Similar to hu­man-to-hu­man com­mu­ni­ca­tion, there are dif­fer­ent data trans­fer mech­a­nisms to ex­change in­for­ma­tion be­tween hu­mans and com­put­ers. In the early days of com­put­ing, users in­ter­acted with com­put­ers through a com­mand line. These text-based com­mands were ef­fec­tively a nat­ural lan­guage in­ter­face, but re­quired pre­cise syn­tax and a deep un­der­stand­ing of the sys­tem.

The in­tro­duc­tion of the GUI pri­mar­ily solved a dis­cov­ery prob­lem: Instead of hav­ing to mem­o­rize ex­act text com­mands, you could now nav­i­gate and per­form tasks through vi­sual el­e­ments like menus and but­tons. This did­n’t just make things eas­ier to dis­cover, but also more con­ve­nient: It’s faster to click a but­ton than to type a long text com­mand.

Today, we live in a pro­duc­tiv­ity equi­lib­rium that com­bines graph­i­cal in­ter­faces with key­board-based com­mands.

We still use our mouse to navigate and tell our computers what to do next, but routine actions are typically communicated in the form of quick-fire keyboard presses: ⌘b to format text as bold, ⌘t to open a new tab, ⌘c/v to quickly copy things from one place to another, etc.

These short­cuts are not nat­ural lan­guage though. They are an­other form of data com­pres­sion. Like a thumbs up or a nod, they help us to com­mu­ni­cate faster.

Modern pro­duc­tiv­ity tools take these data com­pres­sion short­cuts to the next level. In tools like Linear, Raycast or Superhuman every sin­gle com­mand is just a key­stroke away. Once you’ve built the mus­cle mem­ory, the data in­put feels com­pletely ef­fort­less. It’s al­most like be­ing handed the but­ter at the break­fast table with­out hav­ing to ask for it.
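The “shortcuts as data compression” framing can be made literal with a toy sketch: a shortcut is a tiny codebook mapping a short key chord to a longer command phrase. The mappings below are hypothetical stand-ins, not any particular app’s bindings:

```python
# A keyboard shortcut is a small codebook: a short key chord stands in
# for a longer command phrase. These example mappings are hypothetical.
SHORTCUTS = {
    "cmd+b": "format the selected text as bold",
    "cmd+t": "open a new browser tab",
    "cmd+k": "open the command palette",
}

def compression_ratio(code: str, phrase: str) -> float:
    """Characters of intent conveyed per character actually typed."""
    return len(phrase) / len(code)

for code, phrase in SHORTCUTS.items():
    print(f"{code}: ~{compression_ratio(code, phrase):.1f}x compression")
```

Like the thumbs-up, the chord is lossier than the phrase — it only works once sender and receiver share the codebook, which is exactly what building muscle memory amounts to.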

Touch-based in­ter­faces are con­sid­ered the third piv­otal mile­stone in the evo­lu­tion of hu­man com­puter in­ter­ac­tion, but they have al­ways been more of an aug­men­ta­tion of desk­top com­put­ing rather than a re­place­ment for it. Smartphones are great for away from key­board” work­flows, but im­por­tant pro­duc­tiv­ity work still hap­pens on desk­top.

That’s be­cause text is not a mo­bile-na­tive in­put mech­a­nism. A phys­i­cal key­board can feel like a nat­ural ex­ten­sion of your mind and body, but typ­ing on a phone is al­ways a lit­tle awk­ward — and it shows in data trans­fer speeds: Average typ­ing speeds on mo­bile are just 36 words-per-minute, no­tably slower than the ~60 words-per-minute on desk­top.

We’ve been able to replace natural language with mobile-specific data compression algorithms like emojis or Snapchat selfies, but we’ve never found a mobile equivalent for keyboard shortcuts. Guess why we still don’t have a truly mobile-first productivity app almost 20 years after the introduction of the iPhone?

“But what about speech-to-text,” you might say, pointing to reports about increasing usage of voice messaging. It’s true that speaking (150wpm) is indeed a faster data transfer mechanism than typing (60wpm), but that doesn’t automatically make it a better method to interact with computers.

We keep telling ourselves that previous voice interfaces like Alexa or Siri didn’t succeed because the underlying AI wasn’t smart enough, but that’s only half of the story. The core problem was never the quality of the output function, but the inconvenience of the input function: A natural language prompt like “Hey Google, what’s the weather in San Francisco today?” just takes 10x longer than simply tapping the weather app on your homescreen.

LLMs don’t solve this prob­lem. The qual­ity of their out­put is im­prov­ing at an as­ton­ish­ing rate, but the in­put modal­ity is a step back­wards from what we al­ready have. Why should I have to de­scribe my de­sired ac­tion us­ing nat­ural lan­guage, when I could sim­ply press a but­ton or key­board short­cut? Just pass me the god­damn but­ter.

None of this is to say that LLMs aren’t great. I love LLMs. I use them all the time. In fact, I wrote this very es­say with the help of an LLM.

Instead of draft­ing a first ver­sion with pen and pa­per (my pre­ferred writ­ing tools), I spent an en­tire hour walk­ing out­side, talk­ing to ChatGPT in Advanced Voice Mode. We went through all the fuzzy ideas in my head, clar­i­fied and or­ga­nized them, ex­plored some ad­di­tional talk­ing points, and even­tu­ally pulled every­thing to­gether into a first out­line.

This wasn’t just a one-sided “Hey, can you write a few paragraphs about x” prompt. It felt like a genuine, in-depth conversation and exchange of ideas with a true thought partner. Even weeks later, I’m still amazed at how well it worked. It was one of those rare, magical moments where software makes you feel like you’re living in the future.

In con­trast to typ­i­cal hu­man-to-com­puter com­mands, how­ever, this work­flow is not de­fined by speed. Like writ­ing, my ChatGPT con­ver­sa­tion is a think­ing process — not an in­ter­ac­tion that hap­pens post-thought.

It should also be noted that ChatGPT does not sub­sti­tute any ex­ist­ing soft­ware work­flows in this ex­am­ple. It’s a com­pletely new use case.

This brings me to my core the­sis: The in­con­ve­nience and in­fe­rior data trans­fer speeds of con­ver­sa­tional in­ter­faces make them an un­likely re­place­ment for ex­ist­ing com­put­ing par­a­digms — but what if they com­ple­ment them?

The most con­vinc­ing con­ver­sa­tional UI I have seen to date was at a hackathon where a team turned Amazon Alexa into an in-game voice as­sis­tant for StarCraft II. Rather than re­plac­ing mouse and key­board, voice acted as an ad­di­tional in­put mech­a­nism. It in­creased the band­width of the data trans­fer.

You could see the same pat­tern work for any type of knowl­edge work, where voice com­mands are avail­able while you are busy do­ing other things. We will not re­place Figma, Notion, or Excel with a chat in­ter­face. It’s not go­ing to hap­pen. Neither will we for­ever con­tinue the sta­tus quo, where we con­stantly have to switch back and forth be­tween these tools and an LLM.

Instead, AI should func­tion as an al­ways-on com­mand meta-layer that spans across all tools. Users should be able to trig­ger ac­tions from any­where with sim­ple voice prompts with­out hav­ing to in­ter­rupt what­ever they are cur­rently do­ing with mouse and key­board.
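A minimal sketch of what such a meta-layer could look like: a dispatcher that routes parsed voice intents to registered actions, regardless of which tool currently has focus. Everything here — the `Intent` shape, the registry, the example handler — is hypothetical; the essay proposes the idea, not this API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Intent:
    """A voice prompt after parsing (by an LLM or grammar) into structure."""
    verb: str                      # e.g. "create"
    target: str                    # e.g. "issue"
    args: dict = field(default_factory=dict)

# Registry of voice-triggered actions spanning multiple tools.
_registry: dict[tuple[str, str], Callable[[Intent], str]] = {}

def command(verb: str, target: str):
    """Decorator registering a handler for a (verb, target) pair."""
    def register(fn: Callable[[Intent], str]):
        _registry[(verb, target)] = fn
        return fn
    return register

@command("create", "issue")
def create_issue(intent: Intent) -> str:
    return f"Created issue: {intent.args.get('title', 'untitled')}"

def dispatch(intent: Intent) -> str:
    handler = _registry.get((intent.verb, intent.target))
    if handler is None:
        return f"No handler for: {intent.verb} {intent.target}"
    return handler(intent)

# "Create an issue called fix login bug" -> parsed Intent -> dispatched
print(dispatch(Intent("create", "issue", {"title": "fix login bug"})))
```

The point of the sketch is the shape, not the code: the slow natural-language channel is used once, to parse intent, and everything downstream runs over the fast, structured channel the OS already speaks.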

For this future to become an actual reality, AI needs to work at the OS level. It’s not meant to be an interface for a single tool, but an interface across tools. Kevin Kwok famously wrote that “productivity and collaboration shouldn’t be two separate workflows”. And while he was referring to human-to-human collaboration, the statement is even more true in a world of human-to-AI collaboration, where the lines between productivity and coordination are becoming increasingly blurry.

The sec­ond thing we need to fig­ure out is how we can com­press voice in­put to make it faster to trans­mit. What’s the voice equiv­a­lent of a thumbs-up or a key­board short­cut? Can I prompt Claude faster with sim­ple sounds and whis­tles? Should ChatGPT have ac­cess to my cam­era so it can change its an­swers in re­al­time based on my fa­cial ex­pres­sions?

Even as a sec­ondary in­ter­face, speed and con­ve­nience is all that mat­ters.

I ad­mit that the ti­tle of this es­say is a bit mis­lead­ing (made you click though, did­n’t it?). This is­n’t re­ally a case against con­ver­sa­tional in­ter­faces, it’s a case against zero-sum think­ing.

We spend too much time think­ing about AI as a sub­sti­tute (for in­ter­faces, work­flows, and jobs) and too lit­tle time about AI as a com­ple­ment. Progress rarely fol­lows a sim­ple path of re­place­ment. It un­locks new, pre­vi­ously unimag­in­able things rather than merely dis­plac­ing what came be­fore.

The same is true here. The fu­ture is­n’t about re­plac­ing ex­ist­ing com­put­ing par­a­digms with chat in­ter­faces, but about en­hanc­ing them to make hu­man-com­puter in­ter­ac­tion feel ef­fort­less — like the silent ex­change of but­ter at a well-worn break­fast table.

Thanks to Blake Robbins, Chris Paik, Jackson Dahl, Johannes Schickling, Jordan Singer, and signüll for read­ing drafts of this post.

...

Read the original on julian.digital »
