10 interesting stories served every morning and every evening.




1 694 shares, 28 trendiness

Apps lighter than a React button

In this release, we're showing what happens when you push modern web standards — HTML, CSS, and JS — to their peak:

This entire app is lighter than a React/ShadCN button:

See the benchmark and details here ›

Here's the same app, now with a Rust computation engine and Event Sourcing for instant search and other operations over 150,000 records — far past the point where the JS version of the engine choked on recursive calls over the records.

This demo is here ›

Nue crushes HMR and build-speed records and sets you up with a millisecond feedback loop for your everyday VSCode/Sublime file-save operations:

Immediate feedback for design and component updates, preserving app state

This is a game-changer for Rust, Go, and JS engineers stuck wrestling with React idioms instead of leaning on timeless software patterns. Nue emphasizes a model-first approach, delivering modular design with simple, testable functions, true static typing, and minimal dependencies. Nue is a liberating experience for system devs whose skills can finally shine in a separated model layer.

This is an important shift for design engineers bogged down by React patterns and 40,000+ line design systems. Build radically simpler systems with modern CSS (@layer, variables, calc()) and take control of your typography and whitespace.

This is a wake-up call for UX engineers tangled in React hooks and utility-class walls instead of owning the user experience. Build apps as light as a React button to push the web — and your skills — forward.

Nue is a web framework focused on web standards, currently in active development. We aim to reveal the hidden complexity that's become normalized in modern web development. When a single button outweighs an entire application, something's fundamentally broken.

Nue drives the inevitable shift. We're rebuilding tools and frameworks from the ground up with a cleaner, more robust architecture. Our goal is to restore the joy of web development for all key skill sets: frontend architects, design engineers, and UX engineers.

...

Read the original on nuejs.org »

2 476 shares, 25 trendiness

The April Fools joke that might have got me fired

...

Read the original on oldvcr.blogspot.com »

3 438 shares, 19 trendiness

Fluentsubs

...

Read the original on app.fluentsubs.com »

4 377 shares, 20 trendiness

Bletchley Park code breaker Betty Webb dies aged 101

A decorated World War Two code breaker who spent her youth deciphering enemy messages at Bletchley Park has died at the age of 101. Charlotte "Betty" Webb MBE - who was among the last surviving Bletchley code breakers - died on Monday night, the Women's Royal Army Corps Association confirmed. Mrs Webb, from Wythall in Worcestershire, joined operations at the Buckinghamshire base at the age of 18, later going on to help with Japanese codes at the Pentagon in the US. She was awarded France's highest honour - the Légion d'Honneur - in 2021. The Women's Royal Army Corps Association described Mrs Webb as a woman who "inspired women in the Army for decades".

Bletchley Park Trust CEO Iain Standen said Mrs Webb will not only be remembered for her work but also for her efforts to ensure that the story of what she and her colleagues achieved is not forgotten. "Betty's passion for preserving the history and legacy of Bletchley Park has undoubtedly inspired many people to engage with the story and visit the site," he said in a statement. Tributes to Mrs Webb have begun to be posted on social media, including one from historian and author Dr Tessa Dunlop, who said she was with her in her final hours. Describing Mrs Webb as "the very best", she said on X: "She is one of the most remarkable women I have ever known."

Mrs Webb told the BBC in 2020 that she had "never heard of Bletchley", Britain's wartime code-breaking centre, before starting work there as a member of the ATS, the Auxiliary Territorial Service. She had been studying at a college near Shrewsbury, Shropshire, when she volunteered, as she said she and others on the course felt they ought to be "serving our country rather than just making sausage rolls". Her mother had taught her to speak German as a child, and ahead of her posting she remembered being taken "into the mansion [at Bletchley] to read the Official Secrets Act". "I realised that from then on there was no way that I was going to be able to tell even my parents where I was and what I was doing until 1975 [when restrictions were lifted]," she recalled. She would tell the family with whom she lodged that she was a secretary.

When WW2 ended in Europe in May 1945, she went to work at the Pentagon after spending four years at Bletchley, which, with its analysis of German communications, had served as a vital cog in the Allies' war machine. At the Pentagon she would paraphrase and transcribe already-decoded Japanese messages. She said she was the only member of the ATS to be sent to Washington, describing it as a "tremendous honour". Mrs Webb, in 2020, recalled she had had no idea the Americans planned to end the conflict by dropping atomic weapons on Japanese cities, describing the weapons' power as "utterly awful". After the Allies' final victory, it took Mrs Webb several months to organise return passage to the UK, where she worked as a secretary at a school in Shropshire. The head teacher there had also worked at Bletchley so knew of her professionalism, whereas other would-be employers, she recalled, were left stumped by her being unable to explain - due to secrecy requirements - her previous duties. More than half a century later, in 2021, Mrs Webb was one of 6,000 British citizens to receive the Légion d'Honneur, following a decision by President François Hollande in 2014 to recognise British veterans who helped liberate France.

In 2023, she and her niece were among 2,200 people from 203 countries invited to Westminster Abbey to see King Charles III's coronation. The same year she celebrated her 100th birthday at Bletchley Park with a party. She and her guests were treated to a fly-past by a Lancaster bomber. She said at the time: "It was for me - it's unbelievable isn't it? Little me."

...

Read the original on www.bbc.com »

5 363 shares, 26 trendiness

A Man Powers Home for 8 Years Using 1,000 Old Laptop Batteries

The UN reports that less than 25% of global e-waste is properly collected and recycled.

...

Read the original on techoreon.com »

6 363 shares, 4 trendiness

Glubux's Powerwall


...

Read the original on secondlifestorage.com »

7 319 shares, 14 trendiness

Why F#?

If someone had told me a few months ago that I'd be playing with .NET again after a 15+ year hiatus, I probably would have laughed at this. Early on in my career I played with .NET and Java, and even though .NET had done some things better than Java (as it had the opportunity to learn from some early Java mistakes), I quickly settled on Java as it was a truly portable environment.

I guess everyone who reads my blog knows that in the past few years I've been playing on and off with OCaml, and I think it's safe to say that it has become one of my favorite programming languages - alongside the likes of Ruby and Clojure. My work with OCaml recently drew my attention to F#, an ML targeting .NET, developed by Microsoft. The functional counterpart of the (mostly) object-oriented C#. The newest ML language created…

Unfortunately, no one can be told what the Matrix is. You have to see it for yourself.

Before we start discussing F#, I guess we should first answer the question "What is F#?". I'll borrow a bit from the official page to answer it:

F# is a universal programming language for writing succinct, robust and performant code.

F# allows you to write uncluttered, self-documenting code, where your focus remains on your problem domain, rather than the details of programming.

It does this without compromising on speed and compatibility - it is open-source, cross-platform and interoperable.

Trivia: F# is the language that made the pipeline operator (|>) popular.
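For the unfamiliar, |> simply feeds the value on its left into the function on its right, so data reads top to bottom. A tiny illustrative example:

```fsharp
[ 1; 2; 3; 4; 5 ]
|> List.filter (fun n -> n % 2 = 0)   // keep the even numbers
|> List.map (fun n -> n * n)          // square them
|> List.sum                           // 4 + 16 = 20
|> printfn "%d"
```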

A full set of features is documented in the F# language guide.

F# 1.0 was officially released in May 2005 by Microsoft Research. It was initially developed by Don Syme at Microsoft Research in Cambridge and evolved from an earlier research project called "Caml.NET", which aimed to bring OCaml to the .NET platform. F# was officially moved from Microsoft Research to Microsoft (as part of their developer tooling division) in 2010 (timed with the release of F# 2.0).

F# has been steadily evolving since those early days, and the most recent release, F# 9.0, came out in November 2024. It seems only appropriate that F# would come to my attention in the year of its 20th birthday!

There were several reasons why I wanted to try out F#:

* .NET became open-source and portable a few years ago and I wanted to check the progress on that front

* I was curious if F# offers any advantages over OCaml

* I've heard good things about the F# tooling (e.g. Rider and Ionide)

* I like playing with new programming languages

Below you'll find my initial impressions of several areas.

As F# is a member of the ML family of languages, its syntax won't surprise anyone familiar with OCaml. As there are relatively few people familiar with OCaml, though, I'll mention that Haskell programmers will also feel right at home with the syntax. And Lispers.

For everyone else - it'd be fairly easy to pick up the basics.
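A couple of basic definitions, for instance (the names and values here are just for illustration):

```fsharp
let greet name =
    printfn "Hello, %s!" name

let numbers = [ 1; 2; 3 ]
let doubled = numbers |> List.map (fun n -> n * 2)

greet "F#"
printfn "%A" doubled   // [2; 4; 6]
```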

Nothing shocking here, right?

Here's another slightly more involved example:
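A minimal sketch along these lines, assuming a made-up Sale record and a quick total over a list of sales:

```fsharp
type Sale = { Product: string; Quantity: int; Price: decimal }

let sales =
    [ { Product = "Laptop"; Quantity = 2; Price = 999.99m }
      { Product = "Mouse";  Quantity = 5; Price = 19.99m } ]

// Total revenue across all sales
let total =
    sales |> List.sumBy (fun s -> decimal s.Quantity * s.Price)

printfn "Total revenue: %M" total
```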

Why don't you try saving the snippet above in a file called Sales.fsx and running it like this:
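Assuming the .NET SDK (which bundles F#) is installed:

```
dotnet fsi Sales.fsx
```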

Now you know that F# is a great choice for ad-hoc scripts! Also, running dotnet fsi by itself will pop open an F# REPL where you can explore the language at your leisure.

I'm not going to go into great detail here, as much of what I wrote about OCaml here applies to F# as well. I'd also suggest this quick tour of F# to get a better feel for its syntax.

Tip: Check out the F# cheatsheet if you'd like to see a quick syntax reference.

One thing that made a good impression on me is the language designers' focus on making F# approachable to newcomers by providing a lot of small quality-of-life improvements. Below are a few examples that probably don't mean much to you, but would mean something to people familiar with OCaml:

I guess some of those might be controversial, depending on whether you're an ML language purist or not, but in my book anything that makes ML more popular is a good thing.

Did I also mention it's easy to work with Unicode strings and regular expressions?
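A trivial, illustrative use of .NET's regular expressions from F# (the sample text is arbitrary):

```fsharp
open System.Text.RegularExpressions

let text = "Ναι — F# handles Unicode just fine, naïve café included."

// \p{L}+ matches runs of letters in any script
Regex.Matches(text, @"\p{L}+")
|> Seq.map (fun m -> m.Value)
|> Seq.iter (printfn "%s")
```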

Often people say that F# is mostly a staging ground for future C# features, and perhaps that's true. I haven't observed both languages long enough to have my own opinion on the subject, but I was impressed to learn that async/await (of C# and later JavaScript fame) originated in… F# 2.0.

It all changed in 2012 when C# 5 launched with the introduction of what has now become the popularized async/await keyword pairing. This feature allowed you to write code with all the benefits of hand-written asynchronous code, such as not blocking the UI when a long-running process started, yet read like normal synchronous code. This async/await pattern has now found its way into many modern programming languages such as Python, JS, Swift, Rust, and even C++.

F#'s approach to asynchronous programming is a little different from async/await but achieves the same goal (in fact, async/await is a cut-down version of F#'s approach, which was introduced a few years previously, in F# 2).
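As a taste, here's a small illustrative async workflow (Async.Parallel and Async.RunSynchronously come from FSharp.Core; the URLs are arbitrary):

```fsharp
open System.Net.Http

let fetchLength (url: string) =
    async {
        use client = new HttpClient()
        // Await the .NET Task returned by GetStringAsync
        let! body = client.GetStringAsync(url) |> Async.AwaitTask
        return url, body.Length
    }

[ "https://fsharp.org"; "https://dotnet.microsoft.com" ]
|> List.map fetchLength
|> Async.Parallel
|> Async.RunSynchronously
|> Array.iter (fun (url, len) -> printfn "%s -> %d chars" url len)
```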

Time will tell what will happen, but I think it's unlikely that C# will ever be able to fully replace F#.

I've also found this encouraging comment from 2022 suggesting that Microsoft might be willing to invest more in F#:

Some good news for you. After 10 years of F# being developed by 2.5 people internally and some random community efforts, Microsoft has finally decided to properly invest in F# and created a full-fledged team in Prague this summer. I'm a dev in this team; just like you, I was an F# fan for many years, so I am happy things finally got moving here.

Looking at the changes in F# 8.0 and F# 9.0, it seems the new full-fledged team has done some great work!

It's hard to assess the ecosystem around F# after such a brief period, but overall it seems to me that there are fairly few "native" F# libraries and frameworks out there, and most people rely heavily on the core .NET APIs and the many third-party libraries and frameworks geared towards C#. That's a pretty common setup when it comes to hosted languages in general, so nothing surprising here either.

If you've ever used another hosted language (e.g. Scala, Clojure, Groovy) then you probably know what to expect.

Awesome F# keeps track of popular F# libraries, tools and frameworks. I'll highlight here the web development and data science libraries:

* Giraffe: A lightweight library for building web applications using ASP.NET Core. It provides a functional approach to web development.

* Suave: A simple and lightweight web server library with combinators for routing and task composition. (Giraffe was inspired by Suave.)

* Saturn: Built on top of Giraffe and ASP.NET Core, it offers an MVC-style framework inspired by Ruby on Rails and Elixir's Phoenix.

* Bolero: A framework for building client-side applications in F# using WebAssembly and Blazor.

* Fable: A compiler that translates F# code into JavaScript, enabling integration with popular JavaScript ecosystems like React or Node.js.

* Elmish: A model-view-update (MVU) architecture for building web UIs in F#, often used with Fable.

* SAFE Stack: An end-to-end, functional-first stack for building cloud-ready web applications. It combines technologies like Saturn, Azure, Fable, and Elmish for a type-safe development experience.

* Deedle: A library for data manipulation and exploratory analysis, similar to pandas in Python.

* FsLab: A collection of libraries tailored for data science, including visualization and statistical tools.

I haven't played much with any of them at this point, so I'll reserve any feedback and recommendations for some point in the future.

The official documentation is pretty good, although I find it kind of weird that some of it is hosted on Microsoft's site and the rest is on https://fsharp.org/ (the site of the F# Software Foundation).

I really liked the following parts of the documentation:

https://fsharpforfunandprofit.com/ is another good learning resource (even if it seems a bit dated).

F# has a somewhat troubled dev tooling story, as historically support for F# was great only in Visual Studio and somewhat subpar elsewhere. Fortunately, the tooling story has improved a lot in the past decade:

In 2014 a technical breakthrough was made with the creation of the FSharp.Compiler.Service (FCS) package by Tomas Petricek, Ryan Riley, and Dave Thomas, with many later contributors. This contains the core implementation of the F# compiler, editor tooling and scripting engine in the form of a single library and can be used to make F# tooling for a wide range of situations. This has allowed F# to be delivered into many more editors, scripting and documentation tools and allowed the development of alternative backends for F#. Key editor community-based tooling includes Ionide, by Krzysztof Cieślak and contributors, used for rich editing support in the cross-platform VSCode editor, with over 1M downloads at time of writing.

I've played with the F# plugins for several editors:

Overall, Rider and VS Code provide the most (and the most polished) features, but the other options were quite usable as well. That's largely due to the fact that the F# LSP server, fsautocomplete (naming is hard!), is quite robust, and any editor with good LSP support gets a lot of functionality for free.

Still, I'll mention that I found the tooling lacking in some regards:

* fsharp-mode doesn't use TreeSitter (yet) and doesn't seem to be very actively developed (looking at the code, it seems it was derived from caml-mode)

* Zed's support for F# is quite spartan

* In VS Code, shockingly, expanding and shrinking the selection is broken, which is quite odd for what is supposed to be the flagship editor for F#

I'm really struggling with VS Code's keybindings (too many modifier keys and function keys for my taste) and editing model, so I'll likely stick with Emacs going forward. Or I'll finally spend more quality time with Neovim!

It seems that everyone is using the same code formatter (Fantomas), including the F# team, which is great! The linter story in F# is not as great (it seems the only popular linter, FSharpLint, is abandonware these days), but when your compiler is so good, you don't really need a linter as much.

Oh, well… It seems that Microsoft isn't particularly invested in supporting the tooling for F#, as pretty much all the major projects in this space are community-driven.

Using AI coding agents (e.g. Copilot) with F# worked pretty well, but I didn't spend much time on this front.

At the end of the day, any editor will likely do, as long as you're using LSP.

By the way, I had an interesting observation while programming in F# (and OCaml, for that matter) - when you're working with a language with a really good type system, you don't really need that much from your editor. Most of the time I'm perfectly happy with just some inline type information (e.g. something like CodeLenses), auto-completion and the ability to easily send code to fsi. Simplicity continues to be the ultimate sophistication…

Other tools that should be on your radar are:

* Paket - a dependency manager for .NET projects. Think of it as something like Bundler, npm or pip, but for .NET's NuGet package ecosystem.

* FAKE - a DSL for build tasks and more, where you can use F# to specify the tasks. Somewhat similar to Ruby's Rake. Some people claim it's the easiest way to sneak F# into an existing .NET project.

Given the depth and breadth of .NET, I guess the sky is the limit for you!

It seems to me that F# will be a particularly good fit for data analysis and manipulation, because of features like type providers.
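For example, a CSV type provider from the FSharp.Data package gives you typed rows checked at compile time (the file names and the Amount column below are made up for illustration):

```fsharp
#r "nuget: FSharp.Data"
open FSharp.Data

// The sample file's headers drive the inferred, compile-time-checked schema.
type Sales = CsvProvider<"sales-sample.csv">

let rows = Sales.Load("sales-2024.csv").Rows
rows |> Seq.sumBy (fun r -> r.Amount) |> printfn "Total: %M"
```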

It's probably a good fit for backend services and even full-stack apps, although I haven't really played with the F#-first solutions in this space yet.

Fable and Elmish make F# a viable option for client-side programming and might offer another easy way to sneak F# into your day-to-day work.

Note: Historically, Fable has been used to target JavaScript, but since Fable 4 you can also target other languages such as TypeScript, Rust, Python, and more.

Here's how easy it is to transpile an F# codebase into something else:
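For instance, with the Fable CLI installed as a dotnet tool, something along these lines targets Python instead of JavaScript (the exact flags depend on your Fable version):

```
dotnet fable --lang python
```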

My initial impression of the community is that it's fairly small, perhaps even smaller than that of OCaml. The F# Reddit and Discord (the one listed on Reddit) seem like the most active places for F# conversations. There's supposed to be some F# Slack as well, but I couldn't get an invite for it (it seems the automated process for issuing those invites has been broken for a while).

I'm still not sure what role Microsoft plays in the community, as I haven't seen much from them overall.

For me a small community is not really a problem, as long as the community is vibrant and active. Also, I've noticed I always feel more connected to smaller communities. Moving from Java to Ruby back in the day felt like night and day as far as community engagement and sense of belonging go.

I didn't find many books and community sites/blogs dedicated to F#, but I didn't really expect to in the first place.

The most notable community initiatives I discovered were:

* Amplifying F# - an effort to promote F# and to get more businesses involved with it

* F# for Fun and Profit - a collection of tutorials and essays on F#

* F# Lab - a community-driven toolkit for data science in F#

* F# Weekly - a weekly newsletter about the latest developments in the world of F#

It seems to me that more can be done to promote the language and engage new programmers and businesses with it, although that's never easy 20 years into a project's existence. I continue to be somewhat puzzled as to why Microsoft doesn't market F# more, as I think it could be a great marketing vehicle for them.

All in all, I don't feel qualified to comment much on the F# community at this point.

Depending on the type of person you are, you may or may not care about a programming language's "popularity". People often ask me why I spend a lot of time with languages that are unlikely to ever result in job opportunities for me, e.g.:

Professional opportunities are important, of course, but so are:

* having fun (and the F in F# stands for "fun")

* challenging yourself to think and work differently

That being said, F# is not a popular language by most conventional metrics. It's not highly ranked on TIOBE, Stack Overflow or most job boards. But it's also no less popular than most "mainstream" functional programming languages. The sad reality is that functional programming is still not mainstream, and perhaps it never will be.

A few more resources on the subject:

* How Popular is F# in 2024

* Here's also a video for the article above

...

Read the original on batsov.com »

8 308 shares, 16 trendiness

CERN scientists find evidence of quantum entanglement in sheep

The findings could help to explain the species' fascinating flocking behaviour

Quantum entanglement is a fascinating phenomenon where two particles' states are tied to each other, no matter how far apart the particles are. In 2022, the Nobel Prize in Physics was awarded to Alain Aspect, John F. Clauser and Anton Zeilinger for groundbreaking experiments involving entangled photons. These experiments confirmed the predictions for the manifestation of entanglement that had been made by the late CERN theorist John Bell. This phenomenon has so far been observed in a wide variety of systems, such as in top quarks at CERN's Large Hadron Collider (LHC) in 2024. Entanglement has also found several important societal applications, such as quantum cryptography and quantum computing. Now, it also explains the famous herd mentality of sheep.

A flock of sheep (Ovis aries) has roamed the CERN site during the spring and summer months for over 40 years. Along with the CERN shepherd, they help to maintain the vast expanses of grassland around the LHC and are part of the Organization's long-standing efforts to protect the site's biodiversity. In addition, their flocking behaviour has been of great interest to CERN's physicists. It is well known that sheep behave like particles: their stochastic behaviour has been studied by zoologists and physicists alike, who noticed that a flock's ability to quickly change phase is similar to that of atoms in a solid and a liquid. Known as the Lamb Shift, this can cause them to get themselves into bizarre situations, such as walking in a circle for days on end.

Now, new research has shed light on the reason for these extraordinary abilities. Scientists at CERN have found evidence of quantum entanglement in sheep. Using sophisticated modelling techniques and specialised trackers, the findings show that the brains of individual sheep in a flock are quantum-entangled in such a way that the sheep can move and vocalise simultaneously, no matter how far apart they are. The evidence has several ramifications for ovine research and has set the baa for a new branch of quantum physics.

"The fact that we were having our lunch next to the flock was a shear coincidence," says Mary Little, leader of the HERD collaboration, describing how the project came about. "When we saw and herd their behaviour, we wanted to investigate the movement of the flock using the technology at our disposal at the Laboratory."

Observing the sheep's ability to simultaneously move and vocalise together caused one main question to aries: since the sheep behave like subatomic particles, could quantum effects be the reason for their behaviour?

"Obviously, we couldn't put them all in a box and see if they were dead or alive," said Beau Peep, a researcher on the project. "However, by assuming that the sheep were spherical, we were able to model their behaviour in almost the exact same way as we model subatomic particles."

Using sophisticated trackers, akin to those in the LHC experiments, the physicists were able to locate the precise particles in the sheep's brains that might be the cause of this entanglement. Dubbed "moutons" and represented by the Greek letter lambda, λ, these particles are leptons and are close relatives of the muon, but fluffier.

The statistical significance of the findings is 4 sigma, which is enough to show evidence of the phenomenon. However, it does not quite pass the baa to be classed as an observation.

"More research is needed to fully confirm that this was indeed an observation of ovine entanglement or a statistical fluctuation," says Ewen Woolly, spokesperson for the HERD collaboration. "This may be difficult, as we have found that the research makes physicists become inexplicably drowsy."

"While entanglement is now the leading theory for this phenomenon, we have to take everything into account," adds Dolly Shepherd, a CERN theorist. "Who knows, maybe further variables are hidden beneath their fleeces. Wolves, for example."

...

Read the original on home.cern »

9 271 shares, 13 trendiness

Excitable cells

...

Read the original on jenevoldsen.com »

10 242 shares, 52 trendiness

Stop syncing everything


Partial replication sounds easy—just sync the data your app needs, right? But choosing an approach is tricky: logical replication precisely tracks every change, complicating strong consistency, while physical replication avoids that complexity but requires syncing every change, even discarded ones. What if your app could combine the simplicity of physical replication with the efficiency of logical replication? That's the key idea behind Graft, the open-source transactional storage engine I'm launching today. It's designed specifically for lazy, partial replication with strong consistency, horizontal scalability, and object storage durability.

Graft is designed with the following use cases in mind:

* Offline-first & mobile apps: Simplify development and improve reliability by offloading replication and storage to Graft.

* Cross-platform sync: Share data smoothly across devices, browsers, and platforms without vendor lock-in.

* Any data type: Replicate databases, files, or custom formats—all with strong consistency.

I first discovered the need for Graft while building SQLSync. SQLSync is a frontend-optimized database stack built on top of SQLite with a synchronization engine powered by ideas from Git and distributed systems. SQLSync makes multiplayer SQLite databases a reality, powering interactive apps that run directly in your browser.

However, SQLSync replicates the entire log of changes to every client—similar to how some databases implement physical replication. While this approach works fine on servers, it's poorly suited to the constraints of edge and browser environments.

After shipping SQLSync, I decided to find a replication solution more suited to the edge. I needed something that could:

* Let clients sync at their own pace

* Sync only what they need

* Sync from anywhere, including the edge and offline devices

That didn't exist. So I built it.

If you've ever tried to keep data in sync across clients and servers, you know it's harder than it sounds. Most existing solutions fall into one of two camps:

* Full replication, which syncs the entire dataset to each client—not practical for constrained environments like serverless functions or web apps.

* Schema-aware diffs, like Change Data Capture (CDC) or Conflict-free Replicated Data Types (CRDTs), which track logical changes at the row or field level—but require deep application integration and don't generalize to arbitrary data.

Like full replication, Graft is schema-agnostic. It doesn't know or care what kind of data you're storing—it just replicates bytes. But instead of sending all the data, it behaves more like logical replication: clients receive a compact description of what's changed since their last sync.

At the core of this model is the Volume: a sparse, ordered collection of fixed-size Pages. Clients interact with Volumes through a transactional API, reading and writing at specific Snapshots. Under the hood, Graft persists and replicates only what's necessary—using object storage as a durable, scalable backend.

The result is a system that's lazy, partial, edge-capable, and consistent.

Want to try the managed version of Graft?

Join the waitlist to get early access: Sign up here →

Each of these properties deserves a closer look—let's unpack them one by one.

Graft is designed for the real world—where edge clients wake up occasionally, face unreliable networks, and run in short-lived, resource-constrained environments. Instead of relying on continuous replication, clients choose when to sync, and Graft makes it easy to fast-forward to the latest snapshot.

That sync starts with a simple question: what changed since my last snapshot?

The server responds with a graft—a compact bitset of the page indexes that have changed across all commits since that snapshot. This is where the project gets its name: a graft attaches new changes to an existing snapshot—like grafting a branch onto a tree. Grafts act as a guide, informing the client which pages can be reused and which need to be fetched.

Critically, when a client pulls a graft from the server, it doesn't receive any actual data—only metadata about what changed. This gives the client full control over what to fetch and when, laying the foundation for partial replication.

When you're building for edge environments—browser tabs, mobile apps, serverless functions—you can't afford to download the entire dataset just to serve a handful of queries. That's where partial replication comes in.

After a client pulls a graft, it knows exactly what's changed. It can use that information to determine precisely which pages are still valid and which need to be fetched. Instead of pulling everything, clients selectively retrieve only the pages they'll actually use—nothing more, nothing less.
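To make that concrete, here is a purely conceptual sketch in F# — not Graft's actual API or wire format — of how a client might use a changed-page set to decide what to reuse and what to fetch:

```fsharp
// Purely conceptual sketch — not Graft's actual API or wire format.
type PageIndex = int

// The "graft": page indexes changed since the client's last snapshot.
let changedSinceSnapshot : Set<PageIndex> = set [ 3; 17; 42 ]

// Pages the client already holds locally.
let cachedPages : Set<PageIndex> = set [ 1; 3; 9; 17 ]

// Cached pages untouched by the graft remain valid as-is.
let stillValid = Set.difference cachedPages changedSinceSnapshot

// Of the pages a read actually touches, fetch those that changed
// or were never cached in the first place.
let pagesToFetch (wanted: Set<PageIndex>) =
    Set.union
        (Set.intersect wanted changedSinceSnapshot)
        (Set.difference wanted cachedPages)

printfn "still valid: %A" stillValid
printfn "to fetch:    %A" (pagesToFetch (set [ 3; 9; 42 ]))
```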

To keep things snappy, Graft supports several ways to prefetch pages:

General-purpose prefetching: Graft includes a built-in prefetcher based on the Leap algorithm, which predicts future page accesses by identifying patterns.

Domain-specific prefetching: Applications can leverage domain knowledge to preemptively fetch relevant pages. For instance, if your app frequently queries a user's profile, Graft can prefetch pages related to that profile before the data is needed.

Proactive fetching: Clients can always fall back to pulling all changes if needed, essentially reverting to full replication. This is particularly useful for Graft workloads running on the server side.

And because Graft hosts pages directly on object storage, they're naturally durable and scalable, creating a strong foundation for edge-native replication.

Edge replication isn't just about choosing what data to sync—it's about making sure that data is available where it's actually needed. Graft does this in two key ways.

First, pages are served from object storage through a global fleet of edge servers, allowing frequently accessed ("hot") pages to be cached near clients. This keeps latency low and responsiveness high, no matter where in the world your users happen to be.

Second, the Graft client itself is lightweight and designed specifically to be embedded. With minimal dependencies and a tiny runtime, it integrates into constrained environments like browsers, devices, mobile apps, and serverless functions.

The result? Your data is always cached exactly where it's most valuable—right at the edge and embedded in your application.

But caching data on the edge brings new challenges, particularly around maintaining consistency and safely handling conflicts. That's where Graft's robust consistency model comes in.

Strong consistency is critical—especially when syncing data between clients that might occasionally conflict. Graft addresses this by providing a clear and robust consistency model: Serializable Snapshot Isolation.

This model gives clients isolated, consistent views of data at specific snapshots, allowing reads to proceed concurrently without interference. At the same time, it ensures that writes are strictly serialized, so there's always a clear, globally consistent order for every transaction.

However, because Graft is designed for offline-first, lazy replication, clients sometimes attempt to commit changes based on an outdated snapshot. Accepting these commits blindly would violate strict serializability. Instead, Graft safely rejects the commit and lets the client choose how to resolve the situation. Typically, clients will either:

* Reset and replay, by pulling the latest snapshot, reapplying local transactions, and trying again. Locally, the client experiences Optimistic Snapshot Isolation: snapshots may later be discarded if the commit is rejected.

* Merge their local state with the latest snapshot from the server. This may degrade the global consistency model to snapshot isolation.

In short, Graft ensures you never have to sacrifice consistency—even when clients sync sporadically, operate offline, or collide with concurrent writes.

Combining lazy syncing, partial replication, edge-friendly deployment, and strong consistency, Graft provides a robust foundation for a variety of edge-native applications. Here are just a few examples of what you can build with Graft:

Offline-first apps: Note-taking, task management, or CRUD apps that operate partially offline. Graft takes care of syncing, allowing the application to forget the network even exists. When combined with a conflict handler, Graft can also enable multiplayer on top of arbitrary data.

Cross-platform data: Eliminate vendor lock-in and allow your users to seamlessly access their data across mobile platforms, devices, and the web. Graft is architected to be embedded anywhere.

Stateless read replicas: Due to Graft's unique approach to replication, a database replica can be spun up with no local state, retrieve the latest snapshot metadata, and immediately start running queries. No need to download all the data and replay the log.

Replicate anything: Graft is just focused on consistent page replication. It doesn't care about what's inside those pages. So go crazy! Use Graft to sync AI models, Parquet or Lance files, geospatial tilesets, or just photos of your cats. The sky's the limit with Graft.

Today, libgraft is the easiest way to start using Graft. It's a native SQLite extension that works anywhere SQLite does. It uses Graft to replicate just the parts of the database that a client actually uses, making it possible to run SQLite in resource-constrained environments.

libgraft implements a SQLite virtual file system (VFS), allowing it to intercept all reads and writes to the database. It provides the same transactional and concurrency semantics as SQLite does when running in WAL mode. Using libgraft provides your application with the following benefits:

* asynchronous replication to and from object storage

* lazy partial replicas on the edge and in devices

If you're interested in using libgraft, you can find the documentation here.

Graft is developed openly on GitHub, and contributions from the community are very welcome. You can open issues, participate in discussions, or submit pull requests—check out our contribution guide for details.

If you'd like to chat about Graft, join the Discord or send me an email. I'd love your feedback on Graft's approach to lazy, partial edge replication.

I'm also planning on launching a Graft Managed Service. If you'd like to join the waitlist, you can sign up here.

Keep reading to learn about Graft's roadmap as well as a detailed comparison between Graft and existing SQLite replication solutions.

Graft is the result of a year of research, many iterations, and one major pivot. But Graft is far from done. There's a lot left to build, and the roadmap is ambitious. In no particular order, here's what's planned:

WebAssembly support: Supporting WebAssembly (Wasm) would allow Graft to be used in the browser. I'd like to eventually support SQLite's official Wasm build, wa-sqlite, and sql.js.

Integrating Graft and SQLSync: Once Graft supports Wasm, integrating it with SQLSync will be straightforward. The plan is to split out SQLSync's mutation, rebase, and query subscription layers so it can sit on top of a database using Graft replication.

More client libraries: I'd love to see native Graft-client wrappers for popular languages including Python, JavaScript, Go, and Java. This would allow Graft to be used to replicate arbitrary data in those languages rather than being restricted to SQLite.

Low-latency writes: Graft currently blocks push operations until they have been fully committed into object storage. This can be addressed in a number of ways:

Buffer writes in a low-latency durable consensus group sitting in front of object storage.

Garbage collection, checkpointing, and compaction: These features are needed to maximize query performance, minimize wasted space, and enable deleting data permanently. They all relate to Graft's decision to store data directly in object storage and batch changes together into files called segments.

Authentication and authorization: This is a fairly broad task that encompasses everything from accounts on the Graft managed service to fine-grained authorization to read/write Volumes.

Volume forking: The Graft service is already set up to perform zero-copy forks, since it can easily copy Segment references over to the new Volume. However, to perform a local fork, Graft currently needs to copy all of the pages. This could be solved by layering volumes locally and allowing reads to fall through, or by changing how pages are addressed locally.

Conflict handling: Graft should offer built-in conflict resolution strategies and extension points so applications can control how conflicts are handled. The initial built-in strategy will automatically merge non-overlapping transactions. While this relaxes global consistency to optimistic snapshot isolation, it can significantly boost performance in collaborative and multiplayer scenarios.

Graft builds on ideas pioneered by many other projects, while adding its own unique contributions to the space. Here is a brief overview of the SQLite replication landscape and how Graft compares.

The information in this section has been gathered from documentation and blog posts, and might not be perfectly accurate. Please let me know if I've misrepresented or misunderstood a project.

Among SQLite-based projects, mvSQLite is the closest in concept to Graft. It implements a custom VFS layer that stores SQLite pages directly in FoundationDB.

In mvSQLite, each page is stored by its content hash and referenced by (page_number, snapshot version). This structure allows readers to lazily fetch pages from FoundationDB as needed. By leveraging page-level versioning, mvSQLite supports concurrent write transactions, provided their read and write sets don't overlap.

How Graft compares: Graft and mvSQLite share similar storage-layer designs, using page-level versioning to allow lazy, on-demand fetching and partial database views. The key difference lies in data storage location and how page changes are tracked. mvSQLite depends on FoundationDB, requiring all nodes to have direct cluster access—making it unsuitable for widely distributed edge devices and web applications. Additionally, Graft's Splinter-based changesets are self-contained, easily distributable, and do not require direct queries against FoundationDB to determine changed page versions.

Litestream is a streaming backup solution that continuously replicates SQLite WAL frames to object storage. Its primary focus is async durability, point-in-time restore, and read replicas. It runs externally to your application, monitoring SQLite's WAL through the filesystem.

How Graft compares: Unlike Litestream, Graft integrates directly into SQLite's commit process via its custom VFS, enabling lazy, partial replication and distributed writes. Like Litestream, Graft replicates pages to object storage and supports point-in-time restores.

cr-sqlite is a SQLite extension which turns tables into Conflict-free Replicated Data Types (CRDTs), enabling logical, row-level replication. It offers automatic conflict resolution but requires schema awareness and application-level integration.

How Graft compares: Graft is schema-agnostic and doesn't depend on logical CRDTs, making it compatible with arbitrary SQLite extensions and custom data structures. However, to achieve global serializability, Graft expects applications to handle conflict resolution explicitly. In contrast, cr-sqlite automatically merges changes from multiple writers, achieving causal consistency.

By combining Durable Objects with SQLite, you get a strongly consistent and highly durable database wrapped with your business logic and hosted hopefully close to your users in Cloudflare's massive edge network. Under the hood, this solution is similar to Litestream in that it replicates the SQLite WAL to object storage and performs periodic checkpoints.

How Graft compares: Graft exposes replication as a first-class citizen and is designed to replicate efficiently to and from the edge. In comparison, SQLite in Durable Objects is focused on extending Durable Objects with the full power of SQLite.

Cloudflare D1 is a managed SQLite database operating similarly to traditional database services like Amazon RDS or Turso, accessed by applications via an HTTP API.

How Graft compares: Graft replicates data directly to the edge, embedding it within client applications. This decentralized replication model contrasts significantly with D1's centralized data service.

Turso provides managed SQLite databases and embedded replicas via libSQL, an open-source SQLite fork. Similar to Litestream and Cloudflare Durable Objects SQL Storage, Turso replicates SQLite WAL frames to object storage and periodically checkpoints. Replicas catch up by retrieving these checkpoints and replaying the log.

How Graft compares: Graft distinguishes itself with partial replication and support for arbitrary, schema-agnostic data structures. Graft's backend service operates directly at the page level and outsources the entire transactional lifecycle to clients.

The key idea behind rqlite and dqlite is to distribute SQLite across multiple servers. This is achieved through Raft-based consensus and routing SQLite operations through a network protocol to the current Raft leader.

How Graft compares: These projects are focused on increasing SQLite's durability and availability through consensus and traditional replication. They are designed to scale across a set of stateful nodes that maintain connectivity to one another. Graft fundamentally differs by being a stateless system built on top of object storage, designed to replicate data to and from the edge.

Verneuil focuses on asynchronously replicating SQLite snapshots to read replicas via object storage, prioritizing reliability without introducing additional failure modes. Verneuil explicitly avoids mechanisms to minimize replication latency or staleness.

How Graft compares: Graft behaves more like a multi-writer distributed database, emphasizing selective, real-time partial replication. Verneuil's approach, meanwhile, emphasizes unidirectional asynchronous snapshot replication without guarantees around replication freshness.

...

Read the original on sqlsync.dev »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.