10 interesting stories served every morning and every evening.
In this release, we’re showing what happens when you push modern web standards — HTML, CSS, and JS — to their limits:
This entire app is lighter than a React/ShadCN button:
See benchmark and details here ›
Here’s the same app, now with a Rust computation engine and Event Sourcing for instant search and other operations over 150,000 records — far past the point where the JS version of the engine choked on recursive calls over the records.
This demo is here ›
Nue crushes HMR and build speed records and sets you up with a millisecond feedback loop for your everyday VSCode/Sublime file-save operations:
Immediate feedback for design and component updates, preserving app state
This is a game-changer for Rust, Go, and JS engineers stuck wrestling with React idioms instead of leaning on timeless software patterns. Nue emphasizes a model-first approach, delivering modular design with simple, testable functions, true static typing, and minimal dependencies. Nue is a liberating experience for system devs whose skills can finally shine in a separated model layer.
This is an important shift for design engineers bogged down by React patterns and 40,000+ line design systems. Build radically simpler systems with modern CSS (@layers, variables, calc()) and take control of your typography and whitespace.
This is a wake-up call for UX engineers tangled in React hooks and utility class walls instead of owning the user experience. Build apps as light as a React button to push the web — and your skills — forward.
Nue is a web framework focused on web standards, currently in active development. We aim to reveal the hidden complexity that’s become normalized in modern web development. When a single button outweighs an entire application, something’s fundamentally broken.
Nue drives the inevitable shift. We’re rebuilding tools and frameworks from the ground up with a cleaner, more robust architecture. Our goal is to restore the joy of web development for all key skill sets: frontend architects, design engineers, and UX engineers.
...
Read the original on nuejs.org »
...
Read the original on oldvcr.blogspot.com »
A decorated World War Two code breaker who spent her youth deciphering enemy messages at Bletchley Park has died at the age of 101. Charlotte “Betty” Webb MBE - who was among the last surviving Bletchley code breakers - died on Monday night, the Women’s Royal Army Corps Association confirmed. Mrs Webb, from Wythall in Worcestershire, joined operations at the Buckinghamshire base at the age of 18, later going on to help with Japanese codes at The Pentagon in the US. She was awarded France’s highest honour - the Légion d’Honneur - in 2021. The Women’s Royal Army Corps Association described Mrs Webb as a woman who “inspired women in the Army for decades”.
Bletchley Park Trust CEO Iain Standen said Mrs Webb will not only be remembered for her work but “also for her efforts to ensure that the story of what she and her colleagues achieved is not forgotten”. “Betty’s passion for preserving the history and legacy of Bletchley Park has undoubtedly inspired many people to engage with the story and visit the site,” he said in a statement. Tributes to Mrs Webb have begun to be posted on social media, including one from historian and author Dr Tessa Dunlop, who said she was with her in her final hours. Describing Mrs Webb as “the very best”, she said on X: “She is one of the most remarkable women I have ever known.”
Mrs Webb told the BBC in 2020 that she had “never heard of Bletchley”, Britain’s wartime code-breaking centre, before starting work there as a member of the ATS, the Auxiliary Territorial Service. She had been studying at a college near Shrewsbury, Shropshire, when she volunteered, as she said she and others on the course felt they “ought to be serving our country rather than just making sausage rolls”. Her mother had taught her to speak German as a child, and ahead of her posting she remembered being “taken into the mansion [at Bletchley] to read the Official Secrets Act”. “I realised that from then on there was no way that I was going to be able to tell even my parents where I was and what I was doing until 1975 [when restrictions were lifted],” she recalled. She would tell the family with whom she lodged that she was a secretary.
When WW2 ended in Europe in May 1945, she went to work at the Pentagon after spending four years at Bletchley, which with its analysis of German communications had served as a vital cog in the Allies’ war machine. At the Pentagon she would paraphrase and transcribe already-decoded Japanese messages. She said she was the only member of the ATS to be sent to Washington, describing it as a “tremendous honour”. Mrs Webb, in 2020, recalled she had had no idea the Americans planned to end the conflict by dropping atomic weapons on Japanese cities, describing the weapons’ power as “utterly awful”. After the Allies’ final victory, it took Mrs Webb several months to organise return passage to the UK, where she worked as a secretary at a school in Shropshire. The head teacher there had also worked at Bletchley so knew of her professionalism, whereas other would-be employers, she recalled, were left stumped by her being unable to explain - due to secrecy requirements - her previous duties. More than half a century later, in 2021, Mrs Webb was one of 6,000 British citizens to receive the Légion d’Honneur, following a decision by President François Hollande in 2014 to recognise British veterans who helped liberate France.
In 2023, she and her niece were among 2,200 people from 203 countries invited to Westminster Abbey to see King Charles III’s coronation. The same year she celebrated her 100th birthday at Bletchley Park with a party. She and her guests were treated to a fly-past by a Lancaster bomber. She said at the time: “It was for me - it’s unbelievable isn’t it? Little me.”
...
Read the original on www.bbc.com »
The UN reports that less than 25% of global e-waste is properly collected and recycled.
...
Read the original on techoreon.com »
Follow along with the video below to see how to install our site as a web app on your home screen.
Note: This feature currently requires accessing the site using the built-in Safari browser.
...
Read the original on secondlifestorage.com »
If someone had told me a few months ago I’d be playing with .NET again after a 15+ years hiatus I probably would have laughed at this. Early on in my career I played with .NET and Java, and even though .NET had done some things better than Java (as it had the opportunity to learn from some early Java mistakes), I quickly settled on Java as it was a truly portable environment.
I guess everyone who reads my blog knows that in the past few years I’ve been playing on and off with OCaml and I think it’s safe to say that it has become one of my favorite programming languages - alongside the likes of Ruby and Clojure. My work with OCaml drew my attention recently to F#, an ML targeting .NET, developed by Microsoft. The functional counterpart of the (mostly) object-oriented C#. The newest ML language created…
Unfortunately, no one can be told what the Matrix is. You have to see it for yourself.
Before we start discussing F#, I guess we should first answer the question “What is F#?”. I’ll borrow a bit from the official page to answer it.
F# is a universal programming language for writing succinct, robust and performant code.
F# allows you to write uncluttered, self-documenting code, where your focus remains on your problem domain, rather than the details of programming.
It does this without compromising on speed and compatibility - it is open-source, cross-platform and interoperable.
Trivia: F# is the language that made the pipeline operator (|>) popular.
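For the unfamiliar, the pipeline operator feeds the value on its left as the last argument of the function on its right, so a chain of processing steps reads top to bottom:

```fsharp
// Sum the even numbers from 1 to 10 with a pipeline
let total =
    [1 .. 10]
    |> List.filter (fun n -> n % 2 = 0)
    |> List.sum

printfn "%d" total // prints 30
```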
A full set of features is documented in the F# language guide.
F# 1.0 was officially released in May 2005 by Microsoft Research. It was initially developed by Don Syme at Microsoft Research in Cambridge and evolved from an earlier research project called “Caml.NET,” which aimed to bring OCaml to the .NET platform. F# was officially moved from Microsoft Research to Microsoft (as part of their developer tooling division) in 2010 (timed with the release of F# 2.0).
F# has been steadily evolving since those early days; the most recent release, F# 9.0, arrived in November 2024. It seems only appropriate that F# would come to my attention in the year of its 20th birthday!
There were several reasons why I wanted to try out F#:
* .NET became open-source and portable a few years ago and I wanted to check the progress on that front
* I was curious if F# offers any advantages over OCaml
* I’ve heard good things about the F# tooling (e.g. Rider and Ionide)
* I like playing with new programming languages
Below you’ll find my initial impressions for several areas.
As a member of the ML family of languages, F#’s syntax won’t surprise anyone familiar with OCaml. As relatively few people are familiar with OCaml, though, I’ll mention that Haskell programmers will also feel right at home with the syntax. And Lispers.
For everyone else - it’d be fairly easy to pick up the basics.
Nothing shocking here, right?
Here’s another slightly more involved example:
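By way of illustration (the record type and data below are invented stand-ins, not the post’s original snippet), such a script might look like:

```fsharp
// Sales.fsx -- illustrative example: records plus a pipeline over a list
type Sale =
    { Product: string
      Quantity: int
      UnitPrice: float }

let sales =
    [ { Product = "widget"; Quantity = 3; UnitPrice = 9.99 }
      { Product = "gadget"; Quantity = 1; UnitPrice = 19.99 } ]

// Total revenue across all sales
sales
|> List.sumBy (fun s -> float s.Quantity * s.UnitPrice)
|> printfn "Total revenue: %.2f"
```

Assuming the .NET SDK is installed, the script runs with `dotnet fsi Sales.fsx`.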
Why don’t you try saving the snippet above in a file called Sales.fsx and running it like this:
Now you know that F# is a great choice for ad-hoc scripts! Also, running dotnet fsi by itself will pop an F# REPL where you can explore the language at your leisure.
I’m not going to go into great detail here, as much of what I wrote about OCaml here applies to F# as well. I’d also suggest this quick tour of F# to get a better feel for its syntax.
Tip: Check out the F# cheatsheet if you’d like to see a quick syntax reference.
One thing that made a good impression on me is the language designers’ focus on making F# approachable to newcomers by providing a lot of small quality-of-life improvements. Below are a few examples that probably don’t mean much to you, but would mean something to people familiar with OCaml:
I guess some of those might be controversial, depending on whether you’re an ML language purist or not, but in my book anything that makes ML more popular is a good thing.
Did I also mention it’s easy to work with unicode strings and regular expressions?
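A quick illustration of both, using .NET’s standard string and regex APIs (the sample text is arbitrary):

```fsharp
open System.Text.RegularExpressions

// .NET strings are Unicode by default
let greeting = "Здравей, свят!"

// \p{L} matches letters in any script, not just ASCII
let words =
    Regex.Matches(greeting, @"\p{L}+")
    |> Seq.map (fun m -> m.Value)
    |> List.ofSeq

printfn "%A" words // ["Здравей"; "свят"]
```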
Often people say that F# is mostly a staging ground for future C# features, and perhaps that’s true. I haven’t observed the two languages long enough to have my own opinion on the subject, but I was impressed to learn that async/await (of C# and later JavaScript fame) originated in… F# 2.0.
It all changed in 2012 when C#5 launched with the introduction of what has now become the popularized async/await keyword pairing. This feature allowed you to write code with all the benefits of hand-written asynchronous code, such as not blocking the UI when a long-running process started, yet read like normal synchronous code. This async/await pattern has now found its way into many modern programming languages such as Python, JS, Swift, Rust, and even C++.
F#’s approach to asynchronous programming is a little different from async/await, but achieves the same goal (in fact, async/await is a cut-down version of F#’s approach, which was introduced a few years previously, in F# 2.0).
Time will tell what will happen, but I think it’s unlikely that C# will ever be able to fully replace F#.
I’ve also found this encouraging comment from 2022 that Microsoft might be willing to invest more in F#:
Some good news for you. After 10 years of F# being developed by 2.5 people internally and some random community efforts, Microsoft has finally decided to properly invest in F# and created a full-fledged team in Prague this summer. I’m a dev in this team, just like you I was an F# fan for many years so I am happy things got finally moving here.
Looking at the changes in F# 8.0 and F# 9.0, it seems the new full-fledged team has done some great work!
It’s hard to assess the ecosystem around F# after such a brief period, but overall it seems to me that there are fairly few “native” F# libraries and frameworks out there and most people rely heavily on the core .NET APIs and many third-party libraries and frameworks geared towards C#. That’s a pretty common setup when it comes to hosted languages in general, so nothing surprising here as well.
If you’ve ever used another hosted language (e.g. Scala, Clojure, Groovy) then you probably know what to expect.
Awesome F# keeps track of popular F# libraries, tools and frameworks. I’ll highlight here the web development and data science libraries:
* Giraffe: A lightweight library for building web applications using ASP.NET Core. It provides a functional approach to web development.
* Suave: A simple and lightweight web server library with combinators for routing and task composition. (Giraffe was inspired by Suave)
* Saturn: Built on top of Giraffe and ASP.NET Core, it offers an MVC-style framework inspired by Ruby on Rails and Elixir’s Phoenix.
* Bolero: A framework for building client-side applications in F# using WebAssembly and Blazor.
* Fable: A compiler that translates F# code into JavaScript, enabling integration with popular JavaScript ecosystems like React or Node.js.
* Elmish: A model-view-update (MVU) architecture for building web UIs in F#, often used with Fable.
* SAFE Stack: An end-to-end, functional-first stack for building cloud-ready web applications. It combines technologies like Saturn, Azure, Fable, and Elmish for a type-safe development experience.
* Deedle: A library for data manipulation and exploratory analysis, similar to pandas in Python.
* FsLab: A collection of libraries tailored for data science, including visualization and statistical tools.
I haven’t played much with any of them at this point yet, so I’ll reserve any feedback and recommendations for some point in the future.
The official documentation is pretty good, although I find it kind of weird that some of it is hosted on Microsoft’s site and the rest is on https://fsharp.org/ (the site of the F# Software Foundation).
I really liked the following parts of the documentation:
https://fsharpforfunandprofit.com/ is another good learning resource, even if it seems a bit dated.
F# has a somewhat troubled dev tooling story, as historically support for F# was great only in Visual Studio, and somewhat subpar elsewhere. Fortunately, the tooling story has improved a lot in the past decade:
In 2014 a technical breakthrough was made with the creation of the FSharp.Compiler.Service (FCS) package by Tomas Petricek, Ryan Riley, and Dave Thomas with many later contributors. This contains the core implementation of the F# compiler, editor tooling and scripting engine in the form of a single library and can be used to make F# tooling for a wide range of situations. This has allowed F# to be delivered into many more editors, scripting and documentation tools and allowed the development of alternative backends for F#. Key editor community-based tooling includes Ionide, by Krzysztof Cieślak and contributors, used for rich editing support in the cross-platform VSCode editor, with over 1M downloads at time of writing.
I’ve played with the F# plugins for several editors:
Overall, Rider and VS Code provide the most (and the most polished) features, but the other options were quite usable as well. That’s largely due to the fact that the F# LSP server fsautocomplete (naming is hard!) is quite robust and any editor with good LSP support gets a lot of functionality for free.
Still, I’ll mention that I found the tooling lacking in some regards:
* fsharp-mode doesn’t use TreeSitter (yet) and doesn’t seem to be very actively developed (looking at the code - it seems it was derived from caml-mode)
* Zed’s support for F# is quite spartan
* In VS Code, shockingly, expanding and shrinking the selection is broken, which is quite odd for what is supposed to be the flagship editor for F#
I’m really struggling with VS Code’s keybindings (too many modifier keys and functions keys for my taste) and editing model, so I’ll likely stick with Emacs going forward. Or I’ll finally spend more quality time with neovim!
It seems that everyone is using the same code formatter (Fantomas), including the F# team, which is great! The linter story in F# is not as great (seems the only popular linter FSharpLint is abandonware these days), but when your compiler is so good, you don’t really need a linter as much.
Oh, well… It seems that Microsoft is not particularly invested in supporting the tooling for F#, as pretty much all the major projects in this space are community-driven.
Using AI coding agents (e.g. Copilot) with F# worked pretty well, but I didn’t spend much time on this front.
At the end of the day, any editor will likely do, as long as you’re using LSP.
By the way, I had an interesting observation while programming in F# (and OCaml, for that matter): when you’re working with a language with a really good type system, you don’t really need that much from your editor. Most of the time I’m perfectly happy with just some inline type information (e.g. something like CodeLenses), auto-completion, and the ability to easily send code to fsi. Simplicity continues to be the ultimate sophistication…
Other tools that should be on your radar are:
* Paket - a dependency manager for .NET projects. Think of it as something like bundler, npm or pip, but for .NET’s NuGet package ecosystem.
* FAKE - A DSL for build tasks and more, where you can use F# to specify the tasks. Somewhat similar to Ruby’s rake. Some people claim that’s the easiest way to sneak F# into an existing .NET project.
Given the depth and breadth of .NET - I guess the sky is the limit for you!
Seems to me that F# will be a particularly good fit for data analysis and manipulation, because of features like type providers.
Probably a good fit for backend services and even full-stack apps, although I haven’t really played with the F# first solutions in this space yet.
Fable and Elmish make F# a viable option for client-side programming and might offer another easy way to sneak F# into your day-to-day work.
Note: Historically, Fable has been used to target JavaScript but since Fable 4, you can also target other languages such as TypeScript, Rust, Python, and more.
Here’s how easy it is to transpile an F# codebase into something else:
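As a rough sketch of the workflow (assuming the Fable 4 CLI; check the Fable documentation for the exact flags and supported targets):

```shell
# One-time setup: install Fable as a local dotnet tool
dotnet new tool-manifest
dotnet tool install fable

# Transpile the F# project; Fable 4 also accepts e.g. --lang python
dotnet fable ./src --lang typescript
```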
My initial impression of the community is that it’s fairly small, perhaps even smaller than that of OCaml. The F# Reddit and Discord (the one listed on Reddit) seem like the most active places for F# conversations. There’s supposed to be some F# Slack as well, but I couldn’t get an invite for it. (seems the automated process for issuing those invites has been broken for a while)
I’m still not sure what role Microsoft plays in the community, as I haven’t seen much from them overall.
For me, a small community is not really a problem, as long as the community is vibrant and active. Also - I’ve noticed I always feel more connected to smaller communities. Moving from Java to Ruby back in the day felt like night and day as far as community engagement and sense of belonging go.
I didn’t find many books and community sites/blogs dedicated to F#, but I didn’t really expect to in the first place.
The most notable community initiatives I discovered were:
* Amplifying F# - an effort to promote F# and to get more businesses involved with it
* F# for Fun and Profit - a collection of tutorials and essays on F#
* F# Lab - the community-driven toolkit for data science in F#
* F# Weekly - a weekly newsletter about the latest developments in the world of F#
Seems to me that more can be done to promote the language and engage new programmers and businesses with it, although that’s never easy 20 years into the existence of some project. I continue to be somewhat puzzled as to why Microsoft doesn’t market F# more, as I think it could be a great marketing vehicle for them.
All in all - I don’t feel qualified to comment much on the F# community at this point.
Depending on the type of person you are, you may or may not care about a programming language’s “popularity”. People often ask me why I spend a lot of time with languages that are unlikely to ever result in job opportunities for me, e.g.:
Professional opportunities are important, of course, but so are:
* having fun (and the F in F# stands for “fun”)
* challenging yourself to think and work differently
That being said, F# is not a popular language by most conventional metrics. It’s not highly ranked on TIOBE, StackOverflow or most job boards. But it’s also not less popular than most “mainstream” functional programming languages. The sad reality is that functional programming is still not mainstream and perhaps it will never be.
A few more resources on the subject:
* How Popular is F# in 2024
Here’s also a video for the article above
...
Read the original on batsov.com »
The findings could help to explain the species’ fascinating flocking behaviour
Quantum entanglement is a fascinating phenomenon where two particles’ states are tied to each other, no matter how far apart the particles are. In 2022, the Nobel Prize in Physics was awarded to Alain Aspect, John F. Clauser and Anton Zeilinger for groundbreaking experiments involving entangled photons. These experiments confirmed the predictions for the manifestation of entanglement that had been made by the late CERN theorist John Bell. This phenomenon has so far been observed in a wide variety of systems, such as in top quarks at CERN’s Large Hadron Collider (LHC) in 2024. Entanglement has also found several important societal applications, such as quantum cryptography and quantum computing. Now, it also explains the famous herd mentality of sheep.
A flock of sheep (ovis aries) has roamed the CERN site during the spring and summer months for over 40 years. Along with the CERN shepherd, they help to maintain the vast expanses of grassland around the LHC and are part of the Organization’s long-standing efforts to protect the site’s biodiversity. In addition, their flocking behaviour has been of great interest to CERN’s physicists. It is well known that sheep behave like particles: their stochastic behaviour has been studied by zoologists and physicists alike, who noticed that a flock’s ability to quickly change phase is similar to that of atoms in a solid and a liquid. Known as the Lamb Shift, this can cause them to get themselves into bizarre situations, such as walking in a circle for days on end.
Now, new research has shed light on the reason for these extraordinary abilities. Scientists at CERN have found evidence of quantum entanglement in sheep. Using sophisticated modelling techniques and specialised trackers, the findings show that the brains of individual sheep in a flock are quantum-entangled in such a way that the sheep can move and vocalise simultaneously, no matter how far apart they are. The evidence has several ramifications for ovine research and has set the baa for a new branch of quantum physics.
“The fact that we were having our lunch next to the flock was a shear coincidence,” says Mary Little, leader of the HERD collaboration, describing how the project came about. “When we saw and herd their behaviour, we wanted to investigate the movement of the flock using the technology at our disposal at the Laboratory.”
Observing the sheep’s ability to simultaneously move and vocalise together caused one main question to aries: since the sheep behave like subatomic particles, could quantum effects be the reason for their behaviour?
“Obviously, we couldn’t put them all in a box and see if they were dead or alive,” said Beau Peep, a researcher on the project. “However, by assuming that the sheep were spherical, we were able to model their behaviour in almost the exact same way as we model subatomic particles.”
Using sophisticated trackers, akin to those in the LHC experiments, the physicists were able to locate the precise particles in the sheep’s brains that might be the cause of this entanglement. Dubbed “moutons” and represented by the Greek letter lambda, λ, these particles are leptons and are close relatives of the muon, but fluffier.
The statistical significance of the findings is 4 sigma, which is enough to show evidence of the phenomenon. However, it does not quite pass the baa to be classed as an observation.
“More research is needed to fully confirm that this was indeed an observation of ovine entanglement or a statistical fluctuation,” says Ewen Woolly, spokesperson for the HERD collaboration. “This may be difficult, as we have found that the research makes physicists become inexplicably drowsy.”
“While entanglement is now the leading theory for this phenomenon, we have to take everything into account,” adds Dolly Shepherd, a CERN theorist. “Who knows, maybe further variables are hidden beneath their fleeces. Wolves, for example.”
...
Read the original on home.cern »
back
Partial replication sounds easy—just sync the data your app needs, right? But choosing an approach is tricky: logical replication precisely tracks every change, complicating strong consistency, while physical replication avoids that complexity but requires syncing every change, even discarded ones. What if your app could combine the simplicity of physical replication with the efficiency of logical replication? That’s the key idea behind Graft, the open-source transactional storage engine I’m launching today. It’s designed specifically for lazy, partial replication with strong consistency, horizontal scalability, and object storage durability.
Graft is designed with the following use cases in mind:
* Offline-first & mobile apps: Simplify development and improve reliability by offloading replication and storage to Graft.
* Cross-platform sync: Share data smoothly across devices, browsers, and platforms without vendor lock-in.
* Any data type: Replicate databases, files, or custom formats—all with strong consistency.
I first discovered the need for Graft while building SQLSync. SQLSync is a frontend optimized database stack built on top of SQLite with a synchronization engine powered by ideas from Git and distributed systems. SQLSync makes multiplayer SQLite databases a reality, powering interactive apps that run directly in your browser.
However, SQLSync replicates the entire log of changes to every client—similar to how some databases implement physical replication. While this approach works fine on servers, it’s poorly suited to the constraints of edge and browser environments.
After shipping SQLSync, I decided to find a replication solution more suited to the edge. I needed something that could:
* Let clients sync at their own pace
* Sync only what they need
* Sync from anywhere, including the edge and offline devices
That didn’t exist. So I built it.
If you’ve ever tried to keep data in sync across clients and servers, you know it’s harder than it sounds. Most existing solutions fall into one of two camps:
* Full replication, which syncs the entire dataset to each client—not practical for constrained environments like serverless functions or web apps.
* Schema-aware diffs, like Change Data Capture (CDC) or Conflict-free Replicated Data Types (CRDTs), which track logical changes at the row or field level—but require deep application integration and don’t generalize to arbitrary data.
Like full replication, Graft is schema-agnostic. It doesn’t know or care what kind of data you’re storing—it just replicates bytes2. But instead of sending all the data, it behaves more like logical replication: clients receive a compact description of what’s changed since their last sync.
At the core of this model is the Volume: a sparse, ordered collection of fixed-size Pages. Clients interact with Volumes through a transactional API, reading and writing at specific Snapshots. Under the hood, Graft persists and replicates only what’s necessary—using object storage as a durable, scalable backend.
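As a mental model (the types and method names below are invented for illustration, not Graft’s actual API), a volume can be pictured as a sparse map of fixed-size pages, versioned by snapshot LSNs:

```rust
use std::collections::BTreeMap;

// Toy model of Graft's core concepts: a Volume is a sparse, ordered
// collection of pages; every commit yields a new Snapshot with a higher LSN.
#[derive(Clone)]
pub struct Snapshot {
    pub lsn: u64,
    pages: BTreeMap<u32, Vec<u8>>, // sparse: page index -> page contents
}

pub struct Volume {
    snapshots: Vec<Snapshot>,
}

impl Volume {
    pub fn new() -> Self {
        Volume { snapshots: vec![Snapshot { lsn: 0, pages: BTreeMap::new() }] }
    }

    pub fn latest(&self) -> &Snapshot {
        self.snapshots.last().unwrap()
    }

    // Transactionally apply a batch of page writes on top of the latest snapshot.
    pub fn commit(&mut self, writes: BTreeMap<u32, Vec<u8>>) -> u64 {
        let mut next = self.latest().clone();
        next.lsn += 1;
        next.pages.extend(writes);
        self.snapshots.push(next);
        self.latest().lsn
    }

    // Read a page as of a specific snapshot; None means the page was never written.
    pub fn read(&self, lsn: u64, page: u32) -> Option<&[u8]> {
        self.snapshots
            .iter()
            .rev()
            .find(|s| s.lsn <= lsn)
            .and_then(|s| s.pages.get(&page).map(Vec::as_slice))
    }
}
```

The real engine persists commits to object storage rather than keeping every snapshot in memory, but the shape of the transactional read/write-at-a-snapshot API is similar in spirit.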
The result is a system that’s lazy, partial, edge-capable, and consistent.
Want to try the managed version of Graft?
Join the waitlist to get early access: Sign up here →
Each of these properties deserves a closer look—let’s unpack them one by one.
Graft is designed for the real world—where edge clients wake up occasionally, face unreliable networks, and run in short-lived, resource-constrained environments. Instead of relying on continuous replication, clients choose when to sync, and Graft makes it easy to fast forward to the latest snapshot.
That sync starts with a simple question: what changed since my last snapshot?
The server responds with a graft—a compact bitset of the page indexes that have changed across all commits since that snapshot3. This is where the project gets its name: a graft attaches new changes to an existing snapshot—like grafting a branch onto a tree. Grafts act as a guide, informing the client which pages can be reused and which need to be fetched.
Critically, when a client pulls a graft from the server, it doesn’t receive any actual data—only metadata about what changed. This gives the client full control over what to fetch and when, laying the foundation for partial replication.
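Conceptually, a graft is just the union of the changed-page sets of every commit newer than the client’s snapshot. A toy version (names invented; the real wire encoding is surely more compact):

```rust
// Sketch: a "graft" as a bitset of page indexes changed since a
// client's last-seen LSN. Types here are illustrative only.
pub struct Commit {
    pub lsn: u64,
    pub changed_pages: Vec<u32>,
}

// Union the changed pages of every commit newer than `since` into a bitset,
// stored as an array of 64-bit words.
pub fn graft_since(commits: &[Commit], since: u64) -> Vec<u64> {
    let mut bits: Vec<u64> = Vec::new();
    for commit in commits.iter().filter(|c| c.lsn > since) {
        for &page in &commit.changed_pages {
            let (word, bit) = (page as usize / 64, page % 64);
            if word >= bits.len() {
                bits.resize(word + 1, 0);
            }
            bits[word] |= 1u64 << bit;
        }
    }
    bits
}
```

Note the result carries no page data at all, only positions, which is what lets the client decide what to actually fetch.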
When you’re building for edge environments—browser tabs, mobile apps, serverless functions—you can’t afford to download the entire dataset just to serve a handful of queries. That’s where partial replication comes in.
After a client pulls a graft, it knows exactly what’s changed. It can use that information to determine precisely which pages are still valid and which pages need to be fetched. Instead of pulling everything, clients selectively retrieve only the pages they’ll actually use—nothing more, nothing less.
To keep things snappy, Graft supports several ways to prefetch pages:
* General-purpose prefetching: Graft includes a built-in prefetcher based on the Leap algorithm, which predicts future page accesses by identifying patterns4.

* Domain-specific prefetching: Applications can leverage domain knowledge to preemptively fetch relevant pages. For instance, if your app frequently queries a user’s profile, Graft can prefetch pages related to that profile before the data is needed.

* Proactive fetching: Clients can always fall back to pulling all changes if needed, essentially reverting to full replication. This is particularly useful for Graft workloads running on the server side.
And because Graft hosts pages directly on object storage, they’re naturally durable and scalable, creating a strong foundation for edge-native replication.
Edge replication isn’t just about choosing what data to sync—it’s about making sure that data is available where it’s actually needed. Graft does this in two key ways.
First, pages are served from object storage through a global fleet of edge servers, allowing frequently accessed (“hot”) pages to be cached near clients. This keeps latency low and responsiveness high, no matter where in the world your users happen to be.
Second, the Graft client itself is lightweight and designed specifically to be embedded. With minimal dependencies and a tiny runtime, it integrates into constrained environments like browsers, devices, mobile apps, and serverless functions.
The result? Your data is always cached exactly where it’s most valuable—right at the edge and embedded in your application.
But caching data on the edge brings new challenges, particularly around maintaining consistency and safely handling conflicts. That’s where Graft’s robust consistency model comes in.
Strong consistency is critical—especially when syncing data between clients that might occasionally conflict. Graft addresses this by providing a clear and robust consistency model: Serializable Snapshot Isolation.5
This model gives clients isolated, consistent views of data at specific snapshots, allowing reads to proceed concurrently without interference. At the same time, it ensures that writes are strictly serialized, so there’s always a clear, globally consistent order for every transaction.
However, because Graft is designed for offline-first, lazy replication, clients sometimes attempt to commit changes based on an outdated snapshot. Accepting these commits blindly would violate strict serializability. Instead, Graft safely rejects the commit and lets the client choose how to resolve the situation. Typically, clients will either:
Reset and replay: pull the latest snapshot, reapply local transactions, and try again.
Merge: combine their local state with the latest snapshot from the server. This may degrade the global consistency model to snapshot isolation.
Locally, the client experiences Optimistic Snapshot Isolation: reads proceed against a consistent local snapshot, but that snapshot may later be discarded if the commit is rejected.
In short, Graft ensures you never have to sacrifice consistency—even when clients sync sporadically, operate offline, or collide with concurrent writes.
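The reject-then-reset-and-replay cycle can be modeled as a simple optimistic-commit loop. This is an illustrative sketch, not Graft's real protocol or client API; the `Server` class and `commit_with_replay` helper are invented names.

```python
# Sketch of optimistic commits with reset-and-replay (illustrative, not
# Graft's actual protocol). The server rejects any commit based on a
# stale snapshot; the client pulls a fresh snapshot, reapplies its
# transaction, and retries.

class Server:
    def __init__(self):
        self.version, self.state = 0, {}

    def snapshot(self):
        # Hand out a consistent view at a specific version.
        return self.version, dict(self.state)

    def commit(self, base_version, new_state):
        if base_version != self.version:  # stale snapshot: reject
            return False
        self.version += 1                 # writes are strictly serialized
        self.state = new_state
        return True

def commit_with_replay(server, txn):
    while True:
        version, state = server.snapshot()  # consistent local snapshot
        txn(state)                          # apply the local transaction
        if server.commit(version, state):
            return state
        # rejected: loop around, pull the latest snapshot, and replay

srv = Server()
srv.commit(0, {"x": 1})
result = commit_with_replay(srv, lambda s: s.update(y=s["x"] + 1))
assert result == {"x": 1, "y": 2}
```

Because the server only ever accepts commits built on its latest version, every accepted transaction has a clear global order, while clients stay free to work against local snapshots in the meantime.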
Combining lazy syncing, partial replication, edge-friendly deployment, and strong consistency, Graft provides a robust foundation for a variety of edge-native applications. Here are just a few examples of what you can build with Graft:
Offline-first apps: Note-taking, task management, or CRUD apps that operate partially offline. Graft takes care of syncing, allowing the application to forget the network even exists. When combined with a conflict handler, Graft can also enable multiplayer on top of arbitrary data.
Cross-platform data: Eliminate vendor lock-in and allow your users to seamlessly access their data across mobile platforms, devices, and the web. Graft is architected to be embedded anywhere6.
Stateless read replicas: Due to Graft’s unique approach to replication, a database replica can be spun up with no local state, retrieve the latest snapshot metadata, and immediately start running queries. No need to download all the data and replay the log.
Replicate anything: Graft focuses solely on consistent page replication. It doesn’t care what’s inside those pages. So go crazy! Use Graft to sync AI models, Parquet or Lance files, geospatial tilesets, or just photos of your cats. The sky’s the limit with Graft.
Today, libgraft is the easiest way to start using Graft. It’s a native SQLite extension that works anywhere SQLite does. It uses Graft to replicate just the parts of the database that a client actually uses, making it possible to run SQLite in resource-constrained environments.
libgraft implements a SQLite virtual file system (VFS) allowing it to intercept all reads and writes to the database. It provides the same transactional and concurrency semantics as SQLite does when running in WAL mode. Using libgraft provides your application with the following benefits:
* asynchronous replication to and from object storage
* lazy partial replicas on the edge and in devices
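To see the WAL-mode transactional semantics that libgraft preserves, here is a plain-SQLite demonstration using Python's standard sqlite3 module (no Graft code involved): a reader holding an open transaction keeps a stable snapshot even while a concurrent writer commits.

```python
import os
import sqlite3
import tempfile

# Plain-SQLite demo of WAL-mode snapshot semantics. WAL requires a file
# database, so we create one in a temp directory.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v INT)")
writer.execute("INSERT INTO kv VALUES ('a', 1)")
writer.commit()

reader = sqlite3.connect(path)
reader.execute("BEGIN")  # snapshot is established at the first read
assert reader.execute("SELECT v FROM kv").fetchone() == (1,)

# In WAL mode, a writer can commit while readers hold snapshots.
writer.execute("UPDATE kv SET v = 2")
writer.commit()

# The reader's view is unchanged until its transaction ends...
assert reader.execute("SELECT v FROM kv").fetchone() == (1,)
reader.execute("COMMIT")

# ...after which the committed write becomes visible.
assert reader.execute("SELECT v FROM kv").fetchone() == (2,)
```

These are exactly the isolation guarantees an application keeps when libgraft's VFS sits underneath SQLite, with replication happening asynchronously behind the scenes.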
If you’re interested in using libgraft, you can find the documentation here.
Graft is developed openly on GitHub, and contributions from the community are very welcome. You can open issues, participate in discussions, or submit pull requests—check out our contribution guide for details.
If you’d like to chat about Graft, join the Discord or send me an email. I’d love your feedback on Graft’s approach to lazy, partial edge replication.
I’m also planning on launching a Graft Managed Service. If you’d like to join the waitlist, you can sign up here.
Keep reading to learn about Graft’s roadmap as well as a detailed comparison between Graft and existing SQLite replication solutions.
Graft is the result of a year of research, many iterations, and one major pivot7. But Graft is far from done. There’s a lot left to build, and the roadmap is ambitious. In no particular order, here’s what’s planned:
WebAssembly support: Supporting WebAssembly (Wasm) would allow Graft to be used in the browser. I’d like to eventually support SQLite’s official Wasm build, wa-sqlite, and sql.js.
Integrating Graft and SQLSync: Once Graft supports Wasm, integrating it with SQLSync will be straightforward. The plan is to split out SQLSync’s mutation, rebase, and query subscription layers so they can sit on top of any database using Graft replication.
More client libraries: I’d love to see native Graft-client wrappers for popular languages including Python, JavaScript, Go, and Java. This would allow Graft to replicate arbitrary data in those languages rather than being restricted to SQLite.8
Low-latency writes: Graft currently blocks push operations until they have been fully committed to object storage. This can be addressed in a number of ways, for example:
Buffer writes in a low-latency durable consensus group sitting in front of object storage.
Garbage collection, checkpointing, and compaction: These features are needed to maximize query performance, minimize wasted space, and enable deleting data permanently. They all relate to Graft’s decision to store data directly in object storage, and batch changes together into files called segments.
Authentication and authorization: This is a fairly broad task that encompasses everything from accounts on the Graft managed service to fine-grained authorization to read/write Volumes.
Volume forking: The Graft service is already set up to perform zero-copy forks, since it can easily copy Segment references over to the new Volume. However, to perform a local fork, Graft currently needs to copy all of the pages. This could be solved by layering Volumes locally and allowing reads to fall through, or by changing how pages are addressed locally.
Conflict handling: Graft should offer built-in conflict resolution strategies and extension points so applications can control how conflicts are handled. The initial built-in strategy will automatically merge non-overlapping transactions. While this relaxes global consistency to optimistic snapshot isolation, it can significantly boost performance in collaborative and multiplayer scenarios.
Graft builds on ideas pioneered by many other projects, while adding its own unique contributions to the space. Here is a brief overview of the SQLite replication landscape and how Graft compares.
The information in this section has been gathered from documentation and blog posts, and might not be perfectly accurate. Please let me know if I’ve misrepresented or misunderstood a project.
Among SQLite-based projects, mvSQLite is the closest in concept to Graft. It implements a custom VFS layer that stores SQLite pages directly in FoundationDB.
In mvSQLite, each page is stored by its content hash and referenced by (page_number, snapshot version). This structure allows readers to lazily fetch pages from FoundationDB as needed. By leveraging page-level versioning, mvSQLite supports concurrent write transactions, provided their read and write sets don’t overlap.
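The addressing scheme described above can be sketched roughly as follows. This is an illustrative model based on that description, not mvSQLite's actual code: page contents are stored by content hash, while `(page_number, version)` references point at those hashes, letting readers fetch pages lazily at a chosen snapshot.

```python
import hashlib

# Sketch of content-addressed, versioned page storage (illustrative
# model of the scheme described above, not mvSQLite's real code).
blobs = {}  # content hash -> page bytes (identical pages deduplicate)
refs = {}   # (page_number, version) -> content hash

def write_page(page_number, version, data):
    digest = hashlib.sha256(data).hexdigest()
    blobs[digest] = data
    refs[(page_number, version)] = digest

def read_page(page_number, version):
    # Resolve a snapshot read: walk back to the newest page version
    # at or below the requested snapshot version.
    for v in range(version, -1, -1):
        digest = refs.get((page_number, v))
        if digest is not None:
            return blobs[digest]
    raise KeyError((page_number, version))

write_page(1, 0, b"hello")
write_page(1, 3, b"world")
assert read_page(1, 2) == b"hello"  # snapshot 2 still sees version 0
assert read_page(1, 3) == b"world"
```

Because old `(page_number, version)` references stay valid after new writes, readers at different snapshots can proceed concurrently, which is what enables the concurrent write transactions mentioned above when read and write sets don't overlap.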
How Graft compares: Graft and mvSQLite share similar storage-layer designs, using page-level versioning to allow lazy, on-demand fetching and partial database views. The key difference lies in data storage location and how page changes are tracked. mvSQLite depends on FoundationDB, requiring all nodes to have direct cluster access—making it unsuitable for widely distributed edge devices and web applications. Additionally, Graft’s Splinter-based changesets are self-contained, easily distributable, and do not require direct queries against FoundationDB to determine changed page versions.
Litestream is a streaming backup solution that continuously replicates SQLite WAL frames to object storage. Its primary focus is async durability, point-in-time restore, and read replicas. It runs externally to your application, monitoring SQLite’s WAL through the filesystem.
How Graft compares: Unlike Litestream, Graft integrates directly into SQLite’s commit process via its custom VFS, enabling lazy, partial replication, and distributed writes. Like Litestream, Graft replicates pages to object storage and supports point-in-time restores.
cr-sqlite is a SQLite extension which turns tables into Conflict-free Replicated Data Types (CRDTs), enabling logical, row-level replication. It offers automatic conflict resolution but requires schema awareness and application-level integration.
How Graft compares: Graft is schema-agnostic and doesn’t depend on logical CRDTs, making it compatible with arbitrary SQLite extensions and custom data structures. However, to achieve global serializability, Graft expects applications to handle conflict resolution explicitly. In contrast, cr-sqlite automatically merges changes from multiple writers, achieving causal consistency.
Cloudflare Durable Objects now offer SQLite-backed storage. By combining Durable Objects with SQLite, you get a strongly consistent and highly durable database wrapped with your business logic and hosted (hopefully) close to your users on Cloudflare’s massive edge network. Under the hood, this solution is similar to Litestream in that it replicates the SQLite WAL to object storage and performs periodic checkpoints.
How Graft compares: Graft exposes replication as a first class citizen, and is designed to replicate efficiently to and from the edge. In comparison, SQLite in Durable Objects is focused on extending Durable Objects with the full power of SQLite.
Cloudflare D1 is a managed SQLite database operating similarly to traditional database services like Amazon RDS or Turso, accessed by applications via an HTTP API.
How Graft compares: Graft replicates data directly to the edge, embedding it within client applications. This decentralized replication model contrasts significantly with D1’s centralized data service.
Turso provides managed SQLite databases and embedded replicas via libSQL, an open-source SQLite fork. Similar to Litestream and Cloudflare Durable Objects SQL Storage, Turso replicates SQLite WAL frames to object storage and periodically checkpoints. Replicas catch up by retrieving these checkpoints and replaying the log.
How Graft compares: Graft distinguishes itself with partial replication and support for arbitrary, schema-agnostic data structures. Graft’s backend service operates directly at the page level and outsources the entire transactional lifecycle to clients.
The key idea behind rqlite and dqlite is to distribute SQLite across multiple servers. This is achieved through Raft-based consensus and by routing SQLite operations over a network protocol to the current Raft leader.
How Graft compares: These projects are focused on increasing SQLite’s durability and availability through consensus and traditional replication. They are designed to scale across a set of stateful nodes that maintain connectivity to one another. Graft fundamentally differs by being a stateless system built on top of object storage, designed to replicate data to and from the edge.
Verneuil focuses on asynchronously replicating SQLite snapshots to read replicas via object storage, prioritizing reliability without introducing additional failure modes. Verneuil explicitly avoids mechanisms to minimize replication latency or staleness.
How Graft compares: Graft behaves more like a multi-writer distributed database, emphasizing selective, real-time partial replication. Verneuil’s approach, meanwhile, emphasizes unidirectional asynchronous snapshot replication without guarantees around replication freshness.
Read the original on sqlsync.dev »