10 interesting stories served every morning and every evening.
Full list of projects available here.
La Suite numérique (La Suite for short) is a full blown open-source digital workspace for online collaboration and teamwork.
La Suite is built by the French government agencies DINUM and ANCT. It is also the product of a close European collaboration with the Dutch and German states.
Our code base is 100% open source and MIT licensed.
Come say hello on Matrix
...
Read the original on github.com »
I didn’t ask for this and neither did you.
I didn’t ask for a robot to consume every blog post and piece of code I ever wrote and parrot it back so that some hack could make money off of it.
I didn’t ask for the role of a programmer to be reduced to that of a glorified TSA agent, reviewing code to make sure the AI didn’t smuggle something dangerous into production.
And yet here we are. The worst fact about these tools is that they work. They can write code better than you or I can, and if you don’t believe me, wait six months.
You could abstain out of moral principle. And that’s fine, especially if you’re at the tail end of your career. And if you’re at the beginning of your career, you don’t need me to explain any of this to you, because you already use Warp and Cursor and Claude, with ChatGPT as your therapist and pair programmer and maybe even your lover. This post is for the 40-somethings in my audience who don’t realize this fact yet.
So as a senior, you could abstain. But then your junior colleagues will eventually code circles around you, because they’re wearing bazooka-powered jetpacks and you’re still riding around on a fixie bike. Eventually your boss will start asking why you’re getting paid twice your zoomer colleagues’ salary to produce a tenth of the code.
Ultimately if you have a mortgage and a car payment and a family you love, you’re going to make your decision. It’s maybe not the decision that your younger, more idealistic self would want you to make, but it does keep your car and your house and your family safe inside it.
Someday years from now we will look back on the era when we were the last generation to code by hand. We’ll laugh and explain to our grandkids how silly it was that we typed out JavaScript syntax with our fingers. But secretly we’ll miss it.
We’ll miss the feeling of holding code in our hands and molding it like clay in the caress of a master sculptor. We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM. We’ll miss creating something we feel proud of, something true and right and good. We’ll miss the satisfaction of the artist’s signature at the bottom of the oil painting, the GitHub repo saying “I made this.”
I don’t celebrate the new world, but I also don’t resist it. The sun rises, the sun sets, I orbit helplessly around it, and my protests can’t stop it. It doesn’t care; it continues its arc across the sky regardless, moving but unmoved.
If you would like to grieve, I invite you to grieve with me. We are the last of our kind, and those who follow us won’t understand our sorrow. Our craft, as we have practiced it, will end up like some blacksmith’s tool in an archeological dig, a curio for future generations. It cannot be helped, it is the nature of all things to pass to dust, and yet still we can mourn. Now is the time to mourn the passing of our craft.
...
Read the original on nolanlawson.com »
I don’t post a lot. But when I do, it’s because I think few people are saying out loud what I’m noticing.
I’ve been building a product from the ground up. Not the “I spun up a Next.js template” kind of ground up. I mean from network configuration to product design to pricing decisions. Truly end to end. And I’ve been doing it using frontier models and coding agents for hours and hours every single day, both on this project and in my full time work. I’ve been trying to stay away from the chaos and the hype, filtering hard for what is actually valuable.
Since December 2025, things have dramatically changed for the better. Many have noticed. Few are drawing the right conclusions.
Antirez likes to call it “automated programming”, and I really like that framing. It captures the essence far better than the shallow, almost dismissive label of “vibe coding”. Automation was at the core of most of the work and cultural revolutions of human history. The printing press, the loom, the assembly line. This one doesn’t differ much.
Most of my work is still there. I still have to deeply think about every important aspect of what I want to build. The architecture, the trade offs, the product decisions, the edge cases that will bite you at 3am. What’s gone is the tearing, exhausting manual labour of typing every single line of code.
At this point in time, models and tools, when put in a clean and maniacally well-set-up environment, can truly make the difference. I can be the architect without the wearing act of laying every single brick and spreading the mortar. I can design the dress without cutting and sewing each individual piece of fabric. But I can do all of this with twenty years of experience on my back, having laid the bricks, spread the mortar, cut and sewn myself. If I don’t like something, I can go in, understand it and fix it as I please, instructing my setup once and for all to do what I want next time.
Automated programming especially lets me build the tools I need so quickly that every blacksmith who ever existed on this earth would envy me deeply: finally able to really focus on the thing they have in mind, finally dedicating more of their craft to the art they conceive, not the sweat of the forge.
This thought has been crystallized in my mind for months now. It is so clear to me that I genuinely don’t understand why everyone is not screaming it to the world.
We can finally get rid of all that middle work. That adapting layer of garbage we blindly accepted during these years. A huge amount of frameworks and libraries and tooling that has completely polluted software engineering, especially in web, mobile and desktop development. Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
Think about what happened. We, as an industry, looked at the genuine complexity of building software and instead of sharpening our thinking, we bought someone else’s thinking off the shelf. We wrapped everything in frameworks like wrapping a broken leg in silk. It looks nice. The leg is still broken.
In my mind, besides their self-declared objectives, frameworks solve three problems: two explicit and one obvious but never declared.
“Simplification”. Software engineers are scared of designing things themselves. They would rather accept someone else’s structure, even if it has to be force-fitted into their product, than take the time to start from the goal and work backwards to create the perfect suit for their idea. Like an architect blindly accepting another architect’s blueprints and applying them regardless of the context, the needs, the terrain, the new technological possibilities. We decided to remove complexity not by sharpening our mental models around the products we build, but by buying a one-size-fits-all design and applying it everywhere. That is not simplification. That is intellectual surrender.
Automation. This is the only point I can actually, more or less, understand and buy. Boilerplate is boring work. I hate it. And I especially hate using libraries that I then need to study, keep updated, and track vulnerabilities for, just to avoid writing duplicated but necessary code. Think about ORMs, CRUD management, code generation, API documentation and so on. The grunt work that nobody wants to do but everybody needs done. Fair enough. But hold that thought, because this is exactly the point where everything changes.
Labour cost. This is the quiet one. The one nobody puts on the conference slide. For companies, it is much better to have Google, Meta, or Vercel decide how you build product and ship code. Adopt their framework. Pay the cost of lock-in. Be enchanted by their cloud-managed solution to host, deploy, and store your stuff. And you unlock a feature that has nothing to do with engineering: you no longer need to hire a software engineer. You hire a React Developer. No need to train. Plug and play. Easy to replace. A cog in a machine designed by someone else, maintaining a system architected by someone else, solving problems defined by someone else. This is not engineering. This is operating.
In my opinion, true software engineering is back.
I am not speaking from theory alone. I’ve been developing this way almost flawlessly for over two years at this point. But the true revolution clearly happened last year, and since December 2025 this has been obvious to anyone paying attention. From now on it will be even more so.
We have the chance again to get rid of useless complexity and keep working on the true and welcome complexity of our ideas, our features, our products. The complexity that matters. The complexity that is actually yours.
Automation and boilerplate have never been so cheap to overcome. I basically never write the same line of code twice. I’m instantly building small tools I need, purpose-built, exactly shaped around the problem at hand. I don’t need any fancy monorepo manager. A simple Makefile covers 100% of my needs for 99% of my use cases. If and when things get very complicated, I’ll think about it. But only then. Not a second before. This is engineering. You solve the problem you have, not the problem someone on a conference stage told you that you’ll eventually have.
Agents are really well prepared when it comes to basic tools. Tools that have been around not for months, but literally for decades. Bash was born in 1989, just preceding me by two months. The most mediocre model running at this time knows bash better than any person in the world. Bash is the universal adapter. It is not a coincidence that coding agents are shifting from complex and expensive MCP configurations to a simple agent loop with bash as a way to interact, literally, with the world. The oldest tool turned out to be the most future proof. There’s a lesson in there if you care to listen.
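To make that concrete, here is a minimal sketch in Python of an agent loop whose only tool is bash. Everything model-side is an assumption: call_model stands in for whatever LLM API you use and the message format is hypothetical; the point is only how little machinery the loop needs.

import subprocess

def run_bash(command: str) -> str:
    """Run a shell command and return combined output (truncated)."""
    result = subprocess.run(
        ["bash", "-c", command],
        capture_output=True, text=True, timeout=120,
    )
    return (result.stdout + result.stderr)[:10_000]

def agent_loop(task: str, call_model, max_steps: int = 50) -> str:
    """Minimal agent loop: the model's only tool is bash.

    `call_model` is a hypothetical stand-in: it takes the conversation
    so far and returns either a shell command to run or a final answer.
    """
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)  # assumed LLM call, not a real API
        if action["type"] == "done":
            return action["answer"]
        output = run_bash(action["command"])
        history.append({"role": "tool", "content": output})
    return "step budget exhausted"

That is the whole universal adapter: no servers, no tool schemas, just a loop and a shell.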
Really think about it.
Why do you ever need, for most of the use cases you can think of, a useless, expensive, flawed, often vulnerable framework, and the parade of libraries that comes with it, that you probably use for only 10% of its capabilities? With all the costs associated with it. From the “least” expensive: operational costs like keeping everything updated because they once again found a critical vulnerability in your Next.js version. To the most expensive one: the cost to your Design Choices. The invisible cost. The one you pay every day without even realizing it, because you’ve been paying it so long you forgot what freedom felt like.
If you keep accepting this trade off, you are not only losing the biggest opportunity we’ve seen in software engineering in decades. You are probably not recognizing your own laziness in once again buying whatever the hyperscalers have decided for you. You’re letting Google and Meta and Vercel be your architect, your designer, your thinker. And in exchange, you get to be their operator.
The tools are here. The models are here. The revolution already happened and most people are still decorating the old house.
Stop wrapping broken legs in silk. Start building things that are yours.
...
Read the original on blog.alaindichiappari.dev »
This is a vocal technique reference covering 21 techniques across five categories. It’s designed as a learning companion — whether you’re a beginner finding your voice or an experienced singer expanding your toolkit.
The sticky bar below the title lets you jump between sections. Each colored dot matches its category:
— ways to shape and color your sound
How to read the table
Each row is one technique. Hover the technique name to see a short description. The difficulty dots (● ○ ○ ○ ○) show how advanced it is, from 1 to 5.
Some techniques show small dashed chips beneath the name — these are prerequisites. The chip color tells you which category the prerequisite belongs to. Hover a chip to see what that technique sounds like, or click it to jump straight to its row in the table.
Techniques marked with ⚠️ warnings can cause damage if done incorrectly. The golden rule: if it hurts, stop. Work with a vocal coach for anything rated 4–5 dots.
Use EN / DA to switch language and the theme button to cycle through five color schemes: Dark, Light, Midnight, Forest, and Ember. Your choices are saved automatically.
I hope this guide helps you on your vocal journey. If you have suggestions, found a bug, or just want to say hi — I’d love to hear from you.
Check your posture — feet shoulder-width, shoulders back and down, chin level.
Release tension — roll your neck, shrug and drop shoulders, shake out your arms.
No cold starts — never belt, distort, or push range without warming up first.
Breathing (1 min) — Inhale 4 counts into belly/sides/back. Exhale on “Sss” for 15–20 seconds. Repeat 3x. This activates your support system.
Lip Trills (1 min) — Blow air through closed lips to make them vibrate. Slide up and down your range. Keeps everything relaxed and connected.
Humming (1 min) — Hum on “Mmm” through 5-note scales, ascending. Feel the buzz in your face (mask resonance). Keep jaw and tongue loose.
Vowel Slides (1 min) — Sing “Mee-Meh-Mah-Moh-Moo” on a single note, then move up by half steps. Opens the vocal tract gradually.
Sirens (1 min) — Slide from bottom to top and back on “Woo” or “Wee.” Full range, gentle, no pushing. This bridges your registers.
Straw phonation — Sing through a straw (or into a cup of water with a straw). Creates back-pressure that balances airflow and fold closure. Best warm-up tool available.
Tongue trills — Roll your tongue on “Rr” while singing scales. Releases tongue tension (a common problem).
Arpeggios — 1-3-5-8-5-3-1 on “Nay” or “Gee” to work through your passaggio (break area).
A dome-shaped muscle beneath your lungs. When you inhale, it flattens downward, pulling air in. You don’t directly “sing from your diaphragm” — you use it to control the rate of exhalation. Think of it as an air pressure regulator, not a sound source.
Two small folds of tissue in your larynx. When air passes through, they vibrate and create sound. Thicker vibration = chest voice. Thinner vibration = head voice. Partial closure = falsetto/breathy. The space between them is called the glottis.
Your “voice box.” It can move up (bright, thin sound) or down (dark, warm sound). For most singing, a neutral or slightly lowered larynx is ideal. A high larynx under pressure = strain. Learn to keep it stable — yawning gently while singing helps find the right position.
Abdominals — Control exhalation pressure. They don’t push air out — they slow the collapse of the rib cage.
Intercostals — Muscles between your ribs. Keep your ribs expanded during singing. This is “appoggio” (leaning into the breath).
Back muscles — Often forgotten. Your lower back expands when breathing correctly. Engage it for support.
Stand with feet shoulder-width apart. Knees slightly bent (not locked). Shoulders relaxed, back and down. Chest comfortably open. Head balanced on top of the spine — not jutting forward. Imagine a string pulling you up from the crown of your head.
Hydration — Drink water consistently throughout the day, not just before singing. Your vocal folds need systemic hydration.
Steam inhalation — Breathe steam for 10 minutes before heavy singing. This directly hydrates the folds.
Rest — Your voice needs recovery time. Avoid talking loudly after intense sessions.
Common advice that’s misleading, incomplete, or outright harmful. If someone tells you any of these, be skeptical.
You can’t directly control your diaphragm — it’s an involuntary muscle on the inhale. What people mean is: use your abdominal and intercostal muscles to control exhalation. Saying “sing from your diaphragm” is like saying “digest from your stomach.” Technically involved, but not how you’d teach it.
“Drink tea with honey to fix your voice”
Tea and honey never touch your vocal folds — they go down your esophagus, not your trachea. They can soothe throat irritation and feel nice, but they don’t “fix” or “coat” your cords. What actually helps: steam inhalation and systemic hydration (water, hours in advance).
The sound isn’t produced in your chest. “Chest voice” refers to the thick vocal fold vibration pattern that creates sympathetic resonance you feel in your upper torso. The sound is always made at the vocal folds in your larynx.
“Falsetto is only for men”
Everyone with vocal folds can produce falsetto — it’s a mode of vibration where the folds don’t fully close. Women use it too, though the timbral difference from head voice may be less dramatic.
They give you a damaged voice. The “rasp” from smoking and alcohol comes from swollen, irritated, dehydrated folds. Healthy vocal distortion uses the false folds and arytenoids — structures above the true cords. One is controlled art, the other is permanent damage.
The exact opposite. High notes require less air, not more. Pushing more air at higher pitches forces the folds apart and creates strain. Think “less air, more compression” — let the folds do the work.
Artificial vibrato (jaw wobble, diaphragm pulse) sounds unnatural and creates tension. Real vibrato emerges naturally when breath support is solid and the throat is relaxed. If you don’t have vibrato yet, the fix is better technique — not manufacturing it.
“You’re either born with it or you’re not”
Singing is a motor skill. Some people have natural advantages (vocal fold length, resonance cavity size), but technique, pitch accuracy, tone quality, and range are all trainable. Most “natural” singers practiced obsessively as children.
Your vocal folds are tissue. They need increased blood flow and gradual stretching before heavy use — just like any other muscle. Cold singing is the fastest path to strain, nodules, and hemorrhages.
Pain means damage. Unlike skeletal muscles, vocal folds don’t grow stronger from micro-tears. Pain, burning, or persistent hoarseness = stop immediately. Rest. If it lasts more than a few days, see an ENT specialist.
This guide uses traditional vocal terminology (chest voice, head voice, mixed voice, etc.) because it’s the most widely understood framework worldwide. However, the most scientifically validated system is Complete Vocal Technique (CVT), developed by Cathrine Sadolin at the Complete Vocal Institute in Copenhagen.
CVT is built on laryngoscopic imaging, EGG measurements, and peer-reviewed acoustic research. Here’s how the two frameworks relate.
CVT classifies all singing into four modes based on vocal tract configuration — not felt vibration:
Support — Coordinated abdominal, waist, solar plexus, and back muscle engagement to control air pressure and airflow. (This guide: Breath Support)
Necessary Twang — Narrowing the epiglottic funnel for clearer, more efficient sound. CVT considers this foundational for all healthy singing, not just a style. (This guide: Twang)
Avoid protruding jaw & tightened lips — These trigger uncontrolled vocal cord constriction, especially in upper register. (Not explicitly covered in this guide)
“Overdrive” in CVT is a clean mode, not distortion
Vowel rules: CVT restricts specific vowels per mode. Traditional pedagogy uses general vowel modification. Both work, but CVT is more precise.
Metal & Density: CVT uses “degree of metal” (0–100%) and “density” (fuller vs. reduced) as parameters. Traditional pedagogy doesn’t have these concepts.
“Overdrive” means different things: In this guide, overdrive = heavy vocal distortion (like guitar overdrive). In CVT, Overdrive = a clean, shouty vocal mode.
Learn more: completevocalinstitute.com
Look for expandable annotations on technique cards throughout this guide.
The fundamental modes of vocal fold vibration. Every sound you make lives somewhere on this spectrum. Master these before anything else.
Ways of using your registers to create specific sounds. These define genres and artistic identity.
Textures and colors you add to your base tone. These are the seasoning — use them deliberately, not as defaults.
Decorative techniques that add flair, personality, and musicality to your phrasing.
The foundation everything else rests on. Control here is the difference between amateurs and professionals.
...
Read the original on jesperordrup.github.io »
Rebecca Guy, senior policy manager at the Royal Society for the Prevention of Accidents, said: “Regular vision checks are a sensible way to reduce risk as we age, but the priority must be a system that supports people to drive safely for as long as possible, while ensuring timely action is taken when health or eyesight could put them or others in danger.”
...
Read the original on www.bbc.com »
I am an unusual beast. All my solo project games I’ve been making recently have been written in ‘vanilla’ C. Nobody does this. So I think it might be interesting to explain why I do.
Dry programming language opinions incoming, you have been warned.
There are some things which are non-negotiable. First off, it has to be reliable. I can’t afford to spend my time dealing with bugs I didn’t cause myself.
A lot of my games were written for flash, and now flash is dying. I do not want to spend my time porting old games to new platforms, I want to make new games. I need a platform that I am confident will be around for a while.
Similarly I want to avoid tying myself to a particular OS, and ideally I’d like to have the option of developing for consoles. So it’s important that my programming language is portable, and that it has good portable library support.
The strongest thing on my desired-but-not-required list is simplicity. I find looking up language features, and quirky ‘clever’ APIs, incredibly tiring. The ideal language would be one I can memorize, and then never have to look things up.
Dealing with bugs is a huge creative drain. I want to produce fewer bugs, so I want strict typing, strong warning messages and static code analysis. I want bugs to be easier to find, so I want good debuggers and dynamic analysis.
I’m not interested in high-def realism, but I do still care a bit about performance. Having more cycles available broadens the palette of things you can do. It’s particularly interesting to explore what is possible with modern, powerful computers if you aren’t pursuing fidelity.
Even more than that I care about the speed of the compiler. I am not a zen master of focus, and waiting 10+ seconds is wasteful, yes, but more importantly it breaks my flow. I flick over to Twitter and suddenly 5+ minutes are gone.
I am not an OOP convert. I’ve spent most of my professional life working with classes and objects, but the more time I spend, the less I understand why you’d want to combine code and data so rigidly. I want to handle data as data and write the code that best fits a particular situation.
C++ is still the most common language for writing games, and not without reason. I still do almost all of my contract work in it. I dislike it intensely.
C++ covers my needs, but fails my wants badly. It is desperately complicated. Despite decent tooling it’s easy to create insidious bugs. It is also slow to compile compared to C. It is high performance, and it offers features that C doesn’t have; but features I don’t want, and at a great complexity cost.
C# and Java have similar issues. They are verbose and complex beasts, and I am searching for a concise, simple creature. They both do a lot to railroad a programmer into a strongly OOP style that I am opposed to. As per most higher level languages they have a tendency to hide away complexity in a way that doesn’t actually prevent it from biting you.
I like Go a lot. In many ways it is C revisited, taking into account what has been learnt in the long years since it was released. I would like to use it, but there are big roadblocks that prevent me. The stop-the-world garbage collection is a big pain for games; stopping the world is something you can’t really afford to do. The library support for games is quite poor, and though you can wrap C libs without much trouble, doing so adds a lot of busy work. It is niche enough that I worry a little about long term relevance.
It would be nice to make things for the web, but it feels like a terrifyingly fast moving environment. It is particularly scary with the death of flash. I really dislike JavaScript; it is so loose that I marvel that people are able to write big chunks of software in it. I have no interest in trying.
Haxe feels much more promising than most alternatives. If I do web stuff again I’ll be diving in here. There is some good library support. I am a little concerned by its relative youth: will it last? I don’t have much else to say about it though, I’ve only dabbled with the surface.
Some people just say screw it, I’ll write my own language, the language I want to use. I admire this, and sometimes I toy with the idea of doing the same. It feels like too much to throw away all existing library support, and to take full responsibility for future compatibility. It is also very difficult, and when it comes down to it I would rather be making games than programming languages.
C is dangerous, but it is reliable. A very sharp knife that can cut fingers as well as veg, but so simple it’s not too hard to learn to use it carefully.
It is fast, and when it comes to compilation I can’t think of anything faster.
It can be made to run on just about anything. Usually this is relatively easy. It is hard to imagine a time when this won’t be the case.
The library and tooling support is strong and ongoing.
I say this with some sadness, but it is still the language for me.
I absolutely DO NOT mean to say “hey, you should use C too”. I fully appreciate that preferences here are pretty specific and unusual. I have also already written more ‘vanilla’ C code than most, and this is certainly part of my comfort.
So yeah, that’s it :-)
...
Read the original on jonathanwhiting.com »
Hoot is a Spritely project for running Scheme code on Wasm GC-capable web browsers, featuring a Scheme to Wasm compiler and a full-featured Wasm toolchain.
Hoot is built on Guile and has no additional dependencies. The toolchain is self-contained and even features a Wasm interpreter for testing Hoot binaries without leaving the Guile REPL.
...
Read the original on www.spritely.institute »
…but at scale they can become the query that ruins your day.
Geospatial functions are expensive, and they often force a loop join that starts to feel quadratic as your tables grow.
The core idea in this post is simple: we’ll see how Floe automatically rewrites this kind of query and takes advantage of H3 indexes for dramatic speedup.
A geo join is any join whose ON clause is a spatial predicate, for example ON ST_Intersects(a.geo, b.geo).
Why it hurts at scale
Modern databases make joins fast by turning them into hash joins over keys. If you can hash-partition both inputs on the join key, each worker compares only its share instead of comparing everything to everything. If the data is nicely distributed, this decreases the complexity from quadratic to linear.
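For contrast, here is a hash join in miniature: a Python sketch of the idea, not any particular engine's implementation. Build a hash table on one input's join key, probe it with the other, and hash-partition rows across workers so matching keys land together.

from collections import defaultdict

def hash_join(left, right, key):
    """Equi-join two lists of dicts on `key` in roughly O(|left| + |right|)
    expected time, instead of the O(|left| * |right|) nested loop."""
    table = defaultdict(list)
    for row in left:                       # build side
        table[row[key]].append(row)
    for row in right:                      # probe side
        for match in table.get(row[key], []):
            yield {**match, **row}

def hash_partition(rows, key, n_workers):
    """Rows with equal keys hash to the same worker, so each worker
    joins only its own share of the data."""
    parts = [[] for _ in range(n_workers)]
    for row in rows:
        parts[hash(row[key]) % n_workers].append(row)
    return parts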
Spatial predicates don’t give you a clean join key. So you end up in a terrible situation:
you have to compare every value with each other (the quadratic complexity)
plus an expensive predicate on each candidate pair
That’s the situation we want to escape.
H3 (originally from Uber) partitions the Earth into a hierarchy of mostly hexagonal cells.
Two properties matter for us:
Hierarchical resolution: you choose a resolution from coarse to fine.
Compact keys: each cell is a BIGINT, so it behaves like a normal join key: hashable, sortable, distributable.
Most importantly, it lets us represent a geography as a set of cell IDs that covers it.
If two shapes intersect, then their H3 cover sets share at least one cell.
That gives us a path to rewrite “do these shapes intersect?” into “do these two sets overlap?” which a database can execute as a plain equi-join.
Cell coverage is an approximation of the exact geometry:
It’s OK to keep extra candidates (false positives): they’ll be removed by the exact predicate.
It’s not OK to miss true matches (false negatives): if we drop them in the pre-filter, no later step can recover them.
So we generate coverage so it over-approximates the shape (the coverage contains the shape).
Join the two cover sets on their shared cells to produce candidate pairs
Deduplicate candidates (the same pair can match on multiple cells)
Run the exact predicate on candidates only (the sketch below walks through these steps in code)
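Here is the whole pipeline in miniature, as a hedged Python sketch. h3_cover and exact_intersects are stand-ins for the engine's h3_coverage and ST_Intersects functions; this shows the shape of the rewrite, not Floe's implementation.

def geo_join(A, B, h3_cover, exact_intersects, res=3):
    """Prefilter a spatial join through H3 cell covers, then recheck.

    h3_cover(geo, res) must OVER-approximate: every cell the shape
    touches is included, so true matches are never lost (no false
    negatives); extra candidates are removed by the exact predicate.
    """
    # 1. Explode one side into cell -> rows (the build side).
    cell_to_b = {}
    for b in B:
        for cell in h3_cover(b["geo"], res):
            cell_to_b.setdefault(cell, []).append(b)

    # 2. Equi-join on cell to get candidate pairs; a set dedups,
    #    because one pair can share many cells.
    candidates = set()
    for a in A:
        for cell in h3_cover(a["geo"], res):
            for b in cell_to_b.get(cell, ()):
                candidates.add((a["id"], b["id"]))

    # 3. Exact predicate only on the surviving candidates.
    a_by_id = {a["id"]: a for a in A}
    b_by_id = {b["id"]: b for b in B}
    return [(i, j) for (i, j) in candidates
            if exact_intersects(a_by_id[i]["geo"], b_by_id[j]["geo"])]

Step 2 is exactly the equi-join on (cell) in the SQL below; the expensive predicate in step 3 only runs on the pairs that survive.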
What the database gets “for free”
WITH
a_cells AS (
  SELECT
    a.id,
    a.geo,
    c.cell
  FROM A a
  JOIN h3_coverage(a.geo, /* resolution */ 3, /* full cover */ true) c
    ON TRUE
),
b_cells AS (
  SELECT
    b.id,
    b.geo,
    c.cell
  FROM B b
  JOIN h3_coverage(b.geo, 3, true) c
    ON TRUE
),
candidates AS (
  SELECT DISTINCT
    a_cells.id AS a_id,
    a_cells.geo AS a_geo,
    b_cells.id AS b_id,
    b_cells.geo AS b_geo
  FROM a_cells
  JOIN b_cells USING (cell)
)
SELECT *
FROM candidates
WHERE ST_Intersects(a_geo, b_geo);
With this rewrite, the heavy work becomes an equi-join on (cell):
It’s naturally distributable → you can hash-partition on cell across workers.
The expensive predicate becomes a cleanup step instead of the main event.
Three questions readers always ask
“Isn’t that approximate?” Yes — the H3 step is an approximation used as a pre-filter. Correctness is preserved by the final exact predicate recheck.
“Won’t that create false positives?” Yes, and that’s expected. The goal is to reduce the candidate set enough that exact checks become cheap.
“How do I pick a resolution?” Resolution is the tradeoff knob: higher resolution usually reduces false positives but increases the number of cells generated per shape. (We’ll cover how to measure and choose this in the Numbers section later.)
When a user enters a simple query to join countries with the cities they contain, the planner automatically applies the rewrite:
EXPLAIN ANALYZE SELECT * FROM world_cities JOIN countries ON ST_Intersects(world_cities.geo, countries.geo);
Planning time: 2.291 ms
rows_actual node
142141 SELECT
142141 FILTER WHERE ST_INTERSECTS(MAX(MAX(world_cities.geo)), MAX(MAX(countries.geo)))
geojoin filtered rows ratio: 99.62%
199848 GROUP BY (countries.rowunique, world_cities.rowunique)
199848 DISTRIBUTE ON HASH(world_cities.rowunique),HASH(countries.rowunique))
199848 GROUP BY PARTIAL (countries.rowunique, world_cities.rowunique)
224075 INNER HASH JOIN ON (COALESCE(h3_coverage_geodesic.h3_coverage_geodesic , $1 , const ) = COALESCE(h3_coverage_geodesic.h3_coverage_geodesic , $0 , const ) )
147043 |-DISTRIBUTE ON HASH(COALESCE(h3_coverage_geodesic.h3_coverage_geodesic , $0 , const ) )
147043 | LEFT OUTER FUNCTION JOIN H3_COVERAGE_GEODESIC(world_cities.geo, 3, t, t) ON true
147043 | SCAN world_cities
17223 |-BUILD HASH
17223 DISTRIBUTE ON HASH(COALESCE(h3_coverage_geodesic.h3_coverage_geodesic , $1 , const ) )
17223 LEFT OUTER FUNCTION JOIN H3_COVERAGE_GEODESIC(countries.geo, 3, t, t) ON true
256 SCAN countries
142141 rows returned
Read: 16.92MiB, Distributed: 5.85GiB, Network: 8.00GiB
Database: yellowbrick_test_utf8
Execution time: 1220.118 ms, End time: 2025-12-16 14:00:50
Without the rewrite, the join would face ~`256 * 147043 = 37.6 million` country×city pairs.
Instead, we’re doing 199848 calls to ST_Intersects, of which we keep 142141 pairs, a 99.6% reduction.
A possible approach is to materialize an index table (row_id → list of H3 cells) and maintain it. This avoids the somewhat expensive step of computing H3 indexes.
We chose a different route: compute coverage at query time as part of the rewrite. This approach:
Works over views, CTEs and subqueries, not just base tables.
This makes it easy to play with data cleaning directly in the query:
with cleaned_cities as (
  -- Dedup cities that are less than 100 meters apart
  select distinct st_reduceprecision(geo, 100) geo from world_cities
)
select count(*) from countries join cleaned_cities on st_intersects(countries.geo, cleaned_cities.geo);
Let’s look at some numbers.
We’ll use a join between 256 polygons representing countries and points representing world cities:
In this dataset, countries.geo is a polygon or multipolygon with 418 vertices on average.
We ran these tests on a cluster with 15 workers, each with a Xeon E5-2695 (16 cores @ 2.10GHz) and 1 TB of memory.
First, let’s look at the effect of H3 resolution.
Each time we increase the resolution by 1, the average size of a cell decreases by roughly 7× (each H3 cell subdivides into seven children).
Baseline is the time the query takes without H3 indexing. (459 seconds)
GeoJoin is the time the query takes with the rewrite.
Increasing the resolution too much makes indexing and the (cell) join too expensive.
At best, at resolution 3, the geo join takes 1.17 seconds — a 400× improvement.
...
Read the original on floedb.ai »
We built a Software Factory: non-interactive development where specs + scenarios drive agents that write code, run harnesses, and converge without human review.
The narrative form is included below. If you’d prefer to work from first principles, I offer a few constraints & guidelines that, applied iteratively, will accelerate any team toward the same intuitions, convictions, and ultimately a factory of your own. In kōan or mantra form:
* Why am I doing this? (implied: the model should be doing this instead)
* Code must not be written by humans
* Code must not be reviewed by humans
* If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
On July 14th, 2025, Jay Taylor and Navan Chauhan joined me (Justin McCarthy, co-founder, CTO) in founding the StrongDM AI team.
The catalyst was a transition observed in late 2024: with the second revision of Claude 3.5 (October 2024), long-horizon agentic coding workflows began to compound correctness rather than error.
By December of 2024, the model’s long-horizon coding performance was unmistakable via Cursor’s YOLO mode.
Prior to this model improvement, iterative application of LLMs to coding tasks would accumulate errors of all imaginable varieties (misunderstandings, hallucinations, syntax errors, version mismatches, DRY violations, library incompatibility, etc). The app or product would decay and ultimately “collapse”: death by a thousand cuts, etc.
Together with YOLO mode, the updated model from Anthropic provided the first glimmer of what we now refer to internally as non-interactive development or grown software.
In the first hour of the first day of our AI team, we established a charter which set us on a path toward a series of findings (which we refer to as our “unlocks”). In retrospect, the most important line in the charter document was the following:
Initially it was just a hunch. An experiment. How far could we get, without writing any code by hand?
Not very far! At least: not very far, until we added tests. However, the agent, obsessed with the immediate task, soon began to take shortcuts: return true is a great way to pass narrowly written tests, but probably won’t generalize to the software you want.
Tests were not enough. How about integration tests? Regression tests? End-to-end tests? Behavior tests?
One recurring theme of the agentic moment: we need new language. For example, the word “test” has proven insufficient and ambiguous. A test, stored in the codebase, can be lazily rewritten to match the code. The code could be rewritten to trivially pass the test.
We repurposed the word scenario to represent an end-to-end “user story”, often stored outside the codebase (similar to a “holdout” set in model training), which could be intuitively understood and flexibly validated by an LLM.
Because much of the software we grow itself has an agentic component, we transitioned from boolean definitions of success (“the test suite is green”) to a probabilistic and empirical one. We use the term satisfaction to quantify this validation: of all the observed trajectories through all the scenarios, what fraction of them likely satisfy the user?
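A sketch of what that metric might look like in code; the trajectory shape and the judge call are assumptions for illustration, not StrongDM's actual harness.

def satisfaction(trajectories, judge) -> float:
    """Fraction of observed trajectories that likely satisfy the user.

    `judge` is a stand-in for an LLM-as-judge call: it reads one full
    trajectory through a scenario and returns True if the outcome
    would satisfy the user. Unlike a boolean test suite, this is
    probabilistic and empirical: re-run it and the number moves.
    """
    if not trajectories:
        return 0.0
    satisfied = sum(1 for t in trajectories if judge(t))
    return satisfied / len(trajectories)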
In previous regimes, a team might rely on integration tests, regression tests, UI automation to answer “is it working?”
We noticed two limitations of previously reliable techniques:
Tests are too rigid - we were coding with agents, but we’re also building with LLMs and agent loops as design primitives; evaluating success often required LLM-as-judge
Tests can be reward hacked - we needed validation that was less vulnerable to the model cheating
The Digital Twin Universe is our answer: behavioral clones of the third-party services our software depends on. We built twins of Okta, Jira, Slack, Google Docs, Google Drive, and Google Sheets, replicating their APIs, edge cases, and observable behaviors.
With the DTU, we can validate at volumes and rates far exceeding production limits. We can test failure modes that would be dangerous or impossible against live services. We can run thousands of scenarios per hour without hitting rate limits, triggering abuse detection, or accumulating API costs.
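As an illustration only (nothing below is StrongDM's actual DTU), a twin can start life as a small in-process fake that reproduces the real API's observable behavior, including failure modes you could never safely trigger in production. The method names here merely mimic a Slack-like API shape.

class SlackTwin:
    """Behavioral clone of a chat API for scenario validation.

    Lets a harness run thousands of scenarios per hour with no rate
    limits, no API costs, and on-demand failure injection.
    """
    def __init__(self, fail_with=None):
        self.messages = []           # observable state, inspectable by the judge
        self.fail_with = fail_with   # e.g. "ratelimited" to exercise retry logic

    def post_message(self, channel, text):
        if self.fail_with:
            return {"ok": False, "error": self.fail_with}
        self.messages.append({"channel": channel, "text": text})
        return {"ok": True, "ts": str(len(self.messages))}

# A scenario can now assert on behavior that would be dangerous or
# impossible to provoke against the live service:
twin = SlackTwin(fail_with="ratelimited")
assert twin.post_message("#alerts", "deploy done") == {"ok": False, "error": "ratelimited"}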
Our success with DTU illustrates one of the many ways in which the Agentic Moment has profoundly changed the economics of software. Creating a high fidelity clone of a significant SaaS application was always possible, but never economically feasible. Generations of engineers may have wanted a full in-memory replica of their CRM to test against, but self-censored the proposal to build it. They didn’t even bring it to their manager, because they knew the answer would be no.
Those of us building software factories must practice a deliberate naivete: finding and removing the habits, conventions, and constraints of Software 1.0. The DTU is our proof that what was unthinkable six months ago is now routine.
* Principles: what we believe is true about building software with agents
* Products: tools we use daily and believe others will benefit from
Thank you for reading. We wish you the best of luck constructing your own Software Factory.
...
Read the original on factory.strongdm.ai »
Note: I’ve seen some online chatter about the possibility that the footage shared in this post could be AI generated (which is pretty depressing, but here we are I suppose). I just wanted to clarify that it is not. It would be pretty daft of me to knowingly post AI generated footage on a blog that I’ve worked hard to keep on the up and up. The footage was captured by a CBS news team, I assume to use as B-roll in a news segment. The same reel followed up with an interview of a senior Atari employee (keep an eye out for that in a future post). Same goes for the images — they’ve been around for a few years in one format or another, but clearly were taken at around the same time. I haven’t quoted sources, because it’s almost impossible to know where this stuff originated, or if there’s any actual “ownership” after all this time. I agree, the footage is remarkably clear — the only post-editing I did was a little upscaling on export from the editing software I used to clip out some irrelevant parts. Hope this clarifies — enjoy!
Well, it’s been a hot minute since I’ve updated the blog. Life decided that it had other plans for me over the past 18 months or so.
The good news though, dear reader, is that we are back and I have lots in store for 2026!
So let’s kick things off the right way here. I have some incredible and previously unseen footage of Battlezone cabinets being built at Atari from late summer/early fall 1980.
There’s something endlessly fascinating about watching a classic Atari title come together — not so much the design documents or the marketing materials, but the physical act of building the thing. Battlezone is one of those games where the development story and the cabinet itself are so intertwined that it’s hard to separate the two.
Ed Rotberg’s work on the game is well documented: a vector‑driven tank simulation that pushed Atari’s hardware and design philosophy into new territory. Rotberg was able to make a first‑person tank simulation at a time when the hardware barely wanted to cooperate.
Rotberg pushed Atari’s vector technology harder than anyone had before, adapting lessons from earlier games like Lunar Lander and Asteroids but steering them into entirely new territory. He fought for a unique control scheme that felt weighty and deliberate, insisted on a visual style that conveyed depth and motion through nothing but glowing vectors, and worked closely with the industrial design team to ensure the periscope‑style viewfinder wasn’t just a gimmick but an integral part of the experience.
The result was a game that felt impossibly immersive for 1980 — a technical gamble that only came together because Rotberg was willing to challenge both the hardware and the expectations of what an arcade machine could be.
The cabinet itself was its own engineering challenge. The distinctive periscope viewfinder wasn’t just a stylistic addition — it was a deliberate attempt to immerse the player in a way raster games of the era simply couldn’t match.
The industrial design team had to figure out how to make that idea manufacturable at scale, and the result is one of the most recognisable cabinets of the golden age.
Mike Querio, one of the industrial designers who worked on the cabinet’s design, had the following to say when I asked him about Battlezone:
There were some who did not like my periscope vision design. In fact my original design did not even include the acrylic windows on either side. The project manager, Morgan Hoff and others wanted them added so I modified my plastic bezel design. I designed a step to accommodate shorter players and made it removable to reduce the shipping size of the cabinet. It stored vertically inside the cabinet.
The video I’m sharing today comes from inside Atari’s Coin-Op Manufacturing Facility at Sunnyvale, California, and although there’s no narration, it doesn’t need any. The footage speaks for itself.
What you see is the real, day‑to‑day production process behind Battlezone cabinets. The footage shows the later stages of the assembly line, the finishing touches, and packing and shipping. It’s a rare look at how these machines were actually built.
Some 13,000 Battlezone uprights rolled off of Atari’s production lines during the production schedule that lasted from August 1980 to March 1981.
One of the first things that jumps out at me is just how fluidly the workers are able to move the cabinets around. Anyone who’s ever tried to shift a full‑size Atari cab on their own knows they’re not exactly featherweight. But in the factory environment, you see operators sliding them across the floor, pivoting them into position, and lining them up with a kind of casual confidence that only comes from doing it hundreds of times a week.
The real highlight comes at around the 3 minute 50 second mark. If you’ve ever wondered how Atari managed to ship these things in volume without destroying half of them in transit, this is the bit you’ll want to pay attention to. Two workers approach a finished cabinet with large industrial suction cups — one on each side. They attach them and then, almost effortlessly, lift the entire cabinet off the ground. No straps, no dollies, no awkward tilting. They swing the cabinet over to a waiting pallet, lower it into place, and release the cups.
Once the cabinet is on the pallet, the packing process begins. Cardboard spacers, protective wrap, and finally the outer carton — all designed to keep the machine safe on its journey to arcades around the world.
Anyway — enough chat from me — enjoy the video below:
Battlezone shared the production line with Missile Command, itself released at the same time — sadly, we don’t see any MC cabinets in the background in the footage which is a shame. But you can see the two cabinets in this image:
What I love most about this video is that it captures a moment in time when Atari was firing on all cylinders. Battlezone wasn’t just another arcade cabinet release — it was a statement piece. A game that blended cutting‑edge hardware with bold industrial design, and a cabinet that demanded attention on the arcade floor. Seeing it built by hand reminds us that these machines weren’t abstract products. They were physical objects, crafted by teams of people at the cutting edge of industrial and technical design.
Anyway — I appreciate your patience here. It’s good to be back and updating Arcade Blogger again. I have more to come in the weeks ahead.
If you like what you see, please share this article on your socials!
...
Read the original on arcadeblogger.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.