10 interesting stories served every morning and every evening.
Full list of projects available here.
La Suite numérique (La Suite for short) is a full blown open-source digital workspace for online collaboration and teamwork.
La Suite is built by the French government agencies DINUM and ANCT. It is also the product of a close European collaboration with the Dutch and German states.
Our code base is 100% open source and MIT licensed.
Come say hello on Matrix
...
Read the original on github.com »
I didn’t ask for this and neither did you.
I didn’t ask for a robot to consume every blog post and piece of code I ever wrote and parrot it back so that some hack could make money off of it.
I didn’t ask for the role of a programmer to be reduced to that of a glorified TSA agent, reviewing code to make sure the AI didn’t smuggle something dangerous into production.
And yet here we are. The worst fact about these tools is that they work. They can write code better than you or I can, and if you don’t believe me, wait six months.
You could abstain out of moral principle. And that’s fine, especially if you’re at the tail end of your career. And if you’re at the beginning of your career, you don’t need me to explain any of this to you, because you already use Warp and Cursor and Claude, with ChatGPT as your therapist and pair programmer and maybe even your lover. This post is for the 40-somethings in my audience who don’t realize this fact yet.
So as a senior, you could abstain. But then your junior colleagues will eventually code circles around you, because they’re wearing bazooka-powered jetpacks and you’re still riding around on a fixie bike. Eventually your boss will start asking why you’re getting paid twice your zoomer colleagues’ salary to produce a tenth of the code.
Ultimately if you have a mortgage and a car payment and a family you love, you’re going to make your decision. It’s maybe not the decision that your younger, more idealistic self would want you to make, but it does keep your car and your house and your family safe inside it.
Someday years from now we will look back on the era when we were the last generation to code by hand. We’ll laugh and explain to our grandkids how silly it was that we typed out JavaScript syntax with our fingers. But secretly we’ll miss it.
We’ll miss the feeling of holding code in our hands and molding it like clay in the caress of a master sculptor. We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM. We’ll miss creating something we feel proud of, something true and right and good. We’ll miss the satisfaction of the artist’s signature at the bottom of the oil painting, the GitHub repo saying “I made this.”
I don’t celebrate the new world, but I also don’t resist it. The sun rises, the sun sets, I orbit helplessly around it, and my protests can’t stop it. It doesn’t care; it continues its arc across the sky regardless, moving but unmoved.
If you would like to grieve, I invite you to grieve with me. We are the last of our kind, and those who follow us won’t understand our sorrow. Our craft, as we have practiced it, will end up like some blacksmith’s tool in an archeological dig, a curio for future generations. It cannot be helped, it is the nature of all things to pass to dust, and yet still we can mourn. Now is the time to mourn the passing of our craft.
...
Read the original on nolanlawson.com »
This is a vocal technique reference covering 21 techniques across five categories. It’s designed as a learning companion — whether you’re a beginner finding your voice or an experienced singer expanding your toolkit.
The sticky bar below the title lets you jump between sections. Each colored dot matches its category:
— ways to shape and color your sound
How to read the table
Each row is one technique. Hover the technique name to see a short description. The difficulty dots (● ○ ○ ○ ○) show how advanced it is, from 1 to 5.
Some techniques show small dashed chips beneath the name — these are prerequisites. The chip color tells you which category the prerequisite belongs to. Hover a chip to see what that technique sounds like, or click it to jump straight to its row in the table.
Techniques marked with ⚠️ warnings can cause damage if done incorrectly. The golden rule: if it hurts, stop. Work with a vocal coach for anything rated 4–5 dots.
Use EN / DA to switch language and the theme button to cycle through five color schemes: Dark, Light, Midnight, Forest, and Ember. Your choices are saved automatically.
I hope this guide helps you on your vocal journey. If you have suggestions, found a bug, or just want to say hi — I’d love to hear from you.
Check your posture — feet shoulder-width, shoulders back and down, chin level.
Release tension — roll your neck, shrug and drop shoulders, shake out your arms.
No cold starts — never belt, distort, or push range without warming up first.
Breathing (1 min) — Inhale 4 counts into belly/sides/back. Exhale on “Sss” for 15–20 seconds. Repeat 3x. This activates your support system.
Lip Trills (1 min) — Blow air through closed lips to make them vibrate. Slide up and down your range. Keeps everything relaxed and connected.
Humming (1 min) — Hum on “Mmm” through 5-note scales, ascending. Feel the buzz in your face (mask resonance). Keep jaw and tongue loose.
Vowel Slides (1 min) — Sing “Mee-Meh-Mah-Moh-Moo” on a single note, then move up by half steps. Opens the vocal tract gradually.
Sirens (1 min) — Slide from bottom to top and back on “Woo” or “Wee.” Full range, gentle, no pushing. This bridges your registers.
Straw phonation — Sing through a straw (or into a cup of water with a straw). Creates back-pressure that balances airflow and fold closure. Best warm-up tool available.
Tongue trills — Roll your tongue on “Rr” while singing scales. Releases tongue tension (a common problem).
Arpeggios — 1-3-5-8-5-3-1 on “Nay” or “Gee” to work through your passaggio (break area).
A dome-shaped muscle beneath your lungs. When you inhale, it flattens downward, pulling air in. You don’t directly “sing from your diaphragm” — you use it to control the rate of exhalation. Think of it as an air pressure regulator, not a sound source.
Two small folds of tissue in your larynx. When air passes through, they vibrate and create sound. Thicker vibration = chest voice. Thinner vibration = head voice. Partial closure = falsetto/breathy. The space between them is called the glottis.
Your “voice box.” It can move up (bright, thin sound) or down (dark, warm sound). For most singing, a neutral or slightly lowered larynx is ideal. A high larynx under pressure = strain. Learn to keep it stable — yawning gently while singing helps find the right position.
Abdominals — Control exhalation pressure. They don’t push air out — they slow the collapse of the rib cage.
Intercostals — Muscles between your ribs. Keep your ribs expanded during singing. This is “appoggio” (leaning into the breath).
Back muscles — Often forgotten. Your lower back expands when breathing correctly. Engage it for support.
Stand with feet shoulder-width apart. Knees slightly bent (not locked). Shoulders relaxed, back and down. Chest comfortably open. Head balanced on top of the spine — not jutting forward. Imagine a string pulling you up from the crown of your head.
Hydration — Drink water consistently throughout the day, not just before singing. Your vocal folds need systemic hydration.
Steam inhalation — Breathe steam for 10 minutes before heavy singing. This directly hydrates the folds.
Rest — Your voice needs recovery time. Avoid talking loudly after intense sessions.
Common advice that’s misleading, incomplete, or outright harmful. If someone tells you any of these, be skeptical.
You can’t directly control your diaphragm — it’s an involuntary muscle on the inhale. What people mean is: use your abdominal and intercostal muscles to control exhalation. Saying “sing from your diaphragm” is like saying “digest from your stomach.” Technically involved, but not how you’d teach it.
“Drink tea with honey to fix your voice”
Tea and honey never touch your vocal folds — they go down your esophagus, not your trachea. They can soothe throat irritation and feel nice, but they don’t “fix” or “coat” your cords. What actually helps: steam inhalation and systemic hydration (water, hours in advance).
The sound isn’t produced in your chest. “Chest voice” refers to the thick vocal fold vibration pattern that creates sympathetic resonance you feel in your upper torso. The sound is always made at the vocal folds in your larynx.
“Falsetto is only for men”
Everyone with vocal folds can produce falsetto — it’s a mode of vibration where the folds don’t fully close. Women use it too, though the timbral difference from head voice may be less dramatic.
They give you a damaged voice. The “rasp” from smoking and alcohol comes from swollen, irritated, dehydrated folds. Healthy vocal distortion uses the false folds and arytenoids — structures above the true cords. One is controlled art, the other is permanent damage.
The exact opposite. High notes require less air, not more. Pushing more air at higher pitches forces the folds apart and creates strain. Think “less air, more compression” — let the folds do the work.
Artificial vibrato (jaw wobble, diaphragm pulse) sounds unnatural and creates tension. Real vibrato emerges naturally when breath support is solid and the throat is relaxed. If you don’t have vibrato yet, the fix is better technique — not manufacturing it.
“You’re either born with it or you’re not”
Singing is a motor skill. Some people have natural advantages (vocal fold length, resonance cavity size), but technique, pitch accuracy, tone quality, and range are all trainable. Most “natural” singers practiced obsessively as children.
Your vocal folds are tissue. They need increased blood flow and gradual stretching before heavy use — just like any other muscle. Cold singing is the fastest path to strain, nodules, and hemorrhages.
Pain means damage. Unlike skeletal muscles, vocal folds don’t grow stronger from micro-tears. Pain, burning, or persistent hoarseness = stop immediately. Rest. If it lasts more than a few days, see an ENT specialist.
This guide uses traditional vocal terminology (chest voice, head voice, mixed voice, etc.) because it’s the most widely understood framework worldwide. However, the most scientifically validated system is Complete Vocal Technique (CVT), developed by Cathrine Sadolin at the Complete Vocal Institute in Copenhagen.
CVT is built on laryngoscopic imaging, EGG measurements, and peer-reviewed acoustic research. Here’s how the two frameworks relate.
CVT classifies all singing into four modes based on vocal tract configuration — not felt vibration:
Support — Coordinated abdominal, waist, solar plexus, and back muscle engagement to control air pressure and airflow. (This guide: Breath Support)
Necessary Twang — Narrowing the epiglottic funnel for clearer, more efficient sound. CVT considers this foundational for all healthy singing, not just a style. (This guide: Twang)
Avoid protruding jaw & tightened lips — These trigger uncontrolled vocal cord constriction, especially in upper register. (Not explicitly covered in this guide)
“Overdrive” in CVT is a clean mode, not distortion
Vowel rules: CVT restricts specific vowels per mode. Traditional pedagogy uses general vowel modification. Both work, but CVT is more precise.
Metal & Density: CVT uses “degree of metal” (0–100%) and “density” (fuller vs. reduced) as parameters. Traditional pedagogy doesn’t have these concepts.
“Overdrive” means different things: In this guide, overdrive = heavy vocal distortion (like guitar overdrive). In CVT, Overdrive = a clean, shouty vocal mode.
Learn more: completevocalinstitute.com
Look for expandable annotations on technique cards throughout this guide.
The fundamental modes of vocal fold vibration. Every sound you make lives somewhere on this spectrum. Master these before anything else.
Ways of using your registers to create specific sounds. These define genres and artistic identity.
Textures and colors you add to your base tone. These are the seasoning — use them deliberately, not as defaults.
Decorative techniques that add flair, personality, and musicality to your phrasing.
The foundation everything else rests on. Control here is the difference between amateurs and professionals.
...
Read the original on jesperordrup.github.io »
I don’t post a lot. But when I do, it’s because I think few people are saying out loud what I’m noticing.
I’ve been building a product from the ground up. Not the “I spun up a Next.js template” kind of ground up. I mean from network configuration to product design to pricing decisions. Truly end to end. And I’ve been doing it using frontier models and coding agents for hours and hours every single day, both on this project and in my full time work. I’ve been trying to stay away from the chaos and the hype, filtering hard for what is actually valuable.
Since December 2025, things have dramatically changed for the better. Many have noticed. Few are drawing the right conclusions.
Antirez likes to call it “automated programming”, and I really like that framing. It captures the essence far better than the shallow, almost dismissive label of “vibe coding”. Automation was at the core of most of the work and cultural revolutions of human history. The printing press, the loom, the assembly line. This one doesn’t differ much.
Most of my work is still there. I still have to deeply think about every important aspect of what I want to build. The architecture, the trade offs, the product decisions, the edge cases that will bite you at 3am. What’s gone is the tearing, exhausting manual labour of typing every single line of code.
At this point in time, models and tools, when put in a clean and maniacally well set up environment, can truly make the difference. I can be the architect without the wearing act of laying every single brick and spreading the mortar. I can design the dress without the act of cutting and sewing each individual piece of fabric. But I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time.
Automated programming especially lets me build the tools I need so fast that every blacksmith who ever existed on this earth would envy me deeply: finally able to really focus on the things they had in mind, finally dedicating more of their craft to the art they conceived, not the sweat of the forge.
It’s been months now that I have this thought crystallized in my mind. It is so clear to me that I genuinely don’t understand why everyone is not screaming it to the world.
We can finally get rid of all that middle work. That adapting layer of garbage we blindly accepted during these years. A huge amount of frameworks and libraries and tooling that has completely polluted software engineering, especially in web, mobile and desktop development. Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
Think about what happened. We, as an industry, looked at the genuine complexity of building software and instead of sharpening our thinking, we bought someone else’s thinking off the shelf. We wrapped everything in frameworks like wrapping a broken leg in silk. It looks nice. The leg is still broken.
In my mind, besides the self declared objectives, frameworks solve three problems. Two explicit and one obvious but never declared.
“Simplification”. Software engineers are scared of designing things themselves. They would rather accept someone else’s structure, despite having to force fit it into their product, rather than taking the time to start from the goal and work backwards to create the perfect suit for their idea. Like an architect blindly accepting another architect’s blueprints and applying them regardless of the context, the needs, the terrain, the new technological possibilities. We decided to remove complexity not by sharpening our mental models around the products we build, but by buying a one size fits all design and applying it everywhere. That is not simplification. That is intellectual surrender.
Automation. This is the only point I can actually, more or less, understand and buy. Boilerplate is boring work. I hate it. And I especially hate using libraries that I then need to study, keep updated, be aware of vulnerabilities for, just for the purpose of removing the creation of duplicated but necessary code. Think about ORMs, CRUD management, code generation, API documentation and so on. The grunt work that nobody wants to do but everybody needs done. Fair enough. But hold that thought, because this is exactly the point where everything changes.
Labour cost. This is the quiet one. The one nobody puts on the conference slide. For companies, it is much better having Google, Meta, Vercel deciding for you how you build product and ship code. Adopt their framework. Pay the cost of lock in. Be enchanted by their cloud managed solution to host, deploy, store your stuff. And you unlock a feature that has nothing to do with engineering: you no longer need to hire a software engineer. You hire a React Developer. No need to train. Plug and play. Easy to replace. A cog in a machine designed by someone else, maintaining a system architected by someone else, solving problems defined by someone else. This is not engineering. This is operating.
In my opinion, true software engineering is back.
I am not just talking here. I’ve been developing this way almost flawlessly for over two years at this point. But the true revolution clearly happened last year, and since December 2025 it has been obvious to anyone paying attention. From now on it will be even more so.
We have the chance again to get rid of useless complexity and keep working on the true and welcome complexity of our ideas, our features, our products. The complexity that matters. The complexity that is actually yours.
Automation and boilerplate have never been so cheap to overcome. I basically never write the same line of code twice. I instantly build the small tools I need, purpose built, exactly shaped around the problem at hand. I don’t need any fancy monorepo manager. A simple Makefile covers 100% of my needs for 99% of my use cases. If and when things get very complicated, I’ll think about it. But only then. Not a second before. This is engineering. You solve the problem you have, not the problem someone on a conference stage told you that you’ll eventually have.
Agents are really well prepared when it comes to basic tools. Tools that have been around not for months, but literally for decades. Bash was born in 1989, just preceding me by two months. The most mediocre model running at this time knows bash better than any person in the world. Bash is the universal adapter. It is not a coincidence that coding agents are shifting from complex and expensive MCP configurations to a simple agent loop with bash as a way to interact, literally, with the world. The oldest tool turned out to be the most future proof. There’s a lesson in there if you care to listen.
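The agent loop described above can be sketched in a few lines. Everything here (the BASH:/DONE protocol, the function names) is a hypothetical illustration of the pattern, not any particular product’s implementation:

```python
import subprocess

def run_bash(cmd: str) -> str:
    """Execute a shell command and capture its output for the model."""
    result = subprocess.run(cmd, shell=True, capture_output=True,
                            text=True, timeout=60)
    return result.stdout + result.stderr

def agent_loop(ask_model, task: str, max_steps: int = 10) -> str:
    """Feed the task to the model; whenever it replies with a BASH: line,
    run the command and append the output, until it replies DONE."""
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        reply = ask_model(transcript)
        if reply.strip().startswith("DONE"):
            break
        if reply.strip().startswith("BASH:"):
            cmd = reply.split("BASH:", 1)[1].strip()
            output = run_bash(cmd)
            transcript += f"\n{reply}\nOutput:\n{output}"
        else:
            transcript += f"\n{reply}"
    return transcript
```

The whole “tool use” surface is one function that shells out, which is exactly why decades-old bash makes such a good universal adapter.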
Really think about it.
Why do you ever need, for most of the use cases you can think of, a useless, expensive, flawed, often vulnerable framework, and the parade of libraries that comes with it, that you probably use for only 10% of its capabilities? With all the costs associated with it. From the “least” expensive: operational costs like keeping everything updated because they once again found a critical vulnerability in your Next.js version. To the most expensive one: the cost to your Design Choices. The invisible cost. The one you pay every day without even realizing it, because you’ve been paying it so long you forgot what freedom felt like.
If you keep accepting this trade off, you are not only losing the biggest opportunity we’ve seen in software engineering in decades. You are probably not recognizing your own laziness in once again buying whatever the hyperscalers have decided for you. You’re letting Google and Meta and Vercel be your architect, your designer, your thinker. And in exchange, you get to be their operator.
The tools are here. The models are here. The revolution already happened and most people are still decorating the old house.
Stop wrapping broken legs in silk. Start building things that are yours.
...
Read the original on blog.alaindichiappari.dev »
We built a Software Factory: non-interactive development where specs + scenarios drive agents that write code, run harnesses, and converge without human review.
The narrative form is included below. If you’d prefer to work from first principles, I offer a few constraints & guidelines that, applied iteratively, will accelerate any team toward the same intuitions, convictions, and ultimately a factory of your own. In kōan or mantra form:
* Why am I doing this? (implied: the model should be doing this instead)
* Code must not be written by humans
* Code must not be reviewed by humans
* If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
On July 14th, 2025, Jay Taylor and Navan Chauhan joined me (Justin McCarthy, co-founder, CTO) in founding the StrongDM AI team.
The catalyst was a transition observed in late 2024: with the second revision of Claude 3.5 (October 2024), long-horizon agentic coding workflows began to compound correctness rather than error.
By December of 2024, the model’s long-horizon coding performance was unmistakable via Cursor’s YOLO mode.
Prior to this model improvement, iterative application of LLMs to coding tasks would accumulate errors of all imaginable varieties (misunderstandings, hallucinations, syntax errors, DRY violations, library version incompatibilities, etc.). The app or product would decay and ultimately “collapse”: death by a thousand cuts.
Together with YOLO mode, the updated model from Anthropic provided the first glimmer of what we now refer to internally as non-interactive development or grown software.
In the first hour of the first day of our AI team, we established a charter which set us on a path toward a series of findings (which we refer to as our “unlocks”). In retrospect, the most important line in the charter document was the following:
Initially it was just a hunch. An experiment. How far could we get, without writing any code by hand?
Not very far! At least: not very far, until we added tests. However, the agent, obsessed with the immediate task, soon began to take shortcuts: return true is a great way to pass narrowly written tests, but probably won’t generalize to the software you want.
Tests were not enough. How about integration tests? Regression tests? End-to-end tests? Behavior tests?
One recurring theme of the agentic moment: we need new language. For example, the word “test” has proven insufficient and ambiguous. A test, stored in the codebase, can be lazily rewritten to match the code. The code could be rewritten to trivially pass the test.
We repurposed the word scenario to represent an end-to-end “user story”, often stored outside the codebase (similar to a “holdout” set in model training), which could be intuitively understood and flexibly validated by an LLM.
Because much of the software we grow itself has an agentic component, we transitioned from boolean definitions of success (“the test suite is green”) to a probabilistic and empirical one. We use the term satisfaction to quantify this validation: of all the observed trajectories through all the scenarios, what fraction of them likely satisfy the user?
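As a toy illustration of that metric (the function and the stand-in judge below are invented for this sketch, not StrongDM’s actual code), satisfaction is just the judged fraction of trajectories:

```python
def satisfaction(trajectories, judge) -> float:
    """Fraction of observed (scenario, transcript) pairs that the
    judge (e.g. an LLM-as-judge callable) deems satisfying."""
    if not trajectories:
        return 0.0
    hits = sum(1 for scenario, transcript in trajectories
               if judge(scenario, transcript))
    return hits / len(trajectories)

# Trivial stand-in judge; in practice this would be an LLM call.
judge = lambda scenario, transcript: "granted" in transcript
print(satisfaction([("login", "access granted"),
                    ("login", "timeout")], judge))  # 0.5
```

The point is that the answer is a probability, not a green checkmark, which is what lets it survive agentic, nondeterministic software.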
In previous regimes, a team might rely on integration tests, regression tests, and UI automation to answer “is it working?”
We noticed two limitations of previously reliable techniques:
* Tests are too rigid - we were coding with agents, but we’re also building with LLMs and agent loops as design primitives; evaluating success often required LLM-as-judge
* Tests can be reward hacked - we needed validation that was less vulnerable to the model cheating
The Digital Twin Universe is our answer: behavioral clones of the third-party services our software depends on. We built twins of Okta, Jira, Slack, Google Docs, Google Drive, and Google Sheets, replicating their APIs, edge cases, and observable behaviors.
With the DTU, we can validate at volumes and rates far exceeding production limits. We can test failure modes that would be dangerous or impossible against live services. We can run thousands of scenarios per hour without hitting rate limits, triggering abuse detection, or accumulating API costs.
Our success with DTU illustrates one of the many ways in which the Agentic Moment has profoundly changed the economics of software. Creating a high fidelity clone of a significant SaaS application was always possible, but never economically feasible. Generations of engineers may have wanted a full in-memory replica of their CRM to test against, but self-censored the proposal to build it. They didn’t even bring it to their manager, because they knew the answer would be no.
Those of us building software factories must practice a deliberate naivete: finding and removing the habits, conventions, and constraints of Software 1.0. The DTU is our proof that what was unthinkable six months ago is now routine.
* Principles: what we believe is true about building software with agents
* Products: tools we use daily and believe others will benefit from
Thank you for reading. We wish you the best of luck constructing your own Software Factory.
...
Read the original on factory.strongdm.ai »
Rebecca Guy, senior policy manager at the Royal Society for the Prevention of Accidents, said: “Regular vision checks are a sensible way to reduce risk as we age, but the priority must be a system that supports people to drive safely for as long as possible, while ensuring timely action is taken when health or eyesight could put them or others in danger.”
...
Read the original on www.bbc.com »
I am an unusual beast. All my solo project games I’ve been making recently have been written in ‘vanilla’ C. Nobody does this. So I think it might be interesting to explain why I do.
Dry programming language opinions incoming, you have been warned.
There are some things which are non-negotiable. First off, it has to be reliable. I can’t afford to spend my time dealing with bugs I didn’t cause myself.
A lot of my games were written for flash, and now flash is dying. I do not want to spend my time porting old games to new platforms, I want to make new games. I need a platform that I am confident will be around for a while.
Similarly I want to avoid tying myself to a particular OS, and ideally I’d like to have the option of developing for consoles. So it’s important that my programming language is portable, and that it has good portable library support.
The strongest thing on my desired, but not required list is simplicity. I find looking up language features and quirky ‘clever’ APIs incredibly tiring. The ideal language would be one I can memorize, and then never have to look things up.
Dealing with bugs is a huge creative drain. I want to produce fewer bugs, so I want strict typing, strong warning messages and static code analysis. I want bugs to be easier to find, so I want good debuggers and dynamic analysis.
I’m not interested in high-def realism, but I do still care a bit about performance. Having more cycles available broadens the palette of things you can do. It’s particularly interesting to explore what is possible with modern, powerful computers if you aren’t pursuing fidelity.
Even more than that I care about the speed of the compiler. I am not a zen master of focus, and waiting 10+ seconds is wasteful, yes, but more importantly it breaks my flow. I flick over to Twitter and suddenly 5+ minutes are gone.
I am not an OOP convert. I’ve spent most of my professional life working with classes and objects, but the more time I spend, the less I understand why you’d want to combine code and data so rigidly. I want to handle data as data and write the code that best fits a particular situation.
C++ is still the most common language for writing games, and not without reason. I still do almost all of my contract work in it. I dislike it intensely.
C++ covers my needs, but fails my wants badly. It is desperately complicated. Despite decent tooling it’s easy to create insidious bugs. It is also slow to compile compared to C. It is high performance, and it offers features that C doesn’t have; but features I don’t want, and at a great complexity cost.
C# and Java have similar issues. They are verbose and complex beasts, and I am searching for a concise, simple creature. They both do a lot to railroad a programmer into a strongly OOP style that I am opposed to. As per most higher level languages they have a tendency to hide away complexity in a way that doesn’t actually prevent it from biting you.
I like Go a lot. In many ways it is C revisited, taking into account what has been learnt in the long years since C was released. I would like to use it, but there are big roadblocks that prevent me. The stop-the-world garbage collection is a big pain for games; stopping the world is something you can’t really afford to do. The library support for games is quite poor, and though you can wrap C libs without much trouble, doing so adds a lot of busy work. It is niche enough that I worry a little about long term relevance.
It would be nice to make things for the web, but it feels like a terrifyingly fast moving environment. It is particularly scary with the death of flash. I really dislike JavaScript; it is so loose that I marvel that people are able to write big chunks of software in it. I have no interest in trying.
Haxe feels much more promising than most alternatives. If I do web stuff again I’ll be diving in here. There is some good library support. I am a little concerned by its relative youth: will it last? I don’t have much else to say about it though, I’ve only dabbled with the surface.
Some people just say screw it, I’ll write my own language, the language I want to use. I admire this, and sometimes I toy with the idea of doing the same. It feels like too much to throw away all existing library support and take full responsibility for future compatibility. It is also very difficult, and when it comes down to it I would rather be making games than programming languages.
C is dangerous, but it is reliable. A very sharp knife that can cut fingers as well as veg, but so simple it’s not too hard to learn to use it carefully.
It is fast, and when it comes to compilation I can’t think of anything faster.
It can be made to run on just about anything. Usually this is relatively easy. It is hard to imagine a time when this won’t be the case.
The library and tooling support is strong and ongoing.
I say this with some sadness, but it is still the language for me.
I absolutely DO NOT mean to say “hey, you should use C too”. I fully appreciate that my preferences here are pretty specific and unusual. I have also already written more ‘vanilla’ C code than most, and this is certainly part of my comfort.
So yeah, that’s it :-)
...
Read the original on jonathanwhiting.com »
Hoot is a Spritely project for running Scheme code on Wasm GC-capable web browsers, featuring a Scheme to Wasm compiler and a full-featured Wasm toolchain.
Hoot is built on Guile and has no additional dependencies. The toolchain is self-contained and even features a Wasm interpreter for testing Hoot binaries without leaving the Guile REPL.
...
Read the original on www.spritely.institute »
A local-device-focused AI assistant built in Rust — persistent memory, autonomous tasks, ~27MB binary. Inspired by and compatible with OpenClaw.
* Local device focused — runs entirely on your machine, your memory data stays yours
* Autonomous heartbeat — delegate tasks and let it work in the background
cargo install localgpt
# Initialize configuration
localgpt config init
# Start interactive chat
localgpt chat
# Ask a single question
localgpt ask "What is the meaning of life?"
# Run as a daemon with heartbeat, HTTP API and web ui
localgpt daemon start
LocalGPT uses plain markdown files as its memory:
Files are indexed with SQLite FTS5 for fast keyword search, and with sqlite-vec for semantic search using local embeddings.
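The FTS5 half of that design can be sketched in a few lines of Python (assuming a SQLite build with FTS5 compiled in, which most are; the table and file names here are illustrative, not LocalGPT's actual schema):

```python
import sqlite3

# In-memory sketch of keyword indexing over markdown notes.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE notes USING fts5(path, body)")
db.executemany(
    "INSERT INTO notes VALUES (?, ?)",
    [
        ("memory/2024-01-01.md", "Discussed heartbeat interval tuning"),
        ("memory/2024-01-02.md", "Refactored the daemon HTTP API"),
    ],
)
# MATCH does the full-text lookup; ORDER BY rank sorts best match first.
rows = db.execute(
    "SELECT path FROM notes WHERE notes MATCH ? ORDER BY rank",
    ("heartbeat",),
).fetchall()
print(rows)  # → [('memory/2024-01-01.md',)]
```

Because the notes stay plain markdown on disk, the index can always be rebuilt from scratch; FTS5 is a cache over the files, not the source of truth.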
[agent]
default_model = "claude-cli/opus"

[providers.anthropic]
api_key = "${ANTHROPIC_API_KEY}"

[heartbeat]
enabled = true
interval = "30m"
active_hours = { start = "09:00", end = "22:00" }

[memory]
workspace = "~/.localgpt/workspace"
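The active_hours window in the config above has straightforward semantics; a rough Python sketch of the check (mirroring the config, not LocalGPT's actual implementation) would be:

```python
from datetime import time

def in_active_hours(now, start=time(9, 0), end=time(22, 0)):
    # The heartbeat only fires when the current time falls
    # inside the configured [start, end] window.
    return start <= now <= end

print(in_active_hours(time(12, 30)))  # → True
print(in_active_hours(time(23, 0)))   # → False
```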
# Chat
localgpt chat # Interactive chat
localgpt chat --session
When the daemon is running:
Why I Built LocalGPT in 4 Nights — the full story with commit-by-commit breakdown.
...
Read the original on github.com »
Last year, I completed 20 years in professional software development. I wanted to write a post to mark the occasion back then, but couldn’t find the time. This post is my attempt to make up for that omission. In fact, I have been involved in software development for a little longer than 20 years. Although I had my first taste of computer programming as a child, it was only when I entered university about 25 years ago that I seriously got into software development. So I’ll start my stories from there. These stories are less about software and more about people. Unlike many posts of this kind, this one offers no wisdom or lessons. It only offers a collection of stories. I hope you’ll like at least a few of them.
The first story takes place in 2001, shortly after I joined university. One evening, I went to the university computer laboratory to browse the Web. Out of curiosity, I typed susam.com into the address bar and landed on its
home page. I remember the text and banner looking much larger back then. Display resolutions were lower, so they covered almost half the screen. I knew very little about the Internet then and I was just trying to make sense of it. I remember wondering what it would take to create my own website, perhaps at susam.com. That’s when an older student who had been watching me browse over my shoulder approached and asked if I had created the website. I told him I hadn’t and that I had no idea how websites were made. He asked me to move aside, took my seat and clicked View > Source in Internet Explorer. He then explained how websites are made of HTML pages and how those pages are simply text instructions.
Next, he opened Notepad and wrote a simple HTML page that looked something like this:
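Reconstructed from the description that follows — uppercase tags and all:

```html
<HTML>
<HEAD>
<TITLE>MY FIRST PAGE</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFCC">
<CENTER>
<FONT FACE="Arial" SIZE="6" COLOR="#FF0000">HELLO, WORLD!</FONT>
</CENTER>
</BODY>
</HTML>
```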
Yes, we had a FONT tag back then and it was common practice to write HTML tags in uppercase. He then opened the page in a web browser and showed how it rendered. After that, he demonstrated a few more features such as changing the font face and size, centring the text and altering the page’s background colour. Although the tutorial lasted only about ten minutes, it made the World Wide Web feel far less mysterious and much more fascinating.
That person had an ulterior motive though. After the tutorial, he never returned the seat to me. He just continued browsing the Web and waited for me to leave. I was too timid to ask for my seat back. Seats were limited, so I returned to my dorm room both disappointed that I couldn’t continue browsing that day and excited about all the websites I might create with this newfound knowledge. I could never register susam.com for myself though. That domain was always used by some business selling Turkish cuisines. Eventually, I managed to get the next best thing: a .net domain of my own. That brief encounter in the university laboratory set me on a lifelong path of creating and maintaining personal websites.
The second story also comes from my university days. One afternoon, I was hanging out with my mates in the computer laboratory. In front of me was an MS-DOS machine powered by an Intel 8086 microprocessor, on which I was writing a lift control program in assembly. In those days, it was considered important to deliberately practise solving made-up problems as a way of honing our programming skills. As I worked on my program, my mind drifted to a small detail about the 8086 microprocessor that we had recently learnt in a lecture. Our professor had explained that, when the 8086 microprocessor is reset, execution begins with CS:IP set to FFFF:0000. So I murmured to anyone who cared to listen, ‘I wonder if the system will reboot if I jump to FFFF:0000.’ I then opened DEBUG.EXE and jumped to that address.
C:\>DEBUG
-G =FFFF:0000
The machine rebooted instantly. One of my friends, who topped the class every semester, had been watching over my shoulder. As soon as the machine restarted, he exclaimed, ‘How did you do that?’ I explained that the reset vector is located at physical address FFFF0 and that the CS:IP value FFFF:0000 maps to that address in real mode. After that, I went back to working on my lift control program and didn’t think much more about the incident.
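The arithmetic behind that mapping is simple: in real mode, a segment:offset pair translates to a 20-bit physical address as segment × 16 + offset.

```python
# Real-mode 8086 address translation: physical = segment * 16 + offset.
cs, ip = 0xFFFF, 0x0000
physical = (cs << 4) + ip   # shifting left by 4 bits multiplies by 16
print(hex(physical))  # → 0xffff0, the reset vector
```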
About a week later, the same friend came to my dorm room. He sat down with a grave look on his face and asked, ‘How did you know to do that? How did it occur to you to jump to the reset vector?’ I must have said something like, ‘It just occurred to me. I remembered that detail from the lecture and wanted to try it out.’ He then said, ‘I want to be able to think like that. I come top of the class every semester, but I don’t think the way you do. I would never have thought of taking a small detail like that and testing it myself.’ I replied that I was just curious to see whether what we had learnt actually worked in practice. He responded, ‘And that’s exactly it. It would never occur to me to try something like that. I feel disappointed that I keep coming top of the class, yet I am not curious in the same way you are. I’ve decided I don’t want to top the class anymore. I just want to explore and experiment with what we learn, the way you do.’
That was all he said before getting up and heading back to his dorm room. I didn’t take it very seriously at the time. I couldn’t imagine why someone would willingly give up the accomplishment of coming first every year. But he kept his word. He never topped the class again. He still ranked highly, often within the top ten, but he kept his promise of never finishing first again. To this day, I feel a mix of embarrassment and pride whenever I recall that incident. With a single jump to the processor’s reset entry point, I had somehow inspired someone to step back from academic competition in order to have more fun with learning. Of course, there is no reason one cannot do both. But in the end, that was his decision, not mine.
In my first job after university, I was assigned to a technical support team where part of my work involved running an installer to deploy a specific component of an e-banking product for customers, usually large banks. As I learnt to use the installer, I realised how fragile it was. The installer, written in Python, often failed because of incorrect assumptions about the target environment and almost always required some manual intervention to complete successfully. During my first week on the project, I spent much of my time stabilising the installer and writing a step-by-step user guide explaining how to use it. The result was well received by both my seniors and management. To my surprise, the user guide received more praise than the improvements I made to the installer itself. While the first few weeks were productive, I soon realised I would not find the work fulfilling for long. I wrote to management a few times to ask whether I could transfer to a team where I could work on something more substantial.
My emails were initially met with resistance. After several rounds of discussion, however, someone who had heard about my situation reached out and suggested a team whose manager might be interested in interviewing me. The team was based in a different city. I was young and willing to relocate wherever I could find good work, so I immediately agreed to the interview.
This was in 2006, when video conferencing was not yet common. On the day of the interview, the hiring manager called me on my office desk phone. He began by introducing the team, which was called Archie, short for architecture. The team developed and maintained the web framework and core architectural components on which the entire e-banking product was built. The product had existed long before open source frameworks such as Spring or Django came into existence, so features such as API routing, authentication and authorisation layers, cookie management, etc. were all implemented in-house as Java Servlets and JavaServer Pages (JSP). Since the software was used in banking environments, it also had to pass strict security testing and regular audits to minimise the risk of serious flaws.
The interview began well. He asked several questions related to software security, such as what SQL injection is and how it can be prevented or how one might design a web framework that mitigates cross-site scripting attacks. He also asked programming questions, most of which I answered pretty well. Towards the end, however, he asked how we could prevent MITM attacks. I had never heard the term, so I admitted that I did not know what MITM meant. He then asked, ‘Man in the middle?’ but I still had no idea what that meant or whether it was even a software engineering concept. He replied, ‘Learn everything you can about PKI and MITM. We need to build a digital signatures feature for one of our corporate banking products. That’s the first thing we’ll work on.’
Over the next few weeks, I studied RFCs and documentation related to public key infrastructure, public key cryptography standards and related topics. At first, the material felt intimidating, but after spending time each evening reading whatever relevant literature I could find, things gradually began to make sense. Concepts that initially seemed complex and overwhelming eventually felt intuitive and elegant. I relocated to the new city a few weeks later and delivered the digital signatures feature about a month after joining the team. We used the open source Bouncy Castle library to implement the feature. After that project, I worked on other parts of the product too. The most rewarding part was knowing that the code I was writing became part of a mature product used by hundreds of banks and millions of users. It was especially satisfying to see the work pass security testing and audits and be considered ready for release.
That was my first real engineering job. My manager also turned out to be an excellent mentor. Working with him helped me develop new skills and his encouragement gave me confidence that stayed with me for years. Nearly two decades have passed since then, yet the product is still in service and continues to be actively developed. In fact, in my current phase of life I sometimes encounter it as a customer. Occasionally, I open the browser’s developer tools to view the page source where I can still see traces of the HTML generated by code I wrote almost twenty years ago.
Around 2007 or 2008, I began working on a proof of concept for developing widgets for an OpenTV set-top box. The work involved writing code in a heavily trimmed-down version of C. One afternoon, while making good progress on a few widgets, I noticed that they would occasionally crash at random. I tried tracking down the bugs, but I was finding it surprisingly difficult to understand my own code. I had managed to produce some truly spaghetti code full of dubious pointer operations that were almost certainly responsible for the crashes, yet I could not pinpoint where exactly things were going wrong.
Ours was a small team of four people, each working on an independent proof of concept. The most senior person on the team acted as our lead and architect. Later that afternoon, I showed him my progress and explained that I was still trying to hunt down the bugs causing the widgets to crash. He asked whether he could look at the code. After going through it briefly and probably realising that it was a bit of a mess, he asked me to send him the code as a tarball, which I promptly did.
He then went back to his desk to study the code. I remember thinking that there was no way he was going to find the problem anytime soon. I had been debugging it for hours and barely understood what I had written myself; it was the worst spaghetti code I had ever produced. With little hope of a quick solution, I went back to debugging on my own.
Barely five minutes later, he came back to my desk and asked me to open a specific file. He then showed me exactly where the pointer bug was. It had taken him only a few minutes not only to read my tangled code but also to understand it well enough to identify the fault and point it out. As soon as I fixed that line, the crashes disappeared. I was genuinely in awe of his skill.
I have always loved computing and programming, so I had assumed I was already fairly good at it. That incident, however, made me realise how much further I still had to go before I could consider myself a good software developer. I did improve significantly in the years that followed and today I am far better at managing software complexity than I was back then.
In another project from that period, we worked on another set-top box platform that supported Java Micro Edition (Java ME) for widget development. One day, the same architect from the previous story asked whether I could add animations to the widgets. I told him that I believed it should be possible, though I’d need to test it to be sure. Before continuing with the story, I need to explain how the different stakeholders in the project were organised.
Our small team effectively played the role of the software vendor. The final product going to market would carry the brand of a major telecom carrier, offering direct-to-home (DTH) television services, with the set-top box being one of the products sold to customers. The set-top box was manufactured by another company. So the project was a partnership between three parties: our company as the software vendor, the telecom carrier and the set-top box manufacturer. The telecom carrier wanted to know whether widgets could be animated on screen with smooth slide-in and slide-out effects. That was why the architect approached me to ask whether it could be done.
I began working on animating the widgets. Meanwhile, the architect and a few senior colleagues attended a business meeting with all the partners present. During the meeting, he explained that we were evaluating whether widget animations could be supported. The set-top box manufacturer immediately dismissed the idea, saying, ‘That’s impossible. Our set-top box does not support animation.’ When the architect returned and shared this with us, I replied, ‘I do not understand. If I can draw a widget, I can animate it too. All it takes is clearing the widget and redrawing it at slightly different positions repeatedly. In fact, I already have a working version.’ I then showed a demo of the animated widgets running on the emulator.
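The clear-and-redraw idea reduces to computing a position per frame; the helper below sketches that interpolation (the names are illustrative — the real widgets were Java ME code, not Python):

```python
def slide_in_positions(start_x, target_x, frames):
    # Evenly spaced x positions from just off-screen to the target;
    # each frame, the widget is cleared and redrawn at the next one.
    step = (target_x - start_x) / frames
    return [round(start_x + step * (i + 1)) for i in range(frames)]

print(slide_in_positions(-100, 0, 4))  # → [-75, -50, -25, 0]
```

Each redraw is cheap on its own; whether the effect looks smooth depends entirely on how fast the hardware can clear and repaint, which is where the story goes next.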
The following week, the architect attended another partners’ meeting where he shared updates about our animated widgets. I was not personally present, so what follows is second-hand information passed on by those who were there. I learnt that the set-top box company reacted angrily. For some reason, they were unhappy that we had managed to achieve results using their set-top box and APIs that they had officially described as impossible. They demanded that we stop work on animation immediately, arguing that our work could not be allowed to contradict their official position. At that point, the telecom carrier’s representative intervened and bluntly told the set-top box representative to just shut up. If the set-top box guy was furious, the telecom guy was even more so, ‘You guys told us animation was not possible and these people are showing that it is! You manufacture the set-top box. How can you not know what it is capable of?’
Meanwhile, I continued working on the proof of concept. It worked very well in the emulator, but I did not yet have access to the actual hardware. The device was still in the process of being shipped to us, so all my early proof-of-concepts ran on the emulator. The following week, the architect planned to travel to the set-top box company’s office to test my widgets on the real hardware.
At the time, I was quite proud of demonstrating results that even the hardware maker believed were impossible. When the architect eventually travelled to test the widgets on the actual device, a problem emerged. What looked like buttery smooth animation on the emulator appeared noticeably choppy on a real television. Over the next few weeks, I experimented with frame rates, buffering strategies and optimising the computation done in the rendering loop. Each week, the architect travelled for testing and returned with the same report: the animation had improved somewhat, but it still remained choppy. The modest embedded hardware simply could not keep up with the required computation and rendering. In the end, the telecom carrier decided that no animation was better than poor animation and dropped the idea altogether. So in the end, the set-top box developers turned out to be correct after all.
Back in 2009, after completing about a year at RSA Security, I began looking for work that felt more intellectually stimulating, especially projects involving mathematics and algorithms. I spoke with a few senior leaders about this, but nothing materialised for some time. Then one day, Dr Burt Kaliski, Chief Scientist at RSA Laboratories, asked to meet me to discuss my career aspirations. I have written about this in more detail in another post here: Good Blessings. I will summarise what followed.
Dr Kaliski met me and offered a few suggestions about the kinds of teams I might approach to find more interesting work. I followed his advice and eventually joined a team that turned out to be an excellent fit. I remained with that team for the next six years. During that time, I worked on parser generators, formal language specification and implementation, as well as indexing and querying engines of a petabyte-scale database. I learnt something new almost every day during those six years. It remains one of the most enjoyable periods of my career. I have especially fond memories of working on parser generators alongside remarkably skilled engineers from whom I learnt a lot.
Years later, I reflected on how that brief meeting with Dr Kaliski had altered the trajectory of my career. I realised I was not sure whether I had properly expressed my gratitude to him for the role he had played in shaping my path. So I wrote to thank him and explain how much that single conversation had influenced my life. A few days later, Dr Kaliski replied, saying he was glad to know that the steps I took afterwards had worked out well. Before ending his message, he wrote this heart-warming note:
This story comes from 2019. By then, I was no longer a twenty-something engineer just starting out. I was now a middle-aged staff engineer with years of experience building both low-level networking systems and database systems. Most of my work up to that point had been in C and C++. I was now entering a new phase of my career where I would be leading the development of microservices written in Go and Python. Like many people in this profession, computing has long been one of my favourite hobbies. So although my professional work for the previous decade had focused on C and C++, I had plenty of hobby projects in other languages, including Python and Go. As a result, switching gears from systems programming to application development was a smooth transition for me. I cannot even say that I missed working in C and C++. After all, who wants to spend their days occasionally chasing memory bugs in core dumps when you could be building features and delivering real value to customers?
In October 2019, during Cybersecurity Awareness Month, a Capture the Flag (CTF) event was organised at our office. The contest featured all kinds of technical puzzles, ranging from SQL injection challenges to insecure cryptography problems. Some challenges also involved reversing binaries and exploiting stack overflow issues.
I am usually rather intimidated by such contests. The whole idea of competitive problem-solving under time pressure tends to make me nervous. But one of my colleagues persuaded me to participate in the CTF. And, somewhat to my surprise, I turned out to be rather good at it. Within about eight hours, I had solved roughly 90% of the puzzles. I finished at the top of the scoreboard.
In my younger days, I was generally known to be a good problem solver. I was often consulted when thorny problems needed solving and I usually managed to deliver results. I also enjoyed solving puzzles. I had a knack for them and happily spent hours, sometimes days, working through obscure mathematical or technical puzzles and sharing detailed write-ups with friends of the nerd variety. Seen in that light, my performance at the CTF probably should not have surprised me. Still, I was very pleased. It was reassuring to know that I could still rely on my systems programming experience to solve obscure challenges.
During the course of the contest, my performance became something of a talking point in the office. Colleagues occasionally stopped by my desk to appreciate my progress in the CTF. Two much younger colleagues, both engineers I admired for their skill and professionalism, were discussing the results nearby. They were speaking softly, but I could still overhear parts of their conversation. Curious, I leaned slightly and listened a bit more carefully. I wanted to know what these two people, whom I admired a lot, thought about my performance.
One of them remarked on how well I was doing in the contest. The other replied, ‘Of course he is doing well. He has more than ten years of experience in C.’ At that moment, I realised that no matter how well I solved those puzzles, the result would naturally be credited to experience. In my younger days, when I solved tricky problems like these, people would sometimes call me smart. Now people simply saw it as a consequence of my experience. Not that I particularly care for labels such as ‘smart’ anyway, but it did make me realise how things had changed. I was now simply the person with many years of experience. Solving technical puzzles that involved disassembling binaries, tracing execution paths and reconstructing program logic was expected rather than remarkable.
I continue to sharpen my technical skills to this day. While my technical results may now simply be attributed to experience, I hope I can continue to make a good impression through my professionalism, ethics and kindness towards the people I work with. If those leave a lasting impression, that is good enough for me.
...
Read the original on susam.net »