10 interesting stories served every morning and every evening.
Last night, I was rejected from yet another pitch night. It was just the pre-interview, and the problem wasn’t my product. I already have MRR. I already have users who depend on it every day.
The feedback was simply: “What do you even need funding for?”
I hear this time and time again when I try to grow my ideas. Running lean is in my DNA. I’ve built tools you might have used, like websequencediagrams.com, and niche products you probably haven’t, like eh-trade.ca. That obsession with efficiency leads to successful bootstrapping, and honestly, a lot of VCs hate that.
Keeping costs near zero gives you the exact same runway as getting a million dollars in funding with a massive burn rate. It’s less stressful, it keeps your architecture incredibly simple, and it gives you adequate time to find product-market fit without the pressure of a board breathing down your neck.
If you are tired of the modern “Enterprise” boilerplate, here is the exact playbook of how I build my companies to run on nearly nothing.
The naive way to launch a web app in 2026 is to fire up AWS, provision an EKS cluster, set up an RDS instance, configure a NAT Gateway, and accidentally spend $300 a month before a single user has even looked at your landing page.
The smart way is to rent a single Virtual Private Server (VPS).
First thing I do is get a cheap, reliable box. Forget AWS. You aren’t going to need it, and their control panel is a labyrinth designed to extract billing upgrades. I use Linode or DigitalOcean. Pay no more than $5 to $10 a month.
1GB of RAM sounds terrifying to modern web developers, but it is plenty if you know what you are doing. If you need a little breathing room, just use a swapfile.
The goal is to serve requests, not to maintain infrastructure. When you have one server, you know exactly where the logs are, exactly why it crashed, and exactly how to restart it.
Now you have constraints. You only have a gigabyte of memory. You could run Python or Ruby as your main backend language—but why would you? You’ll spend half your RAM just booting the interpreter and managing gunicorn workers.
I write my backends in Go.
Go is infinitely more performant for web tasks, it’s strictly typed, and—crucially for 2026—it is incredibly easy for LLMs to reason about. But the real magic of Go is the deployment process. There is no pip install dependency hell. There is no virtual environment. You compile your entire application into a single, statically linked binary on your laptop, scp it to your $5 server, and run it.
Here is what a complete, production-ready web server looks like in Go. No bloated frameworks required:
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello, your MRR is safe here.")
	})

	// This will comfortably handle 10,000s of requests per second
	// on a potato.
	http.ListenAndServe(":8080", nil)
}
If you have a graphics card sitting somewhere in your house, you already have unlimited AI credits.
When I was building eh-trade.ca, I had a specific problem: I needed to perform deep, qualitative stock market research on thousands of companies, summarizing massive quarterly reports. The naive solution is to throw all of this at the OpenAI API. I could have paid hundreds of dollars in API credits, only to find a logic bug in my prompt loop that required me to run the whole batch over again.
Instead, I’m running VLLM on a dusty $900 graphics card (an RTX 3090 with 24GB of VRAM) I bought off Facebook Marketplace. It’s an upfront investment, sure, but I never have to pay a toll to an AI provider for batch processing again.
For local AI, you have a distinct upgrade path:
* Start with Ollama. It sets up in one command (ollama run qwen3:32b) and lets you try out dozens of models instantly. It’s the perfect environment for iterating on prompts.
* Move to VLLM for production. Once you have a system that works, Ollama becomes a bottleneck for concurrent requests. VLLM locks your GPU to one model, but it is drastically faster because it uses PagedAttention. Structure your system so you send 8 or 16 async requests simultaneously. VLLM will batch them together in the GPU memory, and all 16 will finish in roughly the same time it takes to process one.
* Use Transformer Lab for anything more advanced. If you need to do any model pre-training or fine-tuning, Transformer Lab makes it easy on local hardware.
To manage all this, I built laconic, an agentic researcher specifically optimized for running in a constrained 8K context window. It manages the LLM context like an operating system’s virtual memory manager—it “pages out” the irrelevant baggage of a conversation, keeping only the absolute most critical facts in the active LLM context window.
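The operating-system analogy can be sketched in a few lines of Go. This is a toy model of the idea, not laconic's actual implementation: pinned facts survive every eviction, the most recent turns stay, and everything else is paged out.

```go
package main

import "fmt"

// Message is one turn of an LLM conversation.
type Message struct {
	Role    string
	Content string
	Pinned  bool // critical facts that must never be paged out
}

// pageOut keeps every pinned message plus the most recent keepRecent
// turns, dropping the rest so the transcript fits a small context budget.
func pageOut(history []Message, keepRecent int) []Message {
	cut := len(history) - keepRecent
	if cut < 0 {
		cut = 0
	}
	var kept []Message
	for i, m := range history {
		if m.Pinned || i >= cut {
			kept = append(kept, m)
		}
	}
	return kept
}

func main() {
	history := []Message{
		{Role: "system", Content: "You are a stock researcher.", Pinned: true},
		{Role: "assistant", Content: "chit-chat about formatting"},
		{Role: "assistant", Content: "fact: Q3 revenue doubled", Pinned: true},
		{Role: "assistant", Content: "a long tangent"},
		{Role: "user", Content: "so what is the verdict?"},
	}
	for _, m := range pageOut(history, 1) {
		fmt.Println(m.Content)
	}
	// prints:
	// You are a stock researcher.
	// fact: Q3 revenue doubled
	// so what is the verdict?
}
```

A real agent would measure token counts rather than message counts, but the eviction policy is the same shape.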
I also use llmhub, which abstracts any LLM into a simple provider/endpoint/apikey combo, gracefully handling both text and image IO whether the model is running under my desk or in the cloud.
You can’t do everything locally. Sometimes you need the absolute cutting-edge reasoning of Claude 3.5 Sonnet or GPT-4o for user-facing, low-latency chat interactions.
Instead of juggling billing accounts, API keys, and rate limits for Anthropic, Google, and OpenAI, I just use OpenRouter. You write one OpenAI-compatible integration in your code, and you instantly get access to every major frontier model.
More importantly, it allows for seamless fallback routing. If Anthropic’s API goes down on a Tuesday afternoon (which happens), my app automatically falls back to an equivalent OpenAI model. My users never see an error screen, and I don’t have to write complex retry logic.
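In practice the integration is one OpenAI-compatible request. OpenRouter lets you list fallback models in an ordered "models" array; the Go sketch below builds that payload (the model IDs here are placeholders, and you should check OpenRouter's docs for the current schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// message and chatRequest mirror the OpenAI-compatible payload that
// OpenRouter accepts. The "models" field is OpenRouter's ordered fallback
// list; the model IDs used below are placeholders, not recommendations.
type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string    `json:"model"`
	Models   []string  `json:"models,omitempty"` // tried in order if the first is down
	Messages []message `json:"messages"`
}

// buildRequest assembles a request that prefers one frontier model and
// silently falls back to another.
func buildRequest(prompt string) chatRequest {
	return chatRequest{
		Model:    "anthropic/claude-3.5-sonnet",
		Models:   []string{"anthropic/claude-3.5-sonnet", "openai/gpt-4o"},
		Messages: []message{{Role: "user", Content: prompt}},
	}
}

func main() {
	body, _ := json.Marshal(buildRequest("Summarize this filing."))
	// POST body to https://openrouter.ai/api/v1/chat/completions with a
	// single OpenRouter API key; routing and retries happen server-side.
	fmt.Println(string(body))
}
```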
New, insanely expensive models are being released every week. I constantly hear about developers dropping hundreds of dollars a month on Cursor subscriptions and Anthropic API keys just to have an AI write their boilerplate.
Meanwhile, I’m using Claude Opus 4.6 all day and my bill barely touches $60 a month. My secret? I exploit Microsoft’s pricing model.
I bought a GitHub Copilot subscription in 2023, plugged it into standard VS Code, and never left. I tried Cursor and the other fancy forks when they briefly surpassed it with agentic coding, but Copilot Chat always catches up.
Here is the trick that you might have missed: somehow, Microsoft is able to charge per request, not per token. And a “request” is simply what I type into the chat box. Even if the agent spends the next 30 minutes chewing through my entire codebase, mapping dependencies, and changing hundreds of files, I still pay roughly $0.04.
The optimal strategy is simple: write brutally detailed prompts with strict success criteria (which is best practice anyway), tell the agent to “keep going until all errors are fixed,” hit enter, and go make a coffee while Satya Nadella subsidizes your compute costs.
I always start a new venture using sqlite3 as the main database. Hear me out: this is not as insane as you think.
The enterprise mindset dictates that you need an out-of-process database server. But the truth is, a local SQLite file accessed in-process through the C interface (or straight from memory) is orders of magnitude faster than making a TCP network hop to a remote Postgres server.
“But what about concurrency?” you ask. Many people think SQLite locks the whole database on every write. They are wrong. You just need to turn on Write-Ahead Logging (WAL). Execute this pragma once when you open the database:
PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL;
Boom. Readers no longer block writers. Writers no longer block readers. You can now easily handle thousands of concurrent users off a single .db file on an NVMe drive.
Since implementing user authentication is usually the most annoying part of starting a new SQLite-based project, I built a library: smhanov/auth. It integrates directly with whatever database you are using and manages user signups, sessions, and password resets. It even lets users sign in with Google, Facebook, X, or their own company-specific SAML provider. No bloated dependencies, just simple, auditable code.
The tech industry wants you to believe that building a real business requires complex orchestration, massive monthly AWS bills, and millions in venture capital.
By utilizing a single VPS, statically compiled binaries, local GPU hardware for batch AI tasks, and the raw speed of SQLite, you can bootstrap a highly scalable startup that costs less than the price of a few coffees a month. You add infinite runway to your project, giving yourself the time to actually solve your users’ problems instead of sweating your burn rate.
If you are interested in running lean, check out my auth library and agent implementations on my GitHub. I’ll be hanging around the comments—let me know how you keep your server costs down, or tell me why I’m completely wrong.
...
Read the original on stevehanov.ca »
...
Read the original on github.com »
Seven countries now generate nearly all of their electricity from renewable energy sources, according to newly compiled figures.
Albania, Bhutan, Nepal, Paraguay, Iceland, Ethiopia and the Democratic Republic of Congo produced more than 99.7 per cent of the electricity they consumed using geothermal, hydro, solar or wind power.
Data from the International Energy Agency (IEA) and International Renewable Energy Agency (IRENA) also revealed that a further 40 countries generated at least 50 per cent of the electricity they consumed from renewable energy technologies in 2021 and 2022 — including 11 European countries.
“We don’t need miracle technologies,” said Stanford University Professor Mark Jacobson, who published the data.
“We need to stop emissions by electrifying everything and providing the electricity with Wind, Water and Solar (WWS), which includes onshore wind, solar photovoltaics, concentrated solar power, geothermal electricity, small hydroelectricity, and large hydroelectricity.”
Professor Jacobson also noted that other countries like Germany were also capable of running off 100 per cent renewable-generated electricity for short periods of time.
Figures released by the IEA in January show that the UK generated 41.5 per cent of its electricity from renewable sources in 2022 — up 10.5 per cent from the year before.
In Scotland, renewable energy technologies generated the equivalent of 113 per cent of the country’s overall electricity consumption in 2022.
“These record-breaking figures are a major milestone on Scotland’s journey to net-zero, clearly demonstrating the enormous potential of our world-class renewable energy resources,” Claire Mack, chief executive of Scottish Renewables, said at the time.
While Scotland’s electricity generation was dominated by wind power, researchers predict that solar will come to dominate global electricity supplies over the coming decades.
There has been significant progress in recent years with improving efficiency rates for solar cells, primarily boosted by the so-called ‘miracle material’ perovskite.
Commercial costs have also fallen, which led scientists at the University of Exeter and University College London to claim last year that solar energy has reached an “irreversible tipping point” that will see it become the world’s main source of energy by 2050.
Their 2023 paper, published in the journal Nature Communications, found that technological and economic advances meant the transition to clean energy is not just reachable, but inevitable.
“Due to technological trajectories set in motion by past policy, a global irreversible solar tipping point may have passed where solar energy gradually comes to dominate global electricity markets, without any further climate policies,” the researchers wrote in the study.
“Solar energy is the most widely available energy resource on Earth, and its economic attractiveness is improving fast in a cycle of increasing investments.”
...
Read the original on www.independent.co.uk »
I’m part of the desktop software generation. From Windows 95 to Windows 7, I grew using mostly-offline software on computers operated via mouse and keyboard, well before tablets and smartphones. Recently, I’ve been missing one particular part of that era: its consistency in design. I want to tell you about idiomatic design, emphasize the importance of homogeneous interfaces, and suggest that we’ve lost something important.
Suppose you’re logging into a website, and it asks: “do you want to stay logged in?” There are many ways in which the website could ask you for an answer: for example, a text field where you can enter “Yes” or “No”, or a dropdown where you can select “Keep me logged in” or “Log me out when I close this window.” But in reality, it’ll always be a checkbox. Why is that?
The checkbox is a design idiom: it’s such a common design that as a user, you know how to use it without thinking about it, and if you were making a website and had to ask this question, you would also put in a checkbox without thinking about it. To builders and users alike, it is a standard design pattern that everyone can rely on.
A checkbox is also part of an interface. You’re using it to interact with a system by inputting data. Interfaces are better the less thinking they require: whether the interface is a steering wheel or an online form, if you have to spend any amount of time figuring out how to use it, that’s bad. As you interact with many things, you want homogeneous interfaces that give you consistent experiences. If you learn that Command + C is the keyboard shortcut for copy, you want that to work everywhere. You don’t want to have to remember to use CTRL + Shift + C in certain circumstances or right-click → copy in others, that’d be annoying.
But that’s where we’ve ended up. Software is on the internet now, and the interfaces aren’t homogeneous at all. There are hundreds of ways that different websites ask you to pick dates, enter your credit card number, or do any number of common tasks. Keyboard shortcuts differ from app to app. There are so many different ways of interaction that you can’t remember or learn them at all. Using web applications in 2023 is an exercise of “where do I find what I want to do?” over and over again.
By contrast, one of the strengths of the desktop software era was high consistency across interfaces by use of design idioms. Look at this picture from Windows 2000:
The visuals feel a little ugly and dated: it’s blocky, the font isn’t great, and the colors are dull. But the interface gets a couple of things really right:
* The File, Edit, View… menu structure was standard. No matter whether you were in Adobe Photoshop or Microsoft Excel, you knew that save is under File, redo is under Edit, full-screen is under View, etc.
* The menu is navigable by keyboard: there’s a little underline in each of the menu items, e.g. F in File and N in New, underneath. They indicate keyboard shortcuts. You can enter ALT+F to open the File menu, then hit N to open a new file. This caters to power-users while making the shortcuts easy to learn.
* The status bar at the bottom tells you everything about the current state: page, column, word count, whether you’re tracking changes, in insert-mode, etc.
* Menu items are clearly labeled. Words, not icons, are the primary interface to actions. Icons are used only where they are most obvious. The entire interface leaves little to the imagination. In the picture above, there’s no “I wonder what this does?” You know how to use it, even if you’ve never used it before.
Crucially, these design idioms were used not just in Microsoft Word, but throughout the entire Windows ecosystem. Take a look at this Windows XP logout screen:
Every single button is clearly visually a button and says exactly what it does. And each one has a little underline to indicate its keyboard shortcut. Isn’t that nice?
The desktop software era was one of homogeneous interfaces, perhaps because the operating system and its GUI libraries dictated broad swaths of design, and those constraints guided developers toward conforming patterns.
The browser software era is one of heterogeneous interfaces. Take a look at these screenshots from two of my favorite web applications: Figma and Linear.
These are probably the two best pieces of enterprise software available today. And though they have many of the same features — team settings, abstract item hierarchies, collaborative comments, etc. — they don’t share a single icon. They have no design idioms in common. They have different keyboard shortcuts. Both are very well-designed from first principles, but do not conform to what other interfaces the user might be familiar with.
We are in an era of individually well-designed, useful web applications, and they’re all unique. Even in products by the same company, the experiences are heterogeneous: using GMail is nothing like using GSuites is nothing like using Google Docs. In aggregate, this is very frustrating. The lack of homogeneous interfaces means that I spend most of my digital time not in a state of productive flow, but hunting and pecking all over the screen, asking myself “Can I click that? Does this open in a new tab? Will the browser’s back button let me go back?” Awful!
This lack of homogeneity is for two reasons:
1. All the patterns in designing for mouse-and-keyboard applications had to be reinvented with the advent of the touchscreen. Most web applications have to enable both a mobile and desktop experience, and those interaction forms are very different. Therefore, most user experiences have since been stuck in an awkward middle, e.g. hamburger menus intended for mobile apps also used for desktop apps. Consequently, there are a ton of bad design patterns everywhere you look.
2. Modern frontend development has a culture of copying and re-using modular components, so it’s easy to copy-paste bad design patterns and perpetuate the issue.
After 10+ years of this, there has been a generationally corrosive effect on the quality of UI/UX design.
If everyone were to follow the same design idioms, then the interfaces would look pretty consistent. In the early days of the internet, there were strong design idioms: hyperlinks to other pages were underlined blue, and purple if you had already visited them. Great! Today, every website presents its own guessing game on how elements of the interface are styled. Is that a link? Maybe!
It may be surprising that modern web design is so unidiomatic, because the HTML/CSS standards are very prescriptive. The issue is that even though there are standards for writing HTML, no-one writes HTML anymore. People write React in TypeScript or the latest framework. They import countless npm packages. All that goes through a complex build process to output something that runs in the browser.
Frontend developers aren’t wrong to do this. Browsers today are extremely powerful and offer general-purpose APIs that can let you do pretty much anything if you are creative about it. For example, Figma doesn’t follow any HTML design idioms because there is no HTML. It’s written in web assembly; they are on the cutting edge of implementing desktop-style software in the browser. Of course that breaks the HTML-webpage-as-document model. The browser’s back button, keyboard shortcuts, etc. fall by the wayside while a human-computer interaction paradigm is rebuilt.
In short, there are few web design idioms because front-end development is moving too quickly. Engineers are concerned with what is possible more than with questions of polish, and rightfully so. Multi-user, real-time collaboration is much more valuable than power-user keyboard shortcuts. There are both endless frontend packages and interaction formats to deploy them into, so instituting one-size-fits-all idioms on a space so large is very difficult. It will take time for the cutting edge to cool down, and for the most successful patterns to become apparent and eventually idiomatic.
And yet, some of the most successful product organizations of today aggressively pursue their own design idioms and achieve some homogeneity in their interfaces.
Apple is a great example. We’ve talked about Microsoft of the past, but Apple today drives a highly opinionated design system. Apple’s general library of fonts, buttons, colors, etc. and its consistency across all of Apple’s native applications and devices have created a powerful conforming effect for third-party applications. Even when using a third-party app on your iPhone, interacting via the keyboard, pinch-to-zoom, etc. is all controlled by iOS. This is a big part of Apple’s it-just-works effect. Strong, tasteful, idiomatic design is at the core of Apple’s success.
What’s interesting about the it-just-works effect is that it makes users trust the defaults and avoid customization. You see a similar dynamic on platforms like Substack, where as an author I don’t have any ability to select the font or even underline text. But the constraining defaults are tastefully set, and it works great. Substack’s and Apple’s design principles gain adoption as those products succeed, since designers look to them as successful examples. Those designs eventually become idioms by (1) people converging on them as good designs and (2) frequency of use in the community.
As a product builder, you want to follow design idioms as closely as practically possible, because it makes your software easier to use and it maximizes compatibility across devices/browsers. I follow these rules of thumb and break them only rarely:
* Study and follow HTML/CSS idioms whenever possible. For example, a link should be underlined, colorful, pointer on mouseover, and written as an <a> tag. Avoid JavaScript reimplementations of HTML basics, e.g. React Button components instead of styled elements.
* Study and follow browser idioms whenever possible. The back button should always work. Copy-pasting the URL should bring you to the same interface. CTRL-clicking a navigational element should open it in a new tab.
* If you deviate from general idioms, make sure that your designs are fully internally consistent and at least “idiomatic” within your organization.
* Prefer words to icons. Use only icons that are universally understood.
* If in doubt, make visual elements look obvious. There should never be confusion about whether something is a button or a tab.
* If at an impasse, refer to two types of resource to assist in your judgment: the best-designed websites that you know, and books on interface design from decades past. Most interface design problems of today are not new, but repeats of history, with solved analogies in the past.
I dream of the day when every datepicker or credit card form on the internet is exactly the same: when after thirty years of iterative development and millions of attempts, we’ve finally converged on the best one. I dream of a future where in every web app, CTRL-Click opens the thing in a new tab. It would be nice…
...
Read the original on essays.johnloeber.com »
Cache TTL silently regressed from 1h to 5m around early March 2026, causing quota and cost inflation
...
Read the original on github.com »
A statement regarding the removal of DDLC from the Google Play Store:
...
Read the original on bsky.app »
In his classic
Programming Perl — affectionately known to a generation of technologists as “the Camel Book” — Larry Wall famously wrote of the three virtues of a programmer as laziness, impatience, and hubris:
If we’re going to talk about good software design, we have to talk about Laziness, Impatience, and Hubris, the basis of good software design. We’ve all fallen into the trap of using cut-and-paste when we should have defined a higher-level abstraction, if only just a loop or subroutine. To be sure, some folks have gone to the opposite extreme of defining ever-growing mounds of higher level abstractions when they should have used cut-and-paste. Generally, though, most of us need to think about using more abstraction rather than less.
Of these virtues, I have always found laziness to be the most profound: packed within its tongue-in-cheek self-deprecation is a commentary on not just the need for abstraction, but the aesthetics of it. Laziness drives us to make the system as simple as possible (but no simpler!) — to develop the powerful abstractions that then allow us to do much more, much more easily.
Of course, the implicit wink here is that it takes a lot of work to be lazy: when programmers are engaged in the seeming laziness of
hammock-driven development, we are in fact turning the problem over and over in our heads. We undertake the hard intellectual work of developing these abstractions in part because we are optimizing the hypothetical time of our future selves, even if at the expense of our current one. When we get this calculus right, it is glorious, as the abstraction serves not just ourselves, but all who come after us. That is, our laziness serves to make software easier to write, and systems easier to compose — to allow more people to write more of it.
Ideally, you would want those that benefit from abstractions to pay the virtue of laziness forward — to use their new-found power to themselves labor on the abstractions they make. But a consequence of the broadening of software creation over the past two decades is it includes more and more people who are unlikely to call themselves programmers — and for whom the virtue of laziness would lose its intended meaning.
Worse, the extraordinary productivity allowed by modern abstractions has given rise to an emphasis on a kind of false industriousness. Pejoratively, this was the
rise of the brogrammer, with the virtue of ironic laziness and hammock-driven development displaced by
hustle porn about crushing code.
Onto this dry tinder has struck the lightning bolt of LLMs. Whatever one’s disposition is to software creation, LLMs allow that to be applied with (much) greater force, so it should be of little surprise that LLMs have served as anabolic steroids for the brogrammer set.
Elated with their new-found bulk, they can’t seem to shut up about it. Take, for example, brogrammer-of-note
Garry Tan, who has been particularly insufferable about his LLM use, bragging about his rate of thirty-seven thousand
lines of code per day (and “still speeding up”):
If laziness is a virtue of a programmer, thinking about software this way is clearly a vice. And like assessing literature by the pound, its fallacy is clear even to novice programmers.
As for the artifact that Tan was building with such frenetic energy, I was broadly ignoring it. Polish software engineer Gregorein, however,
took it apart, and the results are at once predictable, hilarious and instructive: A single load of Tan’s “newsletter-blog-thingy” included multiple test harnesses (!), the Hello World Rails app (?!), a stowaway text editor, and then eight different variants of the same logo — one of which with zero bytes.
The problem here isn’t these issues per se (which are all fixable!), and it isn’t even the belief that the methodology that created them represents the future of software engineering (though that is certainly annoying!).
The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone’s) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters. As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don’t want to waste our (human!) time on the consequences of clunky ones. The best engineering is always borne of constraints, and the constraint of our time places limits on the cognitive load of the system that we’re willing to accept. This is what drives us to make the system simpler, despite its essential complexity. As I expanded on in my talk
The Complexity of Simplicity, this is a significant undertaking — and we cannot expect LLMs that do not operate under constraints of time or load to undertake it of their own volition.
This is not to say, of course, that LLMs won’t play an important role in our future: they are an extraordinary tool for software engineering, but — as outlined in our
guidelines for LLM use at Oxide — they are but a tool. We can put them to use tackling the non-ironic (and non-virtuous!) aspects of programmer laziness — helping us take on thorny problems like technical debt — or
use them to promote our engineering rigor, but it must be in service of our own virtuous laziness: to yield a simpler, more powerful system that serves not just ourselves, but the generations of software engineers to come after us.
...
Read the original on bcantrill.dtrace.org »
In 2020, I started making my own soft drinks, including a sugar-free, caffeine-free cola! If you’re just looking for the current recipes, you can find them on GitHub. They were inspired by recipes like Open Cola and Cube Cola.
Otherwise, read on for “log book”-style instructions and pictures of my first attempts, which have been adapted from this Mastodon thread.
We start by making a flavor emulsion from essential oils! These oils are extremely strong, and can cause skin irritations, so if you want to be safe, wear latex gloves for this step!
We also need super small quantities. In this attempt, I’m aiming for:
There! In total, we have about 2 ml of essential oil. This will lend flavor to about 9 L of cola!
But oils don’t dissolve in water, right? They would just float on the surface!
So what we’ll add next is gum arabic, a natural emulsifier, that will help the oil form tiny, tiny droplets that stay dispersed in water!
Combine 2 g of gum arabic, 4 ml of water, and our oil mixture with a hand mixer. Keep mixing until it emulsifies; it will take on a milky brown color and the oily phase will disappear. This took me about 5 minutes.
Next, we add: 40 ml of caramel color, which will give the cola its dark appearance.
As well as 5 g citric acid, and an additional 10 ml of water. You can also add caffeine at this point, but one of the main motivations for me to make my own cola is to have cola without caffeine (it gives me headaches).
Stop mixing when it starts smiling at you! :D This is our finished cola concentrate. It smells delicious!
Other recipes would now integrate this into sugar syrup made from 1 kg of sugar, but I want to try a variation using artificial sweeteners, for dieting reasons. This is a combination of sodium cyclamate and saccharin, which is pretty common in Germany. I’ll use 70 ml.
This gives us about 120 ml of the finished cola syrup! \o/ I’ve been following the Cube-Cola recipe.
Because this is still a very concentrated solution and hard to handle, I’m diluting it down to 1 L. In theory, this should give a nice cola when diluted further in a ratio of 1:8. First taste test coming up!
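Before the taste test, the dilution chain can be sanity-checked with a few lines of Python (a sketch; the volumes and the 1:8 ratio are the ones given in the text):

```python
# Sanity-check the dilution chain: 120 ml syrup -> diluted to 1 L -> mixed 1:8.
syrup_ml = 120            # finished cola syrup from the concentrate step
concentrate_ml = 1000     # syrup diluted to 1 L for easier handling
water_parts = 8           # final dilution: 1 part concentrate to 8 parts water

final_cola_ml = concentrate_ml * (1 + water_parts)
print(f"{final_cola_ml / 1000:.0f} L of cola")  # → 9 L, matching the earlier estimate
```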
It’s good! It’s really good! The sweetness is just right for me, and the oil blend tastes nice and cola-y! :)
I like my drinks really sour, so I might add another… 10 g of citric acid to this batch. There is a slightly bitter aftertaste, which I attribute to the artificial sweetener I’m using.
I might also tweak the oil ratios to accentuate the citrus flavors a bit more in the next batch. And I thought this would be too much food coloring, but it seems pretty perfect!
What I’d be really excited to try is making other flavors: Orange soda! Cherry soda! Almond & apricot soda!
Open Soda (Note: site is no longer online as of 2026…) has two pretty weird ones (including a bubble gum soda?!), but that’s pretty much all I found. There still seems to be potential for reverse engineering! :)
And here are two more resources I found while preparing my ingredients today: Jan Krüger’s blog post about tweaking cola recipes and Richard Grove’s Notes on Making Cola, which go deeper into food chemistry!
Made a second batch of cola syrup without caramel color. It’s much weirder to drink than I expected.
I also switched to sucralose as a sweetener, hoping that it would have less aftertaste. Instead of 1000 g sugar in the original recipe, I used 1.6 g sucralose, which is ridiculous, but it’s still too sweet for my taste. :O
And I made a tool upgrade! It’s important to get these small quantities right…
This week’s batch: blood orange + almond + lime (in a 2:1:1 ratio), plus as much orange food coloring as I dared to put in! :D
I cut the amount of sucralose in half — still a bit too sweet for my taste. Had to completely guess the oil ratios, but I’m happy with how it turned out! My whole apartment smells like marzipan now.
This time, when I did the mixing in a big plastic bowl, I noticed some floating stuff on the concentrate, which I assume are tiny plastic shavings produced by the hand mixer? I filtered them out, and might switch to glass or metal containers.
In today’s cola batch, I reduced the amount of sucralose further to 0.4 g, added 0.07 g vanillin, as well as a bit more cassia oil. Good stuff! I call it “Syntez-Cola”, because I combined ideas from Cube-Cola and Jan Krüger’s recipe! :)
… this is the point where I should start a blog post, and a Git repo with version numbers and a changelog, I guess. :P (Edit 2026: Here you go!)
I found decaf Coca-Cola in a supermarket last week, and could do a direct taste comparison. At this point, I prefer my cola by a large margin! Coca-Cola tasted bland, like molten cola popsicles. It had an interesting dryness to it, though, which I’m not sure how to replicate.
Made a simple orange soda today, really happy with how it turned out! I put the recipe here, along with my modified cola recipe, and the almond + blood orange drink I invented!
blinry almond 0.1.1 contains less almond oil, to highlight the citrus flavors a bit more. Find the updated recipe here.
One of these two bottles is a version with sugar instead of sucralose. 400 g of sugar seem to be a good replacement for 0.6 g sucralose. Curious to see whether people will be able to taste the difference! :) (Edit 2026: They totally could!)
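The sugar-to-sucralose equivalence implies a sweetness factor you can reuse for other batches. A rough Python sketch (the 400 g ≈ 0.6 g figure is from the text; the resulting ~670x factor is in the same ballpark as the ~600x usually cited for sucralose):

```python
# Sweetness factor implied by the post: 400 g sugar ≈ 0.6 g sucralose.
SWEETNESS_FACTOR = 400 / 0.6  # ≈ 667x

def sucralose_for_sugar(sugar_g: float) -> float:
    """Approximate sucralose (g) matching a given amount of sugar (g)."""
    return sugar_g / SWEETNESS_FACTOR

# For the original recipe's 1000 g of sugar:
print(f"{sucralose_for_sugar(1000):.1f} g sucralose")  # → 1.5 g sucralose
```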
blinry orange 0.1.1, now with 14% more orange! :D Recipe here.
Since these early experiments, I've made a handful of batches of these recipes without modification. I still think they're really nice! blinry orange, especially, is rather unique.
If you try them for yourself, I'd be happy to hear your feedback!
And I'm still thinking about what ingredients might go into a DIY Mountain Dew, or a DIY Fassbrause…
You can add your comment to this post in the Fediverse! Alternatively, drop me a mail at mil@blinry.org.
...
Read the original on blinry.org »
The macOS Dock is built around apps, not windows. Multiple displays and desktops make it hard to track what’s open where. boringBar keeps windows organized by desktop, with instant previews, one-click desktop switching, and pinned apps within reach.
boringBar also gives you a searchable app launcher, scroll-to-switch desktops, full window titles on chips, and a cleaner workspace when you hide the Dock. As a bonus, it makes macOS feel more familiar if you’ve just switched from Windows or Linux.
Shows only the windows on the active desktop for that display, so you stay focused on what is in front of you.
Jump to any desktop on the current display with one click, and see how many windows are open on each one.
Search and launch apps from a single list; open the launcher instantly with a configurable global shortcut.
Hover a chip to preview windows before switching, so you can pick the right one faster.
Unread badges from macOS appear directly on chips, so important updates stay visible.
When an app needs your attention, its chip gives a subtle pulse so you notice it right away.
Scroll up or down on the bar to move between desktops. Direction is configurable.
Choose small, medium, or large. Pick the size that suits you best.
Collapses multiple windows into one chip, with a count badge.
Hide app name text on chips while keeping icons and window count badges.
Show full window titles in chips, or just the app name for a cleaner look.
Keeps the Dock out of the way while you use boringBar, then brings it back when you turn the feature off.
Mirror bars across multiple monitors, including setups where the system setting “Displays have separate Spaces” is off.
Right-click the bar and choose Show Desktop, or click the ribbon at the far right edge to reveal the desktop.
Right-click any app in the bar to pin it to the bar itself or to the top of the application menu.
boringBar requires two macOS permissions. Accessibility is used to observe and interact with windows, desktops, and apps on your behalf. Screen Recording is used solely to fetch window thumbnail previews and is not used for anything else. You can verify this yourself: the purple dot in Control Center only appears when thumbnails are being fetched, and you will not see it from boringBar at any other time.
Can I try boringBar before buying?
Yes. You can download and try boringBar free for 14 days with all features unlocked. A license is required to continue using it after the trial.
Personal licenses come in two flavors:
Perpetual — a one-time $40 purchase covering 2 devices, with 2 years of support and updates included. The software keeps working after that; you just won’t receive further updates.
Yearly — a subscription starting at $7.99/year for 1 device, with additional devices at $2/year each, up to 5.
Business licenses are billed annually, start at 6 users, and use volume pricing, so the per-user cost drops as your team grows:
For example, a 6-user business license is $20.99/year, a 20-user license is $69.99/year, and a 50-user license is $144.99/year.
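To make the volume pricing concrete, here is a quick sketch of the per-user cost at the three example tiers (a Python illustration using only the prices listed above; prices for other tier sizes aren't shown on this page):

```python
# Per-user annual cost at the example business tiers (users -> USD/year).
tiers = {6: 20.99, 20: 69.99, 50: 144.99}

for users, price in tiers.items():
    print(f"{users:>2} users: ${price:.2f}/year -> ${price / users:.2f} per user")
# Per-user cost falls from about $3.50 at 6 users to about $2.90 at 50.
```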
What is the difference between a personal and business license?
Both licenses give you access to the same features. A personal license is for individual use and comes in two flavors: a one-time perpetual purchase covering 2 devices, or a yearly subscription covering up to 5 devices. A business license is an annual subscription intended for use within a company or organization and includes volume pricing.
Can I use my license on multiple Macs?
Each license seat is linked to one device. A perpetual personal license includes 2 seats; a yearly personal license covers 1 to 5 devices. For yearly personal and business licenses, make sure to purchase the appropriate number of seats upfront, as seats cannot be added to an existing license. If you need more later, you can purchase a separate license. Multiple users on the same machine share a single seat.
Personal license holders receive an activation key by email after purchase. Business license holders need to add each team member from the Business License Management page first. In the boringBar app, that team member then enters their email address and receives a one-time code to confirm and activate their license.
boringBar can hide the Dock for you while it is running, but it will still be visible in Mission Control. When you turn the feature off or quit the app, the Dock comes right back.
...
Read the original on boringbar.app »
Sorry to bother you on Saturday. Thought this was important to share.
The first thing you learn about a loom is that it’s easy to break.
The shuttle runs along a track that warps with humidity. The heddles hang from cords that fray. The reed is a row of thin metal strips, bent by hand, that bend back just as easily. The warp beam cracks if you over-tighten it. The treadles loosen at the joints. The breast beam, the cloth roller, the ratchet and pawl, the lease sticks, the castle; the whole contraption is wood and string held together by tension. It’s a piece of ingenuity and craftsmanship, but one as delicate as the clothes it manifests out of wild plant fibers. It is, also, the foundational tool of an entire industry, textiles, that has kept its relevance to our days of heavy machinery, factories, energy facilities, and datacenters.
It is not nearly as easy to break a datacenter.
It is made of concrete and steel and copper and it’s on the bigger side. It has interchangeable servers, and biometric locks and tall electrified fences and heavily armed guards and redundancy upon redundancy: every component duplicated so that no single failure brings the whole thing down. There is no treadle to loosen or reed to bend back.
But say you managed to bypass the guards, jump the fences, open the locks, and locate all the servers. Then you’d face the algorithm. The datacenter was never your goal; the algorithm lurking inside is. It doesn’t run on that rack, or any rack for that matter. It is a digital pattern distributed across millions of chips, mirrored across continents; it could be reconstituted elsewhere, and it’s trained to addict you at a glance, like a modern Medusa.
But say you managed to elude the stare, stop the replication, and break the patterns. Then you’d face superintelligence. The algorithm was also not your goal; the vibrant, ethereal, latent superintelligence lurking inside is. Well, there’s nothing you can do here: It always “gets out of the box” and, suddenly, you are inside the box, like a chimp being played by a human with a banana. It’s just so tasty…
There’s another solution to break a datacenter: You can bomb it, like one hammers down the loom.
Some have argued that this is the way to ensure a rogue superintelligence doesn’t get out of the box. A different rogue creature took the proposal seriously: last month, Iran’s Revolutionary Guard released satellite footage of OpenAI’s Stargate campus in Abu Dhabi and promised its “complete and utter annihilation.”
But you probably don’t have a rogue nation handy to fulfill your wishes. Maybe you will end up bombed instead and we don’t want that to happen. That’s what happens with rogue intelligences: you can’t predict them.
And yet. Two hundred years of increasingly impenetrable technology—from looms to datacenters—have not changed the first thing about the people who live alongside it. The evolution of technology is a feature of the world just as much as the permanent fragility of the human body.
And so, more and more, it is people who are the weaker link in this chain of inevitable doom. And it is people who will be targeted.
April of 1812. A mill owner named William Horsfall was riding home on his beautiful white stallion back from the Cloth Hall market in Huddersfield, UK. He had spent weeks boasting that he would ride up to his saddle in Luddite blood (a precious substance that served as fuel for the mills).
A few yards later, at Crosland Moor, a man named George Mellor—twenty-two years old—shot him. The bullet hit Horsfall in the groin, and he, nominative-deterministically, fell from his horse. People gathered, reproaching him for having been the oppressor of the poor. Naturally, loyal to his principles in death as in life, he couldn't hear them. He died a day later in an inn. Mellor was hanged.
April of 2026. A datacenter owner named Samuel Altman was driving home on his beautiful white Koenigsegg Regera back from Market Street in San Francisco, US. He had spent weeks boasting that he would scrap and steal our blog posts (a precious substance that serves as fuel for the datacenters).
A few hours later, at Russian Hill, a man named Daniel Alejandro Moreno-Gama—twenty years old—allegedly threw a Molotov cocktail at his house. He hit an exterior gate. Altman and his family were asleep, but they’re fine. Moreno-Gama is in custody.
This kind of violence must be condemned. This is not the way. It’s horrible that it is happening at all. And yet, for some reason, it keeps happening.
Last week, the house of Ron Gibson, a councilman from Indianapolis, was shot at thirteen times. The bullet holes are still there. The shooter left a message on his doorstep: “NO DATA CENTERS.” Gibson supports a datacenter project in the Martindale-Brightwood neighborhood. He and his son were unharmed.
In November 2025, a 27-year-old anti-AI activist threatened to murder people at OpenAI’s SF offices, prompting a lockdown. He had expressed a desire to buy weapons.
Increasingly, as the objects of people’s anger and frustration and desperation become unreachable behind fences and guards, or abstracted away in ones and zeros, or elevated above the clouds, the mob will turn their unassailable emotions toward human targets.
I don’t want to trivialize the grievances of the people who fear for their futures. I don’t want to defend Altman’s decisions. But this is not the way. This is how things devolve into chaos.
And I wonder: how desperate can people be before these isolated events become a snowball of violence that will be resisted by neither datacenters nor rich people’s houses?
Every time I hear from Amodei or Altman that I could lose my job, I don’t think “oh, ok, then allow me to pay you $20/month so that I can adapt to these uncertain times that have fallen upon my destiny by chance.” I think: “you, for fuck’s sake, you are doing this.” And I consider myself a pretty levelheaded guy, so imagine what not-so-levelheaded people think.
There’s a lot of friction to escalating violence, but that friction dissolves the moment this sentiment starts to be common. Normally, it just fades away anyway, but there’s one scenario where I see it inevitably escalating:
If people feel that they have no place in the future.
If they feel expelled from the system—they’re unable to buy stuff, their skills become obsolete, their chance at earning a living is replaced by a swarm of AI agents, they think we are truly going to die (so far, the violence has been tied mostly to safety AI movements)—then they will feel they have nothing to lose.
And then, and I’m sorry to be so blunt, then it’s die or kill.
Perhaps the most serious mistake the AI industry made, after creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition, was to make that explicit, with constant messaging that amounts to: “we are creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition.”
And, to top it off, they add “careful down there.”
The difference between AI and, say, looms, is that this has been broadcast to the entire globe, and it has been treated in a sort of self-conscious way. The AI leaders know the problems that will emerge and so they cannot help but talk about them constantly and so they are letting us know, which makes them look like psychopaths. How do you guys think people will react to this? You should be much less self-conscious and much more self-aware: realize what you sound like!
People hate AI so much that they are prone to attribute to it everything that’s going wrong in their lives, regardless of the truth. That’s why they mix real arguments, like data theft, with fake ones, like the water stuff. Employers do it, too. Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
AI has become the perfect scapegoat. It doesn’t help that the entire AI industry has decided that throwing rocks at its own roof is its best selling point: If AI is so powerful and so dangerous and soon to be so ubiquitous, then what is so unexpected about people blaming everything on it?
Nothing that Altman could say justifies violence against him. This is an undeniable truth. But unfortunately, violence might still ensue. I hope not, but I guess we are seeing what appears to be the first cases.
I just hope that, contrary to the cases of ChatGPT-induced psychosis, chatbot addiction, AI-blamed job layoffs, and a growing trend of illiteracy, it stops.
...
Read the original on www.thealgorithmicbridge.com »