10 interesting stories served every morning and every evening.
...
Read the original on arxiv.org »
In 2020, I started making my own soft drinks, including a sugar-free, caffeine-free cola! If you’re just looking for the current recipes, you can find them on GitHub. They were inspired by recipes like Open Cola and Cube Cola.
Otherwise, read on for “log book”-style instructions and pictures of my first attempts, which have been adapted from this Mastodon thread.
We start by making a flavor emulsion from essential oils! These oils are extremely strong, and can cause skin irritations, so if you want to be safe, wear latex gloves for this step!
We also need super small quantities. In this attempt, I’m aiming for:
There! In total, we have about 2 ml of essential oil. This will lend flavor to about 9 L of cola!
But oils don’t dissolve in water, right? They would just float on the surface!
So what we’ll add next is gum arabic, a natural emulsifier, that will help the oil form tiny, tiny droplets that stay dispersed in water!
Combine 2 g of gum arabic, 4 ml of water, and our oil mixture with a hand mixer. Keep mixing until it emulsifies; it will take on a milky brown color and the oily phase will disappear. This took me about 5 minutes.
Next, we add: 40 ml of caramel color, which will give the cola its dark appearance.
As well as 5 g citric acid, and an additional 10 ml of water. You can also add caffeine at this point, but one of the main motivations for me to make my own cola is to have cola without caffeine (it gives me headaches).
Stop mixing when it starts smiling at you! :D This is our finished cola concentrate. It smells delicious!
Other recipes would now integrate this into sugar syrup made from 1 kg of sugar, but I want to try a variation using artificial sweeteners, for dieting reasons. This is a combination of sodium cyclamate and saccharin, which is pretty common in Germany. I’ll use 70 ml.
This gives us about 120 ml of the finished cola syrup! \o/ I’ve been following the Cube-Cola recipe.
Because this is still a very concentrated solution and hard to handle, I’m diluting it down to 1 L. In theory, this should give a nice cola when diluted further in a ratio of 1:8. First taste test coming up!
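If you like to sanity-check recipes with a few lines of code, here's the dilution chain as a quick back-of-the-envelope sketch (just the arithmetic from the quantities above, nothing official about the recipe):

```python
# Back-of-the-envelope dilution math for this batch (quantities from the text above).
oil_ml = 2             # total essential oil in the emulsion
syrup_ml = 120         # finished cola concentrate
prediluted_ml = 1000   # concentrate diluted down to 1 L for easier handling
dilution_ratio = 8     # 1 part pre-dilution to 8 parts water

finished_cola_ml = prediluted_ml * (1 + dilution_ratio)
print(f"Finished cola: {finished_cola_ml / 1000:.1f} L")                          # ~9.0 L
print(f"Oil per litre of cola: {oil_ml / (finished_cola_ml / 1000):.2f} ml")      # ~0.22 ml
print(f"Syrup per litre of cola: {syrup_ml / (finished_cola_ml / 1000):.1f} ml")  # ~13.3 ml
```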
It’s good! It’s really good! The sweetness is just right for me, and the oil blend tastes nice and cola-y! :)
I like my drinks really sour, so I might add another… 10 g of citric acid to this batch. There is a slightly bitter aftertaste, which I attribute to the artificial sweetener I’m using.
I might also tweak the oil ratios to accentuate the citrus flavors a bit more in the next batch. And I thought this would be too much food coloring, but it seems pretty perfect!
What I’d be really excited to try is making other flavors: Orange soda! Cherry soda! Almond & apricot soda!
Open Soda (Note: site is no longer online as of 2026…) has two pretty weird ones (including a bubble gum soda?!), but that’s pretty much all I found. There still seems to be potential for reverse engineering! :)
And here are two more resources I found while preparing my ingredients today: Jan Krüger’s blog post about tweaking cola recipes and Richard Grove’s Notes on Making Cola, which go deeper into food chemistry!
Made a second batch of cola syrup without caramel color. It’s much weirder to drink than I expected.
I also switched to sucralose as a sweetener, hoping that it would have less aftertaste. Instead of 1000 g sugar in the original recipe, I used 1.6 g sucralose, which is ridiculous, but it’s still too sweet for my taste. :O
And I made a tool upgrade! It’s important to get these small quantities right…
This week’s batch: blood orange + almond + lime (in a 2:1:1 ratio), plus as much orange food coloring as I dared to put in! :D
I cut the amount of sucralose in half — still a bit too sweet for my taste. Had to completely guess the oil ratios, but I’m happy with how it turned out! My whole apartment smells like marzipan now.
This time, when I did the mixing in a big plastic bowl, I noticed some floating stuff on the concentrate, which I assume are tiny plastic shavings produced by the hand mixer? I filtered them out, and might switch to glass or metal containers.
In today’s cola batch, I reduced the amount of sucralose further to 0.4 g, added 0.07 g vanillin, as well as a bit more cassia oil. Good stuff! I call it “Syntez-Cola”, because I combined ideas from Cube-Cola and Jan Krüger’s recipe! :)
… this is the point where I should start a blog post, and a Git repo with version numbers and a changelog, I guess. :P (Edit 2026: Here you go!)
I found decaf Coca-Cola in a supermarket last week, and could do a direct taste comparison. At this point, I prefer my cola by a large margin! Coca-Cola tasted bland, like molten cola popsicles. It had an interesting dryness to it, though, which I’m not sure how to replicate.
Made a simple orange soda today, really happy with how it turned out! I put the recipe here, along with my modified cola recipe, and the almond + blood orange drink I invented!
blinry almond 0.1.1 contains less almond oil, to highlight the citrus flavors a bit more. Find the updated recipe here.
One of these two bottles is a version with sugar instead of sucralose. 400 g of sugar seem to be a good replacement for 0.6 g sucralose. Curious to see whether people will be able to taste the difference! :) (Edit 2026: They totally could!)
blinry orange 0.1.1, now with 14% more orange! :D Recipe here.
Since these early experiments, I've made a handful of batches of these recipes without modification. I still think they're really nice! blinry orange in particular is rather unique.
If you try them for yourself, I’d be happy about feedback!
And I'm still thinking about what ingredients might go into a DIY Mountain Dew, or a DIY Fassbrause…
You can add your comment to this post in the Fediverse! Alternatively, drop me a mail at mil@blinry.org.
...
Read the original on blinry.org »
A statement regarding the removal of DDLC from the Google Play Store:
...
Read the original on bsky.app »
In his classic Programming Perl — affectionately known to a generation of technologists as “the Camel Book” — Larry Wall famously wrote of the three virtues of a programmer as laziness, impatience, and hubris:
If we’re going to talk about good software design, we have to talk about Laziness, Impatience, and Hubris, the basis of good software design. We’ve all fallen into the trap of using cut-and-paste when we should have defined a higher-level abstraction, if only just a loop or subroutine. To be sure, some folks have gone to the opposite extreme of defining ever-growing mounds of higher level abstractions when they should have used cut-and-paste. Generally, though, most of us need to think about using more abstraction rather than less.
Of these virtues, I have always found laziness to be the most profound: packed within its tongue-in-cheek self-deprecation is a commentary on not just the need for abstraction, but the aesthetics of it. Laziness drives us to make the system as simple as possible (but no simpler!) — to develop the powerful abstractions that then allow us to do much more, much more easily.
Of course, the implicit wink here is that it takes a lot of work to be lazy: when programmers are engaged in the seeming laziness of hammock-driven development, we are in fact turning the problem over and over in our heads. We undertake the hard intellectual work of developing these abstractions in part because we are optimizing the hypothetical time of our future selves, even if at the expense of our current one. When we get this calculus right, it is glorious, as the abstraction serves not just ourselves, but all who come after us. That is, our laziness serves to make software easier to write, and systems easier to compose — to allow more people to write more of it.
Ideally, you would want those that benefit from abstractions to pay the virtue of laziness forward — to use their new-found power to themselves labor on the abstractions they make. But a consequence of the broadening of software creation over the past two decades is it includes more and more people who are unlikely to call themselves programmers — and for whom the virtue of laziness would lose its intended meaning.
Worse, the extraordinary productivity allowed by modern abstractions has given rise to an emphasis on a kind of false industriousness. Pejoratively, this was the rise of the brogrammer, with the virtue of ironic laziness and hammock-driven development displaced by hustle porn about crushing code.
Onto this dry tinder has struck the lightning bolt of LLMs. Whatever one’s disposition is to software creation, LLMs allow that to be applied with (much) greater force, so it should be of little surprise that LLMs have served as anabolic steroids for the brogrammer set.
Elated with their new-found bulk, they can’t seem to shut up about it. Take, for example, brogrammer-of-note Garry Tan, who has been particularly insufferable about his LLM use, bragging about his rate of thirty-seven thousand lines of code per day (and “still speeding up”).
If laziness is a virtue of a programmer, thinking about software this way is clearly a vice. And like assessing literature by the pound, its fallacy is clear even to novice programmers.
As for the artifact that Tan was building with such frenetic energy, I was broadly ignoring it. Polish software engineer Gregorein, however, took it apart, and the results are at once predictable, hilarious, and instructive: a single load of Tan’s “newsletter-blog-thingy” included multiple test harnesses (!), the Hello World Rails app (?!), a stowaway text editor, and then eight different variants of the same logo — one of which was zero bytes.
The problem here isn’t these issues per se (which are all fixable!), and it isn’t even the belief that the methodology that created them represents the future of software engineering (though that is certainly annoying!).
The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone’s) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters. As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don’t want to waste our (human!) time on the consequences of clunky ones. The best engineering is always borne of constraints, and the constraint of our time places limits on the cognitive load of the system that we’re willing to accept. This is what drives us to make the system simpler, despite its essential complexity. As I expanded on in my talk The Complexity of Simplicity, this is a significant undertaking — and we cannot expect LLMs that do not operate under constraints of time or load to undertake it of their own volition.
This is not to say, of course, that LLMs won’t play an important role in our future: they are an extraordinary tool for software engineering, but — as outlined in our guidelines for LLM use at Oxide — they are but a tool. We can put them to use tackling the non-ironic (and non-virtuous!) aspects of programmer laziness — helping us take on thorny problems like technical debt — or use them to promote our engineering rigor, but it must be in service of our own virtuous laziness: to yield a simpler, more powerful system that serves not just ourselves, but the generations of software engineers to come after us.
...
Read the original on bcantrill.dtrace.org »
A few weeks ago I wrote about how I thought intelligence is becoming a commodity. The idea is quite straightforward, and widespread now: when everyone races to build the best model, the models get better, but so does every other model eventually. Every dollar spent on a bigger training run makes the previous one cheaper. The distance between frontier, second-best, and open-source alternatives is collapsing fast (actually Gemma4, Kimi K2.5 and GLM 5.1 are becoming my bedside models these days). What’s more, as models get better, the amount of intelligence that can be deployed on local, lower-spec hardware increases significantly.
The irony of this situation is that this commoditisation of intelligence is benefiting the company that everyone was framing as the “AI loser”: Apple.
There’s a version of the last three years where Apple genuinely failed at AI. They had Siri before anyone had a serious voice assistant, and then watched ChatGPT eat their lunch from its very first release (even before it had introduced its native voice interaction). Apple had no flagship frontier (or even vanity open-source) model, and no $500B compute commitment with the usual suspects. Meanwhile, the rest of the AI labs and big tech companies were racing to win the next state-of-the-art benchmark by burning bags of cash.
What this also meant is that while these companies were burning money at a rate that would make a sovereign wealth fund uncomfortable, Apple was (and still is) sitting on a pile of undeployed cash (to the point of even increasing their stock buybacks), giving them optionality.
To me, OpenAI is the most paradigmatic example of this “infinite money burning machine”. OpenAI raised at a $300B valuation and then shut down Sora, the video product they’d been positioning as a creative industry flagship, because it was running at roughly $15M a day in costs against $2.1M in daily revenue. Disney had already signed a three-year licensing deal for Sora to generate content from Marvel, Pixar, and Star Wars characters. They were finalising a $1B equity stake in OpenAI. When Sora died, so did the billion. A $1B investment evaporated, because the product it was staked on couldn’t pay for itself (shrinking the buffer that covers their daily burn).
On the infrastructure side: OpenAI signed non-binding letters of intent with Samsung and SK Hynix for up to 900,000 DRAM wafers per month, roughly 40% of global output. These were of course non-binding. Micron, reading the demand signal, shut down its 29-year-old Crucial consumer memory brand to redirect all capacity toward AI customers. Then Stargate Texas was cancelled, OpenAI and Oracle couldn’t agree terms, and the demand that had justified Micron’s entire strategic pivot simply vanished. Micron’s stock crashed.
I don’t know about you, but I don’t see these behaviours as those of someone who is winning the AI race, regardless of how well their models do on benchmarks or how much they are burning on infrastructure. A small miscalculation in the expected revenue, and you are out of the game (I am actually of the opinion that without some kind of bailout, OpenAI could be bankrupt in the next 18-24 months, but I am horrible at predictions).
My sense is that the labs’ bet was always that raw model capability, i.e. intelligence, along with the infrastructure required to run them would stay scarce. Those who manage to secure the best model and the infrastructure to run it at scale would get the best moat. But I am afraid that having the best model in itself may not be enough moving forward. Less capable models are becoming as capable as previous versions of the frontier models.
The best recent example I can think of is Gemma 4, Google’s open-weight model. It was built to run on a phone, scores 85.2% on MMLU Pro and matches Claude Sonnet 4.5 Thinking on the Arena leaderboard. 2 million downloads in its first week. Models that would have been state-of-the-art eighteen months ago now run on a laptop, and they get better every quarter.
If you haven’t tried Gemma4 yourself I highly recommend it. I am running it on my AMD Ryzen AI Max+, and its performance, both in tokens per second and in intelligence, is so good that I have already migrated some of my personal tools to use this model as the backend without visibly impacting their output. This trend could really change the way we access intelligence in the next few months.
I feel that some of the labs see this coming. Anthropic has been particularly aggressive about it and they are releasing new (actually useful) tools every day that work like a charm with their models in order to lock users into their ecosystem. Claude Code for developers, Claude Cowork for teams, the recent Claude Managed Sessions to orchestrate agents, all designed to put Claude inside workflows people are already in.
The logic behind it: if the model itself won’t hold the moat, capture the usage layer and make switching painful. I think this is brilliant, and seeing how much Anthropic is growing in number of users and revenue, it seems to be paying off. The economics of their plans are still rough, though. One analysis found a max-plan subscriber consuming $27,000 worth of compute on their $200 Max subscription. The labs are subsidising the demand they’re chasing, which justifies their level of burn (let’s see for how long they can afford these subsidies).
Apple, by contrast, has spent almost nothing on AI infrastructure or on subsidising users’ token burn. And this may be giving them more optionality and leverage than any of the other companies that jumped head first into the AI race.
In that earlier post, I argued that if intelligence becomes abundant, context becomes the scarce resource. A model that can reason about anything but knows nothing about you or the environment it operates in is a generic tool. What makes AI genuinely useful day-to-day is reasoning plus personal context: your messages, your calendar, your code, your tools, your health data, your photos, your habits. I think this is where Anthropic is doing an amazing job with their “Claude suite”.
But Apple already has all this context and access to your environment through their 2.5 billion active devices. Each one is a context mine that users have been filling for years. Health data from Apple Watch. Every photo taken on an iPhone. Notes, messages, location history, app behaviour, emails, and awareness of your environment through the pool of sensors of your device. Why build a commodity when they already have the context that can become their moat?
And they even have the ability to keep all this data on-device, which is where the “Privacy. That’s iPhone” positioning becomes something more than a PR strategy, and which could actually make a comeback to become one of their core value propositions. Apple spent years using privacy as a differentiator against the ad-driven models of Google and Meta. It worked, but it always felt a bit abstract and, honestly, fake. Now it could become really concrete. Would you hand OpenAI your medical records and fifteen years of photos to get better AI answers? Probably not. Some are, but I personally wouldn’t like Sam to have that personal data from me. Would you let a model running entirely on your device (no network request, no data leaving your phone) access all of that? That’s a different question. The on-device model gets full context because it never leaves the hardware. Apple built the reputation and the architecture for this when no one else thought it mattered.
Of course, there are still technological barriers to make this possible, but I feel like we may be getting there.
In this context, the Gemini deal, where Apple signed a $1B deal to license Google’s frontier model for the queries that need cloud-scale reasoning, makes total sense. Apple didn’t build a frontier model. They bought access to one, at a price that’s a rounding error against OpenAI’s weekly compute bill. What they kept in-house: the context layer, the on-device stack, and the operating system that mediates everything.
It turns out Apple had another unexpected lever for AI, as shown by the Mac Mini craze after OpenClaw’s release. Apple Silicon wasn’t built specifically for AI; it was built for efficiency, for battery life, for thermal performance, for the hardware/software co-design that Apple had been running for fifteen years. But it turned out to be the perfect architecture to run local models efficiently.
The key decision is unified memory. On a conventional architecture (that of most laptops, and even traditional data center-grade GPUs) the CPU and GPU are separate chips with separate memory pools. Moving data between them is slow and power-hungry. Nvidia’s GPUs are extremely fast at matrix operations, but they sit on the other side of a PCIe bus from the CPU, and feeding them is a constant bottleneck (as discussed when presenting the difference between DRAM and HBM in this post from a few weeks ago).
Apple’s M-series and A-series chips put the CPU, GPU, and Neural Engine (their proprietary accelerator) on the same die, sharing one high-bandwidth memory pool. No bus crossing, no transfer overhead, no latency switching between CPU and GPU work. For video editing or compiling Xcode, this is a nice efficiency win. For LLM inference, this has been key.
As also described in my post about RAM memory and TurboQuant, LLM inference is currently memory-bandwidth bound, not compute bound. The bottleneck isn’t so much how fast you can multiply matrices; it’s how fast you can stream model weights from memory into the compute units, and how big a KV cache you can store to avoid having to re-compute it. Apple’s unified pool gives every compute unit direct, high-bandwidth access to the same memory simultaneously. That’s exactly the operation inference needs.
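To make that concrete, here's a rough, illustrative ceiling calculation for a dense model, where every generated token has to stream essentially all the weights through the compute units once (the numbers are placeholders picked for illustration, not measurements):

```python
# Rough upper bound on decode speed when inference is memory-bandwidth bound.
def max_tokens_per_second(params_billion: float, bytes_per_param: float, bandwidth_gb_s: float) -> float:
    model_gb = params_billion * bytes_per_param   # GB of weights read per generated token
    return bandwidth_gb_s / model_gb

# Example: an 8B-parameter model quantized to ~4 bits (~0.5 bytes/param)
# on ~400 GB/s of unified memory bandwidth.
print(max_tokens_per_second(8, 0.5, 400))  # ~100 tokens/s ceiling, ignoring compute and KV-cache traffic
```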
This is what makes the LLM in a Flash technique work so well on Apple hardware. Someone recently ran Qwen 397B, a 209GB model, on an M3 Max Mac at ~5.7 tokens per second, using only 5.5GB of active RAM. The weights live on the SSD and stream in at ~17.5 GB/s as needed. This works because Qwen is a mixture-of-experts architecture: each token only activates a small subset of expert layers, so you only ever need a fraction of the 209GB resident in memory. The SSD throughput Apple achieves (faster than their own figures from the original LLM in a Flash paper) comes from storage architecture they built for iPhone responsiveness, not AI. Claude wrote the ~5,000 lines of Objective-C and Metal shaders to make it all work. A 400-billion-parameter model, on a consumer laptop, from 5.5GB of RAM (another win of the autoresearch flow discussed in this newsletter).
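The quoted figures are roughly self-consistent, which is a nice sanity check: if the SSD stream is the bottleneck, the bytes of expert weights touched per token fall straight out of the reported numbers.

```python
# Sanity check of the figures quoted above (reported values, not my measurements).
ssd_gb_s = 17.5        # reported SSD streaming throughput
tokens_per_s = 5.7     # reported generation speed
total_model_gb = 209   # full model size on disk

gb_per_token = ssd_gb_s / tokens_per_s
print(f"~{gb_per_token:.1f} GB streamed per token")                            # ~3.1 GB
print(f"~{gb_per_token / total_model_gb:.1%} of the model touched per token")  # ~1.5%
```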
What I find most interesting about all of this is the platform dynamic it could create. Think about the App Store. Apple didn’t build the apps, they built the platform where apps ran best, and the ecosystem followed. Developers didn’t target iOS because Apple asked, they targeted it because the users were there, the tooling was good, the hardware was consistent. My feeling is that the same thing could happen now with local inference. MLX is already a de facto framework for on-device AI. Gemma, Qwen, Mistral — the most relevant model architectures all have MLX support. Apple doesn’t need to win the model race if they manage to become the de facto platform where the models (or the agents that use them) run. Again, a great example of this is the Mac Mini craze after OpenClaw went viral.
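As a taste of how low the barrier already is, this is roughly what running an open-weight model through MLX looks like with the community mlx-lm package (the model repo name is just an example, and exact function signatures may differ between versions):

```python
# Illustrative sketch: local inference on Apple Silicon via MLX.
# Requires: pip install mlx-lm   (the model name below is an example, not a recommendation)
from mlx_lm import load, generate

# Loads a quantized open-weight model straight into unified memory.
model, tokenizer = load("mlx-community/gemma-2-2b-it-4bit")

# Runs entirely on-device; nothing leaves the machine once the weights are cached.
text = generate(model, tokenizer,
                prompt="Why does unified memory help LLM inference?",
                max_tokens=128)
print(text)
```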
I keep going back and forth on this, honestly, and I still don’t know whether this was Apple’s strategy all along, or whether they didn’t feel in a position to make a bet and are just going with the flow as events unfold, maximising their optionality.
The hardware/software co-design strategy has been a key focus for years, and one that I’ve always agreed with myself (as an electrical engineer by training, I’ve always been into hardware/software co-design). If you can afford it, I think that’s the right approach. The privacy positioning, the on-device processing focus, the decision to build their own silicon when the rest of the industry was happy buying Nvidia and Intel, all of those were choices Apple made when they were commercially risky and the direction wasn’t obvious. It is true that they were made with cost and governance in mind, not AI, but it turned out well for them.
What Apple couldn’t have planned (or could they?) is that their unified memory architecture would be a perfect fit for LLMs, and that open-weight models would get this capable, this fast, removing the need for huge hardware investment for AI infrastructure from their side. That the model race would commoditise intelligence as quickly as it did. Or that someone would stream a 400B parameter model from an SSD and it would actually work.
So some of this is luck. But it’s the kind of luck that finds you when you built the right foundation, even if you built it for completely different reasons. They were definitely well-positioned.
The rest of the industry spent three years racing to see who could build the best model with Apple looking from the sidelines, waiting to understand how their devices and own ecosystem could fit in this future. I don’t know if this is exactly the case, but I feel this was smart. Risky but smart.
I genuinely don’t know how this plays out over the next few years. The labs are not standing still, and Apple’s AI track record (looking at you, Siri, you already suck a bit) is not exactly flawless. But it’s hard to imagine a world where 2.5 billion devices, carrying your entire personal context, running capable models locally on purpose-built silicon, with Gemini on call for the hard stuff, and incurring variable costs for inference instead of expensive CAPEX investments, could be a bad position to hold in a future where AI is everywhere.
Whether that was strategy or fortune, I’ll leave for you to decide. And if you do, please let me know what you think about it. My TL;DR is that, to my surprise, I am still bullish about Apple and their relevance in an AI-centric future.
Disclaimer: To frame the opinion of this post, I just want to be clear about the fact that I am not one of those Apple fan boys. Proof of this is that this post was written from a Linux machine and that I don’t even own a Mac :)
...
Read the original on adlrocha.substack.com »
This post works through the financial logic of software teams, from what a team of eight engineers actually costs per month to what it needs to generate to be economically viable. It also examines why most teams have no visibility into either number, how that condition was built over two decades, and what the arrival of LLMs now means for organizations that have been treating large engineering headcount as an asset.
Software development is one of the most capital-intensive activities a modern company undertakes, and it is also one of the least understood from a financial perspective. The people making daily decisions about what to build, what to delay, and what to abandon are rarely given the financial context to understand what those decisions actually cost. This is not a coincidence. It is a structural condition that most organizations have maintained, quietly and consistently, for roughly two decades.
A software engineer in Western Europe costs somewhere between €120,000 and €150,000 per year when you account for salary, social fees, pension contributions, equipment, social activities, management overhead, and office space. Call it €130,000 as a reasonable middle estimate. A team of eight engineers therefore costs approximately €1,040,000 per year, or €87,000 per month, or roughly €4,000 for every working day.
Most engineers do not know this number. Many of their managers do not either. And in the organizations where someone does know it, the number rarely makes its way into the conversations where prioritization decisions are actually made.
This matters because every decision a team makes carries an implicit cost that compounds over time. Choosing to spend three weeks on a feature that serves 2% of users is a €60,000 decision. Delaying an operational improvement for a quarter is a decision with a calculable daily price tag. Rebuilding a platform because the current one feels embarrassing, rather than because customers are leaving, is a capital allocation choice that would look very different if the people making it were spending their own money.
Consider a team of eight engineers whose mission is to build and maintain an internal developer platform serving one hundred other engineers. This is a common organizational structure, and it is one where the financial logic is rarely examined carefully.
The team costs €87,000 per month. To justify that cost, the platform they build needs to generate at least €87,000 per month in value for the engineers who use it. The most direct way to measure that value is through time saved, since the platform’s purpose is to make other engineers more productive.
At a cost of €130,000 per year, one engineer costs approximately €10,800 per month, or around €65 per working hour. For the platform team to break even, their platform needs to save the hundred engineers they serve a combined total of 1,340 hours per month. That is 13.4 hours per engineer per month, or roughly three hours per week per person.
Three hours per week is achievable. A well-built platform that eliminates manual deployment steps, reduces environment setup time, or removes the need for repetitive configuration work can easily clear that bar. Time saved is the most direct measure for a platform team, though value can also come from reducing outages, which carries a direct revenue impact of its own. But the question worth asking is whether anyone on that team knows this number, tracks it, or uses it to decide what to build next. In most organizations, the answer is no. The team has a roadmap driven by engineering preferences, stakeholder requests, and quarterly planning cycles, and the financial logic underlying their existence is left unexamined.
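For readers who like the arithmetic spelled out, here is the break-even calculation as a small sketch, using the rounded figures from the text:

```python
# Break-even arithmetic for the platform team example (rounded figures from the text).
cost_per_engineer_year = 130_000      # fully loaded cost, EUR
team_size = 8
engineers_served = 100
working_hours_per_month = 166         # ~2,000 working hours per year

team_cost_month = team_size * cost_per_engineer_year / 12                 # ≈ €86,700
cost_per_hour = cost_per_engineer_year / 12 / working_hours_per_month     # ≈ €65

break_even_hours = team_cost_month / cost_per_hour                        # ≈ 1,330 hours/month
per_engineer_per_week = break_even_hours / engineers_served / 4.33

print(f"Team cost: €{team_cost_month:,.0f}/month")
print(f"Break-even: {break_even_hours:,.0f} hours saved/month "
      f"≈ {per_engineer_per_week:.1f} h per engineer per week")

# The three-to-five-times viability threshold discussed next:
for multiple in (3, 5):
    print(f"{multiple}x threshold: €{multiple * team_cost_month:,.0f}/month in generated value")
```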
And break-even is not actually the right bar. Leah Tharin has written a sharp breakdown of the mathematics of this: a team with a 50% initiative success rate, which is already optimistic, needs its wins to cover its losses too. Leah’s calculation is growth-oriented, but even for non-growth organizations, the same investment thesis holds. Even a two-times return is not sufficient. Capital sitting in a bank carries no operational risk, no coordination costs, and no ongoing maintenance obligations. The systems a team builds will outlive the team itself, and the cost of owning, maintaining, and eventually replacing those systems is almost always larger than anticipated. The return has to cover not just the team’s current cost, but the long tail of what they leave behind.
That pushes the realistic threshold for financial viability to somewhere between three and five times annual cost. For an €87,000 per month team, that means generating between €260,000 and €435,000 in monthly value. The three hours per week calculation gets you to break-even. To clear the realistic financial bar, the platform needs to be genuinely transformative for the engineers using it, and the team needs to be ruthless about working on the highest-value problems rather than the most interesting ones.
A customer-facing product team of eight carries the same €87,000 monthly cost. The levers available to justify that cost are different, but the underlying logic is identical.
If the product has an average revenue per user of €50 per month, the team needs to generate or protect the equivalent of 1,740 users worth of value every month just to break even, and roughly 5,000 to 8,700 users worth of value to clear the three-to-five times threshold.
Churn is often the most direct lever. Consider a product with 50,000 active users losing 2% monthly to churn. That is 1,000 users per month, representing €50,000 in monthly recurring revenue walking out the door. A team that identifies the primary driver of that churn and eliminates it is generating nearly €50,000 per month in protected revenue, covering most of its break-even cost from a single initiative. But that calculation requires knowing the churn rate, understanding its causes, and connecting those causes to the team’s work, and most teams are not operating with that level of financial clarity.
Activation is another lever that is frequently underestimated. If 10,000 users sign up each month but only 30% complete the activation steps that lead to long-term retention, there are 7,000 users each month who paid acquisition costs but never converted to retained revenue. Improving the activation rate by five percentage points, from 30% to 35%, converts an additional 500 users per month. At €50 average revenue per user, that is €25,000 in additional monthly recurring revenue, representing roughly 29% of the team’s break-even threshold from one metric moving in the right direction.
Sales conversion follows the same logic. If the product has a free-to-paid conversion funnel processing 20,000 trials per month at a 4% conversion rate, that produces 800 paying customers monthly. Moving conversion from 4% to 4.5% produces 900 customers, an additional 100 paying users, and €5,000 in additional monthly revenue. Small improvements across multiple levers compound quickly, but only if the team understands which levers connect to which financial outcomes and by how much.
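Putting the three levers side by side makes the compounding visible. A quick sketch, using the illustrative figures from the paragraphs above:

```python
# Lever arithmetic for the product team example (illustrative figures from the text).
team_cost_month = 87_000   # EUR
arpu = 50                  # EUR per user per month

churn_protected  = 50_000 * 0.02 * arpu            # eliminate 2% monthly churn:  €50,000/month
activation_gain  = 10_000 * (0.35 - 0.30) * arpu   # activation 30% -> 35%:       €25,000/month
conversion_gain  = 20_000 * (0.045 - 0.04) * arpu  # trial conversion 4% -> 4.5%:  €5,000/month

total = churn_protected + activation_gain + conversion_gain
print(f"Combined monthly value: €{total:,.0f} "
      f"({total / team_cost_month:.0%} of the team's break-even cost)")
```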
Given that software teams are expensive and that their value is, at least in principle, calculable, it is worth examining why most teams do not measure anything financially meaningful. Some measure activity proxies such as velocity, tickets closed, or features shipped. Others measure sentiment proxies such as NPS, CSAT, or engagement scores. These are not degraded versions of financial measurement. They are a different category entirely, one that was designed around the goal of understanding user behavior and team throughput rather than around the goal of understanding economic return.
The problem is that activity and sentiment metrics can trend upward while financial performance deteriorates. A team can ship more features while building the wrong things. Engagement scores can rise while churn accelerates among the users who actually generate revenue. Velocity can increase while the work being completed has no measurable connection to business outcomes. These metrics feel meaningful because they correlate with outcomes in many circumstances, but correlation is not a reliable guide to prioritization when the underlying financial logic is never examined.
This is a structural condition rather than a failure of individual judgment. Organizations chose these metrics because they are easier to instrument, easier to communicate, and easier to look good on than financial metrics. A team that measures its success by features shipped will always have something to show. A team that measures its success by return generated will sometimes have to report that it does not know, or that the return was disappointing, and that kind of transparency requires an organizational culture that most companies have not deliberately built.
The matrix above is drawn from a product management training program I run called Booster, where product leaders map their actual metrics against their investment thesis to surface gaps. The exercise is uncomfortable precisely because most leaders discover mid-mapping that their team’s daily measurements have no direct connection to the financial objective they were given.
Understanding why this condition exists requires looking at roughly two decades of macroeconomic context, because the financial dysfunction in modern software organizations did not emerge from bad intentions or intellectual failure. It emerged from a specific environment that made financial discipline in product teams economically unnecessary.
The picture is not a single clean era but two distinct phases. From roughly 2002 through 2011, capital was periodically cheap but conditions were mixed. Rates fell sharply after the dot-com crash and again after the global financial crisis, but in both cases risk appetite was suppressed. The money was technically inexpensive but investors were cautious, multiples were reasonable, and the growth-at-all-costs logic had not yet taken hold. Product organizations during this period still operated with some residual financial discipline inherited from the dot-com reckoning.
From approximately 2011 through 2022, something different happened. Zero-rate policy became fully normalized, risk appetite recovered and then overcorrected, and the SaaS mental model crystallized into a broadly shared investment thesis. All three conditions arrived simultaneously, and the result was about eleven years during which software companies could grow headcount aggressively, miss on the majority of their roadmap, and still look healthy on paper. Revenue growth forgave an enormous range of prioritization mistakes, and the cost of building the wrong thing was largely invisible.
Eleven years is not a long time, but it is long enough to form the professional instincts of an entire generation of product and engineering leaders. The frameworks they learned, the metrics they adopted, the planning rituals they practice, and the definitions of success they internalized were all formed during a window that was unusually short and unusually distorted. There is no cohort of senior product leaders who developed their judgment in conditions where their teams were expected to demonstrate financial return, because those conditions did not exist during the years when that cohort was learning the craft.
When capital became expensive again in 2022, the behavior did not automatically adjust, because the behavior was never connected to the financial logic in the first place.
There is a deeper consequence of this twenty-year period that is now becoming painfully visible, and it concerns how the industry has thought about large engineering organizations and codebases.
The conventional understanding is that a codebase representing years of engineering investment is a valuable asset. It encodes business logic, captures accumulated decisions, and represents the technical foundation on which future products are built. A large engineering organization is similarly understood as a source of capability, with more engineers meaning more capacity to build, maintain, and improve that foundation.
While some argued that large codebases should actually be considered a liability, the industry as a whole has mostly ignored that. But this understanding is now being more closely examined. A large codebase also carries maintenance costs that grow over time as the system becomes more complex, more interconnected, and more difficult to change safely. Every engineer added to maintain it increases coordination costs, introduces new dependencies, and adds to the organizational weight that slows decision-making. The asset and the liability exist simultaneously, and for most of the past twenty years, the financial environment masked the liability side of that equation.
The arrival of large language models has made the liability visible in a way that is difficult to ignore. Recently, Nathan Cavaglione, a developer, built a functional replica of approximately 95% of Slack’s core product in fourteen days using LLM agents. Slack was built by thousands of engineers over the course of more than a decade, at a cost that represents billions of dollars in cumulative engineering investment. Nathan started without any of that accumulated complexity, without the organizational weight, without the legacy architectural decisions, and without the coordination costs, and arrived at a comparable product in a period that would not constitute a single sprint in most enterprise engineering organizations.
Day 14: A functional replica of Slack’s core product, built by Nathan using LLM agents.
This does not mean that Slack’s engineering investment was wasted, because Slack also built enterprise sales infrastructure, compliance capabilities, data security practices, and organizational resilience that a fourteen-day prototype does not include. But it does mean that the assumption underlying large engineering organizations, which is that scale and accumulated complexity represent competitive moats, is no longer reliable in the way it once was. When the cost of building a functional approximation of a sophisticated software product can collapse to days of individual effort, the question of what a large engineering team justifies becomes both more urgent and more difficult to answer with the metrics most organizations currently track.
The obvious objection is that code produced at that speed becomes unmanageable, a liability in itself. That is a reasonable concern, but it largely applies when agents produce code that humans then maintain. Agentic platforms are being iterated upon quickly, and for established patterns and non-business-critical code, which is the majority of what most engineering organizations actually maintain, detailed human familiarity with the codebase matters less than it once did. A messy codebase is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today. The liability argument holds in a human-to-human or agent-to-human world. In an agent-to-agent world, it largely dissolves.
The competitive advantage available to organizations that take this seriously is not primarily technical. It is analytical. Companies that can clearly articulate what each of their teams costs, what value each team generates, and whether that value clears a financially viable threshold are in a structurally different position than companies that cannot. They can make build versus buy decisions based on actual economics rather than organizational preference. They can identify when a team is working on problems that cannot generate sufficient return at their cost level. They can sequence initiatives based on what value is being lost each day they are delayed, rather than on who argued most persuasively in the last planning meeting.
Most organizations cannot do this today. The measurement infrastructure does not exist, the financial data does not flow to the people making prioritization decisions, and the habit of asking these questions has not been built. Building it is uncomfortable, because the answers are sometimes unflattering. A team that examines its work through this lens will sometimes discover that it has spent a quarter on things that do not connect to financial outcomes in any meaningful way, and that is a difficult finding to sit with.
But the alternative is continuing to run an organization where teams with million-euro annual budgets make daily investment decisions without the financial context to know whether those decisions are generating return. That condition was sustainable when capital was cheap and growth forgave everything. It is increasingly difficult to sustain in an environment where boards expect financial returns, where the cost of building software is collapsing due to AI, and where the question of what a team justifies can no longer be deferred indefinitely.
The organizations that develop the habit of asking these questions clearly, regularly, and without flinching will accumulate an advantage that compounds over time. The question is simply whether they will start asking before or after the pressure forces them to.
...
Read the original on www.viktorcessan.com »
Servo is now available on crates.io
Today the Servo team has released v0.1.0 of the servo crate. This is our first crates.io release of the servo crate that allows Servo to be used as a library.
We currently do not have any plans of publishing our demo browser servoshell to crates.io. In the 5 releases since our initial GitHub release in October 2025, our release process has matured, with the main “bottleneck” now being the human-written monthly blog post. Since we’re quite excited about this release, we decided to not wait for the monthly blog post to be finished, but promise to deliver the monthly update in the coming weeks.
As you can see from the version number, this release is not a 1.0 release. In fact, we still haven’t finished discussing what 1.0 means for Servo. Nevertheless, the increased version number reflects our growing confidence in Servo’s embedding API and its ability to meet some users’ needs.
In the meantime we also decided to offer a long-term support (LTS) version of Servo, since breaking changes in the regular monthly releases are expected and some embedders might prefer doing major upgrades on a scheduled half-yearly basis while still receiving security updates and (hopefully!) some migration guides. For more details on the LTS release, see the respective section in the Servo book.
...
Read the original on servo.org »
BUDAPEST, Hungary (AP) — Hungarian voters on Sunday ousted long-serving Prime Minister Viktor Orbán after 16 years in power, rejecting the authoritarian policies and global far-right movement that he embodied in favor of a pro-European challenger in a bombshell election result with global repercussions.
It was a stunning blow for Orbán — a close ally of both U.S. President Donald Trump and Russian President Vladimir Putin — who quickly conceded defeat after what he called a “painful” election result. U.S. Vice President JD Vance had made a visit to Hungary just days earlier, meant to help push Orbán over the finish line.
Election victor Péter Magyar, a former Orbán loyalist who campaigned against corruption and on everyday issues like health care and public transport, has pledged to rebuild Hungary’s relationships with the European Union and NATO — ties that frayed under Orbán. European leaders quickly congratulated Magyar.
His victory was expected to transform political dynamics within the EU, where Orbán had upended the bloc by frequently vetoing key decisions, prompting concerns he sought to break it up from the inside.
It will also reverberate among far-right movements around the world, which have viewed Orbán as a beacon for how nationalist populism can be used to wage culture wars and leverage state power to undermine opponents.
It’s not yet clear whether Magyar’s Tisza party will have the two-thirds majority in parliament, which would give it the numbers needed for major changes in legislation. With 93% of the vote counted, it had more than 53% support to 37% for Orbán’s governing Fidesz party and looked set to win 94 of Hungary’s 106 voting districts.
“I congratulated the victorious party,” Orbán told followers. “We are going to serve the Hungarian nation and our homeland from opposition.”
In a speech to tens of thousands of jubilant supporters at a victory party along the Danube River, Magyar said his voters had rewritten Hungarian history.
“Tonight, truth prevailed over lies. Today, we won because Hungarians didn’t ask what their homeland could do for them — they asked what they could do for their homeland. You found the answer. And you followed through,” he said.
On the streets of Budapest, drivers blared car horns and cranked up anti-government songs while people marching in the streets chanted and screamed.
Many revelers chanted “Ruszkik haza!” or “Russians go home!” — a phrase used widely during Hungary’s 1956 anti-Soviet revolution, and which had gained increasing currency amid Orbán’s drift toward Moscow.
Turnout in the election was nearly 80%, according to the National Election Office, a record number in any vote in Hungary’s post-Communist history.
Orbán, the EU’s longest-serving leader and one of its biggest antagonists, traveled a long road from his early days as a liberal, anti-Soviet firebrand to the Russia-friendly nationalist admired today by the global far-right.
The EU will be waiting to see how Magyar changes Hungary’s approach to Ukraine. Orbán repeatedly frustrated EU efforts to support the neighboring country in its war against Russia’s full-scale invasion, while cultivating close ties to Putin and refusing to end Hungary’s dependence on Russian energy imports.
Recent revelations have shown a top member of Orbán’s government frequently shared the contents of EU discussions with Moscow, raising accusations that Hungary was acting on Russia’s behalf within the bloc.
Members of Trump’s “Make America Great Again” movement are among those who see Orbán’s government and his Fidesz political party as shining examples of conservative, anti-globalist politics in action, while he is reviled by advocates of liberal democracy and the rule of law.
In Budapest, Marcell Mehringer, 21, said he was voting “primarily so that Hungary will finally be a so-called European country, and so that young people, and really everyone, will do their fundamental civic duty to unite this nation a bit and to break down these boundaries born of hatred.”
During his 16 years as prime minister, Orbán launched harsh crackdowns on minority rights and media freedoms, subverted many of Hungary’s institutions, and has been accused of siphoning large sums of money into the coffers of his allied business elite, an allegation he denies.
He also heavily strained Hungary’s relationship with the EU. Although Hungary is one of the smaller EU countries, with a population of 9.5 million, Orbán has repeatedly used his veto to block decisions that require unanimity.
Most recently, he blocked a 90-billion euro ($104 billion) EU loan to Ukraine, prompting his partners to accuse him of hijacking the critical aid.
Magyar, 45, rapidly rose to become Orbán’s most serious challenger.
A former insider within Orbán’s Fidesz, Magyar broke with the party in 2024 and quickly formed Tisza. Since then, he has toured Hungary relentlessly, holding rallies in settlements big and small in a campaign blitz that recently had him visiting up to six towns daily.
In an interview with The Associated Press earlier this month, Magyar said the election will be a “referendum” on whether Hungary continues on its drift toward Russia under Orbán, or can retake its place among the democratic societies of Europe.
Tisza is a member of the European People’s Party, the mainstream, center-right political family with leaders governing 12 of the EU’s 27 nations.
Magyar faced a tough fight. Orbán’s control of Hungary’s public media, which he transformed into a mouthpiece for his party, and of vast swaths of the private media market gave him an advantage in spreading his message.
The unilateral transformation of Hungary’s electoral system and gerrymandering of its 106 voting districts by Fidesz also required Tisza to gain an estimated 5% more votes than Orbán’s party to achieve a simple majority.
Additionally, hundreds of thousands of ethnic Hungarians in neighboring countries had the right to vote in Hungarian elections and traditionally have voted overwhelmingly for Orbán’s party.
Russian secret services have plotted to interfere and tip the election in Orbán’s favor, according to numerous media reports including by The Washington Post. The prime minister, however, accused neighboring Ukraine, as well as Hungary’s allies in the EU, of seeking to interfere in the vote to install a “pro-Ukraine” government.
Associated Press journalists Béla Szandelszky, Marko Drobnjakovic, Ivan L. Nagy, Florent Bajrami in Budapest, Hungary, and Angela Charlton in Paris contributed to this report.
...
Read the original on apnews.com »
to own it. Europe holds unique strengths: a world-class academic ecosystem, a commitment to human-centric technology, and a single market of over 450 million people. The question is no longer whether Europe can compete, but how it can turn these assets into a cohesive, self-reliant AI powerhouse.

Europe has faced a growing technological gap, leaving its citizens, businesses, and governments increasingly reliant on foreign dominance. The cost is high: a diminished voice on the global stage, reduced control over the European future, and vulnerability to digital threats. Without action, we risk surveillance threats, economic decline, strategic weakness, and even the erosion of our democratic freedoms. But this challenge is also Europe’s greatest opportunity. The AI revolution has started, and it is a chance not only to catch up but to lead and define our own paths.

Europe is home to a vibrant pool of untapped talent and industrial champions whose unique assets can push the boundaries of what AI can achieve. The competition from the U.S. and China is fierce, but Europe is not a market to be dominated; it is a powerhouse of innovation, creativity, and resilience. The question is not whether we can compete, but how we will rise to the occasion. AI can be the tool that secures our autonomy, strengthens our strategic sectors, increases our economic wealth, and amplifies our global influence.

To seize this moment, we must act decisively. We need to drive demand for homegrown AI, secure strategic sectors, and empower European players. Controlling our AI and infrastructure is not optional; it is the only way to win the AI race. So now is the time to act: grow our talent pool and bring our best minds back to Europe, scale our innovative companies across all 27 Member States, and turn our diversity into a competitive edge by compressing knowledge and building AI that reflects the world’s complexity. Europe’s AI ecosystem is brimming with potential. By fostering an environment that nurtures growth, we can transform challenges into opportunities and reclaim our future. The race is on, and Europe should be ready to win it.

This playbook provides a clear, actionable framework to position Europe as that powerhouse: accelerating AI development and adoption, attracting and retaining top talent, simplifying regulation without sacrificing values, and mobilizing public and private investment to build homegrown AI infrastructure. Only then can Europe ensure AI is not only developed in Europe, but for Europe and on Europe’s terms.

This document is not a theoretical exercise. It is a practical playbook, born from the lived experience of a European AI startup, Mistral AI, navigating one of the world’s most competitive, fast-moving, and capital-intensive industries. We have experienced misaligned equity frameworks, bureaucratic barriers that require the CEO to travel for basic administrative tasks, and legal uncertainty that complicates contracts and customer relationships. We have seen how regulatory overlaps create legal quagmires, how fragmented markets hinder growth, and how talent slips away due to administrative friction. This document is a call to turn Europe’s strengths into scalable, competitive advantage. It is grounded in the urgency of the moment and the conviction that Europe can and must build an AI ecosystem that reflects its values, serves its citizens, and competes globally. It is our collective duty to ensure AI can also be developed in Europe, on terms that align with our priorities as Europeans.

These challenges shaped our approach and led us to agree on three key principles to unlock Europe’s AI potential. First, every recommendation, from visa reform to procurement gateways, is designed to be implemented, measured, and scaled. Second, Europe’s diversity is its strength, but its fragmentation is its Achilles’ heel: this paper embraces the complexity of the EU’s structure while offering solutions to align markets, reduce redundancy, and accelerate decision-making. Third, speed is not optional: we propose fast-track mechanisms for talent, capital, and compliance, so Europe’s innovators aren’t left behind.

At Mistral AI, we’ve built a frontier AI company in Europe because we believe in its potential. This playbook is our contribution to ensuring that potential becomes reality, not just for us, but for the entire ecosystem.

The most transformative advancements in AI, those that push the boundaries of what is possible, are driven by human genius, scientific curiosity, and the relentless pursuit of knowledge. Beyond the algorithms and computational power, AI’s potential lies in its ability to serve human intelligence, ensuring that technology remains a tool that addresses greater needs. This human-centric approach is not a philosophical ideal, but a practical necessity. AI systems that are truly innovative and beneficial to society will always require human oversight, creativity, and judgment at their core.

As a consequence, the global competition for AI talent is fierce. The scarcity of highly skilled professionals in computer science, machine learning, and related fields has turned talent into the most critical resource in the AI race. These experts operate in a global, hyper-competitive market, where other regions are attracting talent thanks to faster relocation processes, higher salaries, and dynamic career opportunities. Fortunately, our continent is home to a vibrant AI academic ecosystem, from world-class universities to cutting-edge research institutions. The foundations are here, but the full potential must now be realized. This requires deeper collaboration between academia and industry, ensuring that research excellence translates directly into innovation, and, more generally, continent-wide measures that ease relocation, simplify administrative procedures, and secure long-term conditions for talent to thrive. The goal should be clear for Europe: becoming the premier destination for global AI experts, a place where talent is not just attracted but nurtured, retained, and empowered to push the boundaries of what AI can achieve. As competing regions become less open or predictable, Europe has a unique opportunity to position itself as a global hub for AI research and development and address its shortage of highly skilled AI talent. The future of AI will be shaped first by those who invest in talent today.

1. Establish an “AI Blue Card”: a fast-track visa process enabling AI and compute researchers, engineers, and entrepreneurs, along with their immediate families (spouses/partners and dependent children under 18), to obtain a 4-year work and residency permit valid across all EU Member States. The visa would be processed in 15 working days via a unified digital portal, with possible renewability and portability between Member States.

2. Develop deeper and more systematic partnerships between European universities and AI companies. Build deep, structured collaboration between academia and the AI industry to turn Europe’s research excellence into industrial innovation. Create a pan-European network of applied AI research institutes to boost industry-oriented innovation, modeled after Fraunhofer and Carnot, to accelerate AI adoption. These multidisciplinary centres would collaborate with companies of all sizes through R&D contracts on industry-relevant AI projects, enabling joint participation in bilateral research initiatives and supporting the diffusion of AI innovation across Europe’s industrial ecosystem.
With 40% of EU companies struggling to hire AI talent, the EU must establish a comprehensive AI talent mobility initiative to enable large-scale, structured mobility across the AI talent lifecycle, from graduates to senior academics and industry experts.

Provide compute to a selected pool of leading European universities in computer science for their MSc and PhD graduate programs, via university-managed allocation portals. Access should be dynamic and curiosity-driven, with a fair-use cap to ensure broad availability while still enabling compute-intensive projects.

Scale: Unleash the full potential of the Single Market

Europe stands at a critical juncture, facing not just a challenge of scale but a deeper crisis of acceleration. More than 50% of the world’s unicorns are based in the U.S., compared to less than 10% in the EU, and a third of the EU’s unicorns have already relocated their headquarters abroad, predominantly to the U.S. This is not merely a disparity in numbers; it reflects a systemic failure where European startups, despite their potential, are forced to seek growth opportunities elsewhere. The issue runs deeper than funding gaps or market size. It’s about an ecosystem that doesn’t let its most promising companies flourish at home.

The fragmentation of Europe’s single market into 27 distinct regulatory landscapes creates an environment where expanding from Berlin to Paris can feel more complex than entering the entire U.S. market. Additionally, Europe accounts for just 5% of global venture capital funds, compared to 52% for the U.S. and around 40% for China. As a result, European startups grow slower, exit earlier, and too often end up acquired by non-European companies or relocate entirely to access larger markets and simpler regulations.

This represents a double loss for Europe. First, we fail to cultivate homegrown tech leaders that could drive innovation, create jobs, and generate wealth across the single market. Second, and more critically, we surrender our most promising companies to competing regions, reinforcing U.S. and Chinese dominance in the very sectors where Europe could be leading.

The solution demands a fundamental shift in how Europe supports its scale-ups. We need more harmonization to unleash the full potential of the single market, we need financial mechanisms that can help fill the funding gap, and, perhaps most importantly, we need to change the narrative that scaling in Europe is impossible. The single market remains Europe’s greatest competitive asset, but it must evolve to better enable our companies to scale and compete globally. It’s time to unlock its full potential.

Leverage the current momentum for simplification to streamline the EU digital regulatory framework. The complex EU digital regulatory framework should be revised to clarify inconsistencies, eliminate overlaps, and reduce compliance efforts without sacrificing underlying regulatory goals.

Create a centralized, multilingual digital portal for AI developers to generate standardized reports, access real-time guidance, and automate compliance checks across the AI Act and GDPR, drawing on the European Single Access Point (ESAP).
Establish a unified, regulation-based, digital-first system for the automatic recognition of corporate acts across all EU Member States, eliminating bureaucratic barriers and legal uncertainty for companies scaling within the Single Market.

Create an EU corporate banking passport via the European Digital Identity Wallet to grant all EU-based companies a right to a basic payment account and a harmonized, digital-first KYC passporting scheme.

Invite Member States to adopt a flexible, subsidiarity-compliant ESOP Alignment Framework that aligns the taxation event with the point of sale, while respecting national tax rates.

Create an SIU Passport by extending the European Commission’s Q4 2025 proposal on the savings and investments union (SIU), allowing companies to raise capital in any EU Member State without refiling documentation.

Extend the European Single Access Point (ESAP) to corporate filings and investor search via a centralized digital platform, the SIU Hub, enabling companies to submit prospectuses, financial reports, and compliance documents once and have them automatically recognized and distributed across all EU national regulators.

An AI EuVECA Label would be a certified designation for qualifying funds that commit a minimum percentage of their investments to AI and deep-tech companies.

As the European Union implements the 2024 Solvency II reform and advances the IORP II review, policymakers should ensure that prudential and investment frameworks actively support long-term equity investments in strategic sectors such as artificial intelligence.

Artificial intelligence is not an abstract promise. It is a tool that fulfills its potential when embedded in the real economy. The most advanced algorithms, the most powerful models, are meaningless if they remain confined to labs or siloed in superficial applications. AI’s true value emerges when it is deployed at scale, solving concrete challenges, whether in the most complex industrial use cases or the daily operations of SMEs.

The opportunity is even bigger considering that embedding AI in a real-world context is not just a matter of technological deployment: it can trigger a virtuous cycle of demand and supply. Every new adoption refines the technology, every real-world application accelerates research, and every successful deployment attracts more investment. In short, adoption doesn’t just follow innovation, it drives it. The equation is simple: more adoption today means better AI tomorrow.

Public procurement, given its enormous annual value in Europe, is a major lever for innovation. However, Europe faces a stark reality: only 20% of EU enterprises have adopted AI, and a mere 11% of SMEs are leveraging its potential. Worse still, over 80% of our digital infrastructure remains dependent on non-EU providers. This leaves us vulnerable to extra-territorial controls that threaten strategic autonomy and could disrupt the seamless deployment of AI technologies across the continent. Without rapid, large-scale adoption, Europe’s AI ecosystem faces stagnation, deprived of the market signals that guide investment and the real-world use cases that refine technology. To address this issue, the solution requires a three-pronged strategy.
First, public procurement must become a market-shaping tool: by mandating European AI solutions in government contracts, we send a clear signal that homegrown technology is the best option. Second, we must remove barriers for SMEs, ensuring that even the smallest firms can access, adopt, and benefit from AI through subsidies, training, and simplified procurement processes. Third, we need strategic incentives to accelerate private-sector adoption, from tax credits for companies committing to European solutions to compute vouchers that offset upfront costs.

This is not about adoption for adoption’s sake. It is about ensuring that Europe’s digital transformation goes fast enough and is powered by European technology, on European terms.

15. EU institutions lead by example in AI-enhanced public administration
Position EU institutions as global leaders in AI-enhanced governance by adopting European AI solutions, demonstrating how public administration can be smarter, faster, and more citizen-centric through homegrown innovation.

This measure aims to remove barriers for SMEs, scale-ups, and innovative companies, ensuring transparent, efficient, and inclusive access to public contracts across the Single Market.

Establish a targeted European preference mechanism in public procurement for strategic sectors, using public spending to strengthen technological autonomy, economic security, and industrial competitiveness.

Establish a sustainability reporting and incentive framework requiring all AI providers operating in the EU with annual revenues exceeding €500 million to submit standardized, third-party-verified life-cycle assessments covering the full life cycle of their AI systems as a prerequisite for eligibility for public procurement contracts.

While the continent has made significant strides in AI research and regulation, its infrastructure, the backbone of AI development, remains a weak point. Traditional data centers, designed for general-purpose cloud computing, are ill-equipped to handle the demands of frontier AI models, which require ultra-dense, high-performance compute infrastructure. Without this infrastructure, Europe risks falling further behind the United States and Asia, deepening its dependency on non-European hyperscalers for everything from model training to industrial applications.

Because AI is not just another technological advancement, building AI-ready infrastructure is a foundational capability that will shape Europe’s economic competitiveness, strategic autonomy, and ability to address global challenges, from climate change to healthcare. Yet today, most of Europe’s AI workloads run on infrastructure controlled by foreign providers, leaving the continent vulnerable to geopolitical risks, supply chain disruptions, and the loss of economic value. If Europe fails to act, it could cede leadership in AI to others, missing out on the productivity gains, innovation, and jobs that come with it.

Europe has the resources, expertise, and ambition to build its own AI future, aligned with its values. The key lies in ultra-dense, high-performance compute infrastructure, purpose-built for the demands of next-generation AI. The infrastructure required for frontier AI is fundamentally different from what exists today. Modern AI models demand power densities of 100 kW per rack or more, far beyond the capabilities of traditional data centers. They require advanced cooling systems, such as liquid cooling, to manage heat loads efficiently, and they must be scalable to keep pace with the rapid evolution of AI. Most importantly, this infrastructure must be controlled by European entities to ensure that strategic decisions, economic benefits, and data governance remain in Europe.

By investing in ultra-dense, independent AI infrastructure, Europe can reduce its dependencies on non-European hyperscalers, ensuring that its AI ecosystem is resilient and self-sufficient.
It can also turn its energy abundance, from nuclear to renewables, into a competitive advantage, powering AI innovation with sustainable, low-carbon energy. This will create high-value jobs in tech, energy, and manufacturing, while fostering a new generation of European AI leaders. Finally, doing so will allow the European Union to align AI development with its climate goals, by building infrastructure that is not only powerful but also energy-efficient and sustainable. The question is no longer whether Europe should build this infrastructure, but how to do it quickly, efficiently, and at scale.

To seize this opportunity, Europe must adopt a coordinated, forward-looking policy approach that prioritizes ultra-dense, European-controlled AI infrastructure. This requires action on multiple fronts.

Europe needs clear, ambitious standards for what constitutes AI-ready infrastructure. This means defining technical requirements that go beyond traditional data centers: power density thresholds (≥100 kW per rack), to ensure that only infrastructure capable of supporting frontier AI qualifies for public support, and European ownership and control, ensuring that the infrastructure and the data it processes remain under EU jurisdiction. These standards should be embedded in public funding programs, procurement policies, and energy allocation strategies, creating a level playing field that favors innovation and autonomy over legacy systems.

Building ultra-dense AI infrastructure requires significant upfront investment, but the long-term benefits (economic growth, technological leadership, and strategic resilience) far outweigh the costs. Policymakers can accelerate deployment through public procurement policies that prioritize ultra-dense, European-controlled infrastructure for critical workloads, from scientific research to public services. By leveraging public funds strategically, Europe can ensure that its AI infrastructure is built to last.

Europe’s energy resources (nuclear, wind, hydro, and solar) are a strategic asset for AI development. However, these resources must be allocated in a way that maximizes their impact.
This means prioritizing ultra-dense AI data centers in energy planning, ensuring they have access to the low-carbon power they need to operate sustainably; streamlining permitting and grid connections for high-performance data centers, reducing bureaucratic hurdles that slow down deployment; and fostering partnerships between AI providers and energy companies to co-develop infrastructure that is both powerful and sustainable.

Europe’s energy transition and its AI ambitions are two sides of the same coin. By aligning them, policymakers can create a virtuous cycle: AI infrastructure that supports the green transition, and a green transition that powers AI innovation.

No single country or company can build Europe’s AI future alone. Success will require collaboration across borders and sectors: public-private partnerships to co-develop and deploy ultra-dense data centers, leveraging the strengths of governments, energy providers, and tech companies; investment in R&D to advance cooling technologies, chip design, and interoperability, ensuring that Europe remains at the cutting edge of AI infrastructure; and skills development programs to prepare the workforce for the AI-driven economy, from data scientists to infrastructure engineers.

Europe’s diversity is one of its greatest strengths. By fostering collaboration, policymakers can ensure that AI infrastructure is distributed across the continent, creating hubs of innovation in every region. The revision of the public procurement framework, as defined in Measure 17, should also be complemented by a specific preference for AI infrastructure projects within the Cloud and AI Development Act (CAIDA).

Establish a future-proof, equitable legal framework for the training of AI models in Europe, which is a sine qua non for ensuring Europe’s global competitiveness in AI and strengthening the European creative economy.

Create a European data-sharing framework where companies contribute pseudonymized, FAIR-compliant datasets to a centralized portal in exchange for tangible economic and strategic assets, in order to accelerate applied AI research and development.

22. Create a centralized and AI-ready archive for AI training and cultural preservation
Create a centralized, multilingual repository of public domain works to provide high-quality training data for AI models, preserve Europe’s cultural heritage, and reduce dependency on non-EU datasets.
Mistral AI is a pioneering company in generative artificial intelligence, empowering the world with the tools to build and benefit from the most transformative technology of our time. The company democratizes AI through high-performance, optimized, and cutting-edge open-source models, products, and solutions, as well as end-to-end infrastructure with Mistral Compute. Headquartered in France and independent, Mistral AI defends a decentralized and transparent approach to technology, with a strong global presence in the United States, United Kingdom, and Singapore.
...
Read the original on europe.mistral.ai »