10 interesting stories served every morning and every evening.




1 710 shares, 49 trendiness

All elementary functions from a single binary operator

...

Read the original on arxiv.org »

2 623 shares, 27 trendiness

DIY soft drinks

In 2020, I started making my own soft drinks, including a sugar-free, caffeine-free cola! If you're just looking for the current recipes, you can find them on GitHub. They were inspired by recipes like Open Cola and Cube Cola.

Otherwise, read on for "log book"-style instructions and pictures of my first attempts, which have been adapted from this Mastodon thread.

We start by making a flavor emulsion from essential oils! These oils are extremely strong and can cause skin irritation, so if you want to be safe, wear latex gloves for this step!

We also only need super small quantities. In this attempt, I'm aiming for:

There! In total, we have about 2 ml of essential oil. This will lend flavor to about 9 L of cola!

But oils don't dissolve in water, right? They would just float on the surface!

So what we'll add next is gum arabic, a natural emulsifier that will help the oil form tiny, tiny droplets that stay dispersed in water!

Combine 2 g of gum arabic, 4 ml of water, and our oil mixture with a hand mixer. Keep mixing until it emulsifies: it will take on a milky brown color and the oily phase will disappear. This took me about 5 minutes.

Next, we add 40 ml of caramel color, which will give the cola its dark appearance, as well as 5 g of citric acid and an additional 10 ml of water. You can also add caffeine at this point, but one of the main motivations for me to make my own cola is to have cola without caffeine (it gives me headaches).

Stop mixing when it starts smiling at you! :D This is our finished cola concentrate. It smells delicious!

Other recipes would now integrate this into a sugar syrup made from 1 kg of sugar, but I want to try a variation using artificial sweeteners, for dieting reasons. This is a combination of sodium cyclamate and saccharin, which is pretty common in Germany. I'll use 70 ml.

This gives us about 120 ml of finished cola syrup! \o/ I've been following the Cube-Cola recipe.

Because this is still a very concentrated solution and hard to handle, I'm diluting it down to 1 L. In theory, this should give a nice cola when diluted further at a ratio of 1:8. First taste test coming up!
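If you want to sanity-check the ratios in this post, a few lines of arithmetic do the trick (a back-of-envelope sketch using only the quantities mentioned above):

```python
# Back-of-envelope check of the dilution chain:
# ~2 ml essential oil -> ~120 ml syrup -> pre-diluted to 1 L -> served at 1:8.

oil_ml = 2            # total essential oil in the emulsion
prediluted_ml = 1000  # syrup diluted down to 1 L for easier handling
serving_ratio = 8     # 1 part concentrate to 8 parts water

# Final volume: each ml of the 1 L concentrate yields 1 + 8 = 9 ml of cola.
final_ml = prediluted_ml * (1 + serving_ratio)
print(final_ml / 1000)  # -> 9.0 liters, matching the "about 9 L" above

# Oil concentration in the finished drink, in ml of oil per liter of cola:
print(round(oil_ml / (final_ml / 1000), 2))  # -> 0.22 ml/L
```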

It's good! It's really good! The sweetness is just right for me, and the oil blend tastes nice and cola-y! :)

I like my drinks really sour, so I might add another… 10 g of citric acid to this batch. There is a slightly bitter aftertaste, which I attribute to the artificial sweetener I'm using.

I might also tweak the oil ratios to accentuate the citrus flavors a bit more in the next batch. And I thought this would be too much food coloring, but it seems pretty perfect!

What I'd be really excited to try is making other flavors: Orange soda! Cherry soda! Almond & apricot soda!

Open Soda (Note: site is no longer online as of 2026…) has two pretty weird ones (including a bubble gum soda?!), but that's pretty much all I found. There still seems to be potential for reverse engineering! :)

And here are two more resources I found while preparing my ingredients today: Jan Krüger's blog post about tweaking cola recipes and Richard Grove's Notes on Making Cola, which go deeper into food chemistry!

Made a second batch of cola syrup without caramel color. It's much weirder to drink than I expected.

I also switched to sucralose as a sweetener, hoping that it would have less of an aftertaste. Instead of the 1000 g of sugar in the original recipe, I used 1.6 g of sucralose, which is ridiculous, but it's still too sweet for my taste. :O
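That substitution implies a sweetness factor, which lines up with the roughly 600x that is commonly quoted for sucralose relative to sugar (a quick check using the amounts from this log):

```python
# Implied sweetness factor of the substitution described above:
# 1000 g sugar in the original recipe, replaced by 1.6 g sucralose.
sugar_g = 1000
sucralose_g = 1.6
implied_factor = sugar_g / sucralose_g
print(implied_factor)  # -> 625.0, close to the ~600x usually quoted for sucralose

# Later batches halve and then quarter the dose; rough sugar equivalents:
for dose_g in (0.8, 0.4):
    print(dose_g, "g sucralose ~", dose_g * implied_factor, "g sugar")
```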

And I made a tool upgrade! It's important to get these small quantities right…

This week's batch: blood orange + almond + lime (in a 2:1:1 ratio), plus as much orange food coloring as I dared to put in! :D

I cut the amount of sucralose in half — still a bit too sweet for my taste. Had to completely guess the oil ratios, but I'm happy with how it turned out! My whole apartment smells like marzipan now.

This time, when I did the mixing in a big plastic bowl, I noticed some floating stuff on the concentrate, which I assume are tiny plastic shavings produced by the hand mixer? I filtered them out, and might switch to glass or metal containers.

In today's cola batch, I reduced the amount of sucralose further to 0.4 g, and added 0.07 g of vanillin as well as a bit more cassia oil. Good stuff! I call it "Syntez-Cola", because I combined ideas from Cube-Cola and Jan Krüger's recipe! :)

… this is the point where I should start a blog post, and a Git repo with version numbers and a changelog, I guess. :P (Edit 2026: Here you go!)

I found decaf Coca-Cola in a supermarket last week, and could do a direct taste comparison. At this point, I prefer my cola by a large margin! Coca-Cola tasted bland, like molten cola popsicles. It had an interesting dryness to it, though, which I'm not sure how to replicate.

Made a simple orange soda today, really happy with how it turned out! I put the recipe here, along with my modified cola recipe and the almond + blood orange drink I invented!

blinry almond 0.1.1 contains less almond oil, to highlight the citrus flavors a bit more. Find the updated recipe here.

One of these two bottles is a version with sugar instead of sucralose. 400 g of sugar seems to be a good replacement for 0.6 g of sucralose. Curious to see whether people will be able to taste the difference! :) (Edit 2026: They totally could!)

blinry orange 0.1.1, now with 14% more orange! :D Recipe here.

Since these early experiments, I have made a handful of batches of these recipes without modification. I still think they're really nice! blinry orange, especially, is rather unique.

If you try them for yourself, I'd be happy about feedback!

And I'm still thinking about what ingredients might go into a DIY Mountain Dew, or a DIY Fassbrause…

You can add your comment to this post in the Fediverse! Alternatively, drop me a mail at mil@blinry.org.

...

Read the original on blinry.org »

3 496 shares, 22 trendiness

Serenity Forge (@serenityforge.com)

A statement regarding the removal of DDLC from the Google Play Store:

...

Read the original on bsky.app »

4 443 shares, 18 trendiness

The peril of laziness lost

In his classic Programming Perl — affectionately known to a generation of technologists as the "Camel Book" — Larry Wall famously wrote of the three virtues of a programmer as laziness, impatience, and hubris:

If we're going to talk about good software design, we have to talk about Laziness, Impatience, and Hubris, the basis of good software design. We've all fallen into the trap of using cut-and-paste when we should have defined a higher-level abstraction, if only just a loop or subroutine. To be sure, some folks have gone to the opposite extreme of defining ever-growing mounds of higher level abstractions when they should have used cut-and-paste. Generally, though, most of us need to think about using more abstraction rather than less.

Of these virtues, I have always found laziness to be the most profound: packed within its tongue-in-cheek self-deprecation is a commentary on not just the need for abstraction, but the aesthetics of it. Laziness drives us to make the system as simple as possible (but no simpler!) — to develop the powerful abstractions that then allow us to do much more, much more easily.

Of course, the implicit wink here is that it takes a lot of work to be lazy: when programmers are engaged in the seeming laziness of hammock-driven development, we are in fact turning the problem over and over in our heads. We undertake the hard intellectual work of developing these abstractions in part because we are optimizing the hypothetical time of our future selves, even if at the expense of our current one. When we get this calculus right, it is glorious, as the abstraction serves not just ourselves, but all who come after us. That is, our laziness serves to make software easier to write, and systems easier to compose — to allow more people to write more of it.

Ideally, you would want those who benefit from abstractions to pay the virtue of laziness forward — to use their new-found power to themselves labor on the abstractions they make. But a consequence of the broadening of software creation over the past two decades is that it includes more and more people who are unlikely to call themselves programmers — and for whom the virtue of laziness would lose its intended meaning.

Worse, the extraordinary productivity allowed by modern abstractions has given rise to an emphasis on a kind of false industriousness. Pejoratively, this was the rise of the brogrammer, with the virtue of ironic laziness and hammock-driven development displaced by hustle porn about crushing code.

Onto this dry tinder has struck the lightning bolt of LLMs. Whatever one's disposition toward software creation, LLMs allow it to be applied with (much) greater force, so it should be of little surprise that LLMs have served as anabolic steroids for the brogrammer set.

Elated with their new-found bulk, they can't seem to shut up about it. Take, for example, brogrammer-of-note Garry Tan, who has been particularly insufferable about his LLM use, bragging about his rate of thirty-seven thousand lines of code per day ("and still speeding up"):

If laziness is a virtue of a programmer, thinking about software this way is clearly a vice. And like assessing literature by the pound, its fallacy is clear even to novice programmers.

As for the artifact that Tan was building with such frenetic energy, I was broadly ignoring it. Polish software engineer Gregorein, however, took it apart, and the results are at once predictable, hilarious, and instructive: a single load of Tan's "newsletter-blog-thingy" included multiple test harnesses (!), the Hello World Rails app (?!), a stowaway text editor, and then eight different variants of the same logo — one of which was zero bytes.

The problem here isn't these issues per se (which are all fixable!), and it isn't even the belief that the methodology that created them represents the future of software engineering (though that is certainly annoying!).

The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone's) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters. As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don't want to waste our (human!) time on the consequences of clunky ones. The best engineering is always borne of constraints, and the constraint of our time places limits on the cognitive load of the system that we're willing to accept. This is what drives us to make the system simpler, despite its essential complexity. As I expanded on in my talk The Complexity of Simplicity, this is a significant undertaking — and we cannot expect LLMs that do not operate under constraints of time or load to undertake it of their own volition.

This is not to say, of course, that LLMs won't play an important role in our future: they are an extraordinary tool for software engineering, but — as outlined in our guidelines for LLM use at Oxide — they are but a tool. We can put them to use tackling the non-ironic (and non-virtuous!) aspects of programmer laziness — helping us take on thorny problems like technical debt — or use them to promote our engineering rigor, but it must be in service of our own virtuous laziness: to yield a simpler, more powerful system that serves not just ourselves, but the generations of software engineers to come after us.

...

Read the original on bcantrill.dtrace.org »

5 362 shares, 24 trendiness

How the "AI Loser" may end up winning

A few weeks ago I wrote about how I thought intelligence is becoming a commodity. The idea is quite straightforward, and widespread now: when everyone races to build the best model, the models get better, but so does every other model eventually. Every dollar spent on a bigger training run makes the previous one cheaper. The distance between frontier, second-best, and open-source alternatives is collapsing fast (actually, Gemma4, Kimi K2.5, and GLM 5.1 are becoming my bedside models these days). Even more, as models become better, the amount of intelligence that can be deployed on local, lower-capability hardware increases significantly.

The irony of this situation is that this commoditisation of intelligence is benefiting the company that everyone was framing as the "AI loser": Apple.

There's a version of the last three years where Apple genuinely failed at AI. They had Siri before anyone had a serious voice assistant, and then watched ChatGPT eat their lunch from its very first release (even before it had introduced its native voice interaction). Apple had no flagship frontier model (or even a vanity open-source one), and no $500B compute commitment with the usual suspects. Meanwhile, the rest of the AI labs and big tech companies were racing to win the next state-of-the-art benchmark by burning bags of cash.

What this also meant is that while these companies were burning money at a rate that would make a sovereign wealth fund uncomfortable, Apple was (and still is) sitting on a pile of undeployed cash (to the point of even increasing their stock buybacks), giving them optionality.

To me, OpenAI is the most paradigmatic example of this "infinite money-burning machine". OpenAI raised at a $300B valuation and then shut down Sora, the video product they'd been positioning as a creative-industry flagship, because it was running at roughly $15M a day in costs against $2.1M in daily revenue. Disney had already signed a three-year licensing deal for Sora to generate content from Marvel, Pixar, and Star Wars characters. They were finalising a $1B equity stake in OpenAI. When Sora died, so did the billion. A $1B investment evaporated, because the product it was staked on couldn't pay for itself (reducing the buffer that accommodates their daily burn).

On the infrastructure side: OpenAI signed non-binding letters of intent with Samsung and SK Hynix for up to 900,000 DRAM wafers per month, roughly 40% of global output. These were, of course, non-binding. Micron, reading the demand signal, shut down its 29-year-old Crucial consumer memory brand to redirect all capacity toward AI customers. Then Stargate Texas was cancelled, OpenAI and Oracle couldn't agree on terms, and the demand that had justified Micron's entire strategic pivot simply vanished. Micron's stock crashed.

I don't know about you, but I don't see these as the behaviours of someone who is winning the AI race, regardless of how well their models do in benchmarks and how much they are burning on infrastructure. A small miscalculation in expected revenue, and you are out of the game (I am actually of the opinion that without some kind of bailout, OpenAI could be bankrupt in the next 18-24 months, but I am horrible at predictions).

My sense is that the labs' bet was always that raw model capability, i.e. intelligence, along with the infrastructure required to run it, would stay scarce. Those who managed to secure the best model and the infrastructure to run it at scale would get the best moat. But I am afraid that having the best model may not, in itself, be enough moving forward. Less capable models are becoming as capable as previous versions of the frontier models.

The best recent example I can think of is Gemma 4, Google's open-weight model. It was built to run on a phone, scores 85.2% on MMLU Pro, and matches Claude Sonnet 4.5 Thinking on the Arena leaderboard. 2 million downloads in its first week. Models that would have been state-of-the-art eighteen months ago now run on a laptop, and they get better every quarter.

If you haven't tried Gemma4 yourself, I highly recommend it. I am running it on my AMD Ryzen AI Max+, and its performance in terms of tokens per second and intelligence is so good that I have already migrated some of my personal tools to use this model as the backend without visibly impacting their output. This trend could really change the way we access intelligence in the next few months.

I feel that some of the labs see this coming. Anthropic has been particularly aggressive about it: they are releasing new (actually useful) tools every day that work like a charm with their models in order to lock users into their ecosystem. Claude Code for developers, Claude Cowork for teams, the recent Claude Managed Sessions to orchestrate agents, all designed to put Claude inside the workflows people are already in.

The logic behind it: if the model itself won't hold the moat, capture the usage layer and make switching painful. I think this is brilliant, and seeing how much Anthropic is growing in number of users and revenue, it seems to be paying off. The economics of their plans are still rough, though. One analysis found a max-plan subscriber consuming $27,000 worth of compute on their $200 Max subscription. The labs are subsidising the demand they're chasing, which justifies their level of burn (let's see how long they can afford these subsidies).

Apple, by contrast, has spent almost nothing on AI infrastructure and on subsidising users' token burn. And this may be giving them more optionality and leverage than any of the other companies that jumped head first into the AI race.

In that earlier post, I argued that if intelligence becomes abundant, context becomes the scarce resource. A model that can reason about anything but knows nothing about you or the environment it operates in is a generic tool. What makes AI genuinely useful day-to-day is reasoning plus personal context: your messages, your calendar, your code, your tools, your health data, your photos, your habits. This is where I think Anthropic is doing an amazing job with their "Claude suite".

But Apple already has all this context and access to your environment through their 2.5 billion active devices. Each one is a context mine that users have been filling for years. Health data from the Apple Watch. Every photo taken on an iPhone. Notes, messages, location history, app behaviour, emails, and awareness of your environment through your device's pool of sensors. Why build a commodity when they already have the context that can become their moat?

And they even have the ability to keep all this data on-device, which is where the "Privacy. That's iPhone" positioning becomes something more than a PR strategy, and could actually make a comeback as one of their core value propositions. Apple spent years using privacy as a differentiator against the ad-driven models of Google and Meta. It worked, but it always felt a bit abstract and, honestly, fake. Now it could become really concrete. Would you hand OpenAI your medical records and fifteen years of photos to get better AI answers? Probably not. Some are, but I personally wouldn't like Sam to have that personal data of mine. Would you let a model running entirely on your device (no network request, no data leaving your phone) access all of that? That's a different question. The on-device model gets full context because it never leaves the hardware. Apple built the reputation and the architecture for this when no one else thought it mattered.

Of course, there are still technological barriers to making this possible, but I feel like we may be getting there.

In this context, the Gemini deal, where Apple signed a $1B agreement to license Google's frontier model for the queries that need cloud-scale reasoning, makes total sense. Apple didn't build a frontier model. They bought access to one, at a price that's a rounding error against OpenAI's weekly compute bill. What they kept in-house: the context layer, the on-device stack, and the operating system that mediates everything.

It turns out Apple had another unexpected lever for AI, as shown by the Mac Mini craze after OpenClaw's release. Apple Silicon wasn't built specifically for AI; it was built for efficiency, for battery life, for thermal performance, for the hardware/software co-design that Apple had been running for fifteen years. But it turned out to be the perfect architecture for running local models efficiently.

The key decision is unified memory. On a conventional architecture (that of most laptops, and even traditional data center-grade GPUs), the CPU and GPU are separate chips with separate memory pools. Moving data between them is slow and power-hungry. Nvidia's GPUs are extremely fast at matrix operations, but they sit on the other side of a PCIe bus from the CPU, and feeding them is a constant bottleneck (as discussed when presenting the difference between DRAM and HBM in this post from a few weeks ago).

Apple's M-series and A-series chips put the CPU, GPU, and Neural Engine (their proprietary accelerator) on the same die, sharing one high-bandwidth memory pool. No bus crossing, no transfer overhead, no latency switching between CPU and GPU work. For video editing or compiling in Xcode, this is a nice efficiency win. For LLM inference, it has been key.

As also described in my post about RAM memory and TurboQuant, LLM inference is currently memory-bandwidth bound, not compute bound. The bottleneck isn't so much how fast you can multiply matrices; it's how fast you can stream model weights from memory into the compute units, and how big a KV cache you can store to avoid having to re-compute it. Apple's unified pool gives every compute unit direct, high-bandwidth access to the same memory simultaneously. That's exactly the operation inference needs.
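The bandwidth argument can be made concrete with a rough bound: at decode time, all active weights have to stream through memory once per generated token, so bandwidth divided by active weight bytes caps tokens per second. A sketch with illustrative numbers (the model size, quantization, and bandwidth figure below are assumptions for the example, not measured Apple specs):

```python
# Rough upper bound on decode speed: every generated token must stream all
# active weights through memory once, so
#   tokens/sec <= memory_bandwidth / bytes_of_active_weights.

def max_tokens_per_sec(active_params_billions: float,
                       bytes_per_param: float,
                       bandwidth_gb_s: float) -> float:
    """Bandwidth-bound ceiling on decode throughput, ignoring compute and KV cache."""
    active_bytes_gb = active_params_billions * bytes_per_param
    return bandwidth_gb_s / active_bytes_gb

# Illustrative: an 8B-parameter model at 4-bit quantization (~0.5 bytes/param)
# on a unified-memory system with ~400 GB/s of bandwidth.
print(max_tokens_per_sec(8, 0.5, 400))  # -> 100.0 tokens/sec ceiling
```

Real throughput lands below this ceiling once compute, KV-cache reads, and scheduling overhead are counted, but the bound explains why bandwidth, not FLOPS, is the number that matters for local inference.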

This is what makes the LLM in a Flash technique work so well on Apple hardware. Someone recently ran Qwen 397B, a 209GB model, on an M3 Max Mac at ~5.7 tokens per second, using only 5.5GB of active RAM. The weights live on the SSD and stream in at ~17.5 GB/s as needed. This works because Qwen is a mixture-of-experts architecture: each token only activates a small subset of expert layers, so you only ever need a fraction of the 209GB resident in memory. The SSD throughput Apple achieves (faster than their own figures from the original LLM in a Flash paper) comes from storage architecture they built for iPhone responsiveness, not AI. Claude wrote the ~5,000 lines of Objective-C and Metal shaders to make it all work. A 400-billion-parameter model, on a consumer laptop, from 5.5GB of RAM (another win of the autoresearch flow discussed in this newsletter).
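The reported figures hang together: dividing the SSD throughput by the decode speed gives the implied working set streamed per token (a back-of-envelope check using only the numbers quoted above):

```python
# Consistency check on the reported Qwen-on-M3-Max numbers: if each token
# touches only a subset of experts, the implied per-token working set is small.
ssd_throughput_gb_s = 17.5   # reported streaming rate from the SSD
tokens_per_sec = 5.7         # reported decode speed
model_size_gb = 209          # full model on disk

implied_gb_per_token = ssd_throughput_gb_s / tokens_per_sec
print(round(implied_gb_per_token, 2))  # -> 3.07 GB streamed per token

# As a fraction of the full model: only ~1.5% of the weights per token.
print(round(implied_gb_per_token / model_size_gb * 100, 1))  # -> 1.5
```

That ~1.5%-per-token figure is exactly the mixture-of-experts property the paragraph describes: the SSD only has to feed the experts a token actually activates, not the whole 209GB.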

What I find most interesting about all of this is the platform dynamic it can result in. Think about the App Store. Apple didn't build the apps; they built the platform where apps ran best, and the ecosystem followed. Developers didn't target iOS because Apple asked; they targeted it because the users were there, the tooling was good, the hardware was consistent. My feeling is that the same thing could happen now with local inference. MLX is already the de facto framework for on-device AI. Gemma, Qwen, Mistral: the most relevant model architectures have MLX support. Apple doesn't need to win the model race if they manage to become the de facto platform where the models (or the agents that use them) run. Again, a great example of this is the Mac Mini craze after OpenClaw went viral.

I keep going back and forth on this, honestly, and I still don't know whether this was Apple's strategy all along, or whether they didn't feel in a position to make a bet and are just flowing with events as they unfold, maximising their optionality.

The hardware/software co-design strategy has been a key focus for years, and one that I've always agreed with myself (as an electrical engineer by training, I've always been into hardware/software co-design). If you can afford it, I think that's the right approach. The privacy positioning, the on-device processing focus, the decision to build their own silicon when the rest of the industry was happy buying Nvidia and Intel: all of those were choices Apple made when they were commercially risky and the direction wasn't obvious. It is true that they were made with cost and governance in mind, not AI, but it turned out well for them.

What Apple couldn't have planned (or could they?) is that their unified memory architecture would be a perfect fit for LLMs, and that open-weight models would get this capable, this fast, removing the need for huge AI-infrastructure investment on their side. Or that the model race would commoditise intelligence as quickly as it did. Or that someone would stream a 400B-parameter model from an SSD and it would actually work.

So some of this is luck. But it's the kind of luck that finds you when you've built the right foundation, even if you built it for completely different reasons. They were definitely well-positioned.

The rest of the industry spent three years racing to see who could build the best model, with Apple looking on from the sidelines, waiting to understand how their devices and ecosystem could fit into this future. I don't know if this is exactly what happened, but I feel it was smart. Risky, but smart.

I genuinely don't know how this plays out over the next few years. The labs are not standing still, and Apple's AI track record (looking at you, Siri; you already suck a bit) is not exactly flawless. But it's hard to imagine a world in which 2.5 billion devices, carrying your entire personal context, running capable models locally on purpose-built silicon, with Gemini on call for the hard stuff, and incurring variable costs for inference instead of expensive CAPEX investment, would be a bad position in a future where AI is everywhere.

Whether that was strategy or fortune, I'll leave for you to decide. And if you do, please let me know what you think. My TL;DR is that, to my surprise, I am still bullish on Apple and its relevance in an AI-centric future.

Disclaimer: To frame the opinion in this post, I just want to be clear that I am not one of those Apple fanboys. Proof of this is that this post was written on a Linux machine and that I don't even own a Mac :)

...

Read the original on adlrocha.substack.com »

6 350 shares, 7 trendiness

Mount Lebanon

...

Read the original on maps.apple.com »

7 346 shares, 30 trendiness

Why Most Engineering Organizations Are Flying Blind

This post works through the financial logic of software teams, from what a team of eight engineers actually costs per month to what it needs to generate to be economically viable. It also examines why most teams have no visibility into either number, how that condition was built over two decades, and what the arrival of LLMs now means for organizations that have been treating large engineering headcount as an asset.

Software development is one of the most capital-intensive activities a modern company undertakes, and it is also one of the least understood from a financial perspective. The people making daily decisions about what to build, what to delay, and what to abandon are rarely given the financial context to understand what those decisions actually cost. This is not a coincidence. It is a structural condition that most organizations have maintained, quietly and consistently, for roughly two decades.

A software engineer in Western Europe costs somewhere between €120,000 and €150,000 per year when you account for salary, social fees, pension contributions, equipment, social activities, management overhead, and office space. Call it €130,000 as a reasonable middle estimate. A team of eight engineers therefore costs approximately €1,040,000 per year, or €87,000 per month, or roughly €4,000 for every working day.
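Made explicit, the arithmetic looks like this (a sketch using the post's estimates; the 260 working days per year is an assumption, 52 weeks of 5 days):

```python
# The team-cost figures above, spelled out (all amounts in EUR).
cost_per_engineer_year = 130_000   # middle estimate, fully loaded
team_size = 8
working_days_per_year = 260        # assumption: 52 weeks x 5 days

team_cost_year = team_size * cost_per_engineer_year      # 1,040,000
team_cost_month = team_cost_year / 12                    # ~86,667, i.e. ~87k
team_cost_day = team_cost_year / working_days_per_year   # 4,000

print(team_cost_year, round(team_cost_month), team_cost_day)
```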

Most engineers do not know this number. Many of their managers do not either. And in the organizations where someone does know it, the number rarely makes its way into the conversations where prioritization decisions are actually made.

This matters because every decision a team makes carries an implicit cost that compounds over time. Choosing to spend three weeks on a feature that serves 2% of users is a €60,000 decision. Delaying an operational improvement for a quarter is a decision with a calculable daily price tag. Rebuilding a platform because the current one feels embarrassing, rather than because customers are leaving, is a capital allocation choice that would look very different if the people making it were spending their own money.

Consider a team of eight engineers whose mission is to build and maintain an internal developer platform serving one hundred other engineers. This is a common organizational structure, and it is one where the financial logic is rarely examined carefully.

The team costs €87,000 per month. To justify that cost, the platform they build needs to generate at least €87,000 per month in value for the engineers who use it. The most direct way to measure that value is through time saved, since the platform's purpose is to make other engineers more productive.

At a cost of €130,000 per year, one engineer costs approximately €10,800 per month, or around €65 per working hour. For the platform team to break even, their platform needs to save the hundred engineers they serve a combined total of 1,340 hours per month. That is 13.4 hours per engineer per month, or roughly three hours per week per person.
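The break-even calculation, spelled out (using the post's figures; the weeks-per-month constant is an assumption):

```python
# Break-even math for the platform team (amounts in EUR).
team_cost_month = 87_000
engineer_cost_hour = 65
engineers_served = 100
weeks_per_month = 4.33   # assumption: 52 weeks / 12 months

hours_to_break_even = team_cost_month / engineer_cost_hour
per_engineer_month = hours_to_break_even / engineers_served
per_engineer_week = per_engineer_month / weeks_per_month

print(round(hours_to_break_even))    # ~1338 hours/month (the post rounds to 1,340)
print(round(per_engineer_month, 1))  # ~13.4 hours per engineer per month
print(round(per_engineer_week, 1))   # ~3.1 hours per week: the "three hours" bar
```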

Three hours per week is achievable. A well-built platform that eliminates manual deployment steps, reduces environment setup time, or removes the need for repetitive configuration work can easily clear that bar. Time saved is the most direct measure for a platform team, though value can also come from reducing outages, which carries a direct revenue impact of its own. But the question worth asking is whether anyone on that team knows this number, tracks it, or uses it to decide what to build next. In most organizations, the answer is no. The team has a roadmap driven by engineering preferences, stakeholder requests, and quarterly planning cycles, and the financial logic underlying their existence is left unexamined.

And break-even is not ac­tu­ally the right bar. Leah Tharin has writ­ten a sharp break­down of the math­e­mat­ics of this: a team with a 50% ini­tia­tive suc­cess rate, which is al­ready op­ti­mistic, needs its wins to cover its losses too. Leah’s cal­cu­la­tion is growth-ori­ented, but even for non-growth or­ga­ni­za­tions, the same in­vest­ment the­sis holds. Even a two-times re­turn is not suf­fi­cient. Capital sit­ting in a bank car­ries no op­er­a­tional risk, no co­or­di­na­tion costs, and no on­go­ing main­te­nance oblig­a­tions. The sys­tems a team builds will out­live the team it­self, and the cost of own­ing, main­tain­ing, and even­tu­ally re­plac­ing those sys­tems is al­most al­ways larger than an­tic­i­pated. The re­turn has to cover not just the team’s cur­rent cost, but the long tail of what they leave be­hind.

That pushes the re­al­is­tic thresh­old for fi­nan­cial vi­a­bil­ity to some­where be­tween three and five times an­nual cost. For an €87,000 per month team, that means gen­er­at­ing be­tween €260,000 and €435,000 in monthly value. The three hours per week cal­cu­la­tion gets you to break-even. To clear the re­al­is­tic fi­nan­cial bar, the plat­form needs to be gen­uinely trans­for­ma­tive for the en­gi­neers us­ing it, and the team needs to be ruth­less about work­ing on the high­est-value prob­lems rather than the most in­ter­est­ing ones.
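Continuing the sketch, the three-to-five-times band can be translated back into saved hours (same illustrative figures; the €65 hourly rate is the one derived above):

```python
# The 3x-5x viability band for the 87,000 EUR/month team, and what it
# implies in saved engineer-hours at ~65 EUR/hour. Illustrative only.

TEAM_COST_PER_MONTH = 87_000  # EUR
HOURLY_RATE = 65              # EUR per engineer-hour

low = 3 * TEAM_COST_PER_MONTH   # 261,000 EUR/month
high = 5 * TEAM_COST_PER_MONTH  # 435,000 EUR/month

hours_low = low / HOURLY_RATE    # ~4,015 hours/month
hours_high = high / HOURLY_RATE  # ~6,692 hours/month

# Spread over 100 engineers, that is roughly 9 to 15 saved hours per
# engineer per week, which is why "genuinely transformative" is the bar.
print(low, high, round(hours_low), round(hours_high))
```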

A cus­tomer-fac­ing prod­uct team of eight car­ries the same €87,000 monthly cost. The levers avail­able to jus­tify that cost are dif­fer­ent, but the un­der­ly­ing logic is iden­ti­cal.

If the product has an average revenue per user of €50 per month, the team needs to generate or protect the equivalent of 1,740 users' worth of value every month just to break even, and roughly 5,000 to 8,700 users' worth of value to clear the three-to-five times threshold.

Churn is of­ten the most di­rect lever. Consider a prod­uct with 50,000 ac­tive users los­ing 2% monthly to churn. That is 1,000 users per month, rep­re­sent­ing €50,000 in monthly re­cur­ring rev­enue walk­ing out the door. A team that iden­ti­fies the pri­mary dri­ver of that churn and elim­i­nates it is gen­er­at­ing nearly €50,000 per month in pro­tected rev­enue, cov­er­ing most of its break-even cost from a sin­gle ini­tia­tive. But that cal­cu­la­tion re­quires know­ing the churn rate, un­der­stand­ing its causes, and con­nect­ing those causes to the team’s work, and most teams are not op­er­at­ing with that level of fi­nan­cial clar­ity.

Activation is an­other lever that is fre­quently un­der­es­ti­mated. If 10,000 users sign up each month but only 30% com­plete the ac­ti­va­tion steps that lead to long-term re­ten­tion, there are 7,000 users each month who paid ac­qui­si­tion costs but never con­verted to re­tained rev­enue. Improving the ac­ti­va­tion rate by five per­cent­age points, from 30% to 35%, con­verts an ad­di­tional 500 users per month. At €50 av­er­age rev­enue per user, that is €25,000 in ad­di­tional monthly re­cur­ring rev­enue, rep­re­sent­ing roughly 29% of the team’s break-even thresh­old from one met­ric mov­ing in the right di­rec­tion.

Sales con­ver­sion fol­lows the same logic. If the prod­uct has a free-to-paid con­ver­sion fun­nel pro­cess­ing 20,000 tri­als per month at a 4% con­ver­sion rate, that pro­duces 800 pay­ing cus­tomers monthly. Moving con­ver­sion from 4% to 4.5% pro­duces 900 cus­tomers, an ad­di­tional 100 pay­ing users, and €5,000 in ad­di­tional monthly rev­enue. Small im­prove­ments across mul­ti­ple levers com­pound quickly, but only if the team un­der­stands which levers con­nect to which fi­nan­cial out­comes and by how much.
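Putting the three levers side by side makes the compounding point concrete; this sketch only restates the article's illustrative numbers:

```python
# Monthly value of the three product levers against the team's break-even
# cost. All inputs are the article's illustrative figures.

ARPU = 50            # EUR per user per month
TEAM_COST = 87_000   # EUR per month (break-even bar)

# Churn: 50,000 active users losing 2% per month; eliminating the driver
churn_value = 50_000 * 0.02 * ARPU        # 50,000 EUR protected

# Activation: 10,000 signups, activation up 5pp (30% -> 35%)
activation_value = 10_000 * 0.05 * ARPU   # 25,000 EUR added

# Conversion: 20,000 trials, free-to-paid up 0.5pp (4% -> 4.5%)
conversion_value = 20_000 * 0.005 * ARPU  # 5,000 EUR added

total = churn_value + activation_value + conversion_value
print(total, round(total / TEAM_COST, 2))  # 80000.0 0.92
```

On these numbers the three wins together still land just under break-even, which underlines how far above it the three-to-five-times bar sits.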

Given that soft­ware teams are ex­pen­sive and that their value is, at least in prin­ci­ple, cal­cu­la­ble, it is worth ex­am­in­ing why most teams do not mea­sure any­thing fi­nan­cially mean­ing­ful. Some mea­sure ac­tiv­ity prox­ies such as ve­loc­ity, tick­ets closed, or fea­tures shipped. Others mea­sure sen­ti­ment prox­ies such as NPS, CSAT, or en­gage­ment scores. These are not de­graded ver­sions of fi­nan­cial mea­sure­ment. They are a dif­fer­ent cat­e­gory en­tirely, one that was de­signed around the goal of un­der­stand­ing user be­hav­ior and team through­put rather than around the goal of un­der­stand­ing eco­nomic re­turn.

The prob­lem is that ac­tiv­ity and sen­ti­ment met­rics can trend up­ward while fi­nan­cial per­for­mance de­te­ri­o­rates. A team can ship more fea­tures while build­ing the wrong things. Engagement scores can rise while churn ac­cel­er­ates among the users who ac­tu­ally gen­er­ate rev­enue. Velocity can in­crease while the work be­ing com­pleted has no mea­sur­able con­nec­tion to busi­ness out­comes. These met­rics feel mean­ing­ful be­cause they cor­re­late with out­comes in many cir­cum­stances, but cor­re­la­tion is not a re­li­able guide to pri­or­i­ti­za­tion when the un­der­ly­ing fi­nan­cial logic is never ex­am­ined.

This is a struc­tural con­di­tion rather than a fail­ure of in­di­vid­ual judg­ment. Organizations chose these met­rics be­cause they are eas­ier to in­stru­ment, eas­ier to com­mu­ni­cate, and eas­ier to look good on than fi­nan­cial met­rics. A team that mea­sures its suc­cess by fea­tures shipped will al­ways have some­thing to show. A team that mea­sures its suc­cess by re­turn gen­er­ated will some­times have to re­port that it does not know, or that the re­turn was dis­ap­point­ing, and that kind of trans­parency re­quires an or­ga­ni­za­tional cul­ture that most com­pa­nies have not de­lib­er­ately built.

The ma­trix above is drawn from a prod­uct man­age­ment train­ing pro­gram I run called Booster, where prod­uct lead­ers map their ac­tual met­rics against their in­vest­ment the­sis to sur­face gaps. The ex­er­cise is un­com­fort­able pre­cisely be­cause most lead­ers dis­cover mid-map­ping that their team’s daily mea­sure­ments have no di­rect con­nec­tion to the fi­nan­cial ob­jec­tive they were given.

Understanding why this con­di­tion ex­ists re­quires look­ing at roughly two decades of macro­eco­nomic con­text, be­cause the fi­nan­cial dys­func­tion in mod­ern soft­ware or­ga­ni­za­tions did not emerge from bad in­ten­tions or in­tel­lec­tual fail­ure. It emerged from a spe­cific en­vi­ron­ment that made fi­nan­cial dis­ci­pline in prod­uct teams eco­nom­i­cally un­nec­es­sary.

The pic­ture is not a sin­gle clean era but two dis­tinct phases. From roughly 2002 through 2011, cap­i­tal was pe­ri­od­i­cally cheap but con­di­tions were mixed. Rates fell sharply af­ter the dot-com crash and again af­ter the global fi­nan­cial cri­sis, but in both cases risk ap­petite was sup­pressed. The money was tech­ni­cally in­ex­pen­sive but in­vestors were cau­tious, mul­ti­ples were rea­son­able, and the growth-at-all-costs logic had not yet taken hold. Product or­ga­ni­za­tions dur­ing this pe­riod still op­er­ated with some resid­ual fi­nan­cial dis­ci­pline in­her­ited from the dot-com reck­on­ing.

From ap­prox­i­mately 2011 through 2022, some­thing dif­fer­ent hap­pened. Zero-rate pol­icy be­came fully nor­mal­ized, risk ap­petite re­cov­ered and then over­cor­rected, and the SaaS men­tal model crys­tal­lized into a broadly shared in­vest­ment the­sis. All three con­di­tions ar­rived si­mul­ta­ne­ously, and the re­sult was about eleven years dur­ing which soft­ware com­pa­nies could grow head­count ag­gres­sively, miss on the ma­jor­ity of their roadmap, and still look healthy on pa­per. Revenue growth for­gave an enor­mous range of pri­or­i­ti­za­tion mis­takes, and the cost of build­ing the wrong thing was largely in­vis­i­ble.

Eleven years is not a long time, but it is long enough to form the pro­fes­sional in­stincts of an en­tire gen­er­a­tion of prod­uct and en­gi­neer­ing lead­ers. The frame­works they learned, the met­rics they adopted, the plan­ning rit­u­als they prac­tice, and the de­f­i­n­i­tions of suc­cess they in­ter­nal­ized were all formed dur­ing a win­dow that was un­usu­ally short and un­usu­ally dis­torted. There is no co­hort of se­nior prod­uct lead­ers who de­vel­oped their judg­ment in con­di­tions where their teams were ex­pected to demon­strate fi­nan­cial re­turn, be­cause those con­di­tions did not ex­ist dur­ing the years when that co­hort was learn­ing the craft.

When cap­i­tal be­came ex­pen­sive again in 2022, the be­hav­ior did not au­to­mat­i­cally ad­just, be­cause the be­hav­ior was never con­nected to the fi­nan­cial logic in the first place.

There is a deeper con­se­quence of this twenty-year pe­riod that is now be­com­ing painfully vis­i­ble, and it con­cerns how the in­dus­try has thought about large en­gi­neer­ing or­ga­ni­za­tions and code­bases.

The con­ven­tional un­der­stand­ing is that a code­base rep­re­sent­ing years of en­gi­neer­ing in­vest­ment is a valu­able as­set. It en­codes busi­ness logic, cap­tures ac­cu­mu­lated de­ci­sions, and rep­re­sents the tech­ni­cal foun­da­tion on which fu­ture prod­ucts are built. A large en­gi­neer­ing or­ga­ni­za­tion is sim­i­larly un­der­stood as a source of ca­pa­bil­ity, with more en­gi­neers mean­ing more ca­pac­ity to build, main­tain, and im­prove that foun­da­tion.

While some argued that large codebases should actually be considered a liability, the industry as a whole has mostly ignored that. But this understanding is now being more closely examined. A large codebase also carries maintenance costs that grow over time as the system becomes more complex, more interconnected, and more difficult to change safely. Every engineer added to maintain it increases coordination costs, introduces new dependencies, and adds to the organizational weight that slows decision-making. The asset and the liability exist simultaneously, and for most of the past twenty years, the financial environment masked the liability side of that equation.

The ar­rival of large lan­guage mod­els has made the li­a­bil­ity vis­i­ble in a way that is dif­fi­cult to ig­nore. Recently, Nathan Cavaglione, a de­vel­oper, built a func­tional replica of ap­prox­i­mately 95% of Slack’s core prod­uct in four­teen days us­ing LLM agents. Slack was built by thou­sands of en­gi­neers over the course of more than a decade, at a cost that rep­re­sents bil­lions of dol­lars in cu­mu­la­tive en­gi­neer­ing in­vest­ment. Nathan started with­out any of that ac­cu­mu­lated com­plex­ity, with­out the or­ga­ni­za­tional weight, with­out the legacy ar­chi­tec­tural de­ci­sions, and with­out the co­or­di­na­tion costs, and ar­rived at a com­pa­ra­ble prod­uct in a pe­riod that would not con­sti­tute a sin­gle sprint in most en­ter­prise en­gi­neer­ing or­ga­ni­za­tions.

Day 14: A functional replica of Slack's core product, built by Nathan using LLM agents.

This does not mean that Slack’s en­gi­neer­ing in­vest­ment was wasted, be­cause Slack also built en­ter­prise sales in­fra­struc­ture, com­pli­ance ca­pa­bil­i­ties, data se­cu­rity prac­tices, and or­ga­ni­za­tional re­silience that a four­teen-day pro­to­type does not in­clude. But it does mean that the as­sump­tion un­der­ly­ing large en­gi­neer­ing or­ga­ni­za­tions, which is that scale and ac­cu­mu­lated com­plex­ity rep­re­sent com­pet­i­tive moats, is no longer re­li­able in the way it once was. When the cost of build­ing a func­tional ap­prox­i­ma­tion of a so­phis­ti­cated soft­ware prod­uct can col­lapse to days of in­di­vid­ual ef­fort, the ques­tion of what a large en­gi­neer­ing team jus­ti­fies be­comes both more ur­gent and more dif­fi­cult to an­swer with the met­rics most or­ga­ni­za­tions cur­rently track.

The ob­vi­ous ob­jec­tion is that code pro­duced at that speed be­comes un­man­age­able, a li­a­bil­ity in it­self. That is a rea­son­able con­cern, but it largely ap­plies when agents pro­duce code that hu­mans then main­tain. Agentic plat­forms are be­ing it­er­ated upon quickly, and for es­tab­lished pat­terns and non-busi­ness-crit­i­cal code, which is the ma­jor­ity of what most en­gi­neer­ing or­ga­ni­za­tions ac­tu­ally main­tain, de­tailed hu­man fa­mil­iar­ity with the code­base mat­ters less than it once did. A messy code­base is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to rea­son through an un­fa­mil­iar sys­tem, that is still faster and cheaper than most de­vel­op­ment teams op­er­at­ing to­day. The li­a­bil­ity ar­gu­ment holds in a hu­man-to-hu­man or agent-to-hu­man world. In an agent-to-agent world, it largely dis­solves.

The com­pet­i­tive ad­van­tage avail­able to or­ga­ni­za­tions that take this se­ri­ously is not pri­mar­ily tech­ni­cal. It is an­a­lyt­i­cal. Companies that can clearly ar­tic­u­late what each of their teams costs, what value each team gen­er­ates, and whether that value clears a fi­nan­cially vi­able thresh­old are in a struc­turally dif­fer­ent po­si­tion than com­pa­nies that can­not. They can make build ver­sus buy de­ci­sions based on ac­tual eco­nom­ics rather than or­ga­ni­za­tional pref­er­ence. They can iden­tify when a team is work­ing on prob­lems that can­not gen­er­ate suf­fi­cient re­turn at their cost level. They can se­quence ini­tia­tives based on what value is be­ing lost each day they are de­layed, rather than on who ar­gued most per­sua­sively in the last plan­ning meet­ing.

Most or­ga­ni­za­tions can­not do this to­day. The mea­sure­ment in­fra­struc­ture does not ex­ist, the fi­nan­cial data does not flow to the peo­ple mak­ing pri­or­i­ti­za­tion de­ci­sions, and the habit of ask­ing these ques­tions has not been built. Building it is un­com­fort­able, be­cause the an­swers are some­times un­flat­ter­ing. A team that ex­am­ines its work through this lens will some­times dis­cover that it has spent a quar­ter on things that do not con­nect to fi­nan­cial out­comes in any mean­ing­ful way, and that is a dif­fi­cult find­ing to sit with.

But the al­ter­na­tive is con­tin­u­ing to run an or­ga­ni­za­tion where teams with mil­lion-euro an­nual bud­gets make daily in­vest­ment de­ci­sions with­out the fi­nan­cial con­text to know whether those de­ci­sions are gen­er­at­ing re­turn. That con­di­tion was sus­tain­able when cap­i­tal was cheap and growth for­gave every­thing. It is in­creas­ingly dif­fi­cult to sus­tain in an en­vi­ron­ment where boards ex­pect fi­nan­cial re­turns, where the cost of build­ing soft­ware is col­laps­ing due to AI, and where the ques­tion of what a team jus­ti­fies can no longer be de­ferred in­def­i­nitely.

The or­ga­ni­za­tions that de­velop the habit of ask­ing these ques­tions clearly, reg­u­larly, and with­out flinch­ing will ac­cu­mu­late an ad­van­tage that com­pounds over time. The ques­tion is sim­ply whether they will start ask­ing be­fore or af­ter the pres­sure forces them to.

...

Read the original on www.viktorcessan.com »

8 271 shares, 73 trendiness

Servo aims to empower developers with a lightweight, high-performance alternative for embedding web technologies in applications.

Servo is now avail­able on crates.io

Today the Servo team has released v0.1.0 of the servo crate. This is the first crates.io release of the servo crate, which allows Servo to be used as a library.

We currently do not have any plans to publish our demo browser servoshell to crates.io. In the 5 releases since our initial GitHub release in October 2025, our release process has matured, with the main “bottleneck” now being the human-written monthly blog post. Since we're quite excited about this release, we decided not to wait for the monthly blog post to be finished, but we promise to deliver the monthly update in the coming weeks.

As you can see from the ver­sion num­ber, this re­lease is not a 1.0 re­lease. In fact, we still haven’t fin­ished dis­cussing what 1.0 means for Servo. Nevertheless, the in­creased ver­sion num­ber re­flects our grow­ing con­fi­dence in Servo’s em­bed­ding API and its abil­ity to meet some users’ needs.

In the mean­time we also de­cided to of­fer a long-term sup­port (LTS) ver­sion of Servo, since break­ing changes in the reg­u­lar monthly re­leases are ex­pected and some em­bed­ders might pre­fer do­ing ma­jor up­grades on a sched­uled half-yearly ba­sis while still re­ceiv­ing se­cu­rity up­dates and (hopefully!) some mi­gra­tion guides. For more de­tails on the LTS re­lease, see the re­spec­tive sec­tion in the Servo book.

...

Read the original on servo.org »

9 251 shares, 4 trendiness

Hungarian Prime Minister Orbán is ejected after 16 years in a European electoral earthquake

BUDAPEST, Hungary (AP) — Hungarian vot­ers on Sunday ousted long-serv­ing Prime Minister Viktor Orbán af­ter 16 years in power, re­ject­ing the au­thor­i­tar­ian poli­cies and global far-right move­ment that he em­bod­ied in fa­vor of a pro-Eu­ro­pean chal­lenger in a bomb­shell elec­tion re­sult with global reper­cus­sions.

It was a stunning blow for Orbán — a close ally of both U.S. President Donald Trump and Russian President Vladimir Putin — who quickly conceded defeat after what he called a ″painful″ election result. U.S. Vice President JD Vance had made a visit to Hungary just days earlier, meant to help push Orbán over the finish line.

Election vic­tor Péter Magyar, a for­mer Orbán loy­al­ist who cam­paigned against cor­rup­tion and on every­day is­sues like health care and pub­lic trans­port, has pledged to re­build Hungary’s re­la­tion­ships with the European Union and NATO — ties that frayed un­der Orbán. European lead­ers quickly con­grat­u­lated Magyar.

His vic­tory was ex­pected to trans­form po­lit­i­cal dy­nam­ics within the EU, where Orbán had up­ended the bloc by fre­quently ve­to­ing key de­ci­sions, prompt­ing con­cerns he sought to break it up from the in­side.

It will also re­ver­ber­ate among far-right move­ments around the world, which have viewed Orbán as a bea­con for how na­tion­al­ist pop­ulism can be used to wage cul­ture wars and lever­age state power to un­der­mine op­po­nents.

It’s not yet clear whether Magyar’s Tisza party will have the two-thirds ma­jor­ity in par­lia­ment, which would give it the num­bers needed for ma­jor changes in leg­is­la­tion. With 93% of the vote counted, it had more than 53% sup­port to 37% for Orbán’s gov­ern­ing Fidesz party and looked set to win 94 of Hungary’s 106 vot­ing dis­tricts.

″I congratulated the victorious party,″ Orbán told followers. ″We are going to serve the Hungarian nation and our homeland from opposition.″

In a speech to tens of thou­sands of ju­bi­lant sup­port­ers at a vic­tory party along the Danube River, Magyar said his vot­ers had rewrit­ten Hungarian his­tory.

“Tonight, truth prevailed over lies. Today, we won because Hungarians didn't ask what their homeland could do for them — they asked what they could do for their homeland. You found the answer. And you followed through,” he said.

On the streets of Budapest, dri­vers blared car horns and cranked up anti-gov­ern­ment songs while peo­ple march­ing in the streets chanted and screamed.

Many revelers chanted “Ruszkik haza!” or “Russians go home!” — a phrase used widely during Hungary's 1956 anti-Soviet revolution, and which had gained increasing currency amid Orbán's drift toward Moscow.

Turnout in the elec­tion was nearly 80%, ac­cord­ing to the National Election Office, a record num­ber in any vote in Hungary’s post-Com­mu­nist his­tory.

Orbán, the EU's longest-serving leader and one of its biggest antagonists, traveled a long road from his early days as a liberal, anti-Soviet firebrand to the Russia-friendly nationalist admired today by the global far-right.

The EU will be wait­ing to see how Magyar changes Hungary’s ap­proach to Ukraine. Orbán re­peat­edly frus­trated EU ef­forts to sup­port the neigh­bor­ing coun­try in its war against Russia’s full-scale in­va­sion, while cul­ti­vat­ing close ties to Putin and re­fus­ing to end Hungary’s de­pen­dence on Russian en­ergy im­ports.

Recent rev­e­la­tions have shown a top mem­ber of Orbán’s gov­ern­ment fre­quently shared the con­tents of EU dis­cus­sions with Moscow, rais­ing ac­cu­sa­tions that Hungary was act­ing on Russia’s be­half within the bloc.

Members of Trump's “Make America Great Again” movement are among those who see Orbán's government and his Fidesz political party as shining examples of conservative, anti-globalist politics in action, while he is reviled by advocates of liberal democracy and the rule of law.

In Budapest, Marcell Mehringer, 21, said he was voting primarily “so that Hungary will finally be a so-called European country, and so that young people, and really everyone, will do their fundamental civic duty to unite this nation a bit and to break down these boundaries borne of hatred.”

During his 16 years as prime minister, Orbán launched harsh crackdowns on minority rights and media freedoms, subverted many of Hungary's institutions, and has been accused of siphoning large sums of money into the coffers of his allied business elite, an allegation he denies.

He also heav­ily strained Hungary’s re­la­tion­ship with the EU. Although Hungary is one of the smaller EU coun­tries, with a pop­u­la­tion of 9.5 mil­lion, Orbán has re­peat­edly used his veto to block de­ci­sions that re­quire una­nim­ity.

Most re­cently, he blocked a 90-billion euro ($104 bil­lion) EU loan to Ukraine, prompt­ing his part­ners to ac­cuse him of hi­jack­ing the crit­i­cal aid.

Magyar, 45, rapidly rose to be­come Orbán’s most se­ri­ous chal­lenger.

A for­mer in­sider within Orbán’s Fidesz, Magyar broke with the party in 2024 and quickly formed Tisza. Since then, he has toured Hungary re­lent­lessly, hold­ing ral­lies in set­tle­ments big and small in a cam­paign blitz that re­cently had him vis­it­ing up to six towns daily.

In an interview with The Associated Press earlier this month, Magyar said the election will be a “referendum” on whether Hungary continues on its drift toward Russia under Orbán, or can retake its place among the democratic societies of Europe.

Tisza is a member of the European People's Party, the mainstream, center-right political family with leaders governing 12 of the EU's 27 nations.

Magyar faced a tough fight. Orbán’s con­trol of Hungary’s pub­lic me­dia, which he has trans­formed into a mouth­piece for his party, and vast swaths of the pri­vate me­dia mar­ket give him an ad­van­tage in spread­ing his mes­sage.

The uni­lat­eral trans­for­ma­tion of Hungary’s elec­toral sys­tem and ger­ry­man­der­ing of its 106 vot­ing dis­tricts by Fidesz also re­quired Tisza to gain an es­ti­mated 5% more votes than Orbán’s party to achieve a sim­ple ma­jor­ity.

Additionally, hun­dreds of thou­sands of eth­nic Hungarians in neigh­bor­ing coun­tries had the right to vote in Hungarian elec­tions and tra­di­tion­ally have voted over­whelm­ingly for Orbán’s party.

Russian secret services have plotted to interfere and tip the election in Orbán's favor, according to numerous media reports including by The Washington Post. The prime minister, however, accused neighboring Ukraine, as well as Hungary's allies in the EU, of seeking to interfere in the vote to install a “pro-Ukraine” government.

Associated Press jour­nal­ists Béla Szandelszky, Marko Drobnjakovic, Ivan L. Nagy, Florent Bajrami in Budapest, Hungary, and Angela Charlton in Paris con­tributed to this re­port.

...

Read the original on apnews.com »

10 194 shares, 6 trendiness

European AI: a playbook to own it

to own it. Europe holds unique strengths: a world-class aca­d­e­mic ecosys­tem, a com­mit­ment to hu­man-cen­tric tech­nol­ogy, and a sin­gle mar­ket of +450 mil­lion peo­ple. The ques­tion is no longer whether Europe can com­pete, but how it can turn these as­sets into a co­he­sive, self-re­liant AI pow­er­house. Europe has faced a grow­ing tech­no­log­i­cal gap, leav­ing its cit­i­zens, busi­nesses, and gov­ern­ments in­creas­ingly re­liant on for­eign dom­i­nance. The cost is high: a di­min­ished voice on the global stage, re­duced con­trol over the European fu­ture, and vul­ner­a­bil­ity to dig­i­tal threats. Without ac­tion, we risk sur­veil­lance threats, eco­nomic de­cline, strate­gic weak­ness, and even the ero­sion of our de­mo­c­ra­tic free­doms. But this chal­lenge is also Europe’s great­est op­por­tu­nity. The AI rev­o­lu­tion has started and is a chance to not

only catch up but to lead and de­fine our own paths. Europe is home to a vi­brant pool of un­tapped tal­ent and in­dus­trial cham­pi­ons whose unique as­sets can push the bound­aries of what AI can achieve. The com­pe­ti­tion from the U. S. and China is fierce, but Europe is not a mar­ket to be dom­i­nated, it is a pow­er­house of in­no­va­tion, cre­ativ­ity, and re­silience. The ques­tion is not whether we can com­pete, but how we will rise to the oc­ca­sion. AI can be the tool that se­cures our au­ton­omy, strength­ens our strate­gic sec­tors, in­creases our eco­nomic wealth and am­pli­fies our global in­flu­ence. To seize this mo­ment, we must act de­ci­sively. We need to drive de­mand for home­grown AI, se­cure strate­gic sec­tors, and em­power European play­ers. Controlling our AI and in­fra­struc­ture is not op­tional, it’s the only way to win the AI race. So now is the time to act: grow our tal­ent pool and bring our best minds back to Europe, scale our in­no­v­a­tive com­pa­nies across all 27 Member States, and turn our di­ver­sity into a com­pet­i­tive edge by com­press­ing knowl­edge and build­ing AI that re­flects the world’s com­plex­ity. Europe’s AI ecosys­tem is brim­ming with po­ten­tial. By fos­ter­ing an en­vi­ron­ment that nur­tures growth, we can trans­form chal­lenges into op­por­tu­ni­ties and re­claim our fu­ture. The race is on, and Europe should be ready to win it. Europe holds unique strengths: a world-class aca­d­e­mic ecosys­tem, a com­mit­ment to hu­man-cen­tric tech­nol­ogy, and a sin­gle mar­ket of over 450 mil­lion peo­ple. The ques­tion is no longer whether Europe can com­pete,

but how it can turn these as­sets into a co­he­sive,

self-re­liant AI pow­er­house. This play­book pro­vides a clear, ac­tion­able frame­work to po­si­tion Europe as that pow­er­house, ac­cel­er­at­ing AI de­vel­op­ment and adop­tion, at­tract­ing and re­tain­ing top tal­ent, sim­pli­fy­ing reg­u­la­tion with­out sac­ri­fic­ing val­ues, and mo­bi­liz­ing pub­lic and pri­vate in­vest­ment to build home­grown AI in­fra­struc­ture. Only with it, Europe can en­sure AI is not only de­vel­oped in Europe, but for Europe and on Europe’s terms. This doc­u­ment is not a the­o­ret­i­cal ex­er­cise. It is a prac­ti­cal play­book, born from the lived ex­pe­ri­ence of a European AI startup, Mistral AI, nav­i­gat­ing one of the world’s most com­pet­i­tive, fast and cap­i­tal-in­ten­sive in­dus­tries. We have ex­pe­ri­enced mis­aligned eq­uity frame­works, bu­reau­cratic bar­ri­ers that re­quire the CEO to travel for ba­sic ad­min­is­tra­tive tasks, and le­gal un­cer­tainty that com­pli­cates con­tracts and cus­tomer re­la­tion­ships. We have seen how reg­u­la­tory over­laps cre­ate le­gal quag­mires, how frag­mented mar­kets hin­der growth, and how tal­ent slips away due to ad­min­is­tra­tive fric­tion. This doc­u­ment is a call to turn Europe’s strengths into scal­able, com­pet­i­tive ad­van­tage. It is grounded in the ur­gency of the mo­ment and the con­vic­tion that Europe can and must build an AI ecosys­tem that re­flects its val­ues, serves its cit­i­zens, and com­petes glob­ally. It is our col­lec­tive duty to en­sure AI can also be de­vel­oped in Europe on terms that aligns with our pri­or­i­ties as Europeans. These chal­lenges shaped our ap­proach and led us to agree on three key prin­ci­ples to un­lock Europe’s AI po­ten­tial: Every rec­om­men­da­tion, from visa re­form to pro­cure­ment

gate­ways, is de­signed to be im­ple­mented, mea­sured,

and scaled. Europe’s di­ver­sity is its strength, but its frag­men­ta­tion

is its Achilles’ heel. This pa­per em­braces the com­plex­ity

of the EUs struc­ture while of­fer­ing so­lu­tions to

align mar­kets, re­duce re­dun­dancy, and ac­cel­er­ate de­ci­sion-mak­ing. Speed is not an op­tion: We pro­pose fast-track mech­a­nisms for tal­ent, cap­i­tal,

and com­pli­ance, so Europe’s in­no­va­tors aren’t left

be­hind. At Mistral AI, we’ve built a fron­tier AI com­pany in Europe be­cause we be­lieve in its po­ten­tial. This play­book is our con­tri­bu­tion to en­sur­ing that po­ten­tial be­comes re­al­ity, not just for us, but for the en­tire ecosys­tem. Scale: Unleash the full po­ten­tial of the Single Market The most trans­for­ma­tive ad­vance­ments in AI, those that push the bound­aries of what is pos­si­ble, are dri­ven by hu­man ge­nius, sci­en­tific cu­rios­ity, and the re­lent­less pur­suit of knowl­edge. Beyond the al­go­rithms and com­pu­ta­tional power, AIs po­ten­tial lies in its abil­ity to serve hu­man in­tel­li­gence, en­sur­ing that tech­nol­ogy re­mains a tool that ad­dresses greater needs. This hu­man-cen­tric ap­proach is not a philo­soph­i­cal ideal, but a prac­ti­cal ne­ces­sity. AI sys­tems that are truly in­no­v­a­tive and ben­e­fi­cial to so­ci­ety will al­ways re­quire hu­man over­sight, cre­ativ­ity, and judg­ment at their core. their core. As a con­se­quence, the global com­pe­ti­tion for AI tal­ent is fierce. The scarcity of highly skilled pro­fes­sion­als in com­puter sci­ence, ma­chine learn­ing, and re­lated fields has turned tal­ent into the most crit­i­cal re­source in the AI race. These ex­perts op­er­ate in a global, hy­per-com­pet­i­tive mar­ket, where other re­gions are at­tract­ing tal­ent thanks to faster re­lo­ca­tion processes, higher salaries, and dy­namic ca­reer op­por­tu­ni­ties. Fortunately, our con­ti­nent is home to a vi­brant AI aca­d­e­mic ecosys­tem, from world-class uni­ver­si­ties to cut­ting-edge re­search in­sti­tu­tions. The foun­da­tions are here but the full po­ten­tial must now be re­al­ized. This re­quires deeper col­lab­o­ra­tion be­tween acad­e­mia and in­dus­try, en­sur­ing that re­search ex­cel­lence trans­lates di­rectly into in­no­va­tion, and in gen­eral con­ti­nent-wide mea­sures that ease re­lo­ca­tion, sim­plify ad­min­is­tra­tive pro­ce­dures, and se­cure long-term con­di­tions for tal­ent to thrive. 
The goal should be clear for Europe: be­com­ing the pre­mier des­ti­na­tion for global AI ex­perts, a place where tal­ent is not just at­tracted but nur­tured, re­tained, and em­pow­ered to push the bound­aries of what AI can achieve. As com­pet­ing re­gions be­come less open or pre­dictable, Europe has a unique op­por­tu­nity to po­si­tion it­self as a global hub for AI re­search and de­vel­op­ment and ad­dress its short­age of highly skilled AI tal­ent. The fu­ture of AI will be first shaped by those who in­vest in tal­ent to­day. Establish a AI Blue Card”, a fast-track visa process en­abling AI and com­pute re­searchers, en­gi­neers, and en­tre­pre­neurs, along with their im­me­di­ate fam­i­lies (spouses/partners and de­pen­dent chil­dren un­der 18), to ob­tain a 4-year work and res­i­dency per­mit valid across all EU Member States. The visa would be processed in 15 work­ing days via a uni­fied dig­i­tal por­tal, with pos­si­ble re­newa­bil­ity and porta­bil­ity be­tween Member States. 2. Develop deeper and more sys­tem­atic part­ner­ships be­tween European uni­ver­si­ties and AI com­pa­nies Build deep, struc­tured col­lab­o­ra­tion be­tween acad­e­mia and the AI in­dus­try to turn Europe’s re­search ex­cel­lence into in­dus­trial in­no­va­tion. Create a pan-Eu­ro­pean net­work of ap­plied AI re­search in­sti­tutes to boost in­dus­try-ori­ented in­no­va­tion, mod­eled af­ter Fraunhofer and Carnot, to ac­cel­er­ate AI adop­tion. These mul­ti­dis­ci­pli­nary cen­tres would col­lab­o­rate with com­pa­nies of all sizes through R&D con­tracts on in­dus­try-rel­e­vant AI pro­jects, en­abling joint par­tic­i­pa­tion in bi­lat­eral re­search ini­tia­tives and sup­port­ing the dif­fu­sion of AI in­no­va­tion across Europe’s in­dus­trial ecosys­tem. 
With 40% of EU companies struggling to hire AI talent, the EU must establish a comprehensive AI talent mobility initiative to enable large-scale, structured mobility across the AI talent lifecycle, from graduates to senior academics and industry experts. Provide compute to a selected pool of leading European universities in computing science for their MSc and PhD graduate programs, via university-managed allocation portals. Access should be dynamic and curiosity-driven, with a fair-use cap to ensure broad availability while still enabling compute-intensive projects.

Scale: Unleash the full potential of the Single Market

Europe stands at a critical juncture, facing not just a challenge of scale but a deeper crisis of acceleration. More than 50% of the world's unicorns are based in the U.S., compared to less than 10% in the EU, a third of which have already relocated their headquarters abroad, predominantly to the U.S. This is not merely a disparity in numbers; it reflects a systemic failure where European startups, despite their potential, are forced to seek growth opportunities elsewhere. The issue runs deeper than funding gaps or market size. It's about an ecosystem that doesn't let its most promising companies flourish at home. The fragmentation of Europe's single market into 27 distinct regulatory landscapes creates an environment where expanding from Berlin to Paris can feel more complex than entering the entire U.S. market. Additionally, Europe accounts for just 5% of global venture capital funds, compared to 52% for the U.S. and around 40% for China.
As a result, European startups grow slower, exit earlier, and too often end up acquired by non-European companies or relocate entirely to access larger markets and simpler regulations. This represents a double loss for Europe. First, we fail to cultivate homegrown tech leaders that could drive innovation, create jobs, and generate wealth across the single market. Second, and more critically, we surrender our most promising companies to competing regions of the world, reinforcing U.S. and Chinese dominance in the very sectors where Europe could be leading. The solution demands a fundamental shift in how Europe supports its scale-ups. We need more harmonization to unleash the full potential of the single market, we need financial mechanisms that can help fill the funding gap, and, perhaps most importantly, we need to change the narrative that scaling in Europe is impossible. The single market remains Europe's greatest competitive asset, but it must evolve to better enable our companies to scale and compete globally. It's time to unlock its full potential.

Leverage the current momentum for simplification to streamline the EU digital regulatory framework. The complex EU digital regulatory framework should be revised to clarify inconsistencies, eliminate overlaps, and reduce compliance efforts without sacrificing underlying regulatory goals. Create a centralized, multilingual digital portal for AI developers to generate standardized reports, access real-time guidance, and automate compliance checks across the AI Act and GDPR, drawing on the European Single Access Point (ESAP).
Establish a regulation-based, unified, digital-first system for the automatic recognition of corporate acts across all EU Member States, eliminating bureaucratic barriers and legal uncertainty for companies scaling within the Single Market.

Create an EU corporate banking passport via the European Digital Identity Wallet to grant all EU-based companies a right to a basic payment account and a harmonized, digital-first KYC passporting scheme.

Invite Member States to adopt a flexible, subsidiarity-compliant ESOP Alignment Framework that aligns the taxation event at the point of sale, while respecting national tax rates.

Create an SIU Passport by extending the European Commission's Q4 2025 proposal on the savings and investments union (SIU), allowing companies to raise capital in any EU Member State without refiling documentation.

Extend the European Single Access Point (ESAP) to corporate filings and investor search with a centralized digital platform, the SIU Hub, enabling companies to submit prospectuses, financial reports, and compliance documents once and have them automatically recognized and distributed across all EU national regulators.

An AI EuVECA Label would be a certified designation for qualifying funds that commit a minimum percentage of their investments to AI and deep-tech companies.

As the European Union implements the 2024 Solvency II reform and advances the IORP II review, policymakers should ensure that prudential and investment frameworks actively support long-term equity investments in strategic sectors such as artificial intelligence.

Artificial intelligence is not an abstract promise. It is a tool that fulfills its potential when embedded in the real economy.
The most advanced algorithms, the most powerful models, are meaningless if they remain confined to labs or siloed in superficial applications. AI's true value emerges when it is deployed at scale, solving concrete challenges, whether in the most complex industrial use cases or the daily operations of SMEs. The opportunity is even bigger considering that embedding AI in a real-world context is not just a matter of technological deployment; it can trigger a virtuous cycle of demand and supply. Every new adoption refines the technology, every real-world application accelerates research, and every successful deployment attracts more investment. In short: adoption doesn't just follow innovation, it drives it. The equation is simple: more adoption today means better AI tomorrow, and so on.

However, Europe faces a stark reality: only 20% of EU enterprises have adopted AI, and a mere 11% of SMEs are leveraging its potential. Worse still, over 80% of our digital infrastructure remains dependent on non-EU providers. This leaves us vulnerable to extra-territorial controls that threaten strategic autonomy and could disrupt the seamless deployment of AI technologies across the continent. Without rapid, large-scale adoption, Europe's AI ecosystem faces stagnation, deprived of the market signals that guide investment and the real-world use cases that refine technology.

To address this issue, the solution requires a three-pronged strategy. First, public procurement must become a market-shaping tool: by mandating European AI solutions in government contracts, we send a clear signal that homegrown technology is the best option. Second, we must remove barriers for SMEs, ensuring that even the smallest firms can access, adopt, and benefit from AI through subsidies, training, and simplified procurement processes. Third, we need strategic incentives to accelerate private-sector adoption, from tax credits for companies committing to European solutions to compute vouchers that offset upfront costs. This is not about adoption for adoption's sake. It is about ensuring that Europe's digital transformation goes fast enough and is powered by European technology, on European terms.

15. EU institutions lead by example in AI-enhanced public administration

Position EU institutions as global leaders in AI-enhanced governance by adopting European AI solutions, demonstrating how public administration can be smarter, faster, and more citizen-centric through homegrown innovation. This measure aims to remove barriers for SMEs, scale-ups, and innovative companies, ensuring transparent, efficient, and inclusive access to public contracts across the Single Market. Establish a targeted European preference mechanism in public procurement for strategic sectors, using public spending to strengthen technological autonomy, economic security, and industrial competitiveness. Establish a sustainability reporting and incentive framework requiring all AI providers operating in the EU with annual revenues exceeding €500 million to submit standardized, third-party-verified life-cycle assessments covering the full life cycle of their AI systems as a prerequisite for eligibility for public procurement contracts.

While the continent has made significant strides in AI research and regulation, its infrastructure, the backbone of AI development, remains a weak point. Traditional data centers, designed for general-purpose cloud computing, are ill-equipped to handle the demands of frontier AI models, which require ultra-dense, high-performance compute infrastructure.
Without this infrastructure, Europe risks falling further behind the United States and Asia, deepening its dependency on non-European hyperscalers for everything from model training to industrial applications. Because AI is not just another technological advancement, building AI-ready infrastructure is a foundational capability that will shape Europe's economic competitiveness, strategic autonomy, and ability to address global challenges, from climate change to healthcare. Yet today, most of Europe's AI workloads run on infrastructure controlled by foreign providers, leaving the continent vulnerable to geopolitical risks, supply chain disruptions, and the loss of economic value. If Europe fails to act, it could cede leadership in AI to others, missing out on the productivity gains, innovation, and jobs that come with it. Europe has the resources, expertise, and ambition to build its own AI future, aligned with its values. The key lies in ultra-dense, high-performance compute infrastructure, purpose-built for the demands of next-generation AI.

The infrastructure required for frontier AI is fundamentally different from what exists today. Modern AI models demand power densities of 100 kW per rack or more, far beyond the capabilities of traditional data centers. They require advanced cooling systems, such as liquid cooling, to manage heat loads efficiently, and they must be scalable to keep pace with the rapid evolution of AI. Most importantly, this infrastructure must be controlled by European entities to ensure that strategic decisions, economic benefits, and data governance remain in Europe. By investing in ultra-dense, independent AI infrastructure, Europe can reduce its dependencies on non-European hyperscalers, ensuring that its AI ecosystem is resilient and self-sufficient.
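To give the 100 kW/rack figure some scale, here is a back-of-envelope sketch of what that density implies for annual energy demand. The rack count and PUE (power usage effectiveness, the ratio of total facility power to IT power) are illustrative assumptions, not figures from the playbook:

```python
# Back-of-envelope energy math for an "AI-ready" data center,
# using the >=100 kW/rack density figure cited in the text.
# Rack count and PUE below are illustrative assumptions, not from the source.

def annual_energy_mwh(racks: int, kw_per_rack: float, pue: float) -> float:
    """Total facility energy per year in MWh, given IT load and PUE."""
    it_load_mw = racks * kw_per_rack / 1000   # IT load in MW
    facility_mw = it_load_mw * pue            # add cooling and other overhead
    return facility_mw * 8760                 # hours per year

# A hypothetical 100-rack frontier-AI hall at the 100 kW/rack threshold,
# with an assumed PUE of 1.2 (plausible for modern liquid cooling):
print(annual_energy_mwh(racks=100, kw_per_rack=100, pue=1.2))  # → 105120.0 MWh/yr
```

At those assumed numbers, a single 100-rack hall draws a steady 12 MW, on the order of a small town's load, which is why the text ties such infrastructure directly to energy planning and grid connections.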
It can also turn its energy abundance, from nuclear to renewables, into a competitive advantage, powering AI innovation with sustainable, low-carbon energy. This will create high-value jobs in tech, energy, and manufacturing, while fostering a new generation of European AI leaders. Finally, doing so will allow the European Union to align AI development with its climate goals, by building infrastructure that is not only powerful but also energy-efficient and sustainable. The question is no longer whether Europe should build this infrastructure, but how to do it quickly, efficiently, and at scale.

To seize this opportunity, Europe must adopt a coordinated, forward-looking policy approach that prioritizes ultra-dense, European-controlled AI infrastructure. This requires action on multiple fronts.

Europe needs clear, ambitious standards for what constitutes AI-ready infrastructure. This means defining technical requirements that go beyond traditional data centers: power density thresholds (≥100 kW per rack) to ensure that only infrastructure capable of supporting frontier AI qualifies for public support, and European ownership and control, ensuring that the infrastructure and the data it processes remain under EU jurisdiction. These standards should be embedded in public funding programs, procurement policies, and energy allocation strategies, creating a level playing field that favors innovation and autonomy over legacy systems.

Building ultra-dense AI infrastructure requires significant upfront investment, but the long-term benefits (economic growth, technological leadership, and strategic resilience) far outweigh the costs. Policymakers can accelerate deployment through public procurement policies that prioritize ultra-dense, European-controlled infrastructure for critical workloads, from scientific research to public services. By leveraging public funds strategically, Europe can ensure that its AI infrastructure is built to last.

Europe's energy resources (nuclear, wind, hydro, and solar) are a strategic asset for AI development. However, these resources must be allocated in a way that maximizes their impact. This means prioritizing ultra-dense AI data centers in energy planning, ensuring they have access to the low-carbon power they need to operate sustainably; streamlining permitting and grid connections for high-performance data centers, reducing bureaucratic hurdles that slow down deployment; and fostering partnerships between AI providers and energy companies, to co-develop infrastructure that is both powerful and sustainable. Europe's energy transition and its AI ambitions are two sides of the same coin. By aligning them, policymakers can create a virtuous cycle: AI infrastructure that supports the green transition, and a green transition that powers AI innovation.

No single country or company can build Europe's AI future alone. Success will require collaboration across borders and sectors: public-private partnerships to co-develop and deploy ultra-dense data centers, leveraging the strengths of governments, energy providers, and tech companies; investment in R&D to advance cooling technologies, chip design, and interoperability, ensuring that Europe remains at the cutting edge of AI infrastructure; and skills development programs to prepare the workforce for the AI-driven economy, from data scientists to infrastructure engineers. Europe's diversity is one of its greatest strengths. By fostering collaboration, policymakers can ensure that AI infrastructure is distributed across the continent, creating hubs of innovation in every region.

The revision of the public procurement framework, as defined in Measure 17, should also be complemented by a specific preference for AI infrastructure projects within the Cloud and AI Development Act (CAIDA).

Establish a future-proof, equitable legal framework for the training of AI models in Europe, which is a sine qua non condition to ensure Europe's global competitiveness in AI and strengthen the European creative economy.

Create a European data-sharing framework where companies contribute pseudonymized, FAIR-compliant datasets to a centralized portal in exchange for tangible economic and strategic assets, in order to accelerate applied AI research and development.

22. Create a centralized and AI-ready archive for AI training and cultural preservation

Create a centralized, multilingual repository of public domain works to provide high-quality training data for AI models, preserve Europe's cultural heritage, and reduce dependency on non-EU datasets.

Mistral AI is a pioneer company in generative artificial intelligence, empowering the world with the tools to build and benefit from the most transformative technology of our time. The company democratizes AI through high-performance, optimized, and cutting-edge open-source models, products, and solutions, as well as end-to-end infrastructure with Mistral Compute. Headquartered in France and independent, Mistral AI defends a decentralized and transparent approach to technology, with a strong global presence in the United States, United Kingdom, and Singapore.

...

Read the original on europe.mistral.ai »
