10 interesting stories served every morning and every evening.




1 658 shares, 68 trendiness

Concrete Laptop Stand

I am a great lover of brutalist architecture. 1960s concrete buildings may not be for everyone, but I love the aesthetic. I've made a laptop stand, to help me hack in true brutalist style. It has the characteristic beton brut (raw concrete) surface texture, and is quite possibly the heaviest laptop stand in the world. It also boasts 2 x 2.1 amp USB charge ports, a three-pin plug socket for my laptop, and an integral plant pot. Here are some of its highlights.

Rusted rebar and exposed wire add to the theme of urbex and decay

It was a slow process, but here are some action shots of making the laptop stand:

There were two main pours of concrete, to do the base and the side walls. It intentionally wasn't mixed very thoroughly, to produce areas on the surface where there was more sand or more cement. Sanding the sides has also exposed the gravel in the concrete. This helps to make it look aged and weathered.

On smaller pieces such as little plant pots or coasters, it is possible to use quick-drying cement and get the bubbles out by vibrating the form with an electric toothbrush after the pour. For very large pieces such as a dining table, you need to use slow-drying cement, and walk around the tabletop for ages, tapping the form with a rubber mallet to remove any air bubbles. For a medium-sized piece like this, a vibrating dildo is actually the best thing to use. Just think of it like any other power tool.

The plant pot is made of a ghee tin. Four bolts were drilled through it and covered in concrete during the first pour to fix it in place. The inner pot is a grey plastic plant pot which fits perfectly in the ghee tin. I've chosen a string of pearls plant, because I liked the effect of a running plant hanging over the edge. It reminds me of the derelict buildings I've seen during urban exploration.

The exposed wire really adds a sense of dilapidation and urban decay. This isn't actually the live power cable, but it has been made to look like one. The real cable disappears into the concrete on the right-hand side of the laptop stand, and the damaged fake cable comes out of the other side of the wall. The real power lead is strapped to the rebar cage with cable ties, but the overall effect is that it looks like the live cable is badly damaged.

The wire had to be wrapped in kitchen paper and sprayed with ammonia and water, to produce the appropriate corrosion effect. Attempts to lower it into a little pot filled with liquid didn't really work - the copper compounds turned the liquid blue, but it wasn't forming a patina on the wire.

Here's what seems to be happening:

$$ \ce{Cu^2+ + 2NH3 + 2H2O -> Cu(OH)2 + 2NH4+} $$

The exposed rebar was first polished with a wire brush attachment on a Dremel tool, to remove the concrete and expose the metal, then it was rusted with water, salt, and hydrogen peroxide.

The pen pot was similarly rusted with salt water and peroxide, after being scuffed up with some sandpaper. It has also had some moss added: acrylic paint cut with sand, to produce a realistic texture. Dab, don't wipe.

I'm delighted with my laptop stand, even if the aesthetic isn't to everyone's taste. The themes of brutalist architecture, urban decay, and dilapidation have worked out really nicely, especially with the deliberate hole and the rusted metal. It has pride of place on a desk, even though it had to be carried there on a trolley because of the stand's sheer weight; nothing worthwhile comes easy.

...

Read the original on sam-burns.com »

2 584 shares, 40 trendiness

Are We Idiocracy Yet?

...

Read the original on idiocracy.wtf »

3 573 shares, 170 trendiness

Securing critical software for the AI era

Today we're announcing Project Glasswing[1], a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world's most critical software. We formed Project Glasswing because of capabilities we've observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos[2] Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.

Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.

As part of Project Glasswing, the launch partners listed above will use Mythos Preview as part of their defensive security work; Anthropic will share what we learn so the whole industry can benefit. We have also extended access to a group of over 40 additional organizations that build or maintain critical software infrastructure so they can use the model to scan and secure both first-party and open-source systems. Anthropic is committing up to $100M in usage credits for Mythos Preview across these efforts, as well as $4M in direct donations to open-source security organizations.

Project Glasswing is a starting point.
No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play. The work of defending the world's cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now.

Cybersecurity in the age of AI

The software that all of us rely on every day—responsible for running banking systems, storing medical records, linking up logistics networks, keeping power grids functioning, and much more—has always contained bugs. Many are minor, but some are serious security flaws that, if discovered, could allow cyberattackers to hijack systems, disrupt operations, or steal data.

We have already seen the serious consequences of cyberattacks for important corporate networks, healthcare systems, energy infrastructure, transport hubs, and the information security of government agencies across the world. On the global stage, state-sponsored attacks from actors like China, Iran, North Korea, and Russia have threatened to compromise the infrastructure that underpins both civilian life and military readiness. Even smaller-scale attacks, such as those where individual hospitals or schools are targeted, can still inflict substantial economic damage, expose sensitive data, and even put lives at risk. The current global financial costs of cybercrime are challenging to estimate, but might be around $500B every year.

Many flaws in software go unnoticed for years because finding and exploiting them has required expertise held by only a few skilled security experts.
With the latest frontier AI models, the cost, effort, and level of expertise required to find and exploit software vulnerabilities have all dropped dramatically. Over the past year, AI models have become increasingly effective at reading and reasoning about code—in particular, they show a striking ability to spot vulnerabilities and work out ways to exploit them. Claude Mythos Preview demonstrates a leap in these cyber skills—the vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests, and the exploits it develops are increasingly sophisticated.

Ten years after the first DARPA Cyber Grand Challenge, frontier AI models are now becoming competitive with the best humans at finding and exploiting vulnerabilities. Without the necessary safeguards, these powerful cyber capabilities could be used to exploit the many existing flaws in the world's most important software. This could make cyberattacks of all kinds much more frequent and destructive, and empower adversaries of the United States and its allies. Addressing these issues is therefore an important security priority for democratic states.

Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs. Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.

Over the past few weeks, we have used Claude Mythos Preview to identify thousands of zero-day vulnerabilities (that is, flaws that were previously unknown to the software's developers), many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software.

In a post on our Frontier Red Team blog, we provide technical details for a subset of these vulnerabilities that have already been patched and, in some cases, the ways that Mythos Preview found to exploit them. It was able to identify nearly all of these vulnerabilities—and develop many related exploits—entirely autonomously, without any human steering. The following are three examples:

Mythos Preview found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world and is used to run firewalls and other critical infrastructure. The vulnerability allowed an attacker to remotely crash any machine running the operating system just by connecting to it;

It also discovered a 16-year-old vulnerability in FFmpeg—which is used by innumerable pieces of software to encode and decode video—in a line of code that automated testing tools had hit five million times without ever catching the problem;

The model autonomously found and chained together several vulnerabilities in the Linux kernel—the software that runs most of the world's servers—to allow an attacker to escalate from ordinary user access to complete control of the machine.

We have reported the above vulnerabilities to the maintainers of the relevant software, and they have all now been patched.
For many other vulnerabilities, we are providing a cryptographic hash of the details today (see the Red Team blog), and we will reveal the specifics after a fix is in place.

Evaluation benchmarks such as CyberGym reinforce the substantial difference between Mythos Preview and our next-best model, Claude Opus 4.6.

In addition to our own work, many of our partners have already been using Claude Mythos Preview for several weeks. This is what they've found:

“AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back. Our foundational work with these models has shown we can identify and fix security vulnerabilities across hardware and software at a pace and scale previously impossible. That is a profound shift, and a clear signal that the old ways of hardening systems are no longer sufficient.

Providers of technology must aggressively adopt new approaches now, and customers need to be ready to deploy. That is why Cisco joined Project Glasswing—this work is too important and too urgent to do alone.”

“At AWS, we build defenses before threats emerge, from our custom silicon up through the technology stack. Security isn't a phase for us; it's continuous and embedded in everything we do. Our teams analyze over 400 trillion network flows every day for threats, and AI is central to our ability to defend at scale.

We've been testing Claude Mythos Preview in our own security operations, applying it to critical codebases, where it's already helping us strengthen our code. We're bringing deep security expertise to our partnership with Anthropic and are helping to harden Claude Mythos Preview so even more organizations can advance their most ambitious work with security that sets the standard.”

“As we enter a phase where cybersecurity is no longer bound by purely human capacity, the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented. Joining Project Glasswing, with access to Claude Mythos Preview, allows us to identify and mitigate risk early and augment our security and development solutions so we can better protect customers and Microsoft.

When tested against CTI-REALM, our open-source security benchmark, Claude Mythos Preview showed substantial improvements compared to previous models. We look forward to partnering with Anthropic and the broader industry to improve security outcomes for all.”

“The window between a vulnerability being discovered and being exploited by an adversary has collapsed—what once took months now happens in minutes with AI.

Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities. That is not a reason to slow down; it's a reason to move together, faster. If you want to deploy AI, you need security. That is why CrowdStrike is part of this effort from day one.”

“In the past, security expertise has been a luxury reserved for organizations with large security teams. Open source maintainers—whose software underpins much of the world's critical infrastructure—have historically been left to figure out security on their own. Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software.

By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation. This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.”

“Promoting the cybersecurity and resiliency of the financial system is central to JPMorganChase's mission, and we believe the industry is strongest when leading institutions work together on shared challenges. Project Glasswing provides a unique, early-stage opportunity to evaluate next-generation AI tools for defensive cybersecurity across critical infrastructure, both on our own terms and alongside respected technology leaders.

We will take a rigorous, independent approach to determining how to proceed and where we can help. Anthropic's initiative reflects the kind of forward-looking, collaborative approach that this moment demands.”

“Google is pleased to see this cross-industry cybersecurity initiative coming together and to make Mythos Preview available to participants via Vertex AI. It's always been critical that the industry work together on emerging security issues, whether it's post-quantum cryptography, responsible zero-day disclosure, secure open source software, or defense against AI-based attacks.

We have long believed that AI poses new challenges and opens new opportunities in cyber defense, which is why we've built AI-powered tools—such as Big Sleep and CodeMender—to find and fix critical software flaws. We will continue investing in our leading cybersecurity platform and a culture focused on protecting users, customers, the ecosystem, and national security.”

“Over the past few weeks, we've had access to the Claude Mythos Preview model, using it to identify complex vulnerabilities that prior-generation models missed entirely. This is not only a game changer for finding previously hidden vulnerabilities, but it also signals a dangerous shift where attackers can soon find even more zero-day vulnerabilities and develop exploits faster than ever before.

It's clear that these models need to be in the hands of open source owners and defenders everywhere to find and fix these vulnerabilities before attackers get access. Perhaps even more important: everyone needs to prepare for AI-assisted attackers. There will be more attacks, faster attacks, and more sophisticated attacks. Now is the time to modernize cybersecurity stacks everywhere. We commend Anthropic for partnering with the industry to ensure these powerful capabilities prioritize defense first.”

The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills. For example, as shown in the evaluation results below, the model has the highest scores of any model yet developed on a variety of software coding tasks.

More information on the model's capabilities, its safety properties, and its general characteristics can be found in the Claude Mythos Preview system card.

We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale—for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring. To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model's most dangerous outputs. We plan to launch new safeguards with an upcoming Claude Opus model, allowing us to improve and refine them with a model that does not pose the same level of risk as Mythos Preview[3].

Today's announcement is the beginning of a longer-term effort.
To be successful, it will require broad involvement from across the technology industry and beyond.

Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities or weaknesses in their foundational systems—systems that represent a very large portion of the world's shared cyberattack surface. We anticipate this work will focus on tasks like local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing of systems.

Anthropic's commitment of $100M in model usage credits to Project Glasswing and additional participants will cover substantial usage throughout this research preview. Afterward, Claude Mythos Preview will be available to participants at $25/$125 per million input/output tokens (participants can access the model on the Claude API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry).

In addition to our commitment of model usage credits, we've donated $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to enable the maintainers of open-source software to respond to this changing landscape (maintainers interested in access can apply through the Claude for Open Source program).

We intend for this work to grow in scope and continue for many months, and we'll share as much as we can so that other organizations can apply the lessons to their own security. Partners will, to the extent they're able, share information and best practices with each other; within 90 days, Anthropic will report publicly on what we've learned, as well as the vulnerabilities fixed and improvements made that can be disclosed. We will also collaborate with leading security organizations to produce a set of practical recommendations for how security practices should evolve in the AI era.
This will potentially include:

Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities. As we noted above, securing critical infrastructure is a top national security priority for democratic countries—the emergence of these cyber capabilities is another reason why the US and its allies must maintain a decisive lead in AI technology. Governments have an essential role to play in helping maintain that lead, and in both assessing and mitigating the national security risks associated with AI models. We are ready to work with local, state, and federal representatives to assist in these tasks.

We are hopeful that Project Glasswing can seed a larger effort across industry and the public sector, with all parties helping to address the biggest questions around the impact of powerful models on security. We invite other AI industry members to join us in helping to set the standards for the industry. In the medium term, an independent, third-party body—one that can bring together private- and public-sector organizations—might be the ideal home for continued work on these large-scale cybersecurity projects.
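The hash-based disclosure described earlier (publishing a digest of the vulnerability details now, and revealing the specifics only after a fix ships) is a commit-and-reveal scheme. Here is a minimal sketch in Python; the advisory text and nonce are hypothetical, and this is an illustration of the general technique, not Anthropic's actual process:

```python
import hashlib

def commit(details: str, nonce: str) -> str:
    """Publish this digest now; the details cannot be recovered from it."""
    return hashlib.sha256((nonce + details).encode()).hexdigest()

def verify(details: str, nonce: str, published_digest: str) -> bool:
    """After the patch ships, reveal details + nonce; anyone can check them
    against the digest that was published before the fix existed."""
    return commit(details, nonce) == published_digest

# Hypothetical advisory (not a real vulnerability report)
advisory = "CVE-XXXX-YYYY: out-of-bounds read in example_parser()"
nonce = "d41d8cd98f00b204"  # random salt, kept secret until the reveal

digest = commit(advisory, nonce)                  # published on day 0
assert verify(advisory, nonce, digest)            # the reveal checks out
assert not verify(advisory + "!", nonce, digest)  # any edit is detectable
```

The nonce prevents anyone from brute-forcing the committed text by hashing guessed advisories; without it, short or predictable details could be recovered from the digest alone.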

1. The project is named for the glasswing butterfly, Greta oto. The metaphor can be applied in two ways: the butterfly's transparent wings let it hide in plain sight, much like the vulnerabilities discussed in this post; they also allow it to evade harm—like the transparency we're advocating for in our approach.
2. From the Ancient Greek for "utterance" or "narrative": the system of stories through which civilizations made sense of the world.
3. Security professionals whose legitimate work is affected by these safeguards will be able to apply to an upcoming Cyber Verification Program.

...

Read the original on www.anthropic.com »

4 356 shares, 36 trendiness

Dropping Cloudflare for bunny.net

TL;DR: my motivation and experience moving my blog from Cloudflare to bunny.net

I've been a long-time Cloudflare user. They offer a solid service that is free for the vast majority of their users, which is very generous. Their infrastructure is massive and their feature set is undeniably incredible.

One of my biggest concerns, though, is how easily I could become heavily dependent on this one single company, which could then decide to cut me off and disable all of my websites for any arbitrary reason. It's a single point of failure for the internet. Every Cloudflare outage ends up in the news. And I can't help but feel that the idea of centralizing the internet into a single US corporation feels off. Not to mention the various scandals that have surrounded them. So I was open to alternatives.

Bunny.net (affiliate link because why not, raw link here) is a Slovenian (EU) company that is building up a lot of momentum. Their CDN-related services rival Cloudflare already, and although their PoP network is smaller than Cloudflare's, they score highly on performance and speed across the globe. It's a genuinely competitive alternative to Cloudflare.

It has the additional benefit of being a European company, and I like the idea of growing and supporting the European tech scene.

What I was mov­ing away from

I've been using various different services, but focusing on this blog, the first thing was Cloudflare as the registrar for the domain name. I did some research on alternative registrars, but I just didn't find any good European options. The closest I found was INWX, but their lack of free WHOIS privacy made them a non-option. I ended up with Porkbun. They run on Cloudflare infrastructure, but they have better support. So the remaining thing Cloudflare was doing for me was the "Orange Cloud": automatic caching, origin hiding, and optional protection features.

So that's what we're moving over! I'm gonna walk you through how to set up the bunny.net CDN for your website, with some sensible defaults.

Setting up your bunny.net account is quick, and you get $20 worth of free credits to play around with; those are valid for 14 days. You don't need to give them a credit card up front to try things out, but if you do, you get another $30 worth of credits. You do need to confirm your email, though, before you can start setting things up. Once you're out of the trial, you pay per use, which for most cases is cents a month. However, note that bunny.net requires a minimum payment of $1 per month.

I guess a cheap price to pay to stop being the product and start becoming the customer.

The pull zone is the main mechanism for enabling the CDN for your website. You'll find them under CDN in the left navigation bar. Here's how to set one up:

Fill in the pull zone name. Just make it something meaningful to you, for example the website name.

Fill in your Origin URL. This would be the address for directly accessing your server. In my case, it's the public IP of my server.

If you're running multiple apps on your server, for example using Dokploy, Coolify, or self-hosted PaaSes like that, you'll want to pass the Host header as well. Here you put in the domain of your app. In my case, that's jola.dev.

Finally you can select your pricing zones. Note that some zones are more expensive, so you can choose to disable them. This just means that people in those areas will get routed to the closest zone you do have enabled.

And you’re done with the first part!

Now that you've set up the pull zone, it's time to hook it up to your website and domain. Go to the pull zone you created. You'll see a "Hostnames" screen. Time to connect things.

Under "Add a custom hostname", fill in your website domain name.

You'll get a modal with some instructions. You need to follow them to set up DNS so that traffic to your website goes through the CDN.

Go to where you manage your domain's DNS and add a CNAME record pointing your domain to the given CNAME value in the modal, something like website.b-cdn.net.

Once you've done that, wait a few minutes to let it propagate, and then click Verify & Activate SSL.

If it says success, you're done. Your website is now running through the bunny.net CDN, similar to the Cloudflare orange cloud.

This is the part where bunny.net will really shine through!

If your website is set up to return the appropriate cache headers for each resource, things will just work. Bunny defaults to respecting the cache control headers when pointing a pull zone at an origin site. To verify, go to Caching → General and check that "Respect origin Cache-Control" is set under "Cache expiration time". Note that if you set no-cache, bunny will respect that and will not cache at the edge.
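For illustration, here is a minimal sketch (not bunny.net's actual logic) of how a shared cache decides how long it may keep a response when it respects origin Cache-Control. The key detail is that s-maxage applies to shared caches like a CDN, while max-age governs browsers:

```python
def edge_ttl(cache_control: str) -> int:
    """Return how long a shared cache may keep a response, in seconds.
    Simplified: s-maxage wins over max-age; no-cache/no-store mean 0."""
    directives = {}
    for part in cache_control.split(","):
        part = part.strip().lower()
        if "=" in part:
            key, _, value = part.partition("=")
            directives[key] = value
        else:
            directives[part] = True
    if "no-store" in directives or "no-cache" in directives:
        return 0
    for key in ("s-maxage", "max-age"):  # shared-cache directive first
        if key in directives:
            return int(directives[key])
    return 0  # nothing cacheable specified

# The header used later in this post: edge caches for a day, browsers don't
print(edge_ttl("public, s-maxage=86400, max-age=0"))  # 86400
```

Real HTTP caching (RFC 9111) has many more directives and heuristics; this only shows the s-maxage/max-age split that the setup below relies on.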

Alternatively, if you don't have cache headers set up, and you don't want to control that yourself, you can instead enable Smart Cache. This will default to caching typically cached resources like images, CSS, and JS files, while avoiding caching things like HTML pages. This will work for most cases!

But I wanted to go faster. If you've read my post about building this website, here's how I've set up my cache headers: I added a new pipeline in the router called public and added an extra middleware to it. I technically have everything using this pipeline, but leaving the standard browser pipeline that comes out of the box with Phoenix keeps my options open to add authenticated (uncached) pages in the future.

pipeline :public do
  plug :accepts, ["html"]
  plug :put_root_layout, html: {JolaDevWeb.Layouts, :root}
  plug :put_secure_browser_headers, @secure_headers
  plug :put_cdn_cache_header
end

defp put_cdn_cache_header(conn, _opts) do
  put_resp_header(conn, "cache-control", "public, s-maxage=86400, max-age=0")
end

You can see the whole router here: https://github.com/joladev/jola.dev/blob/main/lib/jola_dev_web/router.ex.

This setup means I even cache the HTML pages, which makes this ridiculously fast. Here's the landing page response time from various locations, using the Larm response time checker tool:

Because I'm caching the HTML pages, if I publish a new post I do need to purge the pull zone to reset the cached HTML files.
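Purging can be done from the dashboard, but it can also be scripted as part of a publish step. Here is a hedged sketch against bunny.net's public API; the zone id is hypothetical, the BUNNY_API_KEY environment variable is my own naming, and you should check the current API docs before relying on the endpoint shape:

```python
import os
import urllib.request

def purge_pull_zone(zone_id: int) -> urllib.request.Request:
    """Build the purge request for a pull zone (POST, authenticated via
    the AccessKey header). The caller decides when to actually send it."""
    req = urllib.request.Request(
        f"https://api.bunny.net/pullzone/{zone_id}/purgeCache",
        method="POST",
    )
    req.add_header("AccessKey", os.environ.get("BUNNY_API_KEY", ""))
    return req

req = purge_pull_zone(12345)  # hypothetical pull zone id
print(req.full_url)           # the endpoint the request targets
print(req.get_method())       # POST
# To actually purge: urllib.request.urlopen(req)
```

Wiring this into a deploy script means the stale HTML disappears the moment a new post goes live, instead of waiting out the 86400-second edge TTL.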

All of these are op­tional, but nice to have!

On your pull zone page, under General → Hostnames, toggle Force SSL on for your domain to ensure that all requests use SSL. SSL/TLS is pretty standard these days, and many TLDs and websites use HSTS to enforce it, but there's no harm in enabling it here too.

DDoS protection comes out of the box, but we can set some other things up. First of all, go to Caching and then Origin Shield in the left menu on your pull zone, and activate Origin Shield. Select the location closest to your origin. This reduces load on your server, as bunny.net will cache everything in the Origin Shield location, and all edge locations will try that location first before hitting your server.

Next, go to Caching → General and scroll down. At the bottom of the page you can select Stale Cache: While Origin Offline and While Updating. This means bunny will keep serving cached content, even if it is stale, when it can't reach your origin, and that it will serve stale content while fetching the latest version. Both are nice-to-haves, nothing you have to enable, but they provide a slightly better service to your users!

Next, let's set up an Edge rule to redirect any requests on our automatically generated pull zone domain to our actual domain, to avoid confusing crawlers. On your pull zone, in the left menu, click Edge rules.

For URL, input your URL plus the path variable. E.g. for me it's https://jola.dev{{path}}.

For conditions, pick Match any and Request URL Match any.

Input *:// followed by your pull zone's b-cdn.net hostname (the name given to your pull zone).

Now you should be able to go to https://slug.b-cdn.net for your pull zone and get redirected to your proper domain!
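The rewrite this edge rule performs is just a host swap that preserves the path and query string. As a sketch (with hypothetical hostnames; the real work happens in bunny's edge, not in your app):

```python
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "jola.dev"  # the domain from this post; substitute your own

def redirect_target(url: str) -> str:
    """Map a request on the *.b-cdn.net pull zone hostname to the same
    path and query on the canonical domain, forcing https."""
    parts = urlsplit(url)
    return urlunsplit(("https", CANONICAL_HOST, parts.path, parts.query, ""))

print(redirect_target("http://slug.b-cdn.net/posts/hello?x=1"))
# https://jola.dev/posts/hello?x=1
```

Crawlers that stumble onto the b-cdn.net hostname then land on one canonical URL per page, which is the duplicate-content problem the rule avoids.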

This post just covers the very basics of getting set up on bunny.net. I haven't even scratched the surface of edge rules, cache configuration, the Shield features for security and firewalls, video hosting and streaming, edge scripting and edge distributed containers, and much more.

I especially appreciate the great statistics, logs, and metrics you get out of the dashboard. You can even see every single request coming through, to help you investigate issues, with clear feedback on what's getting cached and what isn't. I'm actively moving everything else over, and I'm excited for the upcoming S3-compatible storage!

...

Read the original on jola.dev »

5 355 shares, 31 trendiness

A bug on the dark side of the Moon

The Apollo Guidance Computer (AGC) is one of the most scru­ti­nised code­bases in his­tory. Thousands of de­vel­op­ers have read it. Academics have pub­lished pa­pers on its re­li­a­bil­ity. Emulators run it in­struc­tion by in­struc­tion. We found a bug in it that had been missed for fifty-seven years: a re­source lock in the gyro con­trol code that leaks on an er­ror path, silently dis­abling the guid­ance plat­for­m’s abil­ity to re­align.

We used Claude and Allium, our open-source be­hav­ioural spec­i­fi­ca­tion lan­guage, to dis­til 130,000 lines of AGC as­sem­bly into 12,500 lines of specs. The specs were de­rived from the code it­self, and the process sign­posted us di­rectly to the de­fect.

The source code has been pub­licly avail­able since 2003, when Ron Burkey and a team of vol­un­teers be­gan painstak­ingly tran­scrib­ing it by hand from printed list­ings at the MIT Instrumentation Laboratory. In 2016, for­mer NASA in­tern Chris Garry’s GitHub repos­i­tory went vi­ral, top­ping the trend­ing page. Thousands of de­vel­op­ers scrolled through the as­sem­bly lan­guage of a ma­chine with 2K of erasable RAM and a 1MHz clock.

The AGC’s programs were stored in 74KB of core rope: copper wire threaded by hand through tiny magnetic cores in a factory (a wire passing through a core was a 1; a wire bypassing it was a 0). The women who wove it were known internally as the “Little Old Ladies”, and the memory itself was called LOL memory. The program was physically woven into the hardware. Ken Shirriff has analysed it down to individual gates, and the Virtual AGC project runs the software in emulation, having confirmed the recovered source byte-for-byte against the original core rope dumps.

As far as we can de­ter­mine, no for­mal ver­i­fi­ca­tion, model check­ing or sta­tic analy­sis has been pub­lished against the flight code. The scrutiny has been deep, but it has been a par­tic­u­lar kind of scrutiny: read­ing the code, em­u­lat­ing the code, ver­i­fy­ing the tran­scrip­tion.

We took a dif­fer­ent ap­proach. We used Allium to dis­til a be­hav­ioural spec­i­fi­ca­tion from the Inertial Measurement Unit (IMU) sub­sys­tem, the gy­ro­scope-based plat­form that tells the space­craft which way it is point­ing. The spec­i­fi­ca­tion mod­els the life­cy­cle of every shared re­source: when it is ac­quired, when it must be re­leased, and on which paths.

It sur­faced a flaw that read­ing and em­u­la­tion had missed.

The AGC man­ages the IMU through a shared re­source lock called LGYRO. When the com­puter needs to torque the gy­ro­scopes (to cor­rect plat­form drift or per­form a star align­ment), it ac­quires LGYRO at the start and re­leases it when all three axes have been torqued. The lock pre­vents two rou­tines from fight­ing over the gyro hard­ware at the same time.

The lock is ac­quired on the way in and re­leased on the way out. But there is a third pos­si­bil­ity, and it does­n’t re­lease the lock.

‘Caging’ is an emergency measure: a physical clamp that locks the IMU’s gimbals in place to protect the gyroscopes from damage. The crew could trigger it with a guarded switch in the cockpit.

When the torque com­pletes nor­mally, the rou­tine ex­its via STRTGYR2 and the LGYRO lock is cleared. When the IMU is caged while a torque is in progress, the code ex­its via a rou­tine called BADEND, which does not clear the lock. Two in­struc­tions are miss­ing:

CAF ZERO
TS LGYRO

Once LGYRO is stuck, every sub­se­quent at­tempt to torque the gy­ros finds the lock held, sleeps wait­ing for a wake sig­nal that will never come, and hangs. Fine align­ment, drift com­pen­sa­tion, man­ual gyro torque: all blocked.

On 21 July 1969, while Neil Armstrong and Buzz Aldrin walked on the lunar surface, Michael Collins orbited alone in the Command Module Columbia. Every two hours he disappeared behind the Moon, out of radio contact with Earth. “I am alone now, truly alone, and absolutely isolated from any known life. I am it,” he wrote in Carrying the Fire. “If a count were taken, the score would be three billion plus two over on the other side of the moon, and one plus God knows what on this side.”

During each pass he ran Program 52, a star-sight­ing align­ment that kept the guid­ance plat­form pointed in the right di­rec­tion. If the plat­form drifted, the en­gine burn to bring him home would point the wrong way.

Here’s how the bug might have man­i­fested.

Collins has just fin­ished his star sight­ings at the op­tics sta­tion in the lower equip­ment bay and keyed in the fi­nal com­mands. The com­puter is torquing the gy­ro­scopes to ap­ply the cor­rec­tion across all three axes.

He moves back to­ward the main panel in a cramped cock­pit, past a cage switch pro­tected by a flip-up cover. An el­bow catches the cover and nudges the switch. The code han­dles this grace­fully: a rou­tine called CAGETEST de­tects the cage, aban­dons the torque and ex­its. The P52 fails, and he un­der­stands why: the cage in­ter­rupted the cor­rec­tion. He uncages the IMU and heads back to the op­tics sta­tion to re­align.

He starts a new P52. The pro­gram hangs.

No alarm, no pro­gram light. The DSKY (display and key­board, his only in­ter­face to the com­puter) ac­cepts the in­put and does noth­ing. He tries V41, the man­ual gyro torque verb. Same re­sult. Everything else on the com­puter works. Only gyro op­er­a­tions are dead.

The first fail­ure looked nor­mal: a cage event dur­ing align­ment, with a known re­cov­ery. The sec­ond gives no clue what is wrong. The trained re­sponse to an ac­ci­den­tal cage is to uncage and re­align. Collins had been trained to restart the com­puter, but noth­ing about this fail­ure would sug­gest he needed to. Commands were ac­cepted, every­thing else worked. It would look like faulty hard­ware, not a stuck lock.

“My secret terror for the last six months has been leaving them on the Moon and returning to Earth alone”, Collins later wrote of the rendezvous. A dead gyro system behind the Moon, with Armstrong and Aldrin on the surface waiting for a rendezvous burn that depends on a platform he can no longer align, is exactly that scenario.

A hard re­set would have cleared it. But the 1202 alarms dur­ing the lu­nar de­scent had been stress­ful enough with Mission Control on the line and Steve Bales mak­ing a snap abort-or-con­tinue call.

Behind the Moon, alone, with a com­puter that was ac­cept­ing com­mands and do­ing noth­ing, Collins would have had to make that call by him­self.

Margaret Hamilton (as “rope mother” for LUMINARY) approved the final flight programs before they were woven into core rope memory. Her team at the MIT Instrumentation Laboratory pioneered concepts we now take for granted: priority scheduling, asynchronous multitasking, restart protection and software-based error recovery. Even the term ‘software engineering’ is hers.

Their pri­or­ity sched­ul­ing saved the Apollo 11 land­ing when the 1202 alarms fired dur­ing de­scent, shed­ding low-pri­or­ity tasks un­der load ex­actly as de­signed. Most mod­ern sys­tems don’t han­dle over­load that grace­fully.

The most se­ri­ous bugs that did sur­face were spec­i­fi­ca­tion er­rors, not cod­ing mis­takes. Don Eyles, who wrote the lu­nar land­ing guid­ance code, doc­u­mented sev­eral. For ex­am­ple, the ICD for the ren­dezvous radar spec­i­fied that two 800 Hz power sup­plies would be fre­quency-locked but said noth­ing about phase syn­chro­ni­sa­tion. The re­sult­ing phase drift made the an­tenna ap­pear to dither, gen­er­at­ing roughly 6,400 spu­ri­ous in­ter­rupts per sec­ond per an­gle and con­sum­ing roughly 13% of the com­put­er’s ca­pac­ity dur­ing Apollo 11’s de­scent. This was the un­der­ly­ing cause of the 1202 alarms.

This de­fect has the same shape. BADEND is a gen­eral-pur­pose ter­mi­na­tion rou­tine shared by all IMU mode-switch­ing op­er­a­tions. It clears MODECADR (the stall reg­is­ter), wakes sleep­ing jobs, and ex­its. But LGYRO is a gyro-spe­cific lock, ac­quired only by the pulse-torquing code and re­leased only by the nor­mal com­ple­tion path in STRTGYR2. When the er­ror path routes through BADEND, it han­dles the gen­eral re­sources cor­rectly, but not the gyro-spe­cific lock.

The AGC was writ­ten so de­fen­sively that la­tent faults like this would be silently cor­rected by the restart logic, which clears LGYRO as a side ef­fect of full erasable-mem­ory ini­tial­i­sa­tion. Any test that hap­pened to trig­ger a restart af­ter the bug would see the sys­tem re­cover seam­lessly.

The de­fen­sive cod­ing hid the prob­lem, but it did­n’t elim­i­nate it. A cage event with­out a sub­se­quent restart would still leave the gy­ros locked. Collins would have no way to re­align the guid­ance plat­form and no di­ag­nos­tic clue point­ing to the fix.

We found this de­fect by dis­till­ing a be­hav­ioural spec­i­fi­ca­tion of the IMU sub­sys­tem us­ing Allium, an AI-native be­hav­ioural spec­i­fi­ca­tion lan­guage. The spec­i­fi­ca­tion mod­els each shared re­source as an en­tity with a life­cy­cle: ac­quired, held, re­leased.

The IMU en­tity de­clares a gy­ros_busy field mod­el­ling LGYRO. Two rules gov­ern it:

rule GyroTorque {
  -- Sends gyro torquing pulse commands. Reserves the gyros,
  -- enables power supply, and dispatches pulses per axis.
  when: GyroTorque(command: GyroTorqueCommand)
  requires:
    imu.mode != caged
    imu.gyros_busy = false
  ensures:
    imu.gyros_busy = true
    GyroTorqueStarted()
}

rule GyroTorqueBusy {
  -- Gyros already reserved by another torquing operation.
  -- Caller sleeps until LGYRO is cleared.
  when: GyroTorque(command: GyroTorqueCommand)
  requires: imu.gyros_busy = true
  ensures:
    JobSleep(job: calling_job())
}

GyroTorque re­quires gy­ros_busy = false and en­sures gy­ros_busy = true: the lock is ac­quired. Somewhere, on every path that fol­lows, the lock must be re­leased. The spec does­n’t show where in the code the re­lease hap­pens, but it makes the oblig­a­tion ex­plicit: if gy­ros_busy goes to true, some­thing must set it back to false.

With that obligation written down, Claude traced every path that runs after gyros_busy is set to true. The normal completion path (STRTGYR2) clears it. The cage-interrupted path (BADEND) does not. MODECADR, the other shared resource, is correctly cleared in BADEND; LGYRO is missing.

The spec­i­fi­ca­tion forces this ques­tion on every path through the IMU mode-switch­ing code. A re­viewer ex­am­in­ing BADEND would see cor­rect, com­plete cleanup for every re­source BADEND was de­signed to han­dle.

The spec­i­fi­ca­tion ap­proaches from the other di­rec­tion: start­ing from LGYRO and ask­ing whether any paths fail to clear it.

Tests ver­ify the code as writ­ten; a be­hav­ioural spec­i­fi­ca­tion asks what the code is for.

A spec­i­fi­ca­tion dis­tilled by Allium mod­els re­source life­cy­cles across all paths, in­clud­ing the ones no­body thought to test. You can view the Allium spec­i­fi­ca­tions and re­pro­duc­tion of the bug on GitHub.

Hamilton’s team re­leased re­sources by load­ing the con­stant zero into the ac­cu­mu­la­tor (CAF ZERO) and stor­ing it into the lock reg­is­ter (TS LGYRO). Every re­lease placed man­u­ally, by a pro­gram­mer who re­mem­bered every path that could reach that point.

Modern lan­guages have tried to make lock leaks struc­turally im­pos­si­ble: Go has de­fer, Java has try-with-re­sources, Python has with, Rust’s own­er­ship sys­tem makes lock leaks a com­pile-time er­ror.

Nevertheless, lock leaks persist. MITRE classifies the pattern as CWE-772: “Missing Release of Resource after Effective Lifetime”, and rates its likelihood of exploitation as high. Not all resources are managed by a language runtime. Database connections, distributed locks, file handles in shell scripts, infrastructure that must be torn down in the right order: these are still often the programmer’s responsibility. Anywhere the programmer is responsible for writing the cleanup, the same bug is waiting.
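The shape of the bug is easy to reproduce in any language with manually managed locks. Here is a minimal Python sketch (illustrative only, not the AGC’s actual control flow): the manual version leaks the lock on its error path, exactly like BADEND, while the `with` version releases on every path.

```python
import threading

gyro_lock = threading.Lock()  # plays the role of LGYRO

def torque_gyros_manual(caged: bool) -> bool:
    """Manual acquire/release, AGC-style: every exit path must
    remember to do its own cleanup."""
    gyro_lock.acquire()
    if caged:
        return False      # the BADEND path: no release, lock leaks
    gyro_lock.release()   # the STRTGYR2 path: lock cleared
    return True

def torque_gyros_structured(caged: bool) -> bool:
    """Structural release: the context manager unlocks on every
    path, including early returns and exceptions."""
    with gyro_lock:
        return not caged
```

After torque_gyros_manual(caged=True) the lock stays held, so every later torque attempt blocks forever with no error raised: the same silent hang described above.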

Every Apollo crew came home safely. But the IMU mode-switch­ing rou­tines were car­ried for­ward across mis­sions in both the Command Module soft­ware (COMANCHE) and the Lunar Module soft­ware (LUMINARY). The fault was never no­ticed and never fixed.

A fifty-seven-year-old bug hid in flight-proven as­sem­bly. What’s hid­ing in yours? Let’s talk.

Thanks to Farzad Fuzz” Pezeshkpour for in­de­pen­dently re­pro­duc­ing the bug, and to Danny Smith and Prashant Gandhi for re­view­ing early drafts of this ar­ti­cle.

...

Read the original on www.juxt.pro »

6 343 shares, 22 trendiness

My Experience As A Rice Farmer

I’ve been in­spired to write some­thing for

April Cools Club, and what fits bet­ter from my nor­mal con­tent than my ex­pe­ri­ence rice farm­ing in rural Japan!

For those who aren’t aware, in 2025 I spent January-July in Japan stay­ing with my wife’s fam­ily. During that time we helped out on the fam­ily rice farm near Shuzenji in the Shizuoka pre­fec­ture. I un­for­tu­nately had to leave be­fore the full har­vest process was done but I’ll take you as far as I got and also try and share other in­sights I gleaned.

Unfortunately, while I thought I took a lot of pho­tos it seems I’m miss­ing things I would have liked to have cap­tured for this. Where ap­plic­a­ble there’ll be other sources and at least one video linked for more in­for­ma­tion.

The farm is primarily a rice farm; there are no animals (ignoring the koi fish that were in the garden pond). There is a portion of a bamboo forest and space to plant non-rice crops, so we also grow or harvest for consumption:

Warabi - an edible bracken. Not so much planted as just growing everywhere

Whatever other veg­eta­bles they de­cide to plant

And look­ing out from the dri­ve­way this is the view at the end of win­ter be­fore every­thing starts grow­ing and spring prop­erly kicks in:

Obviously, me and my wife aren’t around all the time. When we’re not, my brother-in-law and mother-in-law work on the farm part time, often 1-2 days per week each.

At the start of spring we come back to the fields. They’ve been left fallow over the winter and the dead rice plants from last year cover them. They’re currently dry; we’ll flood them later on once they’ve been prepared. Because the dead growth is hard and spiky, we have cordless strimmers with metal blades to cut through it without taking forever.

I don’t have a pic­ture of a rice field be­fore clear­ing but here is one of the fields par­tially cleared:

As part of clearing and getting ready we also dig the drainage ditches along the side of the fields. When the fields were drained last year the soft mud flowed back into the ditches and hardened again, so they need to be redug.

The field will also be ploughed to break up the soil and loosen it, and we’ll re­move large rocks we find. After plough­ing we can level the field to flat­ten it. With the field level, the rice will be at equal depths and the plant­ing process is more con­sis­tent.

This might be the first time some­one’s pre­pared a rice field wear­ing a Rust London t-shirt. It’s def­i­nitely my first time dri­ving a trac­tor!

But be­fore we get that far we have to pre­pare the route for the wa­ter to get into the field. This work is only ac­tu­ally needed for one of our fields, the oth­ers have a fairly di­rect route from the river to the field. But for one field we have to clear a few hun­dred me­tres of a chan­nel that goes along the edge of a bam­boo for­est clear­ing the dead bam­boo and other nat­ural de­tri­tus.

I don’t have any pictures of this, but imagine all the joys of clearing out hundreds of metres of ditches among dense vegetation in high humidity.

I do have a pic­ture af­ter plough­ing with the drainage ditch for the field dug next to the river that will sup­ply our wa­ter:

One last thing we might do be­fore flood­ing is drive metal rods into the perime­ter of the field as part of build­ing a fence. This does­n’t have to be done for every field, just the ones that bor­der the bam­boo for­est where the wild boar and deer might sneak through at night and eat the rice.

Rice fields are typ­i­cally placed near rivers, be­fore plant­ing we have to flood and level the fields. We’ll go down into the river, and place a wooden board by a drainage pipe at the edge redi­rect­ing wa­ter down that pipe and into a chan­nel next to the field. We can then open a hole and let wa­ter go from that chan­nel into the field. Water can then drain out of the edge of the field when it gets full, con­tin­u­ing on into other fields and even­tu­ally back into the river. The wa­ter will re­join the river in part so that farm­ing does­n’t dry out rivers and en­sure the longevity of the en­vi­ron­ment.

For the field with the more oner­ous ditch clear­ing that wa­ter flows un­der the field and even­tu­ally back into the river. For that there’s an ad-hoc con­struc­tion of some old drain and bam­boo to move the wa­ter across into the field:

And the wa­ter en­ter­ing that field:

After flood­ing, a trac­tor with a flat rear blade will be moved over the field a few times to level it. When the plant­ing ve­hi­cle goes over the field lit­tle arms pluck off some rice and stick it down. If the soil is too far from the arm you end up with loose rice float­ing around on top of the wa­ter. Obviously, we don’t want to waste rice like this so lev­el­ling is an im­por­tant step.

One thing to note: with a rice field the deeper soil is compacted and firm, so water shouldn’t be able to drain away into the water table and disappear. However, our field with the tricky water intake did suffer from a minor sinkhole, as water was able to go down and rejoin the stream that flows under the field. This resulted in some work to dig down, fill the leaking area with rocks and harder mud, and compact it with the bucket of a digger. After this work was done the field held its water and we were able to think about planting.

After pok­ing around to fig­ure out why wa­ter was drain­ing I man­aged to get this pic­ture of the hole that started to open up. I guess that’s a sink­hole of sorts.

An in­ter­est­ing fact is that rice does­n’t ac­tu­ally need stand­ing wa­ter to grow. The wa­ter helps by stop­ping weeds grow­ing around the rice tak­ing re­sources and pro­tects the rice from cer­tain pests that would eat it.

For some fur­ther watch­ing, this video

shows a more advanced but very similar process for a different farm. The main difference is that they don’t need to manually go to the inlet gates to open/close them and instead have more modern gates controlled via mobile phone.

It’s plant­ing day, turn­ing up I can see the neigh­bours have al­ready planted and here you can see our ready but empty field next to their freshly planted field:

But here we go, every­thing we’ve been work­ing to­ward. The pre­vi­ous process has taken from mid-Feb­ru­ary and now it’s early May. We go off and buy seed trays of rice to load into the Rice Transplanter. Below you can see a pic­ture of the plant­ing process:

An arm will move along the bottom of the rice and pull off a clump of rice and then plunge it into the ground. It will keep moving back and forth doing this at regular intervals. The motion of it working is reminiscent of a typewriter.

After it’s done there’s some left­over rice, and there might be gaps where things weren’t per­fectly level. We go out into the field wear­ing

jika-tabi. These are boots with a split be­tween the big toe and the other toes. It’s meant to help our feet not get stuck in the wet mud. Grabbing rice in small bunches we pull them from the seed tray and plant them about an inch deep into the mud and com­pact some mud around it.

Fun lan­guage note, my wife asked me if I saw the tabi once and I thought she meant a tabby cat. I was­n’t aware of the name of the footwear.

Now the rice is in the field we’re at risk of attack. Wild boar and deer just love to snack on our hard work - this means it’s time to put up the electric fence. This is fairly simple: drive the poles into the ground at regular intervals, then feed the wire along them, wrapping it round the clips and making sure it’s moderately taut. Also check for any breaks in the wire; if there are any, get a bit of electrical tape and fix them.

After wiring we place a box which is just a so­lar panel and bat­tery on a timer next to the fence and try to ham­mer it into the ground or prop it up se­curely enough with rocks where the dirt is too shal­low.

We’ll have to come back every week or so to cut the grass that sprouts up on the edge of the field. If we don’t it will ground the fence and drain the bat­tery and our rice will fall vic­tim to the lo­cal wildlife.

After plant­ing our fields look like this:

When the rice gets older - around waist height the field is drained. Some sort of nar­row plough is moved be­tween the rows push­ing the mud up around the rice to hold it up and then the wa­ter in­take is closed and the field left to drain and dry out. Then the rice will con­tinue to grow un­til it’s har­vest time.

Unfortunately, I left Japan a cou­ple of weeks af­ter drain­ing and I haven’t ex­pe­ri­enced the fi­nal stage of har­vest yet. I have this pic­ture I was sent of the rice near har­vest time but the fi­nal stages will have to re­main shrouded in mys­tery for now. I’m not ready for spoil­ers when I may learn this in fu­ture first­hand:

A spec­tre has been loom­ing over this post. The wild boar. I got an up­date one day it seems a baby boar man­aged to squeeze un­der an un­elec­tri­fied part of the fence and help it­self to an all you can eat buf­fet:

Luckily some­one came to the farm the day be­fore and the day af­ter it hap­pened and it was closed up be­fore the boars started vis­it­ing nightly. But it seems im­por­tant to re­main vig­i­lant of your de­fences. I’ve still not seen a boar in the wild even go­ing through the nearby forests - they’re noc­tur­nal and rather dan­ger­ous so I’m glad of that!

In rice fields you can see a lot of in­ter­est­ing wildlife. Frogs and sala­man­ders help pro­tect the crop by eat­ing bugs that might feed on the rice. You also might see snakes nearby that feed on them as well.

When clear­ing grass once I saw a snake dart out from un­der a pickup truck we’d had parked up for a few hours as I walked past. I then looked at the grass I was go­ing to cut and saw it hun­kered down in the grass but ob­scured enough to not get a pic­ture and not want­ing to dis­turb it I moved on. After all I don’t know how dan­ger­ous it might be.

I asked my brother-in-law about the snakes later on, when he came to the truck to get a drink, and whether they’re dangerous. He asked if it was “brown or blue” (aoi 青い) - it was brown. Also, blue here isn’t blue but green: historically ao meant the entire blue-green spectrum, so for some older terms (often things like animal colours), aoi is still used instead of the more modern word for green (midori 緑). Anyway, his response to my answer is how I first heard the Japanese word 有毒な (yūdokuna) - venomous in English. Not speaking any English, he further translated it by grabbing his throat and miming frothing at the mouth.

There are also black kites fly­ing around, they’ve been known to swoop down and snatch up kit­tens and there are warn­ing signs in some more pop­u­lated places about keep­ing close to small pets. I’ve seen them cir­cling in the heat but it’s hard to get a good pho­to­graph of birds with a nor­mal smart­phone cam­era. But I have my best cap­ture of one:

When I was in Japan there was a rice price cri­sis (try say­ing that three times fast). With a 95% in­crease in price, it ac­tu­ally be­came cheaper to fly to South Korea, fill a suit­case with rice and fly back. Eventually, the gov­ern­ment re­leased part of its emer­gency rice sup­ply kept in stor­age to tackle food short­ages and mit­i­gate against dis­as­ters. This sit­u­a­tion is likely to oc­cur again, and as an out­sider look­ing at how Japan’s farm­ing sys­tem is or­gan­ised it seems un­avoid­able with­out sig­nif­i­cant re­forms.

In Japan the average age of a full-time rice farmer is around 70. Younger generations can only afford to do it part-time, 1 or 2 days a week, and a typical family owns 4-6 rice fields. There are no factory farms or large-scale operations.

In this respect my wife’s family is very average. Rice farming doesn’t generate enough income to do it full time, so my mother-in-law and brother-in-law only farm 1 or 2 days a week maximum. Without more time they’re only able to plant enough fields to account for the family’s rice consumption, not to sell rice.

Part of the reason for this is the Gentan system: designed to protect small-scale farmers’ income, it prevents large-scale factory farming of rice and encourages ownership of smaller farms. It has been officially abolished but it still shapes how the rice economy works. This was initially part of a system to discourage communism, by encouraging ownership of businesses and preventing absentee landlords from accumulating large tracts of land where the people who work the fields would be forced into renting. It should be noted the UK’s system is like this, with rich landowners accumulating more farmland for tax reasons and renting it to farmers who often struggle to make farming profitable.

Farmers also sell their crops via a centrally managed system which fixes the price. Historically, crops used for animal feed have fetched a higher price than human-quality rice, leading a number of farmers to plant rice for themselves and then sell animal feed to make a living.

Another issue is automation of farming. Reading this account of rice farming you might think this seems very manual, and it is. In America rice is aerially planted. Consistency in the fields and in the distribution of the rice leads to higher yields, and if you’re dealing with such a small farm area that becomes more important - so things like aerial planting become less economically viable. An American farm can be roughly 100 times larger than a Japanese one.

Additionally, with ris­ing cost of liv­ing a lot of the youth of Japan move to cities like Tokyo, Osaka and Nagoya where they can find bet­ter paid of­fice work. Local rural economies strug­gle more as they lose peo­ple and in­come from work does­n’t scale with the costs of liv­ing. It seems un­avoid­able we’ll see more and more rice farms close and fur­ther im­pacts due to de­creased out­put.

If you’re in­ter­ested in this there’s a video about this on

Asianometry.

Reading this last sec­tion it might seem to end in doom and gloom. This is­n’t re­ally how I wanted to sign off on things. Rice farm­ing was a pos­i­tive ex­pe­ri­ence for me, a con­nec­tion with na­ture, build­ing re­la­tion­ships with my wife’s fam­ily and grow­ing my Japanese skills. Doing a day of man­ual labour, chat­ting shit, then go­ing for the on­sen and some BBQ and beers is far bet­ter than grind­ing away at some en­ter­prise SaaS that will prob­a­bly dis­ap­pear in a few years.

Farming be­com­ing eco­nom­i­cally un­vi­able seems to be some­thing af­flict­ing many coun­tries. At some point I ex­pect a wakeup call or tran­si­tion. Either things are changed to make it vi­able full-time or Japan’s sys­tem of small in­de­pen­dent farms will grad­u­ally fade away. Only time will tell, but I hope that rural com­mu­ni­ties can con­tinue to sur­vive and also thrive.

...

Read the original on xd009642.github.io »

7 330 shares, 67 trendiness

...

Read the original on z.ai »

8 291 shares, 22 trendiness

Every GPU That Mattered

49 cards across 30 years.

Every GPU plotted by year and transistor count.

What Gamers Actually Use

The flag­ship costs $1,999. The most pop­u­lar card costs $329.

RTX 3060 at 4.1% vs RTX 5090 at 0.42%.

...

Read the original on sheets.works »

9 273 shares, 11 trendiness

Anthropic expands partnership with Google and Broadcom for multiple gigawatts of next-generation compute

We have signed a new agree­ment with Google and Broadcom for mul­ti­ple gi­gawatts of next-gen­er­a­tion TPU ca­pac­ity that we ex­pect to come on­line start­ing in 2027. This sig­nif­i­cant ex­pan­sion of our com­pute in­fra­struc­ture will power our fron­tier Claude mod­els and help us serve ex­tra­or­di­nary de­mand from cus­tomers world­wide.

This ground­break­ing part­ner­ship with Google and Broadcom is a con­tin­u­a­tion of our dis­ci­plined ap­proach to scal­ing in­fra­struc­ture: we are build­ing the ca­pac­ity nec­es­sary to serve the ex­po­nen­tial growth we have seen in our cus­tomer base while also en­abling Claude to de­fine the fron­tier of AI de­vel­op­ment,” said Krishna Rao, CFO of Anthropic. We are mak­ing our most sig­nif­i­cant com­pute com­mit­ment to date to keep pace with our un­prece­dented growth.”

Demand from Claude cus­tomers has ac­cel­er­ated in 2026. Our run-rate rev­enue has now sur­passed $30 bil­lion—up from ap­prox­i­mately $9 bil­lion at the end of 2025. When we an­nounced our Series G fundrais­ing in February, we shared that over 500 busi­ness cus­tomers were each spend­ing over $1 mil­lion on an an­nu­al­ized ba­sis. Today that num­ber ex­ceeds 1,000, dou­bling in less than two months.

The vast ma­jor­ity of the new com­pute will be sited in the United States, mak­ing this part­ner­ship a ma­jor ex­pan­sion of our November 2025 com­mit­ment to in­vest $50 bil­lion in strength­en­ing American com­put­ing in­fra­struc­ture.

The part­ner­ship deep­ens our ex­ist­ing work with Google Cloud—building on the in­creased TPU ca­pac­ity we an­nounced last October—as well as our re­la­tion­ship with Broadcom.

We train and run Claude on a range of AI hard­ware—AWS Trainium, Google TPUs, and NVIDIA GPUs—which means we can match work­loads to the chips best suited for them. This di­ver­sity of plat­forms trans­lates to bet­ter per­for­mance and greater re­silience for cus­tomers who de­pend on Claude for crit­i­cal work. Amazon re­mains our pri­mary cloud provider and train­ing part­ner, and we con­tinue to work closely with AWS on Project Rainier. Claude re­mains the only fron­tier AI model avail­able to cus­tomers on all three of the world’s largest cloud plat­forms: Amazon Web Services (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry).

...

Read the original on www.anthropic.com »

10 227 shares, 32 trendiness

Cloudflare targets 2029 for full post-quantum security

Cloudflare is accelerating its post-quantum roadmap. We now target 2029 to be fully post-quantum (PQ) secure, including, crucially, post-quantum authentication.

At Cloudflare, we be­lieve in mak­ing the Internet pri­vate and se­cure by de­fault. We started by of­fer­ing free uni­ver­sal SSL cer­tifi­cates in 2014, be­gan prepar­ing our post-quan­tum mi­gra­tion in 2019, and en­abled post-quan­tum en­cryp­tion for all web­sites and APIs in 2022, mit­i­gat­ing har­vest-now/​de­crypt-later at­tacks. While we’re ex­cited by the fact that over 65% of hu­man traf­fic to Cloudflare is post-quan­tum en­crypted, our work is not done un­til au­then­ti­ca­tion is also up­graded. Credible new re­search and rapid in­dus­try de­vel­op­ments sug­gest that the dead­line to mi­grate is much sooner than ex­pected. This is a chal­lenge that any or­ga­ni­za­tion must treat with ur­gency, which is why we’re ex­pe­dit­ing our own in­ter­nal Q-Day readi­ness time­line.

What hap­pened? Last week, Google an­nounced they had dras­ti­cally im­proved upon the quan­tum al­go­rithm to break el­lip­tic curve cryp­tog­ra­phy, which is widely used to se­cure the Internet. They did not re­veal the al­go­rithm, but in­stead pro­vided a zero-knowl­edge proof that they have one.

This is not even the biggest breakthrough. That same day, Oratomic published a resource estimate for breaking RSA-2048 and P-256 on a neutral-atom computer. For P-256, a shockingly low 10,000 qubits suffice. Google's recent announcement that it will also pursue neutral atoms alongside superconducting quantum computers now makes sense. Although Oratomic explains its basic approach, it deliberately leaves out crucial details.

These independent advances prompted Google to accelerate their post-quantum migration timeline to 2029. What's more, in their announcement and other talks, Google has placed a priority on quantum-secure authentication over mitigating harvest-now/decrypt-later attacks. As we discuss next, this priority indicates that Google is concerned about Q-Day coming as soon as 2030. Following the announcements, IBM Quantum Safe's CTO is more pessimistic and can't rule out “quantum moonshot attacks” on high-value targets as early as 2029.

The quan­tum threat is well known: Q-Day is the day that suf­fi­ciently ca­pa­ble quan­tum com­put­ers can break es­sen­tial cryp­tog­ra­phy used to pro­tect data and ac­cess across sys­tems to­day. Cryptographically rel­e­vant quan­tum com­put­ers (CRQCs) don’t ex­ist yet, but many labs across the world are pur­su­ing dif­fer­ent ap­proaches to build­ing one. Until re­cently, progress on CRQCs has been mostly pub­lic, but there is no rea­son to ex­pect that will con­tinue. Indeed, there is am­ple rea­son to ex­pect that progress will leave the pub­lic eye. As quan­tum com­puter sci­en­tist Scott Aaronson warned at the end of 2025:

[A]t some point, the peo­ple do­ing de­tailed es­ti­mates of how many phys­i­cal qubits and gates it’ll take to break ac­tu­ally de­ployed cryp­tosys­tems us­ing Shor’s al­go­rithm are go­ing to stop pub­lish­ing those es­ti­mates, if for no other rea­son than the risk of giv­ing too much in­for­ma­tion to ad­ver­saries. Indeed, for all we know, that point may have been passed al­ready.

That point has now passed in­deed.

We'd like to spend a moment on why it's difficult to predict progress on quantum computing. Sudden “quantum” leaps in understanding, like the one we witnessed last week, can occur even if everything happens in the public eye. Simply put, breaking cryptography with a quantum computer requires engineering on three independent fronts: quantum hardware, error correction, and quantum software. Progress on each front compounds progress on the others.

Hardware. There are many dif­fer­ent com­pet­ing ap­proaches. We men­tioned neu­tral atoms and su­per­con­duct­ing qubits, but there are also ion-trap, pho­ton­ics, and moon­shots like topo­log­i­cal qubits. Complementary ap­proaches can even be com­bined. Most of these ap­proaches are pur­sued by sev­eral labs around the world. They all have their dis­tinct en­gi­neer­ing chal­lenges and prob­lems to solve be­fore they can scale up. A few years ago, all of them had a long list of open chal­lenges, and it was un­clear if any of them would scale. Today most of them have made good progress. None have been demon­strated to scale yet: if they had, we would­n’t have a cou­ple of years left. But these ap­proaches are much closer now, es­pe­cially neu­tral atoms. To ig­nore this progress, you’d have to be­lieve that every sin­gle ap­proach will hit a wall.

Error correction. All quantum computers are noisy and require error-correcting codes to perform meaningful computation. This adds quite a bit of overhead, though how much depends on the architecture. More noise requires more error correction, but more interestingly, improved qubit connectivity allows for much more efficient codes. For a sense of scale: superconducting quantum computers, which are noisy and only have nearest-neighbor qubit connectivity, typically require around a thousand physical qubits per logical qubit. We knew that “reconfigurable qubits” such as those of neutral-atom machines allow for error-correcting codes an order of magnitude more efficient. Surprisingly, Oratomic showed the advantage is even larger: only about 3-4 physical neutral-atom qubits are required per logical qubit.
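To see why these overhead ratios matter so much, here is a back-of-the-envelope comparison using the rough figures quoted above (a ~1,000:1 ratio for nearest-neighbor superconducting machines, ~4:1 for neutral atoms, and Oratomic's ~10,000 physical qubits for P-256). These are illustrative numbers, not a rigorous resource estimate:

```python
# Back-of-the-envelope comparison of error-correction overhead, using the
# rough ratios quoted in the text (illustrative figures only).

def physical_qubits(logical: int, overhead: int) -> int:
    """Physical qubits needed for a given logical-qubit count at a given
    physical-to-logical overhead ratio."""
    return logical * overhead

# Oratomic's estimate of ~10,000 physical neutral-atom qubits at a ~4:1
# overhead implies roughly this many logical qubits for P-256:
logical_for_p256 = 10_000 // 4  # ~2,500 logical qubits

print(f"logical qubits for P-256: ~{logical_for_p256:,}")
print(f"neutral atoms (4:1):      ~{physical_qubits(logical_for_p256, 4):,} physical")
print(f"superconducting (1000:1): ~{physical_qubits(logical_for_p256, 1_000):,} physical")
```

The same attack that fits in ~10,000 neutral-atom qubits would need on the order of millions of physical qubits on a nearest-neighbor superconducting architecture, which is why the connectivity advantage moves timelines so dramatically.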

Software. Lastly, the quantum algorithms to crack cryptography can be improved. This is Google's breakthrough: they massively sped up the algorithm to crack P-256. On top of that, Oratomic showed further architecture-specific optimizations for reconfigurable qubits.

The pic­ture comes to­gether: in 2025 neu­tral atoms turned out to be more scal­able than ex­pected, and now Oratomic fig­ured out how to do much bet­ter er­ror-cor­rect­ing codes with such highly con­nected qubits. On top of that, break­ing P-256 re­quires much less work. The re­sult is that Q-Day has been pulled for­ward sig­nif­i­cantly from typ­i­cal 2035+ time­lines, with neu­tral atoms in the lead, and other ap­proaches not far be­hind.

In pre­vi­ous blog posts we’ve dis­cussed how dif­fer­ent quan­tum com­put­ers com­pare on phys­i­cal qubit count and fi­delity, com­pared to the con­ser­v­a­tive goal­post of crack­ing RSA-2048 on a su­per­con­duct­ing qubit ar­chi­tec­ture. This analy­sis gives us a rough idea of how much time we have, and it’s cer­tainly bet­ter than track­ing quan­tum fac­tor­ing records, but it misses ar­chi­tec­ture-spe­cific op­ti­miza­tion and soft­ware im­prove­ments. What to watch for now is when the fi­nal miss­ing ca­pa­bil­i­ties for each ar­chi­tec­ture are achieved.

Historically, the industry's focus in post-quantum cryptography (PQC) has largely been on PQ encryption, which stops harvest-now/decrypt-later (HNDL) attacks. In an HNDL attack, an adversary harvests sensitive encrypted network traffic today and stores it until a future date when it can use a powerful quantum computer to decrypt the data. HNDL attacks are the primary threat when Q-Day is far away. That's why our focus, thus far, has been on mitigating this risk, by adopting post-quantum encryption by default in our products since 2022. Today, as we mentioned above, most Cloudflare products are secure against HNDL attacks, and we're working to upgrade the rest as we speak.
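The mechanism that defeats HNDL attacks is hybrid key exchange: the session key is derived from both a classical shared secret and a post-quantum one, so a recorded handshake stays secret unless the attacker breaks both schemes. The sketch below illustrates the concatenate-then-derive principle with a bare hash; the secrets are hypothetical placeholders, and real deployments (e.g. X25519 combined with ML-KEM in TLS 1.3) use the full TLS key schedule rather than a single SHA-256 call:

```python
# Conceptual sketch of hybrid key derivation, the idea behind
# post-quantum encryption in TLS. The session key depends on BOTH
# shared secrets, so an HNDL attacker must break the classical AND the
# post-quantum scheme to decrypt recorded traffic.

import hashlib

def hybrid_session_key(classical_ss: bytes, pq_ss: bytes) -> bytes:
    # Concatenate-then-KDF: secure as long as at least one input
    # remains secret. Real protocols use a proper KDF, not a bare hash.
    return hashlib.sha256(classical_ss + pq_ss).digest()

# Hypothetical shared secrets, standing in for the outputs of an ECDH
# exchange and an ML-KEM encapsulation (32 bytes each in practice).
ecdh_secret = b"\x01" * 32
mlkem_secret = b"\x02" * 32

key = hybrid_session_key(ecdh_secret, mlkem_secret)
print(key.hex())
```

A quantum computer that breaks the elliptic-curve half recovers `ecdh_secret` but still cannot compute `key` without `mlkem_secret`, which is exactly the property that makes recorded traffic worthless to a future CRQC.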

The other cat­e­gory of at­tacks is against au­then­ti­ca­tion: ad­ver­saries armed with func­tion­ing quan­tum com­put­ers im­per­son­ate servers or forge ac­cess cre­den­tials. If Q-Day is far off, au­then­ti­ca­tion is not ur­gent: de­ploy­ing PQ cer­tifi­cates and sig­na­tures does not add any value, only ef­fort.

An im­mi­nent Q-Day flips the script: data leaks are se­vere, but bro­ken au­then­ti­ca­tion is cat­a­strophic. Any over­looked quan­tum-vul­ner­a­ble re­mote-lo­gin key is an ac­cess point for an at­tacker to do as they wish, whether that’s to ex­tort, take down, or snoop on your sys­tem. Any au­to­matic soft­ware-up­date mech­a­nism be­comes a re­mote code ex­e­cu­tion vec­tor. An ac­tive quan­tum at­tacker has it easy — they only need to find one trusted quan­tum-vul­ner­a­ble key to get in.

When experts in the field of building quantum computers start patching authentication systems, we should all listen. The question is no longer “when will our encrypted data be at risk?” but “how long before an attacker walks in the front door with a quantum-forged key?”

If quan­tum com­put­ers ar­rive in the next few years, they will be scarce and ex­pen­sive. Attackers will pri­or­i­tize high-value tar­gets, like long-lived keys that un­lock sub­stan­tial as­sets or per­sis­tent ac­cess such as root cer­tifi­cates, API auth keys and code-sign­ing certs. If an at­tacker is able to com­pro­mise one such key, they re­tain in­def­i­nite ac­cess un­til they are dis­cov­ered or that key is re­voked.

This suggests long-lived keys should be prioritized. That is certainly true if the quantum attack on a single key is expensive and slow, as is to be expected for the first generation of neutral-atom quantum computers. That's not the case for scalable superconducting quantum computers and later generations of neutral-atom quantum computers, which could well crack keys much faster. Such fast CRQCs flip the script again: an adversary with one might focus purely on HNDL attacks so that their attacks remain undetected. Google's Sophie Schmieg compares this scenario to the cryptanalysis of Enigma, which changed the course of World War II.
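The prioritization above can be sketched as a simple triage over a cryptographic key inventory: quantum-vulnerable keys ranked by how long they stay trusted and how much damage a forgery would do. The record fields and scoring below are hypothetical illustrations, not any real tool or Cloudflare practice:

```python
# Illustrative triage of a key inventory: quantum-vulnerable, long-lived,
# high-value keys first. Fields and scoring are hypothetical.

from dataclasses import dataclass

# Algorithms breakable by Shor-style attacks (classical public-key crypto).
QUANTUM_VULNERABLE = {"RSA-2048", "P-256", "Ed25519"}

@dataclass
class KeyRecord:
    name: str
    algorithm: str
    lifetime_days: int   # how long the key remains trusted
    blast_radius: int    # 1 (low) .. 10 (catastrophic if forged)

def migration_priority(k: KeyRecord) -> int:
    if k.algorithm not in QUANTUM_VULNERABLE:
        return 0  # already post-quantum: nothing to migrate
    return k.lifetime_days * k.blast_radius

inventory = [
    KeyRecord("root CA", "RSA-2048", lifetime_days=7300, blast_radius=10),
    KeyRecord("code-signing", "P-256", lifetime_days=1095, blast_radius=9),
    KeyRecord("session ticket", "P-256", lifetime_days=1, blast_radius=3),
    KeyRecord("firmware signing", "ML-DSA-65", lifetime_days=3650, blast_radius=10),
]

for k in sorted(inventory, key=migration_priority, reverse=True):
    print(f"{migration_priority(k):>6}  {k.name} ({k.algorithm})")
```

Under this toy scoring, a twenty-year RSA root certificate dominates everything else, while a one-day session ticket and an already post-quantum signing key fall to the bottom, matching the intuition that root certificates, code-signing certs, and other long-lived credentials go first.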

Adding sup­port for PQ cryp­tog­ra­phy is not enough. Systems must dis­able sup­port for quan­tum-vul­ner­a­ble cryp­tog­ra­phy to be se­cure against down­grade at­tacks. In larger, es­pe­cially fed­er­ated sys­tems such as the web, this is not fea­si­ble be­cause not every client (browser) will sup­port post-quan­tum cer­tifi­cates, and servers need to keep sup­port­ing these legacy clients. However, down­grade pro­tec­tion for HTTPS is still achiev­able us­ing PQ HSTS and/​or cer­tifi­cate trans­parency.

Disabling quan­tum-vul­ner­a­ble cryp­tog­ra­phy is not the last step: once done, all se­crets such as pass­words and ac­cess to­kens pre­vi­ously ex­posed in the quan­tum-vul­ner­a­ble sys­tem need to be ro­tated. Unlike post-quan­tum en­cryp­tion, which takes one big push, mi­grat­ing to post-quan­tum au­then­ti­ca­tion has a long de­pen­dency chain — not to men­tion third-party val­i­da­tion and fraud mon­i­tor­ing. This will take years, not months.

It's natural for organizations reading this to immediately start cataloguing which internal systems they need to upgrade. But that's not the end of the story: Q-Day threatens all systems. As such, it's important to understand the impact of a potential Q-Day on third-party dependencies, both direct and indirect. Not just the third parties you speak cryptography to, but also any third parties that are critical business dependencies, like financial services and utilities.

With Q-Day approaching on a shorter timeline, post-quantum authentication is the top priority. Long-term keys should be upgraded first. Deep dependency chains, and the fact that everyone has third-party vendors, mean this effort will take on the order of years, not months. Upgrading to post-quantum cryptography is not enough: to prevent downgrades, quantum-vulnerable cryptography must also be turned off.

Today, Cloudflare provides post-quantum encryption for the majority of our products, mitigating harvest-now/decrypt-later attacks. This is the product of work we started over a decade ago to protect our customers and the Internet at large.

We are tar­get­ing full post-quan­tum se­cu­rity in­clud­ing au­then­ti­ca­tion for our en­tire prod­uct suite by 2029. Here we’re shar­ing some in­ter­me­di­ate mile­stones we’ve set, sub­ject to change as our un­der­stand­ing of the risk and de­ploy­ment chal­lenges evolve.

For busi­nesses, we rec­om­mend mak­ing post-quan­tum sup­port a re­quire­ment for any pro­cure­ment. Common best prac­tices, like keep­ing soft­ware up­dated and au­tomat­ing cer­tifi­cate is­suance, are mean­ing­ful and will get you pretty far. We rec­om­mend as­sess­ing crit­i­cal ven­dors early for what their fail­ure to take ac­tion would mean for your busi­ness.

For reg­u­la­tory agen­cies and gov­ern­ments: lead­ing by set­ting early time­lines has been cru­cial for in­dus­try-wide progress so far. We are now in a piv­otal po­si­tion where frag­men­ta­tion in stan­dards and ef­fort be­tween and within ju­ris­dic­tions could put progress at risk. We rec­om­mend that gov­ern­ments as­sign and em­power a lead agency to co­or­di­nate the mi­gra­tion on a clear time­line, stay se­cu­rity-fo­cused, and pro­mote the use of ex­ist­ing in­ter­na­tional stan­dards. Governments need not panic, but can lead mi­gra­tion with con­fi­dence.

For Cloudflare customers, with respect to our services, you do not need to take any mitigating action. We are following the latest advancements in quantum computing closely and taking proactive steps to protect your data. As we have done in the past, we will turn on post-quantum security by default, with no switches to flip. What we don't control is the other side: browsers, applications, and origins need to upgrade. Customers tunnelling corporate network traffic through Cloudflare need not worry: Cloudflare One offers end-to-end protection through our post-quantum encrypted infrastructure.

Privacy and se­cu­rity are table stakes for the Internet. That’s why every post-quan­tum up­grade we build will con­tinue to be avail­able to all cus­tomers, on every plan, at no ad­di­tional cost. Making post-quan­tum se­cu­rity the de­fault is the only way to pro­tect the Internet at scale.

Free TLS helped en­crypt the web. Free post-quan­tum cryp­tog­ra­phy will help se­cure it for what comes next.

...

Read the original on blog.cloudflare.com »
