10 interesting stories served every morning and every evening.

The West Forgot How to Build. Now It's Forgetting Code

techtrenches.dev

In 2023, Raytheon’s pres­i­dent stood at the Paris Air Show and de­scribed what it took to restart Stinger mis­sile pro­duc­tion. They brought back en­gi­neers in their 70s to teach younger work­ers how to build a mis­sile from pa­per schemat­ics drawn dur­ing the Carter ad­min­is­tra­tion. Test equip­ment had been sit­ting in ware­houses for years. The nose cone still had to be at­tached by hand, ex­actly as it was forty years ago.

The Pentagon had­n’t bought a new Stinger in twenty years. Then Russia in­vaded Ukraine, and sud­denly every­one needed them. The pro­duc­tion line was shut down. The elec­tron­ics were ob­so­lete. The seeker com­po­nent was out of pro­duc­tion. An or­der placed in May 2022 would­n’t de­liver un­til 2026. Four years. Not be­cause of money. Because the peo­ple who knew how to build them re­tired a decade ear­lier and no­body re­placed them.

I run en­gi­neer­ing teams in Ukraine. My peo­ple lived the other side of this equa­tion. Not the fac­tory floor. The re­ceiv­ing end. While Raytheon was strug­gling to restart pro­duc­tion from forty-year-old blue­prints, the US was ship­ping thou­sands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thir­teen years’ worth of Stinger pro­duc­tion. I’ve seen this pat­tern be­fore. It’s hap­pen­ing in my in­dus­try right now.

In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. Ukraine was consuming 5,000 to 7,000 rounds per day. At even 6,000 rounds a day, that is more than two million shells a year, nearly ten times Europe’s annual output. Anyone with a calculator could see this wouldn’t work.

By the dead­line, Europe de­liv­ered about half. Macron called the orig­i­nal promise reck­less. An in­ves­ti­ga­tion by eleven me­dia out­lets across nine coun­tries found ac­tual pro­duc­tion ca­pac­ity was roughly one-third of of­fi­cial EU claims. The mil­lion-shell mark was­n’t hit un­til December 2024, nine months late.

It was­n’t one bot­tle­neck. It was all of them. France had halted do­mes­tic pro­pel­lant pro­duc­tion in 2007. Seventeen years of noth­ing. Europe’s sin­gle ma­jor TNT pro­ducer was in Poland. Germany had two days of am­mu­ni­tion stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The en­tire con­ti­nen­t’s de­fense in­dus­try had been op­ti­mized for mak­ing small batches of ex­pen­sive cus­tom prod­ucts. Nobody planned for vol­ume. Nobody planned for cri­sis.

The U.S. was­n’t much bet­ter. One plant in Scranton, one fa­cil­ity in Iowa for ex­plo­sive fill, no do­mes­tic TNT pro­duc­tion since 1986. Billions of in­vest­ment later, pro­duc­tion still had­n’t hit half the tar­get.

This was­n’t an ac­ci­dent. In 1993, the Pentagon told de­fense CEOs to con­sol­i­date or die. Fifty-one ma­jor de­fense con­trac­tors col­lapsed into five. Tactical mis­sile sup­pli­ers went from thir­teen to three. Shipbuilders from eight to two. The work­force fell from 3.2 mil­lion to 1.1 mil­lion. A 65% cut.

The am­mu­ni­tion sup­ply chain had sin­gle points of fail­ure every­where. One man­u­fac­turer for 155mm shell cas­ings, sit­ting in Coachella, California, on the San Andreas Fault. One fa­cil­ity in Canada for pro­pel­lant charges. Optimized for min­i­mum cost with zero mar­gin for surge. On pa­per, ef­fi­cient. In prac­tice, one bad day away from col­lapse.

Then there’s Fogbank. A clas­si­fied ma­te­r­ial used in nu­clear war­heads. Produced from 1975 to 1989, then the fa­cil­ity was shut down. When the gov­ern­ment needed to re­pro­duce it for a war­head life ex­ten­sion pro­gram in 2000, they dis­cov­ered they could­n’t. A GAO re­port found that al­most all staff with pro­duc­tion ex­per­tise had re­tired, died, or left the agency. Few records ex­isted.

After spend­ing an ad­di­tional $69 mil­lion and years of re­verse en­gi­neer­ing, they fi­nally pro­duced vi­able Fogbank. Then dis­cov­ered the new batch was too pure. The orig­i­nal had con­tained an un­in­ten­tional im­pu­rity that was crit­i­cal to its func­tion. That fact ex­isted nowhere in any doc­u­ment. Only the work­ers who made the orig­i­nal batch knew it, and they had re­tired years ear­lier.

A nu­clear weapons pro­gram lost the abil­ity to make a ma­te­r­ial it in­vented. The knowl­edge ex­isted only in peo­ple, and the peo­ple were gone.

I read the Fogbank story and rec­og­nized it im­me­di­ately. Not the nu­clear ma­te­r­ial. The pat­tern. Build ca­pa­bil­ity over decades. Find a cheaper sub­sti­tute. Let the hu­man pipeline at­ro­phy. Enjoy the sav­ings. Then watch it all col­lapse when a cri­sis de­mands what you op­ti­mized away.

In de­fense, the sub­sti­tute was the peace div­i­dend. In soft­ware, it’s AI.

I wrote about the tal­ent pipeline col­lapse be­fore. The hir­ing num­bers and the ju­nior-to-se­nior prob­lem are doc­u­mented. So is the com­pre­hen­sion cri­sis. What I did­n’t have was the right his­tor­i­cal par­al­lel. Now I do.

And it tells you some­thing the hir­ing data does­n’t: how long re­build­ing ac­tu­ally takes.

Every ma­jor de­fense pro­duc­tion ramp-up took three to five years for sim­ple sys­tems. Five to ten for com­plex ones. Stinger: thirty months min­i­mum from or­der to de­liv­ery. Javelin: four and a half years to less than dou­ble pro­duc­tion. 155mm shells: four years and still not at tar­get de­spite five bil­lion dol­lars in­vested. France only restarted pro­pel­lant pro­duc­tion in 2024, sev­en­teen years af­ter shut­ting it down.

Money was never the con­straint. Knowledge was. RAND found that 10% of tech­ni­cal skills for sub­ma­rine de­sign need ten years of on-the-job ex­pe­ri­ence to de­velop, some­times fol­low­ing a PhD. Apprenticeships in de­fense trades take two to four years, with five to eight years to reach su­per­vi­sory com­pe­tence.

Now map that onto soft­ware. A ju­nior de­vel­oper needs three to five years to be­come a com­pe­tent mid-level en­gi­neer. Five to eight years to be­come se­nior. Ten or more to be­come a prin­ci­pal or ar­chi­tect. That time­line can’t be com­pressed by throw­ing money at it. It can’t be com­pressed by AI ei­ther.

A METR ran­dom­ized con­trolled trial found that ex­pe­ri­enced de­vel­op­ers us­ing AI cod­ing tools ac­tu­ally took 19% longer on real-world open source tasks. Before start­ing, they pre­dicted AI would make them 24% faster. The gap be­tween pre­dic­tion and re­al­ity was 43 per­cent­age points. When re­searchers tried to run a fol­low-up, a sig­nif­i­cant share of de­vel­op­ers re­fused to par­tic­i­pate if it meant work­ing with­out AI. They could­n’t imag­ine go­ing back.

The soft­ware in­dus­try is in year three of the same op­ti­miza­tion. Salesforce said it won’t hire more soft­ware en­gi­neers in 2025. A LeadDev sur­vey found 54% of en­gi­neer­ing lead­ers be­lieve AI copi­lots will re­duce ju­nior hir­ing long-term. A CRA sur­vey of uni­ver­sity com­put­ing de­part­ments found 62% re­ported de­clin­ing en­roll­ment this year.

I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slow. The industry’s answer is predictable: let AI review AI’s code. I’m not doing that. I’ve reworked our pull request templates instead. Every PR now has to explain what changed, why, what type of change it is, screenshots of before and after. Structured context so the reviewer isn’t guessing. I’m adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.

But even that does­n’t solve the deeper prob­lem. The skills you need to be ef­fec­tive now are dif­fer­ent. Technical ex­per­tise alone is­n’t enough any­more. You need peo­ple who can take own­er­ship, com­mu­ni­cate trade­offs, push back on bad sug­ges­tions from a ma­chine that sounds very con­fi­dent. Leadership qual­i­ties. Our last hir­ing round tells you how rare that is: 2,253 can­di­dates, 2,069 dis­qual­i­fied, 4 hired. A 0.18% con­ver­sion rate. The com­bi­na­tion of tech­ni­cal skill and the judg­ment to know when the AI is wrong barely ex­ists in the mar­ket any­more.

We doc­u­ment every­thing. Site Books, SDDs, RVS re­ports, boil­er­plate mod­ules with full cov­er­age. It works to­day, be­cause the peo­ple read­ing those docs have the en­gi­neer­ing ex­per­tise to act on them. What hap­pens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t mat­ter. Maybe the prob­lem stays man­age­able. I can’t pre­dict the ca­pa­bil­i­ties of mod­els in 2031.

But crises don’t send cal­en­dar in­vites. Nobody ex­pected a full-scale land war in Europe in 2022. The de­fense in­dus­try had thirty years to pre­pare and did­n’t. Even Fogbank had records. They weren’t enough with­out the peo­ple who un­der­stood what they meant.

Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong.

It’s Fogbank for code. When ju­niors skip de­bug­ging and skip the for­ma­tive mis­takes, they don’t build the tacit ex­per­tise. And when my gen­er­a­tion of en­gi­neers re­tires, that knowl­edge does­n’t trans­fer to the AI.

It just dis­ap­pears.

The West al­ready made this mis­take once. The bill came due in Ukraine.

I know how this sounds. I know I’ve writ­ten about the tal­ent pipeline be­fore. The de­fense ex­am­ple is­n’t about re­peat­ing the ar­gu­ment. It’s about show­ing what hap­pens if the in­dus­try’s ex­pec­ta­tions don’t work out. Stinger, Javelin, Fogbank, a mil­lion shells no­body could make. That’s the cost of bet­ting wrong on op­ti­miza­tion. We’re mak­ing the same bet with soft­ware en­gi­neer­ing right now.

Maybe AI gets good enough, and the bet pays off. Maybe it does­n’t. The de­fense in­dus­try thought peace would last for­ever, too.


Progress Report: Linux 7.0 - Asahi Linux

asahilinux.org

After al­most three years of 6.x se­ries ker­nels, Linux 7.0 is fi­nally here.

That means it’s also time for an­other Asahi progress re­port!

Automate Everything

Users of alternate distros and keen-eyed individuals may have noticed some changes to the Asahi Installer. After almost two years, we finally got around to pushing an updated version of the installer to the CDN! Two years is a long time to go between updates, so what took so long?

Our upstream installer package is a little bit of a Rube Goldberg machine. The bulk of the installer is written in Python, with some small Bash scripts to bootstrap it. When you run curl | sh, you’re actually downloading the bootstrap script, which then fetches the actual installer bundle from our CDN. This bundle consists of a Python interpreter and a very stripped down standard library, a built m1n1 stage 1 binary, and the installer itself.
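
As a rough illustration of that flow, here is a conceptual Python sketch of what the bootstrap amounts to. The real bootstrap is a small Bash script, and the URL, paths, and file names below are hypothetical placeholders, not the actual CDN layout.

```python
# Conceptual sketch of the curl | sh bootstrap flow described above.
# The real bootstrap is a Bash script; URLs and paths here are
# hypothetical placeholders, not the actual Asahi CDN layout.
import subprocess
import tarfile
import tempfile
import urllib.request

CDN = "https://cdn.example.invalid/installer"  # placeholder


def bootstrap() -> None:
    with tempfile.TemporaryDirectory() as tmp:
        bundle = f"{tmp}/installer.tar.gz"
        # Fetch the installer bundle: a Python interpreter with a stripped
        # standard library, a built m1n1 stage 1 binary, and the installer.
        urllib.request.urlretrieve(f"{CDN}/installer.tar.gz", bundle)
        with tarfile.open(bundle) as tf:
            tf.extractall(tmp)
        # Hand off to the bundled interpreter running the bundled installer.
        subprocess.run([f"{tmp}/python/bin/python3", f"{tmp}/install.py"],
                       check=True)


if __name__ == "__main__":
    bootstrap()
```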

Until re­cently, cut­ting an in­staller re­lease meant:

Tagging the in­staller repo

Downloading a ma­cOS Python build

Building m1n1 from a blessed com­mit

Bundling Python, m1n1 and the in­staller

Uploading the in­staller bun­dle to the CDN

Updating the CDN’s version flag file

This process was time-consuming and required administrative access to the CDN. As a result, we neglected to push installer updates for quite some time; the previous installer tag was from June 2024! As upstreaming work has progressed and Devicetree bindings churned, this became rather problematic for our friends maintaining distros.

The Asahi Installer offers a UEFI-only installation option. This option shrinks macOS and only installs what is necessary to boot a UEFI executable, meaning m1n1 stage 1, the Devicetrees, and U-Boot. This allows users to boot from live media with Asahi support, such as specialised Gentoo Asahi LiveCD images.

Since the Devicetrees on a fresh UEFI-only install come from the installer bundle itself, a kernel will only successfully boot when the installer-bundled Devicetrees match what that kernel expects to see. The two have gotten rather out of sync as time has gone on due to Devicetree bindings changing as a result of the upstreaming process. This situation finally came to a head with kernel 6.18, which required numerous changes to both m1n1 and the Devicetree bindings for the Apple USB subsystem. This made booting kernel 6.18 and above from live media impossible. Oops.

Rather than go through the trouble of manually pushing out another update, we took the opportunity to build some automation and solve this problem permanently.

We moved the manifest of installable images into the asahi-installer-data repo, allowing us to update it independently of the installer codebase.

On top of this, we also now deploy the installer using GitHub workflows. Going forward, every push to the main branch of asahi-installer will automatically build the installer and upload it to https://alx.sh/dev. Every tag pushed to GitHub will do the same for https://alx.sh.

The latest version, 0.8.0, bumps the bundled m1n1 stage 1 binary to version 1.5.2, introduces installer support for the Mac Pro, and adds a firmware update mode which ties in nicely with…

How do you ov­erengi­neer a light sen­sor?

Basically everything with a screen now comes with some sort of light sensor. This is usually to enable automatic brightness adjustment based on ambient conditions. It’s a very convenient feature in devices like smartphones, where a user may walk outside and find their display too dim to see. The cheapest versions of this use a simple photoresistor. This is fine if the goal is just to change brightness, but brightness is not the only thing affected by ambient lighting conditions. What about colour rendering?

Apple’s devices have had the True Tone display feature for quite some time. This works by measuring both the brightness and the colour characteristics of the environment’s ambient lighting. This data is then used to apply brightness and colour transformations to the display to ensure that it is always displaying content as accurately as possible. This is most noticeable in environments with lighting fixtures that have a low Colour Rendering Index, such as fluorescent tubes or cheap cool white LEDs. The devices that enable this, ambient light sensors, are usually little ICs that connect to the system over I2C or another industry-standard bus. This is fine for basic applications, but this is Apple. There are some other considerations to be had:

The light sensor is doing stuff whenever the screen is on, so processing its output should be as efficient as possible

The light sensor should be able to be calibrated for maximum accuracy

There are multiple models of light sensor in use, and the OS should not have to care too much about that

The light sensor has to have a three letter acronym like every other piece of hardware on this platform (ALS)

Naturally, this sounds like a job for the Always-On Processor (AOP)!

We’ve had a working AOP+ALS driver set for a while thanks to chaos_princess; however, the raw data AOP reports back from ALS is rather inaccurate without calibration. That calibration is a binary blob that must be uploaded to the AOP at runtime. It is essentially firmware. Since we cannot redistribute Apple’s binaries, it must be retrieved from macOS at install time and then stored somewhere the driver knows to look for it.

To achieve this, the Asahi Installer gathers up all the firmware it knows we will need in Linux and stores it on the EFI System Partition it creates. A Dracut module then mounts this to a subdirectory of /lib/firmware/, where drivers can find it. However, issues arise when we need to retrieve more firmware from macOS after Asahi Linux has already been installed. To avoid a repeat of the webcam situation, where users were required to manually do surgery on their EFI System Partition, chaos_princess added the ability for the Asahi Installer to automatically update the firmware package. Starting with ALS, any required firmware updates will be a simple matter of booting into macOS or macOS Recovery, re-running the Asahi Installer, and following the prompts.
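
As a rough sketch of the gathering step described above (not the installer’s actual code; every path and file name here is a hypothetical placeholder), the idea is simply to copy the needed blobs from macOS onto the EFI System Partition, where the Dracut module later exposes them under /lib/firmware/:

```python
# Illustrative sketch only: stage firmware that Linux will need by copying it
# from macOS onto the EFI System Partition. A Dracut module later mounts this
# area under /lib/firmware/ where drivers can find it.
# Every path and file name here is a hypothetical placeholder.
import shutil
from pathlib import Path

ESP_FIRMWARE_DIR = Path("/Volumes/EFI/asahi/firmware")  # hypothetical ESP mount

NEEDED = {
    # hypothetical macOS source path -> name the Linux driver expects
    "/usr/share/firmware/als-calibration.bin": "apple/als-calibration.bin",
}


def gather_firmware() -> None:
    for src, rel in NEEDED.items():
        dest = ESP_FIRMWARE_DIR / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)
        print(f"staged {src} -> {dest}")


if __name__ == "__main__":
    gather_firmware()
```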

To en­able ALS sup­port (and to do firmware up­grades in the fu­ture), fol­low these steps:

Ensure you are run­ning ver­sion 6.19 or above of the Asahi ker­nel

Ensure your distro ships iio-sensor-proxy as a dependency of your DE (Fedora Asahi Remix does this)

GoDaddy Gave a Domain to a Stranger Without Any Documentation

anchor.host

What would you do if your organization had used a domain name for 27 years, and the registrar holding the domain seized it without any advance warning? All email and websites went dark. The company’s tech support spent four days telling you to “Just wait, we are working on it.” On the fourth day, the company informed you that someone else has the domain now, and it is no longer yours.

Read on. This crazy story hap­pened ex­actly one week ago.

My friend Lee Landis is a part­ner in Flagstream Technologies, a lo­cal IT firm in Lancaster, PA. Last Saturday af­ter­noon one of his clien­t’s do­mains van­ished from his GoDaddy ac­count.

Lee is one of the most competent IT guys I know. The GoDaddy account had dual two-factor authentication enabled, requiring both an email code and an authentication app code to log in. The domain itself had ownership protection turned on. The audit log just said “Transfer to Another GoDaddy Account” by an “Internal User” with “Change Validated: No.”

Some names have been changed

Some names and the do­main it­self have been changed be­cause peo­ple wanted to re­main anony­mous. The pat­tern of the do­main names mir­rors the ac­tual mis­take, so the ex­pla­na­tion still makes sense. Every fact in this post is true. Lee has hard ev­i­dence for every one of them.

As you can see above, GoDaddy emailed Flagstream at 1:39pm that an ac­count re­cov­ery had been re­quested. Three min­utes later, the trans­fer was ini­ti­ated. Four min­utes later, it was com­plete. On a Saturday af­ter­noon.

Everything at the im­pacted or­ga­ni­za­tion went of­fline be­cause GoDaddy re­set the DNS zone to de­fault when they moved the do­main into the new ac­count. Same name­servers. Empty DNS zone file.

Lee’s client lost their web­site and email for the next four days.

Domain in active use: 27 years

Calls to GoDaddy: 32

Time on the phone with GoDaddy: 9.6 hours

Emails to GoDaddy: 17. Zero callbacks.

Domain and ac­count were fully pro­tected.

The domain had the “Full Domain Privacy and Protection” security product that GoDaddy sells. Dual two-factor on the account. None of it mattered. The transfer was done by an “Internal User” inside GoDaddy.

The do­main was HELPNETWORKINC.ORG. The real do­main name has been changed be­cause the or­ga­ni­za­tion wanted to re­main anony­mous. It be­longs to a na­tional or­ga­ni­za­tion with twenty lo­ca­tions across the United States. The do­main has been in ac­tive use for 27 years. Each chap­ter runs its web­site and email on a sub­do­main of that one par­ent do­main. When HELPNETWORKINC.ORG went dark, every chap­ter went dark with it.

Thirty-two calls. 9.6 hours on the phone. Zero call­backs.

Lee called GoDaddy on Sunday. They confirmed the domain was no longer in his account but could not say where it went due to privacy concerns. They told him to email undo@godaddy.com. He did, but never received any response from that address. Lee didn’t feel this reflected the appropriate level of urgency for the issue. He asked for a supervisor, who was even less helpful. Lee was not happy. He may have said some hurtful things to GoDaddy’s support personnel during this call. That first call lasted 2 hours, 33 minutes, and 14 seconds.

On Monday morn­ing, Lee and a coworker started work­ing in earnest on this is­sue be­cause there was still no up­date from GoDaddy. Calling in yielded a dif­fer­ent agent who told Lee to email trans­fer­dis­putes@go­daddy.com instead. By Tuesday the ad­dress had changed again to artre­view@go­daddy.com. The in­struc­tions shifted by the day. It seemed like every GoDaddy tech sup­port per­son had a slightly dif­fer­ent rec­om­men­da­tion.

The one thing that stayed consistent was the message: “Just wait a day or two. We are working on it. Why do you think this is so urgent?”

One of the most frustrating parts of this process was that all official communication to and from GoDaddy about this issue was done through generically named email accounts. It just seems like there should have been a named individual in charge of managing and communicating about this issue. Instead, there were just random generic email accounts that seemed to change on a daily basis.

Every call gen­er­ated a fresh case num­ber. Lee lost count of the to­tal num­ber of cases. A few of the cases are 01368489. 894760. 01376819. 01373017. 01376804. 01373134. 01370012. None of them tied to­gether on GoDaddy’s side. Every es­ca­la­tion started from zero. These are ac­tual case num­bers, in case any­one at GoDaddy wants to check into this.

I posted on X to see if any­one I knew at GoDaddy could es­ca­late.

Can any of my GoDaddy friends help? A good friend of mine had a do­main taken. My friend is very com­pe­tent. Domain own­er­ship pro­tec­tion was on. Owner did not get any no­tices. Audit log looks fishy. Phone/email sup­port telling them to wait. Did a GoDaddy em­ployee take it? pic.twit­ter.com/​OW­cJIal­WcF— Austin Ginder (@austinginder) April 20, 2026


My friend Courtney Robertson, who works at GoDaddy, re­posted it and started es­ca­lat­ing in­ter­nally on her own time. Thank you, Courtney. GoDaddy has a lot of great peo­ple like her. That part is not in ques­tion. What GoDaddy does not have is a way to ac­tu­ally fix a mis­take once one has been made. Tickets pile up. Phone calls re­set. Every es­ca­la­tion is a new per­son read­ing the case from scratch. The thing you ac­tu­ally need solved drifts be­tween queues.

And there was no real way to dis­pute it.

While Lee was on the phone, his col­league was on a dif­fer­ent phone try­ing to file a Transfer Dispute. GoDaddy di­rected him to cas.go­daddy.com/​Form/​Trans­fer­Dis­pute. He filed a dis­pute and re­ceived this mes­sage, which he cap­tured via a screen­shot.

Lee and his colleagues worked diligently at challenging the transfer. They supplied the correct name of the person listed on the domain. They supplied that person’s driver’s license as required. They also supplied the correct business documentation as listed in GoDaddy’s own requirements. Every time they submitted a request, they were told they would hear back in 48 to 72 hours.

GoDaddy FINALLY re­sponds with a SHOCKING state­ment

Tuesday af­ter­noon, af­ter four days of wait­ing, Flagstream fi­nally got an of­fi­cial email re­sponse back from GoDaddy.

GoDaddy’s re­ply to Lee

After in­ves­ti­gat­ing the do­main name(s) in ques­tion, we have de­ter­mined that the reg­is­trant of the do­main name(s) pro­vided the nec­es­sary doc­u­men­ta­tion to ini­ti­ate a change of ac­count. … GoDaddy now con­sid­ers this mat­ter closed.

That was it. No ex­pla­na­tion of what doc­u­men­ta­tion. The sug­gested next steps were three links. A WHOIS lookup. ICANN ar­bi­tra­tion providers. A page about get­ting a lawyer in­volved to rep­re­sent you in lit­i­ga­tion.

Flagstream mi­grates client to new do­main

Once GoDaddy de­clared the mat­ter closed, Flagstream be­gan mi­grat­ing the client to a new do­main. New email ad­dresses. New web­site ad­dresses. Coordinating with var­i­ous teams through­out the night to change every­thing over to a new do­main.

Switching to a new do­main is a mas­sive amount of work, and it leaves a lot of lin­ger­ing prob­lems be­hind be­cause there is no con­trol over the orig­i­nal do­main.

Every email ad­dress that ex­ists out in the world is now wrong. You have to tell every­one the new ad­dress. If they try the old one, it bounces.

Every piece of mar­ket­ing ma­te­r­ial that ref­er­ences the old do­main is now in­cor­rect. There is no way to for­ward any­thing to the new do­main.

All of the SEO is gone. You are start­ing an on­line pres­ence from scratch.

Then a stranger found the do­main in her ac­count.

Wednesday morn­ing Susan (not her real name), 2,000 miles away from the clien­t’s head­quar­ters, no­ticed some­thing odd. Susan had been work­ing at re­claim­ing a to­tally dif­fer­ent do­main used by a for­mer em­ployee. When she looked closely at her GoDaddy ac­count, the do­main in her ac­count was­n’t the one she had re­quested. She made a few phone calls be­cause she knew this was a prob­lem and even­tu­ally got hooked up with Flagstream. Working with Susan, they ran a GoDaddy ac­count-to-ac­count trans­fer, and put the do­main back where it be­longed. DNS came back up while Lee was still typ­ing the email telling me it was over. The en­tire process of re­claim­ing the do­main lasted less than 5 min­utes.

Once the do­main was back and DNS was work­ing, Flagstream started the ar­du­ous task of re­vert­ing every­thing that they had done the day be­fore. They switched email and web­sites back to the orig­i­nal do­main, once again work­ing through the night to get every­thing fixed.

The res­o­lu­tion for this prob­lem did not come from GoDaddy sup­port. It did not come from the dis­pute team. It did not come from the Office of the CEO team. It came from a stranger who ac­ci­den­tally ended up with the do­main and was smart and hon­est enough to start call­ing around be­cause she knew some­thing was­n’t right.

Susan is re­ally the hero of this en­tire story. Without her, Flagstream would still have no idea what hap­pened to this do­main. Lawyers would have got­ten in­volved, but it would prob­a­bly be months un­til any­thing was re­solved.

Timeline of events

Apr 18, 1:39pm

GoDaddy emails Flagstream that an Account Recovery has been re­quested for the ac­count.

Apr 18, 1:42pm

Transfer ini­ti­ated by GoDaddy Internal User. Three min­utes af­ter the re­cov­ery no­tice.

Apr 18, 1:43pm

Transfer completed. Change Validated is listed as “No”. Website and email go dark across the entire organization.

Apr 19

Lee dis­cov­ers the do­main is gone. GoDaddy says email undo@go­daddy.com and wait.

Apr 20

Flagstream team starts call­ing and email­ing GoDaddy for up­dates. GoDaddy now says email trans­fer­dis­putes@go­daddy.com. Austin posts on X. Courtney Robertson routes the case to the Office of the CEO team.

Apr 21

Flagstream files mul­ti­ple Transfer Dispute cases with the re­quested doc­u­men­ta­tion. Every sub­mis­sion is met with a 48 to 72 hour re­sponse win­dow. GoDaddy emails Lee that the mat­ter is closed and the do­main be­longs to some­one else. Flagstream starts the painful process of mi­grat­ing the or­ga­ni­za­tion to a new do­main so they can func­tion.

Apr 22

Susan no­tices the wrong do­main in her ac­count and calls Lee. Account-to-account trans­fer brings it home.

Then it got cra­zier. GoDaddy ap­proved the trans­fer with zero doc­u­ments.

The or­ga­ni­za­tion on the re­ceiv­ing end of the trans­fer was a re­gional chap­ter of the same net­work. Susan, the ex­ec­u­tive as­sis­tant, had emailed GoDaddy two weeks ear­lier ask­ing to re­cover a dif­fer­ent do­main. HELPNETWORKLOCAL.ORG. Not HELPNETWORKINC.ORG.

Flagstream spent some time talking to Susan to figure out exactly how she was able to accidentally get the domain transferred into her account. Did she unintentionally supply all of the correct documentation? It turned out that GoDaddy had approved the transfer without her supplying ANY documentation.

Her email sig­na­ture hap­pened to ref­er­ence her chap­ter’s web­site at a sub­do­main of HELPNETWORKINC.ORG. GoDaddy’s re­cov­ery team ap­par­ently looked at the sig­na­ture, saw the par­ent do­main, and trans­ferred that do­main into her ac­count.

GoDaddy sent Susan a link to up­load sup­port­ing doc­u­ments. The link ex­pired be­fore she got around to us­ing it. She emailed back re­quest­ing a new link so she could up­load the re­quired doc­u­men­ta­tion. However, be­fore the new link ar­rived, she re­ceived an email say­ing the do­main trans­fer had been ap­proved.

Susan never submitted a single document. Not for the domain she was actually trying to recover, and certainly not for the one GoDaddy ended up giving her. GoDaddy approved the change of account, transferred a 27-year-old non-profit’s domain into a stranger’s account, and considered the matter “closed” without requiring any documentation.

This is a huge se­cu­rity is­sue.

If Susan had been a bad ac­tor, she could have in­ter­cepted email. She could have used that email to re­set pass­words, get MFA codes, launch phish­ing at­tacks, etc. She could have put up a new web­site with mal­ware on it, redi­rected pay­ments on the web­site, etc.

When the domain initially disappeared and Flagstream was unable to obtain any information about who had it, Flagstream feared the worst. Flagstream and the impacted client started to come up with a plan to protect against the threats mentioned above, which was a huge undertaking for an organization of this size. Basically, all users across the entire organization needed to start logging into every important website and make sure the compromised domain was removed from the account. This includes bank websites, Amazon, the IRS, payroll, Dropbox, email accounts, and, ironically enough, even GoDaddy accounts.

It is out­ra­geous that Susan was able to ob­tain this do­main with­out sup­ply­ing any doc­u­men­ta­tion. Everyone was lucky it was Susan that got this do­main.

GoDaddy: please fol­low up with Flagstream.

This is not ac­cept­able.

A GoDaddy employee transferred a 27-year-old domain out of a paying customer’s account with no validation. With zero documentation submitted by the recipient. When the customer disputed with legitimate documentation, every submission was met with “We will respond in 48 to 72 hours.” After four days, GoDaddy claimed the domain belonged to someone else and the case was closed. The fix came from the recipient of the mistake, not from GoDaddy, despite 9.6 hours of phone conversations.

To any­one at GoDaddy read­ing this. Please fol­low up with Lee Landis at Flagstream Technologies and make this right. An apol­ogy is prob­a­bly in or­der. An in­ter­nal re­view of how the trans­fer team val­i­dates doc­u­men­ta­tion is in or­der, in­clud­ing how a trans­fer can be ap­proved with zero doc­u­men­ta­tion. Lee would like a clear an­swer on how this hap­pened. Lee does­n’t want an email from a generic GoDaddy ac­count. Lee wants a real per­son to call or email him. This per­son needs to leave an email ad­dress and phone num­ber in case Lee has fol­low-up ques­tions.

Even dis­clos­ing this to GoDaddy was bro­ken.

Before pub­lish­ing this post, I wanted to share the find­ings with GoDaddy’s se­cu­rity team di­rectly. I emailed se­cu­rity@go­daddy.com with the full re­port. The mes­sage bounced.

GoDaddy’s auto-re­ply to se­cu­rity@go­daddy.com

A cus­tom mail flow rule cre­ated by an ad­min at se­cure­server­net.on­mi­crosoft.com has blocked your mes­sage. We hope this mes­sage finds you well. This email mail­box is no longer mon­i­tored. To ad­dress your needs, we have out­lined two pop­u­lar op­tions for you: 1: To sub­mit an abuse re­port, please visit our Abuse Reporting Form. 2: If you are look­ing to sub­mit a vul­ner­a­bil­ity, please visit our bounty pro­gram https://​hackerone.com/​go­daddy-vdp.

So I filed the same re­port through HackerOne in­stead, re­port #3696718.

This is the same pat­tern that played out across the four-day out­age. The of­fi­cial chan­nel does not work. The al­ter­na­tive path re­quires know­ing to by­pass it. Most hon­est peo­ple who no­tice a se­cu­rity is­sue are not go­ing to have a HackerOne ac­count. They send an email. How is it that GoDaddy does­n’t have a pub­lic se­cu­rity dis­clo­sure email ad­dress?

Whether the original transfer was a single agent’s mistake or a flaw in the recovery workflow, it is still a security issue. And there is no clean path from “I found something” to “a human at GoDaddy is looking at it.”

The only way to get GoDaddy’s at­ten­tion is to leave.

Lee is up­set about the four days of stress and lost pro­duc­tiv­ity across the im­pacted or­ga­ni­za­tion. But his big­ger con­cern is what comes next. Apparently there is no way to pro­tect against this threat if your do­main is hosted at GoDaddy. In ad­di­tion, it seems like there is no ef­fi­cient way to con­test the GoDaddy trans­fer.

Flagstream will most likely mi­grate every one of their do­mains off GoDaddy. That is the only pro­tec­tion they have left, and the only es­ca­la­tion GoDaddy seems to re­spond to.

Are you at risk?

Is your do­main hosted on GoDaddy? What would you do if the do­main dis­ap­peared out of your GoDaddy ac­count and your en­tire busi­ness went dark?


A.I. Should Elevate Your Thinking, Not Replace It - Blog - Koshy John

www.koshyjohn.com

In talking to engineering management across tech industry heavyweights, it’s apparent that software engineering is starting to split people into two nebulous groups:

The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter: framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.

The sec­ond group will use A.I. to avoid think­ing. They will paste prompts into a box, col­lect pol­ished out­put, and pre­sent it as though it re­flects their own rea­son­ing. For a while, that can look like pro­duc­tiv­ity. It can even look like tal­ent. But it is a dead end.

The soft­ware en­gi­neers who will be most valu­able in the fu­ture are not the ones who do every­thing them­selves. They are the ones who refuse to spend time on work that A.I. can do for them, while still un­der­stand­ing every­thing that is done on their be­half. They use the time sav­ings to op­er­ate at a higher level. They el­e­vate their thought process through rigor rather than out­sourc­ing it.

That dis­tinc­tion mat­ters more than peo­ple think.

In this post:

The New Failure Mode: Outsourced Thinking (& analo­gies)

What the Best Engineers Will Do Instead

The Real Source of Value

The Risk for Early-In-Career Engineers

There Is No Shortcut to Judgment

In Summary: The Dividing Line & Organizational Implications

Why This Matters Even More to Organizational Health

The New Failure Mode: Outsourced Thinking

A.I. can al­ready gen­er­ate code, sum­ma­rize meet­ings, ex­plain con­cepts, pro­duce de­sign drafts, and write sta­tus up­dates in sec­onds. That is use­ful but also dan­ger­ous.

The dan­ger is not that A.I. will make peo­ple lazy in some vague moral sense. It is that it makes it easy to sim­u­late com­pe­tence with­out build­ing com­pe­tence.

There is now a very real temp­ta­tion to hand a model a prob­lem, re­ceive a plau­si­ble an­swer, and then re­peat that an­swer as if it re­flects your own un­der­stand­ing. That is close to pla­gia­rism, but in some ways worse. At least when a stu­dent copies from an­other per­son, there is still a real hu­man source be­hind the an­swer. Here, peo­ple can pre­sent ma­chine-pro­duced rea­son­ing they do not un­der­stand, can­not de­fend, and could not re­pro­duce on their own.

That is in­tel­lec­tual de­pen­dency be­ing la­beled as lever­age.

And that de­pen­dency has a cost. Every time you sub­sti­tute gen­er­ated out­put for your own com­pre­hen­sion, you are skip­ping the ex­er­cises / reps that build judg­ment. You are trad­ing long-term ca­pa­bil­ity for short-term ap­pear­ance.

I’m go­ing to share some analo­gies to make this line of thought more con­crete and ap­proach­able.


What the Best Engineers Will Do Instead

The best en­gi­neers will ab­solutely use A.I. more, not less. But they will use it with a very dif­fer­ent pos­ture.

They will let A.I. draft boil­er­plate, sum­ma­rize docs, gen­er­ate test scaf­fold­ing, pro­pose refac­tor­ings, sur­face pos­si­ble fail­ure modes, ac­cel­er­ate in­ves­ti­ga­tion, and com­press rou­tine work. They will hap­pily of­fload the me­chan­i­cal parts of the job. But they will also:

ask sharper ques­tions.

de­fine the real prob­lem in­stead of merely re­spond­ing to the vis­i­ble one.

op­ti­mize for clar­ity and brevity (as be­fore), in­stead of a lot of pol­ished lan­guage that says lit­tle of sub­stance.

gen­er­ate new, high-value knowl­edge - in­stead of sim­ply re­hash­ing / remix­ing ex­ist­ing knowl­edge in the sys­tem.

Then they will take the re­claimed time and in­vest it where it mat­ters most.

The Real Source of Value

For years, peo­ple have con­fused soft­ware en­gi­neer­ing with code pro­duc­tion. That con­fu­sion is now get­ting ex­posed.

If the job were mainly about pro­duc­ing syn­tac­ti­cally valid code, then of course A.I. would be on a di­rect path to re­plac­ing large parts of the pro­fes­sion. But that was never the high­est-value part of the work. The value was al­ways in judg­ment.

The valu­able en­gi­neer is the one who sees the hid­den con­straint be­fore it causes an out­age. The one who no­tices that the team is solv­ing the wrong prob­lem. The one who re­duces a vague de­bate into crisp trade­offs. The one who iden­ti­fies the miss­ing ab­strac­tion. The one who can de­bug re­al­ity, not just read code. The one who can cre­ate clar­ity where every­one else sees noise.

A.I. can sup­port that work. It can­not own it.

In fact, the en­gi­neers who pro­duce the most value in the fu­ture will of­ten be the ones gen­er­at­ing the knowl­edge that makes A.I. more use­ful in the first place. They will cre­ate the de­sign prin­ci­ples, do­main un­der­stand­ing, pat­terns, con­text, and de­ci­sion frame­works that im­prove the ma­chine’s ef­fec­tive­ness. They will feed the sys­tem with bet­ter ques­tions, bet­ter con­straints, and bet­ter cor­rec­tions.

In that world, the en­gi­neer is not re­placed by A.I. The en­gi­neer be­comes more lever­aged be­cause they are op­er­at­ing above the level of raw out­put.

The Risk for Early-in-Career Engineers

This is­sue is es­pe­cially im­por­tant for peo­ple early in their ca­reers.

Early years mat­ter be­cause that is when foun­da­tional skills are formed. Debugging in­stinct. System in­tu­ition. Precision. Taste. Skepticism. The abil­ity to de­com­pose a prob­lem. The abil­ity to ex­plain why some­thing works, not just that it ap­pears to work.

Those skills are built through fric­tion. Through strug­gle. Through get­ting things wrong and fix­ing them. Through trac­ing fail­ures back to root cause. Through writ­ing some­thing and re­al­iz­ing it does not sur­vive con­tact with re­al­ity.

That process is not op­tional. It is how en­gi­neers ac­quire and el­e­vate their com­pe­tency. If early-ca­reer en­gi­neers use A.I. to re­move all strug­gle from the learn­ing loop, they are hurt­ing their de­vel­op­ment.

Someone who uses A.I. to an­swer every hard ques­tion may look ef­fi­cient for a quar­ter or two. But they may also be qui­etly fail­ing to build the very ca­pa­bil­i­ties their fu­ture de­pends on. They are skip­ping the stage where un­der­stand­ing is forged.

Going back to the analo­gies: This is like copy­ing an­swers through uni­ver­sity and then show­ing up to a job that re­quires in­de­pen­dent thought. It is like us­ing a cal­cu­la­tor for every arith­metic task and never de­vel­op­ing num­ber sense. It is like re­ly­ing on self-dri­ving fea­tures be­fore learn­ing how to ac­tu­ally drive. The sup­port sys­tem may make you look func­tional, but it does not make you ca­pa­ble.

And even­tu­ally raw ca­pa­bil­ity is the main thing that mat­ters. There is no sub­sti­tute.

There is No Shortcut to Judgment

This is the part that some peo­ple may not want to hear –

There is no gen­er­ated ex­pla­na­tion that trans­fers mas­tery into your brain with­out you do­ing the work.

There is no way to out­source rea­son­ing for long enough that you still end up strong at rea­son­ing.

You can out­source me­chan­ics, ac­cel­er­ate re­search and com­press rou­tine tasks. You can re­move enor­mous amounts of low-value la­bor. All of that is good and should hap­pen.

But you can­not skip the for­ma­tion of skill and ex­pect to pos­sess it any­way.

That is the cen­tral mis­take be­hind the most naive uses of A.I. People think they are sav­ing time, when in re­al­ity they are of­ten de­fer­ring a bill that will come due later in the form of weak judg­ment, shal­low un­der­stand­ing, and lim­ited adapt­abil­ity.

In Summary: The Dividing Line & Organizational Implications

The di­vid­ing line is sim­ple:

If A.I. is help­ing you un­der­stand faster, think deeper, and op­er­ate at a higher level, it is mak­ing you more valu­able.

If A.I. is help­ing you avoid un­der­stand­ing, avoid strug­gle, and avoid own­er­ship of the rea­son­ing, it is mak­ing you less valu­able.

One path compounds, while the other path hollows you out and leaves you ripe for irrelevance.

That is why the fu­ture does not be­long to the en­gi­neers who merely use A.I. It be­longs to the en­gi­neers who know ex­actly what to del­e­gate, ex­actly what to own, and ex­actly how to turn time sav­ings into bet­ter think­ing.

If not al­ready, it’s time to make in­formed choices on how you shape your fu­ture in the in­dus­try.

Why This Matters Even More to Organizational Health

Engineering man­age­ment will face the same di­vid­ing line.

Some lead­ers will rec­og­nize the dif­fer­ence be­tween en­gi­neers who use A.I. to ac­cel­er­ate un­der­stand­ing and en­gi­neers who use it to sim­u­late un­der­stand­ing. Others will not. That gap will mat­ter more than many or­ga­ni­za­tions re­al­ize.

One of the defin­ing traits of strong en­gi­neer­ing lead­er­ship in the A.I. era will be the abil­ity to dis­tin­guish pol­ished out­put from real judg­ment. Leaders who can­not tell the dif­fer­ence may re­ward speed, flu­ency, and pre­sen­ta­tion while miss­ing the deeper sig­nals of tech­ni­cal depth: orig­i­nal­ity, rigor, sound trade­off analy­sis, and the abil­ity to rea­son clearly about un­fa­mil­iar prob­lems.

That cre­ates or­ga­ni­za­tional risk.

The most ca­pa­ble en­gi­neers are of­ten the ones pro­duc­ing the in­sight, con­text, de­sign judg­ment, and cor­rec­tive feed­back that make both teams and A.I. sys­tems more ef­fec­tive. If an or­ga­ni­za­tion al­lows low-un­der­stand­ing, high-flu­ency work to spread unchecked, it does not just lower the qual­ity of in­di­vid­ual out­put. It starts to de­grade the knowl­edge en­vi­ron­ment it­self. Reviews get weaker. Design dis­cus­sions get shal­lower. Documents be­come more pol­ished and less use­ful. Over time, the or­ga­ni­za­tion be­comes worse at gen­er­at­ing the very clar­ity and tech­ni­cal judg­ment it de­pends on.

This is why lead­er­ship mat­ters so much here. The chal­lenge is not merely adopt­ing A.I. tools. It is pro­tect­ing the con­di­tions un­der which real think­ing, learn­ing, and crafts­man­ship con­tinue to thrive.

That starts with hir­ing. Organizations will need bet­ter ways to de­tect gen­uine un­der­stand­ing rather than sur­face-level flu­ency. They will need in­ter­view loops that test rea­son­ing, not just pol­ished an­swers. They will need eval­u­a­tion sys­tems that re­ward clar­ity, depth, sound judg­ment, and durable tech­ni­cal con­tri­bu­tion rather than sheer out­put vol­ume.

It also af­fects team de­sign and cul­ture. Strong en­gi­neers should not spend dis­pro­por­tion­ate amounts of time clean­ing up plau­si­ble but shal­low work gen­er­ated by peo­ple who have out­sourced their think­ing. If lead­er­ship does not ac­tively guard against that, high per­form­ers be­come force mul­ti­pli­ers for every­one ex­cept them­selves. That is a fast path to frus­tra­tion, low­ered stan­dards, and even­tual at­tri­tion.

The or­ga­ni­za­tions that han­dle this well will not be the ones that sim­ply push A.I. adop­tion hard­est. They will be the ones that learn to sep­a­rate lever­age from de­pen­dency, ac­cel­er­a­tion from im­i­ta­tion, and gen­uine ca­pa­bil­ity from con­vinc­ing out­put.

In the A.I. era, or­ga­ni­za­tional qual­ity will in­creas­ingly de­pend on whether lead­er­ship can still rec­og­nize the dif­fer­ence.

Editorial note: Like all con­tent on this site, the views ex­pressed here are my own and do not nec­es­sar­ily re­flect the views of my em­ployer.

London Marathon 2026 results: Sabastian Sawe makes history with first competitive sub-two-hour marathon

www.bbc.com

Sawe smashes two-hour mark to ‘move goalposts for marathon running’

‘Absolutely incredible!’ - Sawe runs sub-two-hour marathon in London

By Harry Poole

BBC Sport jour­nal­ist

Sabastian Sawe made his­tory at the London Marathon by be­com­ing the first ath­lete to run a sub-two-hour marathon in a com­pet­i­tive race.

The 31-year-old Kenyan crossed the line to win in one hour 59 min­utes 30 sec­onds, more than one minute faster than the late Kelvin Kiptum’s pre­vi­ous record of 2:00:35, set in 2023.

The great Eliud Kipchoge be­came the first man to run a marathon in un­der two hours in 2019, but that was not record-el­i­gi­ble as it was held un­der con­trolled con­di­tions.

Already on world record pace as he crossed the halfway mark in 1:00:29, Sawe was able to speed up over the sec­ond half of the race to run even faster than Kipchoge’s time.

Sawe made his de­ci­sive move be­fore the fi­nal 10km, with only debu­tant Yomif Kejelcha able to cover his surge off the front.

Remarkably, Kejelcha, mak­ing his marathon de­but, be­came the sec­ond man to run un­der two hours in race con­di­tions, fin­ish­ing run­ner-up in 1:59:41.

Half marathon world record holder Jacob Kiplimo also crossed the line faster than Kiptum’s for­mer record, com­plet­ing the podium in 2:00:28.

Sawe, speaking on BBC TV, said: “I am feeling good. I am so happy. It is a day to remember for me.”

“We started the race well. Approaching finishing the race, I was feeling strong. Finally reaching the finish line, I saw the time, and I was so excited.”

Assefa sets new world record to win London Marathon for sec­ond year in a row

In the wom­en’s race, Ethiopia’s Tigst Assefa im­proved her own world record for a women-only field as she surged clear of Kenyan ri­vals Hellen Obiri and Joyciline Jepkosgei in a thrilling fin­ish to re­tain her ti­tle in 2:15:41.

Swiss great Marcel Hug cruised to a record-equalling eighth London Marathon vic­tory in the elite men’s wheel­chair race, ty­ing level with Great Britain’s David Weir by win­ning for a sixth suc­ces­sive year.

Catherine Debrunner also re­tained the elite wom­en’s wheel­chair ti­tle as the Swiss burst clear of American Tatyana McFadden in the clos­ing stages.

How Sawe achieved sport­ing im­mor­tal­ity in London

Much of the fo­cus be­fore­hand had been about Sawe - win­ner of last year’s race in 2:02:27 - tar­get­ing Kiptum’s London Marathon course record of 2:01:25.

He told BBC Sport this week that it was “only a matter of time” before he broke Kiptum’s world record, adding “I hope and wish one day [it will be me]” when asked about becoming the first person to run under two hours in a race.

Sawe had tar­geted Kiptum’s world record in Berlin last September, when he went through halfway in 60:16, be­fore that bid was ul­ti­mately un­done by the hot weather.

But, in per­fect race con­di­tions in London, Sawe stormed down The Mall to achieve that his­toric feat, do­ing so in a time which was once con­sid­ered im­pos­si­ble.

BBC commentator and former world champion Steve Cram said: “There are things that happen in sport and you want to be there to see history being made - if you are watching on TV then well done, but if you’re in London, it is a privilege and it is incredible.

“We said it was a day for records but I don’t think in our wildest dreams we could have foreseen this.”

‘I am so happy’ - Sawe reacts to winning London Marathon

After cov­er­ing the first half of the course in 60:29, Sawe moved through the gears to com­plete the sec­ond half in just 59:01.

Only 63 men in his­tory have run a half marathon as quickly as that - with Sawe’s own per­sonal best stand­ing at 58:05.

His splits con­tin­ued to quicken as he chased down his tar­get, clock­ing 13:54 for the five kilo­me­tres from 30 – 35km, and 13:42 for the 35 – 40km stretch - an av­er­age pace of 2:45 per kilo­me­tre.

“This will reverberate around the world,” said former women’s marathon world record holder Paula Radcliffe.

“The goalposts have literally just moved for marathon running and where you benchmark yourself as being world-class.

“It is a lesson to everybody out there. We say ‘don’t go out too fast’ - they went out smartly and paced it really well.”

‘We’ve witnessed something incredible’

Pundits re­act to Sawe’s land­mark sub-two-hour marathon

Kitted out in spon­sor Adidas’ lat­est su­per­shoes, Sawe, who has won all four marathons he has con­tested, man­aged to take two min­utes and 35 sec­onds off his marathon per­sonal best.

He has sought to en­sure con­fi­dence in his per­for­mances by un­der­go­ing fre­quent drug tests and was tested 25 times be­fore com­pet­ing in Berlin, where he faded to fin­ish in 2:02:16.

“I want to thank the crowds for cheering us. I think they help a lot, because if it was not for them, you don’t feel like you are so loved,” Sawe said.

“I think they help a lot because them calling makes you feel so happy and strong and pushing.

“That is why I can say what comes for me today is not for me alone but all of us in London.”

Reacting to Sawe’s record, Britain’s four-time Olympic champion Mo Farah said: “We’ve waited long enough to see a human go sub-two.

“That’s always been the question that we’ve asked. We’ve just witnessed something incredible.”

Assefa im­proves record as Hug makes his­tory

Hug wins London Marathon wheel­chair race for sixth con­sec­u­tive year

Assefa, the third-fastest woman in his­tory, lined up as favourite to re­peat her 2025 tri­umph in London af­ter in­juries forced Olympic gold medal­list Sifan Hassan and world cham­pion Peres Jepchirchir to with­draw.

The lead­ing trio in Sunday’s race re­mained in­sep­a­ra­ble un­til the clos­ing kilo­me­tres, as Obiri and Jepkosgei ac­com­pa­nied Assefa in­side the Ethiopian’s record pace set in London 12 months ago.

But it was Assefa who sum­moned the en­ergy to push on for vic­tory, go­ing nine sec­onds faster than her pre­vi­ous women-only record.

The wom­en’s elite run­ners be­gin 30 min­utes be­fore the elite men in the London Marathon, mean­ing the event is classed as a women-only race.

Obiri, a six-time global medal­list on the track, crossed the line 12 sec­onds af­ter Assefa, closely fol­lowed by Kenya’s 2021 win­ner Jepkosgei.

Eilish McColgan was the first British woman across the line, plac­ing sev­enth over­all in 2:24:51, while Rose Harvey was ninth in 2:26:14.

Mahamed Mahamed was the best-placed home ath­lete in the men’s event, fin­ish­ing 10th in 2:06:14 and re­plac­ing Alex Yee as the sec­ond-fastest Briton in his­tory.

Debrunner wins wom­en’s wheel­chair race

Hug pro­duced an­other dom­i­nant per­for­mance to tie Weir’s record for the most vic­to­ries in London Marathon his­tory.

Hug, 40, crossed the line in 1:24:13, more than four and a half min­utes clear of Chinese 23-year-old Luo Xingchuan.

Briton Weir com­pleted the podium in 1:29:23 in his 27th con­sec­u­tive ap­pear­ance at the event.

Debrunner cel­e­brated her fourth London Marathon win af­ter out­last­ing McFadden, fin­ish­ing just five sec­onds ahead of the American in clock­ing 1:38:29.

Briton Eden Rainbow-Cooper went into the race with podium as­pi­ra­tions af­ter fin­ish­ing fourth last year and re­gain­ing her Boston Marathon ti­tle on Monday, but those hopes were dashed by a pre-race punc­ture which caused her to start the race late.

Welcome to the world of Statecharts

statecharts.dev

What is a stat­e­chart?

A stat­e­chart can be ex­plained in many ways, and we’ll get to those ex­pla­na­tions, but es­sen­tially, a stat­e­chart is a draw­ing. Here’s a sim­ple stat­e­chart:

However, this drawing isn’t very useful for software engineers who want to reap the benefits outlined elsewhere on this site, so let’s dive into some other ways of describing what a statechart is. The original paper that defines statecharts bills them as “A visual formalism for complex systems” (Harel, 1987). With that out of the way, let’s try to explain statecharts.

Introduction to stat­e­charts

Put sim­ply, a stat­e­chart is a beefed up state ma­chine. The beef­ing up solves a lot of the prob­lems that state ma­chines have, es­pe­cially state ex­plo­sion that hap­pens as state ma­chines grow. One of the goals of this site is to help ex­plain what stat­e­charts are and how they are use­ful.
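
To make the difference concrete, here is a minimal sketch in plain Python, with no statechart library and with states and events invented for illustration. The nesting is the point: a flat state machine would need a separate power transition for every substate, while the parent state handles it once.

```python
# Minimal sketch, plain Python, no library: the "beefed up" part of a
# statechart is that states nest, and an event the current state does not
# handle bubbles up to its parent. States and events are invented here.

TRANSITIONS = {
    # (state, event) -> next state
    ("playing", "pause"): "paused",
    ("paused", "play"): "playing",
    ("on", "power"): "off",      # written once, covers every substate of "on"
    ("off", "power"): "playing",
}
PARENT = {"playing": "on", "paused": "on", "on": None, "off": None}


def send(state, event):
    """Try the current state, then walk up its ancestors until one handles the event."""
    s = state
    while s is not None:
        target = TRANSITIONS.get((s, event))
        if target is not None:
            return target
        s = PARENT[s]
    return state  # unhandled events are ignored


state = "off"
for ev in ("power", "pause", "power"):
    state = send(state, ev)
    print(ev, "->", state)  # power -> playing, pause -> paused, power -> off
```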

What is a state ma­chine?

What is a stat­e­chart?

Why should you use stat­e­charts?

Statecharts of­fer a sur­pris­ing ar­ray of ben­e­fits

It’s eas­ier to un­der­stand a stat­e­chart than many other forms of code.

The be­hav­iour is de­cou­pled from the com­po­nent in ques­tion.

This makes it eas­ier to make changes to the be­hav­iour.

It also makes it eas­ier to rea­son about the code.

And the be­hav­iour can be tested in­de­pen­dently of the com­po­nent.


The process of build­ing a stat­e­chart causes all the states to be ex­plored.

Studies have shown that stat­e­chart based code has lower bug counts than tra­di­tional code.

Statecharts lend themselves to dealing with exceptional situations that might otherwise be overlooked.

As com­plex­ity grows, stat­e­charts scale well.

A statechart is a great communicator: Non-developers can understand the statecharts, while QA can use a statechart as an exploratory tool.

It’s worth not­ing that you’re al­ready cod­ing state ma­chines, ex­cept that they’re hid­den in the code.

Why should you not use stat­e­charts?

There are a few down­sides to us­ing stat­e­charts that you should be aware of.

Programmers typ­i­cally need to learn some­thing new, al­though the un­der­pin­nings (state ma­chines) would be some­thing that most pro­gram­mers are fa­mil­iar with.

It’s usu­ally a very for­eign way of cod­ing, so teams might ex­pe­ri­ence push­back based on how very dif­fer­ent it is.

There is an overhead to extracting the behaviour: for smaller statecharts, the number of lines of code might actually increase.

Why are they not used?

People don’t know about them, and YAGNI.

What are the main ar­gu­ments against stat­e­charts?

There are a few com­mon ar­gu­ments against stat­e­charts in ad­di­tion to the ones listed above:

It’s sim­ply not needed.

It goes against the grain of [insert name of tech­nol­ogy].

It in­creases the num­ber of li­braries, for web ap­pli­ca­tions this means in­creased load time.

The ben­e­fits out­lined above should make it clear that the in­tro­duc­tion of stat­e­charts is gen­er­ally a net pos­i­tive.

How do you use stat­e­charts?

First of all, know that a W3C com­mit­tee spent 10+ years (2005 to 2015) stan­dard­iz­ing some­thing called SCXML (yes, Statechart XML), and that it de­fines a lot of the se­man­tics and spec­i­fies how to deal with cer­tain edge cases. There are tools to read, au­thor and even ex­e­cute stat­e­charts writ­ten in SCXML, in var­i­ous lan­guages. There are also some de­riv­a­tives that sup­port the same model as SCXML, but us­ing a dif­fer­ent syn­tax.

Additionally, there are statechart libraries for a variety of platforms that support, to varying degrees, the semantics described by SCXML. You should consider using these libraries just to get those edge cases taken care of. The libraries generally perform entry and exit actions in the right order, and so on.
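As a rough sketch of what using such a library looks like (XState is shown here only as one example of a statechart library; the toggle machine is invented for this example), you hand the library the statechart definition and then feed it events, and the library keeps track of the current state and runs entry and exit actions in the prescribed order:

import { createMachine, interpret } from 'xstate';

// A tiny two-state toggle, just to show the define/interpret/send cycle
// (XState v4-style API).
const toggleMachine = createMachine({
  id: 'toggle',
  initial: 'inactive',
  states: {
    inactive: { on: { TOGGLE: 'active' } },
    active: { on: { TOGGLE: 'inactive' } },
  },
});

const service = interpret(toggleMachine)
  .onTransition((state) => console.log('now in:', state.value))
  .start();

service.send('TOGGLE'); // logs: now in: active
service.send('TOGGLE'); // logs: now in: inactive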

With that out of the way, read on!

Executable stat­e­charts

In addition to using statecharts to model the behaviour in documents separate from the actual running code, it’s possible to use one of various machine formats both to design the behaviour and, at run-time, to actually be the behaviour. The idea is to have a single source of truth that describes the behaviour of a component; this single source drives the actual run-time code and can also be used to generate a precise diagram that visualises the statechart.

This car­ries along some dif­fer­ent pros and cons:

Why should you use ex­e­cutable stat­e­charts?

No need to trans­late di­a­grams into code

No bugs in­tro­duced by hand trans­la­tion of di­a­grams

The di­a­grams are al­ways in sync

The di­a­grams are more pre­cise

Why should you not use ex­e­cutable stat­e­charts?

The di­a­grams may be­come quite com­plex

The formats and tools for executable statecharts are limited

Type safety be­tween stat­e­chart and the com­po­nent is hard to en­force

How do you use ex­e­cutable stat­e­charts?

In essence, if you have any de­f­i­n­i­tion of a stat­e­chart in your code, all you need to do is to take that rep­re­sen­ta­tion and au­to­mate the gen­er­a­tion of the vi­sual stat­e­chart. This is of course sim­pler when the de­f­i­n­i­tion is in a sep­a­rate file, e.g. in a JSON or XML file.
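A minimal sketch of that idea, again using XState only as an illustration (the door example and the visualisation step are hypothetical): the definition is kept as plain data, so the same object can drive the run-time behaviour and be handed to whatever tool draws the diagram.

import { createMachine, interpret } from 'xstate';

// Plain data: this object could equally live in a separate JSON file.
// It is the single source of truth for behaviour and diagram alike.
const doorDefinition = {
  id: 'door',
  initial: 'closed',
  states: {
    closed: { on: { OPEN: 'open' } },
    open: { on: { CLOSE: 'closed' } },
  },
};

// 1. Drive the actual run-time behaviour from the definition.
const service = interpret(createMachine(doorDefinition)).start();
service.send('OPEN');

// 2. Hand the very same data to a diagramming tool (hypothetical step),
//    so the generated picture always matches the running code.
console.log(JSON.stringify(doorDefinition, null, 2));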

This is all ex­plained on the page on how to use stat­e­charts!

If you feel like chatting to someone about statecharts, you can go to gitter.im (no login required to see the chat), where you’ll find a community of like-minded developers who can help you understand and reap the benefits of using Statecharts. For a more Q&A-type site, head on over to the statecharts GitHub discussions, where we’ll do our best to answer your question.

Quite a few peo­ple have writ­ten books or held pre­sen­ta­tions that deal with stat­e­charts in var­i­ous ways, and they’re in­cluded in our re­sources page. If you’ve writ­ten some­thing, please share it by post­ing it to GitHub Discussions.

There are some pages that haven’t found any place in the web of doc­u­ments, so they’re ho­n­ourably men­tioned here:

Use case: Statecharts in User Interfaces

Concepts — The most im­por­tant con­cepts in a stat­e­chart and what they look like in a di­a­gram.

Glossary — A list of terms that get thrown around when talk­ing about stat­e­charts, with their de­f­i­n­i­tions.

FizzBuzz — FizzBuzz is a well known prob­lem, and it’s been used as a back­drop to ex­plain var­i­ous stat­e­chart con­cepts.

Acknowledgements

Issue links open automatically in a popup · community · Discussion #192666

github.com

🏷️ Discussion Type

Product Feedback

💬 Feature/Topic Area

Issues

Body

In some repositories, any link to an issue from an issue started to open in a popup overlay instead of navigating to it.

Is that some­thing that is rolling out grad­u­ally? I checked both the changelog and prod­uct roadmap but could­n’t find any men­tions of the new be­hav­ior. Is there a way to turn it off or con­fig­ure? It com­pletely breaks the ex­pe­ri­ence and neg­a­tively af­fects pro­duc­tiv­ity.

💬 Your Product Feedback Has Been Submitted 🎉

Thank you for tak­ing the time to share your in­sights with us! Your feed­back is in­valu­able as we build a bet­ter GitHub ex­pe­ri­ence for all our users.

Here’s what you can ex­pect mov­ing for­ward ⏩

Your in­put will be care­fully re­viewed and cat­a­loged by mem­bers of our prod­uct teams. 

Due to the high volume of submissions, we may not always be able to provide individual responses.

Rest assured, your feedback will help chart our course for product improvements.

Other users may en­gage with your post, shar­ing their own per­spec­tives or ex­pe­ri­ences.

GitHub staff may reach out for fur­ther clar­i­fi­ca­tion or in­sight. 

We may ‘Answer’ your discussion if there is a current solution, workaround, or roadmap/changelog post related to the feedback.

Where to look to see what’s ship­ping 👀

Read the Changelog for real-time up­dates on the lat­est GitHub fea­tures, en­hance­ments, and calls for feed­back.

Explore our Product Roadmap, which de­tails up­com­ing ma­jor re­leases and ini­tia­tives.

What you can do in the mean­time 💻

Upvote and com­ment on other user feed­back Discussions that res­onate with you.

Add more in­for­ma­tion at any point! Useful de­tails in­clude: use cases, rel­e­vant la­bels, de­sired out­comes, and any ac­com­pa­ny­ing screen­shots.

As a mem­ber of the GitHub com­mu­nity, your par­tic­i­pa­tion is es­sen­tial. While we can’t promise that every sug­ges­tion will be im­ple­mented, we want to em­pha­size that your feed­back is in­stru­men­tal in guid­ing our de­ci­sions and pri­or­i­ties.

Thank you once again for your con­tri­bu­tion to mak­ing GitHub even bet­ter! We’re grate­ful for your on­go­ing sup­port and col­lab­o­ra­tion in shap­ing the fu­ture of our plat­form. ⭐

0 replies

This really breaks my experience using AI agents! I click an issue link to copy the URL of that issue to paste into an agent, and I get the parent issue link instead!

It looks awful too; I thought at first my browser had glitched and there was a rendering error.

0 replies

Please give us an op­tion to dis­able this non-stan­dard link be­hav­ior. If I click a link it should open and NOT just throw up an over­lay.

0 replies

I don’t want this. It breaks as­sis­tive tech­nolo­gies. I want an op­tion to get the old be­hav­ior.

0 replies

Wow, that’s se­ri­ously great work on GitHub’s side. It looks out­stand­ing, works amaz­ing, should have been im­ple­mented long ago, in fact.

should have been opt-in

@ZimbiX Could have been opt-in. Not should……

keep links act­ing like links.

@ZimbiX links are still links; you can still right-click > open in new tab.

GitHub’s qual­ity has eroded since the Microsoft ac­qui­si­tion.

@ZimbiX 50/50 I guess…. for me and my use case, GitHub pro­gressed. But, again, thats only me…..

0 replies

Not a huge fan. It looks weird with the pop-up not cen­tered in the mid­dle and off to the right (FF 149).

Much more in­tu­itive be­hav­ior is just sup­port­ing mouse-hover over an is­sue to get a pre­view, with click/​ctrl-click open­ing the page.

0 replies

Please re­vert this aw­ful be­hav­ior. When I click a link on a web browser, I want to go to that link.

At the very least, if you’re go­ing to in­tro­duce non-stan­dard UX, please pro­vide users with the op­tion to dis­able it.

0 replies

Just an­other an­gry user here - please re­vert this silly fea­ture, or make it opt-in. Terrible UX!

0 replies

Another down­vote here. This is an an­noy­ance. Please get rid of it.

0 replies

Personally I prefer the old way; pop-ups are just annoying to me. I have an ultra-wide monitor, so with a popup I lose about 20% to 30% of my widescreen, for no real reason I can see is necessary.

Even if GitHub does not want to revert to the prior default, I think allowing users to decide at their own discretion what they would prefer would be better. That way I could use the old option, without having to use the new variant.

(It is probably also possible to do via custom CSS or so, but that would require some time investment to prevent a popup and instead use the old issue tracker way.)

Edit: Oops, misread this, so this is about links. I also agree that the old variant was better here. Who is making these horrible UI decisions lately?

0 replies

Is Microsoft run by AI slop? Do you even use GitHub yourself? Obviously not.

0 replies

So it was not only me then! Please re­vert this. It sucks.

0 replies

Hey, cool feature for people whose browsers don’t support tabs.

I’m lucky enough to have a browser that does, so I’d rather just have the links open in a new tab.

Thanks.

1 re­ply

You mean for people on mobile? 100% of developers have tabs!

T-minus 10 sec­onds un­til Refined GitHub fixes this user-hos­tile abom­i­na­tion in their next up­date…

0 replies

Thanks for all the feedback. This was something we were trying out as it improved load time for cross-repo links; we are going to revert the change.

4 replies

So why did you come up with this idea in the first place? Who thought it was a good idea? AI slop? Or a dude on his iPhone who never committed anything via GitHub? Who is responsible for this stupid idea?

GH must have not been very happy with the com­mu­nity re­sponse so they de­ployed some­one on a Sunday.

Anyways, nice to hear that com­mu­nity has poked GH into not… slop­ping more stuff, or en****tifi­ca­tion.

Please at least consider making it a toggle option next time. I can see why y’all have “implementation ideas”, but I don’t think fetching the issue to display in a popup improves load time for cross-repo PRs/issues more than just clicking the link, which loads in < 500 ms.

I was re­spon­si­ble for this go­ing out. The goal was to pro­vide a more con­sis­tent user ex­pe­ri­ence in that what hap­pens when you click an is­sue would be the same in more places where we use the is­sue viewer (sub-issues on an is­sue, our ded­i­cated is­sues dash­board, GitHub Projects, and oth­ers). It also meant you would­n’t lose your place when click­ing an is­sue ref­er­ence when read­ing a dis­cus­sion. There were some per­for­mance im­prove­ments that came with the change too. It was well in­ten­tioned, but we hear you, and thanks for the feed­back. We missed the mark on this one and it’s been rolled back.

Expecting driverless taxis to respect bike lanes “too high a bar” – because customers want to be dropped off in them, autonomous vehicle firm Waymo tells cyclists

road.cc

Waymo, the autonomous driving tech firm whose so-called ‘robo-taxis’ are now roaming the streets of London, has told cycling campaigners that expecting their driverless cars to respect cycle lanes is “too high a bar” — because their customers want to be dropped off in them.

According to the Highway Code, motorists “must not drive or park in a cycle lane marked by a solid white line during its times of operation” or block a bike lane marked by a broken white line “unless it is unavoidable”.

Drivers are also told that they should “give way to cyclists using the bike lane and wait for a safe gap in the flow of cyclists” before crossing the infrastructure.

However, just as its robo-taxis be­gin dri­ving au­tonomously in the UK for the first time, cy­cling cam­paign­ers in the US have claimed that Waymo has told them that the cars are pro­grammed to pull into cy­cle lanes to pick up and drop off pas­sen­gers.

Speaking to Streets Blog NYC, Christopher White, executive director of the San Francisco Bike Coalition, said that Waymo has told campaigners that it is “normal practice” for the autonomous vehicles to veer into bike lanes and block cycling infrastructure.

“People always point out that unlike human driven cars, the AVs stop at lights and obey the speed limit,” White said.

“However, they are really only as good and effective and safe as they are programmed to be. Waymos pull over into bike lanes all the time for pickups and drop-offs and that’s neither legal nor safe.

“But the companies say that is a normal practice and that’s what customers expect.”

> Cyclist ‘doored’ by passenger of driverless taxi illegally parked in bike lane sues Google-owned company after tech failure caused “violent” crash

Last June, a cy­clist in San Francisco sued the Google-owned com­pany af­ter she was se­ri­ously in­jured when one of the brand’s dri­ver­less taxis stopped in a cy­cle lane and a pas­sen­ger opened its back door, strik­ing the cy­clist and caus­ing her to smash into an­other Waymo car that was also il­le­gally block­ing the bike path.

According to the lawsuit, the Safe Exit system employed by Waymo, which aims to alert passengers of surrounding dangers and hazards, failed — leading 26-year-old Jenifer Hanki to claim that Waymo knows its cars are ‘dooring’ cyclists.

Following the “violent” crash, which left her with a brain injury, as well as spine and soft tissue damage, preventing her from working or riding her bike, Hanki sued Waymo and Google’s parent company Alphabet in San Francisco County Superior Court alleging battery, emotional distress, and negligence, while seeking unspecified damages.

Waymo, for­merly known as the Google Self-Driving Car Project, an­nounced in January that a pi­lot ser­vice for its robo-taxi ser­vice will launch this year in London, in prepa­ra­tion for the UK gov­ern­men­t’s plans to change its reg­u­la­tions on dri­ver­less ve­hi­cles at some point in the sec­ond half of 2026.

In November 2019, Waymo — owned by Google’s par­ent com­pany Alphabet — se­cured per­mis­sion from the California Department of Motor Vehicles for its ve­hi­cles to carry pas­sen­gers with­out the need for a safety dri­ver who could in­ter­vene in the case of a po­ten­tial col­li­sion, mak­ing it the first com­pany in the world to se­cure such clear­ance.

It has since established itself as the market leader in the United States for self-driving taxis, with commercial operations in San Francisco, Phoenix, Los Angeles, and Austin, and began testing its autonomous ‘robocabs’ in New York City last year.

After being driven around London by a ‘safety driver’ mapping the capital’s roads since last autumn, earlier this month Waymo confirmed that their cars are now starting to be controlled by artificial intelligence — though a human is still sitting in the driver’s seat, in case anything goes wrong.

Waymo described the move as “the next step” towards a fully autonomous passenger service later this year, “pending government approval”.

> “Safety of driverless taxis on London’s infamously complex, congested, and contested streets remains to be seen”, say cycling campaigners — as ‘robotaxi’ service set to launch this year

Once the government signs off on the proposed new regulations, when the scheme eventually launches, it will be driver-free, with customers able to hail a robo-taxi through an app, with fares at a “competitive but premium” price, the company says.

According to Waymo, their cars use four sensor systems to gather data from the world outside — radar, lidar, vision, and microphone — enabling the vehicles to be “aware” of their surroundings up to a distance of three football pitches, including during bad weather.

A powerful computer in the boot processes the data obtained by the sensors, determining how the car acts and reacts “in real time”.

However, ques­tions have been raised con­cern­ing the scheme’s safety fea­tures, with the London Cycling Campaign ex­press­ing reser­va­tions to road.cc about the taxi ser­vice’s abil­ity to adapt from the wide, straight roads of California to London’s wind­ing lanes.

“As with all new innovation, it’s really early days for Waymo and other autonomous ride-hailing services in London,” the campaign’s chief executive Tom Fyans told us in January.

“Waymo claims they’re far safer in the US than traditional taxi services. But whether that is still the case on London’s infamously complex, congested and contested streets remains to be seen.

“At LCC, we talk to political leaders, innovators and private companies of all stripes all the time — to make sure everyone’s working hard to make London a better place for healthy, safe cycling for everyone. We hope new ride-sharing services will add to that, rather than detract from it.”

When it first launched as Waymo back in 2016, the firm said its cars are programmed to “recognise cyclists as unique users of the road”, drive conservatively around them, and recognise common hand signals.

In 2019, the com­pany also re­leased a video show­ing one of its ve­hi­cles pre­dict­ing that cy­clists will move out onto the road to pass a car block­ing a cy­cle lane, with the taxi slow­ing to al­low them to safely move across.

However, in February 2024, another San Francisco cyclist was left with “non-life-threatening injuries” after one of the company’s taxis failed to detect his presence and struck him.

According to the company, the cyclist was “occluded by the truck and quickly followed behind it, crossing into the Waymo vehicle’s path. When they became fully visible, our vehicle applied heavy braking but was not able to avoid the collision.”

And things haven’t got off to the best start in London ei­ther, with a TikTok video posted on Thursday show­ing a Waymo dri­ving through a po­lice cor­don in the west of the city — though Waymo has since stated that the ve­hi­cle was be­ing dri­ven man­u­ally at the time of the in­ci­dent.

@zonjy.media: “Driverless taxi Waymo almost hit someone and drove straight into crime scene tape, almost hit a police officer. Obviously driverless taxi software seems like it is not trained to avoid crime scenes, crime scene tape, police car blue lights or ambulance blue lights in case of an accident. I think, in my opinion, these driverless Waymo taxis are more risk than the public thinks. Do you think this is safe enough to be in the streets of London? It puts police officers and emergency service people at risk.” ♬ original sound — London News

The ar­gu­ment that self-dri­ving cars will make city streets safer — by cut­ting out hu­man er­ror — has also been crit­i­cised by jour­nal­ists and cam­paign­ers, who point out that dri­ver­less taxis could en­cour­age peo­ple to use cars more and pub­lic trans­port less, in­creas­ing the chances of crashes.

“We shouldn’t be asking only, like, ‘Hey, are robotaxis safer than humans on a per-mile-driven basis?’ because there’s a real risk that AVs induce people to take a lot more car trips or to replace transit,” Bloomberg reporter David Zipper told Streets Blog NYC this week.

“We could end up with a lot more driving. And even if every individual self-driven mile is safer, if you have that much more driving, you have more crashes overall.”
