
Progress Report: Linux 7.0 - Asahi Linux

asahilinux.org

After almost three years of 6.x series kernels, Linux 7.0 is finally here.

That means it’s also time for another Asahi progress report!

Automate Everything

Users of alternate distros and keen-eyed individuals may have noticed some changes to the Asahi Installer. After almost two years, we finally got around to pushing an updated version of the installer to the CDN! Two years is a long time to go between updates, so what took so long?

Our upstream installer package is a little bit of a Rube Goldberg machine. The bulk of the installer is written in Python, with some small Bash scripts to bootstrap it. When you run curl | sh, you’re actually downloading the bootstrap script, which then fetches the actual installer bundle from our CDN. This bundle consists of a Python interpreter with a very stripped-down standard library, a pre-built m1n1 stage 1 binary, and the installer itself.
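As a rough sketch of what that bootstrap flow amounts to: the CDN URL, archive name, and helper functions below are hypothetical illustrations, not the real installer layout.

```python
import io
import tarfile
import urllib.request

# Hypothetical CDN base URL -- the real installer is served from alx.sh.
CDN = "https://example-cdn.invalid/installer"

# The three things the post says the bundle carries.
EXPECTED = ("python/", "m1n1.bin", "install")

def fetch_bundle_names(version: str) -> list[str]:
    """What the bootstrap script effectively does: download the
    installer bundle for `version` from the CDN and unpack it."""
    url = f"{CDN}/installer-{version}.tar.gz"
    with urllib.request.urlopen(url) as resp:
        with tarfile.open(fileobj=io.BytesIO(resp.read()), mode="r:gz") as tar:
            return tar.getnames()

def bundle_complete(names: list[str]) -> bool:
    """Check that a bundle listing contains all three pieces: an
    interpreter, a built m1n1 stage 1 binary, and the installer."""
    return all(any(part in n for n in names) for part in EXPECTED)
```

The point of the split is that the tiny bootstrap script can stay stable while everything version-specific lives in the bundle it fetches.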

Until recently, cutting an installer release meant:

Tagging the installer repo

Downloading a macOS Python build

Building m1n1 from a blessed commit

Bundling Python, m1n1, and the installer

Uploading the installer bundle to the CDN

Updating the CDN’s version flag file

This process was time-consuming and required administrative access to the CDN. As a result, we neglected to push installer updates for quite some time; the previous installer tag was from June 2024! As upstreaming work progressed and Devicetree bindings churned, this became rather problematic for our friends maintaining distros.

The Asahi Installer offers a UEFI-only installation option. This option shrinks macOS and only installs what is necessary to boot a UEFI executable, meaning m1n1 stage 1, the Devicetrees, and U-Boot. This allows users to boot from live media with Asahi support, such as specialised Gentoo Asahi LiveCD images.

Since the Devicetrees on a fresh UEFI-only install come from the installer bundle itself, a kernel will only successfully boot when the installer-bundled Devicetrees match what that kernel expects to see. The two have gotten rather out of sync as time has gone on due to Devicetree bindings changing as a result of the upstreaming process. This situation finally came to a head with kernel 6.18, which required numerous changes to both m1n1 and the Devicetree bindings for the Apple USB subsystem. This made booting kernel 6.18 and above from live media impossible. Oops.

Rather than go through the trouble of manually pushing out another update, we took the opportunity to build some automation and solve this problem permanently.

We moved the manifest of installable images into the asahi-installer-data repo, allowing us to update it independently of the installer codebase. On top of this, we also now deploy the installer using GitHub workflows. Going forward, every push to the main branch of asahi-installer will automatically build the installer and upload it to https://alx.sh/dev. Every tag pushed to GitHub will do the same for https://alx.sh.

The latest version, 0.8.0, bumps the bundled m1n1 stage 1 binary to version 1.5.2, introduces installer support for the Mac Pro, and adds a firmware update mode which ties in nicely with…

How do you overengineer a light sensor?

Basically everything with a screen now comes with some sort of light sensor. This is usually to enable automatic brightness adjustment based on ambient conditions. It’s a very convenient feature in devices like smartphones, where a user may walk outside and find their display too dim to see. The cheapest versions of this use a simple photoresistor. This is fine if the goal is just to change brightness, but brightness is not the only thing affected by ambient lighting conditions. What about colour rendering?

Apple’s devices have had the True Tone display feature for quite some time. This works by measuring both the brightness and the colour characteristics of the environment’s ambient lighting. This data is then used to apply brightness and colour transformations to the display to ensure that it is always displaying content as accurately as possible. This is most noticeable in environments with lighting fixtures that have a low Colour Rendering Index, such as fluorescent tubes or cheap cool white LEDs. The devices that enable this, ambient light sensors, are usually little ICs that connect to the system over I2C or another industry-standard bus. This is fine for basic applications, but this is Apple. There are some other considerations to be had:

The light sensor is doing stuff whenever the screen is on, so processing its output should be as efficient as possible

The light sensor should be able to be calibrated for maximum accuracy

There are multiple models of light sensor in use, and the OS should not have to care too much about that

The light sensor has to have a three letter acronym like every other piece of hardware on this platform (ALS)

Naturally, this sounds like a job for the Always-On Processor (AOP)!

We’ve had a working AOP+ALS driver set for a while thanks to chaos_princess; however, the raw data the AOP reports back from the ALS is rather inaccurate without calibration. That calibration is a binary blob that must be uploaded to the AOP at runtime. It is essentially firmware. Since we cannot redistribute Apple’s binaries, it must be retrieved from macOS at install time and then stored somewhere the driver knows to look for it.

To achieve this, the Asahi Installer gathers up all the firmware it knows we will need in Linux and stores it on the EFI System Partition it creates. A Dracut module then mounts this to a subdirectory of /lib/firmware/, where drivers can find it. However, issues arise when we need to retrieve more firmware from macOS after Asahi Linux has already been installed. To avoid a repeat of the webcam situation, where users were required to manually do surgery on their EFI System Partition, chaos_princess added the ability for the Asahi Installer to automatically update the firmware package. Starting with ALS, any required firmware updates will be a simple matter of booting into macOS or macOS Recovery, re-running the Asahi Installer, and following the prompts.

To enable ALS support (and to do firmware upgrades in the future), follow these steps:

Ensure you are running version 6.19 or above of the Asahi kernel

Ensure your distro ships iio-sensor-proxy as a dependency of your DE (Fedora Asahi Remix does this)

GoDaddy Gave a Domain to a Stranger Without Any Documentation

anchor.host

What would you do if your organization had used a domain name for 27 years, and the registrar holding the domain seized it without any advance warning? All email and websites went dark. The company’s tech support spent four days telling you to “Just wait, we are working on it.” On the fourth day, the company informed you that someone else has the domain now, and it is no longer yours.

Read on. This crazy story happened exactly one week ago.

My friend Lee Landis is a partner in Flagstream Technologies, a local IT firm in Lancaster, PA. Last Saturday afternoon, one of his client’s domains vanished from his GoDaddy account.

Lee is one of the most competent IT guys I know. The GoDaddy account had dual two-factor authentication enabled, requiring both an email code and an authentication app code to log in. The domain itself had ownership protection turned on. The audit log just said “Transfer to Another GoDaddy Account” by an “Internal User” with “Change Validated: No.”

Some names have been changed

Some names and the domain itself have been changed because people wanted to remain anonymous. The pattern of the domain names mirrors the actual mistake, so the explanation still makes sense. Every fact in this post is true. Lee has hard evidence for every one of them.

As you can see above, GoDaddy emailed Flagstream at 1:39pm that an account recovery had been requested. Three minutes later, the transfer was initiated. Four minutes later, it was complete. On a Saturday afternoon.

Everything at the impacted organization went offline because GoDaddy reset the DNS zone to default when they moved the domain into the new account. Same nameservers. Empty DNS zone file.

Lee’s client lost their website and email for the next four days.

27 yrs: Domain in active use

32: Calls to GoDaddy

9.6 hrs: On the phone with GoDaddy

17: Emails to GoDaddy. Zero callbacks.

Domain and account were fully protected.

The domain had the “Full Domain Privacy and Protection” security product that GoDaddy sells. Dual two-factor on the account. None of it mattered. The transfer was done by an “Internal User” inside GoDaddy.

The domain was HELPNETWORKINC.ORG. The real domain name has been changed because the organization wanted to remain anonymous. It belongs to a national organization with twenty locations across the United States. The domain has been in active use for 27 years. Each chapter runs its website and email on a subdomain of that one parent domain. When HELPNETWORKINC.ORG went dark, every chapter went dark with it.

Thirty-two calls. 9.6 hours on the phone. Zero callbacks.

Lee called GoDaddy on Sunday. They confirmed the domain was no longer in his account but could not say where it went due to privacy concerns. They told him to email undo@godaddy.com. He did, but never received any response at that address. Of course, Lee didn’t feel this reflected the appropriate level of urgency for the issue. He asked for a supervisor, who was even less helpful. Lee was not happy. He may have said some hurtful things to GoDaddy’s support personnel during this call. That first call lasted 2 hours, 33 minutes, and 14 seconds.

On Monday morning, Lee and a coworker started working in earnest on this issue because there was still no update from GoDaddy. Calling in yielded a different agent, who told Lee to email transferdisputes@godaddy.com instead. By Tuesday the address had changed again, to artreview@godaddy.com. The instructions shifted by the day. It seemed like every GoDaddy tech support person had a slightly different recommendation.

The one thing that stayed consistent was the message: “Just wait a day or two. We are working on it. Why do you think this is so urgent?”

One of the most frustrating parts of this process is that all official communication to and from GoDaddy about this issue went through generically named email accounts. There should have been a named individual in charge of managing and communicating about the issue. Instead, there were just random generic email addresses that seemed to change on a daily basis.

Every call generated a fresh case number. Lee lost count of the total number of cases. A few of the cases are 01368489, 894760, 01376819, 01373017, 01376804, 01373134, and 01370012. None of them tied together on GoDaddy’s side. Every escalation started from zero. These are actual case numbers, in case anyone at GoDaddy wants to check into this.

I posted on X to see if anyone I knew at GoDaddy could escalate.

“Can any of my GoDaddy friends help? A good friend of mine had a domain taken. My friend is very competent. Domain ownership protection was on. Owner did not get any notices. Audit log looks fishy. Phone/email support telling them to wait. Did a GoDaddy employee take it? pic.twitter.com/OWcJIalWcF” — Austin Ginder (@austinginder) April 20, 2026


My friend Courtney Robertson, who works at GoDaddy, reposted it and started escalating internally on her own time. Thank you, Courtney. GoDaddy has a lot of great people like her. That part is not in question. What GoDaddy does not have is a way to actually fix a mistake once one has been made. Tickets pile up. Phone calls reset. Every escalation is a new person reading the case from scratch. The thing you actually need solved drifts between queues.

And there was no real way to dispute it.

While Lee was on the phone, his colleague was on a different phone trying to file a Transfer Dispute. GoDaddy directed him to cas.godaddy.com/Form/TransferDispute. He filed a dispute and received this message, which he captured via a screenshot.

Lee and his colleagues worked diligently at challenging the transfer. They supplied the correct name of the person listed on the domain. They supplied that person’s driver’s license as required. They also supplied the correct business documentation as listed in GoDaddy’s own requirements. Every time they submitted a request, they were told they would hear back in 48 to 72 hours.

GoDaddy FINALLY responds with a SHOCKING statement

Tuesday afternoon, after four days of waiting, Flagstream finally got an official email response back from GoDaddy.

GoDaddy’s reply to Lee:

“After investigating the domain name(s) in question, we have determined that the registrant of the domain name(s) provided the necessary documentation to initiate a change of account. … GoDaddy now considers this matter closed.”

That was it. No explanation of what documentation. The suggested next steps were three links. A WHOIS lookup. ICANN arbitration providers. A page about getting a lawyer involved to represent you in litigation.

Flagstream migrates client to new domain

Once GoDaddy declared the matter closed, Flagstream began migrating the client to a new domain. New email addresses. New website addresses. Coordinating with various teams throughout the night to change everything over to a new domain.

Switching to a new domain is a massive amount of work, and it leaves a lot of lingering problems behind because there is no control over the original domain.

Every email address that exists out in the world is now wrong. You have to tell everyone the new address. If they try the old one, it bounces.

Every piece of marketing material that references the old domain is now incorrect. There is no way to forward anything to the new domain.

All of the SEO is gone. You are starting an online presence from scratch.

Then a stranger found the domain in her account.

Wednesday morning, Susan (not her real name), 2,000 miles away from the client’s headquarters, noticed something odd. Susan had been working to reclaim a totally different domain used by a former employee. When she looked closely at her GoDaddy account, the domain in her account wasn’t the one she had requested. She made a few phone calls because she knew this was a problem and eventually got connected with Flagstream. Working with Susan, they ran a GoDaddy account-to-account transfer and put the domain back where it belonged. DNS came back up while Lee was still typing the email telling me it was over. The entire process of reclaiming the domain took less than 5 minutes.

Once the domain was back and DNS was working, Flagstream started the arduous task of reverting everything they had done the day before. They switched email and websites back to the original domain, once again working through the night to get everything fixed.

The resolution for this problem did not come from GoDaddy support. It did not come from the dispute team. It did not come from the Office of the CEO team. It came from a stranger who accidentally ended up with the domain and was smart and honest enough to start calling around because she knew something wasn’t right.

Susan is really the hero of this entire story. Without her, Flagstream would still have no idea what happened to this domain. Lawyers would have gotten involved, but it would probably be months until anything was resolved.

Timeline of events

Apr 18, 1:39pm

GoDaddy emails Flagstream that an Account Recovery has been requested for the account.

Apr 18, 1:42pm

Transfer initiated by a GoDaddy “Internal User”. Three minutes after the recovery notice.

Apr 18, 1:43pm

Transfer completed. Change Validated is listed as “No”. Website and email go dark across the entire organization.

Apr 19

Lee discovers the domain is gone. GoDaddy says email undo@godaddy.com and wait.

Apr 20

Flagstream team starts calling and emailing GoDaddy for updates. GoDaddy now says email transferdisputes@godaddy.com. Austin posts on X. Courtney Robertson routes the case to the Office of the CEO team.

Apr 21

Flagstream files multiple Transfer Dispute cases with the requested documentation. Every submission is met with a 48 to 72 hour response window. GoDaddy emails Lee that the matter is closed and the domain belongs to someone else. Flagstream starts the painful process of migrating the organization to a new domain so they can function.

Apr 22

Susan notices the wrong domain in her account and calls Lee. An account-to-account transfer brings it home.

Then it got crazier. GoDaddy approved the transfer with zero documents.

The organization on the receiving end of the transfer was a regional chapter of the same network. Susan, the executive assistant, had emailed GoDaddy two weeks earlier asking to recover a different domain: HELPNETWORKLOCAL.ORG. Not HELPNETWORKINC.ORG.

Flagstream spent some time talking to Susan to figure out exactly how she was able to accidentally get the domain transferred into her account. Did she unintentionally supply all of the correct documentation? Talking to Susan, they figured out that GoDaddy actually approved the transfer without her supplying ANY documentation.

Her email signature happened to reference her chapter’s website at a subdomain of HELPNETWORKINC.ORG. GoDaddy’s recovery team apparently looked at the signature, saw the parent domain, and transferred that domain into her account.

GoDaddy sent Susan a link to upload supporting documents. The link expired before she got around to using it. She emailed back requesting a new link so she could upload the required documentation. However, before the new link arrived, she received an email saying the domain transfer had been approved.

Susan never submitted a single document. Not for the domain she was actually trying to recover, and certainly not for the one GoDaddy ended up giving her. GoDaddy approved the change of account, transferred a 27-year-old non-profit’s domain into a stranger’s account, and “considered the matter closed” without requiring any documentation.

This is a huge security issue.

If Susan had been a bad actor, she could have intercepted email. She could have used that email to reset passwords, get MFA codes, launch phishing attacks, etc. She could have put up a new website with malware on it, redirected payments on the website, etc.

When the domain initially disappeared and Flagstream was unable to obtain any information about who had it, they feared the worst. Flagstream and the impacted client started to come up with a plan to protect against the threats mentioned above, which was a huge undertaking for an organization of this size. Basically, all users across the entire organization needed to start logging into every important website and make sure the compromised domain was removed from the account. This includes bank websites, Amazon, the IRS, payroll, Dropbox, email accounts, and, ironically enough, even GoDaddy accounts.

It is outrageous that Susan was able to obtain this domain without supplying any documentation. Everyone was lucky it was Susan who got this domain.

GoDaddy: please follow up with Flagstream.

This is not acceptable.

A GoDaddy employee transferred a 27-year-old domain out of a paying customer’s account with no validation. With zero documentation submitted by the recipient. When the customer disputed with legitimate documentation, every submission was met with “We will respond in 48 to 72 hours.” After four days, GoDaddy claimed the domain belonged to someone else and the case was closed. The fix came from the recipient of the mistake, not from GoDaddy, despite 9.6 hours of phone conversations.

To anyone at GoDaddy reading this: please follow up with Lee Landis at Flagstream Technologies and make this right. An apology is probably in order. An internal review of how the transfer team validates documentation is in order, including how a transfer can be approved with zero documentation. Lee would like a clear answer on how this happened. Lee doesn’t want an email from a generic GoDaddy account. Lee wants a real person to call or email him. This person needs to leave an email address and phone number in case Lee has follow-up questions.

Even disclosing this to GoDaddy was broken.

Before publishing this post, I wanted to share the findings with GoDaddy’s security team directly. I emailed security@godaddy.com with the full report. The message bounced.

GoDaddy’s auto-reply to security@godaddy.com:

“A custom mail flow rule created by an admin at secureservernet.onmicrosoft.com has blocked your message. We hope this message finds you well. This email mailbox is no longer monitored. To address your needs, we have outlined two popular options for you: 1: To submit an abuse report, please visit our Abuse Reporting Form. 2: If you are looking to submit a vulnerability, please visit our bounty program https://hackerone.com/godaddy-vdp.”

So I filed the same report through HackerOne instead, report #3696718.

This is the same pattern that played out across the four-day outage. The official channel does not work. The alternative path requires knowing to bypass it. Most honest people who notice a security issue are not going to have a HackerOne account. They send an email. How is it that GoDaddy doesn’t have a public security disclosure email address?

Whether the original transfer was a single agent’s mistake or a flaw in the recovery workflow, it is still a security issue. And there is no clean path from “I found something” to “a human at GoDaddy is looking at it.”

The only way to get GoDaddy’s attention is to leave.

Lee is upset about the four days of stress and lost productivity across the impacted organization. But his bigger concern is what comes next. Apparently there is no way to protect against this threat if your domain is hosted at GoDaddy. In addition, it seems there is no efficient way to contest such a GoDaddy transfer.

Flagstream will most likely migrate every one of their domains off GoDaddy. That is the only protection they have left, and the only escalation GoDaddy seems to respond to.

Are you at risk?

Is your domain hosted on GoDaddy? What would you do if the domain disappeared out of your GoDaddy account and your entire business went dark?
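If you want early warning rather than a Saturday surprise, one cheap defence is to snapshot your zone from the outside and diff it on a schedule; the emptied zone in this story would have tripped such a check within one polling interval. A minimal sketch of the diff step (the snapshots themselves would come from whatever resolver tooling you already use; the data shape here is an assumption):

```python
def diff_zone(
    before: dict[str, set[str]], after: dict[str, set[str]]
) -> dict[str, tuple[set[str], set[str]]]:
    """Compare two snapshots of a domain's records, keyed by record
    type (e.g. "NS", "MX", "A"), and return {rrtype: (added, removed)}
    for every type whose values changed. Any non-empty result is a
    reason to alert."""
    changes = {}
    for rrtype in before.keys() | after.keys():
        old, new = before.get(rrtype, set()), after.get(rrtype, set())
        if old != new:
            changes[rrtype] = (new - old, old - new)
    return changes
```

Monitoring does not prevent a registrar-side transfer, but it shrinks the gap between “everything went dark” and “someone knows why.”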

A.I. Should Elevate Your Thinking, Not Replace It - Blog - Koshy John

www.koshyjohn.com

In talking to engineering management across tech industry heavyweights, it’s apparent that software engineering is starting to split people into two nebulous groups:

The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter: framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.

The second group will use A.I. to avoid thinking. They will paste prompts into a box, collect polished output, and present it as though it reflects their own reasoning. For a while, that can look like productivity. It can even look like talent. But it is a dead end.

The software engineers who will be most valuable in the future are not the ones who do everything themselves. They are the ones who refuse to spend time on work that A.I. can do for them, while still understanding everything that is done on their behalf. They use the time savings to operate at a higher level. They elevate their thought process through rigor rather than outsourcing it.

That distinction matters more than people think.

In this post:

The New Failure Mode: Outsourced Thinking (& analogies)

What the Best Engineers Will Do Instead

The Real Source of Value

The Risk for Early-In-Career Engineers

There Is No Shortcut to Judgment

In Summary: The Dividing Line & Organizational Implications

Why This Matters Even More to Organizational Health

The New Failure Mode: Outsourced Thinking

A.I. can already generate code, summarize meetings, explain concepts, produce design drafts, and write status updates in seconds. That is useful but also dangerous.

The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence.

There is now a very real temptation to hand a model a problem, receive a plausible answer, and then repeat that answer as if it reflects your own understanding. That is close to plagiarism, but in some ways worse. At least when a student copies from another person, there is still a real human source behind the answer. Here, people can present machine-produced reasoning they do not understand, cannot defend, and could not reproduce on their own.

That is intellectual dependency being labeled as leverage.

And that dependency has a cost. Every time you substitute generated output for your own comprehension, you are skipping the reps that build judgment. You are trading long-term capability for short-term appearance.

I’m going to share some analogies to make this line of thought more concrete and approachable.


What the Best Engineers Will Do Instead

The best engineers will absolutely use A.I. more, not less. But they will use it with a very different posture.

They will let A.I. draft boilerplate, summarize docs, generate test scaffolding, propose refactorings, surface possible failure modes, accelerate investigation, and compress routine work. They will happily offload the mechanical parts of the job. But they will also:

ask sharper questions.

define the real problem instead of merely responding to the visible one.

optimize for clarity and brevity (as before), instead of a lot of polished language that says little of substance.

generate new, high-value knowledge, instead of simply rehashing and remixing existing knowledge in the system.

Then they will take the reclaimed time and invest it where it matters most.

The Real Source of Value

For years, people have confused software engineering with code production. That confusion is now getting exposed.

If the job were mainly about producing syntactically valid code, then of course A.I. would be on a direct path to replacing large parts of the profession. But that was never the highest-value part of the work. The value was always in judgment.

The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.

A.I. can support that work. It cannot own it.

In fact, the engineers who produce the most value in the future will often be the ones generating the knowledge that makes A.I. more useful in the first place. They will create the design principles, domain understanding, patterns, context, and decision frameworks that improve the machine’s effectiveness. They will feed the system with better questions, better constraints, and better corrections.

In that world, the engineer is not replaced by A.I. The engineer becomes more leveraged because they are operating above the level of raw output.

The Risk for Early-in-Career Engineers

This issue is especially important for people early in their careers.

Early years matter because that is when foundational skills are formed. Debugging instinct. System intuition. Precision. Taste. Skepticism. The ability to decompose a problem. The ability to explain why something works, not just that it appears to work.

Those skills are built through friction. Through struggle. Through getting things wrong and fixing them. Through tracing failures back to root cause. Through writing something and realizing it does not survive contact with reality.

That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.

Someone who uses A.I. to answer every hard question may look efficient for a quarter or two. But they may also be quietly failing to build the very capabilities their future depends on. They are skipping the stage where understanding is forged.

Going back to the analogies: this is like copying answers through university and then showing up to a job that requires independent thought. It is like using a calculator for every arithmetic task and never developing number sense. It is like relying on self-driving features before learning how to actually drive. The support system may make you look functional, but it does not make you capable.

And eventually raw capability is the main thing that matters. There is no substitute.

There Is No Shortcut to Judgment

This is the part that some people may not want to hear:

There is no generated explanation that transfers mastery into your brain without you doing the work.

There is no way to outsource reasoning for long enough that you still end up strong at reasoning.

You can outsource mechanics, accelerate research, and compress routine tasks. You can remove enormous amounts of low-value labor. All of that is good and should happen.

But you cannot skip the formation of skill and expect to possess it anyway.

That is the central mistake behind the most naive uses of A.I. People think they are saving time, when in reality they are often deferring a bill that will come due later, in the form of weak judgment, shallow understanding, and limited adaptability.

In Summary: The Dividing Line & Organizational Implications

The di­vid­ing line is sim­ple:

If A.I. is help­ing you un­der­stand faster, think deeper, and op­er­ate at a higher level, it is mak­ing you more valu­able.

If A.I. is help­ing you avoid un­der­stand­ing, avoid strug­gle, and avoid own­er­ship of the rea­son­ing, it is mak­ing you less valu­able.

One path com­pounds, while the other hol­lows you out and leaves you ripe for ir­rel­e­vance.

That is why the fu­ture does not be­long to the en­gi­neers who merely use A.I. It be­longs to the en­gi­neers who know ex­actly what to del­e­gate, ex­actly what to own, and ex­actly how to turn time sav­ings into bet­ter think­ing.

If you haven't al­ready, it's time to make in­formed choices about how you shape your fu­ture in the in­dus­try.

Why This Matters Even More to Organizational Health

Engineering man­age­ment will face the same di­vid­ing line.

Some lead­ers will rec­og­nize the dif­fer­ence be­tween en­gi­neers who use A.I. to ac­cel­er­ate un­der­stand­ing and en­gi­neers who use it to sim­u­late un­der­stand­ing. Others will not. That gap will mat­ter more than many or­ga­ni­za­tions re­al­ize.

One of the defin­ing traits of strong en­gi­neer­ing lead­er­ship in the A.I. era will be the abil­ity to dis­tin­guish pol­ished out­put from real judg­ment. Leaders who can­not tell the dif­fer­ence may re­ward speed, flu­ency, and pre­sen­ta­tion while miss­ing the deeper sig­nals of tech­ni­cal depth: orig­i­nal­ity, rigor, sound trade­off analy­sis, and the abil­ity to rea­son clearly about un­fa­mil­iar prob­lems.

That cre­ates or­ga­ni­za­tional risk.

The most ca­pa­ble en­gi­neers are of­ten the ones pro­duc­ing the in­sight, con­text, de­sign judg­ment, and cor­rec­tive feed­back that make both teams and A.I. sys­tems more ef­fec­tive. If an or­ga­ni­za­tion al­lows low-un­der­stand­ing, high-flu­ency work to spread unchecked, it does not just lower the qual­ity of in­di­vid­ual out­put. It starts to de­grade the knowl­edge en­vi­ron­ment it­self. Reviews get weaker. Design dis­cus­sions get shal­lower. Documents be­come more pol­ished and less use­ful. Over time, the or­ga­ni­za­tion be­comes worse at gen­er­at­ing the very clar­ity and tech­ni­cal judg­ment it de­pends on.

This is why lead­er­ship mat­ters so much here. The chal­lenge is not merely adopt­ing A.I. tools. It is pro­tect­ing the con­di­tions un­der which real think­ing, learn­ing, and crafts­man­ship con­tinue to thrive.

That starts with hir­ing. Organizations will need bet­ter ways to de­tect gen­uine un­der­stand­ing rather than sur­face-level flu­ency. They will need in­ter­view loops that test rea­son­ing, not just pol­ished an­swers. They will need eval­u­a­tion sys­tems that re­ward clar­ity, depth, sound judg­ment, and durable tech­ni­cal con­tri­bu­tion rather than sheer out­put vol­ume.

It also af­fects team de­sign and cul­ture. Strong en­gi­neers should not spend dis­pro­por­tion­ate amounts of time clean­ing up plau­si­ble but shal­low work gen­er­ated by peo­ple who have out­sourced their think­ing. If lead­er­ship does not ac­tively guard against that, high per­form­ers be­come force mul­ti­pli­ers for every­one ex­cept them­selves. That is a fast path to frus­tra­tion, low­ered stan­dards, and even­tual at­tri­tion.

The or­ga­ni­za­tions that han­dle this well will not be the ones that sim­ply push A.I. adop­tion hard­est. They will be the ones that learn to sep­a­rate lever­age from de­pen­dency, ac­cel­er­a­tion from im­i­ta­tion, and gen­uine ca­pa­bil­ity from con­vinc­ing out­put.

In the A.I. era, or­ga­ni­za­tional qual­ity will in­creas­ingly de­pend on whether lead­er­ship can still rec­og­nize the dif­fer­ence.

Editorial note: Like all con­tent on this site, the views ex­pressed here are my own and do not nec­es­sar­ily re­flect the views of my em­ployer.

London Marathon 2026 results: Sabastian Sawe makes history with first competitive sub-two-hour marathon

www.bbc.com

'Sawe smashes two-hour mark to move goalposts for marathon running'

'Absolutely incredible!' - Sawe runs sub-two-hour marathon in London

By Harry Poole

BBC Sport jour­nal­ist

Sabastian Sawe made his­tory at the London Marathon by be­com­ing the first ath­lete to run a sub-two-hour marathon in a com­pet­i­tive race.

The 31-year-old Kenyan crossed the line to win in one hour 59 min­utes 30 sec­onds, more than one minute faster than the late Kelvin Kiptum’s pre­vi­ous record of 2:00:35, set in 2023.

The great Eliud Kipchoge be­came the first man to run a marathon in un­der two hours in 2019, but that was not record-el­i­gi­ble as it was held un­der con­trolled con­di­tions.

Already on world record pace as he crossed the halfway mark in 1:00:29, Sawe was able to speed up over the sec­ond half of the race to run even faster than Kipchoge’s time.

Sawe made his de­ci­sive move be­fore the fi­nal 10km, with only debu­tant Yomif Kejelcha able to cover his surge off the front.

Remarkably, Kejelcha became the second man to run under two hours in race conditions, finishing runner-up in 1:59:41.

Half marathon world record holder Jacob Kiplimo also crossed the line faster than Kiptum’s for­mer record, com­plet­ing the podium in 2:00:28.

Sawe, speaking on BBC TV, said: "I am feeling good. I am so happy. It is a day to remember for me.

"We started the race well. Approaching finishing the race, I was feeling strong. Finally reaching the finish line, I saw the time, and I was so excited."

Assefa sets new world record to win London Marathon for sec­ond year in a row

In the wom­en’s race, Ethiopia’s Tigst Assefa im­proved her own world record for a women-only field as she surged clear of Kenyan ri­vals Hellen Obiri and Joyciline Jepkosgei in a thrilling fin­ish to re­tain her ti­tle in 2:15:41.

Swiss great Marcel Hug cruised to a record-equalling eighth London Marathon victory in the elite men's wheelchair race, drawing level with Great Britain's David Weir by winning for a sixth successive year.

Catherine Debrunner also re­tained the elite wom­en’s wheel­chair ti­tle as the Swiss burst clear of American Tatyana McFadden in the clos­ing stages.

How Sawe achieved sport­ing im­mor­tal­ity in London

Much of the fo­cus be­fore­hand had been about Sawe - win­ner of last year’s race in 2:02:27 - tar­get­ing Kiptum’s London Marathon course record of 2:01:25.

He told BBC Sport this week that it was "only a matter of time" before he broke Kiptum's world record, adding "I hope and wish one day [it will be me]" when asked about becoming the first person to run under two hours in a race.

Sawe had tar­geted Kiptum’s world record in Berlin last September, when he went through halfway in 60:16, be­fore that bid was ul­ti­mately un­done by the hot weather.

But, in per­fect race con­di­tions in London, Sawe stormed down The Mall to achieve that his­toric feat, do­ing so in a time which was once con­sid­ered im­pos­si­ble.

BBC commentator and former world champion Steve Cram said: "There are things that happen in sport and you want to be there to see history being made - if you are watching on TV then well done, but if you're in London, it is a privilege and it is incredible.

"We said it was a day for records but I don't think in our wildest dreams we could have foreseen this."

'I am so happy' - Sawe reacts to winning London marathon

After cov­er­ing the first half of the course in 60:29, Sawe moved through the gears to com­plete the sec­ond half in just 59:01.

Only 63 men in his­tory have run a half marathon as quickly as that - with Sawe’s own per­sonal best stand­ing at 58:05.

His splits con­tin­ued to quicken as he chased down his tar­get, clock­ing 13:54 for the five kilo­me­tres from 30 – 35km, and 13:42 for the 35 – 40km stretch - an av­er­age pace of 2:45 per kilo­me­tre.

"This will reverberate around the world," said former women's marathon world record holder Paula Radcliffe.

"The goalposts have literally just moved for marathon running and where you benchmark yourself as being world-class.

"It is a lesson to everybody out there. We say 'don't go out too fast' - they went out smartly and paced it really well."

We’ve wit­nessed some­thing in­cred­i­ble’

Pundits re­act to Sawe’s land­mark sub-two-hour marathon

Kitted out in spon­sor Adidas’ lat­est su­per­shoes, Sawe, who has won all four marathons he has con­tested, man­aged to take two min­utes and 35 sec­onds off his marathon per­sonal best.

He has sought to en­sure con­fi­dence in his per­for­mances by un­der­go­ing fre­quent drug tests and was tested 25 times be­fore com­pet­ing in Berlin, where he faded to fin­ish in 2:02:16.

"I want to thank the crowds for cheering us. I think they help a lot, because if it was not for them, you don't feel like you are so loved," Sawe said.

"I think they help a lot because them calling makes you feel so happy and strong and pushing.

"That is why I can say what comes for me today is not for me alone but all of us in London."

Reacting to Sawe's record, Britain's four-time Olympic champion Mo Farah said: "We've waited long enough to see a human go sub-two.

"That's always been the question that we've asked. We've just witnessed something incredible."

Assefa im­proves record as Hug makes his­tory

Hug wins London Marathon wheel­chair race for sixth con­sec­u­tive year

Assefa, the third-fastest woman in his­tory, lined up as favourite to re­peat her 2025 tri­umph in London af­ter in­juries forced Olympic gold medal­list Sifan Hassan and world cham­pion Peres Jepchirchir to with­draw.

The lead­ing trio in Sunday’s race re­mained in­sep­a­ra­ble un­til the clos­ing kilo­me­tres, as Obiri and Jepkosgei ac­com­pa­nied Assefa in­side the Ethiopian’s record pace set in London 12 months ago.

But it was Assefa who sum­moned the en­ergy to push on for vic­tory, go­ing nine sec­onds faster than her pre­vi­ous women-only record.

The wom­en’s elite run­ners be­gin 30 min­utes be­fore the elite men in the London Marathon, mean­ing the event is classed as a women-only race.

Obiri, a six-time global medal­list on the track, crossed the line 12 sec­onds af­ter Assefa, closely fol­lowed by Kenya’s 2021 win­ner Jepkosgei.

Eilish McColgan was the first British woman across the line, plac­ing sev­enth over­all in 2:24:51, while Rose Harvey was ninth in 2:26:14.

Mahamed Mahamed was the best-placed home ath­lete in the men’s event, fin­ish­ing 10th in 2:06:14 and re­plac­ing Alex Yee as the sec­ond-fastest Briton in his­tory.

Debrunner wins wom­en’s wheel­chair race

Hug pro­duced an­other dom­i­nant per­for­mance to tie Weir’s record for the most vic­to­ries in London Marathon his­tory.

Hug, 40, crossed the line in 1:24:13, more than four and a half min­utes clear of Chinese 23-year-old Luo Xingchuan.

Briton Weir com­pleted the podium in 1:29:23 in his 27th con­sec­u­tive ap­pear­ance at the event.

Debrunner cel­e­brated her fourth London Marathon win af­ter out­last­ing McFadden, fin­ish­ing just five sec­onds ahead of the American in clock­ing 1:38:29.

Briton Eden Rainbow-Cooper went into the race with podium as­pi­ra­tions af­ter fin­ish­ing fourth last year and re­gain­ing her Boston Marathon ti­tle on Monday, but those hopes were dashed by a pre-race punc­ture which caused her to start the race late.

interblah.net - Self-updating screenshots

interblah.net

I think this might be the neatest thing I've built in Jelly that nobody will ever notice.

If you’ve ever main­tained a help cen­tre or doc­u­men­ta­tion site for a web

ap­pli­ca­tion, you’ll know the par­tic­u­lar mis­ery of screen­shots. You write a

lovely help ar­ti­cle, care­fully cap­ture a screen­shot of the fea­ture you’re

doc­u­ment­ing, crop it, maybe add a bor­der and a shadow, up­load it, and it looks

great. Then you change the UI slightly — tweak a colour, move a but­ton,

up­date some copy — and sud­denly every screen­shot that in­cludes that el­e­ment

is stale. You know they’re stale. Your users might not no­tice, but you

know, and it gnaws at you.

Or maybe that’s just me.

Either way, I decided to fix it. The help centre in Jelly has a build system where screenshots are captured automatically from the running application, and they update themselves whenever you rebuild.

Markdown with a twist

The help articles are written in Markdown, which gets processed into HTML via Redcarpet and then rendered as ERB views in the Rails app. So far, so ordinary. But scattered through the Markdown are comments like this:

<!-- SCREENSHOT: acme-tools/inbox | element | selector=#inbox-brand-new-section -->

![The "Brand New" section](images/basics-brand-new-section.png ':screenshot')

That HTML comment is an instruction to the screenshot system. It says: "go to the inbox page for the Acme Tools demo team, find the element matching #inbox-brand-new-section, and capture a screenshot of it." The image tag below it is where the result ends up.

How it works

Under the hood, it's a Rake task that fires up a headless Chrome browser via Capybara and Cuprite. It scans every Markdown file for those SCREENSHOT comments, groups them by team (so it only needs to log in once per team), navigates to each URL, and captures the screenshot.
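The scan-and-group step might look something like this in Ruby — a sketch with invented names, assuming the team is the first segment of the directive path (the post doesn't show the real Rake task):

```ruby
# Collect SCREENSHOT directives from Markdown sources and group them
# by team (assumed here to be the first path segment), so the capture
# run only needs to log in once per team.
def collect_directives(markdown_sources)
  directives = markdown_sources.flat_map do |file, text|
    text.scan(/<!--\s*SCREENSHOT:\s*([^|]+?)\s*\|/)
        .map { |(path)| { file: file, path: path } }
  end
  directives.group_by { |d| d[:path].split("/").first }
end

sources = {
  "basics.md"   => '<!-- SCREENSHOT: acme-tools/inbox | element | selector=#x -->',
  "advanced.md" => '<!-- SCREENSHOT: nectar-studio/manage/rules | full_page -->'
}
groups = collect_directives(sources)
# groups.keys => ["acme-tools", "nectar-studio"]
```

Grouping up front is what keeps the run fast: one login and one browser session per demo team, rather than per screenshot.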

The capture modes are:

element — screenshot a specific DOM element by CSS selector

full_page — capture the whole page, optionally cropped to a height

viewport — just what's visible in the browser window

And there are a handful of options that handle the fiddly cases:

<!-- SCREENSHOT: nectar-studio/manage/rules | full_page | click=".rule-create-button" wait=200 crop=0,800 -->

That one navigates to the rules page, clicks a button to open a form, waits 200 milliseconds for the animation, then captures a full-page screenshot cropped to a specific region. The click option is the one that really makes it sing — so many features live behind a button press or a popover, and being able to capture those states automatically is wonderful.
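The option string lends itself to the same key=value parsing. A minimal Ruby sketch of turning it into typed values — the crop semantics ([start, end] pixels) are an assumption, as is everything else not shown in the post:

```ruby
# Turn an option string like 'click=".x" wait=200 crop=0,800' into
# typed values. Hypothetical sketch; crop is assumed to be a pixel range.
def parse_options(str)
  opts = str.scan(/(\w+)=("[^"]*"|\S+)/)
            .to_h { |k, v| [k.to_sym, v.delete('"')] }
  opts[:wait] = opts[:wait].to_i if opts[:wait]                    # milliseconds
  opts[:crop] = opts[:crop].split(",").map(&:to_i) if opts[:crop]  # [start, end]
  opts
end

o = parse_options('click=".rule-create-button" wait=200 crop=0,800')
# o => { click: ".rule-create-button", wait: 200, crop: [0, 800] }
```

The capture loop would then click `o[:click]` if present, sleep `o[:wait]` milliseconds, and crop the resulting image to `o[:crop]`.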

There’s also torn — which ap­plies a torn-pa­per edge ef­fect via a CSS

clip-path — and hide, which tem­porar­ily hides el­e­ments you don’t want in

the shot (dev tool­bars, cookie ban­ners, that sort of thing).

The sat­is­fy­ing bit

The whole pipeline runs with just this:

rails manual:build

That captures every screenshot and then builds all the help pages. When I change the UI, I run that command and every screenshot updates to match. No manual cropping, no "oh I forgot to update that one", no slowly-diverging screenshots that make the help centre look abandoned.

The markdown files live in public/manual/, organised by section — basics, setup, advanced — and the build step processes them into ERB views in app/views/help/, complete with breadcrumbs and section navigation, all generated from the source markdown files.

This also makes it easy to update the help centre at the same time I'm working on the feature; the code and the documentation live together and can be kept in sync within the same PR or even commit.

One of those "why didn't I do this sooner" things

I put off building this for ages because it seemed like a lot of work for a "nice to have". It was a fair bit of work, honestly. Handling the edge cases — elements that need scrolling into view, popovers that need clicking, images that need cropping to avoid showing irrelevant content — took longer than the happy path.

But now that it exists, I update the help centre far more often than I used to, because the friction is almost gone. Change the UI, run the build, commit the results. The screenshots are always current, and I never have to open a browser and fumble around with the macOS screenshot tool.


Fast16: The Cyberweapon That Predates Stuxnet by Five Years

hackingpassion.com

Want to learn ethical hacking? I built a complete course. Have a look! Learn penetration testing, web exploitation, network security, and the hacker mindset: → Master ethical hacking hands-on. Hacking is not a hobby but a way of life!

For 21 years, a cy­ber­weapon called fast16 sat com­pletely un­de­tected. This one did not de­stroy ma­chines or blow things up. It cor­rupted the math. Scientists run­ning nu­clear and en­gi­neer­ing sim­u­la­tions got out­put that looked com­pletely nor­mal, every num­ber added up, every re­sult made sense, and all of it was de­lib­er­ately wrong. It sur­faced last week. It pre­dates Stuxnet by five years.

SentinelOne re­searchers Vitaly Kamluk and Juan Andrés Guerrero-Saade pre­sented the full analy­sis of fast16 at Black Hat Asia last week. Fast16’s core bi­nary has a com­pi­la­tion time­stamp of August 30, 2005. Stuxnet’s C&C in­fra­struc­ture was set up in November that same year.

Most peo­ple in se­cu­rity know Stuxnet as the worm that de­stroyed cen­trifuges at Iran’s Natanz nu­clear fa­cil­ity around 2010, by push­ing them past their me­chan­i­cal lim­its while ly­ing to the mon­i­tor­ing soft­ware about what was hap­pen­ing. It was the first known cy­ber­weapon de­signed to cause phys­i­cal de­struc­tion, and for years it was con­sid­ered the start­ing point of this whole era. Fast16 was there first, and for a long time it was the only one.

Kamluk started with a hunch. He had no­ticed that the most so­phis­ti­cated state-spon­sored mal­ware fam­i­lies he knew about all shared one tech­ni­cal habit: each one had a small script­ing en­gine built in called Lua. Lua works like a re­mote con­trol for mal­ware, it lets op­er­a­tors change what the im­plant does while it is al­ready run­ning on a tar­get ma­chine, with­out need­ing to send a com­pletely new file. He wanted to know if some­thing older had done the same thing first, and went look­ing through old col­lec­tions.

What he found was a file on VirusTotal called svcmgmt.exe, up­loaded in October 2016 and flagged by al­most no­body. It looked like a bor­ing Windows ser­vice wrap­per from the XP era. But in­side it was an em­bed­ded Lua 5.0 vir­tual ma­chine, en­crypted byte­code, and a path point­ing to a ker­nel dri­ver called fast16.sys. That makes fast16 the ear­li­est known Windows mal­ware to em­bed a Lua en­gine, pre­dat­ing the next known ex­am­ple by three full years.

One more thing con­firms the time­line. Fast16 only runs on sin­gle-core proces­sors, built at a time when most ma­chines were still run­ning on a sin­gle core and multi-core was just be­gin­ning to ar­rive on the mar­ket.

The frame­work runs in three lay­ers. The outer layer is svcmgmt.exe, a car­rier that be­haves dif­fer­ently de­pend­ing on how it is launched. Pass it -p and it spreads across the net­work. Pass it -i and it in­stalls it­self as a Windows ser­vice and runs the em­bed­ded pay­load. Pass it -r and it runs the pay­load with­out in­stalling. Inside the car­rier are three things stored in en­crypted form: the Lua byte­code that han­dles the op­er­a­tional logic, a DLL that hooks into Windows’ dial-up and VPN con­nec­tion sys­tem, and fast16.sys it­self. That DLL is worth a closer look. Every time a ma­chine con­nects to a re­mote net­work, it writes the con­nec­tion de­tails to a named pipe that op­er­a­tors can read. So while fast16.sys was cor­rupt­ing cal­cu­la­tions on disk, the DLL was qui­etly map­ping out which ma­chines were con­nect­ing to which net­works, giv­ing op­er­a­tors a live pic­ture of the fa­cil­i­ty’s in­ter­nal struc­ture.

Part of what makes that outer layer in­ter­est­ing is how it spreads. The mech­a­nism works like a de­liv­ery truck with mul­ti­ple com­part­ments. Each com­part­ment, called a worm­let, can carry a dif­fer­ent pay­load for a dif­fer­ent pur­pose. The car­rier copies it­self across net­work shares with weak au­then­ti­ca­tion and starts up as a ser­vice on every ma­chine it reaches. SentinelOne calls this clus­ter mu­ni­tion ar­chi­tec­ture. In the re­cov­ered sam­ple, only one of those com­part­ments is filled. The oth­ers are empty, which raises an ob­vi­ous ques­tion about whether other vari­ants ex­ist with dif­fer­ent pay­loads that no­body has found yet.

Before any of this runs, the code checks the reg­istry for se­cu­rity soft­ware. If it finds Kaspersky, Symantec, McAfee, F-Secure, Zone Labs, or about a dozen other prod­ucts that were com­mon in the mid-2000s, it stops im­me­di­ately. That list was not guess­work. It re­flects ex­actly what the op­er­a­tors ex­pected to find on the ma­chines they were af­ter.

The sec­ond layer is the worm, which spreads us­ing stan­dard Windows ser­vice con­trol and file-shar­ing APIs, noth­ing cus­tom. It re­lies on weak or de­fault ad­min pass­words on net­work shares to move from ma­chine to ma­chine, which was a re­al­is­tic as­sump­tion for a lot of in­ter­nal net­works in 2005.

The third layer is fast16.sys, and this is where the sab­o­tage ac­tu­ally hap­pens. A ker­nel dri­ver sits very deep in­side an op­er­at­ing sys­tem, be­low where an­tivirus soft­ware nor­mally looks. Fast16.sys loads at boot and po­si­tions it­self above every stor­age layer on the ma­chine: NTFS, FAT, the net­work filesys­tem. The first thing it does when it loads is dis­able the Windows Prefetcher, a sys­tem that nor­mally caches fre­quently-used files to speed things up. With that off, every sin­gle file read has to go through the full stor­age stack, and through the dri­ver. Everything that reads from disk passes through it first. And then it just waits. Nothing hap­pens un­til some­one logs in and the desk­top starts. Only then does it be­gin watch­ing every ex­e­cutable that gets opened.

The dri­ver does not go af­ter every file it sees. It is look­ing for soft­ware built with a spe­cific tool: the Intel C++ com­piler leaves a small iden­ti­fy­ing string in every ex­e­cutable it pro­duces, right af­ter the last sec­tion header. The de­vel­op­ers knew ex­actly what com­piler their tar­gets used, and built the se­lec­tion logic around that fin­ger­print.

For every file that matches, the dri­ver in­ter­cepts the float­ing-point cal­cu­la­tion rou­tines in mem­ory as the file is be­ing read from disk. Floating-point cal­cu­la­tions are the math be­hind pre­ci­sion sim­u­la­tions, the kind that tell you whether a bridge de­sign will hold un­der load, or whether an ex­plo­sive trig­ger will det­o­nate at the right mo­ment. The dri­ver patches those rou­tines us­ing 101 pat­tern-match­ing rules, in­jects a block of FPU in­struc­tions that qui­etly shifts val­ues in in­ter­nal cal­cu­la­tion ar­rays, and lets the file load as if noth­ing hap­pened. The orig­i­nal code on disk is un­touched. The soft­ware runs nor­mally. The re­sults are wrong.

Running those 101 rules against soft­ware from that era pointed to three spe­cific tar­gets.

The first is LS-DYNA 970, a sim­u­la­tion suite used for mod­el­ing ex­plo­sions, struc­tural fail­ures, and high-speed im­pacts. The Institute for Science and International Security pub­lished a re­view in September 2024 of 157 aca­d­e­mic pa­pers show­ing that Iranian re­searchers used LS-DYNA in work con­nected to nu­clear weapons de­vel­op­ment, specif­i­cally mod­el­ing the ex­plo­sive trig­gers that ini­ti­ate war­head det­o­na­tion. If fast16 was run­ning on those ma­chines, the sci­en­tists had no way of know­ing their re­sults were wrong. Every de­sign de­ci­sion based on those num­bers was built on cor­rupted out­put.

The sec­ond tar­get is PKPM, and this is the part most cov­er­age misses en­tirely. PKPM is China’s dom­i­nant struc­tural en­gi­neer­ing soft­ware, de­vel­oped by Tsinghua University and the China Academy of Building Research and used across Chinese con­struc­tion pro­jects for over three decades. What makes it more than a stan­dard civil en­gi­neer­ing tool is that PKPM is also used for seis­mic struc­tural analy­sis of nu­clear re­ac­tor fa­cil­i­ties. A 2024 pa­per in Advances in Civil Engineering doc­u­ments the use of PKPM to model the struc­tural be­hav­ior of China’s TMSR-LF1 tho­rium molten salt re­ac­tor un­der earth­quake con­di­tions. SentinelOne can­not con­firm who the PKPM tar­get was or where fast16 ran. Whether this was aimed at a sec­ond tar­get coun­try is left as an open ques­tion.

The third is MOHID, an open-source wa­ter mod­el­ing plat­form de­vel­oped at the Instituto Superior Tecnico in Lisbon. It is used for mod­el­ing coastal wa­ter sys­tems, sed­i­ment trans­port, dam be­hav­ior, and en­vi­ron­men­tal im­pact of large con­struc­tion pro­jects near wa­ter. SentinelOne says openly they can­not iden­tify what the in­tended sab­o­tage ef­fect on this soft­ware would have been, and they are ask­ing the re­search com­mu­nity for help. Why it was tar­geted may still be in a sam­ple no­body has found yet.

The NSA connection comes from a list in the ShadowBrokers leak. In April 2017, the ShadowBrokers published a large collection of materials widely understood to have come from the NSA's Equation Group. Inside was a file called drv_list.txt, basically a do-not-touch list for operators. When a team landed on a target machine and found a driver from that list, it told them whether that driver belonged to a friendly operation and whether they should leave it alone. It was a system for making sure different teams did not accidentally interfere with each other's work.

Most en­tries on that list got a note to be cau­tious or pull back. Fast16 got some­thing dif­fer­ent:

"fast16","*** NOTHING TO SEE HERE - CARRY ON ***"

That is one operator telling another: if you find this driver, do not touch it, it is ours. Researchers at CrySyS Lab noticed this entry when they analyzed the ShadowBrokers dump in 2018 and had no sample to connect it to. Eight years later, there is one. The ShadowBrokers materials are widely linked to the NSA's Equation Group, though as with all intelligence leaks, the full picture is not available from the outside.

One more thing in the code stands out. The source files con­tain ver­sion con­trol mark­ers that come from Unix de­vel­op­ment en­vi­ron­ments of the 1970s and 1980s, long be­fore Windows ex­isted. They look like this:

@(#)par.h $Revision: 1.3 $

That kind of no­ta­tion, called SCCS/RCS, is the equiv­a­lent of find­ing a ro­tary phone in a mod­ern of­fice. Nobody uses it in 2005 Windows ker­nel code un­less their pro­gram­ming back­ground goes back decades, to gov­ern­ment and mil­i­tary com­put­ing en­vi­ron­ments from a com­pletely dif­fer­ent era. These are not week­end hack­ers or free­lancers. This is a long-run­ning in­sti­tu­tional pro­gram built by peo­ple who spent their ca­reers in very spe­cific places.

What makes all of this worse is the de­tec­tion record. Svcmgmt.exe was up­loaded to VirusTotal in October 2016 and sat there for nearly a decade, com­pletely in the open. One an­tivirus en­gine out of roughly sev­enty flagged it, weakly, as gen­er­ally ma­li­cious. A self-prop­a­gat­ing car­rier that de­ploys a boot-level ker­nel dri­ver with an in-mem­ory float­ing-point patch­ing en­gine had been sit­ting in a pub­lic data­base for nine years, al­most in­vis­i­ble to every scan­ner that looked at it.

During his analy­sis, Kamluk used Claude to help analyse fast16 and write up the find­ings. At one point the AI re­peat­edly failed to fin­ish a re­port he had asked it to write. When he asked why, Claude pro­duced para­graphs of self-crit­i­cism, urg­ing it­self to just get it done. It even­tu­ally did, and con­cluded that who­ever built fast16 had in­ti­mate knowl­edge of the tar­get soft­ware and that in­dus­trial sab­o­tage was the most likely in­ten­tion. A 21-year-old piece of mal­ware stumped a mod­ern AI long enough to make it re­flect on its own lim­i­ta­tions.

If you work with older sim­u­la­tion soft­ware, par­tic­u­larly older ver­sions of LS-DYNA or PKPM from the mid-2000s, SentinelOne has al­ready no­ti­fied the ven­dors di­rectly. The rec­om­mended ac­tion is to ver­ify crit­i­cal cal­cu­la­tion out­puts against a com­pletely in­de­pen­dent sys­tem sit­ting out­side any po­ten­tially af­fected net­work. If fast16 spread across an en­tire fa­cil­ity and patched every work­sta­tion, a com­par­i­son cal­cu­la­tion run in­side that same net­work would pro­duce the same wrong out­put. A ma­chine com­pletely out­side that en­vi­ron­ment would not.

The in­di­ca­tors to look for:

→ Driver: fast16.sys | MD5: 0ff6abe0252d4f37a196a1231fae5f26

→ Carrier: svcmgmt.exe | MD5: dbe51eabebf9d4ef9581ef99844a2944

→ Notification DLL: svcmgmt.dll | MD5: 410eddfc19de44249897986ecc8ac449

→ Named pipe used for re­port­ing: \\.\pipe\p577

→ Device ob­jects cre­ated by the dri­ver: \Device\fast16 and \??\fast16

→ Custom DeviceType value in the dri­ver: 0xA57C

→ Service name in­stalled by the car­rier: SvcMgmt

SentinelOne pub­lished full YARA rules for hunt­ing both the car­rier and the dri­ver in their re­search pa­per.
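Alongside the YARA rules, the MD5 indicators listed above can be checked with a few lines of Ruby — a sketch for illustration, not an official detection tool, and file hashing alone will miss variants:

```ruby
require "digest"

# Check files against the fast16 MD5 indicators published by SentinelOne
# (the three hashes listed above). Illustrative sketch only: hash matching
# catches only these exact samples, not modified variants.
FAST16_MD5S = %w[
  0ff6abe0252d4f37a196a1231fae5f26
  dbe51eabebf9d4ef9581ef99844a2944
  410eddfc19de44249897986ecc8ac449
].freeze

def flagged?(path)
  FAST16_MD5S.include?(Digest::MD5.file(path).hexdigest)
end
```

On a suspect machine one would run `flagged?` over the Windows driver and system32 directories, and separately look for the `SvcMgmt` service and the `\\.\pipe\p577` named pipe listed above.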

If something this sophisticated spent 21 years undetected, sitting on VirusTotal for nearly a decade while almost no antivirus engine noticed it, what else is sitting in similar collections right now, waiting for someone to ask a different question? Probably more than anyone wants to know.

Fast16 in­stalled it­self as a Windows ser­vice, spread through net­work shares, ran as a ker­nel dri­ver, and stayed com­pletely hid­den while it worked. The con­cepts be­hind that, ex­ploita­tion, post-ex­ploita­tion, per­sis­tence, priv­i­lege es­ca­la­tion, and mov­ing through a net­work with­out be­ing no­ticed, are ex­actly what my eth­i­cal hack­ing course cov­ers step by step:

→ Join my com­plete eth­i­cal hack­ing course

Hacking is not a hobby but a way of life.


Sources:

→ SentinelOne Labs: fast16: Mystery ShadowBrokers Reference Reveals High-Precision Software Sabotage 5 Years Before Stuxnet

→ Institute for Science and International Security: Iran’s Likely Violations of Section T


Issue links open automatically in a popup · community · Discussion #192666

github.com

🏷️ Discussion Type

Product Feedback

💬 Feature/Topic Area

Issues

Body

In some repositories, any link to an issue from another issue has started to open in a popup overlay instead of navigating to it.

Is that something that is rolling out gradually? I checked both the changelog and product roadmap but couldn't find any mention of the new behavior. Is there a way to turn it off or configure it? It completely breaks the experience and negatively affects productivity.

Guidelines

I have read the above state­ment and can con­firm my post is rel­e­vant to the GitHub fea­ture ar­eas Issues and/​or Projects.

Thanks for all the feedback. This was something we were trying out as it improved load time for cross-repo links; we are going to revert the change.


💬 Your Product Feedback Has Been Submitted 🎉

Thank you for tak­ing the time to share your in­sights with us! Your feed­back is in­valu­able as we build a bet­ter GitHub ex­pe­ri­ence for all our users.

Here’s what you can ex­pect mov­ing for­ward ⏩

Your in­put will be care­fully re­viewed and cat­a­loged by mem­bers of our prod­uct teams. 

Due to the high vol­ume of sub­mis­sions, we may not al­ways be able to pro­vide in­di­vid­ual re­sponses.

Rest as­sured, your feed­back will help chart our course for prod­uct im­prove­ments.


Other users may en­gage with your post, shar­ing their own per­spec­tives or ex­pe­ri­ences.

GitHub staff may reach out for fur­ther clar­i­fi­ca­tion or in­sight. 

We may 'Answer' your discussion if there is a current solution, workaround, or roadmap/changelog post related to the feedback.

Where to look to see what’s ship­ping 👀

Read the Changelog for real-time up­dates on the lat­est GitHub fea­tures, en­hance­ments, and calls for feed­back.

Explore our Product Roadmap, which de­tails up­com­ing ma­jor re­leases and ini­tia­tives.

What you can do in the mean­time 💻

Upvote and com­ment on other user feed­back Discussions that res­onate with you.

Add more in­for­ma­tion at any point! Useful de­tails in­clude: use cases, rel­e­vant la­bels, de­sired out­comes, and any ac­com­pa­ny­ing screen­shots.

As a mem­ber of the GitHub com­mu­nity, your par­tic­i­pa­tion is es­sen­tial. While we can’t promise that every sug­ges­tion will be im­ple­mented, we want to em­pha­size that your feed­back is in­stru­men­tal in guid­ing our de­ci­sions and pri­or­i­ties.

Thank you once again for your con­tri­bu­tion to mak­ing GitHub even bet­ter! We’re grate­ful for your on­go­ing sup­port and col­lab­o­ra­tion in shap­ing the fu­ture of our plat­form. ⭐

0 replies

This really breaks my experience using AI agents! When I click an issue link to copy its URL and paste it into an agent, I get the parent issue's link instead!

It looks awful too; at first I thought my browser had glitched and there was a rendering error.

0 replies

Please give us an op­tion to dis­able this non-stan­dard link be­hav­ior. If I click a link it should open and NOT just throw up an over­lay.

0 replies

I don’t want this. It breaks as­sis­tive tech­nolo­gies. I want an op­tion to get the old be­hav­ior.

0 replies

Wow, that’s se­ri­ously great work on GitHub’s side. It looks out­stand­ing, works amaz­ing, should have been im­ple­mented long ago, in fact.

should have been opt-in

@ZimbiX Could have been opt-in. Not should……

keep links act­ing like links.

@ZimbiX links are still links; you can still right-click > open in new tab.

GitHub’s qual­ity has eroded since the Microsoft ac­qui­si­tion.

@ZimbiX 50/50 I guess… for me and my use case, GitHub progressed. But, again, that's only me…

0 replies

Not a huge fan. It looks weird with the pop-up off to the right instead of centered (FF 149).

Much more in­tu­itive be­hav­ior is just sup­port­ing mouse-hover over an is­sue to get a pre­view, with click/​ctrl-click open­ing the page.

0 replies

Please re­vert this aw­ful be­hav­ior. When I click a link on a web browser, I want to go to that link.

At the very least, if you’re go­ing to in­tro­duce non-stan­dard UX, please pro­vide users with the op­tion to dis­able it.

0 replies

Just an­other an­gry user here - please re­vert this silly fea­ture, or make it opt-in. Terrible UX!

0 replies

Another down­vote here. This is an an­noy­ance. Please get rid of it.

0 replies

Personally I prefer the old way; pop-ups are just annoying to me. I have an ultra-wide monitor, so with a popup I lose about 20% to 30% of my widescreen, for no real reason I can see is necessary.

Even if GitHub does not want to revert to the prior default, I think allowing users to decide at their own discretion what they would prefer would be better. That way I could use the old option, without having to use the new variant.

(It is probably also possible to do via custom CSS or so, but that would require some time investment to prevent a popup and instead use the old issue tracker way.)

Edit: Oops, misread this, so this is about links. I also agree that the old variant was better here. Who is making these horrible UI decisions lately?

0 replies

Is Microsoft run by AI slop? Do you even use GitHub yourself? Obviously not.

0 replies

So it was not only me then! Please re­vert this. It sucks.

0 replies

Hey, cool feature for people whose browsers don't support tabs.

I’m lucky enough to have a browser that does, so I’d rather just have the links open in a new tab.

Thanks.

1 re­ply

You mean for people on mobile? 100% of developers have tabs!

T-minus 10 sec­onds un­til Refined GitHub fixes this user-hos­tile abom­i­na­tion in their next up­date…

0 replies

Thanks for all the feedback. This was something we were trying out, as it improved load time for cross-repo links; we are going to revert the change.

4 replies

So why did you come up with this idea in the first place? Who thought it was a good idea? AI slop? Or a dude on his iPhone who never committed anything via GitHub? Who is responsible for this stupid idea?

GH must not have been very happy with the community response, so they deployed someone on a Sunday.

Anyways, nice to hear that com­mu­nity has poked GH into not… slop­ping more stuff, or en****tifi­ca­tion.

Please at least consider making it a toggle option next time. I can see why y'all have "implementation ideas", but I don't think fetching the issue to display in a popup improves load time for cross-repo PRs/issues when I can just click on the link and it loads in < 500ms.

So why did you come up with this idea in the first place? Who thought it was a good idea? AI slop? Or a dude on his iPhone who never committed anything via GitHub? Who is responsible for this stupid idea?

I was responsible for this going out. The goal was to provide a more consistent user experience, in that what happens when you click an issue would be the same in more places where we use the issue viewer (sub-issues on an issue, our dedicated issues dashboard, GitHub Projects, and others). It also meant you wouldn't lose your place when clicking an issue reference while reading a discussion. There were some performance improvements that came with the change too. It was well-intentioned, but we hear you, and thanks for the feedback. We missed the mark on this one and it's been rolled back.
