10 interesting stories served every morning and every evening.
In 2023, Raytheon’s president stood at the Paris Air Show and described what it took to restart Stinger missile production. They brought back engineers in their 70s to teach younger workers how to build a missile from paper schematics drawn during the Carter administration. Test equipment had been sitting in warehouses for years. The nose cone still had to be attached by hand, exactly as it was forty years ago.
The Pentagon hadn’t bought a new Stinger in twenty years. Then Russia invaded Ukraine, and suddenly everyone needed them. The production line was shut down. The electronics were obsolete. The seeker component was out of production. An order placed in May 2022 wouldn’t deliver until 2026. Four years. Not because of money. Because the people who knew how to build them retired a decade earlier and nobody replaced them.
I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now.
In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. Ukraine was consuming 5,000 to 7,000 rounds per day. Anyone with a calculator could see this wouldn’t work.
By the deadline, Europe delivered about half. Macron called the original promise reckless. An investigation by eleven media outlets across nine countries found actual production capacity was roughly one-third of official EU claims. The million-shell mark wasn’t hit until December 2024, nine months late.
It wasn’t one bottleneck. It was all of them. France had halted domestic propellant production in 2007. Seventeen years of nothing. Europe’s single major TNT producer was in Poland. Germany had two days of ammunition stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The entire continent’s defense industry had been optimized for making small batches of expensive custom products. Nobody planned for volume. Nobody planned for crisis.
The U.S. wasn’t much better. One plant in Scranton, one facility in Iowa for explosive fill, no domestic TNT production since 1986. Billions of investment later, production still hadn’t hit half the target.
This wasn’t an accident. In 1993, the Pentagon told defense CEOs to consolidate or die. Fifty-one major defense contractors collapsed into five. Tactical missile suppliers went from thirteen to three. Shipbuilders from eight to two. The workforce fell from 3.2 million to 1.1 million. A 65% cut.
The ammunition supply chain had single points of failure everywhere. One manufacturer for 155mm shell casings, sitting in Coachella, California, on the San Andreas Fault. One facility in Canada for propellant charges. Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.
Then there’s Fogbank. A classified material used in nuclear warheads. Produced from 1975 to 1989, then the facility was shut down. When the government needed to reproduce it for a warhead life extension program in 2000, they discovered they couldn’t. A GAO report found that almost all staff with production expertise had retired, died, or left the agency. Few records existed.
After spending an additional $69 million and years of reverse engineering, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original had contained an unintentional impurity that was critical to its function. That fact existed nowhere in any document. Only the workers who made the original batch knew it, and they had retired years earlier.
A nuclear weapons program lost the ability to make a material it invented. The knowledge existed only in people, and the people were gone.
I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.
In defense, the substitute was the peace dividend. In software, it’s AI.
I wrote about the talent pipeline collapse before. The hiring numbers and the junior-to-senior problem are documented. So is the comprehension crisis. What I didn’t have was the right historical parallel. Now I do.
And it tells you something the hiring data doesn’t: how long rebuilding actually takes.
Every major defense production ramp-up took three to five years for simple systems. Five to ten for complex ones. Stinger: thirty months minimum from order to delivery. Javelin: four and a half years to less than double production. 155mm shells: four years and still not at target despite five billion dollars invested. France only restarted propellant production in 2024, seventeen years after shutting it down.
Money was never the constraint. Knowledge was. RAND found that 10% of technical skills for submarine design need ten years of on-the-job experience to develop, sometimes following a PhD. Apprenticeships in defense trades take two to four years, with five to eight years to reach supervisory competence.
Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can’t be compressed by throwing money at it. It can’t be compressed by AI either.
A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points. When researchers tried to run a follow-up, a significant share of developers refused to participate if it meant working without AI. They couldn’t imagine going back.
The software industry is in year three of the same optimization. Salesforce said it won’t hire more software engineers in 2025. A LeadDev survey found 54% of engineering leaders believe AI copilots will reduce junior hiring long-term. A CRA survey of university computing departments found 62% reported declining enrollment this year.
I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slow. The industry’s answer is predictable: let AI review AI’s code. I’m not doing that. I’ve reworked our pull request templates instead. Every PR now has to explain what changed, why, what type of change it is, screenshots of before and after. Structured context so the reviewer isn’t guessing. I’m adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.
But even that doesn’t solve the deeper problem. The skills you need to be effective now are different. Technical expertise alone isn’t enough anymore. You need people who can take ownership, communicate tradeoffs, push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.
We document everything. Site Books, SDDs, RVS reports, boilerplate modules with full coverage. It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t matter. Maybe the problem stays manageable. I can’t predict the capabilities of models in 2031.
But crises don’t send calendar invites. Nobody expected a full-scale land war in Europe in 2022. The defense industry had thirty years to prepare and didn’t. Even Fogbank had records. They weren’t enough without the people who understood what they meant.
Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong.
It’s Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don’t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn’t transfer to the AI.
It just disappears.
The West already made this mistake once. The bill came due in Ukraine.
I know how this sounds. I know I’ve written about the talent pipeline before. The defense example isn’t about repeating the argument. It’s about showing what happens if the industry’s expectations don’t work out. Stinger, Javelin, Fogbank, a million shells nobody could make. That’s the cost of betting wrong on optimization. We’re making the same bet with software engineering right now.
Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t. The defense industry thought peace would last forever, too.
After almost three years of 6.x series kernels, Linux 7.0 is finally here.
That means it’s also time for another Asahi progress report!
Automate Everything
Users of alternate distros and keen-eyed individuals may have noticed some
changes to the Asahi Installer. After almost two years, we finally got around
to pushing an updated version of the installer to the CDN! Two years is a long
time to go between updates, so what took so long?
Our upstream installer package is a bit of a Rube Goldberg machine. The
bulk of the installer is written in Python, with some small Bash scripts to
bootstrap it. When you run curl | sh, you’re actually downloading the bootstrap
script, which then fetches the actual installer bundle from our CDN. This bundle
consists of a Python interpreter and a very stripped-down standard library, a
pre-built m1n1 stage 1 binary, and the installer itself.
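For illustration, the curl | sh pattern works roughly like this. This is a generic sketch, not the actual Asahi script; the CDN URL, file names, and bundle layout are all hypothetical.

```shell
#!/bin/sh
# Generic sketch of a curl | sh bootstrap (hypothetical URLs and paths,
# not the real Asahi installer). The one-liner downloads this script,
# which then pulls the actual installer bundle from the CDN.
set -eu

CDN="https://cdn.example.com"   # hypothetical CDN base URL
WORKDIR="$(mktemp -d)"

fetch_and_run() {
    # The CDN's version flag file names the current release.
    version="$(curl -fsSL "$CDN/latest")"
    # The bundle holds a Python runtime, m1n1, and the installer code.
    curl -fsSL "$CDN/installer-$version.tar.gz" | tar -xz -C "$WORKDIR"
    # Hand control to the bundled interpreter running the installer.
    exec "$WORKDIR/python/bin/python" "$WORKDIR/install.py"
}

# The real one-liner would end by invoking: fetch_and_run
```

The important property is that the tiny bootstrap stays stable while the bundle behind it can be re-cut at will, which is exactly why a stale bundle on the CDN goes unnoticed for so long.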
Until recently, cutting an installer release meant:
Tagging the installer repo
Downloading a macOS Python build
Building m1n1 from a blessed commit
Bundling Python, m1n1 and the installer
Uploading the installer bundle to the CDN
Updating the CDN’s version flag file
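Strung together, the manual flow looked something like this hedged sketch. Every repo name, host, and path below is a placeholder, not the project's real infrastructure:

```shell
#!/bin/sh
# Rough sketch of the old manual release flow (placeholder names; the
# real repos, CDN paths, and credentials differ).
set -eu

release() {
    tag="$1"
    # Tag the installer repo and push the tag.
    git -C asahi-installer tag -s "$tag"
    git -C asahi-installer push origin "$tag"
    # Fetch a macOS Python build and build m1n1 from a blessed commit.
    curl -fsSLO "https://example.com/macos-python-$tag.tar.gz"
    make -C m1n1
    # Bundle Python, m1n1, and the installer into one tarball.
    tar -czf "installer-$tag.tar.gz" python/ m1n1/build/ asahi-installer/
    # Upload to the CDN and flip the version flag file (needs CDN admin).
    scp "installer-$tag.tar.gz" cdn:/srv/installer/
    ssh cdn "echo $tag > /srv/installer/latest"
}
```

Six manual steps, two of which require privileged access, is precisely the kind of process that gets deferred indefinitely.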
This process was time-consuming and required administrative access to the CDN.
As a result, we neglected to push installer updates for quite some time; the
previous installer tag was from June 2024! As upstreaming work has progressed
and Devicetree bindings churned, this became rather problematic for our friends
maintaining distros.
The Asahi Installer offers a UEFI-only installation option. This option
shrinks macOS and only installs what is necessary to boot a UEFI executable,
meaning m1n1 stage 1, the Devicetrees, and U-Boot. This allows users to
boot from live media with Asahi support, such as specialised Gentoo Asahi
LiveCD images.
Since the Devicetrees on a fresh UEFI-only install come from the installer
bundle itself, a kernel will only successfully boot when the installer-bundled
Devicetrees match what that kernel expects to see. The two have gotten rather
out of sync as time has gone on due to Devicetree bindings changing
as a result of the upstreaming process. This situation finally came to a
head with kernel 6.18, which required numerous changes to both m1n1 and
the Devicetree bindings for the Apple USB subsystem. This made booting
kernel 6.18 and above from live media impossible. Oops.
Rather than go through the trouble of manually pushing out another update,
we took the opportunity to build some automation and solve this problem
permanently.
We moved the manifest of installable images into the asahi-installer-data repo,
allowing us to update it independently of the installer codebase.
On top of this, we also now deploy
the installer using GitHub workflows. Going forward, every push to the main
branch of asahi-installer will
automatically build the installer and upload it to https://alx.sh/dev.
Every tag pushed to GitHub will do the same for https://alx.sh.
The latest version, 0.8.0, bumps the bundled m1n1 stage 1 binary to
version 1.5.2, introduces installer support for the Mac Pro, and adds
a firmware update mode which ties in nicely with…
How do you overengineer a light sensor?
Basically everything with a screen now comes with some sort of light
sensor. This is usually to enable automatic brightness adjustment based
on ambient conditions. It’s a very convenient feature in devices like
smartphones, where a user may walk outside and find their display too
dim to see. The cheapest versions of this use a simple photoresistor.
This is fine if the goal is just to change brightness, but brightness
is not the only thing affected by ambient lighting conditions. What about
colour rendering?
Apple’s devices have had the True Tone display feature for quite some time.
This works by measuring both the brightness and the colour characteristics
of the environment’s ambient lighting. This data is then used to apply
brightness and colour transformations to the display to ensure that it is
always displaying content as accurately as possible. This is most noticeable
in environments with lighting fixtures that have a low Colour Rendering
Index, such as fluorescent tubes or cheap cool white LEDs. The devices that
enable this, ambient light sensors, are usually little ICs that connect
to the system over I2C or another industry-standard bus. This is
fine for basic applications, but this is Apple. There are some other
considerations in play:
The light sensor is active whenever the screen is on, so processing its
output should be as efficient as possible
The light sensor should be able to be calibrated for maximum accuracy
There are multiple models of light sensor in use, and the OS should not
have to care too much about that
The light sensor has to have a three-letter acronym like every other piece
of hardware on this platform (ALS)
Naturally, this sounds like a job for the Always-On Processor (AOP)!
We’ve had a working AOP+ALS driver set for a while thanks to chaos_princess;
however, the raw data AOP reports back from ALS is rather inaccurate without
calibration. That calibration is a binary blob that must be uploaded to the AOP
at runtime. It is essentially firmware. Since we cannot redistribute Apple’s
binaries, it must be retrieved from macOS at install time and then stored somewhere the driver
knows to look for it.
To achieve this, the Asahi Installer gathers up all
the firmware it knows we will need in Linux and stores it on the EFI System
Partition it creates. A Dracut module then mounts this to a subdirectory of
/lib/firmware/, where drivers can find it. However, issues arise when we
need to retrieve more firmware from macOS after Asahi Linux has already been
installed. To avoid a repeat of the webcam situation, where users
were required to manually do surgery on their EFI System Partition, chaos_princess
added the ability for the Asahi Installer to automatically update the firmware
package. Starting with ALS, any required firmware updates will be a simple
matter of booting into macOS or macOS Recovery, re-running the Asahi Installer, and following
the prompts.
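The hand-off from installer to running system can be pictured with the following sketch. The ESP layout and mount point here are assumptions for illustration, not necessarily the actual Asahi paths:

```shell
#!/bin/sh
# Hypothetical sketch of the firmware hand-off (assumed paths). The
# installer stores vendor firmware on the EFI System Partition; at boot,
# an initramfs (Dracut) hook exposes it under /lib/firmware/ where the
# kernel's firmware loader searches.
set -eu

ESP=/boot/efi                   # EFI System Partition mount point
FW_SRC="$ESP/vendorfw"          # assumed location written by the installer
FW_DST=/lib/firmware/vendor     # assumed subdirectory drivers search

expose_firmware() {
    mkdir -p "$FW_DST"
    # Requires root; in practice this runs from the initramfs, not a shell.
    mount --bind "$FW_SRC" "$FW_DST"
}
```

Because the firmware lives on the ESP rather than inside the root filesystem, re-running the installer from macOS can refresh it without touching the Linux install itself.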
To enable ALS support (and to do firmware upgrades in the future), follow these steps:
Ensure you are running version 6.19 or above of the Asahi kernel
Ensure your distro ships iio-sensor-proxy as a dependency of your DE (Fedora
Asahi Remix does this)
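Those two prerequisites are easy to sanity-check from a terminal. A rough sketch follows; the kernel threshold comes from the steps above, but the systemd unit name is an assumption and may differ per distro:

```shell
#!/bin/sh
# Quick sanity checks for ALS support. Kernel version 6.19 is the
# threshold from the post; the iio-sensor-proxy unit name is assumed.

ver_ge() {
    # True if dotted version $1 >= $2, using version-aware sort.
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kver="$(uname -r | cut -d- -f1)"
if ver_ge "$kver" 6.19; then
    echo "kernel $kver: new enough for ALS"
else
    echo "kernel $kver: upgrade to 6.19 or later"
fi

systemctl is-enabled iio-sensor-proxy.service >/dev/null 2>&1 \
    && echo "iio-sensor-proxy present" \
    || echo "install iio-sensor-proxy via your distro"
```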
What would you do if your organization had used a domain name for 27 years, and the registrar holding the domain seized it without any advance warning? All email and websites went dark. The company’s tech support spent four days telling you to “Just wait, we are working on it.” On the fourth day, the company informed you that someone else had the domain now, and it was no longer yours.
Read on. This crazy story happened exactly one week ago.
My friend Lee Landis is a partner in Flagstream Technologies, a local IT firm in Lancaster, PA. Last Saturday afternoon one of his client’s domains vanished from his GoDaddy account.
Lee is one of the most competent IT guys I know. The GoDaddy account had dual two-factor authentication enabled, requiring both an email code and an authentication app code to log in. The domain itself had ownership protection turned on. The audit log just said “Transfer to Another GoDaddy Account” by an “Internal User” with “Change Validated: No.”
Some names have been changed
Some names and the domain itself have been changed because people wanted to remain anonymous. The pattern of the domain names mirrors the actual mistake, so the explanation still makes sense. Every fact in this post is true. Lee has hard evidence for every one of them.
As you can see above, GoDaddy emailed Flagstream at 1:39pm that an account recovery had been requested. Three minutes later, the transfer was initiated. Four minutes later, it was complete. On a Saturday afternoon.
Everything at the impacted organization went offline because GoDaddy reset the DNS zone to default when they moved the domain into the new account. Same nameservers. Empty DNS zone file.
Lee’s client lost their website and email for the next four days.
27 years: domain in active use
32: calls to GoDaddy
9.6 hours: on the phone with GoDaddy
17: emails to GoDaddy, zero callbacks
Domain and account were fully protected.
The domain had the “Full Domain Privacy and Protection” security product that GoDaddy sells. Dual two-factor on the account. None of it mattered. The transfer was done by an “Internal User” inside GoDaddy.
The domain was HELPNETWORKINC.ORG. The real domain name has been changed because the organization wanted to remain anonymous. It belongs to a national organization with twenty locations across the United States. The domain has been in active use for 27 years. Each chapter runs its website and email on a subdomain of that one parent domain. When HELPNETWORKINC.ORG went dark, every chapter went dark with it.
Thirty-two calls. 9.6 hours on the phone. Zero callbacks.
Lee called GoDaddy on Sunday. They confirmed the domain was no longer in his account but could not say where it went, citing privacy concerns. They told him to email undo@godaddy.com. He did, and never received any kind of response. That hardly felt like the appropriate level of urgency. He asked for a supervisor, who was even less helpful. Lee was not happy, and he may have said some hurtful things to GoDaddy’s support personnel during this call. That first call lasted 2 hours, 33 minutes, and 14 seconds.
On Monday morning, Lee and a coworker started working in earnest on this issue because there was still no update from GoDaddy. Calling in yielded a different agent who told Lee to email transferdisputes@godaddy.com instead. By Tuesday the address had changed again to artreview@godaddy.com. The instructions shifted by the day. It seemed like every GoDaddy tech support person had a slightly different recommendation.
The one thing that stayed consistent was the message: “Just wait a day or two. We are working on it. Why do you think this is so urgent?”
One of the most frustrating parts of this process was that all official communication to and from GoDaddy about the issue came through generically named email accounts. There should have been a named individual in charge of managing and communicating about the issue. Instead, there were only anonymous mailboxes that seemed to change on a daily basis.
Every call generated a fresh case number. Lee lost count of the total number of cases. A few of the cases are 01368489. 894760. 01376819. 01373017. 01376804. 01373134. 01370012. None of them tied together on GoDaddy’s side. Every escalation started from zero. These are actual case numbers, in case anyone at GoDaddy wants to check into this.
I posted on X to see if anyone I knew at GoDaddy could escalate.
Can any of my GoDaddy friends help? A good friend of mine had a domain taken. My friend is very competent. Domain ownership protection was on. Owner did not get any notices. Audit log looks fishy. Phone/email support telling them to wait. Did a GoDaddy employee take it? pic.twitter.com/OWcJIalWcF— Austin Ginder (@austinginder) April 20, 2026
My friend Courtney Robertson, who works at GoDaddy, reposted it and started escalating internally on her own time. Thank you, Courtney. GoDaddy has a lot of great people like her. That part is not in question. What GoDaddy does not have is a way to actually fix a mistake once one has been made. Tickets pile up. Phone calls reset. Every escalation is a new person reading the case from scratch. The thing you actually need solved drifts between queues.
And there was no real way to dispute it.
While Lee was on the phone, his colleague was on a different phone trying to file a Transfer Dispute. GoDaddy directed him to cas.godaddy.com/Form/TransferDispute. He filed a dispute and received this message, which he captured via a screenshot.
Lee and his colleagues worked diligently at challenging the transfer. They supplied the correct name of the person listed on the domain. They supplied that person’s drivers license as required. They also supplied the correct business documentation as listed in GoDaddy’s own requirements. Every time they submitted a request, they were told they would hear back in 48 to 72 hours.
GoDaddy FINALLY responds with a SHOCKING statement
Tuesday afternoon, after four days of waiting, Flagstream finally got an official email response back from GoDaddy.
GoDaddy’s reply to Lee
After investigating the domain name(s) in question, we have determined that the registrant of the domain name(s) provided the necessary documentation to initiate a change of account. … GoDaddy now considers this matter closed.
That was it. No explanation of what documentation. The suggested next steps were three links. A WHOIS lookup. ICANN arbitration providers. A page about getting a lawyer involved to represent you in litigation.
Flagstream migrates client to new domain
Once GoDaddy declared the matter closed, Flagstream began migrating the client to a new domain. New email addresses. New website addresses. Coordinating with various teams throughout the night to change everything over to a new domain.
Switching to a new domain is a massive amount of work, and it leaves a lot of lingering problems behind because there is no control over the original domain.
Every email address that exists out in the world is now wrong. You have to tell everyone the new address. If they try the old one, it bounces.
Every piece of marketing material that references the old domain is now incorrect. There is no way to forward anything to the new domain.
All of the SEO is gone. You are starting an online presence from scratch.
Then a stranger found the domain in her account.
Wednesday morning Susan (not her real name), 2,000 miles away from the client’s headquarters, noticed something odd. Susan had been working to reclaim a totally different domain used by a former employee. When she looked closely at her GoDaddy account, the domain in it wasn’t the one she had requested. She made a few phone calls because she knew this was a problem and eventually got connected with Flagstream. Working with Susan, they ran a GoDaddy account-to-account transfer and put the domain back where it belonged. DNS came back up while Lee was still typing the email telling me it was over. The entire process of reclaiming the domain took less than 5 minutes.
Once the domain was back and DNS was working, Flagstream started the arduous task of reverting everything that they had done the day before. They switched email and websites back to the original domain, once again working through the night to get everything fixed.
The resolution for this problem did not come from GoDaddy support. It did not come from the dispute team. It did not come from the Office of the CEO team. It came from a stranger who accidentally ended up with the domain and was smart and honest enough to start calling around because she knew something wasn’t right.
Susan is really the hero of this entire story. Without her, Flagstream would still have no idea what happened to this domain. Lawyers would have gotten involved, but it would probably be months until anything was resolved.
Timeline of events
Apr 18, 1:39pm
GoDaddy emails Flagstream that an Account Recovery has been requested for the account.
Apr 18, 1:42pm
Transfer initiated by GoDaddy Internal User. Three minutes after the recovery notice.
Apr 18, 1:43pm
Transfer completed. Change Validated is listed as “No”. Website and email go dark across the entire organization.
Apr 19
Lee discovers the domain is gone. GoDaddy says email undo@godaddy.com and wait.
Apr 20
Flagstream team starts calling and emailing GoDaddy for updates. GoDaddy now says email transferdisputes@godaddy.com. Austin posts on X. Courtney Robertson routes the case to the Office of the CEO team.
Apr 21
Flagstream files multiple Transfer Dispute cases with the requested documentation. Every submission is met with a 48 to 72 hour response window. GoDaddy emails Lee that the matter is closed and the domain belongs to someone else. Flagstream starts the painful process of migrating the organization to a new domain so they can function.
Apr 22
Susan notices the wrong domain in her account and calls Lee. Account-to-account transfer brings it home.
Then it got crazier. GoDaddy approved the transfer with zero documents.
The organization on the receiving end of the transfer was a regional chapter of the same network. Susan, the executive assistant, had emailed GoDaddy two weeks earlier asking to recover a different domain. HELPNETWORKLOCAL.ORG. Not HELPNETWORKINC.ORG.
Flagstream spent some time talking to Susan to figure out exactly how the domain had accidentally been transferred into her account. Had she unintentionally supplied all of the correct documentation? No. It turned out GoDaddy had approved the transfer without her supplying ANY documentation.
Her email signature happened to reference her chapter’s website at a subdomain of HELPNETWORKINC.ORG. GoDaddy’s recovery team apparently looked at the signature, saw the parent domain, and transferred that domain into her account.
GoDaddy sent Susan a link to upload supporting documents. The link expired before she got around to using it. She emailed back requesting a new link so she could upload the required documentation. However, before the new link arrived, she received an email saying the domain transfer had been approved.
Susan never submitted a single document. Not for the domain she was actually trying to recover, and certainly not for the one GoDaddy ended up giving her. GoDaddy approved the change of account, transferred a 27-year-old non-profit’s domain into a stranger’s account, and “considered the matter closed” without requiring any documentation.
This is a huge security issue.
If Susan had been a bad actor, she could have intercepted email. She could have used that email to reset passwords, get MFA codes, launch phishing attacks, etc. She could have put up a new website with malware on it, redirected payments on the website, etc.
When the domain first disappeared and Flagstream could not learn who had it, they feared the worst. Flagstream and the impacted client drew up a plan to protect against the threats above, a huge undertaking for an organization of this size. Every user across the entire organization had to log into every important website and make sure the compromised domain was removed from the account: bank websites, Amazon, the IRS, payroll, Dropbox, email accounts, and, ironically enough, even GoDaddy accounts.
It is outrageous that Susan was able to obtain this domain without supplying any documentation. Everyone was lucky it was Susan that got this domain.
GoDaddy: please follow up with Flagstream.
This is not acceptable.
A GoDaddy employee transferred a 27-year-old domain out of a paying customer’s account with no validation and with zero documentation submitted by the recipient. When the customer disputed the transfer with legitimate documentation, every submission was met with “We will respond in 48 to 72 hours.” After four days, GoDaddy claimed the domain belonged to someone else and the case was closed. The fix came from the recipient of the mistake, not from GoDaddy, despite 9.6 hours of phone conversations.
To anyone at GoDaddy reading this: please follow up with Lee Landis at Flagstream Technologies and make this right. An apology is probably in order. So is an internal review of how the transfer team validates documentation, including how a transfer can be approved with zero documentation. Lee would like a clear answer on how this happened. He doesn’t want an email from a generic GoDaddy account. He wants a real person to call or email him, and that person needs to leave an email address and phone number in case Lee has follow-up questions.
Even disclosing this to GoDaddy was broken.
Before publishing this post, I wanted to share the findings with GoDaddy’s security team directly. I emailed security@godaddy.com with the full report. The message bounced.
GoDaddy’s auto-reply to security@godaddy.com
A custom mail flow rule created by an admin at secureservernet.onmicrosoft.com has blocked your message. We hope this message finds you well. This email mailbox is no longer monitored. To address your needs, we have outlined two popular options for you: 1: To submit an abuse report, please visit our Abuse Reporting Form. 2: If you are looking to submit a vulnerability, please visit our bounty program https://hackerone.com/godaddy-vdp.
So I filed the same report through HackerOne instead, report #3696718.
This is the same pattern that played out across the four-day outage. The official channel does not work. The alternative path requires knowing to bypass it. Most honest people who notice a security issue are not going to have a HackerOne account. They send an email. How is it that GoDaddy doesn’t have a public security disclosure email address?
Whether the original transfer was a single agent’s mistake or a flaw in the recovery workflow, it is still a security issue. And there is no clean path from “I found something” to “a human at GoDaddy is looking at it.”
The only way to get GoDaddy’s attention is to leave.
Lee is upset about the four days of stress and lost productivity across the impacted organization. But his bigger concern is what comes next. Apparently there is no way to protect against this threat if your domain is hosted at GoDaddy, and no efficient way to contest a transfer once GoDaddy makes it.
Flagstream will most likely migrate every one of their domains off GoDaddy. That is the only protection they have left, and the only escalation GoDaddy seems to respond to.
Are you at risk?
Is your domain hosted on GoDaddy? What would you do if the domain disappeared out of your GoDaddy account and your entire business went dark?
In talking to engineering management across tech industry heavyweights, it’s apparent that software engineering is starting to split people into two nebulous groups:
The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter: framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.
The second group will use A.I. to avoid thinking. They will paste prompts into a box, collect polished output, and present it as though it reflects their own reasoning. For a while, that can look like productivity. It can even look like talent. But it is a dead end.
The software engineers who will be most valuable in the future are not the ones who do everything themselves. They are the ones who refuse to spend time on work that A.I. can do for them, while still understanding everything that is done on their behalf. They use the time savings to operate at a higher level. They elevate their thought process through rigor rather than outsourcing it.
That distinction matters more than people think.
In this post:
The New Failure Mode: Outsourced Thinking (& analogies)
What the Best Engineers Will Do Instead
The Real Source of Value
The Risk for Early-In-Career Engineers
There Is No Shortcut to Judgment
In Summary: The Dividing Line & Organizational Implications
Why This Matters Even More to Organizational Health
The New Failure Mode: Outsourced Thinking
A.I. can already generate code, summarize meetings, explain concepts, produce design drafts, and write status updates in seconds. That is useful but also dangerous.
The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence.
There is now a very real temptation to hand a model a problem, receive a plausible answer, and then repeat that answer as if it reflects your own understanding. That is close to plagiarism, but in some ways worse. At least when a student copies from another person, there is still a real human source behind the answer. Here, people can present machine-produced reasoning they do not understand, cannot defend, and could not reproduce on their own.
That is intellectual dependency being labeled as leverage.
And that dependency has a cost. Every time you substitute generated output for your own comprehension, you are skipping the reps that build judgment. You are trading long-term capability for short-term appearance.
A few analogies make this line of thought more concrete and approachable; I will come back to them when discussing the risk for early-career engineers.
What the Best Engineers Will Do Instead
The best engineers will absolutely use A.I. more, not less. But they will use it with a very different posture.
They will let A.I. draft boilerplate, summarize docs, generate test scaffolding, propose refactorings, surface possible failure modes, accelerate investigation, and compress routine work. They will happily offload the mechanical parts of the job. But they will also:
ask sharper questions.
define the real problem instead of merely responding to the visible one.
optimize for clarity and brevity, instead of producing polished language that says little of substance.
generate new, high-value knowledge, instead of simply rehashing or remixing knowledge already in the system.
Then they will take the reclaimed time and invest it where it matters most.
The Real Source of Value
For years, people have confused software engineering with code production. That confusion is now getting exposed.
If the job were mainly about producing syntactically valid code, then of course A.I. would be on a direct path to replacing large parts of the profession. But that was never the highest-value part of the work. The value was always in judgment.
The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
A.I. can support that work. It cannot own it.
In fact, the engineers who produce the most value in the future will often be the ones generating the knowledge that makes A.I. more useful in the first place. They will create the design principles, domain understanding, patterns, context, and decision frameworks that improve the machine’s effectiveness. They will feed the system with better questions, better constraints, and better corrections.
In that world, the engineer is not replaced by A.I. The engineer becomes more leveraged because they are operating above the level of raw output.
The Risk for Early-in-Career Engineers
This issue is especially important for people early in their careers.
Early years matter because that is when foundational skills are formed. Debugging instinct. System intuition. Precision. Taste. Skepticism. The ability to decompose a problem. The ability to explain why something works, not just that it appears to work.
Those skills are built through friction. Through struggle. Through getting things wrong and fixing them. Through tracing failures back to root cause. Through writing something and realizing it does not survive contact with reality.
That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.
Someone who uses A.I. to answer every hard question may look efficient for a quarter or two. But they may also be quietly failing to build the very capabilities their future depends on. They are skipping the stage where understanding is forged.
Going back to the analogies: This is like copying answers through university and then showing up to a job that requires independent thought. It is like using a calculator for every arithmetic task and never developing number sense. It is like relying on self-driving features before learning how to actually drive. The support system may make you look functional, but it does not make you capable.
And eventually raw capability is the main thing that matters. There is no substitute.
There Is No Shortcut to Judgment
This is the part that some people may not want to hear:
There is no generated explanation that transfers mastery into your brain without you doing the work.
There is no way to outsource reasoning for long enough that you still end up strong at reasoning.
You can outsource mechanics, accelerate research and compress routine tasks. You can remove enormous amounts of low-value labor. All of that is good and should happen.
But you cannot skip the formation of skill and expect to possess it anyway.
That is the central mistake behind the most naive uses of A.I. People think they are saving time, when in reality they are often deferring a bill that will come due later in the form of weak judgment, shallow understanding, and limited adaptability.
In Summary: The Dividing Line & Organizational Implications
The dividing line is simple:
If A.I. is helping you understand faster, think deeper, and operate at a higher level, it is making you more valuable.
If A.I. is helping you avoid understanding, avoid struggle, and avoid ownership of the reasoning, it is making you less valuable.
One path compounds, while the other hollows you out and leaves you ripe for irrelevance.
That is why the future does not belong to the engineers who merely use A.I. It belongs to the engineers who know exactly what to delegate, exactly what to own, and exactly how to turn time savings into better thinking.
If you have not already, it is time to make informed choices about how you shape your future in the industry.
Why This Matters Even More to Organizational Health
Engineering management will face the same dividing line.
Some leaders will recognize the difference between engineers who use A.I. to accelerate understanding and engineers who use it to simulate understanding. Others will not. That gap will matter more than many organizations realize.
One of the defining traits of strong engineering leadership in the A.I. era will be the ability to distinguish polished output from real judgment. Leaders who cannot tell the difference may reward speed, fluency, and presentation while missing the deeper signals of technical depth: originality, rigor, sound tradeoff analysis, and the ability to reason clearly about unfamiliar problems.
That creates organizational risk.
The most capable engineers are often the ones producing the insight, context, design judgment, and corrective feedback that make both teams and A.I. systems more effective. If an organization allows low-understanding, high-fluency work to spread unchecked, it does not just lower the quality of individual output. It starts to degrade the knowledge environment itself. Reviews get weaker. Design discussions get shallower. Documents become more polished and less useful. Over time, the organization becomes worse at generating the very clarity and technical judgment it depends on.
This is why leadership matters so much here. The challenge is not merely adopting A.I. tools. It is protecting the conditions under which real thinking, learning, and craftsmanship continue to thrive.
That starts with hiring. Organizations will need better ways to detect genuine understanding rather than surface-level fluency. They will need interview loops that test reasoning, not just polished answers. They will need evaluation systems that reward clarity, depth, sound judgment, and durable technical contribution rather than sheer output volume.
It also affects team design and culture. Strong engineers should not spend disproportionate amounts of time cleaning up plausible but shallow work generated by people who have outsourced their thinking. If leadership does not actively guard against that, high performers become force multipliers for everyone except themselves. That is a fast path to frustration, lowered standards, and eventual attrition.
The organizations that handle this well will not be the ones that simply push A.I. adoption hardest. They will be the ones that learn to separate leverage from dependency, acceleration from imitation, and genuine capability from convincing output.
In the A.I. era, organizational quality will increasingly depend on whether leadership can still recognize the difference.
Editorial note: Like all content on this site, the views expressed here are my own and do not necessarily reflect the views of my employer.
Sawe smashes two-hour mark to ‘move goalposts for marathon running’
‘Absolutely incredible!’ - Sawe runs sub-two-hour marathon in London
By Harry Poole
BBC Sport journalist
Sabastian Sawe made history at the London Marathon by becoming the first athlete to run a sub-two-hour marathon in a competitive race.
The 31-year-old Kenyan crossed the line to win in one hour 59 minutes 30 seconds, more than one minute faster than the late Kelvin Kiptum’s previous record of 2:00:35, set in 2023.
The great Eliud Kipchoge became the first man to run a marathon in under two hours in 2019, but that was not record-eligible as it was held under controlled conditions.
Already on world record pace as he crossed the halfway mark in 1:00:29, Sawe was able to speed up over the second half of the race to run even faster than Kipchoge’s time.
Sawe made his decisive move before the final 10km, with only debutant Yomif Kejelcha able to cover his surge off the front.
Remarkably, Kejelcha, making his marathon debut, became the second man to run under two hours in race conditions, finishing runner-up in 1:59:41.
Half marathon world record holder Jacob Kiplimo also crossed the line faster than Kiptum’s former record, completing the podium in 2:00:28.
Sawe, speaking on BBC TV, said: “I am feeling good. I am so happy. It is a day to remember for me.”
“We started the race well. Approaching finishing the race, I was feeling strong. Finally reaching the finish line, I saw the time, and I was so excited.”
Assefa sets new world record to win London Marathon for second year in a row
In the women’s race, Ethiopia’s Tigst Assefa improved her own world record for a women-only field as she surged clear of Kenyan rivals Hellen Obiri and Joyciline Jepkosgei in a thrilling finish to retain her title in 2:15:41.
Swiss great Marcel Hug cruised to a record-equalling eighth London Marathon victory in the elite men’s wheelchair race, drawing level with Great Britain’s David Weir by winning for a sixth successive year.
Catherine Debrunner also retained the elite women’s wheelchair title as the Swiss burst clear of American Tatyana McFadden in the closing stages.
How Sawe achieved sporting immortality in London
Much of the focus beforehand had been about Sawe - winner of last year’s race in 2:02:27 - targeting Kiptum’s London Marathon course record of 2:01:25.
He told BBC Sport this week that it was “only a matter of time” before he broke Kiptum’s world record, adding “I hope and wish one day [it will be me]” when asked about becoming the first person to run under two hours in a race.
Sawe had targeted Kiptum’s world record in Berlin last September, when he went through halfway in 60:16, before that bid was ultimately undone by the hot weather.
But, in perfect race conditions in London, Sawe stormed down The Mall to achieve that historic feat, doing so in a time which was once considered impossible.
BBC commentator and former world champion Steve Cram said: “There are things that happen in sport and you want to be there to see history being made - if you are watching on TV then well done, but if you’re in London, it is a privilege and it is incredible.
“We said it was a day for records but I don’t think in our wildest dreams we could have foreseen this.”
‘I am so happy’ - Sawe reacts to winning London marathon
After covering the first half of the course in 60:29, Sawe moved through the gears to complete the second half in just 59:01.
Only 63 men in history have run a half marathon as quickly as that - with Sawe’s own personal best standing at 58:05.
His splits continued to quicken as he chased down his target, clocking 13:54 for the five kilometres from 30 – 35km, and 13:42 for the 35 – 40km stretch - an average pace of 2:45 per kilometre.
“This will reverberate around the world,” said former women’s marathon world record holder Paula Radcliffe.
“The goalposts have literally just moved for marathon running and where you benchmark yourself as being world-class.
“It is a lesson to everybody out there. We say ‘don’t go out too fast’ - they went out smartly and paced it really well.”
‘We’ve witnessed something incredible’
Pundits react to Sawe’s landmark sub-two-hour marathon
Kitted out in sponsor Adidas’ latest supershoes, Sawe, who has won all four marathons he has contested, managed to take two minutes and 35 seconds off his marathon personal best.
He has sought to ensure confidence in his performances by undergoing frequent drug tests and was tested 25 times before competing in Berlin, where he faded to finish in 2:02:16.
“I want to thank the crowds for cheering us. I think they help a lot, because if it was not for them, you don’t feel like you are so loved,” Sawe said.
“I think they help a lot because them calling makes you feel so happy and strong and pushing.
“That is why I can say what comes for me today is not for me alone but all of us in London.”
Reacting to Sawe’s record, Britain’s four-time Olympic champion Mo Farah said: “We’ve waited long enough to see a human go sub-two.
“That’s always been the question that we’ve asked. We’ve just witnessed something incredible.”
Assefa improves record as Hug makes history
Hug wins London Marathon wheelchair race for sixth consecutive year
Assefa, the third-fastest woman in history, lined up as favourite to repeat her 2025 triumph in London after injuries forced Olympic gold medallist Sifan Hassan and world champion Peres Jepchirchir to withdraw.
The leading trio in Sunday’s race remained inseparable until the closing kilometres, as Obiri and Jepkosgei accompanied Assefa inside the Ethiopian’s record pace set in London 12 months ago.
But it was Assefa who summoned the energy to push on for victory, going nine seconds faster than her previous women-only record.
The women’s elite runners begin 30 minutes before the elite men in the London Marathon, meaning the event is classed as a women-only race.
Obiri, a six-time global medallist on the track, crossed the line 12 seconds after Assefa, closely followed by Kenya’s 2021 winner Jepkosgei.
Eilish McColgan was the first British woman across the line, placing seventh overall in 2:24:51, while Rose Harvey was ninth in 2:26:14.
Mahamed Mahamed was the best-placed home athlete in the men’s event, finishing 10th in 2:06:14 and replacing Alex Yee as the second-fastest Briton in history.
Debrunner wins women’s wheelchair race
Hug produced another dominant performance to tie Weir’s record for the most victories in London Marathon history.
Hug, 40, crossed the line in 1:24:13, more than four and a half minutes clear of Chinese 23-year-old Luo Xingchuan.
Briton Weir completed the podium in 1:29:23 in his 27th consecutive appearance at the event.
Debrunner celebrated her fourth London Marathon win after outlasting McFadden, finishing just five seconds ahead of the American in clocking 1:38:29.
Briton Eden Rainbow-Cooper went into the race with podium aspirations after finishing fourth last year and regaining her Boston Marathon title on Monday, but those hopes were dashed by a pre-race puncture which caused her to start the race late.
What is a statechart?
A statechart can be explained in many ways, and we’ll get to those explanations, but essentially, a statechart is a drawing. Here’s a simple statechart:
However, this drawing isn’t very useful for software engineers who want to reap the benefits outlined elsewhere on this site, so let’s dive into some other ways of describing what a statechart is. The original paper that defines statecharts bills them as “A visual formalism for complex systems” (Harel, 1987). With that out of the way, let’s try to explain statecharts.
Introduction to statecharts
Put simply, a statechart is a beefed up state machine. The beefing up solves a lot of the problems that state machines have, especially state explosion that happens as state machines grow. One of the goals of this site is to help explain what statecharts are and how they are useful.
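To ground the comparison, here is a plain finite state machine as a minimal Python sketch (the states, events, and transition table are invented for illustration, not taken from any particular library). Note how one more independent concern, say online/offline, would force a duplicate of every state: that combinatorial growth is the state explosion statecharts are designed to tame.

```python
# A minimal finite state machine as a transition table.
# States and events are plain strings; this is an illustrative
# sketch, not any library's API.

TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
    ("paused", "stop"): "idle",
}

def next_state(state, event):
    # Unknown events are ignored, a common state-machine convention.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = next_state(state, event)

print(state)  # -> idle

# To also track online/offline this way, every state above would need
# an "_online" and an "_offline" copy: 5 states, then 10, then 20...
# Statecharts avoid this with nested and parallel states.
```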
What is a state machine?
What is a statechart?
Why should you use statecharts?
Statecharts offer a surprising array of benefits
It’s easier to understand a statechart than many other forms of code.
The behaviour is decoupled from the component in question.
This makes it easier to make changes to the behaviour.
It also makes it easier to reason about the code.
And the behaviour can be tested independently of the component.
The process of building a statechart causes all the states to be explored.
Studies have shown that statechart based code has lower bug counts than traditional code.
Statecharts lend themselves to dealing with exceptional situations that might otherwise be overlooked.
As complexity grows, statecharts scale well.
A statechart is a great communicator: non-developers can understand statecharts, while QA can use a statechart as an exploratory tool.
It’s worth noting that you’re already coding state machines, except that they’re hidden in the code.
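That hidden-machine point can be made concrete. The sketch below (a hypothetical data-fetching example in Python) contrasts ad-hoc boolean flags, which permit impossible combinations, with an explicit state machine that cannot represent them.

```python
from enum import Enum

# "Hidden" state machine: boolean flags allow impossible combinations.
is_loading = True
has_error = True   # loading AND error at once? The flags permit it.

# Explicit state machine: impossible combinations cannot be represented.
class FetchState(Enum):
    IDLE = "idle"
    LOADING = "loading"
    SUCCESS = "success"
    ERROR = "error"

def on_event(state, event):
    table = {
        (FetchState.IDLE, "fetch"): FetchState.LOADING,
        (FetchState.LOADING, "resolve"): FetchState.SUCCESS,
        (FetchState.LOADING, "reject"): FetchState.ERROR,
        (FetchState.ERROR, "retry"): FetchState.LOADING,
    }
    # Events that make no sense in the current state are ignored.
    return table.get((state, event), state)

s = FetchState.IDLE
for e in ["fetch", "reject", "retry", "resolve"]:
    s = on_event(s, e)
print(s)  # -> FetchState.SUCCESS
```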
Why should you not use statecharts?
There are a few downsides to using statecharts that you should be aware of.
Programmers typically need to learn something new, although the underpinnings (state machines) would be something that most programmers are familiar with.
It’s usually a very foreign way of coding, so teams might experience pushback based on how very different it is.
There is an overhead to extracting the behaviour: for smaller statecharts, the number of lines of code might increase.
Why are they not used?
People don’t know about them, and YAGNI.
What are the main arguments against statecharts?
There are a few common arguments against statecharts in addition to the ones listed above:
It’s simply not needed.
It goes against the grain of [insert name of technology].
It increases the number of libraries, for web applications this means increased load time.
The benefits outlined above should make it clear that the introduction of statecharts is generally a net positive.
How do you use statecharts?
First of all, know that a W3C committee spent 10+ years (2005 to 2015) standardizing something called SCXML (yes, Statechart XML), and that it defines a lot of the semantics and specifies how to deal with certain edge cases. There are tools to read, author and even execute statecharts written in SCXML, in various languages. There are also some derivatives that support the same model as SCXML, but using a different syntax.
Additionally, there are statechart libraries for a variety of platforms that, to varying degrees, support the semantics described by SCXML. You should consider using these libraries just to get those edge cases taken care of. The libraries generally perform entry and exit actions in the right order, and so on.
With that out of the way, read on!
Executable statecharts
In addition to using statecharts to model behaviour in documents separate from the actual running code, it’s possible to use one of various machine-readable formats both to design the behaviour and, at run time, to actually be the behaviour. The idea is to have a single source of truth that describes the behaviour of a component: it drives the actual run-time code, and it can also be used to generate a precise diagram that visualises the statechart.
This carries along some different pros and cons:
Why should you use executable statecharts?
No need to translate diagrams into code
No bugs introduced by hand translation of diagrams
The diagrams are always in sync
The diagrams are more precise
Why should you not use executable statecharts?
The diagrams may become quite complex
The formats and tools for executable statecharts are limited
Type safety between statechart and the component is hard to enforce
How do you use executable statecharts?
In essence, if you have any definition of a statechart in your code, all you need to do is take that representation and automate the generation of the visual statechart. This is of course simpler when the definition is in a separate file, e.g. a JSON or XML file.
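As a sketch of that idea, the following Python takes a small statechart definition in a hypothetical JSON shape (real formats such as SCXML, or individual libraries’ configurations, differ) and emits Graphviz DOT text, so the same single source could drive both the run-time behaviour and the generated diagram.

```python
import json

# A hypothetical JSON statechart definition; the shape is invented
# for illustration, but the single-source-of-truth idea is the same.
definition = json.loads("""
{
  "initial": "off",
  "states": {
    "off": {"on": {"toggle": "on"}},
    "on":  {"on": {"toggle": "off", "overload": "broken"}},
    "broken": {"on": {}}
  }
}
""")

def to_dot(defn):
    # Emit Graphviz DOT so the definition that drives the runtime
    # can also render the diagram (e.g. via `dot -Tsvg`).
    lines = ["digraph statechart {"]
    lines.append(f'  __start [shape=point]; __start -> {defn["initial"]};')
    for name, node in defn["states"].items():
        for event, target in node.get("on", {}).items():
            lines.append(f'  {name} -> {target} [label="{event}"];')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(definition))
```

The same `definition` dict could feed a transition function at run time, which is what keeps the diagram and the behaviour from drifting apart.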
This is all explained on the page on how to use statecharts!
If you feel like chatting to someone about statecharts, you can go to gitter.im (no login required to see the chat), where you’ll find a community of like-minded developers that can help you understand and reap the benefits of using Statecharts. For a more Q&A-type site, head on over to the statecharts GitHub discussions, where we’ll do our best to answer your question.
Quite a few people have written books or held presentations that deal with statecharts in various ways, and they’re included in our resources page. If you’ve written something, please share it by posting it to GitHub Discussions.
There are some pages that haven’t found any place in the web of documents, so they’re honourably mentioned here:
Use case: Statecharts in User Interfaces
Concepts — The most important concepts in a statechart and what they look like in a diagram.
Glossary — A list of terms that get thrown around when talking about statecharts, with their definitions.
FizzBuzz — FizzBuzz is a well known problem, and it’s been used as a backdrop to explain various statechart concepts.
Acknowledgements
🏷️ Discussion Type
Product Feedback
💬 Feature/Topic Area
Issues
Body
In some repositories, any link to an issue from another issue has started to open in a popup overlay instead of navigating to it.
Is that something that is rolling out gradually? I checked both the changelog and product roadmap but couldn’t find any mentions of the new behavior. Is there a way to turn it off or configure it? It completely breaks the experience and negatively affects productivity.
Guidelines
I have read the above statement and can confirm my post is relevant to the GitHub feature areas Issues and/or Projects.
Thanks for all the feedback, this was something we were trying out as it improved load time for cross-repo links, we are going to revert the change.
💬 Your Product Feedback Has Been Submitted 🎉
Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users.
Here’s what you can expect moving forward ⏩
Your input will be carefully reviewed and cataloged by members of our product teams.
Due to the high volume of submissions, we may not always be able to provide individual responses.
Rest assured, your feedback will help chart our course for product improvements.
Other users may engage with your post, sharing their own perspectives or experiences.
GitHub staff may reach out for further clarification or insight.
We may ‘Answer’ your discussion if there is a current solution, workaround, or roadmap/changelog post related to the feedback.
Where to look to see what’s shipping 👀
Read the Changelog for real-time updates on the latest GitHub features, enhancements, and calls for feedback.
Explore our Product Roadmap, which details upcoming major releases and initiatives.
What you can do in the meantime 💻
Upvote and comment on other user feedback Discussions that resonate with you.
Add more information at any point! Useful details include: use cases, relevant labels, desired outcomes, and any accompanying screenshots.
As a member of the GitHub community, your participation is essential. While we can’t promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities.
Thank you once again for your contribution to making GitHub even better! We’re grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
0 replies
This really breaks my experience using AI agents! I click an issue link to copy the URL of that issue to paste into an agent, I get the parent issue link instead!
It looks awful too, I thought at first my browser had glitched and there was a rendering error
0 replies
Please give us an option to disable this non-standard link behavior. If I click a link it should open and NOT just throw up an overlay.
0 replies
I don’t want this. It breaks assistive technologies. I want an option to get the old behavior.
0 replies
Wow, that’s seriously great work on GitHub’s side. It looks outstanding, works amazing, should have been implemented long ago, in fact.
should have been opt-in
@ZimbiX Could have been opt-in. Not should……
keep links acting like links.
@ZimbiX links are still links; you can still right-click > open in new tab.
GitHub’s quality has eroded since the Microsoft acquisition.
@ZimbiX 50/50 I guess…. for me and my use case, GitHub progressed. But, again, thats only me…..
0 replies
Not a huge fan. It looks weird with the pop-up not centered in the middle and off to the right (FF 149).
Much more intuitive behavior is just supporting mouse-hover over an issue to get a preview, with click/ctrl-click opening the page.
0 replies
Please revert this awful behavior. When I click a link on a web browser, I want to go to that link.
At the very least, if you’re going to introduce non-standard UX, please provide users with the option to disable it.
0 replies
Just another angry user here - please revert this silly feature, or make it opt-in. Terrible UX!
0 replies
Another downvote here. This is an annoyance. Please get rid of it.
0 replies
Personally I prefer the old way; pop-ups are just annoying to me. I have an ultra-wide monitor, so with a popup I lose about 20% to 30% of my widescreen, for no real reason I can see is necessary.
Even if GitHub does not want to revert to the prior default, I think allowing users to decide at their own discretion what they would prefer would be better. That way I could use the old option, without having to use the new variant.
(It is probably also possible to do via custom CSS or so, but that would require some time investment to prevent a popup and instead use the old issue tracker way.)
Edit: Oops, misread this, so this is about links. I also agree that the old variant was better here. Who is making these horrible UI decisions lately?
0 replies
Is Microsoft run by AI slop? Do you even use GitHub yourself? Obviously not.
0 replies
So it was not only me then! Please revert this. It sucks.
0 replies
Hey, cool feature for people whose browsers don’t support tabs.
I’m lucky enough to have a browser that does, so I’d rather just have the links open in a new tab.
Thanks.
1 reply
You mean for people on mobile? 100% of developers have tabs!
T-minus 10 seconds until Refined GitHub fixes this user-hostile abomination in their next update…
0 replies
Thanks for all the feedback, this was something we were trying out as it improved load time for cross-repo links, we are going to revert the change.
4 replies
So why did you come with this idea at first? Who thought it was a good idea? An AI slop? Or a dude on his iPhone who never committed anything via GitHub? Who is responsible for this stupid idea?
GH must have not been very happy with the community response so they deployed someone on a Sunday.
Anyways, nice to hear that community has poked GH into not… slopping more stuff, or en****tification.
please at least consider making it a toggle option next time. I can see why y’all have “implementation ideas”, but I don’t think fetching the issue to display in a popup loads faster than just clicking the link, which loads in < 500ms.
So why did you come with this idea at first? Who thought it was a good idea? An AI slop? Or a dude on his iPhone who never commited anything via GitHub? Who is responsible for this stupid idea?
I was responsible for this going out. The goal was to provide a more consistent user experience in that what happens when you click an issue would be the same in more places where we use the issue viewer (sub-issues on an issue, our dedicated issues dashboard, GitHub Projects, and others). It also meant you wouldn’t lose your place when clicking an issue reference when reading a discussion. There were some performance improvements that came with the change too. It was well intentioned, but we hear you, and thanks for the feedback. We missed the mark on this one and it’s been rolled back.
Waymo, the autonomous driving tech firm whose so-called ‘robo-taxis’ are now roaming the streets of London, has told cycling campaigners that expecting their driverless cars to respect cycle lanes is “too high a bar” — because their customers want to be dropped off in them.
According to the Highway Code, motorists “must not drive or park in a cycle lane marked by a solid white line during its times of operation” or block a bike lane marked by a broken white line “unless it is unavoidable”.
Drivers are also told that they should give way to cyclists using the bike lane and wait for a “safe gap in the flow of cyclists” before crossing the infrastructure.
However, just as its robo-taxis begin driving autonomously in the UK for the first time, cycling campaigners in the US have claimed that Waymo has told them that the cars are programmed to pull into cycle lanes to pick up and drop off passengers.
Speaking to Streets Blog NYC, Christopher White, executive director of the San Francisco Bike Coalition, said that Waymo has told campaigners that it is “normal practice” for the autonomous vehicles to veer into bike lanes and block cycling infrastructure.
“People always point out that unlike human driven cars, the AVs stop at lights and obey the speed limit,” White said.
“However, they are really only as good and effective and safe as they are programmed to be. Waymos pull over into bike lanes all the time for pickups and drop-offs and that’s neither legal nor safe.
“But the companies say that is a normal practice and that’s what customers expect.”
> Cyclist ‘doored’ by passenger of driverless taxi illegally parked in bike lane sues Google-owned company after tech failure caused “violent” crash
Last June, a cyclist in San Francisco sued the Google-owned company after she was seriously injured when one of the brand’s driverless taxis stopped in a cycle lane and a passenger opened its back door, striking the cyclist and causing her to smash into another Waymo car that was also illegally blocking the bike path.
According to the lawsuit, the Safe Exit system employed by Waymo, which aims to alert passengers of surrounding dangers and hazards, failed — leading 26-year-old Jenifer Hanki to claim that Waymo knows its cars are ‘dooring’ cyclists.
Following the “violent” crash, which left her with a brain injury, as well as spine and soft tissue damage, preventing her from working or riding her bike, Hanki sued Waymo and Google’s parent company Alphabet in San Francisco County Superior Court alleging battery, emotional distress, and negligence, while seeking unspecified damages.
Waymo, formerly known as the Google Self-Driving Car Project, announced in January that a pilot service for its robo-taxi service will launch this year in London, in preparation for the UK government’s plans to change its regulations on driverless vehicles at some point in the second half of 2026.
In November 2019, Waymo — owned by Google’s parent company Alphabet — secured permission from the California Department of Motor Vehicles for its vehicles to carry passengers without the need for a safety driver who could intervene in the case of a potential collision, making it the first company in the world to secure such clearance.
It has since established itself as the market leader in the United States for self-driving taxis, with commercial operations in San Francisco, Phoenix, Los Angeles, and Austin, and began testing its autonomous ‘robocabs’ in New York City last year.
After being driven around London by a ‘safety driver’ mapping the capital’s roads since last autumn, earlier this month Waymo confirmed that their cars are now starting to be controlled by artificial intelligence — though a human is still sitting in the driver’s seat, in case anything goes wrong.
Waymo described the move as “the next step” towards a “fully autonomous passenger service later this year, pending government approval”.
> Safety of driverless taxis on London’s “infamously complex, congested, and contested streets remains to be seen”, say cycling campaigners — as ‘robotaxi’ service set to launch this year
Once the government signs off on the proposed new regulations and the scheme eventually launches, it will be driver-free: customers will be able to hail a robo-taxi through an app, with fares at a “competitive but premium” price, the company says.
According to Waymo, their cars use four sensor systems to gather data from the world outside — radar, lidar, vision, and microphone — enabling the vehicles to be “aware” of their surroundings up to a distance of three football pitches, including during bad weather.
A powerful computer in the boot processes the data obtained by the sensors, determining how the car acts and reacts in “real time”.
However, questions have been raised concerning the scheme’s safety features, with the London Cycling Campaign expressing reservations to road.cc about the taxi service’s ability to adapt from the wide, straight roads of California to London’s winding lanes.
“As with all new innovation, it’s really early days for Waymo and other autonomous ride-hailing services in London,” the campaign’s chief executive Tom Fyans told us in January.
“Waymo claims they’re far safer in the US than traditional taxi services. But whether that is still the case on London’s infamously complex, congested, and contested streets remains to be seen.
“At LCC, we talk to political leaders, innovators and private companies of all stripes all the time — to make sure everyone’s working hard to make London a better place for healthy, safe cycling for everyone. We hope new ride-sharing services will add to that, rather than detract from it.”
When it first launched as Waymo back in 2016, the firm said its cars are programmed to recognise cyclists as “unique users of the road”, drive conservatively around them, and recognise common hand signals.
In 2019, the company also released a video showing one of its vehicles predicting that cyclists will move out onto the road to pass a car blocking a cycle lane, with the taxi slowing to allow them to safely move across.
However, in February 2024, another San Francisco cyclist was left with “non-life-threatening injuries” after one of the company’s taxis failed to detect his presence and struck him.
According to the company, “the cyclist was occluded by the truck and quickly followed behind it, crossing into the Waymo vehicle’s path. When they became fully visible, our vehicle applied heavy braking but was not able to avoid the collision.”
And things haven’t got off to the best start in London either, with a TikTok video posted on Thursday showing a Waymo driving through a police cordon in the west of the city — though Waymo has since stated that the vehicle was being driven manually at the time of the incident.
> @zonjy.media: “Driverless taxi Waymo driver almost hit someone and drove straight into crime scene tape, almost hitting a police officer. Obviously driverless taxi software seems like it is not trained to avoid crime scenes, crime scene tape, police car blue lights, or ambulance blue lights in case of an accident. I think, in my opinion, these driverless Waymo taxis are more of a risk than the public thinks. Do you think this is safe enough to be on the streets of London? It put police officers and emergency service people at risk last night.”
The argument that self-driving cars will make city streets safer — by cutting out human error — has also been criticised by journalists and campaigners, who point out that driverless taxis could encourage people to use cars more and public transport less, increasing the chances of crashes.
“We shouldn’t be asking only like, ‘Hey, are robotaxis safer than humans on a per-mile-driven basis?’ because there’s a real risk that AVs induce people to take a lot more car trips or to replace transit,” Bloomberg reporter David Zipper told Streets Blog NYC this week.
“We could end up with a lot more driving. And even if every individual, self-driven mile is safer, if you have that much more driving, you have more crashes overall.”