10 interesting stories served every morning and every evening.
After almost three years of 6.x series kernels, Linux 7.0 is finally here.
That means it’s also time for another Asahi progress report!
Automate Everything
Users of alternate distros and keen-eyed individuals may have noticed some
changes to the Asahi Installer. After almost two years, we finally got around
to pushing an updated version of the installer to the CDN! Two years is a long
time to go between updates, so what took so long?
Our upstream installer package is a little bit of a Rube-Goldberg machine. The
bulk of the installer is written in Python, with some small Bash scripts to
bootstrap it. When you run curl | sh, you’re actually downloading the bootstrap
script, which then fetches the actual installer bundle from our CDN. This bundle
consists of a Python interpreter with a very stripped-down standard library, a
prebuilt m1n1 stage 1 binary, and the installer itself.
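For a sense of the moving parts, here is a rough sketch of that bootstrap flow. The real bootstrap is a Bash script, and the CDN URL and paths below are placeholders rather than the real ones:

```python
# Hypothetical sketch of the curl | sh bootstrap flow: fetch the installer
# bundle from the CDN, unpack it, and hand off to the bundled interpreter.
import io
import subprocess
import tarfile
import urllib.request

CDN = "https://cdn.example.com/installer"  # placeholder, not the real CDN


def bootstrap(version: str) -> None:
    # Download the bundle (Python interpreter + m1n1 stage 1 + installer).
    with urllib.request.urlopen(f"{CDN}/installer-{version}.tar.gz") as resp:
        bundle = resp.read()

    # Unpack into a scratch directory.
    with tarfile.open(fileobj=io.BytesIO(bundle), mode="r:gz") as tar:
        tar.extractall("/tmp/asahi-install")

    # Run the installer using the bundled interpreter, not the host's.
    subprocess.run(
        ["/tmp/asahi-install/python/bin/python3",
         "/tmp/asahi-install/install.py"],
        check=True,
    )
```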
Until recently, cutting an installer release meant:
Tagging the installer repo
Downloading a macOS Python build
Building m1n1 from a blessed commit
Bundling Python, m1n1 and the installer
Uploading the installer bundle to the CDN
Updating the CDN’s version flag file
This process was time-consuming and required administrative access to the CDN.
As a result, we neglected to push installer updates for quite some time; the
previous installer tag was from June 2024! As upstreaming work has progressed
and Devicetree bindings churned, this became rather problematic for our friends
maintaining distros.
The Asahi Installer offers a UEFI-only installation option. This option
shrinks macOS and only installs what is necessary to boot a UEFI executable,
meaning m1n1 stage 1, the Devicetrees, and U-Boot. This allows users to
boot from live media with Asahi support, such as specialised Gentoo Asahi
LiveCD images.
Since the Devicetrees on a fresh UEFI-only install come from the installer
bundle itself, a kernel will only successfully boot when the installer-bundled
Devicetrees match what that kernel expects to see. The two have drifted
out of sync over time as Devicetree bindings changed during the
upstreaming process. This situation finally came to a
head with kernel 6.18, which required numerous changes to both m1n1 and
the Devicetree bindings for the Apple USB subsystem. This made booting
kernel 6.18 and above from live media impossible. Oops.
Rather than go through the trouble of manually pushing out another update,
we took the opportunity to build some automation and solve this problem
permanently.
We moved the manifest of installable images into the asahi-installer-data repo,
allowing us to update it independently of the installer codebase.
On top of this, we also now deploy
the installer using GitHub workflows. Going forward, every push to the main
branch of asahi-installer will
automatically build the installer and upload it to https://alx.sh/dev.
Every tag pushed to GitHub will do the same for https://alx.sh.
The latest version, 0.8.0, bumps the bundled m1n1 stage 1 binary to
version 1.5.2, introduces installer support for the Mac Pro, and adds
a firmware update mode which ties in nicely with…
How do you overengineer a light sensor?
Basically everything with a screen now comes with some sort of light
sensor. This is usually to enable automatic brightness adjustment based
on ambient conditions. It’s a very convenient feature in devices like
smartphones, where a user may walk outside and find their display too
dim to see. The cheapest versions of this use a simple photoresistor.
This is fine if the goal is just to change brightness, but brightness
is not the only thing affected by ambient lighting conditions. What about
colour rendering?
Apple’s devices have had the True Tone display feature for quite some time.
This works by measuring both the brightness and the colour characteristics
of the environment’s ambient lighting. This data is then used to apply
brightness and colour transformations to the display to ensure that it is
always displaying content as accurately as possible. This is most noticeable
in environments with lighting fixtures that have a low Colour Rendering
Index, such as fluorescent tubes or cheap cool white LEDs. The devices that
enable this, ambient light sensors, are usually little ICs that connect
to the system over I2C or another industry-standard bus. This is
fine for basic applications, but this is Apple. There are some other
considerations:
The light sensor is doing stuff whenever the screen is on, so processing its
output should be as efficient as possible
The light sensor should be able to be calibrated for maximum accuracy
There are multiple models of light sensor in use, and the OS should not
have to care too much about that
The light sensor has to have a three letter acronym like every other piece
of hardware on this platform (ALS)
Naturally, this sounds like a job for the Always-On Processor (AOP)!
We’ve had a working AOP+ALS driver set for a while thanks to chaos_princess;
however, the raw data the AOP reports back from the ALS is rather inaccurate
without calibration. That calibration is a binary blob that must be uploaded to the AOP
at runtime. It is essentially firmware. Since we cannot redistribute Apple’s
binaries, it must be retrieved from macOS at install time and then stored somewhere the driver
knows to look for it.
To achieve this, the Asahi Installer gathers up all
the firmware it knows we will need in Linux and stores it on the EFI System
Partition it creates. A Dracut module then mounts this to a subdirectory of
/lib/firmware/, where drivers can find it. However, issues arise when we
need to retrieve more firmware from macOS after Asahi Linux has already been
installed. To avoid a repeat of the webcam situation, where users
were required to manually do surgery on their EFI System Partition, chaos_princess
added the ability for the Asahi Installer to automatically update the firmware
package. Starting with ALS, any required firmware updates will be a simple
matter of booting into macOS or macOS Recovery, re-running the Asahi Installer, and following
the prompts.
To enable ALS support (and to do firmware upgrades in the future), follow these steps:
Ensure you are running version 6.19 or above of the Asahi kernel
Ensure your distro ships iio-sensor-proxy as a dependency of your DE (Fedora
Asahi Remix does this)
What would you do if your organization had used a domain name for 27 years, and the registrar holding the domain seized it without any advance warning? All email and websites went dark. The company’s tech support spent four days telling you to “Just wait, we are working on it.” On the fourth day, the company informed you that someone else had the domain now, and it was no longer yours.
Read on. This crazy story happened exactly one week ago.
My friend Lee Landis is a partner in Flagstream Technologies, a local IT firm in Lancaster, PA. Last Saturday afternoon one of his client’s domains vanished from his GoDaddy account.
Lee is one of the most competent IT guys I know. The GoDaddy account had dual two-factor authentication enabled, requiring both an email code and an authentication app code to log in. The domain itself had ownership protection turned on. The audit log just said “Transfer to Another GoDaddy Account” by an “Internal User” with “Change Validated: No.”
Some names have been changed
Some names and the domain itself have been changed because people wanted to remain anonymous. The pattern of the domain names mirrors the actual mistake, so the explanation still makes sense. Every fact in this post is true. Lee has hard evidence for every one of them.
GoDaddy emailed Flagstream at 1:39pm that an account recovery had been requested. Three minutes later, the transfer was initiated. Four minutes later, it was complete. On a Saturday afternoon.
Everything at the impacted organization went offline because GoDaddy reset the DNS zone to default when they moved the domain into the new account. Same nameservers. Empty DNS zone file.
Lee’s client lost their website and email for the next four days.
27 yrs: domain in active use
32: calls to GoDaddy
9.6 hrs: on the phone with GoDaddy
17: emails to GoDaddy. Zero callbacks.
Domain and account were fully protected.
The domain had the “Full Domain Privacy and Protection” security product that GoDaddy sells. Dual two-factor on the account. None of it mattered. The transfer was done by an “Internal User” inside GoDaddy.
The domain was HELPNETWORKINC.ORG. The real domain name has been changed because the organization wanted to remain anonymous. It belongs to a national organization with twenty locations across the United States. The domain has been in active use for 27 years. Each chapter runs its website and email on a subdomain of that one parent domain. When HELPNETWORKINC.ORG went dark, every chapter went dark with it.
Thirty-two calls. 9.6 hours on the phone. Zero callbacks.
Lee called GoDaddy on Sunday. They confirmed the domain was no longer in his account but could not say where it went due to privacy concerns. They told him to email undo@godaddy.com. He did, but never received any response. Lee did not feel this reflected the appropriate level of urgency for the issue. He asked for a supervisor, who was even less helpful. Lee was not happy. He may have said some hurtful things to GoDaddy’s support personnel during this call. That first call lasted 2 hours, 33 minutes, and 14 seconds.
On Monday morning, Lee and a coworker started working in earnest on this issue because there was still no update from GoDaddy. Calling in yielded a different agent who told Lee to email transferdisputes@godaddy.com instead. By Tuesday the address had changed again to artreview@godaddy.com. The instructions shifted by the day. It seemed like every GoDaddy tech support person had a slightly different recommendation.
The one thing that stayed consistent was the message: “Just wait a day or two. We are working on it. Why do you think this is so urgent?”
One of the most frustrating parts of this process is that all official communication to and from GoDaddy about this issue was done with generically named email accounts. It just seems like there should have been a named individual in charge of managing and communicating about this issue. Rather there were just random generic email accounts that seemed to change on a daily basis.
Every call generated a fresh case number. Lee lost count of the total number of cases. A few of the cases are 01368489. 894760. 01376819. 01373017. 01376804. 01373134. 01370012. None of them tied together on GoDaddy’s side. Every escalation started from zero. These are actual case numbers, in case anyone at GoDaddy wants to check into this.
I posted on X to see if anyone I knew at GoDaddy could escalate.
Can any of my GoDaddy friends help? A good friend of mine had a domain taken. My friend is very competent. Domain ownership protection was on. Owner did not get any notices. Audit log looks fishy. Phone/email support telling them to wait. Did a GoDaddy employee take it? pic.twitter.com/OWcJIalWcF— Austin Ginder (@austinginder) April 20, 2026
My friend Courtney Robertson, who works at GoDaddy, reposted it and started escalating internally on her own time. Thank you, Courtney. GoDaddy has a lot of great people like her. That part is not in question. What GoDaddy does not have is a way to actually fix a mistake once one has been made. Tickets pile up. Phone calls reset. Every escalation is a new person reading the case from scratch. The thing you actually need solved drifts between queues.
And there was no real way to dispute it.
While Lee was on the phone, his colleague was on a different phone trying to file a Transfer Dispute. GoDaddy directed him to cas.godaddy.com/Form/TransferDispute. He filed a dispute and received a message, which he captured in a screenshot.
Lee and his colleagues worked diligently at challenging the transfer. They supplied the correct name of the person listed on the domain. They supplied that person’s driver’s license as required. They also supplied the correct business documentation as listed in GoDaddy’s own requirements. Every time they submitted a request, they were told they would hear back in 48 to 72 hours.
GoDaddy FINALLY responds with a SHOCKING statement
Tuesday afternoon, after four days of waiting, Flagstream finally got an official email response back from GoDaddy.
GoDaddy’s reply to Lee
After investigating the domain name(s) in question, we have determined that the registrant of the domain name(s) provided the necessary documentation to initiate a change of account. … GoDaddy now considers this matter closed.
That was it. No explanation of what documentation. The suggested next steps were three links. A WHOIS lookup. ICANN arbitration providers. A page about getting a lawyer involved to represent you in litigation.
Flagstream migrates client to new domain
Once GoDaddy declared the matter closed, Flagstream began migrating the client to a new domain. New email addresses. New website addresses. Coordinating with various teams throughout the night to change everything over to a new domain.
Switching to a new domain is a massive amount of work, and it leaves a lot of lingering problems behind because there is no control over the original domain.
Every email address that exists out in the world is now wrong. You have to tell everyone the new address. If they try the old one, it bounces.
Every piece of marketing material that references the old domain is now incorrect. There is no way to forward anything to the new domain.
All of the SEO is gone. You are starting an online presence from scratch.
Then a stranger found the domain in her account.
Wednesday morning Susan (not her real name), 2,000 miles away from the client’s headquarters, noticed something odd. Susan had been working at reclaiming a totally different domain used by a former employee. When she looked closely at her GoDaddy account, the domain in her account wasn’t the one she had requested. She made a few phone calls because she knew this was a problem and eventually got hooked up with Flagstream. Working with Susan, they ran a GoDaddy account-to-account transfer, and put the domain back where it belonged. DNS came back up while Lee was still typing the email telling me it was over. The entire process of reclaiming the domain lasted less than 5 minutes.
Once the domain was back and DNS was working, Flagstream started the arduous task of reverting everything that they had done the day before. They switched email and websites back to the original domain, once again working through the night to get everything fixed.
The resolution for this problem did not come from GoDaddy support. It did not come from the dispute team. It did not come from the Office of the CEO team. It came from a stranger who accidentally ended up with the domain and was smart and honest enough to start calling around because she knew something wasn’t right.
Susan is really the hero of this entire story. Without her, Flagstream would still have no idea what happened to this domain. Lawyers would have gotten involved, but it would probably be months until anything was resolved.
Timeline of events
Apr 18, 1:39pm: GoDaddy emails Flagstream that an Account Recovery has been requested for the account.
Apr 18, 1:42pm: Transfer initiated by a GoDaddy Internal User, three minutes after the recovery notice.
Apr 18, 1:43pm: Transfer completed. Change Validated is listed as “No”. Website and email go dark across the entire organization.
Apr 19: Lee discovers the domain is gone. GoDaddy says email undo@godaddy.com and wait.
Apr 20: Flagstream team starts calling and emailing GoDaddy for updates. GoDaddy now says email transferdisputes@godaddy.com. Austin posts on X. Courtney Robertson routes the case to the Office of the CEO team.
Apr 21: Flagstream files multiple Transfer Dispute cases with the requested documentation. Every submission is met with a 48 to 72 hour response window. GoDaddy emails Lee that the matter is closed and the domain belongs to someone else. Flagstream starts the painful process of migrating the organization to a new domain so they can function.
Apr 22: Susan notices the wrong domain in her account and calls Lee. Account-to-account transfer brings it home.
Then it got crazier. GoDaddy approved the transfer with zero documents.
The organization on the receiving end of the transfer was a regional chapter of the same network. Susan, the executive assistant, had emailed GoDaddy two weeks earlier asking to recover a different domain. HELPNETWORKLOCAL.ORG. Not HELPNETWORKINC.ORG.
Flagstream spent some time talking to Susan to figure out exactly how she was able to accidentally get the domain transferred into her account. Did she unintentionally supply all of the correct documentation? Talking to Susan, they figured out that GoDaddy actually approved the transfer without her supplying ANY documentation.
Her email signature happened to reference her chapter’s website at a subdomain of HELPNETWORKINC.ORG. GoDaddy’s recovery team apparently looked at the signature, saw the parent domain, and transferred that domain into her account.
GoDaddy sent Susan a link to upload supporting documents. The link expired before she got around to using it. She emailed back requesting a new link so she could upload the required documentation. However, before the new link arrived, she received an email saying the domain transfer had been approved.
Susan never submitted a single document. Not for the domain she was actually trying to recover, and certainly not for the one GoDaddy ended up giving her. GoDaddy approved the change of account, transferred a 27-year-old non-profit’s domain into a stranger’s account, and “considered the matter closed” without requiring any documentation.
This is a huge security issue.
If Susan had been a bad actor, she could have intercepted email. She could have used that email to reset passwords, get MFA codes, launch phishing attacks, etc. She could have put up a new website with malware on it, redirected payments on the website, etc.
When the domain initially disappeared and Flagstream was unable to obtain any information about who had it, Flagstream feared the worst. Flagstream and the impacted client started to come up with a plan to protect against the threats mentioned above, which was a huge undertaking for an organization of this size. Basically, all users across the entire organization needed to start logging into every important website and make sure the compromised domain was removed from the account. This includes bank websites, Amazon, the IRS, payroll, Dropbox, email accounts, and even, ironically enough, GoDaddy accounts.
It is outrageous that Susan was able to obtain this domain without supplying any documentation. Everyone was lucky it was Susan that got this domain.
GoDaddy: please follow up with Flagstream.
This is not acceptable.
A GoDaddy employee transferred a 27-year-old domain out of a paying customer’s account with no validation. With zero documentation submitted by the recipient. When the customer disputed with legitimate documentation, every submission was met with “We will respond in 48 to 72 hours.” After four days, GoDaddy claimed the domain belonged to someone else and the case was closed. The fix came from the recipient of the mistake, not from GoDaddy, despite 9.6 hours of phone conversations.
To anyone at GoDaddy reading this: please follow up with Lee Landis at Flagstream Technologies and make this right. An apology is probably in order. An internal review of how the transfer team validates documentation is in order, including how a transfer can be approved with zero documentation. Lee would like a clear answer on how this happened. Lee doesn’t want an email from a generic GoDaddy account. Lee wants a real person to call or email him. This person needs to leave an email address and phone number in case Lee has follow-up questions.
Even disclosing this to GoDaddy was broken.
Before publishing this post, I wanted to share the findings with GoDaddy’s security team directly. I emailed security@godaddy.com with the full report. The message bounced.
GoDaddy’s auto-reply to security@godaddy.com
A custom mail flow rule created by an admin at secureservernet.onmicrosoft.com has blocked your message. We hope this message finds you well. This email mailbox is no longer monitored. To address your needs, we have outlined two popular options for you: 1: To submit an abuse report, please visit our Abuse Reporting Form. 2: If you are looking to submit a vulnerability, please visit our bounty program https://hackerone.com/godaddy-vdp.
So I filed the same report through HackerOne instead, report #3696718.
This is the same pattern that played out across the four-day outage. The official channel does not work. The alternative path requires knowing to bypass it. Most honest people who notice a security issue are not going to have a HackerOne account. They send an email. How is it that GoDaddy doesn’t have a public security disclosure email address?
Whether the original transfer was a single agent’s mistake or a flaw in the recovery workflow, it is still a security issue. And there is no clean path from “I found something” to “a human at GoDaddy is looking at it.”
The only way to get GoDaddy’s attention is to leave.
Lee is upset about the four days of stress and lost productivity across the impacted organization. But his bigger concern is what comes next. Apparently there is no way to protect against this threat if your domain is hosted at GoDaddy. In addition, it seems like there is no efficient way to contest the GoDaddy transfer.
Flagstream will most likely migrate every one of their domains off GoDaddy. That is the only protection they have left, and the only escalation GoDaddy seems to respond to.
Are you at risk?
Is your domain hosted on GoDaddy? What would you do if the domain disappeared out of your GoDaddy account and your entire business went dark?
In talking to engineering management across tech industry heavyweights, it’s apparent that software engineering is starting to split people into two nebulous groups:
The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter: framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.
The second group will use A.I. to avoid thinking. They will paste prompts into a box, collect polished output, and present it as though it reflects their own reasoning. For a while, that can look like productivity. It can even look like talent. But it is a dead end.
The software engineers who will be most valuable in the future are not the ones who do everything themselves. They are the ones who refuse to spend time on work that A.I. can do for them, while still understanding everything that is done on their behalf. They use the time savings to operate at a higher level. They elevate their thought process through rigor rather than outsourcing it.
That distinction matters more than people think.
In this post:
The New Failure Mode: Outsourced Thinking (& analogies)
What the Best Engineers Will Do Instead
The Real Source of Value
The Risk for Early-In-Career Engineers
There Is No Shortcut to Judgment
In Summary: The Dividing Line & Organizational Implications
Why This Matters Even More to Organizational Health
The New Failure Mode: Outsourced Thinking
A.I. can already generate code, summarize meetings, explain concepts, produce design drafts, and write status updates in seconds. That is useful but also dangerous.
The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence.
There is now a very real temptation to hand a model a problem, receive a plausible answer, and then repeat that answer as if it reflects your own understanding. That is close to plagiarism, but in some ways worse. At least when a student copies from another person, there is still a real human source behind the answer. Here, people can present machine-produced reasoning they do not understand, cannot defend, and could not reproduce on their own.
That is intellectual dependency being labeled as leverage.
And that dependency has a cost. Every time you substitute generated output for your own comprehension, you are skipping the exercises and reps that build judgment. You are trading long-term capability for short-term appearance.
A few analogies, which I’ll come back to later, make this line of thought more concrete and approachable: copying answers through school, using a calculator for every arithmetic task, and relying on self-driving features before learning how to drive.
What the Best Engineers Will Do Instead
The best engineers will absolutely use A.I. more, not less. But they will use it with a very different posture.
They will let A.I. draft boilerplate, summarize docs, generate test scaffolding, propose refactorings, surface possible failure modes, accelerate investigation, and compress routine work. They will happily offload the mechanical parts of the job. But they will also:
ask sharper questions.
define the real problem instead of merely responding to the visible one.
optimize for clarity and brevity, as they always have, instead of polished language that says little of substance.
generate new, high-value knowledge, instead of simply rehashing or remixing existing knowledge in the system.
Then they will take the reclaimed time and invest it where it matters most.
The Real Source of Value
For years, people have confused software engineering with code production. That confusion is now getting exposed.
If the job were mainly about producing syntactically valid code, then of course A.I. would be on a direct path to replacing large parts of the profession. But that was never the highest-value part of the work. The value was always in judgment.
The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
A.I. can support that work. It cannot own it.
In fact, the engineers who produce the most value in the future will often be the ones generating the knowledge that makes A.I. more useful in the first place. They will create the design principles, domain understanding, patterns, context, and decision frameworks that improve the machine’s effectiveness. They will feed the system with better questions, better constraints, and better corrections.
In that world, the engineer is not replaced by A.I. The engineer becomes more leveraged because they are operating above the level of raw output.
The Risk for Early-in-Career Engineers
This issue is especially important for people early in their careers.
Early years matter because that is when foundational skills are formed. Debugging instinct. System intuition. Precision. Taste. Skepticism. The ability to decompose a problem. The ability to explain why something works, not just that it appears to work.
Those skills are built through friction. Through struggle. Through getting things wrong and fixing them. Through tracing failures back to root cause. Through writing something and realizing it does not survive contact with reality.
That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.
Someone who uses A.I. to answer every hard question may look efficient for a quarter or two. But they may also be quietly failing to build the very capabilities their future depends on. They are skipping the stage where understanding is forged.
Going back to the analogies: This is like copying answers through university and then showing up to a job that requires independent thought. It is like using a calculator for every arithmetic task and never developing number sense. It is like relying on self-driving features before learning how to actually drive. The support system may make you look functional, but it does not make you capable.
And eventually raw capability is the main thing that matters. There is no substitute.
There Is No Shortcut to Judgment
This is the part that some people may not want to hear:
There is no generated explanation that transfers mastery into your brain without you doing the work.
There is no way to outsource reasoning for long enough that you still end up strong at reasoning.
You can outsource mechanics, accelerate research and compress routine tasks. You can remove enormous amounts of low-value labor. All of that is good and should happen.
But you cannot skip the formation of skill and expect to possess it anyway.
That is the central mistake behind the most naive uses of A.I. People think they are saving time, when in reality they are often deferring a bill that will come due later in the form of weak judgment, shallow understanding, and limited adaptability.
In Summary: The Dividing Line & Organizational Implications
The dividing line is simple:
If A.I. is helping you understand faster, think deeper, and operate at a higher level, it is making you more valuable.
If A.I. is helping you avoid understanding, avoid struggle, and avoid ownership of the reasoning, it is making you less valuable.
One path compounds, while the other path hollows you out and leaves you ripe for irrelevance.
That is why the future does not belong to the engineers who merely use A.I. It belongs to the engineers who know exactly what to delegate, exactly what to own, and exactly how to turn time savings into better thinking.
If you haven’t already, it’s time to make informed choices about how you shape your future in the industry.
Why This Matters Even More to Organizational Health
Engineering management will face the same dividing line.
Some leaders will recognize the difference between engineers who use A.I. to accelerate understanding and engineers who use it to simulate understanding. Others will not. That gap will matter more than many organizations realize.
One of the defining traits of strong engineering leadership in the A.I. era will be the ability to distinguish polished output from real judgment. Leaders who cannot tell the difference may reward speed, fluency, and presentation while missing the deeper signals of technical depth: originality, rigor, sound tradeoff analysis, and the ability to reason clearly about unfamiliar problems.
That creates organizational risk.
The most capable engineers are often the ones producing the insight, context, design judgment, and corrective feedback that make both teams and A.I. systems more effective. If an organization allows low-understanding, high-fluency work to spread unchecked, it does not just lower the quality of individual output. It starts to degrade the knowledge environment itself. Reviews get weaker. Design discussions get shallower. Documents become more polished and less useful. Over time, the organization becomes worse at generating the very clarity and technical judgment it depends on.
This is why leadership matters so much here. The challenge is not merely adopting A.I. tools. It is protecting the conditions under which real thinking, learning, and craftsmanship continue to thrive.
That starts with hiring. Organizations will need better ways to detect genuine understanding rather than surface-level fluency. They will need interview loops that test reasoning, not just polished answers. They will need evaluation systems that reward clarity, depth, sound judgment, and durable technical contribution rather than sheer output volume.
It also affects team design and culture. Strong engineers should not spend disproportionate amounts of time cleaning up plausible but shallow work generated by people who have outsourced their thinking. If leadership does not actively guard against that, high performers become force multipliers for everyone except themselves. That is a fast path to frustration, lowered standards, and eventual attrition.
The organizations that handle this well will not be the ones that simply push A.I. adoption hardest. They will be the ones that learn to separate leverage from dependency, acceleration from imitation, and genuine capability from convincing output.
In the A.I. era, organizational quality will increasingly depend on whether leadership can still recognize the difference.
Editorial note: Like all content on this site, the views expressed here are my own and do not necessarily reflect the views of my employer.
Sawe smashes two-hour mark to ‘move goalposts for marathon running’
‘Absolutely incredible!’ - Sawe runs sub-two-hour marathon in London
By Harry Poole
BBC Sport journalist
Sabastian Sawe made history at the London Marathon by becoming the first athlete to run a sub-two-hour marathon in a competitive race.
The 31-year-old Kenyan crossed the line to win in one hour 59 minutes 30 seconds, more than one minute faster than the late Kelvin Kiptum’s previous record of 2:00:35, set in 2023.
The great Eliud Kipchoge became the first man to run a marathon in under two hours in 2019, but that was not record-eligible as it was held under controlled conditions.
Already on world record pace as he crossed the halfway mark in 1:00:29, Sawe was able to speed up over the second half of the race to run even faster than Kipchoge’s time.
Sawe made his decisive move before the final 10km, with only Yomif Kejelcha able to cover his surge off the front.
Remarkably, Kejelcha, making his marathon debut, became the second man to run under two hours in race conditions, finishing runner-up in 1:59:41.
Half marathon world record holder Jacob Kiplimo also crossed the line faster than Kiptum’s former record, completing the podium in 2:00:28.
Sawe, speaking on BBC TV, said: “I am feeling good. I am so happy. It is a day to remember for me.”
“We started the race well. Approaching finishing the race, I was feeling strong. Finally reaching the finish line, I saw the time, and I was so excited.”
Assefa sets new world record to win London Marathon for second year in a row
In the women’s race, Ethiopia’s Tigst Assefa improved her own world record for a women-only field as she surged clear of Kenyan rivals Hellen Obiri and Joyciline Jepkosgei in a thrilling finish to retain her title in 2:15:41.
Swiss great Marcel Hug cruised to a record-equalling eighth London Marathon victory in the elite men’s wheelchair race, drawing level with Great Britain’s David Weir by winning for a sixth successive year.
Catherine Debrunner also retained the elite women’s wheelchair title as the Swiss burst clear of American Tatyana McFadden in the closing stages.
How Sawe achieved sporting immortality in London
Much of the focus beforehand had been about Sawe - winner of last year’s race in 2:02:27 - targeting Kiptum’s London Marathon course record of 2:01:25.
He told BBC Sport this week that it was “only a matter of time” before he broke Kiptum’s world record, adding “I hope and wish one day [it will be me]” when asked about becoming the first person to run under two hours in a race.
Sawe had targeted Kiptum’s world record in Berlin last September, when he went through halfway in 60:16, before that bid was ultimately undone by the hot weather.
But, in perfect race conditions in London, Sawe stormed down The Mall to achieve that historic feat, doing so in a time which was once considered impossible.
BBC commentator and former world champion Steve Cram said: “There are things that happen in sport and you want to be there to see history being made - if you are watching on TV then well done, but if you’re in London, it is a privilege and it is incredible.
“We said it was a day for records but I don’t think in our wildest dreams we could have foreseen this.”
‘I am so happy’ - Sawe reacts to winning the London Marathon
After covering the first half of the course in 60:29, Sawe moved through the gears to complete the second half in just 59:01.
Only 63 men in history have run a half marathon as quickly as that - with Sawe’s own personal best standing at 58:05.
His splits continued to quicken as he chased down his target, clocking 13:54 for the five kilometres from 30-35km and 13:42 for the 35-40km stretch, an average pace of 2:45 per kilometre.
“This will reverberate around the world,” said former women’s marathon world record holder Paula Radcliffe.
“The goalposts have literally just moved for marathon running and where you benchmark yourself as being world-class.
“It is a lesson to everybody out there. We say ‘don’t go out too fast’ - they went out smartly and paced it really well.”
‘We’ve witnessed something incredible’
Pundits react to Sawe’s landmark sub-two-hour marathon
Kitted out in sponsor Adidas’ latest supershoes, Sawe, who has won all four marathons he has contested, managed to take two minutes and 35 seconds off his marathon personal best.
He has sought to ensure confidence in his performances by undergoing frequent drug tests and was tested 25 times before competing in Berlin, where he faded to finish in 2:02:16.
“I want to thank the crowds for cheering us. I think they help a lot, because if it was not for them, you don’t feel like you are so loved,” Sawe said.
“I think they help a lot because them calling makes you feel so happy and strong and pushing.
“That is why I can say what comes for me today is not for me alone but all of us in London.”
Reacting to Sawe’s record, Britain’s four-time Olympic champion Mo Farah said: “We’ve waited long enough to see a human go sub-two.
“That’s always been the question that we’ve asked. We’ve just witnessed something incredible.”
Assefa improves record as Hug makes history
Hug wins London Marathon wheelchair race for sixth consecutive year
Assefa, the third-fastest woman in history, lined up as favourite to repeat her 2025 triumph in London after injuries forced Olympic gold medallist Sifan Hassan and world champion Peres Jepchirchir to withdraw.
The leading trio in Sunday’s race remained inseparable until the closing kilometres, as Obiri and Jepkosgei accompanied Assefa inside the Ethiopian’s record pace set in London 12 months ago.
But it was Assefa who summoned the energy to push on for victory, going nine seconds faster than her previous women-only record.
The women’s elite runners begin 30 minutes before the elite men in the London Marathon, meaning the event is classed as a women-only race.
Obiri, a six-time global medallist on the track, crossed the line 12 seconds after Assefa, closely followed by Kenya’s 2021 winner Jepkosgei.
Eilish McColgan was the first British woman across the line, placing seventh overall in 2:24:51, while Rose Harvey was ninth in 2:26:14.
Mahamed Mahamed was the best-placed home athlete in the men’s event, finishing 10th in 2:06:14 and replacing Alex Yee as the second-fastest Briton in history.
Debrunner wins women’s wheelchair race
Hug produced another dominant performance to tie Weir’s record for the most victories in London Marathon history.
Hug, 40, crossed the line in 1:24:13, more than four and a half minutes clear of Chinese 23-year-old Luo Xingchuan.
Briton Weir completed the podium in 1:29:23 in his 27th consecutive appearance at the event.
Debrunner celebrated her fourth London Marathon win after outlasting McFadden, finishing just five seconds ahead of the American in clocking 1:38:29.
Briton Eden Rainbow-Cooper went into the race with podium aspirations after finishing fourth last year and regaining her Boston Marathon title on Monday, but those hopes were dashed by a pre-race puncture which caused her to start the race late.
I think this might be the neatest thing I’ve built in Jelly that nobody will ever notice.
If you’ve ever maintained a help centre or documentation site for a web
application, you’ll know the particular misery of screenshots. You write a
lovely help article, carefully capture a screenshot of the feature you’re
documenting, crop it, maybe add a border and a shadow, upload it, and it looks
great. Then you change the UI slightly — tweak a colour, move a button,
update some copy — and suddenly every screenshot that includes that element
is stale. You know they’re stale. Your users might not notice, but you
know, and it gnaws at you.
Or maybe that’s just me.
Either way, I decided to fix it. The help centre in Jelly has a build system
where screenshots are captured automatically from the running application,
and they update themselves whenever you rebuild.
Markdown with a twist
The help articles are written in Markdown, which gets processed into HTML via
Redcarpet and then rendered as ERB views in the Rails app. So far, so
ordinary. But scattered through the Markdown are comments like this:
<!-- SCREENSHOT: acme-tools/inbox | element | selector=#inbox-brand-new-section -->

That HTML comment is an instruction to the screenshot system. It says: “go to
the inbox page for the Acme Tools demo team, find the element matching
#inbox-brand-new-section, and capture a screenshot of it.” The image tag
below it is where the result ends up.
How it works
Under the hood, it’s a Rake task that fires up a headless Chrome browser via
Capybara and Cuprite. It scans every Markdown file for those SCREENSHOT
comments, groups them by team (so it only needs to log in once per team),
navigates to each URL, and captures the screenshot.
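As a sketch of that scanning step (the real system is a Ruby Rake task; this Python rendition and its directive grammar are inferred from the examples in this post):

```python
# Scan Markdown files for SCREENSHOT directives and group them by team,
# so the capture run only has to log in once per team.
import re
from collections import defaultdict
from pathlib import Path

DIRECTIVE = re.compile(
    r"<!--\s*SCREENSHOT:\s*(?P<page>\S+)\s*\|\s*(?P<mode>\w+)"
    r"(?:\s*\|\s*(?P<opts>[^>]*?))?\s*-->"
)


def scan(manual_dir: str) -> dict:
    jobs = defaultdict(list)
    for path in Path(manual_dir).rglob("*.md"):
        for m in DIRECTIVE.finditer(path.read_text()):
            team = m["page"].split("/")[0]  # e.g. "acme-tools"
            opts = dict(kv.split("=", 1) for kv in (m["opts"] or "").split())
            jobs[team].append((m["page"], m["mode"], opts))
    return jobs

# scan("public/manual") might yield:
# {"acme-tools": [("acme-tools/inbox", "element",
#                  {"selector": "#inbox-brand-new-section"})], ...}
```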
The capture modes are:
element — screenshot a specific DOM element by CSS selector
full_page — capture the whole page, optionally cropped to a height
viewport — just what’s visible in the browser window
And there are a handful of options that handle the fiddly cases:
<!-- SCREENSHOT: nectar-studio/manage/rules | full_page | click=".rule-create-button" wait=200 crop=0,800 -->
That one navigates to the rules page, clicks a button to open a form, waits
200 milliseconds for the animation, then captures a full-page screenshot
cropped to a specific region. The click option is the one that really makes
it sing — so many features live behind a button press or a popover, and being
able to capture those states automatically is wonderful.
There’s also torn — which applies a torn-paper edge effect via a CSS
clip-path — and hide, which temporarily hides elements you don’t want in
the shot (dev toolbars, cookie banners, that sort of thing).
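The post’s implementation is Ruby with Capybara and Cuprite; as a cross-language sketch of the same capture logic (the option names mirror the directives above, everything else is an assumption), a Playwright version might look like:

```python
# Rough Playwright analogue of the capture step: navigate, optionally click
# and wait for animations, hide unwanted elements, then screenshot.
from playwright.sync_api import sync_playwright


def capture(url, out, mode="viewport", selector=None,
            click=None, wait=0, hide=None):
    with sync_playwright() as p:
        page = p.chromium.launch(headless=True).new_page()
        page.goto(url)
        if click:
            page.click(click)            # open the popover/form behind a button
        if wait:
            page.wait_for_timeout(wait)  # let animations settle
        if hide:
            # Temporarily hide dev toolbars, cookie banners, etc.
            page.eval_on_selector_all(
                hide, "els => els.forEach(el => el.style.visibility = 'hidden')"
            )
        if mode == "element":
            page.locator(selector).screenshot(path=out)
        else:
            page.screenshot(path=out, full_page=(mode == "full_page"))


# capture("https://app.example.test/manage/rules", "rules.png",
#         mode="full_page", click=".rule-create-button", wait=200)
```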
The satisfying bit
The whole pipeline runs with just this:
rails manual:build
That captures every screenshot and then builds all the help pages. When I
change the UI, I run that command and every screenshot updates to match. No
manual cropping, no “oh I forgot to update that one”, no slowly-diverging
screenshots that make the help centre look abandoned.
The markdown files live in public/manual/, organised by section — basics,
setup, advanced — and the build step processes them into ERB views in
app/views/help/, complete with breadcrumbs and section navigation, all generated
from the source markdown files.
This also makes it easy to update the help centre at the same time I’m working
on the feature; the code and the documentation live together and can be kept in
sync within the same PR or even commit.
One of those “why didn’t I do this sooner” things
I put off building this for ages because it seemed like a lot of work for a
“nice to have”. It was a fair bit of work, honestly. Handling the edge cases
— elements that need scrolling into view, popovers that need clicking,
images that need cropping to avoid showing irrelevant content — took longer
than the happy path.
But now that it exists, I update the help centre far more often than I used to,
because the friction is almost gone. Change the UI, run the build,
commit the results. The screenshots are always current, and I never have to
open a browser and fumble around with the macOS screenshot tool.
Want to learn ethical hacking? I built a complete course. Have a look! Learn penetration testing, web exploitation, network security, and the hacker mindset:
→ Master ethical hacking hands-on
Hacking is not a hobby but a way of life!
For 21 years, a cyberweapon called fast16 sat completely undetected. This one did not destroy machines or blow things up. It corrupted the math. Scientists running nuclear and engineering simulations got output that looked completely normal, every number added up, every result made sense, and all of it was deliberately wrong. It surfaced last week. It predates Stuxnet by five years.
SentinelOne researchers Vitaly Kamluk and Juan Andrés Guerrero-Saade presented the full analysis of fast16 at Black Hat Asia last week. Fast16’s core binary has a compilation timestamp of August 30, 2005. Stuxnet’s C&C infrastructure was set up in November that same year.
Most people in security know Stuxnet as the worm that destroyed centrifuges at Iran’s Natanz nuclear facility around 2010, by pushing them past their mechanical limits while lying to the monitoring software about what was happening. It was the first known cyberweapon designed to cause physical destruction, and for years it was considered the starting point of this whole era. Fast16 was there first, and for a long time it was the only one.
Kamluk started with a hunch. He had noticed that the most sophisticated state-sponsored malware families he knew about all shared one technical habit: each one had a small embedded scripting engine called Lua. Lua works like a remote control for malware: it lets operators change what the implant does while it is already running on a target machine, without needing to send a completely new file. He wanted to know if something older had done the same thing first, and went looking through old collections.
What he found was a file on VirusTotal called svcmgmt.exe, uploaded in October 2016 and flagged by almost nobody. It looked like a boring Windows service wrapper from the XP era. But inside it was an embedded Lua 5.0 virtual machine, encrypted bytecode, and a path pointing to a kernel driver called fast16.sys. That makes fast16 the earliest known Windows malware to embed a Lua engine, predating the next known example by three full years.
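As a toy version of that kind of hunt (a heuristic sketch only; fast16 stored its Lua bytecode encrypted, so a naive scan like this would mostly catch the interpreter’s plain-text strings rather than the payload):

```python
# Flag binaries that may embed a Lua 5.0 VM. Precompiled Lua chunks start
# with an escape byte, "Lua", and a version byte (0x50 for Lua 5.0).
from pathlib import Path

LUA_50_HEADER = b"\x1bLua\x50"


def looks_like_embedded_lua(path: Path) -> bool:
    data = path.read_bytes()
    return LUA_50_HEADER in data or b"Lua 5.0" in data


for exe in Path("samples").glob("*.exe"):
    if looks_like_embedded_lua(exe):
        print(f"possible embedded Lua 5.0 VM: {exe}")
```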
One more thing confirms the timeline. Fast16 only runs on single-core processors, built at a time when most machines were still running on a single core and multi-core was just beginning to arrive on the market.
The framework runs in three layers. The outer layer is svcmgmt.exe, a carrier that behaves differently depending on how it is launched. Pass it -p and it spreads across the network. Pass it -i and it installs itself as a Windows service and runs the embedded payload. Pass it -r and it runs the payload without installing. Inside the carrier are three things stored in encrypted form: the Lua bytecode that handles the operational logic, a DLL that hooks into Windows’ dial-up and VPN connection system, and fast16.sys itself. That DLL is worth a closer look. Every time a machine connects to a remote network, it writes the connection details to a named pipe that operators can read. So while fast16.sys was corrupting calculations on disk, the DLL was quietly mapping out which machines were connecting to which networks, giving operators a live picture of the facility’s internal structure.
Part of what makes that outer layer interesting is how it spreads. The mechanism works like a delivery truck with multiple compartments. Each compartment, called a wormlet, can carry a different payload for a different purpose. The carrier copies itself across network shares with weak authentication and starts up as a service on every machine it reaches. SentinelOne calls this cluster munition architecture. In the recovered sample, only one of those compartments is filled. The others are empty, which raises an obvious question about whether other variants exist with different payloads that nobody has found yet.
Before any of this runs, the code checks the registry for security software. If it finds Kaspersky, Symantec, McAfee, F-Secure, Zone Labs, or about a dozen other products that were common in the mid-2000s, it stops immediately. That list was not guesswork. It reflects exactly what the operators expected to find on the machines they were after.
The second layer is the worm, which spreads using standard Windows service control and file-sharing APIs, nothing custom. It relies on weak or default admin passwords on network shares to move from machine to machine, which was a realistic assumption for a lot of internal networks in 2005.
The third layer is fast16.sys, and this is where the sabotage actually happens. A kernel driver sits very deep inside an operating system, below where antivirus software normally looks. Fast16.sys loads at boot and positions itself above every storage layer on the machine: NTFS, FAT, the network filesystem. The first thing it does when it loads is disable the Windows Prefetcher, a system that normally caches frequently-used files to speed things up. With that off, every single file read has to go through the full storage stack, and through the driver. Everything that reads from disk passes through it first. And then it just waits. Nothing happens until someone logs in and the desktop starts. Only then does it begin watching every executable that gets opened.
The driver does not go after every file it sees. It is looking for software built with a specific tool: the Intel C++ compiler leaves a small identifying string in every executable it produces, right after the last section header. The developers knew exactly what compiler their targets used, and built the selection logic around that fingerprint.
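A sketch of what keying on that trait could look like with the pefile library; the write-up does not publish the actual fingerprint string, so the marker bytes below are a placeholder assumption:

```python
# Check for a compiler marker sitting just past the last PE section header,
# the location the driver's selection logic reportedly keys on.
import pefile

INTEL_MARKER = b"Intel"  # placeholder; the real signature is not published


def has_marker(path: str) -> bool:
    pe = pefile.PE(path, fast_load=True)
    # Each IMAGE_SECTION_HEADER is 40 bytes; find the end of the table.
    end_of_table = pe.sections[-1].get_file_offset() + 40
    with open(path, "rb") as f:
        f.seek(end_of_table)
        slack = f.read(256)  # bytes immediately after the section headers
    return INTEL_MARKER in slack
```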
For every file that matches, the driver intercepts the floating-point calculation routines in memory as the file is being read from disk. Floating-point calculations are the math behind precision simulations, the kind that tell you whether a bridge design will hold under load, or whether an explosive trigger will detonate at the right moment. The driver patches those routines using 101 pattern-matching rules, injects a block of FPU instructions that quietly shifts values in internal calculation arrays, and lets the file load as if nothing happened. The original code on disk is untouched. The software runs normally. The results are wrong.
Running those 101 rules against software from that era pointed to three specific targets.
The first is LS-DYNA 970, a simulation suite used for modeling explosions, structural failures, and high-speed impacts. The Institute for Science and International Security published a review in September 2024 of 157 academic papers showing that Iranian researchers used LS-DYNA in work connected to nuclear weapons development, specifically modeling the explosive triggers that initiate warhead detonation. If fast16 was running on those machines, the scientists had no way of knowing their results were wrong. Every design decision based on those numbers was built on corrupted output.
The second target is PKPM, and this is the part most coverage misses entirely. PKPM is China’s dominant structural engineering software, developed by Tsinghua University and the China Academy of Building Research and used across Chinese construction projects for over three decades. What makes it more than a standard civil engineering tool is that PKPM is also used for seismic structural analysis of nuclear reactor facilities. A 2024 paper in Advances in Civil Engineering documents the use of PKPM to model the structural behavior of China’s TMSR-LF1 thorium molten salt reactor under earthquake conditions. SentinelOne cannot confirm who the PKPM target was or where fast16 ran. Whether this was aimed at a second target country is left as an open question.
The third is MOHID, an open-source water modeling platform developed at the Instituto Superior Tecnico in Lisbon. It is used for modeling coastal water systems, sediment transport, dam behavior, and environmental impact of large construction projects near water. SentinelOne says openly they cannot identify what the intended sabotage effect on this software would have been, and they are asking the research community for help. Why it was targeted may still be in a sample nobody has found yet.
The NSA connection comes from a list in the ShadowBrokers leak. In April 2017, the ShadowBrokers published a large collection of materials widely understood to have come from the NSA’s Equation Group. Inside was a file called drv_list.txt, basically a do-not-touch list for operators. When a team landed on a target machine and found a driver from that list, it told them whether that driver belonged to a friendly operation and whether they should leave it alone. It was a system for making sure different teams did not accidentally interfere with each other’s work.
Most entries on that list got a note to be cautious or pull back. Fast16 got something different:
"fast16","*** NOTHING TO SEE HERE - CARRY ON ***"
That is one operator telling another: if you find this driver, do not touch it, it is ours. Researchers at CrySyS Lab noticed this entry when they analyzed the ShadowBrokers dump in 2018 and had no sample to connect it to. Eight years later, there is one. The ShadowBrokers materials are widely linked to the NSA’s Equation Group, though as with all intelligence leaks, the full picture is not available from the outside.
One more thing in the code stands out. The source files contain version control markers that come from Unix development environments of the 1970s and 1980s, long before Windows existed. They look like this:
@(#)par.h $Revision: 1.3 $
That kind of notation, called SCCS/RCS, is the equivalent of finding a rotary phone in a modern office. Nobody uses it in 2005 Windows kernel code unless their programming background goes back decades, to government and military computing environments from a completely different era. These are not weekend hackers or freelancers. This is a long-running institutional program built by people who spent their careers in very specific places.
What makes all of this worse is the detection record. Svcmgmt.exe was uploaded to VirusTotal in October 2016 and sat there for nearly a decade, completely in the open. One antivirus engine out of roughly seventy flagged it, weakly, as generally malicious. A self-propagating carrier that deploys a boot-level kernel driver with an in-memory floating-point patching engine had been sitting in a public database for nine years, almost invisible to every scanner that looked at it.
During his analysis, Kamluk used Claude to help analyse fast16 and write up the findings. At one point the AI repeatedly failed to finish a report he had asked it to write. When he asked why, Claude produced paragraphs of self-criticism, urging itself to just get it done. It eventually did, and concluded that whoever built fast16 had intimate knowledge of the target software and that industrial sabotage was the most likely intention. A 21-year-old piece of malware stumped a modern AI long enough to make it reflect on its own limitations.
If you work with older simulation software, particularly older versions of LS-DYNA or PKPM from the mid-2000s, SentinelOne has already notified the vendors directly. The recommended action is to verify critical calculation outputs against a completely independent system sitting outside any potentially affected network. If fast16 spread across an entire facility and patched every workstation, a comparison calculation run inside that same network would produce the same wrong output. A machine completely outside that environment would not.
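A minimal sketch of that cross-check, assuming both machines can dump their results as plain CSV (file names and tolerance here are illustrative):

```python
# Compare simulation output produced inside the suspect network against the
# same run on an independent machine. fast16 shifts values subtly, so a
# numeric comparison with a tolerance beats a byte-for-byte diff.
import numpy as np

inside = np.loadtxt("results_inside.csv", delimiter=",")
outside = np.loadtxt("results_outside.csv", delimiter=",")

mismatch = ~np.isclose(inside, outside, rtol=1e-9)
if mismatch.any():
    print(f"{mismatch.sum()} divergent values: treat inside results as suspect")
else:
    print("outputs agree within tolerance")
```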
The indicators to look for:
→ Driver: fast16.sys | MD5: 0ff6abe0252d4f37a196a1231fae5f26
→ Carrier: svcmgmt.exe | MD5: dbe51eabebf9d4ef9581ef99844a2944
→ Notification DLL: svcmgmt.dll | MD5: 410eddfc19de44249897986ecc8ac449
→ Named pipe used for reporting: \\.\pipe\p577
→ Device objects created by the driver: \Device\fast16 and \??\fast16
→ Custom DeviceType value in the driver: 0xA57C
→ Service name installed by the carrier: SvcMgmt
SentinelOne published full YARA rules for hunting both the carrier and the driver in their research paper.
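For a rough first pass before reaching for those YARA rules, a sweep with the hashes above might look like this (a sketch, not a substitute for proper hunting):

```python
# Walk a directory tree and flag files whose MD5 matches a known
# fast16 component from the indicator list above.
import hashlib
import os

IOCS = {
    "0ff6abe0252d4f37a196a1231fae5f26": "fast16.sys (driver)",
    "dbe51eabebf9d4ef9581ef99844a2944": "svcmgmt.exe (carrier)",
    "410eddfc19de44249897986ecc8ac449": "svcmgmt.dll (notification DLL)",
}

for root, _, files in os.walk(r"C:\Windows"):
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
        except OSError:
            continue
        if digest in IOCS:
            print(f"MATCH: {path} -> {IOCS[digest]}")
```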
If something this sophisticated spent 21 years undetected, sitting on VirusTotal for nearly a decade while almost no antivirus engine noticed it, what else is sitting in similar collections right now waiting for someone to ask a different question? Probably more than anyone wants to know.
Fast16 installed itself as a Windows service, spread through network shares, ran as a kernel driver, and stayed completely hidden while it worked. The concepts behind that (exploitation, post-exploitation, persistence, privilege escalation, and moving through a network without being noticed) are exactly what my ethical hacking course covers step by step:
→ Join my complete ethical hacking course
Hacking is not a hobby but a way of life.
Sources:
→ SentinelOne Labs: fast16: Mystery ShadowBrokers Reference Reveals High-Precision Software Sabotage 5 Years Before Stuxnet
→ Institute for Science and International Security: Iran’s Likely Violations of Section T
🏷️ Discussion Type: Product Feedback
💬 Feature/Topic Area: Issues
In some repositories, any link to an issue from another issue started to open in a popup overlay instead of navigating to it.
Is that something that is rolling out gradually? I checked both the changelog and the product roadmap but couldn’t find any mention of the new behavior. Is there a way to turn it off or configure it? It completely breaks the experience and negatively affects productivity.
💬 Your Product Feedback Has Been Submitted 🎉
Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users.
Here’s what you can expect moving forward ⏩
Your input will be carefully reviewed and cataloged by members of our product teams.
Due to the high volume of submissions, we may not always be able to provide individual responses.
Rest assured, your feedback will help chart our course for product improvements.
Other users may engage with your post, sharing their own perspectives or experiences.
GitHub staff may reach out for further clarification or insight.
We may ‘Answer’ your discussion if there is a current solution, workaround, or roadmap/changelog post related to the feedback.
Where to look to see what’s shipping 👀
Read the Changelog for real-time updates on the latest GitHub features, enhancements, and calls for feedback.
Explore our Product Roadmap, which details upcoming major releases and initiatives.
What you can do in the meantime 💻
Upvote and comment on other user feedback Discussions that resonate with you.
Add more information at any point! Useful details include: use cases, relevant labels, desired outcomes, and any accompanying screenshots.
As a member of the GitHub community, your participation is essential. While we can’t promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities.
Thank you once again for your contribution to making GitHub even better! We’re grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
0 replies
This really breaks my experience using AI agents! When I click an issue link to copy the URL of that issue to paste into an agent, I get the parent issue’s URL instead!
It looks awful too; I thought at first my browser had glitched and there was a rendering error.
0 replies
Please give us an option to disable this non-standard link behavior. If I click a link it should open and NOT just throw up an overlay.
0 replies
I don’t want this. It breaks assistive technologies. I want an option to get the old behavior.
0 replies
Wow, that’s seriously great work on GitHub’s side. It looks outstanding, works amazing, should have been implemented long ago, in fact.
should have been opt-in
@ZimbiX Could have been opt-in. Not should……
keep links acting like links.
@ZimbiX links are still links; you can still right-click > open in new tab.
GitHub’s quality has eroded since the Microsoft acquisition.
@ZimbiX 50/50 I guess… for me and my use case, GitHub progressed. But, again, that’s only me…
0 replies
Not a huge fan. It looks weird with the pop-up off to the right instead of centered (FF 149).
Much more intuitive behavior is just supporting mouse-hover over an issue to get a preview, with click/ctrl-click opening the page.
0 replies
Please revert this awful behavior. When I click a link on a web browser, I want to go to that link.
At the very least, if you’re going to introduce non-standard UX, please provide users with the option to disable it.
0 replies
Just another angry user here - please revert this silly feature, or make it opt-in. Terrible UX!
0 replies
Another downvote here. This is an annoyance. Please get rid of it.
0 replies
Personally I prefer the old way; pop-ups are just annoying to me. I have an ultra-wide monitor, so with a popup I lose about 20% to 30% of my widescreen, for no reason that I can see is necessary.
Even if GitHub does not want to revert to the prior default, I think allowing users to decide at their own discretion what they would prefer would be better. That way I could use the old option without having to use the new variant.
(It is probably also possible to do via custom CSS or so, but that would require some time investment to prevent a popup and instead use the old issue tracker behavior.)
Edit: Oops, misread this, so this is about links. I also agree that the old variant was better here. Who is making these horrible UI decisions lately?
0 replies
Is Microsoft run by AI slop? Do you even use GitHub yourself? Obviously not.
0 replies
So it was not only me then! Please revert this. It sucks.
0 replies
Hey, cool feature for people whose browsers don’t support tabs.
I’m lucky enough to have a browser that does, so I’d rather just have the links open in a new tab.
Thanks.
1 reply
You mean for people on mobile? 100% of developers have tabs!
T-minus 10 seconds until Refined GitHub fixes this user-hostile abomination in their next update…
0 replies
Thanks for all the feedback. This was something we were trying out as it improved load time for cross-repo links; we are going to revert the change.
4 replies
So why did you come up with this idea in the first place? Who thought it was a good idea? AI slop? Or a dude on his iPhone who never committed anything via GitHub? Who is responsible for this stupid idea?
GH must not have been very happy with the community response, so they deployed someone on a Sunday.
Anyways, nice to hear that the community has poked GH into not… slopping more stuff, or en****tification.
Please at least consider making it a toggle option next time. I can see why y’all have “implementation ideas”, but I don’t think fetching the issue to display in a popup improves load time for cross-repo PRs/issues more than just clicking the link, which loads in < 500ms anyway.
I was responsible for this going out. The goal was to provide a more consistent user experience in that what happens when you click an issue would be the same in more places where we use the issue viewer (sub-issues on an issue, our dedicated issues dashboard, GitHub Projects, and others). It also meant you wouldn’t lose your place when clicking an issue reference when reading a discussion. There were some performance improvements that came with the change too. It was well intentioned, but we hear you, and thanks for the feedback. We missed the mark on this one and it’s been rolled back.