In 2023, Raytheon’s president stood at the Paris Air Show and described what it took to restart Stinger missile production. They brought back engineers in their 70s to teach younger workers how to build a missile from paper schematics drawn during the Carter administration. Test equipment had been sitting in warehouses for years. The nose cone still had to be attached by hand, exactly as it was forty years ago.
The Pentagon hadn’t bought a new Stinger in twenty years. Then Russia invaded Ukraine, and suddenly everyone needed them. The production line was shut down. The electronics were obsolete. The seeker component was out of production. An order placed in May 2022 wouldn’t deliver until 2026. Four years. Not because of money. Because the people who knew how to build them retired a decade earlier and nobody replaced them.
I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes put a number on it: ten months of war burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now.
In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. Ukraine was consuming 5,000 to 7,000 rounds per day. Anyone with a calculator could see this wouldn’t work.
By the deadline, Europe delivered about half. Macron called the original promise reckless. An investigation by eleven media outlets across nine countries found actual production capacity was roughly one-third of official EU claims. The million-shell mark wasn’t hit until December 2024, nine months late.
It wasn’t one bottleneck. It was all of them. France had halted domestic propellant production in 2007. Seventeen years of nothing. Europe’s single major TNT producer was in Poland. Germany had two days of ammunition stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The entire continent’s defense industry had been optimized for making small batches of expensive custom products. Nobody planned for volume. Nobody planned for crisis.
The U.S. wasn’t much better. One plant in Scranton, one facility in Iowa for explosive fill, no domestic TNT production since 1986. Billions of investment later, production still hadn’t hit half the target.
This wasn’t an accident. In 1993, the Pentagon told defense CEOs to consolidate or die. Fifty-one major defense contractors collapsed into five. Tactical missile suppliers went from thirteen to three. Shipbuilders from eight to two. The workforce fell from 3.2 million to 1.1 million. A 65% cut.
The ammunition supply chain had single points of failure everywhere. One manufacturer for 155mm shell casings, sitting in Coachella, California, on the San Andreas Fault. One facility in Canada for propellant charges. Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.
Then there’s Fogbank. A classified material used in nuclear warheads. Produced from 1975 to 1989, then the facility was shut down. When the government needed to reproduce it for a warhead life extension program in 2000, they discovered they couldn’t. A GAO report found that almost all staff with production expertise had retired, died, or left the agency. Few records existed.
After spending an additional $69 million and years of reverse engineering, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original had contained an unintentional impurity that was critical to its function. That fact existed nowhere in any document. Only the workers who made the original batch knew it, and they had retired years earlier.
A nuclear weapons program lost the ability to make a material it invented. The knowledge existed only in people, and the people were gone.
I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.
In defense, the substitute was the peace dividend. In software, it’s AI.
I wrote about the talent pipeline collapse before. The hiring numbers and the junior-to-senior problem are documented. So is the comprehension crisis. What I didn’t have was the right historical parallel. Now I do.
And it tells you something the hiring data doesn’t: how long rebuilding actually takes.
Every major defense production ramp-up took three to five years for simple systems. Five to ten for complex ones. Stinger: thirty months minimum from order to delivery. Javelin: four and a half years to less than double production. 155mm shells: four years and still not at target despite five billion dollars invested. France only restarted propellant production in 2024, seventeen years after shutting it down.
Money was never the constraint. Knowledge was. RAND found that 10% of technical skills for submarine design need ten years of on-the-job experience to develop, sometimes following a PhD. Apprenticeships in defense trades take two to four years, with five to eight years to reach supervisory competence.
Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can’t be compressed by throwing money at it. It can’t be compressed by AI either.
A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points. When researchers tried to run a follow-up, a significant share of developers refused to participate if it meant working without AI. They couldn’t imagine going back.
The software industry is in year three of the same optimization. Salesforce said it won’t hire more software engineers in 2025. A LeadDev survey found 54% of engineering leaders believe AI copilots will reduce junior hiring long-term. A CRA survey of university computing departments found 62% reported declining enrollment this year.
I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slow. The industry’s answer is predictable: let AI review AI’s code. I’m not doing that. I’ve reworked our pull request templates instead. Every PR now has to explain what changed, why, what type of change it is, screenshots of before and after. Structured context so the reviewer isn’t guessing. I’m adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.
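For what it’s worth, the shape of the template we landed on looks roughly like this. The field names here are illustrative, not a copy of our internal version:

```
## What changed
<!-- One or two sentences describing the change itself -->

## Why
<!-- Link the ticket and explain the motivation, not just the diff -->

## Type of change
- [ ] Bug fix
- [ ] New feature
- [ ] Refactor
- [ ] Breaking change

## Screenshots (before / after)
<!-- Required for anything user-facing -->
```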
But even that doesn’t solve the deeper problem. The skills you need to be effective now are different. Technical expertise alone isn’t enough anymore. You need people who can take ownership, communicate tradeoffs, push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.
We document everything. Site Books, SDDs, RVS reports, boilerplate modules with full coverage. It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t matter. Maybe the problem stays manageable. I can’t predict the capabilities of models in 2031.
But crises don’t send calendar invites. Nobody expected a full-scale land war in Europe in 2022. The defense industry had thirty years to prepare and didn’t. Even Fogbank had records. They weren’t enough without the people who understood what they meant.
Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong.
It’s Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don’t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn’t transfer to the AI.
It just disappears.
The West already made this mistake once. The bill came due in Ukraine.
I know how this sounds. I know I’ve written about the talent pipeline before. The defense example isn’t about repeating the argument. It’s about showing what happens if the industry’s expectations don’t work out. Stinger, Javelin, Fogbank, a million shells nobody could make. That’s the cost of betting wrong on optimization. We’re making the same bet with software engineering right now.
Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t. The defense industry thought peace would last forever, too.
An amateur just solved a 60-year-old math problem—by asking AI
A ChatGPT AI has proved a conjecture with a method no human had thought of. Experts believe it may have further uses
By Joseph Howlett edited by Lee Billings
Liam Price just cracked a 60-year-old problem that world-class mathematicians have tried and failed to solve. He’s 23 years old and has no advanced mathematics training. What he does have is a ChatGPT Pro subscription, which gives him access to the latest large language models from OpenAI.
Artificial intelligence has recently made headlines for solving a number of “Erdős problems,” conjectures left behind by the prolific mathematician Paul Erdős. But experts have warned that these problems are an imperfect benchmark of artificial intelligence’s mathematical prowess. They range dramatically in both significance and difficulty, and many AI solutions have turned out to be less original than they appeared.
The new solution—which Price got in response to a single prompt to GPT-5.4 Pro and posted on www.erdosproblems.com, a website devoted to the Erdős problems, just over a week ago—is different. The problem it solves has eluded some prominent minds, lending it a certain esteem. And more importantly, the AI seems to have used a totally new method for problems of this kind. It’s too soon to say with certainty, but this LLM-conceived connection may be useful for broader applications—something hard to find among recently touted AI triumphs in math.
“This one is a bit different because people did look at it, and the humans that looked at it just collectively made a slight wrong turn at move one,” says Terence Tao, a mathematician at the University of California, Los Angeles, who has become a prominent scorekeeper for AI’s push into his field. “What’s beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block.”
The question Price solved—or prompted ChatGPT to solve—concerns special sets of whole numbers, where no number in the set can be evenly divided by any other. Erdős called these “primitive sets” because of their connection to similarly indivisible prime numbers.
“A number is prime if it has no other divisors, and this is kind of generalizing that definition from an individual number to a collection of numbers,” says Jared Lichtman, a mathematician at Stanford University. Any set of prime numbers is automatically primitive, because primes have no factors (except themselves and the number one).
Erdős also came up with the Erdős sum, a “score” you can calculate for any primitive set. He showed that this score is always bounded, and conjectured that the maximum, about 1.6, is achieved by the (infinite) set of all prime numbers. Lichtman proved Erdős right as part of his doctoral thesis in 2022.
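The article never writes the score down. For the record (this comes from the published literature, not the article itself), for a primitive set A the Erdős sum is:

```latex
% The Erdős sum of a primitive set A:
%   Erdős (1935) proved f(A) converges for every primitive set A;
%   Lichtman (2022) proved the conjectured maximum is attained by the primes,
%   with f(P) \approx 1.6366.
f(A) = \sum_{a \in A} \frac{1}{a \log a}
```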
Erdős also noticed that the score drops if all of a set’s numbers are large—the larger the numbers, the lower the score. He guessed that the lowest this score could be was exactly one, a limit that the score would approach as the set’s numbers approached infinity. Lichtman tried to prove this, too, but got stuck like everyone else before him.
Price wasn’t aware of this history when he entered the problem into ChatGPT on an idle Monday afternoon. “I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI and seeing what it can come up with,” he says. “And it came up with what looked like a right solution.”
He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge. The duo had jump-started the AI-for-Erdős craze late last year by prompting a free version of ChatGPT with open problems chosen at random from the Erdős problems website. (An AI researcher subsequently gifted them each a ChatGPT Pro subscription to encourage their “vibe mathing.”)
Reviewing Price’s message, Barreto realized what they had was special, and experts whom he notified quickly took notice.
“There was kind of a standard sequence of moves that everyone who worked on the problem previously started by doing,” Tao says. The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.
“The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight.
More importantly, they already see other potential applications of the AI’s cognitive leap. “We have discovered a new way to think about large numbers and their anatomy,” Tao says. “It’s a nice achievement. I think the jury is still out on the long-term significance.”
Lichtman is hopeful because ChatGPT’s discovery validates a sense he’s had since graduate school. “I had the intuition that these problems were kind of clustered together and they had some kind of unifying feel to them,” he says. “And this new method is really confirming that intuition.”
After almost three years of 6.x series kernels, Linux 7.0 is finally here.
That means it’s also time for another Asahi progress report!
Automate Everything
Users of alternate distros and keen-eyed individuals may have noticed some
changes to the Asahi Installer. After almost two years, we finally got around
to pushing an updated version of the installer to the CDN! Two years is a long
time to go between updates, so what took so long?
Our upstream installer package is a little bit of a Rube-Goldberg machine. The
bulk of the installer is written in Python, with some small Bash scripts to
bootstrap it. When you run curl | sh, you’re actually downloading the bootstrap
script, which then fetches the actual installer bundle from our CDN. This bundle
consists of a Python interpreter and very stripped down standard library, a built
m1n1 stage 1 binary, and the installer itself.
Until recently, cutting an installer release meant:
Tagging the installer repo
Downloading a macOS Python build
Building m1n1 from a blessed commit
Bundling Python, m1n1 and the installer
Uploading the installer bundle to the CDN
Updating the CDN’s version flag file
This process was time-consuming and required administrative access to the CDN.
As a result, we neglected to push installer updates for quite some time; the
previous installer tag was from June 2024! As upstreaming work has progressed
and Devicetree bindings churned, this became rather problematic for our friends
maintaining distros.
The Asahi Installer offers a UEFI-only installation option. This option
shrinks macOS and only installs what is necessary to boot a UEFI executable,
meaning m1n1 stage 1, the Devicetrees, and U-Boot. This allows users to
boot from live media with Asahi support, such as specialised Gentoo Asahi
LiveCD images.
Since the Devicetrees on a fresh UEFI-only install come from the installer
bundle itself, a kernel will only successfully boot when the installer-bundled
Devicetrees match what that kernel expects to see. The two have gotten rather
out of sync as time has gone on due to Devicetree bindings changing
as a result of the upstreaming process. This situation finally came to a
head with kernel 6.18, which required numerous changes to both m1n1 and
the Devicetree bindings for the Apple USB subsystem. This made booting
kernel 6.18 and above from live media impossible. Oops.
Rather than go through the trouble of manually pushing out another update,
we took the opportunity to build some automation and solve this problem
permanently.
We moved the manifest of installable images into the asahi-installer-data repo,
allowing us to update it independently of the installer codebase.
On top of this, we also now deploy
the installer using GitHub workflows. Going forward, every push to the main
branch of asahi-installer will
automatically build the installer and upload it to https://alx.sh/dev.
Every tag pushed to GitHub will do the same for https://alx.sh.
The latest version, 0.8.0, bumps the bundled m1n1 stage 1 binary to
version 1.5.2, introduces installer support for the Mac Pro, and adds
a firmware update mode which ties in nicely with…
How do you overengineer a light sensor?
Basically everything with a screen now comes with some sort of light
sensor. This is usually to enable automatic brightness adjustment based
on ambient conditions. It’s a very convenient feature in devices like
smartphones, where a user may walk outside and find their display too
dim to see. The cheapest versions of this use a simple photoresistor.
This is fine if the goal is just to change brightness, but brightness
is not the only thing affected by ambient lighting conditions. What about
colour rendering?
Apple’s devices have had the True Tone display feature for quite some time.
This works by measuring both the brightness and the colour characteristics
of the environment’s ambient lighting. This data is then used to apply
brightness and colour transformations to the display to ensure that it is
always displaying content as accurately as possible. This is most noticeable
in environments with lighting fixtures that have a low Colour Rendering
Index, such as fluorescent tubes or cheap cool white LEDs. The devices that
enable this, ambient light sensors, are usually little ICs that connect
to the system over I2C or other industry-standard bus. This is
fine for basic applications, but this is Apple. There are some other considerations
to be had:
The light sensor is doing stuff whenever the screen is on, so processing its
output should be as efficient as possible
The light sensor should be able to be calibrated for maximum accuracy
There are multiple models of light sensor in use, and the OS should not
have to care too much about that
The light sensor has to have a three letter acronym like every other piece
of hardware on this platform (ALS)
Naturally, this sounds like a job for the Always-On Processor (AOP)!
We’ve had a working AOP+ALS driver set for a while thanks to chaos_princess.
However, the raw data the AOP reports back from the ALS is rather inaccurate without
calibration. That calibration is a binary blob that must be uploaded to the AOP
at runtime. It is essentially firmware. Since we cannot redistribute Apple’s
binaries, it must be retrieved from macOS at install time and then stored somewhere the driver
knows to look for it.
To achieve this, the Asahi Installer gathers up all
the firmware it knows we will need in Linux and stores it on the EFI System
Partition it creates. A Dracut module then mounts this to a subdirectory of
/lib/firmware/, where drivers can find it. However, issues arise when we
need to retrieve more firmware from macOS after Asahi Linux has already been
installed. To avoid a repeat of the webcam situation, where users
were required to manually do surgery on their EFI System Partition, chaos_princess
added the ability for the Asahi Installer to automatically update the firmware
package. Starting with ALS, any required firmware updates will be a simple
matter of booting into macOS or macOS Recovery, re-running the Asahi Installer, and following
the prompts.
To enable ALS support (and to do firmware upgrades in the future), follow these steps:
Ensure you are running version 6.19 or above of the Asahi kernel
Ensure your distro ships iio-sensor-proxy as a dependency of your DE (Fedora
Asahi Remix does this)
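Once those prerequisites are met, you can sanity-check the sensor from userspace. Below is a minimal sketch using the python-dbus bindings against iio-sensor-proxy’s documented net.hadess.SensorProxy D-Bus interface; it is an illustration, not part of the official Asahi instructions:

```python
# Minimal ambient-light check via iio-sensor-proxy's D-Bus interface.
# Assumes the python-dbus package and a running iio-sensor-proxy daemon.
import time
import dbus

bus = dbus.SystemBus()
sensor = bus.get_object('net.hadess.SensorProxy', '/net/hadess/SensorProxy')
props = dbus.Interface(sensor, 'org.freedesktop.DBus.Properties')
proxy = dbus.Interface(sensor, 'net.hadess.SensorProxy')

if not props.Get('net.hadess.SensorProxy', 'HasAmbientLight'):
    raise SystemExit('no ambient light sensor detected')

proxy.ClaimLight()   # ask the daemon to start polling the ALS
time.sleep(1)        # give it a moment to deliver a first reading
level = props.Get('net.hadess.SensorProxy', 'LightLevel')
unit = props.Get('net.hadess.SensorProxy', 'LightLevelUnit')
print(f'ambient light: {level} {unit}')
proxy.ReleaseLight()
```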
What would you do if your organization had used a domain name for 27 years, and the registrar holding the domain seized it without any advance warning? All email and websites went dark. The company’s tech support spent four days telling you to “Just wait, we are working on it.” On the fourth day, the company informed you that someone else has the domain now, and it is no longer yours.
Read on. This crazy story happened exactly one week ago.
My friend Lee Landis is a partner in Flagstream Technologies, a local IT firm in Lancaster, PA. Last Saturday afternoon one of his client’s domains vanished from his GoDaddy account.
Lee is one of the most competent IT guys I know. The GoDaddy account had dual two-factor authentication enabled, requiring both an email code and an authentication app code to log in. The domain itself had ownership protection turned on. The audit log just said “Transfer to Another GoDaddy Account” by an “Internal User” with “Change Validated: No.”
Some names have been changed
Some names and the domain itself have been changed because people wanted to remain anonymous. The pattern of the domain names mirrors the actual mistake, so the explanation still makes sense. Every fact in this post is true. Lee has hard evidence for every one of them.
GoDaddy emailed Flagstream at 1:39pm that an account recovery had been requested. Three minutes later, the transfer was initiated. Four minutes later, it was complete. On a Saturday afternoon.
Everything at the impacted organization went offline because GoDaddy reset the DNS zone to default when they moved the domain into the new account. Same nameservers. Empty DNS zone file.
Lee’s client lost their website and email for the next four days.
By the numbers:
27 years: domain in active use.
32: calls to GoDaddy.
9.6 hours: on the phone with GoDaddy.
17: emails to GoDaddy. Zero callbacks.
Domain and account were fully protected.
The domain had the “Full Domain Privacy and Protection” security product that GoDaddy sells. Dual two-factor on the account. None of it mattered. The transfer was done by an “Internal User” inside GoDaddy.
The domain was HELPNETWORKINC.ORG. The real domain name has been changed because the organization wanted to remain anonymous. It belongs to a national organization with twenty locations across the United States. The domain has been in active use for 27 years. Each chapter runs its website and email on a subdomain of that one parent domain. When HELPNETWORKINC.ORG went dark, every chapter went dark with it.
Thirty-two calls. 9.6 hours on the phone. Zero callbacks.
Lee called GoDaddy on Sunday. They confirmed the domain was no longer in his account but could not say where it went, citing privacy concerns. They told him to email undo@godaddy.com. He did, and never received any kind of response. Understandably, Lee did not feel this reflected the appropriate level of urgency. He asked for a supervisor, who was even less helpful. Lee was not happy. He may have said some hurtful things to GoDaddy’s support personnel during this call. That first call lasted 2 hours, 33 minutes, and 14 seconds.
On Monday morning, Lee and a coworker started working in earnest on this issue because there was still no update from GoDaddy. Calling in yielded a different agent who told Lee to email transferdisputes@godaddy.com instead. By Tuesday the address had changed again to artreview@godaddy.com. The instructions shifted by the day. It seemed like every GoDaddy tech support person had a slightly different recommendation.
The one thing that stayed consistent was the message: “Just wait a day or two. We are working on it. Why do you think this is so urgent?”
One of the most frustrating parts of the process was that all official communication to and from GoDaddy about this issue went through generically named email accounts. There should have been a named individual in charge of managing and communicating about the issue. Instead, there were only generic addresses that seemed to change on a daily basis.
Every call generated a fresh case number. Lee lost count of the total number of cases. A few of the cases are 01368489. 894760. 01376819. 01373017. 01376804. 01373134. 01370012. None of them tied together on GoDaddy’s side. Every escalation started from zero. These are actual case numbers, in case anyone at GoDaddy wants to check into this.
I posted on X to see if anyone I knew at GoDaddy could escalate.
Can any of my GoDaddy friends help? A good friend of mine had a domain taken. My friend is very competent. Domain ownership protection was on. Owner did not get any notices. Audit log looks fishy. Phone/email support telling them to wait. Did a GoDaddy employee take it? pic.twitter.com/OWcJIalWcF— Austin Ginder (@austinginder) April 20, 2026
My friend Courtney Robertson, who works at GoDaddy, reposted it and started escalating internally on her own time. Thank you, Courtney. GoDaddy has a lot of great people like her. That part is not in question. What GoDaddy does not have is a way to actually fix a mistake once one has been made. Tickets pile up. Phone calls reset. Every escalation is a new person reading the case from scratch. The thing you actually need solved drifts between queues.
And there was no real way to dispute it.
While Lee was on the phone, his colleague was on a different phone trying to file a Transfer Dispute. GoDaddy directed him to cas.godaddy.com/Form/TransferDispute. He filed a dispute and received this message, which he captured via a screenshot.
Lee and his colleagues worked diligently to challenge the transfer. They supplied the correct name of the person listed on the domain. They supplied that person’s driver’s license as required. They also supplied the correct business documentation as listed in GoDaddy’s own requirements. Every time they submitted a request, they were told they would hear back in 48 to 72 hours.
GoDaddy FINALLY responds with a SHOCKING statement
Tuesday afternoon, after four days of waiting, Flagstream finally got an official email response back from GoDaddy.
GoDaddy’s reply to Lee
After investigating the domain name(s) in question, we have determined that the registrant of the domain name(s) provided the necessary documentation to initiate a change of account. … GoDaddy now considers this matter closed.
That was it. No explanation of what documentation. The suggested next steps were three links. A WHOIS lookup. ICANN arbitration providers. A page about getting a lawyer involved to represent you in litigation.
Flagstream migrates client to new domain
Once GoDaddy declared the matter closed, Flagstream began migrating the client to a new domain. New email addresses. New website addresses. Coordinating with various teams throughout the night to change everything over to a new domain.
Switching to a new domain is a massive amount of work, and it leaves a lot of lingering problems behind because there is no control over the original domain.
Every email address that exists out in the world is now wrong. You have to tell everyone the new address. If they try the old one, it bounces.
Every piece of marketing material that references the old domain is now incorrect. There is no way to forward anything to the new domain.
All of the SEO is gone. You are starting an online presence from scratch.
Then a stranger found the domain in her account.
Wednesday morning Susan (not her real name), 2,000 miles away from the client’s headquarters, noticed something odd. Susan had been working to reclaim a totally different domain used by a former employee. When she looked closely at her GoDaddy account, the domain in it wasn’t the one she had requested. She made a few phone calls because she knew this was a problem, and eventually got connected with Flagstream. Working with Susan, they ran a GoDaddy account-to-account transfer and put the domain back where it belonged. DNS came back up while Lee was still typing the email telling me it was over. The entire process of reclaiming the domain took less than 5 minutes.
Once the domain was back and DNS was working, Flagstream started the arduous task of reverting everything that they had done the day before. They switched email and websites back to the original domain, once again working through the night to get everything fixed.
The resolution for this problem did not come from GoDaddy support. It did not come from the dispute team. It did not come from the Office of the CEO team. It came from a stranger who accidentally ended up with the domain and was smart and honest enough to start calling around because she knew something wasn’t right.
Susan is really the hero of this entire story. Without her, Flagstream would still have no idea what happened to this domain. Lawyers would have gotten involved, but it would probably be months until anything was resolved.
Timeline of events
Apr 18, 1:39pm
GoDaddy emails Flagstream that an Account Recovery has been requested for the account.
Apr 18, 1:42pm
Transfer initiated by GoDaddy Internal User. Three minutes after the recovery notice.
Apr 18, 1:43pm
Transfer completed. Change Validated is listed as “No”. Website and email go dark across the entire organization.
Apr 19
Lee discovers the domain is gone. GoDaddy says email undo@godaddy.com and wait.
Apr 20
Flagstream team starts calling and emailing GoDaddy for updates. GoDaddy now says email transferdisputes@godaddy.com. Austin posts on X. Courtney Robertson routes the case to the Office of the CEO team.
Apr 21
Flagstream files multiple Transfer Dispute cases with the requested documentation. Every submission is met with a 48 to 72 hour response window. GoDaddy emails Lee that the matter is closed and the domain belongs to someone else. Flagstream starts the painful process of migrating the organization to a new domain so they can function.
Apr 22
Susan notices the wrong domain in her account and calls Lee. Account-to-account transfer brings it home.
Then it got crazier. GoDaddy approved the transfer with zero documents.
The organization on the receiving end of the transfer was a regional chapter of the same network. Susan, the executive assistant, had emailed GoDaddy two weeks earlier asking to recover a different domain. HELPNETWORKLOCAL.ORG. Not HELPNETWORKINC.ORG.
Flagstream spent some time talking to Susan to figure out exactly how the domain had been transferred into her account. Had she unintentionally supplied all of the correct documentation? No. It turned out that GoDaddy had approved the transfer without her supplying ANY documentation.
Her email signature happened to reference her chapter’s website at a subdomain of HELPNETWORKINC.ORG. GoDaddy’s recovery team apparently looked at the signature, saw the parent domain, and transferred that domain into her account.
GoDaddy sent Susan a link to upload supporting documents. The link expired before she got around to using it. She emailed back requesting a new link so she could upload the required documentation. However, before the new link arrived, she received an email saying the domain transfer had been approved.
Susan never submitted a single document. Not for the domain she was actually trying to recover, and certainly not for the one GoDaddy ended up giving her. GoDaddy approved the change of account, transferred a 27-year-old non-profit’s domain into a stranger’s account, and “considered the matter closed” without requiring any documentation.
This is a huge security issue.
If Susan had been a bad actor, she could have intercepted email. She could have used that email to reset passwords, get MFA codes, launch phishing attacks, etc. She could have put up a new website with malware on it, redirected payments on the website, etc.
When the domain first disappeared and Flagstream was unable to obtain any information about who had it, they feared the worst. Flagstream and the impacted client started drawing up a plan to protect against the threats mentioned above, a huge undertaking for an organization of this size. Essentially, every user across the entire organization needed to log into every important website and make sure the compromised domain was removed from the account: bank websites, Amazon, the IRS, payroll, Dropbox, email accounts, and, ironically enough, GoDaddy accounts.
It is outrageous that Susan was able to obtain this domain without supplying any documentation. Everyone was lucky it was Susan that got this domain.
GoDaddy: please follow up with Flagstream.
This is not acceptable.
A GoDaddy employee transferred a 27-year-old domain out of a paying customer’s account with no validation. With zero documentation submitted by the recipient. When the customer disputed with legitimate documentation, every submission was met with “We will respond in 48 to 72 hours.” After four days, GoDaddy claimed the domain belonged to someone else and the case was closed. The fix came from the recipient of the mistake, not from GoDaddy, despite 9.6 hours of phone conversations.
To anyone at GoDaddy reading this: please follow up with Lee Landis at Flagstream Technologies and make this right. An apology is probably in order. So is an internal review of how the transfer team validates documentation, including how a transfer can be approved with zero documentation. Lee would like a clear answer on how this happened. He doesn’t want an email from a generic GoDaddy account. He wants a real person to call or email him, and that person needs to leave an email address and phone number in case Lee has follow-up questions.
Even disclosing this to GoDaddy was broken.
Before publishing this post, I wanted to share the findings with GoDaddy’s security team directly. I emailed security@godaddy.com with the full report. The message bounced.
GoDaddy’s auto-reply to security@godaddy.com
A custom mail flow rule created by an admin at secureservernet.onmicrosoft.com has blocked your message. We hope this message finds you well. This email mailbox is no longer monitored. To address your needs, we have outlined two popular options for you: 1: To submit an abuse report, please visit our Abuse Reporting Form. 2: If you are looking to submit a vulnerability, please visit our bounty program https://hackerone.com/godaddy-vdp.
So I filed the same report through HackerOne instead, report #3696718.
This is the same pattern that played out across the four-day outage. The official channel does not work. The alternative path requires knowing to bypass it. Most honest people who notice a security issue are not going to have a HackerOne account. They send an email. How is it that GoDaddy doesn’t have a public security disclosure email address?
Whether the original transfer was a single agent’s mistake or a flaw in the recovery workflow, it is still a security issue. And there is no clean path from “I found something” to “a human at GoDaddy is looking at it.”
The only way to get GoDaddy’s attention is to leave.
Lee is upset about the four days of stress and lost productivity across the impacted organization. But his bigger concern is what comes next. Apparently there is no way to protect against this threat if your domain is hosted at GoDaddy. In addition, it seems like there is no efficient way to contest the GoDaddy transfer.
Flagstream will most likely migrate every one of their domains off GoDaddy. That is the only protection they have left, and the only escalation GoDaddy seems to respond to.
Are you at risk?
Is your domain hosted on GoDaddy? What would you do if the domain disappeared out of your GoDaddy account and your entire business went dark?
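Nothing fully protects you when the registrar itself can move the domain, but you can at least shrink the discovery window from days to minutes. Here is a minimal sketch of an external DNS watchdog, assuming the third-party dnspython package; the domain and record values are placeholders. In this story the nameservers stayed the same while the zone was emptied, so it checks real records, not just NS delegation. Run it on a schedule and alert on any change:

```python
# DNS watchdog for registrar-side surprises: compares live records against
# known-good values. An emptied zone or changed delegation shows up on the
# next run. Requires: pip install dnspython
import dns.resolver

DOMAIN = "example.org"                                  # your domain here
CHECKS = {
    "NS": {"ns1.example.net.", "ns2.example.net."},     # known-good values
    "MX": {"10 mail.example.org."},
    "A":  {"203.0.113.10"},
}

def record_ok(domain: str, rtype: str, expected: set) -> bool:
    try:
        answer = dns.resolver.resolve(domain, rtype)
    except Exception as exc:   # NXDOMAIN, empty zone, timeout, ...
        print(f"ALERT: {rtype} lookup for {domain} failed: {exc}")
        return False
    current = {rr.to_text() for rr in answer}
    if current != expected:
        print(f"ALERT: {rtype} for {domain} changed to {sorted(current)}")
        return False
    return True

if __name__ == "__main__":
    for rtype, expected in CHECKS.items():
        record_ok(DOMAIN, rtype, expected)
```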
Most people think EU Age Control apps are about identifying users. The sales pitch is all zero-knowledge proofs of age. You prove you’re over 18 without the site learning your name, exact birthday or anything that can link one proof to another.
Before going further, it is worth laying out three separate problems this post is worried about. They are easy to blur but they are very different. First: the DSA fallback — platforms don’t actually need the privacy-preserving wallet; the rules let them use a normal KYC provider instead. Second: attestation lock-in — Google and Apple decide what software runs on the phones that can use this system. Third: the system itself is weaker than advertised — the cryptography the reference app actually ships is not the cryptography the marketing describes, unlinkability depends on wallet behavior not math, and there is a whole class of relay attacks the protocol cannot stop. When commentators wave away “the hacks,” they usually mean bugs in the mock-up.
It is also worth asking when this app started being described as “just a reference implementation” or a “white-label demo.” The README tells a story. On 12 May 2025, a disclaimer appeared framing the project as an “Age Verification Solution Toolbox” that Member States are expected to build on. On 31 July 2025, further softening was added — language explicitly calling the app a white-label reference for countries to adapt — and in the exact same edit, the earlier, blunter disclaimer (which said this was an initial version not intended for production) was quietly removed. In any case, it was always presented as a toolbox that countries should adapt into their own apps — so judging the app by itself does not make much sense; what matters is how these techniques are implemented in each country’s verification app. There will be no single EU app, despite what the EU’s honchos say.
The DSA fallback nobody talks about
Big platforms must verify age for certain content. They can use the fancy EU wallet with its privacy features. They can also just plug in a normal KYC provider that scans your full passport, runs liveness checks and sees everything. Which path do you think most companies will actually take when the “privacy-preserving” option requires integrating with systems that barely exist yet across 27 countries?
It’s marketing sleight of hand. They push the privacy angle hard while the rules quietly allow the non-private fallback. The privacy part is optional. (I think they mainly know the apps will not be ready by the end of the year).
KYC companies have been avoiding real electronic IDs for years. I have a Slovak eID chip that’s been in my wallet forever. It has proper cryptographic keys and can prove who I am far more cleanly than a photo of my driver’s license plus video call. Yet almost every KYC provider still does the bitmap and liveness routine. The reason is simple. Integrating with 27 different national eID systems is a nightmare. Maintaining a database of what every country’s physical ID looks like is cheaper and works everywhere. The cryptographic route doesn’t — in practice, not in theory.
So the EU solution only “works” if platforms decide to do all that integration work themselves. Right now the official trusted list has zero production apps. The reference implementation is still half-baked. Believing this turns into clean interoperability across all EU countries by the end of 2026 is wishful thinking.
How verification actually works
The main high-assurance path in the reference app uses an NFC passport. You scan the MRZ (the machine-readable zone at the bottom of the photo page); it yields the keys to read and decrypt the data on the NFC chip. That chip contains signed data including a JPEG photo of the holder. The design calls for a live photo to be taken and matched locally against the chip’s JPEG — this is intended to stop a kid scanning a parent’s passport to get a credential for themselves.
The app is open-source so you can read every line. But changing even one bit would break the hardware attestation once attestation is actually enforced by national deployments. In the current reference code, attestation verification is not wired up on the server side — it is a promise national deployments would need to add. The binary must ultimately match exactly what Google or Apple signed. No GrapheneOS, no custom Linux phones.
Attestation locks it down. It is the same EU that hates these American corporations and wants EU alternatives for everything — yet no one can make a phone usable for age verification without the blessing of Google (or Apple, who does not certify third-party devices for iOS at all). Bought a Huawei phone that does not pass Play Integrity? Sorry. Note: Huawei phones can produce hardware attestation via their factory key chain, but they cannot pass Google’s Play Integrity verdict — the same applies to GrapheneOS, Linux phones, and anything outside the Google blessing. Use a Daylight computer that doesn’t wreck your circadian rhythm? Back to the office.
There is a simpler MRZ-only path in the reference app where you photograph an ID card with no NFC read or face match. Real national apps may not support it, and the reference recommends the high-assurance path. Countries will probably force the chip-based route. It’s a trojan horse to digital ID anyway.
The marketed crypto and the shipped crypto are not the same thing
The public story is built around zero-knowledge proofs. The reference Android app doesn’t actually use zero-knowledge crypto in the flow that runs. It uses an older ISO standard (ISO 18013-5 mdoc with ES256) where each attribute is signed in advance and the wallet reveals only the ones asked for, hiding the rest using salted-digest commitments. A ZK library is pulled in, but nothing in the presentation path ever calls it. So when people cite “ZK age proofs” as the innovation, they are citing something that is in the repo but is not switched on. Whether national apps eventually turn it on is an open question. Today’s reference is plain signatures.
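To make the distinction concrete, here is a toy Python sketch of the salted-digest mechanism the shipped flow actually relies on. It is illustrative only: real mdocs use CBOR structures and a signed Mobile Security Object, and the attribute names below are invented.

```python
# Toy model of mdoc-style selective disclosure: the issuer signs salted
# hashes of every attribute; the wallet later reveals salt+value only for
# the attributes a verifier asks about. No zero-knowledge proofs involved.
import hashlib, json, os

def issue(attrs: dict):
    """Issuer: one random salt per attribute, digest = SHA-256(salt || value).
    In the real format the issuer signs the digest list (signature omitted)."""
    salts = {k: os.urandom(16) for k in attrs}
    digests = {k: hashlib.sha256(salts[k] + json.dumps(v).encode()).hexdigest()
               for k, v in attrs.items()}
    return digests, salts

def present(attrs: dict, salts: dict, requested: list) -> dict:
    """Wallet: disclose salt and value only for the requested attributes."""
    return {k: (salts[k].hex(), attrs[k]) for k in requested}

def verify(digests: dict, disclosed: dict) -> bool:
    """Verifier: recompute each disclosed digest; undisclosed attributes stay
    hidden behind their hashes, which is the whole privacy mechanism here."""
    return all(
        hashlib.sha256(bytes.fromhex(salt) + json.dumps(v).encode()).hexdigest()
        == digests[k]
        for k, (salt, v) in disclosed.items()
    )

attrs = {"age_over_18": True, "nationality": "SK", "birth_date": "2000-01-01"}
digests, salts = issue(attrs)                   # issuer signs `digests`
shown = present(attrs, salts, ["age_over_18"])  # wallet reveals one field
assert verify(digests, shown)                   # verifier learns only this
```

Note that nothing in this scheme stops a verifier from recognizing the same signed credential twice; unlinkability comes only from the wallet spending a fresh pre-signed credential each time, which is exactly the nuance discussed below.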
The cryptography could be solid — zero-knowledge proofs over passport signatures are a real and tractable thing. But the crypto actually shipping in the current reference is the older plain-signature format with disposable-batch unlinkability, not ZK. So when people defend “the math works,” they are defending math that is not turned on. That said, if you use each signed attestation only once, it reveals only that you are over 18 and, from the signature, perhaps which authority issued it. There’s no unique identifier.
What’s private and what isn’t
The overall flow is local-first, but still needs a server to issue credentials. Scanning and initial checks happen on the phone. Because the app is (or would be) attested, the issuing server can be reasonably confident what exact code actually executed. The server verifies the document signatures and issues a signed credential. That credential can then be used to produce a proof of age when talking to websites.
From the verifier’s (say, a porn site or social media platform) point of view it looks unlinkable — as long as the wallet behaves. The design is not “the math guarantees two proofs can’t be correlated.” The design is “the wallet hands out a pile of disposable credentials, uses each one once, then asks for more.” If the wallet obeys that rule, two verifiers see two different signatures and can’t tie them together. If the wallet cheats, or if a proof is replayed, the two verifiers see the same signature bytes, and the linkage is trivial. This is an important nuance — the usual “ZK = math = unlinkable forever” pitch doesn’t apply here. The property holds because the wallet is supposed to rotate credentials, not because the cryptography makes reuse impossible. Real cryptographic unlinkability schemes like BBS+ or CL signatures would produce uncorrelated proofs even on reuse. This is not that.
From the issuer’s point of view — they issue credentials when you present your ID. The issuer doesn’t know what you’ll use the credential for, or how many times you’ll use it — the one-use rule lives inside the wallet, not on the server. So if the wallet is modified, or if proofs are captured and replayed, nobody upstream sees it. Any “rate limit” you might imagine is a limit on how many credentials you mint, not on how many times a credential is used in the wild.
They can of course infer that you are an EU country citizen. But they can’t (under normal wallet behavior) tell which accounts are yours or link your activity across sites.
What about relay attacks?
Here’s a scenario the spec doesn’t really answer. Suppose a child wants to get into an age-gated site. A service pops up — call it Grandma-as-a-Service — that offers to verify on their behalf for a few euros. The child opens the site, gets a QR code or a link, and instead of scanning it themselves they paste it into the proxy service. The proxy forwards it to a real adult somewhere with a real, government-issued wallet on a clean phone. The adult approves. The adult’s wallet produces a cryptographically perfect “over 18” proof. The site sees a valid proof and lets the child in.
Nothing failed. Every signature is real, every attestation is real, the adult really is over 18, the wallet really is running unmodified on a genuine Android. The catch is that the protocol binds the proof to “some wallet somewhere said yes,” not to “the human at this browser right now.” There is no proximity check. The browser-side Digital Credentials API partially closes this — but only when the user verifies on the same phone they’re browsing from. QR codes and deep links, which work across devices, are wide open.
People assume Google’s Play Integrity would stop this. It doesn’t. Play Integrity attests what code is running on what device. It says nothing about who is in front of it or where the device is. In the proxy flow, the adult’s phone is a real phone and every attestation is real. The relay — the web service the child talks to — isn’t being attested; it’s just moving bytes.
And once an adult is enrolled, the resale version gets ugly. The wallet has thirty disposable credentials, refreshed on a short interval. The issuer never sees how those get used. So the proxy operator can reuse each credential across many children; nothing upstream raises an alarm. The “one-time use” rule is an honor-system rule inside the wallet software, not something the issuer can enforce after the fact. This is not a bug that production apps will “fix.” It’s inherited from the shape of the protocol, so it will be present in all 27 national apps.
In any case, this is the trojan horse. Start with “protect the children from porn and scary social media.” Create enough friction that people reach for the convenient attested wallet. The app itself must be attested — which in practice means Google or Apple decide what runs. The credential can be killed by the issuer.
The reference app leaks face photos, although only locally. Twenty-seven countries will each build their own version. With their own privacy bugs.
Then you get the chilling effect. Every controversial site that makes you pull out the wallet creates self-censorship, even if the proof is supposedly anonymous. Governments have a terrible track record protecting this data. Any data. History is full of examples.
(Want to watch porn? Criticize a politician? Are you really going to open the EU country’s ID app to verify that you are over 18 and believe it’s unlinkable ZK proof — even if it really is?)
Later they link it to Digital Euro and everything else. Suddenly a big chunk of your life can be switched off remotely. Didn’t pay a parking ticket on time? Let’s temporarily revoke your credentials — when you can’t log in anywhere, you will come and pay the ticket.
The architecture and politics are the usual control layer with fresher paint. We don’t need revocable digital IDs as the price of entry to the internet. We were doing just fine.
Are the published hacks real?
It’s worth splitting the reported problems into two piles. Pile one: “bugs in the mock-up” — leaked files, unchecked MRZ scans, Chrome-extension demos hitting a placeholder backend. These are fixable and will be fixed per country. Pile two: structural properties that fall out of the protocol itself — no proximity binding, client-side one-time-use, unlinkability that breaks on reuse. These are not bugs. They’ll be present in every national implementation that follows the spec. When commentators wave away “the hacks,” they usually mean pile one. Pile two is what this post is actually about.
There have been several “hacks,” mostly by people who don’t understand how this is supposed to work. Leaving files on disk in the reference app is something that will be fixed, and does not really matter. The reference app will not be used by any country directly — they will have their own bugs. It’s for countries to know how to generate the proofs and stay interoperable. It doesn’t even matter that you can fool it into giving you a test credential, because the primary verification path will be countries’ own eID systems, not the mock-up’s unchecked MRZ scanner.
There was a “hack” that created a custom Chrome extension. That would fail app attestation once attestation is enforced. The MRZ path also does not connect to a real backend, because there’s no real EU-side backend — the registries of valid documents are the competence of individual countries.
I’m 99% sure that even though I consider the EU completely incompetent, these particular mock-up hacks won’t work in production apps. So this “haha, I hacked the app with my Claude Max subscription” doesn’t mean anything. They’re hacking a mock-up showing the use of a library. Yes, Frau Ursula called it the “EU Age Verification app,” but there will not be an EU app — there will be a Slovak app, a Hungarian app, a German app, a Dutch app, a French app…
But Why?
Many of us naturally ask why people want this. I think it’s a mistake to think they don’t. There is demand for this. The internet is scary, parents feel they can’t protect their children from the many bad things happening there, and someone came along to provide a “solution.” It doesn’t matter that the kids, I’m sure, will get around it easily. The clients (the voters) are not the children being protected, but their parents, feeling good.
I think a very good and deep explanation is in my novel Tamers of Entropy. Have a look. It is very cypherpunk/lunarpunk and explains also the psychology behind these dystopias — and a way outside. Plus it’s fun to read. Check it out at tamersofentropy.net. The characters also have Nostr accounts.
Conclusion
The EU’s fancy ZK apps will not be ready. Platforms will use normal KYC providers, AI face age estimators, and other means.
When done according to spec, the age verification app has meaningful privacy properties — the platforms don’t know your identity or link your accounts. But those properties rest on wallet behavior, not cryptographic guarantees. The ZK math that would make unlinkability a hard guarantee is in the repo and not switched on.
The apps will not work unless you have a Google or Apple approved device. Forget Linux, GrapheneOS, Huawei, after-market firmwares. It’s part of the security model.
And relay attacks — children using adult proxies to get into age-gated sites — are not fixable bugs. They are a structural property of the protocol that will ship in all 27 national apps.
The privacy theater hides the wolf. The wolf is still there.
As the global population of photographers swells, so do their digital libraries, leaving everyone with the same question: where and how to share their best work. Flickr was among the first online communities designed to address that dilemma, and it remains one of the best. Still, some demand sweeping overhauls or argue the price isn’t justified.
However, Flickr’s refusal to chase fleeting trends—opting instead for iterative improvements—is actually one of its greatest strengths. And while its annual Pro subscription is on the pricier side, ultimately, the benefits continue to outweigh the costs.
Editor’s Note: This article was written largely as a rebuttal to Matt Payne’s January 2026 article, Empty Promises: A Deep Dive into Flickr Pro for 2026. It is worth familiarizing yourself with that perspective before diving into Mr. Weinstein’s response below.
A Brief History
Launched in 2004 with an iconically missing vowel, Flickr pioneered the Web 2.0 era of social photo sharing before enduring a decade of minor and cosmetic changes amid corporate stasis under Yahoo.
After years of neglect, SmugMug acquired the platform in 2018. Don MacAskill, SmugMug’s CEO, said “[w]e’ll work very hard to not ruin Flickr. After successfully not ruining it, we’ll work even hard[er] to make it better than its already awesome self,” and “Flickr’s community is unique in the world and on the Internet. That’s where we’d like to invest.” So, what are the results of those investments, and is Flickr Pro still worth it?
Flickr in 2026
The Social Core
In stark contrast to the majority of photo-focused services, Flickr remains primarily a simple photo-sharing website where one can find friends and view their work in a clean, chronological stream. While the platform supports video, the feature feels like a quiet afterthought—a logical choice for a site built by and for photography enthusiasts. There is simply no chance that Flickr will suddenly pivot to video to chase short-form trends.
Groups & Discovery
The heart of the Flickr community lies in its Groups, many of which cater to highly specific niches that you won’t find elsewhere. These range from technical communities focused on specific lenses, camera bodies, or brands, to aesthetic enclaves for analog purists, black-and-white enthusiasts, and quirkier corners like Stick Figures in Peril.
Metadata & Organization
The platform’s utility is bolstered by its robust handling of tags and geotagging, allowing for a level of searchability that modern social media often lacks. Users can manage their libraries through Sets, Galleries, and Albums, making it easy to organize thousands of images by subject matter, location, person, or era. Flickr preserves and displays comprehensive EXIF data, including detailed camera and lens information for every shot.
Integration & Syndication
Flickr also retains its early web roots: every user has an RSS feed, and the site maintains open APIs and makes it simple to create embeds for other websites—a lingering reminder of the flexible features that made early Flickr such a vital tool for bloggers and curators.
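To illustrate how open those APIs remain, here is a small sketch (in TypeScript) against Flickr’s public REST endpoint. The method flickr.photos.search is a real API method; the API key below is a placeholder you would obtain from Flickr’s App Garden.

```typescript
// Fetch public photos matching a tag via Flickr's REST API.
// API_KEY is a placeholder; register a free key in Flickr's App Garden.
const API_KEY = "YOUR_API_KEY";

async function searchPhotos(tag: string) {
  const url =
    "https://api.flickr.com/services/rest/" +
    `?method=flickr.photos.search&api_key=${API_KEY}` +
    `&tags=${encodeURIComponent(tag)}&format=json&nojsoncallback=1`;
  const res = await fetch(url);
  const data = await res.json();
  // The response shape is { photos: { photo: [{ id, title, ... }] } }.
  return data.photos.photo;
}

searchPhotos("blackandwhite").then((photos) =>
  photos.slice(0, 5).forEach((p: { title: string }) => console.log(p.title))
);
```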
Explore
Of course, there’s also Explore, Flickr’s way of highlighting 500 photos each day. When a photo is selected for Explore—driven by an inscrutable, often mercurial algorithm—it typically receives thousands of views and a surge of engagement.
Pro Benefits
In 2026, the leap from a free account to Flickr Pro is primarily about presenting a long-term or large body of work publicly. The most immediate benefit is the removal of the 1,000-photo cap (which also limits free users to a mere 50 non-public photos), replaced by unlimited, full-resolution JPEG storage. For those who use Flickr as a portfolio, Pro status also ensures an ad-free experience—not just for the photographer, but for anyone visiting their photostream—so the work remains the sole focus, without the distraction of third-party banners.
Pro users also gain access to Advanced Stats, providing granular data on the sources of views and traffic, including which specific groups or tags are driving traffic. Pro members get a suite of partner perks, including savings on Adobe Creative Cloud, Blurb photo books, Phlearn memberships, and SmugMug plans, and a significant 5% off gear at KEH. Additionally, Pro members gain access to exclusive savings on a wide range of classes and education. These are, at best, fringe benefits, but a user who spends a bit under $2,000 at KEH in a year will have essentially justified the entire cost of the Pro membership through the discount.
Why Flickr is Still Great in 2026
There are certainly cheaper ways in 2026 to host an ad-free, public portfolio on the open web. Yet, few to none meet those criteria while simultaneously offering an active, built-in community of dedicated photography enthusiasts seeking out high quality photography. I suspect that’s the value proposition that keeps many Flickr users paying for Pro in 2026, myself included.
Other options are better positioned to present a professional photographer’s work to the world exactly as they want it seen. But Flickr Pro shouldn’t be confused with “Flickr for professionals,” just like the iPhone Pro isn’t intended for “professional smartphone users.” Most Flickr users are serious—or not-so-serious—hobbyists.
But more generally, Flickr is great precisely because it isn’t trying to become the next Instagram, TikTok, crypto play, metaverse experiment, or AI training ground. While it’s always nice to have exposure on Flickr, the platform is largely devoid of the “influencers” who dominate other networks. In an era of algorithm-driven content, Flickr remains a sanctuary for photography enthusiasts who are genuinely excited to see what their peers are up to. The community remains very active; while you’ll encounter the occasional robotic “Great shot!” comment, the platform still fosters engaged discussion, honest feedback, and shared tips that are hard to find on more transactional social networks. If it feels like a ghost town, consider joining new groups and interacting with new users whose work you enjoy and might learn from.
The robust tagging and geotagging systems make Flickr an underappreciated platform for location scouting. Before heading to a new area, a user can search within the area or for specific landmarks to see how a location looks at different times of day, in varying weather conditions, or across different seasons. Furthermore, the full EXIF data display makes Flickr a great place to learn. There is no better place to see what a different lens or camera body can produce in the hands of real photographers.
One of Flickr’s most underrated power features is the Organize tool. It provides a high-level view of your entire library, allowing you to batch-edit titles, tags, and permissions with a simple drag-and-drop interface, ensuring every photo has the exact attributes you want it to have. Flickr offers robust features to limit who sees your work, allowing you to hide specific photos from public searches while still sharing them with a select circle via private links. And it’s easy to change the license associated with photos in bulk, for instance to assign a Creative Commons license so others can share or reuse your work if you so choose.
To support the sense of community, Flickr regularly hosts free photography competitions that celebrate its members’ talent, including the annual Your Best Shot contest and themed events like the World Photography Day Contest. Flickr often hands out prizes, big and small, in conjunction with popular photo-related brands. And photos entered into contests often get a boost in interaction from other participants—a nice consolation prize.
Flickr supports its community in the real world too. The site facilitates photo walks, sponsors Photoville in New York City, and maintains a presence at major photography gatherings. These events are excellent opportunities to meet like-minded photographers, swap stories about gear, and discover new subjects to shoot. I’ve personally met avid Flickr users in places like New York City, Atlanta, and London; it’s a true global network. While it’s a rarely used feature, if a photo uploaded to the site contains another Flickr member, you can tag that user directly, making it easy to keep track of friends and collaborators from real-world photowalks.
The site is also heavily promoting MODE by Flickr, a three-day photography festival taking place in Minneapolis from September 18 – 20, 2026. Billed as a “photographer’s playground,” MODE is designed to bring the community away from their devices and into the physical world through workshops, darkroom sessions, and city-wide photowalks. At a minimum of $330 for admission, plus airfare to and lodging in Minnesota, MODE may prove to be a one-time experiment, but it’s a genuine effort to invigorate the community, which is worthy of praise.
And while Explore is and has been algorithmically curated for years, the site is generally free of artificial intelligence, both in the content users upload and in the form of useless features shoehorned into the service. Flickr’s Terms make clear that users own the copyright to their photos:
You retain all intellectual property rights in and to any User Content you post, upload or otherwise make available through the Services, including the copyright in and to your photos and videos. SmugMug does not claim any ownership, right, title or interest in and to your User Content.
While users grant SmugMug the right to reproduce their images in order to provide the service, there’s little risk—at least under the current Terms—that Flickr will turn into an AI-focused platform that mines its users’ photos. Of course, third parties may take a different view and scrape the full Flickr corpus, but there’s only so much Flickr, like virtually every website operator, can do about that scenario.
While Flickr has dabbled in allowing users to license photos, commerce has never been the core element of the service. Today, rather than acting as a middleman for stock sales, as do many of its competitors, Flickr focuses on providing the infrastructure for photographers to manage their own destinies. Ultimately, Flickr’s greatest strength in 2026 is its refusal to pivot or sell out.
It’s Not Perfect
Tech Issues
While Flickr has an impressive list of attributes, it is far from flawless. When SmugMug acquired the service and migrated its massive library to Amazon Web Services (AWS), the platform entered a period of relative instability. Even in 2026, users occasionally encounter the dreaded “bad panda”—Flickr’s internal parlance for a site error or outage—and intermittent slow-loading pages remain an unfortunate reality of the browsing experience. A fully functional platform is table stakes, especially for the price Pro users pay.
Stagnant Community Hubs
Flickr Groups used to feature robust conversations, but much of that energy has migrated to platforms like Reddit or Facebook. While many groups remain active—specifically those centered around local photography clubs, specific social organizations, and regional events—the broader “global” discussion feels quieter than it once was. Similarly, the internal FlickrMail messaging system has not seen a significant update in years; it lacks conveniences like multi-person threads or the ability to easily embed photos and map locations directly into a chat. The SmugMug management promised improvements to the community aspects of Flickr, and more is needed—beyond a pricey, experimental festival in Minnesota—before they can declare success on this front.
Rusty Features
Some of the site’s most beloved legacy features are beginning to show their age. The Camera Finder, for example, is still a useful resource for seeing trending gear, but it lacks granular data or the ability to filter in any useful way. It used to be possible to filter photos taken by a specific camera by genre (e.g., landscape, sports). Restoring this feature—and building out robust searchability by camera body, lens, and exact settings—would be a massive win for the community.
The World Map could also use attention. While geotags are a fantastic resource, the map currently lacks the filtering and searchability that would make it a far more powerful way to find photos matching certain keywords at a specific place and time.
The “Interestingness” Algorithm
The “Interestingness” algorithm—which powers the Explore page—can be enigmatic. While tastes vary, virtually everyone can agree that the algorithm sometimes rewards objectively mundane photos as more “interesting” than more captivating work. I suspect that the algorithm is tuned to reward certain user behaviors that Flickr considers desirable at the expense of showcasing truly “interesting” photos. While some users have long since learned to game the system, complaining about Explore is an old cliché—and it ultimately represents only a fraction of the platform’s value. Nonetheless, improvements would be welcome.
Beyond JPEG
Flickr allows Pro users to showcase their work at full resolution, but as of 2026, JPEG is over 30 years old, and camera and display hardware has surpassed its limitations. While Flickr doesn’t overly compress photos and does support modern color profiles—allowing the service to take advantage of wide gamuts like Display P3 used by high-end smartphones and monitors—it still lacks native support for next-generation formats like JPEG XL, HEIC, or AVIF. These formats are increasingly supported and commonplace, offer better compression and greater bit depths, and adding them would significantly modernize the platform’s technical foundation.
The Cost of Independence
There is an old adage in tech: “If you’re not paying for the product, you are the product.” Through that lens, Flickr Pro users are definitively not the product. Currently, Flickr Pro costs $82 when billed once per year, a significant jump from its early days. To put that in perspective, 500px is $59.94 per year, and Glass, a recent entrant sometimes considered Flickr’s closest competitor, costs roughly $40 per year. On the other hand, those services lack the full feature set described above, and neither offers its paying users a public, ad-free gallery space from a company that doesn’t make its profit by profiling users for advertisers.
A 100-Year Vision
Hosting petabytes of high-resolution data is an expensive endeavor—Yahoo should have never offered terabytes of storage for free. MacAskill addressed this balance directly when speaking to the community about two years ago:
“Flickr is the healthiest it’s ever been. More active users, more engagement, more connections, more revenue, more of everything — except people treating it like a ‘photo dump’. Most importantly, our members are ecstatic about it, it’s now profitable and cash flow positive, so not in imminent danger (and we’re trying to build it, sustainably, for 100+ years). IMHO, it’s not nearly enough, yet, but the trajectory is awesome. It’s working. And it’s working without invading people’s privacy, unlike nearly every other social media platform.”
He’s also been clear very recently that SmugMug is “not planning on selling Flickr.” Ultimately, while the site may feel rusty in a few places, its trajectory suggests a platform that is finally stable. For those who value privacy, a long-term home for their work, and an ad-free portfolio-like space, the Pro price tag is the cost of ensuring Flickr survives into the next decade and beyond.
It’s not officially a part of Flickr, but the closely affiliated non-profit Flickr Foundation is working on projects like the Data Lifeboat, which aims to be a “user-friendly archiving solution to ensure memories on Flickr can be enjoyed by future generations, in easily browsable packages.”
If you’re looking for the next big thing, Flickr may not be for you. Flickr is great because—in contrast to virtually all of its competitors—it offers the features photography enthusiasts care about while avoiding distractions, and it doesn’t monetize its Pro users via advertising. It’s a community with virtual and real-world events. It’s a place to post and seek out your favorite photos. It’s a place to be inspired. Because it isn’t (currently) beholden to massive shareholder demands, it hasn’t needed to “move fast and break things.” Instead, it has moved deliberately, maintaining and improving the tools that matter. I expect to see more of that going forward and will willingly pay the (admittedly high) fee necessary to keep this little slice of the early, purer web alive—not for the sake of nostalgia, but because things actually were better back when the web connected real people, and platforms didn’t aspire to take over the world. In short, if it’s not broken, why fix it?
About the author: Brett Weinstein is an amateur photographer and will mark 20 years of Flickr membership this year. His work is featured in the Smithsonian National Museum of African American History and Culture, he was the Photography Editor at the Emory Wheel and the 2008 Southeast Journalism Conference Best Press Photographer, and his photos have been listed with Getty and featured in press and advertising. By day, he is a privacy and consumer protection lawyer. The opinions expressed above are solely those of the author.
What is a statechart?
A statechart can be explained in many ways, and we’ll get to those explanations, but essentially, a statechart is a drawing. Here’s a simple statechart:

[Figure: a simple statechart diagram]
However, this drawing isn’t very useful for software engineers who want to reap the benefits outlined elsewhere on this site, so let’s dive into some other ways of describing what a statechart is. The original paper that defines statecharts bills them as “A visual formalism for complex systems” (Harel, 1987). With that out of the way, let’s try to explain statecharts.
Introduction to statecharts
Put simply, a statechart is a beefed-up state machine. The beefing up solves a lot of the problems that state machines have, especially the state explosion that happens as state machines grow. One of the goals of this site is to help explain what statecharts are and how they are useful.
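To make that concrete, here is a minimal sketch of the kind of flat state machine that statecharts build on (plain TypeScript with illustrative names, not any particular statechart library):

```typescript
// A flat state machine: a media player with three states.
// Statecharts extend exactly this structure with hierarchy,
// parallelism, and entry/exit actions.
type State = "stopped" | "playing" | "paused";
type Event = "play" | "pause" | "stop";

const transitions: Record<State, Partial<Record<Event, State>>> = {
  stopped: { play: "playing" },
  playing: { pause: "paused", stop: "stopped" },
  paused: { play: "playing", stop: "stopped" },
};

function next(state: State, event: Event): State {
  // Events with no transition defined in the current state are
  // ignored, which is also the default behaviour in statecharts.
  return transitions[state][event] ?? state;
}

console.log(next("stopped", "play")); // "playing"
console.log(next("stopped", "pause")); // "stopped" (event ignored)
```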
What is a state machine?
What is a statechart?
Why should you use statecharts?
Statecharts offer a surprising array of benefits
It’s easier to understand a statechart than many other forms of code.
The behaviour is decoupled from the component in question.
This makes it easier to make changes to the behaviour.
It also makes it easier to reason about the code.
And the behaviour can be tested independently of the component.
The process of building a statechart causes all the states to be explored.
Studies have shown that statechart based code has lower bug counts than traditional code.
Statecharts lend themselves to dealing with exceptional situations that might otherwise be overlooked.
As complexity grows, statecharts scale well.
A statechart is a great communicator: non-developers can understand statecharts, while QA can use them as an exploratory tool.
It’s worth noting that you’re already coding state machines, except that they’re hidden in the code.
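As a hypothetical illustration of such a hidden machine, the boolean-flag pattern below is a state machine in disguise; rewriting it as an explicit machine rules the impossible flag combinations out:

```typescript
// Hidden state machine: three booleans encode what is really one
// piece of state. Eight combinations are representable, but only a
// few are meaningful; nothing prevents an accidental
// isLoading && hasError.
let isLoading = false;
let hasError = false;
let hasData = false;

function onFetch() { isLoading = true; hasError = false; hasData = false; }
function onResolve() { isLoading = false; hasData = true; }
function onReject() { isLoading = false; hasError = true; }

// The same behaviour made explicit: impossible combinations simply
// cannot be expressed, and unhandled events are visibly ignored.
type FetchState = "idle" | "loading" | "success" | "failure";
let state: FetchState = "idle";

function onEvent(event: "fetch" | "resolve" | "reject" | "retry") {
  switch (state) {
    case "idle":
      if (event === "fetch") state = "loading";
      break;
    case "loading":
      if (event === "resolve") state = "success";
      if (event === "reject") state = "failure";
      break;
    case "failure":
      if (event === "retry") state = "loading";
      break;
  }
}

onEvent("fetch");
console.log(state); // "loading"
```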
Why should you not use statecharts?
There are a few downsides to using statecharts that you should be aware of.
Programmers typically need to learn something new, although the underpinnings (state machines) would be something that most programmers are familiar with.
It’s usually a very foreign way of coding, so teams might experience pushback simply because it is so different.
There is an overhead to extracting the behaviour: for smaller statecharts, the number of lines of code might actually increase.
Why are they not used?
People don’t know about them, and YAGNI (“you aren’t gonna need it”).
What are the main arguments against statecharts?
There are a few common arguments against statecharts in addition to the ones listed above:
It’s simply not needed.
It goes against the grain of [insert name of technology].
It increases the number of libraries; for web applications this means increased load time.
The benefits outlined above should make it clear that the introduction of statecharts is generally a net positive.
How do you use statecharts?
First of all, know that a W3C committee spent 10+ years (2005 to 2015) standardizing something called SCXML (yes, Statechart XML), and that it defines a lot of the semantics and specifies how to deal with certain edge cases. There are tools to read, author and even execute statecharts written in SCXML, in various languages. There are also some derivatives that support the same model as SCXML, but using a different syntax.
Additionally, there are statechart libraries for a variety of platforms, which support the semantics described by SCXML to varying degrees. You should consider using one of these libraries just to get those edge cases taken care of: they generally perform entry and exit actions in the right order, and so on.
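To show why that ordering matters, here is a minimal sketch of the rule the libraries implement (illustrative interfaces, not any real library’s API): on a transition, exit actions run from the innermost state outward, then entry actions run from the outermost entered state inward.

```typescript
interface StateNode {
  id: string;
  onEntry?: () => void;
  onExit?: () => void;
}

// fromPath and toPath are the root-to-state paths of the transition's
// source and target.
function performTransition(fromPath: StateNode[], toPath: StateNode[]) {
  // Find the deepest common ancestor; states above it are not exited.
  let common = 0;
  while (
    common < fromPath.length &&
    common < toPath.length &&
    fromPath[common].id === toPath[common].id
  ) common++;

  // Exit: innermost state first.
  for (const s of fromPath.slice(common).reverse()) s.onExit?.();
  // Enter: outermost state first.
  for (const s of toPath.slice(common)) s.onEntry?.();
}

const log = (m: string) => () => console.log(m);
const root = { id: "root" };
const on = { id: "on", onEntry: log("enter on"), onExit: log("exit on") };
const bright = { id: "bright", onEntry: log("enter bright"), onExit: log("exit bright") };
const off = { id: "off", onEntry: log("enter off"), onExit: log("exit off") };

// Transition from the nested state on.bright to off:
performTransition([root, on, bright], [root, off]);
// -> exit bright, exit on, enter off
```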
With that out of the way, read on!
Executable statecharts
In addition to using statecharts merely to model behaviour in documents separate from the running code, it’s possible to use one of various machine-readable formats both to design the behaviour and, at run-time, to actually be the behaviour. The idea is to have a single source of truth that describes the behaviour of a component: this single source drives the actual run-time code, and it can also be used to generate a precise diagram that visualises the statechart.
This carries along some different pros and cons:
Why should you use executable statecharts?
No need to translate diagrams into code
No bugs introduced by hand translation of diagrams
The diagrams are always in sync
The diagrams are more precise
Why should you not use executable statecharts?
The diagrams may become quite complex
The formats and tools for executable statecharts are limited
Type safety between statechart and the component is hard to enforce
How do you use executable statecharts?
In essence, if you have a definition of a statechart in your code, all you need to do is take that representation and automate the generation of the visual statechart. This is of course simpler when the definition lives in a separate file, e.g. a JSON or XML file.
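As a sketch of the idea (the JSON-style format below is hypothetical, loosely modelled on common statechart libraries), a single definition can both drive transitions at run-time and be compiled to Graphviz DOT for a diagram that can never drift out of sync with the behaviour:

```typescript
interface MachineDef {
  initial: string;
  states: Record<string, { on?: Record<string, string> }>;
}

const trafficLight: MachineDef = {
  initial: "red",
  states: {
    red: { on: { TIMER: "green" } },
    green: { on: { TIMER: "yellow" } },
    yellow: { on: { TIMER: "red" } },
  },
};

// 1. Run it: the definition is the behaviour.
function transition(def: MachineDef, state: string, event: string): string {
  return def.states[state]?.on?.[event] ?? state;
}

// 2. Render it: emit Graphviz DOT from the same definition.
function toDot(def: MachineDef): string {
  const edges = Object.entries(def.states).flatMap(([from, node]) =>
    Object.entries(node.on ?? {}).map(
      ([event, to]) => `  ${from} -> ${to} [label="${event}"];`
    )
  );
  return `digraph {\n${edges.join("\n")}\n}`;
}

console.log(transition(trafficLight, "red", "TIMER")); // "green"
console.log(toDot(trafficLight)); // feed this to `dot -Tsvg`
```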
This is all explained on the page on how to use statecharts!
If you feel like chatting to someone about statecharts, you can go to gitter.im (no login required to see the chat), where you’ll find a community of like-minded developers who can help you understand and reap the benefits of using statecharts. For a more Q&A-type site, head over to the statecharts GitHub discussions, where we’ll do our best to answer your questions.
Quite a few people have written books or held presentations that deal with statecharts in various ways, and they’re included in our resources page. If you’ve written something, please share it by posting it to GitHub Discussions.
There are some pages that haven’t found any place in the web of documents, so they’re honourably mentioned here:
Use case: Statecharts in User Interfaces
Concepts — The most important concepts in a statechart and what they look like in a diagram.
Glossary — A list of terms that get thrown around when talking about statecharts, with their definitions.
FizzBuzz — FizzBuzz is a well known problem, and it’s been used as a backdrop to explain various statechart concepts.