10 interesting stories served every morning and every evening.
Digital sovereignty: the State accelerates the reduction of its non-European dependencies
At the initiative of the Prime Minister, the Minister for Public Action and Accounts, and the Minister Delegate for Artificial Intelligence and Digital Affairs, the interministerial digital directorate (DINUM), together with the Directorate General for Enterprise (DGE), the national cybersecurity agency (ANSSI) and the State Procurement Directorate (DAE), held an interministerial seminar on Wednesday 8 April 2026 aimed at strengthening the collective drive to reduce non-European digital dependencies. Bringing together ministers, administrations, public operators and private companies, the event marks an acceleration of the French and European strategy for digital sovereignty.

Following the Prime Minister's recent directives, notably the circulars on digital public procurement and on the general rollout of the "Visio" videoconferencing tool, the seminar set a clear objective: reducing the State's non-European digital dependencies.

On the desktop front, DINUM announced its move away from Windows in favor of workstations running the Linux operating system.

On migration to sovereign solutions, the Caisse nationale d'Assurance maladie announced a few days ago the migration of its 80,000 staff to tools from the State's interministerial digital suite (Tchap, Visio, and FranceTransfert for document transfer).

Last month, the Government announced the migration of the health data platform to a trusted solution by the end of 2026.

The seminar also launched a new method for exiting these dependencies: forming novel coalitions of ministries, major public operators and private companies.
This approach aims to rally public and private energies around specific projects, drawing in particular on digital commons and interoperability standards (the Open-Interop and OpenBuro initiatives).

DINUM will coordinate an interministerial plan to reduce non-European dependencies. Each ministry (operators included) will be required to formalize its own plan by the autumn, covering the following areas: workstations, collaboration tools, antivirus, artificial intelligence, databases, virtualization, and network equipment. These action plans will give the digital industry visibility into the State's needs, and the sector has major strengths that public procurement should put to use.

The dependency mapping and diagnostic work carried out by the State Procurement Directorate (DAE), along with the work on defining a European digital service led by the Directorate General for Enterprise (DGE), will refine the quantified reduction target and set a clear timetable.

The first "rencontres industrielles du numérique" (digital industry meetings), to be organized by DINUM in June 2026, will be the occasion to make these ministerial public-private coalitions concrete, notably by formalizing a "public-private alliance for European sovereignty".
The State can no longer simply acknowledge its dependency; it must break free of it. We must wean ourselves off American tools and take back control of our digital destiny. We can no longer accept that our data, our infrastructure and our strategic decisions depend on solutions whose rules, prices, evolution and risks we do not control. The transition is under way: our ministries, our operators and our industrial partners are today committing to an unprecedented effort to map our dependencies and strengthen our digital sovereignty. Digital sovereignty is not optional.
Minister for Public Action and Accounts
Digital sovereignty is not optional, it is a strategic necessity. Europe must give itself the means to match its ambitions, and France is leading by example by accelerating the shift to sovereign, interoperable and sustainable solutions. By reducing our dependencies on non-European solutions, the State is sending a clear message: that of a public authority taking back control of its technology choices in the service of its digital sovereignty.
Minister Delegate for Artificial Intelligence and Digital Affairs
About the interministerial digital directorate (DINUM): DINUM's mission is to draw up the State's digital strategy and steer its implementation. It supports the State's digital projects, in service of government priorities and with a view to improving the effectiveness of public action.
Learn more at numerique.gouv.fr
...
Read the original on numerique.gouv.fr »
1d-chess is a new variant where you can play the beautiful game without all those unnecessary and complicated extra dimensions. Play as white against the AI. You might initially find it more difficult than expected, but assuming optimal play, is there a forced win for white?
Answer (hover to reveal on the original page): Try this line: N4 N5, N6 K7, R4 K6, R2 K7, R5++
There are three pieces in 1d-chess:

* King: can move one square in any direction.
* Knight: can move 2 squares forward or backward, jumping over any pieces in the way.
* Rook: can move in a straight line in any direction.
Win by checkmating the enemy king. This occurs when the enemy king is in check (under attack by one of your pieces) and there are no legal moves for the opponent to get their king out of check.
The game is a draw if any of the following occurs:

* A player is not in check and has no legal moves to play (stalemate)
* The same board position is repeated 3 times in a game
* Only the kings are left on the board, making checkmate impossible
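The rules above are enough to sketch a move generator. Here is a minimal sketch in Python; the 8-square board and the exact starting squares (white King, Knight, Rook on 1-3, black mirrored on 6-8) are my assumptions, not details taken from the article.

```python
# A minimal 1d-chess move generator sketched from the rules above. The
# 8-square board and the starting squares are assumptions on my part.

def moves(board, sq):
    """Return destination squares for the piece on sq (captures included)."""
    piece = board[sq]
    own = str.isupper if piece.isupper() else str.islower  # same-colour test
    dests = []
    if piece.upper() == "K":    # king: one square in either direction
        candidates = [sq - 1, sq + 1]
        dests = [d for d in candidates
                 if 1 <= d <= 8 and not (d in board and own(board[d]))]
    elif piece.upper() == "N":  # knight: jumps exactly two squares
        candidates = [sq - 2, sq + 2]
        dests = [d for d in candidates
                 if 1 <= d <= 8 and not (d in board and own(board[d]))]
    elif piece.upper() == "R":  # rook: slides until blocked
        for step in (-1, 1):
            d = sq + step
            while 1 <= d <= 8:
                if d in board:
                    if not own(board[d]):
                        dests.append(d)  # a capture ends the slide
                    break
                dests.append(d)
                d += step
    return sorted(dests)

# Assumed starting position: white K, N, R on 1-3; black mirrored on 6-8.
start = {1: "K", 2: "N", 3: "R", 6: "r", 7: "n", 8: "k"}
```

From this position White's knight can only jump to square 4 and the rook can slide to 4 or 5 or capture on 6, which is where the spoiler line above begins.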
This chess variant was first described by Martin Gardner in the Mathematical Games column of the July 1980 issue of Scientific American.
See the column on JSTOR
...
Read the original on rowan441.github.io »
I file the sharp corners off my MacBooks. People like to freak out about this, so I wanted to post it here to make sure that everyone who wants to freak out about it gets the opportunity to do so.
Here are some photos so you know what I’m talking about:
The bottom edge of the MacBook is very sharp. Indeed, the industrial designers at Apple chose an aluminum unibody partly for the fact that it can handle such a geometry. But, it is uncomfortable on my wrists, and I believe strongly in customizing one’s tools, so I filed it off.
The corner is sharp all around the machine, but it’s particularly pointed at the notch, which is where I focused my effort. It was quite pleasing to blend the smaller radius curves into the larger radius notch curve. I was slightly concerned that I’d file through the machine, so I did this in increments. It didn’t end up being an issue.
I taped off the speakers and keyboard while filing, as I’m sure aluminum dust wouldn’t do the machine any favors. I also clamped (with a respectful pressure) the machine to my workbench while doing this. I used a fairly rough file, as that is what I had on hand, and then sanded with 150 then 400 grit sandpaper. I was quite pleased with the finish. The photos above are taken months after, and have the scratches and dings that you’d expect someone who has this level of respect for their machine to acquire over that amount of time.
This was on my work computer. I expect to similarly modify future work computers, and I would be happy to help you modify yours if you need a little encouragement. Don’t be scared. Fuck around a bit.
...
Read the original on kentwalters.com »
A new report from 404 Media reveals that the FBI was able to recover deleted Signal messages from an iPhone by extracting data stored in the device’s notification database. Here are the details.
According to 404 Media, testimony in a recent trial involving “a group of people setting off fireworks and vandalizing property at the ICE Prairieland Detention Facility in Alvarado, Texas,” showed that the FBI was able to recover content of incoming Signal messages from a defendant’s iPhone, even though Signal had been removed from the device:
One of the defendants was Lynette Sharp, who previously pleaded guilty to providing material support to terrorists. During one day of the related trial, FBI Special Agent Clark Wiethorn testified about some of the collected evidence. A summary of Exhibit 158 published on a group of supporters’ website says, “Messages were recovered from Sharp’s phone through Apple’s internal notification storage—Signal had been removed, but incoming notifications were preserved in internal memory. Only incoming messages were captured (no outgoing).”
As 404 Media notes, Signal’s settings include an option that prevents the actual message content from being previewed in notifications. However, it appears the defendant did not have that setting enabled, which, in turn, seemingly allowed the system to store the content in the database.
404 Media reached out to Signal and Apple, but neither company provided any statements on how notifications are handled or stored.
With little to no technical details about the exact condition of the defendant’s iPhone, it is obviously impossible to pinpoint the precise method the FBI used to recover the information.
For instance, there are multiple system states an iPhone can be in, each with its own security and data access constraints, such as BFU (Before First Unlock) mode, AFU (After First Unlock) mode, and so on.
Security and data access also change even more dramatically when the device is unlocked, since the system assumes the user is present and permits access to a wider range of protected data.
That said, iOS does store and cache a lot of data locally, trusting that it can rely on these different states to keep that information safe but readily available in case the device’s rightful owner needs it.
Another important factor to keep in mind: the token used to send push notifications isn’t immediately invalidated when an app is deleted. And since the server has no way of knowing whether the app is still installed after the last notification it sent, it may continue pushing notifications, leaving it up to the iPhone to decide whether to display them.
Interestingly, Apple just changed how iOS validates push notification tokens in iOS 26.4. While it is impossible to tell whether this is a result of this case, the timing is still notable.
Back to the case, given Exhibit 158’s description that the messages “were recovered from Sharp’s phone through Apple’s internal notification storage,” it is possible the FBI extracted the information from a device backup.
In that case, there are many commercially available tools for law enforcement that exploit iOS vulnerabilities to extract data that could have helped the FBI access this information.
To read 404 Media’s original report of this case, follow this link.
...
Read the original on 9to5mac.com »
France is trying to move on from Microsoft Windows. The country said it plans to move some of its government computers currently running Windows to the open source operating system Linux to further reduce its reliance on U.S. technology.
Linux is an open source operating system that is free to download and use, with various customized distributions that are tailored and designed for specific use cases or operations.
In a statement, French minister David Amiel said (translated) that the effort was to “regain control of our digital destiny” by relying less on U.S. tech companies. Amiel said that the French government can no longer accept that it doesn’t have control over its data and digital infrastructure.
The French government did not provide a specific timeline for the switchover, or which distributions it was considering. The switchover will begin with computers at the French government’s digital agency, DINUM. When reached by TechCrunch, a spokesperson for Microsoft did not comment on the news.
This is the latest effort by France to reduce its dependence on U.S. tech giants and use technology and cloud services originated within its borders, known as digital sovereignty, following growing instability and unpredictability on the part of the Trump administration.
Lawmakers and government leaders across Europe are growing more aware of the looming threat facing them at home, and their over-reliance on U.S. technology. In January, the European Parliament voted to adopt a report directing the European Commission to identify areas where the EU can reduce its reliance on foreign providers.
Since taking office in January 2025, Trump has upped his attacks on world leaders — straight-out capturing one and aiding in the killing of another. He has also weaponized sanctions against his critics, who include judges on the International Criminal Court, effectively cutting them off from transacting with U.S. companies. Those who have been sanctioned have reported having their bank accounts closed and access to U.S. tech services terminated, as well as being blocked from any other U.S. service.
France’s decision to ditch Windows comes months after the government announced it would stop using Microsoft Teams for video conferencing in favor of French-made Visio, a tool based on the open source end-to-end encrypted video meeting tool Jitsi.
The French government said it also plans to migrate its health data platform to a new trusted platform by the end of the year.
...
Read the original on techcrunch.com »
Jason at zx2c4.com
Previous message (by thread): Adding message type 5/6 for PQC (was Re: Export noise primitives for additional “chain key ratcheting”)
Next message (by thread): [ANNOUNCE] WireGuardNT v0.11 and WireGuard for Windows v0.6 Released
Hey folks,
I generally don’t send announcement emails for the Windows software, because the built-in updater takes care of notifying the relevant users. But because this hasn’t been updated in so long, and because of recent news articles, I thought it’d be a good idea to notify the list.
After a lot of hard work, we’ve released an updated Windows client: both the low-level kernel driver and API harness, called WireGuardNT, and the higher-level management software, command-line utilities, and UI, called WireGuard for Windows.
There are some new features — such as support for removing individual allowed IPs without dropping packets (as was already added to Linux and FreeBSD) and setting very low MTUs on IPv4 connections — but the main improvement is lots of accumulated bug fixes, performance improvements, and above all, immense code streamlining due to ratcheting forward our minimum supported Windows version [1]. These projects are now built on a much more solid foundation, without having to maintain decades of compatibility hacks, alternative codepaths, bizarre logic, dynamic dispatching, and all manner of cruft. There have also been large toolchain updates — the EWDK version used for the driver, the Clang/LLVM/MinGW version used for the userspace tooling, the Go version used for the main UI, the EV certificate and signing infrastructure — which all together should amount to better performance and more modern code.
But, as it’s our first Windows release in a long while, please test and let me know how it goes. Hopefully there are no regressions; we’ve tested this quite a bit, including on Windows 10 1507 Build 10240, the most ancient Windows that we support (and which Microsoft no longer does), but you never know. So feel free to write me as needed.
As always, the built-in updater should be prompting users to click the update button, which will check signatures and securely update the software. Alternatively, if you’re installing for the first time or want to update immediately, our mini 80k fetcher will download and verify the latest version: - https://download.wireguard.com/windows-client/wireguard-installer.exe - https://www.wireguard.com/install/
And to learn more about each of these two Windows projects: - https://git.zx2c4.com/wireguard-windows/about/ - https://git.zx2c4.com/wireguard-nt/about/
Finally, I should comment on the aforementioned news articles. When we tried to submit the new NT kernel driver to Microsoft for signing, they had suspended our account, as I wrote about first in a random comment [2] on Hacker News in a thread about this happening to another project, and then later that day on Twitter [3]. The comments that followed were a bit off the rails. There’s no conspiracy here from Microsoft. But the Internet discussion wound up catching the attention of Microsoft, and a day later, the account was unblocked, and all was well. I think this is just a case of bureaucratic processes getting a bit out of hand, which Microsoft was able to easily remedy. I don’t think there’s been any malice or conspiracy or anything weird. I think most news articles currently circulating haven’t been updated to show that this was actually fixed pretty quickly. So, in case you were wondering, “but how can there be a new WireGuard for Windows update when the account is blocked?!”, now you know that the answer is, “because the account was unblocked.”
Anyway, enjoy the new software, and let me know how it works for you.
Thanks, Jason
[1] https://lists.zx2c4.com/pipermail/wireguard/2026-March/009541.html [2] https://news.ycombinator.com/item?id=47687884 [3] https://x.com/EdgeSecurity/status/2041872931576299888
...
Read the original on lists.zx2c4.com »
In this Friday’s magic demonstration, I’m going to show how what you see in Privacy & Security settings can be misleading: it can tell you that an app doesn’t have access to a protected folder when it really does.
Although it appears you can achieve this using several ordinary apps, to make things simpler and clearer I’ve written a little app for this purpose, Insent, available from here: insent11
I’m working in macOS Tahoe 26.4, but I suspect you should see much the same in any version from macOS 13.5 onwards, as supported by Insent.
For this magic demo, I’m only going to use two of Insent’s six buttons:
* Open by consent, which results in Insent choosing a random text file from the top level of your Documents folder, and displaying its name and the start of its contents below. As it does this without involving the user in the process, the macOS privacy system TCC requires it to obtain the user’s consent to list and access the contents of that protected folder.
* Open from folder, which opens an Open and Save Panel where you select a folder. Insent then picks a random text file from the top level of that folder, and displays its name and the start of its contents below. Because you expressed your intent to access that protected folder, TCC considers that is good enough to give access without requiring any consent.
Once you have downloaded Insent, extracted it from its archive, and dragged the app from that folder into one of your Applications folders, follow this sequence of actions:
Open Insent, click on Open by consent, and consent to the prompt to allow it to access your Documents folder. Shortly afterwards, Insent will display the opening of one of the text files in Documents. Quit Insent.
Open Privacy & Security settings, select Files & Folders, and confirm that Insent has been given access to Documents.
Open Insent, click on Open by consent, and confirm it now gains access to a text file without asking for consent. Quit Insent.
Open Privacy & Security settings, select Files & Folders, and disable Documents access in Insent’s entry there using the toggle.
Open Insent, click on Open by consent, and confirm that it can no longer open a text file, but displays [Couldn’t get contents of Documents folder].
Click on Open from folder and select your Documents folder there. Confirm that works as expected and displays the name and contents of one of the text files in Documents.
Click on Open by consent, and confirm that now works again.
Confirm that Documents access for Insent is still disabled in Files & Folders.
Whatever you do now, the app retains full access to Documents, no matter what is shown or set in Files & Folders.
Indeed, the only way you can protect your Documents folder from access by Insent is to run the following command in Terminal:
tccutil reset All co.eclecticlight.Insent
then restart your Mac. That should set Insent’s privacy settings back to their default.
You can also demonstrate that this behaviour is specific to one protected folder at a time. If you select a different protected folder like Desktop or Downloads using the Open from folder button, then Insent still won’t be able to list the contents of the Documents folder, as its TCC settings will function as expected.
Insent is an ordinary notarised app, and doesn’t run in a sandbox or pull any clever tricks. When System Integrity Protection (SIP) is enabled some of its operations are sandboxed, though, including attempts to list or access the contents of locations that are protected by TCC.
When you click on its Open by consent button, sandboxd intercepts the File Manager call to list the contents of Documents, as a protected folder. It then requests approval for that from TCC, as seen in the following log entries:
1.204592 Insent sendAction:
1.205160 Insent: trying to list files in ~/Documents
1.205828 sandboxd request approval
1.205919 sandboxd tcc_send_request_authorization() IPC
TCC doesn’t have authorisation for that access by Insent, either by Full Disk Access or specific access to Documents, so it prompts the user for their consent. If that’s given, the following log entries show that being passed back to the sandbox, and the change being notified to com.apple.chrono, followed by Insent actioning the original request:
3.798770 com.apple.sandbox kTCCServiceSystemPolicyDocumentsFolder granted by TCC for Insent
3.802225 com.apple.chrono appAuth:co.eclecticlight.Insent] tcc authorization(s) changed
3.809558 Insent: trying to look in ~/Documents for text files
3.809691 Insent: trying to read from: /Users/hoakley/Documents/asHelp.text
3.842101 Insent: read from: /Users/hoakley/Documents/asHelp.text
If you then disable Insent’s access to Documents in Privacy & Security settings, TCC denies access to Documents, and Insent can’t get the list of its contents:
1.093533 com.apple.TCC AUTHREQ_RESULT: msgID=440.109, authValue=0, authReason=4, authVersion=1, desired_auth=0, error=(null),
1.093669 com.apple.sandbox kTCCServiceSystemPolicyDocumentsFolder denied by TCC for Insent
1.094007 Insent: couldn’t get contents of ~/Documents
If you then access Documents by intent through the Open and Save Panel, sandboxd no longer intercepts the request, and TCC therefore doesn’t grant or deny access:
0.897244 Insent sendAction:
0.897318 Insent: trying to list files in ~/Documents
0.900828 Insent: trying to look in ~/Documents for text files
0.901112 Insent: trying to read from: /Users/hoakley/Documents/T2M2_2026-01-06_13_03_00.text
0.904101 Insent: read from: /Users/hoakley/Documents/T2M2_2026-01-06_13_03_00.text
Thus, access to a protected folder by user intent, such as through the Open and Save Panel, changes the sandboxing applied to the caller by removing its constraint to that specific protected folder. As the sandboxing isn’t controlled by or reflected in Privacy & Security settings, that allows TCC, in Files & Folders, to continue showing access restrictions that aren’t applied because the sandbox isn’t applied.
Access restrictions shown in Privacy & Security settings, specifically those to protected locations in Files & Folders, aren’t an accurate or trustworthy reflection of those that are actually applied. It’s possible for an app to have unrestricted access to one or more protected folders while its listing in Files & Folders shows it being blocked from access, or for it to have no entry at all in that list.
Most apps that want access to protected folders like Documents appear to seek that during their initialisation, and before any user interaction that could result in intent overriding the need for consent. However, many users report that apps appear to have access to Documents but aren’t listed in Files & Folders, suggesting that at some time that sequence of events does occur.
To be effectively exploited this would need careful sequencing, and for the user to select the protected folder in an Open and Save Panel, so drawing attention to the manoeuvre.
Most concerning is the apparent permanence of the access granted, requiring an arcane command in Terminal and a restart in order to reset the app’s privacy settings. It’s hard to believe that this was intended to trap the user into surrendering control over access to protected locations. But it can do.
I’m very grateful to Richard for drawing my attention to this.
...
Read the original on eclecticlight.co »
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.
The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America’s largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn’t intentional and they published their reports.
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.
In her testimony supporting SB 3444, a member of OpenAI’s Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message that’s consistent with the Trump administration’s crackdown on state AI safety laws, claiming it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” This is also consistent with the broader view of Silicon Valley in recent years, which has generally argued that it’s paramount for AI legislation to not hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that those can be effective if they “reinforce a path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.
Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois’ reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
...
Read the original on www.wired.com »
Analyzing every Firefox extension
Installing every Firefox extension
Using every Firefox extension
*All but the 8 we didn’t scrape (or that got deleted between me checking the website and me scraping) and the 42 missing from extensions.json. Technically we only installed 99.94% of the extensions.
It turns out there’s only 84 thousand Firefox extensions. That sounds feasibly small. That even sounds like it’s less than 50 gigabytes. Let’s install them all!
There’s a public API for the add-ons store. No authentication required, and seemingly no rate limits. This should be easy.
The search endpoint can take an empty query. Let’s read every page:
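A sketch of what that paging loop looks like; the /api/v5/addons/search/ path and its parameters follow the public add-ons API docs, but since the author's actual script isn't shown, treat the details as my reconstruction.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Paging the AMO search endpoint with an empty query matches every extension.
BASE = "https://addons.mozilla.org/api/v5/addons/search/"

def page_url(page, page_size=50, **params):
    """Build a search URL for one page of results."""
    return BASE + "?" + urlencode({"page": page, "page_size": page_size, **params})

def scrape_all():
    """Fetch pages until the API stops offering a next page."""
    page, results = 1, []
    while True:
        with urlopen(page_url(page)) as resp:
            data = json.load(resp)
        results.extend(data["results"])
        if not data.get("next"):  # the API refuses to serve pages past 600
            break
        page += 1
    return results
```

At 50 results per page, 600 pages is exactly the 30 thousand ceiling described below.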
The search API only gives me 600 pages, meaning I can only see 30 thousand extensions, less than half of them.
A solution I found is to use different sorts. The default sort is sort=recommended,users: first recommended extensions, then sorted by users, descending. Changing to just sort=created gave me some of the long tail:
I’m still missing 30,025 extensions, so I added rating and hotness too.
Starting to hit diminishing returns. While I was waiting 7 minutes for that last list to get scraped because my code didn’t fetch in parallel, I had an epiphany: use exclude_addons. I can just fetch page 600 and exclude all its addons to get page 601.
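The exclude_addons trick can be sketched like this; the parameter name comes from the search API docs, but the slug list and helper are my reconstruction, not the author's code.

```python
from urllib.parse import urlencode

def excluded_url(base, seen_slugs, page_size=50):
    """Re-request page 600 with everything already seen excluded, so the
    results that come back are effectively 'page 601'."""
    query = urlencode({
        "page": 600,
        "page_size": page_size,
        # The URL length limit caps how many slugs fit here, which is why
        # this only buys about 20 extra pages.
        "exclude_addons": ",".join(seen_slugs),
    })
    return base + "?" + query
```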
It works! There is a URL length limit, sadly, so I can only fetch an extra 20 pages.
A lot less than I expected, especially considering what happens when I add the downloads sort:
Reading the docs again, I notice I can filter by category as well. I’m tired of waiting 7 minutes so I’ll just fetch every page in parallel.
I got basically all the extensions with this, making everything I did before this look really stupid.
That’s 8 fewer extensions than the website reports. When I ran this in September 2025, it found 21 more extensions than the website mentioned, so I think this is enough.
So that nobody has to do this again, I’ve uploaded this dataset to Hugging Face.
The search API supports date filters: created__gte and created__lte. The API also returns the full number of extensions that match your search.
You can start with a filter that includes all extensions, then keep splitting the ranges in half until it is less than 30 thousand, then fetch all of them.
I’ve updated the downloader: it is faster, wastes fewer requests, and seems to scrape exactly all the extensions, too.
This won’t work if over 30 thousand extensions get created in a single second, which I can’t imagine will ever happen.
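The splitting strategy can be sketched like this; `countInRange` is a hypothetical stand-in for a search request with `created__gte`/`created__lte` set that only reads the response’s count field:

```javascript
// Recursively bisect a created-date window (unix seconds) until every
// leaf range matches fewer than 30,000 extensions, the most the API's
// pagination will actually serve for one query.
async function collectRanges(start, end, countInRange, limit = 30_000) {
  const n = await countInRange(start, end);
  // Stop splitting once the range fits, or once it's down to one second
  if (n < limit || end - start <= 1) return [[start, end]];
  const mid = start + Math.floor((end - start) / 2);
  return [
    ...(await collectRanges(start, mid, countInRange, limit)),
    ...(await collectRanges(mid + 1, end, countInRange, limit)),
  ];
}
```

Each leaf range can then be paged through normally, since it is guaranteed to fit inside the 600-page window.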
I have a copy of Bun and all_extensions.json, so I will torment you with my unmatched script power.
The biggest Firefox extension is dmitlichess at 196.3 MB, which contains 2000+ audio files.
Here’s the rest of the top ten:
The first time I ran this analysis, in September, “Cute doggy - Dog puppies” was the 10th largest extension. I’m still mentioning it here, because I was so fucking confused:
The smallest extension is Tabs-saver, which is 7,518 bytes and has no code.
FalscheLaden, with no users, requests 3,695 permissions. The author has posted a writeup.
Second place is Google Dark Theme, which requests 2,675 permissions but has 1,687 users.
Dr. B is the king of slop, with 84 extensions published, all of them vibe coded.
How do I know? Most of their extensions contain a README.md describing their process of getting them through addon review, and mention Grok 3. Also, not a single one of them has icons or screenshots.
Personally, I’m shocked this number is this low. I expected to see some developers with hundreds!
I reviewed the source of a couple homoglyph attacks on crypto wallets discovered in the dataset and was disappointed to find out they just pop up a form asking for your seed phrase and send it off to their server. It’s an extension!!! You can steal their coinbase.com token! You can monitor the clipboard and swap out their address for yours! You can crash their browser and claim your real malware is the fix!
Why would you make a fake MetaMask extension and bot 1-star reviews?
Is this the doing of their cybercrime competitors, who bot 4-star reviews on extensions of their own?
Either way, these extensions are clearly phishing. I reported some to Mozilla, and the next day they were all gone, even the ones I was too lazy to report. I forgot to archive them, so I guess they live on in May’s VM!
In terms of implementation, the most interesting one is “Іron Wаllеt” (the I, a, and e are Cyrillic). Three seconds after install, it fetches the phishing page’s URL from the first record of a NocoDB spreadsheet and opens it:
I think the extension’s “no accounts or remote code” description is really funny, like putting “no copyright infringement intended” in your video’s description in case YouTube is watching. The API key had write access, so I wiped the spreadsheet.
You get a “Homepage” link on your extension’s page and on your author page.
It’s been nofollow for two years, but that hasn’t stopped grifters from trying anyway.
On Attempt 1, I encountered Typo Sniper and Tab Fortune Teller, AI generated extensions with casinos in their author’s Homepage links.
In the dataset, there are many “Code Injector” extensions, which are all virtually identical and also have random websites in their authors’ Homepage links.
All of these extensions are from 2025. Is there an ancient SEO guide circulating? Is there some evil AMO frontend they’re still getting a backlink from? I have no idea what’s happening here.
All of these extensions are their author’s only uploads and they have their own domains. Most of them are on both Chrome and Firefox, their websites look the same, and they all have a terms of service referencing “Innover Online Group Ltd”, which is a .png for some reason.
Because I scraped every Firefox extension twice, I can see what got removed in between the runs. Three of Innover Group’s extensions—Earth View 360°, View Manuals, and View Recipes, totaling 115 thousand users—have been disabled by Mozilla.
Innover Group runs Google ads for their extensions, a lot of them simply saying “Continue”.
The “Custom Web Search” is Yahoo but with their affiliate code. That code is safeplexsearch, which has a website of its own that of course mentions Innover Online Group Ltd and links to an addon with 3,892 users, which is actually a Firefox exclusive. In fact, “Custom Web Search” is a Firefox exclusive in all of these extensions. Why did they even make a Chrome version, to sell them to the NSA??
One user claimed Ezy Speed Test “disables Ublock [sic] Origin once installed”, which I did not find in its code.
There are a million companies like this, though. I just went to Download.com with my ad-blocker off and discovered a company called Atom Apps in an ad. They also upload extensions for both Chrome and Firefox, with a new account for each extension, include Yahoo only in the Firefox version, use names that end in either “and Search” or ”& Search”, and have their company name as a .png in their terms of service. They have 220 thousand daily users total across 12 extensions, and none of theirs have been disabled.
* 34.3% of extensions have no daily users
* 25.1% of extensions have more than 10 daily users
* 10.6% of extensions have more than 100 daily users
* 3.2% of extensions have more than 1000 daily users
* 0.7% of extensions have more than 10000 daily users
* 76.7% of extensions are open source (SPDX license that isn’t All Rights Reserved)
* 23% of extensions were created after I started writing this article
* 19% of extensions have no users, no reviews, no screenshots, no downloads, and no icon
* 2.4% of extensions require payment
* 38.1% of those are open source???
Obviously I’m not going to open each of these in a new tab and go through those prompts. Not for lack of trying:
Each extension has the current_version.file.url property which is a direct download for the extension. I download them to my profile’s extensions folder with the guid property as the base name and the .xpi file extension, because anything else will not be installed.
Then, I delete the addonStartup.json.lz4 and extensions.json files. When I reopen Firefox, each extension is disabled. Tampering with extensions.json is common enough that you can ask any chatbot to do it for you:
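Putting the two steps together, a minimal staging sketch might look like this. `stage`, `profileDir`, and the injectable `fetcher` are my own assumptions for illustration; point it at a throwaway profile, never your main one:

```javascript
import { mkdir, rm, writeFile } from "node:fs/promises";
import { join } from "node:path";

// Download each addon's xpi into <profile>/extensions/<guid>.xpi, then
// delete the two state files so Firefox rescans the folder on launch.
// Each addon record from the API carries guid and current_version.file.url.
async function stage(profileDir, addons, fetcher = fetch) {
  const extDir = join(profileDir, "extensions");
  await mkdir(extDir, { recursive: true });
  for (const addon of addons) {
    const res = await fetcher(addon.current_version.file.url);
    const buf = Buffer.from(await res.arrayBuffer());
    // The file must be named <guid>.xpi or Firefox won't install it
    await writeFile(join(extDir, `${addon.guid}.xpi`), buf);
  }
  await rm(join(profileDir, "addonStartup.json.lz4"), { force: true });
  await rm(join(profileDir, "extensions.json"), { force: true });
}
```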
My first attempt was in a tiny11 core VM on my desktop.
At first, instead of downloading all of them with a script, I tried using enterprise policies, but this copies all the extensions into the folder. I quickly ran out of memory, and the pagefile took up the rest of the storage allocated to the VM. I had also expected Firefox to open immediately and the extensions to install themselves as the browser is being used, but that also did not happen: it just froze.
After that, I tried downloading them myself.
To make sure I was installing extensions correctly, I moved the extensions folder elsewhere and then moved about a thousand extensions back in. It worked.
There were multiple extensions that changed all text to a certain string. bruh-ifier lost to Se ni važn. Goku is in the background.
My context menu is so long that I’m showing it sideways:
I had installed lots of protection extensions. One blocks traffic to .zip and .mov domains, presumably because they are file extensions. This is .cab erasure! Then, I realized that there were likely multiple people viewing my browsing history, so I went to send them a message.
That “⚠️ SCAM WARNING!” popup is from Anti-Phishing Alert. As you may have inferred, it seems to exist only for its Homepage link. How does it work?
Vasavi Fraudulent Detector also has a popup for when a site is safe:
Only the addons from Attempt 1 were actually loaded, because I didn’t know I needed to delete addonStartup.json.lz4 yet. I scrolled through the addons page, then I opened DevTools to verify it was the full 65,335, at which point Firefox froze and I was unable to reopen it.
After that, I made a new (non-admin) user on my Mac to try again on a more powerful device.
Every time I glanced at my script downloading extensions one at a time for six hours, I kept recognizing names. Oops, I’m the AMO subject-matter expert now! Parallelizing was making it slower by the last 4000 extensions, which didn’t happen on my Windows VM.
When that finished, I found out my hardware couldn’t run 65,335 extensions at once, sadly. The window does open after some time I didn’t measure, but it never starts responding. I don’t have the balls to run my laptop overnight.
Firefox did make over 400 GB of disk writes. Because I forgot swap existed, I checked the profile trying to find the culprit, which is when I learned I needed to delete addonStartup.json.lz4 and modify extensions.json. The extensions.json was 144 MB. For comparison, my PC’s extensions.json is 336 KB.
My solution: add 1000 extensions at a time until Firefox took too long to open. I got to 6000.
3000 extensions was the last point where I was at least able to load webpages.
After 4000 or more extensions, the experience is basically identical. Here’s a video of mine (epilepsy warning):
5000 was the same as 4000 but every website was blocked by some extension I know starts with an S and ends with Blocker and has a logo with CJK characters. At 6000 extensions, the only page that I could load was about:addons.
My desktop has 16 GB of RAM, and my laptop has 24 GB of unified memory. You might notice that 49.3 GB is more than twice that.
What you’re about to see was recorded in May’s virtual machine. Do not try this on your main profile.
My download script started in parallel, then we switched it to serial when it slowed down. In total, downloading took about 1 hour and 43 minutes.
I was on a call the entire time, and we spotted a lot of strange extensions in the logs. What kind of chud would use “KiwiFarms Math Renderer”? Are they drafting the theory of soytivity?
Turning on Mullvad VPN and routing to Tel Aviv appeared to speed up the process. This was not because of Big Yahu, but because May restarted the script, so she repeated that a couple times. Whether that’s a Bun bug, I don’t know and I don’t care. May joked about a “version 2” that I dread thinking about.
Defender marked one extension, HackTools, as malware. May excluded the folder after that, so it may not be the only one.
Firefox took its sweet time remaking extensions.json, and it kept climbing. About 39 minutes of Firefox displaying a skeleton (hence “it has yet to render a second frame”) later, it was 189 MB large: a new record! May killed Firefox and ran enable.js.
I did some research to find why this took so long.
13 years ago, extensions.json used to be extensions.sqlite. Nowadays, extensions.json is serialized and rewritten in full on every change, debounced to 20 ms, which works fine for 15 extensions but not 84,194.
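This is not Firefox’s actual code, but the pattern it describes, a debounced full rewrite, can be sketched to show why write cost grows with the number of installed extensions:

```javascript
// Illustrative sketch of a debounced full-rewrite store: bursts of
// changes coalesce into one flush, but every flush reserializes the
// entire store, so each write is O(total addons), not O(change).
function makeStore(writeFn, delayMs = 20) {
  const store = { addons: [] };
  let timer = null;
  return {
    add(addon) {
      store.addons.push(addon);
      clearTimeout(timer);
      timer = setTimeout(() => writeFn(JSON.stringify(store)), delayMs);
    },
  };
}
```

With 15 addons each flush is a few kilobytes; with 84 thousand it is a 144+ MB serialization on every debounced change.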
Finally, we see the browser. The onboarding tabs trickled in, never loading.
May reopened it, took a shower, and came back to this:
IT STABILIZED. YOU CAN (barely) RUN FIREFOX WITH ALL 84 THOUSAND EXTENSIONS.
Well, we were pretty sure it had 84 thousand extensions. It had Tab Counter, at least, and the scrollbar in the extensions panel was absolutely massive.
She loaded the configure pages of two extensions. The options iframe never loaded.
I realized we need to disable auto update before Firefox sends another 84 thousand requests. This one took a while to load.
The list loaded but with no icons and stopped responding, and 6 hours later it had loaded fully.
We recorded the entire process; the memory usage fluctuated between 27 and 37 GiB the entire time.
...
Read the original on jack.cab »