10 interesting stories served every morning and every evening.
Digital sovereignty: the French State accelerates the reduction of its non-European dependencies
At the initiative of the Prime Minister, the Minister for Public Action and Accounts, and the Minister Delegate for Artificial Intelligence and Digital Affairs, the interministerial digital directorate (DINUM) organised on Wednesday 8 April 2026, together with the Directorate General for Enterprise (DGE), the national agency for information systems security (ANSSI) and the State procurement directorate (DAE), an interministerial seminar aimed at strengthening the collective drive to reduce non-European digital dependencies. Bringing together ministers, administrations, public operators and private-sector players, the event marks an acceleration of the French and European strategy for digital sovereignty.

Following on from the Prime Minister's recent directives, notably the circulars on digital public procurement and on the general rollout of the "Visio" videoconferencing tool, the seminar set a clear objective: reducing the State's non-European digital dependencies.

On the evolution of the workstation, DINUM announced that it is moving off Windows in favour of workstations running the Linux operating system.

On migration to sovereign solutions, the Caisse nationale d'Assurance maladie announced a few days ago the migration of its 80,000 staff to tools from the interministerial digital core (Tchap, Visio and FranceTransfert for document transfer). Last month, the Government announced the migration of the health data platform to a trusted solution by the end of 2026.

The seminar also launched a new method for exiting these dependencies by forming novel coalitions of ministries, major public operators and private companies. The approach aims to rally public and private efforts around specific projects, relying in particular on digital commons and interoperability standards (the Open-Interop and OpenBuro initiatives).

DINUM will coordinate an interministerial plan for reducing non-European dependencies. Each ministry (including its operators) will be required to formalise its own plan by the autumn, covering the following areas: workstations, collaboration tools, antivirus, artificial intelligence, databases, virtualisation, and network equipment. These action plans will give the digital industry, which has major strengths that public procurement should leverage, visibility into the State's needs.

The dependency mapping and diagnosis carried out by the State procurement directorate (DAE), together with the work on defining a European digital service led by the Directorate General for Enterprise (DGE), will make it possible to refine the quantified reduction target with a clear timetable. The first "digital industry meetings" (rencontres industrielles du numérique), to be organised by DINUM in June 2026, will provide the opportunity to give concrete form to ministerial public-private coalitions, notably through the formalisation of a "public-private alliance for European sovereignty".
The State can no longer simply take note of its dependency; it must break free of it. We must wean ourselves off American tools and take back control of our digital destiny. We can no longer accept that our data, our infrastructure and our strategic decisions depend on solutions whose rules, prices, evolution and risks we do not control. The transition is under way: our ministries, our operators and our industrial partners are committing today to an unprecedented effort to map our dependencies and strengthen our digital sovereignty. Digital sovereignty is not optional.
Minister for Public Action and Accounts
Digital sovereignty is not optional; it is a strategic necessity. Europe must give itself the means to match its ambitions, and France is leading by example by accelerating the shift to sovereign, interoperable and sustainable solutions. By reducing our dependencies on non-European solutions, the State is sending a clear message: that of a public authority taking back control of its technological choices in the service of its digital sovereignty.
Minister Delegate for Artificial Intelligence and Digital Affairs
About the interministerial digital directorate (DINUM): DINUM's mission is to develop the State's digital strategy and steer its implementation. It supports the State's digital projects, in service of government priorities and with a view to improving the effectiveness of public action.
Learn more on numerique.gouv.fr
...
Read the original on numerique.gouv.fr »
1d-chess is a new variant where you can play the beautiful game without all those unnecessary and complicated extra dimensions. Play as white against the AI. You might initially find it more difficult than expected, but assuming optimal play, is there a forced win for white?
Mouse over to reveal answer: Try this line: N4 N5, N6 K7, R4 K6, R2 K7, R5++
There are three pieces in 1d-chess:
* King: can move one square in any direction.
* Knight: can move two squares forward or backward, jumping over any pieces in the way.
* Rook: can move in a straight line in any direction.
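The move rules are small enough to sketch in a few lines. Here is a minimal Swift illustration of move generation on a one-dimensional board (my own sketch, not the site's code; captures of friendly pieces and check legality are ignored for brevity):

```swift
// Squares are numbered 0..<board.count; board[i] is nil when square i is empty.
enum Piece { case king, knight, rook }

func destinations(for piece: Piece, from square: Int, on board: [Piece?]) -> [Int] {
    let onBoard = { (s: Int) in (0..<board.count).contains(s) }
    switch piece {
    case .king:
        return [square - 1, square + 1].filter(onBoard)   // one square either way
    case .knight:
        return [square - 2, square + 2].filter(onBoard)   // jumps over the square in between
    case .rook:
        var result: [Int] = []
        for step in [-1, 1] {
            var target = square + step
            while onBoard(target) {
                result.append(target)                      // empty square or capture
                if board[target] != nil { break }          // cannot slide past a piece
                target += step
            }
        }
        return result
    }
}
```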
Win by checkmating the enemy king. This occurs when the enemy king is in check (under attack by one of your pieces) and there are no legal moves for the opponent to get their king out of check.
The game is a draw if:
* A player is not in check and there are no legal moves for them to play.
* The same board position is repeated 3 times in a game.
* There are only kings left on the board, thus it is impossible to checkmate the opponent.
This chess variant was first described by Martin Gardner in the Mathematical Games column of the July 1980 issue of Scientific American.
See the column on JSTOR.
...
Read the original on rowan441.github.io »
A new report from 404 Media reveals that the FBI was able to recover deleted Signal messages from an iPhone by extracting data stored in the device’s notification database. Here are the details.
According to 404 Media, testimony in a recent trial involving “a group of people setting off fireworks and vandalizing property at the ICE Prairieland Detention Facility in Alvarado, Texas,” showed that the FBI was able to recover content of incoming Signal messages from a defendant’s iPhone, even though Signal had been removed from the device:
One of the defendants was Lynette Sharp, who previously pleaded guilty to providing material support to terrorists. During one day of the related trial, FBI Special Agent Clark Wiethorn testified about some of the collected evidence. A summary of Exhibit 158 published on a group of supporters’ website says, “Messages were recovered from Sharp’s phone through Apple’s internal notification storage—Signal had been removed, but incoming notifications were preserved in internal memory. Only incoming messages were captured (no outgoing).”
As 404 Media notes, Signal’s settings include an option that prevents the actual message content from being previewed in notifications. However, it appears the defendant did not have that setting enabled, which, in turn, seemingly allowed the system to store the content in the database.
404 Media reached out to Signal and Apple, but neither company provided any statements on how notifications are handled or stored.
With little to no technical details about the exact condition of the defendant’s iPhone, it is obviously impossible to pinpoint the precise method the FBI used to recover the information.
For instance, there are multiple system states an iPhone can be in, each with its own security and data access constraints, such as BFU (Before First Unlock), AFU (After First Unlock) mode, and so on.
Security and data access also change even more dramatically when the device is unlocked, since the system assumes the user is present and permits access to a wider range of protected data.
That said, iOS does store and cache a lot of data locally, trusting that it can rely on these different states to keep that information safe but readily available in case the device’s rightful owner needs it.
Another important factor to keep in mind: the token used to send push notifications isn’t immediately invalidated when an app is deleted. And since the server has no way of knowing whether the app is still installed after the last notification it sent, it may continue pushing notifications, leaving it up to the iPhone to decide whether to display them.
Interestingly, Apple just changed how iOS validates push notification tokens on iOS 26.4. While it is impossible to tell whether this is a result of this case, the timing is still notable.
Back to the case, given Exhibit 158’s description that the messages “were recovered from Sharp’s phone through Apple’s internal notification storage,” it is possible the FBI extracted the information from a device backup.
In that case, there are many commercially available tools for law enforcement that exploit iOS vulnerabilities to extract data that could have helped the FBI access this information.
To read 404 Media’s original report of this case, follow this link.
...
Read the original on 9to5mac.com »
France is trying to move on from Microsoft Windows. The country said it plans to move some of its government computers currently running Windows to the open source operating system Linux to further reduce its reliance on U.S. technology.
Linux is an open source operating system that is free to download and use, with various customized distributions that are tailored and designed for specific use cases or operations.
In a statement, French minister David Amiel said (translated) that the effort was to “regain control of our digital destiny” by relying less on U.S. tech companies. Amiel said that the French government can no longer accept that it doesn’t have control over its data and digital infrastructure.
The French government did not provide a specific timeline for the switchover, or which distributions it was considering. The switchover will begin with computers at the French government’s digital agency, DINUM. When reached by TechCrunch, a spokesperson for Microsoft did not comment on the news.
This is the latest effort by France to reduce its dependence on U.S. tech giants and use technology and cloud services originated within its borders, known as digital sovereignty, following growing instability and unpredictability on the part of the Trump administration.
Lawmakers and government leaders across Europe are growing more aware of the looming threat facing them at home, and their over-reliance on U.S. technology. In January, the European Parliament voted to adopt a report directing the European Commission to identify areas where the EU can reduce its reliance on foreign providers.
Since taking office in January 2025, Trump has upped his attacks on world leaders — straight-out capturing one and aiding in the killing of another. He has also weaponized sanctions against his critics, who include judges on the International Criminal Court, effectively cutting them off from transacting with U.S. companies. Those who have been sanctioned have reported having their bank accounts closed and access to U.S. tech services terminated, as well as being blocked from any other U.S. service.
France’s decision to ditch Windows comes months after the government announced it would stop using Microsoft Teams for video conferencing in favor of French-made Visio, a tool based on the open source end-to-end encrypted video meeting tool Jitsi.
The French government said it also plans to migrate its health data platform to a new trusted platform by the end of the year.
...
Read the original on techcrunch.com »
In this Friday’s magic demonstration, I’m going to show how what you see in Privacy & Security settings can be misleading, when it tells you that an app doesn’t have access to a protected folder, but it really does.
Although it appears you can achieve this using several ordinary apps, to make things simpler and clearer I’ve written a little app for this purpose, Insent, available from here: insent11
I’m working in macOS Tahoe 26.4, but I suspect you should see much the same in any version from macOS 13.5 onwards, as supported by Insent.
For this magic demo, I’m only going to use two of Insent’s six buttons:
* Open by consent, which results in Insent choosing a random text file from the top level of your Documents folder, and displaying its name and the start of its contents below. As it does this without involving the user in the process, the macOS privacy system TCC requires it to obtain the user’s consent to list and access the contents of that protected folder.
* Open from folder, which opens an Open and Save Panel where you select a folder. Insent then picks a random text file from the top level of that folder, and displays its name and the start of its contents below. Because you expressed your intent to access that protected folder, TCC considers that is good enough to give access without requiring any consent.
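Insent's own source isn't reproduced in this post, but the two buttons boil down to something like the following minimal Swift sketch. The function names are mine, and the filter on the .text extension simply mirrors the file names that appear in the log excerpts later on:

```swift
import AppKit

// "Open by consent": list ~/Documents directly. No user interaction selects the
// folder, so TCC has to ask the user for consent (kTCCServiceSystemPolicyDocumentsFolder)
// the first time this runs.
func openByConsent() throws -> URL? {
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let files = try FileManager.default.contentsOfDirectory(at: documents,
                                                            includingPropertiesForKeys: nil)
    return files.filter { $0.pathExtension == "text" }.randomElement()
}

// "Open from folder": the user picks the folder in an Open and Save Panel.
// That expresses intent, so access is given without any consent prompt.
func openFromFolder() -> URL? {
    let panel = NSOpenPanel()
    panel.canChooseDirectories = true
    panel.canChooseFiles = false
    guard panel.runModal() == .OK, let folder = panel.url else { return nil }
    let files = (try? FileManager.default.contentsOfDirectory(at: folder,
                                                              includingPropertiesForKeys: nil)) ?? []
    return files.filter { $0.pathExtension == "text" }.randomElement()
}
```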
Once you have downloaded Insent, extracted it from its archive, and dragged the app from that folder into one of your Applications folders, follow this sequence of actions:
Open Insent, click on Open by consent, and consent to the prompt to allow it to access your Documents folder. Shortly afterwards, Insent will display the opening of one of the text files in Documents. Quit Insent.
Open Privacy & Security settings, select Files & Folders, and confirm that Insent has been given access to Documents.
Open Insent, click on Open by consent, and confirm it now gains access to a text file without asking for consent. Quit Insent.
Open Privacy & Security settings, select Files & Folders, and disable Documents access in Insent’s entry there using the toggle.
Open Insent, click on Open by consent, and confirm that it can no longer open a text file, but displays [Couldn’t get contents of Documents folder].
Click on Open from folder and select your Documents folder there. Confirm that works as expected and displays the name and contents of one of the text files in Documents.
Click on Open by consent, and confirm that now works again.
Confirm that Documents access for Insent is still disabled in Files & Folders.
Whatever you do now, the app retains full access to Documents, no matter what is shown or set in Files & Folders.
Indeed, the only way you can protect your Documents folder from access by Insent is to run the following command in Terminal:
tccutil reset All co.eclecticlight.Insent
then restart your Mac. That should set Insent’s privacy settings back to their default.
You can also demonstrate that this behaviour is specific to one protected folder at a time. If you select a different protected folder like Desktop or Downloads using the Open from folder button, then Insent still won’t be able to list the contents of the Documents folder, as its TCC settings will function as expected.
Insent is an ordinary notarised app, and doesn’t run in a sandbox or pull any clever tricks. When System Integrity Protection (SIP) is enabled some of its operations are sandboxed, though, including attempts to list or access the contents of locations that are protected by TCC.
When you click on its Open by consent button, sandboxd intercepts the File Manager call to list the contents of Documents, as a protected folder. It then requests approval for that from TCC, as seen in the following log entries:
1.204592 Insent sendAction:
1.205160 Insent: trying to list files in ~/Documents
1.205828 sandboxd request approval
1.205919 sandboxd tcc_send_request_authorization() IPC
TCC doesn’t have authorisation for that access by Insent, either by Full Disk Access or specific access to Documents, so it prompts the user for their consent. If that’s given, the following log entries show that being passed back to the sandbox, and the change being notified to com.apple.chrono, followed by Insent actioning the original request:
3.798770 com.apple.sandbox kTCCServiceSystemPolicyDocumentsFolder granted by TCC for Insent
3.802225 com.apple.chrono appAuth:co.eclecticlight.Insent] tcc authorization(s) changed
3.809558 Insent: trying to look in ~/Documents for text files
3.809691 Insent: trying to read from: /Users/hoakley/Documents/asHelp.text
3.842101 Insent: read from: /Users/hoakley/Documents/asHelp.text
If you then disable Insent’s access to Documents in Privacy & Security settings, TCC denies access to Documents, and Insent can’t get the list of its contents:
1.093533 com.apple.TCC AUTHREQ_RESULT: msgID=440.109, authValue=0, authReason=4, authVersion=1, desired_auth=0, error=(null),
1.093669 com.apple.sandbox kTCCServiceSystemPolicyDocumentsFolder denied by TCC for Insent
1.094007 Insent: couldn’t get contents of ~/Documents
If you then access Documents by intent through the Open and Save Panel, sandboxd no longer intercepts the request, and TCC therefore doesn’t grant or deny access:
0.897244 Insent sendAction:
0.897318 Insent: trying to list files in ~/Documents
0.900828 Insent: trying to look in ~/Documents for text files
0.901112 Insent: trying to read from: /Users/hoakley/Documents/T2M2_2026-01-06_13_03_00.text
0.904101 Insent: read from: /Users/hoakley/Documents/T2M2_2026-01-06_13_03_00.text
Thus, access to a protected folder by user intent, such as through the Open and Save Panel, changes the sandboxing applied to the caller by removing its constraint to that specific protected folder. As the sandboxing isn’t controlled by or reflected in Privacy & Security settings, that allows TCC, in Files & Folders, to continue showing access restrictions that aren’t applied because the sandbox isn’t applied.
Access restrictions shown in Privacy & Security settings, specifically those to protected locations in Files & Folders, aren’t an accurate or trustworthy reflection of those that are actually applied. It’s possible for an app to have unrestricted access to one or more protected folders while its listing in Files & Folders shows it being blocked from access, or for it to have no entry at all in that list.
Most apps that want access to protected folders like Documents appear to seek that during their initialisation, and before any user interaction that could result in intent overriding the need for consent. However, many users report that apps appear to have access to Documents but aren’t listed in Files & Folders, suggesting that at some time that sequence of events does occur.
To be effectively exploited this would need careful sequencing, and for the user to select the protected folder in an Open and Save Panel, so drawing attention to the manoeuvre.
Most concerning is the apparent permanence of the access granted, requiring an arcane command in Terminal and a restart in order to reset the app’s privacy settings. It’s hard to believe that this was intended to trap the user into surrendering control over access to protected locations. But it can do.
I’m very grateful to Richard for drawing my attention to this.
...
Read the original on eclecticlight.co »
The Right Tool for the Job
TL;DR: The AI space is pushing hard for “Skills” as the new standard for giving LLMs capabilities, but I’m not a fan. Skills are great for pure knowledge and teaching an LLM how to use an existing tool. But for giving an LLM actual access to services, the Model Context Protocol (MCP) is the far superior, more pragmatic architectural choice. We should be building connectors, not just more CLIs.
Maybe it’s an artifact of spending too much time on X, but lately, the narrative that “MCP is dead” and “Skills are the new standard” has been hammered into my brain. Everywhere I look, someone is celebrating the death of the Model Context Protocol in favor of dropping a SKILL.md into their repository.
I am a very heavy AI user. I use Claude Code, Codex, and Gemini for coding. I rely on ChatGPT, Claude, and Perplexity almost every day to manage everything from Notion notes to my DEVONthink databases, and even my emails.
And honestly? I just don’t like Skills.
I hope MCP sticks around. I really don’t want a future where every single service integration requires a dedicated CLI and a markdown manual.
Here’s why I think the push for Skills as a universal solution is a step backward, and why MCP still gets the architecture right.
Claude pulling recent user feedback from Kikuyo through the Kikuyo MCP, no CLI needed.
The core philosophy of MCP is simple: it’s an API abstraction. The LLM doesn’t need to understand the how; it just needs to know the what. If the LLM wants to interact with DEVONthink, it calls devonthink.do_x(), and the MCP server handles the rest.
This separation of concerns brings some unbeatable advantages:
* Zero-Install Remote Usage: For remote MCP servers, you don’t need to install anything locally. You just point your client to the MCP server URL, and it works.
* Seamless Updates: When a remote MCP server is updated with new tools or resources, every client instantly gets the latest version. No need to push updates, upgrade packages, or reinstall binaries.
* Saner Auth: Authentication is handled gracefully (often with OAuth). Once the client finishes the handshake, it can perform actions against the MCP. You aren’t forcing the user to manage raw tokens and secrets in plain text.
* True Portability: My remote MCP servers work from anywhere: my Mac, my phone, the web. It doesn’t matter. I can manage my Notion via my LLM of choice from wherever a client is available.
* Sandboxing: Remote MCPs are naturally sandboxed. They expose a controlled interface rather than giving the LLM raw execution power in your local environment.
* Smart Discovery: Modern apps (ChatGPT, Claude, etc.) have tool search built-in. They only look for and load tools when they are actually needed, saving precious context window.
* Frictionless Auto-Updates: Even for local setups, an MCP installed directly via npx -y or uv can auto-update on every launch.
Not all Skills are the same. A pure knowledge skill (one that teaches the LLM how to format a commit message, write tests a certain way, or use your internal jargon) actually works well. The problems start when a Skill requires a CLI to actually do something.
My biggest gripe with Skills is the assumption that every environment can, or should, run arbitrary CLIs.
Most skills require you to install a dedicated CLI. But what if you aren’t in a local terminal? ChatGPT can’t run CLIs. Neither can Perplexity or the standard web version of Claude. Unless you are using a full-blown compute environment (like Perplexity Computer, Claude Cowork, Claude Code, or Codex), any skill that relies on a CLI is dead on arrival.
This leads to a cascade of annoying UX and architectural problems:
* The Deployment Mess: CLIs need to be published, managed, and installed through binaries, NPM, uv, etc.
* The Secret Management Nightmare: Where do you put the API tokens required to authenticate? If you’re lucky, the environment has a .env file you can dump plain-text secrets into. Some ephemeral environments wipe themselves, meaning your CLI works today but forgets your secrets tomorrow.
* Fragmented Ecosystems: Skill management is currently the wild west. When a skill updates, you have to reinstall it. Some tools support installing skills via npx skills, but that only works in Codex and Claude Code, not Claude Cowork or standard Claude. Pure knowledge skills work in Claude, but most others don’t. Some tools support a “skills marketplace,” others don’t. Some can install from GitHub, others can’t. You try to install an OpenClaw skill into Claude and it explodes with YAML parsing errors because the metadata fields don’t match.
* Context Bloat: Using a skill often requires loading the entire SKILL.md into the LLM’s context window, rather than just exposing the single tool signature it needs. It’s like forcing someone to read the entire car’s owner’s manual when all they want to do is call car.turn_on().
If a Skill’s instructions start with “install this CLI first,” you’ve just added an unnecessary abstraction layer and extra steps. Why not just use a remote MCP instead?
Codex pulling up a pure knowledge skill to learn how Phoenix colocated hooks work. No CLI, no MCP, just context.
The Right Tool for the Job
I don’t want Skills to become the de facto way to connect an LLM to a service. We can explain API shapes in a Skill so the LLM can curl it, but how is that better than providing a clean, strongly-typed interface via MCP?
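To make that contrast concrete, here is a hypothetical Swift sketch. None of these types come from a real SDK, and the tool and CLI names are made up; the point is only to show the difference between the structured, schema-backed call an MCP client sends and the free-form shell command a CLI-based Skill asks the model to assemble:

```swift
import Foundation

// MCP-style: the service advertises a typed tool and the client sends
// structured arguments the server can validate.
struct CreatePageArguments: Codable {
    let parentId: String
    let title: String
    let body: String
}

struct ToolCall: Codable {
    let tool: String
    let arguments: CreatePageArguments
}

let call = ToolCall(tool: "notes.create_page",
                    arguments: .init(parentId: "abc123", title: "Weekly notes", body: "…"))
let payload = try! JSONEncoder().encode(call)   // what actually crosses the wire

// Skill-plus-CLI style: the model assembles a shell string from a markdown manual,
// and the environment needs the binary installed plus an auth token on disk.
let cliCommand = "notes-cli pages create --parent abc123 --title 'Weekly notes'"
```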
Here’s how I think the ecosystem should look:
When to use MCP:
MCP should be the standard for giving an LLM an interface to connect to something: a website, a service, an application. The service itself should dictate the interface it exposes.
* Take Google Calendar. A gcal CLI is fine. The problem is a Skill that tells the LLM to install it, manage auth tokens, and shell out to it. An OAuth-backed remote MCP owned by Google handles all of that at the protocol level, and works from any client without any setup.
* To control Chrome, the browser should expose an MCP endpoint for stateful control, rather than relying on a janky chrome-cli.
* To debug with Hopper, the current built-in MCP that lets the LLM run step() is infinitely better than a separate hopper-cli.
* Xcode should ship with a built-in MCP that handles auth when an LLM connects to a project.
* Notion should have mcp.notion.so/mcp available natively, instead of forcing me to download notion-cli and manage auth state manually. (They actually do have a remote MCP now, which is exactly the right call.)
When to use Skills:
Skills should be “pure.” They should focus on knowledge and context.
* Teaching existing tools: I love having a .claude/skills folder that teaches the LLM how to use tools I already have installed. A skill explaining how to use curl, git, gh, or gcloud makes complete sense. We don’t need a “curl MCP”. We just need to teach the LLM how to construct good curl commands. However, a dedicated remote GitHub MCP makes much more sense for managing issues than relying on a gh CLI skill.
* Standardizing workflows: Skills are perfect for teaching Claude your business jargon, internal communication style, or organizational structure.
* Teaching handling of certain things: This is another great example and what Anthropic does as well with the PDF Skill - it explains how to deal with PDF files and how to manipulate them with Python.
* Secret Management patterns: Having a skill that tells Claude “Use fnox for this repo, here is how to use it” just makes sense. Every time we deal with secrets, Claude pulls up the skill. That’s way better than building a custom MCP just to call get_secret().
Skills living directly in the repo. The LLM picks them up automatically when working in that project.
Shower thought: Maybe the terminology is the problem. Skills should just be called LLM_MANUAL.md, and MCPs should be called Connectors.
Both have their place.
For the services I own, I already do this. A few examples:
* mcp-server-devonthink: A local MCP server that gives any LLM direct control over DEVONthink. No CLI wrapper, just a clean tool interface.
* microfn: Exposes a remote MCP at mcp.microfn.dev so any MCP-capable client can use it out of the box.
* MCP Nest: Tunnels local MCP servers through the cloud so they’re reachable remotely at mcp.mcpnest.dev/mcp. Built it because I kept wanting remote access to local MCPs without exposing my machine directly.
For microfn and Kikuyo I also published Skills, but they cover the CLI, not the MCP. That said, writing this made me realize: a skill that explains how to use an MCP server actually makes a lot of sense. Not to replace the MCP, but to give the LLM context before it starts calling tools. What the service does, how the tools relate to each other, when to use which one. A knowledge layer on top of a connector layer. That’s the combination I’d want.
And this is actually a pattern I’ve been using more and more in practice. When I’m working with an MCP server, I inevitably discover gotchas and non-obvious patterns: a date format that needs to be YYYY-MM-DD instead of YYYYMMDD, a search function that truncates results unless you bump a parameter, a tool name that doesn’t do what you’d expect. Rather than rediscovering these every session, I just ask Claude to wrap everything we learned into a Skill. The LLM already has the context from our conversation, so it writes the Skill with all the gotchas, common patterns, and corrected assumptions baked in.
After discovering backlink gotchas and date format quirks in the NotePlan MCP, I asked Claude to package everything into a skill. Now every future session starts with that knowledge.
The result is a Skill that acts as a cheat sheet for the MCP, not a replacement for it. The MCP still handles the actual connection and tool execution. The Skill just makes sure the LLM doesn’t waste tokens stumbling through the same pitfalls I already solved. It’s the combination of both that makes the experience actually smooth.
At the same time, I’ll keep maintaining my dotfiles repo full of Skills for procedures I use often, and I’ll keep dropping .claude/skills into my repositories to guide the AI’s behavior.
I just hope the industry doesn’t abandon the Model Context Protocol. The dream of seamless AI integration relies on standardized interfaces, not a fractured landscape of hacky CLIs. I’m still holding out hope for official Skyscanner, Booking.com, Trip.com, and Agoda.com MCPs.
Speaking of remote MCPs: I built MCP Nest specifically for this problem. A lot of useful MCP servers are local-only by nature, think Fastmail, Gmail, or anything that runs on your machine. MCP Nest tunnels them through the cloud so they become remotely accessible, usable from Claude, ChatGPT, Perplexity, or any MCP-capable client, across all your devices. If you want your local MCPs to work everywhere without exposing your machine directly, that’s what it’s for.
...
Read the original on david.coffee »
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.
The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America’s largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn’t intentional and they published their reports.
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly prescient.
In her testimony supporting SB 3444, a member of OpenAI’s Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message that’s consistent with the Trump administration’s crackdown on state AI safety laws, claiming it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” This is also consistent with the broader view of Silicon Valley in recent years, which has generally argued that it’s paramount for AI legislation to not hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that those can be effective if they “reinforce a path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.
Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois’ reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
...
Read the original on www.wired.com »
Jason at zx2c4.com
Hey folks,
I generally don’t send announcement emails for the Windows software, because the built-in updater takes care of notifying the relevant users. But because this hasn’t been updated in so long, and because of recent news articles, I thought it’d be a good idea to notify the list.
After a lot of hard work, we’ve released an updated Windows client, both the low level kernel driver and api harness, called WireGuardNT, and the higher level management software, command line utilities, and UI, called WireGuard for Windows.
There are some new features — such as support for removing individual allowed IPs without dropping packets (as was added already to Linux and FreeBSD) and setting very low MTUs on IPv4 connections — but the main improvement is lots of accumulated bug fixes, performance improvements, and above all, immense code streamlining due to ratcheting forward our minimum supported Windows version [1]. These projects are now built on a much more solid foundation, without having to maintain decades of compatibility hacks and alternative codepaths, and bizarre logic, and dynamic dispatching, and all manner of cruft. There have also been large toolchain updates — the EWDK version used for the driver, the Clang/LLVM/MingW version used for the userspace tooling, the Go version used for the main UI, the EV certificate and signing infrastructure — which all together should amount to better performance and more modern code.
But, as it’s our first Windows release in a long while, please test and let me know how it goes. Hopefully there are no regressions, and we’ve tested this quite a bit — including on Windows 10 1507 Build 10240, the most ancient Windows that we support which Microsoft does not anymore — but you never know. So feel free to write me as needed.
As always, the built-in updater should be prompting users to click the update button, which will check signatures and securely update the software. Alternatively, if you’re installing for the first time or want to update immediately, our mini 80k fetcher will download and verify the latest version:
- https://download.wireguard.com/windows-client/wireguard-installer.exe
- https://www.wireguard.com/install/
And to learn more about each of these two Windows projects:
- https://git.zx2c4.com/wireguard-windows/about/
- https://git.zx2c4.com/wireguard-nt/about/
Finally, I should comment on the aforementioned news articles. When we tried to submit the new NT kernel driver to Microsoft for signing, they had suspended our account, as I wrote about first in a random comment [2] on Hacker News in a thread about this happening to another project, and then later that day on Twitter [3]. The comments that followed were a bit off the rails. There’s no conspiracy here from Microsoft. But the Internet discussion wound up catching the attention of Microsoft, and a day later, the account was unblocked, and all was well. I think this is just a case of bureaucratic processes getting a bit out of hand, which Microsoft was able to easily remedy. I don’t think there’s been any malice or conspiracy or anything weird. I think most news articles currently circulating haven’t been updated to show that this was actually fixed pretty quickly. So, in case you were wondering, “but how can there be a new WireGuard for Windows update when the account is blocked?!”, now you know that the answer is, “because the account was unblocked.”
Anyway, enjoy the new software, and let me know how it works for you.
Thanks, Jason
[1] https://lists.zx2c4.com/pipermail/wireguard/2026-March/009541.html
[2] https://news.ycombinator.com/item?id=47687884
[3] https://x.com/EdgeSecurity/status/2041872931576299888
...
Read the original on lists.zx2c4.com »
Today we’re announcing that GitButler has raised a $17M Series A led by a16z with continuing support from our lead seed investors, Fly Ventures and A Capital.
I know what you’re thinking. You’re hoping that we’ll use phrases such as “we’re excited,” “this is just the beginning,” and “AI is changing everything”. While all those things are true, I’ll try to avoid them and instead make this announcement a little more personal.
For me this is a long story.
I was one of the cofounders of GitHub and over the last 15 years I’ve watched Git go from a rather niche developer tool written for a very esoteric collaboration style to the foundational infrastructure of all software development on the planet. I may have even had a small hand in some part of that.
What I learned from watching that story unfold is that developer platforms win when they remove friction from collaboration, and when they let the people producing code have less overhead to deal with.
GitButler was started three years ago because we felt like our development practices have been shoehorned into what Git could do for such a long time, it would be amazing to see what we could do with tooling that was actually designed for those practices.
That’s fundamentally what is behind this round.
We think software development is quickly moving into a new phase, and the problem that Git has solved for the last 20 years is overdue for a redesign. Today, with Git, we’re all teaching swarms of agents to use a tool built for sending patches over mailing lists. That’s far from what is needed today.
At GitHub, one thing became painfully clear over and over: developers don’t struggle because they can’t write code. They struggle because context falls apart between tools, between people, and now between people and agents. The hard problem is not generating change, it’s organizing, reviewing, and integrating change without creating chaos.
The old model assumed one person, one branch, one terminal, one linear flow. Not only has the problem not been solved well for that old model, it’s now only been compounded with our new AI tools.
Last week we released our first answer to that, the technical preview of the GitButler CLI.
This is a tool designed for the GitHub Flow style - the short lived branch, trunk based workflows that so many of us are using. This is a tool designed for humans, designed for agents, designed for scripting. Designed to stack branches, to multitask, to control and organize your changes, to easily undo - to be simple, powerful and intuitive, no matter who (or what) you are. Best of all, it just drops into any existing Git project.
But of course, that’s just the beginning. (Damn, I said I wasn’t going to say that…)
There was a tagline at GitHub that I always loved, but I never felt like we lived up to the promise of: “Social Coding”.
While GitHub certainly made it easier to collaborate on open source projects with forks and pull requests, it otherwise didn’t much improve the process of working together. There are still lists of issues and kanban boards, there are still patches (we just call them PRs now), we still chat in external chat rooms. We don’t look at commit messages, and our PR descriptions aren’t stored in Git and are usually lost in history. Heck, it could be argued that development in teams is less social than it was when version control was centralized.
But what if coding was actually social? What if it was easier for a team to work together than it is to work alone?
Imagine your version control tool taking what you’ve worked on and helping you craft logical, beautiful changes with proper context. Imagine being able to access agent interactions, related conversations and other information we’re currently losing. Imagine your tools telling you as soon as there are possible merge conflicts between teammates, rather than at the end of the process. Imagine being able to work on a branch stacked on a coworker’s branch while you’re both constantly modifying them. Imagine your agent being fully aware of not only what your other agents are working on, but what everyone on your team is working on, right now.
There is so much more that this fundamental layer of our software tooling could be doing for us. This is what we’re doing at GitButler, this is why we’ve raised the funding to help build all of this, faster.
We’re building the infrastructure for how software gets built next.
...
Read the original on blog.gitbutler.com »