10 interesting stories served every morning and every evening.
Digital sovereignty: the State accelerates the reduction of its extra-European dependencies
At the initiative of the Prime Minister, the Minister for Public Action and Accounts, and the Minister Delegate for Artificial Intelligence and Digital Affairs, the interministerial digital directorate (DINUM), together with the Directorate General for Enterprises (DGE), the national agency for information systems security (ANSSI), and the State Procurement Directorate (DAE), held an interministerial seminar on Wednesday, 8 April 2026, aimed at strengthening the collective drive to reduce extra-European digital dependencies. Bringing together ministers, administrations, public operators, and private-sector players, the event marks an acceleration of the French and European strategy for digital sovereignty.
Following the Prime Minister's recent directives, notably the circulars on digital public procurement and on the general rollout of the "Visio" videoconferencing tool, the seminar set a clear objective: reducing the State's extra-European digital dependencies.
Regarding the evolution of the workstation, DINUM announced its move away from Windows in favor of workstations running Linux. Regarding migration to sovereign solutions, the national health insurance fund (Caisse nationale d'Assurance maladie) announced a few days ago the migration of its 80,000 staff to tools from the interministerial digital suite (Tchap, Visio, and FranceTransfert for document transfer). Last month, the Government announced the migration of the health data platform to a trusted solution by the end of 2026.
The seminar also launched a new method for exiting these dependencies: forming unprecedented coalitions of ministries, major public operators, and private players. The approach aims to rally public and private efforts around specific projects, relying in particular on digital commons and interoperability standards (the Open-Interop and OpenBuro initiatives).
DINUM will coordinate an interministerial plan for reducing extra-European dependencies. Each ministry (operators included) will be required to formalize its own plan by autumn, covering the following areas: workstations, collaborative tools, antivirus, artificial intelligence, databases, virtualization, and network equipment. These action plans will give the digital industry visibility into the State's needs; the sector has major strengths that public procurement should put to use.
The mapping and diagnosis of dependencies carried out by the State Procurement Directorate (DAE), together with the work on defining a European digital service led by the Directorate General for Enterprises (DGE), will make it possible to refine the quantified reduction target with a clear timetable. The first "rencontres industrielles du numérique" (digital industry meetings), to be organized by DINUM in June 2026, will be the occasion to put public-private ministerial coalitions into practice, notably by formalizing a "public-private alliance for European sovereignty".
The State can no longer simply acknowledge its dependence; it must break out of it. We must wean ourselves off American tools and regain control of our digital destiny. We can no longer accept that our data, our infrastructure, and our strategic decisions depend on solutions whose rules, pricing, evolution, and risks we do not control. The transition is under way: our ministries, our operators, and our industrial partners are today committing to an unprecedented effort to map our dependencies and strengthen our digital sovereignty. Digital sovereignty is not optional.
Minister for Public Action and Accounts
Digital sovereignty is not an option; it is a strategic necessity. Europe must give itself the means to match its ambitions, and France is leading by example by accelerating the shift to sovereign, interoperable, and sustainable solutions. By reducing our dependence on extra-European solutions, the State is sending a clear message: that of a public authority taking back control of its technological choices in the service of its digital sovereignty.
Minister Delegate for Artificial Intelligence and Digital Affairs
About the interministerial digital directorate (DINUM): DINUM's mission is to develop the State's digital strategy and steer its implementation. It supports the State's digital projects, in service of government priorities and with a view to improving the effectiveness of public action.
Learn more on numerique.gouv.fr
...
Read the original on numerique.gouv.fr »
The worst part about the MacOS window management situation is the inability to switch spaces instantly, and the fact that Apple has continuously ignored requests to disable the nauseating switching animation. Sure, the animation isn't that long, but I switch spaces often enough that it becomes very noticeable and drives me insane.
I believe I have found the best solution to instant space switching!
But before I show you, note that other people share the same sentiment. I claim that none of the surveyed contemporary solutions, except for the one I bring up at the end of this article, suffices for what I want:
Enable “Reduce Motion” in the Accessibility settings.
This is always the default answer to this question online, and I'm sick of it! It doesn't even solve the problem, but rather replaces it with an equally useless fade-in animation. It also has the side effect of activating the prefers-reduced-motion media query in web browsers.
Install the yabai tiling window manager and use its instant space switcher.
And to be fair, it works pretty well. There are only two problems. First, yabai does this by binary patching a part of the operating system, which is only possible by disabling System Integrity Protection at your own discretion. Second, installing yabai forces you to learn and use it as your tiling window manager. I personally use PaperWM.spoon as my window manager, and the two are incompatible when installed together.
Use a third-party virtual space manager facade, hiding and showing windows as needed when switching spaces.
Some popular options are FlashSpace and AeroSpace virtual workspaces. I actually offer no criticism other than that they are not native to MacOS, and feel unnecessary given that all we want to do is disable an animation.
Pay for a license for BetterTouchTool. Enable “Move Right Space (Without Animation)” and “Move Left Space (Without Animation)”.
Without further ado, I managed to find InstantSpaceSwitcher by jurplel on GitHub. It is a simple menu bar application that achieves instant space switching while offering none of the aforementioned drawbacks.
InstantSpaceSwitcher does not require disabling System Integrity Protection; it works by simulating a trackpad swipe with a large amount of velocity. It additionally allows you to instantly jump to a space number. The last thing it provides is a command line interface.
The installation instructions are listed in the README, and I will briefly repeat them. You can either install InstantSpaceSwitcher via Homebrew:
$ brew install --cask jurplel/tap/instant-space-switcher
Or build it from source:
$ git clone https://github.com/jurplel/InstantSpaceSwitcher
$ cd InstantSpaceSwitcher
$ ./dist/build.sh
$ open ./build/InstantSpaceSwitcher.app
Once InstantSpaceSwitcher is installed as a native application, the command line interface is provided at:
$ InstantSpaceSwitcher.app/Contents/MacOS/ISSCli --help
Usage: InstantSpaceSwitcher.app/Contents/MacOS/ISSCli [left|right|index]
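Assuming the usage string above, jumping one space to the right from a shell would presumably be:
$ InstantSpaceSwitcher.app/Contents/MacOS/ISSCli right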
Did I mention that the repository literally has one star on GitHub (me)? I want more people to discover InstantSpaceSwitcher and consider it trustworthy; hence, please consider giving it a star if you find it helpful.
...
Read the original on arhan.sh »
A new report from 404 Media reveals that the FBI was able to recover deleted Signal messages from an iPhone by extracting data stored in the device’s notification database. Here are the details.
According to 404 Media, testimony in a recent trial involving “a group of people setting off fireworks and vandalizing property at the ICE Prairieland Detention Facility in Alvarado, Texas,” showed that the FBI was able to recover content of incoming Signal messages from a defendant’s iPhone, even though Signal had been removed from the device:
One of the defendants was Lynette Sharp, who previously pleaded guilty to providing material support to terrorists. During one day of the related trial, FBI Special Agent Clark Wiethorn testified about some of the collected evidence. A summary of Exhibit 158 published on a group of supporters’ website says, “Messages were recovered from Sharp’s phone through Apple’s internal notification storage—Signal had been removed, but incoming notifications were preserved in internal memory. Only incoming messages were captured (no outgoing).”
As 404 Media notes, Signal’s settings include an option that prevents the actual message content from being previewed in notifications. However, it appears the defendant did not have that setting enabled, which, in turn, seemingly allowed the system to store the content in the database.
404 Media reached out to Signal and Apple, but neither company provided any statements on how notifications are handled or stored.
With little to no technical details about the exact condition of the defendant’s iPhone, it is obviously impossible to pinpoint the precise method the FBI used to recover the information.
For instance, there are multiple system states an iPhone can be in, each with its own security and data access constraints, such as BFU (Before First Unlock), AFU (After First Unlock), and so on.
Security and data access also change even more dramatically when the device is unlocked, since the system assumes the user is present and permits access to a wider range of protected data.
That said, iOS does store and cache a lot of data locally, trusting that it can rely on these different states to keep that information safe but readily available in case the device’s rightful owner needs it.
Another important factor to keep in mind: the token used to send push notifications isn’t immediately invalidated when an app is deleted. And since the server has no way of knowing whether the app is still installed after the last notification it sent, it may continue pushing notifications, leaving it up to the iPhone to decide whether to display them.
Interestingly, Apple just changed how iOS validates push notification tokens in iOS 26.4. While it is impossible to tell whether that change is a result of this case, the timing is still notable.
Back to the case, given Exhibit 158’s description that the messages “were recovered from Sharp’s phone through Apple’s internal notification storage,” it is possible the FBI extracted the information from a device backup.
In that case, there are many commercially available tools for law enforcement that exploit iOS vulnerabilities to extract data that could have helped the FBI access this information.
To read 404 Media’s original report of this case, follow this link.
...
Read the original on 9to5mac.com »
The Right Tool for the Job
TL;DR: The AI space is pushing hard for “Skills” as the new standard for giving LLMs capabilities, but I’m not a fan. Skills are great for pure knowledge and teaching an LLM how to use an existing tool. But for giving an LLM actual access to services, the Model Context Protocol (MCP) is the far superior, more pragmatic architectural choice. We should be building connectors, not just more CLIs.
Maybe it’s an artifact of spending too much time on X, but lately, the narrative that “MCP is dead” and “Skills are the new standard” has been hammered into my brain. Everywhere I look, someone is celebrating the death of the Model Context Protocol in favor of dropping a SKILL.md into their repository.
I am a very heavy AI user. I use Claude Code, Codex, and Gemini for coding. I rely on ChatGPT, Claude, and Perplexity almost every day to manage everything from Notion notes to my DEVONthink databases, and even my emails.
And honestly? I just don’t like Skills.
I hope MCP sticks around. I really don’t want a future where every single service integration requires a dedicated CLI and a markdown manual.
Here’s why I think the push for Skills as a universal solution is a step backward, and why MCP still gets the architecture right.
Claude pulling recent user feedback from Kikuyo through the Kikuyo MCP, no CLI needed.
The core philosophy of MCP is simple: it’s an API abstraction. The LLM doesn’t need to understand the how; it just needs to know the what. If the LLM wants to interact with DEVONthink, it calls devonthink.do_x(), and the MCP server handles the rest.
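To make that abstraction concrete, here is a minimal sketch of what such a tool interface could look like with the official TypeScript MCP SDK. The server name, the search_documents tool, and the DEVONthink lookup are placeholders invented for illustration, not the actual mcp-server-devonthink code:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// The LLM only ever sees a typed tool; the "how" stays inside the server.
const server = new McpServer({ name: "devonthink", version: "0.1.0" });

server.tool(
  "search_documents",            // hypothetical tool name
  { query: z.string() },         // typed input schema the client can discover
  async ({ query }) => {
    // A real server would call into DEVONthink here (AppleScript, local API, ...).
    return { content: [{ type: "text", text: `Results for "${query}" would go here.` }] };
  }
);

// stdio transport for a local server; a remote server would expose HTTP instead.
await server.connect(new StdioServerTransport());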
This separation of concerns brings some unbeatable advantages:
* Zero-Install Remote Usage: For remote MCP servers, you don't need to install anything locally. You just point your client to the MCP server URL, and it works.
* Seamless Updates: When a remote MCP server is updated with new tools or resources, every client instantly gets the latest version. No need to push updates, upgrade packages, or reinstall binaries.
* Saner Auth: Authentication is handled gracefully (often with OAuth). Once the client finishes the handshake, it can perform actions against the MCP. You aren't forcing the user to manage raw tokens and secrets in plain text.
* True Portability: My remote MCP servers work from anywhere: my Mac, my phone, the web. It doesn't matter. I can manage my Notion via my LLM of choice from wherever a client is available.
* Sandboxing: Remote MCPs are naturally sandboxed. They expose a controlled interface rather than giving the LLM raw execution power in your local environment.
* Smart Discovery: Modern apps (ChatGPT, Claude, etc.) have tool search built in. They only look for and load tools when they are actually needed, saving precious context window.
* Frictionless Auto-Updates: Even for local setups, an MCP installed directly via npx -y or uv can auto-update on every launch.
Not all Skills are the same. A pure knowledge skill (one that teaches the LLM how to format a commit message, write tests a certain way, or use your internal jargon) actually works well. The problems start when a Skill requires a CLI to actually do something.
My biggest gripe with Skills is the assumption that every environment can, or should, run arbitrary CLIs.
Most skills require you to install a dedicated CLI. But what if you aren’t in a local terminal? ChatGPT can’t run CLIs. Neither can Perplexity or the standard web version of Claude. Unless you are using a full-blown compute environment (like Perplexity Computer, Claude Cowork, Claude Code, or Codex), any skill that relies on a CLI is dead on arrival.
This leads to a cascade of annoying UX and architectural problems:
* The Deployment Mess: CLIs need to be published, managed, and installed through binaries, NPM, uv, etc.
* The Secret Management Nightmare: Where do you put the API tokens required to authenticate? If you're lucky, the environment has a .env file you can dump plain-text secrets into. Some ephemeral environments wipe themselves, meaning your CLI works today but forgets your secrets tomorrow.
* Fragmented Ecosystems: Skill management is currently the wild west. When a skill updates, you have to reinstall it. Some tools support installing skills via npx skills, but that only works in Codex and Claude Code, not Claude Cowork or standard Claude. Pure knowledge skills work in Claude, but most others don't. Some tools support a "skills marketplace," others don't. Some can install from GitHub, others can't. You try to install an OpenClaw skill into Claude and it explodes with YAML parsing errors because the metadata fields don't match.
* Context Bloat: Using a skill often requires loading the entire SKILL.md into the LLM's context window, rather than just exposing the single tool signature it needs. It's like forcing someone to read the entire car owner's manual when all they want to do is call car.turn_on().
If a Skill’s instructions start with “install this CLI first,” you’ve just added an unnecessary abstraction layer and extra steps. Why not just use a remote MCP instead?
Codex pulling up a pure knowledge skill to learn how Phoenix colocated hooks work. No CLI, no MCP, just context.
The Right Tool for the Job#
I don’t want Skills to become the de facto way to connect an LLM to a service. We can explain API shapes in a Skill so the LLM can curl it, but how is that better than providing a clean, strongly-typed interface via MCP?
Here’s how I think the ecosystem should look:
When to use MCP:
MCP should be the standard for giving an LLM an interface to connect to something: a website, a service, an application. The service itself should dictate the interface it exposes.
* Take Google Calendar. A gcal CLI is fine. The problem is a Skill that tells the LLM to install it, manage auth tokens, and shell out to it. An OAuth-backed remote MCP owned by Google handles all of that at the protocol level, and works from any client without any setup.
* To control Chrome, the browser should expose an MCP endpoint for stateful control, rather than relying on a janky chrome-cli.
* To debug with Hopper, the current built-in MCP that lets the LLM run step() is infinitely better than a separate hopper-cli.
* Xcode should ship with a built-in MCP that handles auth when an LLM connects to a project.
* Notion should have mcp.notion.so/mcp available natively, instead of forcing me to download notion-cli and manage auth state manually. (They actually do have a remote MCP now, which is exactly the right call.)
When to use Skills:
Skills should be “pure.” They should focus on knowledge and context.
* Teaching existing tools: I love having a .claude/skills folder that teaches the LLM how to use tools I already have installed. A skill explaining how to use curl, git, gh, or gcloud makes complete sense. We don't need a "curl MCP"; we just need to teach the LLM how to construct good curl commands. However, a dedicated remote GitHub MCP makes much more sense for managing issues than relying on a gh CLI skill.
* Standardizing workflows: Skills are perfect for teaching Claude your business jargon, internal communication style, or organizational structure.
* Teaching handling of certain things: This is another great example, and what Anthropic does as well with the PDF Skill: it explains how to deal with PDF files and how to manipulate them with Python.
* Secret management patterns: Having a skill that tells Claude "Use fnox for this repo, here is how to use it" just makes sense. Every time we deal with secrets, Claude pulls up the skill. That's way better than building a custom MCP just to call get_secret().
Skills living directly in the repo. The LLM picks them up automatically when working in that project.
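For comparison, a pure knowledge skill needs nothing more than a small markdown file. A rough sketch, assuming the name and description frontmatter fields from Anthropic's published skill format; the body below is invented for illustration:

---
name: commit-style
description: How commit messages and branches are structured in this repo
---

# Commit style

* Use imperative, present-tense subjects under 50 characters.
* Put the ticket ID in the body, never in the subject.
* Squash fixup commits before opening a pull request.

No CLI, no server, no auth: the LLM just reads it when the topic comes up.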
Shower thought: Maybe the terminology is the problem. Skills should just be called LLM_MANUAL.md, and MCPs should be called Connectors.
Both have their place.
For the services I own, I already do this. A few examples:
* mcp-server-devonthink: A local MCP server that gives any LLM direct control over DEVONthink. No CLI wrapper, just a clean tool interface.
* microfn: Exposes a remote MCP at mcp.microfn.dev so any MCP-capable client can use it out of the box.
* MCP Nest: Tunnels local MCP servers through the cloud so they're reachable remotely at mcp.mcpnest.dev/mcp. Built it because I kept wanting remote access to local MCPs without exposing my machine directly.
For microfn and Kikuyo I also published Skills, but they cover the CLI, not the MCP. That said, writing this made me realize: a skill that explains how to use an MCP server actually makes a lot of sense. Not to replace the MCP, but to give the LLM context before it starts calling tools. What the service does, how the tools relate to each other, when to use which one. A knowledge layer on top of a connector layer. That’s the combination I’d want.
And this is actually a pattern I've been using more and more in practice. When I'm working with an MCP server, I inevitably discover gotchas and non-obvious patterns: a date format that needs to be YYYY-MM-DD instead of YYYYMMDD, a search function that truncates results unless you bump a parameter, a tool name that doesn't do what you'd expect. Rather than rediscovering these every session, I just ask Claude to wrap everything we learned into a Skill. The LLM already has the context from our conversation, so it writes the Skill with all the gotchas, common patterns, and corrected assumptions baked in.
After discovering backlink gotchas and date format quirks in the NotePlan MCP, I asked Claude to package everything into a skill. Now every future session starts with that knowledge.
The result is a Skill that acts as a cheat sheet for the MCP, not a replacement for it. The MCP still handles the actual connection and tool execution. The Skill just makes sure the LLM doesn’t waste tokens stumbling through the same pitfalls I already solved. It’s the combination of both that makes the experience actually smooth.
At the same time, I’ll keep maintaining my dotfiles repo full of Skills for procedures I use often, and I’ll keep dropping .claude/skills into my repositories to guide the AI’s behavior.
I just hope the industry doesn’t abandon the Model Context Protocol. The dream of seamless AI integration relies on standardized interfaces, not a fractured landscape of hacky CLIs. I’m still holding out hope for official Skyscanner, Booking.com, Trip.com, and Agoda.com MCPs.
Speaking of remote MCPs: I built MCP Nest specifically for this problem. A lot of useful MCP servers are local-only by nature, think Fastmail, Gmail, or anything that runs on your machine. MCP Nest tunnels them through the cloud so they become remotely accessible, usable from Claude, ChatGPT, Perplexity, or any MCP-capable client, across all your devices. If you want your local MCPs to work everywhere without exposing your machine directly, that’s what it’s for.
...
Read the original on david.coffee »
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.
The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America’s largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn’t intentional and they published their reports.
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.
In her testimony supporting SB 3444, a member of OpenAI’s Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message that’s consistent with the Trump administration’s crackdown on state AI safety laws, claiming it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” This is also consistent with the broader view of Silicon Valley in recent years, which has generally argued that it’s paramount for AI legislation to not hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that those can be effective if they “reinforce a path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.
Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois’ reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
...
Read the original on www.wired.com »
Transform your old laptop into a powerful always-online server. Based in Amsterdam, we aim to provide professional colocation services in the US and across European datacenters in partnership with Hetzner.
Most VPS providers give you severely limited compute resources at premium prices. You even share these resources with other customers without knowing!
Your old laptop packs more CPU power, RAM, and storage than their entry-level offerings - and with us, you’ll pay just €7/month for professional hosting. Why settle for a restricted virtual slice when you can have your own laptop running dedicated just to you, 24/7 in a professional datacenter?
In addition, you cut down on e-waste and help save the planet!
Every laptop gets its own static, fully routable IPv4 address for maximum accessibility.
Professional datacenter hosting with guaranteed uptime and monitoring, to be powered by Hetzner’s infrastructure.
Free KVM-over-IP access to your laptop - just like having it right next to you.
We offer free assistance for your initial setup and ensure you get your choice of server software up and running. A Kubernetes cluster, Proxmox or a niche CI/CD solution? No problem!
Fill out our application form and we’ll contact you within 2 working days.
We’ll send you a prepaid shipping box - just pack your laptop and drop it at your nearest collection point. Please note that we are still figuring out the specifics of logistics.
Our team connects your laptop and sends you a link to access your machine via KVM. If you need further assistance, we’re just an email away!
Access your laptop server from anywhere, anytime.
Click here to sign up with your details and we’ll contact you within 2 working days to discuss your setup.
How much does it cost?
We charge a flat fee of €7 per month, regardless of power consumption. This includes all services: colocation, IPv4 address, KVM access, and monitoring.
What do I need to send?
Your laptop and its power brick. We’ll provide a prepaid shipping box - just pack everything securely and drop it at your nearest collection point. It’s completely free!
What are the connectivity requirements?
Your laptop must have either an ethernet port or a USB port (we’ll provide a USB ethernet adapter if needed). We connect all laptops via ethernet - WiFi is not available in the datacenter.
What kind of setup assistance do you provide?
We offer complimentary assistance with initial setup, including installation of most Linux distributions, Kubernetes clusters, Proxmox virtualization, and other common server software. Just let us know what you need, and we’ll help you get started.
What are the laptop requirements?
Your laptop should be fully functional with a working power supply and either an ethernet port or USB port for connectivity. Age isn’t a factor. We might modify your laptop to remove or power down the battery, wireless radios, etc. to ensure it can be used safely in the data center.
Where are your datacenters located?
We’re based in Amsterdam and aim to work with Hetzner to provide colocation services in the US and across their European datacenter network. This ensures your laptop is hosted in professional, secure facilities with excellent connectivity.
Your laptop will be hosted in Hetzner’s professional datacenters with 24/7 security, climate control, and redundant power. We also provide basic firewall services and DDoS protection.
...
Read the original on colaptop.pages.dev »
I’d like to tell the story of a job I just completed for a customer, so that I can make a point about how I feel Microsoft and other large technology companies are actively hostile to their users.
I received a call from my neighbour asking if I would be willing to help her husband with an issue he’d been having with his laptop. As the proud new owner of my own IT services company, I of course agreed to take a look.
I spoke with my neighbour’s husband, and immediately saw that he was not tech literate. I learned to identify the type while doing IT work for my previous employer. This made understanding his problem difficult, but through conversation we did manage to come to an understanding about what the real issue was that he was experiencing.
What he was seeing was that he was no longer receiving email in Outlook, and that there was an error message claiming he had ‘run out of available storage’, or some other similar nonsense. He is a very light email user, and he knows it. He was confused as to why he’d run out of storage. I was confused as well, at first.
Through investigation I discovered that the Outlook email service uses Onedrive for storage of all messages and attachments. He had 5 GB of available storage, the amount that is given with his free account. This still didn’t explain why he was seeing that error message; there was no way he had consumed 5 GB of storage with just his email use.
Unsurprisingly, his Onedrive storage wasn’t filled by his email, it was filled by the personal files from his Windows 11 desktop. Did he configure Windows to save those files to his Onedrive directory, instead of his local home directory? Of course not, that was done by default. Did he even know that this was happening? Also, no. He had no idea this was happening until he saw that error message, which oh-so-helpfully offered to ‘solve’ his problem by offering him a subscription to additional paid storage capacity on the account.
He did manage to loosely understand what was happening, enough at least to start deleting files from his computer to try and make the error message go away. I was never able to confirm with him, but I suspect that he deleted files (including family photos) for which he had no other backup.
I will be blunt, this infuriates me. This wasn’t the first time I’ve seen this. I saw it many times while working for my previous employer. Microsoft has intentionally broken a fundamental assumption about how files are stored on a computer running Windows. They do this without asking the user, and without adequately explaining what they have done. Microsoft is very obviously employing dark patterns in order to goad its users into paying for Onedrive storage.
I’m a computer nerd, and if you are reading this you probably are as well. We can change that setting ourselves without much thought, and we probably have backups of our important data in case recovery is necessary. I will tell you that many people are extremely utilitarian about their computer use. They use their computers only to the degree that they must to serve their other interests in life. They also trust that their property, the device that cost them hundreds of dollars, isn’t trying to cheat them like some back-alley con artist.
This isn’t a game. My customer isn’t a number on a spreadsheet, merely an increment towards reaching some useless KPI. He deleted family photos to try and get that error message to go away, so that he could just receive emails again. He may not understand what happened, but he’s not stupid. He suspected that this was a scam to get him to pay for something he didn’t need, he just didn’t understand how the scam worked.
First and foremost, I performed a complete backup of his data. I took everything that I could find locally on the machine, as well as everything from the Onedrive account, including the Trash. It wasn’t much, only a few gigabytes, which I transferred to a separate USB drive.
I carefully transferred all files out of the Onedrive directory structure and back into his home folder. The Windows file explorer did not make this easy or intuitive.
I proceeded to delete everything from the Onedrive account, through the web interface. I did notice that deleting files merely moved them into the Trash, which was still being counted towards total storage usage. I assumed this was yet another subtle dark pattern.
I alluded to changing settings as a way to solve this. The approach we often took at my previous employer was to simply disable Onedrive in the Windows startup list. That could have worked in this case but I had a better idea. Remove Onedrive entirely.
I have muscle memory at this point for how to do it, if you were wondering this is the procedure I used:
Open an admin Terminal and load up Chris Titus’ winutil.
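If you haven’t used it before, winutil is launched from an elevated PowerShell window; from memory the one-liner is roughly the following, after which you pick the OneDrive removal option under its Tweaks section (command and menu location quoted from memory, so double-check against Chris Titus’ own documentation):

irm "https://christitus.com/win" | iex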
This entirely removes the Onedrive application from Windows, including all integrations into other programs, such as the file explorer.
I then proceeded to delete everything from the Onedrive account, including the Trash. The error messages finally went away in Outlook and he was able to receive email messages again.
I may be preaching to the choir, but regardless I want to use this post as my opportunity to make these points in my own way. Microsoft is actively hostile towards its users. They have become a basket case of an organisation, where chasing irrelevant KPIs has become more important than product quality, or even baseline respect for their users. The exact same can be said, to varying degrees, of every other large consumer-tech company.
I see this as the result of bad incentive structures. A toxic game theory that has been allowed to play out over many years without proper scrutiny. The lefty in me might think that this is a manifestation of Late Capitalism. If so then it feels like we’re about 30 seconds away from midnight.
I think a lot about the possible ways to tweak said incentive structures, to build a choice architecture that can prevent even the first step in the process that led to this.
Days like today, when I’m thinking about the real actual ways that this nonsense impacts real actual people, I can’t ignore the humans in this loop. People need to actually take responsibility for their choices, not just turn their brain off when the number looks right in the spreadsheet.
If you enjoyed this post, let me know! Email me at mail@lzon.ca, or reach out through one of my social accounts linked on the homepage.
...
Read the original on lzon.ca »
Your AI chatbot sessions and cloud-stored photos might get more expensive if other states follow Maine’s lead. Lawmakers there just advanced the nation’s first statewide moratorium on large data centers, citing concerns that the AI boom is pushing electricity costs even higher in a state already suffering America’s priciest power bills.
The Democratic-controlled legislature advanced bill LD 307, temporarily blocking permits for any new data center requiring more than 20 megawatts. The measure runs until November 2027, buying time for a new Data Center Coordination Council to study how these facilities strain Maine’s aging electrical grid.
Gov. Janet Mills supports the pause while developers scramble for exemptions.
The bill gained traction after residents in Wiscasset and Lewiston successfully opposed data center proposals over water usage and safety concerns. Projects now in limbo include facilities planned for:
* Jay (at an old paper mill site)
“Taking this pause now is going to be crucial,” Rep. Christopher Kessler said, according to Maine Public Radio, reflecting growing legislative concern about grid capacity. Developer Tony McDonald disagrees, calling the proposed restrictions “disastrous” and claiming his team got “caught in this dragnet.”
The Pine Tree State isn’t alone in pumping the brakes. Counties in Michigan and Indiana have imposed their own local pauses on data center development, while cities from Denver to Detroit weigh restrictions as hyperscale facilities chase cheap land and reliable power.
The timing reflects broader anxiety about AI’s infrastructure appetite. Data centers now consume roughly 4% of U.S. electricity, with projections suggesting that figure could double by 2030. For Mainers already paying some of the nation’s highest residential rates, that mathematical reality hits differently than Silicon Valley’s endless optimization rhetoric.
Maine’s move represents what economist Anirban Basu called a “canary in the coal mine” for state-level resistance to Big Tech’s energy demands. Whether that precedent spreads depends on how aggressively other governors follow Maine’s lead—and whether your favorite AI services start charging accordingly.
...
Read the original on www.gadgetreview.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.