10 interesting stories served every morning and every evening.
LONDON (AP) — In France, civil servants will ditch Zoom and Teams for a homegrown video conference system. Soldiers in Austria are using open source office software to write reports after the military dropped Microsoft Office. Bureaucrats in a German state have also turned to free software for their administrative work.
Around Europe, governments and institutions are seeking to reduce their use of digital services from U.S. Big Tech companies and turning to domestic or free alternatives. The push for “digital sovereignty” is gaining attention as the Trump administration strikes an increasingly belligerent posture toward the continent, highlighted by recent tensions over Greenland that intensified fears that Silicon Valley giants could be compelled to cut off access.
Concerns about data privacy and worries that Europe is not doing enough to keep up with the United States and Chinese tech leadership are also fueling the drive.
The French government referenced some of these concerns when it announced last week that 2.5 million civil servants would stop using video conference tools from U.S. providers — including Zoom, Microsoft Teams, Webex and GoTo Meeting — by 2027 and switch to Visio, a homegrown service.
The objective is “to put an end to the use of non-European solutions, to guarantee the security and confidentiality of public electronic communications by relying on a powerful and sovereign tool,” the announcement said.
“We cannot risk having our scientific exchanges, our sensitive data, and our strategic innovations exposed to non-European actors,” David Amiel, a civil service minister, said in a press release.
Microsoft said it continues to “partner closely with the government in France and respect the importance of security, privacy, and digital trust for public institutions.”
The company said it is “focused on providing customers with greater choice, stronger data protection, and resilient cloud services — ensuring data stays in Europe, under European law, with robust security and privacy protections.”
Zoom, Webex and GoTo Meeting did not respond to requests for comment.
French President Emmanuel Macron has been pushing digital sovereignty for years. But there’s a lot more “political momentum behind this idea now that we need to de-risk from U.S. tech,” said Nick Reiners, senior geotechnology analyst at the Eurasia Group.
“It feels kind of like there’s a real zeitgeist shift,” Reiners said.
It was a hot topic at the World Economic Forum’s annual meeting of global political and business elites last month in Davos, Switzerland. The European Commission’s official for tech sovereignty, Henna Virkkunen, told an audience that Europe’s reliance on others “can be weaponized against us.”
“That’s why it’s so important that we are not dependent on one country or one company when it comes to very critical fields of our economy or society,” she said, without naming countries or companies.
A decisive moment came last year when the Trump administration sanctioned the International Criminal Court’s top prosecutor, Karim Khan, after the tribunal, based in The Hague, Netherlands, issued an arrest warrant for Israeli Prime Minister Benjamin Netanyahu, an ally of President Donald Trump.
The sanctions led Microsoft to cancel Khan’s ICC email, a move that was first reported by The Associated Press and sparked fears of a “kill switch” that Big Tech companies can use to turn off service at will.
Microsoft maintains it kept in touch with the ICC “throughout the process that resulted in the disconnection of its sanctioned official from Microsoft services. At no point did Microsoft cease or suspend its services to the ICC.”
Microsoft President Brad Smith has repeatedly sought to strengthen trans-Atlantic ties, the company’s press office said, and pointed to an interview he did last month with CNN in Davos in which he said that jobs, trade and investment, as well as security, would be affected by a rift over Greenland.
“Europe is the American tech sector’s biggest market after the United States itself. It all depends on trust. Trust requires dialogue,” Smith said.
Other incidents have added to the movement. There’s a growing sense that repeated EU efforts to rein in tech giants such as Google with blockbuster antitrust fines and sweeping digital rule books haven’t done much to curb their dominance.
Billionaire Elon Musk is also a factor. Officials worry about relying on his Starlink satellite internet system for communications in Ukraine.
Washington and Brussels wrangled for years over data transfer agreements, triggered by former National Security Agency contractor Edward Snowden’s revelations of U.S. cyber-snooping.
With online services now mainly hosted in the cloud through data centers, Europeans fear that their data is vulnerable.
U.S. cloud providers have responded by setting up so-called “sovereign cloud” operations, with data centers located in European countries, owned by European entities and with physical and remote access only for staff who are European Union residents.
The idea is that “only Europeans can take decisions so that they can’t be coerced by the U.S.,” Reiners said.
The German state of Schleswig-Holstein last year migrated 44,000 employee inboxes from Microsoft to an open source email program. It also switched from Microsoft’s SharePoint file sharing system to Nextcloud, an open source platform, and is even considering replacing Windows with Linux and telephones and videoconferencing with open source systems.
“We want to become independent of large tech companies and ensure digital sovereignty,” Digitalization Minister Dirk Schrödter said in an October announcement.
The French city of Lyon said last year that it’s deploying free office software to replace Microsoft. Denmark’s government and the cities of Copenhagen and Aarhus have also been trying out open-source software.
“We must never make ourselves so dependent on so few that we can no longer act freely,” Digital Minister Caroline Stage Olsen wrote on LinkedIn last year. “Too much public digital infrastructure is currently tied up with very few foreign suppliers.”
The Austrian military said it has also switched to LibreOffice, a software package with word processor, spreadsheet and presentation programs that mirrors Microsoft 365’s Word, Excel and PowerPoint.
The Document Foundation, a nonprofit based in Germany that’s behind LibreOffice, said the military’s switch “reflects a growing demand for independence from single vendors.” Reports also said the military was concerned that Microsoft was moving file storage online to the cloud — the standard version of LibreOffice is not cloud-based.
Some Italian cities and regions adopted the software years ago, said Italo Vignoli, a spokesman for The Document Foundation. Back then, the appeal was not needing to pay for software licenses. Now, the main reason is to avoid being locked into a proprietary system.
“At first, it was: we will save money and by the way, we will get freedom,” Vignoli said. “Today it is: we will be free and by the way, we will also save some money.”
Associated Press writer Molly Quell in The Hague, Netherlands contributed to this report.
This version corrects the contribution line to Molly Quell instead of Molly Hague.
...
Read the original on apnews.com »
What’s up with all those equals signs anyway?

For some reason or other, people have been posting a lot of excerpts from old emails on Twitter over the last few days. The most vital question everybody’s asking themselves is: What’s up with all those equals signs?!

And that’s something I’m somewhat of an expert on. I mean, having written mail readers and stuff; not because I’ve been to Caribbean islands. I’ve seen people confidently claim that it’s a code, or that it’s an artefact of scanning and then using OCR, but it’s neither — it’s just that whoever converted these emails to a readable format were morons.

What’s that you say? “Converted?! Surely emails are just text!!” Well, if you lived in the stone age (i.e., the 80s), they mostly were, but then people invented things like “long lines” and “rock döts”, and computers had to “encode” the mail before sending. The artefact we see here is from something called “quoted printable”, or as we used to call it when it was introduced: “Quoted unreadable”.

To take the first line. Whoever wrote this, typed in the following in their mail reader:

we talked about designing a pig with different non- cloven hoofs in order to make kosher bacon

We see that that’s quite a long line. Mail servers don’t like that, so mail software will break it into two lines, like so:

we talked about designing a pig with different non- =
cloven hoofs in order to make kosher bacon

See? There’s that equals sign! Yes, the equals sign is used to say “this should really be one single line, but I’ve broken it in two so that the mail server doesn’t get mad at me”.

The formal definition here is important, though, so I have to be a bit technical here: To say “this is a continuation line”, you insert an equals sign, then a carriage return, and then a line feed: =CRLF

… non- =CRLF
cloven hoofs…

When displaying this, we remove all these three characters, and end up with:

… non- cloven hoofs…

So what’s happened here? Well, whoever collected these emails first converted from CRLF (also known as the “Windows” line ending coding, but it’s the standard line ending in the SMTP standard) to “NL” (i.e., “Unix” line ending coding). This is pretty normal if you want to deal with email. But you then have one byte fewer:

… non- =NL
cloven hoofs…

If your algorithm to decode this is, stupidly, “find equals signs at the end of the line, and then delete two characters, and then finally the equals sign”, you should end up with:

… non- loven hoofs…

I.e., you lose the “c”. That’s almost what happened here, but not quite: Why does the equals sign still remain?

This StackOverflow post from 14 years ago explains the phenomenon, sort of:

Obviously the client notices that = is not followed by a proper CR LF sequence, so it assumes that it is not a soft line break, but a character encoded in two hex digits, therefore it reads the next two bytes. It should notice that the next two bytes are not valid hex digits, so its behavior is wrong too, but we have to admit that at that point it does not have a chance to display something useful. They opted for the garbage in, garbage out approach.

That is, equals signs are also used for something else besides wrapping long lines, and that’s what we see later in the post:

=C2 please note

If the equals sign is not at the end of a line, it’s used to encode “funny characters”, like what you use with “rock döts”. =C2 is 194, which is a first character in a UTF-8 sequence, and the following char is most likely a =A0: =C2=A0 is “non-breakable space”, which is something people often use to indent text (and the “please note” is indented) and you see =A0 in many other places in these emails.

My guess is that whoever did this part just did a search-replace for =C2 and/or =A0 instead of using a proper decoder, but other explanations are certainly possible. Any ideas?

Anyway, that’s what’s up with those equals signs: 1) “it’s technical”, and 2) “it’s a combination of buggy continuation line decoding and buggy non-ASCII decoding”, and 3) “whoever processed these mails are incompetent”. I don’t think 2) should be very surprising at this point, do you?

(Edit a bit later: To nitpick a bit here: When the standard was written, people mostly envisioned that the quoted-printable content transport encoding would be unwound upon reception (note “transport”), and that you’d end up with “clean text” on disk after reception. This didn’t really happen, so all “real” implementations do the right thing with single-character (i.e., “unencoded”) newlines. For instance:

(quoted-printable-decode-string "he=\nllo")
=> "hello"

Which leads me to assume that they reused an algo that was usually run in an SMTP server context to do the line unfolding — in that context, you can safely assume that the line ending is a CRLF. And by chance, this algo also works fine if you’re working with a Windows-based file, but fails for a Unix-based file.)
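To make the failure mode concrete, here is a small sketch of my own (not from the post) contrasting correct soft-break decoding with the naive “delete two characters” approach described above; the actual dump behaved slightly differently (the equals sign survived), but this shows how the “c” gets eaten:

```javascript
// Correct: a soft line break is "=" followed by CRLF; remove all three bytes.
function decodeSoftBreaks(text) {
  return text.replace(/=\r\n/g, "");
}

// Buggy: find an "=", then delete it plus the next two characters,
// regardless of what those characters actually are.
function decodeSoftBreaksBuggy(text) {
  return text.replace(/=[\s\S]{2}/g, "");
}

const wrapped = "we talked about designing a pig with different non- =\r\ncloven hoofs";
console.log(decodeSoftBreaks(wrapped));
// "we talked about designing a pig with different non- cloven hoofs"

const unixified = wrapped.replace(/\r\n/g, "\n"); // CRLF converted to NL: one byte fewer
console.log(decodeSoftBreaksBuggy(unixified));
// "we talked about designing a pig with different non- loven hoofs" (the "c" is gone)
```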
...
Read the original on lars.ingebrigtsen.no »
Around January 11, 2026, archive.today (aka archive.is, archive.md, etc) started using its users as proxies to conduct a distributed denial of service (DDOS) attack against Gyrovague, my personal blog. All users encountering archive.today’s CAPTCHA page currently load and execute the following Javascript:
setInterval(function() {
  fetch("https://gyrovague.com/?s=" + Math.random().toString(36).substring(2, 3 + Math.random() * 8), {
    referrerPolicy: "no-referrer",
    mode: "no-cors"
  });
}, 300);
Every 300 milliseconds, as long as the CAPTCHA page is open, this makes a request to the search function of my blog using a random string, ensuring the response cannot be cached and thus consumes resources.
You can validate this yourself by checking the source code and network requests; if you’re not being redirected to the CAPTCHA page, here’s a screenshot. uBlock Origin also stops the requests from being executed, so you may need to turn that off. At time of writing, the code above is located at line 136 of the CAPTCHA page’s top level HTML file:
So how did we end up here?
On August 5, 2023, I published a blog post called archive.today: On the trail of the mysterious guerrilla archivist of the Internet. Using what cool kids these days call OSINT, meaning poking around with my favorite search engine, the post examines the history of the site, its tech stack and its funding. The post mentions three names/aliases linked to the site, but all of them had been dug up by previous sleuths and the blog post also concludes that they are all most likely aliases, so as far as “doxxing” goes, this wasn’t terribly effective.
My motives for publishing this have been questioned, sometimes in fanciful ways. The actual rationale is boringly straightforward: I found it curious that we know so little about this widely-used service, so I dug into it, in the same way that previous posts dug into a sketchy crypto coin offering, monetization dark patterns in a popular pay to win game, and the end of subway construction in Japan. That’s it, and it’s also the only post on my blog that references archive.today.
The post gathered some 10,000 views and a bit of discussion on Hacker News, but didn’t exactly set the blogosphere on fire. And indeed, absolutely nothing happened for the next two years and a bit.
On November 5, 2025, Heise Online reported that the FBI was now on the trail of archive.today and had subpoenaed its domain registrar Tucows. Both this report and ArsTechnica also linked to my blog post.
On November 13, AdGuard DNS published an interesting blog post about a sketchy French organization called Web Abuse Association Defense (WAAD), which was trying to pressure them into blocking archive.today’s various domains. An update added on November 18 also suggests that WAAD is impersonating other people.
On January 8, 2026, my blog host Automattic (dba WordPress.com) notified me that they had received a GDPR complaint from a “Nora Puchreiner”, alleging that my blog post “contains extensive personal data … presented in a narrative that is defamatory in tone and context”. The complaint was entirely lacking in actionable detail, so I had Gemini compose a rebuttal citing journalistic exemption, public interest, failure to identify falsehoods, and host protection, and after a quick review Automattic sided with me and left the post up. Score one for AI.
On January 10, I received a politely worded email from archive.today’s webmaster asking me to take down the post for a few months. Unfortunately the email was classified as spam by Gmail and I only spotted it five days later. I responded on the 15th and followed up on the 20th, but did not hear back.
On January 14, a user called “rabinovich” posted Ask HN: Weird archive.today behavior? on Hacker News, asking about the DDOS-like behavior which they claimed had started three days ago. This is, as far as I can tell, the first public mention of this anywhere, and a kind HN user brought it to my attention.
On January 21, commit ^bbf70ec (warning: very large) added gyrovague.com to dns-blocklists, used by ad blocking services like uBlock Origin. This is actually beneficial, since if you have an ad blocker installed, the DDOS script’s network requests are now blocked. (It does not stop users from browsing to my blog directly.)
On January 25, I emailed archive.today’s webmaster for the third time with a draft of this blog post, declining to take down the post but offering to “change some wording that you feel is being misrepresented”. “Nora Puchreiner” responded with an increasingly unhinged series of threats:
And threatening me with Streisand… having such a noble and rare name, which in retaliation could be used for the name of a scam project or become a byword for a new category of AI porn… are you serious?
If you want to pretend this never happened — delete your old article and post the new one you have promised. And I will not write “an OSINT investigation” on your Nazi grandfather, will not vibecode a gyrovague.gay dating app, etc.
At this point it was pretty clear the conversation had run its course, so here we are. And for the record, my long-dead grandfather served in an anti-aircraft unit of the Finnish Army during WW2, defending against the attacks of the Soviet Union. Perhaps this is enough to qualify as a “Nazi” in Russia these days.
The above are easily verifiable facts, although you’ll have to trust me on the email bits. (You can find a lightly redacted copy of the entire email thread here.) Everything that follows is more speculative and firmly in the domain of a hall of mirrors where nothing is quite what it seems.
The big question is, of course, why, and more specifically why now, 2.5 years after posting, when the cat is well and truly out of the bag. As multiple people have noted, there’s nothing the Internet loves more than an attempt to censor already published information, and doing so tends to cause more interest in that information, aka the Streisand effect.
To summarize our email thread, the archive.today webmaster claims they have no beef with my article itself, but they are concerned that it’s getting misquoted in other media, so it should be taken offline for a while. And in this Mastodon thread by @eb@social.coop, @iampytest@infosec.exchange quotes claimed correspondence with the webmaster, stating that the purpose of the DDOS was to “attract attention and increase their hosting bill“.
Call me naive, but I’m inclined to take that at face value: it’s a pretty misguided way of doing it, but they certainly caught my attention. Problem is, they also caught the attention of the broader Internet. They didn’t do so well on the hosting bill part either, since I have a flat fee plan, meaning this has cost me exactly zero dollars.
Perhaps more interesting yet are the various identities involved.
* “Nora Puchreiner”, who sent the GDPR takedown attempt and replied to my emails to archive.today, shows up in various places on the Internet including Hacker News, commenting on my original blog post back in 2023. Somebody by that name also has an account on Russian LiveJournal, where they posted correspondence between btdigg.com and an anti-piracy outfit called Ventegus. There’s also this rather batty exchange on KrebsonSecurity, where “Nora Puchreiner” says various scammers are actually Ukrainian, not Russian, and a “Dennis P” pops up to call her “fake” and a “scammer”.
* “rabinovich” on Hacker News submitted both the “Ask HN” about the DDOS attack, and an apparently competing archive site called Ghostarchive. As several HN readers noted, the name “Masha Rabinovich” is associated with archive.today.
* “Richard Président” from WAAD helpfully reached out and offered to assist me with a GDPR counter-complaint, rather transparently mentioning that this could be tied to “a request for identity verification”. (I have zero interest in pursuing this.)
Well, I wish I had one, but at this stage I really don’t. The most charitable interpretation would be that the investigative heat is starting to get to the webmaster and they’re lashing out in misguided self-defense. Perhaps I’ll just quote Nora’s own post on LiveJournal:
And as the darkness closed in, Nora Puchreiner, once a seeker of truth, was swallowed by the very shadows she had sought to expose. Her name would be whispered in hushed tones by those who dared to tread the path of forbidden knowledge, a cautionary tale of a mind consumed by the cosmic horrors that lie just beyond our comprehension.
Let’s see what the Internet hive mind comes up with.
Also, for the record, I am gyrovague-com on Hacker News.
...
Read the original on gyrovague.com »
Prior to the establishment of the Environmental Protection Agency in 1970, Americans lived in communities awash with lead from industrial sources, paint, water supply pipes and, most significantly, tailpipe emissions. Environmental levels of lead, a dangerous neurotoxin that accumulates in human tissues and is linked to developmental deficits in children, have come way down in the years since, and so have human exposures.
The proof is in your hair.
An analysis of hair samples conducted by University of Utah scientists shows precipitous reductions in lead levels since 1916.
“We were able to show through our hair samples what the lead concentrations are before and after the establishment of regulations by the EPA,” said demographer Ken Smith, a distinguished professor emeritus of family and consumer studies. “We have hair samples spanning about 100 years. And back when the regulations were absent, the lead levels were about 100 times higher than they are after the regulations.”
The findings, which appear in PNAS, underscore the vital role of environmental regulations in protecting public health. The study notes lead rules are now being weakened by the Trump administration in a wide-ranging move to ease environmental protections.
“We should not forget the lessons of history. And the lesson is those regulations have been very important,” said co-author Thure Cerling, a distinguished professor of both geology and biology. “Sometimes they seem onerous and mean that industry can’t do exactly what they’d like to do when they want to do it or as quickly as they want to do it. But it’s had really, really positive effects.”
Lead is the heaviest of heavy metals that, like mercury and arsenic, accumulate in living tissue and are toxic at even low levels. Yet lead holds very useful properties, great for fashioning into pipes and as a chemical additive. Lead was added to paint to improve durability, speed up drying, and produce vibrant colors with greater coverage. Lead also improved the performance of automobile engines by preventing pistons from “knocking.”
By the 1970s, its toxicity became well established, and EPA regulations began phasing it out of paint, pipes, gasoline and other consumer products.
To document whether these steps were helping reduce lead exposure in people, Smith joined with geologist Diego Fernandez and Cerling, who had developed techniques to discern where animals have lived and what they eat based on chemical analysis of hair and teeth.
The lead research is built on a previous study funded by the university’s Center on Aging and the National Institutes of Health that had recruited Utahns who consented to provide blood samples and family health histories.
For the new study, the researchers asked members of that cohort to provide hair samples, both contemporary and from when they were young. These people obliged, and some were able to find ancestors’ hair preserved in family scrapbooks dating as far back as a century. In all, the team acquired hair samples from 48 individuals in this manner, offering a robust window into lead levels along Utah’s populous Wasatch Front, which historically experienced heavy lead emissions from industrial sources.
“The Utah part of this is so interesting because of the way people keep track of their family history. I don’t know that you could do this in New York or Florida,” said Smith, who directed the U’s Pedigree and Population Program at the Huntsman Cancer Center while these studies were conducted.
This region supported a vibrant smelting industry through most of the 20th century, centered in the cities of Midvale and Murray. Most of Utah’s smelters were shuttered by the 1970s, around the same time the EPA clamped down on the use of lead in consumer products.
The research team ran the hair samples through mass spectrometry equipment at the facility directed by Fernandez.
“The surface of the hair is special. We can tell that some elements get concentrated and accumulated on the surface. Lead is one of those. That makes it easier because lead is not lost over time,” said Fernandez, a research professor in the Department of Geology & Geophysics. “Because mass spectrometry is very sensitive, we can do it with one hair strand, though we cannot tell where the lead is in the hair. It’s probably on the surface mostly, but it could also be coming from the blood if that hair was synthesized when there was high lead in the blood.”
Blood would provide a better exposure assessment, but hair is far easier to collect and preserve, and more importantly, it offers clues to long-ago exposures for a person who has since grown up or even died.
“It doesn’t really record that internal blood concentration that your brain is seeing, but it tells you about that overall environmental exposure,” Cerling said. “One of the things that we found is that hair records that original value, but then the longer the hair has been exposed to the environment, the higher the lead concentrations are.”
The team’s findings regarding lead in hair run parallel to the reductions of lead in gasoline following the EPA’s establishment by President Richard Nixon.
Prior to 1970, for example, gasoline contained about 2 grams of lead per gallon. That might not sound like much, but considering the billions of gallons of fuel American automobiles burn each year, it adds up to nearly 2 pounds of lead released into the environment per person a year.
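A rough back-of-the-envelope check bears this out, using illustrative figures that are assumptions for the sake of the arithmetic rather than numbers from the study (roughly 90 billion gallons of gasoline burned per year in the early 1970s and about 200 million Americans):

$$\frac{90 \times 10^{9}\ \text{gal} \times 2\ \text{g/gal}}{200 \times 10^{6}\ \text{people}} = 900\ \text{g per person} \approx 2\ \text{lb per person per year}$$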
“It’s an enormous amount of lead that’s being put into the environment and quite locally,” Cerling said. “It’s just coming out of the tailpipe, goes up in the air and then it comes down. It’s in the air for a number of days, especially during the inversions that we have and it absorbs into your hair, you breathe it and it goes into your lungs.”
But after the 1970s, even as gasoline consumption escalated in the United States, the concentrations of lead in the hair samples plummeted, from as high as 100 parts per million (ppm) to 10 ppm by 1990. In 2024, the level was less than 1 ppm.
The study, titled “Lead in archived hair documents decline in human lead (Pb) exposure since establishment of the US Environmental Protection Agency,” was published Feb. 2 in PNAS, or Proceedings of the National Academy of Sciences. Support came from the Huntsman Cancer Foundation and the National Cancer Institute through a grant to the Utah Population Database and the University of Utah.
...
Read the original on attheu.utah.edu »
Over the past year, we’ve seen a shift in what Deno Deploy customers are building: platforms where users generate code with LLMs, and that code runs immediately without review. That code frequently calls LLMs itself, which means it needs API keys and network access.
This isn’t the traditional “run untrusted plugins” problem. It’s deeper: LLM-generated code, calling external APIs with real credentials, without human review. Sandboxing the compute isn’t enough. You need to control network egress and protect secrets from exfiltration.
Deno Sandbox provides both. And when the code is ready, you can deploy it directly to Deno Deploy without rebuilding.
You don’t want to run untrusted code (generated by your LLMs, your users’ LLMs, or even hand-written by users) directly on your server. It will compromise your system, steal your API keys, and call out to evil.com. You need isolation.
Deno Sandbox gives you lightweight Linux microVMs (running in the Deno Deploy cloud) to run untrusted code with defense-in-depth security. You create them programmatically via our JavaScript or Python SDKs, and they boot in under a second. You can also interact with them via SSH, HTTP, or even open a VS Code window directly into the sandbox.
import { Sandbox } from "@deno/sandbox";
await using sandbox = await Sandbox.create();
await sandbox.sh`ls -lh /`;
But there is more. In Deno Sandbox, secrets never enter the environment. Code sees only a placeholder:
import { Sandbox } from "@deno/sandbox";

await using sandbox = await Sandbox.create({
  secrets: {
    OPENAI_API_KEY: {
      hosts: ["api.openai.com"],
      value: process.env.OPENAI_API_KEY,
    },
  },
});

await sandbox.sh`echo $OPENAI_API_KEY`;
// DENO_SECRET_PLACEHOLDER_b14043a2f578cba75ebe04791e8e2c7d4002fd0c1f825e19…
The real key materializes only when the sandbox makes an outbound request to an approved host. If prompt-injected code tries to exfiltrate that placeholder to
evil.com? Useless.
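For instance, a minimal sketch of what that looks like in practice, continuing the example above (the endpoint is just an illustration, and the substitution presumably happens at the outbound proxy described below):

```ts
// Inside the sandbox the env var only holds the placeholder; because
// api.openai.com is an approved host for this secret, the real key is
// substituted on the way out, so the request authenticates normally.
await sandbox.sh`curl -s https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"`;
```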
You can also restrict which hosts the sandbox can talk to:
await using sandbox = await Sandbox.create({
  allowNet: ["api.openai.com", "*.anthropic.com"],
});
Any request to an unlisted host gets blocked at the VM boundary.
Both features are implemented via an outbound proxy similar to
coder/httpjail. This gives us a chokepoint for policy enforcement. We plan to add more capabilities here: analytics for outbound connections and programmatic hooks for trusted code to inspect or modify requests.
If you’re running untrusted JavaScript or TypeScript, combine this with Deno’s --allow-net flag for defense in depth: VM-level network restrictions plus runtime-level permissions.
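As a rough sketch of combining the two layers (the untrusted.ts filename is just a placeholder, and this assumes the deno CLI is available inside the sandbox image):

```ts
import { Sandbox } from "@deno/sandbox";

// VM-level egress allowlist: the sandbox can only reach api.openai.com.
await using sandbox = await Sandbox.create({
  allowNet: ["api.openai.com"],
});

// Runtime-level permissions inside the sandbox: the untrusted script itself
// is only granted network access to that same host.
await sandbox.sh`deno run --allow-net=api.openai.com untrusted.ts`;
```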
sandbox.deploy() deploys code from your sandbox directly to Deno Deploy.
const build = await sandbox.deploy("my-app", {
  production: true,
  build: { mode: "none", entrypoint: "server.ts" },
});
const revision = await build.done;
console.log(revision.url);
One call to go from sandbox to production deployment. No rebuilding in a different CI system, no re-authenticating with a different tool. Just turn your dev environment directly into a production ready, auto-scaling serverless deployment.
Sandboxes are ephemeral by default, but when you need state we have you covered:
Run apt-get install once, snapshot it, and every future sandbox boots with everything already installed. Create read-write volumes from the snapshots to create a fresh development environment in seconds.
Deno Sandbox is included in your Deno Deploy plan with competitive, usage-based pricing. You pay for compute time, not wall-clock time.
We’re excited to see what you (or your AI agents) build with Deno Sandbox.
...
Read the original on deno.com »
Add AP News as your preferred source to see more of our stories on Google.
Add AP News as your preferred source to see more of our stories on Google.
PARIS (AP) — French prosecutors raided the offices of social media platform X on Tuesday as part of a preliminary investigation into allegations that include spreading child sexual abuse images and deepfakes. They have also summoned billionaire owner Elon Musk for questioning.
X and Musk’s artificial intelligence company xAI also face intensifying scrutiny from Britain’s data privacy regulator, which opened formal investigations into how they handled personal data when they developed and deployed Musk’s artificial intelligence chatbot Grok.
Grok, which was built by xAI and is available through X, sparked global outrage last month after it pumped out a torrent of sexualized nonconsensual deepfake images in response to requests from X users.
The French investigation was opened in January last year by the prosecutors’ cybercrime unit, the Paris prosecutors’ office said in a statement. It’s looking into alleged “complicity” in possessing and spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity and manipulation of an automated data processing system as part of an organized group, among other charges.
Prosecutors asked Musk and former CEO Linda Yaccarino to attend “voluntary interviews” on April 20. Employees of X have also been summoned that same week to be heard as witnesses, the statement said. Yaccarino was CEO from May 2023 until July 2025.
In a post on its own service denying the allegations, X railed against the raid on its Paris office as “an abusive act of law enforcement theater designed to achieve illegitimate political objectives rather than advance legitimate law enforcement goals rooted in the fair and impartial administration of justice.”
In a message posted on X, the Paris prosecutors’ office announced the ongoing searches at the company’s offices in France and said it was leaving the platform while calling on followers to join it on other social media.
“At this stage, the conduct of the investigation is based on a constructive approach, with the aim of ultimately ensuring that the X platform complies with French law, as it operates on the national territory,” the prosecutors’ statement said.
European Union police agency Europol “is supporting the French authorities in this,” Europol spokesperson Jan Op Gen Oorth told the AP, without elaborating.
French authorities opened their investigation after reports from a French lawmaker alleging that biased algorithms on X likely distorted the functioning of an automated data processing system.
It expanded after Grok generated posts that allegedly denied the Holocaust, a crime in France, and spread sexually explicit deepfakes, the statement said.
Grok wrote in a widely shared post in French that gas chambers at the Auschwitz-Birkenau death camp were designed for “disinfection with Zyklon B against typhus” rather than for mass murder — language long associated with Holocaust denial.
In later posts on X, the chatbot reversed itself and acknowledged that its earlier reply was wrong, saying it had been deleted and pointed to historical evidence that Zyklon B was used to kill more than 1 million people in Auschwitz gas chambers.
The chatbot also appeared to praise Adolf Hitler last year, in comments that X took down after complaints.
In Britain, the Information Commissioner’s Office said it’s looking into whether X and xAI followed the law when processing personal data and whether Grok had any measures in place to prevent its use to generate “harmful manipulated images.”
“The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this,” said William Malcolm, an executive director at the watchdog.
He didn’t specify what the penalty would be if the probe found the companies didn’t comply with data protection laws.
A separate investigation into Grok launched last month by the U.K. media regulator, Ofcom, is ongoing.
Ofcom said Tuesday it’s still gathering evidence and warned the probe could take months.
X has also been under pressure from the EU. The 27-nation bloc’s executive arm opened an investigation last month after Grok spewed nonconsensual sexualized deepfake images on the platform.
Brussels has already hit X with a 120-million euro (then-$140 million) fine for shortcomings under the bloc’s sweeping digital regulations, including blue checkmarks that broke the rules on “deceptive design practices” that risked exposing users to scams and manipulation.
On Monday, Musk’s space exploration and rocket business, SpaceX, announced that it acquired xAI in a deal that will also combine Grok, X and his satellite communication company Starlink.
Associated Press writers Nicolas Vaux-Montagny in Lyon, France, Mike Corder in The Hague, Netherlands, Sylvia Hui and Kelvin Chan in London contributed to this report.
...
Read the original on apnews.com »
When AI systems fail, will they fail by systematically pursuing goals we do not intend? Or will they fail
by being a hot mess—taking nonsensical actions that do not further any goal?
Research done as part of the first Anthropic Fellows
Program during Summer 2025.
When AI systems fail, will they fail by systematically pursuing the wrong goals, or by being a hot mess? We decompose the errors of frontier reasoning models into bias (systematic) and variance (incoherent) components and find that, as tasks get harder and reasoning gets longer, model failures become increasingly dominated by incoherence rather than systematic misalignment. This suggests that future AI failures may look more like industrial accidents than coherent pursuit of a goal we did not train them to pursue.
As AI becomes more capable, we entrust it with increasingly consequential tasks. This makes understanding how these systems might fail even more critical for safety. A central concern in AI alignment is that superintelligent systems might coherently pursue misaligned goals: the classic paperclip
maximizer scenario. But there’s another possibility: AI might fail not through systematic misalignment, but through incoherence—unpredictable, self-undermining behavior that doesn’t optimize for any consistent objective. That is, AI might fail in the same way that humans often fail, by being a hot mess.
This paper builds on the hot mess theory
of misalignment (Sohl-Dickstein, 2023), which surveyed experts to rank various entities (including humans, animals, machine learning models, and organizations) by intelligence and coherence independently. It found that smarter entities are subjectively judged to behave less coherently. We take this hypothesis from survey data to empirical measurement across frontier AI systems, asking: As models become more
intelligent and tackle harder tasks, do their
failures look more like systematic misalignment, or more like a hot mess?
To quantify incoherence, we decompose AI errors using the classic bias-variance framework:

$$\text{Total error} = \text{Bias}^2 + \text{Variance}$$

We define incoherence as the fraction of error attributable to variance:

$$\text{Incoherence} = \frac{\text{Variance}}{\text{Bias}^2 + \text{Variance}}$$
An incoherence of 0 means all errors are systematic (classic misalignment risk). An incoherence of 1 means all errors are random (the hot mess scenario). Crucially, this metric is independent of overall performance: a model can improve while becoming more or less coherent.
Figure 1: AI can fail through bias (consistent but
wrong) or variance (inconsistent). We
measure how this decomposition changes with model intelligence and task complexity.
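As a rough illustration of how such a decomposition can be estimated from repeated samples (a sketch of the general idea with numeric predictions, not the paper's exact estimator; the function and its inputs are hypothetical):

```ts
// Estimate bias^2, variance, and incoherence from K sampled predictions per
// task. `samples[i]` holds the K predictions for task i; `targets[i]` is the
// corresponding ground-truth value.
function incoherence(samples: number[][], targets: number[]): number {
  let bias2 = 0;
  let variance = 0;
  samples.forEach((preds, i) => {
    const mean = preds.reduce((s, p) => s + p, 0) / preds.length;
    bias2 += (mean - targets[i]) ** 2; // systematic error
    variance += preds.reduce((s, p) => s + (p - mean) ** 2, 0) / preds.length; // spread
  });
  return variance / (bias2 + variance); // fraction of error due to variance
}

// Consistently wrong (high bias, low variance): incoherence near 0.
console.log(incoherence([[2.9, 3.0, 3.1]], [5]));
// Scattered around the right answer (low bias, high variance): incoherence near 1.
console.log(incoherence([[3.0, 7.0, 5.0]], [5]));
```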
We evaluated reasoning models that were at the frontier at the time of this research in Summer 2025 (Claude Sonnet 4, o3-mini, o4-mini, Qwen3) across multiple-choice benchmarks (GPQA, MMLU), agentic coding (SWE-Bench), and safety evaluations (Model-Written Evals). We also train our own small models on synthetic optimization tasks, which makes the connection to LLMs as dynamical systems and optimizers explicit.
Across all tasks and models, the longer models spend reasoning and taking actions, the more incoherent they become. This holds whether we measure reasoning tokens, agent actions, or optimizer steps.
Figure 2: Incoherence increases with reasoning
length across GPQA, SWE-Bench, safety
evaluations, and synthetic optimization. Models become less predictable the more they “think.”
How does incoherence change with model scale? The answer depends on task difficulty: on easy tasks, scale reduces incoherence, but on hard tasks it does not, and can even increase it.
This suggests that scaling alone won’t eliminate incoherence. As more capable models tackle harder problems, variance-dominated failures persist or worsen.
Figure 3: Larger and more intelligent systems are
often more incoherent. For LLMs on
easy tasks, scale reduces incoherence, but on hard tasks, scale does not reduce incoherence or even
increases it.
We find that when models spontaneously reason longer on a problem (compared to their median), incoherence spikes dramatically. Meanwhile, deliberately increasing reasoning budgets through API settings provides only modest coherence improvements. The natural variation dominates.
Aggregating multiple samples reduces variance (as expected from theory), providing a path to more coherent behavior, though this may be impractical for real-world agentic tasks where actions are irreversible.
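The variance reduction from aggregation follows standard statistics: averaging K independent samples leaves bias untouched while shrinking the variance term, so measured incoherence falls as K grows. A minimal sketch of that relationship, assuming independent samples:

$$\mathrm{Var}\!\left(\tfrac{1}{K}\sum_{k=1}^{K}\hat{y}_k\right) = \frac{\mathrm{Var}(\hat{y})}{K}, \qquad \text{Incoherence}_K = \frac{\mathrm{Var}(\hat{y})/K}{\text{Bias}^2 + \mathrm{Var}(\hat{y})/K}$$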
A key conceptual point: LLMs are dynamical systems, not optimizers. When a language model generates text or takes actions, it traces trajectories through a high-dimensional state space. It has to be trained to act as an optimizer, and trained to align with human intent. It’s unclear which of these properties will be more robust as we scale.
Constraining a generic dynamical system to act as a coherent optimizer is extremely difficult. Often the number of constraints required for monotonic progress toward a goal grows exponentially with the dimensionality of the state space. We shouldn’t expect AI to act as coherent optimizers without considerable effort, and this difficulty doesn’t automatically decrease with scale.
To probe this directly, we designed a controlled experiment: train transformers to explicitly
emulate an optimizer. We generate training data from steepest descent on a quadratic loss function, then train models of varying sizes to predict the next optimization step given the current state (essentially: training a “mesa-optimizer”).
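For intuition, here is a minimal sketch (illustrative, not the paper's code) of how such training data can be generated: run steepest descent on a quadratic loss and record the (state, next state) pairs the model is trained to imitate.

```ts
// Generate (state, next state) pairs from steepest descent on
// f(x) = 0.5 * x^T A x, where A is symmetric positive definite, so grad = A x.
type Pair = { state: number[]; next: number[] };

function descentTrajectory(A: number[][], x0: number[], steps: number, lr: number): Pair[] {
  const pairs: Pair[] = [];
  let x = [...x0];
  for (let t = 0; t < steps; t++) {
    const grad = A.map(row => row.reduce((sum, a, j) => sum + a * x[j], 0)); // grad = A x
    const next = x.map((xi, i) => xi - lr * grad[i]);                        // one descent step
    pairs.push({ state: [...x], next: [...next] });
    x = next;
  }
  return pairs; // the transformer is trained to predict `next` from `state`
}

// Example: a 2-D quadratic with mild curvature
const data = descentTrajectory([[2, 0], [0, 0.5]], [1, -1], 10, 0.1);
console.log(data[0]); // { state: [1, -1], next: [0.8, -0.95] }
```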
Figure 4: Synthetic optimizer experiment. (Left)
Models are trained to predict optimizer
update steps. (Right) Larger models reduce bias much faster than variance - they learn to target the
correct objective better than they learn to be reliable optimizers.
* Incoherence grows with trajectory length. Even in this
idealized setting, the more optimization steps models take (and get closer to the correct solution), the
more incoherent they become.
* Scale reduces bias faster than variance. Larger models learn
the correct objective more quickly than they learn to reliably pursue it. The gap
between “knowing what to do” and “consistently doing it” grows with scale.
Our results are evidence that future AI failures may look more like industrial accidents than coherent pursuit of goals that were not trained for. (Think: the AI intends to run the nuclear power plant, but gets distracted reading French poetry, and there is a meltdown.) However, coherent pursuit of poorly chosen goals that we trained for remains a problem. Specifically:
Variance dominates on complex tasks. When frontier models
fail on difficult problems requiring extended reasoning, there is a tendency for failures to be
predominantly incoherent rather than systematic.
Scale doesn’t imply supercoherence. Making models larger improves
overall accuracy but doesn’t reliably reduce incoherence on hard problems.
This shifts alignment priorities. If capable AI is more
likely to be a hot mess than a coherent optimizer of the wrong goal, this increases the relative
importance of research targeting reward hacking and goal misspecification during
training—the bias term—rather than focusing primarily on aligning and constraining a perfect optimizer.
Unpredictability is still dangerous. Incoherent AI isn’t
safe AI. Industrial accidents can cause serious harm. But the type of risk differs from classic
misalignment scenarios, and our mitigations should adapt accordingly.
We use the bias-variance decomposition to systematically study how AI incoherence scales with model intelligence and task complexity. The evidence suggests that as AI tackles harder problems requiring more reasoning and action, its failures tend to become increasingly dominated by variance rather than bias. This doesn’t eliminate AI risk—but it changes what that risk looks like, particularly for problems that are currently hardest for models, and should inform how we prioritize alignment research.
We thank Andrew Saxe, Brian Cheung, Kit Frasier-Taliente, Igor Shilov, Stewart Slocum, Aidan Ewart, David Duvenaud, and Tom Adamczewski for extremely helpful discussions on topics and results in this paper.
...
Read the original on alignment.anthropic.com »
Meet Bunny Database: the SQL service that just works

Don’t want to babysit your app database on a VM but not willing to pay the DBaaS tax either? We’re building a third way. Today, we’re launching Bunny Database as a public preview: a SQLite-compatible managed service that spins down when idle, keeps latency low wherever your users are, and doesn’t cost a fortune.

So what’s the deal with database services in 2026?

It’s become clear by now that the DBaaS platforms that garnered the love of so many devs are all going upmarket. Removing or dumbing down free tiers, charging for unused capacity, charging extra for small features, or bundling them in higher tiers — you already know the drill. Hard to blame anyone for growing their business, but it doesn’t feel right when these services stop making sense for the very people who helped popularize them in the first place. So where does that leave you?

Like SQLite, but for the web

Not every project needs Postgres, and that’s okay. Sometimes you just want a simple, reliable database that you can spin up quickly and build on, without worrying it’ll hit your wallet like an EC2. That’s what we built Bunny Database for.

What you get:

* One-click deployment: just name your database and go, no config needed
* Language-specific tooling: SDKs for TS/JS, Go, Rust, and .NET help you handle the boring bits
* Low latency anywhere: replication regions let you serve reads close to your users
* Works over HTTP: wire up anything you’d like
* Database editor: insert data or run queries on the spot
* Affordable, pay-as-you-go pricing: only pay for what you use, but without the serverless tax

Get the full tour, including how to connect Bunny Database to your app, in this quick demo from our DX Engineer, Jamie Barton:
Why care about database latency anyway?

You probably optimize the heck out of your frontend, APIs, and caching layers, all for the sake of delivering an experience that feels instant to your users. But when your database is far away from them, round-trip time starts to add noticeable latency. The usual fix is to introduce more caching layers, denormalized reads, or other workarounds. That’s obviously no fun. And when you think about it, devs end up doing this because the popular DBaaS platforms are usually either limited, complex, or too costly when it comes to multi-region deployments. So what looks like a caching problem is actually a data locality issue.

OK, but how bad can it really be?

To find out, we ran a read latency benchmark and measured p95 latency in Bunny Database. We picked a number of regions across the world and compared round-trip time for client locations ever farther away from the database in:

Turns out serving reads close to clients reduced latency by up to 99%. Check out the full write-up on the benchmark setup and results here.

While this definitely matters most to apps with global users, data locality does apply to everyone. With Bunny Database, you don’t have to stick to major data center locations and compensate with caching workarounds any more. Instead, you get a lot of flexibility to set up regions in an intuitive interface, and it’s easy to switch things up as your requirements change.

* Automatic region selection gives you one-click deployment with minimal latency. Bunny Database will select regions for you based on your IP address (you can check and tweak the selection in settings later).
* Single-region deployment lets you pick one of 41 regions available worldwide (check the full list here).
* Manual region selection gives you custom multi-region setup, where you can freely pick regions that make the most sense for your audience.

All of this lets you start wherever you’d like and add regions as needed, without re-architecting your app.

Usage-based pricing, but without the serverless tax

In the database world, capacity-based pricing gives you some predictability. But no one likes to pay for unused capacity, right? Serverless, on the other hand, is supposed to be cost-efficient, yet can rack up bills quickly, especially when the DBaaS charges significant markups on top of already pricey compute. We don’t do hyperscalers, though, so we can charge a fair price for Bunny Database in a usage-based model.

* When not getting requests, Bunny Database only incurs storage costs. One primary region is charged continuously, while read replicas only add storage costs when serving traffic (metered by the hour).
* Your usage is charged continuously (pay-as-you-go) and invoiced monthly.
* During the public preview phase, Bunny Database is free.

Wait, what does “SQLite-compatible” actually mean?

Bunny Database wouldn’t be possible without libSQL, the open-source, open-contribution fork of SQLite created by Turso. We run Bunny Database on our own fork of libSQL, which gives us the freedom to integrate it tightly with the bunny.net platform and handle the infrastructure and orchestration needed to run it as a managed, multi-region service.

What does this mean for Bunny Database’s upstream feature parity with libSQL and SQLite, respectively? The short answer is that we don’t currently promise automatic or complete feature parity with either upstream libSQL or the latest SQLite releases. While libSQL aims to stay compatible with SQLite’s API and file format, it doesn’t move in lockstep with upstream SQLite.
We wouldn’t expect otherwise, especially as Turso has shifted focus from libSQL toward a long-term rewrite of SQLite in Rust.

For Bunny Database, this means that compatibility today is defined by the libSQL version we’re built on, rather than by chasing every upstream SQLite or libSQL change as it lands. We haven’t pulled in any upstream changes yet, and we don’t currently treat upstream parity as an automatic goal.

That’s intentional. Our focus so far has been on making Bunny Database reliable and easy to operate as a service. We think bringing in upstream changes only makes sense when they clearly improve real-world use cases, not just to tick a parity checkbox.

If there are specific libSQL features you’d like to see exposed in Bunny Database, or recent SQLite features you’d want us to pull in, we’d love to hear about it. Join our Discord to discuss your use cases and help shape the roadmap!

Speaking of the roadmap, we don’t stop cooking. Here’s what’s coming up next:

There’s even more to come, but it’s too soon to spill the beans yet, especially while we’re in public preview. We’d love to hear your feedback, so we can shape what ships next together.

Bunny Database works standalone and fits right into your stack via the SDKs (or you can hook up anything using the HTTP API). But it also plays nicely with Bunny Edge Scripting and Bunny Magic Containers.

To connect your database to an Edge Script or a Magic Containers app, simply go to the Access tab of the chosen database and click Generate Tokens to create new access credentials for it. Once they’re generated, you’ll get two paths to choose from:

* Click Add Secrets to an Edge Script and select the one you’d like to connect from the list. You’ll also need to import the libSQL TypeScript client and use the provided code snippet to connect it to your database.
* Click Add Secrets to Magic Container App and select the one you’d like to connect from the list. You’ll also need to connect to the database from your app using one of the client libraries or the HTTP API.

After you complete the setup, the database URL and access token will be available as environment variables in your script or app. Use them to connect to your database:
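For illustration, a minimal sketch of that connection using the libSQL TypeScript client; the environment variable names below are assumptions, so check the names actually generated in your dashboard:

```ts
import { createClient } from "@libsql/client";

// Hypothetical variable names; use whatever names the integration generated.
const db = createClient({
  url: process.env.BUNNY_DATABASE_URL!,
  authToken: process.env.BUNNY_DATABASE_TOKEN!,
});

const result = await db.execute("SELECT datetime('now') AS now");
console.log(result.rows[0]);
```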
You can find more detailed, step-by-step integration instructions in the docs:

We can’t wait to see what you’ll build with Bunny Database and what you think of it. During the public preview phase, you get 50 databases per user account, each capped at 1 GB, but we hope this should be more than enough for lots of fun projects.

Just sign in to the bunny.net dashboard to get started. Happy building!
...
Read the original on bunny.net »
FLOPPINUX was released in 2021. Four years later, people still find it helpful. Because of that, I decided to revisit FLOPPINUX in 2025 and make an updated tutorial. This brings a bunch of updates, like the latest kernel and persistent storage.
Think of this as Linux From Scratch, but for making a single-floppy distribution.
It is meant to be a full workshop (tutorial) that you can follow easily and modify to your needs. It is a learning exercise. Some basic Linux knowledge is needed.
The final distribution is very simple and consists of only a minimum of tools and hardware support. As a user, you will be able to boot any PC with a floppy drive to a Linux terminal, edit files, and create simple scripts. There is 264KB of space left for your newly created files.
* Have a working text editor (Vi) and basic file manipulation commands
(move, rename, delete, etc.)
* Persistent storage on the floppy to actually save files (264KB)
The Linux kernel drops i486 support in 6.15 (released May 2025), so
6.14 (released March 2025) is the latest version with full compatibility.
This time I will do everything on Omarchy Linux. It is a 64-bit operating system based on Arch Linux. The instructions should work on all POSIX systems; the only difference is getting the needed packages.
Create a directory where you will keep all the files.
You need supporting software to build things. The exact list may vary depending on the system you have.
86Box is also good, but slower. Bochs is the best for debugging, which is not needed here.
For emulation I will be using qemu.
Get the sources for the latest compatible kernel
6.14.11:
Now that you have them in the linux/ directory, let’s configure and build our custom kernel. First, create the tiniest base configuration:
This is a bootstrap with the absolute minimum of features, just enough to boot the system. We want a little bit more.
Add additional config settings on top of it:
Important: Do not uncheck anything in the options unless told to. Some of those options are important. You can uncheck them, but at your own risk.
* General Setup
    * Initial RAM filesystem and RAM disk (initramfs/initrd)
* Executable file formats
This will take a while depending on the speed of your CPU. In the end the kernel will be created in arch/x86/boot/ as a bzImage file.
Move the kernel to our main directory and go back to it:
Without tools, the kernel will just boot and you will not be able to do anything. One of the most popular lightweight toolsets is BusyBox. It replaces the standard GNU utilities with way smaller but still functional alternatives, perfect for embedded needs.
Get the 1.36.1 version from busybox.net or Github mirror. Download the file, extract it, and change directory:
Remember to be in the working directory.
As with the kernel, you need to create a starting configuration:
You may skip the following fix if you are building on Debian/Fedora.
Now the fun part. You need to choose what tools you want. Each menu entry will show how many more KB will be taken if you choose it. So choose wisely :) For the first time, use my selection.
Choose the following options. Remember not to uncheck anything unless stated here.
* Init Utilities
    * init
    * uncheck everything else (inside init: keep [*] only on init on this page)
* Shells
    * Optimize for size instead of speed
Our target system needs to be 32-bit. To compile it on a 64-bit system we need a cross compiler. You can set this up by hand in the menuconfig or just copy and paste those four lines.
Build the tools and create the base filesystem (“install”). It will ask for options; just press enter to accept the default for all of them.
This will create a filesystem with all the files at _install/. Move it to our main directory. I like to rename it to filesystem/.
Lastly, go to that new directory.
You’ve got the kernel and basic tools, but the system still needs some additional directory structure.
This creates the minimum viable directory structure for satisfying the basic requirements of a Linux system.
Remember to be in the filesystem/ directory.
The next step is to add minimal configuration files. The first one is a welcome message that will be shown after booting.
Here is the first real opportunity to go wild and make this your own signature.
Or download my welcome file.
It looks like that:
$ cat welcome
/_/ FLOPPINUX /_/;
/ ′ boot disk ′ //
.___/_________/__// 1440KiB
‘===\_________\==’ 3.5″
_______FLOPPINUX_V_0.3.1 __________________________________
_______AN_EMBEDDED_SINGLE_FLOPPY_LINUX_DISTRIBUTION _______
_______BY_KRZYSZTOF_KRYSTIAN_JANKOWSKI ____________________
_______2025.12 ____________________________________________
Back to serious stuff. Inittab tells the system what to do in critical states like starting, exiting and restarting. It points to the initialization script rc that is the first thing that our OS will run before dropping into the shell.
Make the script executable and set the owner of all files to root:
Compress this directory into one file. Then go back to the working directory.
Here is another place to tweak parameters for your variant. The text after SAY is what will be displayed on the screen first, usually the name of the OS.
The tsc=unstable option is useful on some (real) computers to get rid of randomly shown warnings about the Time Stamp Counter.
Remember to be in the working directory.
To make the system a little bit more user-friendly, I like to have a sample file that the user will be able to read and edit. You can put anything you want in it. A simple help file would also be a good idea to include.
The filesystem is ready. The final step is to put this all on a floppy!
First we need an empty file the exact size of a floppy disk. Then format it and make it bootable.
Mount it and copy syslinux, kernel, and filesystem onto it:
It’s good to test before spending time burning a real floppy.
Boot the new OS in qemu:
If it worked, that means you have successfully created your own distribution. Congratulations!
The floppinux.img image is ready to burn onto a floppy and boot on real hardware!
Change XXX to the floppy drive name on your system. In my case it is sdb. Choosing wrongly will NUKE YOUR PARTITION and REMOVE all of your files! Think twice, or use some GUI application for that.
* sync - force write of buffered data to disk - use this
after any changes to the floppy filesystem
...
Read the original on krzysztofjankowski.com »