10 interesting stories served every morning and every evening.
To help train AI models, Meta and other tech companies have downloaded and shared pirated books via BitTorrent from Anna’s Archive and other shadow libraries. In an ongoing lawsuit, Meta now argues that uploading pirated books to strangers via BitTorrent qualifies as fair use. The company also stresses that the data helped establish U.S. global leadership in AI.
In the race to build the most capable LLMs, several tech companies sourced copyrighted content for use as training data without obtaining permission from content owners.
Meta, the parent company of Facebook and Instagram, was one of the companies to get sued. In 2023, well-known book authors, including Richard Kadrey, Sarah Silverman, and Christopher Golden, filed a class-action lawsuit against the company.
Last summer, Meta scored a key victory in this case, as the court concluded that using pirated books to train its Llama LLM qualified as fair use, based on the arguments presented in this case. This was a bittersweet victory, however, as Meta remained on the hook for downloading and sharing the books via BitTorrent.
By downloading books from shadow libraries such as Anna’s Archive, Meta relied on BitTorrent transfers, which typically upload data to other users in addition to downloading it. According to the authors, this means that Meta was engaged in widespread and direct copyright infringement.
In recent months, the lawsuit continued based on this remaining direct copyright infringement claim. While both parties collected additional evidence through the discovery process, it remained unclear what defense Meta would use. Until now.
Last week, Meta served a supplemental interrogatory response at the California federal court, which marks a new direction in its defense. For the first time, the company argued that uploading pirated books to other BitTorrent users during the torrent download process also qualifies as fair use.
Meta’s reasoning is straightforward. Anyone who uses BitTorrent to transfer files automatically uploads content to other people, as it is inherent to the protocol. In other words, the uploading wasn’t a choice, it was simply how the technology works.
Meta also argued that the BitTorrent sharing was a necessity to get the valuable (but pirated) data. In the case of Anna’s Archive, Meta said, the datasets were only available in bulk through torrent downloads, making BitTorrent the only practical option.
“Meta used BitTorrent because it was a more efficient and reliable means of obtaining the datasets, and in the case of Anna’s Archive, those datasets were only available in bulk through torrent downloads,” Meta’s attorney writes.
“Accordingly, to the extent Plaintiffs can come forth with evidence that their works or portions thereof were theoretically ‘made available’ to others on the BitTorrent network during the torrent download process, this was part-and-parcel of the download of Plaintiffs’ works in furtherance of Meta’s transformative fair use purpose.”
In other words, obtaining the millions of books needed for the fair use training of its LLM required the BitTorrent downloads, and any incidental uploading ultimately served the same fair use purpose.
The authors were not happy with last week’s late Friday submission and the new defense. On Monday morning, their lawyers filed a letter with Judge Vince Chhabria flagging the late-night filing as an improper end-run around the discovery deadline.
They point out that Meta had been aware of the uploading claims since November 2024, but that it never brought up this fair use defense in the past, not even when the court asked about it.
The letter specifically mentions that while Meta has a “continuing duty” to supplement discovery under Rule 26(e), this rule does not create a “loophole” allowing a party to add new defenses to its advantage after a court deadline has passed.
“Meta (for understandable reasons) never once suggested it would assert a fair use defense to the uploading-based claims, including after this Court raised the issue with Meta last November,” the lawyers write.
Meta’s legal team fired back the following day, filing their own letter with Judge Chhabria. This letter explains that the fair use argument for the direct copyright infringement claim is not new at all.
Meta pointed to the parties’ joint December 2025 case management statement, in which it had explicitly flagged the defense, and noted that the authors’ own attorney had addressed it at a court hearing days later.
“In short, Plaintiffs’ assertion that Meta ‘never once suggested it would assert a fair use defense to the uploading-based claims, including after’ the November 2025 hearing, is false,” Meta’s attorney writes in the letter.
Meanwhile, it’s worth noting that Meta’s interrogatory response also cites deposition testimony from the authors themselves, using their own words to bolster its fair use defense.
The company notes that every named author has admitted they are unaware of any Meta model output that replicates content from their books. Sarah Silverman, when asked whether it mattered if Meta’s models never output language from her book, testified that “It doesn’t matter at all.”
Meta argues these admissions undercut any theory of market harm. If the authors themselves cannot point to infringing output or lost sales, the lawsuit is less about protecting their books and more about challenging the training process itself, which the court already ruled was fair use.
These admissions were central to Meta’s fair use defense on the training claims, which Meta won last summer. Whether they carry the same weight in the remaining BitTorrent distribution dispute remains an open question.
In its interrogatory response, Meta added further weight by stressing that its investment in AI has helped establish U.S. global leadership, putting the country ahead of geopolitical competitors. That’s a valuable asset worth treasuring, it indirectly suggested.
As the case moves forward, Judge Chhabria will have to decide whether to allow this “fair use by technical necessity” defense. Needless to say, this will be of vital importance to this and many other AI lawsuits, where the use of shadow libraries is at stake.
For now, the BitTorrent distribution claims remain the last live piece of a lawsuit filed in 2023. Whether Judge Chhabria will allow Meta’s new defense to proceed remains to be seen.
A copy of Meta’s supplemental interrogatory response is available here (pdf). The authors’ letter to Judge Chhabria can be found here (pdf). Meta’s response to that letter is available here (pdf).
...
Read the original on torrentfreak.com »
Bridge the gap between coding intent and action: manipulate syntax structures directly, avoiding mouse or keyboard gymnastics. Amplify your coding efficiency: wield multiple cursors for parallel syntax node operations, revolutionizing bulk edits and refactoring. Selection Modes standardize movements across words, lines, syntax nodes, and more, offering unprecedented flexibility and consistency.
...
Read the original on ki-editor.org »
A US ZIP code is 5 characters. From those 5 characters you can determine the city, the state, and the country. That’s 3 fields. Autofilled. From one input.
But you don’t do that, do you? No. You make me type my street address, then my city, then scroll through a dropdown of 50 states to find Illinois wedged between Idaho and Indiana, then type my ZIP, then — the pièce de résistance — scroll through 200+ countries to find United States, which half the time is filed under “T” because some dipshit thought “The United States of America” was the correct sort key.
It’s 2026. What the fuck are we doing.
I type 90210. You now know I’m in Beverly Hills, California, United States. You didn’t need me to tell you that. You didn’t need a dropdown. You didn’t need me to scroll past Turkmenistan. You had the answer the entire time, in 5 digits, and you just… didn’t use it.
And here’s the bonus: once you know the ZIP, your street address autocomplete is searching a few thousand addresses instead of 160 million. It’s faster. It’s more accurate. I type less. You get cleaner data. Everyone wins.
This is not new technology. Free APIs exist. It’s like 4 lines of code. Look:
const res = await fetch(`https://api.zippopotam.us/us/${zip}`)
const data = await res.json()
city.value = data.places[0]["place name"]
state.value = data.places[0]["state"]
country.value = "United States"
That’s it. That’s the whole thing. You could have shipped this instead of reading this website.
See how that works? See how you typed 5 numbers and 3 fields filled themselves in? See how you’re now typing your street address and it already knows what city you’re in? That’s not magic. That’s a lookup table. We’ve had those since the 1960s.
Tier 1: ZIP at the bottom. Street, city, state, ZIP, country. You had the data to autofill 3 fields and you just… put it last. Amazon does this. Target does this. Walmart does this. Basically everyone does this. Billions of collective hours of human life, spent scrolling for “Illinois.”
Tier 2: No autofill at all. You collect the ZIP. You have the ZIP. You do nothing with it. The ZIP just sits there in your database, inert, like a fire extinguisher in a glass case that says “do not break.” What are you saving it for.
Tier 3: The scrollable country dropdown. 240 countries. No search. No type-ahead. Just pure, unfiltered, alphabetical scrolling. Bonus points if the US is under “T.” Extra bonus points if it’s not even alphabetical. You absolute psychopaths.
Tier 4: The form that resets when you hit back. I filled out 14 fields. Your payment processor failed. I hit back. Everything is gone. My street. My city. My state. My will to live. All of it. Returned to the void. The developer responsible for this sleeps eight hours a night. That’s the part that haunts me.
While we’re here:
Invoke the right keyboard. If you’re asking for a ZIP code, use inputmode="numeric". It’s one HTML attribute. On mobile, I should see a number pad, not a full QWERTY keyboard. This applies to phone numbers, credit cards, and anything else that’s obviously just digits. You already know the input type. Tell the phone.
Work with autofill, not against it. Browsers have had autofill for over a decade. Use the right autocomplete attributes — postal-code, address-line1, country. If your form fights the browser’s autofill, your form is wrong. The browser is trying to save your user 45 seconds. Let it.
Fine, maybe country first. The purists in the comments are technically correct — postal codes aren’t globally unique. You could do country first (pre-filled via IP), then postal code, then let the magic happen. The point was never “skip the country field.” The point is: stop making me type things you already know.
Found a site that puts the ZIP code last? A country dropdown sorted by vibes? A form that makes you cry?
Send it to us →
Put the ZIP code first. Autofill the city. Autofill the state. Autofill the country. Let the user type their street address last, with autocomplete scoped to their ZIP.
It is a solved problem. The API is free. The code is 5 lines. There is genuinely no reason not to do this other than the mass institutional inertia of a million product managers copy-pasting the same address form template from 2009 and never once asking “wait, why is the ZIP code at the bottom?”
Why is the ZIP code at the bottom?
Put it first, you animals.
Tweet this ·
Post to HN ·
Copy link
Share this before you have to fill out another address form.
...
Read the original on zipcodefirst.com »
Effort comes after reports of individuals suspiciously earning massive payouts before Iran Strikes, Venezuela Military Actions
Washington, D.C. — Today, Oregon’s U.S. Senator Jeff Merkley and Minnesota’s U.S. Senator Amy Klobuchar launched a new effort to prevent government officials at the highest levels from engaging in prediction markets, cracking down on the potential for any insider trading.
Following multiple public reports on the growing influence of prediction markets and their potential for corruption, Merkley and Klobuchar introduced the End Prediction Market Corruption Act—a new bill to ban the President, Vice President, Members of Congress, and other public officials from trading event contracts. The bill will ensure that federal elected officials maintain their oath of office to serve the people by preventing them from trading on information that they gained through their role.
“When public officials use non-public information to win a bet, you have the perfect recipe to undermine the public’s belief that government officials are working for the public good, not for their own personal profits,” said Merkley. “Perfectly timed bets on prediction markets have the unmistakable stench of corruption. To protect the public interest, Congress must step up and pass my End Prediction Market Corruption Act to crack down on this bad bet for democracy.”
“At the same time that prediction markets have seen huge growth, we have seen increasing reports of misconduct. This legislation strengthens the Commodity Futures Trading Commission’s ability to go after bad actors and provides rules of the road to prevent those with confidential government or policy information from exploiting their access for financial gain,” said Klobuchar.
Merkley and Klobuchar’s End Prediction Market Corruption Act is cosponsored by U.S. Senators Chris Van Hollen (D-MD), Adam Schiff (D-CA), and Kirsten Gillibrand (D-NY).
Their bill is supported by Public Citizen, Citizens for Responsibility and Ethics in Washington (CREW), and Project On Government Oversight (POGO).
“The American people deserve unwavering ethical standards from their government officials. Officials have a responsibility to avoid not only actual conflicts of interest but even the appearance of impropriety. POGO is pleased to endorse the End Prediction Market Corruption Act, which will further prohibit covered government officials from exploiting nonpublic information for personal gain in prediction markets,” said Janice Luong, Policy Associate for the Project On Government Oversight (POGO).
“It is now more important than ever that prediction markets be governed by ethical constraints, especially when it comes to bets placed by governmental officials. Sen. Merkley’s legislation would appropriately prohibit key government officials from buying or selling on the prediction markets contracts in which they could have insider information on changes in the market. Public Citizen heartily endorses this bill,” said Craig Holman, Ph.D., Public Citizen.
“The rapid rise of retail prediction markets creates the risk that officials across the government could use nonpublic information to trade on and profit off event contracts,” said Debra Perlin, Vice President of Policy of Citizens for Responsibility and Ethics in Washington (CREW). “The American people must be able to trust that their government officials are working on their behalf rather than for personal gain. Senator Merkley’s legislation represents a vital step forward to ensure that those in positions of power, including senior executive branch officials and members of Congress, cannot abuse their access to nonpublic information in order to profit.”
Merkley has been a long-time leader in the push to end public corruption. He has led the charge to crack down on election gambling and dark money in politics, prevent lawmakers from trading stocks, and ban cryptocurrency-related corruption by elected officials at the highest levels of the federal government.
Full text of the End Prediction Market Corruption Act can be found by clicking here.
...
Read the original on www.merkley.senate.gov »
CasNum (Compass and straightedge Number) is a library that implements arbitrary precision arithmetic using compass and straightedge constructions. Arbitrary precision arithmetic, now with 100% more Euclid. Featuring a functional modified Game Boy emulator where every ALU opcode is implemented entirely through geometric constructions.
This project began with a simple compass-and-straightedge ‘engine’, which can be found under the directory cas/. In compass-and-straightedge constructions, one starts with just two points: the origin, and a unit. Exactly as God intended. The engine then allows us to do what the ancients did:
* Construct the line through two points
* Construct the circle that contains one point and has a center at another point
* Construct the point at the intersection of two (non-parallel) lines
* Construct the one or two points in the intersection of a line and a circle (if they intersect)
* Construct the one point or two points in the intersection of two circles (if they intersect) (Which, by the way, turns out to be a nasty 4th degree equation. Check out the formula in circle.py, over 3600 characters, yikes. Good thing we have WolframAlpha).
These five constructions are considered the basic compass and straightedge constructions. Think of these as your ISA.
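To get a feel for what one of these ISA-level primitives involves, here is the line–line intersection case worked out in plain coordinate geometry. This is a hypothetical sketch, not the library's actual API; all function names are illustrative.

```python
def line_through(p, q):
    # Represent the line through points p and q as coefficients (a, b, c)
    # of the equation a*x + b*y = c.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def intersect_lines(l1, l2):
    # Intersection point of two non-parallel lines given as (a, b, c) triples,
    # found by solving the 2x2 linear system with Cramer's rule.
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("lines are parallel")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# The x-axis (through (0,0) and (1,0)) meets the vertical line x = 2 at (2, 0).
horizontal = line_through((0, 0), (1, 0))
vertical = line_through((2, -1), (2, 1))
print(intersect_lines(horizontal, vertical))  # (2.0, 0.0)
```

Line–line is the easy case; as the list above notes, the circle–circle case blows up into a far messier formula.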
On top of the compass-and-straightedge engine, we have the CasNum class. In CasNum, a number x is represented as the point (x,0) in the plane. Now, the fun part: implementing all arithmetic and logical operations. We can construct the addition of two points by finding the midpoint between them and doubling it, which are both standard compass-and-straightedge constructions. Then, we can build the product and quotient of numbers using triangle similarity. The logical operations (AND, OR, XOR) are a little uglier, since they are not a “clean algebraic operation” in the relevant sense, but, hey, it works right?
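In coordinates, the addition trick reads as follows. This is a plain-Python sketch of the geometry, not CasNum's actual code; both helper constructions (midpoint, and doubling a segment from the origin) are standard compass-and-straightedge operations.

```python
def midpoint(p, q):
    # Compass-and-straightedge construction: the midpoint of segment pq.
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def double_from_origin(p):
    # Extending the segment from the origin through p to twice its length
    # is likewise constructible.
    return (2 * p[0], 2 * p[1])

def add(x, y):
    # Numbers are points on the x-axis: x is (x, 0) and y is (y, 0).
    # The midpoint of (x,0) and (y,0) is ((x+y)/2, 0); doubling it from
    # the origin lands on (x+y, 0).
    m = midpoint((x, 0), (y, 0))
    return double_from_origin(m)[0]

print(add(3, 4))  # 7.0
```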
What I thought was pretty neat is that implementing all this from scratch leaves a lot of room for optimization. For example, multiplication by 2 can be implemented much more efficiently than the generic triangle-similarity algorithm for multiplication. Building on that, implementing modulo by first removing the highest power-of-two multiple of the modulus from the dividend yielded much better results than the naive implementation.
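Arithmetically, that modulo optimization amounts to the following (an illustrative plain-Python sketch, not the library's implementation): since doubling is cheap in this setting, repeatedly strip off the largest power-of-two multiple of the modulus instead of subtracting the modulus one step at a time.

```python
def mod_via_doubling(x, m):
    # Reduce x modulo m by subtracting 2^k * m for the largest k with
    # 2^k * m <= x, repeating until x < m. Each inner loop only doubles,
    # which is far cheaper geometrically than generic multiplication.
    while x >= m:
        t = m
        while t * 2 <= x:
            t *= 2
        x -= t
    return x

print(mod_via_doubling(1000, 7))  # 6
```

The naive version would subtract 7 from 1000 over a hundred times; this one needs only a handful of doublings and subtractions.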
* Integrate into the ALU of a Game Boy emulator, thus obtaining a Game Boy that arithmetically and logically runs solely on compass and straightedge constructions
The first two examples were actually implemented and can be found under the examples/ directory. So apparently one cannot square the circle using a compass and a straightedge, but at least one can run Pokémon Red. Man, I’m sure the ancient Greeks would have loved to see this.
Thanks to the great code written by PyBoy, integrating CasNum within it was pretty seamless. The only file I needed to edit was opcodes_gen.py, and the edit was pretty minimal.
As always, please save any important work before running anything I ever write.
To clone the repo, and install requirements:
git clone --recursive git@github.com:0x0mer/CasNum.git
cd CasNum
pip install -r requirements.txt
You can run the rsa and basic examples from the repo’s root directory like so:
python3 -m examples.basic
python3 -m examples.rsa
The library comes with a viewer (casnum/cas/viewer.py) that shows the compass and straightedge constructions. It has an automatic zoom that kinda works, but it goes crazy in the rsa example, so you may want to use manual zoom there.
In order to run PyBoy, first you need a ROM. In order to avoid copyright infringement, I included the ROM for 2048, free to distribute under the zlib license. But if, for example, the ROM you have is 'Pokemon.gb', then you can place it in examples/PyBoy and run:
cd examples/PyBoy
pip install -r requirements.txt
PYTHONPATH=../.. python
Then, once in python, run:
from pyboy import PyBoy
from casnum import viewer
viewer.start()
pyboy = PyBoy('2048.gb') # Or whatever ROM you have
while pyboy.tick():
pass
pyboy.stop()
The viewer.start() call just displays the compass-and-straightedge constructions; it is not strictly needed, but it is fun.
Notice however that the first run of Pokemon on the Game Boy emulator takes approximately 15 minutes to boot, so playing it may require somewhat increased patience. You see, Euclid wouldn’t have optimized the Game Boy boot screen. He would have spent those 15 minutes in silent appreciation, thinking, “Yeah. That’s about how long that should take.”
After running it once, most calculations should already be cached if you run it from the same python interpreter instance, so on the second run you should be able to get a decent 0.5~1 FPS, which is totally almost playable.
Most modern developers are content with a + b. They don’t want to work for it. They don’t want to see the midpoint being birthed from the intersection of two circles.
CasNum is for the developer who believes that if you didn’t have to solve a 4th-degree polynomial just to increment a loop counter, you didn’t really increment it.
Python’s lru_cache is used to cache almost any calculation done in the library, as everything is so expensive. Memory usage may blow up, run at your own risk.
* pyglet (optional but highly recommended. Only needed if you want to display the compass-and-straightedge constructions)
* pytest-lazy-fixtures (Only needed in order to run the tests)
* pycryptodome (Only needed if you want to run the rsa example)
A: It can’t really “run” anything, it’s a number.
A: Define “fast”. If you mean “faster than copying Euclid by hand”, then yes, dramatically.
Q: Why did you make this?
A: I wanted arbitrary precision arithmetic, but I also wanted to feel something.
The code in the root of this repository is licensed under the MIT License.
This project incorporates the following third-party materials:
PyBoy (Modified): Located in ./examples/PyBoy/. Distributed under the GNU Lesser General Public License (LGPL) v3.0.
Notice of Modification: This version of PyBoy has been modified from the original source code to use the CasNum library instead of Python’s int.
The original, unmodified source code for PyBoy can be found at: https://github.com/Baekalfen/PyBoy.
The full LGPL license text is available in ./examples/PyBoy/License.md.
2048.gb: This Game Boy ROM binary is distributed under the zlib License.
Disclaimer: This software is provided ‘as-is’, without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software.
...
Read the original on github.com »
As loneliness deepens in one of the world’s fastest-ageing nations, a network of women delivering probiotic milk drinks has become a vital source of routine, connection and care.
A woman in a neat navy suit and powder-blue shirt cycles purposefully down a quiet residential street in Tokyo. It’s 08:30 but already balmy, and she’s grateful for the matching visor that shields her eyes from the summer sun.
She arrives at her first stop, parks her bike and knocks on the door of a small wooden house with potted plants flanking the entrance. Inside, an elderly woman waits. Her face breaks into a broad smile as she opens the door — she has been expecting this visit.
Japan is the world’s most rapidly ageing major economy. Nearly 30% of its population is now over 65, and the number of elderly people living alone continues to rise. As families shrink and traditional multi-generational households decline, isolation has become one of the country’s most pressing social challenges.
The suited woman is a Yakult Lady — one of tens of thousands across Japan who deliver the eponymous probiotic drinks directly to people’s homes. On paper they’re delivery workers, but in practice they’re part of the country’s informal social safety net. In a country grappling with a rapidly ageing population and a deepening loneliness crisis, Yakult Ladies have become an unlikely source of community, helping to reduce the problem of isolation one drop-off at a time.
With their distinctive squat plastic bottles and shiny red caps, Yakult pioneered a genre. The probiotic drink was launched in Japan 90 years ago — long before “microbiome” became common parlance. But today, the women who deliver them are as important to the brand’s identity as the product itself.
...
Read the original on www.bbc.com »
Ayatollah Ali Khamenei was not, it’s safe to assume, a devoted Polymarket user. If he had been, the Iranian leader might still be alive. Hours before Khamenei’s compound in Tehran was reduced to rubble last week, an account under the username “magamyman” bet about $20,000 that the supreme leader would no longer be in power by the end of March. Polymarket placed the odds at just 14 percent, netting “magamyman” a profit of more than $120,000.
Everyone knew that an attack might be in the works—some American aircraft carriers had already been deployed to the Middle East weeks ago—but the Iranian government was caught off guard by the timing. Although the ayatollah surely was aware of the risks to his life, he presumably did not know that he would be targeted on this particular Saturday morning. Yet on Polymarket, plenty of warning signs pointed to an impending attack. The day before, 150 users bet at least $1,000 that the United States would strike Iran within the next 24 hours, according to a New York Times analysis. Until then, few people on the platform were betting that kind of money on an immediate attack.
Maybe all of this sounds eerily familiar. In January, someone on Polymarket made a series of suspiciously well-timed bets right before the U.S. attacked a foreign country and deposed its leader. By the time Nicolás Maduro was extracted from Venezuela and flown to New York, the user had pocketed more than $400,000. Perhaps this trader and the Iran bettors who are now flush with cash simply had the luck of a lifetime—the gambling equivalent of making a half-court shot. Or maybe they knew what was happening ahead of time and flipped it for easy money. We simply do not know.
Polymarket traders swap crypto, not cash, and conceal their identities through the blockchain. Even so, investigations into insider trading are already underway: Last month, Israel charged a military reservist for allegedly using classified information to make unspecified bets on Polymarket.
The platform forbids illegal activity, which includes insider trading in the U.S. But with a few taps on a smartphone, anyone with privileged knowledge can now make a quick buck (or a hundred thousand). Polymarket and other prediction markets—the sanitized, industry-favored term for sites that let you wager on just about anything—have been dogged by accusations of insider trading in markets of all flavors. How did a Polymarket user know that Lady Gaga, Cardi B, and Ricky Martin would make surprise appearances during the Super Bowl halftime show, but that Drake and Travis Scott wouldn’t? Shady bets on war are even stranger and more disturbing. They risk unleashing an entirely new kind of national-security threat. The U.S. caught a break: The Venezuela and Iran strikes were not thwarted by insider traders whose bets could have prompted swift retaliation. The next time, we may not be so lucky.
The attacks in Venezuela and Iran—like so many military campaigns—were conducted under the guise of secrecy. You don’t swoop in on an adversary when they know you are coming. The Venezuela raid was reportedly so confidential that Pentagon officials did not know about its exact timing until a few hours before President Trump gave the orders.
Any insiders who put money down on impending war may not have thought that they were giving anything away. An anonymous bet that reeks of insider trading is not always easy to spot in the moment. After the suspicious Polymarket bets on the Venezuela raid, the site’s forecast placed the odds that Maduro would be ousted at roughly 10 percent. Even if Maduro and his team had been glued to Polymarket, it’s hard to imagine that such long odds would have compelled him to flee in the middle of the night. And even with so many people betting last Friday on an imminent strike in Iran, Polymarket forecasted only a 26 percent chance, at most, of an attack the next day. What’s the signal, and what’s the noise?
In both cases, someone adept at parsing prediction markets could have known that something was up. “It’s possible to spot these bets ahead of time,” Rajiv Sethi, a Barnard College economist who studies prediction markets, told me. There are some telltale behaviors that could help distinguish a military contractor betting off a state secret from a college student mindlessly scrolling on his phone after one too many cans of Celsius. Someone who’s using a newly created account to wager a lot of money against the conventional wisdom is probably the former, not the latter. And spotting these kinds of suspicious bettors is only getting easier. The prediction-market boom has created a cottage industry of tools that instantaneously flag potential insider trading—not for legal purposes but so that you, too, can profit off of what the select few already know.
Unlike Kalshi, the other big prediction-market platform, Polymarket can be used in the U.S. only through a virtual private network, or VPN. In effect, the site is able to skirt regulations that require tracking the identities of its customers and reporting shady bets to the government. In some ways, insider trading seems to be the whole point: “What’s cool about Polymarket is that it creates this financial incentive for people to go and divulge the information to the market,” Shayne Coplan, the company’s 27-year-old CEO, said in an interview last year. (Polymarket did not respond to a request for comment.)
Consider if the Islamic Revolutionary Guard Corps had paid the monthly fee for a service that flagged relevant activity on Polymarket two hours before the strike. The supreme leader might not have hosted in-person meetings with his top advisers where they were easy targets for missiles. Perhaps Iran would have launched its own preemptive strikes, targeting military bases across the Middle East. Six American service members have already died from Iran’s drone attacks in the region; the death toll could have been higher if Iran had struck first. In other words, someone’s idea of a get-rich-quick scheme may have ended with a military raid gone horribly awry. (The Department of Defense did not respond to a request for comment.)
Maybe this all sounds far-fetched, but it shouldn’t. “Any advance notice to an adversary is problematic,” Alex Goldenberg, a fellow at the Rutgers Miller Center who has written about war markets, told me. “And these predictive markets, as they stand, are designed to leak out this information.” In all likelihood, he added, intelligence agencies across the world are already paying attention to Polymarket. Last year, the military’s bulletin for intelligence professionals published an article advocating for the armed forces to integrate data from Polymarket to “more fully anticipate national security threats.” After all, the Pentagon already has some experience with prediction markets. During the War on Terror, DARPA toyed with creating what it billed the “Policy Analysis Market,” a site that would let anonymous traders bet on world events to forecast terrorist attacks and coups. (Democrats in Congress revolted, and the site was quickly canned.)
Now every adversary and terrorist group in the world can easily access war markets that are far more advanced than what the DOD ginned up two decades ago. What makes Polymarket’s entrance into warfare so troubling is not just potential insider trading from users like “magamyman.” If governments are eyeing Polymarket for signs of an impending attack, they can also be led astray. A government or another sophisticated actor wouldn’t need to spend much money to massively swing the Polymarket odds on whether a Gulf state will imminently strike Iran—breeding panic and paranoia. More fundamentally, prediction markets risk warping the basic incentives of war, Goldenberg said. He gave the example of a Ukrainian military commander making less than $1,000 a month, who could place bets that go against his own military’s objective. “Maybe you choose to retreat a day early because you can double, triple, or quadruple your money and then send that back to your family,” he said.
Again, we don’t know for sure whether any of this is happening. That may be the scariest part. As long as Polymarket lets anyone bet on war anonymously, we may never know. Last Saturday, the day of the initial Iran attack, Polymarket processed a record $478 million in bets, according to one analysis. All the while, Polymarket continues to wedge itself into the mainstream. Substack recently struck a partnership with Polymarket to incorporate the platform’s forecasts into its newsletters. (“Journalism is better when it’s backed by live markets,” Polymarket posted on X in announcing the deal.) All of this makes the site even more valuable as an intelligence asset, and even more destructive for the rest of us. Polymarket keeps launching more war markets: Will the U.S. strike Iraq? Will Israel strike Beirut? Will Iran strike Cyprus? Somewhere out there, someone likely already knows the answers.
...
Read the original on www.theatlantic.com »
US economy sheds 92,000 jobs in February in sharp slide
...
Read the original on www.ft.com »
I used to work at a vector database company. My entire job was helping people understand why they needed a database purpose-built for AI; embeddings, semantic search, the whole thing. So it’s a little funny that I’m writing this. But here I am, watching everyone in the AI ecosystem suddenly rediscover the humble filesystem, and I think they might be onto something bigger than most people realize.
Not bigger than databases. Different from databases. I need to say that upfront because I already know someone is going to read this and think I’m saying “files good, databases bad.” I’m not. Stay with me.
If you’ve been paying any attention to the AI agent space over the last few months, you’ve noticed something strange. LlamaIndex published “Files Are All You Need.” LangChain wrote about how agents can use filesystems for context engineering. Oracle, yes Oracle (who is cooking btw), put out a piece comparing filesystems and databases for agent memory. Dan Abramov wrote about a social filesystem built on the AT Protocol. Archil is building cloud volumes specifically because agents want POSIX file systems.
Jerry Liu from LlamaIndex put it bluntly: instead of one agent with hundreds of tools, we’re moving toward a world where the agent has access to a filesystem and maybe 5-10 tools. That’s it. Filesystem, code interpreter, web access. And that’s as general as, if not more general than, an agent with 100+ MCP tools.
Karpathy made the adjacent observation that stuck with me. He pointed out that Claude Code works because it runs on your computer, with your environment, your data, your context. It’s not a website you go to — it’s a little spirit that lives on your machine. OpenAI got this wrong, he argued, by focusing on cloud deployments in containers orchestrated from ChatGPT instead of simply running on localhost.
And here’s the thing that makes all of this matter commercially: coding agents make up the majority of actual AI use cases right now. Anthropic is reportedly approaching profitability, and a huge chunk of that is driven by Claude Code, a CLI tool. Not a chatbot. A tool that reads and writes files on your filesystem.
Here’s where I think most of the discourse misses the deeper point.
Memory, in the human, psychological sense, is fundamental to how we function. We don’t re-read our entire life story every time we make a decision. We have long-term storage, selective recall, the ability to forget things that don’t matter and surface things that do. Context windows in LLMs are none of that. They’re more like a whiteboard that someone keeps erasing.
If you’ve used Claude Code for any real project, you know the dread of watching that “context left until auto-compact” notification creep closer. Your entire conversation, all the context the agent has built up about your codebase, your preferences, and your decisions, is about to be compressed or lost.
Filesystems solve this in the most boring, obvious way possible. Write things down. Put them in files. Read them back when you need them. Claude’s CLAUDE.md file gives the agent persistent context about your project. Cursor stores past chat history as searchable files. People are writing aboutme.md files that act as portable identity descriptors any agent can read: your preferences, your skills, your working style, all in a file that moves between applications without anyone needing to coordinate an API.
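The whole “write things down” loop fits in a few lines. A minimal sketch, using the CLAUDE.md convention mentioned above; the note contents and helper names are mine, not any tool’s actual API:

```python
from pathlib import Path

MEMORY = Path("CLAUDE.md")  # project-level context file, per the convention above

def remember(note: str) -> None:
    # Append one line of durable context for future sessions to pick up.
    with MEMORY.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def recall() -> str:
    # Read everything back at session start; empty string if nothing yet.
    return MEMORY.read_text(encoding="utf-8") if MEMORY.exists() else ""

remember("Prefers small, composable modules over frameworks")
print(recall())
```

That is the entire persistence layer: no schema, no migration, and any agent that can read a file can recover the context.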
Except! It might not be quite that simple.
A recent paper from ETH Zürich evaluated whether these repository-level context files actually help coding agents complete tasks. The finding was counterintuitive: across multiple agents and models, context files tended to reduce task success rates while increasing inference cost by over 20%. Agents given context files explored more broadly, ran more tests, traversed more files — but all that thoroughness delayed them from actually reaching the code that needed fixing. The files acted like a checklist that agents took too seriously.
This sounds like it undermines the whole premise. But I think it actually sharpens it. The paper’s conclusion wasn’t “don’t use context files.” It was that unnecessary requirements make tasks harder, and context files should describe only minimal requirements. The problem isn’t the filesystem as a persistence layer. The problem is people treating CLAUDE.md like a 2,000-word onboarding document instead of a concise set of constraints. Which brings us to the question of standards.
Right now we have CLAUDE.md, AGENTS.md, copilot-instructions.md, .cursorrules, and probably five more by the time you read this. Everyone agrees that agents need persistent filesystem-based context. Nobody agrees on what the file should be called or what should go in it. I see efforts to consolidate; this is good.
Dan Abramov’s piece on a social filesystem crystallized something important here. He describes how the AT Protocol treats user data as files in a personal repository: structured, owned by the user, readable by any app that speaks the format. The critical design choice is that different apps don’t need to agree on what a “post” is. They just need to namespace their formats (using domain names, like Java packages) so they don’t collide. Apps are reactive to files. Every app’s database becomes derived data, a cached materialized view of everybody’s folders.
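The namespacing trick can be shown with paths alone. The collection names below mimic the reversed-domain style, but the on-disk layout is my sketch, not the AT Protocol’s actual storage format:

```python
from pathlib import Path

def write_record(repo: Path, collection: str, key: str, body: str) -> Path:
    # Namespacing by domain (like Java packages) keeps apps from colliding
    # even though they never coordinated on a shared schema.
    path = repo / collection / f"{key}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(body, encoding="utf-8")
    return path

repo = Path("my-repo")
write_record(repo, "app.bsky.feed.post", "3k2a", '{"text": "hello"}')
write_record(repo, "com.example.recipes.entry", "3k2b", '{"title": "soup"}')

# Each app reads only its own namespace; the other app's data is inert files.
print(sorted(p.name for p in (repo / "app.bsky.feed.post").glob("*.json")))
```

Neither app had to know the other exists, which is exactly the coexistence-without-collision point.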
The same tension exists in the agent context file space. We don’t need CLAUDE.md and AGENTS.md and copilot-instructions.md to converge into one file. We need them to coexist without collision. And to be fair, some convergence is happening. Anthropic released Agent Skills as an open standard, a SKILL.md format that Microsoft, OpenAI, Atlassian, GitHub, and Cursor have all adopted. A skill you write for Claude Code works in Codex, works in Copilot. The file format is the API.
NanoClaw, a lightweight personal AI assistant framework, takes this to its logical conclusion. Instead of building an ever-expanding feature set, it uses a “skills over features” model. Want Telegram support? There’s no Telegram module. There’s a /add-telegram skill, essentially a markdown file that teaches Claude Code how to rewrite your installation to add the integration. Skills are just files. They’re portable, auditable, and composable. No MCP server required. No plugin marketplace to browse. Just a folder with a SKILL.md in it.
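The “skills are just files” model is almost trivially easy to sketch: capabilities are discovered by scanning folders for a SKILL.md. The `skills/` layout and skill contents here are illustrative assumptions, not NanoClaw’s actual code:

```python
from pathlib import Path

def discover_skills(root: Path) -> dict[str, str]:
    # A skill is any folder holding a SKILL.md; no registry, no plugin API.
    return {p.parent.name: p.read_text(encoding="utf-8")
            for p in root.glob("*/SKILL.md")}

# Build a toy skill folder to demonstrate (hypothetical layout).
root = Path("skills")
(root / "add-telegram").mkdir(parents=True, exist_ok=True)
(root / "add-telegram" / "SKILL.md").write_text(
    "# add-telegram\nTeach the agent to wire up a Telegram integration.\n",
    encoding="utf-8",
)

print(sorted(discover_skills(root)))  # prints ['add-telegram']
```

Adding a capability means dropping in a folder; removing one means deleting it. The filesystem is the plugin marketplace.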
This is interoperability without coordination. And I want to be specific about what I mean by that, because it’s a strong claim. In tech, getting two competing products to work together usually requires either a formal standard that takes years to ratify, or a dominant platform that forces compatibility. Files sidestep both. If two apps can read markdown, they can share context. If they both understand the SKILL.md format, they can share capabilities. Nobody had to sign a partnership agreement. Nobody had to attend a standards body meeting. The file format does the coordinating.
There’s a useful analogy from infrastructure. Traditional data architectures were designed around the assumption that storage was the bottleneck. The CPU waited for data from memory or disk, and computation was essentially reactive to whatever data storage made available. But as processing power outpaced storage I/O, the paradigm shifted. The industry moved toward decoupling storage and compute, letting each scale independently, which is how we ended up with architectures like S3 plus ephemeral compute clusters. The bottleneck moved, and everything reorganized around the new constraint.
Something similar is happening with AI agents. The bottleneck isn’t model capability or compute. It’s context. Models are smart enough. They’re just forgetful. And filesystems, for all their simplicity, are an incredibly effective way to manage persistent context at the exact point where the agent runs — on the developer’s machine, in their environment, with their data already there.
Now, I’d be a fraud if I didn’t acknowledge the tension here. Someone on Twitter joked that “all of you saying you don’t need a graph for agents while using the filesystem are just in denial about using a graph.” And… they’re not wrong. A filesystem is a tree structure. Directories, subdirectories, files: a directed acyclic graph. When your agent runs ls, grep, reads a file, follows a reference to another file, it’s traversing a graph.
Richmond in Oracle’s piece made the sharpest distinction I’ve seen: filesystems are winning as an interface, databases are winning as a substrate. The moment you want concurrent access, semantic search at scale, deduplication, recency weighting — you end up building your own indexes. Which is, let’s be honest, basically a database.
Having worked at Weaviate, I can tell you that this isn’t an either/or situation. The file interface is powerful because it’s universal and LLMs already understand it. The database substrate is powerful because it provides the guarantees you need when things get real. The interesting future isn’t files versus databases. It’s files as the interface humans and agents interact with, backed by whatever substrate makes sense for the use case.
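One way to see “files as interface, database as substrate” concretely: the files stay the source of truth, and any index is derived, rebuildable data, the cached-materialized-view framing again. A toy inverted index over markdown files, assuming nothing beyond the standard library:

```python
from collections import defaultdict
from pathlib import Path

def build_index(root: Path) -> dict[str, set[str]]:
    # Derived data: word -> filenames. Throw it away and rebuild any time;
    # the files on disk remain canonical.
    index: dict[str, set[str]] = defaultdict(set)
    for md in root.glob("**/*.md"):
        for word in md.read_text(encoding="utf-8").lower().split():
            index[word.strip(".,:;!?")].add(md.name)
    return index

notes = Path("notes")
notes.mkdir(exist_ok=True)
(notes / "aboutme.md").write_text("Prefers terse commit messages.\n", encoding="utf-8")

idx = build_index(notes)
print(idx["terse"])  # -> {'aboutme.md'}
```

The moment you need ranking, recency weighting, or concurrent writers, this dict wants to become a real database, which is the Oracle piece’s point.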
Here’s my actual take on all of this, the thing I think people are dancing around but not saying directly.
Filesystems can redefine what personal computing means in the age of AI.
Not in the “everything runs locally” sense (but maybe?). In the sense that your data, your context, your preferences, your skills, your memory — lives in a format you own, that any agent can read, that isn’t locked inside a specific application. Your aboutme.md works with your flavour of OpenClaw/NanoClaw today and whatever comes tomorrow. Your skills files are portable. Your project context persists across tools.
This is what personal computing was supposed to be before everything moved into walled-garden SaaS apps and proprietary databases. Files are the original open protocol. And now that AI agents are becoming the primary interface to computing, files are becoming the interoperability layer that makes it possible to switch tools, compose workflows, and maintain continuity across applications, all without anyone’s permission.
I’ll admit this is a bit idealistic. The history of open formats is littered with standards that won on paper and lost in practice. Companies have strong incentives to make their context files just different enough that switching costs remain high. The fact that we already have CLAUDE.md and AGENTS.md and .cursorrules coexisting rather than one universal format is evidence that fragmentation is the default, not the exception. And the ETH Zürich paper is a reminder that even when the format exists, writing good context files is harder than it sounds. Most people will write bad ones, and bad context files are apparently worse than none at all.
But I keep coming back to something Dan Abramov wrote: our memories, our thoughts, our designs should outlive the software we used to create them. That’s not a technical argument. It’s a values argument. And it’s one that the filesystem, for all its age and simplicity, is uniquely positioned to serve. Not because it’s the best technology. But because it’s the one technology that already belongs to you.
...
Read the original on madalitso.me »
Those who have never endured the relentless ringing of tinnitus can only dream of the torment. In fact, a bad dream may be the closest some get to experiencing anything like it.
The subjective sound, which can also be a hissing, buzzing, or clicking, is heard by no one else, and it may be present constantly, or may come and go.
Neuroscientists at the University of Oxford now suspect that sleep and tinnitus are closely intertwined in the brain.
Their findings hint at a fundamental relationship between the two conditions — one that has, surprisingly, been overlooked in the brain until very recently.
“What first made me and my colleagues curious were the remarkable parallels between tinnitus and sleep,” neuroscientist Linus Milinski at Oxford’s Sleep and Circadian Neuroscience Institute told ScienceAlert.
“Tinnitus is a debilitating medical condition, whereas sleep is a natural state we enter regularly, yet both appear to rely on spontaneous brain activity. Because there is still no effective treatment for subjective tinnitus, I believe that exploring these similarities might offer new ways to understand and eventually treat phantom percepts.”
A ‘phantom percept’ is when our brains fool us into thinking we are seeing, hearing, feeling, or smelling something that is not there, physically speaking.
Many people experience phantom percepts only during sleep, but for about 15 percent of the world’s population, an inescapable noise rings in their ears during waking hours, too.
Tinnitus is the world’s most common phantom percept, and yet there is no known cause or cure, despite a long list of hypotheses.
While many individuals with tinnitus report poor sleep and show poor sleep patterns, the potential connection to this crucial bodily function has only recently come to light.
In 2022, Milinski led a review, which the authors claim is the first to consider, at a functional level, how sleep might impact tinnitus, and vice versa.
The Oxford researchers proposed that the large spontaneous waves of brain activity that occur during deep sleep, or non-rapid eye movement sleep (non-REM), might suppress the brain activity that leads to tinnitus.
To test that idea, the team turned to ferrets, which have a similar auditory system to humans. In experiments published in 2024, researchers found that ferrets that developed more severe tinnitus also showed disrupted sleep.
“We could actually see these sleep problems appear at the same time as tinnitus after noise exposure,” Milinski told ScienceAlert. “This suggested, for the first time, a clear link between developing tinnitus and disrupted sleep.”
Crucially, the ferrets that developed tinnitus showed overly responsive brain activity to sound. When the ferrets finally did manage to slip into non-REM sleep, that hyperactivity was dampened.
This suggests that sleep may temporarily mask the effects of tinnitus by engaging the same brain circuits.
“Our findings indicate that deep sleep may indeed help mitigate tinnitus and could reveal natural brain mechanisms for modulating abnormal activity,” said Milinski.
Research on non-human animals has its obvious limitations, but the same sort of brain activity patterns may exist in humans, too.
Since their 2022 review, Milinski says the field has rapidly expanded, with a growing number of large-scale studies investigating how sleep, the environment, and tinnitus interact — and not just in ferrets.
“I hope this research will lead to greater awareness of tinnitus and open new ways of exploring treatments,” Milinski told ScienceAlert.
“Acknowledging the impact of tinnitus, especially in older adults, where hearing loss and tinnitus can increase isolation and contribute to mental health problems, is incredibly important.”
Just last year, a study from China found that individuals with tinnitus were less able to suppress the hyperactivity of their awake brains as they transitioned into a sleep state.
During deep sleep, however, the hyperactivity linked to tinnitus was suppressed.
“This study establishes sleep as a critical therapeutic target to interrupt the 24-hour dysfunctional cycle of tinnitus,” the authors conclude, led by Xiaoyu Bao of South China University of Technology.
At Oxford, Milinski and his colleagues are now focusing on how sleep may affect the development of tinnitus.
“Tinnitus can make sleep worse, and poor sleep may, in turn, make tinnitus worse. It may be a kind of vicious circle, although I do not believe it is unbreakable,” speculated Milinski.
“When we do not sleep well, we become more vulnerable to stress, and stress is one of the strongest factors known to worsen tinnitus. Stress can even trigger tinnitus to begin with.”
Further research could not only lead to effective tinnitus treatments but also help scientists better understand the mysteries of sleep itself.
The 2022 review was published in Brain Communications.
An earlier version of this article was published in November 2025.
...
Read the original on www.sciencealert.com »
To add this web app to your iOS home screen tap the share button and select "Add to the Home Screen".
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.