10 interesting stories served every morning and every evening.
These days it seems you need a trillion fake dollars, or lunch with politicians to get your own data center. They may help, but they’re not required. At comma we’ve been running our own data center for years. All of our model training, metrics, and data live in our own data center in our own office. Having your own data center is cool, and in this blog post I will describe how ours works, so you can be inspired to have your own data center too.
If your business relies on compute, and you run that compute in the cloud, you are putting a lot of trust in your cloud provider. Cloud companies generally make onboarding very easy, and offboarding very difficult. If you are not vigilant you will sleepwalk into a situation of high cloud costs and no way out. If you want to control your own destiny, you must run your own compute.
Self-reliance is great, but there are other benefits to running your own compute. It inspires good engineering, because maintaining a data center is about solving real-world challenges. The cloud requires expertise in company-specific APIs and billing systems; a data center requires knowledge of Watts, bits, and FLOPs. I know which one I’d rather think about.
Avoiding the cloud for ML also creates better incentives for engineers. Engineers generally want to improve things. In ML many problems go away by just using more compute. In the cloud that means improvements are just a budget increase away. This locks you into inefficient and expensive solutions. Instead, when all you have available is your current compute, the quickest improvements are usually speeding up your code, or fixing fundamental issues.
Finally, there’s cost: owning a data center can be far cheaper than renting in the cloud, especially if your compute or storage needs are fairly consistent, which tends to be true if you are in the business of training or running models. In comma’s case I estimate we’ve spent ~$5M on our data center, and we would have spent $25M+ had we done the same things in the cloud.
Our data center is pretty simple. It’s maintained and built by only a couple of engineers and technicians. Your needs may be slightly different, but our implementation should provide useful context.
To run servers you need power. We currently use about 450kW at max. Operating a data center exposes you to many fun engineering challenges, but procuring power is not one of them. San Diego power costs over 40c/kWh, ~3x the global average. It’s a ripoff, overpriced simply due to political dysfunction. We spent $540,112 on power in 2025, a big part of the data center cost. In a future blog post I hope I can tell you how we produce our own power, and how you can too.
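As a back-of-the-envelope check on those figures (assuming a flat 40c/kWh rate, which simplifies a real utility tariff), the peak draw and the actual bill imply a much lower average load:

```python
# Rough power math from the figures above (assumed: flat $0.40/kWh,
# 450 kW peak draw, $540,112 spent in 2025).
PEAK_KW = 450
RATE = 0.40                 # USD per kWh
SPEND = 540_112             # USD, 2025
HOURS = 24 * 365

# Bill if the data center ran at peak draw all year.
print(f"cost at constant peak: ${PEAK_KW * HOURS * RATE:,.0f}")  # ~$1.58M

# Average draw implied by the actual bill.
print(f"implied average draw: {SPEND / (RATE * HOURS):.0f} kW")  # ~154 kW
```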
Data centers need cool, dry air. Typically this is achieved with a CRAC system, but those are power-hungry. San Diego has a mild climate, so we opted for pure outside-air cooling. This gives us less control of the temperature and humidity, but uses only a couple dozen kW. We have dual 48” intake fans and dual 48” exhaust fans to keep the air cool and the humidity low.
The majority of our current compute is 600 GPUs in 75 TinyBox Pro machines. They were built in-house, which saves us money and ensures they suit our needs. Our self-built machines fail at a similar rate to pre-built machines we’ve bought, but we’re capable of fixing them ourselves quickly. They have 2 CPUs and 8 GPUs each, and work as both training machines and general compute workers.
For data storage we have a few racks of Dell machines (R630 and R730). They are filled with SSDs for a total of ~4PB of storage. We use SSDs for reliability and speed. Our main storage arrays have no redundancy, and each node needs to be able to saturate the network bandwidth with random-access reads. For the storage machines this means reading at up to 20Gbps from each 80TB chunk.
Other than storage and compute machines we have several one-off machines to run services. This includes a router, climate controller, data ingestion machine, storage master servers, metric servers, redis servers, and a few more.
Running the network requires switches, but at this scale we don’t need to bother with complicated switch topologies. We have three interconnected 100Gbps Z9264F switches, which serve as the main Ethernet network. We have two more InfiniBand switches to interconnect the two TinyBox Pro groups for training all-reduce.
To effectively use all these compute and storage machines you need some infra. At this scale, services don’t need redundancy to achieve 99% uptime. We use a single master for all services, which makes things pretty simple.
All servers get Ubuntu installed via PXE boot and are managed with Salt.
All of our storage arrays use mkv. The main array is 3PB of non-redundant storage hosting our driving data we train on. We can read from this array at ~1TB/s, which means we can train directly on the raw data without caching. Redundancy is not needed since no specific data is critical.
We have an additional ~300TB non-redundant array to cache intermediate processed results. And lastly, we have a redundant mkv storage array to store all of our trained models and training metrics. Each of these three arrays has a separate single master server.
We use Slurm to manage the compute nodes and compute jobs. We schedule two types of distributed compute: PyTorch training jobs and miniray workers.
To train models across multiple GPU nodes we use torch.distributed FSDP. We have 2 separate training partitions, each intra-connected with InfiniBand for training across machines. We wrote our own training framework which handles the training loop boilerplate, but it’s mostly just PyTorch.
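comma’s framework itself isn’t shown here, but a minimal torch.distributed FSDP loop of the kind described might look like the sketch below; the model, data, and hyperparameters are placeholders.

```python
# Minimal FSDP training loop sketch. In practice this runs one process per
# GPU, launched by Slurm or torchrun; the model and data here are stand-ins.
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")           # rank/world size come from the launcher
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Sequential(              # placeholder for the real model
        torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 1024),
    ).cuda()
    model = FSDP(model)                       # shard params and grads across ranks
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(1000):
        x = torch.randn(32, 1024, device="cuda")  # placeholder batch
        loss = (model(x) - x).pow(2).mean()
        opt.zero_grad()
        loss.backward()                        # FSDP reduce-scatters gradients
        opt.step()
        if step % 100 == 0 and dist.get_rank() == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```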
We have a custom model experiment tracking service (similar to wandb or tensorboard). It provides a dashboard for tracking experiments, and shows custom metrics and reports. It is also the interface for the mkv storage array that hosts the model weights. The training runs store the model weights there with a uuid, and they are available to download for whoever needs to run them. The metrics and reports for our latest models are also open.
Besides training we have many other compute tasks. This can be anything from running tests, running models, pre-processing data, or even running agent rollouts for on-policy training. We wrote a lightweight open-source task scheduler called miniray that lets you run arbitrary Python code on idle machines, a much simpler take on something like dask. Slurm will schedule any idle machine to be an active miniray worker and accept pending tasks. All the task information is hosted in a central redis server.
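miniray’s real API may differ, but the redis-backed pattern described here, sketched with hypothetical host, queue, and function names, looks roughly like this:

```python
# Sketch of the pattern described above, NOT miniray's actual API: pickled
# tasks go into a central Redis list; idle machines (scheduled by Slurm)
# pop and execute them. Host, queue names, and helpers are hypothetical.
import pickle
import redis

r = redis.Redis(host="redis-master", port=6379)

def submit(fn, *args):
    """Queue a Python callable (it must be importable on the workers)."""
    r.lpush("tasks", pickle.dumps((fn, args)))

def worker_loop():
    """What an idle machine runs once Slurm makes it a worker."""
    while True:
        _, payload = r.brpop("tasks")   # block until a task arrives
        fn, args = pickle.loads(payload)
        r.lpush("results", pickle.dumps(fn(*args)))
```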
Miniray workers with GPUs will spin up a Triton Inference Server to run model inference with dynamic batching. A miniray worker can thus easily and efficiently run any of the models hosted in the model mkv storage array.
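For illustration, querying such a worker with the standard tritonclient library could look like the following; the model name and tensor names are made up, and dynamic batching itself is something Triton enables server-side in the model’s configuration, not in client code.

```python
# Hypothetical client call against a worker's Triton server. The model name
# ("driving_model") and tensor names ("image", "plan") are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a request for one frame; Triton batches concurrent requests itself.
frame = np.zeros((1, 3, 256, 512), dtype=np.float32)      # placeholder input
inp = httpclient.InferInput("image", list(frame.shape), "FP32")
inp.set_data_from_numpy(frame)

result = client.infer(model_name="driving_model", inputs=[inp])
print(result.as_numpy("plan"))                             # placeholder output
```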
Miniray makes it extremely easy to scale parallel tasks to hundreds of machines. For example, the controls challenge record was set by just having ~1hr of access to our data center with miniray.
All our code is in a monorepo that we have cloned on our workstations. This monorepo is kept small.
The most complex thing we do at comma is train driving models on-policy; these training runs require training data to be generated during training by running simulated driving rollouts with the most recent model weights. Here’s a real-world command we just used to train such a model. This training run uses all of the infrastructure described above. While only this small command is needed to kick everything off, it orchestrates a lot of moving parts.
Does all this stuff sound exciting? Then build your own data center for yourself or your company! You can also come work here.
...
Read the original on blog.comma.ai »
There are many good places for advertising. A conversation with Claude is not one of them.
Advertising drives competition, helps people discover new products, and allows services like email and social media to be offered for free. We’ve run our own ad campaigns, and our AI models have, in turn, helped many of our customers in the advertising industry.
But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.
We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.
When people use search engines or social media, they’ve come to expect a mixture of organic and sponsored content. Filtering signal from noise is part of the interaction.
Conversations with AI assistants are meaningfully different. The format is open-ended; users often share context and reveal more than they would in a search query. This openness is part of what makes conversations with AI valuable, but it’s also what makes them susceptible to influence in ways that other digital products are not.
Our analysis of conversations with Claude (conducted in a way that keeps all data private and anonymous) shows that an appreciable portion involve topics that are sensitive or deeply personal—the kinds of conversations you might have with a trusted advisor. Many other uses involve complex software engineering tasks, deep work, or thinking through difficult problems. The appearance of ads in these contexts would feel incongruous—and, in many cases, inappropriate.
We still have much to learn about the impact of AI models on the people who use them. Early research suggests both benefits—like people finding support they couldn’t access elsewhere—and risks, including the potential for models to reinforce harmful beliefs in vulnerable users. Introducing advertising incentives at this stage would add another level of complexity. Our understanding of how models translate the goals we set them into specific behaviors is still developing; an ad-based system could therefore have unpredictable results.
Being genuinely helpful is one of the core principles of Claude’s Constitution, the document that describes our vision for Claude’s character and guides how we train the model. An advertising-based business model would introduce incentives that could work against this principle.
Consider a concrete example. A user mentions they’re having trouble sleeping. An assistant without advertising incentives would explore the various potential causes—stress, environment, habits, and so on—based on what might be most insightful to the user. An ad-supported assistant has an additional consideration: whether the conversation presents an opportunity to make a transaction. These objectives may often align—but not always. And, unlike a list of search results, ads that influence a model’s responses may make it difficult to tell whether a given recommendation comes with a commercial motive or not. Users shouldn’t have to second-guess whether an AI is genuinely helping them or subtly steering the conversation towards something monetizable.
Even ads that don’t directly influence an AI model’s responses and instead appear separately within the chat window would compromise what we want Claude to be: a clear space to think and work. Such ads would also introduce an incentive to optimize for engagement—for the amount of time people spend using Claude and how often they return. These metrics aren’t necessarily aligned with being genuinely helpful. The most useful AI interaction might be a short one, or one that resolves the user’s request without prompting further conversation.
We recognize that not all advertising implementations are equivalent. More transparent or opt-in approaches—where users explicitly choose to see sponsored content—might avoid some of the concerns outlined above. But the history of ad-supported products suggests that advertising incentives, once introduced, tend to expand over time as they become integrated into revenue targets and product development, blurring boundaries that were once more clear-cut. We’ve chosen not to introduce these dynamics into Claude.
Anthropic is focused on businesses, developers, and helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.
Expanding access to Claude is central to our public benefit mission, and we want to do it without selling our users’ attention or data to advertisers. To that end, we’ve brought AI tools and training to educators in over 60 countries, begun national AI education pilots with multiple governments, and made Claude available to nonprofits at a significant discount. We continue to invest in our smaller models so that our free offering remains at the frontier of intelligence, and we may consider lower-cost subscription tiers and regional pricing where there is clear demand for it. Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.
AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce, where Claude acts on a user’s behalf to handle a purchase or booking end to end. And we’ll continue to build features that enable our users to find, compare, or buy products, connect with businesses, and more—when they choose to do so.
We’re also exploring more ways to make Claude a focused space to be at your most productive. Users can already connect third-party tools they use for work—like Figma, Asana, and Canva—and interact with them directly within Claude. We expect to introduce many more useful integrations and expand this toolkit over time.
All third-party interactions will be grounded in the same overarching design principle: they should be initiated by the user (where the AI is working for them) rather than an advertiser (where the AI is working, at least in part, for someone else). Today, whether someone asks Claude to research running shoes, compare mortgage rates, or recommend a restaurant for a special occasion, Claude’s only incentive is to give a helpful answer. We’d like to preserve that.
We want our users to trust Claude to help them keep thinking—about their work, their challenges, and their ideas.
Our experience of using the internet has made it easy to assume that advertising on the products we use is inevitable. But open a notebook, pick up a well-crafted tool, or stand in front of a clean chalkboard, and there are no ads in sight.
We think Claude should work the same way.
...
Read the original on www.anthropic.com »
SaaS is the most profitable business model on Earth. It’s easy to understand why: build once, sell the same thing ad infinitum, and incur no marginal cost on additional sales.
I have been writing software for more than half my life. In the last year itself, I’ve talked to hundreds of founders and operators in SF, from preseed to Series E companies.
AI poses an existential threat to a lot of B2B SaaS executives: how do you keep asking customers for renewals when every customer feels they can get something better built with vibe-coded AI products?
And the market is pricing it in. Morgan Stanley’s SaaS basket has lagged the Nasdaq by 40 points since December. HubSpot and Klaviyo are down ~30%. Analysts are writing notes titled “No Reasons to Own” software stocks.
The new problem for B2B SaaS is that with AI, customers can get something working with vibe coding. There are dozens of vibe-coding “internal tool” services that promise to connect to every integration in the world to pump out CRUD and workflow apps.
Whatever they build simply works. It takes some wrangling to get there (one Series C VP once walked me through eleven different vibe-coding tools they’d tried, with the pros and cons of each), but the productivity gains are immediate.
And vibe coding is fun. You feel like a mad wizard using the right incantation to get this magical new silicon intelligence to do exactly what you want.
What they don’t know, though, is that a poorly architected system will eventually fail. As every senior programmer (eventually) understands, our job is complex because we have to understand the relationships in the real world, the processes involved, and the workflows needed, and represent them in a robust way to create a stable system. AI can’t do that.
Non-programmers don’t know any of this nuance. One Series E CEO told me that they’re re-evaluating the quarterly renewal of their engineering productivity software because they along with an engineer reimplemented something using Github and Notion APIs. They were paying $30,000 to a popular tool and they were not going to renew anymore.
If customers feel like they aren’t being served exactly the way they want, they are more likely to churn. The reason behind all this is that customers are demanding more from their B2B vendors, because they know what’s possible.
Previously, you would change your company to fit what your ERP offered and pay them hundreds of thousands of dollars. Now, everyone can see that agentic coding makes an unprecedented level of flexibility possible. Customers are demanding that flexibility, and if they don’t get it, they’ll leave.
Just this week I was on a phone call with a Series B AE talking about how they’re potentially losing an $X00,000 account just because the customer can’t use a specific failure-reporting workflow in the SaaS. They’re now working with me to build what the customer needs and retain them.
If the entire company’s workflows operate on your platform, i.e. you’re a line-of-business SaaS, you are already integrated into their existing team. They know your UI and rely on you day to day.
For example, to create a data visualization I won’t seek out any SaaS. I’ll just code one myself using one of the many popular vibe-coding tools (my team actually did that, and it’s vastly more flexible than what we’d get off-the-shelf).
Being a “System of Record” means you’re embedded so deeply that there’s no choice but to win. My prediction is that we’ll see more SaaS companies go from the application layer to offering their robust SoR as their primary selling point.
This is where vibe-coded apps silently fail — and where established SaaS platforms earn their keep.
When a non-technical team vibe-codes an internal tool, they’re not thinking about environment keys, XSS vulnerabilities or API keys hardcoded in client-side JavaScript. They’re not implementing rate limiting, audit logs, or proper session management. They’re definitely not thinking about SOC 2 compliance, GDPR data residency requirements, or HIPAA audit trails.
I’ve seen it firsthand: a finance team built a “quick” expense approval tool that stored unencrypted reports in a public S3 bucket. A sales ops team created a commission calculator that anyone with the URL could access — no auth required. These aren’t edge cases. They’re the norm when software is built without security as a foundational concern.
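For illustration, here is roughly what two of the missing guardrails look like in boto3; the bucket and object key are placeholders.

```python
# A sketch of the guardrails a vibe-coded tool typically skips, using boto3.
# The bucket name and object key here are placeholders.
import boto3

s3 = boto3.client("s3")

# 1. Make sure the bucket can never be made public.
s3.put_public_access_block(
    Bucket="expense-reports",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# 2. Share a single report via a link that expires, not a public URL.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "expense-reports", "Key": "2026/q1/report.pdf"},
    ExpiresIn=900,  # 15 minutes
)
print(url)
```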
Enterprise SaaS platforms have spent years (and millions) solving these problems: role-based access control, encryption at rest and in transit, penetration testing, compliance certifications, incident response procedures. Your customers may not consciously value this — until something breaks.
The challenge is that security is invisible when it works. You need to communicate this value proactively: remind customers that the “simple” tool they could vibe-code themselves would require them to also handle auth, permissions, backups, uptime, and compliance.
The days of asking customers to change how they work are gone. Now, SaaS vendors that differentiate by being ultra-customizable win the hearts of customers.
How? It’s the most powerful secret to increasing usage. We’ve all heard the classic SaaS problem: the software is sold at the beginning of the year, but no one actually ends up using it because of how inflexible it is and how much training it needs.
And if a SaaS is underutilized, it gets noticed. And that leads to churn.
This was the case with one of my customers, who have a complex SaaS for maintenance operations. It turned out the product wasn’t being used at the technician level because technicians found the UI too complex.
How I’m solving this is essentially a whitelabelled vibe-coding platform with built-in distribution and secure deployments. When they heard of my solution they were immediately on board. Their customer success team quickly coded a very specific mobile webapp for the technicians and deployed it in a few days.
Now, the IC technician is exposed to just the parts of the SaaS they care about, i.e. creating maintenance work orders. The executives get what they want too: vibe-coding custom reports exactly the way they want instead of wading through complicated BI config. They are able to build exactly what they want and feel like digital gods while doing it.
Usage for that account was under 35%, and is now over 70%. They are now working closely with me to vibe code new “micro-apps” that work according to all of their customer workflows. And the best part? This is all on top of their existing SaaS, which works as a system of record, handles security and authentication, and supports lock-in by being both a data and a UI moat.
This is exactly what I’m building: a way for SaaS companies to let their end-users vibe code on top of their platform (More on that below). My customers tell me it’s the best thing they’ve done for retention, engagement, and expansion in 2026 — because when your users are building on your platform, they’re not evaluating your competitors.
Here’s what I’ve realized after hundreds of conversations with founders and operators: AI isn’t killing B2B SaaS. It’s killing B2B SaaS that refuses to evolve.
The SaaS model was built on a simple premise: we build it once, you pay forever. That worked when building software was hard. But now your customers have tasted what’s possible. They’ve seen their finance team whip up a custom dashboard in an afternoon. They’ve watched a non-technical PM build an internal tool that actually fits their workflow.
You can’t unsee that. You can’t go back to paying $X0,000/year for software that almost does what you need.
The survivors won’t be the SaaS companies with the best features. They’ll be the ones who become platforms, who let customers build on top of them instead of around them. When I showed a well-known VC what I was building to help SaaS companies do exactly this, he said: “This is the future of marketplaces and software companies.”
Maybe. Or maybe this is just another cycle and traditional SaaS will adapt like it always has. But I know this: the companies I’m talking to aren’t waiting around to find out. They’re already rebuilding their relationship with customers from “use our product” to “build on our platform.”
The question isn’t whether AI will eat your SaaS.
It’s whether you’ll be the one holding the fork.
I’m solving exactly this problem with a whitelabelled AI platform for B2B SaaS companies, so your users can vibe code customized workflows on top of their existing system of record.
My customers tell me this is the best way to support retention, engagement, and expansion in 2026. If this sounds interesting to you or someone you know, I can reach out with a custom demo or you can learn more about Giga Catalyst.
...
Something strange is happening with Mac Minis. They’re selling out everywhere, and it’s not because people suddenly need more coffee table computers.
If you browse Reddit or HN, you’ll see the same pattern: people are buying Mac Minis specifically to run AI agents with computer use. They’re setting up headless machines whose sole job is to automate their workflows. OpenClaw—the open-source framework that lets you run Claude, GPT-5, or whatever model you want to actually control your computer—has become the killer app for Mac hardware. Not Final Cut. Not Logic. An AI agent that clicks buttons.
This is exactly what Apple Intelligence should have been.
Apple had everything: the hardware, the ecosystem, the reputation for “it just works.” They could have shipped an agentic AI that actually automated your computer instead of summarizing your notifications. Imagine if Siri could genuinely file your taxes, respond to emails, or manage your calendar by actually using your apps, not through some brittle API layer that breaks every update.
They could have charged $500 more per device and people would have paid it. The margins would have been obscene. And they would have won the AI race not by building the best model, but by being the only company that could ship an AI you’d actually trust with root access to your computer. That trust—built over decades—was their moat.
So why didn’t they?
Maybe they just didn’t see it. That sounds mundane, but it’s probably the most common reason companies miss opportunities. When you’re Apple, you’re thinking about chip design, manufacturing scale, and retail strategy. An open-source project letting AI agents control computers might not ping your radar until it’s already happening.
Or maybe they saw it and decided the risk wasn’t worth it. If you’re Apple, you don’t want your AI agent automatically buying things, posting on social media, or making irreversible decisions. The liability exposure would be enormous. Better to ship something safe and limited than something powerful and unpredictable.
But there’s another dynamic at play. Look at who’s about to get angry about OpenClaw-style automation: LinkedIn, Facebook, anyone with a walled garden and a careful API strategy. These services depend on friction. They want you to use their app, see their ads, stay in their ecosystem. An AI that can automate away that friction is an existential threat.
If Apple had built this, they’d be fighting Instagram over ToS violations by Tuesday. They’d be testifying in front of Congress about AI agents committing fraud. Every tech platform would be updating their terms to explicitly ban Apple Intelligence.
By letting some third party do it, Apple gets plausible deniability. They’re just selling hardware. Not their fault what people run on it. It’s the same strategy that made them billions in the App Store while maintaining they’re “not responsible for what developers do.”
But I think this is short-term thinking.
Here’s what people miss about moats: they compound. The reason Microsoft dominated PCs wasn’t just that they had the best OS. It’s that everyone built for Windows, which made Windows more valuable, which made more people build for Windows. Network effects.
If Apple owned the agent layer, they could have created the most defensible moat in tech. Because an AI agent gets better the more it knows about you. And Apple already has all your data, all your apps, all your devices. They could have built an agent that works across your iPhone, Mac, iPad, and Watch seamlessly—something no one else can do.
More importantly, they could have owned the API. Want your service to work with Apple Agent? You play by Apple’s rules. Suddenly Apple isn’t fighting with platforms—they’re the platform that platforms need to integrate with. It’s the App Store playbook all over again, but for the AI era.
The Mac Mini rush is a preview of this future. People want agents. They want automation. They want to pay for it. They’re literally buying extra computers just to run someone else’s AI on Apple’s hardware.
Apple is getting the hardware revenue but missing the platform revenue. That might look smart this quarter. But platform revenue is what built Apple into a $3 trillion company. And platforms are what create trillion-dollar moats.
I suspect ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it. Not because they couldn’t build it—they obviously could—but because they were optimizing for this year’s legal risk instead of next decade’s platform power.
The people buying Mac Minis to run AI agents aren’t just early adopters. They’re showing Apple exactly what product they should have built. Whether Apple is paying attention is another question entirely.
...
Read the original on www.jakequist.com »
If you’re on one of the cheaper Anthropic plans like me, it’s pretty common to hit a daily or weekly quota limit when you’re deep into coding an idea with Claude. If you want to keep going, you can connect to a local open-source model instead of Anthropic. To monitor your current quota, type: /usage
The best open-source model changes pretty frequently, but at the time of writing this post, I recommend GLM-4.7-Flash from Z.AI or Qwen3-Coder-Next. If you want or need to save some disk space and GPU memory, try a smaller quantized version, which will load and run quicker at some cost in quality. I’ll save how to find the best open-source model for your task and machine constraints for another detailed post.
If you haven’t used LM Studio before, it’s an accessible way to find and run open-source LLMs and vision models locally on your machine. In version 0.4.1, they introduced support for connecting to Claude Code (CC). See here: https://lmstudio.ai/blog/claudecode or follow the instructions below:
Find the model search button to install a model. LM Studio recommends running the model with a context window of more than 25K tokens.
Open a new terminal session to:
a. start the server: lms server start --port 1234
b. configure environment variables to point CC at LM Studio:
export ANTHROPIC_BASE_URL=http://localhost:1234
export ANTHROPIC_AUTH_TOKEN=lmstudio
c. start CC pointing at your server: claude --model openai/gpt-oss-20b
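Before step c, a quick way to sanity-check that the server is up is to list the models it serves over LM Studio’s OpenAI-compatible API; adjust the port if you started the server with a different one.

```python
# List the models the local server exposes via its OpenAI-compatible API.
import requests

resp = requests.get("http://localhost:1234/v1/models", timeout=5)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])   # an id here is what you pass to `claude --model`
```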
Reduce your expectations about speed and performance!
To confirm which model you are using or when you want to switch back, type /model
LM Studio is built on top of the open source project llama.cpp.
If you prefer not to use LM Studio, you can install and run llama.cpp directly and connect Claude Code to it, but honestly, unless you are fine-tuning a model or have really specific needs, LM Studio is probably going to be a quicker setup.
For the moment, this is a backup solution. Unless you have a monster of a machine, you’re going to notice the time it takes to do things and a drop in code quality, but it works(!), and it’s easy enough to switch between your local OSS model and Claude when your quota limit resets, so it’s a good way to keep coding when you’re stuck or you just want to save some quota. If you try it, let me know how you go and which model works for you.
...
Read the original on boxc.net »
Have you wondered why the stock market has been so choppy since October, and why crypto and gold keep flash crashing? The western media would have you believe this is due to the AI bubble, war in Greenland, and Trump’s tweets. We have a better story to tell.
There’s been a fair bit of quiet chaos in financial markets recently. Cryptocurrencies have lost 40% of their value. We saw silver drop 40%, which hasn’t happened since 1980. Stocks like Microsoft are getting picked off one-by-one with 15% drops when positive earnings reports come out. Meanwhile the broader market chops sideways, so people think things are fine. Trump and Europe were on the brink of war for control of a desolate arctic territory. Truth Social has overtaken the FOMC as the most important source of financial news. These things may all appear to the untrained eye as a series of idiosyncratic, disconnected shocks. The prevailing media narrative is that the market is reacting negatively to AI CapEx spending and a hawkish new Fed chair. But our systematic analysis of cross-asset flows, derivatives positioning, central bank policy minutes, and institutional balance sheets suggests a singular, unified causality that binds these disparate anomalies: the covert unwinding of the Japanese Yen carry trade.
For nearly thirty years, the Bank of Japan’s (BOJ) Zero Interest Rate Policy (ZIRP) and subsequent Negative Interest Rate Policy (NIRP) effectively transformed the Yen into the world’s funding currency. We would call it the greatest free money printer ever made. By anchoring borrowing costs at or near zero, the BOJ enabled Wall Street to borrow Yen cheaply and invest it with leverage into higher-yielding instruments globally, such as U.S. treasuries, equities, and cryptocurrencies. For example, you borrow Yen from Japan at 0% interest, exchange it for USD, and then buy treasury bonds that pay 4%. It’s that simple. This funded government benefits and provided continuous, reliable liquidity for financial markets that made stocks keep going up while suppressing volatility.
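To make the mechanics concrete, here is a toy model of that trade; the loan size and exchange rates are illustrative, not real positioning.

```python
# Toy model of the carry trade (illustrative numbers): borrow yen at ~0%,
# convert to USD, earn the Treasury yield, then repay the loan in yen.
def carry_pnl_yen(borrowed_yen, usdjpy_entry, usdjpy_exit,
                  us_yield=0.04, jpy_rate=0.0):
    usd = borrowed_yen / usdjpy_entry               # convert the loan to USD
    usd_after_year = usd * (1 + us_yield)           # collect the 4% yield
    yen_owed = borrowed_yen * (1 + jpy_rate)        # repay the ~0% yen loan
    return usd_after_year * usdjpy_exit - yen_owed  # P&L measured in yen

print(carry_pnl_yen(1e9, 155, 155.0))   # yen flat: ~ +40M yen of "free" carry
print(carry_pnl_yen(1e9, 155, 139.5))   # yen appreciates ~10%: ~ -64M yen
```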
Trillions of dollars of free loans from the Bank of Japan were used by a generation of investors to buy a double-digit percentage of the U.S. economy. Now those loans are being recalled. Wall Street traders who levered up on the free Japanese money now have to sell trillions of assets and convert the proceeds back to Yen in order to not be liquidated. These aren’t happy times for them. They get liquidated when Japan raises interest rates; they get liquidated when the Federal Reserve lowers interest rates; they get liquidated when the Japanese Yen increases in value; they get liquidated when tech stocks aren’t going up enough. And all four of these things have been happening at once.
Wall Street may be greedy, but they’re very intelligent too. They made many smart choices about where to put the “free” money. Now let’s say you’re someone who’s also smart, but was wise enough to not use Sauron’s ring. Chances are you invested in the same things as Wall Street. So by now you’ve probably seen your whole portfolio move against you; you’re wondering why your hedges don’t work; and you feel like you’re being punished for making all the right choices. It’s because other smart people, who got greedy, are being forced to close their positions, and you’re the whipping boy for their avarice.
The Japanese Yen is sort of like GameStop ($GME). It’s the most shorted currency on Earth. When you borrow yen to buy American assets, you’re effectively shorting the yen. Currency can be rehypothecated so that yen-denominated debt ends up exceeding the actual yen supply, the same way GME’s short interest exceeded 100% of its float. When shorts start covering, it compounds the tragedy, because they all have to buy yen, which makes its value increase, forcing more shorts to cover. And Japan is a small island.
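A toy simulation of that feedback loop, with entirely made-up constants, shows the self-reinforcing shape of a squeeze:

```python
# Toy squeeze loop (arbitrary, illustrative constants): each round of forced
# covering pushes the yen up, which puts more shorts underwater and forces
# more covering in the next round.
price = 1.0      # normalized yen value
shorts = 100.0   # open short interest, arbitrary units
for rnd in range(5):
    covered = shorts * 0.2          # assume 20% of shorts are forced to cover
    shorts -= covered
    price *= 1 + covered / 500      # their yen buying lifts the price
    print(f"round {rnd}: yen {price:.3f}, shorts remaining {shorts:.1f}")
```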
The December 2025 rate hike to 0.75%, followed by explicitly hawkish signalling from Prime Minister Sanae Takaichi’s administration, has fundamentally altered the risk-reward calculus of these leveraged positions. The market disruptions observed in January 2026 bear the distinct mathematical signature of a forced liquidation event rather than a fundamental repricing of growth prospects. When correlations between historically uncorrelated assets (e.g. Gold, Bitcoin, Microsoft, and Silver) approach 1.0 during a sell-off, it serves as a distinct indicator that traders are not selling what they want to sell, but rather what they must sell in order to meet margin calls in a funding currency that is rapidly appreciating against their liabilities.
We shall investigate the mechanics of this unwind in exhaustive detail. We analyze the “Greenland Distraction” not as a root cause but as a volatility trigger that shattered the complacent calm of the “Davos Consensus.” We examine the anomalous liquidation in precious metals following the nomination of Kevin Warsh to the Federal Reserve Chairmanship, and we dissect the flow of funds from major Japanese institutional whales like Norinchukin Bank, whose retreat from foreign bond markets has left a liquidity vacuum in the U.S. Treasury complex. The evidence points to a systemic repricing of the global cost of capital, originating in Tokyo and transmitting violently through the plumbing of Wall Street, leaving no asset class untouched.
To fully comprehend the market chaos of January 2026, one must look beyond the immediate headlines of the new year and scrutinize the subtle yet seismic shifts that occurred in Tokyo during the closing months of 2025. The conventional market narrative has long regarded the Bank of Japan as a passive, almost paralyzed actor, perpetually trapped in a deflationary mire and unable to normalize policy. This view has always been demonstrably false. The truth is that Wall Street leaders have been planning for the next quarter, while the Japanese have been preparing for the next century. The data confirms a deliberate, aggressive shift toward normalization that caught global carry traders off guard.
In a move that many Western analysts critically underestimated, the Policy Board of the Bank of Japan voted unanimously to raise the uncollateralized overnight call rate to 0.75% during its policy session on December 18-19, 2025. While a 25 basis point hike might appear negligible in the context of Federal Reserve or ECB tightening cycles, in the context of the Japanese financial system, which has operated near the zero-bound for decades, it represents a massive tightening of financial conditions.
This move was not merely a technical adjustment; it was a fundamental regime change. Coming from a baseline of -0.1% in early 2024 and 0.50% in late 2025, the move to 0.75% signaled that the era of “free money” had definitively ended. The rationale provided by the BOJ was grounded in shifting inflationary dynamics. Core CPI (excluding fresh food), the central bank’s preferred metric, was tracking near 3% in late 2025, persistently exceeding the 2% price stability target. Although inflation eased slightly to 2.4% in December, the BOJ minutes reveal a board convinced that “wage gains may be durable,” thus justifying higher rates to prevent a wage-price spiral.
Crucially, the minutes from the December meeting, which were released in late January 2026, contain explicit language suggesting that the tightening cycle is far from complete. The board noted that “real interest rates are expected to remain negative,” implying that a policy rate of 0.75% is still considered accommodative relative to inflation. To a bond trader, this is hawkish code. It suggests that the “neutral rate” is significantly higher, potentially between 1.5% and 2.0%. If the market prices in a terminal rate of 2.0%, the cost of funding for carry trades effectively triples from previous levels, turning profitable arbitrage positions into deep losses.
The political dimension in Japan has exacerbated the monetary tightness, creating a “double tightening” effect that algorithms have struggled to price. Prime Minister Sanae Takaichi, preparing for a snap election on February 8, 2026, has adopted a complex economic stance that blends fiscal expansion with monetary discipline, a volatile mix for currency markets.
Takaichi advocates for “strategic fiscal spending” and tax cuts to stimulate the domestic economy. In standard macroeconomic theory, an expansionary fiscal policy (increased government spending) combined with a tightening monetary policy (higher rates to combat the resulting inflation) is the perfect recipe for currency appreciation. While Takaichi has publicly softened her rhetoric to avoid accusations of currency manipulation, stating she “did not have a preference for the yen’s direction”, her policies speak louder than her soundbites.
The market fears that Takaichi’s proposed fiscal largesse will force the BOJ to hike rates faster than currently projected to counteract the inflationary effects of government spending. This creates a two-front war on the Yen carry trade:
Funding Cost Risk: Faster BOJ rate hikes raise the interest owed on the borrowed Yen itself, shrinking or erasing the carry that made the trade profitable.
Exchange Rate Risk: If the Yen appreciates due to the fiscal-monetary policy mix, the principal value of the USD-denominated assets held by Japanese investors falls in Yen terms, triggering margin calls.
The tension between the Prime Minister’s office and the Ministry of Finance (MOF) adds another layer of uncertainty. Finance Minister Satsuki Katayama has been far less tolerant of currency volatility, repeatedly intervening or threatening intervention when USD/JPY approaches the 155-160 danger zone. This political friction creates a “floor” for the Yen, making shorting the currency a perilous endeavor for global macro funds.
Perhaps the most critical, yet underreported, development is the behavior of Japan’s gargantuan institutional investors, specifically Norinchukin Bank (often referred to as the “CLO Whale”) and Nippon Life Insurance. These entities have historically been the largest buyers of U.S. debt, recycling Japan’s trade surplus into U.S. Treasuries and corporate bonds.
The data indicates a massive reversal in these flows. Following significant losses in 2024 and 2025 due to unhedged exposure to U.S. and European sovereign bonds, Norinchukin has been actively liquidating foreign assets. By the end of December 2025, the bank had unloaded nearly ¥12.8 trillion (approximately $88 billion) in foreign government bonds. The bank’s CEO, Taro Kitabayashi, confirmed the completion of this sell-off, stating the bank would “take its time” before committing capital to fresh investments.
The significance of this cannot be overstated. A major, price-insensitive buyer of U.S. debt has left the building. When the U.S. Treasury issues debt to fund its deficit, Norinchukin is no longer the guaranteed bid. This removal of liquidity support weakens the floor for U.S. Treasuries, contributing to the yield spikes seen in January. Similarly, Nippon Life has signaled a rotation back into domestic Japanese Government Bonds (JGBs), acknowledging that “unrealized losses” on foreign bonds had swelled to ¥4.7 trillion. The logic is simple: why take currency risk for a 4.5% U.S. yield when domestic JGB yields are rising and offer a risk-free return in your home currency?
By December 31, 2025, the stage was set. The “free money” era was over. The largest holders of capital in Tokyo were repatriating funds or moving into cash. Global markets, however, were still positioned for “business as usual”: long Nvidia, long Bitcoin, short Yen. The dissonance between Japanese reality and Western positioning created the perfect conditions for a crash.
To validate the thesis that the Yen unwind is the primary driver of volatility, we must examine the sequence of events. The crash did not happen in a vacuum; it followed a precise timeline where geopolitical shocks acted as triggers for a structural fragility that had been building since the BOJ’s December pivot.
The pressure began to build in Q4 2025. As the BOJ signaled its intention to hike rates, Japanese traders, often the “canary in the coal mine” for global liquidity, began to reduce risk. This cycle started with Bitcoin. Bitcoin is a pure liquidity asset; it has no yield and is often funded via margin. As the cost of Yen margin rose, Japanese selling pressure on Bitcoin intensified from October through December. This was the first tremor.
Was the “Greenland War” theater? While the military dimensions may have been performative, the economic consequences were tangible and acted as the catalyst that exposed the fragility of the Yen carry trade.
On January 17, 2026, President Trump escalated his demand to purchase Greenland by threatening a 10% tariff on eight European nations (including the UK, Germany, and France), rising to 25% by June, if the territory was not ceded. This introduced a “tail risk” that markets had not priced: the fracture of the Atlantic economic alliance.
Following the Martin Luther King Jr. holiday, U.S. markets opened on January 20 to a bloodbath. The S&P 500 fell 2.1%, the Nasdaq composite dropped 2.4%, and yields on U.S. Treasuries spiked. The narrative was “Greenland,” but the market mechanics told a different story. The threat of tariffs on close allies disrupts the “Atlantic Trade” narrative. For Japanese investors holding U.S. assets, this introduced a new risk premium. It wasn’t just about rates anymore; it was about the stability of the U.S.-led global order. This geopolitical volatility forced risk parity funds and algorithmic traders to reduce gross exposure. When a global portfolio deleverages, it buys back its funding currency. In this case, it bought Yen.
While Trump walked back the military threat on January 21 at Davos, the economic threat of tariffs remained a live wire. The volatility persisted, suggesting that the “Greenland” narrative was merely the match that lit the fuse of a much larger powder keg.
The final and most violent phase of the crash occurred at the end of the month, triggered by the nomination of Kevin Warsh as Federal Reserve Chair. Warsh is widely perceived as a hawk, favoring sound money and skepticism toward quantitative easing. His nomination signaled the potential end of the “Fed Put”, the assumption that the central bank would always intervene to support asset prices.
This announcement triggered a massive repricing of the “Debasement Trade.” Assets that thrive on currency debasement, Gold, Silver, and Bitcoin, collapsed. Gold fell ~11%, and Silver crashed ~36% in a single session. This synchronization of losses across uncorrelated assets (Tech and Gold falling together) is the definitive signature of a liquidity crisis driven by margin calls.
The unwinding of a carry trade is not a monolithic event; it is a cascade that ripples outward from the most liquid and speculative assets to the core holdings of institutional portfolios. The sequence of asset price collapses observed from October 2025 to January 2026 follows this classic liquidation hierarchy perfectly.
As noted, the unwind began in the crypto markets. Japan is home to a massive retail crypto trading base, and the Yen is a major pair for Bitcoin trading. Snippets indicate that Japanese traders began selling off Bitcoin in October 2025.
This timing is crucial. It correlates with the period when the BOJ began signaling the December rate hike. Retail traders, facing higher mortgage rates and loan costs in Japan, likely liquidated their most volatile, liquid asset first to raise cash. The selling was exacerbated by the looming tax reform in Japan. While the proposal to move to a flat 20% tax rate is bullish in the long term, the immediate pressure of rising funding costs forced traders to sell before the tax cut could be realized. By January 31, massive outflows from Bitcoin ETFs ($528 million) coincided with the broader market crash, confirming that crypto was being used as a source of liquidity to cover losses elsewhere.
Consider the “painful ~3% dump” in the Nasdaq and Microsoft’s staggering 15% drop. On January 29, 2026, Microsoft reported earnings. Despite beating revenue estimates ($81.27 billion vs. $80.28 billion), the stock plummeted ~11-15% intraday.
The street blamed concerns over “AI CapEx”, the idea that Microsoft was spending billions on data centers with slow return on investment. However, a 15% drop in a $3 trillion company on a “good” earnings beat is rarely fundamental; it is mechanical. Microsoft is a quintessential “momentum” stock, heavily held by foreign institutional investors, including Japanese pension funds. When the Yen strengthens, the value of these USD-denominated assets falls in JPY terms.
If a Japanese insurer holds Microsoft unhedged, a falling USD/JPY exchange rate hurts their balance sheet. If they hold it hedged (rolling short USD positions), the rising U.S. rates (driven by the Warsh nomination) make the hedge prohibitively expensive. The January 29 drop was likely exacerbated by a “stop-loss” cascade from Tokyo desks. As the price broke key technical levels, algorithms programmed to protect Yen-denominated returns indiscriminately sold the most liquid blocks. Microsoft, being one of the most liquid stocks in the world, became the ATM for the rest of the portfolio.
The most compelling evidence of a forced liquidation event is the behavior of Gold and Silver on January 31, 2026. Gold fell ~11-12% and Silver crashed ~31-36% in a single session. Historically, Gold acts as a safe haven during equity market turmoil. If the Nasdaq is crashing due to “Greenland” fears, Gold should rally. Instead, it crashed.
This anomaly can be explained by two factors:
The Warsh Effect: As discussed, Warsh’s nomination strengthened the USD and undermined the thesis for holding anti-fiat assets.
Margin Call Dynamics: Snippets reveal that CME Group and the Shanghai Gold Exchange raised margin requirements on gold and silver futures days before the crash. When Japanese traders faced losses on their Microsoft longs and their Yen shorts, they needed cash immediately. They couldn’t sell illiquid bonds quickly enough, so they sold their “winners.” Gold had rallied to ~$5,400/oz prior to the crash. Traders liquidated their profitable Gold positions to pay for the margin calls on their losing Tech and Yen positions.
Figure 2: Cross-asset correlations, week ending Jan 31, 2026 (Jan 15–Jan 31). Note the spike in correlation between Gold, Bitcoin, and Nasdaq 100 on Jan 31, indicating a systemic “sell-everything” margin call. (Data sources: Alex Lexington, Finance Magnates, Morningstar, Investing.com, Seeking Alpha.)
This correlation breakdown is visualized in Figure 2, where the correlation between Gold and the Nasdaq 100 spikes to nearly 1.0 during the crash week, a statistical anomaly that only occurs during severe liquidity events.
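A correlation spike like this is straightforward to measure yourself; here is a sketch, where `prices` is a placeholder DataFrame of daily closes with made-up column names.

```python
# How a spike like Figure 2's is measured: rolling correlation of daily
# returns. `prices` is a placeholder DataFrame of daily closing prices
# with columns like "GOLD", "BTC", and "NDX".
import pandas as pd

def rolling_corr(prices: pd.DataFrame, a: str, b: str,
                 window: int = 10) -> pd.Series:
    returns = prices.pct_change()                      # daily returns
    return returns[a].rolling(window).corr(returns[b])

# Example: rolling_corr(prices, "GOLD", "NDX") pinned near +1.0 while both
# assets fall is the "sell everything" signature of a margin-driven unwind.
```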
The “Yen Whale” hypothesis is strongly supported by the data on futures volumes and repo market stress. The “central mystery” is not just in the price action, but in the unseen flows of the derivatives market.
About a week ago, some whale kicked off an astronomically large market order for a /6J long when it hit recent lows. /6J (CME Yen Futures) hit a low of ~0.00647 (approximately 154.50 USD/JPY) in late January. This level has historically been a “line in the sand” for the Japanese Ministry of Finance (MOF).
Figure 4: The whale event that kicked off the Japanese Yen unwind. Note the massive spike as /6J hit recent lows, rallying investors worldwide to go long on yen futures.
CME reported record volumes in FX and Interest Rate products for January 2026. The aggressive buying off the lows suggests a massive repatriation flow. Who is the Whale? Two theories emerge:
The MOF Thesis: The Ministry of Finance has a history of stealth intervention. Buying /6J (Long Yen) is functionally equivalent to selling USD reserves. Buying futures allows them to support the currency without immediately depleting cash reserves, squeezing speculators who are short.
The Carry Unwind: A massive hedge fund or bank (like Norinchukin) realizing that the “game is up” and closing out short-Yen positions. The size of the order suggests an entity that needed to move billions, not millions.
The subsequent price action, a sharp rally followed by “hammering back down”, represents the battleground. U.S. macro funds are still trying to short the Yen (betting on U.S. economic exceptionalism and Warsh’s policies), while Japanese domestic accounts are buying it. The volatility is the result of these tectonic plates grinding against each other.
The plumbing of the U.S. financial system showed signs of stress that coincided with the Japanese retreat. The Overnight Reverse Repo facility (ON RRP) saw a year-end spike to $106 billion but has since drained.
Japanese banks are typically huge participants in the U.S. repo market to fund their dollar assets. As Norinchukin and others retreat (repatriating funds to Japan), liquidity in the U.S. repo market becomes thinner. The “air pocket” in Microsoft and Gold prices was likely exacerbated by a lack of market-maker depth in the repo-funded derivatives market. When market makers cannot access cheap repo funding, they widen spreads and reduce liquidity provision, leading to “gaps” in price action during sell-offs.
There have been significant moves in other currency futures as well: /6A increased 87 basis points, /6L rose 19 basis points, and /6S rose 18 basis points.
/6A (Australian Dollar): The 87 basis point rise in the Aussie Dollar is notable. AUD is often a proxy for Chinese growth and global risk sentiment. A rise here, amidst a tech crash, suggests a rotation out of U.S. assets and into commodities or Asia-Pacific currencies, further supporting the “Sell America” thesis triggered by the Greenland tariff threats.
/6L (Brazilian Real) and /6S (Swiss Franc): The rise in the Swiss Franc (a classic safe haven) aligns with the risk-off sentiment. The move in the Brazilian Real suggests that emerging markets are also seeing volatile flows as the dollar stabilizes.
Why was the VIX at 16 despite the chaos? The VIX measures implied volatility of S&P 500 options. Its relatively low level (16) compared to the violence in individual names (MSFT -15%, Gold -11%) indicates that the crash is a de-leveraging event, not a panic event.
In a panic, investors buy Puts on the index to protect themselves, spiking the VIX. In a de-leveraging event, investors simply sell the underlying assets (stocks, gold, crypto) to raise cash. They are not hedging; they are exiting. This explains why the VIX remained subdued while prices collapsed, the selling was orderly, algorithmic, and relentless, rather than emotional and panicked.
Skepticism about the “Greenland War” is well-founded. While the diplomatic row was real, its utility as a financial narrative was far greater than its geopolitical reality.
President Trump’s threat of military force was retracted on January 21 at Davos. This “de-escalation” should theoretically have calmed markets. Instead, the volatility worsened into month-end. This confirms that the real problem wasn’t Greenland; it was the re-pricing of the Yen.
The financial media loves a simple cause-and-effect narrative. “Stocks down because of War” is easy to digest. “Stocks down because the cross-currency basis swap spread widened due to BOJ minutes” is not. The “Greenland” narrative provided the perfect cover for sophisticated actors to liquidate positions in Gold and Tech under the guise of “war jitters.” This allowed them to exit without sparking a broader panic about liquidity in the banking system. The focus on the Arctic masked the structural rot in the leverage complex.
The evidence suggests a covert, structural unwinding of the Yen carry trade is the primary driver of the January 2026 market chaos.
The interconnectedness of these events is undeniable. The BOJ’s rate hike in December 2025 and the subsequent hawkish signaling from the Takaichi administration fundamentally altered the cost of capital for the world’s largest carry trade. The “Greenland Crisis” acted as the initial volatility trigger, forcing a reduction in gross exposure. The nomination of Kevin Warsh acted as the final catalyst, shattering the “Debasement Trade” and forcing a liquidation of precious metals and crypto to cover margin calls on Yen-funded positions.
Here are some key takeaways:
The “Free Money” Era is Over: BOJ policies have fundamentally altered the global cost of capital. The flow of liquidity from Tokyo to New York has reversed.
Geopolitics as Catalyst: “Greenland” may have been the spark, but the Yen leverage was the powder keg. The tariff threats disrupted the “Atlantic Trade” narrative, forcing a repatriation of capital.
Liquidity Event: The synchronized crash of Gold, Crypto, and Tech confirms a systemic de-leveraging. The “Whale” orders in Yen futures and the breakdown of correlations are the smoking guns of a margin-driven event.
With the Japanese election on February 8 and U.S. tariffs looming, the “hammering” of the Yen is likely temporary. The structural trend is now toward repatriation. This implies lower U.S. asset prices, higher U.S. yields, and a stronger Yen over the medium term. The “mystery” of the low VIX is explained by the mechanical nature of the unwind, a controlled demolition of leverage rather than a chaotic panic.
This won’t just be the big one. This could be the last one. If you’ve been preparing your whole life, knowing that something’s coming, then this could be the thing you’ve been preparing for. One final opportunity to get the guys who did this.
Longing the Yen is commonly referred to as “The Widowmaker Trade” on Wall Street, because you have trillions of dollars of monopoly money working against you. The carry traders have compromised every level of our government. Their greatest vulnerability is the Yen rising in value. They will do anything to defend their positions, even if that means bringing America’s economy down with them. Since recent events have made it obvious they’re going to lose, we might as well fight them. Most of us probably won’t make it out of this fight. But if we at least try, then there’s a chance we might prosper when it’s over.
The IV on OTM CME /6J futures calls is 11%, which is astonishingly low. The same is true for calls on the FXY ETF. Call options have defined risk. The more Yen we control, the more its value goes up, and the more crooks on Wall Street get liquidated. The worst that can happen is you lose your monopoly money, but that’s been happening anyway. Since carry traders own 10% of all U.S. Treasuries, when they get liquidated they’ll have to sell a lot of Treasury bonds, which means that CME /UB futures and the TLT ETF will fall.
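To make “defined risk” concrete, here is a minimal sketch of a long call’s payoff at expiry. The strike and premium below are hypothetical illustrations, not market quotes; the 12.5-million-yen contract size is CME’s standard for Yen futures.

def long_call_pnl(settle: float, strike: float, premium: float,
                  multiplier: float = 12_500_000) -> float:
    # Long call P&L at expiry: intrinsic value minus premium paid,
    # scaled by contract size (CME Yen futures cover 12.5M yen)
    intrinsic = max(settle - strike, 0.0)
    return (intrinsic - premium) * multiplier

# Hypothetical numbers: strike 0.00700 USD/JPY, premium 0.00005
for settle in (0.00650, 0.00700, 0.00750, 0.00800):
    print(settle, long_call_pnl(settle, strike=0.00700, premium=0.00005))
# At or below the strike the loss is capped at $625 (the premium paid);
# above it, gains grow without bound as the Yen strengthens.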
This blog is brought to you by various radicals, malcontents, and people who think the system is rigged. We’re not affiliated with any organization. Nothing here constitutes financial advice. Occupy Wall Street is not your financial advisor or your lawyer. We’re retail investors like you. Do your own research. Past performance does not guarantee future results. We are the 99 percent. The only solution is world revolution. Wall Street’s time has finally come.
...
Read the original on occupywallst.com »
Immigration and Customs Enforcement (ICE) is surveying the commercial advertising technology market for tools capable of supplying location data and large-scale analytics to federal investigators, according to a recent Request for Information (RFI).
Framed as market research rather than a procurement, the RFI seeks information from companies offering “Ad Tech compliant and location data services” that could support criminal, civil, and administrative investigations across ICE’s mission set.
The RFI, issued by ICE’s Homeland Security Investigations (HSI), emphasizes that the government is not soliciting proposals or committing to a future contract, but it does signal active interest in selecting vendors for live demonstrations of operational platforms and data services, a step that typically precedes pilot deployments or integration into existing investigative environments.
ICE says it is attempting to better understand how commercial big data providers and advertising technology firms might directly support investigative activities, while remaining sensitive to “regulatory constraints and privacy expectations.”
The agency noted that its components are handling increasing volumes of criminal, civil, and administrative information from both internal and external sources and are assessing whether commercial off-the-shelf platforms comparable to large investigative data and legal analytics providers can help manage and exploit that data at scale.
At the center of the inquiry is a category of information traditionally associated with digital advertising rather than law enforcement: location data, device identifiers, IP intelligence, and behavioral signals derived from everyday consumer activity.
Advertising technology, commonly referred to as ad tech, is the sprawling ecosystem of software, data brokers, analytics platforms, and intermediaries that power targeted advertising on the modern Internet.
Ad tech companies collect and process information about where devices are located, how users move between physical and digital spaces, which apps are installed on their phones, and how devices can be linked across websites, applications, and networks.
While the industry typically frames this activity as anonymous or pseudonymous, the underlying data is often persistent, granular, and capable of tracking individuals over time.
Location data is a particularly valuable component of that ecosystem. Mobile applications routinely share latitude and longitude coordinates with advertising partners through embedded software development kits.
Even when precise GPS data is not available, companies infer location through IP addresses, Wi-Fi networks, Bluetooth beacons, and cell tower connections. That information is then aggregated, analyzed, and sold to advertisers seeking to measure foot traffic, target audiences, or assess the effectiveness of campaigns.
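To make the mechanism concrete, here is an illustrative sketch of the location signals a single in-app ad request can carry. The field names loosely follow the public OpenRTB spec; every value is fabricated.

# Each such request may fan out to dozens of ad-tech intermediaries.
bid_request = {
    "id": "example-request-id",
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # resettable ad ID
        "ip": "203.0.113.7",                            # coarse IP geolocation
        "geo": {
            "lat": 32.7157,      # precise fix from an embedded SDK
            "lon": -117.1611,
            "type": 1,           # 1 = GPS/location services, per the spec
        },
        "os": "iOS",
    },
    "app": {"bundle": "com.example.weather"},           # which app sent it
}
# Tie the persistent "ifa" to repeated geo fixes and you get a movement
# history, which is the raw material of the data-broker products at issue.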
ICE’s RFI suggests that the agency is exploring whether those same mechanisms can be repurposed as investigative tools.
The document asks vendors to describe platforms and data services that can support investigative needs while remaining “Ad Tech compliant,” a phrase that reflects industry norms rather than statutory law enforcement standards.
ICE appears to be looking to tap into the commercial data ecosystem rather than building bespoke surveillance tools from scratch, a strategy that allows agencies to access rich data streams without directly collecting the information themselves.
ICE’s interest is not limited to raw data. The RFI repeatedly references “operational platforms,” signaling a desire for systems that can ingest, correlate, analyze, and visualize information from multiple sources.
In practice, that means software environments capable of fusing location data with other records, such as criminal histories, financial data, travel records, social media activity, or administrative files, to generate investigative leads or support ongoing cases.
The agency frames its inquiry as exploratory and cautious. It notes that the government is seeking to understand the “current state” of ad tech and location data services available to federal investigative entities, particularly considering regulatory constraints and privacy expectations.
That language reflects growing scrutiny of commercial data practices by courts, regulators, and civil liberties advocates, especially when such data is accessed by federal agencies like ICE.
In recent years, federal agencies have increasingly relied on commercially available data to sidestep traditional legal barriers.
Because ad tech data is collected by private companies under consumer-facing privacy policies, agencies have argued that purchasing or accessing that data does not constitute a search under the Fourth Amendment.
Critics counter that this approach allows the government to obtain highly sensitive information, including detailed location histories, without warrants, probable cause, or meaningful oversight.
The U.S. Supreme Court has signaled skepticism of such practices in cases recognizing the sensitivity of long-term location tracking, even when data is held by third parties.
At the same time, regulators have brought enforcement actions against data brokers accused of selling sensitive location information without adequate safeguards.
Against that backdrop, ICE’s assertion that it is considering privacy expectations appears designed to reassure both policymakers and potential vendors that the agency is aware of the controversy surrounding commercial surveillance data.
Yet the RFI itself provides little detail about how those concerns would be operationalized. It does not reference warrants, court orders, or judicial authorization.
Nor does it explain how ICE would distinguish between data associated with U.S. persons and noncitizens, how long information would be retained, or whether data obtained for one investigative purpose could be reused for others.
That ambiguity is particularly significant given HSI’s broad mandate. Unlike agencies focused solely on criminal enforcement, HSI conducts civil and administrative investigations alongside criminal cases.
Location data or ad tech-derived insights could therefore be used in contexts ranging from immigration enforcement to customs violations to sanctions and export control investigations, often under lower legal thresholds than those required in criminal proceedings.
ICE’s emphasis on “Ad Tech compliant” services also underscores a fundamental tension. Compliance in the advertising industry typically refers to adherence to self-regulatory frameworks, contractual obligations, and privacy policies that permit extensive data collection so long as certain disclosures are made.
Those standards are not designed to constrain government use, nor do they substitute for constitutional or statutory protections governing law enforcement surveillance.
Companies marketing “privacy-friendly” location or IP intelligence tools often argue that they avoid directly identifying individuals. But researchers and regulators have repeatedly demonstrated that supposedly anonymized or aggregated data can be reidentified when combined with other datasets.
In an investigative context, reidentification is not a bug but a feature, enabling analysts to link digital signals back to real-world subjects.
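Here is a toy sketch of that reidentification step, with entirely hypothetical data: the most frequent nighttime coordinate for a pseudonymous ad ID is usually a home, and a home joins readily to a name.

import pandas as pd

pings = pd.DataFrame({  # column names assumed; a broker feed looks similar
    "ad_id": ["dev-1"] * 4,
    "ts": pd.to_datetime(["2026-01-10 01:12", "2026-01-11 02:40",
                          "2026-01-11 14:05", "2026-01-12 03:22"]),
    "lat": [32.7157, 32.7157, 32.9000, 32.7157],
    "lon": [-117.1611, -117.1611, -117.2000, -117.1611],
})

# Keep only overnight pings, then take the modal coordinate: likely a home
night = pings[pings["ts"].dt.hour.between(0, 5)]
home = night.groupby(["lat", "lon"]).size().idxmax()
print(home)  # (32.7157, -117.1611)
# Join that coordinate against a property or voter roll and "dev-1"
# is no longer anonymous.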
Biometric Update earlier reported that a Government Accountability Office audit had found that publicly accessible data — from social media posts to commercial geolocation records — can be aggregated into detailed “digital profiles” that expose U.S. personnel, military operations, and senior leaders to targeting, coercion, and disruption.
In January 2025, Gravy Analytics, a prominent location data broker, disclosed that a significant data breach had potentially exposed the precise location information of millions of individuals to de-anonymization.
The RFI’s focus on live demonstrations suggests that ICE is interested in mature, deployable capabilities rather than theoretical offerings. Vendors selected to present would be expected to show how their platforms operate in practice, how data is accessed and analyzed, and how investigative outputs are generated.
While the agency stresses that it is not committing to a future solicitation, such demonstrations often inform subsequent procurements, task orders, or pilot programs conducted under existing contracts.
ICE has used similar market research approaches in the past to normalize new surveillance capabilities before formal adoption.
Social media monitoring tools, mobile biometric systems, and large-scale analytics platforms were all introduced through incremental steps that began with RFIs and demonstrations rather than headline-grabbing contracts.
For privacy advocates, the latest filing fits a familiar pattern. Commercial surveillance markets evolve rapidly, driven by advertising and marketing demand. Government agencies then adopt those tools after the fact, often before lawmakers have fully grappled with the implications.
Oversight mechanisms, however, lag technical capability, leaving key questions unanswered until after systems are already in use.
ICE’s RFI does not indicate when demonstrations might occur or whether a solicitation will follow. It does make clear, though, that the agency sees the ad tech ecosystem as a potential investigative resource worth serious consideration.
As debates over commercial data, surveillance, and constitutional protections continue, the filing offers a window into how federal law enforcement is adapting to — and seeking to leverage — a data economy built for advertising rather than accountability.
For now, ICE is asking industry to explain how ad tech-derived location and analytics services can be made suitable for investigative use while respecting privacy expectations.
What remains unclear is who will define those expectations, how they will be enforced, and whether existing legal frameworks are equipped to govern a surveillance model that blurs the line between consumer marketing and government intelligence.
...
Read the original on www.biometricupdate.com »
Claude Code for infrastructure. Debug, act, and audit everything Fluid does on your infrastructure.
Fluid is a terminal agent that does work on production infrastructure (VMs, Kubernetes clusters, etc.) by making sandbox clones of the infrastructure for AI agents to work on, allowing the agents to run commands, test connections, and edit files, and then generate Infra-as-Code, such as an Ansible playbook, to be applied to production.
LLMs are great at generating Terraform, OpenTofu, Ansible, etc., but bad at guessing how production systems work. By giving them access to a clone of the infrastructure, agents can explore, run commands, and test things before writing the IaC, which gives them better context and a place to test ideas and changes before deploying.
I got the idea after seeing how much Claude Code has helped me work on code, I thought “I wish there was something like that for infrastructure”, and here we are.
Safety. I didn’t want Claude Code to SSH into a prod machine from my local workstation (a real problem!). I wanted to lock down the tools it can run to sandboxes only, while also giving it the autonomy to create sandboxes and no access to anything else.
Fluid gives access to live output of commands as they run (it’s pretty cool) and does this via ephemeral SSH certificates. Fluid provides tools for creating IaC, and requires human approval before creating sandboxes on hosts with low memory/CPU, accessing the internet, or installing packages.
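Fluid’s implementation isn’t spelled out, but the general technique is standard OpenSSH certificate signing. A minimal sketch, with assumed paths and principal names (not Fluid’s actual code):

import subprocess

def issue_ephemeral_cert(ca_key: str, user_pubkey: str,
                         principal: str, ttl: str = "+5m") -> None:
    # Sign user_pubkey with the CA key; the cert is valid only until ttl
    subprocess.run(
        ["ssh-keygen",
         "-s", ca_key,          # CA private key that sandboxes trust
         "-I", "fluid-agent",   # certificate identity, shows up in sshd logs
         "-n", principal,       # allowed login name on the sandbox
         "-V", ttl,             # validity window: now .. now+5m
         user_pubkey],
        check=True,
    )

issue_ephemeral_cert("/etc/fluid/ca", "/tmp/agent_key.pub", "sandbox")
# Produces /tmp/agent_key-cert.pub; sshd on the sandbox (configured with
# TrustedUserCAKeys) accepts it until expiry, after which access dies
# automatically, so the agent never holds standing credentials.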
Note: this is a terminal agent (like Claude Code) meant to be installed on your local laptop/workstation
...
Read the original on www.fluid.sh »
How Jeff Bezos Brought Down the Washington Post

The Amazon founder bought the paper to save it. Instead, with a mass layoff, he’s forced it into severe decline.

On September 4, 2013, the Amazon founder Jeff Bezos held his first meeting with the staff of the Washington Post, the newspaper he had agreed to purchase a month earlier from the Graham family, for two hundred and fifty million dollars. It had been a long and unsettling stretch for the paper’s staff. We—I was a deputy editor of the editorial page at the time—had suffered through years of retrenchment. We trusted that Don Graham would place us in capable hands, but we did not know this new owner, and he did not know or love our business in the way that the Graham family had. Bezos’s words at that meeting, about “a new golden era for the Washington Post,” were reassuring. Bob Woodward asked why he had purchased the paper, and Bezos was clear about the commitment he was prepared to make. “I finally concluded that I could provide runway—financial runway—because I don’t think you can keep shrinking the business,” he said. “You can be profitable and shrinking. And that’s a survival strategy, but it ultimately leads to irrelevance, at best. And, at worst, it leads to extinction.”

To look back on that moment is to wonder: How could it have come to this? The paper had some profitable years under Bezos, sparked by the 2016 election and the first Trump term. But it began losing enormous sums: seventy-seven million dollars in 2023, another hundred million in 2024. The owner who once offered runway was unwilling to tolerate losses of that magnitude. And so, after years of Bezos-fuelled growth, the Post endured two punishing rounds of voluntary buyouts, in 2023 and 2025, that reduced its newsroom from more than a thousand staffers to under eight hundred, and cost the Post some of its best writers and editors. Then, early Wednesday morning, newsroom employees received an e-mail announcing “some significant actions.” They were instructed to stay home and attend a “Zoom webinar at 8:30 a.m.” Everyone knew what was coming—mass layoffs.

The scale of the demolition, though, was staggering—reportedly more than three hundred newsroom staffers. The announcement was left to the executive editor, Matt Murray, and human-relations chief Wayne Connell; the newspaper’s publisher, Will Lewis, was nowhere to be seen as the grim news was unveiled. In what Murray termed a “broad strategic reset,” the Post’s storied sports department was shuttered “in its current form”; several reporters will now cover sports as a “cultural and societal phenomenon.” The metro staff, already cut to about forty staffers during the past five years, has been shrunk to about twelve; the foreign desks will be reduced to approximately twelve locations from more than twenty; Peter Finn, the international editor, told me that he asked to be laid off. The books section and the flagship podcast, “Post Reports,” will end. Shortly after the meeting, staffers received individualized e-mails letting them know whether they would stay or go. Murray said the retrenched Post would “concentrate on areas that demonstrate authority, distinctiveness, and impact,” focussing on areas such as politics and national security. This strategy, a kind of Politico-lite, would be more convincing if so many of the most talented players were not already gone.

Graham, who has previously been resolutely silent about changes at the paper, posted a message on Facebook that pulsed with anguish. “It’s a bad day,” he wrote, adding, “I am sad that so many excellent reporters and editors—and old friends—are losing their jobs. My first concern is for them; I will do anything I can to help.” As for himself, Graham, who once edited the sports section, said, “I will have to learn a new way to read the paper, since I have started with the sports page since the late 1940’s.”

What happened to the Bezos of 2013, a self-proclaimed optimist who seemed to have absorbed the importance of the Post in the nation’s journalistic ecosystem? In 2016, dedicating the paper’s new headquarters, he boasted that it had become “a little more swashbuckling” and had a “little more swagger.” As recently as December, 2024, at the New York Times’ DealBook Summit, Bezos expressed his commitment to nurturing the paper: “The advantage I bring to the Post is when they need financial resources, I’m available. I’m like that. I’m the doting parent in that regard.” Not long ago, he envisioned attracting as many as a hundred million paying subscribers to the Post. With these brutal cuts, he seems content to let the paper limp along, diminished in size and ambition.

“In the beginning, he was wonderful,” Sally Quinn, the veteran Post contributor and wife of its legendary executive editor, Ben Bradlee, told me of Bezos. “He was smart and funny and kind and interested. He was joyful. He was a person of integrity and conscience. He really meant it when he said this was a sacred trust, to buy the Post. And now I don’t know who this person is.”

The author David Maraniss was with the Post for forty-eight years. He resigned as an associate editor in 2024, after Bezos killed the editorial page’s planned endorsement of Kamala Harris. “He bought the Post thinking that it would give him some gravitas and grace that he couldn’t get just from billions of dollars, and then the world changed,” Maraniss said of Bezos. “Now I don’t think he gives us—I don’t think he gives a flying fuck.”

I asked Maraniss what cuts of this magnitude would mean for the institution. “I don’t even want to call it the Washington Post,” he said. “I don’t know what it’ll be without all of that.”

The first sign of impending layoffs came in late January, when the sports staff was informed that plans to send writers to Italy to cover the Winter Olympics had been cancelled. (Management later agreed to send a smaller crew.) In the following days, as rumors began to spread of severe cuts, the paper’s reporters began posting messages directed at Bezos on X, with the plaintive hashtag #SaveThePost. “Our reporters on the ground drove exclusive coverage during pivotal moments of recent history,” the foreign staff wrote to Bezos. “We have so much left to do.” The local staff noted that it had already been slashed in half in the past five years. “Watergate,” they wrote, “started as a local story.”

It did not help the staff’s morale that Lewis and his team were hobnobbing in Davos, or that Bezos and his wife, Lauren Sánchez, were in Paris for Haute Couture Week. More troubling were reminders that Bezos, who once emblazoned “Democracy Dies in Darkness” on the paper’s masthead, appears to be pursuing a policy of appeasement toward the Trump Administration. During the first Trump term, Bezos stood by the Post even when his stewardship threatened to cost him billions in government contracts. Now Bezos had not said a word about a recent F.B.I. raid on the home of the Post federal-government reporter Hannah Natanson, in which the agency seized her phones, laptops, and other devices. As the staff awaited the axe, the President and the First Lady celebrated the première of “Melania,” a documentary that Amazon had licensed for forty million dollars and was reported to be spending another thirty-five million to promote. The deal was inked after Bezos had dinner with the Trumps shortly before the Inauguration.

Martin Baron, who oversaw coverage at the paper that garnered eleven Pulitzer Prizes during his eight years as executive editor, said in a statement, “This ranks among the darkest days in the history of one of the world’s greatest news organizations. The Washington Post’s ambitions will be sharply diminished, its talented and brave staff will be further depleted, and the public will be denied the ground-level, fact-based reporting in our communities and around the world that is needed more than ever.” The news industry is in “a period of head-spinning change,” Baron told me. But the Post’s problems “were made infinitely worse by ill-conceived decisions that came from the very top.” He pointed to Bezos’s decision to kill the Harris endorsement—a “gutless order” that cost the paper more than two hundred fifty thousand subscribers. “Loyal readers, livid as they saw owner Jeff Bezos betraying the values he was supposed to uphold, fled The Post. In truth, they were driven away, by the hundreds of thousands,” Baron said. “Bezos’s sickening efforts to curry favor with President Trump have left an especially ugly stain of their own. This is a case study in near-instant, self-inflicted brand destruction.”

I spent more than forty years at the Post, as a reporter, an editor, an editorial writer, and a columnist. I resigned last March, after Bezos announced that the Opinions section, where I worked, would henceforth be concentrating on the twin pillars of “personal liberties and free markets.” More alarming, Bezos advised, “Viewpoints opposing those pillars will be left to be published by others.” We had been an opinion section reflecting a wide range of views—which Bezos himself had encouraged. It seemed obvious that this change was deeply misguided.

I had written a column critical of the non-endorsement decision several months earlier. The paper published it without any substantive changes. But, when I wrote a column disagreeing with the no-dissent-allowed dictum, I was told that Lewis had killed it—it apparently didn’t meet the “high bar” for the Post to write about itself—and declined my request to meet. I submitted my letter of resignation. A new editorial-page editor went on to shift both unsigned editorials and signed opinion columns dramatically to the right, to the point that no liberal columnists remain. One recent editorial praised the President’s plan for a new ballroom and excused his unauthorized bulldozing of the East Wing, saying that “the blueprints would have faced death by a thousand papercuts.” Another endorsed the move to rename the Defense Department the Department of War as “a worthy blow against government euphemism.” There are some editorials critical of Trump, but the inclination to fawning praise is unmistakable. Had I not defenestrated myself, I would, no doubt, have been advised to take my buyout and go.

But I am not—at least, I have not been—a Bezos-hater. I am grateful for the resources, financial and technological, that he devoted to the paper in his early years as owner. The surprise of Bezos’s tenure at the Post has been his bad business decisions.

Fred Ryan, a former chief of staff to Ronald Reagan and founding president of Politico, was hired as the publisher and C.E.O. in 2014 and oversaw a period of spectacular growth. Buoyed by Bezos-funded expansion and the public’s fixation on the new Trump Administration, the number of digital subscribers soared from thirty-five thousand when he arrived to two and a half million when he left, in the summer of 2023. But Ryan failed to develop an adequate plan for how the newspaper would thrive in a post-Trump environment. As traffic and revenue plunged, Ryan found himself increasingly at odds with the newsroom. He held a year-end town-hall meeting in 2022 at which he announced that layoffs were coming, and then, to the consternation of the staff, left without taking questions. As Clare Malone reported for The New Yorker, Woodward beseeched Bezos to intercede. The owner made a rare visit to the paper in January, 2023, for meetings with key staffers, taking notes on a legal pad as they poured out their anxiety.

Ryan left that summer, but Lewis, his eventual replacement, accomplished the feat of making the newsroom nostalgic for Ryan. A decade earlier, Lewis, then a senior executive in Rupert Murdoch’s British-tabloid empire, had played a pivotal role in dealing with the fallout from the phone-hacking scandal at some of Murdoch’s papers. Lewis had said that he was acting to protect “journalistic integrity,” when the Post questioned him about his actions during that time, but in 2024 questions arose, fuelled by a civil lawsuit brought against the papers, about whether Lewis had sought to conceal evidence, including by carrying out a plan to delete millions of e-mails. (Lewis has said the allegations against him were “completely untrue.”) At the Post, Lewis clashed with executive editor Sally Buzbee over coverage of the story, reportedly insisting that it was not newsworthy. Shortly afterward, Lewis announced Buzbee’s departure, and his plan to replace her with Robert Winnett, a former colleague of his from London’s Daily Telegraph and Sunday Times. The Post and the Times both reported on how Lewis and Winnett had used fraudulently obtained material as the basis for articles. “His ambition outran his ethics,” one of Lewis’s former reporters told the Times. Winnett ended up withdrawing from the position, but the episode poisoned relations between Lewis and the newsroom.

The staff, meanwhile, became increasingly concerned that Lewis was offering corporate word salad in place of a vision to address the Post’s decline. “Fix it, build it, scale it” was his catchphrase when he arrived, in January, 2024. In June of that year came an amorphous plan for what Lewis called a “third newsroom.” (The second newsroom, we were surprised to learn, was the Opinions section.) First, it was to focus on social media and service journalism. Then it was rechristened WP Ventures and, according to a memo to staff, would “focus entirely on building personality-driven content and franchises around personalities.” By February, 2025, the situation had deteriorated to the point that two former top editors, Leonard Downie and Robert Kaiser, wrote to Bezos about Lewis. “Replacing him is a crucial first step in saving The Washington Post,” they urged in an e-mail. Bezos never responded.

Downie, who served as executive editor from 1991 to 2008, contrasted the paths of the Times and the Post. During the past decade, the Times transformed itself into a one-stop-shopping environment that lured readers with games such as Spelling Bee, a cooking app, and a shopping guide. By the end of 2025, it was reporting close to thirteen million digital subscribers and an operating profit of more than a hundred and ninety-two million dollars. The Post does not release information about its digital subscribers, but it was reported to have two and a half million digital subscribers at the time of the non-endorsement decision, in 2024.

“One of the big differences to me was that they hired a publisher”—Ryan—“who didn’t come up with any ideas,” Downie told me. “And then when he left . . . we knew that Bezos was losing money, and we were encouraged by the fact that they were looking for somebody who could improve the business side of the paper and the circulation side of the paper. And then they chose this guy who we hardly ever heard from, who had a checkered past in British journalism.”

Writing last month on a private Listserv for former Post employees, Paul Farhi, who as the media reporter for the Post covered Bezos’s acquisition of the paper, shared his “utter mystification and bafflement” about Bezos’s tolerance of Lewis. “Even as a hands-off boss,” he wondered, “could Bezos not see what was obvious to even casual observers within a few months of Will’s arrival—that Will was ill-suited to the Post, that he had alienated the newsroom, that he had an ethically suspect past, and—most important—that none of his big ideas was working or even being implemented?” (Farhi, who took a buyout in 2023, gave me permission to quote his message.)

Even before these new cuts, a parade of key staffers had left the Post. A beloved managing editor, Matea Gold, went to the Times. The national editor, Philip Rucker, decamped to CNN, and the political reporter Josh Dawsey to the Wall Street Journal. The Atlantic hired, among others, three stars of the paper’s White House team: Ashley Parker, Michael Scherer, and Toluse Olorunnipa. These are losses that would take years to rebuild—if the Post were in a rebuilding mode. The Post, Woodward said, “lives and is doing an extraordinary reporting job on the political crisis that is Donald Trump”—including its scoop on the second strike to kill survivors of an attack on an alleged Venezuelan drug boat. But the print edition is a shadow of its former self, with metro, style, and sports melded into an anemic second section; daily print circulation is now below one hundred thousand. More pressingly, it’s unclear whether a newsroom so stripped of resources can sustain the quality of its work.

The sports columnist Sally Jenkins, who left the Post in August, 2025, as part of the second wave of buyouts, has been more supportive of management than many other Post veterans. So it was striking that, when we spoke recently, she was both passionate about the work of her newsroom colleagues and unsparing about how the business side had failed them. “When you whack at these sections, you’re whacking at the roots of the tree,” she told me. “We train great journalists in every section of the paper, and we train them to cover every subject on the globe. And when you whack whole sections of people away, you are really, really in danger of killing the whole tree.” When I asked how she felt about the losses, Jenkins said, “My heart is cracked in about five different pieces.”

Jenkins, who was in California covering Super Bowl week for the Atlantic, has spent a career studying what accounts for the difference between winning teams and losing ones. Bezos, she said, had been generous with his money and laudable for never interfering in the work of the newsroom. But, she added, “making money at journalism, you have to break rocks with a shovel. You have to love thinking about journalism to the point that it wakes you up at night with an idea, and then you have to be willing to try it. And I don’t see a sense that he loves the business enough to think about it at night. It’s almost like he’s treated it like Pets.com—an interesting experiment that he’s willing to lose some money on until he’s not. But the difference with this business is it’s not Pets.com. It’s not a business that just disappears into the muck of venture capitalism. It’s a business that is essential to the survival of the Republic, for Christ’s sake. So you don’t fuck around with it like that.”

As Post staffers and alumni braced for the cuts, I called Kaiser, the former managing editor, who spent more than half a century at the paper. “Mr. Bezos’s personal system has failed him in a way I fear he doesn’t grasp,” Kaiser, now eighty-two, told me. “He has no sense of the damage that will be done to his reputation in history if he becomes seen as the man who destroyed the institution that Katharine Graham”—the famed publisher who led the paper from the sixties to the nineties—“and Ben Bradlee built.” Kaiser recalled arriving at the paper’s London bureau in 1964. “If I say, ‘I’m Kaiser from the Washington Post’—what’s that? They never heard of it.” A decade later, he was posted in Moscow, as Woodward and Carl Bernstein were breaking the Watergate story. “Explaining was not necessary,” Kaiser said. “The Russians, in fact, had a gloriously exaggerated impression of the Washington Post as the king-maker and the king-destroyer.”

Bezos, Kaiser continued, “knew what the role was, acknowledged the role—those words ‘doting parent’—and then he walked away from it. What the hell?” The damage, he predicted, will reverberate beyond the immediate cuts. “What purpose does any honorable, attractive, competent journalist have for remaining at the Post? None.”

At one point, as we talked about the transformation of the Post, Kaiser stopped himself. “I’m going to cry,” he said, and paused. “Oh, God, it’s killing me.”

Bezos may be tiring of the Post, but he has not seemed inclined to sell the paper. Nor is it clear that would be a better, or at this point even feasible, outcome. Newspapers across the country are being bought up by private-equity firms that are essentially selling off the valuable parts. But there is another model for Bezos to consider: turning the Post into a nonprofit, endowed by Bezos but operating independently of him. For Bezos, this would reduce the role of the Post as a headache and a threat to other, more favored endeavors, such as his rocket company, Blue Origin. For the Post, assuming the endowment is sufficient, it would provide that continuing runway.

There are models for this approach. In Philadelphia, the late cable-television tycoon H. F. “Gerry” Lenfest purchased the Inquirer, the Daily News, and Philly.com in 2015, and the following year donated the publications to a charitable trust. “What would the city be without the Inquirer and the Daily News?” asked Lenfest, whose contribution to the endeavor has been valued at almost a hundred and thirty million dollars. In Utah, the investor Paul Huntsman bought the Salt Lake Tribune from the hedge fund Alden Global Capital in 2016; three years later, he transformed it into a nonprofit, supported in part by tax-deductible contributions from readers.

Writing in the Columbia Journalism Review in 2024, Steven Waldman suggested that Bezos follow a similar course. “ ‘Nonprofit’ does not mean ‘losing money,’ ” Waldman wrote. “Nonprofit news organizations can sell ads, offer subscriptions, and take donations. Done well, it is an especially strong business model, because it provides an extra revenue stream (philanthropy) and is deeply embedded in serving the community.” My quibble with Waldman’s pitch is that he asked Bezos to ante up a paltry hundred million. When Bezos purchased the Post, his net worth was about twenty-five billion; it is now an estimated two hundred fifty billion. Why not one per cent of that for the Post, enough to sustain the paper indefinitely? A pipe dream, I know, but this arrangement would make Bezos the savior of the Post, not the man who presided over its demise.

In the 1941 movie “Citizen Kane,” Charles Foster Kane, a newspaper publisher who, like Bezos, is one of the richest men in the world, is confronted by his legal guardian, Walter Thatcher, about the folly of funding his paper. “Honestly, my boy, don’t you think it’s rather unwise to continue this philanthropic enterprise, this Inquirer that’s costing you a million dollars a year?” Thatcher demands. “You’re right, Mr. Thatcher. I did lose a million dollars last year,” Kane replies. “I expect to lose a million dollars this year. I expect to lose a million dollars next year. You know, Mr. Thatcher, at the rate of a million dollars a year, I’ll have to close this place in sixty years.” Update Kane’s outlays to assume losses of a hundred million annually, in perpetuity. By that math, Bezos would have more than two millennia before needing to turn out the lights. ♦
...
Read the original on www.newyorker.com »
sqldef is a CLI tool for diffing two SQL schemas. You can use it to manage RDBMS schema migrations using regular SQL DDLs.
The online demo uses the WebAssembly build of sqldef to diff two SQL schemas and generate DDLs.
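As a minimal sketch of the workflow (the psqldef flags here are recalled from sqldef’s docs and should be verified against your installed version): declare the schema you want, pipe it in, and --dry-run previews the DDLs sqldef would apply to bring the live database in line.

import subprocess

desired = """
CREATE TABLE users (
  id    bigint PRIMARY KEY,
  name  text NOT NULL,
  email text NOT NULL   -- new column: sqldef should emit an ALTER TABLE
);
"""

# psqldef compares the desired schema against the live database; --dry-run
# prints the migration DDLs instead of applying them
out = subprocess.run(
    ["psqldef", "-U", "postgres", "--dry-run", "mydb"],
    input=desired, capture_output=True, text=True, check=True,
).stdout
print(out)  # expected shape: ALTER TABLE "users" ADD COLUMN "email" ...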
...
Read the original on sqldef.github.io »