10 interesting stories served every morning and every evening.
Yesterday, Microsoft unveiled a sweeping set of changes coming soon to Windows 11 that are designed to address common complaints and issues that people have with the platform. Everything from slow performance to unreliable updates, AI bloat, and excessive ads and enshittification are being tackled.
It’s all great news, but there’s one big issue that Microsoft notably isn’t addressing with these sweeping changes, and that’s Windows 11’s forced Microsoft account requirement. Even with all these improvements in tow, Windows 11 will still force you to set up an internet connection and sign in with a Microsoft account during the out-of-box experience.
That’s really unfortunate, as the forced Microsoft account requirement is probably one of the biggest complaints I see people have about Windows 11, and it’s not currently being addressed. In fact, the blog post Microsoft published doesn’t mention Microsoft accounts at all.
The good news is that this might not be the case forever. Microsoft Vice President and overall developer legend Scott Hanselman has posted on X in response to someone asking him about possibly relaxing the Microsoft account requirements, saying “Ya I hate that. Working on it.”
I understand that Hanselman isn’t the only person at Microsoft who doesn’t like the forced Microsoft account requirements, and that there are a number of people internally that are pushing to relax this. However, it’s not as simple as just deciding to remove the requirement and calling it a day.
Microsoft is a big company, and a lot of teams and organizations stand to benefit from Windows forcing a Microsoft account. So it’s likely that this process has to go through a committee of sorts, where each side will present its reasons for and against removing the requirement.
So the good news is that the Windows team is definitely looking into its options when it comes to relaxing or removing the Microsoft account requirement, but there’s no committed plan to move ahead with actually making any changes as of yet.
But people with influence internally at Microsoft are fighting for it, which should hopefully reassure those who were hoping to see progress in this area. Ultimately, this is a policy issue, not a technical one. From a technical perspective, it would be very easy for Microsoft to remove this requirement; what remains to be decided is whether the company can agree to make the change.
If Microsoft were to relax its requirements around using an online account on Windows 11, is that something you would take advantage of? I feel like I’m in the minority when it comes to using a Microsoft account on Windows, as I actually like syncing with OneDrive and being automatically signed in to Edge and Office. So, let me know if this requirement is something you want Microsoft to relax.
...
Read the original on www.windowscentral.com »
A warning about rising prices, vanishing consumer choice, and a future where owning a computer may matter more than ever as hardware, power, and control drift toward data centers and away from people.
For the better part of two decades, consumers lived in a golden age of tech. Memory got cheaper, storage increased in capacity and hardware got faster and absurdly affordable. Upgrades were routine, almost casual. If you needed more RAM, a bigger SSD, or a faster CPU or GPU, you barely had to wait a week for a discount offer and you moved on with your life. This era is ending.
What’s forming now isn’t just another pricing cycle or a short-term shortage; it is a structural shift in the hardware industry that paints a deeply grim outlook for consumers. Today, I am urging you to hold on to your hardware, as you may not be able to replace it affordably in the future. I have always been a staunch critic of today’s consumer industry and the ideas behind it, and a strong proponent of buying it for life (meaning, investing in durable, repairable, quality products). But the industry’s shift has nothing to do with the protection of valuable resources or the environment; it is instead a move toward a trajectory that has the potential to erode technological self-sufficiency and independence for people all over the world.
In recent months the buzzword RAM-pocalypse has started popping up across tech journalism and enthusiast circles. It’s an intentionally dramatic term that describes the sharp increase in RAM prices, primarily driven by high demand from data centers and “AI” technology, which most people had considered a mere
blip in the market. This presumed temporary blip, however, turned out to be a lot more than just that, with one manufacturer after the other openly stating that prices will continue to rise, with suppliers forecasting shortages of specific components that could last well beyond 2028, and with key players like
Western Digital and Micron either completely disregarding or even exiting the consumer market altogether.
The RAM-pocalypse isn’t just a temporary headline anymore, but has seemingly become long-term reality. However, RAM and memory in general is only the beginning.
The main reason for the shortages and hence the increased prices is data center demand, specifically from “AI” companies. These data centers require mind-boggling amounts of hardware, specifically RAM, storage drives, and GPUs, which are themselves RAM-heavy graphics units built for “AI” workloads. The enterprise demand for specific components simply outpaces the current global production capacity, and outbids the comparatively poor consumer market.
For example, OpenAI’s Stargate project alone reportedly
requires approximately 900,000 DRAM wafers per month, which could account for roughly 40% of current global DRAM output. Other big tech giants including Google, Amazon, Microsoft, and Meta have placed open-ended orders with memory suppliers, accepting as much supply as available. The existing and future data centers for/of these companies are expected to consume 70% of all memory chips produced in 2026.
However, memory is just the first domino.
RAM and SSDs are where the pain is most visible today, but rest assured that the same forces are quietly reshaping all aspects of consumer hardware. One of the most immediate and tangible consequences of this broader supply-chain realignment is sharp, cascading price hikes across consumer electronics, with
LPDDR memory standing out as an early pressure point that most consumers didn’t recognize until it was already unavoidable.
LPDDR is used in smartphones, laptops, tablets, handheld consoles, routers, and increasingly even low-power PCs. It sits at the intersection of consumer demand and enterprise prioritization, making it uniquely vulnerable when manufacturers reallocate capacity toward “AI” accelerators, servers, and data-center-grade memory, where margins are higher and contracts are long-term. As fabs shift production toward HBM and server DRAM, as well as GPU wafers, consumer hardware production quietly becomes non-essential, tightening supply just as devices become more power- and memory-hungry, all while continuing on their path to remain frustratingly unserviceable and un-upgradable.
The result is a ripple effect, in which device makers pay more for chips and memory and pass those costs on through higher retail prices, cut base configurations to preserve margins, or lock features behind premium tiers. At the same time, consumers lose the ability to compensate by upgrading later, because most components these days, like LPDDR, are soldered down by design. This is further amplified by scarcity, as even modest supply disruptions can spike prices disproportionately in a market where just a few suppliers dominate, turning what should be incremental cost increases into sudden jumps that affect entire product categories at once.
In practice, this means that phones, ultrabooks, and embedded devices are becoming more expensive overnight, not because of new features, but because the invisible silicon inside them has quietly become a
contested resource in a world that no longer builds hardware primarily for consumers.
In late January 2026, the Western Digital CEO
confirmed during an earnings call that the company’s entire HDD production capacity for calendar year 2026 is already sold out. Let that sink in for a moment. Q1 hasn’t even ended and a major hard drive manufacturer has
zero remaining capacity for the year. Firm purchase orders are in place with its top customers, and long-term agreements already extend into 2027 and 2028. Consumer revenue now accounts for just 5% of Western Digital’s total sales, while cloud and enterprise clients make up 89%. The company has, for all practical purposes, stopped being a consumer storage company.
And Western Digital is not alone. Kioxia, one of the world’s largest NAND flash manufacturers, admitted that its entire 2026 production volume is
already in a “sold out” state, with the company expecting tight supply to persist through at least 2027 and long-term customers facing 30% or higher year-on-year price increases. Adding to this, the Silicon Motion CEO put it bluntly during a recent earnings call:
We’re facing what has never happened before: HDD, DRAM, HBM, NAND… all in severe shortage in 2026.
In addition, the Phison CEO has gone even further, warning that the NAND shortage could persist until 2030, and that it risks the
“destruction” of entire segments of the consumer electronics industry. He also noted that factories are now demanding prepayment for capacity three years in advance, an unprecedented practice that effectively locks out smaller players.
The collateral damage of this can already be felt, and it’s significant. For example Valve confirmed that the Steam Deck OLED is now out of stock intermittently in multiple regions “due to memory and storage shortages”. All models are currently unavailable in the US and Canada, the cheaper LCD model has been discontinued entirely, and there is no timeline for when supply will return to normal. Valve has also
been forced to delay the pricing and launch details for its upcoming Steam Machine console and Steam Frame VR headset, directly citing memory and storage shortages.
At the same time, Sony is considering delaying the PlayStation 6 to 2028 or even 2029, and Nintendo is reportedly
contemplating a price increase for the Switch 2, less than a year after its launch. Both decisions are seemingly driven by the same memory supply constraints. Meanwhile, Microsoft has already raised
prices on the Xbox.
Now you might think that everything so far is about GPUs and other gaming-related hardware, but that couldn’t be further from the truth. General computing devices like the Raspberry Pi are not immune to any of this either. The Raspberry Pi Foundation has been forced to raise prices twice in three months, with the flagship Raspberry Pi 5 (16GB) jumping from $120 at launch to $205 as of February 2026, a 70% increase driven entirely by LPDDR4
memory costs. What was once a symbol of affordable computing is rapidly being priced out of reach for the educational and hobbyist communities it was designed to serve.
HP, on the other hand, seems to have already prepared for the hardware shortage by launching a laptop subscription service where you pay a monthly fee to use a laptop but never own it, no matter how long you subscribe. While HP frames this as a convenience, the timing, right in the middle of a hardware affordability crisis, makes it feel a lot more like a preview of a rented compute future. But more on that in a second.
“But we’ve seen price spikes before, due to crypto booms, pandemic shortages, factory floods and fires!”, you might say. And while we did live through those crises, things eventually eased when bubbles popped and markets or supply chains recovered. The current situation, however, doesn’t appear to be going away anytime soon, as it looks like the industry’s priorities have fundamentally
changed.
These days, the biggest customers are not gamers, creators, PC builders or even crypto miners anymore. Today, it’s hyperscalers: companies running “AI” training clusters, cloud providers, enterprise data centers, as well as governments and defense contractors. Compared to these hyperscalers, consumers are small fish in a big pond.
These buyers don’t care if RAM costs 20% more and neither do they wait for
Black Friday deals. Instead, they sign contracts measured in exabytes and billions of dollars. With such clients lining up, the consumer market in contrast is suddenly an inconvenience for manufacturers. Why settle for smaller margins and deal with higher marketing and support costs, fragmented SKUs, price sensitivity and retail logistics headaches, when you can have behemoths throwing money at you? Why sell a $100 SSD to one consumer, when you can sell a whole rack of enterprise NVMe drives to a data center with
virtually infinite money?
All of this goes to show that the consumer market is not just deprioritized, but instead it is being starved. In fact, IDC has already warned
that the PC market could shrink by up to 9% in 2026 due to skyrocketing memory prices, and has described the situation not as a cyclical shortage but as “a potentially permanent, strategic reallocation of the world’s silicon wafer capacity”.
Leading PC OEMs including Lenovo, Dell, HP, Acer, and ASUS have all signaled 15-20% PC price increases for 2026, with some models seeing even steeper hikes. Framework, the repairable laptop company, has also been transparent about rising memory costs impacting its pricing. And analyst Jukan Choi recently revised his shortage timeline estimate, noting that DRAM production capacity is expected to grow at just 4.8% annually through 2030, with even that incremental capacity concentrated on HBM rather than consumer memory. TrendForce’s latest forecast projects DRAM contract prices rising by 90-95% quarter over quarter in Q1 2026. And that is not a typo.
The price of hardware is one thing, but value-for-money is another aspect that appears to be only getting worse from here on. Already today consumer parts feel like cut-down versions of enterprise silicon. As “AI” accelerators and server chips dominate R&D budgets, consumer improvements will slow even further, or arrive at higher prices justified as premium features. This is true for CPUs and GPUs, and it will be equally true for motherboards, chipsets, power supplies, networking, etc. We will likely see fewer low-end options, more segmentation, artificial feature gating and generally higher baseline prices that, once established, won’t be coming back down again.
As enterprise standards become the priority, consumer gear is becoming an afterthought that is being rebadged, overpriced, and poorly supported. The uncomfortable truth is that the consumer hardware market is no longer the center of gravity, as we all were able to see at this year’s CES. It’s orbiting something much larger, and none of this is accidental. The industry isn’t failing, it’s succeeding, just not for you.
And to be fair, from a corporate standpoint, this pivot makes perfect sense.
“AI” and enterprise customers are rewriting revenue charts, all while consumers continue to be noisy, demanding, and comparatively poor. It is pretty clear that consumer hardware is becoming a second-class citizen, which means that the machines we already own are more valuable than we might be thinking right now.
“But what does the industry think the future will look like if nobody can afford new hardware?”, you might be asking.
There is a darker, conspiratorial interpretation of today’s hardware trends that reads less like market economics and more like a rehearsal for a managed
future. Businesses, having discovered that ownership is inefficient and obedience is profitable, are quietly steering society toward a world where no one owns compute at all, where hardware exists only as an abstraction rented back to the public through virtual servers, SaaS subscriptions, and metered experiences, and where digital sovereignty, that anyone with a PC tower under their desk once had, becomes an outdated, eccentric, and even suspicious concept.
… a morning in said future, where an ordinary citizen wakes up, taps their terminal, which is a sealed device without ports, storage, and sophisticated local execution capabilities, and logs into their Personal Compute Allocation. This bundle of cloud CPU minutes, RAM credits, and storage tokens leased from a conglomerate whose logo has quietly replaced the word “computer” in everyday speech, just like “to search” has made way for “to google”, has removed the concept of installing software, because software no longer exists as a thing, but only as a service tier in which every task routes through servers owned by entities. Entities that insist that this is all for the planet. Entities that outlawed consumer hardware years ago under the banner of environmental protectionism, citing e-waste statistics, carbon budgets, and unsafe unregulated silicon, while conveniently ignoring that the data centers
humming beyond the city limits burn more power in an hour than the old neighborhood ever did in a decade. In this world, the ordinary citizen remembers their parents’ dusty Personal Computer, locked away in a storage unit like contraband. A machine that once ran freely, offline if it wanted, immune to arbitrary account suspensions and pricing changes. As they go about their day, paying a micro-fee to open a document, losing access to their own photos because a subscription lapsed, watching a warning banner appear when they type something that violates the ever-evolving terms-of-service, and shouting “McDonald’s!” to skip the otherwise unskippable ads within every other app they open, they begin to understand that the true crime of consumer hardware wasn’t primarily pollution but independence. They realize that owning a machine meant owning the means of computation, and that by centralizing hardware under the guise of efficiency, safety, and sustainability, society traded resilience for convenience and autonomy for comfort. In this utopia, nothing ever breaks because nothing is
yours, nothing is repairable because nothing is physical, and nothing is
private because everything runs somewhere else, on someone else’s computer. The quiet moral, felt when the network briefly stutters and the world freezes, is that keeping old hardware alive was never nostalgia or paranoia, but a small, stubborn act of digital self-defense; a refusal to accept that the future must be rented, permissioned, and revocable at any moment.
If you think that dystopian “rented compute over owned hardware” future could never happen, think again. In fact, you’re already likely renting rather than owning in many different areas. Your means of
communication are run by Meta, your music is provided by Spotify, your movies are streamed from Netflix, your data is stored in Google’s data centers and your office suite runs on Microsoft’s cloud. Maybe even your car is leased instead of owned, and you pay a monthly premium for seat heating or sElF-dRiViNg, whatever that means. After all, the average Gen Z and Millennial US consumer today apparently has 8.2 subscriptions, not including their DaIlY aVoCaDo ToAsTs and StArBuCkS cHoCoLate ChIp LaTtEs that the same Boomers
responsible for the current (and past) economic crises love to dunk on.
Besides, look no further than what’s already happening in, for example, China, a country that manufactures massive amounts of the world’s sought-after hardware yet faces restrictions on buying that very hardware. In recent years, a complex web of export controls and chip bans has put a spotlight on how hardware can become a geopolitical bargaining chip rather than a consumer good. For example, export controls imposed by the United States in recent years barred Nvidia
from selling many of its high-performance GPUs into China without special licenses, significantly reducing legal access to cutting-edge compute inside the country.
Meanwhile, enforcement efforts have repeatedly busted smuggling operations moving prohibited Nvidia chips into Chinese territory through Southeast Asian hubs, with over $1 billion worth of banned GPUs reportedly moving through gray markets, even as official channels remain restricted. Coverage by outlets such as Bloomberg, as well as actual investigative journalism like
Gamer’s Nexus has documented these black-market flows and the lengths to which both sides go to enforce or evade restrictions, including smuggling networks and increased regulatory scrutiny.
On top of this, Chinese regulators have at times restricted domestic tech firms from buying specific Nvidia models, further underscoring how government policy can override basic market access for hardware, even in the country where much of that hardware is manufactured. While some of these export rules have seen partial reversals or regulatory shifts, the overall situation highlights a world in which hardware access is increasingly determined by politics, security regimes, and corporate strategy, and not by consumer demand. This should serve as a cautionary tale for anyone who thinks owning their own machines won’t matter in the years to come.
In an ironic twist, however, one of the few potential sources of relief may, in fact, come from China. Two Chinese manufacturers, CXMT (ChangXin Memory Technologies) and YMTC (Yangtze Memory Technologies), are embarking on their most aggressive capacity expansions ever, viewing the global shortage as a golden opportunity to close the gap with the incumbent big three
(Samsung, SK Hynix, Micron).
CXMT is now the world’s fourth-largest DRAM maker by production volume, holding roughly 10-11% of global wafer capacity, and is building a massive new DRAM facility in Shanghai expected to be two to three times larger than its existing Hefei headquarters, with volume production targeted for 2027. The company is also preparing a $4.2 billion IPO on Shanghai’s STAR Market to fund further expansion and has reportedly delivered HBM3 samples to domestic customers including Huawei.
YMTC, traditionally a NAND flash supplier, is constructing a third fab in Wuhan with roughly half of its capacity dedicated to DRAM, and has reached 270-layer 3D NAND capability, rapidly narrowing the gap with Samsung (286 layers) and SK Hynix (321 layers). Its NAND market share by shipments reached 13% in Q3 2025, close to Micron’s 14%. What’s particularly notable is that
major PC manufacturers are already
turning to these suppliers.
However, as mentioned before, with hardware having become a geopolitical topic, both companies face ongoing (US-imposed) restrictions. Hence, HP, for example,
has indicated it would only use CXMT chips in devices for non-US markets. Nevertheless, for consumers worldwide the emergence of viable fourth and fifth players in the memory market represents the most tangible hope of eventually breaking the current supply stranglehold. Whether that relief arrives in time to prevent lasting damage to the consumer hardware ecosystem remains an open question, though.
The reason I’m writing all of this isn’t to create panic, but to help put things into perspective. You don’t need to scavenger-hunt for legacy parts in your local landfill (yet) or swear off upgrades forever, but you do need to recognize that the rules have changed. The market that once catered to enthusiasts and everyday users is turning its back. So take care of your hardware, stretch its lifespan, upgrade thoughtfully, and don’t assume replacement will always be easy or affordable.
That PC, laptop, NAS, or home server isn’t disposable anymore. Clean it, maintain it, repaste it, replace fans and protect it, as it may need to last far longer than you originally planned.
Also, realize that the best time to upgrade your hardware was yesterday and that the second best time is now. If you can afford sensible upgrades, especially RAM and SSD capacity, it may be worth doing sooner rather than later. Not for performance, but for insurance, because the next time something fails, it might be unaffordable to replace, as the era of casual upgrades seems to be over. Five-year systems may become eight- or ten-year systems.
Software bloat will hurt more and
will require re-thinking. Efficiency will
matter again. And looking at it from a different angle, maybe that’s a good thing.
Additionally, the assumption that prices will normalize again at some point is most likely a pipe dream. The old logic of “wait a year and it’ll be cheaper” no longer applies when manufacturers are deliberately constraining supply. If you need a new device, buy it; if you don’t, however, there is absolutely no need to spend money on the minor yearly refresh cycle any longer, as the returns will be increasingly diminished. And again, looking at it from a different angle, that is probably also a good thing.
Consumer hardware is heading toward a bleak future where owning powerful, affordable machines becomes harder or maybe even impossible, as manufacturers abandon everyday users to chase vastly more profitable data centers, “AI”
firms, and enterprise clients. RAM and SSD price spikes, Micron’s exit from the consumer market, and the resulting Samsung/SK Hynix duopoly are early warning signs of a broader shift that will eventually affect CPUs, GPUs, and the entire PC ecosystem.
With large manufacturers having sold out their entire production capacity to
hyperscalers for the rest of the year while simultaneously cutting consumer production by double-digit percentages, consumers will have to take a back seat. Already today consumer hardware is overpriced, out of stock or even intentionally being delayed due to supply issues.
In addition, manufacturers are pivoting towards consumer hardware subscriptions, where you never own the hardware. In the most dystopian trajectory, consumers might not buy any hardware at all, with the exception of low-end thin clients that are merely interfaces, and will instead rent compute through cloud platforms, losing digital sovereignty in exchange for convenience. And despite all of this sounding like science fiction, there is already hard evidence proving that access to hardware can in fact be politically and economically revoked.
Therefore I am urging you to maintain and upgrade wisely, and hold on to your
existing hardware, because ownership may soon be a luxury rather than the norm.
...
Read the original on xn--gckvb8fzb.com »
I can’t express how much I utterly hate the “Continuing Disability Review.”
It is a letter that arrives every few years from the government, asking a question that is medically absurd and philosophically insulting: “Are you still disabled?”
As if my blindness were a seasonal allergy. As if I might have woken up last Tuesday, blinked, and realized that my optic nerves had decided to regenerate spontaneously.
This week, I received The Letter. It demanded “updated medical evidence” to prove that I—a man who has been blind since birth—am, in fact, still blind.
I called the number. I navigated the phone tree hellscape. I finally reached a human being. Let’s call her “Karen from Compliance.”
“I have the documents in PDF format,” I told her, using my polite, I haven’t had my morning tea so make this easy on me, voice. “I can email them to you right now. You’ll have them in ten seconds.”
“We cannot accept email,” Karen said. Her voice was flat, dry, and sounded like stale coffee and rigid adherence to a rulebook written in 1994. “It is a security risk. You must mail physical copies, or you can fax them.”
“Fax them?” I asked. “You want me to fax you medical records when you could just delete the email after saving the attachments?”
“Those are the options, sir. If we don’t receive them by Friday, your benefits will be suspended.”
I didn’t understand why they couldn’t just look back in my file, notice nothing had changed in decades, and update it based on past data.
She said it with a challenge in her tone. She knew who she was talking to. She was talking to a blind man living below the poverty line. She assumed that “fax it” was an impossible hurdle. She assumed I would have to find a ride to a library, pay twenty cents a page, and struggle with a physical machine I couldn’t read. She was counting on the friction of the physical world to make me give up.
I am a nerd. And I have an internet connection.
“Okay,” I said, my voice dropping into the cool, smooth, ‘Let’s systemically tango,’ tone of a man with a plan. “I will fax them. What is the number?”
I hung up. And then, I went to work.
She wanted evidence? Oh boy, I would give her evidence.
I didn’t just pull the recent files. I went into the archives. I dug into the deep, digital bedrock of my hard drive. I pulled records from when I was five. I pulled the surgical notes from my cerebral palsy treatments. I pulled the intake forms from every specialist, every therapist, every social worker who has ever written a note about my “deficits.”
I compiled a single, monolithic PDF. It was a monument to medical trauma. It was a library of diagnosis.
It was five hundred and twelve pages long.
I opened my preferred internet faxing service. This is a tool that allows me to send a fax purely through digital data. It would cost $20, exactly the amount someone had donated to the blog last week, but if I didn’t do this, I would lose all my benefits. It costs me zero paper. It costs me zero toner.
By the way, your tips keep me writing.
But for the recipient?
For the recipient, a fax is a physical reality. It requires paper. It requires ink. It requires time.
I imagined Karen’s fax machine. It was probably an old, beige beast sitting in the corner of a gray office. It was likely low on paper. It was almost certainly low on patience.
I uploaded the file. The file size was massive. The progress bar on my screen reader ticked up. Uploading… 20%… 50%… 80%…
And then, I sat back and listened to the most beautiful sound in the world.
“Your fax has been sent,” my screen reader announced.
I imagined the scene in that office.
At first, it would just be a single page. Whirrr. Chunk. A standard medical form. Karen would ignore it.
By page fifty, the machine would be heating up. The smell of hot toner would start to fill the cubicle. The rhythmic chunk-chunk-chunk of the printing would become a drone, a mechanical chant of malicious compliance.
By page one hundred, the paper tray would run out. The machine would start beeping. That high-pitched, insistent beep-beep-beep that demands attention. Karen would have to get up. She would have to find a ream of paper. She would have to feed the beast.
And the beast would not stop.
Because I had set the retry limit to “Infinity.” If the line busied out? It would call back. If the paper ran out? It would wait. It was a digital siege engine.
I sent them everything. I sent them the eye charts that prove I can’t read eye charts. I sent them the physical therapy logs. I sent them the blurry scans of notes written by doctors who are long since dead.
I sent them the Tsunami of Truth.
I wanted them to hold the weight of it. I wanted them to physically feel the burden of proof they place on disabled bodies. They want us to document our existence? Fine. Here is my existence, one sheet of hot, curled paper at a time.
Two hours later, my phone rang.
It was Karen. She sounded breathless. She sounded like she was standing next to a machine that was hyperventilating. In the background, I could hear a rhythmic whir-chunk, whir-chunk.
“Yes?” I answered, my voice the picture of innocent helpfulness.
“Sir, please. You have to stop the fax. It’s… it’s been printing for an hour. It’s jamming the machine. We’re out of toner.”
“Oh, you’re out of toner? It’s jammed? Oh my! Oh, I’m so sorry,” I said, putting exactly zero percent sincerity into the apology. “But you said you couldn’t accept email. You said I had to provide complete documentation. I’m just following the rules, Karen. I wouldn’t want my benefits to be suspended because I missed documentation, so here’s documentation all the way back to when I was five years old.”
“Jesus Christ, We have it!” she snapped. “We have enough! Please, just… cancel the rest.”
“I’m afraid I can’t do that,” I lied. “It’s an automated process. Once it starts, it has to finish. Security protocols, you understand.”
There was a long, strangled silence on the line. Then, a defeated sigh.
“Fine! Fine,” she snapped. “We will mark your file as updated.”
“Thank you,” I said. “Have a wonderful day.”
I sat there in my quiet apartment, eating a cookie. I imagined the pile of paper in that office, a physical mountain of evidence testifying to the fact that yes, I am blind, and yes, I am smarter than your bureaucracy.
If you enjoyed this tiny victory in a hostile world, you might enjoy Seven Days in June by Tia Williams.
...
Read the original on sightlessscribbles.com »
Most teams have adopted AI in some form, but the gap between “using AI” and “getting measurable ROI from AI” is larger than people realize.
Postman released a cost savings analysis that looks at six common API development workflows and benchmarks the actual time and cost difference when AI is built into the platform versus bolted on externally.
It’s a short, data-driven read that helps engineering leads make the case for where AI-native tooling actually moves the needle.
You can grab the guide for free here →
Thanks to Postman for partnering today!
Claude Code users typically treat the .claude folder like a black box. They know it exists. They’ve seen it appear in their project root. But they’ve never opened it, let alone understood what every file inside it does.
The .claude folder is the control center for how Claude behaves in your project.
It holds your instructions, your custom commands, your permission rules, and even Claude’s memory across sessions. Once you understand what lives where and why, you can configure Claude Code to behave exactly the way your team needs it to.
This newsletter walks you through the entire anatomy of the folder, from the files you’ll use daily to the ones you’ll set once and forget.
Before diving in, one thing worth knowing upfront: there are actually two .claude directories, not one.
The first lives inside your project, and the second lives in your home directory:
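Concretely, the two locations look like this:

```text
your-project/.claude/   # project-level: lives in the repo
~/.claude/              # global: lives in your home directory
```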
The project-level folder holds team configuration. You commit it to git. Everyone on the team gets the same rules, the same custom commands, the same permission policies.
The global ~/.claude/ folder holds your personal preferences and machine-local state, like session history and auto-memory.
This is the most important file in the entire system. When you start a Claude Code session, the first thing it reads is CLAUDE.md. It loads it straight into the system prompt and keeps it in mind for the entire conversation.
Simply put: whatever you write in CLAUDE.md, Claude will follow.
If you tell Claude to always write tests before implementation, it will. If you say “never use console.log for error handling, always use the custom logger module,” it will respect that every time.
A CLAUDE.md at your project root is the most common setup. But you can also have one in ~/.claude/CLAUDE.md for global preferences that apply across all projects, and even one inside subdirectories for folder-specific rules. Claude reads all of them and combines them.
Most people either write too much or too little. Here’s what works.

Worth including:

* Non-obvious gotchas (“TypeScript strict mode is on, unused variables are errors”)
* File and folder structure for the main modules

Worth leaving out:

* Anything that belongs in a linter or formatter config
* Full documentation you can already link to
Keep CLAUDE.md under 200 lines. Files longer than that start eating too much context, and Claude’s instruction adherence actually drops.
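The article’s own sample file isn’t reproduced in this digest, but a minimal sketch in the same spirit (all project details below are hypothetical) might look like this:

```markdown
# Acme API

## Stack
- Node 20, TypeScript (strict mode is on, unused variables are errors)
- Express + Prisma on PostgreSQL

## Commands
- `npm run dev`: start the dev server
- `npm test`: run the test suite
- `npm run lint`: lint and type-check

## Conventions
- Write tests before implementation
- Never use console.log for error handling; use the logger in src/lib/logger.ts
- One route handler per resource, under src/api/

## Structure
- src/api/      : route handlers
- src/services/ : business logic
- src/lib/      : shared utilities
```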
That’s ~20 lines. It gives Claude everything it needs to work productively in this codebase without constant clarification.
Sometimes you have a preference that’s specific to you, not the whole team. Maybe you prefer a different test runner, or you want Claude to always open files using a specific pattern.
Create CLAUDE.local.md in your project root. Claude reads it alongside the main CLAUDE.md, and it’s automatically gitignored so your personal tweaks never land in the repo.
CLAUDE.md works great for a single project. But once your team grows, you end up with a 300-line CLAUDE.md that nobody maintains and everyone ignores.
Every markdown file inside .claude/rules/ gets loaded alongside your CLAUDE.md automatically. Instead of one giant file, you split instructions by concern:
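For example, the split might look like this (the first two file names are the ones the next paragraph refers to; the third is purely illustrative):

```text
.claude/rules/
├── api-conventions.md   # API design and error-handling rules
├── testing.md           # testing standards
└── security.md          # hypothetical example: secrets handling, auth rules
```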
Each file stays focused and easy to update. The team member who owns API conventions edits api-conventions.md. The person who owns the testing standards edits testing.md. Nobody stomps on each other.
The real power comes from path-scoped rules. Add a YAML frontmatter block to a rule file and it only activates when Claude is working with matching files:
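A sketch of what that might look like, assuming a `paths` field in the frontmatter that takes glob patterns, as the next paragraph describes (exact syntax may vary by Claude Code version):

```markdown
---
paths:
  - "src/api/**"
  - "src/handlers/**"
---

# API conventions

- Validate request input before any database access
- Return errors as structured JSON, never as bare strings
```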
Claude won’t load this file when editing a React component. It only loads when it’s working inside src/api/ or src/handlers/. Rules without a paths field load unconditionally, every session.
This is the right pattern once your CLAUDE.md starts feeling crowded.
Out of the box, Claude Code has built-in slash commands like /help and /compact. The commands/ folder lets you add your own.
Every markdown file you drop into .claude/commands/ becomes a slash command.
A file named review.md creates /project:review. A file named fix-issue.md creates /project:fix-issue. The filename is the command name.
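A sketch of what .claude/commands/review.md might contain, using the bash-injection syntax described just below (the prompt text itself is illustrative):

```markdown
Review the changes below for bugs, missing tests, and style issues.
Focus on correctness first, then readability.

## Current diff

!`git diff HEAD`
```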
Now run /project:review in Claude Code and it automatically injects the real git diff into the prompt before Claude sees it. The ! backtick syntax runs shell commands and embeds the output. That’s what makes these commands genuinely useful instead of just saved text.
Use $ARGUMENTS to pass text after the command name:
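For example, a hypothetical fix-issue.md could accept an issue number, so that running /project:fix-issue 123 substitutes 123 for $ARGUMENTS before Claude sees the prompt:

```markdown
Fix GitHub issue #$ARGUMENTS.

1. Read the issue with `gh issue view $ARGUMENTS`
2. Locate the relevant code and implement a fix
3. Add or update tests that cover the change
4. Summarize the fix in a short commit message
```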
Project commands in .claude/commands/ are committed and shared with your team. For commands you want everywhere regardless of project, put them in ~/.claude/commands/. Those show up as /user:command-name instead.
A useful personal command: a daily standup helper, a command for generating commit messages following your convention, or a quick security scan.
You now know how commands work. Skills look similar on the surface, but the trigger is fundamentally different. Here’s the distinction before we go any further:
Skills are workflows that Claude can invoke on its own, without you typing a slash command, when the task matches the skill’s description. Commands wait for you. Skills watch the conversation and act when the moment is right.
Each skill lives in its own subdirectory with a SKILL.md file:
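Using the security-review skill discussed below as an example, the layout is roughly:

```text
.claude/skills/
└── security-review/
    ├── SKILL.md            # trigger description and the workflow itself
    └── DETAILED_GUIDE.md   # supporting reference the skill pulls in
```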
The SKILL.md uses YAML frontmatter to describe when to use it:
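A sketch of that frontmatter, assuming `name` and `description` fields (worth verifying against the current docs):

```markdown
---
name: security-review
description: Review code changes for security problems such as injection,
  auth bypasses, and committed secrets. Use when the user asks for a
  security review or when a change touches authentication code.
---

Work through the checklist in DETAILED_GUIDE.md and report findings
ordered by severity, with file and line references for each issue.
```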
When you say “review this PR for security issues,” Claude reads the description, recognizes it matches, and invokes the skill automatically. You can also call it explicitly with /security-review.
The key difference from commands: skills can bundle supporting files alongside them. The DETAILED_GUIDE.md reference above pulls in a detailed document that lives right next to SKILL.md. Commands are single files. Skills are packages.
Personal skills go in ~/.claude/skills/ and are available across all your projects.
When a task is complex enough to benefit from a dedicated specialist, you can define a subagent persona in .claude/agents/. Each agent is a markdown file with its own system prompt, tool access, and model preference:
Here’s what a code-reviewer.md looks like:
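Here is a sketch of such an agent file; the `tools` and `model` fields anticipate the two paragraphs that follow, and the prompt text is illustrative rather than the article’s own:

```markdown
---
name: code-reviewer
description: Reviews diffs for bugs, missing tests, and style problems.
tools: Read, Grep, Glob
model: haiku
---

You are a meticulous code reviewer. Read the changed files, check them
against the conventions in CLAUDE.md, and report:

1. Bugs or logic errors, with file and line references
2. Missing or weak test coverage
3. Naming and style inconsistencies

Keep the report short. Do not modify any files.
```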
When Claude needs a code review done, it spawns this agent in its own isolated context window. The agent does its work, compresses the findings, and reports back. Your main session doesn’t get cluttered with thousands of tokens of intermediate exploration.
The tools field restricts what the agent can do. A security auditor only needs Read, Grep, and Glob. It has no business writing files. That restriction is intentional and worth being explicit about.
The model field lets you use a cheaper, faster model for focused tasks. Haiku handles most read-only exploration well. Save Sonnet and Opus for the work that actually needs them.
Personal agents go in ~/.claude/agents/ and are available across all projects.
The settings.json file inside .claude/ controls what Claude is and isn’t allowed to do. It’s where you define which tools Claude can run, which files it can read, and whether it needs to ask before running certain commands.
The complete file looks like this:
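A minimal sketch along those lines; the schema URL and exact rule patterns shown here are the commonly used ones and worth double-checking against the official documentation:

```json
{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "permissions": {
    "allow": [
      "Bash(npm run *)",
      "Bash(git status)",
      "Bash(git diff *)"
    ],
    "deny": [
      "Read(./.env)",
      "Read(./secrets/**)",
      "Bash(rm -rf *)"
    ]
  }
}
```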
Here’s what each part does.
The $schema line enables autocomplete and inline validation in VS Code or Cursor. Always include it.
The allow list contains commands that run without Claude asking for confirmation. For most projects, a good allow list covers:
* Bash(npm run *) or Bash(make *) so Claude can run your scripts freely
The deny list contains commands that are blocked entirely, no matter what. A sensible deny list blocks:
* Sensitive files like .env and anything in secrets/
If something isn’t in either list, Claude asks before proceeding. That middle ground is intentional. It gives you a safety net without having to anticipate every possible command upfront.
You can also have settings.local.json for personal overrides, following the same idea as CLAUDE.local.md. Create .claude/settings.local.json for permission changes you don’t want committed. It’s auto-gitignored.
You don’t interact with this folder often, but it’s useful to know what’s in it.
~/.claude/CLAUDE.md loads into every Claude Code session, across all your projects. Good place for your personal coding principles, preferred style, or anything you want Claude to remember, regardless of which repo you’re in.
~/.claude/projects/ stores session transcripts and auto-memory per project. Claude Code automatically saves notes to itself as it works: commands it discovers, patterns it observes, and architecture insights. These persist across sessions. You can browse and edit them with /memory.
~/.claude/commands/ and ~/.claude/skills/ hold personal commands and skills available across all projects.
You generally don’t need to manually manage these. But knowing they exist is handy when Claude seems to “remember” something you never told it, or when you want to wipe a project’s auto-memory and start fresh.
Here’s how everything comes together:
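In outline form, the pieces covered above fit together roughly like this:

```text
your-project/
├── CLAUDE.md                # project instructions, loaded every session
├── CLAUDE.local.md          # personal overrides, auto-gitignored
└── .claude/
    ├── rules/               # instruction files split by concern, optionally path-scoped
    ├── commands/            # /project:* slash commands
    ├── skills/              # self-triggering workflow packages
    ├── agents/              # subagent personas with their own tools and model
    ├── settings.json        # shared allow/deny permission rules
    └── settings.local.json  # personal permission overrides, auto-gitignored

~/.claude/
├── CLAUDE.md                # global personal preferences
├── commands/, skills/, agents/   # personal versions, available in every project
└── projects/                # session transcripts and per-project auto-memory
```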
If you’re starting from scratch, here’s a progression that works well.
Step 1. Run /init inside Claude Code. It generates a starter CLAUDE.md by reading your project. Edit it down to the essentials.
Step 2. Add .claude/settings.json with allow/deny rules appropriate for your stack. At minimum, allow your run commands and deny .env reads.
Step 3. Create one or two commands for the workflows you do most. Code review and issue fixing are good starting points.
Step 4. As your project grows and your CLAUDE.md gets crowded, start splitting instructions into .claude/rules/ files. Scope them by path where it makes sense.
Step 5. Add a ~/.claude/CLAUDE.md with your personal preferences. This might be something like “always write types before implementations” or “prefer functional patterns over class-based.”
That’s genuinely all you need for 95% of projects. Skills and agents come in when you have recurring complex workflows worth packaging up.
The .claude folder is really a protocol for telling Claude who you are, what your project does, and what rules it should follow. The more clearly you define that, the less time you spend correcting Claude and the more time it spends doing useful work.
CLAUDE.md is your highest-leverage file. Get that right first. Everything else is optimization.
Start small, refine as you go, and treat it like any other piece of infrastructure in your project: something that pays dividends every day once it’s set up properly.
...
Read the original on blog.dailydoseofds.com »
The Neko House Desk understands who’s really in charge of your home, and helps you convince them to let you have some space too.
A big part of the appeal of working from home is that even if you’re still working for somebody else, you get much more control over your immediate environment. Want to play music during your shift, while also spending it all in your pajamas and setting the thermostat to whatever temperature you want? Go right ahead! No one is going to stop you.
However, while you may not have to worry about your personal freedoms being encroached on by fussy managers or inconsiderate coworkers, that still doesn’t make you the ultimate boss of your home office if you have a pet cat. As feline fans know all too well, their pets tend to think of themselves as being the higher-ranking half of the relationship, and so really it’s a matter of how much of your workspace they’ll let you use, since they’ll, of course, claim dominion over at least a part of it.
Unfortunately, no amount of attempted explaining will convey to your cat how it’s in their best interest too to let you work efficiently, so that you can earn the money that pays for their treats, toys, and other essentials. So instead, Japanese furniture and homewares company Bibilab has developed a compromise: the Neko House Desk.
Neko is the Japanese word for cat, so you could also call this the Cat House Desk, but again, this piece of furniture is for people who own pets, not bordellos. Bibilab proudly built in numerous design points to help promote harmonious sharing between you and your kitties, starting with a two-tier cat space along the right side of the desk, each with side-access portals and a weight capacity of 20 kilograms (44 pounds) for the top section.
There’s another cat space on the underside of the desk surface, in the normally unutilized space in front of your knees when you’re sitting.
This cat lounging area is positioned so that you won’t bump into it while sitting at the desk, but is also close enough so that your cat can easily cross the gap onto your lap for some cuddles.
This section of the desk is also connected to the wider top-tier section on its right edge, allowing your cat to easily move from one to the other, or in or out of the desk entirely, without having to go under the desk itself.
There’s also a way for your cat to go through the top of the desk, as the designers have included a feature they call the “Surprise Cat Hole,” so that your cat can poke its head topside for a quick pet/scratch or for playtime.
This feature also gives you a great reminder/excuse to take periodic breaks for mental refreshment, helping you avoid the always dangerous situations of work burnout and cuteness deficiencies.
▼ The monitor says “Start,” but the cat says “Stop…and play with me!”
Combined, these features give your cat attractive places to occupy, which will hopefully keep them from lazing about in spots that would make it hard for you to get work done. Despite all the playful touches, the Neko House Desk is still a practical piece of furniture too, with slits to run cables through and even a spot for desktop PC users to put their tower (though this does take up the bottom half of the two-tier cat space, so you’ll probably want to get your pet’s permission first).
It’s also sized to easily pair with Bibilab’s Cat Tower Rack, another item from the company that gives your cats more space of their own so that you can have a little more for yourself too.
The Neko House Desk is priced at 24,800 yen (US$160) and available through Amazon Japan here, while the 16,862-yen Cat Tower Rack can be ordered here. And if you’re looking for some uniquely purpose-built furniture for when you’re done working for the day, Bibilab can help with that too.
Source: Bibilab, PR Times
Images: PR Times
● Want to hear about SoraNews24’s latest articles as soon as they’re published? Follow us on Facebook and Twitter!
[ Read in Japanese ]
...
Read the original on soranews24.com »
Alongside the various bugs you get, one of the issues of upgrading to MacOS 26 is that it has one of the most notorious inconsistency issues in window corners. I’m not sure what exactly pushes product designers to like the excessive roundness (one of the ugliest roundness examples I’ve ever seen is the current one in the YouTube UI design). I believe that UI design is the most influential (that is to say, contagious) form in our field, since designers just try to follow whatever big companies do (in fact I see this a lot in my work: when two designers are having an argument, one of them will resolve it with “let’s see how Apple draws that button”), which means that we are probably going to see this ugly effect elsewhere very soon.
Anyway, I recently had to upgrade to MacOS 26. And I found the edges ugly, like everyone else did. However, what’s even uglier is the inconsistency. Many people try to resolve this by disabling MacOS System Integrity Protection, which leaves them potentially vulnerable (arguably, since you only lose security over /root, which is not a big deal if someone has already gained access to your machine, at least for me). The reason you need to disable SIP is that to edit the dynamic libraries that system apps like Safari (which has crazy bad corners) use, you need to edit system libraries that live in the root. To me, though, the corners aren’t so bad, but the inconsistency is very annoying. So I think a better solution is this: instead of making everything less rounded, make everything more rounded, which requires you to edit only user apps (i.e. no SIP disabling needed). I forked a solution that makes things less rounded and modified it to follow my approach. It’s simply as follows:
You can also add this plist to load it whenever your computer boots:
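The post’s own plist isn’t reproduced here, but a generic launchd user agent along those lines, saved to ~/Library/LaunchAgents/, would look roughly like this (the label and script path are placeholders for whatever applies your corner patch):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Placeholder label; use any reverse-DNS name -->
    <key>Label</key>
    <string>org.example.fix-corners</string>
    <!-- Run the (hypothetical) patch script through a shell so ~ expands -->
    <key>ProgramArguments</key>
    <array>
        <string>/bin/zsh</string>
        <string>-c</string>
        <string>~/bin/fix-corners.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```

Loading it once with `launchctl load ~/Library/LaunchAgents/org.example.fix-corners.plist` (or the newer bootstrap subcommand) makes it run again on every login.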
Now at least everything is consistently bad. #Programming
...
Read the original on lr0.org »
On the first morning of Operation Epic Fury, 28 February 2026, American forces struck the Shajareh Tayyebeh primary school in Minab, in southern Iran, hitting the building at least two times during the morning session. American forces killed between 175 and 180 people, most of them girls between the ages of seven and 12.
Within days, the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target. Congress wrote to the US secretary of defense, Pete Hegseth, about the extent of AI use in the strikes. The New Yorker magazine asked whether Claude could be trusted to obey orders in combat, whether it might resort to blackmail as a self-preservation strategy, and whether the Pentagon’s chief concern should be that the chatbot had a personality. Almost none of this had any relationship to reality. The targeting for Operation Epic Fury ran on a system called Maven. Nobody was arguing about Maven.
Eight years ago, Maven was the most contested project in Silicon Valley. In 2018, more than 4,000 Google employees signed a letter opposing the company’s contract to build artificial intelligence for the Pentagon’s targeting systems. Workers organised a walk out. Engineers quit. And Google ultimately abandoned the contract. Palantir Technologies, a data analytics company and defence contractor co-founded by Peter Thiel, took it over and spent the next six years building Maven into a targeting infrastructure that pulls together satellite imagery, signals intelligence and sensor data to identify targets and carry them through every step from first detection to the order to strike.
The building in Minab had been classified as a military facility in a Defense Intelligence Agency database that, according to CNN, had not been updated to reflect that the building had been separated from the adjacent Islamic Revolutionary Guard Corps compound and converted into a school, a change that satellite imagery shows had occurred by 2016 at the latest. A chatbot did not kill those children. People failed to update a database, and other people built a system fast enough to make that failure lethal. By the start of the Iran war, Maven — the system that had enabled that speed — had sunk into the plumbing and become part of the military’s infrastructure, and the argument was all about Claude. This obsession with Claude is a kind of AI psychosis, though not of the kind we normally talk about, and it afflicts critics and opponents of the technology as fiercely as it does its boosters. You do not have to use a language model to let it organise your attention or distort your thinking.
In 2019, the scholar Morgan Ames published The Charisma Machine, a study of how certain technologies draw attention, resources and attribution toward themselves and away from everything else. The usual framework for understanding this dynamic is “hype”, but hype only describes what boosters do, and it assigns critics a privileged debunking role that still leaves the technology at the centre of every argument. A charismatic technology shapes the whole field around it, the way a magnet organises iron filings. LLMs may be the most powerful instance of this type in history.
By the time the war began, “AI safety” and “alignment” and “hallucination” and “stochastic parrots” had become the terms of every argument about artificial intelligence, structuring and limiting what we could even say. Worse, “artificial intelligence” itself had come to be synonymous with LLMs. When the school was bombed, those were the terms people reached for, despite the fact that this critical apparatus offered a poor fit for the older, more mature stack of technologies involved in targeting. The real question, the question almost nobody was asking, is not about Claude or any language model. It is a bureaucratic question about what happened to the kill chain, and the answer is Palantir.
As military jargon goes, “kill chain” is a remarkably honest term. In essence, it refers to the bureaucratic framework for organising the steps between detecting something and destroying it. The oldest reference to the term itself I can find is from the 1990s, but the idea is quite old — dating at least to the 1760s, when French artillery reformers began replacing the gunner’s experienced eye with ballistic tables, elevation screws and standardised firing procedures. The steps in the kill chain are subject to constant change, to keep pace with changes in targeting doctrine, but also to incorporate whatever management fads come to afflict the military’s strategic thinkers. The US military has named and renamed the steps for 80 years. In the second world war the sequence was find, fix, fight, finish. By the 1990s the air force had stretched it to find, fix, track, target, engage, assess, or F2T2EA. Every generation of military technology has been sold on the promise of making everything about kill chains shorter, except for the acronyms.
Palantir’s Maven Smart System is the latest iteration of this compression, and it grew out of a shift in strategic thinking during Obama’s second term. In 2014, the secretary of defense, Chuck Hagel, and his deputy, Robert Work, announced what they called the “third offset strategy”. An “offset” in this line of thinking is a bet that a technological advantage can compensate for a strategic weakness the country cannot fix directly. The first two offsets addressed the same problem: the United States could not match the Soviet Union in conventional forces. The thinking was that the Red Army could just continue to throw personnel at a problem, as they did at Stalingrad, or, to be anachronistic, as the contemporary Russian army did at Bakhmut and Avdiivka. Nuclear weapons, the first offset, made the personnel advantage irrelevant in the 1950s. When the Soviets reached nuclear parity in the 1970s, precision-guided munitions and stealth offered the promise that a smaller force could defeat a larger one. By 2014, that advantage was eroding. China and Russia had spent two decades acquiring precision-guided munitions and building defence systems designed to keep American forces out of range. Robert Work insisted that the third offset was not about any particular technology but about using technology to reorganise how the military operated, letting the US make decisions faster than China and Russia, overwhelming and disorienting the enemy by maintaining a faster operational tempo than they could match.
In April 2017, early in the first Trump administration, Work helped establish the Algorithmic Warfare Cross-Functional Team, designated Project Maven. One of the generals overseeing Maven, Lt Gen Jack Shanahan, put the problem plainly: thousands of intelligence analysts were spending 80% of their time on mundane tasks, drowning in footage from surveillance drones that no one had time to watch. A single Predator drone mission could generate hundreds of hours of video, and the analysts tasked with understanding this were faced with an information overload problem. “We’re not going to solve it by throwing more people at the problem,” Shanahan said. “That’s the last thing that we actually want to do.” The core conceit of the project was that the machine could watch so that the analyst could think.
The Pentagon needed someone to build it. Google took the contract, and what happened next became the most visible labour action in the history of Silicon Valley.
After Google abandoned the Maven contract, Palantir took it over in 2019. The XVIII Airborne Corps began testing the system in an exercise called Scarlet Dragon, which started in 2020 as a tabletop wargaming exercise in a windowless basement at Fort Bragg. Its commander, Lt Gen Michael Erik Kurilla, wanted to build what he called the first “AI-enabled corps” in the army. The goal was to test whether the system could give a small team the targeting capacity that had previously required thousands of people.
Over the next five years, Scarlet Dragon grew into a military exercise using live ammunition, spanning multiple states and branches of the armed forces, with “forward-deployed engineers” from Palantir and other contractors embedded alongside soldiers. Each time the exercise was run, it was meant to answer the same question: how fast could the system move from detection to decision? The benchmark was the 2003 invasion of Iraq, where roughly 2,000 people worked the targeting process for the entire war. During Scarlet Dragon, 20 soldiers using Maven handled the same volume of work. By 2024, the stated goal was 1,000 targeting decisions in an hour. That is 3.6 seconds per decision for the system as a whole, or, divided among a 20-person team, one decision every 72 seconds for each individual “targeteer”.
The Maven Smart System is the platform that came out of those exercises, and it, not Claude, is what is being used to produce “target packages” in Iran. There are real limits to what a civilian such as myself can know about this system, and what follows is based on publicly available information, assembled from Palantir product demos and conference presentations, as well as instructional material produced for military users. But we can know quite a bit.
The Maven interface looks like a military-skinned version of corporate project management software crossed with a mapping application. What the military analyst building the target list sees is either a map layered with intelligence data or a screen organised into columns, each representing a stage of the targeting process. Individual targets move across the columns from left to right as they progress through each stage, a format borrowed from Kanban, a “lean manufacturing” workflow system developed at Toyota, and now widely used in software development.
Before Maven, operators worked across eight or nine separate systems simultaneously, pulling data from one, cross-referencing in another, manually moving detections between platforms to assemble the intelligence and approvals needed for each strike. Maven consolidated all of these behind a single interface. Cameron Stanley, the Pentagon’s chief digital and AI officer, called it an “abstraction layer”, a common term in software engineering, meaning a system that hides the complexity underneath it. Humans run the targeting. Underneath the interface, machine-learning systems analyse satellite imagery and sensor data to detect and classify objects, scoring each identification by how confident the system is that it got it right. Three clicks convert a data point on the map into a formal detection and move it into a targeting pipeline. These targets then move through columns representing different decision-making processes and rules of engagement. The system recommends how to strike each target — which aircraft, drone or missile to use, which weapon to pair with it — what the military calls a “course of action”. The officer selects from the ranked options, and the system, depending on who is using it, either sends the target package to an officer for approval or moves it to execution.
The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. They predate large language models by years. Neither Claude nor any other LLM detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer — this is where Claude sits — that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system. What mattered was what Maven did to the targeting process: it consolidated the systems, compressed the time and reduced the people. That is not a new idea. The US military has been trying to close the gap between seeing something and destroying it for as long as that gap has existed, and every attempt has produced the same failure. Maven may not even be the most extreme case.
In the late 1960s, the US faced a version of the same problem in Vietnam. Supplies were moving south along the Ho Chi Minh trail through jungle the military could not see into. The solution was Operation Igloo White, a $1bn-a-year programme that scattered 20,000 acoustic and seismic sensors along the trail. These sensors transmitted data to relay aircraft overhead, which fed the signals to IBM 360 computers at Nakhon Phanom airbase in Thailand. The computers analysed the sensor data and predicted where convoys would be, and strike aircraft were directed to those coordinates.
The system could sense but it could not see. It could detect a vibration but it could not tell a truck from an ox cart. The North Vietnamese figured this out. They played recordings of truck engines, herded animals near the sensors to trigger vibration detection, and hung buckets of urine in trees to set off the chemical detectors. The system could be fooled because nobody in the process could look at what it was sensing. The air force claimed 46,000 trucks were destroyed or damaged over the course of the campaign. The CIA reported that the claims for a single year exceeded the total number of trucks believed to exist in all of North Vietnam. The system’s own output was the only measure of its performance, and nobody outside the system had standing to challenge it. Air force historian Bernard Nalty later called the service’s casualty computations “an exercise in metaphysics rather than mathematics” and his colleague Earl Tilford concluded that “the air force succeeded only in fooling itself”. When daytime reconnaissance flights failed to find the wreckage of all those trucks, air force personnel invented a creature to explain the absence. They called it the “great Laotian truck eater”.
The pattern that played out in Vietnam — a targeting system that could only measure its own performance and ended up believing its own output — is actually older than digital computing. Michael Sherry’s 1987 book The Rise of American Air Power traces it to the founding doctrine of precision bombing, whose confidence in its own methods made examining what those methods produced unnecessary. “Belief in success,” Sherry wrote, “encouraged imprecision about how to achieve it.” By 1944, operations analysts on both sides of the Atlantic were measuring bombing in a shared language of industrial optimisation. Civilians bombed out of their homes were recorded as “dehoused”. For every tonne of bombs dropped, analysts calculated how many hours of enemy labour it destroyed. One British evaluation treated the bomber itself as a capital asset: a single sortie against a German city wiped off the cost of building the aircraft, and everything after that was “clear profit”. Sherry called the resulting mindset “technological fanaticism”.
Sherry’s point was not that anyone chose destruction. It was that the people refining the technique of bombing stopped asking what the bombing was for. But even by the time the operations researchers had got their hands on targeting, this logic was already taking shape. As the historian of science William Thomas has argued, the operations analysts did not impose this logic on the military; the military was already converting operational experience into systematic procedure, and had been for decades. Nobody stopped making judgments. But the judgments were no longer about whether the bombing served a strategic purpose. They were about how to measure it and how to optimise around those measurements.
Carl von Clausewitz, the 19th-century Prussian general whose writings remain the foundation of western military thought, had a word for everything the optimisation leaves out. He called it “friction”, the accumulation of uncertainty, error and contradiction that ensures no operation goes as planned. But friction is also where judgment forms. Clausewitz observed that most intelligence is false, that reports contradict each other. The commander who has worked through this learns to see the way an eye adjusts to darkness, not by getting better light but by staying long enough to use what light there is. This “staying” is what takes time. Compress the time and the friction does not disappear. You just stop noticing it. Clausewitz called this kind of planning a “war on paper”. The plan proceeds without resistance, not because there is none, but because everything connecting the plan to the real world has been stripped out.
Air power is uniquely vulnerable to this. The pilot never sees what the bomb hits. The analyst works from imagery, coordinates and databases. The entire enterprise is mediated by representations of the target, not the target itself, which means the gap between the package and the world can widen without anyone in the process feeling it. The 2003 invasion of Iraq, the operation that Scarlet Dragon would later use as its benchmark, was a case in point. Marc Garlasco, the Pentagon’s chief of high-value targeting during the invasion, ran the fastest targeting cycle the US had operated to that point. He recommended 50 strikes on senior Iraqi leadership. The bombs were precise — they hit exactly where they were aimed — but the intelligence behind them was not. None of the 50 killed its intended target. Two weeks after the invasion, Garlasco left the Pentagon for Human Rights Watch, went to Iraq, and stood in the crater of a strike he had targeted himself. “These aren’t just nameless, faceless targets,” he said later. “This is a place where people are going to feel ramifications for a long time.” The targeting cycle had been fast enough to hit 50 buildings and too fast to discover it was hitting the wrong ones.
The air force’s own targeting guide, in effect during the Iraq war, said this was never supposed to happen. Published in 1998, it described the six functions of targeting as “intertwined”, with the targeteer moving “back” to refine objectives and “forward” to assess feasibility. “The best analysis,” the manual stated, “is reasoned thought with facts and conclusions, not a checklist.” But Jon Lindsay, who served as a navy intelligence officer in Kosovo and later studied special operations targeting in Iraq, found something different. Once a target was reified on a PowerPoint slide — the target intelligence package, or TIP — it became a black box. Questioning the assumptions behind it got harder as the hunt gained momentum, as the folder thickened with what Lindsay calls “representational residua”. There was more machinery for building up a target than for inspecting the quality of its construction. Personnel became disinclined to ask whether some targets were potential allies, or not actually bad guys at all, because producing targets meant participating in the hunt. The targeting guide had warned about this too. “If targeteers don’t provide full targeting service,” it read, “then other well meaning but undertrained and ill-experienced groups will step in.” Maven eventually would.
Lindsay’s book Information Technology and Military Power is the most careful study I’ve found of how targeting actually works, at least partially because it was written by someone who actually did it. During the Kosovo air war, Gen Wesley Clark demanded 2,000 targets, which made it easy to justify any target’s connection to the Milošević government. The CIA nominated just one target during the entire war: the federal directorate of supply and procurement. Analysts had a street address but not coordinates, so they tried to reverse-engineer a location from three outdated maps. They ended up hitting the Chinese embassy — which had recently relocated — 300 metres from the building they were aiming for. The state department knew that the embassy had moved. The military’s facilities database did not. Target reviews failed to notice, because each validation relied on the last. Lindsay calls this “circular reporting”: an accumulation of supporting documents that “created the illusion of multiple validations” while amplifying a single error. The PowerPoint slide looked as well vetted as the hundreds of others that Nato struck without incident. On the night of the strike, an intelligence analyst phoned headquarters to express doubts. Asked specifically about collateral damage, he could not articulate a concern. The strike proceeded. It killed three Chinese journalists. Lindsay, writing in his journal at the time, called the result “an immense error, perfectly packaged”.
In 2005, Lt Col John Fyfe of the US air force published a study of time-sensitive targeting during the 2003 invasion. Fyfe highlighted the different ways UK and US forces approached this challenge. In the Combined Air Operations Center, RAF officers served in key leadership positions alongside their American counterparts. They operated under more restricted rules of engagement. Fyfe noted that their “more reserved, conservative personalities” produced what he called “a very positive dampening effect on the sometimes harried, chaotic pace of offensive operations”. The contrast between shifts was visible: American leaders pressed ahead full bore, while British officers methodically reconsidered risk and cost-benefit trade-offs before approving execution. On UK-led shifts, there were no friendly fire incidents and no significant collateral damage. On numerous occasions, Fyfe notes, the British officer in charge prevented the operation from getting ahead of itself. What the next generation of reformers would measure as latency — the delay between identifying a target and striking it — was the window in which mistakes could be caught.
From inside the efficiency frame, every feature Fyfe describes registered as a defect. The UK shifts were slower. The restricted rules of engagement added constraints. The dampening effect added time. Speed saves lives, the argument goes, but the fastest targeting cycle before Maven was Garlasco’s, and it struck 50 buildings without hitting a single intended target. Scarlet Dragon eliminated all of it. The disagreements about targeting stopped. So did the deliberation, the hesitation and the moments when someone had time to object or notice something was off.
Organisations that run on formal procedure need someone inside the process to interpret rules, notice exceptions, recognise when the categories no longer fit the case. If the organisation concedes that its outcomes depend on the discretion of the people executing it, then the procedure is not a procedure but a suggestion, and the authority the organisation derives from appearing rule-governed collapses. So the judgment has to happen, and it has to look like something else. It has to look like following the procedure rather than interpreting it.
I’ve come to think of this as the “bureaucratic double bind” — the organisation cannot function without the judgment, and it cannot acknowledge the judgment without undermining itself and being seen as “political”. One solution to this problem is to replace the judgment with a number. In his 1995 book Trust in Numbers, the historian of science Theodore Porter argued that organisations adopt quantitative rules not because numbers are more accurate but because they are more defensible. Judgment is politically vulnerable. Rules are not. The procedure exists to make discretion disappear, or seem to. The system’s actual flexibility lives entirely in this unacknowledged interpretive work, which means it can be removed by anyone who mistakes it for inefficiency.
In 1984, the historian David Noble showed that when the US military and American manufacturers automated their factory floors, they consistently chose systems that were slower and more expensive but which moved decision-making away from workers and into management. The point was not efficiency — it was frequently extremely wasteful — but control. A worker who understands what they are doing can exercise judgment the institution cannot govern. Move that understanding into the system, and the worker has nothing left to do but follow instructions. Alex Karp, the CEO of Palantir, describes exactly this achievement in his 2025 book, The Technological Republic. “Software is now at the helm,” he writes, with hardware “serving as the means by which the recommendations of AI are implemented in the world.” His model for what this should look like comes from nature: bee swarms and the murmurations of starlings. “There is no mediation of the information captured by the scouts once they return to the hive,” Karp writes. The starlings need no permission from above, they require “no weekly reports to middle management, no presentations to more senior leaders, no meetings or conference calls to prepare for other meetings”. This sounds liberating, even utopian. But the signal that passes without mediation is also the signal that nobody can question.
Karp thinks he is destroying bureaucracy. He is encoding it. The meetings, the weekly reports, the presentations to senior leaders that he holds in such contempt: he treats these as the bureaucratic process itself. They are not. They were where people interpreted procedure, where someone could notice when the categories no longer fit the case. The steps of targeting doctrine are still there; they are columns on a workflow board now, stages a target passes through on its way to being struck. What Karp eliminated was the discretion the institution could never admit it depended on. What remains is a bureaucracy that can execute its rules but with no one left to interpret them. Bureaucracy encoded in software does not bend. It shatters.
The target package for the Shajareh Tayyebeh school presented a military facility. Lucy Suchman, whose 1987 book Plans and Situated Actions remains the sharpest account of how formal procedures obscure the work that actually produces their outcomes, would not have been surprised. Plans always look complete afterward. They achieve completeness by filtering out everything that wasn’t legible to their categories. This package looked like every other package in the queue. But outside the package, the school appeared in Iranian business listings. It was visible on Google Maps. A search engine could have found it. Nobody searched. At 1,000 decisions an hour, nobody was going to. A former senior government official asked the obvious question: “The building was on a target list for years. Yet this was missed, and the question is how.” How indeed.
Congress did not authorise this war. In two weeks, American forces struck 6,000 targets. The school was one of them. The strike killed almost 200 people, and the reporting reached for “AI error”, which domesticated the event into something a better algorithm or better guardrails could have prevented.
In the days after the strike, the charisma of AI organised the entire political conversation around the technology: whether Claude hallucinated, whether the model was aligned, whether Anthropic bore responsibility for its deployment. The constitutional question of who authorised this war and the legal question of whether this strike constitutes a war crime were displaced by a technical question that is easier to ask and impossible to answer in the terms it set. The Claude debate absorbed the energy. That is what charisma does.
It has also occluded something deeper: the human decisions that led to the killing of between 175 and 180 people, most of them girls between the ages of seven and 12. Someone decided to compress the kill chain. Someone decided that deliberation was latency. Someone decided to build a system that produces 1,000 targeting decisions an hour and call them high-quality. Someone decided to start this war. Several hundred people are sitting on Capitol Hill, refusing to stop it. Calling it an “AI problem” gives those decisions, and those people, a place to hide.
An earlier version of this article appeared on Artificial Bureaucracy, Kevin T Baker’s Substack
...
Read the original on www.theguardian.com »
The International Olympic Committee has barred transgender athletes from competing in the women’s category of the Olympics and said that all participants in those events must undergo genetic testing.
The decision, the most consequential since Kirsty Coventry was elected last year as the first woman to serve as president of the I.O.C., followed a board meeting and months of speculation over the organization’s policy on one of the most contentious issues facing global sports. The rules will be applicable starting at the next Olympics, in Los Angeles in 2028.
Under the new policy, eligibility will be determined by a one-time gene test, according to the I.O.C. The test, which is already being used in track and field, requires screening via saliva, a cheek swab or a blood sample.
When Ms. Coventry, a former Olympic champion swimmer from Zimbabwe, campaigned to lead the organization, she frequently said how important it was to protect the women’s category amid broader — and often bitter — debates about the participation of transgender athletes in sporting competitions.
...
Read the original on www.nytimes.com »
There’s a gap between giving an agent your real account and stopping everything to build a container or VM. jai fills that gap. One command, no images, no Dockerfiles — just a lightweight boundary for the workflows you’re already running: quick coding help, one-off local tasks, running installer scripts you didn’t write. Use AI agents without handing over your whole account. jai gives your working directory full access and keeps the rest of your home behind a copy-on-write overlay — or hidden entirely.

One-line installer scripts, AI-generated shell commands, unfamiliar CLIs — stop running them against your real home directory. Drop jai in front and the worst case gets a lot smaller.

No images to build, no Dockerfiles to maintain, no 40-flag bwrap invocations. Just jai your-agent. If containment isn’t easier than YOLO mode, nobody will bother.
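A rough sketch of what that looks like in practice, based only on the “jai your-agent” pattern described above; the agent command and installer script are placeholders, jai is assumed to simply prefix whatever command you would otherwise run, and any flags for choosing casual versus strict mode are omitted because they are not documented here:

    # minimal sketch, assuming jai prefixes the command you would otherwise run directly
    jai your-agent          # the agent gets the working directory; the rest of $HOME is overlaid
    jai sh ./install.sh     # same idea for a one-off installer script you didn't write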
Pick the level of isolation that fits your workflow.
jai is free software, brought to you by the Stanford Secure Computer Systems research group and the Future of Digital Currency Initiative. The goal is to get people using AI more safely.
jai is not trying to replace containers. It fills a different niche.
- Containers: great for reproducible, image-based environments, but heavier to set up for ad-hoc sandboxing of host tools, and no overlay-on-home workflow.
- bwrap: a powerful namespace sandbox, but it requires explicitly assembling the filesystem view — often turns into a long wrapper script, which is the friction jai removes.
- chroot: not a security mechanism. No mount isolation, no PID namespace, no credential separation. Linux documents it as not intended for sandboxing.
jai is not a promise of perfect safety. jai is a casual sandbox — it reduces the blast radius, but does not eliminate all the ways AI agents can harm you or your system. Casual mode does not protect confidentiality. Even strict mode is not equivalent to a hardened container runtime or VM. When you need strong multi-tenant isolation or defense against a determined adversary, use a proper container or virtual machine. Read the full security model →
...
Read the original on jai.scs.stanford.edu »
Iran-linked hackers successfully broke into FBI Director Kash Patel’s personal email, the Department of Justice confirmed to Reuters on Friday.
Reuters could not authenticate the leaked emails themselves but noted that the Gmail address matched an email account “linked to Patel in previous data breaches preserved by the dark web intelligence firm District 4 Labs.” The DOJ suggested the emails appeared to be authentic.
On their website, the Handala Hack Team boasted that Patel “will now find his name among the list of successfully hacked victims.” The hacker group taunted Patel by sharing photos of him sniffing cigars and holding up a jug of rum, along with other documents that Reuters reported were from 2010 to 2019.
“Soon you will realize that the FBI’s security was nothing more than a joke,” the group posted, as documented in screenshots from the website shared widely on X.
The hack came after the DOJ disrupted some of the hacker group’s websites earlier this month. In a press release, Patel threatened to “hunt” down the group, which Reuters reported “calls itself a group of pro-Palestinian vigilante hackers.” After detailing four attacks this month that the group had taken credit for, Patel offered rewards of up to $10 million for information on its members.
“Iran thought they could hide behind fake websites and keyboard threats to terrorize Americans and silence dissidents,” Patel said. “We took down four of their operation’s pillars and we’re not done. This FBI will hunt down every actor behind these cowardly death threats and cyberattacks and will bring the full force of American law enforcement down on them.”
...
Read the original on arstechnica.com »