10 interesting stories served every morning and every evening.




1 644 shares, 32 trendiness

Microsoft insiders push to end mandatory Microsoft Accounts on Windows 11

Yesterday, Microsoft unveiled a sweeping set of changes coming soon to Windows 11 that are designed to address common complaints and issues that people have with the platform. Everything from slow performance to unreliable updates, AI bloat, and excessive ads and enshittification is being tackled.

It’s all great news, but there’s one big issue that Microsoft notably isn’t addressing with these sweeping changes, and that’s Windows 11’s forced Microsoft account requirement. Even with all these improvements in tow, Windows 11 will still force you to set up an internet connection and sign in with a Microsoft account during the out-of-box experience.

That’s really unfortunate, as the forced Microsoft account requirement is probably one of the biggest complaints I see people have about Windows 11, and it’s not currently being addressed. In fact, the blog post Microsoft published doesn’t mention Microsoft accounts at all.

The good news is that this might not be the case forever. Microsoft Vice President and overall developer legend Scott Hanselman has posted on X in response to someone asking him about possibly relaxing the Microsoft account requirements, saying, “Ya I hate that. Working on it.”

I understand that Hanselman isn’t the only person at Microsoft who doesn’t like the forced Microsoft account requirements, and that there are a number of people internally who are pushing to relax them. However, it’s not as simple as just deciding to remove the requirement and calling it a day.

Microsoft is a big company, and a lot of teams and organizations stand to benefit from Windows forcing a Microsoft account. So it’s likely that this process has to go through a committee of sorts, where each side will present its reasons for and against removing the requirement.

So the good news is that the Windows team is definitely looking into its options when it comes to relaxing or removing the Microsoft account requirement, but there’s no committed plan to actually make any changes as of yet.

But people with influence internally at Microsoft are fighting for it, which should hopefully reassure those who were hoping to see progress in this area. Ultimately, this is a policy issue, not a technical one. It would be very easy for Microsoft to remove this requirement from a technical perspective; what remains to be decided is whether the company can agree to make the change.

If Microsoft were to relax its requirements around using an online account on Windows 11, is that something you would take advantage of? I feel like I’m in the minority when it comes to using a Microsoft account on Windows, as I actually like syncing with OneDrive and being automatically signed in to Edge and Office. So, let me know if this requirement is something you want Microsoft to relax.


...

Read the original on www.windowscentral.com »

2 612 shares, 24 trendiness

Hold on to Your Hardware

A warning about rising prices, vanishing consumer choice, and a future where owning a computer may matter more than ever as hardware, power, and control drift toward data centers and away from people.


For the better part of two decades, consumers lived in a golden age of tech. Memory got cheaper, storage increased in capacity, and hardware got faster and absurdly affordable. Upgrades were routine, almost casual. If you needed more RAM, a bigger SSD, or a faster CPU or GPU, you barely had to wait a week for a discount offer, and you moved on with your life. This era is ending.

What’s forming now isn’t just another pricing cycle or a short-term shortage; it is a structural shift in the hardware industry that paints a deeply grim outlook for consumers. Today, I am urging you to hold on to your hardware, as you may not be able to replace it affordably in the future. While I have always been a harsh critic of today’s consumer industry, as well as the ideas behind it, and a strong proponent of “buying it for life” (meaning, investing in durable, repairable, quality products), the industry’s shift has nothing to do with the protection of valuable resources or the environment. It is instead a move toward a trajectory that has the potential to erode technological self-sufficiency and independence for people all over the world.

In recent months, the buzzword “RAM-pocalypse” has started popping up across tech journalism and enthusiast circles. It’s an intentionally dramatic term that describes the sharp increase in RAM prices, primarily driven by high demand from data centers and AI technology, which most people had considered a mere blip in the market. This presumed temporary blip, however, turned out to be a lot more than just that: one manufacturer after the other is openly stating that prices will continue to rise, suppliers are forecasting shortages of specific components that could last well beyond 2028, and key players like Western Digital and Micron are either completely disregarding or even exiting the consumer market altogether.

The RAM-pocalypse isn’t just a temporary headline anymore, but has seemingly become long-term reality. However, RAM, and memory in general, is only the beginning.

The main reason for the shortages, and hence the increased prices, is data center demand, specifically from AI companies. These data centers require mind-boggling amounts of hardware, specifically RAM, storage drives, and GPUs, which for AI workloads are themselves extremely RAM-heavy. The enterprise demand for specific components simply outpaces current global production capacity and outbids the comparatively poor consumer market.

For example, OpenAI’s Stargate project alone reportedly requires approximately 900,000 DRAM wafers per month, which could account for roughly 40% of current global DRAM output. Other big tech giants, including Google, Amazon, Microsoft, and Meta, have placed open-ended orders with memory suppliers, accepting as much supply as available. The existing and future data centers of these companies are expected to consume 70% of all memory chips produced in 2026.

However, memory is just the first domino.

RAM and SSDs are where the pain is most visible today, but rest assured that the same forces are quietly reshaping all aspects of consumer hardware. One of the most immediate and tangible consequences of this broader supply-chain realignment is a wave of sharp, cascading price hikes across consumer electronics, with LPDDR memory standing out as an early pressure point that most consumers didn’t recognize until it was already unavoidable.

LPDDR is used in smartphones, laptops, tablets, handheld consoles, routers, and increasingly even low-power PCs. It sits at the intersection of consumer demand and enterprise prioritization, making it uniquely vulnerable when manufacturers reallocate capacity toward AI accelerators, servers, and data-center-grade memory, where margins are higher and contracts are long-term. As fabs shift production toward HBM and server DRAM, as well as GPU wafers, consumer hardware production quietly becomes non-essential, tightening supply just as devices become more power- and memory-hungry, all while continuing on their path to remain frustratingly unserviceable and un-upgradable.

The result is a ripple effect, in which device makers pay more for chips and memory and pass those costs on through higher retail prices, cut base configurations to preserve margins, or lock features behind premium tiers. At the same time, consumers lose the ability to compensate by upgrading later, because most components these days, like LPDDR, are soldered down by design. This is further amplified by scarcity, as even modest supply disruptions can spike prices disproportionately in a market where just a few suppliers dominate, turning what should be incremental cost increases into sudden jumps that affect entire product categories at once.

In practice, this means that phones, ultrabooks, and embedded devices are becoming more expensive overnight, not because of new features, but because the invisible silicon inside them has quietly become a contested resource in a world that no longer builds hardware primarily for consumers.

In late January 2026, the Western Digital CEO confirmed during an earnings call that the company’s entire HDD production capacity for calendar year 2026 is already sold out. Let that sink in for a moment: Q1 hasn’t even ended, and a major hard drive manufacturer has zero remaining capacity for the year. Firm purchase orders are in place with its top customers, and long-term agreements already extend into 2027 and 2028. Consumer revenue now accounts for just 5% of Western Digital’s total sales, while cloud and enterprise clients make up 89%. The company has, for all practical purposes, stopped being a consumer storage company.

And Western Digital is not alone. Kioxia, one of the world’s largest NAND flash manufacturers, admitted that its entire 2026 production volume is already in a “sold out” state, with the company expecting tight supply to persist through at least 2027 and long-term customers facing 30% or higher year-on-year price increases. Adding to this, the Silicon Motion CEO put it bluntly during a recent earnings call:

“We’re facing what has never happened before: HDD, DRAM, HBM, NAND… all in severe shortage in 2026.”

In addition, the Phison CEO has gone even further, warning that the NAND shortage could persist until 2030, and that it risks the “destruction” of entire segments of the consumer electronics industry. He also noted that factories are now demanding prepayment for capacity three years in advance, an unprecedented practice that effectively locks out smaller players.

The collateral damage of this can already be felt, and it’s significant. For example, Valve confirmed that the Steam Deck OLED is now “out of stock intermittently in multiple regions due to memory and storage shortages”. All models are currently unavailable in the US and Canada, the cheaper LCD model has been discontinued entirely, and there is no timeline for when supply will return to normal. Valve has also been forced to delay the pricing and launch details for its upcoming Steam Machine console and Steam Frame VR headset, directly citing memory and storage shortages.

At the same time, Sony is considering delaying the PlayStation 6 to 2028 or even 2029, and Nintendo is reportedly contemplating a price increase for the Switch 2, less than a year after its launch. Both decisions are seemingly driven by the same memory supply constraints. Meanwhile, Microsoft has already raised prices on the Xbox.

Now you might think that everything so far is about GPUs and other gaming-related hardware, but that couldn’t be further from the truth. General computing devices like the Raspberry Pi are not immune to any of this either. The Raspberry Pi Foundation has been forced to raise prices twice in three months, with the flagship Raspberry Pi 5 (16GB) jumping from $120 at launch to $205 as of February 2026, a 70% increase driven entirely by LPDDR4 memory costs. What was once a symbol of affordable computing is rapidly being priced out of reach for the educational and hobbyist communities it was designed to serve.

HP, on the other hand, seems to have already prepared for the hardware shortage by launching a laptop subscription service where you pay a monthly fee to use a laptop but never own it, no matter how long you subscribe. While HP frames this as a convenience, the timing, right in the middle of a hardware affordability crisis, makes it feel a lot more like a preview of a “rented compute” future. But more on that in a second.

“But we’ve seen price spikes before, due to crypto booms, pandemic shortages, factory floods and fires!”, you might say. And while we did live through those crises, things eventually eased when bubbles popped and markets or supply chains recovered. The current situation, however, doesn’t appear to be going away anytime soon, as it looks like the industry’s priorities have fundamentally changed.

These days, the biggest customers are not gamers, creators, PC builders, or even crypto miners anymore. Today, it’s hyperscalers: companies that use hardware for AI training clusters, cloud providers, enterprise data centers, as well as governments and defense contractors. Compared to these hyperscalers, consumers are small fish in a big pond.

These buyers don’t care if RAM costs 20% more, and neither do they wait for Black Friday deals. Instead, they sign contracts measured in exabytes and billions of dollars. With such clients lining up, the consumer market is suddenly an inconvenience for manufacturers. Why settle for smaller margins and deal with higher marketing and support costs, fragmented SKUs, price sensitivity, and retail logistics headaches, when you can have behemoths throwing money at you? Why sell a $100 SSD to one consumer, when you can sell a whole rack of enterprise NVMe drives to a data center with virtually infinite money?

All of this goes to show that the consumer market is not just deprioritized; it is being starved. In fact, IDC has already warned that the PC market could shrink by up to 9% in 2026 due to skyrocketing memory prices, and has described the situation not as a cyclical shortage but as a “potentially permanent, strategic reallocation of the world’s silicon wafer capacity”.

Leading PC OEMs including Lenovo, Dell, HP, Acer, and ASUS have all signaled 15-20% PC price increases for 2026, with some models seeing even steeper hikes. Framework, the repairable laptop company, has also been transparent about rising memory costs impacting its pricing. And analyst Jukan Choi recently revised his shortage timeline estimate, noting that DRAM production capacity is expected to grow at just 4.8% annually through 2030, with even that incremental capacity concentrated on HBM rather than consumer memory. TrendForce’s latest forecast projects DRAM contract prices rising by 90-95% quarter over quarter in Q1 2026. And that is not a typo.
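To gauge the magnitude of that figure, here is a quick back-of-the-envelope calculation (my own arithmetic, not a number from TrendForce, whose forecast covers only Q1 2026): a 90% quarter-over-quarter increase, if it were sustained for four quarters, would compound to roughly thirteen times the starting price.

```python
# Illustrative compounding: what a sustained 90% quarter-over-quarter
# price rise would do to a price index over one year. Hypothetical
# extrapolation only; the cited forecast is for a single quarter.
start_index = 100.0                       # arbitrary starting price index
qoq_increase = 0.90                       # 90% rise per quarter
after_one_year = start_index * (1 + qoq_increase) ** 4
print(round(after_one_year))              # about 1303, i.e. roughly 13x
```

Even if the rise cooled off after one quarter, a single 90% jump already nearly doubles contract prices, which is consistent with the double-digit retail increases the OEMs are signaling.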

The price of hardware is one thing, but value for money is another aspect that appears to be only getting worse from here on. Already today, consumer parts feel like cut-down versions of enterprise silicon. As AI accelerators and server chips dominate R&D budgets, consumer improvements will slow even further, or arrive at higher prices justified as “premium features”. This is true for CPUs and GPUs, and it will be equally true for motherboards, chipsets, power supplies, networking, and more. We will likely see fewer low-end options, more segmentation, artificial feature gating, and generally higher baseline prices that, once established, won’t be coming back down again.

As enterprise standards become the priority, consumer gear is becoming an afterthought that is being rebadged, overpriced, and poorly supported. The uncomfortable truth is that the consumer hardware market is no longer the center of gravity, as we all were able to see at this year’s CES. It’s orbiting something much larger, and none of this is accidental. The industry isn’t failing, it’s succeeding, just not for you.

And to be fair, from a corporate standpoint, this pivot makes perfect sense.

AI and enterprise customers are rewriting revenue charts, all while consumers continue to be noisy, demanding, and comparatively poor. It is pretty clear that consumer hardware is becoming a second-class citizen, which means that the machines we already own are more valuable than we might think right now.

“But what does the industry think the future will look like if nobody can afford new hardware?”, you might be asking.

There is a darker, conspiratorial interpretation of today’s hardware trends that reads less like market economics and more like a rehearsal for a managed future. Businesses, having discovered that ownership is inefficient and obedience is profitable, are quietly steering society toward a world where no one owns compute at all, where hardware exists only as an abstraction rented back to the public through virtual servers, SaaS subscriptions, and metered experiences, and where digital sovereignty, which anyone with a PC tower under their desk once had, becomes an outdated, eccentric, and even suspicious concept.

… a morning in said future, where an ordinary citizen wakes up, taps their terminal, which is a sealed device without ports, storage, or sophisticated local execution capabilities, and logs into their Personal Compute Allocation. This bundle of cloud CPU minutes, RAM credits, and storage tokens, leased from a conglomerate whose logo has quietly replaced the word “computer” in everyday speech, just like “to search” has made way for “to google”, has removed the concept of installing software, because software no longer exists as a thing, but only as a service tier in which every task routes through servers owned by entities. Entities that insist that this is all for the planet. Entities that outlawed consumer hardware years ago under the banner of environmental protectionism, citing e-waste statistics, carbon budgets, and unsafe, unregulated silicon, while conveniently ignoring that the data centers humming beyond the city limits burn more power in an hour than the old neighborhood ever did in a decade.

In this world, the ordinary citizen remembers their parents’ dusty Personal Computer, locked away in a storage unit like contraband. A machine that once ran freely, offline if it wanted, immune to arbitrary account suspensions and pricing changes. As they go about their day, paying a micro-fee to open a document, losing access to their own photos because a subscription lapsed, watching a warning banner appear when they type something that violates the ever-evolving terms of service, and shouting “McDonald’s!” to skip the otherwise unskippable ads within every other app they open, they begin to understand that the true crime of consumer hardware wasn’t primarily pollution but independence. They realize that owning a machine meant owning the means of computation, and that by centralizing hardware under the guise of efficiency, safety, and sustainability, society traded resilience for convenience and autonomy for comfort.

In this utopia, nothing ever breaks because nothing is yours, nothing is repairable because nothing is physical, and nothing is private because everything runs somewhere else, on someone else’s computer. The quiet moral, felt when the network briefly stutters and the world freezes, is that keeping old hardware alive was never nostalgia or paranoia, but a small, stubborn act of digital self-defense; a refusal to accept that the future must be rented, permissioned, and revocable at any moment.

If you think that dystopian “rented compute over owned hardware” future could never happen, think again. In fact, you’re already likely renting rather than owning in many different areas. Your means of communication are run by Meta, your music is provided by Spotify, your movies are streamed from Netflix, your data is stored in Google’s data centers, and your office suite runs on Microsoft’s cloud. Maybe even your car is leased instead of owned, and you pay a monthly premium for seat heating or sElF-dRiViNg, whatever that means. After all, the average Gen Z and Millennial US consumer today apparently has 8.2 subscriptions, not including their DaIlY aVoCaDo ToAsTs and StArBuCkS cHoCoLaTe ChIp LaTtEs that the same Boomers responsible for the current (and past) economic crises love to dunk on.

Besides, look no further than what’s already happening in, for example, China, a country that manufactures massive amounts of the world’s sought-after hardware yet faces restrictions on buying that very hardware. In recent years, a complex web of export controls and chip bans has put a spotlight on how hardware can become a geopolitical bargaining chip rather than a consumer good. For example, export controls imposed by the United States in recent years barred Nvidia from selling many of its high-performance GPUs into China without special licenses, significantly reducing legal access to cutting-edge compute inside the country.

Meanwhile, enforcement efforts have repeatedly busted smuggling operations moving prohibited Nvidia chips into Chinese territory through Southeast Asian hubs, with over $1 billion worth of banned GPUs reportedly moving through gray markets, even as official channels remain restricted. Coverage by outlets such as Bloomberg, as well as actual investigative journalism like Gamer’s Nexus, has documented these black-market flows and the lengths to which both sides go to enforce or evade restrictions, including smuggling networks and increased regulatory scrutiny.

On top of this, Chinese regulators have at times restricted domestic tech firms from buying specific Nvidia models, further underscoring how government policy can override basic market access for hardware, even in the country where much of that hardware is manufactured. While some of these export rules have seen partial reversals or regulatory shifts, the overall situation highlights a world in which hardware access is increasingly determined by politics, security regimes, and corporate strategy, and not by consumer demand. This should serve as a cautionary tale for anyone who thinks owning their own machines won’t matter in the years to come.

In an ironic twist, however, one of the few potential sources of relief may, in fact, come from China. Two Chinese manufacturers, CXMT (ChangXin Memory Technologies) and YMTC (Yangtze Memory Technologies), are embarking on their most aggressive capacity expansions ever, viewing the global shortage as a golden opportunity to close the gap with the incumbent big three (Samsung, SK Hynix, Micron).

CXMT is now the world’s fourth-largest DRAM maker by production volume, holding roughly 10-11% of global wafer capacity, and is building a massive new DRAM facility in Shanghai expected to be two to three times larger than its existing Hefei headquarters, with volume production targeted for 2027. The company is also preparing a $4.2 billion IPO on Shanghai’s STAR Market to fund further expansion and has reportedly delivered HBM3 samples to domestic customers including Huawei.

YMTC, traditionally a NAND flash supplier, is constructing a third fab in Wuhan with roughly half of its capacity dedicated to DRAM, and has reached 270-layer 3D NAND capability, rapidly narrowing the gap with Samsung (286 layers) and SK Hynix (321 layers). Its NAND market share by shipments reached 13% in Q3 2025, close to Micron’s 14%. What’s particularly notable is that major PC manufacturers are already turning to these suppliers.

However, as mentioned before, with hardware having become a geopolitical topic, both companies face ongoing (US-imposed) restrictions. Hence, for example, HP has indicated it would only use CXMT chips in devices for non-US markets. Nevertheless, for consumers worldwide, the emergence of viable fourth and fifth players in the memory market represents the most tangible hope of eventually breaking the current supply stranglehold. Whether that relief arrives in time to prevent lasting damage to the consumer hardware ecosystem remains an open question, though.

The reason I’m writing all of this isn’t to create panic, but to help put things into perspective. You don’t need to scavenger-hunt for legacy parts in your local landfill (yet) or swear off upgrades forever, but you do need to recognize that the rules have changed. The market that once catered to enthusiasts and everyday users is turning its back. So take care of your hardware, stretch its lifespan, upgrade thoughtfully, and don’t assume replacement will always be easy or affordable.

That PC, laptop, NAS, or home server isn’t disposable anymore. Clean it, maintain it, repaste it, replace fans, and protect it, as it may need to last far longer than you originally planned.

Also, realize that the best time to upgrade your hardware was yesterday and that the second best time is now. If you can afford sensible upgrades, especially RAM and SSD capacity, it may be worth doing sooner rather than later. Not for performance, but for insurance, because the next time something fails, it might be unaffordable to replace, as the era of casual upgrades seems to be over. Five-year systems may become eight- or ten-year systems.

Software bloat will hurt more and will require re-thinking. Efficiency will matter again. And looking at it from a different angle, maybe that’s a good thing.

Additionally, the assumption that prices will normalize again at some point is most likely a pipe dream. The old logic of “wait a year and it’ll be cheaper” no longer applies when manufacturers are deliberately constraining supply. If you need a new device, buy it; if you don’t, however, there is absolutely no need to spend money on the minor yearly refresh cycle any longer, as the returns are increasingly diminishing. And again, looking at it from a different angle, that is probably also a good thing.

Consumer hardware is heading toward a bleak future where owning powerful, affordable machines becomes harder or maybe even impossible, as manufacturers abandon everyday users to chase vastly more profitable data centers, AI firms, and enterprise clients. RAM and SSD price spikes, Micron’s exit from the consumer market, and the resulting Samsung/SK Hynix duopoly are early warning signs of a broader shift that will eventually affect CPUs, GPUs, and the entire PC ecosystem.

With large manufacturers having sold out their entire production capacity to hyperscalers for the rest of the year while simultaneously cutting consumer production by double-digit percentages, consumers will have to take a back seat. Already today, consumer hardware is overpriced, out of stock, or even intentionally delayed due to supply issues.

In addition, manufacturers are pivoting toward consumer hardware subscriptions, where you never own the hardware. In the most dystopian trajectory, consumers might not buy any hardware at all, with the exception of low-end thin clients that are merely interfaces, and will rent compute through cloud platforms, losing digital sovereignty in exchange for convenience. And despite all of this sounding like science fiction, there is already hard evidence proving that access to hardware can in fact be politically and economically revoked.

Therefore, I am urging you to maintain and upgrade wisely, and hold on to your existing hardware, because ownership may soon be a luxury rather than the norm.

...

Read the original on xn--gckvb8fzb.com »

3 548 shares, 19 trendiness

How I Drowned a Bureaucrat before dinner.

I can’t express how much I utterly hate the “Continuing Disability Review.”

It is a letter that arrives every few years from the government, asking a question that is medically absurd and philosophically insulting: “Are you still disabled?”

As if my blindness were a seasonal allergy. As if I might have woken up last Tuesday, blinked, and realized that my optic nerves had decided to regenerate spontaneously.

This week, I received The Letter. It demanded “updated medical evidence” to prove that I—a man who has been blind since birth—am, in fact, still blind.

I called the number. I navigated the phone tree hellscape. I finally reached a human being. Let’s call her “Karen from Compliance.”

“I have the documents in PDF format,” I told her, using my polite, “I haven’t had my morning tea, so make this easy on me” voice. “I can email them to you right now. You’ll have them in ten seconds.”

“We cannot accept email,” Karen said. Her voice was flat, dry, and sounded like stale coffee and rigid adherence to a rulebook written in 1994. “It is a security risk. You must mail physical copies, or you can fax them.”

“Fax them?” I asked. “You want me to fax you medical records when you could just delete the email after saving the attachments?”

“Those are the options, sir. If we don’t receive them by Friday, your benefits will be suspended.”

I didn’t understand why they couldn’t just look back in my file, notice nothing had changed in decades, and update it based on past data.

She said it with a challenge in her tone. She knew who she was talking to. She was talking to a blind man living below the poverty line. She assumed that “fax it” was an impossible hurdle. She assumed I would have to find a ride to a library, pay twenty cents a page, and struggle with a physical machine I couldn’t read. She was counting on the friction of the physical world to make me give up.

I am a nerd. And I have an internet connection.

“Okay,” I said, my voice dropping into the cool, smooth “Let’s systemically tango” tone of a man with a plan. “I will fax them. What is the number?”

I hung up. And then, I went to work.

She wanted evidence? Oh boy, I would give her evidence.

I didn’t just pull the recent files. I went into the archives. I dug into the deep, digital bedrock of my hard drive. I pulled records from when I was five. I pulled the surgical notes from my cerebral palsy treatments. I pulled the intake forms from every specialist, every therapist, every social worker who has ever written a note about my “deficits.”

I compiled a single, monolithic PDF. It was a monument to medical trauma. It was a library of diagnoses.

It was five hundred and twelve pages long.

I opened my preferred internet faxing service. This is a tool that allows me to send a fax purely through digital data. It would cost $20, exactly the amount someone had donated to the blog last week, but if I didn’t do this, I would lose all my benefits. It costs me zero paper. It costs me zero toner.

By the way, your tips keep me writ­ing.

But for the re­cip­i­ent?

For the re­cip­i­ent, a fax is a phys­i­cal re­al­ity. It re­quires pa­per. It re­quires ink. It re­quires time.

I imag­ined Karen’s fax ma­chine. It was prob­a­bly an old, beige beast sit­ting in the cor­ner of a gray of­fice. It was likely low on pa­per. It was al­most cer­tainly low on pa­tience.

I up­loaded the file. The file size was mas­sive. The progress bar on my screen reader ticked up. Uploading… 20%… 50%… 80%…

And then, I sat back and lis­tened to the most beau­ti­ful sound in the world.

“Your fax has been sent,” my screen reader announced.

I imag­ined the scene in that of­fice.

At first, it would just be a sin­gle page. Whirrr. Chunk. A stan­dard med­ical form. Karen would ig­nore it.

By page fifty, the ma­chine would be heat­ing up. The smell of hot toner would start to fill the cu­bi­cle. The rhyth­mic chunk-chunk-chunk of the print­ing would be­come a drone, a me­chan­i­cal chant of ma­li­cious com­pli­ance.

By page one hun­dred, the pa­per tray would run out. The ma­chine would start beep­ing. That high-pitched, in­sis­tent beep-beep-beep that de­mands at­ten­tion. Karen would have to get up. She would have to find a ream of pa­per. She would have to feed the beast.

And the beast would not stop.

Because I had set the retry limit to “Infinity.” If the line busied out? It would call back. If the paper ran out? It would wait. It was a digital siege engine.

I sent them every­thing. I sent them the eye charts that prove I can’t read eye charts. I sent them the phys­i­cal ther­apy logs. I sent them the blurry scans of notes writ­ten by doc­tors who are long since dead.

I sent them the Tsunami of Truth.

I wanted them to hold the weight of it. I wanted them to phys­i­cally feel the bur­den of proof they place on dis­abled bod­ies. They want us to doc­u­ment our ex­is­tence? Fine. Here is my ex­is­tence, one sheet of hot, curled pa­per at a time.

Two hours later, my phone rang.

It was Karen. She sounded breath­less. She sounded like she was stand­ing next to a ma­chine that was hy­per­ven­ti­lat­ing. In the back­ground, I could hear a rhyth­mic whir-chunk, whir-chunk.

“Yes?” I answered, my voice the picture of innocent helpfulness.

“Sir, please. You have to stop the fax. It’s… it’s been printing for an hour. It’s jamming the machine. We’re out of toner.”

“Oh, you’re out of toner? It’s jammed? Oh my! Oh, I’m so sorry,” I said, putting exactly zero percent sincerity into the apology. “But you said you couldn’t accept email. You said I had to provide complete documentation. I’m just following the rules, Karen. I wouldn’t want my benefits to be suspended because I missed documentation, so here’s documentation all the way back to when I was five years old.”

“Jesus Christ, we have it!” she snapped. “We have enough! Please, just… cancel the rest.”

“I’m afraid I can’t do that,” I lied. “It’s an automated process. Once it starts, it has to finish. Security protocols, you understand.”

There was a long, stran­gled si­lence on the line. Then, a de­feated sigh.

“Fine! Fine,” she snapped. “We will mark your file as updated.”

“Thank you,” I said. “Have a wonderful day.”

I sat there in my quiet apart­ment, eat­ing a cookie. I imag­ined the pile of pa­per in that of­fice, a phys­i­cal moun­tain of ev­i­dence tes­ti­fy­ing to the fact that yes, I am blind, and yes, I am smarter than your bu­reau­cracy.

If you enjoyed this tiny victory in a hostile world, you might enjoy Seven Days in June by Tia Williams.

You can fol­low the main RSS feed, learn how to fol­low the pod­cast or join my street team, or fol­low via email with the form be­low.

...

Read the original on sightlessscribbles.com »

4 482 shares, 24 trendiness

Anatomy of the .claude/ Folder

Most teams have adopted AI in some form, but the gap be­tween using AI and getting mea­sur­able ROI from AI is larger than peo­ple re­al­ize.

Postman re­leased a cost sav­ings analy­sis that looks at six com­mon API de­vel­op­ment work­flows and bench­marks the ac­tual time and cost dif­fer­ence when AI is built into the plat­form ver­sus bolted on ex­ter­nally.

It’s a short, data-dri­ven read that helps en­gi­neer­ing leads make the case for where AI-native tool­ing ac­tu­ally moves the nee­dle.

You can grab the guide for free here →

Thanks to Postman for part­ner­ing to­day!

Claude Code users typ­i­cally treat the .claude folder like a black box. They know it ex­ists. They’ve seen it ap­pear in their pro­ject root. But they’ve never opened it, let alone un­der­stood what every file in­side it does.

The .claude folder is the con­trol cen­ter for how Claude be­haves in your pro­ject.

It holds your in­struc­tions, your cus­tom com­mands, your per­mis­sion rules, and even Claude’s mem­ory across ses­sions. Once you un­der­stand what lives where and why, you can con­fig­ure Claude Code to be­have ex­actly the way your team needs it to.

This newslet­ter walks you through the en­tire anatomy of the folder, from the files you’ll use daily to the ones you’ll set once and for­get.

Before div­ing in, one thing worth know­ing up­front: there are ac­tu­ally two .claude di­rec­to­ries, not one.

The first lives in­side your pro­ject, and the sec­ond lives in your home di­rec­tory:

The pro­ject-level folder holds team con­fig­u­ra­tion. You com­mit it to git. Everyone on the team gets the same rules, the same cus­tom com­mands, the same per­mis­sion poli­cies.

The global ~/.claude/ folder holds your per­sonal pref­er­ences and ma­chine-lo­cal state, like ses­sion his­tory and auto-mem­ory.

This is the most im­por­tant file in the en­tire sys­tem. When you start a Claude Code ses­sion, the first thing it reads is CLAUDE.md. It loads it straight into the sys­tem prompt and keeps it in mind for the en­tire con­ver­sa­tion.

Simply put: what­ever you write in CLAUDE.md, Claude will fol­low.

If you tell Claude to always write tests before implementation, it will. If you say “never use console.log for error handling, always use the custom logger module,” it will respect that every time.

A CLAUDE.md at your pro­ject root is the most com­mon setup. But you can also have one in ~/.claude/CLAUDE.md for global pref­er­ences that ap­ply across all pro­jects, and even one in­side sub­di­rec­to­ries for folder-spe­cific rules. Claude reads all of them and com­bines them.

Most people either write too much or too little. Here’s what works.

Include:

* Non-obvious gotchas (“TypeScript strict mode is on, unused variables are errors”)

* File and folder structure for the main modules

Skip:

* Anything that belongs in a linter or formatter config

* Full documentation you can already link to

Keep CLAUDE.md un­der 200 lines. Files longer than that start eat­ing too much con­text, and Claude’s in­struc­tion ad­her­ence ac­tu­ally drops.
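As an illustration, here is a trimmed CLAUDE.md for a hypothetical TypeScript service. Every project detail below is invented; only the shape matters.

```markdown
# Project: orders-service

## Commands
- `npm run dev` — start the local server
- `npm test` — run the test suite
- `npm run lint` — lint and type-check

## Architecture
- `src/api/` — HTTP handlers, one file per route
- `src/services/` — business logic, no framework imports
- `src/db/` — schema and queries

## Conventions
- TypeScript strict mode is on; unused variables are errors
- Never use console.log for error handling; use the logger in `src/lib/logger.ts`
- Write tests before implementation for new services

## Gotchas
- Migrations must be generated, never hand-written
- The `legacy/` folder is frozen; do not edit it
```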

That’s ~20 lines. It gives Claude every­thing it needs to work pro­duc­tively in this code­base with­out con­stant clar­i­fi­ca­tion.

Sometimes you have a pref­er­ence that’s spe­cific to you, not the whole team. Maybe you pre­fer a dif­fer­ent test run­ner, or you want Claude to al­ways open files us­ing a spe­cific pat­tern.

Create CLAUDE.local.md in your pro­ject root. Claude reads it along­side the main CLAUDE.md, and it’s au­to­mat­i­cally git­ig­nored so your per­sonal tweaks never land in the repo.

CLAUDE.md works great for a sin­gle pro­ject. But once your team grows, you end up with a 300-line CLAUDE.md that no­body main­tains and every­one ig­nores.

Every mark­down file in­side .claude/rules/ gets loaded along­side your CLAUDE.md au­to­mat­i­cally. Instead of one gi­ant file, you split in­struc­tions by con­cern:
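For example, a split along the concerns the next paragraph describes:

```text
.claude/rules/
├── api-conventions.md
└── testing.md
```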

Each file stays fo­cused and easy to up­date. The team mem­ber who owns API con­ven­tions ed­its api-con­ven­tions.md. The per­son who owns the test­ing stan­dards ed­its test­ing.md. Nobody stomps on each other.

The real power comes from path-scoped rules. Add a YAML front­mat­ter block to a rule file and it only ac­ti­vates when Claude is work­ing with match­ing files:
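A sketch of such a rule file, using the paths field the text describes; the glob patterns and rule wording are illustrative:

```markdown
---
paths:
  - "src/api/**"
  - "src/handlers/**"
---

# API conventions
- Every handler validates input with the shared schema helpers
- Return errors as typed problem responses, never raw strings
```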

Claude won’t load this file when editing a React component. It only loads when it’s working inside src/api/ or src/handlers/. Rules without a paths field load unconditionally, every session.

This is the right pat­tern once your CLAUDE.md starts feel­ing crowded.

Out of the box, Claude Code has built-in slash com­mands like /help and /compact. The com­mands/ folder lets you add your own.

Every mark­down file you drop into .claude/commands/ be­comes a slash com­mand.

A file named re­view.md cre­ates /project:review. A file named fix-is­sue.md cre­ates /project:fix-issue. The file­name is the com­mand name.
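As a sketch, a minimal review.md might read as follows; the specific diff command is a reasonable choice, not something the article prescribes:

```markdown
Review the following changes for bugs, security issues, and unclear naming.
Cite the file and line for every finding.

Current diff:

!`git diff HEAD`
```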

Now run /project:review in Claude Code and it automatically injects the real git diff into the prompt before Claude sees it. The !`command` syntax runs shell commands and embeds their output. That’s what makes these commands genuinely useful instead of just saved text.

Use $ARGUMENTS to pass text af­ter the com­mand name:
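A hypothetical fix-issue.md, for instance:

```markdown
Fix issue #$ARGUMENTS.

1. Read the issue and reproduce the problem.
2. Locate the relevant code and implement a fix.
3. Add a regression test before finishing.
```

Running /project:fix-issue 123 substitutes 123 for $ARGUMENTS.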

Project com­mands in .claude/commands/ are com­mit­ted and shared with your team. For com­mands you want every­where re­gard­less of pro­ject, put them in ~/.claude/commands/. Those show up as /user:command-name in­stead.

Useful personal commands: a daily standup helper, a generator for commit messages following your convention, or a quick security scan.

You now know how com­mands work. Skills look sim­i­lar on the sur­face, but the trig­ger is fun­da­men­tally dif­fer­ent. Here’s the dis­tinc­tion be­fore we go any fur­ther:

Skills are work­flows that Claude can in­voke on its own, with­out you typ­ing a slash com­mand, when the task matches the skil­l’s de­scrip­tion. Commands wait for you. Skills watch the con­ver­sa­tion and act when the mo­ment is right.

Each skill lives in its own sub­di­rec­tory with a SKILL.md file:
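For instance, for the security-review skill discussed below:

```text
.claude/skills/
└── security-review/
    ├── SKILL.md            # frontmatter + instructions
    └── DETAILED_GUIDE.md   # supporting reference the skill pulls in
```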

The SKILL.md uses YAML front­mat­ter to de­scribe when to use it:
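A sketch of that frontmatter; the description wording is illustrative:

```markdown
---
name: security-review
description: Review code changes for security issues such as injection,
  broken auth, and leaked secrets. Use when the user asks for a security
  review of a PR, diff, or file.
---

Work through the checklist in DETAILED_GUIDE.md and report findings
grouped by severity.
```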

When you say “review this PR for security issues,” Claude reads the description, recognizes it matches, and invokes the skill automatically. You can also call it explicitly with /security-review.

The key dif­fer­ence from com­mands: skills can bun­dle sup­port­ing files along­side them. The DETAILED_GUIDE.md ref­er­ence above pulls in a de­tailed doc­u­ment that lives right next to SKILL.md. Commands are sin­gle files. Skills are pack­ages.

Personal skills go in ~/.claude/skills/ and are avail­able across all your pro­jects.

When a task is com­plex enough to ben­e­fit from a ded­i­cated spe­cial­ist, you can de­fine a sub­agent per­sona in .claude/agents/. Each agent is a mark­down file with its own sys­tem prompt, tool ac­cess, and model pref­er­ence:

Here’s what a code-re­viewer.md looks like:
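A sketch along those lines; the prompt wording is illustrative, while the tools and model fields mirror the points below:

```markdown
---
name: code-reviewer
description: Reviews recently changed code for correctness, style, and
  maintainability. Use proactively after significant edits.
tools: Read, Grep, Glob
model: haiku
---

You are a senior engineer doing code review. Read the changed files,
check them against the project conventions, and report concrete,
actionable findings. Do not modify any files.
```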

When Claude needs a code re­view done, it spawns this agent in its own iso­lated con­text win­dow. The agent does its work, com­presses the find­ings, and re­ports back. Your main ses­sion does­n’t get clut­tered with thou­sands of to­kens of in­ter­me­di­ate ex­plo­ration.

The tools field re­stricts what the agent can do. A se­cu­rity au­di­tor only needs Read, Grep, and Glob. It has no busi­ness writ­ing files. That re­stric­tion is in­ten­tional and worth be­ing ex­plicit about.

The model field lets you use a cheaper, faster model for fo­cused tasks. Haiku han­dles most read-only ex­plo­ration well. Save Sonnet and Opus for the work that ac­tu­ally needs them.

Personal agents go in ~/.claude/agents/ and are avail­able across all pro­jects.

The set­tings.json file in­side .claude/ con­trols what Claude is and is­n’t al­lowed to do. It’s where you de­fine which tools Claude can run, which files it can read, and whether it needs to ask be­fore run­ning cer­tain com­mands.

The com­plete file looks like this:
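A sketch of such a file, matching the pieces discussed below; the individual allow and deny entries are illustrative:

```json
{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "permissions": {
    "allow": [
      "Bash(npm run *)",
      "Bash(git diff *)",
      "Bash(git log *)"
    ],
    "deny": [
      "Read(.env)",
      "Read(.env.*)",
      "Read(secrets/**)",
      "Bash(rm -rf *)"
    ]
  }
}
```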

Here’s what each part does.

The $schema line en­ables au­to­com­plete and in­line val­i­da­tion in VS Code or Cursor. Always in­clude it.

The al­low list con­tains com­mands that run with­out Claude ask­ing for con­fir­ma­tion. For most pro­jects, a good al­low list cov­ers:

* Bash(npm run *) or Bash(make *) so Claude can run your scripts freely

The deny list con­tains com­mands that are blocked en­tirely, no mat­ter what. A sen­si­ble deny list blocks:

* Sensitive files like .env and any­thing in se­crets/

If some­thing is­n’t in ei­ther list, Claude asks be­fore pro­ceed­ing. That mid­dle ground is in­ten­tional. It gives you a safety net with­out hav­ing to an­tic­i­pate every pos­si­ble com­mand up­front.

That said, you can also have settings.local.json for personal overrides. It’s the same idea as CLAUDE.local.md: create .claude/settings.local.json for permission changes you don’t want committed. It’s auto-gitignored.

You don’t in­ter­act with this folder of­ten, but it’s use­ful to know what’s in it.

~/.claude/CLAUDE.md loads into every Claude Code ses­sion, across all your pro­jects. Good place for your per­sonal cod­ing prin­ci­ples, pre­ferred style, or any­thing you want Claude to re­mem­ber, re­gard­less of which repo you’re in.

~/.claude/projects/ stores ses­sion tran­scripts and auto-mem­ory per pro­ject. Claude Code au­to­mat­i­cally saves notes to it­self as it works: com­mands it dis­cov­ers, pat­terns it ob­serves, and ar­chi­tec­ture in­sights. These per­sist across ses­sions. You can browse and edit them with /memory.

~/.claude/commands/ and ~/.claude/skills/ hold per­sonal com­mands and skills avail­able across all pro­jects.

You generally don’t need to manually manage these. But knowing they exist is handy when Claude seems to “remember” something you never told it, or when you want to wipe a project’s auto-memory and start fresh.

Here’s how every­thing comes to­gether:
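Summarizing the layout described above in one sketch:

```text
your-project/
├── CLAUDE.md                 # main instructions, loaded every session
├── CLAUDE.local.md           # personal tweaks, auto-gitignored
└── .claude/
    ├── settings.json         # team permission rules, committed
    ├── settings.local.json   # personal overrides, auto-gitignored
    ├── rules/                # focused rule files, optionally path-scoped
    ├── commands/             # /project:* slash commands
    ├── skills/               # workflows Claude can invoke on its own
    └── agents/               # subagent personas

~/.claude/
├── CLAUDE.md                 # global preferences, all projects
├── projects/                 # session transcripts and auto-memory
├── commands/                 # /user:* commands
└── skills/                   # personal skills
```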

If you’re start­ing from scratch, here’s a pro­gres­sion that works well.

Step 1. Run /init in­side Claude Code. It gen­er­ates a starter CLAUDE.md by read­ing your pro­ject. Edit it down to the es­sen­tials.

Step 2. Add .claude/settings.json with al­low/​deny rules ap­pro­pri­ate for your stack. At min­i­mum, al­low your run com­mands and deny .env reads.

Step 3. Create one or two com­mands for the work­flows you do most. Code re­view and is­sue fix­ing are good start­ing points.

Step 4. As your pro­ject grows and your CLAUDE.md gets crowded, start split­ting in­struc­tions into .claude/rules/ files. Scope them by path where it makes sense.

Step 5. Add a ~/.claude/CLAUDE.md with your personal preferences. This might be something like “always write types before implementations” or “prefer functional patterns over class-based.”

That’s gen­uinely all you need for 95% of pro­jects. Skills and agents come in when you have re­cur­ring com­plex work­flows worth pack­ag­ing up.

The .claude folder is re­ally a pro­to­col for telling Claude who you are, what your pro­ject does, and what rules it should fol­low. The more clearly you de­fine that, the less time you spend cor­rect­ing Claude and the more time it spends do­ing use­ful work.

CLAUDE.md is your high­est-lever­age file. Get that right first. Everything else is op­ti­miza­tion.

Start small, re­fine as you go, and treat it like any other piece of in­fra­struc­ture in your pro­ject: some­thing that pays div­i­dends every day once it’s set up prop­erly.

...

Read the original on blog.dailydoseofds.com »

5 410 shares, 21 trendiness

Japan now has a special desk for people who work at home with a pet cat[Photos]

The Neko House Desk un­der­stands who’s re­ally in charge of your home, and helps you con­vince them to let you have some space too.

A big part of the ap­peal of work­ing from home is that even if you’re still work­ing for some­body else, you get much more con­trol over your im­me­di­ate en­vi­ron­ment. Want to play mu­sic dur­ing your shift, while also spend­ing it all in your pa­ja­mas and set­ting the ther­mo­stat to what­ever tem­per­a­ture you want? Go right ahead! No one is go­ing to stop you.

However, while you may not have to worry about your per­sonal free­doms be­ing en­croached on by fussy man­agers or in­con­sid­er­ate cowork­ers, that still does­n’t make you the ul­ti­mate boss of your home of­fice if you have a pet cat. As fe­line fans know all too well, their pets tend to think of them­selves as be­ing the higher-rank­ing half of the re­la­tion­ship, and so re­ally it’s a mat­ter of how much of your work­space they’ll let you use, since they’ll, of course, claim do­min­ion over at least a part of it.

Unfortunately, no amount of at­tempted ex­plain­ing will con­vey to your cat how it’s in their best in­ter­est too to let you work ef­fi­ciently, so that you can earn the money that pays for their treats, toys, and other es­sen­tials. So in­stead, Japanese fur­ni­ture and home­wares com­pany Bibilab has de­vel­oped a com­pro­mise: the Neko House Desk.

Neko is the Japanese word for cat, so you could also call this the Cat House Desk, but again, this piece of fur­ni­ture is for peo­ple who own pets, not bor­del­los. Bibilab proudly built in nu­mer­ous de­sign points to help pro­mote har­mo­nious shar­ing be­tween you and your kit­ties, start­ing with a two-tier cat space along the right side of the desk, each with side-ac­cess por­tals and a weight ca­pac­ity of 20 kilo­grams (44 pounds) for the top sec­tion.

There’s an­other cat space on the un­der­side of the desk sur­face, in the nor­mally unuti­lized space in front of your knees when you’re sit­ting.

This cat loung­ing area is po­si­tioned so that you won’t bump into it while sit­ting at the desk, but is also close enough so that your cat can eas­ily cross the gap onto your lap for some cud­dles.

This section of the desk is also connected to the wider top-tier section on its right edge, allowing your cat to easily move from one to the other, or in or out of the desk entirely, without having to go under the desk itself.

There’s also a way for your cat to go through the top of the desk, as the designers have included a feature they call the “Surprise Cat Hole,” so that your cat can poke its head topside for a quick pet/scratch or for playtime.

This fea­ture also gives you a great re­minder/​ex­cuse to take pe­ri­odic breaks for men­tal re­fresh­ment, help­ing you avoid the al­ways dan­ger­ous sit­u­a­tions of work burnout and cute­ness de­fi­cien­cies.

▼ The monitor says “Start,” but the cat says “Stop… and play with me!”

Combined, these fea­tures give your cat at­trac­tive places to oc­cupy, which will hope­fully keep them from laz­ing about in spots that would make it hard for you to get work done. Despite all the play­ful touches, the Neko House Desk is still a prac­ti­cal piece of fur­ni­ture too, with slits to run ca­bles through and even a spot for desk­top PC users to put their tower (though this does take up the bot­tom half of the two-tier cat space, so you’ll prob­a­bly want to get your pet’s per­mis­sion first).

It’s also sized to eas­ily pair with Bibilab’s Cat Tower Rack, an­other item from the com­pany that gives your cats more space of their own so that you can have a lit­tle more for your­self too.

The Neko House Desk is priced at 24,800 yen (US$160) and avail­able through Amazon Japan here, while the 16,862-yen Cat Tower Rack can be or­dered here. And if you’re look­ing for some uniquely pur­pose-built fur­ni­ture for when you’re done work­ing for the day, Bibilab can help with that too.

Source: Bibilab, PR Times

Images: PR Times

● Want to hear about SoraNews24’s lat­est ar­ti­cles as soon as they’re pub­lished? Follow us on Facebook and Twitter!

[ Read in Japanese ]

...

Read the original on soranews24.com »

6 409 shares, 23 trendiness

Alongside the various bugs you get, one of the issues of upgrading to macOS 26 is that it has one of the most notorious inconsistency issues in window corners. I’m not sure what exactly pushes product designers to like the excessive roundness (one of the ugliest roundness examples I’ve ever seen is the current one in the YouTube UI design). I believe that UI design is the most influential, which is to say contagious, form in our field, since designers just try to follow whatever big companies do (in fact I see this a lot in my work: when two designers are having an argument, one of them will resolve it to “let’s see how Apple draws that button”), which means that we are probably going to see this ugly effect elsewhere very soon.

Anyway, I recently had to upgrade to macOS 26. And I found the edges ugly, like everyone else did. However, what’s even uglier is the inconsistency. Many people try to resolve this by disabling macOS System Integrity Protection, which makes them possibly vulnerable (arguably, since you just lose security over /root, which is not a big deal if someone has already gained access to your machine, at least for me). The reason you need to disable SIP is that to edit the dynamic libraries that system apps like Safari (which has crazy bad corners) use, you need to edit system libraries that live under the root. To me, though, the corners aren’t so bad, but the inconsistency is very annoying. So I think a better solution is this: instead of making everything roundless, make everything more rounded, which requires you to edit only user apps (i.e. no SIP disabling needed). I forked a solution that makes things roundless and modified it to take my approach. It’s simply as follows:

You can have this plist too, to load it in once your computer boots:
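As a sketch of what that plist can look like, saved under ~/Library/LaunchAgents/; the label and the script path are placeholders for wherever your patch script lives:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Unique identifier for this launch agent -->
    <key>Label</key>
    <string>local.round-corners</string>
    <!-- Script to run; replace with the path to your patch script -->
    <key>ProgramArguments</key>
    <array>
        <string>/bin/sh</string>
        <string>/Users/yourname/bin/round-corners.sh</string>
    </array>
    <!-- Run once at login -->
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```

Load it with launchctl load ~/Library/LaunchAgents/local.round-corners.plist, or just log out and back in.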

Now at least every­thing is con­sis­tently bad. #Programming

...

Read the original on lr0.org »

7 370 shares, 20 trendiness

AI got the blame for the Iran school bombing. The truth is far more worrying

On the first morning of Operation Epic Fury, 28 February 2026, American forces struck the Shajareh Tayyebeh primary school in Minab, in southern Iran, hitting the building at least twice during the morning session. American forces killed between 175 and 180 people, most of them girls between the ages of seven and 12.

Within days, the ques­tion that or­gan­ised the cov­er­age was whether Claude, a chat­bot made by Anthropic, had se­lected the school as a tar­get. Congress wrote to the US sec­re­tary of de­fense, Pete Hegseth, about the ex­tent of AI use in the strikes. The New Yorker mag­a­zine asked whether Claude could be trusted to obey or­ders in com­bat, whether it might re­sort to black­mail as a self-preser­va­tion strat­egy, and whether the Pentagon’s chief con­cern should be that the chat­bot had a per­son­al­ity. Almost none of this had any re­la­tion­ship to re­al­ity. The tar­get­ing for Operation Epic Fury ran on a sys­tem called Maven. Nobody was ar­gu­ing about Maven.

Eight years ago, Maven was the most con­tested pro­ject in Silicon Valley. In 2018, more than 4,000 Google em­ploy­ees signed a let­ter op­pos­ing the com­pa­ny’s con­tract to build ar­ti­fi­cial in­tel­li­gence for the Pentagon’s tar­get­ing sys­tems. Workers or­gan­ised a walk out. Engineers quit. And Google ul­ti­mately aban­doned the con­tract. Palantir Technologies, a data an­a­lyt­ics com­pany and de­fence con­trac­tor co-founded by Peter Thiel, took it over and spent the next six years build­ing Maven into a tar­get­ing in­fra­struc­ture that pulls to­gether satel­lite im­agery, sig­nals in­tel­li­gence and sen­sor data to iden­tify tar­gets and carry them through every step from first de­tec­tion to the or­der to strike.

The build­ing in Minab had been clas­si­fied as a mil­i­tary fa­cil­ity in a Defense Intelligence Agency data­base that, ac­cord­ing to CNN, had not been up­dated to re­flect that the build­ing had been sep­a­rated from the ad­ja­cent Islamic Revolutionary Guard Corps com­pound and con­verted into a school, a change that satel­lite im­agery shows had oc­curred by 2016 at the lat­est. A chat­bot did not kill those chil­dren. People failed to up­date a data­base, and other peo­ple built a sys­tem fast enough to make that fail­ure lethal. By the start of the Iran war, Maven — the sys­tem that had en­abled that speed — had sunk into the plumb­ing, it had be­come part of the mil­i­tary’s in­fra­struc­ture, and the ar­gu­ment was all about Claude. This ob­ses­sion with Claude is a kind of AI psy­chosis, though not of the kind we nor­mally talk about, and it af­flicts crit­ics and op­po­nents of the tech­nol­ogy as fiercely as it does its boost­ers. You do not have to use a lan­guage model to let it or­gan­ise your at­ten­tion or dis­tort your think­ing.

In 2019, the scholar Morgan Ames published The Charisma Machine, a study of how certain technologies draw attention, resources and attribution toward themselves and away from everything else. The usual framework for understanding this dynamic is “hype”, but hype only describes what boosters do, and it assigns critics a privileged debunking role that still leaves the technology at the centre of every argument. A charismatic technology shapes the whole field around it, the way a magnet organises iron filings. LLMs may be the most powerful instance of this type in history.

By the time the war began, “AI safety” and “alignment” and “hallucination” and “stochastic parrots” had become the terms of every argument about artificial intelligence, structuring and limiting what we could even say. Worse, “artificial intelligence” itself had come to be synonymous with LLMs. When the school was bombed, those were the terms people reached for, despite the fact that this critical apparatus offered a poor fit for the older, more mature stack of technologies involved in targeting. The real question, the question almost nobody was asking, is not about Claude or any language model. It is a bureaucratic question about what happened to the kill chain, and the answer is Palantir.

As military jargon goes, “kill chain” is a remarkably honest term. In essence, it refers to the bureaucratic framework for organising the steps between detecting something and destroying it. The oldest reference to the term itself I can find is from the 1990s, but the idea is quite old — dating at least to the 1760s, when French artillery reformers began replacing the gunner’s experienced eye with ballistic tables, elevation screws and standardised firing procedures. The steps in the kill chain are subject to constant change, to keep pace with changes in targeting doctrine, but also to incorporate whatever management fads come to afflict the military’s strategic thinkers. The US military has named and renamed the steps for 80 years. In the second world war the sequence was find, fix, fight, finish. By the 1990s the air force had stretched it to find, fix, track, target, engage, assess, or F2T2EA. Every generation of military technology has been sold on the promise of making everything about kill chains shorter, except for the acronyms.

Palantir’s Maven Smart System is the latest iteration of this compression, and it grew out of a shift in strategic thinking during Obama’s second term. In 2014, the secretary of defense, Chuck Hagel, and his deputy, Robert Work, announced what they called the “third offset strategy”. An “offset” in this line of thinking is a bet that a technological advantage can compensate for a strategic weakness the country cannot fix directly. The first two offsets addressed the same problem: the United States could not match the Soviet Union in conventional forces. The thinking was that the Red Army could just continue to throw personnel at a problem, as they did at Stalingrad, or, to be anachronistic, as the contemporary Russian army did at Bakhmut and Avdiivka. Nuclear weapons, the first offset, made the personnel advantage irrelevant in the 1950s. When the Soviets reached nuclear parity in the 1970s, precision-guided munitions and stealth offered the promise that a smaller force could defeat a larger one. By 2014, that advantage was eroding. China and Russia had spent two decades acquiring precision-guided munitions and building defence systems designed to keep American forces out of range. Robert Work insisted that the third offset was not about any particular technology but about using technology to reorganise how the military operated, letting the US make decisions faster than China and Russia, overwhelming and disorienting the enemy by maintaining a faster operational tempo than they could match.

In April 2017, early in the first Trump administration, Work helped establish the Algorithmic Warfare Cross-Functional Team, designated Project Maven. One of the generals overseeing Maven, Lt Gen Jack Shanahan, put the problem plainly: thousands of intelligence analysts were spending 80% of their time on mundane tasks, drowning in footage from surveillance drones that no one had time to watch. A single Predator drone mission could generate hundreds of hours of video, and the analysts tasked with understanding this were faced with an information overload problem. “We’re not going to solve it by throwing more people at the problem,” Shanahan said. “That’s the last thing that we actually want to do.” The core conceit of the project was that the machine could watch so that the analyst could think.

The Pentagon needed some­one to build it. Google took the con­tract, and what hap­pened next be­came the most vis­i­ble labour ac­tion in the his­tory of Silicon Valley.

After Google abandoned the Maven contract, Palantir took it over in 2019. The XVIII Airborne Corps began testing the system in an exercise called Scarlet Dragon, which started in 2020 as a tabletop wargaming exercise in a windowless basement at Fort Bragg. Its commander, Lt Gen Michael Erik Kurilla, wanted to build what he called the first “AI-enabled corps” in the army. The goal was to test whether the system could give a small team the targeting capacity that had previously required thousands of people.

Over the next five years, Scarlet Dragon grew into a military exercise using live ammunition, spanning multiple states and branches of the armed forces, with “forward-deployed engineers” from Palantir and other contractors embedded alongside soldiers. Each time the exercise was run, it was meant to answer the same question: how fast could the system move from detection to decision? The benchmark was the 2003 invasion of Iraq, where roughly 2,000 people worked the targeting process for the entire war. During Scarlet Dragon, 20 soldiers using Maven handled the same volume of work. By 2024, the stated goal was 1,000 targeting decisions in an hour. That is 3.6 seconds per decision, or from the individual “targeteer’s” perspective, one decision every 72 seconds.

The Maven Smart System is the platform that came out of those exercises, and it, not Claude, is what is being used to produce “target packages” in Iran. There are real limits to what a civilian such as myself can know about this system, and what follows is based on publicly available information, assembled from Palantir product demos, conferences, as well as instructional material produced for military users. But we can know quite a bit.

The Maven interface looks like a military-skinned version of corporate project management software crossed with a mapping application. What the military analyst building the target list sees is either a map layered with intelligence data or a screen organised into columns, each representing a stage of the targeting process. Individual targets move across the columns from left to right as they progress through each stage, a format borrowed from Kanban, a “lean manufacturing” workflow system developed at Toyota, and now widely used in software development.

Before Maven, operators worked across eight or nine separate systems simultaneously, pulling data from one, cross-referencing in another, manually moving detections between platforms to assemble the intelligence and approvals needed for each strike. Maven consolidated all of these behind a single interface. Cameron Stanley, the Pentagon’s chief digital and AI officer, called it an “abstraction layer”, a common term in software engineering, meaning a system that hides the complexity underneath it. Humans run the targeting. Underneath the interface, machine-learning systems analyse satellite imagery and sensor data to detect and classify objects, scoring each identification by how confident the system is that it got it right. Three clicks convert a data point on the map into a formal detection and move it into a targeting pipeline. These targets then move through columns representing different decision-making processes and rules of engagement. The system recommends how to strike each target — which aircraft, drone or missile to use, which weapon to pair with it — what the military calls a “course of action”. The officer selects from the ranked options, and the system, depending on who is using it, either sends the target package to an officer for approval or moves it to execution.
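The workflow described above can be sketched in a few lines of code. Everything in this sketch is an assumption for illustration only: the column names, the confidence field and the ranking rule reflect the description in this article, not Maven’s actual implementation, which is not public:

```python
from dataclasses import dataclass

# Hypothetical stage names; the real column labels are not public.
STAGES = ["detected", "nominated", "vetted", "approved", "execution"]

@dataclass
class Target:
    label: str          # what the classifier thinks the object is
    confidence: float   # the model's self-reported confidence, 0..1
    stage: int = 0      # index into STAGES, moving left to right

    def advance(self) -> str:
        """Move the target one column to the right, Kanban-style."""
        if self.stage < len(STAGES) - 1:
            self.stage += 1
        return STAGES[self.stage]

def rank_courses_of_action(options: list) -> list:
    """Rank candidate weapon/platform pairings by an assumed score;
    the officer then picks from this ranked list."""
    return sorted(options, key=lambda o: o["score"], reverse=True)

# A detection enters the pipeline and moves one stage to the right.
t = Target(label="vehicle", confidence=0.91)
t.advance()   # "detected" -> "nominated"

# The system proposes ranked strike options for the target.
courses = rank_courses_of_action([
    {"platform": "drone", "score": 0.62},
    {"platform": "aircraft", "score": 0.88},
])
print(courses[0]["platform"])   # aircraft
```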

The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. They predate large language models by years. Neither Claude nor any other LLM detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer — this is where Claude sits — that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system. What mattered was what Maven did to the targeting process: it consolidated the systems, compressed the time and reduced the people. That is not a new idea. The US military has been trying to close the gap between seeing something and destroying it for as long as that gap has existed, and every attempt has produced the same failure. Maven may not even be the most extreme case.

In the late 1960s, the US faced a version of the same problem in Vietnam. Supplies were moving south along the Ho Chi Minh trail through jungle the military could not see into. The solution was Operation Igloo White, a $1bn-a-year programme that scattered 20,000 acoustic and seismic sensors along the trail. These sensors transmitted data to relay aircraft overhead, which fed the signals to IBM 360 computers at Nakhon Phanom airbase in Thailand. The computers analysed the sensor data and predicted where convoys would be, and strike aircraft were directed to those coordinates.

The system could sense but it could not see. It could detect a vibration but it could not tell a truck from an ox cart. The North Vietnamese figured this out. They played recordings of truck engines, herded animals near the sensors to trigger vibration detection, and hung buckets of urine in trees to set off the chemical detectors. The system could be fooled because nobody in the process could look at what it was sensing. The air force claimed 46,000 trucks were destroyed or damaged over the course of the campaign. The CIA reported that the claims for a single year exceeded the total number of trucks believed to exist in all of North Vietnam. The system’s own output was the only measure of its performance, and nobody outside the system had standing to challenge it. Air force historian Bernard Nalty later called the service’s casualty computations “an exercise in metaphysics rather than mathematics” and his colleague Earl Tilford concluded that the air force “succeeded only in fooling itself”. When daytime reconnaissance flights failed to find the wreckage of all those trucks, air force personnel invented a creature to explain the absence. They called it the “great Laotian truck eater”.

The pattern that played out in Vietnam — a targeting system that could only measure its own performance and ended up believing its own output — is actually older than digital computing. Michael Sherry’s 1987 book The Rise of American Air Power traces it to the founding doctrine of precision bombing, whose confidence in its own methods made examining what those methods produced unnecessary. “Belief in success,” Sherry wrote, “encouraged imprecision about how to achieve it.” By 1944, operations analysts on both sides of the Atlantic were measuring bombing in a shared language of industrial optimisation. Civilians bombed out of their homes were recorded as “dehoused”. For every tonne of bombs dropped, analysts calculated how many hours of enemy labour it destroyed. One British evaluation treated the bomber itself as a capital asset: a single sortie against a German city wiped off the cost of building the aircraft, and everything after that was “clear profit”. Sherry called the resulting mindset “technological fanaticism”.

Sherry’s point was not that anyone chose destruction. It was that the people refining the technique of bombing stopped asking what the bombing was for. But even by the time the operations researchers had got their hands on targeting, this logic was already taking shape. As the historian of science William Thomas has argued, the operations analysts did not impose this logic on the military; the military was already converting operational experience into systematic procedure, and had been for decades. Nobody stopped making judgments. But the judgments were no longer about whether the bombing served a strategic purpose. They were about how to measure it and how to optimise around those measurements.

Carl von Clausewitz, the 19th-century Prussian general whose writings remain the foundation of western military thought, had a word for everything the optimisation leaves out. He called it “friction”, the accumulation of uncertainty, error and contradiction that ensures no operation goes as planned. But friction is also where judgment forms. Clausewitz observed that most intelligence is false, that reports contradict each other. The commander who has worked through this learns to see the way an eye adjusts to darkness, not by getting better light but by staying long enough to use what light there is. This “staying” is what takes time. Compress the time and the friction does not disappear. You just stop noticing it. Clausewitz called this kind of planning a “war on paper”. The plan proceeds without resistance, not because there is none, but because everything connecting the plan to the real world has been stripped out.

Air power is uniquely vulnerable to this. The pilot never sees what the bomb hits. The analyst works from imagery, coordinates and databases. The entire enterprise is mediated by representations of the target, not the target itself, which means the gap between the package and the world can widen without anyone in the process feeling it. The 2003 invasion of Iraq, the operation that Scarlet Dragon would later use as its benchmark, was a case in point. Marc Garlasco, the Pentagon’s chief of high-value targeting during the invasion, ran the fastest targeting cycle the US had operated to that point. He recommended 50 strikes on senior Iraqi leadership. The bombs were precise — they hit exactly where they were aimed — but the intelligence behind them was not. None of the 50 killed its intended target. Two weeks after the invasion, Garlasco left the Pentagon for Human Rights Watch, went to Iraq, and stood in the crater of a strike he had targeted himself. “These aren’t just nameless, faceless targets,” he said later. “This is a place where people are going to feel ramifications for a long time.” The targeting cycle had been fast enough to hit 50 buildings and too fast to discover it was hitting the wrong ones.

The air force’s own targeting guide, in effect during the Iraq war, said this was never supposed to happen. Published in 1998, it described the six functions of targeting as “intertwined”, with the targeteer moving “back” to refine objectives and “forward” to assess feasibility. “The best analysis,” the manual stated, “is reasoned thought with facts and conclusions, not a checklist.” But Jon Lindsay, who served as a navy intelligence officer in Kosovo and later studied special operations targeting in Iraq, found something different. Once a target was reified on a PowerPoint slide — the target intelligence package, or TIP — it became a black box. Questioning the assumptions behind it got harder as the hunt gained momentum, as the folder thickened with what Lindsay calls “representational residua”. There was more machinery for building up a target than for inspecting the quality of its construction. Personnel became disinclined to ask whether some targets were potential allies, or not actually bad guys at all, because producing targets meant participating in the hunt. The targeting guide had warned about this too. “If targeteers don’t provide full targeting service,” it read, “then other well meaning but undertrained and ill-experienced groups will step in.” Maven eventually would.

Lindsay’s book Information Technology and Military Power is the most careful study I’ve found of how targeting actually works, at least partially because it was written by someone who actually did it. During the Kosovo air war, Gen Wesley Clark demanded 2,000 targets, which made it easy to justify any target’s connection to the Milošević government. The CIA nominated just one target during the entire war: the federal directorate of supply and procurement. Analysts had a street address but not coordinates, so they tried to reverse-engineer a location from three outdated maps. They ended up hitting the Chinese embassy — which had recently relocated — 300 metres from the building they were aiming for. The state department knew that the embassy had moved. The military’s facilities database did not. Target reviews failed to notice, because each validation relied on the last. Lindsay calls this “circular reporting”: an accumulation of supporting documents that “created the illusion of multiple validations” while amplifying a single error. The PowerPoint slide looked as well vetted as the hundreds of others that Nato struck without incident. On the night of the strike, an intelligence analyst phoned headquarters to express doubts. Asked specifically about collateral damage, he could not articulate a concern. The strike proceeded. It killed three Chinese journalists. Lindsay, writing in his journal at the time, called the result “an immense error, perfectly packaged”.

In 2005, Lt Col John Fyfe of the US air force published a study of time-sensitive targeting during the 2003 invasion. Fyfe highlighted the different ways UK and US forces approached this challenge. In the Combined Air Operations Center, RAF officers served in key leadership positions alongside their American counterparts. They operated under more restricted rules of engagement. Fyfe noted that their “more reserved, conservative personalities” produced what he called a “very positive dampening effect on the sometimes harried, chaotic pace of offensive operations”. The contrast between shifts was visible: American leaders pressed ahead full bore, while British officers methodically reconsidered risk and cost-benefit trade-offs before approving execution. On UK-led shifts, there were no friendly fire incidents and no significant collateral damage. On numerous occasions, Fyfe notes, the British officer in charge prevented the operation from getting ahead of itself. What the next generation of reformers would measure as latency — the delay between identifying a target and striking it — was the window in which mistakes could be caught.

From inside the efficiency frame, every feature Fyfe describes registered as a defect. The UK shifts were slower. The restricted rules of engagement added constraints. The dampening effect added time. Speed saves lives, the argument goes, but the fastest targeting cycle before Maven was Garlasco’s, and it struck 50 buildings without hitting a single intended target. Scarlet Dragon eliminated all of it. The disagreements about targeting stopped. So did the deliberation, the hesitation and the moments when someone had time to object or notice something was off.

Organisations that run on formal procedure need someone inside the process to interpret rules, notice exceptions, recognise when the categories no longer fit the case. If the organisation concedes that its outcomes depend on the discretion of the people executing it, then the procedure is not a procedure but a suggestion, and the authority the organisation derives from appearing rule-governed collapses. So the judgment has to happen, and it has to look like something else. It has to look like following the procedure rather than interpreting it.

I’ve come to think of this as the “bureaucratic double bind” — the organisation cannot function without the judgment, and it cannot acknowledge the judgment without undermining itself and being seen as “political”. One solution to this problem is to replace the judgment with a number. In his 1995 book Trust in Numbers, the historian of science Theodore Porter argued that organisations adopt quantitative rules not because numbers are more accurate but because they are more defensible. Judgment is politically vulnerable. Rules are not. The procedure exists to make discretion disappear, or seem to. The system’s actual flexibility lives entirely in this unacknowledged interpretive work, which means it can be removed by anyone who mistakes it for inefficiency.

In 1984, the historian David Noble showed that when the US military and American manufacturers automated their factory floors, they consistently chose systems that were slower and more expensive but which moved decision-making away from workers and into management. The point was not efficiency — it was frequently extremely wasteful — but control. A worker who understands what they are doing can exercise judgment the institution cannot govern. Move that understanding into the system, and the worker has nothing left to do but follow instructions. Alex Karp, the CEO of Palantir, describes exactly this achievement in his 2025 book, The Technological Republic. “Software is now at the helm,” he writes, with hardware “serving as the means by which the recommendations of AI are implemented in the world.” His model for what this should look like comes from nature: bee swarms and the murmurations of starlings. “There is no mediation of the information captured by the scouts once they return to the hive,” Karp writes. The starlings “need no permission from above, they require no weekly reports to middle management, no presentations to more senior leaders, no meetings or conference calls to prepare for other meetings”. This sounds liberating, even utopian. But the signal that passes without mediation is also the signal that nobody can question.

Karp thinks he is destroying bureaucracy. He is encoding it. The meetings, the weekly reports, the presentations to senior leaders that he holds in contempt: he treats these as the bureaucratic process itself. They are not. They were where people interpreted procedure, the place where someone could notice when categories no longer fit the case. The targeting doctrine is still there; its stages are columns on a workflow board now, stages a target passes through on its way to being struck. What Karp eliminated was the discretion the institution could never admit it depended on. What remains is a bureaucracy that can execute its rules but with no one left to interpret them. Bureaucracy encoded in software does not bend. It shatters.

The target package for the Shajareh Tayyebeh school presented a military facility. Lucy Suchman, whose 1987 book Plans and Situated Actions remains the sharpest account of how formal procedures obscure the work that actually produces their outcomes, would not have been surprised. Plans always look complete afterward. They achieve completeness by filtering out everything that wasn’t legible to their categories. This package looked like every other package in the queue. But outside the package, the school appeared in Iranian business listings. It was visible on Google Maps. A search engine could have found it. Nobody searched. At 1,000 decisions an hour, nobody was going to. A former senior government official asked the obvious question: “The building was on a target list for years. Yet this was missed, and the question is how.” How indeed.

Congress did not authorise this war. In two weeks, American forces struck 6,000 targets. The school was one of them. American forces killed almost 200 people, and the reporting reached for “AI error”, which domesticated the event into something a better algorithm or better guardrails could have prevented.

In the days after the strike, the charisma of AI organised the entire political conversation around the technology: whether Claude hallucinated, whether the model was aligned, whether Anthropic bore responsibility for its deployment. The constitutional question of who authorised this war and the legal question of whether this strike constitutes a war crime were displaced by a technical question that is easier to ask and impossible to answer in the terms it set. The Claude debate absorbed the energy. That is what charisma does.

It has also occluded something deeper: the human decisions that led to the killing of between 175 and 180 people, most of them girls between the ages of seven and 12. Someone decided to compress the kill chain. Someone decided that deliberation was latency. Someone decided to build a system that produces 1,000 targeting decisions an hour and to call them high-quality. Someone decided to start this war. Several hundred people are sitting on Capitol Hill, refusing to stop it. Calling it “an AI problem” gives those decisions, and those people, a place to hide.

An earlier version of this article appeared on Artificial Bureaucracy, Kevin T Baker’s Substack.

...

Read the original on www.theguardian.com »

8 350 shares, 1 trendiness

Olympic Committee Bars Transgender Athletes From Women’s Events

The International Olympic Committee has barred transgender athletes from competing in the women’s category of the Olympics and said that all participants in those events must undergo genetic testing.

The decision, the most consequential since Kirsty Coventry was elected last year as the first woman to serve as president of the I.O.C., followed a board meeting and months of speculation over the organization’s policy on one of the most contentious issues facing global sports. The rules will be applicable starting at the next Olympics, in Los Angeles in 2028.

Under the new policy, eligibility will be determined by a one-time gene test, according to the I.O.C. The test, which is already being used in track and field, requires screening via saliva, a cheek swab or a blood sample.

When Ms. Coventry, a former Olympic champion swimmer from Zimbabwe, campaigned to lead the organization, she frequently said how important it was to protect the women’s category amid broader — and often bitter — debates about the participation of transgender athletes in sporting competitions.

...

Read the original on www.nytimes.com »

9 340 shares, 39 trendiness

easy containment for AI agents

There’s a gap between giving an agent your real account and stopping everything to build a container or VM. jai fills that gap. One command, no images, no Dockerfiles — just a lightweight boundary for the workflows you’re already running: quick coding help, one-off local tasks, running installer scripts you didn’t write.

Use AI agents without handing over your whole account. jai gives your working directory full access and keeps the rest of your home behind a copy-on-write overlay — or hidden entirely.

One-line installer scripts, AI-generated shell commands, unfamiliar CLIs — stop running them against your real home directory. Drop jai in front and the worst case gets a lot smaller.

No images to build, no Dockerfiles to maintain, no 40-flag bwrap invocations. Just jai your-agent. If containment isn’t easier than YOLO mode, nobody will bother.

Pick the level of isolation that fits your workflow.

jai is free software, brought to you by the Stanford Secure Computer Systems research group and the Future of Digital Currency Initiative. The goal is to get people using AI more safely.

jai is not trying to replace containers. It fills a different niche.

Containers: great for reproducible, image-based environments. Heavier to set up for ad-hoc sandboxing of host tools. No overlay-on-home workflow.

bwrap: powerful namespace sandbox. Requires explicitly assembling the filesystem view — often turns into a long wrapper script, which is the friction jai removes.

chroot: not a security mechanism. No mount isolation, no PID namespace, no credential separation. Linux documents it as not intended for sandboxing.

jai is not a promise of perfect safety. jai is a casual sandbox — it reduces the blast radius, but does not eliminate all the ways AI agents can harm you or your system. Casual mode does not protect confidentiality. Even strict mode is not equivalent to a hardened container runtime or VM. When you need strong multi-tenant isolation or defense against a determined adversary, use a proper container or virtual machine. Read the full security model →

...

Read the original on jai.scs.stanford.edu »

10 254 shares, 19 trendiness

DOJ confirms FBI Director Kash Patel’s personal email was hacked

Iran-linked hackers successfully broke into FBI Director Kash Patel’s personal email, the Department of Justice confirmed to Reuters on Friday.

Reuters could not authenticate the leaked emails themselves but noted that the Gmail address matched an email account linked to Patel in previous data breaches “preserved by the dark web intelligence firm District 4 Labs.” The DOJ suggested the emails appeared to be authentic.

On their website, the Handala Hack Team boasted that Patel “will now find his name among the list of successfully hacked victims.” The hacker group taunted Patel by sharing photos of him sniffing cigars and holding up a jug of rum, along with other documents that Reuters reported were from 2010 to 2019.

“Soon you will realize that the FBI’s security was nothing more than a joke,” the group posted, as documented in screenshots from the website shared widely on X.

The hack came after the DOJ disrupted some of the hacker group’s websites earlier this month. In a press release, Patel threatened to “hunt” down the group, which Reuters reported calls itself “a group of pro-Palestinian vigilante hackers.” After detailing four attacks this month that the group had taken credit for, Patel offered rewards of up to $10 million for information on its members.

“Iran thought they could hide behind fake websites and keyboard threats to terrorize Americans and silence dissidents,” Patel said. “We took down four of their operation’s pillars and we’re not done. This FBI will hunt down every actor behind these cowardly death threats and cyberattacks and will bring the full force of American law enforcement down on them.”

...

Read the original on arstechnica.com »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.