10 interesting stories served every morning and every evening.

Tim Cook to become Apple Executive Chairman; John Ternus to become Apple CEO

www.apple.com

Introducing Claude Opus 4.7

www.anthropic.com

Our latest model, Claude Opus 4.7, is now generally available. Opus 4.7 is a notable improvement on Opus 4.6 in advanced software engineering, with particular gains on the most difficult tasks. Users report being able to hand off their hardest coding work—the kind that previously needed close supervision—to Opus 4.7 with confidence. Opus 4.7 handles complex, long-running tasks with rigor and consistency, pays precise attention to instructions, and devises ways to verify its own outputs before reporting back.

The model also has substantially better vision: it can see images in greater resolution. It’s more tasteful and creative when completing professional tasks, producing higher-quality interfaces, slides, and docs. And—although it is less broadly capable than our most powerful model, Claude Mythos Preview—it shows better results than Opus 4.6 across a range of benchmarks:

Last week we announced Project Glasswing, highlighting the risks—and benefits—of AI models for cybersecurity. We stated that we would keep Claude Mythos Preview’s release limited and test new cyber safeguards on less capable models first. Opus 4.7 is the first such model: its cyber capabilities are not as advanced as those of Mythos Preview (indeed, during its training we experimented with efforts to differentially reduce these capabilities). We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses. What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.

Security professionals who wish to use Opus 4.7 for legitimate cybersecurity purposes (such as vulnerability research, penetration testing, and red-teaming) are invited to join our new Cyber Verification Program.

Opus 4.7 is available today across all Claude products and our API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry. Pricing remains the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens. Developers can use claude-opus-4-7 via the Claude API (a minimal call sketch appears at the end of this article).

Claude Opus 4.7 has garnered strong feedback from our early-access testers:

In early testing, we’re seeing the potential for a significant leap for our developers with Claude Opus 4.7. It catches its own logical faults during the planning phase and accelerates execution, far beyond previous Claude models. As a financial technology platform serving millions of consumers and businesses at significant scale, this combination of speed and precision could be game-changing: accelerating development velocity for faster delivery of the trusted financial solutions our customers rely on every day.

Anthropic has already set the standard for coding models, and Claude Opus 4.7 pushes that further in a meaningful way as the state-of-the-art model on the market. In our internal evals, it stands out not just for raw capability, but for how well it handles real-world async workflows—automations, CI/CD, and long-running tasks. It also thinks more deeply about problems and brings a more opinionated perspective, rather than simply agreeing with the user.

Claude Opus 4.7 is the strongest model Hex has evaluated. It correctly reports when data is missing instead of providing plausible-but-incorrect fallbacks, and it resists dissonant-data traps that even Opus 4.6 falls for. It’s a more intelligent, more efficient Opus 4.6: low-effort Opus 4.7 is roughly equivalent to medium-effort Opus 4.6.

On our 93-task coding benchmark, Claude Opus 4.7 lifted resolution by 13% over Opus 4.6, including four tasks neither Opus 4.6 nor Sonnet 4.6 could solve. Combined with faster median latency and strict instruction following, it’s particularly meaningful for complex, long-running coding workflows. It cuts the friction from those multi-step tasks so developers can stay in the flow and focus on building.

Based on our internal research-agent benchmark, Claude Opus 4.7 has the strongest efficiency baseline we’ve seen for multi-step work. It tied for the top overall score across our six modules at 0.715 and delivered the most consistent long-context performance of any model we tested. On General Finance—our largest module—it improved meaningfully on Opus 4.6, scoring 0.813 versus 0.767, while also showing the best disclosure and data discipline in the group. And on deductive logic, an area where Opus 4.6 struggled, Opus 4.7 is solid.

Claude Opus 4.7 extends the limit of what models can do to investigate and get tasks done. Anthropic has clearly optimized for sustained reasoning over long runs, and it shows with market-leading performance. As engineers shift from working 1:1 with agents to managing them in parallel, this is exactly the kind of frontier capability that unlocks new workflows.

We’re seeing major improvements in Claude Opus 4.7’s multimodal understanding, from reading chemical structures to interpreting complex technical diagrams. The higher resolution support is helping Solve Intelligence build best-in-class tools for life sciences patent workflows, from drafting and prosecution to infringement detection and invalidity charting.

Claude Opus 4.7 takes long-horizon autonomy to a new level in Devin. It works coherently for hours, pushes through hard problems rather than giving up, and unlocks a class of deep investigation work we couldn’t reliably run before.

For Replit, Claude Opus 4.7 was an easy upgrade decision. For the work our users do every day, we observed it achieving the same quality at lower cost—more efficient and precise at tasks like analyzing logs and traces, finding bugs, and proposing fixes. Personally, I love how it pushes back during technical discussions to help me make better decisions. It really feels like a better coworker.

Claude Opus 4.7 demonstrates strong substantive accuracy on BigLaw Bench for Harvey, scoring 90.9% at high effort with better reasoning calibration on review tables and noticeably smarter handling of ambiguous document editing tasks. It correctly distinguishes assignment provisions from change-of-control provisions, a task that has historically challenged frontier models. Substance was consistently rated as a strength across our evaluations: correct, thorough, and well-cited.

Claude Opus 4.7 is a very impressive coding model, particularly for its autonomy and more creative reasoning. On CursorBench, Opus 4.7 is a meaningful jump in capabilities, clearing 70% versus Opus 4.6 at 58%.

For complex multi-step workflows, Claude Opus 4.7 is a clear step up: +14% over Opus 4.6 with fewer tokens and a third of the tool errors. It’s the first model to pass our implicit-need tests, and it keeps executing through tool failures that used to stop Opus cold. This is the reliability jump that makes Notion Agent feel like a true teammate.

In our evals, we saw a double-digit jump in accuracy of tool calls and planning in our core orchestrator agents. As users leverage Hebbia to plan and execute on use cases like retrieval, slide creation, or document generation, Claude Opus 4.7 shows the potential to improve agent decision-making in these workflows.

On Rakuten-SWE-Bench, Claude Opus 4.7 resolves 3x more production tasks than Opus 4.6, with double-digit gains in Code Quality and Test Quality. This is a meaningful lift and a clear upgrade for the engineering work our teams are shipping every day.

For CodeRabbit’s code review workloads, Claude Opus 4.7 is the sharpest model we’ve tested. Recall improved by over 10%, surfacing some of the most difficult-to-detect bugs in our most complex PRs, while precision remained stable despite the increased coverage. It’s a bit faster than GPT-5.4 xhigh on our harness, and we’re lining it up for our heaviest review work at launch.

For Genspark’s Super Agent, Claude Opus 4.7 nails the three production differentiators that matter most: loop resistance, consistency, and graceful error recovery. Loop resistance is the most critical. A model that loops indefinitely on 1 in 18 queries wastes compute and blocks users. Lower variance means fewer surprises in prod. And Opus 4.7 achieves the highest quality-per-tool-call ratio we’ve measured.

Claude Opus 4.7 is a meaningful step up for Warp. Opus 4.6 is one of the best models out there for developers, and this model is measurably more thorough on top of that. It passed Terminal Bench tasks that prior Claude models had failed, and worked through a tricky concurrency bug Opus 4.6 couldn’t crack. For us, that’s the signal.

Claude Opus 4.7 is the best model in the world for building dashboards and data-rich interfaces. The design taste is genuinely surprising—it makes choices I’d actually ship. It’s my default daily driver now.

Claude Opus 4.7 is the most capable model we’ve tested at Quantium. Evaluated against leading AI models through our proprietary benchmarking solution, the biggest gains showed up where they matter most: reasoning depth, structured problem-framing, and complex technical work. Fewer corrections, faster iterations, and stronger outputs to solve the hardest problems our clients bring us.

Claude Opus 4.7 feels like a real step up in intelligence. Code quality is noticeably improved, it’s cutting out the meaningless wrapper functions and fallback scaffolding that used to pile up, and it fixes its own code as it goes. It’s the cleanest jump we’ve seen since the move from Sonnet 3.7 to the Claude 4 series.

For the computer-use work that sits at the heart of XBOW’s autonomous penetration testing, the new Claude Opus 4.7 is a step change: 98.5% on our visual-acuity benchmark versus 54.5% for Opus 4.6. Our single biggest Opus pain point effectively disappeared, and that unlocks its use for a whole class of work where we couldn’t use it before.

Claude Opus 4.7 is a solid upgrade with no regressions for Vercel. It’s phenomenal on one-shot coding tasks, more correct and complete than Opus 4.6, and noticeably more honest about its own limits. It even does proofs on systems code before starting work, which is new behavior we haven’t seen from earlier Claude models.

Claude Opus 4.7 is very strong and outperforms Opus 4.6 with a 10% to 15% lift in task success for Factory Droids, with fewer tool errors and more reliable follow-through on validation steps. It carries work all the way through instead of stopping halfway, which is exactly what enterprise engineering teams need.

Claude Opus 4.7 autonomously built a complete Rust text-to-speech engine from scratch—neural model, SIMD kernels, browser demo—then fed its own output through a speech recognizer to verify it matched the Python reference. Months of senior engineering, delivered autonomously. The step up from Opus 4.6 is clear, and the codebase is public.

Claude Opus 4.7 passed three TBench tasks that prior Claude models couldn’t, and it’s landing fixes our previous best model missed, including a race condition. It demonstrates strong precision in identifying real issues, and surfaces important findings that other models either gave up on or didn’t resolve. In Qodo’s real-world code review benchmark, we observed top-tier precision.

On Databricks’ OfficeQA Pro, Claude Opus 4.7 shows meaningfully stronger document reasoning, with 21% fewer errors than Opus 4.6 when working with source information. Across our agentic reasoning-over-data benchmarks, it is the best-performing Claude model for enterprise document analysis.

For Ramp, Claude Opus 4.7 stands out in agent-team workflows. We’re seeing stronger role fidelity, instruction-following, coordination, and complex reasoning, especially on engineering tasks that span tools, codebases, and debugging context. Compared with Opus 4.6, it needs much less step-by-step guidance, helping us scale the internal agent workflows our engineering teams run.

Claude Opus 4.7 is measurably better than Opus 4.6 for Bolt’s longer-running app-building work, up to 10% better in the best cases, without the regressions we’ve come to expect from very agentic models. It pushes the ceiling on what our users can ship in a single session.

Below are some highlights and notes from our early testing of Opus 4.7:

Instruction following. Opus 4.7 is substantially better at following instructions. Interestingly, this means that prompts written for earlier models can sometimes now produce unexpected results: where previous models interpreted instructions loosely or skipped parts entirely, Opus 4.7 takes the instructions literally. Users should re-tune their prompts and harnesses accordingly.

Improved multimodal support. Opus 4.7 has better vision for high-resolution images: it can accept images up to 2,576 pixels on the long edge (~3.75 megapixels), more than three times as many as prior Claude models. This opens up a wealth of multimodal uses that depend on fine visual detail: computer-use agents reading dense screenshots, data extraction from complex diagrams, and work that needs pixel-perfect references.

Real-world work. As well as its state-of-the-art score on the Finance Agent evaluation (see table above), our internal testing showed Opus 4.7 to be a more effective finance analyst than Opus 4.6, producing rigorous analyses and models, more professional presentations, and tighter integration across tasks. Opus 4.7 is also state-of-the-art on GDPval-AA, a third-party evaluation of economically valuable knowledge work across finance, legal, and other domains.

Memory. Opus 4.7 is better at using file-system-based memory. It remembers important notes across long, multi-session work, and uses them to move on to new tasks that, as a result, need less up-front context.

The charts below display more evaluation results from our pre-release testing, across a range of different domains:

Overall, Opus 4.7 shows a similar safety profile to Opus 4.6: our evaluations show low rates of concerning behavior such as deception, sycophancy, and cooperation with misuse. On some measures, such as honesty and resistance to malicious “prompt injection” attacks, Opus 4.7 is an improvement on Opus 4.6; in others (such as its tendency to give overly detailed harm-reduction advice on controlled substances), Opus 4.7 is modestly weaker. Our alignment assessment concluded that the model is “largely well-aligned and trustworthy, though not fully ideal in its behavior”. Note that Mythos Preview remains the best-aligned model we’ve trained according to our evaluations. Our safety evaluations are discussed in full in the Claude Opus 4.7 System Card.

Overall misaligned behavior score from our automated behavioral audit. On this evaluation, Opus 4.7 is a modest improvement on Opus 4.6 and Sonnet 4.6, but Mythos Preview still shows the lowest rates of misaligned behavior.

In addition to Claude Opus 4.7 itself, we’re launching the following updates:

More effort control: Opus 4.7 introduces a new xhigh (“extra high”) effort level between high and max, giving users finer control over the tradeoff between reasoning and latency on hard problems. In Claude Code, we’ve raised the default effort level to xhigh for all plans. When testing Opus 4.7 for coding and agentic use cases, we recommend starting with high or xhigh effort.

On the Claude Platform (API): as well as support for higher-resolution images, we’re also launching task budgets in public beta, giving developers a way to guide Claude’s token spend so it can prioritize work across longer runs.

In Claude Code: The new /ultrareview slash command produces a dedicated review session that reads through changes and flags bugs and design issues that a careful reviewer would catch. We’re giving Pro and Max Claude Code users three free ultrareviews to try it out. In addition, we’ve extended auto mode to Max users. Auto mode is a new permissions option where Claude makes decisions on your behalf, meaning that you can run longer tasks with fewer interruptions—and with less risk than if you had chosen to skip all permissions.

Opus 4.7 is a direct upgrade to Opus 4.6, but two changes are worth planning for because they affect token usage. First, Opus 4.7 uses an updated tokenizer that improves how the model processes text. The tradeoff is that the same input can map to more tokens—roughly 1.0–1.35× depending on the content type. Second, Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings. This improves its reliability on hard problems, but it does mean it produces more output tokens. Users can control token usage in various ways: by using the effort parameter, adjusting their task budgets, or prompting the model to be more concise. In our own testing, the net effect is favorable—token usage across all effort levels is improved on an internal coding evaluation, as shown below—but we recommend measuring the difference on real traffic. We’ve written a migration guide that provides further advice on upgrading from Opus 4.6 to Opus 4.7.

Score on an internal agentic coding evaluation as a function of token usage at each effort level. In this evaluation, the model works autonomously from a single user prompt, and results may not be representative of token usage in interactive coding. See the migration guide for more on tuning effort levels.
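
For developers planning the upgrade, here is a minimal sketch of what a call to Opus 4.7 could look like through the Anthropic Python SDK. Only the model ID (claude-opus-4-7) and the per-token prices come from the post above; the effort setting passed via extra_body and the prompt text are illustrative assumptions, not documented parameters.

# Minimal sketch: calling Claude Opus 4.7 via the Anthropic Python SDK.
# Assumptions are flagged inline; only the model ID and pricing come
# from the announcement above.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-opus-4-7",  # model ID from the announcement
    max_tokens=4096,
    # ASSUMPTION: the effort level ("high", "xhigh", "max", ...) is passed
    # as an extra request field; the real parameter name and shape may differ.
    extra_body={"effort": "xhigh"},
    messages=[
        {"role": "user", "content": "Find the race condition in this worker pool."}
    ],
)
print(message.content[0].text)

# Announced pricing: $5 per million input tokens, $25 per million output tokens.
# Worth tracking per call, since the updated tokenizer can map the same input
# to roughly 1.0-1.35x as many tokens as Opus 4.6.
usage = message.usage
cost = usage.input_tokens * 5 / 1_000_000 + usage.output_tokens * 25 / 1_000_000
print(f"input={usage.input_tokens} output={usage.output_tokens} cost ~${cost:.4f}")

Routing the undocumented setting through extra_body keeps the sketch honest about what the SDK actually exposes; swap it for the real parameter once the API reference confirms the name.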

Google Broke Its Promise to Me. Now ICE Has My Data.

www.eff.org

In September 2024, Amandla Thomas-Johnson was a Ph.D. candidate studying in the U.S. on a student visa when he briefly attended a pro-Palestinian protest. In April 2025, Immigration and Customs Enforcement (ICE) sent Google an administrative subpoena requesting his data. The next month, Google gave Thomas-Johnson’s information to ICE without giving him the chance to challenge the subpoena, breaking a nearly decade-long promise to notify users before handing their data to law enforcement.

Google names a handful of exceptions to this promise (such as if Google receives a gag order from a court) that do not apply to Thomas-Johnson’s case. While ICE “requested” that Google not notify Thomas-Johnson, the request was not enforceable or mandated by a court. Today, the Electronic Frontier Foundation sent complaints to the California and New York Attorneys General asking them to investigate Google for deceptive trade practices for breaking that promise. You can read about the complaints here. Below is Thomas-Johnson’s account of his ordeal.

I thought my ordeal with U.S. immigration authorities was over a year ago, when I left the country, crossing into Canada at Niagara Falls.

By that point, the Trump administration had effectively turned federal power against international students like me. After I attended a pro-Palestine protest at Cornell University—for all of five minutes—the administration’s rhetoric about cracking down on students protesting what we saw as genocide forced me into hiding for three months. Federal agents came to my home looking for me. A friend was detained at an airport in Tampa and interrogated about my whereabouts.

I’m currently a Ph.D. student. Before that, I was a reporter. I’m a dual British and Trinidad and Tobago citizen. I have not been accused of any crime.

I believed that once I left U.S. territory, I had also left the reach of its authorities. I was wrong.

Weeks later, in Geneva, Switzerland, I received what looked like a routine email from Google. It informed me that the company had already handed over my account data to the Department of Homeland Security.

At first, I wasn’t alarmed. I had seen something similar before. An associate of mine, Momodou Taal, had received advance notice from Google and Facebook that his data had been requested, and law enforcement eventually withdrew the subpoenas before the companies turned over his data.

I assumed I would be given the same opportunity. But the language in my email was different. It was final: “Google has received and responded to legal process from a law enforcement authority compelling the release of information related to your Google Account.”

Google had already disclosed my data without telling me. There was no opportunity to contest it.

To be clear, this should not have happened this way. Google promises that it will notify users before their data is handed over in response to legal processes, including administrative subpoenas. That notice is meant to provide a chance to challenge the request. In my case, that safeguard was bypassed. My data was handed over without warning—at the request of an administration targeting students engaged in protected political speech.

Months later, my lawyer at the Electronic Frontier Foundation obtained the subpoena itself. On paper, the request focused largely on subscriber information: IP addresses, physical address, other identifiers, and session times and durations.

But taken together, these fragments form something far more powerful—a detailed surveillance profile. IP logs can be used to approximate location. Physical addresses show where you sleep. Session times would show when you were communicating with friends or family. Even without message content, the picture that emerges is intimate and invasive.

What this experience has made clear is that anyone can be targeted by law enforcement. And with their massive stores of data, technology companies can facilitate those arbitrary investigations. Together, they can combine state power, corporate data, and algorithmic inference in ways that are difficult to see—and even harder to challenge.

The consequences of what happened to me are not abstract. I left the United States. But I do not feel that I have left its reach. Being investigated by the federal government is intimidating. Questions run through your head. Am I now a marked individual? Will I face heightened scrutiny if I continue my reporting? Can I travel safely to see family in the Caribbean?

Who, exactly, can I hold accountable?

Update: This post has been updated to include more information about Google’s exceptions to their notification policy, none of which applied to the subpoena targeting Thomas-Johnson.

Safeguarding Your Website — BigScoots

www.theolivepress.es

Framework Laptop 13 Pro: Intel Core Ultra 3 & LPCAMM2

frame.work

Finally, great battery life in a Framework Laptop

20 hours: Netflix 4K streaming (250nit brightness, 30% volume, Windows 11)

17 hours: Active web usage (250nit brightness, 30% volume, Windows 11)

11 hours: Video conferencing (250nit brightness, 30% volume, Windows 11)

7 days: Standby without charging (Wi-Fi connected on Ubuntu)

Intel® Core™ Ultra Series 3 processors

The Framework Laptop 13 Pro runs on Intel® Core™ Ultra Series 3 processors, unlocking 20 hours of battery life ϟ, up to 64GB of LPCAMM2 LPDDR5X memory, and support for up to 8TB of PCIe Gen 5.0 NVMe storage. It’s designed to stay responsive under sustained, heavy workloads.

Power-efficient memory, made upgradeable

We’re among the first to pair Intel® Core™ Ultra Series 3 with LPCAMM2. A high-density interposer enables LPDDR5X in a modular form, delivering 7467 MT/s and high performance per watt without soldering it down.

A laptop that you own

You can customize it,

Pick your ports with the Framework Expansion Card system and install them directly into your laptop without relying on external adapters. The magnet-attach Bezel lets you customize with bold or translucent color options.

USB-C

USB-A

Audio Jack

DisplayPort

HDMI

MicroSD

SD

Storage - 250GB

Storage - 1TB

Ethernet

repair it,

A truly easy-to-repair laptop that’s built to respect your rights. Just scan the QR codes, follow the guides, and replace any part with a single tool that’s included in the box.

upgrade it.

When you’re ready for more performance, you can upgrade individual components instead of replacing your entire laptop. Install a new Mainboard for generational processor upgrades, add memory to handle heavier workloads, or expand your storage to increase capacity or enable dual booting. The Framework Marketplace makes it easy to find the compatible parts you need.

Runs Linux. Really well.

(you can also use Windows 11 if you want)

We don’t just support Linux; we live in it. Framework Laptop 13 Pro with Intel® Core™ Ultra Series 3 is our first Ubuntu Certified system. We seed development hardware and provide funding to a range of other distros like Fedora, Bazzite, NixOS, CachyOS, and more to ensure reliable support.

A sensory upgrade

13.5″ 2880×1920 Touchscreen Display

A custom 13.5″ 3:2 touchscreen display with sharp 2880×1920 resolution gives you the vertical space you need for coding and productivity. A 30–120Hz variable refresh rate keeps motion smooth while optimizing power, and with up to 700nits of brightness and a matte surface, it stays clear across a wide range of lighting conditions.

A haptic touchpad that beats your expectations

The large 123.7mm × 76.7mm Haptic Touchpad, powered by four piezoelectric actuators, delivers consistent, high-quality clicks across the surface. Feedback and gestures are fully tunable, so you can set it up exactly how you want.

The keyboard you love, now even better

With 1.5mm of key travel, the keyboard delivers deeper, more tactile feedback than most modern laptops without increasing noise. A CNC aluminum Input Cover Frame reduces deck flex for a more solid and consistent feel. Available in multiple ANSI and ISO layouts, in black, black with lavender, and black with gray and orange.

Dolby Atmos® audio

The side-firing speakers are tuned with Dolby Atmos® to deliver clear, balanced audio on Windows, ideal for calls or music while you work.

Thin, light, and fully aluminum

At just 15.85mm thick and 1.4kg, the Framework Laptop 13 Pro gains durability without losing portability. The Top Cover, Input Cover, and Bottom Cover are now CNC machined from 6063 aluminum, increasing rigidity and durability.

Width: 296.63mm
Depth: 228.98mm
Height: 15.85mm
Weight: 1.4kg

Open source ecosystems

We’ve open sourced design files and documentation for many core components and firmware on GitHub, giving you the freedom to modify, extend, or repurpose them.

Respecting your privacy

Privacy switches

Your privacy is protected at a hardware level, with physical switches that electrically cut off the webcam and microphones whenever you need.

No crapware

We hate software bloat as much as you do. Our pre-builts ship with Ubuntu or stock Windows 11 plus the necessary drivers, and our DIY Edition lets you bring whichever operating system you’d like.

The choice is yours

Framework Laptop 13 Pro is available pre-built with Windows or Ubuntu pre-installed, or as a DIY Edition that lets you install the operating system of your choice.

Upgrade, customize, and repair

Pick up new parts and modules for your Framework Laptop 13 Pro.

Keep track of what we’re working on with the Framework Newsletter.

ϟ

Testing conducted by Framework in April 2026 using a Framework Laptop 13 Pro with an Intel® Core™ Ultra X7 358H Processor, Intel® Arc™ B390 graphics, 2.8K touchscreen display, 32GB memory, and 1TB storage, with display brightness set to 250nits, display refresh rate set to 60Hz, speaker volume at 30%, Dolby Atmos® disabled, and wireless enabled. Battery life tested by streaming Netflix 4K content in the Netflix app on Windows 11 under Best Power Efficiency mode. Battery life varies by use and configuration.

Qwen Studio

qwen.ai

Introducing Claude Design by Anthropic Labs

www.anthropic.com

Today, we’re launching Claude Design, a new Anthropic Labs product that lets you collaborate with Claude to create polished visual work like designs, prototypes, slides, one-pagers, and more.

Claude Design is powered by our most capable vision model, Claude Opus 4.7, and is available in research preview for Claude Pro, Max, Team, and Enterprise subscribers. We’re rolling it out to users gradually throughout the day.

Even experienced designers have to ration exploration—there’s rarely time to prototype a dozen directions, so you limit yourself to a few. And for founders, product managers, and marketers with an idea but not a design background, creating and sharing those ideas can be daunting.

Claude Design gives designers room to explore widely and everyone else a way to produce visual work. Describe what you need and Claude builds a first version. From there, you refine through conversation, inline comments, direct edits, or custom sliders (made by Claude) until it’s right. When given access, Claude can also apply your team’s design system to every project automatically, so the output is consistent with the rest of your company’s designs.

Teams have been using Claude Design for:

* Realistic prototypes: Designers can turn static mockups into easily shareable interactive prototypes to gather feedback and user-test, without code review or PRs.

* Product wireframes and mockups: Product Managers can sketch out feature flows and hand them off to Claude Code for implementation, or share them with designers to refine further.

* Design explorations: Designers can quickly create a wide range of directions to explore.

* Pitch decks and presentations: Founders and Account Executives can go from a rough outline to a complete, on-brand deck in minutes, and then export as a PPTX or send to Canva.

* Marketing collateral: Marketers can create landing pages, social media assets, and campaign visuals, then loop in designers to polish.

* Frontier design: Anyone can build code-powered prototypes with voice, video, shaders, 3D, and built-in AI.

Your brand, built in. During onboarding, Claude builds a design system for your team by reading your codebase and design files. Every project after that uses your colors, typography, and components automatically. You can refine the system over time, and teams can maintain more than one.

Import from anywhere. Start from a text prompt, upload images and documents (DOCX, PPTX, XLSX), or point Claude at your codebase. You can also use the web capture tool to grab elements directly from your website so prototypes look like the real product.

Refine with fine-grained controls. Comment inline on specific elements, edit text directly, or use adjustment knobs to tweak spacing, color, and layout live. Then ask Claude to apply your changes across the full design.

Collaborate. Designs have organization-scoped sharing. You can keep a document private, share it so anyone in your organization with the link can view it, or grant edit access so colleagues can modify the design and chat with Claude together in a group conversation.

Export anywhere. Share designs as an internal URL within your organization, save as a folder, or export to Canva, PDF, PPTX, or standalone HTML files.

Handoff to Claude Code. When a design is ready to build, Claude packages everything into a handoff bundle that you can pass to Claude Code with a single instruction.

Over the coming weeks, we’ll make it easier to build integrations with Claude Design, so you can connect it to more of the tools your team already uses.

Claude Design is available for Claude Pro, Max, Team, and Enterprise subscribers. Access is included with your plan and uses your subscription limits, with the option to continue beyond those limits by enabling extra usage.

For Enterprise organizations, Claude Design is off by default. Admins can enable it in Organization settings.

Laws of Software Engineering

lawsofsoftwareengineering.com

A collection of principles and patterns that shape software systems, teams, and decisions.

56 laws

Organizations design systems that mirror their own communication structure.

Premature optimization is the root of all evil.

With a sufficient number of API users, all observable behaviors of your system will be depended on by somebody.

Leave the code better than you found it.

YAGNI (You Aren’t Gonna Need It)

Don’t add functionality until it is necessary.

Adding manpower to a late software project makes it later.

A complex system that works is invariably found to have evolved from a simple system that worked.

All non-trivial abstractions, to some degree, are leaky.

Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated.

A distributed system can guarantee only two of: consistency, availability, and partition tolerance.

Small, successful systems tend to be followed by overengineered, bloated replacements.

A set of eight false assumptions that new distributed system designers often make.

Every program attempts to expand until it can read mail.

There is a cognitive limit of about 150 stable relationships one person can maintain.

The square root of the total number of participants does 50% of the work.

Those who understand technology don’t manage it, and those who manage it don’t understand it.

In a hierarchy, every employee tends to rise to their level of incompetence.

The minimum number of team members whose loss would put the project in serious trouble.

Companies tend to promote incompetent employees to management to limit the damage they can do.

Work expands to fill the time available for its completion.

The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.

It always takes longer than you expect, even when you take into account Hofstadter’s Law.

When a measure becomes a target, it ceases to be a good measure.

Anything you need to quantify can be measured in some way better than not measuring it.

Anything that can go wrong will go wrong.

Be conservative in what you do, be liberal in what you accept from others.

Technical Debt is everything that slows us down when developing software.

Given enough eyeballs, all bugs are shallow.

Debugging is twice as hard as writing the code in the first place.

A project should have many fast unit tests, fewer integration tests, and only a small number of UI tests.

Repeatedly running the same tests becomes less effective over time.

Software that reflects the real world must evolve, and that evolution has predictable limits.

90% of everything is crap.

The speedup from parallelization is limited by the fraction of work that cannot be parallelized (see the formulas after this list).

It is possible to achieve significant speedup in parallel processing by increasing the problem size.

The value of a network is proportional to the square of the number of users.

Every piece of knowledge must have a single, unambiguous, authoritative representation.

Designs and systems should be as simple as possible.

Five main guidelines that enhance software design, making code more maintainable and scalable.

An object should only interact with its immediate friends, not strangers.

Software and interfaces should behave in a way that least surprises users and other developers.

The less you know about something, the more confident you tend to be.

Never attribute to malice that which is adequately explained by stupidity or carelessness.

The simplest explanation is often the most accurate one.

Sticking with a choice because you’ve invested time or energy in it, even when walking away would serve you better.

The Map Is Not the Territory

Our representations of reality are not the same as reality itself.

A tendency to favor information that supports our existing beliefs or ideas.

We tend to overestimate the effect of a technology in the short run and underestimate the impact in the long run.

The longer something has been in use, the more likely it is to continue being used.

Breaking a complex problem into its most basic blocks and then building up from there.

Solving a problem by considering the opposite outcome and working backward from it.

80% of the problems result from 20% of the causes.

The best way to get the correct answer on the Internet is not to ask a question, it’s to post the wrong answer.
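
Two of the parallelization entries above (the limit on speedup, and the problem-size workaround) have compact standard forms worth writing out. A quick sketch using the usual textbook statements, with p as the parallelizable fraction of the work, s the speedup applied to that fraction, N the processor count, and a the serial fraction of the scaled workload:

% Amdahl's law: overall speedup is capped by the serial remainder (1 - p),
% no matter how far the parallel part is accelerated.
S_{\mathrm{Amdahl}}(s) = \frac{1}{(1 - p) + p/s},
\qquad \lim_{s \to \infty} S_{\mathrm{Amdahl}}(s) = \frac{1}{1 - p}

% Gustafson's law: grow the problem with the processor count and the
% scaled speedup stays nearly linear in N.
S_{\mathrm{Gustafson}}(N) = a + (1 - a)N = N - a(N - 1)

For example, with p = 0.9, Amdahl caps the speedup at 10× no matter how many processors you add, which is exactly the ceiling Gustafson’s scaled view sidesteps by letting the problem size grow.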

openai.com

10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN, please leave feedback and share.

Visit pancik.com for more.