10 interesting stories served every morning and every evening.




1 1,053 shares, 43 trendiness

uBlock Origin filter list to hide YouTube Shorts

A maintained uBlock Origin filter list to hide all traces of YouTube Shorts videos.
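
For a sense of the syntax such a list uses, here are two illustrative cosmetic filter rules in uBlock Origin's filter format. They show the kind of rule that hides Shorts elements; they are not an excerpt from this particular list:

www.youtube.com##ytd-reel-shelf-renderer
www.youtube.com##ytd-guide-entry-renderer:has-text(Shorts)

The first rule hides a page element by its tag name on youtube.com; the second uses uBlock Origin's :has-text() procedural operator to match the sidebar entry labelled "Shorts".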

Copy the link below, go to uBlock Origin > Dashboard > Filter lists, scroll to the bottom, and paste the link underneath the "Import…" heading:

https://raw.githubusercontent.com/i5heu/ublock-hide-yt-shorts/master/list.txt

> uBlock Origin subscribe link < (does not work on GitHub)

After the initial creator of this list, @gijsdev, vanished half a year ago, I (i5heu) took it upon myself to maintain it.

This project is an independent, open-source initiative and is not affiliated with, endorsed by, sponsored by, or associated with Alphabet Inc., Google LLC, or YouTube.

...

Read the original on github.com »

2 767 shares, 50 trendiness

I love the work of the ArchWiki maintainers

For this year's "I love Free Software Day" I would like to thank the maintainers of Free Software documentation, and here especially the maintainers of the ArchWiki. Maintainers in general, and maintainers of documentation in particular, get far too little recognition for their contributions to software freedom.

Myself, Arch Project Leader Levente, ArchWiki maintainer Ferdinand (Alad), and FSFE's vice president Heiki at FOSDEM 2026, after I handed them some hacker chocolate.

The ArchWiki is a resource that I and many people around me regularly consult, no matter whether the question is actually about Arch or another Free Software distribution. Countless times I have read articles there to get a better understanding of the tools I use daily, like e-mail programs, editors, or the various window managers I have used over the years. It has helped me discover handy features and configuration tips that were difficult to find in the documentation of the software itself.

Whenever I run into issues setting up a GNU/Linux distribution for myself or for family and friends, the ArchWiki has had my back!

Whenever I want to better understand a piece of software, the ArchWiki is most often the first page I end up consulting.

You are one of the pearls of the internet! Or in Edward Snowden's words:

"Is it just me, or have search results become absolute garbage for basically every site? It's nearly impossible to discover useful information these days (outside the ArchWiki)." https://x.com/Snowden/status/1460666075033575425

Thank you to all the ArchWiki contributors for gathering the knowledge that helps others in society better understand technology, and to the ArchWiki maintainers for ensuring the long-term availability and reliability of this crucial resource.

If you also appreciate the work of the ArchWiki maintainers for our society, tell them as well, and I encourage you to make a donation to Arch.

PS: Thanks also to Morton for connecting me with Ferdinand and Levente at FOSDEM.

...

Read the original on k7r.eu »

3 538 shares, 22 trendiness

News publishers limit Internet Archive access due to AI scraping concerns

As part of its mission to preserve the web, the Internet Archive operates crawlers that capture webpage snapshots. Many of these snapshots are accessible through its public-facing tool, the Wayback Machine. But as AI bots scavenge the web for training data to feed their models, the Internet Archive's commitment to free information access has turned its digital library into a potential liability for some news publishers.

When The Guardian took a look at who was trying to extract its content, access logs revealed that the Internet Archive was a frequent crawler, said Robert Hahn, head of business affairs and licensing. The publisher decided to limit the Internet Archive's access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit's repository of over one trillion webpage snapshots.


Specifically, Hahn said The Guardian has taken steps to exclude itself from the Internet Archive's APIs and filter out its article pages from the Wayback Machine's URL interface. The Guardian's regional homepages, topic pages, and other landing pages will continue to appear in the Wayback Machine.

In particular, Hahn expressed concern about the Internet Archive's APIs.

"A lot of these AI businesses are looking for readily available, structured databases of content," he said. "The Internet Archive's API would have been an obvious place to plug their own machines into and suck out the IP." (He admits the Wayback Machine itself is "less risky," since the data is not as well-structured.)

As news publishers try to safeguard their content from AI companies, the Internet Archive is also getting caught in the crosshairs. The Financial Times, for example, blocks any bot that tries to scrape its paywalled content, including bots from OpenAI, Anthropic, Perplexity, and the Internet Archive. The majority of FT stories are paywalled, according to director of global public policy and platform strategy Matt Rogerson. As a result, usually only unpaywalled FT stories appear in the Wayback Machine, because those are meant to be available to the wider public anyway.

"Common Crawl and Internet Archive are widely considered to be the 'good guys' and are used by the 'bad guys' like OpenAI," said Michael Nelson, a computer scientist and professor at Old Dominion University. "In everyone's aversion to not be controlled by LLMs, I think the good guys are collateral damage."


The Guardian hasn't documented specific instances of its webpages being scraped by AI companies via the Wayback Machine. Instead, it's taking these measures proactively and is working directly with the Internet Archive to implement the changes. Hahn says the organization has been receptive to The Guardian's concerns.

The outlet stopped short of an all-out block on the Internet Archive's crawlers, Hahn said, because it supports the nonprofit's mission to democratize information, though that position remains under review as part of its routine bot management.

"[The decision] was much more about compliance and a backdoor threat to our content," he said.

When asked about The Guardian's decision, Internet Archive founder Brewster Kahle said that "if publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record." It's a prospect, he implied, that could undercut the organization's work countering "information disorder."


The Guardian isn't alone in reevaluating its relationship with the Internet Archive. The New York Times confirmed to Nieman Lab that it's actively "hard blocking" the Internet Archive's crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its robots.txt file, disallowing access to its content.

"We believe in the value of The New York Times's human-led journalism and always want to ensure that our IP is being accessed and used lawfully," said a Times spokesperson. "We are blocking the Internet Archive's bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization."

Last August, Reddit announced that it would block the Internet Archive, whose digital libraries include countless archived Reddit forums, comments sections, and profiles. This content is not unlike what Reddit now licenses to Google as AI training data for tens of millions of dollars.

"[The] Internet Archive provides a service to the open web, but we've been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine," a Reddit spokesperson told The Verge at the time. "Until they're able to defend their site and comply with platform policies…we're limiting some of their access to Reddit data to protect redditors."

Kahle has also alluded to steps the Internet Archive is taking to restrict bulk access to its libraries. In a Mastodon post last fall, he wrote that "there are many collections that are available to users but not for bulk downloading. We use internal rate-limiting systems, filtering mechanisms, and network security services such as Cloudflare."

Currently, however, the Internet Archive does not disallow any specific crawlers through its robots.txt file, including those of major AI companies. As of January 12, the robots.txt file for archive.org read: "Welcome to the Archive! Please crawl our files. We appreciate it if you can crawl responsibly. Stay open!" Shortly after we inquired about this language, it was changed. The file now reads, simply, "Welcome to the Internet Archive!"

There is evidence that the Wayback Machine, generally speaking, has been used to train LLMs in the past. An analysis of Google's C4 dataset by The Washington Post in 2023 showed that the Internet Archive was among millions of websites in the training data used to build Google's T5 model and Meta's Llama models. Out of the 15 million domains in the C4 dataset, the domain for the Wayback Machine (web.archive.org) was ranked as the 187th most present.

In May 2023, the Internet Archive went offline temporarily after an AI company caused a server overload, Wayback Machine director Mark Graham told Nieman Lab this past fall. The company sent tens of thousands of requests per second from virtual hosts on Amazon Web Services to extract text data from the nonprofit's public domain archives. The Internet Archive blocked the hosts twice before putting out a public call to "respectfully" scrape its site.

"We got in contact with them. They ended up giving us a donation," Graham said. "They ended up saying that they were sorry and they stopped doing it."

"Those wanting to use our materials in bulk should start slowly, and ramp up," wrote Kahle in a blog post shortly after the incident. "Also, if you are starting a large project please contact us…we are here to help."

The Guardian's moves to limit the Internet Archive's access made us wonder whether other news publishers were taking similar actions. We looked at publishers' robots.txt pages as a way to measure potential concern over the Internet Archive's crawling.

A website's robots.txt page tells bots which parts of the site they can crawl, acting like a "doorman" that tells visitors who is and isn't allowed in the house and which parts are off limits. Robots.txt pages aren't legally binding, so the companies running crawling bots aren't obligated to comply with them, but they do indicate where the Internet Archive is unwelcome.

For example, in addition to "hard blocking," The New York Times and The Athletic include archive.org_bot in their robots.txt files, though they do not currently disallow other bots operated by the Internet Archive.
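
To illustrate the format (this is generic robots.txt syntax, not a copy of any publisher's actual file), a site that wanted to turn away the two Internet Archive crawlers named in this story, while leaving everything else alone, could publish:

User-agent: archive.org_bot
Disallow: /

User-agent: ia_archiver-web.archive.org
Disallow: /

User-agent: *
Disallow:

Each User-agent block names a crawler, and "Disallow: /" asks it to stay away from the entire site; as noted above, compliance is voluntary.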

To explore this issue, Nieman Lab used journalist Ben Welsh's database of 1,167 news websites as a starting point. As part of a larger side project to archive news sites' homepages, Welsh runs crawlers that regularly scrape the robots.txt files of the outlets in his database. In late December, we downloaded a spreadsheet from Welsh's site that displayed all the bots disallowed in the robots.txt files of those sites. We identified four bots that the AI user agent watchdog service Dark Visitors has associated with the Internet Archive. (The Internet Archive did not respond to requests to confirm its ownership of these bots.)

This data is not comprehensive, but exploratory. It does not represent global, industry-wide trends — 76% of sites in Welsh's publisher list are based in the U.S., for example — but instead begins to shed light on which publishers are less eager to have their content crawled by the Internet Archive.

In total, 241 news sites from nine countries explicitly disallow at least one of the four Internet Archive crawling bots.

Most of those sites (87%) are owned by USA Today Co., the largest newspaper conglomerate in the United States, formerly known as Gannett. (Gannett sites make up only 18% of Welsh's original publisher list.) Each Gannett-owned outlet in our dataset disallows the same two bots: "archive.org_bot" and "ia_archiver-web.archive.org". These bots were added to the robots.txt files of Gannett-owned publications in 2025.

Some Gannett sites have also taken stronger measures to guard their content from Internet Archive crawlers. URL searches for the Des Moines Register in the Wayback Machine return a message that says, "Sorry. This URL has been excluded from the Wayback Machine."

"USA Today Co. has consistently emphasized the importance of safeguarding our content and intellectual property," a company spokesperson said via email. "Last year, we introduced new protocols to deter unauthorized data collection and scraping, redirecting such activity to a designated page outlining our licensing requirements."

Gannett declined to comment further on its relationship with the Internet Archive. In an October 2025 earnings call, CEO Mike Reed spoke to the company's anti-scraping measures.

"In September alone, we blocked 75 million AI bots across our local and USA Today platforms, the vast majority of which were seeking to scrape our local content," Reed said on that call. "About 70 million of those came from OpenAI." (Gannett signed a content licensing agreement with Perplexity in July 2025.)

About 93% (226 sites) of publishers in our dataset disallow two of the four Internet Archive bots we identified. Three news sites in the sample disallow three Internet Archive crawlers: Le Huffington Post, Le Monde, and Le Monde in English, all of which are owned by Groupe Le Monde.


The news sites in our sample aren't only targeting the Internet Archive. Of the 241 sites that disallow at least one of the four Internet Archive bots, 240 also disallow Common Crawl — another nonprofit internet preservation project, one that has been more closely linked to commercial LLM development. And 231 sites in our sample disallow bots operated by OpenAI, Google AI, and Common Crawl alike.

As we've previously reported, the Internet Archive has taken on the Herculean task of preserving the internet, and many news organizations aren't equipped to save their own work. In December, Poynter announced a joint initiative with the Internet Archive to train local news outlets on how to preserve their content. Archiving initiatives like this, while urgently needed, are few and far between. Since there is no federal mandate that requires internet content to be preserved, the Internet Archive is the most robust archiving initiative in the United States.

"The Internet Archive tends to be good citizens," Hahn said. "It's the law of unintended consequences: You do something for really good purposes, and it gets abused."

Photo of Internet Archive homepage by SDF_QWE, used under an Adobe Stock license.

...

Read the original on www.niemanlab.org »

4 406 shares, 91 trendiness

Amazon's Ring and Google's Nest Unwittingly Reveal the Severity of the U.S. Surveillance State

That the U.S. Surveillance State is rapidly growing to the point of ubiquity has been demonstrated over the past week by seemingly benign events. While the picture that emerges is grim, to put it mildly, at least Americans are again confronted with crystal clarity over how severe this has become.

The latest round of valid panic over privacy began during the Super Bowl held on Sunday. During the game, Amazon ran a commercial for its Ring camera security system. The ad manipulatively exploited people's love of dogs to induce them to ignore the consequences of what Amazon was touting. It seems that trick did not work.

The ad highlighted what the company calls its "Search Party" feature, whereby one can upload a picture of, for example, a lost dog. Doing so will activate multiple other Amazon Ring cameras in the neighborhood, which will, in turn, use AI programs to scan all dogs, it seems, and identify the one that is lost. The 30-second commercial was full of heart-tugging scenes of young children and elderly people being reunited with their lost dogs.

But the graphic Amazon used seems to have unwittingly depicted how invasive this technology can be. That this capability now exists in a product that has long been pitched as nothing more than a simple tool for homeowners to monitor their own homes created, it seems, an unavoidable contrast between public understanding of Ring and what Amazon was now boasting it could do.

Many people were not just surprised but quite shocked and alarmed to learn that what they thought was merely their own personal security system now has the ability to link with countless other Ring cameras to form a neighborhood-wide (or city-wide, or state-wide) surveillance dragnet. That Amazon emphasized this feature is available (for now) only to those who "opt in" did not assuage concerns.

Numerous media outlets sounded the alarm. The online privacy group Electronic Frontier Foundation (EFF) condemned Ring's program as previewing "a world where biometric identification could be unleashed from consumer devices to identify, track, and locate anything — human, pet, and otherwise."

Many private citizens who previously used Ring also reacted negatively. "Viral videos online show people removing or destroying their cameras over privacy concerns," reported USA Today. The backlash became so severe that, just days later, Amazon — seeking to assuage public anger — announced the termination of a partnership between Ring and Flock Safety, a police surveillance tech company (while Flock is unrelated to Search Party, public backlash made it impossible, at least for now, for Amazon to send Ring's user data to a police surveillance firm).

The Amazon ad seems to have triggered a long-overdue spotlight on how the combination of ubiquitous cameras, AI, and rapidly advancing facial recognition software will render the term "privacy" little more than a quaint concept from the past. As EFF put it, Ring's program could already run afoul of biometric privacy laws in some states, which require "explicit, informed consent from individuals before a company can just run face recognition on someone."

Those concerns escalated just a few days later in the context of the Tucson disappearance of Nancy Guthrie, mother of long-time TODAY Show host Savannah Guthrie. At the home where she lives, Nancy Guthrie used Google's Nest camera for security, a product similar to Amazon's Ring.

Guthrie, however, did not pay Google for a subscription for those cameras, instead using them solely for real-time monitoring. As CBS News explained, "with a free Google Nest plan, the video should have been deleted within 3 to 6 hours — long after Guthrie was reported missing." Even professional privacy advocates have understood that customers who use Nest without a subscription will not have their cameras connected to Google's data servers, meaning that no recordings will be stored or available for any period beyond a few hours.

For that reason, Pima County Sheriff Chris Nanos announced early on that there was no video available in part because Guthrie "didn't have an active subscription to the company." Many people, for obvious reasons, prefer to avoid permanently storing with Google comprehensive daily video records of when they leave and return to their own home, or who visits them at their home, when, and for how long.

Despite all this, FBI investigators on the case were somehow magically able to "recover" this video from Guthrie's camera many days later. FBI Director Kash Patel was essentially forced to admit this when he released still images of what appears to be the masked perpetrator who broke into Guthrie's home. (The Google user agreement, which few users read, does protect the company by stating that images may be stored even in the absence of a subscription.)

While the "discovery" of footage from this home camera by Google engineers is obviously of great value to the Guthrie family and the law enforcement agents searching for Guthrie, it raises obvious yet serious questions about why Google, contrary to common understanding, was storing the video footage of unsubscribed users. Patrick Johnson, a former NSA data researcher and CEO of a cybersecurity firm, told CBS: "There's kind of this old saying that data is never deleted, it's just renamed."

It is rather remarkable that Americans are being led, more or less willingly, into a state-corporate, Panopticon-like domestic surveillance state with relatively little resistance, though the widespread reaction to Amazon's Ring ad is encouraging. Much of that muted reaction may be due to a lack of realization about the severity of the evolving privacy threat. Beyond that, privacy and other core rights can seem abstract and less of a priority than more material concerns, at least until they are gone.

It is always the case that there are benefits available from relinquishing core civil liberties: allowing infringements on free speech may reduce false claims and hateful ideas; allowing searches and seizures without warrants will likely help the police catch more criminals, and do so more quickly; giving up privacy may, in fact, enhance security.

But the core premise of the West generally, and the U.S. in particular, is that those trade-offs are never worthwhile. Americans still all learn and are taught to admire the iconic (if not apocryphal) 1775 words of Patrick Henry, which came to define the core ethos of the Revolutionary War and the American Founding: "Give me liberty or give me death." It is hard to express in more definitive terms on which side of that liberty-versus-security trade-off the U.S. was intended to fall.

These recent events emerge in a broader context of this new Silicon Valley-driven destruction of individual privacy. Palantir's federal contracts for domestic surveillance and domestic data management continue to expand rapidly, with more and more intrusive data about Americans consolidated under the control of this one sinister corporation.

Facial recognition technology — now fully in use for an array of purposes, from Customs and Border Protection at airports to ICE's patrolling of American streets — means that fully tracking one's movements in public spaces is easier than ever, and is becoming easier by the day. It was only three years ago that we interviewed New York Times reporter Kashmir Hill about her book, "Your Face Belongs to Us." The warnings she issued about the dangers of this proliferating technology have not only come true with startling speed but also appear already beyond what even she envisioned.

On top of all this are advances in AI. Its effects on privacy cannot yet be quantified, but they will not be good. I have tried most AI programs simply to remain abreast of how they function.

After just a few weeks, I had to stop using Google's Gemini because it was compiling not just segregated data about me, but also a wide array of information to form what could reasonably be described as a dossier on my life, including information I had not wittingly provided it. It would answer questions I asked it with creepy, unrelated references to the far-too-complete picture it had managed to create of many aspects of my life (at one point, it commented, somewhat judgmentally or out of feigned "concern," about the late hours I was keeping while working, a topic I never raised).

Many of these unnerving developments have happened without much public notice because we are often distracted by what appear to be more immediate and proximate events in the news cycle. The lack of sufficient attention to these privacy dangers over the last couple of years, including at times from me, should not obscure how consequential they are.

All of this is particularly remarkable, and particularly disconcerting, since we are barely more than a decade removed from the disclosures about mass domestic surveillance enabled by the courageous whistleblower Edward Snowden. Although most of our reporting focused on state surveillance, one of the first stories featured the joint state-corporate spying framework built in conjunction with the U.S. security state and Silicon Valley giants.

The Snowden stories sparked years of anger, attempts at reform, changes in diplomatic relations, and even genuine (albeit forced) improvements in Big Tech's user privacy. But the calculation of the U.S. security state and Big Tech was that at some point, attention to privacy concerns would disperse and then virtually evaporate, enabling the state-corporate surveillance state to march on without much notice or resistance. At least as of now, the calculation seems to have been vindicated.

...

Read the original on greenwald.substack.com »

5 399 shares, 73 trendiness

Welcome to Johnny's World

Imagine you're maintaining a native project. You use Visual Studio for building on Windows, so you do the responsible thing and list it as a dependency.

If you're lucky enough not to know this yet, I envy you. Unfortunately, at this point even Boromir knows…

What you may not realize is that you've actually signed up to be unpaid tech support for Microsoft's "Visual Studio Installer". You might notice GitHub Issues becoming less about your code and more about broken builds, specifically on Windows. You find yourself explaining to a contributor that checking the "Desktop development with C++" workload isn't enough; they specifically need the v143 build tools and the 10.0.22621.0 SDK. No, not that one, the other one. You spend less time on your project because you're too busy being a human-powered dependency resolver for a 50GB IDE.

Saying "Install Visual Studio" is like handing contributors a choose-your-own-adventure book riddled with bad endings, some of which don't let you go back. I've had to re-image my entire OS more than once over the years.

Why is this tragedy unique to Windows?

On Linux, the toolchain is usually just a package manager command away. "Visual Studio", on the other hand, is thousands of components. It's so vast that Microsoft distributes it with a sophisticated GUI installer where you navigate a maze of checkboxes, hunting for which "Workloads" or "Individual Components" contain the actual compiler. Select the wrong one and you might lose hours installing something you don't need. Miss one, like "Windows 10 SDK (10.0.17763.0)" or "Spectre-mitigated libs," and your build fails three hours later with a cryptic error like MSB8101. And heaven help you if you need to downgrade to an older version of the build tools for a legacy project.

The Visual Studio ecosystem is built on a legacy of "all-in-one" monoliths. It conflates the editor, the compiler, and the SDK into a single, tangled web. When we list "Visual Studio" as a dependency, we're failing to distinguish between the tool we use to write code and the environment required to compile it.

Hours-long waits: You spend an afternoon watching a progress bar download 15GB just to get a 50MB compiler.

Zero transparency: You have no idea which files were installed or where they went. Your registry is littered with cruft and background update services are permanent residents of your Task Manager.

No version control: You can't check your compiler into Git. If a teammate has a slightly different Build Tools version, your builds can silently diverge.

The "ghost" environment: Uninstalling is never truly clean. Moving to a new machine means repeating the entire GUI dance, praying you checked the same boxes.

Even after installation, compiling a single C file from the command line requires finding the Developer Command Prompt. Under the hood, this shortcut invokes vcvarsall.bat, a fragile batch script that globally mutates your environment variables just to locate where the compiler is hiding this week.

Ultimately, you end up with build instructions that look like a legal disclaimer:

"Works on my machine with VS 17.4.2 (Build 33027.167) and SDK 10.0.22621.0. If you have 17.5, please see Issue #412. If you are on ARM64, godspeed."

On Windows, this has become "the cost of doing business". We tell users to wait three hours for a 20GB install just so they can compile a 5MB executable. It's become an active deterrent to native development.

I'm not interested in being a human debugger for someone else's installer. I want the MSVC toolchain to behave like a modern dependency: versioned, isolated, declarative.

I spent a few weeks building an open source tool to make things better. It's called msvcup. It's a small CLI program. On good network/hardware, it can install the toolchain/SDK in a few minutes, including everything to cross-compile to/from ARM. Each version of the toolchain/SDK gets its own isolated directory. It's idempotent and fast enough to invoke every time you build. Let's try it out.

REM build.bat: fetch msvcup, install the MSVC toolchain + Windows SDK, then compile hello.c
@setlocal
@if not exist msvcup.exe (
  echo msvcup.exe: installing…
  curl -L -o msvcup.zip https://github.com/marler8997/msvcup/releases/download/v2026_02_07/msvcup-x86_64-windows.zip
  tar xf msvcup.zip
  del msvcup.zip
) else (
  echo msvcup.exe: already installed
)
@if not exist msvcup.exe exit /b 1

set MSVC=msvc-14.44.17.14
set SDK=sdk-10.0.22621.7

msvcup install --lock-file msvcup.lock --manifest-update-off %MSVC% %SDK%
@if %errorlevel% neq 0 (exit /b %errorlevel%)

msvcup autoenv --target-cpu x64 --out-dir autoenv %MSVC% %SDK%
@if %errorlevel% neq 0 (exit /b %errorlevel%)

.\autoenv\cl hello.c

Believe it or not, this build.bat script replaces the need to "Install Visual Studio". The script should run on any Windows system since Windows 10 (assuming it has curl/tar, which have shipped with Windows since 2018). It installs the MSVC toolchain and the Windows SDK, and then compiles our program.
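
For completeness, here is a minimal hello.c to drop next to the script; any C source file would do, and this particular program is just an assumption for illustration:

#include <stdio.h>

int main(void) {
    /* Trivial program to confirm the freshly installed toolchain works. */
    printf("hello from an msvcup-installed MSVC toolchain\n");
    return 0;
}

Running build.bat in the same directory should then leave a hello.exe behind, since cl names the executable after the source file by default.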

For my fellow Windows developers, go ahead and take a moment. Visual Studio can't hurt you anymore. The build.bat above isn't just a helper script; it's a declaration of independence from the Visual Studio Installer. Our dependencies are fully specified, making builds reproducible across machines. And when those dependencies are installed, they won't pollute your registry or lock you into a single global version.

Also note that after the first run, the msvcup commands take milliseconds, meaning we can just leave them in our build script; now we have a fully self-contained script that can build our project on virtually any modern Windows machine.

msvcup is inspired by a small Python script written by Mārtiņš Možeiko. The key insight is that Microsoft publishes JSON manifests describing every component in Visual Studio, the same manifests the official installer uses. msvcup parses these manifests, identifies just the packages needed for compilation (the compiler, linker, headers, and libraries), and downloads them directly from Microsoft's CDN. Everything lands in versioned directories under C:\msvcup\. For details on lock files, cross-compilation, and other features, see the msvcup README.md.
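
Based on the version names used in the script above and the vcvars paths shown further down, the on-disk layout looks roughly like this; the exact contents of each directory are my assumption:

C:\msvcup\
    msvc-14.44.17.14\     compiler, linker, headers, and libs, plus vcvars-x64.bat and friends
    sdk-10.0.22621.7\     Windows SDK headers and libraries, plus its own vcvars-*.bat scripts

Installing another toolchain or SDK version simply adds another sibling directory; nothing global changes.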

The astute will also notice that our build.bat script never sources any batch files to set up the "Developer Environment". The script contains two msvcup commands. The first installs the toolchain/SDK and, like a normal installation, it includes "vcvars" scripts to set up a developer environment. Instead of sourcing those, our build.bat leverages the msvcup autoenv command to create an "Automatic Environment". This creates a directory containing wrapper executables that set the environment variables on your behalf before forwarding to the underlying tools. It even includes a toolchain.cmake file which will point your CMake projects at these tools, allowing you to build your CMake projects outside a special environment.
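
As a sketch of what that looks like for a CMake project (assuming CMake and a generator such as Ninja are already on your PATH; the project layout here is hypothetical), after running the build.bat above you could configure and build with:

cmake -S . -B build -G Ninja -DCMAKE_TOOLCHAIN_FILE=autoenv\toolchain.cmake
cmake --build build

No Developer Command Prompt is involved; the generated toolchain file points CMake at the msvcup-installed compiler and SDK.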

At Tuple (a pair-programming app), I integrated msvcup into our build system and CI, which allowed us to remove the requirement for users and CI to pre-install Visual Studio. Tuple compiles hundreds of C/C++ projects, including WebRTC. This enabled both x86_64 and ARM builds on the CI, as well as keeping the CI and everyone else on the same toolchain/SDK.

Everything installs into a versioned directory. No problem installing versions side-by-side. Easy to remove or reinstall if something goes wrong.

Cross-compilation enabled out of the box. msvcup currently always downloads the tools for all supported cross-targets, so you don't have to do any work looking for all the components you need to cross-compile.

Lock file support. A self-contained list of all the payloads/URLs. Everyone uses the same packages, and if Microsoft changes something upstream, you'll know.

Blazing fast. The install and autoenv commands are idempotent and complete in milliseconds when there's no work to do.

No more "it works on my machine because I have the 2019 Build Tools installed." No more registry-diving to find where cl.exe is hiding this week. With msvcup, your environment is defined by your code, portable across machines, and ready to compile in milliseconds.

msvcup focuses on the core compilation toolchain. If you need the full Visual Studio IDE, you'll still need the official installer. For most native development workflows, though, it covers what you actually need.

Let's try this on a real project. Here's a script that builds raylib from scratch on a clean Windows system. In this case, we'll just use the SDK without the autoenv:

@setlocal
set TARGET_CPU=x64

@if not exist msvcup.exe (
  echo msvcup.exe: installing…
  curl -L -o msvcup.zip https://github.com/marler8997/msvcup/releases/download/v2026_02_07/msvcup-x86_64-windows.zip
  tar xf msvcup.zip
  del msvcup.zip
)

set MSVC=msvc-14.44.17.14
set SDK=sdk-10.0.22621.7

msvcup.exe install --lock-file msvcup.lock --manifest-update-off %MSVC% %SDK%
@if %errorlevel% neq 0 (exit /b %errorlevel%)

@if not exist raylib (
  git clone https://github.com/raysan5/raylib -b 5.5
)

call C:\msvcup\%MSVC%\vcvars-%TARGET_CPU%.bat
call C:\msvcup\%SDK%\vcvars-%TARGET_CPU%.bat

cmd /c "cd raylib\projects\scripts && build-windows"
@if %errorlevel% neq 0 (exit /b %errorlevel%)

@echo build success: game exe at:
@echo .\raylib\projects\scripts\builds\windows-msvc\game.exe

No Visual Studio installation. No GUI. No prayer. Just a script that does exactly what it says.

P.S. Here is a page that shows how to use msvcup to build LLVM and Zig from scratch on Windows.

...

Read the original on marler8997.github.io »

6 370 shares, 17 trendiness

Breaking the Spell of Vibe Coding – fast.ai

...

Read the original on www.fast.ai »

7 358 shares, 15 trendiness

IBM is tripling the number of Gen Z entry-level jobs after finding the limits of AI adoption


...

Read the original on fortune.com »

8 291 shares, 24 trendiness

Ultra-lightweight UI library

Oat is an ultra-lightweight HTML + CSS, semantic UI component library with zero dependencies. No framework, build, or dev complexity. Just include the tiny CSS and JS files and you are good to go building decent-looking web applications with the most commonly needed components and elements.

Semantic tags and attributes are styled contextually out of the box without classes, forcing best practices and reducing markup class pollution. A few dynamic components are WebComponents and use minimal JavaScript.

Fully standalone with no dependencies on any JS or CSS frameworks or libraries. No Node.js ecosystem garbage or bloat.

Native elements and semantic attributes like role="button" are styled directly. No classes.

Semantic HTML and ARIA roles are used (and enforced in many places) throughout. Proper keyboard navigation support for all components and elements.

Easily customize the overall theme by overriding a handful of CSS variables. Setting data-theme="dark" on the body automatically uses the bundled dark theme.
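
A minimal page using the library might look like the sketch below; the stylesheet and script file names are assumptions for illustration rather than the library's documented paths:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- The two includes below are all you need; the file names here are hypothetical. -->
  <link rel="stylesheet" href="oat.min.css">
  <script src="oat.min.js" defer></script>
</head>
<body data-theme="dark">
  <!-- Semantic elements and ARIA roles are styled without utility classes. -->
  <main>
    <h1>Hello, Oat</h1>
    <button>Save</button>
    <a href="#" role="button">Cancel</a>
  </main>
</body>
</html>

Setting data-theme="dark" on the body, as described above, switches the page to the bundled dark theme.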

This was made after unending frustration with the over-engineered bloat, complexity, and dependency hell of pretty much every JavaScript UI library and framework out there. Done with the continuous PTSD of rug-pulls and lock-ins of the Node.js ecosystem trash. [1]

I've published this in case other Node.js ecosystem trauma victims find it useful.

My goal is a simple, minimal, vanilla, standards-based UI library that I can use in my own projects for the long term without having to worry about JavaScript ecosystem trash. Long term, because it's just simple vanilla CSS and JS. The look and feel are influenced by the shadcn aesthetic.

...

Read the original on oat.ink »

9 289 shares, 11 trendiness

Descent

...

Read the original on mrdoob.github.io »

10 257 shares, 23 trendiness

Flashpoint Archive

...

Read the original on flashpointarchive.org »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.