10 interesting stories served every morning and every evening.
A maintained uBlock Origin filter list to hide all traces of YouTube shorts videos.
Copy the link below, go to uBlock Origin > Dashboard > Filter lists, scroll to the bottom, and paste the link underneath the ‘Import…’ heading:
https://raw.githubusercontent.com/i5heu/ublock-hide-yt-shorts/master/list.txt
> uBlock Origin subscribe link < (does not work on GitHub)
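For reference, entries in a list like this are ordinary uBlock Origin static/cosmetic filters. The rule below is purely illustrative; the real, maintained rules live in list.txt above:

! illustrative example only, not taken from the maintained list
www.youtube.com##a[href^="/shorts/"]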
After the original creator of this list, @gijsdev, had been absent for half a year, I (i5heu) took it upon myself to maintain it.
This project is an independent, open-source initiative and is not affiliated with, endorsed by, sponsored by, or associated with Alphabet Inc., Google LLC, or YouTube.
...
Read the original on github.com »
For this year’s “I love Free Software Day” I would like to thank the maintainers of Free Software documentation, and especially the maintainers of the ArchWiki. Maintainers in general, and maintainers of documentation in particular, get far too little recognition for their contributions to software freedom.
Myself, Arch Project Leader Levente, ArchWiki maintainer Ferdinand (Alad), and FSFE’s vice president Heiki at FOSDEM 2026, after I handed them some hacker chocolate.
The ArchWiki is a resource that I and many people around me consult regularly, whether the question is actually about Arch or about another Free Software distribution. Countless times I have read articles there to better understand the tools I use daily, such as e-mail programs, editors, or the various window managers I have used over time. It helped me discover handy features and configuration tips that were difficult to find in the documentation of the software itself.
Whenever I run into issues setting up a GNU/Linux distribution for myself or for family and friends, the ArchWiki has my back!
Whenever I want to understand a piece of software better, the ArchWiki is most often the first page I end up consulting.
You are one of the pearls of the internet! Or in Edward Snowden’s words:
“Is it just me, or have search results become absolute garbage for basically every site? It’s nearly impossible to discover useful information these days (outside the ArchWiki). ” https://x.com/Snowden/status/1460666075033575425
Thank you to all the ArchWiki contributors for gathering the knowledge that helps others in society better understand technology, and to the ArchWiki maintainers for ensuring the long-term availability and reliability of this crucial resource.
If you also appreciated the work of the ArchWiki maintainers for our society, tell them as well, and I encourage you to make a donation to Arch.
PS: Thanks also to Morton for connecting me with Ferdinand and Levente at FOSDEM.
...
As part of its mission to preserve the web, the Internet Archive operates crawlers that capture webpage snapshots. Many of these snapshots are accessible through its public-facing tool, the Wayback Machine. But as AI bots scavenge the web for training data to feed their models, the Internet Archive’s commitment to free information access has turned its digital library into a potential liability for some news publishers.
When The Guardian took a look at who was trying to extract its content, access logs revealed that the Internet Archive was a frequent crawler, said Robert Hahn, head of business affairs and licensing. The publisher decided to limit the Internet Archive’s access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit’s repository of over one trillion webpage snapshots.
The Wayback Machine’s snapshots of news homepages plummet after a “breakdown” in archiving projects
Specifically, Hahn said The Guardian has taken steps to exclude itself from the Internet Archive’s APIs and filter out its article pages from the Wayback Machine’s URLs interface. The Guardian’s regional homepages, topic pages, and other landing pages will continue to appear in the Wayback Machine.
In particular, Hahn expressed concern about the Internet Archive’s APIs.
“A lot of these AI businesses are looking for readily available, structured databases of content,” he said. “The Internet Archive’s API would have been an obvious place to plug their own machines into and suck out the IP.” (He admits the Wayback Machine itself is “less risky,” since the data is not as well-structured.)
As news publishers try to safeguard their contents from AI companies, the Internet Archive is also getting caught in the crosshairs. The Financial Times, for example, blocks any bot that tries to scrape its paywalled content, including bots from OpenAI, Anthropic, Perplexity, and the Internet Archive. The majority of FT stories are paywalled, according to director of global public policy and platform strategy Matt Rogerson. As a result, usually only unpaywalled FT stories appear in the Wayback Machine because those are meant to be available to the wider public anyway.
“Common Crawl and Internet Archive are widely considered to be the ‘good guys’ and are used by ‘the bad guys’ like OpenAI,” said Michael Nelson, a computer scientist and professor at Old Dominion University. “In everyone’s aversion to not be controlled by LLMs, I think the good guys are collateral damage.”
To preserve their work — and drafts of history — journalists take archiving into their own hands
The Guardian hasn’t documented specific instances of its webpages being scraped by AI companies via the Wayback Machine. Instead, it’s taking these measures proactively and is working directly with the Internet Archive to implement the changes. Hahn says the organization has been receptive to The Guardian’s concerns.
The outlet stopped short of an all-out block on the Internet Archive’s crawlers, Hahn said, because it supports the nonprofit’s mission to democratize information, though that position remains under review as part of its routine bot management.
“[The decision] was much more about compliance and a backdoor threat to our content,” he said.
When asked about The Guardian’s decision, Internet Archive founder Brewster Kahle said that “if publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record.” It’s a prospect, he implied, that could undercut the organization’s work countering “information disorder.”
After 25 years, Brewster Kahle and the Internet Archive are still working to democratize knowledge
The Guardian isn’t alone in reevaluating its relationship to the Internet Archive. The New York Times confirmed to Nieman Lab that it’s actively “hard blocking” the Internet Archive’s crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its robots.txt file, disallowing access to its content.
“We believe in the value of The New York Times’s human-led journalism and always want to ensure that our IP is being accessed and used lawfully,” said a Times spokesperson. “We are blocking the Internet Archive’s bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization.”
Last August, Reddit announced that it would block the Internet Archive, whose digital libraries include countless archived Reddit forums, comments sections, and profiles. This content is not unlike what Reddit now licenses to Google as AI training data for tens of millions of dollars.
“[The] Internet Archive provides a service to the open web, but we’ve been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine,” a Reddit spokesperson told The Verge at the time. “Until they’re able to defend their site and comply with platform policies…we’re limiting some of their access to Reddit data to protect redditors.”
Kahle has also alluded to steps the Internet Archive is taking to restrict bulk access to its libraries. In a Mastodon post last fall, he wrote that “there are many collections that are available to users but not for bulk downloading. We use internal rate-limiting systems, filtering mechanisms, and network security services such as Cloudflare.”
Currently, however, the Internet Archive does not disallow any specific crawlers through its robots.txt file, including those of major AI companies. As of January 12, the robots.txt file for archive.org read: “Welcome to the Archive! Please crawl our files. We appreciate it if you can crawl responsibly. Stay open!” Shortly after we inquired about this language, it was changed. The file now reads, simply, “Welcome to the Internet Archive!”
There is evidence that the Wayback Machine, generally speaking, has been used to train LLMs in the past. An analysis of Google’s C4 dataset by the Washington Post in 2023 showed that the Internet Archive was among millions of websites in the training data used to build Google’s T5 model and Meta’s Llama models. Out of the 15 million domains in the C4 dataset, the domain for the Wayback Machine (web.archive.org) was ranked as the 187th most present.
Hundreds of thousands of videos from news publishers like The New York Times and Vox were used to train AI models
In May 2023, the Internet Archive went offline temporarily after an AI company caused a server overload, Wayback Machine director Mark Graham told Nieman Lab this past fall. The company sent tens of thousands of requests per second from virtual hosts on Amazon Web Services to extract text data from the nonprofit’s public domain archives. The Internet Archive blocked the hosts twice before putting out a public call to “respectfully” scrape its site.
“We got in contact with them. They ended up giving us a donation,” Graham said. “They ended up saying that they were sorry and they stopped doing it.”
“Those wanting to use our materials in bulk should start slowly, and ramp up,” wrote Kahle in a blog post shortly after the incident. “Also, if you are starting a large project please contact us …we are here to help.”
The Guardian’s moves to limit the Internet Archive’s access made us wonder whether other news publishers were taking similar actions. We looked at publishers’ robots.txt pages as a way to measure potential concern over the Internet Archive’s crawling.
A website’s robots.txt page tells bots which parts of the site they can crawl, acting like a “doorman,” telling visitors who is and isn’t allowed in the house and which parts are off limits. Robots.txt pages aren’t legally binding, so the companies running crawling bots aren’t obligated to comply with them, but they indicate where the Internet Archive is unwelcome.
For example, in addition to “hard blocking,” The New York Times and The Athletic include the archive.org_bot in their robots.txt file, though they do not currently disallow other bots operated by the Internet Archive.
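As a purely illustrative sketch (not a copy of any publisher's actual file), a robots.txt entry that turns that crawler away from an entire site looks like this:

# illustrative fragment, not any publisher's real robots.txt
User-agent: archive.org_bot
Disallow: /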
To explore this issue, Nieman Lab used journalist Ben Welsh’s database of 1,167 news websites as a starting point. As part of a larger side project to archive news sites’ homepages, Welsh runs crawlers that regularly scrape the robots.txt files of the outlets in his database. In late December, we downloaded a spreadsheet from Welsh’s site that displayed all the bots disallowed in the robots.txt files of those sites. We identified four bots that the AI user agent watchdog service Dark Visitors has associated with the Internet Archive. (The Internet Archive did not respond to requests to confirm its ownership of these bots.)
This data is not comprehensive, but exploratory. It does not represent global, industry-wide trends — 76% of sites in Welsh’s publisher list are based in the U.S., for example — but instead begins to shed light on which publishers are less eager to have their content crawled by the Internet Archive.
In total, 241 news sites from nine countries explicitly disallow at least one out of the four Internet Archive crawling bots.
Most of those sites (87%) are owned by USA Today Co., the largest newspaper conglomerate in the United States formerly known as Gannett. (Gannett sites only make up 18% of Welsh’s original publishers list.) Each Gannett-owned outlet in our dataset disallows the same two bots: “archive.org_bot” and “ia_archiver-web.archive.org”. These bots were added to the robots.txt files of Gannett-owned publications in 2025.
Some Gannett sites have also taken stronger measures to guard their contents from Internet Archive crawlers. URL searches for the Des Moines Register in the Wayback Machine return a message that says, “Sorry. This URL has been excluded from the Wayback Machine.”
“USA Today Co. has consistently emphasized the importance of safeguarding our content and intellectual property,” a company spokesperson said via email. “Last year, we introduced new protocols to deter unauthorized data collection and scraping, redirecting such activity to a designated page outlining our licensing requirements.”
Gannett declined to comment further on its relationship with the Internet Archive. In an October 2025 earnings call, CEO Mike Reed spoke to the company’s anti-scraping measures.
“In September alone, we blocked 75 million AI bots across our local and USA Today platforms, the vast majority of which were seeking to scrape our local content,” Reed said on that call. “About 70 million of those came from OpenAI.” (Gannett signed a content licensing agreement with Perplexity in July 2025.)
About 93% (226 sites) of publishers in our dataset disallow two out of the four Internet Archive bots we identified. Three news sites in the sample disallow three Internet Archive crawlers: Le Huffington Post, Le Monde, and Le Monde in English, all of which are owned by Group Le Monde.
Some French publishers are giving AI revenue directly to journalists. Could that ever happen in the U. S.?
The news sites in our sample aren’t only targeting the Internet Archive. Out of the 241 sites that disallow at least one of the four Internet Archive bots in our sample, 240 sites disallow Common Crawl — another nonprofit internet preservation project that has been more closely linked to commercial LLM development. Of our sample, 231 sites all disallow bots operated by OpenAI, Google AI, and Common Crawl.
As we’ve previously reported, the Internet Archive has taken on the Herculean task of preserving the internet, and many news organizations aren’t equipped to save their own work. In December, Poynter announced a joint initiative with the Internet Archive to train local news outlets on how to preserve their content. Archiving initiatives like this, while urgently needed, are few and far between. Since there is no federal mandate that requires internet content to be preserved, the Internet Archive is the most robust archiving initiative in the United States.
“The Internet Archive tends to be good citizens,” Hahn said. “It’s the law of unintended consequences: You do something for really good purposes, and it gets abused.”
Photo of Internet Archive homepage by SDF_QWE used under an Adobe Stock license.
...
Read the original on www.niemanlab.org »
That the U. S. Surveillance State is rapidly growing to the point of ubiquity has been demonstrated over the past week by seemingly benign events. While the picture that emerges is grim, to put it mildly, at least Americans are again confronted with crystal clarity over how severe this has become.
The latest round of valid panic over privacy began during the Super Bowl held on Sunday. During the game, Amazon ran a commercial for its Ring camera security system. The ad manipulatively exploited people’s love of dogs to induce them to ignore the consequences of what Amazon was touting. It seems that trick did not work.
The ad highlighted what the company calls its “Search Party” feature, whereby one can upload a picture, for example, of a lost dog. Doing so will activate multiple other Amazon Ring cameras in the neighborhood, which will, in turn, use AI programs to scan all dogs, it seems, and identify the one that is lost. The 30-second commercial was full of heart-tugging scenes of young children and elderly people being reunited with their lost dogs.
But the graphic Amazon used seems to have unwittingly depicted how invasive this technology can be. That this capability now exists in a product that has long been pitched as nothing more than a simple tool for homeowners to monitor their own homes created, it seems, an unavoidable contrast between public understanding of Ring and what Amazon was now boasting it could do.
Many people were not just surprised but quite shocked and alarmed to learn that what they thought was merely their own personal security system now has the ability to link with countless other Ring cameras to form a neighborhood-wide (or city-wide, or state-wide) surveillance dragnet. That Amazon emphasized that this feature is available (for now) only to those who “opt-in” did not assuage concerns.
Numerous media outlets sounded the alarm. The online privacy group Electronic Frontier Foundation (EFF) condemned Ring’s program as previewing “a world where biometric identification could be unleashed from consumer devices to identify, track, and locate anything — human, pet, and otherwise.”
Many private citizens who previously used Ring also reacted negatively. “Viral videos online show people removing or destroying their cameras over privacy concerns,” reported USA Today. The backlash became so severe that, just days later, Amazon — seeking to assuage public anger — announced the termination of a partnership between Ring and Flock Safety, a police surveillance tech company (while Flock is unrelated to Search Party, public backlash made it impossible, at least for now, for Amazon to send Ring’s user data to a police surveillance firm).
The Amazon ad seems to have triggered a long-overdue spotlight on how the combination of ubiquitous cameras, AI, and rapidly advancing facial recognition software will render the term “privacy” little more than a quaint concept from the past. As EFF put it, Ring’s program “could already run afoul of biometric privacy laws in some states, which require explicit, informed consent from individuals before a company can just run face recognition on someone.”
Those concerns escalated just a few days later in the context of the Tucson disappearance of Nancy Guthrie, mother of long-time TODAY Show host Savannah Guthrie. At the home where she lives, Nancy Guthrie used Google’s Nest camera for security, a product similar to Amazon’s Ring.
Guthrie, however, did not pay Google for a subscription for those cameras, instead solely using the cameras for real-time monitoring. As CBS News explained, “with a free Google Nest plan, the video should have been deleted within 3 to 6 hours — long after Guthrie was reported missing.” Even professional privacy advocates have understood that customers who use Nest without a subscription will not have their cameras connected to Google’s data servers, meaning that no recordings will be stored or available for any period beyond a few hours.
For that reason, Pima County Sheriff Chris Nanos announced early on “that there was no video available in part because Guthrie didn’t have an active subscription to the company.” Many people, for obvious reasons, prefer to avoid permanently storing comprehensive daily video reports with Google of when they leave and return to their own home, or who visits them at their home, when, and for how long.
Despite all this, FBI investigators on the case were somehow magically able to “recover” this video from Guthrie’s camera many days later. FBI Director Kash Patel was essentially forced to admit this when he released still images of what appears to be the masked perpetrator who broke into Guthrie’s home. (The Google user agreement, which few users read, does protect the company by stating that images may be stored even in the absence of a subscription.)
While the “discovery” of footage from this home camera by Google engineers is obviously of great value to the Guthrie family and law enforcement agents searching for Guthrie, it raises obvious yet serious questions about why Google, contrary to common understanding, was storing the video footage of unsubscribed users. A former NSA data researcher and CEO of a cybersecurity firm, Patrick Johnson, told CBS: “There’s kind of this old saying that data is never deleted, it’s just renamed.”
It is rather remarkable that Americans are being led, more or less willingly, into a state-corporate, Panopticon-like domestic surveillance state with relatively little resistance, though the widespread reaction to Amazon’s Ring ad is encouraging. Much of that muted reaction may be due to a lack of realization about the severity of the evolving privacy threat. Beyond that, privacy and other core rights can seem abstract and less of a priority than more material concerns, at least until they are gone.
It is always the case that there are benefits available from relinquishing core civil liberties: allowing infringements on free speech may reduce false claims and hateful ideas; allowing searches and seizures without warrants will likely help the police catch more criminals, and do so more quickly; giving up privacy may, in fact, enhance security.
But the core premise of the West generally, and the U. S. in particular, is that those trade-offs are never worthwhile. Americans still all learn and are taught to admire the iconic (if not apocryphal) 1775 words of Patrick Henry, which came to define the core ethos of the Revolutionary War and American Founding: “Give me liberty or give me death.” It is hard to express in more definitive terms on which side of that liberty-versus-security trade-off the U.S. was intended to fall.
These recent events emerge in a broader context of this new Silicon Valley-driven destruction of individual privacy. Palantir’s federal contracts for domestic surveillance and domestic data management continue to expand rapidly, with more and more intrusive data about Americans consolidated under the control of this one sinister corporation.
Facial recognition technology — now fully in use for an array of purposes from Customs and Border Protection at airports to ICE’s patrolling of American streets — means that fully tracking one’s movements in public spaces is easier than ever, and is becoming easier by the day. It was only three years ago that we interviewed New York Times reporter Kashmir Hill about her new book, “Your Face Belongs to Us.” The warnings she issued about the dangers of this proliferating technology have not only come true with startling speed but also appear already beyond what even she envisioned.
On top of all this are advances in AI. Its effects on privacy cannot yet be quantified, but they will not be good. I have tried most AI programs simply to remain abreast of how they function.
After just a few weeks, I had to stop my use of Google’s Gemini because it was compiling not just segregated data about me, but also a wide array of information to form what could reasonably be described as a dossier on my life, including information I had not wittingly provided it. It would answer questions I asked it with creepy, unrelated references to the far-too-complete picture it had managed to create of many aspects of my life (at one point, it commented, somewhat judgmentally or out of feigned “concern,” about the late hours I was keeping while working, a topic I never raised).
Many of these unnerving developments have happened without much public notice because we are often distracted by what appear to be more immediate and proximate events in the news cycle. The lack of sufficient attention to these privacy dangers over the last couple of years, including at times from me, should not obscure how consequential they are.
All of this is particularly remarkable, and particularly disconcerting, since we are barely more than a decade removed from the disclosures about mass domestic surveillance enabled by the courageous whistleblower Edward Snowden. Although most of our reporting focused on state surveillance, one of the first stories featured the joint state-corporate spying framework built in conjunction with the U. S. security state and Silicon Valley giants.
The Snowden stories sparked years of anger, attempts at reform, changes in diplomatic relations, and even genuine (albeit forced) improvements in Big Tech’s user privacy. But the calculation of the U. S. security state and Big Tech was that at some point, attention to privacy concerns would disperse and then virtually evaporate, enabling the state-corporate surveillance state to march on without much notice or resistance. At least as of now, the calculation seems to have been vindicated.
...
Read the original on greenwald.substack.com »
Imagine you’re maintaining a native project. You use Visual Studio for building on Windows, so you do the responsible thing and list it as a dependency.
If you’re lucky enough not to know this yet, I envy you. Unfortunately, at this point even Boromir knows…
What you may not realize is that you’ve actually signed up to be unpaid tech support for Microsoft’s “Visual Studio Installer”. You might notice GitHub Issues becoming less about your code and more about broken builds, specifically on Windows. You find yourself explaining to a contributor that they need not just the “Desktop development with C++” workload, but specifically the v143 build tools and the 10.0.22621.0 SDK. No, not that one, the other one. You spend less time on your project because you’re too busy being a human-powered dependency resolver for a 50GB IDE.
Saying “Install Visual Studio” is like handing contributors a choose-your-own-adventure book riddled with bad endings, some of which don’t let you go back. I’ve had to re-image my entire OS more than once over the years.
Why is this tragedy unique to Windows?
On Linux, the toolchain is usually just a package manager command away. On the other hand, “Visual Studio” is thousands of components. It’s so vast that Microsoft distributes it with a sophisticated GUI installer where you navigate a maze of checkboxes, hunting for which “Workloads” or “Individual Components” contain the actual compiler. Select the wrong one and you might lose hours installing something you don’t need. Miss one, like “Windows 10 SDK (10.0.17763.0)” or “Spectre-mitigated libs,” and your build fails three hours later with a cryptic error like MSB8101. And heaven help you if you need to downgrade to an older version of the build tools for a legacy project.
The Visual Studio ecosystem is built on a legacy of ‘all-in-one’ monoliths. It conflates the editor, the compiler, and the SDK into a single, tangled web. When we list ‘Visual Studio’ as a dependency, we’re failing to distinguish between the tool we use to write code and the environment required to compile it.
Hours-long waits: You spend an afternoon watching a progress bar download 15GB just to get a 50MB compiler.
Zero transparency: You have no idea which files were installed or where they went. Your registry is littered with cruft and background update services are permanent residents of your Task Manager.
No version control: You can’t check your compiler into Git. If a teammate has a slightly different Build Tools version, your builds can silently diverge.
The “ghost” environment: Uninstalling is never truly clean. Moving to a new machine means repeating the entire GUI dance, praying you checked the same boxes.
Even after installation, compiling a single C file from the command line requires finding the Developer Command Prompt. Under the hood, this shortcut invokes vcvarsall.bat, a fragile batch script that globally mutates your environment variables just to locate where the compiler is hiding this week.
Ultimately, you end up with build instructions that look like a legal disclaimer:
“Works on my machine with VS 17.4.2 (Build 33027.167) and SDK 10.0.22621.0. If you have 17.5, please see Issue #412. If you are on ARM64, godspeed.”
On Windows, this has become the “cost of doing business”. We tell users to wait three hours for a 20GB install just so they can compile a 5MB executable. It’s become an active deterrent to native development.
I’m not interested in being a human debugger for someone else’s installer. I want the MSVC toolchain to behave like a modern dependency: versioned, isolated, declarative.
I spent a few weeks building an open source tool to make things better. It’s called msvcup. It’s a small CLI program. On good network/hardware, it can install the toolchain/SDK in a few minutes, including everything to cross-compile to/from ARM. Each version of the toolchain/SDK gets its own isolated directory. It’s idempotent and fast enough to invoke every time you build. Let’s try it out.
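First we need something to compile. Assume a trivial hello.c along these lines (the listing here is just a placeholder; any minimal C program works):

#include <stdio.h>

int main(void) {
    /* the smallest possible "native project", built by cl.exe via the script below */
    printf("Hello from MSVC!\n");
    return 0;
}

Next to it, drop the following build.bat: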
@setlocal
@if not exist msvcup.exe (
    echo msvcup.exe: installing...
    curl -L -o msvcup.zip https://github.com/marler8997/msvcup/releases/download/v2026_02_07/msvcup-x86_64-windows.zip
    tar xf msvcup.zip
    del msvcup.zip
) else (
    echo msvcup.exe: already installed
)
@if not exist msvcup.exe exit /b 1
set MSVC=msvc-14.44.17.14
set SDK=sdk-10.0.22621.7
msvcup install --lock-file msvcup.lock --manifest-update-off %MSVC% %SDK%
@if %errorlevel% neq 0 (exit /b %errorlevel%)
msvcup autoenv --target-cpu x64 --out-dir autoenv %MSVC% %SDK%
@if %errorlevel% neq 0 (exit /b %errorlevel%)
.\autoenv\cl hello.c
Believe it or not, this build.bat script replaces the need to “Install Visual Studio”. This script should run on any Windows system since Windows 10 (assuming it has curl/tar which have been shipped since 2018). It installs the MSVC toolchain, the Windows SDK and then compiles our program.
For my fellow Windows developers, go ahead and take a moment. Visual Studio can’t hurt you anymore. The build.bat above isn’t just a helper script; it’s a declaration of independence from the Visual Studio Installer. Our dependencies are fully specified, making builds reproducible across machines. And when those dependencies are installed, they won’t pollute your registry or lock you into a single global version.
Also note that after the first run, the msvcup commands take milliseconds, meaning we can just leave these commands in our build script and now we have a fully self-contained script that can build our project on virtually any modern Windows machine.
msvcup is inspired by a small Python script written by Mārtiņš Možeiko. The key insight is that Microsoft publishes JSON manifests describing every component in Visual Studio, the same manifests the official installer uses. msvcup parses these manifests, identifies just the packages needed for compilation (the compiler, linker, headers, and libraries), and downloads them directly from Microsoft’s CDN. Everything lands in versioned directories under C:\msvcup\. For details on lock files, cross-compilation, and other features, see the msvcup README.md.
The astute will also notice that our build.bat script never sources any batch files to set up the “Developer Environment”. The script contains two msvcup commands. The first installs the toolchain/SDK, and like a normal installation this includes “vcvars” scripts that could set up a developer environment. Instead of sourcing those, our build.bat leverages the second command, msvcup autoenv, to create an “Automatic Environment”: a directory of wrapper executables that set the environment variables on your behalf before forwarding to the underlying tools. It even includes a toolchain.cmake file that points your CMake projects at these tools, allowing you to build CMake projects outside a special environment.
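For instance, assuming the generated file ends up at autoenv\toolchain.cmake (the autoenv command above wrote to --out-dir autoenv; check the msvcup README for the exact location), a CMake project could be configured against it with the standard toolchain-file flag:

cmake -B build -S . -DCMAKE_TOOLCHAIN_FILE=autoenv\toolchain.cmake
cmake --build build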
At Tuple (a pair-programming app), I integrated msvcup into our build system and CI, which allowed us to remove the requirement for the user/CI to pre-install Visual Studio. Tuple compiles hundreds of C/C++ projects including WebRTC. This enabled both x86_64 and ARM builds on the CI as well as keeping the CI and everyone on the same toolchain/SDK.
Everything installs into a versioned directory. No problem installing versions side-by-side. Easy to remove or reinstall if something goes wrong.
Cross-compilation enabled out of the box. msvcup currently always downloads the tools for all supported cross-targets, so you don’t have to do any work looking for all the components you need to cross-compile.
Lock file support. A self-contained list of all the payloads/URLs. Everyone uses the same packages, and if Microsoft changes something upstream, you’ll know.
Blazing fast. The install and autoenv commands are idempotent and complete in milliseconds when there’s no work to do.
No more “it works on my machine because I have the 2019 Build Tools installed.” No more registry-diving to find where cl.exe is hiding this week. With msvcup, your environment is defined by your code, portable across machines, and ready to compile in milliseconds.
msvcup focuses on the core compilation toolchain. If you need the full Visual Studio IDE you’ll still need the official installer. For most native development workflows, though, it covers what you actually need.
Let’s try this on a real project. Here’s a script that builds raylib from scratch on a clean Windows system. In this case, we’ll just use the SDK without the autoenv:
@setlocal
set TARGET_CPU=x64
@if not exist msvcup.exe (
    echo msvcup.exe: installing...
    curl -L -o msvcup.zip https://github.com/marler8997/msvcup/releases/download/v2026_02_07/msvcup-x86_64-windows.zip
    tar xf msvcup.zip
    del msvcup.zip
)
set MSVC=msvc-14.44.17.14
set SDK=sdk-10.0.22621.7
msvcup.exe install --lock-file msvcup.lock --manifest-update-off %MSVC% %SDK%
@if %errorlevel% neq 0 (exit /b %errorlevel%)
@if not exist raylib (
    git clone https://github.com/raysan5/raylib -b 5.5
)
call C:\msvcup\%MSVC%\vcvars-%TARGET_CPU%.bat
call C:\msvcup\%SDK%\vcvars-%TARGET_CPU%.bat
cmd /c "cd raylib\projects\scripts && build-windows"
@if %errorlevel% neq 0 (exit /b %errorlevel%)
@echo build success: game exe at:
@echo .\raylib\projects\scripts\builds\windows-msvc\game.exe
No Visual Studio installation. No GUI. No prayer. Just a script that does exactly what it says.
P. S. Here is a page that shows how to use msvcup to build LLVM and Zig from scratch on Windows.
...
Read the original on marler8997.github.io »
...
Read the original on www.fast.ai »
More professionals are taking mini-sabbaticals, adult gap years, and other extended career breaks. Here are the creative ways they manage the cost.
...
Read the original on fortune.com »
Oat is an ultra-lightweight HTML + CSS, semantic UI component library with zero dependencies. No framework, build, or dev complexity. Just include the tiny CSS and JS files and you are good to go building decent looking web applications with most commonly needed components and elements.
Semantic tags and attributes are styled contextually out of the box without classes, forcing best practices, and reducing markup class pollution. A few dynamic components are WebComponents and use minimal JavaScript.
Fully-standalone with no dependencies on any JS or CSS frameworks or libraries. No Node.js ecosystem garbage or bloat.
Native HTML elements and semantic attributes like role=“button” are styled directly. No classes.
Semantic HTML and ARIA roles are used (and forced in many places) throughout. Proper keyboard navigation support for all components and elements.
Easily customize the overall theme by overriding a handful of CSS variables. data-theme=“dark” on body automatically uses the bundled dark theme.
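A minimal page along those lines might look like the sketch below; the CSS/JS file names and the variable name are placeholders rather than Oat's documented ones:

<!doctype html>
<html>
<head>
  <!-- include the library's single CSS and JS files (placeholder file names) -->
  <link rel="stylesheet" href="oat.css">
  <script src="oat.js" defer></script>
  <style>
    /* override a theme variable; the variable name is a placeholder */
    :root { --accent: #0a7; }
  </style>
</head>
<body data-theme="dark">
  <!-- native elements are styled contextually, no classes needed -->
  <button>Save</button>
</body>
</html>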
This was made after the unending frustration with the over-engineered bloat, complexity, and dependency-hell of pretty much every Javascript UI library and framework out there. Done with the continuous PTSD of rug-pulls and lockins of the Node.js ecosystem trash. [1]
I’ve published this, in case other Node.js ecosystem trauma victims find it useful.
My goal is a simple, minimal, vanilla, standards-based UI library that I can use in my own projects for the long term without having to worry about Javascript ecosystem trash. Long term because it’s just simple vanilla CSS and JS. The look and feel are influenced by the shadcn aesthetic.
...
Read the original on oat.ink »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.