10 interesting stories served every morning and every evening.
Prologue: The Dragon That Couldn’t Be Slain
A Denver company that developed legal software tried. They failed.
A game studio that made software for Disney tried. They spent over a year and hundreds of thousands of dollars. They had a team of programmers in Armenia, overseen by an American PhD in mathematics. They failed.
Commodore Computers tried. After three months of staring at the source code, they gave up and mailed it back.
Steam looked at it once, during the Greenlight era. “Too niche,” they said. “Nobody will buy it. And if they do, they’ll return it.”
For nearly four decades, Wall Street Raider existed as a kind of impossible object—a game so complex that its own creator barely understood parts of it, written in a programming language so primitive that professional developers couldn’t decode it. The code was, in the creator’s own words, “indecipherable to anyone but me.”
And then, in January 2024, a 29-year-old software developer from Ohio named Ben Ward sent an email.
Michael Jenkins, 80 years old and understandably cautious after decades of failed attempts by others, was honest with him: I appreciate the enthusiasm, but I’ve been through this before. Others have tried—talented people with big budgets—and none of them could crack it. I’ll send you the source code, but I want to be upfront: I’m not expecting miracles.
A year later, that same 29-year-old would announce on Reddit: “I am the chosen one, and the game is being remade. No ifs, ands, or buts about it.”
He was joking about the “chosen one” part. Sort of.
This is the story of how Wall Street Raider—the most comprehensive financial simulator ever made—was born, nearly died, and was resurrected. It’s a story about obsession, about code that takes on a life of its own, about a game that accidentally changed the careers of hundreds of people. And it’s about the 50-year age gap between two developers who, despite meeting in person for the first time only via video call, would trust each other with four decades of work.
Michael Jenkins was supposed to be studying. Instead, he was filling notebooks with ideas for a board game.
Not just any board game. Jenkins wanted to build something like Monopoly, but instead of hotels and railroads, you’d buy and sell corporations. You’d issue stock. You’d execute mergers. You’d structure leveraged buyouts. The game would simulate the entire machinery of American capitalism, from hostile takeovers to tax accounting.
There was just one problem: it was impossible.
“Nobody’s going to have the patience to do this,” Jenkins realized as he stared at his prototype—a board covered in tiny paper stock certificates, a calculator for working out the math, sessions that stretched for hours without reaching any satisfying conclusion.
The game he wanted to make required something that didn’t exist yet: a personal computer.
So Jenkins waited. He graduated from Harvard Law in 1969. He worked as an economist at a national consulting firm. He became a CPA at one of the world’s largest accounting firms. He practiced tax law at a prestigious San Francisco firm, structuring billion-dollar mergers—the exact kind of transactions he dreamed of simulating in his game.
And all the while, he kept filling notebooks.
Jenkins began sketching ideas for a corporate board game while at Harvard Law School
Sixteen years after those first notebooks, Jenkins finally got his hands on what he’d been waiting for: a Kaypro personal computer.
It had a screen about five inches across. It ran CP/M, an operating system that would soon be forgotten when MS-DOS arrived. It was primitive beyond belief by modern standards.
The Kaypro portable computer, with its iconic built-in screen
Jenkins pulled out a slim booklet that came with the machine—a guide to Microsoft Basic written by Bill Gates himself. He’d never written a line of code in his life. He had no formal training in computers. But that night, he sat down and typed:
The computer said hello.
“As soon as I did that,” Jenkins later recalled, “I realized: oh, this isn’t that complicated.”
What happened next became the stuff of legend in the small community of Wall Street Raider devotees. Jenkins stayed up until five in the morning, writing “all kinds of crazy stuff”—fake conversations that would prank his friends when they sat down at his computer, little programs that seemed to know things about his visitors.
It was a hoot. It was also the beginning of an obsession that would consume the next four decades of his life.
Within a year, Jenkins had built something he actually wanted: the first crude version of Wall Street Raider. It already had a moving stock ticker scrolling across that tiny five-inch screen. It already had news headlines streaming past. It was ugly and incomplete, but it was real.
Meanwhile, his law practice was suffering. “I was sitting in my office programming instead of drumming up business,” he admitted. Only his side business—a series of tax guides called “Starting and Operating a Business” that would eventually sell over a million copies across all fifty states—kept him financially afloat.
The most complex parts of Wall Street Raider weren’t written during normal working hours. They were written in the dead of night, in what Jenkins called “fits of rationality.”
Picture this: It’s three in the morning. Jenkins is hunched over his computer, trying to work out how to code a merger. Not just any merger—a merger where every party has to be dealt with correctly. Bondholders. Stockholders. Option holders. Futures positions. Interest rate swap counterparties. Proper ratios for every facet. Tax implications for every transaction.
“I felt like if I go to bed and I get up in the morning, I won’t remember how to do this. So I just stayed up until I wrote that code.”
The result? Code that worked perfectly—code that he tested for years and knew was correct—but code that even he didn’t fully understand anymore.
“When I look at that code today, I still don’t really quite understand it,” he admitted. “But I don’t want to mess with it.”
This became the pattern. Jenkins would obsess over a feature until the logic crystallized in his mind, usually sometime after midnight, and then race to get it coded before the fragile understanding slipped away. The game grew layer by layer, each new feature building on the ones before, each line of code a record of what Jenkins understood about corporate finance at that particular moment in his life.
Years later, Ben Ward would give this phenomenon a name: The Jenkins Market Hypothesis.
“The hypothesis,” Ward wrote in an email to Jenkins, “is that asset prices in the game reflect competition between Michael Jenkins’ understanding of how Wall Street Raider worked at the point in time that he wrote the code over the past 40 years.”
In other words: the game’s simulated market was really just forty different versions of Michael Jenkins, from forty different stages of his life, all competing with each other.
Jenkins loved the theory. “I think it’s very much related to chaos theory,” he replied.
In 1986, Michael Jenkins retired from law and CPA practice at the age of 42. His tax guides were selling well, and his publisher had agreed to release Wall Street Raider. He thought he might spend a few years polishing his hobby project.
Thirty-four years later, he was still at it.
“I chuckle when I get emails from customers who ask me when the team is going to do one thing or the other. Well, the team is me. Ronin Software is definitely a one horse operation and always has been.”
The game that started as a Monopoly variant had become something monstrous and magnificent. By the time Jenkins was done, Wall Street Raider contained:
1,600 simulated companies, each with its own financials
Antitrust regulations (toggleable for those who wanted “robber baron” mode)
Consolidated tax accounting based on actual IRS regulations (Jenkins had been trained by the person who literally wrote those regulations)
A karma system that tracked your ethical violations and increased your chances of getting caught the more you cheated
A 271-page manual that was sold separately because it was simply too dense to give away
Hidden beneath all this machinery was something that didn’t dawn on most players until they’d been immersed in the game for months or years: an enormous amount of text. New events, scenarios, and messages would continue to pop up long after a player thought they’d seen everything—often laced with Jenkins’ trademark tongue-in-cheek graveyard humor. The game wasn’t just deep mechanically; it was deep narratively, in ways that only revealed themselves over time.
The original Wall Street Raider: from DOS (left) to Windows (right) — dense, complex, and undeniably dated
The game had, in short, become the most comprehensive financial simulator ever created—so complex that most people bounced off it, but those who broke through became devoted for life.
“The Dwarf Fortress of the stock market.”
Jenkins played chess against the world. He’d release a new feature, and within weeks, some clever player would email him: “Man, I found how to make trillions of dollars overnight with that new feature.”
“I felt at times like the IRS plugging loopholes,” Jenkins admitted. Every exploit became a patch. Every patch created new edge cases. The code grew more intricate, more layered, more incomprehensible to anyone but its creator.
And then something strange started happening.
The emails started arriving from around the world, and they weren’t about bugs.
One came from the Philippines:
“I’ve been playing your game since I was 13 years old, living in a third world country. Couldn’t even afford to buy the full version. So I played the two-year demo for years and years. And it taught me so much that now I’m working for Morgan Stanley as a forex trader in Shanghai.”
Another came from a hedge fund manager:
“I played Wall Street Raider for years and noticed that buying cheap companies—companies with low PE ratios—and turning them around seemed very profitable in the game. But I wasn’t doing that with my real clients. I wasn’t doing well. Finally I decided to just start doing what I’d been doing in Wall Street Raider.”
He attached a document: an audited report from Price Waterhouse, showing a 10-year period where he’d averaged a 44% compounded annual return using strategies he’d learned from a video game.
“Your game has changed my life.”
Jenkins heard it again and again. From CEOs. From investment bankers. From traders and professors and finance students. People who’d played the free demo version as teenagers in developing countries and parlayed what they learned into careers at Goldman Sachs and Morgan Stanley. People who’d been stone masons wondering if they could do something more.
By his own count, over 200 CEOs and investment bankers had reached out over the years to say that Wall Street Raider had shaped their careers.
“I created the game because it was fun to do so,” Jenkins said. “But I’ve been pleasantly surprised to see the positive impact it has had on the lives of a lot of people who grew up playing it for years and years.”
Players from around the world credit the game with launching their finance careers
It was, it turned out, not just a game. It was accidentally one of the most effective financial education tools ever created—a simulator so realistic that its lessons transferred directly to real markets.
Everyone wanted to modernize Wall Street Raider. Everyone failed.
The interest was obvious. Here was a game with proven educational value, devoted fans, and gameplay depth that put most competitors to shame. The only problem was the interface—a relic of the 1990s Windows era, all dropdown menus and tiny text boxes and graphics that looked, as one longtime player put it, “like they came from the dark ages.”
So they came, the would-be saviors, with their teams and their budgets and their ambitions.
A Denver company that developed legal software sent their programmers. They couldn’t make it work.
A game studio that did work for Disney assembled a team in Armenia, overseen by an American PhD in mathematics. They spent over a year and “lots of money”—by some accounts, hundreds of thousands of dollars—trying to port the game to iPad.
“None of their people had the kind of in-depth knowledge of corporate finance, economics, law, and taxation that I was able to build into the game,” Jenkins explained. “So they simply couldn’t code the simulation correctly when they didn’t have a clue how it should work.”
Commodore Computers, back in 1990, licensed the DOS version. After three months of trying to understand the source code, they mailed it back.
Steam, during the Greenlight era, rejected it outright. “Too niche,” they said. “Almost no graphics. Looks clumsy and primitive.”
The pattern was always the same. Professional programmers would look at Jenkins’ 115,000 lines of primitive BASIC—code that “broke all the rules for good structured programming”—and try to rewrite it in something modern. C++, usually. They’d make progress for a while, get 60% or 80% of the way there, and then hit a wall.
The problem wasn’t technical skill. The problem was that to rewrite the code, you had to understand the code. And to understand the code, you had to understand corporate finance, tax law, economics, and securities regulation at the same depth as someone who’d spent decades as a CPA, tax attorney, and economist.
Those people didn’t tend to become video game programmers.
“My 115,000 lines of primitive BASIC source code,” Jenkins admitted, “was apparently indecipherable to anyone but me.”
The skeletons piled up around the dragon.
“Imagine a dragon that needs slaying, with skeletons of fallen knights all around it”
The End of the Road
By his late seventies, Michael Jenkins was running out of options.
His e-commerce provider had gone bankrupt, taking six months of income with them. Payment processors kept rejecting him—some because of obscure tax complications from selling software in hundreds of countries, others because their legal departments didn’t want to be associated with anything finance-related. For a period, you literally couldn’t buy Wall Street Raider anywhere.
“At one point the challenges got so overwhelming,” Jenkins admitted, “that I seriously considered just shutting down everything.”
In 2020, a gaming journalist named AJ Churchill sent Jenkins a simple email asking whether upgrades to Speculator (a companion game) were included in the purchase price.
Jenkins’ response was… more than Churchill expected:
“Also, as a registered user, you can buy Wall Street Raider at the discounted price of $12.95. As I make revisions over a period of a year or two, I eventually decide when I’ve done enough that it’s time to issue an upgrade version, but there is no timetable. And to be frank, I’m running out of feasible ideas for improvements to both games, and there may only be one or two more upgrades to either program.
Otherwise, at age 76, I may be finally coming near the end of development with my limited software skills, unless I can license my code to a large software/game company that is willing to hire the kind of expensive programming talent that writes software for firms like Merrill Lynch or Goldman Sachs—who would be the only programmers capable of porting my game to iOS, Android, or to a classy-looking Windows GUI. And that is very unlikely.
I’ve pretty much given up on the idea of anyone ever being able to port it.”
Churchill posted the exchange to the r/tycoon subreddit with the title: “I reached out to the 79-year-old creator of Wall Street Raider and here’s what he wrote back.”
The post got some attention. People commented about what a shame it was. A few bought the game out of curiosity. And then, like most Reddit posts, it faded into obscurity.
Hear the story from Michael Jenkins and Ben Ward themselves.
But somewhere in Ohio, a young software developer read it. And he couldn’t get the image of a Bloomberg terminal out of his head.
Ben Ward’s first memory of programming was going to the library as a small child and checking out a massive textbook on C++ game development.
“I barely probably knew how to read at that point,” he recalled. “I had no idea how to install a compiler, run the code that was in this book. But it just kind of got me thinking.”
Ward was, by his own admission, a terrible student. He had ADHD that went undiagnosed until adulthood. He spent more time helping his classmates with their homework than doing his own. His two-year programming degree took five years to complete.
Ben Ward, the developer who would finally crack the code
But when it came to code, something clicked.
At 18, working at his uncle’s manufacturing company, Ward built a management system in three months that replaced their spreadsheets. It ran the business for five years. He went on to build ERP and warehouse management systems, worked at fintech companies, and became a senior full-stack developer.
...
Read the original on www.wallstreetraider.com »
A maintained uBlock Origin filter list to hide all traces of YouTube shorts videos.
Copy the link below, go to uBlock Origin > Dashboard > Filter lists, scroll to the bottom, and paste the link underneath the ‘Import…’ heading:
https://raw.githubusercontent.com/i5heu/ublock-hide-yt-shorts/master/list.txt
> uBlock Origin subscribe link < (does not work on GitHub)
After the original creator of this list, @gijsdev, had been absent for half a year, I (i5heu) took it upon myself to maintain this list.
This project is an independent, open-source initiative and is not affiliated with, endorsed by, sponsored by, or associated with Alphabet Inc., Google LLC, or YouTube.
...
Read the original on github.com »
Babylon 5 Is Now Free to Watch On YouTube
In a move that has delighted fans of classic science fiction, Warner Bros. Discovery has begun uploading full episodes of the iconic series Babylon 5, providing free access to the show just as it departs from the ad-supported streaming platform Tubi. The transition comes at a pivotal time for the series, which has maintained a dedicated following since its original run in the 1990s. Viewers noticed notifications on Tubi indicating that all five seasons would no longer be available after February 10, 2026, effectively removing one of the most accessible free streaming options for the space opera. With this shift, Warner Bros. Discovery appears to be steering the property toward its own digital ecosystem, leveraging YouTube’s vast audience to reintroduce the show to both longtime enthusiasts and a new generation.
The uploads started with the pilot episode, “The Gathering,” which serves as the entry point to the series’ intricate universe. This was followed by subsequent episodes such as “Midnight on the Firing Line” and “Soul Hunter,” released in sequence to build narrative momentum. The strategy involves posting one episode each week, allowing audiences to experience the story at a paced rhythm that mirrors the original broadcast schedule. This approach not only encourages weekly viewership but also fosters online discussions and communal watching events, much like the fan communities that formed during the show’s initial airing. The episodes are hosted on a channel affiliated with Warner Bros., complete with links to purchase the full series, blending free access with opportunities for deeper engagement through official merchandise and digital ownership.
Babylon 5’s move to YouTube represents a broader trend in content distribution, where legacy titles are revitalized through free platforms to compete in a crowded streaming landscape. Tubi’s removal of the series aligns with periodic library rotations common among ad-based services, where licensing agreements expire and content migrates to new homes. For Warner Bros. Discovery, this initiative could signal plans to expand the franchise’s visibility, especially amid ongoing interest in reboots and spin-offs that have been rumored in recent years. The company’s ownership of the property, acquired through mergers, positions it to experiment with hybrid models that combine free episodic releases with premium offerings on platforms like Max.
To understand the significance of this development, a brief look at Babylon 5’s history provides context. Created by J. Michael Straczynski, the series premiered in 1993 as a groundbreaking space opera that unfolded over five seasons, concluding in 1998. Set in the mid-23rd century, it centers on a massive space station called Babylon 5, which serves as a diplomatic hub for various alien species and humans in the aftermath of devastating interstellar wars. The station, orbiting a neutral planet, becomes a focal point for political intrigue, alliances, and conflicts among federations representing Earth, the Minbari, Centauri, Narn, and others.
Unlike many contemporaneous shows, Babylon 5 was conceived as a single, cohesive narrative arc resembling a novel for television, with each season corresponding to a year from 2258 to 2262. This structure allowed for long-term character development and escalating plotlines, including themes of war, peace, prophecy, and ancient cosmic forces. The series was innovative for its use of computer-generated imagery in visual effects, a rarity at the time, and its departure from episodic formats in favor of serialized storytelling that influenced later hits like Battlestar Galactica and The Expanse. Production faced challenges, including network changes from PTEN to TNT for the final season, but it amassed critical acclaim for its depth and world-building.
The show’s legacy extends beyond television, spawning novels, comics, and telefilms that expanded its lore. Its human-centric yet multicultural perspective, portraying Earth as a mid-tier power in a vast galaxy, resonated with audiences seeking complex sci-fi beyond utopian visions. A decade before the main events, humanity narrowly escaped annihilation in the Earth-Minbari War, an event that underscored the fragility of interstellar relations and set the stage for the station’s role as a beacon of hope.
As episodes continue to roll out on YouTube, fans are already speculating on whether this signals a full revival or simply a nostalgic re-release. For now, the weekly drops offer a timely way to revisit or discover a cornerstone of genre television, ensuring Babylon 5’s enduring place in sci-fi history. With over 110 episodes across its run, the gradual rollout promises months of content, keeping the station’s lights on for viewers worldwide.
...
Read the original on cordcuttersnews.com »
...
Read the original on infosec.exchange »
As part of its mission to preserve the web, the Internet Archive operates crawlers that capture webpage snapshots. Many of these snapshots are accessible through its public-facing tool, the Wayback Machine. But as AI bots scavenge the web for training data to feed their models, the Internet Archive’s commitment to free information access has turned its digital library into a potential liability for some news publishers.
When The Guardian took a look at who was trying to extract its content, access logs revealed that the Internet Archive was a frequent crawler, said Robert Hahn, head of business affairs and licensing. The publisher decided to limit the Internet Archive’s access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit’s repository of over one trillion webpage snapshots.
Specifically, Hahn said The Guardian has taken steps to exclude itself from the Internet Archive’s APIs and filter out its article pages from the Wayback Machine’s URLs interface. The Guardian’s regional homepages, topic pages, and other landing pages will continue to appear in the Wayback Machine.
In particular, Hahn expressed concern about the Internet Archive’s APIs.
“A lot of these AI businesses are looking for readily available, structured databases of content,” he said. “The Internet Archive’s API would have been an obvious place to plug their own machines into and suck out the IP.” (He admits the Wayback Machine itself is “less risky,” since the data is not as well-structured.)
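To make that concrete, here is a minimal sketch of the kind of structured access he is describing, using the Wayback Machine’s publicly documented CDX index API. The domain and parameters are illustrative only — nothing here is taken from The Guardian’s logs or from the article.

import requests

# Query the Wayback Machine's CDX index for snapshots of a domain.
# "theguardian.com/*" is a prefix match; swap in any site of interest.
resp = requests.get(
    "https://web.archive.org/cdx/search/cdx",
    params={"url": "theguardian.com/*", "output": "json", "limit": 5},
    timeout=30,
)
rows = resp.json()
header, snapshots = rows[0], rows[1:]  # first row is the list of field names
for snap in snapshots:
    record = dict(zip(header, snap))
    # timestamp + original URL are enough to retrieve the archived copy
    print(record["timestamp"], record["original"])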
As news publishers try to safeguard their contents from AI companies, the Internet Archive is also getting caught in the crosshairs. The Financial Times, for example, blocks any bot that tries to scrape its paywalled content, including bots from OpenAI, Anthropic, Perplexity, and the Internet Archive. The majority of FT stories are paywalled, according to director of global public policy and platform strategy Matt Rogerson. As a result, usually only unpaywalled FT stories appear in the Wayback Machine because those are meant to be available to the wider public anyway.
“Common Crawl and Internet Archive are widely considered to be the ‘good guys’ and are used by ‘the bad guys’ like OpenAI,” said Michael Nelson, a computer scientist and professor at Old Dominion University. “In everyone’s aversion to not be controlled by LLMs, I think the good guys are collateral damage.”
The Guardian hasn’t documented specific instances of its webpages being scraped by AI companies via the Wayback Machine. Instead, it’s taking these measures proactively and is working directly with the Internet Archive to implement the changes. Hahn says the organization has been receptive to The Guardian’s concerns.
The outlet stopped short of an all-out block on the Internet Archive’s crawlers, Hahn said, because it supports the nonprofit’s mission to democratize information, though that position remains under review as part of its routine bot management.
“[The decision] was much more about compliance and a backdoor threat to our content,” he said.
When asked about The Guardian’s decision, Internet Archive founder Brewster Kahle said that “if publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record.” It’s a prospect, he implied, that could undercut the organization’s work countering “information disorder.”
The Guardian isn’t alone in reevaluating its relationship to the Internet Archive. The New York Times confirmed to Nieman Lab that it’s actively “hard blocking” the Internet Archive’s crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its robots.txt file, disallowing access to its content.
“We believe in the value of The New York Times’s human-led journalism and always want to ensure that our IP is being accessed and used lawfully,” said a Times spokesperson. “We are blocking the Internet Archive’s bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization.”
Last August, Reddit announced that it would block the Internet Archive, whose digital libraries include countless archived Reddit forums, comments sections, and profiles. This content is not unlike what Reddit now licenses to Google as AI training data for tens of millions of dollars.
“[The] Internet Archive provides a service to the open web, but we’ve been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine,” a Reddit spokesperson told The Verge at the time. “Until they’re able to defend their site and comply with platform policies…we’re limiting some of their access to Reddit data to protect redditors.”
Kahle has also alluded to steps the Internet Archive is taking to restrict bulk access to its libraries. In a Mastodon post last fall, he wrote that “there are many collections that are available to users but not for bulk downloading. We use internal rate-limiting systems, filtering mechanisms, and network security services such as Cloudflare.”
Currently, however, the Internet Archive does not disallow any specific crawlers through its robots.txt file, including those of major AI companies. As of January 12, the robots.txt file for archive.org read: “Welcome to the Archive! Please crawl our files. We appreciate it if you can crawl responsibly. Stay open!” Shortly after we inquired about this language, it was changed. The file now reads, simply, “Welcome to the Internet Archive!”
There is evidence that the Wayback Machine, generally speaking, has been used to train LLMs in the past. An analysis of Google’s C4 dataset by the Washington Post in 2023 showed that the Internet Archive was among millions of websites in the training data used to build Google’s T5 model and Meta’s Llama models. Out of the 15 million domains in the C4 dataset, the domain for the Wayback Machine (web.archive.org) was ranked as the 187th most present.
In May 2023, the Internet Archive went offline temporarily after an AI company caused a server overload, Wayback Machine director Mark Graham told Nieman Lab this past fall. The company sent tens of thousands of requests per second from virtual hosts on Amazon Web Services to extract text data from the nonprofit’s public domain archives. The Internet Archive blocked the hosts twice before putting out a public call to “respectfully” scrape its site.
“We got in contact with them. They ended up giving us a donation,” Graham said. “They ended up saying that they were sorry and they stopped doing it.”
“Those wanting to use our materials in bulk should start slowly, and ramp up,” wrote Kahle in a blog post shortly after the incident. “Also, if you are starting a large project please contact us …we are here to help.”
The Guardian’s moves to limit the Internet Archive’s access made us wonder whether other news publishers were taking similar actions. We looked at publishers’ robots.txt pages as a way to measure potential concern over the Internet Archive’s crawling.
A website’s robots.txt page tells bots which parts of the site they can crawl, acting like a “doorman,” telling visitors who is and isn’t allowed in the house and which parts are off limits. Robots.txt pages aren’t legally binding, so the companies running crawling bots aren’t obligated to comply with them, but they indicate where the Internet Archive is unwelcome.
For example, in addition to “hard blocking,” The New York Times and The Athletic include the archive.org_bot in their robots.txt file, though they do not currently disallow other bots operated by the Internet Archive.
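For illustration, disallowing a single Internet Archive crawler takes only two lines in a robots.txt file (this is a generic sketch, not copied from any publisher’s actual file):

User-agent: archive.org_bot
Disallow: /

Bots that match no User-agent group are unaffected — and, as noted above, even the named bot is only being asked to stay out, not technically prevented.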
To explore this issue, Nieman Lab used journalist Ben Welsh‘s database of 1,167 news websites as a starting point. As part of a larger side project to archive news sites’ homepages, Welsh runs crawlers that regularly scrape the robots.txt files of the outlets in his database. In late December, we downloaded a spreadsheet from Welsh’s site that displayed all the bots disallowed in the robots.txt files of those sites. We identified four bots that the AI user agent watchdog service Dark Visitors has associated with the Internet Archive. (The Internet Archive did not respond to requests to confirm its ownership of these bots.)
This data is not comprehensive, but exploratory. It does not represent global, industry-wide trends — 76% of sites in Welsh’s publisher list are based in the U.S., for example — but instead begins to shed light on which publishers are less eager to have their content crawled by the Internet Archive.
In total, 241 news sites from nine countries explicitly disallow at least one out of the four Internet Archive crawling bots.
Most of those sites (87%) are owned by USA Today Co., the largest newspaper conglomerate in the United States formerly known as Gannett. (Gannett sites only make up 18% of Welsh’s original publishers list.) Each Gannett-owned outlet in our dataset disallows the same two bots: “archive.org_bot” and “ia_archiver-web.archive.org”. These bots were added to the robots.txt files of Gannett-owned publications in 2025.
Some Gannett sites have also taken stronger measures to guard their contents from Internet Archive crawlers. URL searches for the Des Moines Register in the Wayback Machine return a message that says, “Sorry. This URL has been excluded from the Wayback Machine.”
“USA Today Co. has consistently emphasized the importance of safeguarding our content and intellectual property,” a company spokesperson said via email. “Last year, we introduced new protocols to deter unauthorized data collection and scraping, redirecting such activity to a designated page outlining our licensing requirements.”
Gannett declined to comment further on its relationship with the Internet Archive. In an October 2025 earnings call, CEO Mike Reed spoke to the company’s anti-scraping measures.
“In September alone, we blocked 75 million AI bots across our local and USA Today platforms, the vast majority of which were seeking to scrape our local content,” Reed said on that call. “About 70 million of those came from OpenAI.” (Gannett signed a content licensing agreement with Perplexity in July 2025.)
About 93% (226 sites) of publishers in our dataset disallow two out of the four Internet Archive bots we identified. Three news sites in the sample disallow three Internet Archive crawlers: Le Huffington Post, Le Monde, and Le Monde in English, all of which are owned by Group Le Monde.
The news sites in our sample aren’t only targeting the Internet Archive. Out of the 241 sites that disallow at least one of the four Internet Archive bots in our sample, 240 sites disallow Common Crawl — another nonprofit internet preservation project that has been more closely linked to commercial LLM development. Of our sample, 231 sites all disallow bots operated by OpenAI, Google AI, and Common Crawl.
As we’ve previously reported, the Internet Archive has taken on the Herculean task of preserving the internet, and many news organizations aren’t equipped to save their own work. In December, Poynter announced a joint initiative with the Internet Archive to train local news outlets on how to preserve their content. Archiving initiatives like this, while urgently needed, are few and far between. Since there is no federal mandate that requires internet content to be preserved, the Internet Archive is the most robust archiving initiative in the United States.
“The Internet Archive tends to be good citizens,” Hahn said. “It’s the law of unintended consequences: You do something for really good purposes, and it gets abused.”
Photo of Internet Archive homepage by SDF_QWE used under an Adobe Stock license.
...
Read the original on www.niemanlab.org »
I recently got a smart sleep mask from Kickstarter. I was not expecting to end up with the ability to read strangers’ brainwaves and send them electric impulses in their sleep. But here we are.
The mask was from a small Chinese research company, very cool hardware — EEG brain monitoring, electrical muscle stimulation around the eyes, vibration, heating, audio. The app was still rough around the edges, though, and the mask kept disconnecting, so I asked Claude to try to reverse-engineer the Bluetooth protocol and build me a simple web control panel instead.
The first thing Claude did was scan for BLE (Bluetooth Low Energy) devices nearby. It found mine among 35 devices in range, connected, and mapped the interface — two data channels. One for sending commands, one for streaming data.
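For readers who want a feel for what that first step involves, here is a rough sketch using the bleak Python library; the device name and the idea of picking it out by a substring are assumptions for illustration, not details from the actual session.

import asyncio
from bleak import BleakScanner, BleakClient

async def main():
    # Scan for nearby BLE devices and print what's in range.
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        print(d.address, d.name)

    # Pretend we recognize the mask by name (placeholder match).
    target = next(d for d in devices if d.name and "mask" in d.name.lower())

    # Connect and map the GATT interface: services, characteristics, properties.
    async with BleakClient(target.address) as client:
        for service in client.services:
            for char in service.characteristics:
                print(service.uuid, char.uuid, char.properties)

asyncio.run(main())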
Then it tried talking to it. Sent maybe a hundred different command patterns. Modbus frames, JSON, raw bytes, common headers. Unfortunately, the device said nothing back, the protocol was not a standard one.
So Claude went after the app instead. Grabbed the Android APK, decompiled it with jadx. Turns out the app is built with Flutter, which is a bit of a problem for reverse engineering. Flutter compiles Dart source code into native ARM64 machine code — you can’t just read it back like normal Java Android apps. The actual business logic lives in a 9MB binary blob.
But even compiled binaries have strings in them. Error messages, URLs, debug logs. Claude ran strings on the binary and this was the most productive step of the whole session. Among the thousands of lines of Flutter framework noise, it found:
* Hardcoded credentials for the company’s message broker (shared by every copy of the app)
* All fifteen command builder function names (e.g. to set vibration, heating, electric stimulation, etc.)
We had the shape of the protocol. Still didn’t have the actual byte values though.
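That strings pass is easy to approximate in a few lines of Python — scan the compiled Dart blob for runs of printable ASCII and keep anything that looks protocol-related. The filename and keyword list below are assumptions for illustration.

import re

# The Flutter/Dart AOT snapshot extracted from the APK (path is a placeholder).
with open("libapp.so", "rb") as f:
    data = f.read()

# Runs of 6+ printable ASCII characters, roughly what `strings` extracts.
runs = re.findall(rb"[\x20-\x7e]{6,}", data)

keywords = (b"mqtt", b"tcp://", b"password", b"cmd", b"vibrat", b"stimulat")
for s in runs:
    if any(k in s.lower() for k in keywords):
        print(s.decode("ascii", "replace"))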
Claude then used blutter, a tool specifically for decompiling Flutter’s compiled Dart snapshots. It reconstructs the functions with readable annotations. Claude figured out the encoding, and just read off every command byte from every function. Fifteen commands, fully mapped.
Claude sent a six-byte query packet. The device came back with 153 bytes — model number, firmware version, serial number, all eight sensor channel configurations (EEG at 250Hz, respiration, 3-axis accelerometer, 3-axis gyroscope). Battery at 83%.
Vibration control worked. Heating worked. EMS worked. Music worked. Claude built me a little web dashboard with sliders for everything. I was pretty happy with it.
That could have been the end of the story.
Remember the hardcoded credentials from earlier? While poking around, Claude tried using them to connect to the company’s MQTT broker — MQTT is a pub/sub messaging system standard in IoT, where devices publish sensor readings and subscribe to commands. It connected fine. Then it started receiving data. Not just from my device — from all of them. About 25 were active:
Claude captured a couple minutes of EEG from two active sleep masks. One user seemed to be in REM sleep (mixed-frequency activity). The other was in deep slow-wave sleep (strong delta power below 4Hz). Real brainwaves from real people, somewhere in the world.
The mask also does EMS — electrical muscle stimulation around the eyes. Controlling it is just another command: mode, frequency, intensity, duration.
Since every device shares the same credentials and the same broker, if you can read someone’s brainwaves you can also send them electric impulses.
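To show what that shared-credential exposure looks like in practice, here is a sketch of the read side using the standard paho-mqtt client. The broker host, port, and credentials are placeholders — the real values came out of the decompiled app and are deliberately not reproduced here.

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"             # placeholder
USERNAME, PASSWORD = "app-user", "app-secret"  # placeholder shared credentials

def on_message(client, userdata, msg):
    # Every device talks to the same broker, so a wildcard subscription
    # receives telemetry (EEG frames, battery levels, ...) from all of them.
    print(msg.topic, len(msg.payload), "bytes")

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.username_pw_set(USERNAME, PASSWORD)
client.on_message = on_message
client.connect(BROKER_HOST, 1883)  # port is an assumption
client.subscribe("#")              # wildcard: every topic on the broker
client.loop_forever()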
For obvious reasons, I am not naming the product/company here, but have reached out to inform them about the issue.
This whole thing made me revisit Karpathy’s Digital Hygiene post, and you probably should too.
The reverse engineering — Bluetooth, APK decompilation, Dart binary analysis, MQTT discovery — was more or less one-shotted by Claude (Opus 4.6) over a 30-minute autonomous session.
Update: this somehow reached #1 on HN and folks have been asking for the claude conversation transcript. I’ve added it here.
...
Read the original on aimilios.bearblog.dev »
The Vim project is happy to announce that Vim 9.2 has been released. Vim 9.2 brings significant enhancements to the Vim9 scripting language, improved diff mode, comprehensive completion features, and platform-specific improvements including experimental Wayland support.
Comprehensive Completion: Added support for fuzzy matching during insert-mode completion and the ability to complete words directly from registers (CTRL-X CTRL-R). New ‘completeopt’ flags like nosort and nearest offer finer control over how matches are displayed and ordered.
Modern Platform Support: Full support for the Wayland UI and clipboard has been added. On Linux and Unix-like systems, Vim now adheres to the XDG Base Directory Specification, using $HOME/.config/vim for user configuration.
UI Enhancements: A new vertical tabpanel provides an alternative to the horizontal tabline. The MS-Windows GUI now supports native dark mode for the menu and title bars, along with improved fullscreen support and higher-quality toolbar icons.
Interactive Learning: A new built-in interactive tutor plugin (started via :Tutor) provides a modernized learning experience beyond the traditional vimtutor.
Significant language enhancements including native support for Enums, Generic functions, and the Tuple data type. Built-in functions are now integrated as object methods, and classes now support protected _new() methods and :defcompile for full method compilation.
The maturity of Vim9 script’s modern constructs is now being leveraged by advanced AI development tools. Contributor Yegappan Lakshmanan recently demonstrated the efficacy of these new features through two projects generated using GitHub Copilot:
Battleship in Vim9: A complete implementation of the classic game, showcasing classes and type aliases. [GitHub]
Number Puzzle: A logic game demonstrating the efficiency of modern Vim9 for interactive plugins. [GitHub]
Vim 9.2 introduces significant enhancements to how changes are visualized and aligned in diff mode:
Linematch Algorithm: Includes the “linematch” algorithm for the ‘diffopt’ setting. This aligns changes between buffers on similar lines, greatly improving diff highlighting accuracy.
Diff Anchors: The new ‘diffanchors’ option allows you to specify anchor points (comma-separated addresses) to split and independently diff buffer sections, ensuring better alignment in complex files.
Inline Highlighting: Improves highlighting for changes within a line. This is configurable via the “inline” sub-option for ‘diffopt’. Note that “inline:simple” has been added to the default ‘diffopt’ value.
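As a quick illustration, the new diff behaviour can be switched on from a vimrc roughly like this — the numeric limit and the inline granularity are placeholder choices, and :h 'diffopt' has the exact syntax:

" Align similar lines across buffers with the linematch algorithm
set diffopt+=linematch:60
" Highlight within-line changes at word granularity instead of the default inline:simple
set diffopt+=inline:word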
Several long-standing defaults have been updated to better suit modern hardware and workflows. These values have been removed from defaults.vim as they are now the internal defaults.
These examples demonstrate how to use the powerful new completion and introspection tools available in Vim 9.2.
Vim’s standard completion frequently checks for user input while searching for new matches. It is responsive irrespective of file size. This makes it well-suited for smooth auto-completion.
vim9script
def InsComplete()
if getcharstr(1) == '' && getline('.')->strpart(0, col('.') - 1) =~ '\k$'
SkipTextChangedIEvent()
feedkeys("\<C-n>", "n")
endif
enddef
def SkipTextChangedIEvent(): string
# Suppress next event caused by <C-n>
vim9script
var selected_match = null_string
var allfiles: list<string>
def GrepComplete(arglead: string, cmdline: string, cursorpos: number): list<string>
return arglead->len() > 1 ? systemlist($'grep -REIHns "{arglead}"' ..
' --exclude-dir=.git --exclude=".*" --exclude="tags" --exclude="*.swp"') : []
enddef
def VisitFile()
if (selected_match != null_string)
var qfitem = getqflist({lines: [selected_match]}).items[0]
if qfitem->has_key('bufnr') && qfitem.lnum > 0
var pos = qfitem.vcol > 0 ? 'setcharpos' : 'setpos'
exec $':b +call\ {pos}(".",\ [0,\ {qfitem.lnum},\ {qfitem.col},\ 0]) {qfitem.bufnr}'
setbufvar(qfitem.bufnr, '&buflisted', 1)
endif
endif
enddef
def FuzzyFind(arglead: string, _: string, _: number): list<string>
if allfiles == null_list
allfiles = systemlist($'find {get(g:, "fzfind_root", ".")} \!
\( -path "*/.git" -prune -o -name "*.swp" \) -type f -follow')
endif
return arglead == '' ? allfiles : allfiles->matchfuzzy(arglead)
enddef
def FuzzyBuffer(arglead: string, _: string, _: number): list<string>
var bufs = execute('buffers', 'silent!')->split("\n")
var altbuf = bufs->indexof((_, v) => v =~ '^\s*\d\+\s\+#')
if altbuf != -1
[bufs[0], bufs[altbuf]] = [bufs[altbuf], bufs[0]]
endif
return arglead == '' ? bufs : bufs->matchfuzzy(arglead)
enddef
def SelectItem()
selected_match = ''
if getcmdline() =~ '^\s*\%(Grep\|Find\|Buffer\)\s'
var info = cmdcomplete_info()
if info != {} && info.pum_visible && !info.matches->empty()
selected_match = info.selected != -1 ? info.matches[info.selected] : info.matches[0]
setcmdline(info.cmdline_orig) # Preserve search pattern in history
endif
endif
enddef
command! -nargs=+ -complete=customlist,GrepComplete Grep VisitFile()
command! -nargs=* -complete=customlist,FuzzyBuffer Buffer exe 'b ' .. selected_match->matchstr('\d\+')
command! -nargs=* -complete=customlist,FuzzyFind Find exe !empty(selected_match) ? $'e {selected_match}' : ''
nnoremap
vim9script
def CmdComplete()
var [cmdline, curpos] = [getcmdline(), getcmdpos()]
if getchar(1, {number: true}) == 0 # Typeahead is empty
&& !pumvisible() && curpos == cmdline->len() + 1
&& cmdline =~ '\%(\w\|[*/:.-]\)$' && cmdline !~ '^\d\+$'
feedkeys(“\
For automatic popup menu completion as you type in search or : commands, include this in your .vimrc:
vim9script
def CmdComplete()
var [cmdline, curpos, cmdmode] = [getcmdline(), getcmdpos(), expand(′
Other Improvements and Changes
Many bugs have been fixed since the release of Vim 9.1, including security vulnerabilities, memory leaks and potential crashes.
See the helpfile for other improvements: :h new-other-9.2
Changes to existing behaviour is documented at: :h changed-9.2
A few new functions, autocommands, ex commands and options have been added: :h added-9.2
The full list of patches is documented at: :h patches-9.2
For over 30 years, Vim has been “Charityware,” supporting children in Kibaale, Uganda. Following the passing of Bram Moolenaar, the ICCF Holland foundation was dissolved, and its mission has been carried forward by a new partner.
ICCF Holland Dissolution: Because the charity could not be sustained in its original form without Bram, ICCF Holland was dissolved and its remaining funds were transferred to ensure continued support for the Kibaale project.
Partnership with Kuwasha: To ensure that aid remained uninterrupted, all sponsorship activities were moved to Kuwasha, a long-term partner based in Canada that now manages the projects in Uganda.
Continuing the Legacy: Vim remains Charityware. We encourage users to continue supporting the needy children in Uganda through this new transition.
For information on how to support this cause, please visit the Sponsor page.
We would like to thank everybody who contributed to the project through patches, translations, and bug reports. We are very grateful for any support.
You can find the new release on the Download page.
...
Read the original on www.vim.org »
This page contains a curated list of recent changes to main branch Zig.
Also available as an RSS feed.
This page contains entries for the year 2026. Other years are available in the Devlog archive page.
As we approach the end of the 0.16.0 release cycle, Jacob has been hard at work, bringing std.Io.Evented up to speed with all the latest API changes. Both of these are based on userspace stack switching, sometimes called “fibers”, “stackful coroutines”, or “green threads”. They are now available to tinker with, by constructing one’s application using std.Io.Evented. They should be considered experimental because there is important followup work to be done before they can be used reliably and robustly:
* diagnose the unexpected performance degradation when using IoMode.evented for the compiler
* builtin function to tell you the maximum stack size of a given function, to make these implementations practical to use when overcommit is off
With those caveats in mind, it seems we are indeed reaching the Promised Land, where Zig code can have Io implementations effortlessly swapped out:
const std = @import("std");
pub fn main(init: std.process.Init.Minimal) !void {
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    const gpa = debug_allocator.allocator();
    var threaded: std.Io.Threaded = .init(gpa, .{
        .argv0 = .init(init.args),
        .environ = init.environ,
    });
    defer threaded.deinit();
    const io = threaded.io();
    return app(io);
}

fn app(io: std.Io) !void {
    try std.Io.File.stdout().writeStreamingAll(io, "Hello, World!\n");
}
$ strace ./hello_threaded
execve("./hello_threaded", ["./hello_threaded"], 0x7ffc1da88b20 /* 98 vars */) = 0
mmap(NULL, 262207, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f583f338000
arch_prctl(ARCH_SET_FS, 0x7f583f378018) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_STACK, {rlim_cur=16384*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0
sigaltstack({ss_sp=0x7f583f338000, ss_flags=0, ss_size=262144}, NULL) = 0
sched_getaffinity(0, 128, [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31]) = 8
rt_sigaction(SIGIO, {sa_handler=0x1019d90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=0x1019d90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
writev(1, [{iov_base="Hello, World!\n", iov_len=14}], 1Hello, World!
) = 14
rt_sigaction(SIGIO, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, NULL, 8) = 0
exit_group(0) = ?
+++ exited with 0 +++
Swapping out only the I/O implementation:
const std = @import("std");
pub fn main(init: std.process.Init.Minimal) !void {
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    const gpa = debug_allocator.allocator();
    var evented: std.Io.Evented = undefined;
    try evented.init(gpa, .{
        .argv0 = .init(init.args),
        .environ = init.environ,
        .backing_allocator_needs_mutex = false,
    });
    defer evented.deinit();
    const io = evented.io();
    return app(io);
}

fn app(io: std.Io) !void {
    try std.Io.File.stdout().writeStreamingAll(io, "Hello, World!\n");
}
execve("./hello_evented", ["./hello_evented"], 0x7fff368894f0 /* 98 vars */) = 0
mmap(NULL, 262215, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4c28000
arch_prctl(ARCH_SET_FS, 0x7f70a4c68020) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_STACK, {rlim_cur=16384*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0
sigaltstack({ss_sp=0x7f70a4c28008, ss_flags=0, ss_size=262144}, NULL) = 0
sched_getaffinity(0, 128, [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31]) = 8
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4c27000
mmap(0x7f70a4c28000, 548864, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4ba1000
io_uring_setup(64, {flags=IORING_SETUP_COOP_TASKRUN|IORING_SETUP_SINGLE_ISSUER, sq_thread_cpu=0, sq_thread_idle=1000, sq_entries=64, cq_entries=128, features=IORING_FEAT_SINGLE_MMAP|IORING_FEAT_NODROP|IORING_FEAT_SUBMIT_STABLE|IORING_FEAT_RW_CUR_POS|IORING_FEAT_CUR_PERSONALITY|IORING_FEAT_FAST_POLL|IORING_FEAT_POLL_32BITS|IORING_FEAT_SQPOLL_NONFIXED|IORING_FEAT_EXT_ARG|IORING_FEAT_NATIVE_WORKERS|IORING_FEAT_RSRC_TAGS|IORING_FEAT_CQE_SKIP|IORING_FEAT_LINKED_FILE|IORING_FEAT_REG_REG_RING|IORING_FEAT_RECVSEND_BUNDLE|IORING_FEAT_MIN_TIMEOUT|IORING_FEAT_RW_ATTR|IORING_FEAT_NO_IOWAIT, sq_off={head=0, tail=4, ring_mask=16, ring_entries=24, flags=36, dropped=32, array=2112, user_addr=0}, cq_off={head=8, tail=12, ring_mask=20, ring_entries=28, overflow=44, cqes=64, flags=40, user_addr=0}}) = 3
mmap(NULL, 2368, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, 3, 0) = 0x7f70a4ba0000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, 3, 0x10000000) = 0x7f70a4b9f000
io_uring_enter(3, 1, 1, IORING_ENTER_GETEVENTS, NULL, 8Hello, World!
) = 1
io_uring_enter(3, 1, 1, IORING_ENTER_GETEVENTS, NULL, 8) = 1
munmap(0x7f70a4b9f000, 4096) = 0
munmap(0x7f70a4ba0000, 2368) = 0
close(3) = 0
munmap(0x7f70a4ba1000, 548864) = 0
exit_group(0) = ?
+++ exited with 0 +++
Key point here being that the app function is identical between those two snippets.
Moving beyond Hello World, the Zig compiler itself works fine using std.Io.Evented, both with io_uring and with GCD, but as mentioned above, there is a not-yet-diagnosed performance degradation when doing so.
If you have a Zig project with dependencies, two big changes just landed which I think you will be interested to learn about. Fetched packages are now stored locally in the zig-pkg directory of the project root (next to your build.zig file). For example, here are a few results from awebo after running zig build:
$ du -sh zig-pkg/*
13M freetype-2.14.1-alzUkTyBqgBwke4Jsot997WYSpl207Ij9oO-2QOvGrOi
20K opus-0.0.2-vuF-cMAkAADVsm707MYCtPmqmRs0gzg84Sz0qGbb5E3w
4.3M pulseaudio-16.1.1-9-mk_62MZkNwBaFwiZ7ZVrYRIf_3dTqqJR5PbMRCJzSuLw
5.2M uucode-0.1.0-ZZjBPvtWUACf5dqD_f9I37VGFsN24436CuceC5pTJ25n
728K vaxis-0.5.1-BWNV_AxECQCj3p4Hcv4U3Yo1WMUJ7Z2FUj0UkpuJGxQQ
It is highly recommended to add this directory to the project-local source control ignore file (e.g. .gitignore). However, by being outside of .zig-cache, it provides the possibility of distributing self-contained source tarballs, which contain all dependencies and therefore can be used to build offline, or for archival purposes.
Meanwhile, an additional copy of the dependency is cached globally. After filtering out all the unused files based on the paths filter, the contents are recompressed:
$ du -sh ~/.cache/zig/p/*
2.4M freetype-2.14.1-alzUkTyBqgBwke4Jsot997WYSpl207Ij9oO-2QOvGrOi.tar.gz
4.0K opus-0.0.2-vuF-cMAkAADVsm707MYCtPmqmRs0gzg84Sz0qGbb5E3w.tar.gz
636K pulseaudio-16.1.1-9-mk_62MZkNwBaFwiZ7ZVrYRIf_3dTqqJR5PbMRCJzSuLw.tar.gz
880K uucode-0.1.0-ZZjBPvtWUACf5dqD_f9I37VGFsN24436CuceC5pTJ25n.tar.gz
120K vaxis-0.5.1-BWNV_BFECQBbXeTeFd48uTJRjD5a-KD6kPuKanzzVB01.tar.gz
The motivation for this change is to make it easier to tinker. Go ahead and edit those files, see what happens. Swap out your package directory with a git clone. Grep your dependencies all together. Configure your IDE to auto-complete based on the zig-pkg directory. Run baobab on your dependency tree.
Furthermore, having the global cache store compressed files instead makes it easier to share that cached data between computers. In the future, it is planned to support peer-to-peer torrenting of dependency trees. By recompressing packages into a canonical form, this will allow peers to share Zig packages with minimal bandwidth. I love this idea because it simultaneously provides resilience to network outages, as well as a popularity contest. Find out which open source packages are popular based on number of seeders!
The second change here is the addition of the --fork flag to zig build. In retrospect, it seems so obvious, I don’t know why I didn’t think of it since the beginning. It looks like this:
zig build --fork=[path]
This is a project override option. Given a path to a source checkout of a project, all packages matching that project across the entire dependency tree will be overridden.
Thanks to the fact that package content hashes include name and fingerprint, this resolves before the package is potentially fetched.
This is an easy way to temporarily use one or more forks which are in entirely separate directories. You can iterate on your entire dependency tree until everything is working, while comfortably using the development environment and source control of the dependency projects.
The fact that it is a CLI flag makes it appropriately ephemeral. The moment you drop the flag, you’re back to using your pristine, fetched dependency tree.
If the project does not match, an error occurs, preventing confusion:
$ zig build --fork=/home/andy/dev/mime
error: fork /home/andy/dev/mime matched no mime packages
If the project does match, you get a reminder that you are using a fork, preventing confusion:

$ zig build --fork=/home/andy/dev/dvui
info: fork /home/andy/dev/dvui matched 1 (dvui) packages
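As a rough sketch of what the matching is based on (my own illustration, with placeholder values): the fork's own build.zig.zon declares a package name and fingerprint, and because those same values are included in the content hashes dependents record, --fork can decide the match before fetching anything:

// Hypothetical build.zig.zon at /home/andy/dev/dvui; the version and
// fingerprint values here are placeholders, not dvui's real ones.
.{
    .name = .dvui,
    .version = "0.5.1",
    .fingerprint = 0xc0ffee1234567890, // matched against hashes in the dependency tree
    .paths = .{""},
}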
This functionality is intended to enhance the workflow of dealing with ecosystem breakage. I already tried it a bit and found it to be quite pleasant to work with. The new workflow goes like this:

1. Fail to build from source due to ecosystem breakage.
2. Tinker with --fork until your project works again. During this time you can use the actual upstream source control, test suite, zig build test --watch -fincremental, etc.
3. Now you have a new option: be selfish and just keep working on your own stuff, or proceed to submit your patches upstream.

…and you can probably skip the step where you switch your build.zig.zon to your fork, unless you expect upstream to take a long time to merge your fixes.
The Windows operating system provides a large ABI surface area for doing things in the kernel. However, not all ABIs are created equal. As Casey Muratori points out in his lecture, The Only Unbreakable Law, the organizational structure of software development teams has a direct impact on the structure of the software they produce. The DLLs on Windows are organized into a hierarchy, with some of the APIs being high-level wrappers around lower-level ones. For example, whenever you call functions of kernel32.dll, ultimately, the actual work is done by ntdll.dll. You can observe this directly by using ProcMon.exe and examining stack traces.

What we've learned empirically is that the ntdll APIs are generally well-engineered, reasonable, and powerful, but the kernel32 wrappers introduce unnecessary heap allocations, additional failure modes, unintentional CPU usage, and bloat. This is why the Zig standard library policy is to Prefer the Native API over Win32. We're not quite there yet - we have plenty of calls into kernel32 remaining - but we've taken great strides recently. I'll give you two examples.

According to the official documentation, Windows does not have a straightforward way to get random bytes. Many projects, including Chromium, boringssl, Firefox, and Rust, call SystemFunction036 from advapi32.dll because it worked on versions older than Windows 8. Unfortunately, starting with Windows 8, the first time you call this function, it dynamically loads bcryptprimitives.dll and calls ProcessPrng. If loading the DLL fails (for example due to an overloaded system, which we have observed on Zig CI several times), it returns error 38 (from a function that has a void return type and is documented to never fail). The first thing ProcessPrng does is heap allocate a small, constant number of bytes. If this fails, it returns NO_MEMORY in a BOOL (documented behavior is to never fail and always return TRUE). bcryptprimitives.dll apparently also runs a test suite every time you load it.

All that ProcessPrng is really doing is NtOpenFile on "\\Device\\CNG", reading 48 bytes with NtDeviceIoControlFile to get a seed, and then initializing a per-CPU AES-based CSPRNG. So the dependency on bcryptprimitives.dll and advapi32.dll can both be avoided, and the nondeterministic failure and latencies on first RNG read can also be avoided.

ReadFile looks like this:

pub extern "kernel32" fn ReadFile(
hFile: HANDLE,
lpBuffer: LPVOID,
nNumberOfBytesToRead: DWORD,
lpNumberOfBytesRead: ?*DWORD,
lpOverlapped: ?*OVERLAPPED,
) callconv(.winapi) BOOL;
NtReadFile looks like this:

pub extern "ntdll" fn NtReadFile(
FileHandle: HANDLE,
Event: ?HANDLE,
ApcRoutine: ?*const IO_APC_ROUTINE,
ApcContext: ?*anyopaque,
...
Read the original on ziglang.org »
...
Read the original on fortune.com »
Platforms bend over backward to help DHS censor ICE critics, advocates say
Pam Bondi and Kristi Noem sued for coercing platforms into censoring ICE posts.
Pressure is mounting on tech companies to shield users from unlawful government requests that advocates say are making it harder to reliably share information about Immigration and Customs Enforcement (ICE) online.
Alleging that ICE officers are being doxed or otherwise endangered, Trump officials have spent the last year targeting an unknown number of users and platforms with demands to censor content. Early lawsuits show that platforms have caved, even though experts say they could refuse these demands without a court order.
In a lawsuit filed on Wednesday, the Foundation for Individual Rights and Expression (FIRE) accused Attorney General Pam Bondi and Department of Homeland Security Secretary Kristi Noem of coercing tech companies into removing a wide range of content “to control what the public can see, hear, or say about ICE operations.”
It’s the second lawsuit alleging that Bondi and DHS officials are using regulatory power to pressure private platforms to suppress speech protected by the First Amendment. It follows a complaint from the developer of an app called ICEBlock, which Apple removed from the App Store in October. Officials aren’t rushing to resolve that case—last month, they requested more time to respond—so it may remain unclear until March what defense they plan to offer for the takedown demands.
That leaves community members who monitor ICE in a precarious situation, as critical resources could disappear at the department’s request with no warning.
FIRE says people have legitimate reasons to share information about ICE. Some communities focus on helping people avoid dangerous ICE activity, while others aim to hold the government accountable and raise public awareness of how ICE operates. Unless there’s proof of incitement to violence or a true threat, such expression is protected.
Despite the high bar for censoring online speech, lawsuits trace an escalating pattern of DHS targeting websites, app stores, and platforms, many of which have been willing to remove content the government dislikes.
Officials have ordered ICE-monitoring apps to be removed from app stores and even threatened to sanction CNN for simply reporting on the existence of one such app. Officials have also demanded that Meta delete at least one Chicago-based Facebook group with 100,000 members and made multiple unsuccessful attempts to unmask anonymous users behind other Facebook groups. Even encrypted apps like Signal don’t feel safe from officials’ seeming overreach. FBI Director Kash Patel recently said he has opened an investigation into Signal chats used by Minnesota residents to track ICE activity, NBC News reported.
As DHS censorship threats increase, platforms have done little to shield users, advocates say. Not only have they sometimes failed to reject unlawful orders that simply provided "a bare mention of 'officer safety/doxing'" as justification, but in one case, Google complied with a subpoena that left a critical section blank, the Electronic Frontier Foundation (EFF) reported.
For users, it’s increasingly difficult to trust that platforms won’t betray their own policies when faced with government intimidation, advocates say. Sometimes platforms notify users before complying with government requests, giving users a chance to challenge potentially unconstitutional demands. But in other cases, users learn about the requests only as platforms comply with them—even when those platforms have promised that would never happen.
Government emails with platforms may be exposed
Platforms could face backlash from users if lawsuits expose their communications with the government, a possibility in the coming months. Last fall, the EFF sued after DOJ, DHS, ICE, and Customs and Border Protection failed to respond to Freedom of Information Act requests seeking emails between the government and platforms about takedown demands. Other lawsuits may surface emails in discovery. In the coming weeks, a judge will set a schedule for EFF’s litigation.
“The nature and content of the Defendants’ communications with these technology companies” is “critical for determining whether they crossed the line from governmental cajoling to unconstitutional coercion,” EFF’s complaint said.
EFF Senior Staff Attorney Mario Trujillo told Ars that the EFF is confident it can win the fight to expose government demands, but like most FOIA lawsuits, the case is expected to move slowly. That’s unfortunate, he said, because ICE activity is escalating, and delays in addressing these concerns could irreparably harm speech at a pivotal moment.
Like users, platforms are seemingly victims, too, FIRE senior attorney Colin McDonnell told Ars.
They’ve been forced to override their own editorial judgment while navigating implicit threats from the government, he said.
“If Attorney General Bondi demands that they remove speech, the platform is going to feel like they have to comply; they don’t have a choice,” McDonnell said.
But platforms do have a choice and could be doing more to protect users, the EFF has said. Platforms could even serve as a first line of defense, requiring officials to get a court order before complying with any requests.
Platforms may now have good reason to push back against government requests—and to give users the tools to do the same. Trujillo noted that while courts have been slow to address the ICEBlock removal and FOIA lawsuits, the government has quickly withdrawn requests to unmask Facebook users soon after litigation began.
“That’s like an acknowledgement that the Trump administration, when actually challenged in court, wasn’t even willing to defend itself,” Trujillo said.
Platforms could view that as evidence that government pressure only works when platforms fail to put up a bare-minimum fight, Trujillo said.
An open letter from the EFF and the American Civil Liberties Union (ACLU) documented two instances of tech companies complying with government demands without first notifying users.
The letter called out Meta for unmasking at least one user without prior notice, which the groups noted "potentially" occurred due to a "technical glitch."
More troubling than buggy notifications, however, is the possibility that platforms may be routinely delaying notice until it’s too late.
After Google “received an ICE subpoena for user data and fulfilled it on the same day that it notified the user,” the company admitted that “sometimes when Google misses its response deadline, it complies with the subpoena and provides notice to a user at the same time to minimize the delay for an overdue production,” the letter said.
“This is a worrying admission that violates [Google’s] clear promise to users, especially because there is no legal consequence to missing the government’s response deadline,” the letter said.
Platforms face no sanctions for refusing to comply with government demands that have not been court-ordered, the letter noted. That’s why the EFF and ACLU have urged companies to use their “immense resources” to shield users who may not be able to drop everything and fight unconstitutional data requests.
In their letter, the groups asked companies to insist on court intervention before complying with a DHS subpoena. They should also resist DHS “gag orders” that ask platforms to hand over data without notifying users.
Instead, they should commit to giving users “as much notice as possible when they are the target of a subpoena,” as well as a copy of the subpoena. Ideally, platforms would also link users to legal aid resources and take up legal fights on behalf of vulnerable users, advocates suggested.
That’s not what’s happening so far. Trujillo told Ars that it feels like “companies have bent over backward to appease the Trump administration.”
The tide could turn this year if courts side with app makers behind crowdsourcing apps like ICEBlock and Eyes Up, who are suing to end the alleged government coercion. FIRE’s McDonnell, who represents the creator of Eyes Up, told Ars that platforms may feel more comfortable exercising their own editorial judgment moving forward if a court declares they were coerced into removing content.
DHS can’t use doxing to dodge First Amendment
FIRE’s lawsuit accuses Bondi and Noem of coercing Meta to disable a Facebook group with 100,000 members called “ICE Sightings–Chicagoland.”
The popularity of that group surged during “Operation Midway Blitz,” when hundreds of agents arrested more than 4,500 people over weeks of raids that used tear gas in neighborhoods and caused car crashes and other violence. Arrests included US citizens and immigrants of lawful status, which “gave Chicagoans reason to fear being injured or arrested due to their proximity to ICE raids, no matter their immigration status,” FIRE’s complaint said.
Kassandra Rosado, a lifelong Chicagoan and US citizen of Mexican descent, started the Facebook group and served as admin, moderating content with other volunteers. She prohibited “hate speech or bullying” and “instructed group members not to post anything threatening, hateful, or that promoted violence or illegal conduct.”
Facebook only ever flagged five posts that supposedly violated community guidelines, but in warnings, the company reassured Rosado that “groups aren’t penalized when members or visitors break the rules without admin approval.”
Rosado had no reason to suspect that her group was in danger of removal. When Facebook disabled her group, it told Rosado the group violated community standards “multiple times.” But her complaint noted that, confusingly, “Facebook policies don’t provide for disabling groups if a few members post ostensibly prohibited content; they call for removing groups when the group moderator repeatedly either creates prohibited content or affirmatively ‘approves’ such content.”
Facebook’s decision came after a right-wing influencer, Laura Loomer, tagged Noem and Bondi in a social media post alleging that the group was “getting people killed.” Within two days, Bondi bragged that she had gotten the group disabled while claiming that it “was being used to dox and target [ICE] agents in Chicago.”
McDonnell told Ars it seems clear that Bondi selectively uses the term “doxing” when people post images from ICE arrests. He pointed to “ICE’s own social media accounts,” which share favorable opinions of ICE alongside videos and photos of ICE arrests that Bondi doesn’t consider doxing.
“Rosado’s creation of Facebook groups to send and receive information about where and how ICE carries out its duties in public, to share photographs and videos of ICE carrying out its duties in public, and to exchange opinions about and criticism of ICE’s tactics in carrying out its duties, is speech protected by the First Amendment,” FIRE argued.
The same goes for speech managed by Mark Hodges, a US citizen who resides in Indiana. He created an app called Eyes Up to serve as an archive of ICE videos. Apple removed Eyes Up from the App Store around the same time that it removed ICEBlock.
“It is just videos of what government employees did in public carrying out their duties,” McDonnell said. “It’s nothing even close to threatening or doxing or any of these other theories that the government has used to justify suppressing speech.”
Bondi bragged that she had gotten ICEBlock banned, and FIRE’s complaint confirmed that Hodges’ company received the same notification that ICEBlock’s developer got after Bondi’s victory lap. The notice said that Apple received “information” from “law enforcement” claiming that the apps had violated Apple guidelines against “defamatory, discriminatory, or mean-spirited content.”
Apple did not reach the same conclusion when it independently reviewed Eyes Up prior to government meddling, FIRE’s complaint said. Notably, the app remains available on Google Play, and Rosado now manages a new Facebook group with similar content but somewhat tighter restrictions on who can join. Neither has prompted urgent intervention from the tech giants or the government.
McDonnell told Ars that it’s harmful for DHS to water down the meaning of doxing when pushing platforms to remove content critical of ICE.
“When most of us hear the word ‘doxing,’ we think of something that’s threatening, posting private information along with home addresses or places of work,” McDonnell said. “And it seems like the government is expanding that definition to encompass just sharing, even if there’s no threats, nothing violent. Just sharing information about what our government is doing.”
Expanding the definition and then using that term to justify suppressing speech is concerning, he said, especially since the First Amendment includes no exception for “doxing,” even if DHS ever were to provide evidence of it.
To suppress speech, officials must show that groups are inciting violence or making true threats. FIRE has alleged that the government has not met “the extraordinary justifications required for a prior restraint” on speech and is instead using vague doxing threats to discriminate against speech based on viewpoint. FIRE is seeking a permanent injunction barring officials from coercing tech companies into censoring ICE posts.
If plaintiffs win, the censorship threats could subside, and tech companies may feel safe reinstating apps and Facebook groups, advocates told Ars. That could potentially revive archives documenting thousands of ICE incidents and reconnect webs of ICE watchers who lost access to valued feeds.
Until courts possibly end threats of censorship, the most cautious community members are moving local ICE-watch efforts to group chats and listservs that are harder for the government to disrupt, Trujillo told Ars.
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
...
Read the original on arstechnica.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.