10 interesting stories served every morning and every evening.




1 689 shares, 28 trendiness

The Story of Wall Street Raider

Prologue: The Dragon That Couldn’t Be Slain

A Denver com­pany that de­vel­oped le­gal soft­ware tried. They failed.

A game stu­dio that made soft­ware for Disney tried. They spent over a year and hun­dreds of thou­sands of dol­lars. They had a team of pro­gram­mers in Armenia, over­seen by an American PhD in math­e­mat­ics. They failed.

Commodore Computers tried. After three months of star­ing at the source code, they gave up and mailed it back.

Steam looked at it once, during the Greenlight era. “Too niche,” they said. “Nobody will buy it. And if they do, they’ll return it.”

For nearly four decades, Wall Street Raider existed as a kind of impossible object—a game so complex that its own creator barely understood parts of it, written in a programming language so primitive that professional developers couldn’t decode it. The code was, in the creator’s own words, “indecipherable to anyone but me.”

And then, in January 2024, a 29-year-old soft­ware de­vel­oper from Ohio named Ben Ward sent an email.

Michael Jenkins, 80 years old and understandably cautious after decades of failed attempts by others, was honest with him: “I appreciate the enthusiasm, but I’ve been through this before. Others have tried—talented people with big budgets—and none of them could crack it. I’ll send you the source code, but I want to be upfront: I’m not expecting miracles.”

A year later, that same 29-year-old would announce on Reddit: “I am the chosen one, and the game is being remade. No ifs, ands, or buts about it.”

He was joking about the “chosen one” part. Sort of.

This is the story of how Wall Street Raider—the most comprehensive financial simulator ever made—was born, nearly died, and was resurrected. It’s a story about obsession, about code that takes on a life of its own, about a game that accidentally changed the careers of hundreds of people. And it’s about the 50-year age gap between two developers who, despite having met only over video call rather than in person, would trust each other with four decades of work.

Michael Jenkins was sup­posed to be study­ing. Instead, he was fill­ing note­books with ideas for a board game.

Not just any board game. Jenkins wanted to build some­thing like Monopoly, but in­stead of ho­tels and rail­roads, you’d buy and sell cor­po­ra­tions. You’d is­sue stock. You’d ex­e­cute merg­ers. You’d struc­ture lever­aged buy­outs. The game would sim­u­late the en­tire ma­chin­ery of American cap­i­tal­ism, from hos­tile takeovers to tax ac­count­ing.

There was just one prob­lem: it was im­pos­si­ble.

“Nobody’s going to have the patience to do this,” Jenkins realized as he stared at his prototype—a board covered in tiny paper stock certificates, a calculator for working out the math, sessions that stretched for hours without reaching any satisfying conclusion.

The game he wanted to make re­quired some­thing that did­n’t ex­ist yet: a per­sonal com­puter.

So Jenkins waited. He grad­u­ated from Harvard Law in 1969. He worked as an econ­o­mist at a na­tional con­sult­ing firm. He be­came a CPA at one of the world’s largest ac­count­ing firms. He prac­ticed tax law at a pres­ti­gious San Francisco firm, struc­tur­ing bil­lion-dol­lar merg­ers—the ex­act kind of trans­ac­tions he dreamed of sim­u­lat­ing in his game.

And all the while, he kept fill­ing note­books.

Jenkins be­gan sketch­ing ideas for a cor­po­rate board game while at Harvard Law School

Sixteen years af­ter those first note­books, Jenkins fi­nally got his hands on what he’d been wait­ing for: a Kaypro per­sonal com­puter.

It had a screen about five inches across. It ran CP/M, an op­er­at­ing sys­tem that would soon be for­got­ten when MS-DOS ar­rived. It was prim­i­tive be­yond be­lief by mod­ern stan­dards.

The Kaypro portable com­puter, with its iconic built-in screen

Jenkins pulled out a slim booklet that came with the machine—a guide to Microsoft BASIC written by Bill Gates himself. He’d never written a line of code in his life. He had no formal training in computers. But that night, he sat down and typed his first program.

The com­puter said hello.

“As soon as I did that,” Jenkins later recalled, “I realized: oh, this isn’t that complicated.”

What happened next became the stuff of legend in the small community of Wall Street Raider devotees. Jenkins stayed up until five in the morning, writing “all kinds of crazy stuff”—fake conversations that would prank his friends when they sat down at his computer, little programs that seemed to know things about his visitors.

It was a hoot. It was also the be­gin­ning of an ob­ses­sion that would con­sume the next four decades of his life.

Within a year, Jenkins had built some­thing he ac­tu­ally wanted: the first crude ver­sion of Wall Street Raider. It al­ready had a mov­ing stock ticker scrolling across that tiny five-inch screen. It al­ready had news head­lines stream­ing past. It was ugly and in­com­plete, but it was real.

Meanwhile, his law practice was suffering. “I was sitting in my office programming instead of drumming up business,” he admitted. Only his side business—a series of tax guides called “Starting and Operating a Business” that would eventually sell over a million copies across all fifty states—kept him financially afloat.

The most complex parts of Wall Street Raider weren’t written during normal working hours. They were written in the dead of night, in what Jenkins called “fits of rationality.”

Picture this: It’s three in the morn­ing. Jenkins is hunched over his com­puter, try­ing to work out how to code a merger. Not just any merger—a merger where every party has to be dealt with cor­rectly. Bondholders. Stockholders. Option hold­ers. Futures po­si­tions. Interest rate swap coun­ter­par­ties. Proper ra­tios for every facet. Tax im­pli­ca­tions for every trans­ac­tion.

“I felt like if I go to bed and I get up in the morning, I won’t remember how to do this. So I just stayed up until I wrote that code.”

The re­sult? Code that worked per­fectly—code that he tested for years and knew was cor­rect—but code that even he did­n’t fully un­der­stand any­more.

“When I look at that code today, I still don’t really quite understand it,” he admitted. “But I don’t want to mess with it.”

This be­came the pat­tern. Jenkins would ob­sess over a fea­ture un­til the logic crys­tal­lized in his mind, usu­ally some­time af­ter mid­night, and then race to get it coded be­fore the frag­ile un­der­stand­ing slipped away. The game grew layer by layer, each new fea­ture build­ing on the ones be­fore, each line of code a record of what Jenkins un­der­stood about cor­po­rate fi­nance at that par­tic­u­lar mo­ment in his life.

Years later, Ben Ward would give this phe­nom­e­non a name: The Jenkins Market Hypothesis.

“The hypothesis,” Ward wrote in an email to Jenkins, “is that asset prices in the game reflect competition between Michael Jenkins’ understanding of how Wall Street Raider worked at the point in time that he wrote the code over the past 40 years.”

In other words: the game’s sim­u­lated mar­ket was re­ally just forty dif­fer­ent ver­sions of Michael Jenkins, from forty dif­fer­ent stages of his life, all com­pet­ing with each other.

Jenkins loved the theory. “I think it’s very much related to chaos theory,” he replied.

In 1986, Michael Jenkins re­tired from law and CPA prac­tice at the age of 42. His tax guides were sell­ing well, and his pub­lisher had agreed to re­lease Wall Street Raider. He thought he might spend a few years pol­ish­ing his hobby pro­ject.

Thirty-four years later, he was still at it.

“I chuckle when I get emails from customers who ask me when the team is going to do one thing or the other. Well, the team is me. Ronin Software is definitely a one-horse operation and always has been.”

The game that started as a Monopoly vari­ant had be­come some­thing mon­strous and mag­nif­i­cent. By the time Jenkins was done, Wall Street Raider con­tained:

1,600 sim­u­lated com­pa­nies, each with its own fi­nan­cials

Antitrust regulations (toggleable for those who wanted “robber baron” mode)

Consolidated tax ac­count­ing based on ac­tual IRS reg­u­la­tions (Jenkins had been trained by the per­son who lit­er­ally wrote those reg­u­la­tions)

A karma sys­tem that tracked your eth­i­cal vi­o­la­tions and in­creased your chances of get­ting caught the more you cheated

A 271-page man­ual that was sold sep­a­rately be­cause it was sim­ply too dense to give away

Hidden be­neath all this ma­chin­ery was some­thing that did­n’t dawn on most play­ers un­til they’d been im­mersed in the game for months or years: an enor­mous amount of text. New events, sce­nar­ios, and mes­sages would con­tinue to pop up long af­ter a player thought they’d seen every­thing—of­ten laced with Jenkins’ trade­mark tongue-in-cheek grave­yard hu­mor. The game was­n’t just deep me­chan­i­cally; it was deep nar­ra­tively, in ways that only re­vealed them­selves over time.

The orig­i­nal Wall Street Raider: from DOS (left) to Windows (right) — dense, com­plex, and un­de­ni­ably dated

The game had, in short, be­come the most com­pre­hen­sive fi­nan­cial sim­u­la­tor ever cre­ated—so com­plex that most peo­ple bounced off it, but those who broke through be­came de­voted for life.

“The Dwarf Fortress of the stock market.”

Jenkins played chess against the world. He’d release a new feature, and within weeks, some clever player would email him: “Man, I found how to make trillions of dollars overnight with that new feature.”

“I felt at times like the IRS plugging loopholes,” Jenkins admitted. Every exploit became a patch. Every patch created new edge cases. The code grew more intricate, more layered, more incomprehensible to anyone but its creator.

And then some­thing strange started hap­pen­ing.

The emails started ar­riv­ing from around the world, and they weren’t about bugs.

One came from the Philippines:

“I’ve been playing your game since I was 13 years old, living in a third world country. Couldn’t even afford to buy the full version. So I played the two-year demo for years and years. And it taught me so much that now I’m working for Morgan Stanley as a forex trader in Shanghai.”

Another came from a hedge fund man­ager:

“I played Wall Street Raider for years and noticed that buying cheap companies—companies with low PE ratios—and turning them around seemed very profitable in the game. But I wasn’t doing that with my real clients. I wasn’t doing well. Finally I decided to just start doing what I’d been doing in Wall Street Raider.”

He at­tached a doc­u­ment: an au­dited re­port from Price Waterhouse, show­ing a 10-year pe­riod where he’d av­er­aged a 44% com­pounded an­nual re­turn us­ing strate­gies he’d learned from a video game.

“Your game has changed my life.”

Jenkins heard it again and again. From CEOs. From in­vest­ment bankers. From traders and pro­fes­sors and fi­nance stu­dents. People who’d played the free demo ver­sion as teenagers in de­vel­op­ing coun­tries and par­layed what they learned into ca­reers at Goldman Sachs and Morgan Stanley. People who’d been stone ma­sons won­der­ing if they could do some­thing more.

By his own count, over 200 CEOs and in­vest­ment bankers had reached out over the years to say that Wall Street Raider had shaped their ca­reers.

“I created the game because it was fun to do so,” Jenkins said. “But I’ve been pleasantly surprised to see the positive impact it has had on the lives of a lot of people who grew up playing it for years and years.”

Players from around the world credit the game with launch­ing their fi­nance ca­reers

It was, it turned out, not just a game. It was ac­ci­den­tally one of the most ef­fec­tive fi­nan­cial ed­u­ca­tion tools ever cre­ated—a sim­u­la­tor so re­al­is­tic that its lessons trans­ferred di­rectly to real mar­kets.

Everyone wanted to mod­ern­ize Wall Street Raider. Everyone failed.

The interest was obvious. Here was a game with proven educational value, devoted fans, and gameplay depth that put most competitors to shame. The only problem was the interface—a relic of the 1990s Windows era, all dropdown menus and tiny text boxes and graphics that looked, as one longtime player put it, “like they came from the dark ages.”

So they came, the would-be sav­iors, with their teams and their bud­gets and their am­bi­tions.

A Denver com­pany that de­vel­oped le­gal soft­ware sent their pro­gram­mers. They could­n’t make it work.

A game studio that did work for Disney assembled a team in Armenia, overseen by an American PhD in mathematics. They spent over a year and “lots of money”—by some accounts, hundreds of thousands of dollars—trying to port the game to iPad.

“None of their people had the kind of in-depth knowledge of corporate finance, economics, law, and taxation that I was able to build into the game,” Jenkins explained. “So they simply couldn’t code the simulation correctly when they didn’t have a clue how it should work.”

Commodore Computers, back in 1990, li­censed the DOS ver­sion. After three months of try­ing to un­der­stand the source code, they mailed it back.

Steam, during the Greenlight era, rejected it outright. “Too niche,” they said. “Almost no graphics. Looks clumsy and primitive.”

The pattern was always the same. Professional programmers would look at Jenkins’ 115,000 lines of primitive BASIC—code that “broke all the rules for good structured programming”—and try to rewrite it in something modern. C++, usually. They’d make progress for a while, get 60% or 80% of the way there, and then hit a wall.

The prob­lem was­n’t tech­ni­cal skill. The prob­lem was that to rewrite the code, you had to un­der­stand the code. And to un­der­stand the code, you had to un­der­stand cor­po­rate fi­nance, tax law, eco­nom­ics, and se­cu­ri­ties reg­u­la­tion at the same depth as some­one who’d spent decades as a CPA, tax at­tor­ney, and econ­o­mist.

Those peo­ple did­n’t tend to be­come video game pro­gram­mers.

“My 115,000 lines of primitive BASIC source code,” Jenkins admitted, “was apparently indecipherable to anyone but me.”

The skele­tons piled up around the dragon.

“Imagine a dragon that needs slaying, with skeletons of fallen knights all around it”

The End of the Road

By his late sev­en­ties, Michael Jenkins was run­ning out of op­tions.

His e-com­merce provider had gone bank­rupt, tak­ing six months of in­come with them. Payment proces­sors kept re­ject­ing him—some be­cause of ob­scure tax com­pli­ca­tions from sell­ing soft­ware in hun­dreds of coun­tries, oth­ers be­cause their le­gal de­part­ments did­n’t want to be as­so­ci­ated with any­thing fi­nance-re­lated. For a pe­riod, you lit­er­ally could­n’t buy Wall Street Raider any­where.

“At one point the challenges got so overwhelming,” Jenkins admitted, “that I seriously considered just shutting down everything.”

In 2020, a gam­ing jour­nal­ist named AJ Churchill sent Jenkins a sim­ple email ask­ing whether up­grades to Speculator (a com­pan­ion game) were in­cluded in the pur­chase price.

Jenkins’ re­sponse was… more than Churchill ex­pected:

“Also, as a registered user, you can buy Wall Street Raider at the discounted price of $12.95. As I make revisions over a period of a year or two, I eventually decide when I’ve done enough that it’s time to issue an upgrade version, but there is no timetable. And to be frank, I’m running out of feasible ideas for improvements to both games, and there may only be one or two more upgrades to either program.

“Otherwise, at age 76, I may be finally coming near the end of development with my limited software skills, unless I can license my code to a large software/game company that is willing to hire the kind of expensive programming talent that writes software for firms like Merrill Lynch or Goldman Sachs—who would be the only programmers capable of porting my game to iOS, Android, or to a classy-looking Windows GUI. And that is very unlikely.

“I’ve pretty much given up on the idea of anyone ever being able to port it.”

Churchill posted the exchange to the r/tycoon subreddit with the title: “I reached out to the 79-year-old creator of Wall Street Raider and here’s what he wrote back.”

The post got some at­ten­tion. People com­mented about what a shame it was. A few bought the game out of cu­rios­ity. And then, like most Reddit posts, it faded into ob­scu­rity.

Hear the story from Michael Jenkins and Ben Ward them­selves.

But some­where in Ohio, a young soft­ware de­vel­oper read it. And he could­n’t get the im­age of a Bloomberg ter­mi­nal out of his head.

Ben Ward’s first mem­ory of pro­gram­ming was go­ing to the li­brary as a small child and check­ing out a mas­sive text­book on C++ game de­vel­op­ment.

“I barely probably knew how to read at that point,” he recalled. “I had no idea how to install a compiler, run the code that was in this book. But it just kind of got me thinking.”

Ward was, by his own ad­mis­sion, a ter­ri­ble stu­dent. He had ADHD that went un­di­ag­nosed un­til adult­hood. He spent more time help­ing his class­mates with their home­work than do­ing his own. His two-year pro­gram­ming de­gree took five years to com­plete.

Ben Ward, the de­vel­oper who would fi­nally crack the code

But when it came to code, some­thing clicked.

At 18, work­ing at his un­cle’s man­u­fac­tur­ing com­pany, Ward built a man­age­ment sys­tem in three months that re­placed their spread­sheets. It ran the busi­ness for five years. He went on to build ERP and ware­house man­age­ment sys­tems, worked at fin­tech com­pa­nies, and be­came a se­nior full-stack de­vel­oper.

...

Read the original on www.wallstreetraider.com »

2 639 shares, 71 trendiness

uBlock Origin filter list to hide YouTube Shorts

A main­tained uBlock Origin fil­ter list to hide all traces of YouTube shorts videos.

Copy the link below, go to uBlock Origin > Dashboard > Filter lists, scroll to the bottom, and paste the link underneath the ‘Import…’ heading:

https://raw.githubusercontent.com/i5heu/ublock-hide-yt-shorts/master/list.txt
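For context, entries in such a list use uBlock Origin’s static filter syntax—mostly cosmetic (`##`) rules that hide Shorts-related page elements on a given domain. The lines below are illustrative examples of that syntax, not the actual contents of the maintained list:

```
! Illustrative examples only — not the maintained list itself
! Hide the Shorts shelf on the YouTube home and subscriptions feeds
www.youtube.com##ytd-reel-shelf-renderer
! Hide the "Shorts" entry in the sidebar navigation
www.youtube.com##a[title="Shorts"]
! Hide Shorts shelves in the mobile layout
m.youtube.com##ytm-reel-shelf-renderer
```

A line beginning with `!` is a comment; everything before `##` limits the rule to a domain, and everything after it is a CSS selector for the element to hide.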

> uBlock Origin sub­scribe link < (does not work on GitHub)

After the initial creator of this list, @gijsdev, vanished for half a year, I (i5heu) took it upon myself to maintain this list.

This pro­ject is an in­de­pen­dent, open-source ini­tia­tive and is not af­fil­i­ated with, en­dorsed by, spon­sored by, or as­so­ci­ated with Alphabet Inc., Google LLC, or YouTube.

...

Read the original on github.com »

3 594 shares, 24 trendiness

Babylon 5 Is Now Free to Watch On YouTube



In a move that has delighted fans of classic science fiction, Warner Bros. Discovery has begun uploading full episodes of the iconic series Babylon 5, providing free access to the show just as it departs from the ad-supported streaming platform Tubi. The transition comes at a pivotal time for the series, which has maintained a dedicated following since its original run in the 1990s. Viewers noticed notifications on Tubi indicating that all five seasons would no longer be available after February 10, 2026, effectively removing one of the most accessible free streaming options for the space opera. With this shift, Warner Bros. Discovery appears to be steering the property toward its own digital ecosystem, leveraging YouTube’s vast audience to reintroduce the show to both longtime enthusiasts and a new generation.

The uploads started with the pilot episode, “The Gathering,” which serves as the entry point to the series’ intricate universe. This was followed by subsequent episodes such as “Midnight on the Firing Line” and “Soul Hunter,” released in sequence to build narrative momentum. The strategy involves posting one episode each week, allowing audiences to experience the story at a paced rhythm that mirrors the original broadcast schedule. This approach not only encourages weekly viewership but also fosters online discussions and communal watching events, much like the fan communities that formed during the show’s initial airing. The episodes are hosted on a channel affiliated with Warner Bros., complete with links to purchase the full series, blending free access with opportunities for deeper engagement through official merchandise and digital ownership.

Babylon 5’s move to YouTube rep­re­sents a broader trend in con­tent dis­tri­b­u­tion, where legacy ti­tles are re­vi­tal­ized through free plat­forms to com­pete in a crowded stream­ing land­scape. Tubi’s re­moval of the se­ries aligns with pe­ri­odic li­brary ro­ta­tions com­mon among ad-based ser­vices, where li­cens­ing agree­ments ex­pire and con­tent mi­grates to new homes. For Warner Bros. Discovery, this ini­tia­tive could sig­nal plans to ex­pand the fran­chise’s vis­i­bil­ity, es­pe­cially amid on­go­ing in­ter­est in re­boots and spin-offs that have been ru­mored in re­cent years. The com­pa­ny’s own­er­ship of the prop­erty, ac­quired through merg­ers, po­si­tions it to ex­per­i­ment with hy­brid mod­els that com­bine free episodic re­leases with pre­mium of­fer­ings on plat­forms like Max.

To un­der­stand the sig­nif­i­cance of this de­vel­op­ment, a brief look at Babylon 5’s his­tory pro­vides con­text. Created by J. Michael Straczynski, the se­ries pre­miered in 1993 as a ground­break­ing space opera that un­folded over five sea­sons, con­clud­ing in 1998. Set in the mid-23rd cen­tury, it cen­ters on a mas­sive space sta­tion called Babylon 5, which serves as a diplo­matic hub for var­i­ous alien species and hu­mans in the af­ter­math of dev­as­tat­ing in­ter­stel­lar wars. The sta­tion, or­bit­ing a neu­tral planet, be­comes a fo­cal point for po­lit­i­cal in­trigue, al­liances, and con­flicts among fed­er­a­tions rep­re­sent­ing Earth, the Minbari, Centauri, Narn, and oth­ers.

Unlike many con­tem­po­ra­ne­ous shows, Babylon 5 was con­ceived as a sin­gle, co­he­sive nar­ra­tive arc re­sem­bling a novel for tele­vi­sion, with each sea­son cor­re­spond­ing to a year from 2258 to 2262. This struc­ture al­lowed for long-term char­ac­ter de­vel­op­ment and es­ca­lat­ing plot­lines, in­clud­ing themes of war, peace, prophecy, and an­cient cos­mic forces. The se­ries was in­no­v­a­tive for its use of com­puter-gen­er­ated im­agery in vi­sual ef­fects, a rar­ity at the time, and its de­par­ture from episodic for­mats in fa­vor of se­ri­al­ized sto­ry­telling that in­flu­enced later hits like Battlestar Galactica and The Expanse. Production faced chal­lenges, in­clud­ing net­work changes from PTEN to TNT for the fi­nal sea­son, but it amassed crit­i­cal ac­claim for its depth and world-build­ing.

The show’s legacy ex­tends be­yond tele­vi­sion, spawn­ing nov­els, comics, and tele­films that ex­panded its lore. Its hu­man-cen­tric yet mul­ti­cul­tural per­spec­tive, por­tray­ing Earth as a mid-tier power in a vast galaxy, res­onated with au­di­ences seek­ing com­plex sci-fi be­yond utopian vi­sions. A decade be­fore the main events, hu­man­ity nar­rowly es­caped an­ni­hi­la­tion in the Earth-Minbari War, an event that un­der­scored the fragility of in­ter­stel­lar re­la­tions and set the stage for the sta­tion’s role as a bea­con of hope.

As episodes con­tinue to roll out on YouTube, fans are al­ready spec­u­lat­ing on whether this sig­nals a full re­vival or sim­ply a nos­tal­gic re-re­lease. For now, the weekly drops of­fer a timely way to re­visit or dis­cover a cor­ner­stone of genre tele­vi­sion, en­sur­ing Babylon 5’s en­dur­ing place in sci-fi his­tory. With over 110 episodes across its run, the grad­ual roll­out promises months of con­tent, keep­ing the sta­tion’s lights on for view­ers world­wide.


...

Read the original on cordcuttersnews.com »

4 536 shares, 31 trendiness

Taggart (@mttaggart@infosec.exchange)


...

Read the original on infosec.exchange »

5 423 shares, 46 trendiness

News publishers limit Internet Archive access due to AI scraping concerns

As part of its mis­sion to pre­serve the web, the Internet Archive op­er­ates crawlers that cap­ture web­page snap­shots. Many of these snap­shots are ac­ces­si­ble through its pub­lic-fac­ing tool, the Wayback Machine. But as AI bots scav­enge the web for train­ing data to feed their mod­els, the Internet Archive’s com­mit­ment to free in­for­ma­tion ac­cess has turned its dig­i­tal li­brary into a po­ten­tial li­a­bil­ity for some news pub­lish­ers.

When The Guardian took a look at who was try­ing to ex­tract its con­tent, ac­cess logs re­vealed that the Internet Archive was a fre­quent crawler, said Robert Hahn, head of busi­ness af­fairs and li­cens­ing. The pub­lisher de­cided to limit the Internet Archive’s ac­cess to pub­lished ar­ti­cles, min­i­miz­ing the chance that AI com­pa­nies might scrape its con­tent via the non­prof­it’s repos­i­tory of over one tril­lion web­page snap­shots.

The Wayback Machine’s snapshots of news homepages plummet after a “breakdown” in archiving projects

Specifically, Hahn said The Guardian has taken steps to ex­clude it­self from the Internet Archive’s APIs and fil­ter out its ar­ti­cle pages from the Wayback Machine’s URLs in­ter­face. The Guardian’s re­gional home­pages, topic pages, and other land­ing pages will con­tinue to ap­pear in the Wayback Machine.

In par­tic­u­lar, Hahn ex­pressed con­cern about the Internet Archive’s APIs.

“A lot of these AI businesses are looking for readily available, structured databases of content,” he said. “The Internet Archive’s API would have been an obvious place to plug their own machines into and suck out the IP.” (He admits the Wayback Machine itself is “less risky,” since the data is not as well-structured.)
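To make concrete why a structured API worries publishers, here is a brief sketch of how little work a bulk query takes against the Internet Archive’s publicly documented CDX search endpoint. The snippet only constructs the request URL rather than fetching anything, and the target domain is a placeholder:

```python
from urllib.parse import urlencode

# Sketch: the Wayback Machine's CDX search endpoint returns snapshot
# records as machine-readable rows — the kind of "readily available,
# structured database" described above. Building a bulk query is a
# single line of string construction. (example.com is a placeholder.)
base = "https://web.archive.org/cdx/search/cdx"
params = {
    "url": "example.com/article",  # page (or URL prefix) to look up
    "output": "json",              # rows as JSON arrays instead of text
    "limit": "5",                  # cap the number of snapshot records
    "filter": "statuscode:200",    # only successful captures
}
query_url = f"{base}?{urlencode(params)}"
print(query_url)
```

Each row in a real response identifies a capture timestamp and archived URL, which is exactly what makes the repository easy to plug machines into.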

As news pub­lish­ers try to safe­guard their con­tents from AI com­pa­nies, the Internet Archive is also get­ting caught in the crosshairs. The Financial Times, for ex­am­ple, blocks any bot that tries to scrape its pay­walled con­tent, in­clud­ing bots from OpenAI, Anthropic, Perplexity, and the Internet Archive. The ma­jor­ity of FT sto­ries are pay­walled, ac­cord­ing to di­rec­tor of global pub­lic pol­icy and plat­form strat­egy Matt Rogerson. As a re­sult, usu­ally only un­pay­walled FT sto­ries ap­pear in the Wayback Machine be­cause those are meant to be avail­able to the wider pub­lic any­way.

“Common Crawl and Internet Archive are widely considered to be the ‘good guys’ and are used by the ‘bad guys’ like OpenAI,” said Michael Nelson, a computer scientist and professor at Old Dominion University. “In everyone’s aversion to not be controlled by LLMs, I think the good guys are collateral damage.”

To pre­serve their work — and drafts of his­tory — jour­nal­ists take archiv­ing into their own hands

The Guardian has­n’t doc­u­mented spe­cific in­stances of its web­pages be­ing scraped by AI com­pa­nies via the Wayback Machine. Instead, it’s tak­ing these mea­sures proac­tively and is work­ing di­rectly with the Internet Archive to im­ple­ment the changes. Hahn says the or­ga­ni­za­tion has been re­cep­tive to The Guardian’s con­cerns.

The out­let stopped short of an all-out block on the Internet Archive’s crawlers, Hahn said, be­cause it sup­ports the non­prof­it’s mis­sion to de­moc­ra­tize in­for­ma­tion, though that po­si­tion re­mains un­der re­view as part of its rou­tine bot man­age­ment.

“[The decision] was much more about compliance and a backdoor threat to our content,” he said.

When asked about The Guardian’s decision, Internet Archive founder Brewster Kahle said that “if publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record.” It’s a prospect, he implied, that could undercut the organization’s work countering “information disorder.”

After 25 years, Brewster Kahle and the Internet Archive are still work­ing to de­moc­ra­tize knowl­edge

The Guardian is­n’t alone in reeval­u­at­ing its re­la­tion­ship to the Internet Archive. The New York Times con­firmed to Nieman Lab that it’s ac­tively hard block­ing” the Internet Archive’s crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its ro­bots.txt file, dis­al­low­ing ac­cess to its con­tent.

“We believe in the value of The New York Times’s human-led journalism and always want to ensure that our IP is being accessed and used lawfully,” said a Times spokesperson. “We are blocking the Internet Archive’s bot from accessing the Times because the Wayback Machine provides unfettered access to Times content—including by AI companies—without authorization.”

Last August, Reddit an­nounced that it would block the Internet Archive, whose dig­i­tal li­braries in­clude count­less archived Reddit fo­rums, com­ments sec­tions, and pro­files. This con­tent is not un­like what Reddit now li­censes to Google as AI train­ing data for tens of mil­lions of dol­lars.

“[The] Internet Archive provides a service to the open web, but we’ve been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine,” a Reddit spokesperson told The Verge at the time. “Until they’re able to defend their site and comply with platform policies…we’re limiting some of their access to Reddit data to protect redditors.”

Kahle has also alluded to steps the Internet Archive is taking to restrict bulk access to its libraries. In a Mastodon post last fall, he wrote that there are “many collections that are available to users but not for bulk downloading. We use internal rate-limiting systems, filtering mechanisms, and network security services such as Cloudflare.”

Currently, however, the Internet Archive does not disallow any specific crawlers through its robots.txt file, including those of major AI companies. As of January 12, the robots.txt file for archive.org read: “Welcome to the Archive! Please crawl our files. We appreciate it if you can crawl responsibly. Stay open!” Shortly after we inquired about this language, it was changed. The file now reads, simply, “Welcome to the Internet Archive!”

There is ev­i­dence that the Wayback Machine, gen­er­ally speak­ing, has been used to train LLMs in the past. An analy­sis of Google’s C4 dataset by the Washington Post in 2023 showed that the Internet Archive was among mil­lions of web­sites in the train­ing data used to build Google’s T5 model and Meta’s Llama mod­els. Out of the 15 mil­lion do­mains in the C4 dataset, the do­main for the Wayback Machine (web.archive.org) was ranked as the 187th most pre­sent.

In May 2023, the Internet Archive went offline temporarily after an AI company caused a server overload, Wayback Machine director Mark Graham told Nieman Lab this past fall. The company sent tens of thousands of requests per second from virtual hosts on Amazon Web Services to extract text data from the nonprofit's public domain archives. The Internet Archive blocked the hosts twice before putting out a public call to “respectfully” scrape its site.

“We got in contact with them. They ended up giving us a donation,” Graham said. “They ended up saying that they were sorry and they stopped doing it.”

“Those wanting to use our materials in bulk should start slowly, and ramp up,” wrote Kahle in a blog post shortly after the incident. “Also, if you are starting a large project please contact us…we are here to help.”

The Guardian’s moves to limit the Internet Archive’s ac­cess made us won­der whether other news pub­lish­ers were tak­ing sim­i­lar ac­tions. We looked at pub­lish­ers’ ro­bots.txt pages as a way to mea­sure po­ten­tial con­cern over the Internet Archive’s crawl­ing.

A web­site’s ro­bots.txt page tells bots which parts of the site they can crawl, act­ing like a doorman,” telling vis­i­tors who is and is­n’t al­lowed in the house and which parts are off lim­its. Robots.txt pages aren’t legally bind­ing, so the com­pa­nies run­ning crawl­ing bots aren’t ob­lig­ated to com­ply with them, but they in­di­cate where the Internet Archive is un­wel­come.

For ex­am­ple, in ad­di­tion to hard block­ing,” The New York Times and The Athletic in­clude the archive.org_bot in their ro­bots.txt file, though they do not cur­rently dis­al­low other bots op­er­ated by the Internet Archive.
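That doorman behavior is easy to check mechanically. Here's a minimal sketch using Python's standard-library robots.txt parser, with a made-up robots.txt modeled on the rules described above (real publisher files differ):

```python
from urllib import robotparser

# A toy robots.txt resembling the rules described in the article:
# shut out the Internet Archive's crawler, allow everyone else.
robots_txt = """\
User-agent: archive.org_bot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# The Internet Archive's crawler is turned away at the door...
print(parser.can_fetch("archive.org_bot", "https://example.com/2026/01/some-story"))  # False
# ...while an ordinary browser user agent is still welcome.
print(parser.can_fetch("Mozilla/5.0", "https://example.com/2026/01/some-story"))  # True
```

As the article notes, nothing forces a crawler to honor these rules; the parser only tells you what the site has asked for.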

To explore this issue, Nieman Lab used journalist Ben Welsh's database of 1,167 news websites as a starting point. As part of a larger side project to archive news sites' homepages, Welsh runs crawlers that regularly scrape the robots.txt files of the outlets in his database. In late December, we downloaded a spreadsheet from Welsh's site that displayed all the bots disallowed in the robots.txt files of those sites. We identified four bots that the AI user agent watchdog service Dark Visitors has associated with the Internet Archive. (The Internet Archive did not respond to requests to confirm its ownership of these bots.)

This data is not comprehensive, but exploratory. It does not represent global, industry-wide trends — 76% of sites in Welsh's publisher list are based in the U.S., for example — but instead begins to shed light on which publishers are less eager to have their content crawled by the Internet Archive.

In to­tal, 241 news sites from nine coun­tries ex­plic­itly dis­al­low at least one out of the four Internet Archive crawl­ing bots.

Most of those sites (87%) are owned by USA Today Co., the largest newspaper conglomerate in the United States, formerly known as Gannett. (Gannett sites make up only 18% of Welsh's original publishers list.) Each Gannett-owned outlet in our dataset disallows the same two bots: “archive.org_bot” and “ia_archiver-web.archive.org”. These bots were added to the robots.txt files of Gannett-owned publications in 2025.

Some Gannett sites have also taken stronger measures to guard their contents from Internet Archive crawlers. URL searches for the Des Moines Register in the Wayback Machine return a message that says, “Sorry. This URL has been excluded from the Wayback Machine.”

“USA Today Co. has consistently emphasized the importance of safeguarding our content and intellectual property,” a company spokesperson said via email. “Last year, we introduced new protocols to deter unauthorized data collection and scraping, redirecting such activity to a designated page outlining our licensing requirements.”

Gannett de­clined to com­ment fur­ther on its re­la­tion­ship with the Internet Archive. In an October 2025 earn­ings call, CEO Mike Reed spoke to the com­pa­ny’s anti-scrap­ing mea­sures.

“In September alone, we blocked 75 million AI bots across our local and USA Today platforms, the vast majority of which were seeking to scrape our local content,” Reed said on that call. “About 70 million of those came from OpenAI.” (Gannett signed a content licensing agreement with Perplexity in July 2025.)

About 93% (226 sites) of pub­lish­ers in our dataset dis­al­low two out of the four Internet Archive bots we iden­ti­fied. Three news sites in the sam­ple dis­al­low three Internet Archive crawlers: Le Huffington Post, Le Monde, and Le Monde in English, all of which are owned by Group Le Monde.


The news sites in our sample aren't only targeting the Internet Archive. Out of the 241 sites that disallow at least one of the four Internet Archive bots, 240 also disallow Common Crawl — another nonprofit internet preservation project that has been more closely linked to commercial LLM development. And 231 sites in the sample disallow bots operated by OpenAI, Google AI, and Common Crawl alike.
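The overlap counting behind figures like these is plain set arithmetic over the robots.txt data. A toy version, with hypothetical site names and bot lists rather than the actual dataset:

```python
# Hypothetical tally in the spirit of the Nieman Lab analysis: map each
# site to the bots its robots.txt disallows, then count the overlaps.
# Site names and bot sets below are invented for illustration.
IA_BOTS = {"archive.org_bot", "ia_archiver-web.archive.org", "ia_archiver", "wayback"}

disallowed = {
    "example-gannett.com": {"archive.org_bot", "ia_archiver-web.archive.org", "CCBot", "GPTBot"},
    "example-lemonde.fr": {"archive.org_bot", "ia_archiver", "wayback", "CCBot"},
    "example-open.org": set(),
}

blocks_ia = {site for site, bots in disallowed.items() if bots & IA_BOTS}
blocks_common_crawl = {site for site, bots in disallowed.items() if "CCBot" in bots}

print(len(blocks_ia))                        # 2
print(len(blocks_ia & blocks_common_crawl))  # 2
```

The real analysis works the same way, just over 1,167 sites and the four identified Internet Archive user agents.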

As we’ve pre­vi­ously re­ported, the Internet Archive has taken on the Herculean task of pre­serv­ing the in­ter­net, and many news or­ga­ni­za­tions aren’t equipped to save their own work. In December, Poynter an­nounced a joint ini­tia­tive with the Internet Archive to train lo­cal news out­lets on how to pre­serve their con­tent. Archiving ini­tia­tives like this, while ur­gently needed, are few and far be­tween. Since there is no fed­eral man­date that re­quires in­ter­net con­tent to be pre­served, the Internet Archive is the most ro­bust archiv­ing ini­tia­tive in the United States.

“The Internet Archive tends to be good citizens,” Hahn said. “It's the law of unintended consequences: You do something for really good purposes, and it gets abused.”

Photo of Internet Archive home­page by SDF_QWE used un­der an Adobe Stock li­cense.

...

Read the original on www.niemanlab.org »

6 353 shares, 30 trendiness

My smart sleep mask broadcasts users' brainwaves to an open MQTT broker

I re­cently got a smart sleep mask from Kickstarter. I was not ex­pect­ing to end up with the abil­ity to read strangers’ brain­waves and send them elec­tric im­pulses in their sleep. But here we are.

The mask was from a small Chinese research company, very cool hardware — EEG brain monitoring, electrical muscle stimulation around the eyes, vibration, heating, audio. The app was still rough around the edges though, and the mask kept disconnecting, so I asked Claude to try to reverse-engineer the Bluetooth protocol and build me a simple web control panel instead.

The first thing Claude did was scan for BLE (Bluetooth Low Energy) de­vices nearby. It found mine among 35 de­vices in range, con­nected, and mapped the in­ter­face — two data chan­nels. One for send­ing com­mands, one for stream­ing data.

Then it tried talk­ing to it. Sent maybe a hun­dred dif­fer­ent com­mand pat­terns. Modbus frames, JSON, raw bytes, com­mon head­ers. Unfortunately, the de­vice said noth­ing back, the pro­to­col was not a stan­dard one.

So Claude went af­ter the app in­stead. Grabbed the Android APK, de­com­piled it with jadx. Turns out the app is built with Flutter, which is a bit of a prob­lem for re­verse en­gi­neer­ing. Flutter com­piles Dart source code into na­tive ARM64 ma­chine code — you can’t just read it back like nor­mal Java Android apps. The ac­tual busi­ness logic lives in a 9MB bi­nary blob.

But even com­piled bi­na­ries have strings in them. Error mes­sages, URLs, de­bug logs. Claude ran strings on the bi­nary and this was the most pro­duc­tive step of the whole ses­sion. Among the thou­sands of lines of Flutter frame­work noise, it found:

* Hardcoded cre­den­tials for the com­pa­ny’s mes­sage bro­ker (shared by every copy of the app)

* All fif­teen com­mand builder func­tion names (e.g. to set vi­bra­tion, heat­ing, elec­tric stim­u­la­tion, etc.)
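The `strings` pass that surfaced those findings is simple to replicate: scan the binary for runs of printable ASCII. A rough stand-in for the real tool, run here on a fabricated blob (the embedded URL and credentials are invented for illustration):

```python
import re

def ascii_strings(data: bytes, min_len: int = 6) -> list[str]:
    # Mirror what `strings` does: extract runs of printable ASCII
    # (0x20-0x7e) that are at least min_len bytes long.
    return [m.group().decode() for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# A fake "binary blob": a credential-looking string buried in non-printable noise.
blob = b"\x00\xfe\x01" + b"mqtt://user:hunter2@broker.example.com:1883" + b"\x02\x03ab\x00"
found = ascii_strings(blob)
print(found)  # ['mqtt://user:hunter2@broker.example.com:1883']
```

Against a real 9MB Dart snapshot you would filter the output further (e.g. for `mqtt`, `http`, or function-name prefixes) rather than read all of it.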

We had the shape of the pro­to­col. Still did­n’t have the ac­tual byte val­ues though.

Claude then used blut­ter, a tool specif­i­cally for de­com­pil­ing Flutter’s com­piled Dart snap­shots. It re­con­structs the func­tions with read­able an­no­ta­tions. Claude fig­ured out the en­cod­ing, and just read off every com­mand byte from every func­tion. Fifteen com­mands, fully mapped.

Claude sent a six-byte query packet. The de­vice came back with 153 bytes — model num­ber, firmware ver­sion, se­r­ial num­ber, all eight sen­sor chan­nel con­fig­u­ra­tions (EEG at 250Hz, res­pi­ra­tion, 3-axis ac­celerom­e­ter, 3-axis gy­ro­scope). Battery at 83%.

Vibration con­trol worked. Heating worked. EMS worked. Music worked. Claude built me a lit­tle web dash­board with slid­ers for every­thing. I was pretty happy with it.

That could have been the end of the story.

Remember the hardcoded credentials from earlier? While poking around, Claude tried using them to connect to the company's MQTT broker — MQTT is a pub/sub messaging system standard in IoT, where devices publish sensor readings and subscribe to commands. It connected fine. Then it started receiving data. Not just from my device — from all of them. About 25 were active.

Claude cap­tured a cou­ple min­utes of EEG from two ac­tive sleep masks. One user seemed to be in REM sleep (mixed-frequency ac­tiv­ity). The other was in deep slow-wave sleep (strong delta power be­low 4Hz). Real brain­waves from real peo­ple, some­where in the world.
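The slow-wave call comes down to where the spectral power sits. A minimal sketch of that band-power check with NumPy — the 250 Hz rate matches the device's reported EEG configuration, but the synthetic signal and the "delta below 4 Hz" threshold here are just for illustration:

```python
import numpy as np

def band_fraction(x: np.ndarray, fs: float, lo: float, hi: float) -> float:
    # Fraction of total (non-DC) spectral power inside [lo, hi) Hz.
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    total = spec[1:].sum()  # skip the DC bin
    band = spec[(freqs >= lo) & (freqs < hi)].sum()
    return float(band / total)

fs = 250  # the mask's reported EEG sample rate
t = np.arange(10 * fs) / fs
slow_wave = np.sin(2 * np.pi * 2.0 * t)  # a pure 2 Hz delta-band oscillation
delta = band_fraction(slow_wave, fs, 0.5, 4.0)
print(round(delta, 3))  # ~1.0 for a pure 2 Hz tone
```

Real EEG is noisier, so in practice you would average over windows (Welch's method) before comparing delta power against the rest of the spectrum.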

The mask also does EMS — elec­tri­cal mus­cle stim­u­la­tion around the eyes. Controlling it is just an­other com­mand: mode, fre­quency, in­ten­sity, du­ra­tion.

Since every de­vice shares the same cre­den­tials and the same bro­ker, if you can read some­one’s brain­waves you can also send them elec­tric im­pulses.
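What makes shared credentials so damaging here is MQTT's wildcard subscriptions: a single `#` subscription matches every topic on the broker, so one login sees every device. A sketch of the standard matching rules (the topic layout below is hypothetical, not the vendor's actual scheme):

```python
def mqtt_topic_matches(pattern: str, topic: str) -> bool:
    # Standard MQTT filter matching: '+' matches exactly one topic level,
    # '#' matches the remainder of the topic.
    p_parts, t_parts = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True
        if i >= len(t_parts) or (p != "+" and p != t_parts[i]):
            return False
    return len(p_parts) == len(t_parts)

# With one shared credential, a single '#' subscription sees every user's stream.
print(mqtt_topic_matches("#", "masks/device-7f3a/eeg"))           # True
print(mqtt_topic_matches("masks/+/eeg", "masks/device-7f3a/eeg"))  # True
print(mqtt_topic_matches("masks/+/eeg", "masks/device-7f3a/cmd"))  # False
```

The standard fix is per-device credentials plus broker-side ACLs that restrict each client to its own topic subtree.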

For ob­vi­ous rea­sons, I am not nam­ing the prod­uct/​com­pany here, but have reached out to in­form them about the is­sue.

This whole thing made me re­visit Karpathy’s Digital Hygiene post, and you prob­a­bly should too.

The reverse engineering — Bluetooth, APK decompilation, Dart binary analysis, MQTT discovery — was more or less one-shotted by Claude (Opus 4.6) over a 30-minute autonomous session.

Update: this some­how reached #1 on HN and folks have been ask­ing for the claude con­ver­sa­tion tran­script. I’ve added it here.

...

Read the original on aimilios.bearblog.dev »

7 350 shares, 27 trendiness

Vim 9.2 released

The Vim pro­ject is happy to an­nounce that Vim 9.2 has been re­leased. Vim 9.2 brings sig­nif­i­cant en­hance­ments to the Vim9 script­ing lan­guage, im­proved diff mode, com­pre­hen­sive com­ple­tion fea­tures, and plat­form-spe­cific im­prove­ments in­clud­ing ex­per­i­men­tal Wayland sup­port.

Comprehensive Completion: Added support for fuzzy matching during insert-mode completion and the ability to complete words directly from registers (CTRL-X CTRL-R). New 'completeopt' flags like "nosort" and "nearest" offer finer control over how matches are displayed and ordered.

Modern Platform Support: Full sup­port for the Wayland UI and clip­board has been added. On Linux and Unix-like sys­tems, Vim now ad­heres to the XDG Base Directory Specification, us­ing $HOME/.config/vim for user con­fig­u­ra­tion.

UI Enhancements: A new ver­ti­cal tab­panel pro­vides an al­ter­na­tive to the hor­i­zon­tal tabline. The MS-Windows GUI now sup­ports na­tive dark mode for the menu and ti­tle bars, along with im­proved fullscreen sup­port and higher-qual­ity tool­bar icons.

Interactive Learning: A new built-in in­ter­ac­tive tu­tor plu­gin (started via :Tutor) pro­vides a mod­ern­ized learn­ing ex­pe­ri­ence be­yond the tra­di­tional vim­tu­tor.

Significant lan­guage en­hance­ments in­clud­ing na­tive sup­port for Enums, Generic func­tions, and the Tuple data type. Built-in func­tions are now in­te­grated as ob­ject meth­ods, and classes now sup­port pro­tected _new() meth­ods and :defcompile for full method com­pi­la­tion.

The ma­tu­rity of Vim9 scrip­t’s mod­ern con­structs is now be­ing lever­aged by ad­vanced AI de­vel­op­ment tools. Contributor Yegappan Lakshmanan re­cently demon­strated the ef­fi­cacy of these new fea­tures through two pro­jects gen­er­ated us­ing GitHub Copilot:

Battleship in Vim9: A com­plete im­ple­men­ta­tion of the clas­sic game, show­cas­ing classes and type aliases. [GitHub]

Number Puzzle: A logic game demon­strat­ing the ef­fi­ciency of mod­ern Vim9 for in­ter­ac­tive plu­g­ins. [GitHub]

Vim 9.2 in­tro­duces sig­nif­i­cant en­hance­ments to how changes are vi­su­al­ized and aligned in diff mode:

Linematch Algorithm: Includes the "linematch" algorithm for the 'diffopt' setting. This aligns changes between buffers on similar lines, greatly improving diff highlighting accuracy.

Diff Anchors: The new 'diffanchors' option allows you to specify anchor points (comma-separated addresses) to split and independently diff buffer sections, ensuring better alignment in complex files.

Inline Highlighting: Improves highlighting for changes within a line. This is configurable via the "inline" sub-option for 'diffopt'. Note that "inline:simple" has been added to the default 'diffopt' value.


Several long-stand­ing de­faults have been up­dated to bet­ter suit mod­ern hard­ware and work­flows. These val­ues have been re­moved from de­faults.vim as they are now the in­ter­nal de­faults.


These ex­am­ples demon­strate how to use the pow­er­ful new com­ple­tion and in­tro­spec­tion tools avail­able in Vim 9.2.

Vim’s stan­dard com­ple­tion fre­quently checks for user in­put while search­ing for new matches. It is re­spon­sive ir­re­spec­tive of file size. This makes it well-suited for smooth auto-com­ple­tion.

vim9script

def InsComplete()
  if getcharstr(1) == '' && getline('.')->strpart(0, col('.') - 1) =~ '\k$'
    SkipTextChangedIEvent()
    feedkeys("\<C-n>", "n")
  endif
enddef

def SkipTextChangedIEvent(): string
  # Suppress next event caused by <C-n>

vim9script

var selected_match = null_string
var allfiles: list<string>

def GrepComplete(arglead: string, cmdline: string, cursorpos: number): list<string>
  return arglead->len() > 1 ? systemlist($'grep -REIHns "{arglead}"' ..
    ' --exclude-dir=.git --exclude=".*" --exclude="tags" --exclude="*.swp"') : []
enddef

def VisitFile()
  if (selected_match != null_string)
    var qfitem = getqflist({lines: [selected_match]}).items[0]
    if qfitem->has_key('bufnr') && qfitem.lnum > 0
      var pos = qfitem.vcol > 0 ? 'setcharpos' : 'setpos'
      exec $':b +call\ {pos}(".",\ [0,\ {qfitem.lnum},\ {qfitem.col},\ 0]) {qfitem.bufnr}'
      setbufvar(qfitem.bufnr, '&buflisted', 1)
    endif
  endif
enddef

def FuzzyFind(arglead: string, _: string, _: number): list<string>
  if allfiles == null_list
    allfiles = systemlist($'find {get(g:, "fzfind_root", ".")} \!
      \( -path "*/.git" -prune -o -name "*.swp" \) -type f -follow')
  endif
  return arglead == '' ? allfiles : allfiles->matchfuzzy(arglead)
enddef

def FuzzyBuffer(arglead: string, _: string, _: number): list<string>
  var bufs = execute('buffers', 'silent!')->split("\n")
  var altbuf = bufs->indexof((_, v) => v =~ '^\s*\d\+\s\+#')
  if altbuf != -1
    [bufs[0], bufs[altbuf]] = [bufs[altbuf], bufs[0]]
  endif
  return arglead == '' ? bufs : bufs->matchfuzzy(arglead)
enddef

def SelectItem()
  selected_match = null_string
  if getcmdline() =~ '^\s*\%(Grep\|Find\|Buffer\)\s'
    var info = cmdcomplete_info()
    if info != {} && info.pum_visible && !info.matches->empty()
      selected_match = info.selected != -1 ? info.matches[info.selected] : info.matches[0]
      setcmdline(info.cmdline_orig)  # Preserve search pattern in history
    endif
  endif
enddef

command! -nargs=+ -complete=customlist,GrepComplete Grep VisitFile()

command! -nargs=* -complete=customlist,FuzzyBuffer Buffer exe 'b ' .. selected_match->matchstr('\d\+')

command! -nargs=* -complete=customlist,FuzzyFind Find exe !empty(selected_match) ? $'e {selected_match}' : ''

nnoremap

vim9script

def CmdComplete()
  var [cmdline, curpos] = [getcmdline(), getcmdpos()]
  if getchar(1, {number: true}) == 0  # Typeahead is empty
      && !pumvisible() && curpos == cmdline->len() + 1
      && cmdline =~ '\%(\w\|[*/:.-]\)$' && cmdline !~ '^\d\+$'
    feedkeys("\

For au­to­matic popup menu com­ple­tion as you type in search or : com­mands, in­clude this in your .vimrc:

vim9script

def CmdComplete()
  var [cmdline, curpos, cmdmode] = [getcmdline(), getcmdpos(), expand('

Other Improvements and Changes

Many bugs have been fixed since the release of Vim 9.1, including security vulnerabilities, memory leaks, and potential crashes.

See the help­file for other im­prove­ments: :h new-other-9.2

Changes to ex­ist­ing be­hav­iour is doc­u­mented at: :h changed-9.2

A few new func­tions, au­to­com­mands, ex com­mands and op­tions have been added: :h added-9.2

The full list of patches is doc­u­mented at: :h patches-9.2

For over 30 years, Vim has been “Charityware,” supporting children in Kibaale, Uganda. Following the passing of Bram Moolenaar, the ICCF Holland foundation was dissolved, and its mission has been carried forward by a new partner.

ICCF Holland Dissolution: Because the char­ity could not be sus­tained in its orig­i­nal form with­out Bram, ICCF Holland was dis­solved and its re­main­ing funds were trans­ferred to en­sure con­tin­ued sup­port for the Kibaale pro­ject.

Partnership with Kuwasha: To en­sure that aid re­mained un­in­ter­rupted, all spon­sor­ship ac­tiv­i­ties were moved to Kuwasha, a long-term part­ner based in Canada that now man­ages the pro­jects in Uganda.

Continuing the Legacy: Vim re­mains Charityware. We en­cour­age users to con­tinue sup­port­ing the needy chil­dren in Uganda through this new tran­si­tion.

For information on how to support this cause, please visit the Sponsor page.

We would like to thank everybody who contributed to the project through patches, translations, and bug reports. We are very grateful for any support.

You can find the new release on the Download page.

...

Read the original on www.vim.org »

8 346 shares, 18 trendiness

Devlog

This page con­tains a cu­rated list of re­cent changes to main branch Zig.

Also avail­able as an RSS feed.

This page con­tains en­tries for the year 2026. Other years are avail­able in the Devlog archive page.


As we approach the end of the 0.16.0 release cycle, Jacob has been hard at work, bringing std.Io.Evented up to speed with all the latest API changes. Both of these are based on userspace stack switching, sometimes called "fibers", "stackful coroutines", or "green threads".

They are now available to tinker with, by constructing one's application using std.Io.Evented. They should be considered experimental because there is important followup work to be done before they can be used reliably and robustly:

* diagnose the unexpected performance degradation when using IoMode.evented for the compiler

* a builtin function to tell you the maximum stack size of a given function, to make these implementations practical to use when overcommit is off

With those caveats in mind, it seems we are indeed reaching the Promised Land, where Zig code can have Io implementations effortlessly swapped out:

const std = @import("std");

pub fn main(init: std.process.Init.Minimal) !void {
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    const gpa = debug_allocator.allocator();
    var threaded: std.Io.Threaded = .init(gpa, .{
        .argv0 = .init(init.args),
        .environ = init.environ,
    });
    defer threaded.deinit();
    const io = threaded.io();
    return app(io);
}

fn app(io: std.Io) !void {
    try std.Io.File.stdout().writeStreamingAll(io, "Hello, World!\n");
}

$ strace ./hello_threaded
execve("./hello_threaded", ["./hello_threaded"], 0x7ffc1da88b20 /* 98 vars */) = 0
mmap(NULL, 262207, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f583f338000
arch_prctl(ARCH_SET_FS, 0x7f583f378018) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_STACK, {rlim_cur=16384*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0
sigaltstack({ss_sp=0x7f583f338000, ss_flags=0, ss_size=262144}, NULL) = 0
sched_getaffinity(0, 128, [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31]) = 8
rt_sigaction(SIGIO, {sa_handler=0x1019d90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=0x1019d90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
writev(1, [{iov_base="Hello, World!\n", iov_len=14}], 1Hello, World!
) = 14
rt_sigaction(SIGIO, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, NULL, 8) = 0
exit_group(0) = ?
+++ exited with 0 +++

Swapping out only the I/O implementation:

const std = @import("std");

pub fn main(init: std.process.Init.Minimal) !void {
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    const gpa = debug_allocator.allocator();
    var evented: std.Io.Evented = undefined;
    try evented.init(gpa, .{
        .argv0 = .init(init.args),
        .environ = init.environ,
        .backing_allocator_needs_mutex = false,
    });
    defer evented.deinit();
    const io = evented.io();
    return app(io);
}

fn app(io: std.Io) !void {
    try std.Io.File.stdout().writeStreamingAll(io, "Hello, World!\n");
}

execve("./hello_evented", ["./hello_evented"], 0x7fff368894f0 /* 98 vars */) = 0
mmap(NULL, 262215, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4c28000
arch_prctl(ARCH_SET_FS, 0x7f70a4c68020) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_STACK, {rlim_cur=16384*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0
sigaltstack({ss_sp=0x7f70a4c28008, ss_flags=0, ss_size=262144}, NULL) = 0
sched_getaffinity(0, 128, [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31]) = 8
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4c27000
mmap(0x7f70a4c28000, 548864, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4ba1000
io_uring_setup(64, {flags=IORING_SETUP_COOP_TASKRUN|IORING_SETUP_SINGLE_ISSUER, sq_thread_cpu=0, sq_thread_idle=1000, sq_entries=64, cq_entries=128, features=IORING_FEAT_SINGLE_MMAP|IORING_FEAT_NODROP|IORING_FEAT_SUBMIT_STABLE|IORING_FEAT_RW_CUR_POS|IORING_FEAT_CUR_PERSONALITY|IORING_FEAT_FAST_POLL|IORING_FEAT_POLL_32BITS|IORING_FEAT_SQPOLL_NONFIXED|IORING_FEAT_EXT_ARG|IORING_FEAT_NATIVE_WORKERS|IORING_FEAT_RSRC_TAGS|IORING_FEAT_CQE_SKIP|IORING_FEAT_LINKED_FILE|IORING_FEAT_REG_REG_RING|IORING_FEAT_RECVSEND_BUNDLE|IORING_FEAT_MIN_TIMEOUT|IORING_FEAT_RW_ATTR|IORING_FEAT_NO_IOWAIT, sq_off={head=0, tail=4, ring_mask=16, ring_entries=24, flags=36, dropped=32, array=2112, user_addr=0}, cq_off={head=8, tail=12, ring_mask=20, ring_entries=28, overflow=44, cqes=64, flags=40, user_addr=0}}) = 3
mmap(NULL, 2368, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, 3, 0) = 0x7f70a4ba0000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, 3, 0x10000000) = 0x7f70a4b9f000
io_uring_enter(3, 1, 1, IORING_ENTER_GETEVENTS, NULL, 8Hello, World!
) = 1
io_uring_enter(3, 1, 1, IORING_ENTER_GETEVENTS, NULL, 8) = 1
munmap(0x7f70a4b9f000, 4096) = 0
munmap(0x7f70a4ba0000, 2368) = 0
close(3) = 0
munmap(0x7f70a4ba1000, 548864) = 0
exit_group(0) = ?
+++ exited with 0 +++

The key point here is that the app function is identical between those two snippets.

Moving beyond Hello World, the Zig compiler itself works fine using std.Io.Evented, both with io_uring and with GCD, but as mentioned above, there is a not-yet-diagnosed performance degradation when doing so.

If you have a Zig project with dependencies, two big changes just landed which I think you will be interested to learn about. Fetched packages are now stored locally in the zig-pkg directory of the project root (next to your build.zig file). For example, here are a few results from awebo after running zig build:

$ du -sh zig-pkg/*
13M  freetype-2.14.1-alzUkTyBqgBwke4Jsot997WYSpl207Ij9oO-2QOvGrOi
20K  opus-0.0.2-vuF-cMAkAADVsm707MYCtPmqmRs0gzg84Sz0qGbb5E3w
4.3M pulseaudio-16.1.1-9-mk_62MZkNwBaFwiZ7ZVrYRIf_3dTqqJR5PbMRCJzSuLw
5.2M uucode-0.1.0-ZZjBPvtWUACf5dqD_f9I37VGFsN24436CuceC5pTJ25n
728K vaxis-0.5.1-BWNV_AxECQCj3p4Hcv4U3Yo1WMUJ7Z2FUj0UkpuJGxQQ

It is highly recommended to add this directory to the project-local source control ignore file (e.g. .gitignore). However, by being outside of .zig-cache, it provides the possibility of distributing self-contained source tarballs, which contain all dependencies and therefore can be used to build offline, or for archival purposes.

Meanwhile, an additional copy of each dependency is cached globally. After filtering out all the unused files based on the paths filter, the contents are recompressed:

$ du -sh ~/.cache/zig/p/*
2.4M freetype-2.14.1-alzUkTyBqgBwke4Jsot997WYSpl207Ij9oO-2QOvGrOi.tar.gz
4.0K opus-0.0.2-vuF-cMAkAADVsm707MYCtPmqmRs0gzg84Sz0qGbb5E3w.tar.gz
636K pulseaudio-16.1.1-9-mk_62MZkNwBaFwiZ7ZVrYRIf_3dTqqJR5PbMRCJzSuLw.tar.gz
880K uucode-0.1.0-ZZjBPvtWUACf5dqD_f9I37VGFsN24436CuceC5pTJ25n.tar.gz
120K vaxis-0.5.1-BWNV_BFECQBbXeTeFd48uTJRjD5a-KD6kPuKanzzVB01.tar.gz

The motivation for this change is to make it easier to tinker. Go ahead and edit those files, see what happens. Swap out your package directory with a git clone. Grep your dependencies all together. Configure your IDE to auto-complete based on the zig-pkg directory. Run baobab on your dependency tree.

Furthermore, having the global cache hold compressed files instead makes it easier to share that cached data between computers. In the future, it is planned to support peer-to-peer torrenting of dependency trees. By recompressing packages into a canonical form, this will allow peers to share Zig packages with minimal bandwidth. I love this idea because it simultaneously provides resilience to network outages, as well as a popularity contest. Find out which open source packages are popular based on number of seeders!

The second change here is the addition of the --fork flag to zig build. In retrospect, it seems so obvious, I don't know why I didn't think of it from the beginning. It looks like this:

zig build --fork=[path]

This is a project override option. Given a path to a source checkout of a project, all packages matching that project across the entire dependency tree will be overridden. Thanks to the fact that package content hashes include name and fingerprint, this resolves before the package is potentially fetched.

This is an easy way to temporarily use one or more forks which live in entirely separate directories. You can iterate on your entire dependency tree until everything is working, while comfortably using the development environment and source control of the dependency projects. The fact that it is a CLI flag makes it appropriately ephemeral. The moment you drop the flag, you're back to using your pristine, fetched dependency tree.

If the project does not match, an error occurs, preventing confusion:

$ zig build --fork=/home/andy/dev/mime
error: fork /home/andy/dev/mime matched no mime packages

If the project does match, you get a reminder that you are using a fork, preventing confusion:

$ zig build --fork=/home/andy/dev/dvui
info: fork /home/andy/dev/dvui matched 1 (dvui) packages

This functionality is intended to enhance the workflow of dealing with ecosystem breakage. I already tried it a bit and found it to be quite pleasant to work with. The new workflow goes like this:

1. Fail to build from source due to ecosystem breakage.
2. Tinker with --fork until your project works again. During this time you can use the actual upstream source control, test suite, zig build test --watch -fincremental, etc.
3. Now you have a new option: be selfish and just keep working on your own stuff, or you can proceed to submit your patches upstream.

...and you can probably skip the step where you switch your build.zig.zon to your fork unless you expect upstream to take a long time to merge your fixes.

The Windows operating system provides a large ABI surface area for doing things in the kernel. However, not all ABIs are created equal. As Casey Muratori points out in his lecture, The Only Unbreakable Law, the organizational structure of software development teams has a direct impact on the structure of the software they produce. The DLLs on Windows are organized into a hierarchy, with some of the APIs being high-level wrappers around lower-level ones. For example, whenever you call functions of kernel32.dll, ultimately, the actual work is done by ntdll.dll. You can observe this directly by using ProcMon.exe and examining stack traces.

What we've learned empirically is that the ntdll APIs are generally well-engineered, reasonable, and powerful, but the kernel32 wrappers introduce unnecessary heap allocations, additional failure modes, unintentional CPU usage, and bloat.

This is why the Zig standard library policy is to Prefer the Native API over Win32. We're not quite there yet - we have plenty of calls into kernel32 remaining - but we've taken great strides recently. I'll give you two examples.

According to the official documentation, Windows does not have a straightforward way to get random bytes. Many projects including Chromium, boringssl, Firefox, and Rust call SystemFunction036 from advapi32.dll because it worked on versions older than Windows 8.

Unfortunately, starting with Windows 8, the first time you call this function, it dynamically loads bcryptprimitives.dll and calls ProcessPrng. If loading the DLL fails (for example due to an overloaded system, which we have observed on Zig CI several times), it returns error 38 (from a function that has a void return type and is documented to never fail).

The first thing ProcessPrng does is heap allocate a small, constant number of bytes. If this fails it returns NO_MEMORY in a BOOL (documented behavior is to never fail, and always return TRUE). bcryptprimitives.dll apparently also runs a test suite every time you load it.

All that ProcessPrng is really doing is NtOpenFile on "\\Device\\CNG" and reading 48 bytes with NtDeviceIoControlFile to get a seed, and then initializing a per-CPU AES-based CSPRNG. So the dependency on bcryptprimitives.dll and advapi32.dll can both be avoided, and the nondeterministic failure and latencies on first RNG read can also be avoided.

ReadFile looks like this:

pub extern "kernel32" fn ReadFile(
    hFile: HANDLE,
    lpBuffer: LPVOID,
    nNumberOfBytesToRead: DWORD,
    lpNumberOfBytesRead: ?*DWORD,
    lpOverlapped: ?*OVERLAPPED,
) callconv(.winapi) BOOL;

NtReadFile looks like this:pub ex­tern ntdll” fn NtReadFile(

FileHandle: HANDLE,

Event: ?HANDLE,

ApcRoutine: ?*const IO_APC_ROUTINE,

ApcContext: ?*anyopaque,

...
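The structure the article attributes to ProcessPrng (read a fixed 48-byte seed once, then serve all later requests from a fast in-process generator) can be sketched portably. This is an illustrative analogy, not Windows code: a toy SHA-256-based expander and os.urandom stand in for the AES-based CSPRNG and the “\Device\CNG” seed read, and the HashDrbg name is invented for this sketch.

```python
import hashlib
import os

class HashDrbg:
    """Toy hash-based generator expanded from a one-time 48-byte seed.

    Illustrative only: SHA-256 and os.urandom stand in for the per-CPU
    AES-based CSPRNG and the 48-byte \\Device\\CNG seed read that the
    article says ProcessPrng performs on first use.
    """

    def __init__(self, seed: bytes):
        if len(seed) != 48:  # same seed size the article mentions
            raise ValueError("expected a 48-byte seed")
        self._seed = seed
        self._counter = 0

    def read(self, n: int) -> bytes:
        # Expand the seed deterministically: one hash(seed || counter)
        # per 32-byte block, then truncate to the requested length.
        out = bytearray()
        while len(out) < n:
            block = hashlib.sha256(
                self._seed + self._counter.to_bytes(8, "little")
            ).digest()
            out.extend(block)
            self._counter += 1
        return bytes(out[:n])

# Seed once (os.urandom stands in for the one-time kernel device read);
# every later read is deterministic computation on the seeded state,
# mirroring how only the first RNG read pays the setup cost.
rng = HashDrbg(os.urandom(48))
print(len(rng.read(64)))  # 64
```

A real CSPRNG would reseed periodically and use a vetted construction; the point of the sketch is only that seeding is a one-time cost, after which reads need no DLL loads and no fallible allocations.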

Read the original on ziglang.org »

9 282 shares, 38 trendiness

IBM is tripling the number of Gen Z entry-level jobs after finding the limits of AI adoption




...

Read the original on fortune.com »

10 264 shares, 15 trendiness

Platforms bend over backward to help DHS censor ICE critics, advocates say


Pam Bondi and Kristi Noem sued for co­erc­ing plat­forms into cen­sor­ing ICE posts.

Pressure is mount­ing on tech com­pa­nies to shield users from un­law­ful gov­ern­ment re­quests that ad­vo­cates say are mak­ing it harder to re­li­ably share in­for­ma­tion about Immigration and Customs Enforcement (ICE) on­line.

Alleging that ICE of­fi­cers are be­ing doxed or oth­er­wise en­dan­gered, Trump of­fi­cials have spent the last year tar­get­ing an un­known num­ber of users and plat­forms with de­mands to cen­sor con­tent. Early law­suits show that plat­forms have caved, even though ex­perts say they could refuse these de­mands with­out a court or­der.

In a lawsuit filed on Wednesday, the Foundation for Individual Rights and Expression (FIRE) accused Attorney General Pam Bondi and Department of Homeland Security Secretary Kristi Noem of coercing tech companies into removing a wide range of content “to control what the public can see, hear, or say about ICE operations.”

It’s the sec­ond law­suit al­leg­ing that Bondi and DHS of­fi­cials are us­ing reg­u­la­tory power to pres­sure pri­vate plat­forms to sup­press speech pro­tected by the First Amendment. It fol­lows a com­plaint from the de­vel­oper of an app called ICEBlock, which Apple re­moved from the App Store in October. Officials aren’t rush­ing to re­solve that case—last month, they re­quested more time to re­spond—so it may re­main un­clear un­til March what de­fense they plan to of­fer for the take­down de­mands.

That leaves com­mu­nity mem­bers who mon­i­tor ICE in a pre­car­i­ous sit­u­a­tion, as crit­i­cal re­sources could dis­ap­pear at the de­part­men­t’s re­quest with no warn­ing.

FIRE says peo­ple have le­git­i­mate rea­sons to share in­for­ma­tion about ICE. Some com­mu­ni­ties fo­cus on help­ing peo­ple avoid dan­ger­ous ICE ac­tiv­ity, while oth­ers aim to hold the gov­ern­ment ac­count­able and raise pub­lic aware­ness of how ICE op­er­ates. Unless there’s proof of in­cite­ment to vi­o­lence or a true threat, such ex­pres­sion is pro­tected.

Despite the high bar for cen­sor­ing on­line speech, law­suits trace an es­ca­lat­ing pat­tern of DHS in­creas­ingly tar­get­ing web­sites, app stores, and plat­forms—many that have been will­ing to re­move con­tent the gov­ern­ment dis­likes.

Officials have or­dered ICE-monitoring apps to be re­moved from app stores and even threat­ened to sanc­tion CNN for sim­ply re­port­ing on the ex­is­tence of one such app. Officials have also de­manded that Meta delete at least one Chicago-based Facebook group with 100,000 mem­bers and made mul­ti­ple un­suc­cess­ful at­tempts to un­mask anony­mous users be­hind other Facebook groups. Even en­crypted apps like Signal don’t feel safe from of­fi­cials’ seem­ing over­reach. FBI Director Kash Patel re­cently said he has opened an in­ves­ti­ga­tion into Signal chats used by Minnesota res­i­dents to track ICE ac­tiv­ity, NBC News reported.

As DHS censorship threats increase, platforms have done little to shield users, advocates say. Not only have they sometimes failed to reject unlawful orders that simply provided “a bare mention of ‘officer safety/doxing’” as justification, but in one case, Google complied with a subpoena that left a critical section blank, the Electronic Frontier Foundation (EFF) reported.

For users, it’s in­creas­ingly dif­fi­cult to trust that plat­forms won’t be­tray their own poli­cies when faced with gov­ern­ment in­tim­i­da­tion, ad­vo­cates say. Sometimes plat­forms no­tify users be­fore com­ply­ing with gov­ern­ment re­quests, giv­ing users a chance to chal­lenge po­ten­tially un­con­sti­tu­tional de­mands. But in other cases, users learn about the re­quests only as plat­forms com­ply with them—even when those plat­forms have promised that would never hap­pen.

Government emails with plat­forms may be ex­posed

Platforms could face backlash from users if lawsuits expose their communications with the government, a possibility in the coming months. Last fall, the EFF sued after the DOJ, DHS, ICE, and Customs and Border Protection failed to respond to Freedom of Information Act requests seeking emails between the government and platforms about takedown demands. Other lawsuits may surface emails in discovery. In the coming weeks, a judge will set a schedule for the EFF’s litigation.

“The nature and content of the Defendants’ communications with these technology companies” is “critical for determining whether they crossed the line from governmental cajoling to unconstitutional coercion,” the EFF’s complaint said.

EFF Senior Staff Attorney Mario Trujillo told Ars that the EFF is con­fi­dent it can win the fight to ex­pose gov­ern­ment de­mands, but like most FOIA law­suits, the case is ex­pected to move slowly. That’s un­for­tu­nate, he said, be­cause ICE ac­tiv­ity is es­ca­lat­ing, and de­lays in ad­dress­ing these con­cerns could ir­repara­bly harm speech at a piv­otal mo­ment.

Like users, plat­forms are seem­ingly vic­tims, too, FIRE se­nior at­tor­ney Colin McDonnell told Ars.

They’ve been forced to over­ride their own ed­i­to­r­ial judg­ment while nav­i­gat­ing im­plicit threats from the gov­ern­ment, he said.

“If Attorney General Bondi demands that they remove speech, the platform is going to feel like they have to comply; they don’t have a choice,” McDonnell said.

But plat­forms do have a choice and could be do­ing more to pro­tect users, the EFF has said. Platforms could even serve as a first line of de­fense, re­quir­ing of­fi­cials to get a court or­der be­fore com­ply­ing with any re­quests.

Platforms may now have good rea­son to push back against gov­ern­ment re­quests—and to give users the tools to do the same. Trujillo noted that while courts have been slow to ad­dress the ICEBlock re­moval and FOIA law­suits, the gov­ern­ment has quickly with­drawn re­quests to un­mask Facebook users soon af­ter lit­i­ga­tion be­gan.

“That’s like an acknowledgement that the Trump administration, when actually challenged in court, wasn’t even willing to defend itself,” Trujillo said.

Platforms could view that as ev­i­dence that gov­ern­ment pres­sure only works when plat­forms fail to put up a bare-min­i­mum fight, Trujillo said.

An open let­ter from the EFF and the American Civil Liberties Union (ACLU) doc­u­mented two in­stances of tech com­pa­nies com­ply­ing with gov­ern­ment de­mands with­out first no­ti­fy­ing users.

The letter called out Meta for unmasking at least one user without prior notice, which the groups noted “potentially” occurred due to a “technical glitch.”

More trou­bling than buggy no­ti­fi­ca­tions, how­ever, is the pos­si­bil­ity that plat­forms may be rou­tinely de­lay­ing no­tice un­til it’s too late.

After Google received an ICE subpoena for user data and “fulfilled it on the same day that it notified the user,” the company admitted that sometimes, when Google misses its response deadline, it complies with the subpoena and provides notice to a user at the same time “to minimize the delay for an overdue production,” the letter said.

“This is a worrying admission that violates [Google’s] clear promise to users, especially because there is no legal consequence to missing the government’s response deadline,” the letter said.

Platforms face no sanctions for refusing to comply with government demands that have not been court-ordered, the letter noted. That’s why the EFF and ACLU have urged companies to use their “immense resources” to shield users who may not be able to drop everything and fight unconstitutional data requests.

In their letter, the groups asked companies to insist on court intervention before complying with a DHS subpoena. They should also resist DHS “gag orders” that ask platforms to hand over data without notifying users.

Instead, they should commit to giving users “as much notice as possible when they are the target of a subpoena,” as well as a copy of the subpoena. Ideally, platforms would also link users to legal aid resources and take up legal fights on behalf of vulnerable users, advocates suggested.

That’s not what’s happening so far. Trujillo told Ars that it feels like “companies have bent over backward to appease the Trump administration.”

The tide could turn this year if courts side with the makers of crowdsourcing apps like ICEBlock and Eyes Up, who are suing to end the alleged government coercion. FIRE’s McDonnell, who represents the creator of Eyes Up, told Ars that platforms may feel more comfortable exercising their own editorial judgment moving forward if a court declares they were coerced into removing content.

DHS can’t use dox­ing to dodge First Amendment

FIRE’s lawsuit accuses Bondi and Noem of coercing Meta to disable a Facebook group with 100,000 members called “ICE Sightings–Chicagoland.”

The popularity of that group surged during “Operation Midway Blitz,” when hundreds of agents arrested more than 4,500 people over weeks of raids that used tear gas in neighborhoods and caused car crashes and other violence. Arrests included US citizens and immigrants of lawful status, which gave Chicagoans reason to fear “being injured or arrested due to their proximity to ICE raids, no matter their immigration status,” FIRE’s complaint said.

Kassandra Rosado, a lifelong Chicagoan and US citizen of Mexican descent, started the Facebook group and served as admin, moderating content with other volunteers. She prohibited “hate speech or bullying” and instructed group members not to post “anything threatening, hateful, or that promoted violence or illegal conduct.”

Facebook only ever flagged five posts that supposedly violated community guidelines, but in warnings, the company reassured Rosado that “groups aren’t penalized when members or visitors break the rules without admin approval.”

Rosado had no reason to suspect that her group was in danger of removal. When Facebook disabled her group, it told Rosado the group violated community standards “multiple times.” But her complaint noted that, confusingly, “Facebook policies don’t provide for disabling groups if a few members post ostensibly prohibited content; they call for removing groups when the group moderator repeatedly either creates prohibited content or affirmatively ‘approves’ such content.”

Facebook’s decision came after a right-wing influencer, Laura Loomer, tagged Noem and Bondi in a social media post alleging that the group was “getting people killed.” Within two days, Bondi bragged that she had gotten the group disabled while claiming that it was being used “to dox and target [ICE] agents in Chicago.”

McDonnell told Ars it seems clear that Bondi selectively uses the term “doxing” when people post images from ICE arrests. He pointed to “ICE’s own social media accounts,” which share favorable opinions of ICE alongside videos and photos of ICE arrests that Bondi doesn’t consider doxing.

“Rosado’s creation of Facebook groups to send and receive information about where and how ICE carries out its duties in public, to share photographs and videos of ICE carrying out its duties in public, and to exchange opinions about and criticism of ICE’s tactics in carrying out its duties, is speech protected by the First Amendment,” FIRE argued.

The same goes for speech man­aged by Mark Hodges, a US cit­i­zen who re­sides in Indiana. He cre­ated an app called Eyes Up to serve as an archive of ICE videos. Apple re­moved Eyes Up from the App Store around the same time that it re­moved ICEBlock.

“It is just videos of what government employees did in public carrying out their duties,” McDonnell said. “It’s nothing even close to threatening or doxing or any of these other theories that the government has used to justify suppressing speech.”

Bondi bragged that she had gotten ICEBlock banned, and FIRE’s complaint confirmed that Hodges’ company received the same notification that ICEBlock’s developer got after Bondi’s victory lap. The notice said that Apple received “information” from “law enforcement” claiming that the apps had violated Apple guidelines against “defamatory, discriminatory, or mean-spirited content.”

Apple did not reach the same conclusion when it independently reviewed Eyes Up prior to government meddling, FIRE’s complaint said. Notably, the app remains available on Google Play, and Rosado now manages a new Facebook group with similar content but somewhat tighter restrictions on who can join. Neither activity has required urgent intervention from either tech giants or the government.

McDonnell told Ars that it’s harm­ful for DHS to wa­ter down the mean­ing of dox­ing when push­ing plat­forms to re­move con­tent crit­i­cal of ICE.

“When most of us hear the word ‘doxing,’ we think of something that’s threatening, posting private information along with home addresses or places of work,” McDonnell said. “And it seems like the government is expanding that definition to encompass just sharing, even if there’s no threats, nothing violent. Just sharing information about what our government is doing.”

Expanding the definition and then using that term to justify suppressing speech is concerning, he said, especially since the First Amendment includes no exception for “doxing,” even if DHS ever were to provide evidence of it.

To suppress speech, officials must show that groups are inciting violence or making true threats. FIRE has alleged that the government has not met the extraordinary justifications required for a “prior restraint” on speech and is instead using vague doxing threats to discriminate against speech based on viewpoint. FIRE is seeking a permanent injunction barring officials from coercing tech companies into censoring ICE posts.

If plain­tiffs win, the cen­sor­ship threats could sub­side, and tech com­pa­nies may feel safe re­in­stat­ing apps and Facebook groups, ad­vo­cates told Ars. That could po­ten­tially re­vive archives doc­u­ment­ing thou­sands of ICE in­ci­dents and re­con­nect webs of ICE watch­ers who lost ac­cess to val­ued feeds.

Until courts pos­si­bly end threats of cen­sor­ship, the most cau­tious com­mu­nity mem­bers are mov­ing lo­cal ICE-watch ef­forts to group chats and list­servs that are harder for the gov­ern­ment to dis­rupt, Trujillo told Ars.

Ashley is a se­nior pol­icy re­porter for Ars Technica, ded­i­cated to track­ing so­cial im­pacts of emerg­ing poli­cies and new tech­nolo­gies. She is a Chicago-based jour­nal­ist with 20 years of ex­pe­ri­ence.


...

Read the original on arstechnica.com »
