10 interesting stories served every morning and every evening.




1 793 shares, 59 trendiness

uBlock Origin filter list to hide YouTube Shorts

A maintained uBlock Origin filter list to hide all traces of YouTube Shorts videos.

Copy the link below, go to uBlock Origin > Dashboard > Filter lists, scroll to the bottom, and paste the link underneath the "Import…" heading:

https://raw.githubusercontent.com/i5heu/ublock-hide-yt-shorts/master/list.txt

> uBlock Origin sub­scribe link < (does not work on GitHub)


After the initial creator of this list, @gijsdev, vanished for half a year, I (i5heu) took it upon myself to maintain it.

This project is an independent, open-source initiative and is not affiliated with, endorsed by, sponsored by, or associated with Alphabet Inc., Google LLC, or YouTube.

...

Read the original on github.com »

2 539 shares, 25 trendiness

Taggart (@mttaggart@infosec.exchange)

To use the Mastodon web application, please enable JavaScript. Alternatively, try one of the native apps for Mastodon for your platform.

...

Read the original on infosec.exchange »

3 471 shares, 34 trendiness

News publishers limit Internet Archive access due to AI scraping concerns

As part of its mission to preserve the web, the Internet Archive operates crawlers that capture webpage snapshots. Many of these snapshots are accessible through its public-facing tool, the Wayback Machine. But as AI bots scavenge the web for training data to feed their models, the Internet Archive's commitment to free information access has turned its digital library into a potential liability for some news publishers.

When The Guardian took a look at who was trying to extract its content, access logs revealed that the Internet Archive was a frequent crawler, said Robert Hahn, head of business affairs and licensing. The publisher decided to limit the Internet Archive's access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit's repository of over one trillion webpage snapshots.

The Wayback Machine's snapshots of news homepages plummet after a "breakdown" in archiving projects

Specifically, Hahn said The Guardian has taken steps to exclude itself from the Internet Archive's APIs and filter out its article pages from the Wayback Machine's URLs interface. The Guardian's regional homepages, topic pages, and other landing pages will continue to appear in the Wayback Machine.

In particular, Hahn expressed concern about the Internet Archive's APIs.

"A lot of these AI businesses are looking for readily available, structured databases of content," he said. "The Internet Archive's API would have been an obvious place to plug their own machines into and suck out the IP." (He admits the Wayback Machine itself is "less risky," since the data is not as well-structured.)

As news publishers try to safeguard their content from AI companies, the Internet Archive is also getting caught in the crosshairs. The Financial Times, for example, blocks any bot that tries to scrape its paywalled content, including bots from OpenAI, Anthropic, Perplexity, and the Internet Archive. The majority of FT stories are paywalled, according to director of global public policy and platform strategy Matt Rogerson. As a result, usually only unpaywalled FT stories appear in the Wayback Machine, because those are meant to be available to the wider public anyway.

"Common Crawl and Internet Archive are widely considered to be the 'good guys' and are used by the 'bad guys' like OpenAI," said Michael Nelson, a computer scientist and professor at Old Dominion University. "In everyone's aversion to not be controlled by LLMs, I think the good guys are collateral damage."

To preserve their work — and drafts of history — journalists take archiving into their own hands

The Guardian hasn't documented specific instances of its webpages being scraped by AI companies via the Wayback Machine. Instead, it's taking these measures proactively and is working directly with the Internet Archive to implement the changes. Hahn says the organization has been receptive to The Guardian's concerns.

The outlet stopped short of an all-out block on the Internet Archive's crawlers, Hahn said, because it supports the nonprofit's mission to democratize information, though that position remains under review as part of its routine bot management.

"[The decision] was much more about compliance and a backdoor threat to our content," he said.

When asked about The Guardian's decision, Internet Archive founder Brewster Kahle said that if publishers limit libraries, like the Internet Archive, "then the public will have less access to the historical record." It's a prospect, he implied, that could undercut the organization's work countering "information disorder."

After 25 years, Brewster Kahle and the Internet Archive are still working to democratize knowledge

The Guardian isn't alone in reevaluating its relationship to the Internet Archive. The New York Times confirmed to Nieman Lab that it's actively "hard blocking" the Internet Archive's crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its robots.txt file, disallowing access to its content.

"We believe in the value of The New York Times's human-led journalism and always want to ensure that our IP is being accessed and used lawfully," said a Times spokesperson. "We are blocking the Internet Archive's bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization."

Last August, Reddit announced that it would block the Internet Archive, whose digital libraries include countless archived Reddit forums, comments sections, and profiles. This content is not unlike what Reddit now licenses to Google as AI training data for tens of millions of dollars.

"[The] Internet Archive provides a service to the open web, but we've been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine," a Reddit spokesperson told The Verge at the time. "Until they're able to defend their site and comply with platform policies…we're limiting some of their access to Reddit data to protect redditors."

Kahle has also alluded to steps the Internet Archive is taking to restrict bulk access to its libraries. In a Mastodon post last fall, he wrote that there are "many collections that are available to users but not for bulk downloading. We use internal rate-limiting systems, filtering mechanisms, and network security services such as Cloudflare."

Currently, however, the Internet Archive does not disallow any specific crawlers through its robots.txt file, including those of major AI companies. As of January 12, the robots.txt file for archive.org read: "Welcome to the Archive! Please crawl our files. We appreciate it if you can crawl responsibly. Stay open!" Shortly after we inquired about this language, it was changed. The file now reads, simply, "Welcome to the Internet Archive!"

There is evidence that the Wayback Machine, generally speaking, has been used to train LLMs in the past. An analysis of Google's C4 dataset by the Washington Post in 2023 showed that the Internet Archive was among millions of websites in the training data used to build Google's T5 model and Meta's Llama models. Out of the 15 million domains in the C4 dataset, the domain for the Wayback Machine (web.archive.org) ranked as the 187th most present.

Hundreds of thousands of videos from news publishers like The New York Times and Vox were used to train AI models

In May 2023, the Internet Archive went offline temporarily after an AI company caused a server overload, Wayback Machine director Mark Graham told Nieman Lab this past fall. The company sent tens of thousands of requests per second from virtual hosts on Amazon Web Services to extract text data from the nonprofit's public domain archives. The Internet Archive blocked the hosts twice before putting out a public call to "respectfully" scrape its site.

"We got in contact with them. They ended up giving us a donation," Graham said. "They ended up saying that they were sorry and they stopped doing it."

"Those wanting to use our materials in bulk should start slowly, and ramp up," wrote Kahle in a blog post shortly after the incident. "Also, if you are starting a large project please contact us …we are here to help."

The Guardian's moves to limit the Internet Archive's access made us wonder whether other news publishers were taking similar actions. We looked at publishers' robots.txt pages as a way to measure potential concern over the Internet Archive's crawling.

A website's robots.txt page tells bots which parts of the site they can crawl, acting like a "doorman" telling visitors who is and isn't allowed in the house and which parts are off limits. Robots.txt pages aren't legally binding, so the companies running crawling bots aren't obligated to comply with them, but they indicate where the Internet Archive is unwelcome.
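Python's standard library ships a parser for exactly these rules; a small sketch (with a hypothetical robots.txt) of how a compliant crawler decides whether it may fetch a page:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt in the style publishers are now deploying:
# turn away the Internet Archive's crawler, admit everyone else.
rules = """
User-agent: archive.org_bot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# The named bot is turned away at the door...
print(parser.can_fetch("archive.org_bot", "https://example.com/2026/article"))  # False
# ...while other compliant bots are still allowed in.
print(parser.can_fetch("SomeOtherBot", "https://example.com/2026/article"))  # True
```

Nothing enforces this: the parser only reports what the site asked for, which is why "hard blocking" at the server is a separate step.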

For example, in addition to "hard blocking," The New York Times and The Athletic include the archive.org_bot in their robots.txt files, though they do not currently disallow other bots operated by the Internet Archive.

To explore this issue, Nieman Lab used journalist Ben Welsh's database of 1,167 news websites as a starting point. As part of a larger side project to archive news sites' homepages, Welsh runs crawlers that regularly scrape the robots.txt files of the outlets in his database. In late December, we downloaded a spreadsheet from Welsh's site that displayed all the bots disallowed in the robots.txt files of those sites. We identified four bots that the AI user agent watchdog service Dark Visitors has associated with the Internet Archive. (The Internet Archive did not respond to requests to confirm its ownership of these bots.)

This data is exploratory, not comprehensive. It does not represent global, industry-wide trends — 76% of sites in Welsh's publisher list are based in the U.S., for example — but instead begins to shed light on which publishers are less eager to have their content crawled by the Internet Archive.

In total, 241 news sites from nine countries explicitly disallow at least one of the four Internet Archive crawling bots.

Most of those sites (87%) are owned by USA Today Co., the largest newspaper conglomerate in the United States, formerly known as Gannett. (Gannett sites make up only 18% of Welsh's original publishers list.) Each Gannett-owned outlet in our dataset disallows the same two bots: "archive.org_bot" and "ia_archiver-web.archive.org". These bots were added to the robots.txt files of Gannett-owned publications in 2025.

Some Gannett sites have also taken stronger measures to guard their content from Internet Archive crawlers. URL searches for the Des Moines Register in the Wayback Machine return a message that says, "Sorry. This URL has been excluded from the Wayback Machine."

"USA Today Co. has consistently emphasized the importance of safeguarding our content and intellectual property," a company spokesperson said via email. "Last year, we introduced new protocols to deter unauthorized data collection and scraping, redirecting such activity to a designated page outlining our licensing requirements."

Gannett declined to comment further on its relationship with the Internet Archive. In an October 2025 earnings call, CEO Mike Reed spoke to the company's anti-scraping measures.

"In September alone, we blocked 75 million AI bots across our local and USA Today platforms, the vast majority of which were seeking to scrape our local content," Reed said on that call. "About 70 million of those came from OpenAI." (Gannett signed a content licensing agreement with Perplexity in July 2025.)

About 93% (226 sites) of publishers in our dataset disallow two of the four Internet Archive bots we identified. Three news sites in the sample disallow three Internet Archive crawlers: Le Huffington Post, Le Monde, and Le Monde in English, all of which are owned by Groupe Le Monde.

Some French publishers are giving AI revenue directly to journalists. Could that ever happen in the U.S.?

The news sites in our sample aren't only targeting the Internet Archive. Of the 241 sites that disallow at least one of the four Internet Archive bots, 240 also disallow Common Crawl — another nonprofit internet preservation project, one that has been more closely linked to commercial LLM development. And 231 sites in our sample disallow bots operated by OpenAI, Google AI, and Common Crawl alike.

As we've previously reported, the Internet Archive has taken on the Herculean task of preserving the internet, and many news organizations aren't equipped to save their own work. In December, Poynter announced a joint initiative with the Internet Archive to train local news outlets on how to preserve their content. Archiving initiatives like this, while urgently needed, are few and far between. Since there is no federal mandate that requires internet content to be preserved, the Internet Archive is the most robust archiving initiative in the United States.

"The Internet Archive tends to be good citizens," Hahn said. "It's the law of unintended consequences: You do something for really good purposes, and it gets abused."

Photo of Internet Archive homepage by SDF_QWE used under an Adobe Stock license.

...

Read the original on www.niemanlab.org »

4 406 shares, 25 trendiness

My smart sleep mask broadcasts users' brainwaves to an open MQTT broker

I recently got a smart sleep mask from Kickstarter. I was not expecting to end up with the ability to read strangers' brainwaves and send them electric impulses in their sleep. But here we are.

The mask was from a small Chinese research company, very cool hardware — EEG brain monitoring, electrical muscle stimulation around the eyes, vibration, heating, audio. The app was still rough around the edges though, and the mask kept disconnecting, so I asked Claude to try to reverse-engineer the Bluetooth protocol and build me a simple web control panel instead.

The first thing Claude did was scan for BLE (Bluetooth Low Energy) devices nearby. It found mine among 35 devices in range, connected, and mapped the interface — two data channels. One for sending commands, one for streaming data.

Then it tried talking to it. Sent maybe a hundred different command patterns. Modbus frames, JSON, raw bytes, common headers. Unfortunately, the device said nothing back; the protocol was not a standard one.

So Claude went after the app instead. Grabbed the Android APK, decompiled it with jadx. Turns out the app is built with Flutter, which is a bit of a problem for reverse engineering. Flutter compiles Dart source code into native ARM64 machine code — you can't just read it back like normal Java Android apps. The actual business logic lives in a 9MB binary blob.

But even compiled binaries have strings in them. Error messages, URLs, debug logs. Claude ran strings on the binary, and this was the most productive step of the whole session. Among the thousands of lines of Flutter framework noise, it found:

* Hardcoded credentials for the company's message broker (shared by every copy of the app)

* All fifteen command builder function names (e.g. to set vibration, heating, electric stimulation, etc.)

We had the shape of the protocol. Still didn't have the actual byte values though.

Claude then used blutter, a tool specifically for decompiling Flutter's compiled Dart snapshots. It reconstructs the functions with readable annotations. Claude figured out the encoding, and just read off every command byte from every function. Fifteen commands, fully mapped.
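The post doesn't publish the actual byte values, so here is a purely hypothetical sketch of the general shape such a mapped command builder tends to take for a BLE device: a fixed header, an opcode, parameters, and a checksum. Every constant below is invented for illustration, not the mask's real protocol:

```python
import struct

HEADER = 0xAA            # invented framing byte
OPCODE_VIBRATION = 0x03  # invented opcode

def build_command(opcode: int, *params: int) -> bytes:
    """Frame a command as header | opcode | params | XOR checksum."""
    body = struct.pack(f"BB{len(params)}B", HEADER, opcode, *params)
    checksum = 0
    for byte in body:
        checksum ^= byte
    return body + bytes([checksum])

# e.g. vibration with mode 1, intensity 80, duration 30 (all invented values)
print(build_command(OPCODE_VIBRATION, 1, 80, 30).hex())
```

Reading fifteen such builders out of the decompiled Dart is what turns function names from the strings dump into packets you can actually send.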

Claude sent a six-byte query packet. The device came back with 153 bytes — model number, firmware version, serial number, all eight sensor channel configurations (EEG at 250Hz, respiration, 3-axis accelerometer, 3-axis gyroscope). Battery at 83%.

Vibration control worked. Heating worked. EMS worked. Music worked. Claude built me a little web dashboard with sliders for everything. I was pretty happy with it.

That could have been the end of the story.

Remember the hardcoded credentials from earlier? While poking around, Claude tried using them to connect to the company's MQTT broker — MQTT is a pub/sub messaging system standard in IoT, where devices publish sensor readings and subscribe to commands. It connected fine. Then it started receiving data. Not just from my device — from all of them. About 25 were active:

Claude captured a couple minutes of EEG from two active sleep masks. One user seemed to be in REM sleep (mixed-frequency activity). The other was in deep slow-wave sleep (strong delta power below 4Hz). Real brainwaves from real people, somewhere in the world.

The mask also does EMS — electrical muscle stimulation around the eyes. Controlling it is just another command: mode, frequency, intensity, duration.

Since every device shares the same credentials and the same broker, if you can read someone's brainwaves, you can also send them electric impulses.
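The reason one set of credentials exposes everyone is how MQTT subscriptions work: a single wildcard filter can match every device's topics on the broker. A rough sketch of the standard matching rules (the topic names here are invented, not the vendor's):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Standard MQTT topic-filter matching: '+' matches exactly one
    level, '#' matches the remainder of the topic."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, level in enumerate(f_parts):
        if level == "#":
            return True  # multi-level wildcard swallows everything after it
        if i >= len(t_parts):
            return False
        if level != "+" and level != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

# One subscription sees every mask's EEG stream:
print(topic_matches("masks/+/eeg", "masks/device-042/eeg"))       # True
# And '#' hoovers up commands, sensor data, everything:
print(topic_matches("masks/#", "masks/device-042/commands"))      # True
```

With shared credentials and no per-device access control, a `#` subscription is all it takes to see, and publish to, every user's topics.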

For obvious reasons, I am not naming the product/company here, but I have reached out to inform them about the issue.

This whole thing made me revisit Karpathy's Digital Hygiene post, and you probably should too.

The reverse engineering — Bluetooth, APK decompilation, Dart binary analysis, MQTT discovery — was more or less one-shotted by Claude (Opus 4.6) over a 30-minute autonomous session.

Update: this somehow reached #1 on HN and folks have been asking for the Claude conversation transcript. I've added it here.

...

Read the original on aimilios.bearblog.dev »

5 374 shares, 21 trendiness

Vim 9.2 released

The Vim project is happy to announce that Vim 9.2 has been released. Vim 9.2 brings significant enhancements to the Vim9 scripting language, improved diff mode, comprehensive completion features, and platform-specific improvements including experimental Wayland support.

Comprehensive Completion: Added support for fuzzy matching during insert-mode completion and the ability to complete words directly from registers (CTRL-X CTRL-R). New 'completeopt' flags like "nosort" and "nearest" offer finer control over how matches are displayed and ordered.

Modern Platform Support: Full support for the Wayland UI and clipboard has been added. On Linux and Unix-like systems, Vim now adheres to the XDG Base Directory Specification, using $HOME/.config/vim for user configuration.

UI Enhancements: A new vertical tabpanel provides an alternative to the horizontal tabline. The MS-Windows GUI now supports native dark mode for the menu and title bars, along with improved fullscreen support and higher-quality toolbar icons.

Interactive Learning: A new built-in interactive tutor plugin (started via :Tutor) provides a modernized learning experience beyond the traditional vimtutor.

Significant language enhancements include native support for Enums, Generic functions, and the Tuple data type. Built-in functions are now integrated as object methods, and classes now support protected _new() methods and :defcompile for full method compilation.

The maturity of Vim9 script's modern constructs is now being leveraged by advanced AI development tools. Contributor Yegappan Lakshmanan recently demonstrated the efficacy of these new features through two projects generated using GitHub Copilot:

Battleship in Vim9: A complete implementation of the classic game, showcasing classes and type aliases. [GitHub]

Number Puzzle: A logic game demonstrating the efficiency of modern Vim9 for interactive plugins. [GitHub]

Vim 9.2 introduces significant enhancements to how changes are visualized and aligned in diff mode:

Linematch Algorithm: Includes the "linematch" algorithm for the 'diffopt' setting. This aligns changes between buffers on similar lines, greatly improving diff highlighting accuracy.

Diff Anchors: The new 'diffanchors' option allows you to specify anchor points (comma-separated addresses) to split and independently diff buffer sections, ensuring better alignment in complex files.

Inline Highlighting: Improves highlighting for changes within a line. This is configurable via the "inline" sub-option for 'diffopt'. Note that "inline:simple" has been added to the default 'diffopt' value.

Here are some examples for the improved inline highlighting:

Several long-standing defaults have been updated to better suit modern hardware and workflows. These values have been removed from defaults.vim as they are now the internal defaults.

On (Always visible in non-compatible mode)

These examples demonstrate how to use the powerful new completion and introspection tools available in Vim 9.2.

Vim's standard completion frequently checks for user input while searching for new matches. It is responsive irrespective of file size. This makes it well-suited for smooth auto-completion.

vim9script

def InsComplete()
  if getcharstr(1) == '' && getline('.')->strpart(0, col('.') - 1) =~ '\k$'
    SkipTextChangedIEvent()
    feedkeys("\<C-n>", "n")
  endif
enddef

def SkipTextChangedIEvent(): string
  # Suppress next event caused by

vim9script

var selected_match = null_string
var allfiles: list<string>

def GrepComplete(arglead: string, cmdline: string, cursorpos: number): list<string>
  return arglead->len() > 1 ? systemlist($'grep -REIHns "{arglead}"' ..
    ' --exclude-dir=.git --exclude=".*" --exclude="tags" --exclude="*.swp"') : []
enddef

def VisitFile()
  if (selected_match != null_string)
    var qfitem = getqflist({lines: [selected_match]}).items[0]
    if qfitem->has_key('bufnr') && qfitem.lnum > 0
      var pos = qfitem.vcol > 0 ? 'setcharpos' : 'setpos'
      exec $':b +call\ {pos}(".",\ [0,\ {qfitem.lnum},\ {qfitem.col},\ 0]) {qfitem.bufnr}'
      setbufvar(qfitem.bufnr, '&buflisted', 1)
    endif
  endif
enddef

def FuzzyFind(arglead: string, _: string, _: number): list<string>
  if allfiles == null_list
    allfiles = systemlist($'find {get(g:, "fzfind_root", ".")} \! \( -path "*/.git" -prune -o -name "*.swp" \) -type f -follow')
  endif
  return arglead == '' ? allfiles : allfiles->matchfuzzy(arglead)
enddef

def FuzzyBuffer(arglead: string, _: string, _: number): list<string>
  var bufs = execute('buffers', 'silent!')->split("\n")
  var altbuf = bufs->indexof((_, v) => v =~ '^\s*\d\+\s\+#')
  if altbuf != -1
    [bufs[0], bufs[altbuf]] = [bufs[altbuf], bufs[0]]
  endif
  return arglead == '' ? bufs : bufs->matchfuzzy(arglead)
enddef

def SelectItem()
  selected_match = null_string
  if getcmdline() =~ '^\s*\%(Grep\|Find\|Buffer\)\s'
    var info = cmdcomplete_info()
    if info != {} && info.pum_visible && !info.matches->empty()
      selected_match = info.selected != -1 ? info.matches[info.selected] : info.matches[0]
      setcmdline(info.cmdline_orig) # Preserve search pattern in history
    endif
  endif
enddef

command! -nargs=+ -complete=customlist,GrepComplete Grep VisitFile()
command! -nargs=* -complete=customlist,FuzzyBuffer Buffer exe 'b ' .. selected_match->matchstr('\d\+')
command! -nargs=* -complete=customlist,FuzzyFind Find exe !empty(selected_match) ? $'e {selected_match}' : ''

nnoremap

vim9script

def CmdComplete()
  var [cmdline, curpos] = [getcmdline(), getcmdpos()]
  if getchar(1, {number: true}) == 0 # Typeahead is empty
      && !pumvisible() && curpos == cmdline->len() + 1
      && cmdline =~ '\%(\w\|[*/:.-]\)$' && cmdline !~ '^\d\+$'
    feedkeys("\

For automatic popup menu completion as you type in search or ":" commands, include this in your .vimrc:

vim9script

def CmdComplete()
  var [cmdline, curpos, cmdmode] = [getcmdline(), getcmdpos(), expand('

Other Improvements and Changes

Many bugs have been fixed since the release of Vim 9.1, including security vulnerabilities, memory leaks, and potential crashes.

See the helpfile for other improvements: :h new-other-9.2

Changes to existing behaviour are documented at: :h changed-9.2

A few new functions, autocommands, ex commands, and options have been added: :h added-9.2

The full list of patches is documented at: :h patches-9.2

For over 30 years, Vim has been "Charityware," supporting children in Kibaale, Uganda. Following the passing of Bram Moolenaar, the ICCF Holland foundation was dissolved, and its mission has been carried forward by a new partner.

ICCF Holland Dissolution: Because the charity could not be sustained in its original form without Bram, ICCF Holland was dissolved and its remaining funds were transferred to ensure continued support for the Kibaale project.

Partnership with Kuwasha: To ensure that aid remained uninterrupted, all sponsorship activities were moved to Kuwasha, a long-term partner based in Canada that now manages the projects in Uganda.

Continuing the Legacy: Vim remains Charityware. We encourage users to continue supporting the needy children in Uganda through this new transition.

For information on how to support this cause, please visit the Sponsor page.

We would like to thank everybody who contributed to the project through patches, translations, and bug reports. We are very grateful for any support.

You can find the new release on the Download page.

...

Read the original on www.vim.org »

6 355 shares, 15 trendiness

Devlog

This page contains a curated list of recent changes to main branch Zig.

Also available as an RSS feed.

This page contains entries for the year 2026. Other years are available in the Devlog archive page.


As we approach the end of the 0.16.0 release cycle, Jacob has been hard at work, bringing std.Io.Evented up to speed with all the latest API changes:

Both of these are based on userspace stack switching, sometimes called "fibers", "stackful coroutines", or "green threads". They are now available to tinker with, by constructing one's application using std.Io.Evented. They should be considered experimental because there is important followup work to be done before they can be used reliably and robustly:

* diagnose the unexpected performance degradation when using IoMode.evented for the compiler

* builtin function to tell you the maximum stack size of a given function, to make these implementations practical to use when overcommit is off

With those caveats in mind, it seems we are indeed reaching the Promised Land, where Zig code can have Io implementations effortlessly swapped out:

const std = @import("std");

pub fn main(init: std.process.Init.Minimal) !void {
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    const gpa = debug_allocator.allocator();
    var threaded: std.Io.Threaded = .init(gpa, .{
        .argv0 = .init(init.args),
        .environ = init.environ,
    });
    defer threaded.deinit();
    const io = threaded.io();
    return app(io);
}

fn app(io: std.Io) !void {
    try std.Io.File.stdout().writeStreamingAll(io, "Hello, World!\n");
}

$ strace ./hello_threaded
execve("./hello_threaded", ["./hello_threaded"], 0x7ffc1da88b20 /* 98 vars */) = 0
mmap(NULL, 262207, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f583f338000
arch_prctl(ARCH_SET_FS, 0x7f583f378018) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_STACK, {rlim_cur=16384*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0
sigaltstack({ss_sp=0x7f583f338000, ss_flags=0, ss_size=262144}, NULL) = 0
sched_getaffinity(0, 128, [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31]) = 8
rt_sigaction(SIGIO, {sa_handler=0x1019d90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=0x1019d90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
writev(1, [{iov_base="Hello, World!\n", iov_len=14}], 1Hello, World!
) = 14
rt_sigaction(SIGIO, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, NULL, 8) = 0
exit_group(0) = ?
+++ exited with 0 +++

Swapping out only the I/O implementation:

const std = @import("std");

pub fn main(init: std.process.Init.Minimal) !void {
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    const gpa = debug_allocator.allocator();
    var evented: std.Io.Evented = undefined;
    try evented.init(gpa, .{
        .argv0 = .init(init.args),
        .environ = init.environ,
        .backing_allocator_needs_mutex = false,
    });
    defer evented.deinit();
    const io = evented.io();
    return app(io);
}

fn app(io: std.Io) !void {
    try std.Io.File.stdout().writeStreamingAll(io, "Hello, World!\n");
}

execve("./hello_evented", ["./hello_evented"], 0x7fff368894f0 /* 98 vars */) = 0
mmap(NULL, 262215, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4c28000
arch_prctl(ARCH_SET_FS, 0x7f70a4c68020) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_STACK, {rlim_cur=16384*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0
sigaltstack({ss_sp=0x7f70a4c28008, ss_flags=0, ss_size=262144}, NULL) = 0
sched_getaffinity(0, 128, [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31]) = 8
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4c27000
mmap(0x7f70a4c28000, 548864, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4ba1000
io_uring_setup(64, {flags=IORING_SETUP_COOP_TASKRUN|IORING_SETUP_SINGLE_ISSUER, sq_thread_cpu=0, sq_thread_idle=1000, sq_entries=64, cq_entries=128, features=IORING_FEAT_SINGLE_MMAP|IORING_FEAT_NODROP|IORING_FEAT_SUBMIT_STABLE|IORING_FEAT_RW_CUR_POS|IORING_FEAT_CUR_PERSONALITY|IORING_FEAT_FAST_POLL|IORING_FEAT_POLL_32BITS|IORING_FEAT_SQPOLL_NONFIXED|IORING_FEAT_EXT_ARG|IORING_FEAT_NATIVE_WORKERS|IORING_FEAT_RSRC_TAGS|IORING_FEAT_CQE_SKIP|IORING_FEAT_LINKED_FILE|IORING_FEAT_REG_REG_RING|IORING_FEAT_RECVSEND_BUNDLE|IORING_FEAT_MIN_TIMEOUT|IORING_FEAT_RW_ATTR|IORING_FEAT_NO_IOWAIT, sq_off={head=0, tail=4, ring_mask=16, ring_entries=24, flags=36, dropped=32, array=2112, user_addr=0}, cq_off={head=8, tail=12, ring_mask=20, ring_entries=28, overflow=44, cqes=64, flags=40, user_addr=0}}) = 3
mmap(NULL, 2368, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, 3, 0) = 0x7f70a4ba0000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, 3, 0x10000000) = 0x7f70a4b9f000
io_uring_enter(3, 1, 1, IORING_ENTER_GETEVENTS, NULL, 8Hello, World!
) = 1
io_uring_enter(3, 1, 1, IORING_ENTER_GETEVENTS, NULL, 8) = 1
munmap(0x7f70a4b9f000, 4096) = 0
munmap(0x7f70a4ba0000, 2368) = 0
close(3) = 0
munmap(0x7f70a4ba1000, 548864) = 0
exit_group(0) = ?
+++ exited with 0 +++

Key point here being that the app function is identical between those two snippets.

Moving beyond Hello World, the Zig compiler itself works fine using std.Io.Evented, both with io_uring and with GCD, but as mentioned above, there is a not-yet-diagnosed performance degradation when doing so.
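As a loose analogy (in Python, not the actual Zig API), the pattern is ordinary dependency injection: the application receives an Io implementation and never cares whether writes complete immediately or go through an event loop. All class and method names below are invented for illustration:

```python
class BlockingIo:
    """Stand-in for a blocking Io implementation: writes happen immediately."""
    def __init__(self):
        self.written = []

    def write(self, data):
        self.written.append(data)


class EventedIo:
    """Stand-in for an evented Io implementation: writes are queued
    (as with io_uring submission) and completed by an event loop."""
    def __init__(self):
        self.pending = []
        self.written = []

    def write(self, data):
        self.pending.append(data)

    def run(self):
        # Drain the queue, completing each submitted operation.
        while self.pending:
            self.written.append(self.pending.pop(0))


def app(io):
    # Identical application code for either implementation.
    io.write("Hello, World!\n")


blocking = BlockingIo()
app(blocking)

evented = EventedIo()
app(evented)
evented.run()

# Both paths produce the same result from the same app code.
assert blocking.written == evented.written == ["Hello, World!\n"]
```

The point mirrors the snippets above: only the Io implementation handed to `app` changes; `app` itself is untouched.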

If you have a Zig project with dependencies, two big changes just landed which I think you will be interested to learn about.

Fetched packages are now stored locally in the zig-pkg directory of the project root (next to your build.zig file). For example, here are a few results from awebo after running zig build:

$ du -sh zig-pkg/*

13M freetype-2.14.1-alzUk­Ty­BqgB­wke4J­sot997WYS­pl207I­j9oO-2QOv­GrOi

20K opus-0.0.2-vuF-cMAkAAD­Vs­m707MYCtP­mqmRs0gzg84Sz0qG­b­b5E3w

4.3M pulseau­dio-16.1.1-9-mk_62MZkN­wBaFwiZ­7ZVrYRIf_3dTqqJR5PbM­R­CJz­SuLw

5.2M uu­code-0.1.0-ZZjBPvtWUACf5dqD_f9I37VGF­s­N24436CuceC5p­TJ25n

728K vaxis-0.5.1-BWN­V_Ax­EC­QCj3p4Hcv4U3Y­o1W­MU­J7Z2­FU­j0Ukpu­JGxQQ

It is highly recommended to add this directory to the project-local source control ignore file (e.g. .gitignore). However, by being outside of .zig-cache, it provides the possibility of distributing self-contained source tarballs, which contain all dependencies and therefore can be used to build offline, or for archival purposes.

Meanwhile, an additional copy of the dependency is cached globally. After filtering out all the unused files based on the paths filter, the contents are recompressed:

$ du -sh ~/.cache/zig/p/*

2.4M freetype-2.14.1-alzUk­Ty­BqgB­wke4J­sot997WYS­pl207I­j9oO-2QOv­GrOi.tar.gz

4.0K opus-0.0.2-vuF-cMAkAAD­Vs­m707MYCtP­mqmRs0gzg84Sz0qG­b­b5E3w.tar.gz

636K pulseau­dio-16.1.1-9-mk_62MZkN­wBaFwiZ­7ZVrYRIf_3dTqqJR5PbM­R­CJz­SuLw.tar.gz

880K uu­code-0.1.0-ZZjBPvtWUACf5dqD_f9I37VGF­s­N24436CuceC5p­TJ25n.tar.gz

120K vaxis-0.5.1-BWN­V_BFEC­QB­bX­eTeFd48uTJR­jD5a-KD6kPuKanz­zVB01.tar.gz
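To see why recompressing into a canonical form matters for sharing, here is a hedged Python sketch (not Zig's actual recompression code): gzip normally embeds a timestamp, so compressing the same package on two machines would yield different bytes and different hashes; pinning that metadata makes the archive a pure function of its contents. The package contents below are hypothetical:

```python
import gzip
import hashlib

# Hypothetical package contents after the paths filter is applied.
package_bytes = b"pub fn main() void {}\n"

def canonical_compress(data: bytes) -> bytes:
    # mtime=0 pins the timestamp field gzip would otherwise embed,
    # so the output depends only on the input bytes.
    return gzip.compress(data, compresslevel=9, mtime=0)

# Two "peers" compressing the same contents independently...
peer_a = canonical_compress(package_bytes)
peer_b = canonical_compress(package_bytes)

# ...produce byte-identical archives, hence identical content hashes,
# which is what lets peers verify and exchange packages.
assert peer_a == peer_b
assert hashlib.sha256(peer_a).digest() == hashlib.sha256(peer_b).digest()
assert gzip.decompress(peer_a) == package_bytes
```

With byte-identical archives, a content hash is enough to identify a package for peer-to-peer distribution.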

The motivation for this change is to make it easier to tinker. Go ahead and edit those files, see what happens. Swap out your package directory with a git clone. Grep your dependencies all together. Configure your IDE to auto-complete based on the zig-pkg directory. Run baobab on your dependency tree.

Furthermore, having compressed files in the global cache makes it easier to share that cached data between computers. In the future, it is planned to support peer-to-peer torrenting of dependency trees. By recompressing packages into a canonical form, this will allow peers to share Zig packages with minimal bandwidth. I love this idea because it simultaneously provides resilience to network outages, as well as a popularity contest. Find out which open source packages are popular based on number of seeders!

The second change here is the addition of the --fork flag to zig build. In retrospect, it seems so obvious, I don't know why I didn't think of it since the beginning. It looks like this:

zig build --fork=[path]

This is a project override option. Given a path to a source checkout of a project, all packages matching that project across the entire dependency tree will be overridden.

Thanks to the fact that package content hashes include name and fingerprint, this resolves before the package is potentially fetched.

This is an easy way to temporarily use one or more forks which are in entirely separate directories. You can iterate on your entire dependency tree until everything is working, while comfortably using the development environment and source control of the dependency projects.

The fact that it is a CLI flag makes it appropriately ephemeral. The moment you drop the flag, you're back to using your pristine, fetched dependency tree.

If the project does not match, an error occurs, preventing confusion:

$ zig build --fork=/home/andy/dev/mime

er­ror: fork /home/andy/dev/mime matched no mime pack­ages

If the project does match, you get a reminder that you are using a fork, preventing confusion:

$ zig build --fork=/home/andy/dev/dvui

info: fork /home/andy/dev/dvui matched 1 (dvui) pack­ages

This functionality is intended to enhance the workflow of dealing with ecosystem breakage. I already tried it a bit and found it to be quite pleasant to work with. The new workflow goes like this:

1. Fail to build from source due to ecosystem breakage.

2. Tinker with --fork until your project works again. During this time you can use the actual upstream source control, test suite, zig build test --watch -fincremental, etc.

3. Now you have a new option: be selfish and just keep working on your own stuff, or you can proceed to submit your patches upstream.

...and you can probably skip the step where you switch your build.zig.zon to your fork unless you expect upstream to take a long time to merge your fixes.
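The override resolution described above can be sketched in a few lines of Python (hypothetical data and function names, not the actual zig build implementation): every package is identified by name plus fingerprint, so a fork can be matched against the whole tree before anything is fetched:

```python
# Hypothetical dependency tree; each entry records name, fingerprint,
# and where the package contents will come from.
tree = [
    {"name": "dvui", "fingerprint": "0xabc", "source": "fetch"},
    {"name": "vaxis", "fingerprint": "0xdef", "source": "fetch"},
]

def apply_fork(tree, fork_name, fork_fingerprint, fork_path):
    """Override every package matching the fork's (name, fingerprint)
    with the local checkout path; error out if nothing matches."""
    matched = 0
    for pkg in tree:
        if (pkg["name"], pkg["fingerprint"]) == (fork_name, fork_fingerprint):
            pkg["source"] = fork_path
            matched += 1
    if matched == 0:
        raise ValueError(f"fork {fork_path} matched no {fork_name} packages")
    return matched

n = apply_fork(tree, "dvui", "0xabc", "/home/andy/dev/dvui")
assert n == 1
assert tree[0]["source"] == "/home/andy/dev/dvui"
assert tree[1]["source"] == "fetch"  # non-matching packages untouched
```

Because matching happens on identity rather than on a URL, the override applies uniformly no matter how deep in the tree the package appears.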

The Windows operating system provides a large ABI surface area for doing things in the kernel. However, not all ABIs are created equally. As Casey Muratori points out in his lecture, The Only Unbreakable Law, the organizational structure of software development teams has a direct impact on the structure of the software they produce. The DLLs on Windows are organized into a hierarchy, with some of the APIs being high-level wrappers around lower-level ones. For example, whenever you call functions of kernel32.dll, ultimately, the actual work is done by ntdll.dll. You can observe this directly by using ProcMon.exe and examining stack traces.

What we've learned empirically is that the ntdll APIs are generally well-engineered, reasonable, and powerful, but the kernel32 wrappers introduce unnecessary heap allocations, additional failure modes, unintentional CPU usage, and bloat.

This is why the Zig standard library policy is to Prefer the Native API over Win32. We're not quite there yet - we have plenty of calls into kernel32 remaining - but we've taken great strides recently. I'll give you two examples.

According to the official documentation, Windows does not have a straightforward way to get random bytes. Many projects including Chromium, boringssl, Firefox, and Rust call SystemFunction036 from advapi32.dll because it worked on versions older than Windows 8.

Unfortunately, starting with Windows 8, the first time you call this function, it dynamically loads bcryptprimitives.dll and calls ProcessPrng. If loading the DLL fails (for example due to an overloaded system, which we have observed on Zig CI several times), it returns error 38 (from a function that has void return type and is documented to never fail).

The first thing ProcessPrng does is heap allocate a small, constant number of bytes. If this fails it returns NO_MEMORY in a BOOL (documented behavior is to never fail, and always return TRUE). bcryptprimitives.dll apparently also runs a test suite every time you load it.

All that ProcessPrng is really doing is NtOpenFile on "\Device\CNG" and reading 48 bytes with NtDeviceIoControlFile to get a seed, and then initializing a per-CPU AES-based CSPRNG. So the dependency on bcryptprimitives.dll and advapi32.dll can both be avoided, and the nondeterministic failure and latencies on first RNG read can also be avoided.

ReadFile looks like this:

pub extern "kernel32" fn ReadFile(
    hFile: HANDLE,
    lpBuffer: LPVOID,
    nNumberOfBytesToRead: DWORD,
    lpNumberOfBytesRead: ?*DWORD,
    lpOverlapped: ?*OVERLAPPED,
) callconv(.winapi) BOOL;

NtReadFile looks like this:

pub extern "ntdll" fn NtReadFile(
    FileHandle: HANDLE,
    Event: ?HANDLE,
    ApcRoutine: ?*const IO_APC_ROUTINE,
    ApcContext: ?*anyopaque,

...

Read the original on ziglang.org »

7 331 shares, 28 trendiness

IBM is tripling the number of Gen Z entry-level jobs after finding the limits of AI adoption


...

Read the original on fortune.com »

8 272 shares, 11 trendiness

Platforms bend over backward to help DHS censor ICE critics, advocates say


Pam Bondi and Kristi Noem sued for co­erc­ing plat­forms into cen­sor­ing ICE posts.

Pressure is mount­ing on tech com­pa­nies to shield users from un­law­ful gov­ern­ment re­quests that ad­vo­cates say are mak­ing it harder to re­li­ably share in­for­ma­tion about Immigration and Customs Enforcement (ICE) on­line.

Alleging that ICE of­fi­cers are be­ing doxed or oth­er­wise en­dan­gered, Trump of­fi­cials have spent the last year tar­get­ing an un­known num­ber of users and plat­forms with de­mands to cen­sor con­tent. Early law­suits show that plat­forms have caved, even though ex­perts say they could refuse these de­mands with­out a court or­der.

In a lawsuit filed on Wednesday, the Foundation for Individual Rights and Expression (FIRE) accused Attorney General Pam Bondi and Department of Homeland Security Secretary Kristi Noem of coercing tech companies into removing a wide range of content "to control what the public can see, hear, or say about ICE operations."

It’s the sec­ond law­suit al­leg­ing that Bondi and DHS of­fi­cials are us­ing reg­u­la­tory power to pres­sure pri­vate plat­forms to sup­press speech pro­tected by the First Amendment. It fol­lows a com­plaint from the de­vel­oper of an app called ICEBlock, which Apple re­moved from the App Store in October. Officials aren’t rush­ing to re­solve that case—last month, they re­quested more time to re­spond—so it may re­main un­clear un­til March what de­fense they plan to of­fer for the take­down de­mands.

That leaves com­mu­nity mem­bers who mon­i­tor ICE in a pre­car­i­ous sit­u­a­tion, as crit­i­cal re­sources could dis­ap­pear at the de­part­men­t’s re­quest with no warn­ing.

FIRE says peo­ple have le­git­i­mate rea­sons to share in­for­ma­tion about ICE. Some com­mu­ni­ties fo­cus on help­ing peo­ple avoid dan­ger­ous ICE ac­tiv­ity, while oth­ers aim to hold the gov­ern­ment ac­count­able and raise pub­lic aware­ness of how ICE op­er­ates. Unless there’s proof of in­cite­ment to vi­o­lence or a true threat, such ex­pres­sion is pro­tected.

Despite the high bar for cen­sor­ing on­line speech, law­suits trace an es­ca­lat­ing pat­tern of DHS in­creas­ingly tar­get­ing web­sites, app stores, and plat­forms—many that have been will­ing to re­move con­tent the gov­ern­ment dis­likes.

Officials have or­dered ICE-monitoring apps to be re­moved from app stores and even threat­ened to sanc­tion CNN for sim­ply re­port­ing on the ex­is­tence of one such app. Officials have also de­manded that Meta delete at least one Chicago-based Facebook group with 100,000 mem­bers and made mul­ti­ple un­suc­cess­ful at­tempts to un­mask anony­mous users be­hind other Facebook groups. Even en­crypted apps like Signal don’t feel safe from of­fi­cials’ seem­ing over­reach. FBI Director Kash Patel re­cently said he has opened an in­ves­ti­ga­tion into Signal chats used by Minnesota res­i­dents to track ICE ac­tiv­ity, NBC News reported.

As DHS censorship threats increase, platforms have done little to shield users, advocates say. Not only have they sometimes failed to reject unlawful orders that simply provided "a bare mention of 'officer safety/doxing'" as justification, but in one case, Google complied with a subpoena that left a critical section blank, the Electronic Frontier Foundation (EFF) reported.

For users, it’s in­creas­ingly dif­fi­cult to trust that plat­forms won’t be­tray their own poli­cies when faced with gov­ern­ment in­tim­i­da­tion, ad­vo­cates say. Sometimes plat­forms no­tify users be­fore com­ply­ing with gov­ern­ment re­quests, giv­ing users a chance to chal­lenge po­ten­tially un­con­sti­tu­tional de­mands. But in other cases, users learn about the re­quests only as plat­forms com­ply with them—even when those plat­forms have promised that would never hap­pen.

Government emails with plat­forms may be ex­posed

Platforms could face backlash from users if lawsuits expose their communications with the government, a possibility in the coming months. Last fall, the EFF sued after DOJ, DHS, ICE, and Customs and Border Patrol failed to respond to Freedom of Information Act requests seeking emails between the government and platforms about takedown demands. Other lawsuits may surface emails in discovery. In the coming weeks, a judge will set a schedule for EFF's litigation.

"The nature and content of the Defendants' communications with these technology companies" is "critical for determining" whether they crossed the line "from governmental cajoling to unconstitutional coercion," EFF's complaint said.

EFF Senior Staff Attorney Mario Trujillo told Ars that the EFF is con­fi­dent it can win the fight to ex­pose gov­ern­ment de­mands, but like most FOIA law­suits, the case is ex­pected to move slowly. That’s un­for­tu­nate, he said, be­cause ICE ac­tiv­ity is es­ca­lat­ing, and de­lays in ad­dress­ing these con­cerns could ir­repara­bly harm speech at a piv­otal mo­ment.

Like users, plat­forms are seem­ingly vic­tims, too, FIRE se­nior at­tor­ney Colin McDonnell told Ars.

They’ve been forced to over­ride their own ed­i­to­r­ial judg­ment while nav­i­gat­ing im­plicit threats from the gov­ern­ment, he said.

"If Attorney General Bondi demands that they remove speech, the platform is going to feel like they have to comply; they don't have a choice," McDonnell said.

But plat­forms do have a choice and could be do­ing more to pro­tect users, the EFF has said. Platforms could even serve as a first line of de­fense, re­quir­ing of­fi­cials to get a court or­der be­fore com­ply­ing with any re­quests.

Platforms may now have good rea­son to push back against gov­ern­ment re­quests—and to give users the tools to do the same. Trujillo noted that while courts have been slow to ad­dress the ICEBlock re­moval and FOIA law­suits, the gov­ern­ment has quickly with­drawn re­quests to un­mask Facebook users soon af­ter lit­i­ga­tion be­gan.

"That's like an acknowledgement that the Trump administration, when actually challenged in court, wasn't even willing to defend itself," Trujillo said.

Platforms could view that as ev­i­dence that gov­ern­ment pres­sure only works when plat­forms fail to put up a bare-min­i­mum fight, Trujillo said.

An open let­ter from the EFF and the American Civil Liberties Union (ACLU) doc­u­mented two in­stances of tech com­pa­nies com­ply­ing with gov­ern­ment de­mands with­out first no­ti­fy­ing users.

The letter called out Meta for unmasking at least one user without prior notice, which groups noted "potentially" occurred due to a "technical glitch."

More trou­bling than buggy no­ti­fi­ca­tions, how­ever, is the pos­si­bil­ity that plat­forms may be rou­tinely de­lay­ing no­tice un­til it’s too late.

After Google "received an ICE subpoena for user data and fulfilled it on the same day that it notified the user," the company admitted that "sometimes when Google misses its response deadline, it complies with the subpoena and provides notice to a user at the same time to minimize the delay for an overdue production," the letter said.

"This is a worrying admission that violates [Google's] clear promise to users, especially because there is no legal consequence to missing the government's response deadline," the letter said.

Platforms face no sanctions for refusing to comply with government demands that have not been court-ordered, the letter noted. That's why the EFF and ACLU have urged companies to use their "immense resources" to shield users who may not be able to drop everything and fight unconstitutional data requests.

In their letter, the groups asked companies to insist on court intervention before complying with a DHS subpoena. They should also resist DHS "gag orders" that ask platforms to hand over data without notifying users.

Instead, they should commit to "giving users as much notice as possible when they are the target of a subpoena," as well as a copy of the subpoena. Ideally, platforms would also link users to legal aid resources and take up legal fights on behalf of vulnerable users, advocates suggested.

That's not what's happening so far. Trujillo told Ars that it feels like "companies have bent over backward to appease the Trump administration."

The tide could turn this year if courts side with app makers behind crowdsourcing apps like ICEBlock and Eyes Up, who are suing to end the alleged government coercion. FIRE's McDonnell, who represents the creator of Eyes Up, told Ars that platforms may feel more comfortable exercising their own editorial judgment moving forward if a court declares they were coerced into removing content.

DHS can’t use dox­ing to dodge First Amendment

FIRE's lawsuit accuses Bondi and Noem of coercing Meta to disable a Facebook group with 100,000 members called "ICE Sightings–Chicagoland."

The popularity of that group surged during "Operation Midway Blitz," when hundreds of agents arrested more than 4,500 people over weeks of raids that used tear gas in neighborhoods and caused car crashes and other violence. Arrests included US citizens and immigrants of lawful status, which gave Chicagoans "reason to fear being injured or arrested due to their proximity to ICE raids, no matter their immigration status," FIRE's complaint said.

Kassandra Rosado, a lifelong Chicagoan and US citizen of Mexican descent, started the Facebook group and served as admin, moderating content with other volunteers. She prohibited "hate speech or bullying" and instructed group members not to post "anything threatening, hateful, or that promoted violence or illegal conduct."

Facebook only ever flagged five posts that supposedly violated community guidelines, but in warnings, the company reassured Rosado that "groups aren't penalized when members or visitors break the rules without admin approval."

Rosado had no reason to suspect that her group was in danger of removal. When Facebook disabled her group, it told Rosado the group violated community standards "multiple times." But her complaint noted that, confusingly, Facebook policies don't provide for disabling groups if a few members post ostensibly prohibited content; "they call for removing groups when the group moderator repeatedly either creates prohibited content or affirmatively 'approves' such content."

Facebook's decision came after a right-wing influencer, Laura Loomer, tagged Noem and Bondi in a social media post alleging that the group was "getting people killed." Within two days, Bondi bragged that she had gotten the group disabled while claiming that it was being used "to dox and target [ICE] agents in Chicago."

McDonnell told Ars it seems clear that Bondi selectively uses the term "doxing" when people post images from ICE arrests. He pointed to "ICE's own social media accounts," which share favorable opinions of ICE alongside videos and photos of ICE arrests that Bondi doesn't consider doxing.

"Rosado's creation of Facebook groups to send and receive information about where and how ICE carries out its duties in public, to share photographs and videos of ICE carrying out its duties in public, and to exchange opinions about and criticism of ICE's tactics in carrying out its duties, is speech protected by the First Amendment," FIRE argued.

The same goes for speech man­aged by Mark Hodges, a US cit­i­zen who re­sides in Indiana. He cre­ated an app called Eyes Up to serve as an archive of ICE videos. Apple re­moved Eyes Up from the App Store around the same time that it re­moved ICEBlock.

"It is just videos of what government employees did in public carrying out their duties," McDonnell said. "It's nothing even close to threatening or doxing or any of these other theories that the government has used to justify suppressing speech."

Bondi bragged that she had gotten ICEBlock banned, and FIRE's complaint confirmed that Hodges' company received the same notification that ICEBlock's developer got after Bondi's victory lap. The notice said that Apple received "information" from "law enforcement" claiming that the apps had violated Apple guidelines against "defamatory, discriminatory, or mean-spirited content."

Apple did not reach the same conclusion when it independently reviewed Eyes Up prior to government meddling, FIRE's complaint said. Notably, the app remains available in Google Play, and Rosado now manages a new Facebook group with similar content but somewhat tighter restrictions on who can join. Neither activity has required urgent intervention from either tech giants or the government.

McDonnell told Ars that it’s harm­ful for DHS to wa­ter down the mean­ing of dox­ing when push­ing plat­forms to re­move con­tent crit­i­cal of ICE.

"When most of us hear the word 'doxing,' we think of something that's threatening, posting private information along with home addresses or places of work," McDonnell said. "And it seems like the government is expanding that definition to encompass just sharing, even if there's no threats, nothing violent. Just sharing information about what our government is doing."

Expanding the definition and then using that term to justify suppressing speech is concerning, he said, especially since the First Amendment includes no exception for "doxing," even if DHS ever were to provide evidence of it.

To suppress speech, officials must show that groups are inciting violence or making true threats. FIRE has alleged that the government has not met the extraordinary justifications required for a "prior restraint" on speech and is instead using vague doxing threats to discriminate against speech based on viewpoint. They're seeking a permanent injunction barring officials from coercing tech companies into censoring ICE posts.

If plain­tiffs win, the cen­sor­ship threats could sub­side, and tech com­pa­nies may feel safe re­in­stat­ing apps and Facebook groups, ad­vo­cates told Ars. That could po­ten­tially re­vive archives doc­u­ment­ing thou­sands of ICE in­ci­dents and re­con­nect webs of ICE watch­ers who lost ac­cess to val­ued feeds.

Until courts pos­si­bly end threats of cen­sor­ship, the most cau­tious com­mu­nity mem­bers are mov­ing lo­cal ICE-watch ef­forts to group chats and list­servs that are harder for the gov­ern­ment to dis­rupt, Trujillo told Ars.

Ashley is a se­nior pol­icy re­porter for Ars Technica, ded­i­cated to track­ing so­cial im­pacts of emerg­ing poli­cies and new tech­nolo­gies. She is a Chicago-based jour­nal­ist with 20 years of ex­pe­ri­ence.


...

Read the original on arstechnica.com »

9 255 shares, 49 trendiness

I love the work of the ArchWiki maintainers

For this year's "I love Free Software Day" I would like to thank the maintainers of Free Software documentation, and here especially the maintainers of the ArchWiki. Maintainers in general, and maintainers of documentation in particular, get way too little recognition for their contributions to software freedom.

Myself, Arch Project Leader Levente, ArchWiki maintainer Ferdinand (Alad), and FSFE's vice president Heiki at FOSDEM 2026, after I handed them some hacker chocolate.

The ArchWiki is a resource that I myself and many people around me regularly consult - no matter if it is actually about Arch or another Free Software distribution. There are countless times when I read articles there to get a better understanding of the tools I use daily, like e-mail programs, editors, or all kinds of window managers I have used over time. It helped me discover some handy features or configuration tips that were difficult for me to find in the documentation of the software itself.

Whenever I run into is­sues set­ting up a GNU/Linux dis­tri­b­u­tion for my­self or fam­ily and friends, the ArchWiki had my back!

Whenever I want to bet­ter un­der­stand a soft­ware, the ArchWiki is most of­ten the first page I end up con­sult­ing.

You are one of the pearls of the in­ter­net! Or in Edward Snowden’s words:

Is it just me, or have search re­sults be­come ab­solute garbage for ba­si­cally every site? It’s nearly im­pos­si­ble to dis­cover use­ful in­for­ma­tion these days (outside the ArchWiki). https://​x.com/​Snow­den/​sta­tus/​1460666075033575425

Thank you to all the ArchWiki contributors for gathering all the knowledge that helps others in society better understand technology, and to the ArchWiki maintainers for ensuring the long-term availability and reliability of this crucial resource.

If you also appreciate the work of the ArchWiki maintainers for our society, tell them as well, and I encourage you to make a donation to Arch.

PS: Thanks also to Morton for con­nect­ing me with Ferdinand and Levente at FOSDEM.

...

Read the original on k7r.eu »

10 223 shares, 0 trendiness

OffSeq - Live Threat Intelligence


...

Read the original on radar.offseq.com »
