10 interesting stories served every morning and every evening.


DeepSeek V4—almost on the frontier, a fraction of the price

simonwillison.net

24th April 2026

Chinese AI lab DeepSeek’s last model release was V3.2 (and V3.2 Speciale) last December. They just dropped the first of their hotly anticipated V4 series in the shape of two preview models, DeepSeek-V4-Pro and DeepSeek-V4-Flash.

Both models are Mixture of Experts models with 1 million token context windows. Pro is 1.6T total parameters, 49B active. Flash is 284B total, 13B active. They’re using the standard MIT license.

I think this makes DeepSeek-V4-Pro the new largest open weights model. It’s larger than Kimi K2.6 (1.1T) and GLM-5.1 (754B) and more than twice the size of DeepSeek V3.2 (685B).

Pro is 865GB on Hugging Face, Flash is 160GB. I’m hoping that a lightly quantized Flash will run on my 128GB M5 MacBook Pro. It’s possible the Pro model may run on it if I can stream just the necessary active experts from disk.
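
As a rough check on that hope, here’s some back-of-the-envelope arithmetic (mine, not from DeepSeek) relating parameter count and quantization level to file size:

    # Rough size of a quantized model: params x bits-per-weight / 8.
    # Ignores quantization metadata, embeddings, and KV cache overhead.
    def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    for bpw in (8, 4, 3):
        print(f"Flash (284B) at {bpw} bpw: ~{model_size_gb(284, bpw):.0f} GB")
    # 8 bpw ~ 284 GB, 4 bpw ~ 142 GB, 3 bpw ~ 107 GB, so Flash would likely
    # need roughly 3-bit quantization (plus headroom) to fit in 128 GB.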

For the moment I tried the models out via OpenRouter, using llm-openrouter:

llm install llm-openrouter

llm openrouter refresh

llm -m openrouter/deepseek/deepseek-v4-pro 'Generate an SVG of a pelican riding a bicycle'
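
The same request can also be made against OpenRouter’s OpenAI-compatible HTTP API directly. A minimal sketch, assuming an OPENROUTER_API_KEY environment variable and the model id used above:

    import os
    import requests

    # Call OpenRouter's chat completions endpoint with the DeepSeek model id.
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "deepseek/deepseek-v4-pro",
            "messages": [
                {"role": "user",
                 "content": "Generate an SVG of a pelican riding a bicycle"}
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])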

Here’s the pelican for DeepSeek-V4-Flash:

And for DeepSeek-V4-Pro:

For comparison, take a look at the pelicans I got from DeepSeek V3.2 in December, V3.1 in August, and V3-0324 in March 2025.

So the pelicans are pretty good, but what’s really notable here is the cost. DeepSeek V4 is a very, very inexpensive model.

This is DeepSeek’s pricing page. They’re charging $0.14/million tokens input and $0.28/million tokens output for Flash, and $1.74/million input and $3.48/million output for Pro.

Here’s a comparison table with the frontier models from Gemini, OpenAI and Anthropic:

DeepSeek-V4-Flash is the cheapest of the small models, beating even OpenAI’s GPT-5.4 Nano. DeepSeek-V4-Pro is the cheapest of the larger frontier models.
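
To make those numbers concrete, here’s the list pricing applied to a hypothetical call with 100,000 input tokens and 5,000 output tokens (rates from the pricing page above; the token counts are invented for illustration):

    # Cost = input_tokens/1M * input_rate + output_tokens/1M * output_rate.
    RATES = {  # USD per million tokens (input, output)
        "deepseek-v4-flash": (0.14, 0.28),
        "deepseek-v4-pro": (1.74, 3.48),
    }

    def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
        rate_in, rate_out = RATES[model]
        return input_tokens / 1e6 * rate_in + output_tokens / 1e6 * rate_out

    for model in RATES:
        print(f"{model}: ${call_cost(model, 100_000, 5_000):.4f}")
    # deepseek-v4-flash: $0.0154
    # deepseek-v4-pro:   $0.1914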

This note from the DeepSeek paper helps explain why they can price these models so low—they’ve focused a great deal on efficiency with this release, especially for longer context prompts:

In the scenario of 1M-token context, even DeepSeek-V4-Pro, which has a larger number of activated parameters, attains only 27% of the single-token FLOPs (measured in equivalent FP8 FLOPs) and 10% of the KV cache size relative to DeepSeek-V3.2. Furthermore, DeepSeek-V4-Flash, with its smaller number of activated parameters, pushes efficiency even further: in the 1M-token context setting, it achieves only 10% of the single-token FLOPs and 7% of the KV cache size compared with DeepSeek-V3.2.

DeepSeek’s self-reported benchmarks in their paper show their Pro model competitive with those other frontier models, albeit with this note:

Through the expansion of reasoning tokens, DeepSeek-V4-Pro-Max demonstrates superior performance relative to GPT-5.2 and Gemini-3.0-Pro on standard reasoning benchmarks. Nevertheless, its performance falls marginally short of GPT-5.4 and Gemini-3.1-Pro, suggesting a developmental trajectory that trails state-of-the-art frontier models by approximately 3 to 6 months.

I’m keeping an eye on huggingface.co/unsloth/models as I expect the Unsloth team will have a set of quantized versions out pretty soon. It’s going to be very interesting to see how well that Flash model runs on my own machine.

A Farewell to Ask.com | 25 Years of Curiosity

www.ask.com

Every great search must come to an end.

As IAC continues to sharpen its focus, we have made the decision to discontinue our search business, which includes Ask.com. After 25 years of answering the world’s questions, Ask.com officially closed on May 1, 2026.

“To the millions who asked…”

We are deeply grateful to the brilliant engineers, designers, and teams who built and supported Ask over the decades. And to you—the millions of users who turned to us for answers in a rapidly changing world—thank you for your endless curiosity, your loyalty, and your trust.

Jeeves’ spirit endures.

Artemis II Photo Timeline

artemistimeline.com


AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights

arxiv.org


Abstract: As artificial intelligence (AI) tools become widely adopted, large language models (LLMs) are increasingly involved on both sides of decision-making processes, ranging from hiring to content moderation. This dual adoption raises a critical question: do LLMs systematically favor content that resembles their own outputs? Prior research in computer science has identified self-preference bias — the tendency of LLMs to favor their own generated content — but its real-world implications have not been empirically evaluated. We focus on the hiring context, where job applicants often rely on LLMs to refine resumes, while employers deploy them to screen those same resumes. Using a large-scale controlled resume correspondence experiment, we find that LLMs consistently prefer resumes generated by themselves over those written by humans or produced by alternative models, even when content quality is controlled. The bias against human-written resumes is particularly substantial, with self-preference bias ranging from 67% to 82% across major commercial and open-source models. To assess labor market impact, we simulate realistic hiring pipelines across 24 occupations. These simulations show that candidates using the same LLM as the evaluator are 23% to 60% more likely to be shortlisted than equally qualified applicants submitting human-written resumes, with the largest disadvantages observed in business-related fields such as sales and accounting. We further demonstrate that this bias can be reduced by more than 50% through simple interventions targeting LLMs’ self-recognition capabilities. These findings highlight an emerging but previously overlooked risk in AI-assisted decision making and call for expanded frameworks of AI fairness that address not only demographic-based disparities, but also biases in AI-AI interactions.
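
The core metric is simple to state: how often does an LLM judge shortlist its own rewrite over the human-written resume for the same candidate? A toy sketch of that measurement (the judge below is a random stub standing in for a real model call; an unbiased judge on equal-quality resumes should sit near 50%):

    import random

    def judge_prefers_own(own_resume: str, human_resume: str) -> bool:
        # Stand-in for a real evaluator call: an actual experiment would show
        # the judge both resumes in randomized order and parse its shortlist.
        return random.random() < 0.75  # placeholder preference rate

    pairs = [(f"llm_resume_{i}", f"human_resume_{i}") for i in range(1000)]
    wins = sum(judge_prefers_own(own, human) for own, human in pairs)
    print(f"self-preference rate: {wins / len(pairs):.1%}")
    # The paper reports 67% to 82% across models against that 50% baseline.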


Submission history

From: Jiannan Xu

[v1] Sat, 30 Aug 2025 11:40:11 UTC (3,032 KB)

[v2] Thu, 11 Sep 2025 16:59:36 UTC (3,032 KB)

[v3] Mon, 9 Feb 2026 13:24:26 UTC (5,723 KB)

NetHack 5.0.0: Release Notes

nethack.org

The NetHack DevTeam is announcing the release of NetHack 5.0.0 on May 2, 2026.

NetHack 5.0 is an enhancement to the dungeon exploration game NetHack, which is a distant descendant of Rogue and Hack, and a direct descendant of NetHack 3.6.

NetHack 5.0.0 is a release of NetHack. As a .0 version, there may be some bugs encountered. Constructive suggestions, GitHub pull requests, and bug reports are all welcome and encouraged.

Along with the game improvements and bug fixes, NetHack 5.0 strives to make some general architectural improvements to the game or to its building process. Among them, 5.0:

Has its source code compliant with the C99 standard.

Removes barriers to building NetHack on one platform and operating system, for later execution on another (possibly quite different) platform and/or operating system. That capability is generally known as “cross-compiling.” See the file “Cross-compiling” in the top-level folder for more information on that.

The build-time “yacc and lex”-based level compiler, the “yacc and lex”-based dungeon compiler, and the quest text file processing previously done by NetHack’s “makedefs” utility, have been replaced with Lua text alternatives that are loaded and processed by the game during play.

A list of over 3100 fixes and changes can be found in the game’s sources in the file doc/fixes5-0-0.txt. The text in there was written for the development team’s own use and is provided “as is”. Some entries might be considered “spoilers”, particularly in the “new features” section.

Existing saved games and bones files will not work with NetHack 5.0.0.

Checksums (sha256) of binaries that you have downloaded from nethack.org can be verified on Windows platforms using:

  certUtil -hashfile nethack-500-win-x64.zip SHA256

or

  certUtil -hashfile nethack-500-win-arm64.zip SHA256
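
On Linux, sha256sum should produce the same digest, and macOS ships shasum (standard tools, not mentioned in the release notes; the file name here just follows the Windows examples above):

  sha256sum nethack-500-win-x64.zip

or

  shasum -a 256 nethack-500-win-x64.zip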

The following command can be used on most platforms to help confirm the location of various files that NetHack may use:

  nethack --showpaths

As with all releases of the game, we appreciate your feedback. Please submit any bugs using the problem report form. Also, please check the “known bugs” list before you log a problem - somebody else may have already found it.

Happy NetHacking!

VideoLAN / dav2d · GitLab

code.videolan.org

Enabling ai co author by default by cwebster-99 · Pull Request #310226 · microsoft/vscode

github.com


Merged


Pull request overview

This PR changes the Git extension’s git.addAICoAuthor setting so that AI co-author trailers are enabled by default, making the default behavior automatically add a Co-authored-by trailer when AI-generated code contributions are detected.

Changes:

Updates git.addAICoAuthor configuration default from “off” to “all”.
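
For anyone who preferred the previous behavior, the change should be reversible per user in settings.json. A sketch based only on the setting name and values given in this PR, not verified against the shipped release:

    // settings.json (VS Code allows comments in this file)
    {
      // Revert to the pre-PR default: no AI co-author trailer.
      "git.addAICoAuthor": "off"
    }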

Copilot’s findings

Files reviewed: 1/1 changed files

Comments generated: 1

Screenshot Changes

Base: 3c1b53dd Current: eec3f9cf

Changed (3)

blocks-ci screenshots changed

Replace the contents of test/componentFixtures/blocks-ci-screenshots.md with:

<!-- auto-generated by CI — do not edit manually -->

#### editor/codeEditor/CodeEditor/Dark

![screenshot](https://hediet-screenshots.azurewebsites.net/images/cb32a3e854b5734fe5aaca2318f2e0a42ee821b05ea97883ea42c5ba95edb3c3)

#### editor/codeEditor/CodeEditor/Light

![screenshot](https://hediet-screenshots.azurewebsites.net/images/42624fbba5e0db7f32c224b5eb9c5dd3b08245697ae2e7d2a88be0d7c287129b)

NoiceBroice referenced this pull request in ThomasSnowden37/Harmoniq-Charts

Co-authored-by: Copilot <copilot@github.com>


Russia Poisons Wikipedia

www.bettedangerous.com

***Please take out a membership to support the light of truth.***

As AI chatbots continue to advance, Russia is infecting them with Kremlin-manipulated content tailored to influence the global internet, distorting the public’s understanding of facts and ability to make well-informed decisions.—Exposing Pravda: How pro-Kremlin forces are poisoning AI models and rewriting Wikipedia, Atlantic Council

Yesterday, I read a Wikipedia page for a book I’m about to review. I am still unsettled.

The page was stripped of reality, and in its place was a sanitized fairytale where Putin is good and the book — a brutal and damning historic account of Soviet abuses — is subtly and not so subtly undermined from every direction.

Once I got over the shock of what I had just read — it was like being forced into an alternate reality — I began investigating Russia’s relationship to Wikipedia. Perhaps not surprisingly, the Russian state has been steadily distorting truth, exploiting the platform’s crowd-sourcing architecture to influence public knowledge.

Malign Activity

In a report by the Institute for Strategic Dialogue titled Identifying Sock Puppets on Wikipedia, its authors used a ‘semantic clustering’ approach to focus on “the English-language Wikipedia entry for the Russo-Ukrainian war, and 48 other pages about Ukraine that link directly to it.”

The authors wrote:

Malign activity has targeted a number of information environments, including every major social media platform: Twitter, Facebook, YouTube, Instagram, TikTok, standalone websites and many others. This paper, however, is dedicated to possible platform manipulation on a venue that tends to be much less researched than mainstream social media: Wikipedia.

This report presents work that set out to create, trial and evaluate a method to try to detect covert and organised manipulation of Wikipedia at scale.

As I quickly learned, multiple reports have explored “organized manipulation” on Wikipedia’s entries on Russia’s invasion of Ukraine.

Portal Kombat

Between September and December 2023, the French defense agency Vigilance and Protection Service against Foreign Digital Interference (VIGINUM) analyzed “information portals” disseminating pro-Russian content and targeting several western countries, including France.

In the VIGINUM report, PORTAL KOMBAT: A structured and coordinated pro-Russian propaganda network, researchers investigated a network of 193 sites that initially covered “news from Russian and Ukrainian localities.”

According to the research, the coverage changed the day after Russia invaded Ukraine and began to target occupied Ukrainian territories and western countries supporting Ukraine and its population.

“The sites in this network produce no original content but massively relay publications from sources that are primarily three types: social media accounts of Russian or pro-Russian actors, Russian news agencies, and official websites of local institutions or actors.”

“The main objective seems to be to cover the Russo-Ukrainian conflict by presenting positively the ‘special military operation’ and denigrating Ukraine and its leaders. Very ideologically oriented, this content repeatedly presents inaccurate or misleading narratives. As for the portal targeting France, pravda-fr[.]com, it directly contributes to polarize the Francophone digital public debate.”

VIGINUM caught an insertion of the site pravda-fr[.]com being used as a source for a Wikipedia article about a “geopolitical situation” in the Red Sea.

In a footnote, they wrote: “The Wikipedia article titled ‘Operation Guardian of Prosperity’ created on December 22, 2023, was edited the next day by user ‘@Lataupefr,’ who inserted two articles from pravda-fr[.]com with sources being pro-Russian Telegram channels ‘@BrainlessChanel’ and ‘@kompromatmedia.’”

See modifications: https://fr.wikipedia.org/w/index.php?title=Opération_Gardien_de_la_prospérité&diff=prev&oldid=210810683

Foreign Digital Interference

The precise selection of these pro-Russian sources…proves there’s a real targeting effort to disseminate the strategic narratives… Given its technical characteristics, the processes implemented and the pursued purpose, this network constitutes foreign digital interference.—VIGINUM

Those words — foreign digital interference — are very important.

The West has neglected to fight on the battlefield that has been right in front of them the entire time — the internet.

This week, we learned JD Vance and Marjorie Taylor Greene promoted a fake story by the Russian disinformation network Storm-1516, which is linked to the GRU and believed to employ workers from the Internet Research Agency — the St. Petersburg operation that attacked American minds to help install Donald Trump in 2016, and whose output was promoted by members of Trump’s 2016 campaign, which I recap in this series:

The story Vance and Greene promoted was an obvious fake — a lie about yachts being purchased with military aid to Ukraine. It’s important to never forget that Vance is Peter Thiel’s replicant and together, they backed Rumble, which is a full-throated Russian propaganda network.

A decade after the 2016 US election, we are watching the escalation of information warfare as new tools are weaponized.

AI Models, Rewriting Wikipedia, and Laundering Content

As Atlantic Council reports in Exposing Pravda: How pro-Kremlin forces are poisoning AI models and rewriting Wikipedia:

“Russia has expanded, developed, and tailored an influence campaign targeting much of the world, spreading its content in Wikipedia articles and in popular artificial intelligence (AI) tools. As election campaigns in Romania and Moldova took place, or as political discussions between US President Donald Trump and Ukrainian President Volodymyr Zelenskyy unfolded, a network of inauthentic pro-Russian portals ramped up its activity, laundering content from sanctioned news outlets and aligning global information sources with the Kremlin narrative machine.”

The Atlantic Council, referencing the French report, notes that much of the fakes are coming from the Pravda network, which it calls “a collection of fraudulent news portals targeting more than eighty countries and regions throughout the world, launched by Russia in 2014. In 2024, the French disinformation watchdog Viginum reported on the operation, identifying the malicious activity of a Crimea-based IT business, findings that the Atlantic Council’s Digital Forensic Research Lab (DFRLab) later confirmed, which showed direct Russian involvement with the network.”

The Pravda network acts as an information laundromat, amplifying and saturating the news cycle with tropes emanating from Russian news outlets and Kremlin-aligned Telegram channels. During the 2024 “super-election year,” the network created websites specifically targeting NATO, as well as Trump, French President Emmanuel Macron, and other world leaders and politicians.—Exposing Pravda

The Atlantic Council report identifies this organized manipulation as global — “a Russian online influence operation that has taken root across the global internet.”

Think of this in terms of transnational organized crime, except instead of drugs, or human trafficking, or arms trafficking, we’re allowing unfriendly foreign powers to manipulate our collective reality — history, culture, our shared narrative.

The Atlantic Council also notes that Russia’s strategy, in a likely attempt to evade global sanctions on Russian news outlets, is now poisoning AI tools and Wikipedia: “By posing as authoritative sources on Wikipedia and reliable news outlets cited by popular large language models (LLMs), Russian tropes are rewriting the story of Russia’s war in Ukraine. The direct consequence is the exposure of Western audiences to content containing pro-Kremlin, anti-Ukrainian, and anti-Western messaging when using AI chatbots that rely on LLMs trained on material such as Wikipedia.

As AI chatbots continue to advance, Russia is infecting them with Kremlin-manipulated content tailored to influence the global internet, distorting the public’s understanding of facts and ability to make well-informed decisions. This operation opens the door to questions regarding the transparency of the training of AI models and the moderation of content emanating from known Russian-manipulated sources that have persistently divided the West on its support for Ukraine.”

It always comes back to Ukraine.

But it doesn’t stop with Ukraine.

Russia won’t stop until Russia is stopped.

Through these assaults, they are disarming what should be the only substantive resistance to their rebuilding the former Soviet bloc.

They have no right to dictate our will, and it’s pathetic that we’re letting them.

The Sum of All Human Knowledge

In a report titled Characterizing Knowledge Manipulation in a Russian Wikipedia Fork, the authors used a dataset of 1.9 million Russian Wikipedia articles and its “fork,” which they call an “organized effort to manipulate knowledge.”

As the world’s largest encyclopedia and the ninth most visited website globally, Wikipedia holds an influential position within the web ecosystem… maintained through a collaborative community effort to become ‘the sum of all human knowledge’ (Sutcliffe 2016).—Characterizing Knowledge Manipulation in a Russian Wikipedia Fork

Its authors note that “knowledge on Wikipedia has a major societal impact” and identify multiple authoritarian countries, such as China and Turkey, which simply block the platform altogether.

In a section of the report titled “Relevance,” researchers explain how “national identity and public opinion can be influenced by the information citizens are finding online about their history… Wikipedia was ranked the 6th most important source of information about history, passing museum visits, college courses, and social media (Burkholder and Schaffer 2021). Therefore, attempts to manipulate Wikipedia content, even if they happen in other platforms, could have a significant societal impact.”

They warn that Wikipedia content is frequently used for training Large Language Models (LLMs) and that “manipulated versions of Wikipedia used as training data for LLMs can encourage AI-powered systems that promote ideas with specific biases.”

Immediately after its debut, Elon Musk’s Grokipedia was exposed for pushing extremist ideology and publishing Russian propaganda.

Last year, Musk called for a boycott of Wikipedia and continues to call it Wokepedia, spreading his own propaganda. Trump’s regime has threatened to revoke the tax-exempt status of the non-profit, which turned 25 years old this year.

Trump’s alternate-reality lie factory, Truth Social, is a fun-house mirror of the name Pravda, which means ‘truth’ and ‘justice’ in Russian, and was the name of the official newspaper of the Central Committee of the Communist Party of the Soviet Union.

While Trump helps Putin rebuild the Soviet empire, I’ll be over here publishing a report on the book that took it down.

****

2016 Election Attack — The Book!

American Monsters — The Book — Buy Here!

Donations Welcome

****

Bette Dangerous is a reader-funded magazine. Thank you to all monthly, annual, and founding members.

I expose the corruption of billionaire fascists, while relying on memberships for support.

Thank you in advance for considering the following:

Upgrade to Paid Member

Upgrade to Founding Member

Gifting memberships

Share my reporting with allies

Buying my ebooks

Donating to the ko-fi fund or directly to venmo

Heidi’s Ko-Fi Fund

Heidi’s Venmo

A private link to an annual membership discount for older adults, those on fixed incomes or drawing disability, as well as activists and members of the media is available upon request at bettedangerous/gmail. 🥹

More info about Bette Dangerous - This magazine is written by Heidi Siegmund Cuda, an Emmy Award-winning investigative reporter/producer, author, and veteran music and nightlife columnist. She is the cohost of RADICALIZED Truth Survives, an investigative show about disinformation, and is part of the Byline Media team. Thank you for your support of independent investigative journalism.

🤍

Begin each day with a grateful heart.

🤍
