10 interesting stories served every morning and every evening.
24th April 2026
Chinese AI lab DeepSeek’s last model release was V3.2 (and V3.2 Speciale) last December. They just dropped the first of their hotly anticipated V4 series in the shape of two preview models, DeepSeek-V4-Pro and DeepSeek-V4-Flash.
Both models are Mixture of Experts designs with a 1 million token context. Pro is 1.6T total parameters, 49B active. Flash is 284B total, 13B active. They’re using the standard MIT license.
I think this makes DeepSeek-V4-Pro the new largest open weights model. It’s larger than Kimi K2.6 (1.1T) and GLM-5.1 (754B) and more than twice the size of DeepSeek V3.2 (685B).
Pro is 865GB on Hugging Face, Flash is 160GB. I’m hoping that a lightly quantized Flash will run on my 128GB M5 MacBook Pro. The Pro model might even run if I can stream just the necessary active experts from disk.
For the moment I tried the models out via OpenRouter, using llm-openrouter:
llm install llm-openrouter
llm openrouter refresh
llm -m openrouter/deepseek/deepseek-v4-pro 'Generate an SVG of a pelican riding a bicycle'
Here’s the pelican for DeepSeek-V4-Flash:
And for DeepSeek-V4-Pro:
For comparison, take a look at the pelicans I got from DeepSeek V3.2 in December, V3.1 in August, and V3-0324 in March 2025.
So the pelicans are pretty good, but what’s really notable here is the cost. DeepSeek V4 is a very, very inexpensive model.
This is DeepSeek’s pricing page. They’re charging $0.14/million tokens input and $0.28/million tokens output for Flash, and $1.74/million input and $3.48/million output for Pro.
Here’s a comparison table with the frontier models from Gemini, OpenAI and Anthropic:
DeepSeek-V4-Flash is the cheapest of the small models, beating even OpenAI’s GPT-5.4 Nano. DeepSeek-V4-Pro is the cheapest of the larger frontier models.
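To put those prices in perspective, here is a quick back-of-the-envelope cost calculation in Python, using the per-million-token rates quoted above (the token counts are an illustrative example, not a benchmark):

```python
# DeepSeek's quoted prices, in dollars per million tokens
PRICES = {
    "deepseek-v4-flash": {"input": 0.14, "output": 0.28},
    "deepseek-v4-pro": {"input": 1.74, "output": 3.48},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars for a single request at the quoted rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A full 1M-token prompt plus a 2,000 token response:
print(round(request_cost("deepseek-v4-flash", 1_000_000, 2_000), 4))  # 0.1406
print(round(request_cost("deepseek-v4-pro", 1_000_000, 2_000), 4))    # 1.747
```

So even a maxed-out context window costs well under two dollars on Pro, and about fourteen cents on Flash.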
This note from the DeepSeek paper helps explain why they can price these models so low—they’ve focused a great deal on efficiency with this release, especially for longer context prompts:
In the scenario of 1M-token context, even DeepSeek-V4-Pro, which has a larger number of activated parameters, attains only 27% of the single-token FLOPs (measured in equivalent FP8 FLOPs) and 10% of the KV cache size relative to DeepSeek-V3.2. Furthermore, DeepSeek-V4-Flash, with its smaller number of activated parameters, pushes efficiency even further: in the 1M-token context setting, it achieves only 10% of the single-token FLOPs and 7% of the KV cache size compared with DeepSeek-V3.2.
DeepSeek’s self-reported benchmarks in their paper show their Pro model competitive with those other frontier models, albeit with this note:
Through the expansion of reasoning tokens, DeepSeek-V4-Pro-Max demonstrates superior performance relative to GPT-5.2 and Gemini-3.0-Pro on standard reasoning benchmarks. Nevertheless, its performance falls marginally short of GPT-5.4 and Gemini-3.1-Pro, suggesting a developmental trajectory that trails state-of-the-art frontier models by approximately 3 to 6 months.
I’m keeping an eye on huggingface.co/unsloth/models as I expect the Unsloth team will have a set of quantized versions out pretty soon. It’s going to be very interesting to see how well that Flash model runs on my own machine.
Every great search must come to an end.
As IAC continues to sharpen its focus, we have made the decision to discontinue our search business, which includes Ask.com. After 25 years of answering the world’s questions, Ask.com officially closed on May 1, 2026.
“To the millions who asked…”
We are deeply grateful to the brilliant engineers, designers, and teams who built and supported Ask over the decades. And to you—the millions of users who turned to us for answers in a rapidly changing world—thank you for your endless curiosity, your loyalty, and your trust.
Jeeves’ spirit endures.
Abstract: As artificial intelligence (AI) tools become widely adopted, large language models (LLMs) are increasingly involved on both sides of decision-making processes, ranging from hiring to content moderation. This dual adoption raises a critical question: do LLMs systematically favor content that resembles their own outputs? Prior research in computer science has identified self-preference bias — the tendency of LLMs to favor their own generated content — but its real-world implications have not been empirically evaluated. We focus on the hiring context, where job applicants often rely on LLMs to refine resumes, while employers deploy them to screen those same resumes. Using a large-scale controlled resume correspondence experiment, we find that LLMs consistently prefer resumes generated by themselves over those written by humans or produced by alternative models, even when content quality is controlled. The bias against human-written resumes is particularly substantial, with self-preference bias ranging from 67% to 82% across major commercial and open-source models. To assess labor market impact, we simulate realistic hiring pipelines across 24 occupations. These simulations show that candidates using the same LLM as the evaluator are 23% to 60% more likely to be shortlisted than equally qualified applicants submitting human-written resumes, with the largest disadvantages observed in business-related fields such as sales and accounting. We further demonstrate that this bias can be reduced by more than 50% through simple interventions targeting LLMs’ self-recognition capabilities. These findings highlight an emerging but previously overlooked risk in AI-assisted decision making and call for expanded frameworks of AI fairness that address not only demographic-based disparities, but also biases in AI-AI interactions.
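As a toy illustration of the headline metric (this is not the paper’s code, and the sample data is invented), self-preference can be summarized as the fraction of pairwise judgments in which the evaluator model picks the resume generated by itself:

```python
def self_preference_rate(judgments):
    """judgments: (evaluator_model, author_of_chosen_resume) pairs.

    Returns the fraction of comparisons in which the evaluator
    chose a resume written by its own model."""
    chosen_own = sum(1 for evaluator, author in judgments if evaluator == author)
    return chosen_own / len(judgments)

# Hypothetical head-to-head judgments: one model judging its own
# output against human-written resumes
sample = [
    ("model-a", "model-a"),
    ("model-a", "model-a"),
    ("model-a", "human"),
    ("model-a", "model-a"),
]
print(self_preference_rate(sample))  # 0.75
```

The paper’s reported 67% to 82% figures are rates of this general shape, measured across quality-controlled resume pairs.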
Submission history
From: Jiannan Xu
[v1] Sat, 30 Aug 2025 11:40:11 UTC (3,032 KB)
[v2] Thu, 11 Sep 2025 16:59:36 UTC (3,032 KB)
[v3] Mon, 9 Feb 2026 13:24:26 UTC (5,723 KB)
The NetHack DevTeam is announcing the release of NetHack 5.0.0 on May 2, 2026.
NetHack 5.0 is an enhancement to the dungeon exploration game NetHack, which is a distant descendant of Rogue and Hack, and a direct descendant of NetHack 3.6.
As a .0 release, NetHack 5.0.0 may have some bugs. Constructive suggestions, GitHub pull requests, and bug reports are all welcome and encouraged.
Along with the game improvements and bug fixes, NetHack 5.0 strives to make
some general architectural improvements to the game or to its building
process. Among them, 5.0:
Has its source code compliant with the C99 standard.
Removes barriers to building NetHack on one platform and operating system,
for later execution on another (possibly quite different) platform and/or
operating system. That capability is generally known as “cross-compiling.”
See the file “Cross-compiling” in the top-level folder for more information
on that.
The build-time “yacc and lex”-based level compiler, the
“yacc and lex”-based dungeon compiler, and the quest text file processing
previously done by NetHack’s “makedefs” utility, have been replaced with
Lua text alternatives that are loaded and processed by the game during play.
A list of over 3100 fixes and changes can be found in the game’s sources
in the file doc/fixes5-0-0.txt. The text in there was written for the
development team’s own use and is provided “as is”. Some entries might be
considered “spoilers”, particularly in the “new features” section.
Existing saved games and bones files will not work with NetHack 5.0.0.
Checksums (sha256) of binaries that you have downloaded from nethack.org
can be verified on Windows platforms using:
certUtil -hashfile nethack-500-win-x64.zip SHA256
or
certUtil -hashfile nethack-500-win-arm64.zip SHA256
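On Linux the equivalent check is sha256sum, and on macOS shasum -a 256; the same verification can also be scripted in a few lines of Python (a sketch; the filename in the comment is assumed to match the downloads above):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the checksum published on nethack.org, e.g.:
# sha256_of("nethack-500-win-x64.zip")
```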
The following command can be used on most platforms to help confirm the location of
various files that NetHack may use:
nethack --showpaths
As with all releases of the game, we appreciate your feedback. Please submit any
bugs using the problem report form. Also, please check the “known bugs” list
before you log a problem - somebody else may have already found it.
Happy NetHacking!
Merged
Pull request overview
This PR changes the Git extension’s git.addAICoAuthor setting so that AI co-author trailers are enabled by default, making the default behavior automatically add a Co-authored-by trailer when AI-generated code contributions are detected.
Changes:
Updates the git.addAICoAuthor configuration default from "off" to "all".
***Please take out a membership to support the light of truth.***
As AI chatbots continue to advance, Russia is infecting them with Kremlin-manipulated content tailored to influence the global internet, distorting the public’s understanding of facts and ability to make well-informed decisions.—Exposing Pravda: How pro-Kremlin forces are poisoning AI models and rewriting Wikipedia, Atlantic Council
Yesterday, I read a Wikipedia page for a book I’m about to review. I am still unsettled.
The page was stripped of reality, and in its place was a sanitized fairytale where Putin is good and the book — a brutal and damning historic account of Soviet abuses — is subtly and not so subtly undermined from every direction.
Once I got over the shock of what I had just read — it was like being forced into an alternate reality — I began investigating Russia’s relationship to Wikipedia. Perhaps not surprisingly, the Russian state has been steadily distorting truth, exploiting the platform’s crowd-sourcing architecture to influence public knowledge.
Malign Activity
In a report by the Institute for Strategic Dialogue titled Identifying Sock Puppets on Wikipedia, its authors used a ‘semantic clustering’ approach to focus on the “English-language Wikipedia entry for the Russo-Ukrainian war, and 48 other pages about Ukraine that link directly to it.”
The authors wrote:
Malign activity has targeted a number of information environments, including every major social media platform: Twitter, Facebook, YouTube, Instagram, TikTok, standalone websites and many others. This paper, however, is dedicated to possible platform manipulation on a venue that tends to be much less researched than mainstream social media: Wikipedia.
This report presents work that set out to create, trial and evaluate a method to try to detect covert and organised manipulation of Wikipedia at scale.
As I quickly learned, multiple reports have explored “organized manipulation” on Wikipedia’s entries on Russia’s invasion of Ukraine.
Portal Kombat
Between September and December 2023, the French defense agency Vigilance and Protection Service against Foreign Digital Interference (VIGINUM) analyzed “information portals” disseminating pro-Russian content and targeting several western countries, including France.
In the VIGINUM report, PORTAL KOMBAT: A structured and coordinated pro-Russian propaganda network, researchers investigated a network of 193 sites that initially covered news from “Russian and Ukrainian localities.”
According to the research, the coverage changed the day after Russia invaded Ukraine and began to target occupied Ukrainian territories and western countries supporting Ukraine and its population.
The sites in this network produce “no original content but massively relay publications from sources that are primarily three types: social media accounts of Russian or pro-Russian actors, Russian news agencies, and official websites of local institutions or actors.”
“The main objective seems to be to cover the Russo-Ukrainian conflict by presenting positively ‘the special military operation’ and denigrating Ukraine and its leaders. Very ideologically oriented, this content repeatedly presents inaccurate or misleading narratives. As for the portal targeting France, pravda-fr[.]com, it directly contributes to polarize the Francophone digital public debate.”
VIGINUM caught an insertion of the site pravda-fr[.]com being used as a source for a Wikipedia article about a “geopolitical situation” in the Red Sea.
In a footnote, they wrote: “The Wikipedia article titled ‘Operation Guardian of Prosperity,’ created on December 22, 2023, was edited the next day by user ‘@Lataupefr,’ who inserted two articles from pravda-fr[.]com sourced from the pro-Russian Telegram channels ‘@BrainlessChanel’ and ‘@kompromatmedia.’”
See modifications: https://fr.wikipedia.org/w/index.php?title=Opération_Gardien_de_la_prospérité&diff=prev&oldid=210810683
Foreign Digital Interference
The precise selection of these pro-Russian sources…proves there’s a real targeting effort to disseminate the strategic narratives… Given its technical characteristics, the processes implemented and the pursued purpose, this network constitutes foreign digital interference.—VIGINUM
Those words — foreign digital interference — are very important.
The West has neglected to fight on the battlefield that has been right in front of them the entire time — the internet.
This week, we learned that JD Vance and Marjorie Taylor Greene promoted a fake story from Storm-1516, a Russian disinformation network linked to the GRU. Storm-1516 is believed to employ workers from the Internet Research Agency, the St. Petersburg operation that attacked American minds to help install Donald Trump in 2016, and whose output was promoted by members of Trump’s 2016 campaign, which I recap in this series:
The story Vance and Greene promoted was an obvious fake — a lie about yachts being purchased with military aid to Ukraine. It’s important to never forget that Vance is Peter Thiel’s replicant and together, they backed Rumble, which is a full-throated Russian propaganda network.
A decade after the 2016 US election, we are watching the escalation of information warfare as new tools are weaponized.
AI Models, Rewriting Wikipedia, and Laundering Content
As Atlantic Council reports in Exposing Pravda: How pro-Kremlin forces are poisoning AI models and rewriting Wikipedia:
“Russia has expanded, developed, and tailored an influence campaign targeting much of the world, spreading its content in Wikipedia articles and in popular artificial intelligence (AI) tools. As election campaigns in Romania and Moldova took place, or as political discussions between US President Donald Trump and Ukrainian President Volodymyr Zelenskyy unfolded, a network of inauthentic pro-Russian portals ramped up its activity, laundering content from sanctioned news outlets and aligning global information sources with the Kremlin narrative machine.”
The Atlantic Council, referencing the French report, notes that many of the fakes come from the Pravda network, which it calls a “collection of fraudulent news portals targeting more than eighty countries and regions throughout the world, launched by Russia in 2014. In 2024, the French disinformation watchdog Viginum reported on the operation, identifying the malicious activity of a Crimea-based IT business, findings that the Atlantic Council’s Digital Forensic Research Lab (DFRLab) later confirmed, which showed direct Russian involvement with the network.”
The Pravda network acts as an information laundromat, amplifying and saturating the news cycle with tropes emanating from Russian news outlets and Kremlin-aligned Telegram channels. During the 2024 “super-election year,” the network created websites specifically targeting NATO, as well as Trump, French President Emmanuel Macron, and other world leaders and politicians.—Exposing Pravda
The Atlantic Council report identifies this organized manipulation as global — “a Russian online influence operation that has taken root across the global internet.”
Think of this in terms of transnational organized crime, except instead of drugs, or human trafficking, or arms trafficking, we’re allowing unfriendly foreign powers to manipulate our collective reality — history, culture, our shared narrative.
The Atlantic Council also notes that Russia’s strategy, “in a likely attempt to evade global sanctions on Russian news outlets, is now poisoning AI tools and Wikipedia. By posing as authoritative sources on Wikipedia and reliable news outlets cited by popular large language models (LLMs), Russian tropes are rewriting the story of Russia’s war in Ukraine. The direct consequence is the exposure of Western audiences to content containing pro-Kremlin, anti-Ukrainian, and anti-Western messaging when using AI chatbots that rely on LLMs trained on material such as Wikipedia.
“As AI chatbots continue to advance, Russia is infecting them with Kremlin-manipulated content tailored to influence the global internet, distorting the public’s understanding of facts and ability to make well-informed decisions. This operation opens the door to questions regarding the transparency of the training of AI models and the moderation of content emanating from known Russian-manipulated sources that have persistently divided the West on its support for Ukraine.”
It always comes back to Ukraine.
But it doesn’t stop with Ukraine.
Russia won’t stop until Russia is stopped.
Through these assaults, they are disarming what should be the only substantive resistance to their rebuilding the former Soviet bloc.
They have no right to dictate our will, and it’s pathetic that we’re letting them.
The Sum of All Human Knowledge
In a report titled Characterizing Knowledge Manipulation in a Russian Wikipedia Fork, the authors analyzed a dataset of 1.9 million articles from Russian Wikipedia and its fork, which they describe as “an organized effort to manipulate knowledge.”
As the world’s largest encyclopedia and the ninth most visited website globally, Wikipedia holds an influential position within the web ecosystem… maintained through a collaborative community effort to become the ‘sum of all human knowledge’ (Sutcliffe 2016).—Characterizing Knowledge Manipulation in a Russian Wikipedia Fork
Its authors note that “knowledge on Wikipedia has a major societal impact” and identify multiple authoritarian countries, such as China and Turkey, which simply block the platform altogether.
In a section of the report titled “Relevance,” researchers explain how “national identity and public opinion can be influenced by the information citizens are finding online about their history… Wikipedia was ranked the 6th most important source of information about history, passing museum visits, college courses, and social media (Burkholder and Schaffer 2021). Therefore, attempts to manipulate Wikipedia content, even if they happen in other platforms, could have a significant societal impact.”
They warn that Wikipedia content is frequently used for training Large Language Models (LLMs) and that “manipulated versions of Wikipedia used as training data for LLMs can encourage AI-powered systems that promote ideas with specific biases.”
Immediately after its debut, Elon Musk’s Grokipedia was exposed for pushing extremist ideology and publishing Russian propaganda.
Last year, Musk called for a boycott of Wikipedia and continues to call it Wokepedia, spreading his own propaganda. Trump’s regime has threatened to revoke the tax-exempt status of the non-profit, which turned 25 years old this year.
Trump’s own alternate-reality lie factory, Truth Social, is a fun-house mirror of the name Pravda, which means ‘truth’ and ‘justice’ in Russian and was the name of the official newspaper of the Central Committee of the Communist Party of the Soviet Union.
While Trump helps Putin rebuild the Soviet empire, I’ll be over here publishing a report on the book that took it down.
****
2016 Election Attack — The Book!
American Monsters — The Book — Buy Here!
Donations Welcome
****
Bette Dangerous is a reader-funded magazine. Thank you to all monthly, annual, and founding members.
I expose the corruption of billionaire fascists, while relying on memberships for support.
Thank you in advance for considering the following:
Upgrade to Paid Member
Upgrade to Founding Member
Gifting memberships
Share my reporting with allies
Buying my ebooks
Donating to the ko-fi fund or directly to venmo
Heidi’s Ko-Fi Fund
Heidi’s Venmo
A private link to an annual membership discount for older adults, those on fixed incomes or drawing disability, as well as activists and members of the media is available upon request at bettedangerous/gmail. 🥹
More info about Bette Dangerous - This magazine is written by Heidi Siegmund Cuda, an Emmy Award-winning investigative reporter/producer, author, and veteran music and nightlife columnist. She is the cohost of RADICALIZED Truth Survives, an investigative show about disinformation, and is part of the Byline Media team. Thank you for your support of independent investigative journalism.
🤍
Begin each day with a grateful heart.
🤍
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.