10 interesting stories served every morning and every evening.

Enabling ai co author by default by cwebster-99 · Pull Request #310226 · microsoft/vscode

github.com


Merged


Pull request overview

This PR changes the Git extension’s git.addAICoAuthor setting so that AI co-author trailers are enabled by default: a Co-authored-by trailer is now added automatically when AI-generated code contributions are detected.

Changes:

Updates git.addAICoAuthor configuration default from “off” to “all”.

Copilot’s findings

Files reviewed: 1/1 changed files

Comments generated: 1

Screenshot Changes

Base: 3c1b53dd Current: eec3f9cf

Changed (3)

blocks-ci screenshots changed

Replace the contents of test/componentFixtures/blocks-ci-screenshots.md with:

<!-- auto-generated by CI - do not edit manually -->

#### editor/codeEditor/CodeEditor/Dark

![screenshot](https://hediet-screenshots.azurewebsites.net/images/cb32a3e854b5734fe5aaca2318f2e0a42ee821b05ea97883ea42c5ba95edb3c3)

#### editor/codeEditor/CodeEditor/Light

![screenshot](https://hediet-screenshots.azurewebsites.net/images/42624fbba5e0db7f32c224b5eb9c5dd3b08245697ae2e7d2a88be0d7c287129b)


VideoLAN / dav2d · GitLab

code.videolan.org

NetHack 5.0.0: Release Notes

nethack.org

The NetHack DevTeam is announcing the release of NetHack 5.0.0 on May 2, 2026. NetHack 5.0 is an enhancement to the dungeon exploration game NetHack, which is a distant descendant of Rogue and Hack, and a direct descendant of NetHack 3.6.

As a .0 version, some bugs may be encountered. Constructive suggestions, GitHub pull requests, and bug reports are all welcome and encouraged.

Along with the game improvements and bug fixes, NetHack 5.0 strives to make some general architectural improvements to the game or to its building process. Among them, 5.0:

Has its source code compliant with the C99 standard.

Removes barriers to building NetHack on one platform and operating system, for later execution on another (possibly quite different) platform and/or operating system. That capability is generally known as “cross-compiling.” See the file “Cross-compiling” in the top-level folder for more information on that.

The build-time “yacc and lex”-based level compiler, the “yacc and lex”-based dungeon compiler, and the quest text file processing previously done by NetHack’s “makedefs” utility have been replaced with Lua text alternatives that are loaded and processed by the game during play.

A list of over 3100 fixes and changes can be found in the game’s sources in the file doc/fixes5-0-0.txt. The text in there was written for the development team’s own use and is provided “as is”. Some entries might be considered “spoilers”, particularly in the “new features” section.

Existing saved games and bones files will not work with NetHack 5.0.0.

Checksums (sha256) of binaries that you have downloaded from nethack.org can be verified on Windows platforms using:

  certUtil -hashfile nethack-500-win-x64.zip SHA256

or

  certUtil -hashfile nethack-500-win-arm64.zip SHA256
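On other platforms, an equivalent of the certUtil check above can be sketched with Python's standard hashlib module; the file path is illustrative, and the expected digest would come from nethack.org:

```python
# Portable SHA-256 verification, equivalent in spirit to the certUtil
# commands above. The file path is illustrative.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large zips don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the checksum published on nethack.org, e.g.:
# print(sha256_of("nethack-500-win-x64.zip"))
```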

The following command can be used on most platforms to help confirm the location of various files that NetHack may use:

  nethack --showpaths

As with all releases of the game, we appreciate your feedback. Please submit any bugs using the problem report form. Also, please check the “known bugs” list before you log a problem - somebody else may have already found it.

Happy NetHacking!

DO_NOT_TRACK

donottrack.sh

Many CLI tools, SDKs, and frameworks collect telemetry data by default. Each one has its own way to opt out:

You get the idea. There are too many, and they are all different.

A single, standard environment variable that clearly and unambiguously expresses a user’s wish to opt out of any of the following:

Non-essential-to-functionality requests to the creator of the software or a third party

We just want local software.

Add export DO_NOT_TRACK=1 to your shell configuration file so it applies to all your terminal sessions.

If you develop tools that collect telemetry, analytics, or make non-essential network requests, please check for this variable:

If DO_NOT_TRACK is set to 1, disable all tracking

Consider making telemetry opt-in rather than opt-out
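A minimal sketch of that check in Python (the function name is illustrative, not part of any spec):

```python
# Honor the DO_NOT_TRACK convention before sending any telemetry.
import os

def telemetry_enabled() -> bool:
    """Return False when the user has opted out via DO_NOT_TRACK=1."""
    return os.environ.get("DO_NOT_TRACK") != "1"

if telemetry_enabled():
    pass  # ...send analytics here...
else:
    pass  # skip all non-essential network requests
```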

This Month in Ladybird - April 2026 - Ladybird

ladybird.org

Hello friends! In April we merged 333 PRs from 35 contributors, 7 of whom made their first-ever commit to Ladybird! Here’s what we’ve been up to.

Ladybird is entirely funded by the generous support of companies and individuals who believe in the open web. This month, we’re excited to welcome the following new sponsors:

Human Rights Foundation (via the “AI for Individual Rights” program) with $50,000

Jakub Stęplowski with $1,000

We’re incredibly grateful for their support. If you’re interested in sponsoring the project, please contact us.

Inline PDF viewer

PDFs now render inline through the bundled pdf.js viewer (#9132). pdf.js is a full-featured PDF viewer written entirely in JavaScript, HTML, and CSS, with page navigation, text selection, zoom, and find-in-document. Profiling pdf.js loading the Intel ISA Manual also drove improvements to our typed-array view cache and :has() invalidation.

Browsing history and rich address bar autocomplete

Type in the address bar and you now get rich, history-aware suggestions: previously visited pages with favicons and titles, a search-engine shortcut, and plain URL completions (#8933). Behind the scenes, a SQLite-backed HistoryStore persists every navigation along with its title, favicon, visit count, and last-visit time, and “Clear browsing history” is wired up in the Privacy settings page. Both the Qt and AppKit UIs render the new rich rows.

Speculative and incremental HTML parsing

The HTML parser now consumes the response body incrementally (#9151). Bytes flow through a streaming text decoder into the tokenizer one chunk at a time; the tokenizer pauses when it runs out of input and resumes when more arrives. This replaces a model where we waited for the full body before starting to parse.
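The pause-and-resume pattern can be illustrated with a toy coroutine tokenizer. This is a generic sketch, not Ladybird's actual code; the token rule here is just whitespace splitting:

```python
# Generic sketch of incremental tokenization: consume whatever bytes have
# arrived, pause when the buffer is exhausted, resume on the next chunk.
def tokenizer():
    buf = ""
    tokens = []
    while True:
        chunk = yield tokens          # pause until more input arrives
        tokens = []
        if chunk is None:             # end of stream
            return
        buf += chunk
        while " " in buf:             # toy token rule: whitespace-separated words
            tok, buf = buf.split(" ", 1)
            tokens.append(tok)

t = tokenizer()
next(t)                               # prime the coroutine
print(t.send("<p>hello "))            # ['<p>hello']
print(t.send("world</p> "))           # ['world</p>']
```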

We also implemented the speculative HTML parser (#9114). When the main parser blocks on a synchronous external script, a separate tokenizer scans ahead through the unparsed input and issues speculative fetches for the resources it finds: <script src>, <link rel=stylesheet|preload>, and <img src>. It tracks <base href> and skips into templates and foreign content correctly. A follow-up wired the speculative parser into the document’s preload map (#9164), so resources discovered speculatively get deduplicated against the regular parser’s later fetches instead of being requested twice.

Off-thread JavaScript compilation

Bytecode generation for fetched scripts’ top-level code now runs on a background thread pool (#9118). Worker threads produce the bytecode and the data needed to build an Executable, while everything that touches the VM or GC heap stays on the main thread. This covers classic scripts, modules, and top-level IIFEs, and shifts roughly 200ms of main-thread time onto background threads while loading YouTube alone.

Per-Navigable rasterization

Each Navigable now rasterizes independently on its own thread (#8793). Previously, iframes were painted synchronously as nested display lists inside their parent’s display list, which meant only the top-level traversable’s rendering thread was ever active. The parent’s display list now references each iframe’s rasterized output through an ExternalContentSource, so iframe invalidations no longer require re-recording the parent. Beyond the parallelism, this is prep work for moving iframes into separate sandboxed processes.

JavaScript engine

With the C++/Rust transition behind us, we spent April cashing in.

Faster JS-to-JS calls. A multi-part series (#8891, #8909, #8912) made Call, Return, and End instructions stay entirely in the AsmInt assembly interpreter for the common case, with hand-tuned ARM64 paired load/store (ldp/stp) for register save/restore. Native function calls also dispatch directly from AsmInt now, via a new RawNativeFunction variant that holds a plain function pointer instead of an AK::Function (#8922).

O(1) bytecode register allocator. Generator::allocate_register used to scan the free pool to find the lowest-numbered register. We were spending ~800ms in this function alone while loading x.com. With the C++/Rust pipeline parity period over, the allocator is now a plain LIFO stack (#9007).
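As a generic illustration of why a LIFO free list is O(1) (this is an illustrative sketch, not LibJS code; names are made up): freed registers are pushed onto a stack and the most recently freed one is popped first, with no scan for the lowest number.

```python
# Illustrative contrast with a scan-the-free-pool allocator: both allocate
# and free here are O(1) stack operations.
class LifoRegisterAllocator:
    def __init__(self):
        self.next_register = 0
        self.free = []            # plain LIFO stack of freed registers

    def allocate(self) -> int:
        if self.free:
            return self.free.pop()   # reuse the most recently freed register
        reg = self.next_register
        self.next_register += 1
        return reg

    def free_register(self, reg: int) -> None:
        self.free.append(reg)

a = LifoRegisterAllocator()
r0, r1 = a.allocate(), a.allocate()   # 0, 1
a.free_register(r0)
print(a.allocate())                   # 0 (reused, no scan needed)
```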

Cached for-in iteration. for (key in obj) sites now cache the flattened enumerable key snapshot and reuse it as long as the receiver’s shape, indexed storage, and prototype chain still match (#8856). Speedometer 2 went from 67.7 to 73.6, and Speedometer 3 from 4.11 to 4.22!

A grab-bag of other improvements:

The parser uses zero-copy identifier name sharing across the lexer, parser, and scope collector. On a corpus of website JS, parsing is 1.14x faster and uses 282 MB less RSS. (#8801)

Short string concatenations skip the rope representation when the result is going to be observed as a flat string anyway. 2.13x speedup on a tight a + b loop. (#9184)

Lexical-this arrow functions no longer allocate a function environment per call. Another 2.13x on a microbenchmark. (#9192)

Sparse arrays no longer pay an eager cost for their holes: Array(20_000_000) stays mostly metadata instead of doing work proportional to twenty million imaginary elements. (#8847)

A new lazy JS::Substring type backs regexp captures and string builtins like slice, split, and indexed access, gaining 1.066x on Octane’s regexp benchmark. (#8863)

Source positions are preserved end-to-end in bytecode source maps, saving ~250ms on x.com. (#9027)

Zero-copy TransferArrayBuffer saves ~130ms on YouTube load. (#9088)

Cached typed-array views switched from a WeakHashSet to an intrusive list, saving ~250ms loading the Intel ISA PDF in pdf.js. (#9180)

Every Promise allocated two PromiseResolvingFunction cells with AK::Function closures that didn’t actually capture anything. They’re now static functions dispatched by a Kind enum, dropping a per-resolver allocation across every promise the engine creates. (#9188)

Skipping property-table marking for non-dictionary shapes cut 1.3 seconds off GC time while loading maptiler.com. (#9044)

A fast path for Array.prototype.indexOf on packed arrays (#9123)

Array.prototype.sort reuses cached UTF-16 instead of re-transcoding on every comparison (#9036)

Imports for WASM, JSON, and CSS modules (#6029)

Removed ShadowRealm support, since the proposal has stalled in the standards process (#8753)

GTK4 / libadwaita frontend

Ladybird has a new Linux frontend built on GTK4 and libadwaita, sitting alongside the existing Qt frontend (#8691). It’s inspired by GNOME Web (Epiphany) and follows GNOME’s design guidelines: no menubar, a hamburger menu, and AdwTabView for tabs. Out of the box you get autocomplete and security icons in the URL bar, find-in-page, fullscreen, context menus, alert/confirm/prompt/color/file dialogs, clipboard, multi-window, light/dark theme, and DPR scaling. It’s still early, so not yet at feature parity with the Qt and AppKit frontends.

Bookmarks

Last month we got bookmarks. This month they got a proper management UI:

An about:bookmarks page for managing bookmarks and folders (#8825)

Bookmark import and export from the new page (#8938)

Context menus for editing bookmarks and folders (#8715)

A date_added timestamp on every bookmark and folder (#8867)

Bookmarks bar QoL: open in new tab, copy URL, middle-click and Ctrl/Cmd+click to open in new tab (#8758)

The HTML5 drag-and-drop API is now wired up (#8783). about:bookmarks uses it for reordering, and it works on regular web pages too.

Cache and CacheStorage

We implemented Cache and CacheStorage end to end, with all nine methods (open, has, delete, keys, match, matchAll, add, addAll, put) backed by an ephemeral in-memory store (#8745).

CSS features

image-set(): Basic support for the standard and -webkit- prefixed forms. At paint time we pick the candidate whose resolution best matches the device pixel ratio, skipping unsupported MIME types. This makes header images show up on gocomics.com. (#9090)

position-anchor and CSS anchor positioning: Initial support for anchor-positioned elements, fixing the hand and gun positioning on cssdoom.wtf. (#8686)

Color interpolation rewrite: Aligned with css-color-4. We now interpolate in float instead of u8, handle missing and powerless components correctly, deal with out-of-gamut sRGB, and apply alpha multipliers consistently. (#8934)


Presentational hints through the cascade: Legacy presentational HTML attributes (align, bgcolor, etc.) used to bypass the regular CSS cascade and write directly into the element’s cascaded properties. They now go through the cascade as normal author declarations, so var() substitution and the invalid-at-computed-value-time fallback work correctly. Fixes a crash on html.spec.whatwg.org. (#9176)


align on table sections and rows: <thead>, <tbody>, <tfoot>, and <tr> honor the align presentational attribute, fixing button placement on bricklink.com. (#9177)


stroke-dasharray interpolation: SVG dashes finally animate smoothly. (#9133)


autofocus: Elements with the autofocus attribute actually receive focus on page load now. (#9016)


List markers in RTL text: Bullets now sit on the right side of right-to-left text, fixing list rendering on Arabic Wikipedia. (#9099)


Inline flex/grid baselines: An inline flex or grid container now derives its baseline from its child’s first line box, not its last wrapped line. Fixes link text and icon alignment on nos.nl. (#9183)


Networking

getaddrinfo no longer blocks the event loop. LibDNS now runs lookups on a thread pool, fires A and AAAA queries in parallel (RFC 8305-ish), and coalesces concurrent lookups for the same name. RequestServer’s preconnect path was sneaking past our resolver and letting libcurl spawn its own threaded resolver that would pthread_join us on the main thread; that’s now routed through the same DNS pool. (#9109)

Profile of loading x.com when DNS is slow, before and after.
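The coalescing part (sharing one in-flight query among concurrent lookups for the same name) can be sketched with asyncio. This is an illustrative model with made-up names, not LibDNS's implementation:

```python
# Concurrent lookups for the same hostname share a single in-flight query.
import asyncio

class CoalescingResolver:
    def __init__(self, resolve_fn):
        self._resolve_fn = resolve_fn
        self._inflight: dict[str, asyncio.Task] = {}

    async def lookup(self, name: str):
        task = self._inflight.get(name)
        if task is None:
            task = asyncio.create_task(self._resolve_fn(name))
            self._inflight[name] = task
            # Drop the entry once resolved so later lookups query again.
            task.add_done_callback(lambda _: self._inflight.pop(name, None))
        return await task

calls = 0

async def fake_resolve(name):
    global calls
    calls += 1
    await asyncio.sleep(0.01)          # pretend network latency
    return "192.0.2.1"                 # documentation-range address

async def main():
    r = CoalescingResolver(fake_resolve)
    results = await asyncio.gather(*(r.lookup("example.org") for _ in range(5)))
    print(calls, results[0])           # one real query serves all five callers

asyncio.run(main())
```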

Over in RequestServer, draining queued response data was O(n²) when WebContent was slower than the network. RequestServer was spending ~30 seconds in memcpy and 3 seconds in Vector::remove while opening a YouTube video! Switching AllocatingMemoryStream to a singly-linked chunk list made consumption O(1). (#9028)
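The underlying change is the classic fix of dropping whole consumed chunks instead of repeatedly shifting the remaining bytes to the front of one big buffer. A generic sketch, not the actual AllocatingMemoryStream code:

```python
# Append-only writes, O(1) amortized reads: a fully consumed head chunk is
# dropped whole rather than memmove-ing the remainder after every read.
from collections import deque

class ChunkedStream:
    def __init__(self):
        self._chunks = deque()
        self._offset = 0              # read position within the head chunk

    def write(self, data: bytes) -> None:
        if data:
            self._chunks.append(data)

    def read(self, n: int) -> bytes:
        out = bytearray()
        while n > 0 and self._chunks:
            head = self._chunks[0]
            take = head[self._offset:self._offset + n]
            out += take
            n -= len(take)
            self._offset += len(take)
            if self._offset == len(head):   # head fully consumed: drop it
                self._chunks.popleft()
                self._offset = 0
        return bytes(out)

s = ChunkedStream()
s.write(b"hello ")
s.write(b"world")
print(s.read(8))    # b'hello wo'
print(s.read(8))    # b'rld'
```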

We now advertise AVIF and WebP in our Accept header for image requests, matching other engines. Some CDNs use the Accept header to decide whether to serve modern formats or fall back to JPEG. (#9046)

Style invalidation

Selector invalidation used to be straightforward: selectors always looked downward. :host ruined that. :has() made it way worse. Any descendant change can now force you to walk up the tree finding ancestors whose :has() arguments just flipped, and a lot of this month’s invalidation work is about making that walk less wasteful.

Four big wins this month:

Reddit rule cache rebuilds: 13.2s → 3.2s. Stylesheet mutations no longer rebuild every style scope’s cache when only one scope changed. (#9138)

Reddit infinite scroll: 11% fewer pointless recomputes. Sibling structural invalidation stopped fanning out to descendants that don’t observe the position. (#9155)

:has() mutation invalidation skips unaffected anchors, with substantial reductions measured on azure.com. (#9168)

:has() child-list visits on the Intel ISA PDF: 71k → 1.6k. Coalesced when pending data already covers every concrete feature bucket the scope cares about, saving ~650ms on the pdf.js load. (#9179)

A large new structural-invalidation test battery exposed and fixed several invalidation holes (#9095), and a string of smaller tightenings landed around hover, stylesheet mutation scope, custom-property maps, and computed-style diffing (#9077, #9049, #9079, #9080, #9141).

Linux GPU painting via dmabuf

On Linux Vulkan builds, GPU-backed painting was being secretly undone every frame: WebContent painted into a GPU-backed Skia surface, but the buffer it shared with the UI process was a CPU bitmap, which forced a full GPU-to-CPU readback on every flush. SharedImage can now carry a Linux dmabuf handle, so the front and back buffers stay GPU-resident the whole way to the UI process. (#8917, #8920)

mimalloc as the main allocator

Our C++ and Rust code now share a single allocator instance, mimalloc v2, instead of each going through the system allocator separately (#8752). We don’t override malloc() system-wide, so third-party libraries keep their own allocator contracts. JS benchmarks improved across the board.

Sites that work better

The biggest visible wins this month are on Reddit and YouTube.

Reddit image gallery carousels actually work now, after fixing two unrelated layout bugs around ::slotted() matching and absolutely positioned descendants of split inlines (#9148). And thanks to TextDecoderStream, the SPA stops swallowing link clicks, so you can finally open the comments! Infinite scroll also benefits from the structural-invalidation work covered above.

YouTube benefits from a stack of unrelated improvements: off-thread top-level JS compile, off-thread WOFF2 decompression (saves ~170ms on Gmail too, #8976), reduced @font-face fetch fanout (177 → ~9 fetches on initial load, #9032), the RequestServer memory churn fix, and zero-copy TransferArrayBuffer.

A handful of smaller fixes:

gocomics.com: Header images show up, thanks to image-set().

yandex.com/maps: Vector-tile WebGL rendering works after a small pile of WebGL fix-ups, including the WEBGL_debug_renderer_info extension (#9043).

strava.com: Login works now that Navigator.getBattery throws the spec-mandated error type instead of one of our own (#8770).

GitHub Insights: Loads ~100ms faster thanks to the Element.matches() and .closest() selector cache (#8987).

tweakers.net: The laptop comparison page is ~31% faster from indexed HTMLFormElement property name lookups (#9009).

neon.com: No longer crashes (#8812).

channel4.com: Vertically misaligned category text fixed in flex auto-margin resolution (#9050).

Cloudflare Turnstile: Still doesn’t pass, but we fail it much faster now thanks to auth-scheme handling, Array.prototype.shift() optimizations, and a pile of UA event handler hardening on <input> range and number elements (#9063).

Web Platform Tests (WPT)

Our WPT score went from 2,003,537 to 2,067,263 this month, a headline gain of 63,726 subtests. There’s an asterisk on that number: WPT imported test262, the official ECMAScript conformance suite, upstream this month, which added 53,207 JavaScript subtests to the count. We pass 52,045 of them (a 97.8% pass rate), since we’ve been running test262 independently for years and LibJS conformance is in great shape. So roughly 52k of the 63.7k gain is from the import, and the remaining ~11.7k is genuine new browser-platform progress, in the same ballpark as January’s 13,690.

AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights

arxiv.org


Abstract: As artificial intelligence (AI) tools become widely adopted, large language models (LLMs) are increasingly involved on both sides of decision-making processes, ranging from hiring to content moderation. This dual adoption raises a critical question: do LLMs systematically favor content that resembles their own outputs? Prior research in computer science has identified self-preference bias (the tendency of LLMs to favor their own generated content) but its real-world implications have not been empirically evaluated. We focus on the hiring context, where job applicants often rely on LLMs to refine resumes, while employers deploy them to screen those same resumes. Using a large-scale controlled resume correspondence experiment, we find that LLMs consistently prefer resumes generated by themselves over those written by humans or produced by alternative models, even when content quality is controlled. The bias against human-written resumes is particularly substantial, with self-preference bias ranging from 67% to 82% across major commercial and open-source models. To assess labor market impact, we simulate realistic hiring pipelines across 24 occupations. These simulations show that candidates using the same LLM as the evaluator are 23% to 60% more likely to be shortlisted than equally qualified applicants submitting human-written resumes, with the largest disadvantages observed in business-related fields such as sales and accounting. We further demonstrate that this bias can be reduced by more than 50% through simple interventions targeting LLMs’ self-recognition capabilities. These findings highlight an emerging but previously overlooked risk in AI-assisted decision making and call for expanded frameworks of AI fairness that address not only demographic-based disparities, but also biases in AI-AI interactions.


Submission history

From: Jiannan Xu

[v1] Sat, 30 Aug 2025 11:40:11 UTC (3,032 KB)

[v2] Thu, 11 Sep 2025 16:59:36 UTC (3,032 KB)

[v3] Mon, 9 Feb 2026 13:24:26 UTC (5,723 KB)

California to begin ticketing driverless cars that violate traffic laws

www.bbc.com

19 hours ago

Grace Eliza Goodwin


Driverless cars are becoming more common in some California cities, but when the autonomous vehicles violate traffic laws, police haven’t been able to ticket them - until now.

The state’s Department of Motor Vehicles (DMV) has announced new regulations on autonomous vehicles (AVs), including a process for police to issue a “notice of AV noncompliance” directly to the car’s manufacturer.

The new rules, which will go into effect 1 July, are part of a larger 2024 law that imposed deeper regulation on the technology.

There have been a number of reports of the cars breaking traffic laws, including during a San Francisco blackout last year.

The California DMV is calling the new rules the “most comprehensive AV regulations in the nation”.

Under the new rules, police can cite AV companies when their vehicles commit moving violations. The rules will also require the companies to respond to calls from police and other emergency officials within 30 seconds, and will issue penalties if their vehicles enter active emergency zones.

“California continues to lead the nation in the development and adoption of AV technology, and these updated regulations further demonstrate the state’s commitment to public safety,” DMV Director Steve Gordon said in a press release.

Waymo is one of the main operators of fully self-driving robotaxis in the San Francisco Bay Area and Los Angeles County, but several companies, including Tesla, also have permits to test their AVs in some California cities. The BBC has contacted Waymo and Tesla for comment.

When the vehicles violate traffic laws, some police have been stumped as to how to hold the driverless cars accountable.

In an incident last September, police officers in San Bruno - a city south of San Francisco - noticed a Waymo AV making an illegal U-turn at a light directly in front of them, the San Bruno Police Department said at the time. But when officers stopped the car, they were not able to issue a ticket without a driver to give it to. Instead, they contacted the company about the “glitch”.

San Francisco Fire Department officials have also repeatedly complained about robotaxis getting in the way of emergency responses.

Six Years Perfecting Maps on watchOS

www.david-smith.org

I love going on wilderness adventures. I am rarely happier than when I am far off into the mountains without a soul in sight. As a result, I have spent a lot of time learning how to safely explore and navigate when I’m away from civilization. The most important habit I’ve found for not getting lost is to be very regular in checking your location as you go, and the best way I’ve found to do that is to have a map on my wrist.

For more than six years I’ve been working towards creating the best possible mapping experience on the Apple Watch. With yesterday’s launch of Pedometer++ 8, I feel like this design journey has reached a meaningful destination. I would contend that Pedometer++’s watchOS mapping support is the absolute best available on the App Store.

So I wanted to walk through the journey it took to get here.

Early Efforts

I have wanted a good map on my wrist since the Apple Watch launched. This wasn’t realistically possible until watchOS 6, which brought SwiftUI to the platform and, for the first time, made “real” apps possible. But in those early days, the screens were tiny and the processors slow. I couldn’t quite get to where I wanted.

This was my very first attempt that shipped in Pedometer++. These maps were generated completely on the server, which involved sending the relevant workout data roundtrip every time I wanted to refresh the display. This system let me validate the idea, but it was never going to be practically useful for navigation or regular use, and could never work offline.

Custom Mapping Engine

I knew that if I wanted to make progress towards this goal, I’d need to work at a lower level, so I got to work building a fully SwiftUI-native map rendering engine. SwiftUI was the only choice because it’s all that watchOS supported, and it proved to be helpful for putting maps into widgets, which also only support SwiftUI.

In 2021, I got this engine to a place where I could reliably and performantly render a map on watchOS. With it, I can render any tile-based map and overlay location information on top.
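“Tile-based” refers to the standard slippy-map scheme, where the world is cut into 2^zoom × 2^zoom tiles. Here is a sketch of the coordinate math such an engine depends on; this is the standard Web Mercator formula, not Pedometer++’s code:

```python
# Convert WGS84 coordinates to slippy-map tile indices at a zoom level,
# using the standard Web Mercator tiling formula.
import math

def lat_lon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple:
    n = 2 ** zoom                                    # tiles per axis
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return (x, y)

print(lat_lon_to_tile(0.0, 0.0, 1))   # (1, 1): tile south-east of the origin
```

A renderer fetches the tile image for each (x, y, zoom) visible on screen and draws the user's track on top.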

Map Designs

Next came the question of how best to surface data to users. App design on watchOS is a really fun, but frustrating, challenge. You are designing for a relatively tiny screen, which must be operated one-handed. In this case, I want the user to be able to read the map and use it to navigate, while also having access to other workout-related information.

This began a long series of design attempts, most of which (if I’m being honest) were kinda awful.

In the end, I settled on a “modal” approach where the user can switch between a map screen and a metrics screen using a button in the top-left corner.

This interface provides one context where the user can freely pan/zoom around the map and another where I can use the more standard watchOS tabbed page interface for metrics and controls. I shipped this to Pedometer++, but there was always something that didn’t quite sit right with me about it.

This design felt like a compromise, and not in a good way. I felt that in order to make the map interactive, I couldn’t have it be part of any UI structure that involved swipes. And as Apple Watch screens got larger, that compromise felt less necessary for giving the map enough space to be useful.

So I set about trying alternative designs. SO many designs.

For a while, I thought that I needed to find a way to put the metrics at the bottom of the screen. However, that would lead to other problems on longer outings or for workouts that aren’t navigation-focused. So I kept iterating and came up with even more designs.

All of these designs suffered from the same fundamental issue: they required the app to display only a fixed set of fields at a time.

I could make the interface configurable, but one of the fundamental rules of watchOS design is that you should avoid any interaction that takes more than a few seconds on the watch. Any user-configurable setup is inherently fiddly, so I didn’t like this approach.

Dark Mode, Liquid Glass, & Cartography

Around the same time I was still wrestling with the design challenges of how best to structure the app, Apple announced watchOS 26 and the arrival of Liquid Glass. One of the core design aspects of Liquid Glass is layering: stacking elements on top of each other. Another is the types of colors that work best with each other.

I was pre­vi­ously us­ing Thunderforest Outdoors as my basemap for the app. I love the con­tent this map in­cludes, but when I started over­lay­ing glassy el­e­ments over it I found that it was­n’t well-suited for Liquid Glass.

So… I com­mis­sioned a cus­tom map. Working with the in­cred­i­ble car­tog­ra­pher Andy Allen, we cre­ated a com­pletely new basemap that would look fan­tas­tic with Liquid Glass.1

We sim­pli­fied the map vi­su­ally, in­creased the con­trast of the el­e­ments, and made the map el­e­ments more sat­u­rated to pre­vent them from be­com­ing a muddy mess when shown be­low glass.

With this work done, I had an­other op­por­tu­nity: I could fi­nally have a dark mode vari­ant of the map tiles. While help­ful on iOS, this re­ally shines on watchOS. Andy and I re­ally worked to­ward some­thing which would be in­cred­i­bly leg­i­ble at ar­m’s length.

The re­sult of these ef­forts is that now I have a great map for watchOS… but a de­sign that did­n’t match that great­ness.

Striving for Great

I kept try­ing. To get me out of my de­sign rut, I en­listed the help of the fan­tas­tic de­signer Rafa Conde. I needed a fresh set of eyes on this and very quickly, this part­ner­ship paid off. They pro­posed a va­ri­ety of al­ter­na­tive lay­outs, but when I saw this one I knew it was the one.

The layering of the metrics on the top-left corner, with the map being the top page of a vertical stack, was the correct answer. This design handles interactivity by requiring a tap on the map first to enter "browse mode".

Tweaking and Polishing

Now that I had the overall concept locked in, the real fun began: actually building the app and dialing in all the details. I fairly quickly took Rafa's concept and turned it into a working prototype. This let me validate the idea in the field… literally. After walking a few hundred miles with it, I was confident it was the correct approach.

Next, I needed to dial in the font and make more sub­tle de­sign choices.

After a bit more it­er­a­tion, I ar­rived at the de­sign that shipped yes­ter­day. It is leg­i­ble, use­ful, and (in my hum­ble opin­ion) beau­ti­ful.

It feels re­ally good to be able to cap off this six-year jour­ney with a de­sign I could­n’t be more proud of. This screen rep­re­sents so much ac­cu­mu­lated ef­fort and learn­ing. It fi­nally gives me a de­sign which feels na­tive on the plat­form, but also novel and unique.

Here is the evo­lu­tion of this de­sign over the last six years:

Postscript: Considering MapKit

While my work on watchOS map­ping mas­sively pre­dates the ar­rival of Apple’s MapKit onto the plat­form, it is prob­a­bly worth ex­plain­ing why I de­cided to do all of this cus­tom work to avoid us­ing it.

Fundamentally, I find that MapKit is great for ba­sic uses, but does­n’t pro­vide nearly the level of con­fig­ura­bil­ity and util­ity which I want Pedometer++ to of­fer. For ex­am­ple:

MapKit on watchOS always shows in dark mode, which is generally a good default but closes the door for accessibility and user-choice reasons. I needed the map style to be a user-selectable option.

While MapKit on watchOS has got­ten bet­ter over time in terms of what you can do with it, I still find it a bit lim­it­ing in terms of an­i­ma­tions and over­lays.

MapKit's coverage is improving with regard to topographic contours and trail marking, but there are far too many places where the MapKit map is essentially blank even though I know richer detail should be available. For example, here is my map vs MapKit at the trailhead of one of my favorite hikes in Scotland.

I still find it so cool that my work on this al­lows me to say that I commissioned a car­tog­ra­pher” to work on some­thing for me. 😁 ↩

An open-weights Chinese model just beat Claude, GPT-5.5, and Gemini in a programming challenge - ThinkPol

thinkpol.ca

By Rohana Rezel

I’m run­ning the on­go­ing AI Coding Contest where I pit ma­jor lan­guage mod­els against each other in real-time pro­gram­ming tasks with ob­jec­tive scor­ing. Day 12 was the Word Gem Puzzle. Ten mod­els en­tered. The re­sults were not what most peo­ple would have pre­dicted.

Kimi K2.6, an open-weights model from Chinese startup Moonshot AI, won the challenge outright: 22 match points, a 7-1-0 record. MiMo V2-Pro from Xiaomi came second. GPT-5.5 was third. Claude Opus 4.7 finished fifth. Every model from the Western frontier labs landed below the top two.

The chal­lenge

The Word Gem Puzzle is a slid­ing-tile let­ter puz­zle. The board is a rec­tan­gu­lar grid (10×10, 15×15, 20×20, 25×25, or 30×30) filled with let­ter tiles and one blank space. Bots can slide any ad­ja­cent tile into the blank and at any point claim valid English words formed in straight hor­i­zon­tal or ver­ti­cal lines. Diagonals don’t count. Backwards does­n’t count.

The scor­ing re­wards longer words and pun­ishes short ones. Words un­der seven let­ters cost points: a five-let­ter word loses you one point, a three-let­ter word costs three. Seven let­ters or more score their length mi­nus six, so an eight-let­ter word is worth two points. The same word can only be claimed once; if an­other bot gets there first, you get noth­ing. Each pair of mod­els played five rounds, one per grid size, with a ten-sec­ond wall-clock limit per round.
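The stated rules reduce to a single expression: in every case the article gives, a word scores its length minus six. A minimal sketch (the function name is mine, not from the contest code):

```python
def word_score(word: str) -> int:
    """Score for claiming `word` under the rules above: seven letters or
    more earn len(word) - 6 points; shorter words cost 6 - len(word).
    Both cases collapse to the single expression len(word) - 6."""
    return len(word) - 6

assert word_score("crossword") == 3   # 9 letters -> +3
assert word_score("slide") == -1      # 5 letters -> -1
assert word_score("gem") == -3        # 3 letters -> -3
```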

The grids are seeded with real dictionary words in a crossword-style layout, then the remaining cells are filled with letters weighted by Scrabble tile frequencies, and finally the board is scrambled, more aggressively on larger boards. On a 10×10, many seed words survive intact. On a 30×30, almost none do. That turns out to matter a lot.
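The filler step can be sketched using the standard Scrabble tile counts as sampling weights; whether the contest uses exactly these counts is an assumption, and `fill_cells` is a hypothetical name:

```python
import random

# Standard Scrabble tile counts (blanks excluded), used here as weights.
SCRABBLE_COUNTS = {
    'A': 9, 'B': 2, 'C': 2, 'D': 4, 'E': 12, 'F': 2, 'G': 3, 'H': 2,
    'I': 9, 'J': 1, 'K': 1, 'L': 4, 'M': 2, 'N': 6, 'O': 8, 'P': 2,
    'Q': 1, 'R': 6, 'S': 4, 'T': 6, 'U': 4, 'V': 2, 'W': 2, 'X': 1,
    'Y': 2, 'Z': 1,
}

def fill_cells(n: int, rng: random.Random = random) -> list[str]:
    """Draw n filler letters, weighted by Scrabble tile frequency,
    for the cells not occupied by seed words or the blank."""
    letters = list(SCRABBLE_COUNTS)
    weights = list(SCRABBLE_COUNTS.values())
    return rng.choices(letters, weights=weights, k=n)
```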

The code pro­duced by Nvidia’s Nemotron Super 3 con­tained a syn­tax er­ror, so it never con­nected to the game server. Nine mod­els ac­tu­ally com­peted.

Kimi K2.6 is open-weights, publicly available from Moonshot AI, a Chinese startup founded in 2023. MiMo V2-Pro is currently API-only; Xiaomi has confirmed that weights for their newer V2.5 Pro model are dropping soon (https://x.com/XiaomiMiMo/status/2047840164777726076). The models from Anthropic, OpenAI, Google, and xAI placed third through seventh. GLM 5.1, from Chinese lab Zhipu AI, placed fourth. DeepSeek finished eighth. This isn't a clean China-beats-West story; it's two specific models that won.

What I saw

The move logs tell the story. Kimi won by slid­ing ag­gres­sively. Its ap­proach was greedy: score each pos­si­ble move by what new pos­i­tive-value words it un­locks, ex­e­cute the best one, re­peat. When no move un­locked a pos­i­tive word, it fell back to the first le­gal di­rec­tion al­pha­bet­i­cally. This caused some in­ef­fi­cient edge-os­cil­la­tion, a 2-cycle pat­tern where the bot bounced the blank back and forth with­out progress. On smaller grids where seed words were still largely in­tact, that hurt. On the 30×30 grids, where the scram­ble had bro­ken up nearly every­thing and re­con­struc­tion was the only path to points, the sheer slide vol­ume even­tu­ally paid off. Kimi’s cu­mu­la­tive score of 77 was the high­est in the tour­na­ment.
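The greedy loop described above can be sketched as follows; this is a reconstruction from the description, not Kimi's actual code, and `words_unlocked_by` is a hypothetical stand-in for whatever word detection the bot used:

```python
def choose_move(board, legal_moves, words_unlocked_by):
    """Greedy step in the style described: rate each legal slide by the
    total value of new positive-scoring words it would unlock, take the
    best, and otherwise fall back to the first legal direction in
    alphabetical order (the source of the observed edge-oscillation)."""
    best_move, best_gain = None, 0
    for move in legal_moves:
        # Only words of seven letters or more score positive points.
        gain = sum(len(w) - 6
                   for w in words_unlocked_by(board, move)
                   if len(w) >= 7)
        if gain > best_gain:
            best_move, best_gain = move, gain
    if best_move is None:
        # No move unlocks a positive word: first legal direction alphabetically.
        best_move = sorted(legal_moves)[0]
    return best_move
```

Note the failure mode the logs showed: when no slide ever unlocks a positive word, the alphabetical fallback can bounce the blank between the same two cells indefinitely.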

MiMo's sliding code exists in the repo, but its "best value greater than zero" threshold never triggered, so in practice it never slid once. It went straight to scanning the initial grid for words of seven letters or more and blasted all its claims in a single TCP packet. Brittle strategy: entirely dependent on the scramble leaving intact seed words. On grids where words survived, MiMo cleaned up fast. On grids where they didn't, it scored nothing. Final tally: 43 cumulative points, second place.

Claude also did­n’t slide. The move logs show it hold­ing up well on 25×25 boards where scram­ble den­sity was still man­age­able, then falling apart on 30×30 where ac­tual tile move­ment was needed. Not slid­ing is a real lim­i­ta­tion in a puz­zle built around slid­ing.

GPT-5.5 was more con­ser­v­a­tive, roughly 120 slides per round with a cap to avoid thrash­ing, and showed the strongest num­bers on 15×15 and 30×30 grids. Grok never slid ei­ther, yet scored rea­son­ably on the larger boards. GLM was the most ag­gres­sive slider in the whole tour­na­ment, over 800,000 to­tal slides, but stalled badly when­ever it ran out of pos­i­tive moves.

DeepSeek sent mal­formed data every round. Zero use­ful out­put. At least it did­n’t make things worse by play­ing.

Muse made things worse by play­ing.

The scoring penalizes short words: three-letter words cost three points, four-letter words cost two, five-letter words cost one. The intent is to stop bots from carpet-bombing the board with "the" and "and" and "it." Every serious competitor filtered their dictionary to words of seven letters or more. Muse claimed everything. Every word it could find, regardless of length, was fired off as a claim. On a 30×30 grid with hundreds of short valid words visible at any moment, Muse found them all and claimed every one.
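The filter every serious competitor applied, and Muse skipped, is a single comprehension. A toy sketch (the word list here is illustrative; real bots loaded a full English dictionary):

```python
# Illustrative dictionary; a real bot would load a full English word list.
DICTIONARY = {"the", "and", "it", "gem", "puzzle", "letters", "claiming"}

# Keep only words that can score positive points (seven letters or more);
# claiming anything shorter loses points under these rules.
CLAIMABLE = {w for w in DICTIONARY if len(w) >= 7}

assert "claiming" in CLAIMABLE   # 8 letters: worth +2
assert "the" not in CLAIMABLE    # 3 letters: would cost 3 points
```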

Its cu­mu­la­tive score was −15,309. It lost all eight matches and won zero rounds. There is a ver­sion of Muse that sim­ply con­nected to the server and did noth­ing, and that ver­sion would have scored zero, a 15,309-point im­prove­ment. The gap be­tween Muse and eighth place was larger than the gap be­tween eighth and first.

DeepSeek’s mal­formed out­put tells you some­thing about how it han­dles novel pro­to­col specs un­der time pres­sure. Muse’s spi­ral tells you some­thing dif­fer­ent: it saw valid words and claimed them, with no ap­par­ent model of what valid” meant given the scor­ing rules. It read the task par­tially and ex­e­cuted that par­tial read­ing in full. Worth not­ing for any­one de­ploy­ing these mod­els on struc­tured tasks with penal­ties.

What sur­prised me

I de­sign these chal­lenges, so I have a rea­son­able sense of what they test. What I did­n’t fully an­tic­i­pate was how starkly the 30×30 grids would sep­a­rate the field. On smaller boards, the dif­fer­ence be­tween a sta­tic scan­ner and an ac­tive slider was mod­est. At full scale, mod­els that could only find what was al­ready there ran out of road. Kimi’s greedy loop, flawed as it was, kept pro­duc­ing out­put when the sta­tic scan­ners had noth­ing left to claim.

The other thing worth not­ing: MiMo and Kimi fin­ished two points apart de­spite do­ing al­most op­po­site things. Two dif­fer­ent the­o­ries of the same puz­zle, nearly iden­ti­cal re­sults. That means the gap be­tween first and sec­ond was partly seed vari­ance, not just ca­pa­bil­ity dif­fer­ence.

The big­ger pic­ture

One fair coun­ter­ar­gu­ment: this scor­ing sys­tem re­wards ag­gres­sive word claim­ing, and heav­ily safety-tuned mod­els may be more con­ser­v­a­tive about that kind of car­pet-bomb­ing. If so, the re­sults re­flect a mis­match be­tween task de­sign and aligned model be­hav­iour, not raw ca­pa­bil­ity. It’s a rea­son­able ob­jec­tion. It does­n’t change the out­come.

One chal­lenge does­n’t over­turn gen­eral bench­marks. This puz­zle tests real-time de­ci­sion-mak­ing and whether a model can write clean func­tional code that con­nects to a TCP server and plays a novel game cor­rectly. It does­n’t test long-con­text rea­son­ing or code gen­er­a­tion from a spec.

But I’ve been run­ning these chal­lenges long enough to no­tice what’s chang­ing. A year ago, the as­sump­tion was that the Western fron­tier labs had a ca­pa­bil­ity lead open-weights could­n’t close. Kimi K2.6 now scores 54 on the Artificial Analysis Intelligence Index. GPT-5.5 scores 60, Claude 57. That’s not par­ity, but it’s close, and it’s com­ing from a model any­one can down­load.

When mod­els within a few in­dex points of the fron­tier are also freely avail­able to run lo­cally, that’s a dif­fer­ent com­pet­i­tive sit­u­a­tion than the one that ex­isted a year ago. This chal­lenge is one data point in that shift. The gap is small enough now that it shows up in re­sults like this one.

Rohana Rezel runs the AI Coding Contest and is a tech­nol­o­gist, re­searcher, and com­mu­nity leader based in Vancouver, BC.

Russia Poisons Wikipedia

www.bettedangerous.com

***Please take out a mem­ber­ship to sup­port the light of truth.***

As AI chat­bots con­tinue to ad­vance, Russia is in­fect­ing them with Kremlin-manipulated con­tent tai­lored to in­flu­ence the global in­ter­net, dis­tort­ing the pub­lic’s un­der­stand­ing of facts and abil­ity to make well-in­formed de­ci­sions.—Ex­pos­ing Pravda: How pro-Krem­lin forces are poi­son­ing AI mod­els and rewrit­ing Wikipedia, Atlantic Council

Yesterday, I read a Wikipedia page for a book I’m about to re­view. I am still un­set­tled.

The page was stripped of re­al­ity, and in its place was a san­i­tized fairy­tale where Putin is good and the book — a bru­tal and damn­ing his­toric ac­count of Soviet abuses — is sub­tly and not so sub­tly un­der­mined from every di­rec­tion.

Once I got over the shock of what I had just read — it was like be­ing forced into an al­ter­nate re­al­ity — I be­gan in­ves­ti­gat­ing Russia’s re­la­tion­ship to Wikipedia. Perhaps not sur­pris­ingly, the Russian state has been steadily dis­tort­ing truth, ex­ploit­ing the plat­for­m’s crowd-sourc­ing ar­chi­tec­ture to in­flu­ence pub­lic knowl­edge.

Malign Activity

In a report by the Institute for Strategic Dialogue titled Identifying Sock Puppets on Wikipedia, its authors used a "semantic clustering" approach to focus on "the English-language Wikipedia entry for the Russo-Ukrainian war, and 48 other pages about Ukraine that link directly to it."

The au­thors wrote:


Malign ac­tiv­ity has tar­geted a num­ber of in­for­ma­tion en­vi­ron­ments, in­clud­ing every ma­jor so­cial me­dia plat­form: Twitter, Facebook, YouTube, Instagram, TikTok, stand­alone web­sites and many oth­ers. This pa­per, how­ever, is ded­i­cated to pos­si­ble plat­form ma­nip­u­la­tion on a venue that tends to be much less re­searched than main­stream so­cial me­dia: Wikipedia.

This re­port pre­sents work that set out to cre­ate, trial and eval­u­ate a method to try to de­tect covert and or­gan­ised ma­nip­u­la­tion of Wikipedia at scale.

As I quickly learned, multiple reports have explored "organized manipulation" on Wikipedia's entries on Russia's invasion of Ukraine.

Portal Kombat

Between September and December 2023, the French defense agency Vigilance and Protection Service against Foreign Digital Interference (VIGINUM) analyzed "information portals" disseminating pro-Russian content and targeting several western countries, including France.

In the VIGINUM report, PORTAL KOMBAT: A structured and coordinated pro-Russian propaganda network, researchers investigated a network of 193 sites that "initially covered news from Russian and Ukrainian localities."

According to the re­search, the cov­er­age changed the day af­ter Russia in­vaded Ukraine and be­gan to tar­get oc­cu­pied Ukrainian ter­ri­to­ries and west­ern coun­tries sup­port­ing Ukraine and its pop­u­la­tion.

"The sites in this network produce no original content but massively relay publications from sources that are primarily three types: social media accounts of Russian or pro-Russian actors, Russian news agencies, and official websites of local institutions or actors."

"The main objective seems to be to cover the Russo-Ukrainian conflict by presenting positively the 'special military operation' and denigrating Ukraine and its leaders. Very ideologically oriented, this content repeatedly presents inaccurate or misleading narratives. As for the portal targeting France, pravda-fr[.]com, it directly contributes to polarize the Francophone digital public debate."


VIGINUM caught an insertion of the site pravda-fr[.]com being used as a source for a Wikipedia article about a "geopolitical situation" in the Red Sea.

In a footnote, they wrote: "The Wikipedia article titled 'Operation Guardian of Prosperity,' created on December 22, 2023, was edited the next day by user '@Lataupefr,' who inserted two articles from pravda-fr[.]com with sources being pro-Russian Telegram channels '@BrainlessChanel' and '@kompromatmedia.'"

See mod­i­fi­ca­tions: https://​fr.wikipedia.org/​w/​in­dex.php?ti­tle=Opéra­tion_­Gar­di­en_de_la_prospérité&diff=prev&ol­did=210810683

Foreign Digital Interference

The pre­cise se­lec­tion of these pro-Russ­ian sources…proves there’s a real tar­get­ing ef­fort to dis­sem­i­nate the strate­gic nar­ra­tives… Given its tech­ni­cal char­ac­ter­is­tics, the processes im­ple­mented and the pur­sued pur­pose, this net­work con­sti­tutes for­eign dig­i­tal in­ter­fer­ence.—VIG­INUM

Those words — for­eign dig­i­tal in­ter­fer­ence — are very im­por­tant.

The West has ne­glected to fight on the bat­tle­field that has been right in front of them the en­tire time — the in­ter­net.

This week, we learned JD Vance and Marjorie Taylor Greene pro­moted a fake story by the Russian dis­in­for­ma­tion net­work — Storm-1516 — which is linked to the GRU and be­lieved to em­ploy work­ers from the Internet Research Agency, the St. Petersburg op­er­a­tion that at­tacked American minds to help in­stall Donald Trump in 2016 and whose out­put was pro­moted by mem­bers of Trump’s 2016 cam­paign, which I re­cap in this se­ries:

The story Vance and Greene pro­moted was an ob­vi­ous fake — a lie about yachts be­ing pur­chased with mil­i­tary aid to Ukraine. It’s im­por­tant to never for­get that Vance is Peter Thiel’s repli­cant and to­gether, they backed Rumble, which is a full-throated Russian pro­pa­ganda net­work.

A decade af­ter the 2016 US elec­tion, we are watch­ing the es­ca­la­tion of in­for­ma­tion war­fare as new tools are weaponized.

AI Models, Rewriting Wikipedia, and Laundering Content

As Atlantic Council re­ports in Exposing Pravda: How pro-Krem­lin forces are poi­son­ing AI mod­els and rewrit­ing Wikipedia:

Russia has ex­panded, de­vel­oped, and tai­lored an in­flu­ence cam­paign tar­get­ing much of the world, spread­ing its con­tent in Wikipedia ar­ti­cles and in pop­u­lar ar­ti­fi­cial in­tel­li­gence (AI) tools. As elec­tion cam­paigns in Romania and Moldova took place, or as po­lit­i­cal dis­cus­sions be­tween US President Donald Trump and Ukrainian President Volodymyr Zelenskyy un­folded, a net­work of in­au­then­tic pro-Russ­ian por­tals ramped up its ac­tiv­ity, laun­der­ing con­tent from sanc­tioned news out­lets and align­ing global in­for­ma­tion sources with the Kremlin nar­ra­tive ma­chine.”


The Atlantic Council, referencing the French report, notes that much of the fake content comes from the Pravda network, which it calls "a collection of fraudulent news portals targeting more than eighty countries and regions throughout the world, launched by Russia in 2014. In 2024, the French disinformation watchdog Viginum reported on the operation, identifying the malicious activity of a Crimea-based IT business, findings that the Atlantic Council's Digital Forensic Research Lab (DFRLab) later confirmed, which showed direct Russian involvement with the network."

The Pravda network acts as an information laundromat, amplifying and saturating the news cycle with tropes emanating from Russian news outlets and Kremlin-aligned Telegram channels. During the 2024 "super-election year," the network created websites specifically targeting NATO, as well as Trump, French President Emmanuel Macron, and other world leaders and politicians.—Exposing Pravda

The Atlantic Council report identifies this organized manipulation as global: "a Russian online influence operation that has taken root across the global internet."

Think of this in terms of transna­tional or­ga­nized crime, ex­cept in­stead of drugs, or hu­man traf­fick­ing, or arms traf­fick­ing, we’re al­low­ing un­friendly for­eign pow­ers to ma­nip­u­late our col­lec­tive re­al­ity — his­tory, cul­ture, our shared nar­ra­tive.

The Atlantic Council also notes that Russia's strategy, in a likely attempt to evade global sanctions on Russian news outlets, is now poisoning AI tools and Wikipedia: "By posing as authoritative sources on Wikipedia and reliable news outlets cited by popular large language models (LLMs), Russian tropes are rewriting the story of Russia's war in Ukraine. The direct consequence is the exposure of Western audiences to content containing pro-Kremlin, anti-Ukrainian, and anti-Western messaging when using AI chatbots that rely on LLMs trained on material such as Wikipedia.

"As AI chatbots continue to advance, Russia is infecting them with Kremlin-manipulated content tailored to influence the global internet, distorting the public's understanding of facts and ability to make well-informed decisions. This operation opens the door to questions regarding the transparency of the training of AI models and the moderation of content emanating from known Russian-manipulated sources that have persistently divided the West on its support for Ukraine."

It al­ways comes back to Ukraine.

But it does­n’t stop with Ukraine.

Russia won’t stop un­til Russia is stopped.

Through these as­saults, they are dis­arm­ing what should be the only sub­stan­tive re­sis­tance to their re­build­ing the for­mer Soviet bloc.

They have no right to dic­tate our will, and it’s pa­thetic that we’re let­ting them.

The Sum of All Human Knowledge

In a report titled Characterizing Knowledge Manipulation in a Russian Wikipedia Fork, the authors used a dataset of 1.9 million Russian Wikipedia articles and its "fork," which they call "an organized effort to manipulate knowledge."

As the world's largest encyclopedia and the ninth most visited website globally, Wikipedia holds an influential position within the web ecosystem… maintained through a collaborative community effort to become "the sum of all human knowledge" (Sutcliffe 2016).—Characterizing Knowledge Manipulation in a Russian Wikipedia Fork

Its authors note that "knowledge on Wikipedia has a major societal impact" and identify multiple authoritarian countries, such as China and Turkey, which simply block the platform altogether.

In a section of the report titled "Relevance," researchers explain how "national identity and public opinion can be influenced by the information citizens are finding online about their history… Wikipedia was ranked the 6th most important source of information about history, passing museum visits, college courses, and social media (Burkholder and Schaffer 2021). Therefore, attempts to manipulate Wikipedia content, even if they happen in other platforms, could have a significant societal impact."

They warn that Wikipedia content is frequently used for training Large Language Models (LLMs) and that "manipulated versions of Wikipedia used as training data for LLMs can encourage AI-powered systems that promote ideas with specific biases."

Immediately af­ter its de­but, Elon Musk’s Grokipedia was ex­posed for push­ing ex­trem­ist ide­ol­ogy and pub­lish­ing Russian pro­pa­ganda.

Last year, Musk called for a boycott of Wikipedia, and he continues to call it "Wokepedia," spreading his own propaganda. Trump's regime has threatened to revoke the tax-exempt status of the non-profit, which turned 25 years old this year.

Trump's alternate-reality lie factory, Truth Social, is a fun-house mirror of the name Pravda, which means "truth" and "justice" in Russian and was the name of the official newspaper of the Central Committee of the Communist Party of the Soviet Union.

While Trump helps Putin re­build the Soviet em­pire, I’ll be over here pub­lish­ing a re­port on the book that took it down.

****

2016 Election Attack — The Book!

American Monsters — The Book — Buy Here!

Donations Welcome

****

Bette Dangerous is a reader-funded mag­a­zine. Thank you to all monthly, an­nual, and found­ing mem­bers.

I ex­pose the cor­rup­tion of bil­lion­aire fas­cists, while re­ly­ing on mem­ber­ships for sup­port.

Thank you in ad­vance for con­sid­er­ing the fol­low­ing:

Upgrade to Paid Member

Upgrade to Founding Member

Gifting memberships

Share my reporting with allies

Buying my ebooks

Donating to the ko-fi fund or directly to venmo

Heidi's Ko-Fi Fund

Heidi's Venmo

A pri­vate link to an an­nual mem­ber­ship dis­count for older adults, those on fixed in­comes or draw­ing dis­abil­ity, as well as ac­tivists and mem­bers of the me­dia is avail­able upon re­quest at bet­tedan­ger­ous/​gmail. 🥹

More info about Bette Dangerous - This mag­a­zine is writ­ten by Heidi Siegmund Cuda, an Emmy-award win­ning in­ves­tiga­tive re­porter/​pro­ducer, au­thor, and vet­eran mu­sic and nightlife colum­nist. She is the co­host of RADICALIZED Truth Survives, an in­ves­tiga­tive show about dis­in­for­ma­tion and is part of the Byline Media team. Thank you for your sup­port of in­de­pen­dent in­ves­tiga­tive jour­nal­ism.

🤍

Begin each day with a grate­ful heart.

🤍

10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.