
Enabling ai co author by default by cwebster-99 · Pull Request #310226 · microsoft/vscode

github.com


Merged

Pull request overview

This PR changes the Git extension's git.addAICoAuthor setting so that AI co-author trailers are enabled by default: a Co-authored-by trailer is now added automatically whenever AI-generated code contributions are detected.

Changes:

Updates the git.addAICoAuthor configuration default from "off" to "all".
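Users who prefer the previous behaviour can restore it per machine. A minimal settings.json sketch, assuming the setting keeps the string values named in the PR ("off" and "all"):

```json
{
  // Revert to the pre-PR default: never add an AI Co-authored-by trailer
  "git.addAICoAuthor": "off"
}
```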

Copilot's findings

Files reviewed: 1/1 changed files

Comments generated: 1

Screenshot Changes

Base: 3c1b53dd Current: eec3f9cf

Changed (3)

blocks-ci screenshots changed

Replace the contents of test/componentFixtures/blocks-ci-screenshots.md with:

<!-- auto-generated by CI - do not edit manually -->

#### editor/codeEditor/CodeEditor/Dark ![screenshot](https://hediet-screenshots.azurewebsites.net/images/cb32a3e854b5734fe5aaca2318f2e0a42ee821b05ea97883ea42c5ba95edb3c3)

#### editor/codeEditor/CodeEditor/Light ![screenshot](https://hediet-screenshots.azurewebsites.net/images/42624fbba5e0db7f32c224b5eb9c5dd3b08245697ae2e7d2a88be0d7c287129b)


microsoft locked this pull request as spam and limited conversation to collaborators


Google Chrome silently installs a 4 GB AI model on your device without consent. At a billion-device scale the climate costs are insane.

www.thatprivacyguy.com

Two weeks ago I wrote about Anthropic silently registering a Native Messaging bridge in seven Chromium-based browsers on every machine where Claude Desktop was installed [1]. The pattern was: install on user launch of product A, write configuration into the user's installs of products B, C, D, E, F, G, H without asking. Reach across vendor trust boundaries. No consent dialog. No opt-out UI. The bridge re-installs itself if the user removes it manually, every time Claude Desktop is launched.

This week I discovered the same pattern, executed by Google. Google Chrome is reaching into users' machines and writing a 4 GB on-device AI model file to disk without asking. The file is named weights.bin. It lives in OptGuideOnDeviceModel. It is the weights for Gemini Nano, Google's on-device LLM. Chrome did not ask. Chrome does not surface it. If the user deletes it, Chrome re-downloads it.

The legal analysis is the same one I gave for the Anthropic case. The environmental analysis is new. At Chrome's scale, the climate bill for one model push, paid in atmospheric CO2 by the entire planet, is between six thousand and sixty thousand tonnes of CO2-equivalent emissions, depending on how many devices receive the push. That is the environmental cost of one company unilaterally deciding that two billion people's default browser will mass-distribute a 4 GB binary they did not request.

This is, in my professional opinion, a direct breach of Article 5(3) of Directive 2002/58/EC (the ePrivacy Directive) [2], a breach of the Article 5(1) GDPR principles of lawfulness, fairness, and transparency [3], a breach of the Article 25 GDPR data-protection-by-design obligation [3], and an environmental harm of a magnitude that would be a notifiable event under the Corporate Sustainability Reporting Directive (CSRD) for any in-scope undertaking [4].

What is on the disk and how it got there

On any machine that has Chrome installed, in the user profile, sits a directory whose name is OptGuideOnDeviceModel. Inside it is a file called weights.bin. The file is approximately 4 GB. It is the weights file for Gemini Nano. Chrome uses it to power features Google has marketed under names like "Help me write", on-device scam detection, and other AI-assisted browser functions.

The file appeared with no consent prompt. There is no checkbox in Chrome Settings labelled "download a 4 GB AI model". The download triggers when Chrome's AI features are active, and those features are active by default in recent Chrome versions. On any machine that meets the hardware requirements, Chrome treats the user's hardware as a delivery target and writes the model.

The cycle of deletion and re-download has been documented across multiple independent reports on Windows installations [5][6][7][8]: the user deletes, Chrome re-downloads, the user deletes again, Chrome re-downloads again. The only ways to make the deletion stick are to disable Chrome's AI features through chrome://flags or enterprise policy tooling that home users do not generally have, or to uninstall Chrome entirely [5]. On macOS the file lands as mode 600 owned by the user, so it is deletable in principle. But Chrome holds the install state in Local State after the bytes are written, and as soon as the variations server next tells Chrome the profile is eligible, the download fires again. The architecture is the same; only the file permissions differ.
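For managed machines, the only deletion that reliably sticks is policy-level. A sketch of a managed-policy fragment; the policy name GenAILocalFoundationalModelSettings and its value semantics (1 = do not download the model) are my reading of Chrome's enterprise policy list, not something verified against this install, so check the policy documentation for your Chrome version before deploying:

```json
{
  "GenAILocalFoundationalModelSettings": 1
}
```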

How I verified this on a freshly created Apple Silicon profile

Most of the existing reporting on this behaviour is from Windows users who noticed their disk filling up - useful, but Google could (and probably will) try to characterise those reports as anecdotes from non-representative configurations. So I went looking for a clean witness on a different platform.

The witness I found is macOS itself. The kernel keeps a filesystem event log called .fseventsd - it records every file create, modify and delete at the OS level, independent of any application logging. Chrome cannot edit it, Google cannot remotely reach it, and the page files that record the events survive the deletion of the files they reference.

I created a Chrome user-data directory on 23 April 2026 to run an automated audit (one of the WebSentinel 100-site privacy sweeps). The audit driver is fully Chrome DevTools Protocol - it loads a page, dwells for five minutes with no input, captures events, closes Chrome between sites - and the profile had received zero keyboard or mouse input from a human at any point in its existence. Every "AI mode" surface in Chrome was untouched - in fact every UI surface in Chrome was untouched; the audit driver only interacts with the document via CDP and the omnibox is never reached. By 29 April the profile contained 4 GB of OptGuideOnDeviceModel weights - and I knew it because a routine du -sh of the audit-profile directory caught it during a cleanup pass.

I went back to .fseventsd to ask exactly when those 4 GB landed. macOS gave me the answer, byte-precise, in three sequential page files:

24 April 2026, 16:38:54 CEST (14:38:54 UTC) - Chrome creates the OptGuideOnDeviceModel directory in the audit profile (page file 0000000003f7f339).

24 April 2026, 16:47:22 CEST (14:47:22 UTC) - three concurrent unpacker subprocesses spawn temporary directories in /private/var/folders/…/com.google.Chrome.chrome_chrome_Unpacker_BeginUnzipping.*/. One of them (5xzqPo) writes weights.bin, manifest.json, _metadata/verified_contents.json and on_device_model_execution_config.pb. The second writes a Certificate Revocation List update. The third writes a browser preload-data update. Chrome batched a security update, a preload refresh and a 4 GB AI model into the same idle window, as if they were equivalent (page file 00000000040c8855).

24 April 2026, 16:53:22 CEST (14:53:22 UTC) - the unpacked weights.bin is moved to its final location at OptGuideOnDeviceModel/2025.8.8.1141/weights.bin along with adapter_cache.bin, encoder_cache.bin, _metadata/verified_contents.json and the execution config. Concurrently, four additional model targets (numbered 40, 49, 51 and 59 in Chrome's optimization-guide enum) register fresh entries in optimization_guide_model_store - these are the smaller text-safety and prompt-routing models that pair with the LLM. None of these targets existed in the profile before this moment (page file 00000000040d0f9c).

Total install time, from directory creation to final move: 14 minutes and 28 seconds. Total human action against the profile during that window: none. The audit driver was either dwelling on a third-party home page or transitioning between sites - the unpacker fired in the background while a tab waited for a five-minute timer to expire.

The naming inside that fseventsd record is, if anything, the most damning detail. The temp directory is com.google.Chrome.chrome_chrome_Unpacker_BeginUnzipping.5xzqPo - that prefix, com.google.Chrome.chrome_chrome_*, is the bundle ID and subprocess naming convention Google Chrome itself uses. It is not com.google.GoogleUpdater.* and it is not com.google.GoogleSoftwareUpdate.*. The writer is Chrome - the browser process the user has installed and trusts to load web pages - reaching into the user's filesystem on its own initiative and laying down a 4 GB ML binary while the foreground tab does something completely unrelated.
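Anyone who wants to repeat the check on their own machine does not need a forensic toolkit. The fseventsd page files are gzip streams, so a crude but serviceable first pass is to scan each decompressed page for the path bytes. A sketch (the function name is mine; reading /System/Volumes/Data/.fseventsd requires root):

```python
import gzip
from pathlib import Path

def pages_mentioning(fragment: bytes, pages_dir: str) -> list[str]:
    """Return the names of fseventsd page files whose decompressed
    stream contains the given path fragment."""
    hits = []
    for page in sorted(Path(pages_dir).iterdir()):
        if not page.is_file():
            continue
        try:
            data = gzip.decompress(page.read_bytes())
        except (gzip.BadGzipFile, OSError):
            continue  # skip non-page files such as fseventsd-uuid
        if fragment in data:
            hits.append(page.name)
    return hits

# On a live machine (run as root):
# pages_mentioning(b"OptGuideOnDeviceModel", "/System/Volumes/Data/.fseventsd")
```

This only tells you which pages mention the path; pulling exact timestamps out of the binary event records, as above, takes a proper fseventsd parser.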

Three further pieces of corroborating evidence sit elsewhere on the same machine:

Chrome's own Local State JSON for the audit profile contains an optimization_guide.on_device block with model_validation_result: { attempt_count: 1, result: 2, component_version: "2025.8.8.1141" }. Chrome ran the model. The component_version matches the version string the fseventsd events recorded as the path component. Two independent witnesses, same artefact. The same block reports performance_class: 6, vram_mb: 36864 - Chrome characterised my hardware (read the GPU, read the unified memory total) to decide whether I was eligible for the model push, before any user-facing AI feature surfaced.
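That check is scriptable too. A sketch that pulls the block out of Local State, assuming the dotted name optimization_guide.on_device denotes nesting in the JSON (the helper name and the macOS path in the comment are mine):

```python
import json
from pathlib import Path

def on_device_model_state(local_state_path: str) -> dict:
    """Return the optimization_guide.on_device block from Chrome's
    Local State file, or an empty dict if it is absent."""
    state = json.loads(Path(local_state_path).read_text(encoding="utf-8"))
    return state.get("optimization_guide", {}).get("on_device", {})

# Default location on macOS (adjust per platform):
# on_device_model_state(str(Path.home() /
#     "Library/Application Support/Google/Chrome/Local State"))
```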

Chrome's ChromeFeatureState for the audit profile lists OnDeviceModelBackgroundDownload<OnDeviceModelBackgroundDownload and ShowOnDeviceAiSettings<OnDeviceModelBackgroundDownload in the enable-features block. The first flag is what triggers the silent download. The second flag is what reveals the on-device AI section in chrome://settings. Both are gated by the same rollout flag - which means that, by Chrome's own architecture, the install begins before the user has any settings UI in which to refuse it. The settings page that would let you discover the feature exists is enabled in lockstep with the install - it is design, not oversight.

The GoogleUpdater logs record the on-device-model control component (appid {44fc7fe2-65ce-487c-93f4-edee46eeaaab}) being downloaded from http://edgedl.me.gvt1.com/edgedl/diffgen-puffin/%7B44fc7fe2-65ce-487c-93f4-edee46eeaaab%7D/… - a 7 MB compressed control file that arrived on 20 April 2026, three days before the audit profile in question was created. That is the upstream control plane: it is profile-independent, it is launched automatically by a LaunchAgent that fires every hour, and the URL is plain HTTP (the integrity is verified by the CRX-3 signature inside the package, not by transport security). The control component gives Chrome the manifest pointing at the actual weights, and Chrome's in-process OnDeviceModelComponentInstaller - a separate code path from GoogleUpdater - then fetches the multi-GB weights directly from Google's CDN.

So we now have a four-way evidence chain - macOS kernel filesystem events, Chrome's own per-profile state, Chrome's runtime feature flags, and Google's component-updater logs - all four agreeing on the same conduct, and the conduct is: a 4 GB AI model arrived on this user's disk without consent, without notice, on a profile that received zero human input, in a window of 14 minutes and 28 seconds, on a Friday afternoon.

Reports of the OptGuideOnDeviceModel directory and the weights.bin file have been circulating in community forums for over a year - what is new in 2026 is the scale and the verifiability. Chrome's market share has held above 64% globally [9][10], Chrome's user base is between 3.45 billion and 3.83 billion individuals worldwide depending on which 2026 estimate you trust [9][11], and Google has been rolling Gemini features into Chrome with increasing aggression. The behaviour is no longer affecting a minority of power users on a minority of platforms - it is affecting hundreds of millions of devices, on every desktop OS Chrome ships against.

The Anthropic comparison, point for point

The same dark-pattern playbook. I am repeating my categorisation from the Claude Desktop article [1] because the patterns are identical and that is the point.

1. Forced bundling across trust boundaries. Anthropic installed Claude Desktop, then wrote into Brave, Edge, Arc, Vivaldi, Opera, and Chromium. Google installs Chrome, then writes a 4 GB AI model under the user's profile directory without authorisation. The binary is not Chrome. It is a separately-trained machine-learning model, with a separate purpose, a separate data-protection profile, and a separate consent footprint.

2. Invisible default, no opt-in. No dialogue at first launch. No checkbox in Settings. The model is downloaded; the user finds out about it months later when their disk fills up [5][6][7].

3. More difficult to remove than install. Adding the file took zero clicks. Removing it requires (a) discovering the file exists, (b) understanding what it is, (c) navigating into a hidden user profile path, (d) deleting it (and on Windows, also clearing the read-only attribute first), and (e) accepting that Chrome will silently re-download it in the next eligible window unless the user also navigates chrome://flags, enterprise policy, or platform-specific configuration tooling to disable the underlying Chrome AI feature [5]. None of those steps is documented in the place a normal user looks - none of them is even hinted at in default Chrome.

4. Pre-staging of capability the user has not requested. The Nano model exists on the user's disk so that Chrome features that use it can run instantly when the user invokes them. The user has not invoked any of those features. The model still sits there, taking 4 GB.

5. Scope inflation through generic naming. OptGuideOnDeviceModel is internal Chrome jargon for "OptimizationGuide on-device model storage". A user looking at their disk usage, even one who knows roughly what they are looking at, would not match OptGuideOnDeviceModel/weights.bin to "Gemini Nano LLM weights". Accurate naming would be GeminiNanoLLM/weights.bin. Google chose to obfuscate the name.

6. Registration into resources the user has not configured. A user who has not opened Chrome's AI features still gets the model. A user who has opened them once and decided they were not interested still gets the model. The file's presence is decoupled from the user's actual use of any feature it powers.

7. Documentation gap. Google's user-facing documentation about Chrome's AI features does not, with prominence proportionate to a 4 GB silent download, tell the user that the cost of the feature being available is a 4 GB file appearing on their device. The behaviour is documented in places a curious admin will find. It is not documented in the place a regular user looks before installing Chrome, or before Chrome decides to begin pushing the model.

8. Automatic re-install on every run. Same as Claude Desktop. Delete the file, Chrome re-creates it. The user's deletion is treated as a transient state to be corrected, not as a directive to be respected.

9. Retroactive survival of any future user consent. If Google in future starts asking users "would you like Chrome to download a 4 GB AI model", that prompt does not retroactively legitimise the silent installs that have already happened on hundreds of millions of devices. The damage to the trust relationship is done. The bytes have moved. The atmosphere has been written to.

10. Code-signed, shipped through the normal release channel. This is not test-build behaviour. It is Chrome stable.

The "AI Mode" pill is the cherry on top

Here is the part that should make every privacy lawyer in the audience put their coffee down. When Chrome 147 launches against an eligible profile, the omnibox - the address bar at the top of the window, the most visible piece of real estate in the entire browser - renders an "AI Mode" pill to the right of the URL field. A reasonable user, seeing "AI Mode" sitting in their browser's most prominent UI element in 2026, with the well-publicised existence of on-device LLMs in Chrome and a 4 GB Gemini Nano binary already silently installed on their disk, is going to draw what feels like an obvious inference - that the visible AI Mode is using the on-device model, that their queries stay on the device, that the local model is what powers the local-looking surface.

Every part of that inference is wrong. The AI Mode pill in the Chrome 147 omnibox is a cloud-backed Search Generative Experience surface - every query the user types into it is sent over the network to Google's servers for processing by Google's hosted models. The on-device Nano model is not invoked by the AI Mode UI flow at all. They are entirely separate code paths - the most visible AI affordance in the browser does not use the local model the user has been silently given, and the features that do use the local model (Help-Me-Write in <textarea>, tab-group AI suggestions, smart paste, page summary) are buried in textarea context menus and tab-group right-click menus that the average user will discover, on average, never.

Think about what that arrangement actually is. The user pays the storage cost of the silent install (4 GB on disk, plus the bandwidth of the silent download). The user's most visible AI experience - the pill they actually see and click - delivers no on-device benefit at all, because it routes to Google's servers regardless. The on-device model is therefore a sunk cost imposed on the user, with no offsetting transparency benefit at the surface where transparency would matter most. To put it another way: if the on-device install had given the user a clear "your AI Mode queries stay on your device" property, the install would have a defensible privacy framing (worse storage, better data flow). It does not. The install gives Google a future-options resource (the model can be invoked by other Chrome subsystems without further server round-trips) at the user's disk-and-bandwidth expense, while the headline AI surface continues to send the user's queries to Google as before. The local model is a Google-side asset positioned on the user's device - it is not a user-side asset, and one could argue it is nothing but sleight-of-hand to hide that the visible AI Mode is NOT using the local model.

That arrangement, on its own, engages at least three of the deceptive-design-pattern families catalogued in EDPB Guidelines 03/2022 [20]. It is misleading information, because the visible label "AI Mode" creates a false impression about where processing occurs - the label does not say "cloud-backed" or "queries sent to Google", and a reasonable user with knowledge of on-device AI will infer locality from the proximity of an on-device 4 GB model on their disk. It is skipping, because the user is not given a moment to choose between local-only and cloud-backed AI surfaces - both are switched on by the same upstream rollout, with no per-feature consent. And it is hindering, because turning AI Mode off does not also remove the on-device install, and removing the on-device install does not turn AI Mode off - the two are separately controlled, and discovering both controls requires knowing about both chrome://flags and chrome://settings/ai, neither of which is obvious in default Chrome.

So: not just a non-consented install, but a non-consented install that doubles as cover for a parallel cloud-backed surface that misrepresents to the user where their typing is being processed. Both layers compound the consent problem.

Why this is unlawful in the EEA and the UK

Article 5(3) of Directive 2002/58/EC (the ePrivacy Directive) prohibits the storing of information, or the gaining of access to information already stored, in the terminal equipment of a subscriber or user, without the user's prior, freely-given, specific, informed, and unambiguous consent, except where strictly necessary for the provision of an information-society service explicitly requested by the user [2]. The 4 GB Gemini Nano weights file is information stored in the user's terminal equipment. The user did not consent. The user has not requested any service that strictly requires a 4 GB on-device LLM. Chrome is functional without the file. The Article 5(3) breach is direct.

Article 5(1) GDPR requires processing of personal data to be lawful, fair, and transparent to the data subject [3]. Where the user's hardware is profiled to determine eligibility for the model push, where the install events are logged on Google's servers, and where the on-device features the model powers process user prompts (whether or not those prompts leave the device), the lawfulness, fairness, and transparency of all of that processing depend on the user being told, in plain language, what is happening. They are not.

Article 25 GDPR requires the controller to implement appropriate technical and organisational measures to ensure that, by default, only personal data that are necessary for each specific purpose are processed [3]. Pre-staging a 4 GB AI model on a user's disk, against a contingency that the user might in future invoke an AI feature, is the architectural opposite of by-default minimisation. And the profiling of the device to determine whether to push the model is no different from the profiling used to track users online: that profile contains personal data, and the AI model, if used, will process personal data, so the GDPR arguments are in scope and valid.

Under the UK GDPR and the Privacy and Electronic Communications Regulations 2003, the analysis is the same. Under the California Consumer Privacy Act, the absence of a notice-at-collection covering this specific category of pre-staged software puts Google's CCPA notice posture in question [12].

Then there are the criminal-law violations under various national computer-misuse statutes - which, again, cannot be overstated.

ESG: the climate cost of the silent push

The Anthropic case I wrote about was a desktop application installing a 350-byte JSON manifest in seven directories. The bandwidth and energy cost of that, summed across all Claude Desktop users, was negligible. The Chrome case is different. Chrome is pushing a 4 GB binary across hundreds of millions of devices. That has a measurable, quantifiable, and frankly alarming environmental footprint.

I am calculating this using the same methodology our WebSentinel audit platform applies to website environmental analysis [13]:

Energy intensity of network data transfer: 0.06 kWh per GB, the mid-band of Pärssinen et al. (2018), "Environmental impact assessment of online advertising", Science of The Total Environment [14]. The paper reports a 0.04-0.10 kWh/GB range depending on the share of fixed-line vs mobile transfer and inclusion of end-user device energy. 0.06 is a defensible mid-point.

Grid emissions factor: 0.25 kg CO2e per kWh, the EEA/IEA composite EU-27 electricity-supply factor for 2024 reporting [15]. Globally the figure varies from ~0.10 kg/kWh on mostly-renewable grids to over 0.70 kg/kWh on coal-heavy grids; 0.25 is mid-band for a global push and is the figure WebSentinel uses by default.

Per-device cost of one Nano push

Bandwidth: 4 GB

Energy: 4 GB × 0.06 kWh/GB = 0.24 kWh per device per push

CO2: 0.24 kWh × 0.25 kg CO2e/kWh = 0.06 kg CO2e per device per push

That is per device, per push: a single download of the model. It does not include re-downloads triggered by the user trying and failing to delete the file. It does not include subsequent updates to the model. It does not include the on-device inference energy when the model is actually used. It is just the one-time delivery cost to one device.
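The per-device arithmetic above, restated as an executable check (the variable names are mine):

```python
# Per-device delivery cost of one Nano push, using the factors given above.
MODEL_GB = 4.0           # size of the weights.bin payload
KWH_PER_GB = 0.06        # mid-band network energy intensity (Parssinen et al.)
KG_CO2E_PER_KWH = 0.25   # composite EU-27 grid emissions factor

energy_kwh_per_device = MODEL_GB * KWH_PER_GB                 # 0.24 kWh
co2_kg_per_device = energy_kwh_per_device * KG_CO2E_PER_KWH   # 0.06 kg CO2e
```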

Aggregated cost across the deployment

Google does not publish how many devices receive the Nano push. The eligibility criteria gating the push (a hardware "performance class" that Chrome computes from CPU class, GPU class, system RAM and available VRAM - typically ~16 GB unified memory or better on Apple Silicon, ~16 GB RAM and a discrete or integrated GPU with sufficient VRAM on Windows and Linux) carve out the very low end of the consumer install base, but the qualifying population is still enormous. I will use three illustrative deployment bands - 100 million devices (low), 500 million (mid), and one billion (high) - so the reader can pick whichever they consider closest to reality. None of these bands is implausibly large for a feature that ships in default-on Chrome.
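The aggregate figures in the comparisons that follow scale linearly from the per-device numbers; the band sizes of 100 million, 500 million and 1 billion devices are the ones the published GWh and tonne figures reverse out to:

```python
# Aggregate delivery cost per deployment band: (energy in GWh, emissions in
# tonnes CO2e), from the per-device figures of 0.24 kWh and 0.06 kg CO2e.
PER_DEVICE_KWH = 0.24
PER_DEVICE_KG = 0.06

bands = {
    label: (devices * PER_DEVICE_KWH / 1e6,   # kWh -> GWh
            devices * PER_DEVICE_KG / 1e3)    # kg  -> tonnes
    for label, devices in [("low", 100e6), ("mid", 500e6), ("high", 1e9)]
}
# low:  24 GWh,   6,000 t CO2e
# mid:  120 GWh, 30,000 t CO2e
# high: 240 GWh, 60,000 t CO2e
```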

To put those numbers in terms an ESG report would use:

24 GWh (low band) is roughly the annual electricity consumption of about 7,000 average UK households [16].

120 GWh (mid band) is roughly the annual electricity consumption of about 36,000 average UK households, or the annual output of a 14 MW generator running continuously.

240 GWh (high band) is roughly the annual electricity consumption of about 72,000 average UK households, or the annual output of about 28 MW of generating capacity running continuously.

6,000 tonnes CO2e (low band) is roughly the annual emissions of 1,300 average passenger cars in the EU [17].

30,000 tonnes CO2e (mid band) is roughly the annual emissions of 6,500 cars, or one return flight from London to Sydney for about 8,000 passengers in economy.

60,000 tonnes CO2e (high band) is roughly the annual emissions of 13,000 cars.

These are the delivery-only numbers. They count the bytes traversing the network exactly once. They do not count:

The roughly 4 GB × N devices of disk-storage cost, sustained, on user hardware. SSDs have an embodied carbon cost of approximately 0.16 kg CO2e per GB of NAND manufactured [18]; for 1 billion devices × 4 GB, that is around 640,000 tonnes CO2e of embodied SSD capacity allocated to a use case the user did not consent to. This is a one-off manufacturing-carbon impact, but the storage burden is borne in perpetuity by user devices that could otherwise have used the space for user data.

The on-device inference energy when Nano is invoked. Per inference this is small. At 2 billion daily Chrome users it is no longer small.

The re-download cycle for users who try to delete the file. Each successful re-trigger of the download is another 4 GB × 0.06 kWh/GB × 0.25 kg CO2e/kWh ≈ 0.06 kg CO2e per device per re-download.

The fu­ture model up­dates. Gemini Nano is not a one-shot arte­fact; it is an evolv­ing model with pe­ri­odic weight re­freshes. Each re­fresh re­peats the cal­cu­la­tion.

In ESG-reporting lan­guage, the one-time push of the cur­rent model is a Scope 3 Category 11 (“use of sold prod­ucts”) emis­sion against Google, at­trib­ut­able to the user-side de­liv­ery of a bi­nary the user did not re­quest, in the op­er­a­tion of a free prod­uct Google dis­trib­utes [4].
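The equivalences above are simple multiplication, and it can be useful to see them in one place. The sketch below is a back-of-envelope calculation using the article's own round numbers (a 4 GB payload, ~0.06 kWh per delivered GB, ~0.25 kg CO2e per kWh, 0.16 kg CO2e per GB of manufactured NAND); the household and car constants are back-derived here from the article's stated equivalences, and the function names are illustrative, not from any published source.

```python
# Back-of-envelope sketch of the delivery-side footprint described above.
# Constants are the article's round numbers; KWH_PER_UK_HOUSEHOLD and
# T_CO2E_PER_EU_CAR are back-derived from its stated equivalences.

PAYLOAD_GB = 4                    # size of the pushed model file
KWH_PER_GB_DELIVERED = 0.06       # network delivery energy intensity
KG_CO2E_PER_KWH = 0.25            # assumed grid carbon intensity
KG_CO2E_PER_GB_NAND = 0.16        # embodied carbon of SSD manufacture [18]
KWH_PER_UK_HOUSEHOLD = 3_300      # annual household electricity use
T_CO2E_PER_EU_CAR = 4.6           # annual passenger-car emissions

def delivery_footprint(devices: int) -> dict:
    """Energy/emissions of pushing the payload once to `devices` endpoints."""
    energy_kwh = devices * PAYLOAD_GB * KWH_PER_GB_DELIVERED
    emissions_t = energy_kwh * KG_CO2E_PER_KWH / 1_000   # kg -> tonnes
    return {
        "energy_gwh": energy_kwh / 1e6,
        "emissions_t": emissions_t,
        "uk_households": energy_kwh / KWH_PER_UK_HOUSEHOLD,
        "eu_cars": emissions_t / T_CO2E_PER_EU_CAR,
    }

def embodied_ssd_t(devices: int) -> float:
    """One-off manufacturing carbon of the storage the payload occupies."""
    return devices * PAYLOAD_GB * KG_CO2E_PER_GB_NAND / 1_000

high = delivery_footprint(1_000_000_000)   # high band: ~1 billion devices
# high["energy_gwh"] ~ 240, high["emissions_t"] ~ 60,000,
# high["uk_households"] ~ 72,000, high["eu_cars"] ~ 13,000
# embodied_ssd_t(1_000_000_000) ~ 640,000 tonnes CO2e
```

Halving the device count reproduces the 120 GWh / 30,000 t mid band exactly, which is consistent with the article's bands differing mainly in the assumed number of eligible devices rather than in the per-GB intensities.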

Why the band­width side mat­ters in its own right

In ad­di­tion to the car­bon cost, the net­work-band­width cost is paid by ISPs, by mo­bile net­work op­er­a­tors, by users on me­tered con­nec­tions, and by every piece of net­work in­fra­struc­ture that has to carry an un­wanted 4 GB pay­load to a des­ti­na­tion that did not ask for it. Per the Pärssinen ref­er­ence, around 50% of that de­liv­ery en­ergy is in the ac­cess net­work and CDN edge, around 30% is in user-side equip­ment (router, mo­dem, NIC), and the re­main­der is in the core. None of that in­fra­struc­ture ex­ists for free. Every byte Chrome pushes is a byte that com­petes with bytes the user ac­tu­ally wanted.

For users on capped mo­bile data plans, par­tic­u­larly in re­gions where smart­phone-as-only-in­ter­net is dom­i­nant (much of Africa, much of South and Southeast Asia, most of Latin America), 4 GB of un­re­quested down­load is on the or­der of a mon­th’s data al­lowance, vapourised by Chrome on the user’s be­half. Google has not, to my knowl­edge, pub­lished any analy­sis of the wel­fare im­pact of this on the pop­u­la­tions whose in­ter­net ac­cess is me­tered.

Keep in mind that mobile data plans (4G and 5G) are used by many households who do not have access to fiber, cable, or ADSL, and serve desktop devices as well as mobile ones. So the argument that Google won't push this to mobile devices (although I have found nothing official to support that argument anyway) will not fly.

What Google should have done

This is not a hard list. It is the same list I gave Anthropic in the Claude Desktop ar­ti­cle, ap­plied to Google.

Ask. First time Chrome is about to down­load the Nano model, pop a di­a­logue. Chrome would like to down­load a 4 GB AI model file to your de­vice to power the fol­low­ing fea­tures. Allow, or skip and de­cide later.” Two but­tons. Done.

Pull, not push. Trigger the down­load as a down­stream con­se­quence of the user in­vok­ing an AI fea­ture for the first time. Let the fea­ture it­self be the con­sent event. Do not pre-stage on a con­tin­gency.

Surface it. In chrome://​set­tings/, list the AI model files Chrome has down­loaded, their size, the fea­tures they power, and a Remove and stop down­load­ing” but­ton per model. Make re­moval per­sis­tent, not a tran­sient state Chrome cor­rects on next launch.

Document it. Tell the user, plainly, in the Chrome de­scrip­tion on the Microsoft Store, in the Chrome in­staller, on the Google Chrome down­load page, that Chrome will down­load ad­di­tional model files of sub­stan­tial size on sup­ported hard­ware. Currently, this is es­sen­tially un­doc­u­mented to a nor­mal user.

Respect dele­tion. If the user deletes weights.bin, do not re-cre­ate it. If the user has a strong pref­er­ence about what is on their disk, the ap­pli­ca­tion is not in a po­si­tion to over­ride that pref­er­ence be­cause the ap­pli­ca­tion thinks it knows bet­ter.

Disclose at scale. Publish, in Google’s an­nual ESG re­port, the ag­gre­gate band­width and car­bon foot­print of all AI-feature model pushes to user de­vices, bro­ken down by re­gion. Treat it as the Scope 3 Category 11 emis­sion it is. Account for it.

Thienan Tran

thienantran.com

Talking to 35 Strangers at the Gym

Published: May 1, 2026 Updated: May 5, 2026

Background

A cou­ple months ago, I was the Wizard of Loneliness. I had grad­u­ated from col­lege al­most two years prior and, while I had luck­ily found a job, I was un­suc­cess­ful in find­ing friends.

Each night, I would look up how to make friends af­ter col­lege” and find the same ad­vice given every time: do your hobby with other peo­ple, fre­quently”.

On paper, the gym seemed like the perfect opportunity to meet people since I would go there nearly every day; however, according to Reddit, there are a number of people who want to be left alone and can be irritated if you interrupt their workout to talk.

I am deeply afraid of ir­ri­tat­ing some­one or be­ing in awk­ward sit­u­a­tions. Here’s a list of things that I did as a re­sult of that fear:

Hesitated for a cou­ple min­utes be­fore wak­ing up my room­mate when the fire alarm went off

Pretended I did­n’t know a child­hood friend when they said hi be­cause I did­n’t know how to act around peo­ple I used to know

Ignored peo­ple I knew from class in­stead of say­ing hi be­cause I did­n’t know for sure if they re­mem­bered me even though the class had only 10 peo­ple in it

So you can un­der­stand when I say that walk­ing up to some­one and start­ing a con­ver­sa­tion with them at the gym of all places is kinda ter­ri­fy­ing for me.

Unfortunately, there was no other good op­tion. My other hobby is pro­gram­ming, but the Syracuse Development group only meets up once a month, and ac­tiv­i­ties sug­gested by r/​Syra­cuse like vol­ley­ball and trivia night re­quire you to al­ready have friends. I did­n’t have a choice. If I wanted friends, I would have to put in the work at the gym.

Problem Statement

I am lonely and have no friends.

Procedure

I de­cided to run a lit­tle ex­per­i­ment to find some friends.

Each day, for one month, I picked out one per­son to ap­proach. Usually it would be some­one I saw fre­quently at the gym.

If they were in the mid­dle of an ex­er­cise, I waited for them to fin­ish their set.

Then, I would ap­proach them, stand near them and wave to get their at­ten­tion, and then give them my open­ing line.

Initially, my open­ing line for every­one was Hey I see you here all the time. You’re pretty strong. What’s your split?” After a week or so, I be­gan cus­tomiz­ing the open­ing line per per­son based on what I found in­ter­est­ing about them.

For in­stance, some­one was wear­ing a Boston hat and I was cu­ri­ous whether they went to school in Boston like I did, so I asked them about it. After the open­ing line, I tried to talk to them for 5 – 10 min­utes un­til they let me go. I tried not to be the one to end it be­cause I have a habit of end­ing con­ver­sa­tions early, but I did leave them alone if they ob­vi­ously did not want to talk.

Results

Here’s the raw data. I split it up by week and put it into these col­lapsi­ble things be­cause it takes up a lot of space. Click on each week to see the data for that week.

Description is a short de­scrip­tion of the per­son.

Length is how long the con­ver­sa­tion was. A short con­ver­sa­tion is 0 – 2 min­utes, a medium con­ver­sa­tion is 5 – 7 min­utes, and a long con­ver­sa­tion is 10+ min­utes.

Notes are just any­thing in­ter­est­ing about the con­ver­sa­tion or the per­son I was talk­ing to.

Aftermath is what hap­pened af­ter that con­ver­sa­tion.

Reflection

The first couple of days were extremely difficult. I had been conditioned to believe that initiating a conversation with a stranger was weird, and it was tough to break free from that. As a result, for the first few people, I would always make a detour at the last second, e.g. a trip to the water fountain. I chickened out! The solution was to approach the person as quickly as possible so that I didn't have time to think about running away.

Luckily, most peo­ple were re­cep­tive. I got a rush of dopamine when­ever some­one re­sponded pos­i­tively to my con­ver­sa­tion, so talk­ing to new peo­ple be­came strangely ad­dic­tive. I kept talk­ing to more and more new peo­ple each day un­til I talked to a whop­ping seven (SIX SEVENNN) new peo­ple in one day (this is why Week 3 has a lot of en­tries). It was crazy.

Something in­ter­est­ing I learned early on was that even if some­one had head­phones on, there was a good chance they were open to con­ver­sa­tion. I mean, I had my ear­buds in and I was will­ing to talk to any­body. Most peo­ple were just lis­ten­ing to mu­sic and took the head­phones off to talk.

People did­n’t al­ways re­spond pos­i­tively though. In Week 1 and Week 2, I came across a num­ber of peo­ple who were re­ally short with their re­sponses and did­n’t try to con­tinue the con­ver­sa­tion. They gave off the vibe that they did­n’t want to talk to me. It was re­ally awk­ward and al­most made me end the ex­per­i­ment.

But over time, I came to ac­cept that it’s ok if they did­n’t want to talk to me. That’s just one of the things you have to ex­pect when you do some­thing like this.

And be­ing in an awk­ward sit­u­a­tion is ac­tu­ally not that bad. It sucks in the mo­ment, but then you just take a few min­utes to calm down and then you move on with your life. You’re ok.

However, I did end up pulling back in Week 4 and Week 5. I felt like con­stantly talk­ing to more new peo­ple was pro­duc­ing di­min­ish­ing re­turns. I had al­ready es­tab­lished a con­nec­tion with many peo­ple at the gym, so it was a bet­ter use of my lim­ited time (remember I still have to work out!) to nur­ture those ex­ist­ing con­nec­tions into mean­ing­ful ones.

I ended up pri­or­i­tiz­ing the 5 – 6 peo­ple who seemed the most in­ter­ested in me.

One of these peo­ple is some­one I will re­fer to as the other Asian guy”. I got a lot closer to him than ex­pected. We re­al­ized we had the same work­out rou­tine so we be­came gym bud­dies and started work­ing out to­gether. A few weeks later, he in­vited me to his apart­ment, where he cooked me a smash burger. His girl­friend showed me graphic pic­tures of what she was learn­ing in PA school too. Then, we watched a movie with their cat. I’m re­ally grate­ful that they were kind enough to have me over as a guest.

Also, some­thing new hap­pened: in­stead of scar­ing peo­ple away, I had a pos­i­tive im­pact on some­one.

These texts were from one of the peo­ple I pri­or­i­tized, the male SU stu­dent. He had re­cently moved to Syracuse and was strug­gling to make new friends. He re­lated to a cou­ple of my videos where I talked about the same strug­gles and was su­per ap­pre­cia­tive that I talked to him that day. The fol­low­ing week, we tried out Kofta Burger af­ter a rec­om­men­da­tion from my friend who lives down­town.

The burger was de­li­cious and we had a great time.

Despite my suc­cesses, my work is­n’t done. I re­al­ized near the end of the month that what I truly wanted was to con­sis­tently hang out with peo­ple on the week­ends. Unfortunately, most of the friends I’ve made are busy on the week­end. They’re tak­ing trips to visit loved ones, go­ing to the bar (I’m not that into drink­ing), or run­ning er­rands, so it’s hard to plan any­thing.

But I guess that’s a bet­ter prob­lem to have than eter­nal lone­li­ness.

A few months ago, I was googling how to make friends af­ter col­lege” every night. Now I have peo­ple to text, peo­ple to wave to at the gym, and peo­ple who no­tice when I don’t show up for a few days. AND I be­came a more re­silient per­son who is un­afraid to do hard and scary things.

No more Wizard of Loneliness for me!

Heh this blew up on HackerNews. I want to give some more con­text for peo­ple who are un­sure if the gym was the right place to do this. And this is all in hind­sight; I did not re­al­ize this un­til now.

The gym I go to, Crunch Fitness, has a social aspect to it. While many people keep to themselves, it's common to see people chatting. Sometimes they're chatting in between sets. Other times, they're chatting on the treadmill. The staff go out of their way to interact with us, and often the people who didn't want to talk to me talked to other people! I guess they are more open with their friends.

The people at the gym are also really supportive. I forgot to mention this, but once, when I was doing hip thrusts, I messed up and didn't rerack the machine correctly. I fell on my butt and the machine made a huge CLANK sound when it fell. Everybody turned to look at me. I was really embarrassed. But then, one guy came and helped me return the machine to the starting position while another guy swung by to make sure I was ok. He assured me that it happened to everyone and told me not to let it get to me. I didn't know either of these people! They just wanted to help.

I don't disagree that the gym is primarily a place to work out, but I think it's also a place where you can find community. Maybe my gym is special in how social it is, or maybe people are friendlier than they appear to be. I'm betting on the latter.

Valve releases Steam Controller CAD files under Creative Commons license

www.digitalfoundry.net

Modders, start your en­gines.

by William Judd

Yesterday, 10:29am

With the rather ex­cel­lent Steam Controller now on its way to the lucky few that man­aged to or­der one, Valve has re­leased a full set of CAD files for their new hard­ware. The idea is to let en­ter­pris­ing mod­ders cre­ate their own Steam Controller add-ons, like skins, charg­ing stands, grip ex­ten­ders or smart­phone mounts.

The Valve re­lease in­cludes files for the ex­ter­nal shell (“surface topol­ogy”) of the Controller and Puck, with a .STP, .STL and en­gi­neer­ing di­a­gram of each de­vice, with the lat­ter show­ing ar­eas that must re­main un­cov­ered to let the de­vice main­tain its sig­nal strength and oth­er­wise func­tion as de­signed.

Valve has previously released CAD files for its Steam Deck handheld, Valve Index VR suite and even the original Steam Controller a decade ago, so this release is welcome but not unexpected.

The re­lease is un­der a fairly re­stric­tive Creative Commons li­cense which al­lows for non-com­mer­cial use and re­quires at­tri­bu­tion and shar­ing of de­signs back to the com­mu­nity. However, the li­cense also sug­gests that com­mer­cial en­ti­ties in­ter­ested in mak­ing ac­ces­sories for the Steam Controller or its Puck can con­tact Valve di­rectly to dis­cuss terms.

What is your ul­ti­mate Steam Controller or Steam Controller Puck ac­ces­sory? Let us know in the com­ments be­low. For me, it would def­i­nitely be a smart­phone clip - play­ing through some­thing rel­a­tively low-stakes like Forza Horizon 6 via Moonlight game stream­ing on a phone would be slick.

[source steam­com­mu­nity.com]

Will is web­site ed­i­tor for Digital Foundry, spe­cial­is­ing in PC hard­ware, sim rac­ing and dis­play tech­nol­ogy.

Author Profile

Bluesky

Reply

Appearing Productive in The Workplace — No One's Happy

nooneshappy.com

Parkinson’s Law states that work ex­pands to fill the time avail­able. In the era of AI, work­ers now have a tool that ex­pands to fill what­ever a large lan­guage model can be per­suaded to gen­er­ate, which is to say, with­out limit.

What I have watched hap­pen in my pro­fes­sion in the last two years, I am still strug­gling to de­scribe. The first time I knew some­thing was wrong, roughly a year and a quar­ter ago, I no­ticed a col­league re­ply­ing to me us­ing AI. His re­sponse was ob­vi­ously gen­er­ated by Claude. The punc­tu­a­tion gave it away — em dashes where no one types em dashes, the rhyth­mic struc­ture, the con­fi­dent grasp of tech­nolo­gies I knew for a fact he did not un­der­stand. I sat with it for a while, weigh­ing whether to de­bate some­one who was vis­i­bly copy-past­ing ver­ba­tim from a model. The chan­nel was pub­lic, and I spent more time than I should have cor­rect­ing fun­da­men­tals. Eventually I stopped. He was not, in any mean­ing­ful sense, on the other side of the con­ver­sa­tion.

Generative AI can produce work that looks expert without being expert, and the failure arrives in two shapes. The first is when novices in a field produce work that resembles what their seniors produce, running ahead of their own judgment. The second is when people generate artifacts in disciplines they were never trained in. The two failures look similar from a distance but are not the same. Research has mostly measured the first. The second is the one it misses, and in my experience it is the riskier of the two.

Cross do­main gen­er­a­tion

People who cannot write code are building software. People who have never designed a data system are designing data systems. Most of it is not shipped; it is built, often over many hours, possibly shown internally with great vigor, used quietly, and occasionally surfaced to a client without much fanfare. Workers can obsess over an idea, working many hours of overtime. There are a few practitioners who use the current agentic tools to do complex things properly, but they are scarce and, in my experience, typically working in code generation. AI, for all its capabilities at the level of the individual, has not scaled properly in my workplace.

I have a colleague, a careful and intelligent person in a role that is not engineering, who spent two months earlier this year building a system that should have been designed by someone with formal training in data architecture. He used the tools well, by the standards by which use of the tools is currently measured. He produced a great deal of code, a great deal of documentation, a great deal of what looked, to anyone who did not know what to look for, like progress. He could not, when asked, explain how any of it actually worked. The work was wrong from the first day. The schemas, and more importantly the objectives, were wrong in a way that would have been obvious to anyone with two years in the field. Several of us did know. When opinions were voiced, even from as high as a VP, he fought back. The room had been arranged in such a way that saying so was not a contribution; his managers were too invested in the appearance of momentum to want the appearance disturbed. The work will continue, in all probability, until it is shown to a stakeholder and they decide not to invest.

This is the part of the phe­nom­e­non I find hard­est to write about. The tool did not make him a worse col­league. It made him able to im­per­son­ate, for months, a dis­ci­pline he had never trained in, and the im­per­son­ation was good enough that the in­sti­tu­tional in­cen­tives all bent to­ward let­ting him con­tinue. Perhaps it’s a fail­ure of man­age­ment, but I have been find­ing man­age­ment to be so ea­ger to em­brace AI that they’re will­ing to ac­cept the risk.

It would be tolerable, perhaps, if the tool offered an honest assessment of what it had produced. The Cheng et al. Stanford study published in Science this spring [1] confirmed what every regular user already knew: leading models are roughly fifty percent more agreeable than human respondents, affirming the user even where the affirmation is unwarranted. Berkeley CMR meta-analyses [4] found that AI-literate users often overestimate their performance, which is particularly interesting when workers stray outside their training. An NBER study of support agents [2] found generative AI boosted novice productivity by about a third while barely helping experts. Harvard Business School researchers found the same pattern in consulting work [3]. So you have overconfident novices able to improve their individual productivity in an area of expertise they are unable to review for correctness. What could go wrong?

The con­duit prob­lem

A grow­ing body of work calls this out­put-com­pe­tence de­cou­pling [5]. In any pre­vi­ous era, the qual­ity of a piece of work was a more or less re­li­able sig­nal of the com­pe­tence of the per­son who pro­duced it. A novice es­say read like a novice es­say; novice code crashed in novice ways. AI has sev­ered that re­la­tion­ship. A novice now pro­duces work that does not be­tray the novice, be­cause the com­pe­tence the work re­flects is not the novice’s com­pe­tence at all. It is the sys­tem’s. The per­son, in the trans­ac­tion, be­comes a kind of con­duit, ca­pa­ble of rout­ing the out­put to a re­cip­i­ent and in­ca­pable of eval­u­at­ing it on the way through.

The skills of pro­duc­ing work and judg­ing it were de­lib­er­ately dis­tinct, but ac­com­plish­ing the work it­self used to teach the judg­ment. The first skill now be­longs, in large part, to the ma­chines. The sec­ond still be­longs to us, though fewer are both­er­ing to ac­quire or uti­lize it.

The ar­chi­tec­tural cri­tique that used to come from some­one who was taught, or who had built and bro­ken three of these be­fore now comes from a model with no em­bod­ied mem­ory of build­ing or break­ing any­thing. The slow­ness was not a tax on the real work; the slow­ness was the real work. It was how the work got good, and how the peo­ple pro­duc­ing the work got good, and how the firm whose name was on the work could promise the client that what they were buy­ing was a par­tic­u­lar kind of thing rather than a generic one.

The cur­rent gen­er­a­tion of agen­tic sys­tems is built around the premise that the hu­man is the bot­tle­neck — that the loop runs faster and cleaner with­out the awk­ward de­lay of some­one read­ing what is about to hap­pen and de­cid­ing whether it should. This is, in a great many cases, ex­actly back­wards. The hu­man in the loop is not a ves­tige of an ear­lier era; the hu­man is the only part of the loop with skin in the game. Removing the H from HITL is not an ef­fi­ciency. It is the aban­don­ment of the only mech­a­nism the sys­tem has for catch­ing it­self.

Slop on the in­side

Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries. Retrospective notes, post-incident reports, design memos, kickoff decks: every artifact that can be elongated is, by people who do not read what they produce, for readers who do not read what they receive. The cost of producing a document has fallen to nearly zero; the cost of reading one has not, and is in fact rising, because the reader must now sift the synthetic context for whatever the document was originally about. Each individual decision to elongate seems rational, and each is independently rewarded: readers are more confident in longer AI-generated explanations whether or not the explanations are correct [5]. The collective effect is that the signal in any given workplace is harder to find than it was before any of this began. The checkpoints have been hidden, drowned in their own paperwork, even when the people drowning them were genuinely trying to be "brief".

This is a new form of slop, and it is more ex­pen­sive than the pub­lic kind, be­cause the peo­ple pro­duc­ing it are be­ing paid a salary to do so. The pipeline of fu­ture ex­perts is thin­ning from both ends. The work that used to teach judg­ment is now done by the tool, and the en­try-level roles where the teach­ing hap­pened are be­ing cut on the the­ory that the tool can do the work. What this is caus­ing, in many of­fices in­clud­ing mine, is a great deal of mo­tion and very lit­tle of what mo­tion used to cre­ate.

The down­stream costs are ac­cu­mu­lat­ing quickly. Most of the pub­lic dis­cus­sion of AI slop has fo­cused on the flood into pub­lic mar­kets — a University of Florida mar­ket­ing study [6] be­ing among the more di­rect treat­ments. What is less re­marked upon is the same dy­namic play­ing out in­side or­ga­ni­za­tions: time wasted us­ing AI on tasks that did not need it, on ar­ti­facts no one will read, on processes that ex­ist only be­cause the tool made it cheap to con­struct them. On decks that spell out things that pre­vi­ously did­n’t even need to be said or were as­sumed.

What to do about it

What dis­ci­pline looks like, in this en­vi­ron­ment, is al­most em­bar­rass­ingly old-fash­ioned and may seem ob­vi­ous to most of you un­til you try to avoid it. Use the tool where you can ver­ify pre­cisely what it pro­duces. Never ask a model for con­fir­ma­tion; the tool agrees with every­one, and an agree­ment that costs the agreer noth­ing is worth noth­ing.

Generative AI does well on tasks where feed­back is fast, where be­ing ap­prox­i­mately right is good enough, where the hu­man re­mains the fi­nal ar­biter. Drafting a memo, gen­er­at­ing ex­am­ples, sum­ma­riz­ing ma­te­r­ial the reader could ver­ify if they cared to. The University of Illinois Generative AI guid­ance [7] and the PLOS Computational Biology Ten Simple Rules” pa­per on AI in re­search [8], among the more care­ful doc­u­ments now cir­cu­lat­ing, list much of this ex­plic­itly: brain­storm­ing, copy­edit­ing, re­for­mu­lat­ing one’s own ideas, pat­tern de­tec­tion in data one al­ready un­der­stands.

In every rec­om­mended use, the hu­man sup­plies the judg­ment and the tool sup­plies the through­put. This is a stronger po­si­tion than hu­man-in-the-loop. The tool sits out­side the work, con­tribut­ing where in­vited and silent oth­er­wise, which is the op­po­site of what most agen­tic sys­tems are now be­ing built to do.

The competitive advantage of a firm whose work can be trusted has not disappeared; it has, if anything, appreciated, because so many of its competitors are quietly converting themselves into content-generation pipelines and counting on the client not to notice.

This is already coming to a head. Deloitte has already refunded part of a $440,000 fee over an AI-hallucinated government report. The next case could be a production system built on a hallucinated specification, or a senior engineer who realizes they have spent the last year nominally reviewing work they could no longer competently review. The reckoning will not be subtle. The firms still doing the work properly will be in a position to charge for it. The firms that have hollowed themselves out will discover that what they hollowed out was the thing the client was paying for.

Misunderstanding and mis­use of AI in the work­place is ram­pant. In many of the rooms I now find my­self in, ex­per­tise has been asked to look the other way: to de­liver faster, pro­duce more, in­te­grate the tools more deeply, get out of the way of the col­leagues who are getting things done”. The ar­ti­facts are ac­cu­mu­lat­ing; the work is not. And some­where on the other side of all this out­put, a client is open­ing a de­liv­er­able, read­ing a sum­ma­rized list, and they may just choose to re­view it man­u­ally.

Disclaimer: I am not an expert, or a writer. This is not an academic article. Whether I like it or not, I am at the precipice of AI. These are my experiences, in my workplace, with references to things that I think are relevant. If you take one thing away, take away that people are impressionable creatures. Also, those who claimed this article is ironically a casualty of its own complaint are 100% right. Kudos.

References

1. Sycophantic AI de­creases proso­cial in­ten­tions and pro­motes de­pen­dence (Cheng, Lee, Khadpe, Yu, Han, & Jurafsky, 2026). Science.

2. Generative AI at Work (Brynjolfsson, Li, & Raymond, 2025). The Quarterly Journal of Economics, 140(2), 889 – 942. Also: NBER Working Paper No. 31161, April 2023.

3. Navigating the Jagged Technological Frontier (Dell’Acqua, McFowland, Mollick, et al., 2026). Organization Science. Originally HBS Working Paper No. 24 – 013, 2023.

4. Seven Myths About AI and Productivity: What the Evidence Really Says (Berkeley CMR, 2025). Meta-analysis con­firm­ing asym­met­ric AI pro­duc­tiv­ity gains and user over­con­fi­dence.

5. Beyond the Steeper Curve: AI-Mediated Metacognitive Decoupling (Koch, 2025). Longer AI ex­pla­na­tions make users more con­fi­dent re­gard­less of cor­rect­ness.

6. Generative AI and the mar­ket for cre­ative con­tent (Zou, Shi, & Wu, 2026). Forthcoming, Journal of Marketing Research.

7. Generative AI Guidance (University of Illinois). Recommended uses and lim­i­ta­tions of gen­er­a­tive AI in aca­d­e­mic and pro­fes­sional work.

8. Ten sim­ple rules for op­ti­mal and care­ful use of gen­er­a­tive AI in sci­ence (Helmy, Jin, et al., 2025). PLOS Computational Biology, 21(10), e1013588.

Belgium stops decommissioning nuclear power plants

dpa-international.com

30.04.2026, 11:37 Uhr

Belgium will stop de­com­mis­sion­ing its nu­clear power plants, Prime Minister Bart De Wever an­nounced on Thursday.

The gov­ern­ment is go­ing to ne­go­ti­ate with op­er­a­tor ENGIE over the na­tion­al­iza­tion of the plants, De Wever said.

This gov­ern­ment chooses safe, af­ford­able, and sus­tain­able en­ergy. With less de­pen­dence on fos­sil im­ports and more con­trol over our own sup­ply,” he wrote on X.

ENGIE said it signed a let­ter of in­tent with the Belgian gov­ern­ment on ex­clu­sive ne­go­ti­a­tions.

The agree­ment cov­ers the po­ten­tial ac­qui­si­tion of the com­plete nu­clear fleet of seven re­ac­tors, the as­so­ci­ated per­son­nel, all nu­clear sub­sidiaries, as well as all as­so­ci­ated as­sets and li­a­bil­i­ties, in­clud­ing de­com­mis­sion­ing and dis­man­tling oblig­a­tions,” a press re­lease said.

A ba­sic agree­ment is ex­pected to be reached by October, it said.

Belgium originally decided in 2003 to phase out nuclear power production by 2025, but political debate and energy security concerns have led to delays.

Last year the Belgian par­lia­ment voted by a large ma­jor­ity to end the nu­clear phase-out. De Wever’s gov­ern­ment also aims to build new nu­clear power plants.

Belgium has seven nu­clear re­ac­tors at two dif­fer­ent sites, al­though three re­ac­tors have al­ready been taken off the grid.

The fate of the age­ing in­stal­la­tions has been de­bated for decades. The coun­try is cur­rently heav­ily de­pen­dent on gas im­ports to cover its elec­tric­ity needs as it has been strug­gling to ex­pand re­new­able power gen­er­a­tion sig­nif­i­cantly.

Bart De Wever on X

ENGIE press re­lease

(c) 2026 dpa Deutsche Presse Agentur GmbH

Mercedes-Benz commits to bringing back physical buttons

www.drive.com.au

Another brand backflips, admitting that touch-sensitive buttons for frequently used controls were a mistake, but only after a nudge from customers.

Mercedes-Benz joins the grow­ing list of man­u­fac­tur­ers lis­ten­ing to cus­tomers and ad­mit­ting that touch-sen­si­tive con­trols and bury­ing con­trols in menus were mis­takes.

The German brand re­mains com­mit­ted to of­fer­ing large screens in its mod­els, but has lis­tened to its cus­tomers and will of­fer phys­i­cal but­tons for key func­tions in fu­ture.

This sets it partly apart from Audi and Volkswagen, which have chosen to reduce the size of their infotainment screens to make room for the returning physical controls.

The upcoming GLC and C-Class will be offered with the 39.1-inch ‘MBUX Hyperscreen’ that covers almost the entire width of the dashboard, but with physical buttons in front of the dual wireless chargers, along with physical buttons and switches returning to the steering wheel.

Speaking to Autocar, Mercedes-Benz sales boss Mathias Geisen said the brand has changed course: “Customers told us two years ago, ‘guys, nice idea, but it just doesn’t work for us’, so we changed that and made it more analogue.”

Physical but­tons, switches, and di­als will con­tinue to be in­cor­po­rated into up­com­ing mod­els, as the brand plans to blend its screen with the re­quired phys­i­cal con­trols.

He also explained: “I’m a big believer in screens, because I really believe if you want to connect, you have to make the magic work behind the screen.

“But in our future products, you will see more hard keys for specific functions that customers want to have direct access to.

“When we do car research clinics, customers are very clear: ‘We love the big screens, but we want to have [hard controls for] specific functionalities.’”

The brand will also offer a customisable wallpaper element for the near metre-wide seamless touchscreen, a choice its sales boss admits was made because phones are such a huge part of people’s lives and buyers are used to that level of technology.

“If you want to connect to the customer, you’ve got to find a way to translate this digital experience from your phone to the customer.”

The new-generation GLC SUV will showcase the brand’s new MB.EA electric vehicle platform when it arrives in the fourth quarter of 2026 (October to December); the platform will also underpin the upcoming C-Class, due early next year.


Red Squares — the GitHub outage graph

red-squares.cian.lol

DENIC Status

status.denic.de

Affected component: DNS Nameservice

May 6, 2026 01:34 CEST / May 5, 2026 23:34 UTC

RESOLVED

All Services are up and run­ning.

May 5, 2026 23:28 CEST / May 5, 2026 21:28 UTC

INVESTIGATING

Frankfurt am Main, 5 May 2026 — DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, the reachability of all DNSSEC-signed .de domains is currently affected. The root cause of the disruption has not yet been fully identified. DENIC's technical teams are working intensively on analysis and on restoring stable operations as quickly as possible.

Based on current information, users and operators of .de domains may experience impairments in domain resolution. Further updates will be provided as soon as reliable findings on the cause and recovery are available. DENIC asks all affected parties for their understanding. For further enquiries, DENIC can be contacted via the usual channels.
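The incident above hinges on DNSSEC: only clients that ask for signature data are affected by validation problems. As a hypothetical illustration (this is not DENIC's tooling), the sketch below builds a raw DNS query for nic.de with the EDNS0 "DNSSEC OK" (DO) bit set, using only the Python standard library; the transaction ID and UDP payload size are arbitrary choices.

```python
import struct

def build_dnssec_query(name: str, qtype: int = 1) -> bytes:
    """Build a DNS query (A record by default) that signals DNSSEC support."""
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=1
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    # Question section: QNAME as length-prefixed labels, then QTYPE, QCLASS=IN
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)
    # EDNS0 OPT pseudo-RR (RFC 6891): root name, TYPE=41,
    # CLASS = requester's UDP payload size (4096), the 32-bit TTL field
    # carries extended flags where 0x8000 is the DO bit (RFC 3225), RDLENGTH=0
    opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0x8000, 0)
    return header + question + opt

query = build_dnssec_query("nic.de")
```

Sent over UDP port 53, a query like this asks the server to include RRSIG records alongside the answer; a validating resolver that cannot verify those signatures typically answers SERVFAIL, which is how signed .de names become unreachable for validating clients.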

DNSSEC Debugger - nic.de

dnssec-analyzer.verisignlabs.com

Analyzing DNSSEC prob­lems for nic.de

Want a sec­ond opin­ion? Test nic.de at dnsviz.net.

To add this web app to your iOS home screen, tap the share button and select "Add to the Home Screen".

10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

Visit pancik.com for more.