10 interesting stories served every morning and every evening.




1 852 shares, 46 trendiness

googleworkspace/cli: Google Workspace CLI — one command-line tool for Drive, Gmail, Calendar, Sheets, Docs, Chat, Admin, and more. Dynamically built from Google Discovery Service. Includes AI agent skills.

One CLI for all of Google Workspace — built for humans and AI agents.

Drive, Gmail, Calendar, and every Workspace API. Zero boilerplate. Structured JSON output. 40+ agent skills included.

npm install -g @googleworkspace/cli

gws doesn’t ship a static list of commands. It reads Google’s own Discovery Service at runtime and builds its entire command surface dynamically. When Google Workspace adds an API endpoint or method, gws picks it up automatically.
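A Discovery document is ordinary JSON describing each API’s resources and methods. As a rough illustration of what building a command surface from it involves (using a made-up, heavily trimmed document and jq — not gws’s actual implementation), enumerating resource/method pairs looks like:

```shell
# Hypothetical, trimmed slice of a Discovery-style document (the real one
# is fetched from Google's Discovery Service at runtime):
doc='{"resources": {"files": {"methods": {"list": {}, "create": {}}}}}'

# Enumerate "resource.method" pairs -- the raw material a tool could turn
# into subcommands such as `gws drive files list`:
echo "$doc" | jq -r '.resources | to_entries[] | .key as $r | (.value.methods | keys[]) | "\($r).\(.)"'
```

Each emitted pair (here `files.create` and `files.list`) corresponds to one generated subcommand.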

* Node.js 18+ — for npm install (or download a pre-built binary from GitHub Releases)

* A Google Cloud project — required for OAuth credentials. You can create one via the Google Cloud Console, with the gcloud CLI, or with the gws auth setup command.

npm install -g @googleworkspace/cli

The npm package bundles pre-built native binaries for your OS and architecture. No Rust toolchain required.

Pre-built binaries are also available on the GitHub Releases page.

cargo install --git https://github.com/googleworkspace/cli --locked

A Nix flake is also available at github:googleworkspace/cli

nix run github:googleworkspace/cli

gws auth setup   # walks you through Google Cloud project config

gws auth login   # subsequent OAuth login

gws drive files list --params '{"pageSize": 5}'

For humans — stop writing curl calls against REST docs. gws gives you --help on every resource, --dry-run to preview requests, and auto-pagination.

For AI agents — every response is structured JSON. Pair it with the included agent skills and your LLM can manage Workspace without custom tooling.

# List the 10 most recent files
gws drive files list --params '{"pageSize": 10}'

# Create a spreadsheet
gws sheets spreadsheets create --json '{"properties": {"title": "Q1 Budget"}}'

# Send a Chat message
gws chat spaces messages create \
  --params '{"parent": "spaces/xyz"}' \
  --json '{"text": "Deploy complete."}' \
  --dry-run

# Introspect any method's request/response schema
gws schema drive.files.list

# Stream paginated results as NDJSON
gws drive files list --params '{"pageSize": 100}' --page-all | jq -r '.files[].name'

The CLI supports multiple auth workflows so it works on your laptop, in CI, and on a server.

Credentials are encrypted at rest (AES-256-GCM) with the key stored in your OS keyring.

gws auth setup   # one-time: creates a Cloud project, enables APIs, logs you in

gws auth login   # subsequent scope selection and login

gws auth setup requires the gcloud CLI. If you don’t have gcloud, use the manual setup below instead.

You can authenticate with more than one Google account and switch between them:

gws auth login --account work@corp.com   # log in and register an account

gws auth login --account personal@gmail.com

gws auth list   # list registered accounts

gws auth default work@corp.com   # set the default

gws --account personal@gmail.com drive files list   # one-off override

export GOOGLE_WORKSPACE_CLI_ACCOUNT=personal@gmail.com   # env var override

Credentials are stored per-account in ~/.config/gws/, with an accounts.json registry tracking defaults.

Use this when gws auth setup cannot automate project/client creation, or when you want explicit control.

Download the client JSON and save it to:

gws auth login

You can complete OAuth either manually or with browser automation.

* Agent-assisted flow: the agent opens the URL, selects the account, handles consent prompts, and returns control once the localhost callback succeeds.

If the consent screen shows “Google hasn’t verified this app” (testing mode), click Continue. If scope checkboxes appear, select the required scopes (or Select all) before continuing.

On the headless machine:

export GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE=/path/to/credentials.json

gws drive files list   # just works

Point to your key file; no login needed.

export GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE=/path/to/service-account.json

gws drive files list

export GOOGLE_WORKSPACE_CLI_IMPERSONATED_USER=admin@example.com

Useful when another tool (e.g. gcloud) already mints tokens for your environment.

export GOOGLE_WORKSPACE_CLI_TOKEN=$(gcloud auth print-access-token)

Environment variables can also live in a .env file.
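For instance, a .env file might pin the account and credentials for a project directory. The variable names below appear elsewhere in this README; the values are placeholders:

```shell
# .env -- example only; substitute your own account and paths
GOOGLE_WORKSPACE_CLI_ACCOUNT=work@corp.com
GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE=/path/to/credentials.json
```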

The repo ships 100+ Agent Skills (SKILL.md files) — one for every supported API, plus higher-level helpers for common workflows and 50 curated recipes for Gmail, Drive, Docs, Calendar, and Sheets. See the full Skills Index for the complete list.

# Install all skills at once

npx skills add https://github.com/googleworkspace/cli

# Or pick only what you need

npx skills add https://github.com/googleworkspace/cli/tree/main/skills/gws-drive

npx skills add https://github.com/googleworkspace/cli/tree/main/skills/gws-gmail

Install the extension into the Gemini CLI:

gemini extensions install https://github.com/googleworkspace/cli

Installing this extension gives your Gemini CLI agent direct access to all gws commands and Google Workspace agent skills. Because gws handles its own authentication securely, you simply need to authenticate your terminal once prior to using the agent, and the extension will automatically inherit your credentials.

gws mcp starts a Model Context Protocol server over stdio, exposing Google Workspace APIs as structured tools that any MCP-compatible client (Claude Desktop, Gemini CLI, VS Code, etc.) can call.

gws mcp -s drive   # expose Drive tools

gws mcp -s drive,gmail,calendar   # expose multiple services

gws mcp -s all   # expose all services (many tools!)

{
  "mcpServers": {
    "gws": {
      "command": "gws",
      "args": ["mcp", "-s", "drive,gmail,calendar"]
    }
  }
}

gws drive files create --json '{"name": "report.pdf"}' --upload ./report.pdf

Sheets ranges use !, which bash interprets as history expansion. Always wrap values in single quotes:

# Read cells A1:C10 from "Sheet1"
gws sheets spreadsheets values get \
  --params '{"spreadsheetId": "SPREADSHEET_ID", "range": "Sheet1!A1:C10"}'

# Append rows
gws sheets spreadsheets values append \
  --params '{"spreadsheetId": "ID", "range": "Sheet1!A1", "valueInputOption": "USER_ENTERED"}' \
  --json '{"values": [["Name", "Score"], ["Alice", 95]]}'
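To see why the quoting matters: single quotes pass the ! through to the command untouched. A quick illustration with plain echo (no gws involved):

```shell
# Single quotes keep '!' literal -- the range string survives intact:
echo '{"range": "Sheet1!A1:C10"}'
# In an interactive bash session, the same string in double quotes would
# instead trigger history expansion on "!A1" and fail.
```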

Integrate Google Cloud Model Armor to scan API responses for prompt injection before they reach your agent.

gws gmail users messages get --params '…' \
  --sanitize 'projects/P/locations/L/templates/T'

Build a clap::Command tree from the document’s resources and methods

Your OAuth app is in testing mode and your account is not listed as a test user.

Fix: Open the OAuth consent screen in your GCP project → Test users → Add users → enter your Google account email. Then retry gws auth login.

Expected when your app is in testing mode. Click Advanced → “Go to (app name)” to proceed. This is safe for personal use; verification is only required to publish the app to other users.

Unverified (testing mode) apps are limited to ~25 OAuth scopes. The recommended scope preset includes many scopes and will exceed this limit.

Fix: Select only the scopes you need:

gws auth login --scopes drive,gmail,calendar

gws auth setup requires the gcloud CLI to automate project creation. You have three options:

Skip gcloud entirely — set up OAuth credentials manually in the Cloud Console

The OAuth client was not created as a Desktop app type. In the Credentials page, delete the existing client, create a new one with type Desktop app, and download the new JSON.

If a required Google API is not enabled for your GCP project, you will see a 403 error with reason accessNotConfigured:

"error": {
  ...
}

Read the original on github.com »

2 746 shares, 36 trendiness

Anthropic CEO Dario Amodei calls OpenAI's messaging around military deal 'straight up lies,' report says

Anthropic co-founder and CEO Dario Amodei is not happy — perhaps predictably so — with OpenAI chief Sam Altman. In a memo to staff, reported by The Information, Amodei referred to OpenAI’s dealings with the Department of Defense as “safety theater.”

“The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses,” Amodei wrote.

Last week, Anthropic and the U.S. Department of Defense (DoD) failed to come to an agreement over the military’s request for unrestricted access to the AI company’s technology. Anthropic, which already had a $200 million contract with the military, insisted the DoD affirm that it would not use the company’s AI to enable domestic mass surveillance or autonomous weaponry.

Instead, the DoD — known under the Trump administration as the Department of War — struck a deal with OpenAI. Altman stated that his company’s new defense contract would include protections against the same red lines that Anthropic had asserted.

In a letter to staff, Amodei refers to OpenAI’s messaging as “straight up lies,” stating that Altman is “falsely presenting himself as a peacemaker and dealmaker.”

Amodei might not be speaking solely from a position of bitterness, here. Anthropic specifically took issue with the DoD’s insistence on the company’s AI being available for “any lawful use.” OpenAI said in a blog post that its contract allows use of its AI systems for “all lawful purposes.”

“It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose,” OpenAI’s blog post stated. “We ensured that the fact that it is not covered under lawful use was made explicit in our contract.”

Critics have pointed out that the law is subject to change, and what is considered illegal now might end up being allowed in the future.

And the public seems to be siding with Anthropic. ChatGPT uninstalls jumped 295% after OpenAI made its deal with the DoD.

“I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with the DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!),” Amodei wrote to his staff. “It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees.”

...

Read the original on techcrunch.com »

3 565 shares, 44 trendiness

The L in "LLM" Stands for Lying — Acko.net

So it’s no wonder artists would denounce generative AI as mass-plagiarism when it showed up. It’s also no wonder that a bunch of tech entrepreneurs and data janitors wouldn’t understand this at all, and would in fact embrace the plagiarism wholesale, training their models on every pirated shadow library they can get. Or indeed, every code repository out there.

If the output of this is generic, gross and suspicious, there’s a very obvious reason for it. The different training samples in the source material are themselves just slop for the machine. Whatever makes the weights go brrr during training.

This just so happens to create the plausible deniability that makes it impossible to say what’s a citation, what’s a hallucination, and what, if anything, could be considered novel or creative. This is what keeps those shadow libraries illegal, but ChatGPT “legal”.

Labeling AI content as AI generated, or watermarking it, is thus largely an exercise in ass-covering, and not in any way responsible disclosure.

It’s also what provides the fig leaf that allows many a developer to knock off for early lunch and early dinner every day, while keeping the meter running, without ever questioning whether the intellectual property clauses in their contract still mean anything at all.

This leaves the engineers in question in an awkward spot however. In order for vibe-coding to be acceptable and justifiable, they have to consider their own output disposable, highly uncreative, and not worthy of credit.

If you ask me, no court should have ever rendered a judgement on whether AI output as a category is legal or copyrightable, because none of it is sourced. The judgement simply cannot be made, and AI output should be treated like a forgery unless and until proven otherwise.

The solution to the LLM conundrum is then as obvious as it is elusive: the only way to separate the gold from the slop is for LLMs to perform correct source attribution along with inference.

This wouldn’t just help with the artistic side of things. It would also reveal how much vibe code is merely just copy/pasted from an existing codebase, while conveniently omitting the original author, license and link.

With today’s models, real attribution is a technical impossibility. The fact that an LLM can even mention and cite sources at all is an emergent property of the data that’s been ingested, and the prompt being completed. It can only do so when appropriate according to the current position in the text.

There’s no reason to think that this is generalizable; rather, it is far more likely that LLMs are merely good at citing things that are frequently and correctly cited. It’s citation role-play.

The implications of sourcing-as-a-requirement are vast. What does backpropagation even look like if the weights have to be attributable, and the forward pass auditable? You won’t be able to fit that in an int4, that’s for sure.

Nevertheless, I think this would be quite revealing, as this is what “AI detection tools” are really trying to solve for backwards. It’s crazy that the next big thing after the World Wide Web, and the Google-scale search engine to make use of it, was a technology that cannot tell you where the information comes from, by design. It’s… sloppy.

To stop the machines from lying, they have to cite their sources properly. And spoiler, so do the AI companies.

...

Read the original on acko.net »

4 432 shares, 32 trendiness

No right to relicense this project · Issue #327 · chardet/chardet

Hi, I’m Mark Pilgrim. You may remember me from such classics as “Dive Into Python” and “Universal Character Encoding Detector.” I am the original author of chardet. First off, I would like to thank the current maintainers and everyone who has contributed to and improved this project over the years. Truly a Free Software success story.

However, it has been brought to my attention that, in the release 7.0.0, the maintainers claim to have the “right to relicense” the project. They have no such right; doing so is an explicit violation of the LGPL. Licensed code, when modified, must be released under the same LGPL license. Their claim that it is a “complete rewrite” is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a “clean room” implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights.

I respectfully insist that they revert the project to its original license.

...

Read the original on github.com »

5 332 shares, 23 trendiness

Relicensing with AI-assisted rewrite

I am not a lawyer, nor am I an expert in copyright law or software licensing. The following post is a breakdown of recent community events and legal news; it should not be taken as legal advice regarding your own projects or dependencies.

In the world of open source, relicensing is notoriously difficult. It usually requires the unanimous consent of every person who has ever contributed a line of code, a feat nearly impossible for legacy projects. chardet, a Python character encoding detector used by requests and many others, has sat in that tension for years: as a port of Mozilla’s C++ code it was bound to the LGPL, making it a gray area for corporate users and a headache for its most famous consumer.

Recently the maintainers used Claude Code to rewrite the whole codebase and release v7.0.0, relicensing from LGPL to MIT in the process. The original author, a2mark, saw this as a potential LGPL violation:

Licensed code, when modified, must be released under the same LGPL license. Their claim that it is a “complete rewrite” is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a “clean room” implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights.

A true clean-room rewrite keeps a wall between two teams:

* Team A looks at the original code and writes a functional specification

* Team B (which has never seen the original code) writes new code based solely on that specification.

By using an AI that was prompted with the original LGPL code, the maintainers bypassed this wall. If the AI “learned” from the LGPL code to produce the new version, the resulting output is arguably a derivative work, which under the LGPL must remain LGPL.

Coincidentally, as this drama unfolded, the U.S. Supreme Court (on March 2, 2026) declined to hear an appeal regarding copyrights for AI-generated material. By letting lower court rulings stand, the Court effectively solidified a “Human Authorship” requirement. That creates a massive legal paradox for the chardet maintainers:

* The copyright vacuum: If AI-generated code cannot be copyrighted (as the courts suggest), then the maintainers may not even have the legal standing to license v7.0.0 under MIT or any license.

* The derivative trap: If the AI output is considered a derivative of the original LGPL code, the “rewrite” is a license violation.

* The ownership void: If the code is truly a “new” work created by a machine, it might technically be in the public domain the moment it’s generated, rendering the MIT license moot.

If “AI-rewriting” is accepted as a valid way to change licenses, it represents the end of Copyleft. Any developer could take a GPL-licensed project, feed it into an LLM with the prompt “Rewrite this in a different style,” and release it under MIT. The legal and ethical lines are still being drawn, and the chardet v7.0.0 case is one of the first real-world tests.

...

Read the original on tuananh.net »

6 224 shares, 9 trendiness

Does that use a lot of energy?

Compare the daily energy consumption of different products and activities

Select products to compare energy use through the day

This tool was designed to give a sense of the differences in energy consumption between different products. It’s often difficult to understand whether activities matter a lot or very little for our overall energy consumption.

These numbers represent typical products and usage (specifically for the UK, although it will often generalise elsewhere), and might not reflect your own personal circumstances. If you want precise measurements, you will need to use dedicated energy monitoring equipment.

Actual energy consumption will vary a lot depending on factors such as the age and efficiency of the product, how you’re using it (for example, how warm your showers are, or how fast you drive), and weather and climate conditions (which are particularly important for the energy usage of heaters and air conditioners).

How to use

Add and remove products or activities in the sidebar to compare them on the chart. Most have the option of adjusting the number of hours used, miles driven, or other units of usage.

This tool was built by Hannah Ritchie, with the help of Claude Code.

All energy consumption values in this tool are measured in watt-hours (Wh), which is the amount of energy consumed over time. The basic formula for calculating energy consumption is:

Energy (Wh) = Power (W) × Time (hours)

For example, a 100-watt light bulb used for 2 hours would consume 200 watt-hours of energy.
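The same arithmetic can be checked for a few of the appliances covered further down, using plain shell integer arithmetic (dividing by 60 to convert minutes into hours):

```shell
# 100 W bulb for 2 hours:
echo $((100 * 2)) Wh          # 200 Wh
# 2000 W kettle for 3 minutes (3/60 of an hour):
echo $((2000 * 3 / 60)) Wh    # 100 Wh
# 9500 W electric shower for 10 minutes:
echo $((9500 * 10 / 60)) Wh   # 1583 Wh
```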

Most products on this list are electrical, but energy use for non-electric products (such as a petrol car or gas heating) is converted into watt-hour equivalents.

Energy costs are available for a small selection of countries based on their national energy prices (electricity, gas and petrol). This price data is sourced from Eurostat, Ofgem, and the US EIA (based on prices for 2025 or early 2026, depending on availability). Costs reflect average household prices, and don’t reflect dynamic, off-peak or smart tariffs.

Below, I list the assumptions and sources for each product or activity. Again, the actual level of energy consumption will depend on factors such as the specific efficiency of the product, user settings, and climate, so these should be interpreted as approximations to give a sense of magnitude.

Traditional incandescent bulbs typically range from 25 to 100 watts, with 60 watts being relatively standard for a household bulb. One hour of use would consume 60 watt-hours (Wh).

LED bulbs use around 80% less energy than incandescent bulbs for the same amount of light output. A standard LED bulb has an energy rating of around 10 W. Using it for one hour would consume 10 Wh.

Modern smartphones have battery capacities of 3,000-5,000 mAh at approximately 3.7-4.2V, resulting in batteries of around 15-20 watt-hours. If we assume there is around 10% to 20% loss due to charging efficiencies, a full charge likely requires around 20 Wh.

Medium-efficiency TVs (for example, 40-50 inch LED TVs) consume approximately 60 watts during active viewing.

Larger modern TVs (55-60 inches with 4K capability) typically consume 80-100 watts. I’ve gone with 90 watts as a reasonable average.

The power consumption of Apple MacBooks varies depending on the model and what applications users are running.

When doing everyday tasks such as writing emails, word documents, or browsing the internet, they consume around 5 to 15 watts. Streaming video is more like 15 to 20 watts. When doing intensive tasks such as editing photos or video, or gaming, a MacBook Pro can reach 80 to 100 watts.

Here I have assumed an average of 20 watts.

Desktop computers vary widely, but more efficient models consume approximately 50 watts. When doing light tasks, this can be a bit lower. Gaming computers can use far more, especially during peak usage (often several hundred watts).

The power consumption of game consoles can vary a lot, depending on the model. The Xbox Series S typically consumes around 70 watts during active gameplay. The Xbox Series X consumes around twice as much: 150 watts.

Game consoles use much less when streaming TV or film, or when in menu mode.

The marginal increase in energy consumption for one hour of streaming is around 0.2 Wh. This comprises just 0.028 Wh from Netflix’s servers themselves, and another 0.18 Wh from transmission and distribution.

To stream video, you need an internet connection, hence a bar for the electricity consumption of Home WiFi is also shown. Note that, for most people, this isn’t actually the marginal increase in energy use for streaming. Most people have their internet running 24/7 regardless; the increase in energy use for streaming is very small by comparison. However, it is shown for completeness.

This does not include the electricity usage of the device (the laptop or TV itself). To get the total for that hour of viewing, combine it with the power usage of whatever device you’re watching it on.

h/t to Chris Preist (University of Bristol) for guidance on this.

YouTube figures are likely similar to Netflix (see above), although they may be slightly higher due to typical streaming patterns and ad delivery. Again, you need to add the power consumption of the device you’re watching on, separately.

WiFi routers typically consume between 10 and 20 watts continuously. Here I’ve assumed 15 watts as a reasonable average.

Recent research estimates that the median ChatGPT query using GPT-4o consumes approximately 0.3 watt-hours of electricity.

Actual electricity consumption varies a lot depending on the length of query and response. More detailed queries — such as Deep Research — will consume more (but there is insufficient public data to confirm how much).

If improved data becomes available on more complex queries, image generation and video, I would like to add them.

E-readers like the Kindle use e-ink displays that consume power primarily when refreshing the page. A typical Kindle device has a battery of around 1,000-1,700 mAh at ~3.7 V, which is 3.7 to 6 Wh. People report it lasting weeks on a full charge with moderate (30 minutes per day) reading frequency.

That works out to less than 1 Wh per hour. Here I’ve been conservative and have rounded it up to 1 Wh.

Electric kettles typically have a power rating between 1,500 and 2,000 watts. Boiling a full kettle (1.5-1.7 litres) takes around 3 to 4 minutes.

A 2,000-watt kettle that takes 3 minutes to boil will consume around 100 watt-hours.

Microwaves typically have a power rating between 800 and 1,200 watts. If we assume 1,000 watts, five minutes of use would consume 83 Wh (1,000 W × 5/60 hours).

Electric ovens can have a power rating ranging from 2,000 to 5,000 watts. A typical one is around 2,500 watts.

Once an oven is on and has reached the desired temperature, it typically cycles and runs at around 50% to 60% capacity. I’ve therefore calculated energy consumption as [2,500W × time × 0.55].

Gas ovens consume natural gas for heating but also use electricity for ignition and controls (approximately 300-400 watts). When converting the thermal energy from gas combustion to electrical equivalents for comparison purposes, gas ovens typically use slightly more total energy than electric ovens due to combustion inefficiency.

Similar to electric ovens, I have assumed that gas ovens cycle on and off once they’ve reached the desired temperature.

Small air fryers typically operate at 800W to 1,500W. Larger models (especially with two trays) can be as much as 2,500W. I’ve assumed 1,500 watts in these calculations. Once an air fryer is on, it typically cycles and only runs at around 50% to 60% of capacity. Averaged over a cycle, 1,000W is likely more realistic.

Ten minutes of use would consume 167 Wh (1,000W × 0.17 hours).

Induction hobs are efficient, and tend to have a power rating of 1,000W to 2,000W per ring. I’ve assumed 1,500 watts in these calculations. Like air fryers, they’re often not operating at maximum power draw for the full cooking session; 50% is more typical. That means the average power usage is closer to 750W.

Most cooking activities take less time, typically 5 to 10 minutes, which reduces electricity consumption.

Gas hobs convert natural gas to heat. They tend to consume 2 to 2.5 times as much energy as induction hobs to achieve the same heat output. This is because they typically operate at around 40% efficiency, compared to 85% for an electric hob.

If an induction hob has an average rating of 750W over a cooking cycle, the useful heat delivered is 638W (750W × 85% efficiency). To get that useful heat from a gas hob with 40% efficiency would need 1,595W (638W / 0.4). Here I’ve assumed an equivalent power input of 1,600W.
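That chain of efficiency conversions can be verified with awk (floating-point arithmetic on the same figures; the text works with the rounded 638 W and rounds the final assumption up to 1,600 W):

```shell
awk 'BEGIN {
  useful = 750 * 0.85          # useful heat from the induction hob: 637.5 W
  printf "useful heat: %.2f W\n", useful
  printf "gas input:   %.2f W\n", useful / 0.4   # 1593.75 W at 40% efficiency
}'
```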

A small-to-medium refrigerator (around 130 litres) typically consumes around 100 kWh per year, which equals approximately 275 Wh per day on average.

Standard refrigerator-freezer combinations consume anywhere between 200 and 500 kWh per year. Some very efficient models can achieve less than 200 kWh. Here, I have assumed one consumes 300 kWh per year. That is approximately 822 Wh per day.

Vacuum cleaners typically use 500W to over 1,500W. Popular models in the UK use around 620W or 750W. Here, I have assumed a power rating of 750W. Ten minutes of usage would consume 125 Wh.

Washing machine energy usage varies a lot depending on load size, cycle type and water temperature. An average load in an efficient, modern machine might use 600 Wh to 1,000 Wh per cycle. A large load could use more than 1,500 Wh. Here I have assumed 800 Wh, which is typical for a medium load.

Electric tumble dryers are among the highest energy consumers in the home. Heat pump models are much more efficient than condenser or vented models. A condenser or vented model might consume between 4,000 and 5,000 Wh per cycle; a heat pump model, around half as much.

Here, I have assumed 4,500 Wh for condenser or vented cycles, and 2,000 Wh for a heat pump cycle. Actual energy consumption will depend on factors such as load size and user settings.

Most energy in a dishwasher is used for heating the water. They typically use between 1,000 and 1,500 Wh per cycle. Very efficient models can use closer to 500 Wh per cycle. Operating on eco modes will also consume less than 1,000 Wh.

Here, I have assumed 1,250 Wh per cycle, which is fairly average for most users.

Clothes irons typically have an energy rating between 1,500W and 3,000W. Steam irons are towards the higher end of the range. Here, I have assumed 2,500W, which is fairly standard for a steam iron.

Using one for 10 minutes would consume 417 Wh of energy.

Dehumidifiers can range from as small as a few hundred watts, up to several thousand for large whole-house units.

Here I’ve assumed a medium, portable one with an energy rating of 500W, and a large unit of 1,000W.

In humid conditions, or if they’re being used to dry clothes, they will be running at or close to maximum power draw for a long period of time. In fairly low-humidity conditions, they might cycle on and off after a few hours, meaning their energy use drops to 50% to 70% of the maximum.

Hairdryers typically range from 1,000 to 2,000 watts. I have assumed a power rating of 1,750W. Five minutes of use would consume 146 Wh.

Electric showers are high-power appliances, rated between 7,500W and 11,500W. Specific models of 7.2 kW, 7.5 kW, 8.5 kW, 9.5 kW, 10.5 kW, and 11.5 kW are typical.

I have assumed a 9,500W model here. A 10-minute shower at 9,500 watts would consume 1,583 Wh.

An electric shower with hot water sourced from a heat pump will use less electricity.

If we assume a heat pump with a Coefficient of Performance (COP) of 3, producing the same heat output would use around 3,000 Wh per hour. Some very efficient models can achieve less than this; often closer to 2,000 Wh.

If we take the gas equiv­a­lent of an elec­tric shower (rated at 9500W) and as­sume a boiler ef­fi­ciency of 90%, we get around 10,500W in en­ergy in­put equiv­a­lents. A 10-minute shower would con­sume 1,759 Wh.

Standard fans typ­i­cally use 30-75 watts, with 50 watts be­ing a rea­son­able av­er­age.

Small portable elec­tric heaters typ­i­cally range from 400 to 1,000 watts. Here I’ve as­sumed a wattage of 750W. Using this for one hour would con­sume 750 Wh.

A medium space heater typ­i­cally op­er­ates at around 1,500 watts (ranging from 1,000 to as much as 3,000 for large ones). That means us­ing one for an hour would con­sume 1,500 Wh.

Modern air-source heat pumps for single rooms (mini-splits) typically consume 600 to 1,000 Wh of electricity per hour of heating. This would be converted into around 1,800 to 3,000 Wh of heat.

Here we are as­sum­ing a Coefficient of Performance (CoP) value of around 3, which means 3 units of heat are gen­er­ated per unit of elec­tric­ity in­put.

These cal­cu­la­tions are very sen­si­tive to weather con­di­tions, tem­per­a­ture set­tings, and the in­su­la­tion of the house. These val­ues might be typ­i­cal for a mod­er­ate cli­mate (such as the UK) in win­ter. In slightly warmer con­di­tions, en­ergy us­age will be lower. In colder con­di­tions, it would be higher.

The power draw can also be a bit lower than this once the heat pump is run­ning.

Here, I've assumed they consume 800 Wh of electricity per hour. That would supply 2,400 Wh of heat.

We will as­sume our gas heat­ing needs to sup­ply the same amount of heat as our heat pump: 2,400 Wh.

A gas boiler is around 90% efficient, so the energy input needed would be around 2,700 Wh (2,400 / 0.9).

Again, this is very sen­si­tive to the spe­cific boiler sys­tem, cli­mate and heat­ing re­quire­ments.
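As a sketch, the electricity and gas inputs for the same 2,400 Wh of heat compare like this (COP of 3 and 90% boiler efficiency as assumed above):

```python
heat_demand_wh = 2400  # heat to supply per hour, from the heat pump example
cop = 3                # heat pump coefficient of performance
boiler_eff = 0.90      # gas boiler efficiency

heat_pump_input = heat_demand_wh / cop      # electricity into the heat pump
boiler_input = heat_demand_wh / boiler_eff  # gas energy into the boiler
print(round(heat_pump_input))  # 800 Wh of electricity
print(round(boiler_input))     # ~2,667 Wh of gas, i.e. "around 2,700 Wh"
```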

We can’t get a whole house fig­ure by sim­ply mul­ti­ply­ing by the num­ber of rooms. Energy con­sump­tion will de­pend a lot on the heat loss and fab­ric of the house.

In the UK, a 3-bedroom house has an area of around 90m². A build­ing of this size might have a heat loss of around 50 to 100 W/m². We’ll say 75 W/m². That would mean 6,750W of heat is re­quired (90m² * 75 W/m²).

Getting this from a heat pump with a CoP of 3 would con­sume 2,250Wh of elec­tric­ity per hour (6750 / 3). This is what I’ve as­sumed in our cal­cu­la­tions. In re­al­ity, the con­sump­tion is prob­a­bly lower as en­ergy draw re­duces once the heat pump is up and run­ning.

We’ll use the same as­sump­tions as above for a heat pump. We need to sup­ply 6,750W of heat for the house.

Getting this from a 90% ef­fi­cient boiler would con­sume 7,500Wh of gas per hour.

The av­er­age house­hold in the UK uses around 31,000Wh of gas per day. That’s equiv­a­lent to 4–5 hours of heat­ing (a bit less if their daily to­tal in­cludes a gas shower etc.). In win­ter, these heat­ing hours will likely be higher, and dur­ing the sum­mer, close to zero.

I think 7,500Wh of gas per hour there­fore seems rea­son­able (but very sen­si­tive to a spe­cific house­hold’s cir­cum­stances).
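The whole-house estimates above chain together as follows (floor area, heat-loss rate, COP, and boiler efficiency are the assumptions stated in the text):

```python
floor_area_m2 = 90       # typical UK 3-bedroom house
heat_loss_w_per_m2 = 75  # assumed mid-range heat loss
cop = 3                  # heat pump coefficient of performance
boiler_eff = 0.90        # gas boiler efficiency

heat_demand_w = floor_area_m2 * heat_loss_w_per_m2
print(heat_demand_w)                      # 6750W of heat required

print(round(heat_demand_w / cop))         # 2250 Wh of electricity per hour (heat pump)
print(round(heat_demand_w / boiler_eff))  # 7500 Wh of gas per hour (boiler)

# Sanity check: average UK daily gas use divided by hourly heating demand
print(round(31000 / (heat_demand_w / boiler_eff), 1))  # ~4.1 hours of heating per day
```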

Air con­di­tion­ing units for sin­gle rooms typ­i­cally use 800 to 1,500 watts. I’ve as­sumed 1,000W in these cal­cu­la­tions.

The ac­tual en­ergy us­age will be very sen­si­tive to cli­mate con­di­tions. Warmer, and es­pe­cially hu­mid cli­mates make AC units much less ef­fi­cient. Running one in a mod­er­ate, drier cli­mate would use much less.

They can also con­sume less en­ergy once they’re up-and-run­ning, so they’re not al­ways go­ing at max­i­mum power draw.

Electric bicycles typically consume between 10 and 30 watt-hours per mile, depending on speed, the cycling conditions, and the level of electric assist. For light assist on flat terrain, it's around 8 to 12 Wh per mile; for moderate assist, around 12 to 18 Wh; and for heavy assist on hilly terrain it can reach 30 Wh per mile.

I’ve as­sumed a value of 15 Wh per mile.

Electric scoot­ers typ­i­cally con­sume 15-30 watt-hours per mile de­pend­ing on the model and con­di­tions. Here, I’ve as­sumed a us­age of 25 Wh per mile.

Electric motorbikes typically consume 100 to 250 watt-hours per mile, depending on the model, rider weight and conditions. Real-world tests find efficiencies of around 100 Wh per mile for moderate urban riding, with higher usage reported at higher speeds or on motorways.

Here I’ve as­sumed around 150 Wh per mile.

Petrol motorbikes can achieve between 50 and 100 miles per gallon. Let's take an average of 75mpg. A gallon is around 4.5 litres, so 75mpg is equivalent to 0.06 litres per mile.

The energy content of petrol is around 32 MJ per litre (or 8.9 kWh per litre). That equates to 0.53 kWh per mile (8.9 kWh per litre * 0.06 litres per mile), so driving one mile uses around 530 Wh.

In terms of en­ergy in­puts, this means an elec­tric mo­tor­bike is 3 to 4 times as ef­fi­cient as a petrol one.
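The motorbike conversion works out as follows (75mpg, 4.546 litres per UK gallon, and 8.9 kWh per litre are the figures used above; the 530 Wh figure comes from rounding to 0.06 litres per mile first):

```python
mpg = 75                   # assumed petrol motorbike fuel economy
litres_per_gallon = 4.546  # UK gallon
kwh_per_litre = 8.9        # energy content of petrol

wh_per_mile = litres_per_gallon / mpg * kwh_per_litre * 1000
print(round(wh_per_mile))           # ~539 Wh per mile ("around 530 Wh" above)
print(round(wh_per_mile / 150, 1))  # ~3.6x the 150 Wh/mile electric motorbike
```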

Electric ve­hi­cles av­er­age ap­prox­i­mately 0.3 kWh (300 Wh) per mile. However, this can range from 200 to 400 Wh per mile de­pend­ing on the type of ve­hi­cle, dri­ving con­di­tions and speed.

Petrol cars av­er­age around 40 miles per gal­lon (ranging from around 25 to 50).

Petrol has an energy density of around 40.5 kWh per UK gallon (4.546 litres per gallon * 8.9 kWh per litre).

This means a petrol car uses around 1 kWh (1,000 Wh) per mile, and that an electric car is around 3 to 4 times more efficient, since it has far lower energy losses from the engine, heat production, and braking.
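And the same conversion for a petrol car, compared against the EV average:

```python
mpg = 40                  # average petrol car fuel economy
gallon_kwh = 4.546 * 8.9  # ~40.5 kWh per UK gallon of petrol
ev_wh_per_mile = 300      # average electric vehicle

petrol_wh_per_mile = gallon_kwh / mpg * 1000
print(round(petrol_wh_per_mile))                      # ~1,011 Wh, i.e. "around 1 kWh" per mile
print(round(petrol_wh_per_mile / ev_wh_per_mile, 1))  # ~3.4x more energy per mile
```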

...

Read the original on hannahritchie.github.io »

7 218 shares, 51 trendiness

Good software knows when to stop

It's 9AM, and you're ready to upgrade your favorite Linux distribution and packages to their latest versions. The process goes smoothly, and after a reboot your machine is up-to-date. You start going about your day as usual, and then, when trying to list the contents of a directory on your machine, something strange happens: the routinely boring behavior of ls that you're used to surprises you, and not for the best:

Fortunately, this does not happen… Good software knows the purpose it serves: it does not try to do everything; it knows when to stop and what to improve.

One of the most counterintuitive things, for the maximalist human psyche we have, is to know the role and place your software fits in, and to decide whether what you want to do next fits with what we nowadays call the "product vision", or whether it's just another project, another tool.

For the oldest amongst us, this kind of lesson came from 37signals, the makers of Basecamp (the project management tool), through their books Rework and Getting Real. I'd recommend both, especially Getting Real for product design, and I could sum up its lessons as:

* Constraints are ad­van­tages — small teams, tight bud­gets, and lim­ited scope force bet­ter de­ci­sions

* Ignore fea­ture re­quests — don’t build what users ask for; un­der­stand the un­der­ly­ing prob­lem in­stead

* Ship early, ship of­ten — a half-prod­uct that’s real beats a per­fect prod­uct that’s va­por­ware

* Epicenter de­sign — start with the core in­ter­face/​in­ter­ac­tion, not the edges (nav, footer, etc.)

* Say no by de­fault — every fea­ture has a hid­den cost: com­plex­ity, main­te­nance, edge cases

* Scratch your own itch — build some­thing you your­self need; you’ll make bet­ter de­ci­sions

At a time when MinIO becomes AIStor and even Oracle Database becomes the Oracle AI Database, I think we need a little reminder that not everything has to change drastically, and that being the de facto standard for a given problem has more value than branding yourself as the new hot thing no one expected.

...

Read the original on ogirardot.writizzy.com »

8 217 shares, 44 trendiness

Huginn Report: February 2026

In the in­tro­duc­tory post for this blog, we men­tioned that Huginn, our ac­tive phish­ing dis­cov­ery tool, was be­ing used to seed Yggdrasil and that we’d have more to share soon. Well, here it is. This is the first in what we plan to be a monthly se­ries where we share what Huginn has been find­ing in the wild, break down in­ter­est­ing at­tacks, and re­port on how ex­ist­ing de­tec­tion tools are per­form­ing. Our hope is that these posts are use­ful both as a re­source for un­der­stand­ing the cur­rent phish­ing land­scape and as a way to demon­strate what our tools can do.

Over the course of February, Huginn processed URLs sourced from pub­lic threat in­tel­li­gence feeds and iden­ti­fied 254 con­firmed phish­ing web­sites. For each of these, we checked whether Google Safe Browsing (GSB) had flagged the URL at the time of our scan. The re­sults were strik­ing: GSB had flagged just 41 of the 254, mean­ing 83.9% of con­firmed phish­ing sites were not flagged by the tool that un­der­pins Chrome’s built-in pro­tec­tion at the time we dis­cov­ered them.
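The miss rate quoted above works out directly from the two counts:

```python
total_phishing = 254  # confirmed phishing sites found by Huginn
flagged_by_gsb = 41   # flagged by Google Safe Browsing at scan time

missed = total_phishing - flagged_by_gsb
print(missed)                                   # 213 sites not flagged
print(round(missed / total_phishing * 100, 1))  # 83.9% miss rate
```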

Now, to be fair, some of these may have been flagged later. But that’s kind of the point. Phishing pages are of­ten short-lived by de­sign. The at­tacker sets up a page, blasts out a cam­paign, har­vests what­ever cre­den­tials they can, and takes it down be­fore any­one catches on. If the de­tec­tion comes hours or days af­ter the page goes live, the dam­age is al­ready done. This is the fun­da­men­tal lim­i­ta­tion of block­list-based de­tec­tion: it’s re­ac­tive. Something has to be re­ported and re­viewed be­fore pro­tec­tion kicks in.

We also ran the full dataset of 263 URLs (254 phish­ing, 9 con­firmed le­git­i­mate) through Muninn’s au­to­matic scan. This is the scan that runs on every page you visit with­out any ac­tion on your part. On its own, the au­to­matic scan cor­rectly iden­ti­fied 238 of the 254 phish­ing sites and only in­cor­rectly flagged 6 le­git­i­mate pages.

But the au­to­matic scan is just the first layer. When it flags some­thing as sus­pi­cious or when a user wants to in­ves­ti­gate a page fur­ther, Muninn of­fers a deeper scan that an­a­lyzes a screen­shot of the page. Where the au­to­matic scan is op­ti­mized for pre­ci­sion (keeping false alarms low so it does­n’t dis­rupt your brows­ing), the deep scan is op­ti­mized for cov­er­age. When we ran the full dataset through the deep scan, it caught every sin­gle con­firmed phish­ing site with zero false neg­a­tives. The trade­off is that it flagged all 9 of the le­git­i­mate sites in our dataset as sus­pi­cious, which is worth it when you’re ac­tively in­ves­ti­gat­ing a link you don’t trust. The way to think about it is that the au­to­matic scan is your al­ways-on safety net that stays out of your way, and the deep scan is the cau­tious sec­ond opin­ion that would rather be wrong about a safe page than let a phish­ing page through.
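The precision-versus-coverage tradeoff between the two Muninn scans can be summarized from the counts above (254 phishing and 9 legitimate URLs):

```python
phishing, legit = 254, 9

# Automatic scan: tuned to keep false alarms low
auto_caught, auto_false = 238, 6
print(round(auto_caught / phishing * 100, 1))  # ~93.7% of phishing caught
print(f"{auto_false} of {legit} legitimate pages wrongly flagged")

# Deep scan: tuned for coverage
deep_caught, deep_false = 254, 9
print(round(deep_caught / phishing * 100, 1))  # 100.0% of phishing caught
print(f"{deep_false} of {legit} legitimate pages wrongly flagged")
```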

One of the more in­ter­est­ing find­ings is just how much phish­ing is hosted on plat­forms that most peo­ple would con­sider trust­wor­thy. Of the 254 con­firmed phish­ing sites, 149 were hosted on le­git­i­mate, well-known plat­forms. This works well for at­tack­ers for a sim­ple rea­son: you can’t block­list wee­bly.com or ver­cel.app be­cause mil­lions of le­git­i­mate sites use these plat­forms. Detection has to hap­pen at the in­di­vid­ual page level, which is ex­actly the kind of analy­sis block­lists aren’t built for.

The chart tells the story pretty clearly. Weebly hosted 51 phish­ing sites and GSB caught just 2 of them, a 96% miss rate. Vercel, a plat­form pop­u­lar with de­vel­op­ers, hosted 40 with GSB catch­ing 8. Wix hosted 7 and GSB caught none. IPFS, the de­cen­tral­ized file stor­age pro­to­col, hosted 13 phish­ing sites with GSB catch­ing just 1.

Perhaps the most ab­surd find­ing is that 16 phish­ing sites in our dataset were hosted on Google’s own do­mains. These were Google Docs pre­sen­ta­tions, Google Forms, Google Sites pages, and even Google Apps Script de­ploy­ments, all be­ing used for cre­den­tial har­vest­ing. Not a sin­gle one was flagged by Google Safe Browsing. Google is, quite lit­er­ally, host­ing phish­ing at­tacks on its own in­fra­struc­ture and not catch­ing them with its own de­tec­tion tools.

Looking at which brands at­tack­ers are im­per­son­at­ing gives a sense of where they think the money is.

Microsoft is the most im­per­son­ated brand in our dataset with 28 phish­ing sites, fol­lowed by Google at 21, Netflix at 19, Amazon at 16, and AT&T at 13. The top of this list is­n’t sur­pris­ing as these are some of the most widely used ser­vices on the in­ter­net, which makes them high-value tar­gets for cre­den­tial har­vest­ing.

One that stands out is crypto and DeFi phish­ing. We iden­ti­fied 14 sites tar­get­ing crypto users, im­per­son­at­ing plat­forms like Uniswap, Raydium, pump.fun, Trezor, and MetaMask. This makes sense when you think about it: the crypto ecosys­tem moves fast, new pro­to­cols launch con­stantly, and the tar­get au­di­ence is ac­cus­tomed to in­ter­act­ing with un­fa­mil­iar in­ter­faces and con­nect­ing wal­lets to new sites. It’s a near-per­fect en­vi­ron­ment for phish­ing. If you’re ac­tive in crypto, es­pe­cially in Discord com­mu­ni­ties where links get shared fre­quently, this is worth be­ing aware of.

Beyond the ag­gre­gate data, some of the in­di­vid­ual at­tacks Huginn sur­faced are worth walk­ing through in de­tail be­cause they il­lus­trate how so­phis­ti­cated phish­ing has be­come.

One pat­tern we’re see­ing in­volves a two-stage at­tack that splits the lure from the cre­den­tial har­vest­ing. The first stage pre­sents the vic­tim with what looks like a se­cure doc­u­ment ac­cess page. The user is prompted to en­ter their email to view the doc­u­ment, and once they do, they’re redi­rected to a sec­ond page that pre­sents a Microsoft sign-in flow.

On the sur­face this is a stan­dard cre­den­tial har­vest­ing at­tack, but the tech­niques used to avoid de­tec­tion are worth un­pack­ing. The lure page is hosted on an Amazon S3 bucket with URL com­po­nents like garage­doors­bel­montma or hill-county-of­fice-doc. These are in­nocu­ous-look­ing paths on the ama­zon­aws.com do­main, which means the page sails past most block­lists in­clud­ing GSB. You can’t block ama­zon­aws.com.

The redi­rect takes the vic­tim to a seem­ingly un­re­lated do­main with a gen­er­ated to­ken ei­ther in the path or as a URL pa­ra­me­ter. One ex­am­ple we in­ves­ti­gated would only show the Microsoft lo­gin page on the first visit. Subsequent vis­its with the same to­ken were redi­rected to Wikipedia ar­ti­cles. This is a clever eva­sion tech­nique be­cause it means se­cu­rity re­searchers and au­to­mated scan­ners who re­visit the URL will see a be­nign page. The redi­rect des­ti­na­tion also had anti-bot mea­sures in place, adding an­other layer of dif­fi­culty for au­to­mated de­tec­tion.

The pre­sented Microsoft lo­gin flow is vi­su­ally in­dis­tin­guish­able from the real thing and has the email en­tered on the ini­tial lure page pre­loaded into the form, which adds a layer of au­then­tic­ity.

It might seem odd to split this into two stages when it could be done from a sin­gle page. But the sep­a­ra­tion is de­lib­er­ate. The lure page ex­ists mainly to avoid ini­tial de­tec­tion from email fil­ters, Safe Browsing, and other front-line tools. Hosting it on rep­utable in­fra­struc­ture helps it look rou­tine, and it’s cheap to re­place when it even­tu­ally gets flagged. The sec­ond stage is where the ac­tual phish­ing kit lives: the brand­ing, the track­ing, the bot de­tec­tion, and the end­point that col­lects the cre­den­tials. It’s eas­ier to op­er­ate and ro­tate on in­fra­struc­ture the at­tacker con­trols. The lure is dis­pos­able and light­weight. The real work hap­pens be­hind it.

When we investigated these pages, there were some clear indicators that something was wrong. The biggest one is that the Microsoft login flow isn't hosted on a Microsoft domain. While websites can use Microsoft as an authorization source, this normally involves redirecting to a Microsoft-controlled page and then back to the original site once authorization is complete. That's not what's happening here. Beyond that, none of the secondary interface elements work. "Create a new account," "Sign in options," and "Can't access your account?" all either do nothing when clicked or redirect back to the current page. This is something we see over and over: phishing kits only implement the happy path where the victim enters their credentials without clicking anything else. Finally, the error messages are wrong. We went through a legitimate Microsoft auth flow and recorded the error states (for example, entering a non-existent email) and compared them to what the phishing page displayed. The language didn't match.

Any one of these sig­nals on its own would be sus­pi­cious. Together, they paint a pretty clear pic­ture.

It’s also worth not­ing that when we ran some of these cre­den­tial har­vest­ing sites through VirusTotal, they came back clean. This un­der­scores a point we’ve been mak­ing: you can’t al­ways rely on ex­ist­ing web scan­ning tools to catch these things.

Another at­tack we came across in­volved a page mim­ic­k­ing the Calendly book­ing page of a real Allianz em­ployee. After click­ing to sched­ule, the vic­tim is taken to a fake Google sign-in flow. This kind of at­tack is par­tic­u­larly com­pelling be­cause it can be in­tro­duced as a le­git­i­mate em­ploy­ment op­por­tu­nity or a busi­ness meet­ing, and the fact that it ref­er­ences a real em­ployee adds au­then­tic­ity to the lure.

The first indicator is that Calendly hosts its booking pages on its own domain. The fact that this page was on disposable infrastructure is a giveaway if you know to look for it. The Google sign-in flow has the same tell we noted above: none of the secondary buttons do anything. "Forgot email?" and "Create account" are all non-functional. It's the same pattern of only handling the happy path.

Not everything Huginn finds is credential harvesting. We also came across a car wrapping scam, which is a well-documented scheme where a victim is told they can earn money by having their car wrapped in advertising. The victim fills out a form with their personal information, including their home address, receives a fraudulent check in the mail, and is told to pay a local vendor to do the wrapping. By the time the bank flags the check as fake, the victim has already sent real money to the "vendor." It's a different flavor of phishing but the underlying mechanics are the same: build trust with a plausible story, collect information, and extract value before the victim realizes what happened.

The take­away from all of this is not that Google Safe Browsing is bad. It’s a use­ful tool that pro­tects bil­lions of users and catches a mas­sive vol­ume of known threats. But it has a struc­tural lim­i­ta­tion: it re­lies on URLs be­ing iden­ti­fied and added to a block­list be­fore it can pro­tect you. For novel at­tacks, for phish­ing hosted on trusted plat­forms, and es­pe­cially for at­tacks that use eva­sion tech­niques like one-time to­kens or bot de­tec­tion, there is a gap. That gap is where Muninn op­er­ates.

If you’re in­ter­ested in try­ing Muninn, it’s avail­able as a Chrome ex­ten­sion. We’re in an early phase and would gen­uinely ap­pre­ci­ate feed­back from any­one will­ing to give it a shot. And if you run across phish­ing in the wild, con­sider sub­mit­ting it to Yggdrasil so the data can help pro­tect oth­ers.

...

Read the original on www.norn-labs.com »

9 214 shares, 13 trendiness

YOU JUST REVEIVED

Disclaimer: These are my per­sonal views and do not rep­re­sent any or­ga­ni­za­tion or pro­fes­sional ad­vice.

This post is ded­i­cated to the name­less Vodafone em­ployee watch­ing over me and my fam­ily and bless­ing us with an abun­dance of free min­utes.

My fam­ily and I share a sin­gle mo­bile phone. To be more pre­cise, we share two sim cards which move be­tween a nearly 10 year old Samsung smart­phone and a dumb flip phone de­pend­ing on the pre­sent cir­cum­stances.

The other day we received an SMS from Vodafone. Since it's a prepaid plan, the phone is showered with offers, promotions and gifts enticing us to load more credit. The longer the interval between top-ups, the more messages the phone receives. I thought this message was one of those, but I was wrong. This message was special: "YOU JUST REVEIVED FREE UNLIMITED DATA AND 999999 MINUTES TO ALL FOR 5 DAYS! ENJOY BROWSING WITHOUT LIMITS WITH AN OFFER EXCLUSIVELY FOR YOU! -Vodafone"

Usually, the offers we receive from Vodafone are conditional and require money be spent for them to activate ("SUPER OFFER! 50GB for 30 DAYS WITH 12,70€"). Sure, after a top-up they throw some free things at you, but this message was different. An unprompted and unconditional gift of a million minutes and according to Vodafone —typo and all— just for me!

Before I con­tinue, I will an­swer the ob­vi­ous ques­tion: Did I ac­tu­ally re­ceive 999999 min­utes? Yes, in­deed I did. But un­for­tu­nately, I was only given 7200 min­utes to spend my 999999 min­utes and I could only spend them 1 minute at a time.

At first I thought this mes­sage was sent in er­ror but that did­n’t make sense. If the mes­sages were en­tirely au­to­mated the like­li­hood of the typo ap­pear­ing should be zero. I’ve been re­ceiv­ing mes­sages from Vodafone for years and this is the first time I have seen a typo.

I thought the num­ber was pos­si­bly an un­trans­formed place­holder of some kind but I ac­tu­ally re­ceived the min­utes. Surely if these things are au­to­mated there would be a safe­guard in place to pre­vent such large val­ues?

Could this merely be the hal­lu­ci­na­tions of a LLM? I don’t think so. I doubt the sys­tem send­ing these mes­sages to me for nearly a decade has been re­placed with a large lan­guage model. I don’t know why but this seems un­likely.

My mind then went the only place it pos­si­bly could: Is there a hu­man be­ing in a room some­where far far away send­ing me ALL CAPS gifts and of­fers? Are the mes­sages man­u­ally typed out? Did some­one make a mis­take and ac­ci­den­tally give me a mil­lion min­utes?

Was this of­fer re­ally ex­clu­sive or are there oth­ers out there who also re­ceived it? How many ac­counts does each per­son man­age at once? Do they keep the same ac­counts they are as­signed or is my ac­count moved be­tween han­dlers?

If it is an au­to­mated sys­tem, what cir­cum­stances could cause this mes­sage to oc­cur? Why did I re­ceive it and not some­one else? Is the typo baked into some tem­plate some­where?

It’s all so strange and I doubt I’ll find an­swers but that’s OK. For five days I had a mil­lion min­utes and I was pos­si­bly the first and only Vodafone minute mil­lion­aire.

...

Read the original on dylan.gr »

10 213 shares, 6 trendiness

Google ends its 30 percent app store fee and welcomes third-party app stores

Google is of­fi­cially do­ing away with its 30 per­cent cut of Play Store trans­ac­tions, and rolling out changes to how third-party app stores and al­ter­nate billing sys­tems will be han­dled by Android. Some of these tweaks were pro­posed as part of the set­tle­ment the com­pany reached with Epic in November 2025, but rather than wait for fi­nal ju­di­cial ap­proval, Google is com­mit­ting to re­vamp­ing Android and the Play Store pub­licly.

The biggest change is to how Google will collect fees from developers publishing apps on Android. Rather than take its standard 30 percent cut of in-app purchases through the Play Store, Google is lowering its cut to 20 percent, and in some cases 15 percent for new installs of apps from developers participating in its new App Experience program or updated Google Play Games Level Up program. Those changes extend to subscriptions, too, where the company's cut is lowering to 10 percent. For Google's billing system, the company says developers in the UK, US, or European Economic Area (EEA) will now be charged a five percent fee, and a "market-specific rate" in other regions. Of course, for anyone trying to avoid those fees, using alternatives to Google's billing system is getting easier.
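As a rough sketch of what the headline rates mean for a developer (the tiers are those listed above; eligibility details aren't covered here, so treat the numbers as illustrative):

```python
def developer_take(gross: float, google_cut: float) -> float:
    """Developer revenue left after Google's percentage cut."""
    return gross * (1 - google_cut)

gross = 100.0  # hypothetical $100 of in-app purchase revenue
print(round(developer_take(gross, 0.30), 2))  # 70.0 under the old standard 30% cut
print(round(developer_take(gross, 0.20), 2))  # 80.0 under the new 20% cut
print(round(developer_take(gross, 0.15), 2))  # 85.0 under the 15% program rate
```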

Google says that de­vel­op­ers will be able to of­fer al­ter­na­tive billing sys­tems along­side its own or guide users out­side of their app to their own web­sites for pur­chases.” The setup, as de­scribed by Google, ap­pears to be more per­mis­sive than what Apple set­tled on in 2025. For iOS apps on the App Store, de­vel­op­ers in­ter­ested in avoid­ing Apple’s fees can only di­rect cus­tomers to al­ter­na­tive pay­ment meth­ods on the web through in-app links. Allowing for these out­side trans­ac­tions is part of what prompted Epic to bring Fortnite back to the App Store in the US in May 2025. The de­vel­oper added the app back to the Play Store in the US in December of that year, and Epic CEO Tim Sweeney shared along­side to­day’s changes that Fortnite will soon be avail­able in Google’s app store glob­ally.

Epic is ultimately interested in getting people to use the mobile version of its Epic Games Store, and Google's announcement also includes details on how third-party app stores can come to Android. Third-party app stores will be able to apply to the company's new "Registered App Stores" program to see if they meet "certain quality and safety benchmarks." If they do, they'll be able to take advantage of a streamlined installation interface in Android. Participating in the program is optional, and users will still be able to sideload alternative app stores that aren't part of the program, but Google clearly has a preference. Changes the company plans to make to sideloading later in 2026 could deliberately make the process more difficult, which might force developers to apply to Google's program.

Given the scale of the changes, not all of Google's tweaks will be available everywhere at the same time. Google says that its updated fee structure will come to the EEA, the UK and the US by June 30, Australia by September 30, Korea and Japan by December 31, and the entire world by September 30, 2027. Meanwhile, the company's updated Google Play Games Level Up program and new App Experience program will launch in the EEA, the UK, the US and Australia on September 30, before hitting the remaining regions alongside the updated fee structure. For any developers interested in offering their own app store, Google says it'll launch its Registered App Stores program with "a version of a major Android release" before the end of the year. According to the company, the program will be available in other regions first before it comes to the US.

Google has made changes to how it collects app store fees in the past, the most significant being in 2021, when it lowered its cut to 15 percent on the first $1 million developers earn and to 15 percent on subscriptions. The difference here is that the regulatory scrutiny brought about by Epic's lawsuit against Google and Apple seems to be a key motivator for its changes. Well, that, and an entirely separate business deal the company made with Epic. Google and Epic's settlement served as the basis for these changes, but The Verge reported in January that the companies also agreed to an $800 million joint partnership around "product development and Google using Epic's core technology." Letting developers keep more of their money is ultimately good, but it's a business decision Google felt comfortable making, which likely means it has its own share of upsides.

...

Read the original on www.engadget.com »
