10 interesting stories served every morning and every evening.
One CLI for all of Google Workspace — built for humans and AI agents.
Drive, Gmail, Calendar, and every Workspace API. Zero boilerplate. Structured JSON output. 40+ agent skills included.
npm install -g @googleworkspace/cli
gws doesn’t ship a static list of commands. It reads Google’s own Discovery Service at runtime and builds its entire command surface dynamically. When Google Workspace adds an API endpoint or method, gws picks it up automatically.
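The idea of deriving a command surface from a Discovery document can be sketched roughly as follows. This is an illustrative Python sketch, not gws's actual (Rust) implementation; the document structure mirrors the public Discovery format, and the tiny `discovery_doc` here is a made-up fragment:

```python
# Minimal, assumed-shape fragment of a Google Discovery document.
discovery_doc = {
    "name": "drive",
    "resources": {
        "files": {
            "methods": {
                "list": {"httpMethod": "GET", "path": "files"},
                "create": {"httpMethod": "POST", "path": "files"},
            }
        }
    },
}

def command_surface(doc):
    """Flatten a Discovery document's resources/methods into CLI-style command paths."""
    commands = []

    def walk(resources, prefix):
        for rname, resource in resources.items():
            for mname in resource.get("methods", {}):
                commands.append(f"{prefix} {rname} {mname}")
            # Discovery documents nest sub-resources under a "resources" key
            walk(resource.get("resources", {}), f"{prefix} {rname}")

    walk(doc.get("resources", {}), doc["name"])
    return sorted(commands)

print(command_surface(discovery_doc))
# ['drive files create', 'drive files list']
```

Because the tree is built from the live document, a newly published method would appear as a new command path without any code change.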
* Node.js 18+ — for npm install (or download a pre-built binary from GitHub Releases)
* A Google Cloud project — required for OAuth credentials. You can create one via the Google Cloud Console or with the gcloud CLI or with the gws auth setup command.
npm install -g @googleworkspace/cli
The npm package bundles pre-built native binaries for your OS and architecture. No Rust toolchain required.
Pre-built binaries are also available on the GitHub Releases page.
cargo install --git https://github.com/googleworkspace/cli --locked
A Nix flake is also available at github:googleworkspace/cli
nix run github:googleworkspace/cli
gws auth setup # walks you through Google Cloud project config
gws auth login # subsequent OAuth login
gws drive files list --params '{"pageSize": 5}'
For humans — stop writing curl calls against REST docs. gws gives you --help on every resource, --dry-run to preview requests, and auto-pagination.
For AI agents — every response is structured JSON. Pair it with the included agent skills and your LLM can manage Workspace without custom tooling.
# List the 10 most recent files
gws drive files list --params '{"pageSize": 10}'
# Create a spreadsheet
gws sheets spreadsheets create --json '{"properties": {"title": "Q1 Budget"}}'
# Send a Chat message
gws chat spaces messages create \
  --params '{"parent": "spaces/xyz"}' \
  --json '{"text": "Deploy complete."}' \
  --dry-run
# Introspect any method’s request/response schema
gws schema drive.files.list
# Stream paginated results as NDJSON
gws drive files list --params '{"pageSize": 100}' --page-all | jq -r '.files[].name'
The CLI supports multiple auth workflows so it works on your laptop, in CI, and on a server.
Credentials are encrypted at rest (AES-256-GCM) with the key stored in your OS keyring.
gws auth setup # one-time: creates a Cloud project, enables APIs, logs you in
gws auth login # subsequent scope selection and login
gws auth setup requires the gcloud CLI. If you don’t have gcloud, use the manual setup below instead.
You can authenticate with more than one Google account and switch between them:
gws auth login --account work@corp.com # login and register an account
gws auth login --account personal@gmail.com
gws auth list # list registered accounts
gws auth default work@corp.com # set the default
gws --account personal@gmail.com drive files list # one-off override
export GOOGLE_WORKSPACE_CLI_ACCOUNT=personal@gmail.com # env var override
Credentials are stored per-account in ~/.config/gws/, with an accounts.json registry tracking defaults.
Use this when gws auth setup cannot automate project/client creation, or when you want explicit control.
Download the client JSON and save it to:
gws auth login
You can complete OAuth either manually or with browser automation.
* Agent-assisted flow: the agent opens the URL, selects account, handles consent prompts, and returns control once the localhost callback succeeds.
If consent shows “Google hasn’t verified this app” (testing mode), click Continue. If scope checkboxes appear, select required scopes (or Select all) before continuing.
On the headless machine:
export GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE=/path/to/credentials.json
gws drive files list # just works
Point to your key file; no login needed.
export GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE=/path/to/service-account.json
gws drive files list
export GOOGLE_WORKSPACE_CLI_IMPERSONATED_USER=admin@example.com
Useful when another tool (e.g. gcloud) already mints tokens for your environment.
export GOOGLE_WORKSPACE_CLI_TOKEN=$(gcloud auth print-access-token)
Environment variables can also live in a .env file.
The repo ships 100+ Agent Skills (SKILL.md files) — one for every supported API, plus higher-level helpers for common workflows and 50 curated recipes for Gmail, Drive, Docs, Calendar, and Sheets. See the full Skills Index for the complete list.
# Install all skills at once
npx skills add https://github.com/googleworkspace/cli
# Or pick only what you need
npx skills add https://github.com/googleworkspace/cli/tree/main/skills/gws-drive
npx skills add https://github.com/googleworkspace/cli/tree/main/skills/gws-gmail
Install the extension into the Gemini CLI:
gemini extensions install https://github.com/googleworkspace/cli
Installing this extension gives your Gemini CLI agent direct access to all gws commands and Google Workspace agent skills. Because gws handles its own authentication securely, you simply need to authenticate your terminal once prior to using the agent, and the extension will automatically inherit your credentials.
gws mcp starts a Model Context Protocol server over stdio, exposing Google Workspace APIs as structured tools that any MCP-compatible client (Claude Desktop, Gemini CLI, VS Code, etc.) can call.
gws mcp -s drive # expose Drive tools
gws mcp -s drive,gmail,calendar # expose multiple services
gws mcp -s all # expose all services (many tools!)
{
  "mcpServers": {
    "gws": {
      "command": "gws",
      "args": ["mcp", "-s", "drive,gmail,calendar"]
    }
  }
}
gws drive files create --json '{"name": "report.pdf"}' --upload ./report.pdf
Sheets ranges use !, which bash interprets as history expansion. Always wrap range values in single quotes:
# Read cells A1:C10 from “Sheet1”
gws sheets spreadsheets values get \
  --params '{"spreadsheetId": "SPREADSHEET_ID", "range": "Sheet1!A1:C10"}'
# Append rows
gws sheets spreadsheets values append \
  --params '{"spreadsheetId": "ID", "range": "Sheet1!A1", "valueInputOption": "USER_ENTERED"}' \
  --json '{"values": [["Name", "Score"], ["Alice", 95]]}'
Integrate Google Cloud Model Armor to scan API responses for prompt injection before they reach your agent.
gws gmail users messages get --params '…' \
  --sanitize "projects/P/locations/L/templates/T"
At startup, gws builds a clap::Command tree from the Discovery document's resources and methods.
Your OAuth app is in testing mode and your account is not listed as a test user.
Fix: Open the OAuth consent screen in your GCP project → Test users → Add users → enter your Google account email. Then retry gws auth login.
Expected when your app is in testing mode. Click Advanced → Go to [app name] (unsafe) to proceed. This is safe for personal use; verification is only required to publish the app to other users.
Unverified (testing mode) apps are limited to ~25 OAuth scopes. The recommended scope preset includes many scopes and will exceed this limit.
Fix: Select only the scopes you need:
gws auth login --scopes drive,gmail,calendar
gws auth setup requires the gcloud CLI to automate project creation. You have three options:
Skip gcloud entirely — set up OAuth credentials manually in the Cloud Console
The OAuth client was not created as a Desktop app type. In the Credentials page, delete the existing client, create a new one with type Desktop app, and download the new JSON.
If a required Google API is not enabled for your GCP project, you will see a 403 error with reason accessNotConfigured:
"error": {
...
Read the original on github.com »
Anthropic co-founder and CEO Dario Amodei is not happy — perhaps predictably so — with OpenAI chief Sam Altman. In a memo to staff, reported by The Information, Amodei referred to OpenAI’s dealings with the Department of Defense as “safety theater.”
“The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses,” Amodei wrote.
Last week, Anthropic and the U.S. Department of Defense (DoD) failed to come to an agreement over the military’s request for unrestricted access to the AI company’s technology. Anthropic, which already had a $200 million contract with the military, insisted the DoD affirm that it would not use the company’s AI to enable domestic mass surveillance or autonomous weaponry.
Instead, the DoD — known under the Trump administration as the Department of War — struck a deal with OpenAI. Altman stated that his company’s new defense contract would include protections against the same red lines that Anthropic had asserted.
In a letter to staff, Amodei refers to OpenAI’s messaging as “straight up lies,” stating that Altman is falsely “presenting himself as a peacemaker and dealmaker.”
Amodei might not be speaking solely from a position of bitterness, here. Anthropic specifically took issue with the DoD’s insistence on the company’s AI being available for “any lawful use.” OpenAI said in a blog post that its contract allows use of its AI systems for “all lawful purposes.”
“It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose,” OpenAI’s blog post stated. “We ensured that the fact that it is not covered under lawful use was made explicit in our contract.”
Critics have pointed out that the law is subject to change, and what is considered illegal now might end up being allowed in the future.
And the public seems to be siding with Anthropic. ChatGPT uninstalls jumped 295% after OpenAI made its deal with the DoD.
“I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with the DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!),” Amodei wrote to his staff. “It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees.”
...
Read the original on techcrunch.com »
So it’s no wonder artists would denounce generative AI as mass-plagiarism when it showed up. It’s also no wonder that a bunch of tech entrepreneurs and data janitors wouldn’t understand this at all, and would in fact embrace the plagiarism wholesale, training their models on every pirated shadow library they can get. Or indeed, every code repository out there.
If the output of this is generic, gross and suspicious, there’s a very obvious reason for it. The different training samples in the source material are themselves just slop for the machine. Whatever makes the weights go brrr during training.
This just so happens to create the plausible deniability that makes it impossible to say what’s a citation, what’s a hallucination, and what, if anything, could be considered novel or creative. This is what keeps those shadow libraries illegal, but ChatGPT “legal”.
Labeling AI content as AI generated, or watermarking it, is thus largely an exercise in ass-covering, and not in any way responsible disclosure.
It’s also what provides the fig leaf that allows many a developer to knock-off for early lunch and early dinner every day, while keeping the meter running, without ever questioning whether the intellectual property clauses in their contract still mean anything at all.
This leaves the engineers in question in an awkward spot however. In order for vibe-coding to be acceptable and justifiable, they have to consider their own output disposable, highly uncreative, and not worthy of credit.
If you ask me, no court should have ever rendered a judgement on whether AI output as a category is legal or copyrightable, because none of it is sourced. The judgement simply cannot be made, and AI output should be treated like a forgery unless and until proven otherwise.
The solution to the LLM conundrum is then as obvious as it is elusive: the only way to separate the gold from the slop is for LLMs to perform correct source attribution along with inference.
This wouldn’t just help with the artistic side of things. It would also reveal how much vibe code is merely just copy/pasted from an existing codebase, while conveniently omitting the original author, license and link.
With today’s models, real attribution is a technical impossibility. The fact that an LLM can even mention and cite sources at all is an emergent property of the data that’s been ingested, and the prompt being completed. It can only do so when appropriate according to the current position in the text.
There’s no reason to think that this is generalizable, rather, it is far more likely that LLMs are merely good at citing things that are frequently and correctly cited. It’s citation role-play.
The implications of sourcing-as-a-requirement are vast. What does backpropagation even look like if the weights have to be attributable, and the forward pass auditable? You won’t be able to fit that in an int4, that’s for sure.
Nevertheless, I think this would be quite revealing, as this is what “AI detection tools” are really trying to solve for backwards. It’s crazy that the next big thing after the World Wide Web, and the Google-scale search engine to make use of it, was a technology that cannot tell you where the information comes from, by design. It’s… sloppy.
To stop the machines from lying, they have to cite their sources properly. And spoiler, so do the AI companies.
...
Read the original on acko.net »
Hi, I’m Mark Pilgrim. You may remember me from such classics as “Dive Into Python” and “Universal Character Encoding Detector.” I am the original author of chardet. First off, I would like to thank the current maintainers and everyone who has contributed to and improved this project over the years. Truly a Free Software success story.
However, it has been brought to my attention that, in the release 7.0.0, the maintainers claim to have the right to “relicense” the project. They have no such right; doing so is an explicit violation of the LGPL. Licensed code, when modified, must be released under the same LGPL license. Their claim that it is a “complete rewrite” is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a “clean room” implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights.
I respectfully insist that they revert the project to its original license.
...
Read the original on github.com »
I am not a lawyer, nor am I an expert in copyright law or software licensing. The following post is a breakdown of recent community events and legal news; it should not be taken as legal advice regarding your own projects or dependencies.
In the world of open source, relicensing is notoriously difficult. It usually requires the unanimous consent of every person who has ever contributed a line of code, a feat nearly impossible for legacy projects. chardet, a Python character encoding detector used by requests and many others, has sat in that tension for years: as a port of Mozilla’s C++ code it was bound to the LGPL, making it a gray area for corporate users and a headache for its most famous consumer.
Recently the maintainers used Claude Code to rewrite the whole codebase and release v7.0.0, relicensing from LGPL to MIT in the process. The original author, a2mark, saw this as a potential GPL violation:
Licensed code, when modified, must be released under the same LGPL license. Their claim that it is a “complete rewrite” is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a “clean room” implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights.
* Team A looks at the original code and writes a functional specification
* Team B (which has never seen the original code) writes new code based solely on that specification.

By using an AI that was prompted with the original LGPL code, the maintainers bypassed this wall. If the AI “learned” from the LGPL code to produce the new version, the resulting output is arguably a derivative work, which under the LGPL, must remain LGPL.
Coincidentally, as this drama unfolded, the U.S. Supreme Court (on March 2, 2026) declined to hear an appeal regarding copyrights for AI-generated material. By letting lower court rulings stand, the Court effectively solidified a “Human Authorship” requirement. That creates a massive legal paradox for the chardet maintainers:
* The copyright vacuum: If AI-generated code cannot be copyrighted (as the courts suggest), then the maintainers may not even have the legal standing to license v7.0.0 under MIT or any license.
* The derivative trap: If the AI output is considered a derivative of the original LGPL code, the “rewrite” is a license violation.
* The ownership void: If the code is truly a “new” work created by a machine, it might technically be in the public domain the moment it’s generated, rendering the MIT license moot.
If “AI-rewriting” is accepted as a valid way to change licenses, it represents the end of Copyleft. Any developer could take a GPL-licensed project, feed it into an LLM with the prompt “Rewrite this in a different style,” and release it under MIT. The legal and ethical lines are still being drawn, and the chardet v7.0.0 case is one of the first real-world tests.
...
Read the original on tuananh.net »
Compare the daily energy consumption of different products and activities
Select products to compare energy use through the day
This tool was designed to get a sense of the differences in energy consumption between different products. It’s often difficult to understand whether activities matter a lot or very little for our overall energy consumption.
These numbers represent typical products and usage (specifically for the UK, although it will often generalise elsewhere), and might not reflect your own personal circumstances. If you want to get precise measurements, you will need to use dedicated energy monitoring equipment.
Actual energy consumption will vary a lot depending on factors such as the age and efficiency of the product, how you’re using it (for example, how warm your showers are, or how fast you drive), weather and climate conditions (which is particularly important for the energy usage of heaters and air conditioners).
How to use
Add and remove products or activities in the sidebar to compare them on the chart. Most have the option of adjusting the number of hours used, miles driven, or other units of usage.
This tool was built by Hannah Ritchie, with the help of Claude Code.
All energy consumption values in this tool are measured in watt-hours (Wh), which is the amount of energy consumed over time. The basic formula for calculating energy consumption is: Energy (Wh) = Power (W) × Time (hours).
For example, a 100-watt light bulb used for 2 hours would consume 200 watt-hours of energy.
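The calculation can be sketched in a couple of lines of Python, using the bulb example from the text:

```python
def energy_wh(power_watts: float, hours: float) -> float:
    """Energy (Wh) = power (W) x time (h), assuming a constant power draw."""
    return power_watts * hours

# A 100 W light bulb used for 2 hours consumes 200 Wh
print(energy_wh(100, 2))  # 200.0
```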
Most products on this list are electrical, but energy use for non-electric products (such as petrol car or gas heating) are converted into watt-hour equivalents.
Energy costs are available for a small selection of countries based on their national energy prices (electricity, gas and petrol). This price data is sourced from Eurostat, Ofgem, and the US EIA (based on prices for 2025 or early 2026, depending on availability). Costs reflect average household prices, and don’t reflect dynamic, off-peak or smart tariffs.
Below, I list the assumptions and sources for each product or activity. Again, the actual level of energy consumption will depend on factors such as the specific efficiency of the product, user settings, and climate so these should be interpreted as approximations to give a sense of magnitude.
Traditional incandescent bulbs typically range from 25 to 100 watts, with 60 watts being relatively standard for a household bulb. One hour of use would consume 60 Watt-hours (Wh).
LED bulbs use around 80% less energy than incandescent bulbs for the same amount of light output. A standard LED bulb has an energy rating of around 10 W. Using it for one hour would consume 10 Wh.
Modern smartphones have battery capacities of 3,000-5,000 mAh at approximately 3.7-4.2V, resulting in batteries around 15-20 watt-hours. If we assume there is around 10% to 20% loss due to charging efficiencies, a full charge likely requires around 20 Wh.
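The mAh-to-Wh conversion with a charging-loss adjustment can be sketched as below; the 4,000 mAh / 4.0 V / 15% loss figures are illustrative values picked from the ranges in the text:

```python
def battery_wh(capacity_mah: float, voltage: float) -> float:
    # mAh -> Ah, then Ah x V = Wh of battery capacity
    return capacity_mah / 1000 * voltage

def charge_energy_wh(capacity_wh: float, loss_fraction: float = 0.15) -> float:
    # Wall-side energy needed for a full charge, accounting for charging losses
    return capacity_wh / (1 - loss_fraction)

capacity = battery_wh(4000, 4.0)           # 16 Wh battery
print(round(charge_energy_wh(capacity), 1))  # ~18.8 Wh per full charge
```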
Medium-efficiency TVs (for example, 40-50 inch LED TVs) consume approximately 60 watts during active viewing.
Larger modern TVs (55-60 inches with 4K capability) typically consume 80-100 watts. I’ve gone with 90 watts as a reasonable average.
The power consumption of Apple MacBooks varies depending on the model and what applications users are running.
When doing everyday tasks such as writing emails, word documents, or browsing the internet, they consume around 5 to 15 watts. Streaming video is more like 15 to 20 watts. When doing intensive tasks such as editing photos or video, or gaming a MacBook Pro can reach 80 to 100 watts.
Here I have assumed an average of 20 watts.
Desktop computers vary widely, but more efficient models consume approximately 50 watts. When doing light tasks, this can be a bit lower. Gaming computers can use far more, especially during peak usage (often several hundred watts).
The power consumption of game consoles can vary a lot, depending on the model. The Xbox Series S typically consumes around 70 watts during active gameplay. The Xbox Series X consumes around twice as much: 150 watts.
Game consoles use much less when streaming TV or film, or when in menu mode.
The marginal increase in energy consumption for one hour of streaming is around 0.2 Wh. This comprises just 0.028 Wh from Netflix’s servers themselves, and another 0.18 Wh from transmission and distribution.
To stream video, you need an internet connection, hence a bar for the electricity consumption for Home WiFi is also shown. Note that, for most people, this isn’t actually the marginal increase in energy use for streaming. Most people have their internet running 24/7 regardless; the increase in energy use for streaming is very small by comparison. However, it is shown for completeness.
This does not include the electricity usage of the device (the laptop or TV itself). To get the total for that hour of viewing, combine it with the power usage of whatever device you’re watching it on.
h/t to Chris Preist (University of Bristol) for guidance on this.
YouTube figures are likely similar to Netflix (see above), although they may be slightly higher due to typical streaming patterns and ad delivery. Again, you need to add the power consumption of the device you’re watching on, separately.
WiFi routers typically consume between 10 and 20 watts continuously. Here I’ve assumed 15 watts as a reasonable average.
Recent research estimates that the median ChatGPT query using GPT-4o consumes approximately 0.3 watt-hours of electricity.
Actual electricity consumption varies a lot depending on the length of query and response. More detailed queries — such as Deep Research — will consume more (but there is insufficient public data to confirm how much).
If improved data becomes available on more complex queries, image generation and video, I would like to add them.
E-readers like the Kindle use e-ink displays that consume power primarily when refreshing the page. A typical Kindle device has a battery of around 1000–1700 mAh at ~3.7 V, which is 3.7 to 6 Wh. People report it lasting weeks on a full charge with moderate (30 minutes per day) reading frequency.
That works out to less than 1 Wh per hour. Here I’ve been conservative and have rounded it up to 1 Wh.
Electric kettles typically have power rating between 1500 and 2000 watts. Boiling a full kettle (1.5-1.7 litres) takes around 3 to 4 minutes.
A 2000-watt kettle that takes 3 minutes to boil will consume around 100 watt-hours.
Microwaves typically have a power rating between 800 and 1,200 watts. If we assume 1,000 watts, five minutes of use would consume around 83 Wh (1,000 W × 0.083 h).
Electric ovens can have a power rating ranging from 2,000 to 5,000 watts. A typical one is around 2500 watts.
Once an oven is on and has reached the desired temperature, it typically cycles and runs at around 50% to 60% capacity. I’ve therefore calculated energy consumption as [2,500W × time × 0.55].
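The cycling-adjusted oven calculation from the text can be sketched as:

```python
def oven_energy_wh(rated_watts: float = 2500, hours: float = 1.0,
                   duty_cycle: float = 0.55) -> float:
    # Once at temperature, the element cycles on/off at ~50-60% duty;
    # 0.55 is the mid-point assumed in the text.
    return rated_watts * hours * duty_cycle

print(oven_energy_wh(hours=1.0))  # 1375.0 Wh for an hour at temperature
```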
Gas ovens consume natural gas for heating but also use electricity for ignition and controls (approximately 300-400 watts). When converting the thermal energy from gas combustion to electrical equivalents for comparison purposes, gas ovens typically use slightly more total energy than electric ovens due to combustion inefficiency.
Similar to electric ovens, I have assumed that gas ovens cycle on and off once they’ve reached the desired temperature.
Small air fryers typically operate at 800W to 1500W. Larger models (especially with two trays) can be as much as 2500W. I’ve assumed 1500 watts in these calculations. Once an air fryer is on, it typically cycles and only runs at around 50% to 60% of capacity. Averaged over a cycle, 1000W is likely more realistic.
Ten minutes of use would consume around 167 Wh (1,000 W × 0.167 h).
Induction hobs are efficient, and tend to have a power rating of 1,000W to 2,000W per ring. I’ve assumed 1,500 watts in these calculations. Like air fryers, they’re often not operating at maximum power draw for the full cooking session. 50% is more typical. That means the average power usage is closer to 750W.
Most cooking activities take less time; typically 5 to 10 minutes, which reduces electricity consumption.
Gas hobs convert natural gas to heat. They tend to consume 2 to 2.5-times as much energy as induction hobs to achieve the same heat output. This is because they typically operate at around 40% efficiency, compared to 85% for an electric hob.
If an induction hob has an average rating of 750W over a cooking cycle, the useful heat delivered is 638W (750W * 85% efficiency). To get that useful heat from a gas hob with 40% efficiency would need 1595W (638W / 0.4). Here I’ve assumed an equivalent power input of 1600W.
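The efficiency-matching step above can be sketched numerically, using the text's assumed efficiencies (85% induction, 40% gas):

```python
INDUCTION_EFFICIENCY = 0.85
GAS_EFFICIENCY = 0.40

def gas_equivalent_watts(induction_avg_watts: float = 750) -> float:
    # Heat actually delivered to the pan by the induction hob
    useful_heat = induction_avg_watts * INDUCTION_EFFICIENCY  # ~638 W
    # Gas input needed to deliver the same useful heat
    return useful_heat / GAS_EFFICIENCY

print(round(gas_equivalent_watts()))  # ~1594 W, rounded to 1600 W in the text
```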
A small-to-medium refrigerator (around 130 litres) typically consumes around 100 kWh per year, which equals approximately 275 Wh per day on average.
Standard refrigerator-freezer combinations consume anywhere between 200 and 500 kWh per year. Some very efficient models can achieve less than 200 kWh. Here, I have assumed one consumes 300 kWh per year. That is approximately 822 Wh per day.
Vacuum cleaners typically use 500W to over 1,500W. Popular models in the UK use around 620W or 750W. Here, I have assumed a power rating of 750W. Ten minutes of usage would consume 125 Wh.
Washing machine energy usage varies a lot depending on load size, cycle type and water temperature. An average load in an efficient, modern machine might use 600 Wh to 1,000 Wh per cycle. A large load could use more than 1,500 Wh. Here I have assumed 800 Wh, which is typical for a medium load.
Electric tumble dryers are among the highest energy consumers in the home. Heat pump models are much more efficient than condenser or vented models. A condenser or vented model might consume between 4000 and 5000 Wh per cycle. A heat pump model, around half as much.
Here, I have assumed 4500 Wh for condenser or vented cycles, and 2000 Wh for a heat pump cycle. Actual energy consumption will depend on factors such as load size and user settings.
Most energy in a dishwasher is used for heating the water. They typically use between 1,000 and 1,500 Wh per cycle. Very efficient models can use closer to 500 Wh per cycle. Operating on eco modes will also consume less than 1,000 Wh.
Here, I have assumed 1,250 Wh per cycle, which is fairly average for most users.
Clothes irons typically have an energy rating between 1500W and 3000W. Steam irons are towards the higher end of the range. Here, I have assumed 2500W, which is fairly standard for a steam iron.
Using one for 10 minutes would consume around 417 Wh of energy.
Dehumidifiers can range from as little as a few hundred watts, up to several thousand for large whole-house units.
Here I’ve assumed a medium, portable one with an energy rating of 500W. And a large unit of 1000W.
In humid conditions, or if they’re being used to dry clothes, they will be running at or close to maximum power draw for a long period of time. In fairly low-humidity conditions, they might cycle on and off after a few hours, meaning their energy use drops to 50% to 70% of the maximum.
Hairdryers typically range from 1,000 to 2,000 watts. I have assumed a power rating of 1,750W. Five minutes of use would consume 146 Wh.
Electric showers are high-power appliances, rated between 7,500W to 11,500W. Specific models of 7.2 kW, 7.5 kW, 8.5 kW, 9.5 kW, 10.5 kW, and 11.5 kW are typical.
I have assumed a 9,500W model here. A 10-minute shower at 9,500 watts would consume 1,583 Wh.
An electric shower with hot water sourced from a heat pump will use less electricity.
If we assume a heat pump with a Coefficient of Performance (COP) of 3, producing the same heat output would use around 3,000 Wh per hour. Some very efficient models can achieve less than this; often closer to 2,000 Wh.
If we take the gas equivalent of an electric shower (rated at 9500W) and assume a boiler efficiency of 90%, we get around 10,500W in energy input equivalents. A 10-minute shower would consume 1,759 Wh.
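The three shower variants differ only in how the input energy is supplied; as a rough sketch using the values assumed in the text (9,500 W electric shower, heat-pump COP of 3, 90% efficient gas boiler):

```python
SHOWER_WATTS = 9500  # assumed electric shower rating
MINUTES = 10

def shower_wh(input_watts: float) -> float:
    # Energy used over a 10-minute shower at constant input power
    return input_watts * MINUTES / 60

electric = shower_wh(SHOWER_WATTS)        # direct electric: ~1,583 Wh
heat_pump = shower_wh(SHOWER_WATTS / 3)   # same heat via COP-3 heat pump: ~528 Wh of electricity
gas = shower_wh(SHOWER_WATTS / 0.9)       # same heat via 90% boiler: ~1,759 Wh of gas
```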
Standard fans typically use 30-75 watts, with 50 watts being a reasonable average.
Small portable electric heaters typically range from 400 to 1,000 watts. Here I’ve assumed a wattage of 750W. Using this for one hour would consume 750 Wh.
A medium space heater typically operates at around 1,500 watts (ranging from 1,000 to as much as 3,000 for large ones). That means using one for an hour would consume 1,500 Wh.
Modern air-source heat pumps for single rooms (mini-splits) typically consume 600 to 1,000 Wh of electricity per hour of heating. This would be converted into around 1,800 to 3,000 Wh of heat.
Here we are assuming a Coefficient of Performance (CoP) value of around 3, which means 3 units of heat are generated per unit of electricity input.
These calculations are very sensitive to weather conditions, temperature settings, and the insulation of the house. These values might be typical for a moderate climate (such as the UK) in winter. In slightly warmer conditions, energy usage will be lower. In colder conditions, it would be higher.
The power draw can also be a bit lower than this once the heat pump is running.
Here, I’ve assumed they consume 800Wh of electricity per hour. That would supply 2,400Wh of heat.
We will assume our gas heating needs to supply the same amount of heat as our heat pump: 2,400 Wh.
A gas boiler is around 90% efficient, so the energy input needed would be around 2,700 Wh (2,400 / 0.9).
Again, this is very sensitive to the specific boiler system, climate and heating requirements.
We can’t get a whole house figure by simply multiplying by the number of rooms. Energy consumption will depend a lot on the heat loss and fabric of the house.
In the UK, a 3-bedroom house has an area of around 90m². A building of this size might have a heat loss of around 50 to 100 W/m². We’ll say 75 W/m². That would mean 6,750W of heat is required (90m² * 75 W/m²).
Getting this from a heat pump with a CoP of 3 would consume 2,250Wh of electricity per hour (6750 / 3). This is what I’ve assumed in our calculations. In reality, the consumption is probably lower as energy draw reduces once the heat pump is up and running.
We’ll use the same assumptions as above for a heat pump. We need to supply 6,750W of heat for the house.
Getting this from a 90% efficient boiler would consume 7,500Wh of gas per hour.
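The whole-house arithmetic from the preceding paragraphs can be sketched as follows, using the text's assumed figures (90 m² house, 75 W/m² heat loss, COP of 3, 90% boiler):

```python
AREA_M2 = 90             # typical UK 3-bedroom house
HEAT_LOSS_W_PER_M2 = 75  # assumed mid-range heat loss
COP = 3                  # heat pump coefficient of performance
BOILER_EFFICIENCY = 0.9

heat_needed_w = AREA_M2 * HEAT_LOSS_W_PER_M2         # 6,750 W of heat required
heat_pump_wh_per_hour = heat_needed_w / COP          # ~2,250 Wh of electricity per hour
gas_wh_per_hour = heat_needed_w / BOILER_EFFICIENCY  # ~7,500 Wh of gas per hour
```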
The average household in the UK uses around 31,000Wh of gas per day. That’s equivalent to 4–5 hours of heating (a bit less if their daily total includes a gas shower etc.). In winter, these heating hours will likely be higher, and during the summer, close to zero.
I think 7,500Wh of gas per hour therefore seems reasonable (but very sensitive to a specific household’s circumstances).
Air conditioning units for single rooms typically use 800 to 1,500 watts. I’ve assumed 1,000W in these calculations.
The actual energy usage will be very sensitive to climate conditions. Warmer, and especially humid climates make AC units much less efficient. Running one in a moderate, drier climate would use much less.
They can also consume less energy once they’re up and running, so they’re not always going at maximum power draw.
Electric bicycles typically consume between 10 and 30 watt-hours per mile depending on speed, the cycling conditions, and the level of electric assist. For light assist on flat terrain, it’s around 8 to 12 Wh per mile; for moderate assist, around 12 to 18 Wh; and for heavy assist on hilly terrain it can reach 30 Wh per mile.
I’ve assumed a value of 15 Wh per mile.
Electric scooters typically consume 15-30 watt-hours per mile depending on the model and conditions. Here, I’ve assumed a usage of 25 Wh per mile.
Electric motorbikes typically consume 100 to 250 watt-hours per mile depending on the model, driver weight and conditions. Real-world tests of motorbike efficiency find efficiencies of around 100 Wh per mile for moderate urban driving. People report higher usage when driving at higher speeds or motorway driving.
Here I’ve assumed around 150 Wh per mile.
Petrol motorbikes typically achieve between 50 and 100 miles per gallon. Let’s take an average of 75mpg. A gallon is around 4.5 litres, so 75mpg is equivalent to around 0.06 litres per mile.
The energy content of petrol is around 32 MJ per litre (or 8.9 kWh per litre). That equates to around 0.53 kWh per mile (8.9 kWh per litre * 0.06 litres per mile), so driving one mile uses around 530 Wh.
In terms of energy inputs, this means an electric motorbike is 3 to 4 times as efficient as a petrol one.
Electric vehicles average approximately 0.3 kWh (300 Wh) per mile. However, this can range from 200 to 400 Wh per mile depending on the type of vehicle, driving conditions and speed.
Petrol cars average around 40 miles per gallon (ranging from around 25 to 50).
Taking the same energy content of 8.9 kWh per litre, a UK gallon of petrol contains around 40.5 kWh (4.546 litres per gallon * 8.9 kWh per litre).
This means a petrol car uses around 1 kWh (1,000 Wh) per mile (40.5 kWh / 40 miles). An electric car is therefore around 3 to 4 times more efficient, since it loses far less energy to the engine, heat production, and braking.
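The motorbike and car comparisons both rest on the same conversion from miles per gallon to watt-hours per mile. A short sketch using the rough figures above (these are the article’s assumptions, not measurements, and the helper function is mine):

```python
LITRES_PER_UK_GALLON = 4.546
PETROL_KWH_PER_LITRE = 8.9   # ~32 MJ per litre

def petrol_wh_per_mile(mpg):
    """Convert miles per UK gallon into watt-hours of fuel energy per mile."""
    kwh_per_gallon = LITRES_PER_UK_GALLON * PETROL_KWH_PER_LITRE  # ~40.5 kWh
    return kwh_per_gallon / mpg * 1000

# Motorbikes: ~75 mpg petrol vs ~150 Wh per mile electric.
print(round(petrol_wh_per_mile(75)))           # ~540 Wh per mile
print(round(petrol_wh_per_mile(75) / 150, 1))  # ~3.6x more efficient

# Cars: ~40 mpg petrol vs ~300 Wh per mile electric.
print(round(petrol_wh_per_mile(40)))           # ~1,010 Wh per mile
print(round(petrol_wh_per_mile(40) / 300, 1))  # ~3.4x more efficient
```

Both ratios land in the 3-to-4-times range quoted above; the exact multiple shifts with the assumed mpg and Wh-per-mile figures.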
...
Read the original on hannahritchie.github.io »
It’s 9AM, and you’re ready to upgrade your favorite Linux distribution and its packages to their latest versions. The process goes smoothly, and after a reboot your machine is up to date. You go about your day as usual, and then, when trying to list the contents of a directory, something strange happens: the routinely boring behavior of ls that you’re used to surprises you, and not for the better:
Fortunately, this does not happen… Good software knows the purpose it serves: it does not try to do everything, it knows when to stop, and it knows what to improve.
One of the most counterintuitive lessons, for our maximalist human psyche, is to know the role and place your software fits in, and to decide whether what you want to do next fits with what we nowadays call the “product vision”, or whether it’s just another project, another tool.
For the oldest amongst us, these lessons came from 37signals, the founders of Basecamp (the project management tool), in their books Rework and Getting Real, two I’d recommend (especially Getting Real for product design), whose lessons I could sum up as:
* Constraints are advantages — small teams, tight budgets, and limited scope force better decisions
* Ignore feature requests — don’t build what users ask for; understand the underlying problem instead
* Ship early, ship often — a half-product that’s real beats a perfect product that’s vaporware
* Epicenter design — start with the core interface/interaction, not the edges (nav, footer, etc.)
* Say no by default — every feature has a hidden cost: complexity, maintenance, edge cases
* Scratch your own itch — build something you yourself need; you’ll make better decisions
At a time when MinIO becomes AIStor and even Oracle Database becomes the Oracle AI Database, I think a little reminder is in order: not everything has to change drastically, and being the de facto standard for a given problem has more value than branding yourself as the hot new thing no one expected.
...
Read the original on ogirardot.writizzy.com »
In the introductory post for this blog, we mentioned that Huginn, our active phishing discovery tool, was being used to seed Yggdrasil and that we’d have more to share soon. Well, here it is. This is the first in what we plan to be a monthly series where we share what Huginn has been finding in the wild, break down interesting attacks, and report on how existing detection tools are performing. Our hope is that these posts are useful both as a resource for understanding the current phishing landscape and as a way to demonstrate what our tools can do.
Over the course of February, Huginn processed URLs sourced from public threat intelligence feeds and identified 254 confirmed phishing websites. For each of these, we checked whether Google Safe Browsing (GSB) had flagged the URL at the time of our scan. The results were striking: GSB had flagged just 41 of the 254, meaning 83.9% of confirmed phishing sites were not flagged by the tool that underpins Chrome’s built-in protection at the time we discovered them.
Now, to be fair, some of these may have been flagged later. But that’s kind of the point. Phishing pages are often short-lived by design. The attacker sets up a page, blasts out a campaign, harvests whatever credentials they can, and takes it down before anyone catches on. If the detection comes hours or days after the page goes live, the damage is already done. This is the fundamental limitation of blocklist-based detection: it’s reactive. Something has to be reported and reviewed before protection kicks in.
We also ran the full dataset of 263 URLs (254 phishing, 9 confirmed legitimate) through Muninn’s automatic scan. This is the scan that runs on every page you visit without any action on your part. On its own, the automatic scan correctly identified 238 of the 254 phishing sites and only incorrectly flagged 6 legitimate pages.
But the automatic scan is just the first layer. When it flags something as suspicious or when a user wants to investigate a page further, Muninn offers a deeper scan that analyzes a screenshot of the page. Where the automatic scan is optimized for precision (keeping false alarms low so it doesn’t disrupt your browsing), the deep scan is optimized for coverage. When we ran the full dataset through the deep scan, it caught every single confirmed phishing site with zero false negatives. The tradeoff is that it flagged all 9 of the legitimate sites in our dataset as suspicious, which is worth it when you’re actively investigating a link you don’t trust. The way to think about it is that the automatic scan is your always-on safety net that stays out of your way, and the deep scan is the cautious second opinion that would rather be wrong about a safe page than let a phishing page through.
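The headline numbers above reduce to a few simple rates. A quick sketch using only the counts reported in this post (Python is illustrative and the function name is mine):

```python
# Counts reported in this post: 254 confirmed phishing sites scanned.
TOTAL_PHISHING = 254

def miss_rate(flagged, total=TOTAL_PHISHING):
    """Percentage of confirmed phishing sites a tool failed to flag."""
    return (total - flagged) / total * 100

print(round(miss_rate(41), 1))    # Google Safe Browsing: 83.9% missed
print(round(miss_rate(238), 1))   # Muninn automatic scan: 6.3% missed
print(miss_rate(254))             # Muninn deep scan: 0.0% missed
```

The deep scan’s 0% miss rate comes at the cost of flagging all 9 legitimate sites in the dataset, which is the precision-versus-coverage tradeoff described above.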
One of the more interesting findings is just how much phishing is hosted on platforms that most people would consider trustworthy. Of the 254 confirmed phishing sites, 149 were hosted on legitimate, well-known platforms. This works well for attackers for a simple reason: you can’t blocklist weebly.com or vercel.app because millions of legitimate sites use these platforms. Detection has to happen at the individual page level, which is exactly the kind of analysis blocklists aren’t built for.
The chart tells the story pretty clearly. Weebly hosted 51 phishing sites and GSB caught just 2 of them, a 96% miss rate. Vercel, a platform popular with developers, hosted 40 with GSB catching 8. Wix hosted 7 and GSB caught none. IPFS, the decentralized file storage protocol, hosted 13 phishing sites with GSB catching just 1.
Perhaps the most absurd finding is that 16 phishing sites in our dataset were hosted on Google’s own domains. These were Google Docs presentations, Google Forms, Google Sites pages, and even Google Apps Script deployments, all being used for credential harvesting. Not a single one was flagged by Google Safe Browsing. Google is, quite literally, hosting phishing attacks on its own infrastructure and not catching them with its own detection tools.
Looking at which brands attackers are impersonating gives a sense of where they think the money is.
Microsoft is the most impersonated brand in our dataset with 28 phishing sites, followed by Google at 21, Netflix at 19, Amazon at 16, and AT&T at 13. The top of this list isn’t surprising as these are some of the most widely used services on the internet, which makes them high-value targets for credential harvesting.
One that stands out is crypto and DeFi phishing. We identified 14 sites targeting crypto users, impersonating platforms like Uniswap, Raydium, pump.fun, Trezor, and MetaMask. This makes sense when you think about it: the crypto ecosystem moves fast, new protocols launch constantly, and the target audience is accustomed to interacting with unfamiliar interfaces and connecting wallets to new sites. It’s a near-perfect environment for phishing. If you’re active in crypto, especially in Discord communities where links get shared frequently, this is worth being aware of.
Beyond the aggregate data, some of the individual attacks Huginn surfaced are worth walking through in detail because they illustrate how sophisticated phishing has become.
One pattern we’re seeing involves a two-stage attack that splits the lure from the credential harvesting. The first stage presents the victim with what looks like a secure document access page. The user is prompted to enter their email to view the document, and once they do, they’re redirected to a second page that presents a Microsoft sign-in flow.
On the surface this is a standard credential harvesting attack, but the techniques used to avoid detection are worth unpacking. The lure page is hosted on an Amazon S3 bucket with URL components like garagedoorsbelmontma or hill-county-office-doc. These are innocuous-looking paths on the amazonaws.com domain, which means the page sails past most blocklists including GSB. You can’t block amazonaws.com.
The redirect takes the victim to a seemingly unrelated domain with a generated token either in the path or as a URL parameter. One example we investigated would only show the Microsoft login page on the first visit. Subsequent visits with the same token were redirected to Wikipedia articles. This is a clever evasion technique because it means security researchers and automated scanners who revisit the URL will see a benign page. The redirect destination also had anti-bot measures in place, adding another layer of difficulty for automated detection.
The presented Microsoft login flow is visually indistinguishable from the real thing and has the email entered on the initial lure page preloaded into the form, which adds a layer of authenticity.
It might seem odd to split this into two stages when it could be done from a single page. But the separation is deliberate. The lure page exists mainly to avoid initial detection from email filters, Safe Browsing, and other front-line tools. Hosting it on reputable infrastructure helps it look routine, and it’s cheap to replace when it eventually gets flagged. The second stage is where the actual phishing kit lives: the branding, the tracking, the bot detection, and the endpoint that collects the credentials. It’s easier to operate and rotate on infrastructure the attacker controls. The lure is disposable and lightweight. The real work happens behind it.
When we investigated these pages, there were some clear indicators that something was wrong. The biggest one is that the Microsoft login flow isn’t hosted on a Microsoft domain. While websites can use Microsoft as an authorization source, this normally involves redirecting to a Microsoft-controlled page and then back to the original site once authorization is complete. That’s not what’s happening here. Beyond that, none of the secondary interface elements work. “Create a new account,” “Sign in options,” “Can’t access your account?” all either do nothing when clicked or redirect back to the current page. This is something we see over and over: phishing kits only implement the happy path where the victim enters their credentials without clicking anything else. Finally, the error messages are wrong. We went through a legitimate Microsoft auth flow and recorded the error states (for example, entering a non-existent email) and compared them to what the phishing page displayed. The language didn’t match.
Any one of these signals on its own would be suspicious. Together, they paint a pretty clear picture.
It’s also worth noting that when we ran some of these credential harvesting sites through VirusTotal, they came back clean. This underscores a point we’ve been making: you can’t always rely on existing web scanning tools to catch these things.
Another attack we came across involved a page mimicking the Calendly booking page of a real Allianz employee. After clicking to schedule, the victim is taken to a fake Google sign-in flow. This kind of attack is particularly compelling because it can be introduced as a legitimate employment opportunity or a business meeting, and the fact that it references a real employee adds authenticity to the lure.
The first indicator is that Calendly hosts its booking pages on its own domain. The fact that this page was on disposable infrastructure is a giveaway if you know to look for it. The Google sign-in flow has the same tell we noted above: none of the secondary buttons do anything. “Forgot email?”, “Create account,” all non-functional. It’s the same pattern of only handling the happy path.
Not everything Huginn finds is credential harvesting. We also came across a car wrapping scam, which is a well-documented scheme where a victim is told they can earn money by having their car wrapped in advertising. The victim fills out a form with their personal information including their home address, receives a fraudulent check in the mail, and is told to pay a local vendor to do the wrapping. By the time the bank flags the check as fake, the victim has already sent real money to the “vendor.” It’s a different flavor of phishing but the underlying mechanics are the same: build trust with a plausible story, collect information, and extract value before the victim realizes what happened.
The takeaway from all of this is not that Google Safe Browsing is bad. It’s a useful tool that protects billions of users and catches a massive volume of known threats. But it has a structural limitation: it relies on URLs being identified and added to a blocklist before it can protect you. For novel attacks, for phishing hosted on trusted platforms, and especially for attacks that use evasion techniques like one-time tokens or bot detection, there is a gap. That gap is where Muninn operates.
If you’re interested in trying Muninn, it’s available as a Chrome extension. We’re in an early phase and would genuinely appreciate feedback from anyone willing to give it a shot. And if you run across phishing in the wild, consider submitting it to Yggdrasil so the data can help protect others.
...
Read the original on www.norn-labs.com »
Disclaimer: These are my personal views and do not represent any organization or professional advice.
This post is dedicated to the nameless Vodafone employee watching over me and my family and blessing us with an abundance of free minutes.
My family and I share a single mobile phone. To be more precise, we share two sim cards which move between a nearly 10 year old Samsung smartphone and a dumb flip phone depending on the present circumstances.
The other day we received an SMS from Vodafone. Since the phone is on a prepaid plan, it is showered with offers, promotions, and gifts enticing us to load more credit. The longer the interval between top-ups, the more messages the phone receives. I thought this message was one of those, but I was wrong. This message was special. YOU JUST REVEIVED FREE UNLIMITED DATA AND 999999 MINUTES TO ALL FOR 5 DAYS! ENJOY BROWSING WITHOUT LIMITS WITH AN OFFER EXCLUSIVELY FOR YOU! -Vodafone
Usually, the offers we receive from Vodafone are conditional and require money be spent for them to activate (“SUPER OFFER! 50GB for 30 DAYS WITH 12,70€“). Sure, after a top-up they throw some free things at you but this message was different. An unprompted and unconditional gift of a million minutes and according to Vodafone —typo and all— just for me!
Before I continue, I will answer the obvious question: Did I actually receive 999999 minutes? Yes, indeed I did. But unfortunately, I was only given 7,200 minutes (five days) in which to spend my 999,999 minutes, and I could only spend them 1 minute at a time.
At first I thought this message was sent in error but that didn’t make sense. If the messages were entirely automated the likelihood of the typo appearing should be zero. I’ve been receiving messages from Vodafone for years and this is the first time I have seen a typo.
I thought the number was possibly an untransformed placeholder of some kind but I actually received the minutes. Surely if these things are automated there would be a safeguard in place to prevent such large values?
Could this merely be the hallucinations of a LLM? I don’t think so. I doubt the system sending these messages to me for nearly a decade has been replaced with a large language model. I don’t know why but this seems unlikely.
My mind then went the only place it possibly could: Is there a human being in a room somewhere far far away sending me ALL CAPS gifts and offers? Are the messages manually typed out? Did someone make a mistake and accidentally give me a million minutes?
Was this offer really exclusive or are there others out there who also received it? How many accounts does each person manage at once? Do they keep the same accounts they are assigned or is my account moved between handlers?
If it is an automated system, what circumstances could cause this message to occur? Why did I receive it and not someone else? Is the typo baked into some template somewhere?
It’s all so strange and I doubt I’ll find answers but that’s OK. For five days I had a million minutes and I was possibly the first and only Vodafone minute millionaire.
...
Read the original on dylan.gr »
Google is officially doing away with its 30 percent cut of Play Store transactions, and rolling out changes to how third-party app stores and alternate billing systems will be handled by Android. Some of these tweaks were proposed as part of the settlement the company reached with Epic in November 2025, but rather than wait for final judicial approval, Google is committing to revamping Android and the Play Store publicly.
The biggest change is to how Google will collect fees from developers publishing apps on Android. Rather than take its standard 30 percent cut of in-app purchases through the Play Store, Google is lowering its cut to 20 percent, and in some cases 15 percent for new installs of apps from developers participating in its new App Experience program or updated Google Play Games Level Up program. Those changes extend to subscriptions, too, where the company’s cut is lowering to 10 percent. For Google’s billing system, the company says developers in the UK, US, or European Economic Area (EEA) will now be charged a five percent fee and “a market-specific rate” in other regions. Of course, for anyone trying to avoid those fees, using alternatives to Google’s billing system is getting easier.
Google says that developers will be able to offer alternative billing systems alongside its own or “guide users outside of their app to their own websites for purchases.” The setup, as described by Google, appears to be more permissive than what Apple settled on in 2025. For iOS apps on the App Store, developers interested in avoiding Apple’s fees can only direct customers to alternative payment methods on the web through in-app links. Allowing for these outside transactions is part of what prompted Epic to bring Fortnite back to the App Store in the US in May 2025. The developer added the app back to the Play Store in the US in December of that year, and Epic CEO Tim Sweeney shared alongside today’s changes that Fortnite will soon be available in Google’s app store globally.
Epic is ultimately interested in getting people to use the mobile version of its Epic Games Store, and Google’s announcement also includes details on how third-party app stores can come to Android. Third-party app stores will be able to apply to the company’s new “Registered App Stores” program to see if they meet “certain quality and safety benchmarks.” If they do, they’ll be able to take advantage of a streamlined installation interface in Android. Participating in the program is optional, and users will still be able to sideload alternative app stores that aren’t part of the program, but Google clearly has a preference. Changes the company plans to make to sideloading later in 2026 could deliberately make the process more difficult, which might force developers to apply to Google’s program.
Given the scale of the changes, not all of Google’s tweaks will be available everywhere at the same time. Google says that its updated fee structure will come to the EEA, the UK and the US by June 30, Australia by September 30, Korea and Japan by December 31 and the entire world by September 30, 2027. Meanwhile, the company’s updated Google Play Games Level Up program and new App Experience program will launch in the EEA, the UK, the US and Australia on September 30, before hitting the remaining regions alongside the updated fee structure. For any developers interested in offering their own app store, Google says it’ll launch its Registered App Stores program “with a version of a major Android release” before the end of the year. According to the company, the program will be available in other regions first before it comes to the US.
Google has made changes to how it collects app store fees in the past, the most significant being in 2021, when it lowered its cut to 15 percent on the first $1 million developers earn, and 15 percent on subscriptions. The difference here is that the regulatory scrutiny brought about by Epic’s lawsuit against Google and Apple seems to be a key motivator for its changes. Well, that, and an entirely separate business deal the company made with Epic. Google and Epic’s settlement served as the basis for these changes, but The Verge reported in January that the companies also agreed to an $800 million joint partnership around product development and Google using Epic’s “core technology.” Letting developers keep more of their money is ultimately good, but it’s a business decision Google felt comfortable making, which likely means it has its own share of upsides.
...
Read the original on www.engadget.com »