10 interesting stories served every morning and every evening.
...
Read the original on gptzero.me »
Last year, for the first time, wind and solar supplied more power than fossil fuels to the E.U., according to a new analysis.
The shift is largely due to the rapid expansion of solar energy, which is growing faster than any other source of electricity. Together, wind and solar generated 30 percent of E.U. power last year, while fossil fuels provided 29 percent, according to the analysis from Ember, a think tank based in London. Including hydro, renewables provided nearly half of all E.U. power in 2025.
The analysis finds that solar is making gains in every E.U. country, while coal is broadly in retreat. Last year, solar alone supplied more than 20 percent of power in Hungary, Cyprus, Greece, Spain, and the Netherlands. Meanwhile, in 19 European countries, coal accounted for less than 5 percent of power. In 2025, both Ireland and Finland joined the ranks of European countries that have shuttered their last remaining coal plants.
Warming, however, continues to challenge the shift to clean energy as drought saps hydropower. Last year, hydro output dropped slightly in the E.U., and natural gas power rose to compensate.
“The next priority for the E.U. should be to put a serious dent in reliance on expensive, imported gas,” said Ember analyst Beatrice Petrovich. “Gas not only makes the E.U. more vulnerable to energy blackmail, it’s also driving up prices.”
In parts of Europe, there are signs that increasingly cheap batteries are beginning to displace natural gas in the early evening, when power demand is high, but solar output is waning. Said Petrovich, “As this trend accelerates it could limit how much gas is needed in evening hours, therefore stabilizing prices.”
An E.U. Plan to Slash Micropollutants in Wastewater Is Under Attack
...
Read the original on e360.yale.edu »
Why else would they keep them around for so long?
Every bug is different. But the math is always real.
Think our numbers are wrong? Edit them yourself.
Users Affected × Frequency × Time Per Incident
How many Apple users hit this bug, how often, and how long they suffer each time.
Σ (Workaround Time × Participation Rate)
The extra time spent by people who try to fix what Apple won’t.
Years Unfixed × Pressure Factor
How long Apple has known about this and how urgent the task usually is.
Human Hours Wasted ÷ Engineering Hours to Fix
How many times over Apple could have fixed it with the productivity they’ve destroyed.
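To see the scale these formulas imply, here is a quick sketch of the first and last ones with entirely made-up numbers; as the site says, edit them yourself:
users=1000000; times_per_year=52; minutes_each=3        # hypothetical bug: 1M users, hit weekly, 3 minutes lost each time
wasted_hours=$(( users * times_per_year * minutes_each / 60 ))
echo "$wasted_hours human hours wasted per year"        # 2,600,000
fix_hours=80                                            # hypothetical engineering cost of the fix
echo "could have been fixed $(( wasted_hours / fix_hours ))x over"   # 32,500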
...
Read the original on www.bugsappleloves.com »
API Error: 400 {
  "error": {
    "type": "invalid_request_error",
    "message": "This organization has been disabled."
  }
}
One minute I’m a €220/month “Max 20x” AI “power user” (is this even a thing?). The next, I am a disabled non-person “organization”.
Like a lot of my peers, I was using the Claude Code CLI regularly and trying to understand how far I could go with it on my personal projects. Going wild with ideas and approaches to code that I can now try and validate at a very fast pace. I would run it inside tmux and let it do the work while I went off to do something else.
Until in one of these sessions I got presented with that response.
If you are automating prompts that look like system instructions (e.g. scaffolding context files, or using one Claude to find the errors of another Claude and iterate on its CLAUDE.md), you are walking on a minefield.
My account was banned! No warning and no feedback, just that message saying that my request was invalid because I am a disabled organization.
I wasn’t even doing anything groundbreaking; in fact, I was asking it to tune a tool I use to do project scaffolding.
Yes you read that right: project scaffolding! Probably one of the most boring things you can think of doing!
So I asked Claude to update my scaffolding tool so that it would include a CLAUDE.md file with baked-in instructions for a particular homemade framework (cough, boreDOM, cough).
I was acting as a kind of “human-in-the-loop” middleware for these LLM tools. It was like watching one instance of Claude try to “boss around” another instance of itself, while the platform’s security guards mistook it for a riot.
To help understand this, there are three main characters in this story:
* Me (the soon-to-be disabled organization)
* Claude A (an instance of Claude in a tmux pane)
* Claude B (another instance in another tmux pane)
The loop was like this:
1. The disabled organization (me) asked Claude A to update the scaffold tool with a cool CLAUDE.md.
2. I went on and started a new project with the tool, opened a Claude in there (Claude B), and asked for a complex task to be done.
3. Whenever Claude B made a mistake, I would go to Claude A, paste the error, and say something like “hey, Claude B made this error.”
Two instances of Claude running, while one updated the CLAUDE.md of the other as it made mistakes.
The loop repeated until I was told I was a disabled organization.
I just wanted a standard context file for new projects.
At one point Claude A got somewhat annoyed with Claude B and started shouting! Writing in en-US instead of en-GB, that is: ALL CAPS.
I went to check the file, and it was getting littered with these kinds of instructions, trying to make Claude B do something other than what it would try to do on its own.
My guess is that this likely tripped the “Prompt Injection” heuristics that the non-disabled organization has.
I would love to see the face of that AI when it saw its own “system prompt” language being echoed back to it.
Or I don’t know. This is all just a guess from me.
So I went running to read their docs. What was going on here?
I made an appeal, which was a link to a Google Docs form with a textbox, where I tried to convince some Claude C at the multi-trillion-quadrillion-dollar non-disabled organization that I was not only a human but also a well-intended one.
I got no reply. Not even an automatic response. 0 comms.
So I wrote to their support; this time I wrote the text with the help of an LLM from another non-disabled organization.
I got no reply. Not even an automatic response.
And to think people complain about civil servants, eh? Wait until you have to deal with one of these expensive machines!
After a couple of days I got an e-mail:
Yes, the only e-mail I got was a credit note giving my money back.
It’s like they’re saying “We don’t want to talk to you anymore, here is some hush money”. But hey guys, it is not a conversation if it is one-way only, and here I am talking to a wall.
I didn’t even get to have a “It’s not you, it’s us.” I just got a credit note.
I’m glad this happened with this particular non-disabled organization. Because if this by chance had happened with the other non-disabled organization that also provides such tools… then I would be out of e-mail, photos, documents, and phone OS.
AI moderation is currently a “black box” that prioritizes safety over accuracy to an extreme degree.
If you are automating prompts that look like system instructions (i.e. scaffolding context files), you are walking on a minefield.
I got my €220 back (ouch that’s a lot of money for this kind of service, thanks capitalism). I have reframed the whole scaffolding project, and reverted all the code Claude did there.
Soon I will re-release boreDOM with a new angle and approach, without the help of Claude. I am trying to turn it into a JS framework for LLMs (LLM-first, or even LLM-only; it now has no API), to produce and iterate on those single .html files that these tools are now bringing to the world.
If you want to take a look at the CLAUDE.md that Claude A was making Claude B run with, I committed it and it is available here.
Again to wrap this up: this whole post is just my hypothesis. Claude was not doing anything other than iterating on this file at the moment I got the ban. And I haven’t heard from them about this anymore (or ever).
“you got to understand that these organizations have a lot of users…”
...
Read the original on hugodaniel.com »
Here are a few lines of summarized tcpdump output for an ssh session where I send a single keystroke:
$ ./first_lines_of_pcap.sh single-key.pcap
1 0.000s CLIENT->SERVER 36 bytes
2 0.007s SERVER->CLIENT 564 bytes
3 0.015s CLIENT->SERVER 0 bytes
4 0.015s CLIENT->SERVER 36 bytes
5 0.015s SERVER->CLIENT 36 bytes
6 0.026s CLIENT->SERVER 0 bytes
7 0.036s CLIENT->SERVER 36 bytes
8 0.036s SERVER->CLIENT 36 bytes
9 0.046s CLIENT->SERVER 0 bytes
10 0.059s CLIENT->SERVER 36 bytes
I said a “few” because there are a lot of these lines.
$ ./summarize_pcap.sh single-key.pcap
Total packets: 270
36-byte msgs: 179 packets ( 66.3%) 6444 bytes
Other data: 1 packet ( 0.4%) 564 bytes
TCP ACKs: 90 packets ( 33.3%)
Data sent: 6444 bytes in 36-byte messages, 564 bytes in other data
Ratio: 11.4x more data in 36-byte messages than other data
Data packet rate: ~90 packets/second (avg 11.1 ms between data packets)
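The two helper scripts aren’t included in the post, but stock tools give roughly similar views of the same capture (standard tcpdump/tshark flags):
tcpdump -nn -ttttt -r single-key.pcap | head      # first packets, with timestamps relative to the start
tshark -q -z io,stat,0 -r single-key.pcap         # whole-capture packet and byte totals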
That is a lot of packets for one keypress. What’s going on here? Why do I care?
I am working on a high-performance game that runs over ssh. The TUI for the game is created in bubbletea and sent over ssh via wish.
The game is played in an 80x60 window that I update 10 times a second. I’m targeting at least 2,000 concurrent players, which means updating ~100 million cells a second. I care about performance.
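That target is just multiplication (my back-of-the-envelope check, not the author’s):
echo $(( 80 * 60 * 10 * 2000 ))   # 96,000,000 cell updates per second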
So I have a script that connects a few hundred bots over ssh and has them make a move a second. Then I use go’s outstanding profiling tools to look at what’s going on.
Yesterday I inadvertently broke my test harness. Instead of regularly sending game data, my server sent the bots a single message that said “your screen is too small.” This cut my game’s CPU and bandwidth usage in half.
At first I was disappointed. I (briefly) thought I had a free massive speedup on my hands, but it was actually a testing error.
If I wasn’t sending game data back to my bots, why did CPU usage drop by 50% instead of 100%?
As part of debugging the test harness issue, I used tcpdump to log game traffic with and without the breaking change. Something like:
# The game runs on port 22
timeout 30s tcpdump -i eth0 'port 22' -w with-breaking-change.pcap
# Revert change
timeout 30s tcpdump -i eth0 'port 22' -w without-breaking-change.pcap
Our breaking change stopped us from rendering our game over ssh. So with-breaking-change.pcap contains packets that represent the overhead of each connection without actually rendering the game.
I was debugging this with Claude Code, so I asked it to summarize what it saw in the pcap.
Wanna take a look yourself? I put with-breaking-change.pcap in this directory
Wow! Here’s what I found:
Packet Size Distribution (413,703 total packets):
274,907 packets (66%): Exactly 36 bytes
138,778 packets (34%): 0 bytes (TCP ACKs)
18 packets (<0.01%): other sizes
Further analysis on a smaller pcap pointed to these mysterious packets arriving ~20ms apart.
This was baffling to me (and to Claude Code). We kicked around several ideas like:
* Some quirk of bubbletea or wish
One thing stood out - these exchanges were initiated by my ssh client (stock ssh installed on MacOS) - not by my server.
On a hunch, I took a tcpdump of a regular ssh session.
# on my mac, in one tab
sudo tcpdump -i en0 'port 22'
# on my mac, in another tab
ssh $some_vm_of_mine
I waited for the initial connection chatter to die down, sent one keystroke to my remote vm, and looked at the tcpdump output.
I saw the exact same pattern! What in the world?
Once I realized that this was a property of stock ssh and not my game, debugging got a lot easier.
Running ssh -vvv gave me a pretty good sense of what was going on:
debug3: obfuscate_keystroke_timing: starting: interval ~20ms
debug3: obfuscate_keystroke_timing: stopping: chaff time expired (49 chaff packets sent)
debug3: obfuscate_keystroke_timing: starting: interval ~20ms
debug3: obfuscate_keystroke_timing: stopping: chaff time expired (101 chaff packets sent)
That 20ms is a smoking gun - it lines up perfectly with the mysterious pattern we saw earlier! And the rest of the message is pretty helpful too - we sent 49 “chaff” packets for the first keystroke and around 101 for the second.
In 2023, ssh added keystroke timing obfuscation. The idea is that the speed at which you type different letters betrays some information about which letters you’re typing. So ssh sends lots of “chaff” packets along with your keystrokes to make it hard for an attacker to determine when you’re actually entering keys.
That makes a lot of sense for regular ssh sessions, where privacy is critical. But it’s a lot of overhead for an open-to-the-whole-internet game where latency is critical.
Keystroke obfuscation can be disabled client-side. After reverting my original breaking change, I tried updating my test harness to pass ObscureKeystrokeTiming=no when starting up ssh sessions.
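With a stock OpenSSH client (9.5 or newer, where the option exists) that looks something like this; the hostname is a placeholder:
ssh -o ObscureKeystrokeTiming=no mygame.example.com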
This worked great. CPU usage dropped dramatically and bots still received valid data.
But this is hardly a solution in the real world. I want ssh mygame to Just Work without asking users to pass options that they might not understand.
Claude Code originally didn’t have much faith that we could disable this functionality server-side.
Fortunately, the description I found of SSH keystroke obfuscation made it easy to look up the relevant code in go’s ssh library (which I was transitively depending on).
Log message:
Introduce a transport-level ping facility
This adds a pair of SSH transport protocol messages SSH2_MSG_PING/PONG
to implement a ping capability. These messages use numbers in the “local
extensions” number space and are advertised using a “ping@openssh.com”
ext-info message with a string version number of “0”.
The “chaff” messages that ssh uses to obscure keystrokes are SSH2_MSG_PING messages. And they’re sent to servers that advertise the availability of the ping@openssh.com extension. What if we just…don’t advertise ping@openssh.com?
I searched go’s ssh library for ping@openssh.com and found the commit where support was added. The commit was tiny and seemed very easy to revert.
I cloned the go crypto repo and told Claude to revert this change and update our dependencies to use our clone (go’s replace directive makes forking a library very easy).
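The redirect itself is a one-liner; a sketch with a hypothetical local clone path:
go mod edit -replace golang.org/x/crypto=../crypto   # build against the patched clone instead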
Then I re-ran my test harness. The results were…very good:
Total CPU 29.90% -> 11.64%
Syscalls 3.10s -> 0.66s
Crypto 1.6s -> 0.11s
Bandwidth ~6.5 Mbit/sec -> ~3 Mbit/sec
Obviously forking go’s crypto library is a little scary, and I’m gonna have to do some thinking about how to maintain my little patch in a safe way.
But this is a huge improvement. I’ve spent much of the last week squeezing out small single-digit performance wins. A >50% drop was unimaginable to me.
I’ve been thinking about whether LLMs remove parts of the problem-solving process that I enjoy. But I’ve gotta say, debugging this problem using Claude Code was super fun.
I am familiar enough with tcpdump, tshark, and friends to know what they can do. But I don’t use them regularly enough to be fast with them. Being able to tell an agent “here’s a weird pcap - tell me what’s going on” was really lovely. And by watching commands as the agent ran them I was able to keep my mental model of the problem up to date.
There were still edge cases. At some point in my confusion I switched to ChatGPT and it very confidently told me that my tcpdump output was normal ssh behavior:
And then doubled down when I pushed back:
Similarly, I had to push Claude Code to consider forking go’s ssh library. And I had to make the original leap of “wait…if our test harness was broken, why was usage not 0%?”
When you say “LLMs did not fully solve this problem” some people tend to respond with “you’re holding it wrong!”
I think they’re sometimes right! Interacting with LLMs is a new skill, and it feels pretty weird if you’re used to writing software like it’s 2020. A more talented user of LLMs may have trivially solved this problem.
But the best way to develop a skill is by practicing it. And for me, that means figuring out how to transfer my problem-solving intuitions to the tools that I’m using.
Besides. Being in the loop is fun. How else would I write this post?
...
Read the original on eieio.games »
In 2000, Douglas Adams made an interesting observation that I keep returning to.
A user on Slashdot named “FascDot Killed My Pr” had asked the following question (where HGttG = Hitchhiker’s Guide to the Galaxy):
First, a big thank-you. You’ve made a lasting contribution to “our” culture (or should that be “culture”?)
I first read HGttG in my early teens. I doubled over laughing the whole time. I read and reread the entire series, bought both Dirk Gently books AND Last Chance to See. Loved them all and wouldn’t trade having read them for anything. (btw, the first mental ward scene in Long Dark Teatime is a no-foolin’, all-time classic.)
However, a few years ago I was talking to a (then) classmate. Very smart, philosophy-major type. He said (paraphrased) “I thought that HGttG was depressing. Such nihilism.” At the time I thought “Hmmm…I didn’t SEE a black beret on his head….”. But every reading of the series since then his comment has struck me as more true–especially in the case of Arthur Dent. In fact, far from being funny, I now find Dent’s character depressing–he’s not just a loser, he literally has no control over his life at all (except in So Long for a while). And the control he does have does him no good (e.g. Earth is destroyed while he’s trying to save his house.)
So my question is: When you were writing these books did you feel you were being gaily whimsical or did you instead feel frustrated and cynical?
I suspect there is a cultural divide at work here. In England our heroes tend to be characters who either have, or come to realise that they have, no control over their lives whatsoever — Pilgrim, Gulliver, Hamlet, Paul Pennyfeather (from Decline and Fall), Tony Last (from A Handful of Dust). We celebrate our defeats and our withdrawals — the Battle of Hastings, Dunkirk, almost any given test match. There was a wonderful book published, oh, about twenty years ago I think, by Stephen Pile called the Book of Heroic Failures. It was staggeringly huge bestseller in England and sank with heroic lack of trace in the U. S. Stephen explained this to me by saying that you cannot make jokes about failure in the States. It’s like cancer, it just isn’t funny at any level. In England, though, for some reason it’s the thing we love most. So Arthur may not seem like much of a hero to Americans — he doesn’t have any stock options, he doesn’t have anything to exchange high fives about round the water-cooler. But to the English, he is a hero. Terrible things happen to him, he complains about it a bit quite articulately, so we can really feel it along with him - then calms down and has a cup of tea. My kind of guy!
I’ve hit a certain amount of difficulty over the years in explaining this in Hollywood. I’m often asked ‘Yes, but what are his goals?’ to which I can only respond, well, I think he’d just like all this to stop, really. It’s been a hard sell. I rather miss David Vogel from the film process. He’s the studio executive at Disney who was in charge of the project for a while, but has since departed. There was a big meeting at one time to discuss, amongst other things, Arthur’s heroicness or lack of it. David suddenly asked me ‘Does Arthur’s presence in the proceedings make a difference to the way things turn out?’ to which I said, slightly puzzled, ‘Well, yes.’ David smiled and said ‘Good. Then he’s a hero.’
In the current, latest version of the screenplay, I think that Arthur’s non-heroic heroism is now absolutely preserved, and I’m pleased with the way he works out.
I think I have more to say about this, and will try to come back and add more here, but meanwhile a few things at random:
As a matter of fact, I have read The Book of Heroic Failures (1979) with great enjoyment. (Post from 2011 — I only wrote four sentences of my own, but one of them was “Too many books have been written in praise of competence; this book provides an antidote by celebrating failure as only a British author can.”)
I think he is right that this goes over better (generally speaking) in England than in the USA. Of course one can make jokes mocking failure, but someone who fails does not automatically become endearing (in a kind of everyman way) in America the way they would in England. It seems to me that Americans are more likely to feel either contempt or pity than to feel kinship: or at any rate, they regard the failure as a setback or interesting circumstance, rather than the natural/default state of the world. (As someone who is neither American nor English, I am of course not someone whose opinions you should pay any heed to.)
* As we live our lives, are we merely victims subject to winds of chance and external circumstance, or are we powerful agents fashioning our own stories, making our own luck? Obviously the answer is “both”, but perhaps the most distinctively American trait is to lean more towards the latter.
...
Read the original on shreevatsa.net »
I pay for a lot of small apps. One of them was Wispr Flow for dictation. That’s $14 CAD/month that I was paying until I had a few lazy days visiting my mother. And then on the afternoon of New Year’s Day, I vibecoded Jabber.
Now, don’t get me wrong, Jabber is not “production quality.” I would never sell it as a product or even recommend it to other people, but it does what I needed from Wispr Flow, and it does it exactly the way I want it to. For free.
At work, I’m often asked to make small videos showing some support agent how something works, or sharing some knowledge with new team members, or just a regular demo of something. In the past, I used to use Loom, which costs $15/month. So after creating Jabber, I got excited and vibecoded Reel.
Reel does exactly what I wanted Loom to do: I can record my camera, I can move it around, and I get to trim the video after it’s done (I don’t remember being able to do that with Loom).
Then just yesterday, a friend of mine was telling me how he got tired of paying for Typora and decided to vibecode his own Markdown editor. And that gave me the idea of creating an editor for my blog.
That’s Hugora! Yes, horrible name, but who cares? It’s just for me. I get to edit my Hugo blog just the way I like. It even shows my site theme.
You see the pattern here?
All of these $10/month apps are suddenly a weekend project for me. I’m an engineer, but I have never written a single macOS application. I’ve never even read Swift code in my life, and yet I can now get an app up and running in a couple of hours. This is crazy.
Most standalone apps will be “features, not products” in the long run — easy to copy and bundle into larger offerings.
And I think we’re there. I don’t know what that means for the future of our industry, but it does seem like a big shift.
I’m still skeptical of vibecoding in general. As I mentioned above, I would not trust my vibecoding enough to make these into products. If something goes wrong, I don’t know how to fix it. Maybe my LLM friends can, but I don’t know. But vibecoding is 100% viable for personal stuff like this: we now have apps on demand.
...
Read the original on rselbach.com »
Currently, Gemini CLI restricts IDE integration features to environments where TERM_PROGRAM is vscode (or other hardcoded values). This forces third-party integrations like jetbrains-ide-companion to mock VS Code by spoofing environment variables in order to enable core features; otherwise they cannot be discovered by Gemini CLI.
For some reason, process detection is not working properly on Windows/Linux (reported by users in the JetBrains Plugin Review, in #9273, and in a few other bug report emails I’ve received), which makes this native IDE detection logic a must, so that Gemini CLI can discover and connect to the IDE via environment variables instead of the port info file.
This PR adds the JetBrains IDE series to IDE_DEFINITIONS and updates the detection logic to recognize TERMINAL_EMULATOR=JetBrains-JediTerm as a first-class supported environment.
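To see what the new detection keys on, you can inspect the variable from a terminal inside a JetBrains IDE (value as described above):
echo $TERMINAL_EMULATOR   # prints JetBrains-JediTerm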
...
Read the original on github.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.