10 interesting stories served every morning and every evening.
The topic of the Rust experiment was just discussed at the annual
Maintainers Summit. The consensus among the assembled developers is that
Rust in the kernel is no longer experimental — it is now a core part of the
kernel and is here to stay. So the “experimental” tag will be coming off.
Congratulations are in order for all of the Rust for Linux team.
(Stay tuned for details in our Maintainers Summit coverage.)
...
Read the original on lwn.net »
The HDMI Forum, responsible for the HDMI specification, continues to stonewall open source. Valve’s Steam Machine theoretically supports HDMI 2.1, but the mini-PC is software-limited to HDMI 2.0. As a result, more than 60 frames per second at 4K resolution are only possible with limitations.
In a statement to Ars Technica, a Valve spokesperson confirmed that HDMI 2.1 support is “still a work-in-progress on the software side.” “We’ve been working on trying to unblock things there.”
The Steam Machine uses an AMD Ryzen APU with a Radeon graphics unit. Valve strictly adheres to open-source drivers, but the HDMI Forum is unwilling to disclose the 2.1 specification. According to Valve, they have validated the HDMI 2.1 hardware under Windows to ensure basic functionality.
The restriction imposed by the HDMI Forum was already criticized in early 2024 by an AMD employee responsible for Linux. Even then, according to AMD, they had submitted a functional, HDMI 2.1-compatible driver, which the HDMI Forum rejected.
“Unfortunately, the HDMI Forum rejected our proposal,” it was stated at the time. “At this time an open source HDMI 2.1 implementation is not possible without running afoul of the HDMI Forum requirements.”
Only HDMI 2.1 offers sufficient bandwidth for 120 or 144 Hertz at 3840 × 2160 pixels without compression. Furthermore, this version introduced manufacturer-independent variable refresh rates (HDMI VRR). Valve enables 4K at 120 Hertz using chroma subsampling, a compression technique whose artifacts are particularly noticeable with text. VRR is available in the form of AMD’s FreeSync, which requires compatible displays.
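A rough back-of-the-envelope calculation (a hedged sketch, not from the article; it assumes 8 bits per color channel and ignores blanking intervals and link-encoding overhead) shows why HDMI 2.0 falls short at 4K120:

# Back-of-the-envelope pixel data rate for 4K at 120 Hz, 8-bit RGB.
# Blanking and link-encoding overhead are ignored, so the real
# requirement is even higher than this lower bound.
width, height, refresh_hz, bits_per_pixel = 3840, 2160, 120, 24

raw_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"raw 4K120 pixel data: {raw_gbps:.1f} Gbit/s")  # ~23.9 Gbit/s

hdmi_2_0_payload_gbps = 14.4  # 18 Gbit/s TMDS minus 8b/10b encoding overhead
hdmi_2_1_frl_gbps = 48.0      # HDMI 2.1 Fixed Rate Link maximum
print(raw_gbps > hdmi_2_0_payload_gbps)  # True: HDMI 2.0 needs subsampling or DSC
print(raw_gbps < hdmi_2_1_frl_gbps)      # True: HDMI 2.1 has headroom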
Alternatively, interested buyers can use an active DisplayPort 1.4 to HDMI 2.1 adapter to increase the frame rate without compression. However, these adapters do not officially support VRR. Popular models from Club3D are no longer available; offerings from lesser-known vendors (starting at €35.67) can still be found in price comparisons.
...
Read the original on www.heise.de »
A new toll applied to cars driving in parts of New York City has led to a measurable drop in traffic, and with it, a 22 percent decline in particulate pollution, according to a new study.
Congestion pricing came into effect in January, with cars paying $9 to drive through busy parts of Manhattan during peak hours. In the first six months of the program, traffic in the congestion zone dropped by 11 percent, accidents by 14 percent, and complaints of excessive honking or other noise by 45 percent, officials said.
A new study from Cornell has now tallied the impact on particulate pollution. Particulates emitted from tailpipes can aggravate asthma and heart disease and increase the risk of lung cancer and heart attack. Globally, they are a leading risk factor for premature death.
Analyzing data on air quality, traffic, and weather conditions, researchers determined that in the first half of this year, particulate pollution was down 22 percent in parts of Manhattan affected by congestion pricing.
The decline seen in New York was greater than in other cities with congestion pricing, such as Stockholm and London, researchers note. And the effect extended beyond Lower Manhattan. Pricing led to a drop in pollution across the greater metropolitan area, according to the study, published in the journal npj Clean Air.
“It’s really exciting to me that air quality improved throughout the entire metro area,” said lead author Timothy Fraser, of Cornell University. “This tells us that congestion pricing didn’t simply relocate air pollution to the suburbs by rerouting traffic. Instead, folks are likely choosing cleaner transportation options altogether, like riding public transportation or scheduling deliveries at night. This thins traffic and limits how smog compounds when many cars are on the road.”
...
Read the original on e360.yale.edu »
In September of 2024, Israel blew up booby-trapped pagers belonging to Hezbollah figures in public places in Lebanon, killing 12 people, including two children and two healthcare workers, and injuring 2,800.
The attack was followed by another attack using explosives in walkie-talkies that killed 25 people and injured another 600.
The Associated Press reported that the attacks “wounded many civilians” and that survivors are left “with missing eyes, faces laced with scars, hands with missing fingers”.
The United Nations at the time noted that the attacks “constitute war crimes of murder, attacking civilians, and launching indiscriminate attacks, in addition to violating the right to life,” adding that, “Around 500 people suffered severe eye injuries, including a diplomat. Others suffered grave injuries to their faces, hands and bodies” and that “It is also a war crime to commit violence intended to spread terror among civilians, including to intimidate or deter them from supporting an adversary. A climate of fear now pervades everyday life in Lebanon”.
At the time, when asked about the attacks, former CIA director Leon Panetta said, “I don’t think there’s any question that it’s a form of terrorism”.
Now, a new book quietly reveals that Israel carried out the terrorist attack with the help of the AI surveillance firm Palantir, led by Alex Karp and Peter Thiel.
In the new biography of Palantir co-founder Alex Karp, “The Philosopher in the Valley: Alex Karp, Palantir, and the Rise of the Surveillance State,” by New York Times journalist Michael Steinberger, he writes that prior to the genocide in Gaza, “the Mossad had been using Palantir technology,” adding that the Shin Bet and IDF “sought to obtain Palantir’s software in the wake of October 7th”.
He goes on to write that, “The demand for Palantir’s assistance was so great that the company dispatched a team of engineers from London to help get Israeli users online,” adding, “Palantir ended up having to rent a second-floor building that housed its Tel Aviv office, to accommodate the intelligence analysts who needed tutorials”.
Revealing what Israel used the AI-powered software for, Michael Steinberger notes, “Its software was used by the Israeli military in several raids in Gaza” and goes on to write that, “The company’s technology was deployed by the Israelis during military operations in Lebanon in 2024 that decimated Hezbollah’s top leadership” adding that, “It was also used in Operation Grim Beeper, in which hundreds of Hezbollah fighters were injured and maimed when their pagers and walkie-talkies exploded (the Israelis had booby trapped the devices)”.
Francesca Albanese, the United Nations’ Special Rapporteur on the situation of human rights in the Palestinian Territory, occupied since 1967, documented Palantir’s role in the genocide in Gaza, noting, “In January 2024, Palantir announced a new strategic partnership with Israel and held a board meeting in Tel Aviv “in solidarity”; in April 2025, Palantir’s Chief Executive Officer responded to accusations that Palantir had killed Palestinians in Gaza by saying, ‘mostly terrorists, that’s true’. Both incidents are indicative of executive-level knowledge and purpose vis-à-vis the unlawful use of force by Israel, and failure to prevent such acts or withdraw involvement.”
Now it is revealed that the AI software was used in Israel’s terrorist attack in Lebanon as well.
In a recent interview, the former head of the Israeli Mossad, Yossi Cohen, revealed that Israel has similar “booby-trapped and spy-manipulated equipment” in “all the countries you can imagine”.
The fact that a company as influential as Palantir was involved in the terrorist attacks makes these comments even more concerning.
...
Read the original on the307.substack.com »
Chinese artificial intelligence startup DeepSeek has relied on Nvidia Corp. chips that are banned in the country to develop an upcoming AI model, according to a new report in The Information.
Nvidia’s Blackwell chips were smuggled into China through countries that permitted their sale, The Information reported, citing unnamed sources. More specifically, DeepSeek tapped chips that were installed in data centers in unspecified countries, then dismantled and shipped to China after clearing inspection by companies developing server equipment, The Information said.
The US bans the sale of these advanced semiconductors to China, which has led AI developers there to access the hardware through data centers located outside the mainland or through subterfuge. In November, US prosecutors charged two Chinese nationals and two US citizens with a scheme to ship chips to China by way of Malaysia using a fake real estate business.
A representative for DeepSeek didn’t immediately respond to a request for comment.
DeepSeek drew global attention in January when it debuted an AI model that was competitive with Silicon Valley’s best and said it had built it at a fraction of the cost. The startup was funded by the Chinese hedge fund High-Flyer, which had amassed 10,000 Nvidia GPUs in 2021, prior to US bans on exports of sophisticated Nvidia chips and other graphics processing units.
Earlier this week, President Donald Trump granted Nvidia permission to ship to China an older version of its AI accelerators, the H200. An export ban on its more powerful Blackwell version remains in place.
Beijing has meanwhile pushed Chinese technology companies to rely on domestic equipment to develop AI. DeepSeek released a new model in September and indicated that it was working with Chinese chipmakers on the model.
Nvidia told The Information that it hasn’t seen “any substantiation or received tips” about smuggling through data centers outside of China.
...
Read the original on finance.yahoo.com »
Yesterday I stumbled on this HN thread Show HN: Gemini Pro 3 hallucinates the HN front page 10 years from now, where Gemini 3 was hallucinating the frontpage of 10 years from now. One of the comments struck me a bit more though - Bjartr linked to the HN frontpage from exactly 10 years ago, i.e. December 2015. I was reading through the discussions of 10 years ago and mentally grading them for prescience when I realized that an LLM might actually be a lot better at this task. I copy pasted one of the article+comment threads manually into ChatGPT 5.1 Thinking and it gave me a beautiful analysis of what people thought + what actually happened in retrospect, even better and significantly more detailed than what I was doing manually. I realized that this task is actually a really good fit for LLMs and I was looking for excuses to vibe code something with the newly released Opus 4.5, so I got to work. I’m going to get all the front pages of December (31 days, 30 articles per day), get ChatGPT 5.1 Thinking to do the analysis, and present everything in a nice way for historical reading.
There are two macro reasons for why I think the exercise is interesting more generally:
I believe it is quite possible and desirable to train your forward future predictor given training and effort.
I was reminded again of my tweets that said “Be good, future LLMs are watching”. You can take that in many directions, but here I want to focus on the idea that future LLMs are watching. Everything we do today might be scrutinized in great detail in the future because doing so will be “free”. A lot of the ways people behave currently I think make an implicit “security by obscurity” assumption. But if intelligence really does become too cheap to meter, it will become possible to do a perfect reconstruction and synthesis of everything. LLMs are watching (or humans using them might be). Best to be good.
Vibe coding the actual project was relatively painless and took about 3 hours with Opus 4.5, with a few hiccups but overall very impressive. The repository is on GitHub here: karpathy/hn-time-capsule. Here is the progression of what the code does:
* Given a date, download the frontpage of 30 articles
* For each article, download/parse the article itself and the full comment thread using the Algolia API (a rough sketch of this step appears after the list below).
* Package up everything into a markdown prompt asking for the analysis. Here is the prompt prefix I used:
The following is an article that appeared on Hacker News 10 years ago, and the discussion thread.
Let’s use our benefit of hindsight now in 6 sections:
1. Give a brief summary of the article and the discussion thread.
2. What ended up happening to this topic? (research the topic briefly and write a summary)
3. Give out awards for “Most prescient” and “Most wrong” comments, considering what happened.
4. Mention any other fun or notable aspects of the article or discussion.
5. Give out grades to specific people for their comments, considering what happened.
6. At the end, give a final score (from 0-10) for how interesting this article and its retrospect analysis was.
As for the format of Section 5, use the header “Final grades” and follow it with simply an unordered list of people and their grades in the format of “name: grade (optional comment)”. Here is an example:
Final grades
- speckx: A+ (excellent predictions on …)
- tosh: A (correctly predicted this or that …)
- keepamovin: A
- bgwalter: D
- fsflover: F (completely wrong on …)
Your list may contain more people of course than just this toy example. Please follow the format exactly because I will be parsing it programmatically. The idea is that I will accumulate the grades for each account to identify the accounts that were over long periods of time the most prescient or the most wrong.
As for the format of Section 6, use the prefix “Article hindsight analysis interestingness score:” and then the score (0-10) as a number. Give high scores to articles/discussions that are prominent, notable, or interesting in retrospect. Give low scores in cases where few predictions are made, or the topic is very niche or obscure, or the discussion is not very interesting in retrospect.
Here is an example:
Article hindsight analysis interestingness score: 8
* Submit prompt to GPT 5.1 Thinking via the OpenAI API
* Render the results into static HTML web pages for easy viewing
* Host the html result pages on my website: https://karpathy.ai/hncapsule/
* Host all the intermediate results of the data directory if someone else would like to play. It’s the file data.zip under the exact same url prefix (intentionally avoiding a direct link).
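As a rough illustration of the comment-fetching step (a sketch, not the repo’s actual code; it uses the public Algolia HN API’s items endpoint and the field names that endpoint documents):

# Minimal sketch: fetch a story plus its nested comment tree from the
# public Algolia Hacker News API, then flatten it into text lines that
# can be packaged into the markdown prompt.
import requests

def fetch_item_tree(story_id: int) -> dict:
    # /api/v1/items/<id> returns the item with nested "children" comments.
    resp = requests.get(f"https://hn.algolia.com/api/v1/items/{story_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()

def flatten_comments(item: dict, depth: int = 0) -> list[str]:
    lines = []
    for child in item.get("children", []):
        if child.get("text"):
            lines.append("  " * depth + f"{child.get('author', '?')}: {child['text']}")
        lines.extend(flatten_comments(child, depth + 1))
    return lines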
I spent a few hours browsing around and found it to be very interesting. A few example threads just for fun:
And then when you navigate over to the Hall of Fame, you can find the top commenters of Hacker News in December 2015, sorted by imdb-style score of their grade point average. In particular, congratulations to pcwalton, tptacek, paulmd, cstross, greglindahl, moxie, hannob, 0xcde4c3db, Manishearth, johncolanduoni - GPT 5.1 Thinking found your comments very insightful and prescient. You can also scroll all the way down to find the noise of HN, which I think we’re all familiar with too :)
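For the curious, here is a minimal sketch of the grade bookkeeping (not the repo’s actual code: the regex simply follows the “name: grade (optional comment)” format the prompt mandates, and the grade-point mapping plus the prior weight are illustrative assumptions):

# Parse "Final grades" lines of the form "- name: grade (optional comment)"
# and rank users with an IMDb-style weighted average, so accounts with only
# one or two graded comments get pulled toward the global mean.
import re
from collections import defaultdict

GRADE_RE = re.compile(r"^-\s*(\S+):\s*([A-F][+-]?)(?:\s*\((.*)\))?\s*$")
GRADE_POINTS = {"A+": 4.3, "A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0,
                "B-": 2.7, "C+": 2.3, "C": 2.0, "D": 1.0, "F": 0.0}

def parse_final_grades(section: str) -> dict[str, list[float]]:
    grades: dict[str, list[float]] = defaultdict(list)
    for line in section.splitlines():
        m = GRADE_RE.match(line.strip())
        if m and m.group(2) in GRADE_POINTS:
            grades[m.group(1)].append(GRADE_POINTS[m.group(2)])
    return grades

def weighted_score(points: list[float], global_mean: float, m: float = 5.0) -> float:
    # Classic IMDb formula: v/(v+m)*R + m/(v+m)*C, where R is the user's
    # average grade, v their number of grades, and C the global mean.
    v, r = len(points), sum(points) / len(points)
    return (v / (v + m)) * r + (m / (v + m)) * global_mean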
My code (wait, Opus’ code?) on GitHub can be used to reproduce or tweak the results. Running 31 days of 30 articles through GPT 5.1 Thinking meant 31 * 30 = 930 LLM queries, cost about $58, and took somewhere around ~1 hour. The LLM megaminds of the future might find this kind of a thing a lot easier, a lot faster and a lot cheaper.
...
Read the original on karpathy.bearblog.dev »
It’s not every day a video codec wins an Emmy. But yesterday, the Television Academy honored the AV1 specification with a Technology & Engineering Emmy Award, recognizing its impact on how the world delivers video content.
Through the mid-2010s, video codecs were an invisible tax on the web, built on a closed licensing system with expensive, unpredictable fees. Most videos online relied on the H.264 codec, which open-source projects like Firefox could only support without paying MPEG LA license fees thanks to Cisco’s open-source OpenH264 module.
Especially as demand for video grew, the web needed a next-generation codec to make high-quality streaming faster and more reliable. H.265 promised efficiency gains, but there was no guarantee of another OpenH264-style arrangement. The risk was another fragmented ecosystem where browsers like Firefox couldn’t play large portions of the web’s video.
To solve this, Mozilla joined other technical leaders to form the Alliance for Open Media (AOM) in 2015 and started ambitious work on a next-generation codec built from Google’s VP9, Mozilla’s Daala, and Cisco’s Thor.
The result was AV1, released in 2018, which delivered top-tier compression as an open standard under a royalty-free patent policy. It’s now widely deployed across the streaming ecosystem, including hardware decoders and optimized software decoders that allow open-source browsers like Firefox to provide state-of-the-art video compression to all users across the web.
AV1 is also the foundation for the image format AVIF, which is deployed across browsers and provides excellent compression for still and animated images (AVIF is based on a video codec, after all).
The Emmy award reflects the value of open standards, open-source software, and the sustained work by AOM participants and the broader community fighting for an open web.
AV1 fixed a structural problem in the ecosystem at the time, but the work isn’t finished. Video demand keeps rising, and the next generation of open codecs must remain competitive.
AOMedia is working on the upcoming release of AV2. It will feature meaningfully better compression than AV1, much higher efficiency for screen/graphical content, alpha channel support, and more.
As AV2 arrives, our goal remains unchanged: make video on the web open, efficient, and accessible to everyone.
...
Read the original on blog.mozilla.org »
There’s an old compiler-building tutorial that has become part of the field’s lore: the Let’s Build a Compiler series by Jack Crenshaw (published between 1988 and 1995). I ran into it in 2003 and was very impressed, but it’s now 2025 and this tutorial is still being mentioned quite often in Hacker News threads. Why is that? Why does a tutorial from 35 years ago, built in Pascal and emitting Motorola 68000 assembly - technologies that are virtually unknown to the new generation of programmers - hold sway over compiler enthusiasts? I’ve decided to find out.
The tutorial is easily available and readable online, but just re-reading it seemed insufficient. So I decided to meticulously translate the compilers built in it to Python and emit a more modern target - WebAssembly. It was an enjoyable process and I want to share the outcome and some insights gained along the way.
The result is this code repository. Of particular interest is the TUTORIAL.md file, which describes how each part in the original tutorial is mapped to my code. So if you want to read the original tutorial but play with code you can actually easily try on your own, feel free to follow my path.
To get a taste of the input language being compiled and the output my compiler generates, here’s a sample program in the KISS language designed by Jack Crenshaw:
var X=0

{ sum from 0 to n-1 inclusive, and add to result }
procedure addseq(n, ref result)
    var i, sum { 0 initialized }
    while i < n
        sum = sum + i
        i = i + 1
    end
    result = result + sum
end

program testprog
begin
    addseq(11, X)
end
It’s from part 13 of the tutorial, so it showcases procedures along with control constructs like the loop, and passing parameters both by value and by reference. Here’s the WASM text generated by my compiler for part 13:
You’ll notice that there is some trickiness in the emitted code w.r.t. handling the by-reference parameter (my previous post deals with this issue in more detail). In general, though, the emitted code is inefficient - there is close to 0 optimization applied.
Also, if you’re very diligent you’ll notice something odd about the global variable - it seems to be implicitly returned by the generated function. This is just a testing facility that makes my compiler easy to test. All the compilers are extensively tested - usually by running the generated WASM code and verifying expected results.
Insights - what makes this tutorial so special?
While reading the original tutorial again, I had an opportunity to reminisce on what makes it so effective. Other than the very fluent and conversational writing style of Jack Crenshaw, I think it’s a combination of two key factors:
1. The tutorial builds a recursive-descent parser step by step, rather than giving a long preface on automata and table-based parser generators. When I first encountered it (in 2003), it was taken for granted that if you want to write a parser then lex + yacc are the way to go. Following the development of a simple and clean hand-written parser was a revelation that wholly changed my approach to the subject; subsequently, hand-written recursive-descent parsers have been my go-to approach for almost 20 years now (a tiny generic sketch of this style appears below).
2. Rather than getting stuck in front-end minutiae, the tutorial goes straight to generating working assembly code, from very early on. This was also a breath of fresh air for engineers who grew up with more traditional courses where you spend 90% of the time on parsing, type checking and other semantic analysis and often run entirely out of steam by the time code generation is taught.
To be honest, I don’t think either of these are a big problem with modern resources, but back in the day the tutorial clearly hit the right nerve with many people.
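To make the first point concrete, here is a tiny generic example of the hand-written style (my own illustration for this post, not code from the tutorial or from my port): a recursive-descent parser for +/* expressions that emits stack-machine instructions while it parses, one function per grammar rule.

# Grammar:  expr -> term ('+' term)*     term -> factor ('*' factor)*
#           factor -> NUMBER
# Code is emitted as soon as each construct is recognized - no AST, no IR.
def compile_expr(text: str) -> list[str]:
    toks = text.replace("+", " + ").replace("*", " * ").split()
    pos = 0
    out: list[str] = []

    def peek():
        return toks[pos] if pos < len(toks) else None

    def eat():
        nonlocal pos
        tok = toks[pos]
        pos += 1
        return tok

    def factor():
        out.append(f"i32.const {eat()}")   # emit the number immediately

    def term():
        factor()
        while peek() == "*":
            eat()
            factor()
            out.append("i32.mul")          # emit right after the operand

    def expr():
        term()
        while peek() == "+":
            eat()
            term()
            out.append("i32.add")

    expr()
    return out

print(compile_expr("2 + 3 * 4"))
# ['i32.const 2', 'i32.const 3', 'i32.const 4', 'i32.mul', 'i32.add']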
What else does it teach us?
Jack Crenshaw’s tutorial takes the syntax-directed translation approach, where code is emitted while parsing, without having to divide the compiler into explicit phases with IRs. As I said above, this is a fantastic approach for getting started, but in the latter parts of the tutorial it starts showing its limitations. Especially once we get to types, it becomes painfully obvious that it would be very nice if we knew the types of expressions before we generate code for them.
I don’t know if this is implicated in Jack Crenshaw’s abandoning the tutorial at some point after part 14, but it may very well be. He keeps writing how the emitted code is clearly sub-optimal and can be improved, but IMHO it’s just not that easy to improve using the syntax-directed translation strategy. With perfect hindsight vision, I would probably use Part 14 (types) as a turning point - emitting some kind of AST from the parser and then doing simple type checking and analysis on that AST prior to generating code from it.
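As a generic sketch of what that turning point could look like (again an illustration, not the repo’s code): the parser returns small AST nodes, a separate pass can annotate them - with types, for instance - and only then is code emitted.

# Sketch of the AST-first alternative: build nodes first, analyze them
# (e.g. attach types) in a separate pass, and emit code last.
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class BinOp:
    op: str
    left: object
    right: object

def emit(node) -> list[str]:
    if isinstance(node, Num):
        return [f"i32.const {node.value}"]
    ops = {"+": "i32.add", "*": "i32.mul"}
    return emit(node.left) + emit(node.right) + [ops[node.op]]

# A parser would produce e.g. BinOp("+", Num(2), BinOp("*", Num(3), Num(4))),
# which a type-checking pass could walk before emit() ever runs.
print(emit(BinOp("+", Num(2), Num(5))))   # ['i32.const 2', 'i32.const 5', 'i32.add']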
All in all, the original tutorial remains a wonderfully readable introduction to building compilers. This post and the GitHub repository it describes are a modest contribution that aims to improve the experience of folks who are reading the original tutorial today but aren’t willing to use obsolete technologies. As always, let me know if you run into any issues or have questions!
...
Read the original on eli.thegreenplace.net »
Last week, I started working on a new side-project. It’s a standard React app partly made up of run-of-the-mill CRUD views—a perfect fit for LLM-assisted programming. I reasoned that if I could get an LLM to quickly write the boring code for me, I’d have more time to focus on the interesting problems I wanted to solve.
I’ve pretty much settled on Claude Code as my coding assistant of choice, but I’d been hearing great things about Google’s Gemini 3 Pro. Despite my aversion to Google products, I decided to try it out on my new codebase.
I already had Gemini CLI installed, but that only gave me access to Gemini 2.5 with rate limits. I wanted to try out Gemini 3 Pro, and I wanted to avoid being rate limited. I had some spare cash to burn on this experiment, so I went looking for ways to pay for a Gemini Pro plan, if such a thing existed.
Thus began my grand adventure in trying to give Google my money.
The name “Gemini” is so overloaded that it barely means anything. Based on the context, Gemini could refer to:
To make things even more confusing, Google has at least three different products just for agentic coding: Gemini Code Assist (Gemini CLI is a part of this suite of products), Jules, and Antigravity.
And then there’s a bunch of other GenAI stuff that is powered by Gemini but doesn’t have the word Gemini in the name: Vertex AI Platform, Google AI Studio, NotebookLM, and who knows what else.
I just wanted to plug my credit card information into a form and get access to a coding assistant. Instead, I was dunked into an alphabet soup of products that all seemed to do similar things and, crucially, didn’t have any giant “Buy Now!” buttons for me to click.
In contrast, both Anthropic and OpenAI have two primary ways you can access their products: via their consumer offerings at claude.ai and chatgpt.com respectively, or via API credits that you can buy through their respective developer consoles. In each case, there is a form field where you can plug in your credit card details, and a big, friendly “Buy Now!” button to click.
After half an hour of searching the web, I did the obvious thing and asked the free version of Gemini (the chatbot, not one of those other Geminis) what to do:
How do I pay for the pro version of Gemini so i can use it in the terminal for writing code? I specifically want to use the Gemini 3 Pro model.
It thought for a suspiciously long time and told me that Gemini 3 Pro required a developer API key to use. Since the new model is still in preview, it’s not yet available on any of the consumer plans. When I asked follow-up questions about pricing, it told me that “Something went wrong”. Which translates to: we broke something, but we won’t tell you how to fix it.
So I asked Claude for help. Between the two LLMs, I was able to figure out how to create an API key for the Gemini I wanted.
Google AI Studio is supposed to be the all-in-one dashboard for Google’s generative AI models. This is where you can experiment with model parameters, manage API keys, view logs, and manage billing for your projects.
I logged into Google AI Studio and created a new API key. This part was pretty straightforward: I followed the on-screen instructions and had a fresh new key housed under a project in a few seconds. I then verified that my key was working with Gemini CLI.
It worked! Now all that was left to do was to purchase some API credits. Back in Google AI Studio, I saw a link titled “Set up billing” next to my key. It looked promising, so I clicked it.
That’s where the fun really began.
The “Set up billing” link kicked me out of Google AI Studio and into Google Cloud Console, and my heart sank. Every time I’ve logged into Google Cloud Console or AWS, I’ve wasted hours upon hours reading outdated documentation, gazing in despair at graphs that make no sense, going around in circles from dashboard to dashboard, and feeling a strong desire to attain freedom from this mortal coil.
Turns out I can’t just put $100 into my Gemini account. Instead, I must first create a Billing Account. After I’ve done that, I must associate it with a project. Then I’m allowed to add a payment method to the Billing Account. And then, if I’m lucky, my API key will turn into a paid API key with Gemini Pro privileges.
So I did the thing. The whole song and dance. Including the mandatory two-factor OTP verification that every Indian credit card requires. At the end of the process, I was greeted with a popup telling me I had to verify my payment method before I’d be allowed to use it.
Wait. Didn’t I just verify my payment method? When I entered the OTP from my bank?
Nope, turns out Google hungers for more data. Who’d have thunk it?
To verify my payment method for reals, I had to send Google a picture of my government-issued ID and the credit card I’d just associated with my Billing Account. I had to ensure all the numbers on my credit card were redacted by manually placing black bars on top of them in an image editor, leaving only my name and the last four digits of the credit card number visible.
This felt unnecessarily intrusive. But by this point, I was too deep in the process to quit. I was invested. I needed my Gemini 3 Pro, and I was willing to pay any price.
The upload form for the government ID rejected my upload twice before it finally accepted it. It was the same exact ID every single time, just in different file formats. It wanted a PNG file. Not a JPG file, nor a PDF file, but a PNG file. Did the upload form mention that in the instructions? Of course not.
After jumping through all these hoops, I received an email from Google telling me that my verification will be completed in a few days.
A few days? Nothing to do but wait, I suppose.
At this point, I closed all my open Cloud Console tabs and went back to work. But when I was fifteen minutes into writing some code by hand like a Neanderthal, I received a second email from Google telling me that my verification was complete.
So for the tenth time that day, I navigated to AI Studio. For the tenth time I clicked “Set up billing” on the page listing my API keys. For the tenth time I was told that my project wasn’t associated with a billing account. For the tenth time I associated the project with my new billing account. And finally, after doing all of this, the “Quota tier” column on the page listing my API keys said “Tier 1” instead of “Set up billing”.
Wait, Tier 1? Did that mean there were other tiers? What were tiers, anyway? Was I already on the best tier? Or maybe I was on the worst one? Not important. The important part was that I had my API key and I’d managed to convince Google to charge me for it.
I went back to the Gemini CLI, ran the /settings command, and turned on the “Enable experimental features” option. I ran the /models command, which told me that Gemini 3 Pro was now available.
When I tried sending a message to the LLM, it failed with this 403 error:
{
  "error": {
    "message": "{\n  \"error\": {\n    \"code\": 403,\n    \"message\": \"The caller does not have permission\",\n    \"status\": \"PERMISSION_DENIED\"\n  }\n}\n",
    "code": 403,
    "status": "Forbidden"
  }
}
Is that JSON inside a string inside JSON? Yes. Yes it is.
To figure out if my key was even working, I tried calling the Gemini API from JavaScript, reproducing the basic example from Google’s own documentation.
No dice. I ran into the exact same error.
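(For readers who would rather run the same sanity check from Python than JavaScript, the rough equivalent with Google’s google-genai SDK looks like the sketch below; the model identifier is a placeholder, not something I’ve confirmed.)

# Rough Python equivalent of the sanity check (I used JavaScript above).
# Assumes the google-genai package; the model name is a placeholder.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder: whatever preview id your key exposes
    contents="Say hello if this key actually works.",
)
print(response.text)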
I then tried talking to Gemini 3 Pro using the Playground inside Google AI Studio. It showed me a toast message saying “Failed to generate content. Please try again.” The chat transcript said “An internal error has occurred.”
At this point I gave up and walked away from my computer. It was already 8pm. I’d been trying to get things to work since 5pm. I needed to eat dinner, play Clair Obscur, and go to bed. I had no more time to waste and no more fucks to give.
Just as I was getting into bed, I received an email from Google with this subject line:
Your Google Cloud and APIs billing account XXXXXX-XXXXXX-XXXXXX is in good standing at this time.
With the message inside saying:
Based on the information you provided and further analysis by Google, we have reinstated your billing account XXXXXX-XXXXXX-XXXXXX. Your account is in good standing, and you should now have full access to your account and related Project(s) and Service(s).
I have no idea what any of this means, but Gemini 3 Pro started working correctly after I received this email. It worked in the Playground, directly by calling the API from JavaScript, and with Gemini CLI.
Problem solved, I guess. Until Google mysteriously decides that my account is no longer in good standing.
This was such a frustrating experience that I still haven’t tried using Gemini with my new codebase, nearly a week after I made all those sacrifices to the Gods of Billing Account.
I understand why the process for getting a Gemini API key is so convoluted. It’s designed for large organizations, not individual developers trying to get work done; it serves the bureaucracy, not the people doing the work; it’s designed for maximum compliance with government regulations, not for efficiency or productivity.
Google doesn’t want my money unless I’m an organization that employs ten thousand people.
In contrast to Google, Anthropic and OpenAI are much smaller and much more nimble. They’re able to make the process of setting up a developer account quick and easy for those of us who just want to get things done. Unlike Google, they haven’t yet become complacent. They need to compete for developer mindshare if they are to survive a decade into the future. Maybe they’ll add the same level of bureaucracy to their processes as they become larger, but for now they’re fairly easy to deal with.
I’m still going to try using Gemini 3 Pro with Gemini CLI as my coding assistant, but I’ll probably cap the experiment to a month. Unless Gemini 3 Pro is a massive improvement over its competitors, I’ll stick to using tools built by organizations that want me as a customer.
...
Read the original on ankursethi.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.