10 interesting stories served every morning and every evening.




1. 921 shares, 42 trendiness

The (successful) end of the kernel Rust experiment

The topic of the Rust experiment was just discussed at the annual Maintainers Summit. The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the "experimental" tag will be coming off. Congratulations are in order for all of the Rust for Linux team.

(Stay tuned for details in our Maintainers Summit coverage.)


...

Read the original on lwn.net »

2. 502 shares, 64 trendiness

HDMI Forum Continues to Block HDMI 2.1 for Linux

The HDMI Forum, responsible for the HDMI specification, continues to stonewall open source. Valve's Steam Machine theoretically supports HDMI 2.1, but the mini-PC is software-limited to HDMI 2.0. As a result, more than 60 frames per second at 4K resolution is only possible with limitations.

In a statement to Ars Technica, a Valve spokesperson confirmed that HDMI 2.1 support is still "a work-in-progress on the software side." "We've been working on trying to unblock things there."

The Steam Machine uses an AMD Ryzen APU with a Radeon graphics unit. Valve strictly adheres to open-source drivers, but the HDMI Forum is unwilling to disclose the 2.1 specification. According to Valve, they have validated the HDMI 2.1 hardware under Windows to ensure basic functionality.

The restriction imposed by the HDMI Forum was already criticized in early 2024 by an AMD employee responsible for Linux. Even then, according to AMD, they had submitted a functional, HDMI 2.1-compatible driver, which the HDMI Forum rejected.

"Unfortunately, the HDMI Forum rejected our proposal," it was stated at the time. "At this time an open source HDMI 2.1 implementation is not possible without running afoul of the HDMI Forum requirements."

Only HDMI 2.1 offers sufficient bandwidth for 120 or 144 hertz at 3840 × 2160 pixels without compression. Furthermore, this version introduced manufacturer-independent variable refresh rates (HDMI VRR). Valve enables 4K and 120 hertz using chroma subsampling, a compression technique that is particularly noticeable with text. VRR functions in the form of AMD's FreeSync, which requires compatible displays.
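To see why, a rough back-of-the-envelope calculation (a sketch counting active pixels only and ignoring blanking intervals, so real link requirements are somewhat higher):

    # Approximate video data rates at 3840x2160, 120 Hz.
    width, height, fps = 3840, 2160, 120
    pixels_per_second = width * height * fps              # ~995 million/s

    full_bpp = 24         # 8 bits per channel, uncompressed RGB/4:4:4
    subsampled_bpp = 12   # 4:2:0 chroma subsampling halves average bits/pixel

    full_gbps = pixels_per_second * full_bpp / 1e9              # ~23.9 Gbit/s
    subsampled_gbps = pixels_per_second * subsampled_bpp / 1e9  # ~11.9 Gbit/s

    hdmi20_gbps = 14.4    # usable HDMI 2.0 data rate (18 Gbit/s raw, 8b/10b coding)
    hdmi21_gbps = 42.6    # usable HDMI 2.1 FRL data rate (48 Gbit/s raw)

    print(f"4K120 uncompressed: {full_gbps:.1f} Gbit/s -> needs HDMI 2.1")
    print(f"4K120 with 4:2:0:   {subsampled_gbps:.1f} Gbit/s -> fits HDMI 2.0")

Even before blanking overhead, uncompressed 4K120 overshoots what HDMI 2.0 can carry, which is exactly the gap that chroma subsampling papers over.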

Alternatively, interested parties can use an active adapter from DisplayPort 1.4 to HDMI 2.1 to increase the frame rate without compression. However, these adapters do not officially support VRR. Popular variants from Club3D are no longer available; offers from lesser-known providers (starting from €35.67) are still available in price comparisons.

...

Read the original on www.heise.de »

3. 401 shares, 32 trendiness

In New York City, Congestion Pricing Leads to Marked Drop in Pollution

A new toll applied to cars driving in parts of New York City has led to a measurable drop in traffic, and with it, a 22 percent decline in particulate pollution, according to a new study.

Congestion pricing came into effect in January, with cars paying $9 to drive through busy parts of Manhattan during peak hours. In the first six months of the program, traffic in the congestion zone dropped by 11 percent, accidents by 14 percent, and complaints of excessive honking or other noise by 45 percent, officials said.

A new study from Cornell has now tallied the impact on particulate pollution. Particulates emitted from tailpipes can aggravate asthma and heart disease and increase the risk of lung cancer and heart attack. Globally, they are a leading risk factor for premature death.

Analyzing data on air quality, traffic, and weather conditions, researchers determined that in the first half of this year, particulate pollution was down 22 percent in parts of Manhattan affected by congestion pricing.

The decline seen in New York was greater than in other cities with congestion pricing, such as Stockholm and London, researchers note. And the effect extended beyond Lower Manhattan. Pricing led to a drop in pollution across the greater metropolitan area, according to the study, published in the journal npj Clean Air.

"It's really exciting to me that air quality improved throughout the entire metro area," said lead author Timothy Fraser, of Cornell University. "This tells us that congestion pricing didn't simply relocate air pollution to the suburbs by rerouting traffic. Instead, folks are likely choosing cleaner transportation options altogether, like riding public transportation or scheduling deliveries at night. This thins traffic and limits how smog compounds when many cars are on the road."

...

Read the original on e360.yale.edu »

4. 400 shares, 36 trendiness

Israel Used Palantir Technologies In Pager Terrorist Attack In Lebanon.

In September of 2024, Israel blew up booby-trapped pagers belonging to Hezbollah figures in public places in Lebanon, killing 12 people, including two children and two healthcare workers, and injuring 2,800.

The attack was followed by another attack using explosives in walkie-talkies that killed 25 people and injured another 600.

The Associated Press reported that the attacks wounded "many civilians" and that survivors are left with "missing eyes, faces laced with scars, hands with missing fingers".

The United Nations at the time noted that the attacks "constitute war crimes of murder, attacking civilians, and launching indiscriminate attacks, in addition to violating the right to life", adding that "Around 500 people suffered severe eye injuries, including a diplomat. Others suffered grave injuries to their faces, hands and bodies" and that "It is also a war crime to commit violence intended to spread terror among civilians, including to intimidate or deter them from supporting an adversary. A climate of fear now pervades everyday life in Lebanon".

At the time, when asked about the attacks, former CIA director Leon Panetta said, "I don't think there's any question that it's a form of terrorism".

Now, a new book quietly reveals that Israel carried out the terrorist attack with the help of the AI surveillance firm Palantir, led by Alex Karp and Peter Thiel.

In the new biography of Palantir co-founder Alex Karp, "The Philosopher in the Valley: Alex Karp, Palantir, and the Rise of the Surveillance State," by New York Times journalist Michael Steinberger, he writes that prior to the genocide in Gaza, "the Mossad had been using Palantir technology," adding that the Shin Bet and IDF "sought to obtain Palantir's software in the wake of October 7th".

He goes on to write that "The demand for Palantir's assistance was so great that the company dispatched a team of engineers from London to help get Israeli users online," adding, "Palantir ended up having to rent a second-floor building that housed its Tel Aviv office, to accommodate the intelligence analysts who needed tutorials".

Revealing what Israel used the AI-powered software for, Michael Steinberger notes, "Its software was used by the Israeli military in several raids in Gaza" and goes on to write that "The company's technology was deployed by the Israelis during military operations in Lebanon in 2024 that decimated Hezbollah's top leadership", adding that "It was also used in Operation Grim Beeper, in which hundreds of Hezbollah fighters were injured and maimed when their pagers and walkie-talkies exploded (the Israelis had booby trapped the devices)".

Francesca Albanese, the United Nations' Special Rapporteur on the situation of human rights in the Palestinian Territory occupied since 1967, documented Palantir's role in the genocide in Gaza, noting, "In January 2024, Palantir announced a new strategic partnership with Israel and held a board meeting in Tel Aviv 'in solidarity'; in April 2025, Palantir's Chief Executive Officer responded to accusations that Palantir had killed Palestinians in Gaza by saying, 'mostly terrorists, that's true'. Both incidents are indicative of executive-level knowledge and purpose vis-à-vis the unlawful use of force by Israel, and failure to prevent such acts or withdraw involvement."

Now it is revealed that the AI software was used in Israel's terrorist attack in Lebanon as well.

In a recent interview, the former head of the Israeli Mossad, Yossi Cohen, revealed that Israel has similar "booby-trapped and spy-manipulated equipment" in "all the countries you can imagine".

The fact that a company as influential as Palantir was involved in the terrorist attacks makes these comments even more concerning.


...

Read the original on the307.substack.com »

5. 281 shares, 24 trendiness

China’s DeepSeek Uses Banned Nvidia Chips for AI Model, Report Says

Chinese artificial intelligence startup DeepSeek has relied on Nvidia Corp. chips that are banned in the country to develop an upcoming AI model, according to a new report in The Information.

Nvidia's Blackwell chips were smuggled into China through countries that permitted their sale, The Information reported, citing unnamed sources. More specifically, DeepSeek tapped chips that were installed in data centers in unspecified countries, then dismantled and shipped to China after clearing inspection by companies developing server equipment, The Information said.

The US bans the sale of these advanced semiconductors to China, which has led AI developers there to access the hardware through data centers located outside of the mainland or through subterfuge. In November, US prosecutors charged two Chinese nationals and two US citizens with a scheme to ship chips to China by way of Malaysia using a fake real estate business.

A representative for DeepSeek didn't immediately respond to a request for comment.

DeepSeek drew global attention in January when it debuted an AI model that was competitive with Silicon Valley's best and said it had built it at a fraction of the cost. The startup was funded by the Chinese hedge fund High-Flyer, which had amassed 10,000 Nvidia GPUs in 2021, prior to US bans on exports of sophisticated Nvidia chips and other graphics processing units.

Earlier this week, President Donald Trump granted Nvidia permission to ship to China an older version of its AI accelerators, the H200. An export ban on its more powerful Blackwell version remains in place.

Beijing has meanwhile pushed Chinese technology companies to rely on domestic equipment to develop AI. DeepSeek released a new model in September and indicated that it was working with Chinese chipmakers on the model.

Nvidia told The Information that it hasn't seen "any substantiation or received tips" about smuggling through data centers outside of China.

...

Read the original on finance.yahoo.com »

6. 276 shares, 37 trendiness

Auto-grading decade-old Hacker News discussions with hindsight

Yesterday I stumbled on this HN thread "Show HN: Gemini Pro 3 hallucinates the HN front page 10 years from now", where Gemini 3 was hallucinating the frontpage of 10 years from now. One of the comments struck me a bit more though - Bjartr linked to the HN frontpage from exactly 10 years ago, i.e. December 2015. I was reading through the discussions of 10 years ago and mentally grading them for prescience when I realized that an LLM might actually be a lot better at this task. I copy pasted one of the article+comment threads manually into ChatGPT 5.1 Thinking and it gave me a beautiful analysis of what people thought + what actually happened in retrospect, even better and significantly more detailed than what I was doing manually. I realized that this task is actually a really good fit for LLMs and I was looking for excuses to vibe code something with the newly released Opus 4.5, so I got to work. I'm going to get all the front pages of December (31 days, 30 articles per day), get ChatGPT 5.1 Thinking to do the analysis, and present everything in a nice way for historical reading.

There are two macro reasons why I think the exercise is interesting more generally:

I believe it is quite possible and desirable to train your forward future predictor given training and effort.

I was reminded again of my tweets that said "Be good, future LLMs are watching". You can take that in many directions, but here I want to focus on the idea that future LLMs are watching. Everything we do today might be scrutinized in great detail in the future because doing so will be "free". A lot of the ways people behave currently I think make an implicit "security by obscurity" assumption. But if intelligence really does become too cheap to meter, it will become possible to do a perfect reconstruction and synthesis of everything. LLMs are watching (or humans using them might be). Best to be good.

Vibe coding the actual project was relatively painless and took about 3 hours with Opus 4.5, with a few hiccups but overall very impressive. The repository is on GitHub here: karpathy/hn-time-capsule. Here is the progression of what the code does:

* Given a date, download the frontpage of 30 articles

* For each article, download/parse the article itself and the full comment thread using the Algolia API.

* Package up everything into a markdown prompt asking for the analysis. Here is the prompt prefix I used:

The following is an article that appeared on Hacker News 10 years ago, and the discussion thread.

Let's use our benefit of hindsight now in 6 sections:

1. Give a brief summary of the article and the discussion thread.

2. What ended up happening to this topic? (research the topic briefly and write a summary)

3. Give out awards for "Most prescient" and "Most wrong" comments, considering what happened.

4. Mention any other fun or notable aspects of the article or discussion.

5. Give out grades to specific people for their comments, considering what happened.

6. At the end, give a final score (from 0-10) for how interesting this article and its retrospect analysis was.

As for the format of Section 5, use the header "Final grades" and follow it with simply an unordered list of people and their grades in the format of "name: grade (optional comment)". Here is an example:

Final grades

- speckx: A+ (excellent predictions on …)

- tosh: A (correctly predicted this or that …)

- keepamovin: A

- bgwalter: D

- fsflover: F (completely wrong on …)

Your list may contain more people of course than just this toy example. Please follow the format exactly because I will be parsing it programmatically. The idea is that I will accumulate the grades for each account to identify the accounts that were over long periods of time the most prescient or the most wrong.

As for the format of Section 6, use the prefix "Article hindsight analysis interestingness score:" and then the score (0-10) as a number. Give high scores to articles/discussions that are prominent, notable, or interesting in retrospect. Give low scores in cases where few predictions are made, or the topic is very niche or obscure, or the discussion is not very interesting in retrospect.

Here is an example:

Article hindsight analysis interestingness score: 8

* Submit the prompt to GPT 5.1 Thinking via the OpenAI API (a rough sketch of these steps appears after this list)

* Render the results into static HTML web pages for easy viewing

* Host the HTML result pages on my website: https://karpathy.ai/hncapsule/

* Host all the intermediate results of the data directory in case someone else would like to play. It's the file data.zip under the exact same URL prefix (intentionally avoiding a direct link).
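For readers who want the shape of the pipeline without opening the repository, here is a minimal sketch of the fetch-analyze-parse loop. This is not the actual code from karpathy/hn-time-capsule: it assumes Algolia's public HN items endpoint and the OpenAI Python client, and the grade_thread helper name is made up for illustration.

    import re
    import requests
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def fetch_thread(story_id: int) -> dict:
        """Fetch a story plus its full nested comment tree from Algolia."""
        url = f"https://hn.algolia.com/api/v1/items/{story_id}"
        return requests.get(url, timeout=30).json()

    def flatten_comments(item: dict, depth: int = 0) -> str:
        """Render the nested comment tree as indented plain text."""
        out = ""
        for child in item.get("children", []):
            author = child.get("author") or "[deleted]"
            text = child.get("text") or ""
            out += "  " * depth + f"{author}: {text}\n"
            out += flatten_comments(child, depth + 1)
        return out

    def grade_thread(story_id: int, prompt_prefix: str, model: str) -> dict:
        """Ask the model for the hindsight analysis, then parse 'Final grades'."""
        thread = fetch_thread(story_id)
        prompt = (prompt_prefix + "\n\n" + (thread.get("title") or "")
                  + "\n\n" + flatten_comments(thread))
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        analysis = resp.choices[0].message.content
        # Grade lines look like "- speckx: A+ (excellent predictions on ...)"
        return dict(re.findall(r"^- (\S+): ([A-F][+-]?)", analysis, flags=re.M))

Loop that over 31 front pages of 30 stories each and you have the 930 queries mentioned below.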

I spent a few hours browsing around and found it to be very interesting. A few example threads just for fun:

And then when you navigate over to the Hall of Fame, you can find the top commenters of Hacker News in December 2015, sorted by an IMDb-style score of their grade point average. In particular, congratulations to pcwalton, tptacek, paulmd, cstross, greglindahl, moxie, hannob, 0xcde4c3db, Manishearth, johncolanduoni - GPT 5.1 Thinking found your comments very insightful and prescient. You can also scroll all the way down to find the noise of HN, which I think we're all familiar with too :)
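The post doesn't spell out the scoring, but an "IMDb-style" average usually means a Bayesian weighted rating that stops a single A+ from outranking a long, consistent track record. A plausible sketch (the grade-to-points mapping and the prior weight m are assumptions, not taken from the repository):

    def letter_to_points(grade: str) -> float:
        # Hypothetical mapping: A+ = 4.3, A = 4.0, ..., F = 0.0
        base = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}[grade[0]]
        if grade.endswith("+"):
            return base + 0.3
        if grade.endswith("-"):
            return base - 0.3
        return base

    def weighted_gpa(grades: list, global_mean: float, m: float = 5.0) -> float:
        """IMDb-style weighted rating: WR = v/(v+m)*R + m/(v+m)*C, where v is
        the number of graded comments, R the account's mean grade points,
        and C the site-wide mean."""
        v = len(grades)
        r = sum(letter_to_points(g) for g in grades) / v
        return (v / (v + m)) * r + (m / (v + m)) * global_mean

With few grades the score hugs the site-wide mean; only a sustained record pulls an account toward either extreme, which rewards consistency over one lucky comment.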

My code (wait, Opus' code?) on GitHub can be used to reproduce or tweak the results. Running 31 days of 30 articles through GPT 5.1 Thinking meant 31 * 30 = 930 LLM queries, cost about $58, and took somewhere around ~1 hour. The LLM megaminds of the future might find this kind of thing a lot easier, a lot faster and a lot cheaper.

...

Read the original on karpathy.bearblog.dev »

7. 267 shares, 12 trendiness

When a video codec wins an Emmy

It’s not every day a video codec wins an Emmy. But yes­ter­day, the Television Academy hon­ored the AV1 spec­i­fi­ca­tion with a Technology & Engineering Emmy Award, rec­og­niz­ing its im­pact on how the world de­liv­ers video con­tent.

Through the mid-2010s, video codecs were an in­vis­i­ble tax on the web, built on a closed li­cens­ing sys­tem with ex­pen­sive, un­pre­dictable fees. Most videos on­line re­lied on the H.264 codec, which open-source pro­jects like Firefox could only sup­port with­out pay­ing MPEG LA li­cense fees thanks to Cisco’s open-source OpenH.264 mod­ule.

Especially as de­mand for video grew, the web needed a next-gen­er­a­tion codec to make high-qual­ity stream­ing faster and more re­li­able. H.265 promised ef­fi­ciency gains, but there was no guar­an­tee of an­other OpenH.264-style arrange­ment. The risk was an­other frag­mented ecosys­tem where browsers like Firefox could­n’t play large por­tions of the we­b’s video.

To solve this, Mozilla joined other tech­ni­cal lead­ers to form the Alliance for Open Media (AOM) in 2015 and started am­bi­tious work on a next-gen­er­a­tion codec built from Google’s VP9, Mozilla’s Daala, and Cisco’s Thor.

The re­sult was AV1, re­leased in 2018, which de­liv­ered top-tier com­pres­sion as an open stan­dard un­der a roy­alty-free patent pol­icy. It’s now widely de­ployed across the stream­ing ecosys­tem, in­clud­ing hard­ware de­coders and op­ti­mized soft­ware de­coders which al­low open-source browsers like Firefox to pro­vide state of the art video com­pres­sion to all users across the web.

AV1 is also the foun­da­tion for the im­age for­mat AVIF, which is de­ployed across browsers and pro­vides ex­cel­lent com­pres­sion for still and an­i­mated im­ages (AVIF is based on a video codec, af­ter all).

The Emmy award re­flects the value of open stan­dards, open-source soft­ware, and the sus­tained work by AOM par­tic­i­pants and the broader com­mu­nity fight­ing for an open web.

AV1 fixed a struc­tural prob­lem in the ecosys­tem at the time, but the work is­n’t fin­ished. Video de­mand keeps ris­ing, and the next gen­er­a­tion of open codecs must re­main com­pet­i­tive.

AOMedia is work­ing on the up­com­ing re­lease of AV2. It will fea­ture mean­ing­fully bet­ter com­pres­sion than AV1, much higher ef­fi­ciency for screen/​graph­i­cal con­tent, al­pha chan­nel sup­port, and more.

As AV2 ar­rives, our goal re­mains un­changed: make video on the web open, ef­fi­cient, and ac­ces­si­ble to every­one.

...

Read the original on blog.mozilla.org »

8. 254 shares, 13 trendiness

Revisiting "Let's Build a Compiler"

There’s an old com­piler-build­ing tu­to­r­ial that has be­come part of the field’s lore: the Let’s Build a Compiler

se­ries by Jack Crenshaw (published be­tween 1988 and 1995).

I ran into it in 2003

and was very im­pressed, but it’s now 2025 and this tu­to­r­ial is still be­ing men­tioned quite of­ten in Hacker News threads. Why is that? Why does a tu­to­r­ial from 35 years ago, built in Pascal and emit­ting Motorola 68000 as­sem­bly - tech­nolo­gies that are vir­tu­ally un­known for the new gen­er­a­tion of pro­gram­mers - hold sway over com­piler en­thu­si­asts? I’ve de­cided to find out.

The tu­to­r­ial is eas­ily avail­able and read­able on­line, but just re-read­ing it seemed in­suf­fi­cient. So I’ve de­cided on metic­u­lously trans­lat­ing the com­pil­ers built in it to Python and emit a more mod­ern tar­get - WebAssembly. It was an en­joy­able process and I want to share the out­come and some in­sights gained along the way.

The re­sult is this code repos­i­tory. Of par­tic­u­lar in­ter­est is the TUTORIAL.md file, which de­scribes how each part in the orig­i­nal tu­to­r­ial is mapped to my code. So if you want to read the orig­i­nal tu­to­r­ial but play with code you can ac­tu­ally eas­ily try on your own, feel free to fol­low my path.

To get a taste of the in­put lan­guage be­ing com­piled and the out­put my com­piler gen­er­ates, here’s a sam­ple pro­gram in the KISS lan­guage de­signed by Jack Crenshaw:

    var X=0

    { sum from 0 to n-1 inclusive, and add to result }
    procedure addseq(n, ref result)
        var i, sum  { 0 initialized }
        while i < n
            sum = sum + i
            i = i + 1
        end
        result = result + sum
    end

    program testprog
    begin
        addseq(11, X)
    end

It’s from part 13 of the tu­to­r­ial, so it show­cases pro­ce­dures along with con­trol con­structs like the loop, and pass­ing pa­ra­me­ters both by value and by ref­er­ence. Here’s the WASM text gen­er­ated by my com­piler for part 13:

You’ll no­tice that there is some trick­i­ness in the emit­ted code w.r.t. han­dling the by-ref­er­ence pa­ra­me­ter (my pre­vi­ous post

deals with this is­sue in more de­tail). In gen­eral, though, the emit­ted code is in­ef­fi­cient - there is close to 0 op­ti­miza­tion ap­plied.

Also, if you’re very dili­gent you’ll no­tice some­thing odd about the global vari­able - it seems to be im­plic­itly re­turned by the gen­er­ated

func­tion. This is just a test­ing fa­cil­ity that makes my com­piler easy to test. All the com­pil­ers are ex­ten­sively tested - usu­ally by run­ning the gen­er­ated WASM code and ver­i­fy­ing ex­pected re­sults.

Insights - what makes this tutorial so special?

While reading the original tutorial again, I had an opportunity to reminisce on what makes it so effective. Other than the very fluent and conversational writing style of Jack Crenshaw, I think it's a combination of two key factors:

The tutorial builds a recursive-descent parser step by step, rather than giving a long preface on automata and table-based parser generators. When I first encountered it (in 2003), it was taken for granted that if you want to write a parser then lex + yacc are the way to go. Following the development of a simple and clean hand-written parser was a revelation that wholly changed my approach to the subject; subsequently, hand-written recursive-descent parsers have been my go-to approach for almost 20 years now.

Rather than getting stuck in front-end minutiae, the tutorial goes straight to generating working assembly code, from very early on. This was also a breath of fresh air for engineers who grew up with more traditional courses where you spend 90% of the time on parsing, type checking and other semantic analysis and often run entirely out of steam by the time code generation is taught.

To be honest, I don't think either of these is a big problem with modern resources, but back in the day the tutorial clearly hit the right nerve with many people.

What else does it teach us?

Jack Crenshaw’s tu­to­r­ial takes the syn­tax-di­rected trans­la­tion

ap­proach, where code is emit­ted while pars­ing, with­out hav­ing to di­vide the com­piler into ex­plicit phases with IRs. As I said above, this is a fan­tas­tic ap­proach for get­ting started, but in the lat­ter parts of the tu­to­r­ial it starts show­ing its lim­i­ta­tions. Especially once we get to types, it be­comes painfully ob­vi­ous that it would be very nice if we knew the types of ex­pres­sions be­fore

we gen­er­ate code for them.
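To make that concrete, here is a tiny sketch in the spirit of the tutorial (my own illustration, not code from the tutorial or the repository): a recursive-descent expression parser that emits stack-machine instructions the moment each construct is recognized, with no AST or IR in between.

    # Minimal syntax-directed translation: parse arithmetic expressions
    # and emit stack-machine instructions during parsing, with no AST.
    import re

    def compile_expr(src: str) -> list:
        tokens = re.findall(r"\d+|[+\-*/()]", src)
        pos = 0
        code = []

        def peek():
            return tokens[pos] if pos < len(tokens) else None

        def eat(tok=None):
            nonlocal pos
            if tok is not None and peek() != tok:
                raise SyntaxError(f"expected {tok!r}, got {peek()!r}")
            pos += 1
            return tokens[pos - 1]

        def factor():                      # factor := NUMBER | "(" expr ")"
            if peek() == "(":
                eat("("); expr(); eat(")")
            else:
                code.append(f"push {eat()}")   # emit as soon as we see it

        def term():                        # term := factor (("*"|"/") factor)*
            factor()
            while peek() in ("*", "/"):
                op = eat()
                factor()
                code.append("mul" if op == "*" else "div")

        def expr():                        # expr := term (("+"|"-") term)*
            term()
            while peek() in ("+", "-"):
                op = eat()
                term()
                code.append("add" if op == "+" else "sub")

        expr()
        return code

    print(compile_expr("1 + 2 * (3 - 4)"))
    # ['push 1', 'push 2', 'push 3', 'push 4', 'sub', 'mul', 'add']

Note how the grammar and the code generator are the same functions; that fusion is exactly what makes the approach so quick to get going, and also what makes later type-dependent decisions awkward.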

I don’t know if this is im­pli­cated in Jack Crenshaw’s aban­don­ing the tu­to­r­ial at some point af­ter part 14, but it may very well be. He keeps writ­ing how the emit­ted code is clearly sub-op­ti­mal and can be im­proved, but IMHO it’s just not that easy to im­prove us­ing the syn­tax-di­rected trans­la­tion strat­egy. With per­fect hind­sight vi­sion, I would prob­a­bly use Part 14 (types) as a turn­ing point - emit­ting some kind of AST from the parser and then do­ing sim­ple type check­ing and analy­sis on that AST prior to gen­er­at­ing code from it.

All in all, the original tutorial remains a wonderfully readable introduction to building compilers. This post and the GitHub repository it describes are a modest contribution that aims to improve the experience of folks reading the original tutorial today and not willing to use obsolete technologies. As always, let me know if you run into any issues or have questions!

...

Read the original on eli.thegreenplace.net »

9. 198 shares, 51 trendiness

Getting a Gemini API key is an exercise in frustration

Last week, I started working on a new side-project. It's a standard React app partly made up of run-of-the-mill CRUD views—a perfect fit for LLM-assisted programming. I reasoned that if I could get an LLM to quickly write the boring code for me, I'd have more time to focus on the interesting problems I wanted to solve.

I've pretty much settled on Claude Code as my coding assistant of choice, but I'd been hearing great things about Google's Gemini 3 Pro. Despite my aversion to Google products, I decided to try it out on my new codebase.

I already had Gemini CLI installed, but that only gave me access to Gemini 2.5 with rate limits. I wanted to try out Gemini 3 Pro, and I wanted to avoid being rate limited. I had some spare cash to burn on this experiment, so I went looking for ways to pay for a Gemini Pro plan, if such a thing existed.

Thus began my grand adventure in trying to give Google my money.

The name "Gemini" is so overloaded that it barely means anything. Based on the context, Gemini could refer to:

To make things even more confusing, Google has at least three different products just for agentic coding: Gemini Code Assist (Gemini CLI is a part of this suite of products), Jules, and Antigravity.

And then there's a bunch of other GenAI stuff that is powered by Gemini but doesn't have the word Gemini in the name: Vertex AI Platform, Google AI Studio, NotebookLM, and who knows what else.

I just wanted to plug my credit card information into a form and get access to a coding assistant. Instead, I was dunked into an alphabet soup of products that all seemed to do similar things and, crucially, didn't have any giant "Buy Now!" buttons for me to click.

In contrast, both Anthropic and OpenAI have two primary ways you can access their products: via their consumer offerings at claude.ai and chatgpt.com respectively, or via API credits that you can buy through their respective developer consoles. In each case, there is a form field where you can plug in your credit card details, and a big, friendly "Buy Now!" button to click.

After half an hour of searching the web, I did the obvious thing and asked the free version of Gemini (the chatbot, not one of those other Geminis) what to do:

How do I pay for the pro version of Gemini so I can use it in the terminal for writing code? I specifically want to use the Gemini 3 Pro model.

It thought for a suspiciously long time and told me that Gemini 3 Pro required a developer API key to use. Since the new model is still in preview, it's not yet available on any of the consumer plans. When I asked follow-up questions about pricing, it told me that "Something went wrong". Which translates to: we broke something, but we won't tell you how to fix it.

So I asked Claude for help. Between the two LLMs, I was able to figure out how to create an API key for the Gemini I wanted.

Google AI Studio is supposed to be the all-in-one dashboard for Google's generative AI models. This is where you can experiment with model parameters, manage API keys, view logs, and manage billing for your projects.

I logged into Google AI Studio and created a new API key. This part was pretty straightforward: I followed the on-screen instructions and had a fresh new key housed under a project in a few seconds. I then verified that my key was working with Gemini CLI.

It worked! Now all that was left to do was to purchase some API credits. Back in Google AI Studio, I saw a link titled "Set up billing" next to my key. It looked promising, so I clicked it.

That’s where the fun re­ally be­gan.

The "Set up billing" link kicked me out of Google AI Studio and into Google Cloud Console, and my heart sank. Every time I've logged into Google Cloud Console or AWS, I've wasted hours upon hours reading outdated documentation, gazing in despair at graphs that make no sense, going around in circles from dashboard to dashboard, and feeling a strong desire to attain freedom from this mortal coil.

Turns out I can't just put $100 into my Gemini account. Instead, I must first create a Billing Account. After I've done that, I must associate it with a project. Then I'm allowed to add a payment method to the Billing Account. And then, if I'm lucky, my API key will turn into a paid API key with Gemini Pro privileges.

So I did the thing. The whole song and dance. Including the mandatory two-factor OTP verification that every Indian credit card requires. At the end of the process, I was greeted with a popup telling me I had to verify my payment method before I'd be allowed to use it.

Wait. Didn’t I just ver­ify my pay­ment method? When I en­tered the OTP from my bank?

Nope, turns out Google hungers for more data. Who’d have thunk it?

To ver­ify my pay­ment method for re­als, I had to send Google a pic­ture of my gov­ern­ment-is­sued ID and the credit card I’d just as­so­ci­ated with my Billing Account. I had to en­sure all the num­bers on my credit card were redacted by man­u­ally plac­ing black bars on top of them in an im­age ed­i­tor, leav­ing only my name and the last four dig­its of the credit card num­ber vis­i­ble.

This felt un­nec­es­sar­ily in­tru­sive. But by this point, I was too deep in the process to quit. I was in­vested. I needed my Gemini 3 Pro, and I was will­ing to pay any price.

The up­load form for the gov­ern­ment ID re­jected my up­load twice be­fore it fi­nally ac­cepted it. It was the same ex­act ID every sin­gle time, just in dif­fer­ent file for­mats. It wanted a PNG file. Not a JPG file, nor a PDF file, but a PNG file. Did the up­load form men­tion that in the in­struc­tions? Of course not.

After jump­ing through all these hoops, I re­ceived an email from Google telling me that my ver­i­fi­ca­tion will be com­pleted in a few days.

A few days? Nothing to do but wait, I sup­pose.

At this point, I closed all my open Cloud Console tabs and went back to work. But when I was fif­teen min­utes into writ­ing some code by hand like a Neanderthal, I re­ceived a sec­ond email from Google telling me that my ver­i­fi­ca­tion was com­plete.

So for the tenth time that day, I navigated to AI Studio. For the tenth time I clicked "Set up billing" on the page listing my API keys. For the tenth time I was told that my project wasn't associated with a billing account. For the tenth time I associated the project with my new billing account. And finally, after doing all of this, the "Quota tier" column on the page listing my API keys said "Tier 1" instead of "Set up billing".

Wait, Tier 1? Did that mean there were other tiers? What were tiers, anyway? Was I already on the best tier? Or maybe I was on the worst one? Not important. The important part was that I had my API key and I'd managed to convince Google to charge me for it.

I went back to the Gemini CLI, ran the /settings command, and turned on the "Enable experimental features" option. I ran the /models command, which told me that Gemini 3 Pro was now available.

When I tried sending a message to the LLM, it failed with this 403 error:

    {
      "error": {
        "message": "{\n  \"error\": {\n    \"code\": 403,\n    \"message\": \"The caller does not have permission\",\n    \"status\":\"PERMISSION_DENIED\"\n  }\n}\n",
        "code": 403,
        "status": "Forbidden"
      }
    }

Is that JSON inside a string inside JSON? Yes. Yes it is.

To figure out if my key was even working, I tried calling the Gemini API from JavaScript, reproducing the basic example from Google's own documentation.
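(For reference, the kind of minimal call being tested boils down to a single generate-content request. Here is a rough sketch in Python with the google-generativeai package rather than the JavaScript from Google's docs; the model name below is a placeholder, not a confirmed identifier:)

    # Minimal "does my key work at all?" check -- a sketch, not Google's
    # documentation example. The model string is a placeholder; use whatever
    # Gemini 3 Pro preview identifier your account actually exposes.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-3-pro-preview")  # hypothetical name
    response = model.generate_content("Say hello in one sentence.")
    print(response.text)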

No dice. I ran into the exact same error.

I then tried talking to Gemini 3 Pro using the Playground inside Google AI Studio. It showed me a toast message saying "Failed to generate content. Please try again." The chat transcript said "An internal error has occurred."

At this point I gave up and walked away from my computer. It was already 8pm. I'd been trying to get things to work since 5pm. I needed to eat dinner, play Clair Obscur, and go to bed. I had no more time to waste and no more fucks to give.

Just as I was getting into bed, I received an email from Google with this subject line:

Your Google Cloud and APIs billing account XXXXXX-XXXXXX-XXXXXX is in good standing at this time.

With the message inside saying:

Based on the information you provided and further analysis by Google, we have reinstated your billing account XXXXXX-XXXXXX-XXXXXX. Your account is in good standing, and you should now have full access to your account and related Project(s) and Service(s).

I have no idea what any of this means, but Gemini 3 Pro started working correctly after I received this email. It worked in the Playground, directly by calling the API from JavaScript, and with Gemini CLI.

Problem solved, I guess. Until Google mysteriously decides that my account is no longer in good standing.

This was such a frustrating experience that I still haven't tried using Gemini with my new codebase, nearly a week after I made all those sacrifices to the Gods of Billing Account.

I understand why the process for getting a Gemini API key is so convoluted. It's designed for large organizations, not individual developers trying to get work done; it serves the bureaucracy, not the people doing the work; it's designed for maximum compliance with government regulations, not for efficiency or productivity.

Google doesn't want my money unless I'm an organization that employs ten thousand people.

In contrast to Google, Anthropic and OpenAI are much smaller and much more nimble. They're able to make the process of setting up a developer account quick and easy for those of us who just want to get things done. Unlike Google, they haven't yet become complacent. They need to compete for developer mindshare if they are to survive a decade into the future. Maybe they'll add the same level of bureaucracy to their processes as they become larger, but for now they're fairly easy to deal with.

I'm still going to try using Gemini 3 Pro with Gemini CLI as my coding assistant, but I'll probably cap the experiment to a month. Unless Gemini 3 Pro is a massive improvement over its competitors, I'll stick to using tools built by organizations that want me as a customer.

...

Read the original on ankursethi.com »

10. 191 shares, 19 trendiness

Qwen

...

Read the original on qwen.ai »
