10 interesting stories served every morning and every evening.




1 1,152 shares, 55 trendiness

Introducing Sonnet 4.6

Claude Sonnet 4.6 is our most capable Sonnet model yet. It's a full upgrade of the model's skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta. For those on our Free and Pro plans, Claude Sonnet 4.6 is now the default model in claude.ai and Claude Cowork. Pricing remains the same as Sonnet 4.5, starting at $3/$15 per million tokens.

Sonnet 4.6 brings much-improved coding skills to more of our users. Improvements in consistency, instruction following, and more have made developers with early access prefer Sonnet 4.6 to its predecessor by a wide margin. They often even prefer it to our smartest model from November 2025, Claude Opus 4.5. Performance that would have previously required reaching for an Opus-class model—including on real-world, economically valuable office tasks—is now available with Sonnet 4.6. The model also shows a major improvement in computer use skills compared to prior Sonnet models.

As with every new Claude model, we've run extensive safety evaluations of Sonnet 4.6, which overall showed it to be as safe as, or safer than, our other recent Claude models. Our safety researchers concluded that Sonnet 4.6 has "a broadly warm, honest, prosocial, and at times funny character, very strong safety behaviors, and no signs of major concerns around high-stakes forms of misalignment."

Almost every organization has software it can't easily automate: specialized systems and tools built before modern interfaces like APIs existed. To have AI use such software, users would previously have had to build bespoke connectors. But a model that can use a computer the way a person does changes that equation.

In October 2024, we were the first to introduce a general-purpose computer-using model. At the time, we wrote that it was still "experimental—at times cumbersome and error-prone," but we expected rapid improvement. OSWorld, the standard benchmark for AI computer use, shows how far our models have come. It presents hundreds of tasks across real software (Chrome, LibreOffice, VS Code, and more) running on a simulated computer. There are no special APIs or purpose-built connectors; the model sees the computer and interacts with it in much the same way a person would: clicking a (virtual) mouse and typing on a (virtual) keyboard.

Across sixteen months, our Sonnet models have made steady gains on OSWorld. The improvements can also be seen beyond benchmarks: early Sonnet 4.6 users are seeing human-level capability in tasks like navigating a complex spreadsheet or filling out a multi-step web form, before pulling it all together across multiple browser tabs. The model certainly still lags behind the most skilled humans at using computers. But the rate of progress is remarkable nonetheless. It means that computer use is much more useful for a range of work tasks—and that substantially more capable models are within reach.

Scores prior to Claude Sonnet 4.5 were measured on the original OSWorld; scores from Sonnet 4.5 onward use OSWorld-Verified. OSWorld-Verified (released July 2025) is an in-place upgrade of the original OSWorld benchmark, with updates to task quality, evaluation grading, and infrastructure.

At the same time, computer use poses risks: malicious actors can attempt to hijack the model by hiding instructions on websites in what's known as a prompt injection attack. We've been working to improve our models' resistance to prompt injections—our safety evaluations show that Sonnet 4.6 is a major improvement compared to its predecessor, Sonnet 4.5, and performs similarly to Opus 4.6. You can find out more about how to mitigate prompt injections and other safety concerns in our API docs.

Beyond computer use, Claude Sonnet 4.6 has improved on benchmarks across the board. It approaches Opus-level intelligence at a price point that makes it more practical for far more tasks. You can find a full discussion of Sonnet 4.6's capabilities and its safety-related behaviors in our system card; a summary and comparison to other recent models is below.

In Claude Code, our early testing found that users preferred Sonnet 4.6 over Sonnet 4.5 roughly 70% of the time. Users reported that it more effectively read the context before modifying code and consolidated shared logic rather than duplicating it. This made it less frustrating to use over long sessions than earlier models. Users even preferred Sonnet 4.6 to Opus 4.5, our frontier model from November, 59% of the time. They rated Sonnet 4.6 as significantly less prone to "overengineering and laziness," and meaningfully better at instruction following. They reported fewer false claims of success, fewer hallucinations, and more consistent follow-through on multi-step tasks.

Sonnet 4.6's 1M token context window is enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request. More importantly, Sonnet 4.6 reasons effectively across all that context. This can make it much better at long-horizon planning.
We saw this particularly clearly in the Vending-Bench Arena evaluation, which tests how well a model can run a (simulated) business over time—and which includes an element of competition, with different AI models facing off against each other to make the biggest profits.

Sonnet 4.6 developed an interesting new strategy: it invested heavily in capacity for the first ten simulated months, spending significantly more than its competitors, and then pivoted sharply to focus on profitability in the final stretch. The timing of this pivot helped it finish well ahead of the competition.

Sonnet 4.6 outperforms Sonnet 4.5 on Vending-Bench Arena by investing in capacity early, then pivoting to profitability in the final stretch.

Early customers also reported broad improvements, with frontend code and financial analysis standing out. Customers independently described visual outputs from Sonnet 4.6 as notably more polished, with better layouts, animations, and design sensibility than those from previous models. Customers also needed fewer rounds of iteration to reach production-quality results.

Claude Sonnet 4.6 matches Opus 4.6 performance on OfficeQA, which measures how well a model can read enterprise documents (charts, PDFs, tables), pull the right facts, and reason from those facts. It's a meaningful upgrade for document comprehension workloads.

The performance-to-cost ratio of Claude Sonnet 4.6 is extraordinary—it's hard to overstate how fast Claude models have been evolving in recent months. Sonnet 4.6 outperforms on our orchestration evals, handles our most complex agentic workloads, and keeps improving the higher you push the effort settings.

Claude Sonnet 4.6 is a notable improvement over Sonnet 4.5 across the board, including long-horizon tasks and more difficult problems.

Out of the gate, Claude Sonnet 4.6 is already excelling at complex code fixes, especially when searching across large codebases is essential. For teams running agentic coding at scale, we're seeing strong resolution rates and the kind of consistency developers need.

Claude Sonnet 4.6 has meaningfully closed the gap with Opus on bug detection, letting us run more reviewers in parallel, catch a wider variety of bugs, and do it all without increasing cost.

For the first time, Sonnet brings frontier-level reasoning in a smaller and more cost-effective form factor. It provides a viable alternative if you are a heavy Opus user.

Claude Sonnet 4.6 meaningfully improves the answer retrieval behind our core product—we saw a significant jump in answer match rate compared to Sonnet 4.5 in our Financial Services Benchmark, with better recall on the specific workflows our customers depend on.

Box evaluated how Claude Sonnet 4.6 performs when tested on deep reasoning and complex agentic tasks across real enterprise documents. It demonstrated significant improvements, outperforming Claude Sonnet 4.5 in heavy reasoning Q&A by 15 percentage points.

Claude Sonnet 4.6 hit 94% on our insurance benchmark, making it the highest-performing model we've tested for computer use. This kind of accuracy is mission-critical to workflows like submission intake and first notice of loss.

Claude Sonnet 4.6 delivers frontier-level results on complex app builds and bug-fixing. It's becoming our go-to for the kind of deep codebase work that used to require more expensive models.

Claude Sonnet 4.6 produced the best iOS code we've tested for Rakuten AI. Better spec compliance, better architecture, and it reached for modern tooling we didn't ask for, all in one shot. The results genuinely surprised us.

Sonnet 4.6 is a significant leap forward on reasoning through difficult tasks. We find it especially strong on branched and multi-step tasks like contract routing, conditional template selection, and CRM coordination—exactly where our customers need strong model sense and reliability.

We've been impressed by how accurately Claude Sonnet 4.6 handles complex computer use. It's a clear improvement over anything else we've tested in our evals.

Claude Sonnet 4.6 has perfect design taste when building frontend pages and data reports, and it requires far less hand-holding to get there than anything we've tested before.

Claude Sonnet 4.6 was exceptionally responsive to direction — delivering precise figures and structured comparisons when asked, while also generating genuinely useful ideas on trial strategy and exhibit preparation.

On the Claude Developer Platform, Sonnet 4.6 supports both adaptive thinking and extended thinking, as well as context compaction in beta, which automatically summarizes older context as conversations approach limits, increasing effective context length.

On our API, Claude's web search and fetch tools now automatically write and execute code to filter and process search results, keeping only relevant content in context—improving both response quality and token efficiency. Additionally, code execution, memory, programmatic tool calling, tool search, and tool use examples are now generally available.

Sonnet 4.6 offers strong performance at any thinking effort, even with extended thinking off. As part of your migration from Sonnet 4.5, we recommend exploring across the spectrum to find the ideal balance of speed and reliable performance, depending on what you're building.

We find that Opus 4.6 remains the strongest option for tasks that demand the deepest reasoning, such as codebase refactoring, coordinating multiple agents in a workflow, and problems where getting it just right is paramount.

For Claude in Excel users, our add-in now supports MCP connectors, letting Claude work with the other tools you use day-to-day, like S&P Global, LSEG, Daloopa, PitchBook, Moody's, and FactSet. You can ask Claude to pull in context from outside your spreadsheet without ever leaving Excel. If you've already set up MCP connectors in Claude.ai, those same connections will work in Excel automatically. This is available on Pro, Max, Team, and Enterprise plans.

How to use Claude Sonnet 4.6

Claude Sonnet 4.6 is available now on all Claude plans, Claude Cowork, Claude Code, our API, and all major cloud platforms. We've also upgraded our free tier to Sonnet 4.6 by default—it now includes file creation, connectors, skills, and compaction. If you're a developer, you can get started quickly by using claude-sonnet-4-6 via the Claude API.
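As a minimal sketch of that quickstart: only the model ID claude-sonnet-4-6 comes from the announcement; the payload shape here follows the publicly documented Messages API and is an assumption of this sketch, not official sample code. It builds the request body by hand rather than using an SDK:

```python
import json

# Model ID from the announcement; the rest of the payload shape follows
# the publicly documented Messages API (an assumption of this sketch).
payload = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize the key risks in this contract."}
    ],
}

# This body would be POSTed to https://api.anthropic.com/v1/messages
# with your API key in the request headers.
body = json.dumps(payload)
print(body)
```

For an existing Sonnet 4.5 integration, swapping the `model` string is the whole change for a basic request like this one.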

...

Read the original on www.anthropic.com »

2 570 shares, 90 trendiness

15+ years later, Microsoft morged my diagram

A few days ago, people started tagging me on Bluesky and Hacker News about a diagram on Microsoft's Learn portal. It looked… familiar.

In 2010, I wrote "A successful Git branching model" and created a diagram to go with it. I designed that diagram in Apple Keynote, at the time obsessing over the colors, the curves, and the layout until it clearly communicated how branches relate to each other over time. I also published the source file so others could build on it. That diagram has since spread everywhere: in books, talks, blog posts, team wikis, and YouTube videos. I never minded. That was the whole point: sharing knowledge and letting the internet take it by storm!

What I did not expect was for Microsoft, a trillion-dollar company, some 15+ years later, to apparently run it through an AI image generator and publish the result on their official Learn portal, without any credit or link back to the original.

The AI rip-off was not just ugly. It was careless, blatantly amateurish, and lacking any ambition, to put it gently. Unworthy of Microsoft. The carefully crafted visual language and layout of the original, the branch colors, the lane design, the dot and bubble alignment that made the original so readable—all of it had been muddled into a laughable form. Proper AI slop.

Arrows missing or pointing in the wrong direction, and the obvious "continvoucly morged" text, quickly gave it away as a cheap AI artifact.

It had the rough shape of my diagram though. Enough, actually, that people recognized the original in it and started calling Microsoft out on it and reaching out to me. That so many people were upset about this was really nice, honestly. That, and "continvoucly morged" was a very fun meme—thank you, internet! 😄

Oh god yes, Microsoft continvoucly morged my diagram there for sure 😬— Vincent Driessen (@nvie.com) 2026-02-16T20:55:54.762Z

Other than that, I find this whole thing mostly very saddening. Not because some company used my diagram. As I said, it's been everywhere for 15 years and I've always been fine with that. What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond "generating content"?

What worries me slightly is that this time around, the diagram was both well-known enough and obviously AI-slop-y enough that it was easy to spot as plagiarism. But we all know there will be more and more content like this that isn't so well known, or that gets mutated or disguised in more advanced ways, so the plagiarism will no longer be recognizable as such.

I don't need much here. A simple link back and attribution to the original article would be a good start. I would also be interested in understanding how this Learn page at Microsoft came to be: what the goals were, what process led to the creation of this ugly asset, and how there seemingly was no proofreading for a document used as a learning resource by many developers.

...

Read the original on nvie.com »

3 542 shares, 47 trendiness

Thousands of CEOs just admitted AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago



...

Read the original on fortune.com »

4 492 shares, 13 trendiness

Stephen Colbert says CBS didn't air interview out of fear of FCC

"Late Show" host Stephen Colbert said CBS did not air his Monday interview with Texas state Rep. James Talarico out of fear of the Federal Communications Commission.

Colbert kicked off Monday night's show by almost immediately mentioning Talarico's absence.

"He was supposed to be here, but we were told in no uncertain terms by our network's lawyers, who called us directly, that we could not have him on the broadcast," Colbert said. "Then, then I was told in some uncertain terms that not only could I not have him on, I could not mention me not having him on. And because my network clearly doesn't want us to talk about this, let's talk about this."

"The Late Show" published the unaired interview with Talarico on YouTube. In the interview, Colbert and Talarico, who is running for the U.S. Senate, discuss the FCC crackdown, including opening a probe into ABC's "The View," after Talarico appeared on the show.

"I think that Donald Trump is worried that we're about to flip Texas," Talarico said, which was met with audience applause. "This is the party that ran against cancel culture, and now they're trying to control what we watch, what we say, what we read. And this is the most dangerous kind of cancel culture, the kind that comes from the top."

Talarico accused the Trump administration of "selling out the First Amendment to curry favor with corrupt politicians."

"A threat to any of our First Amendment rights is a threat to all of our First Amendment rights."

In an emailed statement, CBS said: "THE LATE SHOW was not prohibited by CBS from broadcasting the interview with Rep. James Talarico. The show was provided legal guidance that the broadcast could trigger the FCC equal-time rule for two other candidates, including Rep. Jasmine Crockett, and presented options for how the equal time for other candidates could be fulfilled. THE LATE SHOW decided to present the interview through its YouTube channel with on-air promotion on the broadcast rather than potentially providing the equal-time options."

In a statement, the White House defended the FCC. "Stephen Colbert is a pathetic trainwreck with no talent and terrible ratings, which is exactly why CBS canceled his show and is booting him off the airwaves," White House spokesman Davis Ingle said. "FCC Chairman Brendan Carr is simply doing his job and enforcing the rules."

At a political event Tuesday night in Austin, hundreds of supporters burst into applause and cheers when Talarico opened his rally by referring to the Colbert interview.

"Trump's FCC colluded with corporate media executives at CBS to keep that interview off the air, and I think it's safe to say that their plan backfired," Talarico said to a packed event room on the first day of early voting ahead of the March 3 primary.

Talarico's rival in the Texas Senate Democratic primary, Rep. Jasmine Crockett, appeared on Colbert's show in May.

CBS' move to not air the segment comes as the FCC, the government's media regulator, and most notably its chairman, Brendan Carr, have been particularly combative with networks that have drawn the ire of the president.

Trump has for months suggested the FCC could revoke the licenses of television broadcasters. More recently, Carr, who was appointed by Trump to lead the FCC, has said that daytime and late-night TV talk shows must comply with the equal time rule regarding political candidates.

The FCC's equal time rule prohibits radio and broadcast channels from hosting political candidates during an election without giving airtime to their opponents. During his show Monday, Colbert highlighted that news interviews and talk show interviews with politicians are exceptions.

On Jan. 21, Carr released a letter warning networks about the rule, saying that he is considering eliminating exceptions due to the networks' potential partisan motivations.

Colbert fired back at Carr on Monday, accusing the chairman of being motivated by partisan purposes.

"Let's just call this what it is: Donald Trump's administration wants to silence anyone who says anything bad about Trump on TV because all Trump does is watch TV," Colbert joked.

In a statement, FCC Commissioner Anna M. Gomez called Monday's incident "another troubling example of corporate capitulation in the face of this Administration's broader campaign to censor and control speech."

"The FCC has no lawful authority to pressure broadcasters for political purposes or to create a climate that chills free expression," Gomez, the lone Democratic commissioner, said in the statement. "CBS is fully protected under the First Amendment to determine what interviews it airs, which makes its decision to yield to political pressure all the more disappointing."

This comes months after ABC pulled "Jimmy Kimmel Live!" off the air "indefinitely" after Carr criticized comments the host made about the assassinated conservative activist Charlie Kirk.

Kimmel accused the "MAGA Gang" of trying to "score political points" by characterizing the suspect as anything other than "one of them."

Kimmel's show was pulled a couple of days later and returned to the air after about a week.

In July, CBS announced that Colbert's show would come to an end in 2026, calling it "purely a financial decision against a challenging backdrop in late night."

"It is not related in any way to the show's performance, content or other matters happening at Paramount," CBS executives said in a joint statement.

The host has been a frequent critic of the Trump administration.

...

Read the original on www.nbcnews.com »

5 491 shares, 46 trendiness

TV’s Best Drama You’ve Probably Never Heard Of — Scene+Heard

The best thing the show's writers ever did was realize that Joe wasn't the most interesting character. Subsequent seasons trace the dissolution of his complex, as he finds himself confronting the limits of his charisma and the consequences of his actions. It's the death of the antihero, and in its place rises a show imbued with newfound life, as the burgeoning business partnership between its two main female characters becomes the central narrative.

Season 2's opening sequence establishes this wonderfully energetic change of pace with a three-minute scene shot entirely in one take. The handheld camera swings and pans around a suburban home crammed with coders, construction tools and cables strewn across the ground. It's a cinematographic manifestation of the crackling energy, messiness and all, between people taking a risk to create something new. Here, we meet Mutiny, Donna and Cameron's video game subscription service that takes center stage in Seasons 2 and 3.

As the two navigate the passions and pitfalls of running a startup, the melodramatic tension of the first season is replaced with a palpable lightness and ambition. There are still plenty of great dramatic revelations and story beats, but none of it feels forced or in service of a half-baked antihero arc. The stakes feel genuine and emotionally potent.

The partnership between Donna and Cameron is largely the impetus for this. I can't think of a better portrayal of female friendship on television than the one in this show. Rather than being defined by their relations to Joe and Gordon, or by tropes like the working mother, they're given agency and allowed to be flawed and ambitious and all the other things media has constantly told women not to be.

Cameron, who grew up learning how to survive on her own, opens up to collaborate and trust others — but there's a constant fear of losing the company to which she's dedicated her whole life. Donna, who has experienced the heartbreak of a failed product once before, comes into her own as a leader — but, by trying to always make the most logical decisions for the company, loses the partnership she needed most.

The progression of their friendship — the ways in which they support, hurt, and eventually forgive each other — is treated with such nuance, and it's a genuinely moving relationship to watch unfold.

Their bond is just one of the many complex dynamics this show explores. As the show matures, so do its characters. Joe learns to understand the importance of those around him — that people are not only the means to an end, but the end itself. Gordon, so eager in earlier seasons to prove himself and be remembered for something, finds confidence and peace in the present, and leaves a legacy that will long reverberate in characters and viewers alike. As much as these characters grow and evolve, what remains at their core is what brought them together in the first place: a shared ambition to build something that makes a difference in the world.

...

Read the original on www.sceneandheardnu.com »

6 435 shares, 24 trendiness

Tesla 'Robotaxi' adds 5 more crashes in Austin in a month — 4x worse than humans

Tesla has reported five new crashes involving its "Robotaxi" fleet in Austin, Texas, bringing the total to 14 incidents since the service launched in June 2025. The newly filed NHTSA data also reveals that Tesla quietly upgraded one earlier crash to include a hospitalization injury, something the company never disclosed publicly.

The new data comes from the latest update to NHTSA's Standing General Order (SGO) incident report database for automated driving systems (ADS). We have been tracking Tesla's Robotaxi crash data closely, and the trend is not improving.

Tesla submitted five new crash reports in January 2026, covering incidents from December 2025 and January 2026. All five involved Model Y vehicles operating with the autonomous driving system "verified engaged" in Austin.

The new crashes include a collision with a fixed object at 17 mph while the vehicle was driving straight, a crash with a bus while the Tesla was stationary, a collision with a heavy truck at 4 mph, and two separate incidents where the Tesla backed into objects: one into a pole or tree at 1 mph and another into a fixed object at 2 mph.

As with every previous Tesla crash in the database, all five new incident narratives are fully redacted as "confidential business information." Tesla remains the only ADS operator to systematically hide crash details from the public through NHTSA's confidentiality provisions. Waymo, Zoox, and every other company in the database provide full narrative descriptions of their incidents.

Buried in the updated data is a revised report for a July 2025 crash (Report ID 13781-11375) that Tesla originally filed as "property damage only." In December 2025, Tesla submitted a third version of that report upgrading the injury severity to "Minor W/ Hospitalization."

This means someone involved in a Tesla "Robotaxi" crash required hospital treatment. The original crash involved a right-turn collision with an SUV at 2 mph. Tesla's delayed admission of hospitalization, five months after the incident, raises more questions about its crash reporting, which is already heavily redacted.

With 14 crashes now on the books, Tesla's "Robotaxi" crash rate in Austin continues to deteriorate. Extrapolating from Tesla's Q4 2025 earnings mileage data, which showed roughly 700,000 cumulative paid miles through November, the fleet likely reached around 800,000 miles by mid-January 2026. That works out to one crash every 57,000 miles.

The irony is that Tesla's own numbers condemn it. Tesla's Vehicle Safety Report claims the average American driver experiences a minor collision every 229,000 miles and a major collision every 699,000 miles. By Tesla's own benchmark, its "Robotaxi" fleet is crashing nearly 4 times more often than the minor-collision rate the company says is normal for a regular human driver. And virtually every single one of those miles was driven with a trained safety monitor in the vehicle who could intervene at any moment, meaning the monitors likely prevented additional crashes that Tesla's system would not have avoided on its own.

Using NHTSA's broader police-reported crash average of roughly one per 500,000 miles, the picture is even worse: Tesla's fleet is crashing at approximately 8 times the human rate.
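The multiples above follow directly from the article's round figures (14 crashes, roughly 800,000 fleet miles, and the two per-mile baselines); a quick arithmetic check:

```python
crashes = 14
fleet_miles = 800_000            # article's estimated paid miles through mid-January 2026

miles_per_crash = fleet_miles / crashes   # roughly one crash per 57,000 miles

# Tesla's Vehicle Safety Report baseline: one minor collision per 229,000 miles
minor_ratio = 229_000 / miles_per_crash   # about 4x the claimed human rate

# NHTSA's broader police-reported average: roughly one crash per 500,000 miles
police_ratio = 500_000 / miles_per_crash  # about 8.75x

print(round(miles_per_crash), round(minor_ratio, 1), round(police_ratio, 2))
```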

Meanwhile, Waymo has logged over 127 million fully driverless miles, with no safety driver, no monitor, no chase car, and independent research shows Waymo reduces injury-causing crashes by 80% and serious-injury crashes by 91% compared to human drivers. Waymo reports 51 incidents in Austin alone in this same NHTSA database, but its fleet has driven orders of magnitude more miles in the city than Tesla's supervised "robotaxis."

Here's a full list of Tesla's ADS crashes related to the Austin Robotaxi service:

We keep updating this story because the data keeps getting worse. Five more crashes, a quietly upgraded hospitalization, and total narrative redaction across the board, all from a company that claims its autonomous driving system is safer than humans.

Tesla fans and shareholders hold on to the idea that the company's robotaxis are not responsible for some of these crashes. That is true, even though fault is much harder to determine with Tesla redacting the narrative on every crash. But the problem is that even Tesla's own benchmark shows humans have fewer crashes.

The 14 crashes over roughly 800,000 miles yield a crash rate of one crash every 57,000 miles. Tesla's own safety data indicate that a typical human driver has a minor collision every 229,000 miles, whether or not they are at fault.

By the company's own numbers, its "Robotaxi" fleet crashes nearly 4 times more often than a normal driver, and every single one of those miles had a safety monitor who could hit the kill switch. That is not a rounding error or an early-program hiccup; it is a fundamental performance gap.
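For reference, the arithmetic behind those ratios can be sketched as follows, using only the figures quoted above:

```go
package main

import "fmt"

// The arithmetic behind the comparison above, using only the figures
// quoted in the article.
func milesPerCrash(miles, crashes float64) float64 {
	return miles / crashes
}

func main() {
	robotaxi := milesPerCrash(800_000, 14) // ≈ 57,000 miles per crash
	teslaBenchmark := 229_000.0            // Tesla's claimed miles per minor human collision
	nhtsaBenchmark := 500_000.0            // approx. miles per police-reported crash

	fmt.Printf("one crash every %.0f miles\n", robotaxi)
	fmt.Printf("%.1fx worse than Tesla's own benchmark\n", teslaBenchmark/robotaxi)
	fmt.Printf("%.1fx worse than the NHTSA average\n", nhtsaBenchmark/robotaxi)
}
```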

What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database (Waymo, Zoox, Aurora, Nuro) provides detailed narratives explaining what happened in each crash. Tesla redacts everything. We cannot independently assess whether Tesla's system was at fault, whether the safety monitor failed to intervene in time, or whether these were unavoidable situations caused by other road users. Tesla wants us to trust its safety record while making it impossible to verify.

The cra­zi­est part is that Tesla be­gan of­fer­ing rides with­out a safety mon­i­tor in Austin in late January 2026, just af­ter it ex­pe­ri­enced 4 crashes in the first half of the month.

As we reported in our status check on the program yesterday, the service currently has roughly 42 active cars in Austin with below 20% availability, and the driverless rides are extremely limited, not running most of the time. Even so, it is worrisome that Tesla would attempt that at all, knowing its crash rate is still higher than human drivers even with a safety monitor in the front passenger seat.

The fact that reg­u­la­tors are not get­ting in­volved tells you every­thing you need to know about the state of the US/Texas gov­ern­ment right now.

...

Read the original on electrek.co »

7 395 shares, 24 trendiness

AsteroidOS 2.0 Released

Asteroids travel steadily, and only occasionally does their progress become observable. It has been a while since our last release, and now it's finally here!

AsteroidOS 2.0 has ar­rived, bring­ing ma­jor fea­tures and im­prove­ments gath­ered dur­ing its jour­ney through com­mu­nity space. Always-on-Display, ex­panded sup­port for more watches, new launcher styles, cus­tomiz­able quick set­tings, sig­nif­i­cant per­for­mance in­creases in parts of the User Interface, and en­hance­ments to our syn­chro­niza­tion clients are just some high­lights of what to ex­pect.

New QuickPanel

The former QuickSettings top menu on the homescreen has been reworked into a highly customizable QuickPanel, with many more settings toggles, app shortcuts, and remorse-timer-driven power off.

New App Launchers

Seven more App Launcher styles have been added. Those can be se­lected in the new Launcher set­tings page.

Enhanced Wallpaper and Watchface gallery

Watchfaces are now paired with the user-selected wallpaper directly in the Watchface gallery, helping you find your favourite combination at a glance. Both pages received major performance improvements.

Nightstand mode

Use your watch as a bedside clock, or simply show charging status much more clearly. Selected watchfaces show a large charging status when power is connected.

The night­stand set­tings page makes this mode very ver­sa­tile.

New back­ground an­i­ma­tion

Reworked design for a more organic feeling of "breathing".

New wall­pa­pers

Extending the well-received flatmesh design, triangulated wallpapers turned out to fit beautifully.

Diamonds

A 2048-like game with a fresh twist, nicely suited to small resolutions and displays.

Weather app de­sign over­haul

Embracing the possibilities that Noto Sans and its vast variety of font styles offer, the weather app has been refined for better legibility and better presentation of very long place names.

Timer app re­design

The timer app now works in the background. It has been optimised for use on round watches, and its design is now consistent with the stopwatch.

Flashlight app

Yup, it flashes light. Most use­ful, so it got added to the stock se­lec­tion.

Animated Bootsplash logo

A very small touch, but yet another opportunity for designers to get involved.

Round screens with a flat tyre shape are now sup­ported.

Calculator app with new lay­out

Improved but­ton lay­out for eas­ier op­er­a­tion and bet­ter leg­i­bil­ity, es­pe­cially on round dis­plays.

New UI el­e­ments and pol­ished icons

Improved tog­gles, progress bars and other UI el­e­ments by uni­fy­ing the de­sign and re­mov­ing in­con­sis­ten­cies.

More trans­la­tions (49 lan­guages)

More than 20 languages have been added since our last release, thanks to much-welcomed community effort.

Noto Sans sys­tem font

Supporting the localisation efforts, Noto Sans offers consistent font styles for many languages with custom character sets.

Color Emoji

Underlining the flat na­ture of our UI, we moved from Noto Color Emoji to Twemoji.

Ringtone vi­bra­tion pat­tern

Customise all the things! Here, the vi­bra­tion pat­tern on in­com­ing calls.

Optimized Rendering

Significant per­for­mance im­prove­ments have been made to the User Interface, re­sult­ing in smoother an­i­ma­tions and tran­si­tions.

Battery Life Improvements

Various op­ti­miza­tions have been im­ple­mented to ex­tend bat­tery life dur­ing daily use.

Stability Fixes

Numerous bug fixes and sta­bil­ity im­prove­ments have been ap­plied across the sys­tem.

Since 1.0 we added sup­port for the fol­low­ing watches:

And par­tial sup­port for the fol­low­ing watches:

Casio WSD-F10/F20 (koi, ayu) - bricking has been reported on some watches

Moto 360 1st gen (minnow) - has underwhelming performance; it is the only watch we have ported with a TI SoC

Samsung Gear 2 (rinato) - too unstable, with poor power management

Samsung Gear Live (sprat) - in an un­us­able state due to per­sis­tent dis­play is­sues

We have created an "Experimental" category in our watch gallery for the above 5 watches, since we do not consider them suitable for daily use. We will however continue to provide install images for these watches, and we welcome new contributors with fresh ideas to help improve support! We also continue to monitor supported watches and, for example, recently demoted the Sony Smartwatch 3 (tetra) due to unresolved hardware support issues.

The Samsung Gear 2 (rinato) is our first watch sup­ported with a main­line Linux ker­nel and there­fore with­out the use of lib­hy­bris. The Asus Zenwatch 2 (sparrow) also has very ba­sic sup­port for run­ning on a main­line Linux ker­nel.

For a com­plete list of sup­ported de­vices and in­stal­la­tion in­struc­tions, please visit our in­stal­la­tion guide.

Apart from adding new watches, the com­mu­nity has also been ac­tively en­hanc­ing the sup­port for our ex­ist­ing range of watches. Visit our newly cre­ated fea­ture ma­trix page to find out about the de­tailed sup­port level for your watch.

New Bluetooth lib should im­prove sta­bil­ity and sim­plify the pair­ing process

A more modular architecture, allowing for easier extension and maintainability of the app.

Thanks to Noodlez, ini­tial AsteroidOS sup­port has been added to Gadgetbridge ver­sion 0.73.0.

Jozef Mlich has added AsteroidOS support to Adam Pigg's Amazfish. Initially developed for SailfishOS, Amazfish is now also available in a Kirigami flavour for Linux desktops.

After our initial release, StefWe created Telescope, a sync client for UBports.

This re­lease would not have been pos­si­ble with­out the ded­i­cated ef­forts of our com­mu­nity con­trib­u­tors. We ex­tend our heart­felt thanks to every­one who re­ported is­sues, sub­mit­ted patches, and pro­vided feed­back dur­ing the de­vel­op­ment cy­cle.

Over the years, the AsteroidOS com­mu­nity has ex­panded its reach, with com­mu­nity trans­la­tors adding over 20 lan­guages to the AsteroidOS Weblate. Translating into your lo­cal lan­guage is the eas­i­est way to get in­volved. Your help is most valu­able to make AsteroidOS fit for use in your re­gion.

Watchface cre­ation has been a pop­u­lar com­mu­nity ac­tiv­ity lately. We are happy to pre­sent the new and com­pre­hen­sive watch­faces cre­ation and de­sign guide. It is gar­nished with test­ing and de­ploy­ment scripts to sim­plify the process fur­ther. Our com­mu­nity came up with funny and beau­ti­ful new watch­faces. Those are all col­lected in the un­of­fi­cial watch­faces repos­i­tory.

moWerk has contributed a variety of watchfaces. Two highlights are the minimalistic pulsedot and a classic Monty Python-inspired silly walks watchface.

MagneFire showed off Doom, Super Tux Kart, gpSP, and other emulated games on his watch. The native 2048 port, called diamonds, was recently included in the stock set of apps.

Dodoradio worked on a few unusual watches, like the LG Watch W7, whose physical hands had to be taken into account, and the Casio WSD-FXX series with its multifunctional secondary displays, along with some more conventional ports such as the Fossil Gen 5 and Polar M600. For watches with GPS, he contributed a Map app with waypoint functionality to the community repository. His initial version of the often-requested asteroid-health app is already capable of automatically tracking steps and heart rate with minimal impact on battery life.

Beroset implemented the new Nightstand mode. In addition to his efforts in maintaining the build tools, Beroset has also developed host-tools, which make it easier to work on watches from a Linux host. Furthermore, he has included a user-friendly GUI for deploying watchfaces and created asteroid-weatherfetch, a convenient app that downloads weather data using the watch's IP connection.

PostmarketOS now of­fers our launcher and core apps, thanks to post­mar­ke­tOS de­vel­oper PureTryOut, who moved our buildsys­tem from qmake to cmake along the way.

The program lcd-tools, by lecris and MagneFire, was originally developed to control the secondary LCD on the TicWatch Pro. It was extended by dodoradio and beroset to make use of many more features the Casio secondary displays offer.

Our web­site as­ter­oi­dos.org has seen a ma­jor con­tent ex­ten­sion.

A FAQ sec­tion has been added to pro­vide a quick overview of our pro­ject.

The Install page has grown over time into a gallery of supported watches, with 30 watches now listed. We renamed it to "Watches" and plan to evolve this page into a purchase guide to aid new users in choosing a supported watch. A first step was to resize the images of all watches to correctly reflect the relative size differences between them, making it possible to compare their dimensions.

The Documentation pages are fre­quently up­dated by com­mu­nity mem­bers and nicely keep up with the cur­rent state of de­vel­op­ment. We re­cently moved them into a MediaWiki. This en­ables users with­out deeper knowl­edge to con­tribute to the doc­u­men­ta­tion much more eas­ily.

The cre­ator of the un­of­fi­cial Subreddit gave us full ac­cess, mak­ing it an of­fi­cial chan­nel along­side our Mastodon ac­count.

As we already mentioned in a previous blog post, we moved all our communication from freenode to Matrix and Libera.chat. You are invited to join the AsteroidOS Matrix channel using this link: https://matrix.to/#/#Asteroid:matrix.org

With 2.0 we in­tro­duce a com­mu­nity repos­i­tory, to im­prove dis­cov­er­abil­ity and sim­plify the in­stal­la­tion of pre­com­piled pack­ages, while build­ing the foun­da­tion for a pos­si­ble graph­i­cal soft­ware cen­ter in the fu­ture. Currently, the repos­i­tory con­sists of a few de­bug­ging tools, com­mu­nity watch­faces, games and em­u­la­tors. Developers are wel­come to cre­ate pull re­quests on the meta-com­mu­nity repo for pack­ag­ing.

After mov­ing our in­fra­struc­ture to a larger server, we have seen an in­crease in the fre­quency of nightly re­leases. However, it is worth not­ing that com­pletely re­build­ing all pack­ages for all 30 watch sys­tem im­ages still takes al­most a week. Therefore, we can ex­pect the nightlies to be ready on week­ends.

Interested in con­tribut­ing to AsteroidOS? Whether you’re a de­vel­oper, de­signer, or en­thu­si­ast, there are many ways to get in­volved:

Join our com­mu­nity fo­rums to dis­cuss ideas and share feed­back.

Report is­sues or sug­gest fea­tures on our GitHub repos­i­tory.

Help with trans­lat­ing AsteroidOS to your lan­guage us­ing Weblate.

Contribute to the code­base by tack­ling open is­sues or de­vel­op­ing new fea­tures.

Your par­tic­i­pa­tion helps make AsteroidOS bet­ter for every­one.

Ready to ex­pe­ri­ence the lat­est fea­tures and im­prove­ments? Download AsteroidOS 2.0 from our of­fi­cial web­site and fol­low the in­stal­la­tion in­struc­tions for your de­vice.

Thank you for your con­tin­ued sup­port. We hope you en­joy AsteroidOS 2.0!

As you might have noticed, the current releases linked on the installation pages have feature parity with the 2.0 release. At some point, we decided to switch from our stable 1.0 release to a quasi 1.1 nightly rolling release, as the 1.0 release became too old to maintain. In the future, we would like to change our release cycle to offer more frequent stable releases. A stable release will always be stable, but never so old that it can no longer be maintained.

For the fu­ture, we are go­ing to set up a roadmap for fea­tures we would like to see in an even­tual next re­lease. Based on re­cent early com­mu­nity work, we might see fea­tures like:

Written by the AsteroidOS Team on 17/02/2026

...

Read the original on asteroidos.org »

8 372 shares, 17 trendiness

Using go fix to modernize Go code

The 1.26 release of Go this month includes a completely rewritten go fix subcommand. Go fix uses a suite of algorithms to identify opportunities to improve your code, often by taking advantage of more modern features of the language and library. In this post, we'll first show you how to use go fix to modernize your Go codebase. Then, in the second section, we'll dive into the infrastructure behind it and how it is evolving. Finally, we'll present the theme of "self-service" analysis tools that help module maintainers and organizations encode their own guidelines and best practices.

The go fix com­mand, like go build and go vet, ac­cepts a set of pat­terns that de­note pack­ages. This com­mand fixes all pack­ages be­neath the cur­rent di­rec­tory:

$ go fix ./...

On suc­cess, it silently up­dates your source files. It dis­cards any fix that touches gen­er­ated files since the ap­pro­pri­ate fix in that case is to the logic of the gen­er­a­tor it­self. We rec­om­mend run­ning go fix over your pro­ject each time you up­date your build to a newer Go tool­chain re­lease. Since the com­mand may fix hun­dreds of files, start from a clean git state so that the change con­sists only of ed­its from go fix; your code re­view­ers will thank you.

To pre­view the changes the above com­mand would have made, use the -diff flag:

$ go fix -diff ./...

--- dir/file.go (old)
+++ dir/file.go (new)

-    eq := strings.IndexByte(pair, '=')
-    result[pair[:eq]] = pair[1+eq:]
+    before, after, _ := strings.Cut(pair, "=")
+    result[before] = after

You can list the avail­able fix­ers by run­ning this com­mand:

$ go tool fix help

Registered an­a­lyz­ers:

any re­place in­ter­face{} with any

build­tag check //go:build and // +build di­rec­tives

fm­tap­pendf re­place []byte(fmt.Sprintf) with fmt.Ap­pendf

for­var re­move re­dun­dant re-de­c­la­ra­tion of loop vari­ables

host­port check for­mat of ad­dresses passed to net.Dial

inline apply fixes based on 'go:fix inline' comment directives

map­sloop re­place ex­plicit loops over maps with calls to maps pack­age

min­max re­place if/​else state­ments with calls to min or max

Adding the name of a par­tic­u­lar an­a­lyzer shows its com­plete doc­u­men­ta­tion:

$ go tool fix help for­var

for­var: re­move re­dun­dant re-de­c­la­ra­tion of loop vari­ables

The for­var an­a­lyzer re­moves un­nec­es­sary shad­ow­ing of loop vari­ables.

Before Go 1.22, it was com­mon to write `for _, x := range s { x := x … }`

to cre­ate a fresh vari­able for each it­er­a­tion. Go 1.22 changed the se­man­tics

of `for` loops, mak­ing this pat­tern re­dun­dant. This an­a­lyzer re­moves the

un­nec­es­sary `x := x` state­ment.

This fix only ap­plies to `range` loops.

By de­fault, the go fix com­mand runs all an­a­lyz­ers. When fix­ing a large pro­ject it may re­duce the bur­den of code re­view if you ap­ply fixes from the most pro­lific an­a­lyz­ers as sep­a­rate code changes. To en­able only spe­cific an­a­lyz­ers, use the flags match­ing their names. For ex­am­ple, to run just the any fixer, spec­ify the -any flag. Conversely, to run all the an­a­lyz­ers ex­cept se­lected ones, negate the flags, for in­stance -any=false.

As with go build and go vet, each run of the go fix com­mand an­a­lyzes only a spe­cific build con­fig­u­ra­tion. If your pro­ject makes heavy use of files tagged for dif­fer­ent CPUs or plat­forms, you may wish to run the com­mand more than once with dif­fer­ent val­ues of GOARCH and GOOS for bet­ter cov­er­age:

$ GOOS=linux GOARCH=amd64 go fix ./...

$ GOOS=darwin GOARCH=arm64 go fix ./...

$ GOOS=windows GOARCH=amd64 go fix ./...

Running the com­mand more than once also pro­vides op­por­tu­ni­ties for syn­er­gis­tic fixes, as we’ll see be­low.

The introduction of generics in Go 1.18 marked the end of an era of very few changes to the language spec and the start of a period of more rapid—though still careful—change, especially in the libraries. Many of the trivial loops that Go programmers routinely write, such as to gather the keys of a map into a slice, can now be conveniently expressed as a call to a generic function such as maps.Keys. Consequently, these new features create many opportunities to simplify existing code.

In December 2024, during the frenzied adoption of LLM coding assistants, we became aware that such tools tended—unsurprisingly—to produce Go code in a style similar to the mass of Go code used during training, even when there were newer, better ways to express the same idea. Less obviously, the same tools often refused to use the newer ways even when directed to do so in general terms such as "always use the latest idioms of Go 1.25." In some cases, even when explicitly told to use a feature, the model would deny that it existed. (See my 2025 GopherCon talk for more exasperating details.) To ensure that future models are trained on the latest idioms, we need to ensure that these idioms are reflected in the training data, which is to say the global corpus of open-source Go code.

Over the past year, we have built dozens of an­a­lyz­ers to iden­tify op­por­tu­ni­ties for mod­ern­iza­tion. Here are three ex­am­ples of the fixes they sug­gest:

min­max re­places an if state­ment by a use of Go 1.21’s min or max func­tions:

rangeint re­places a 3-clause for loop by a Go 1.22 range-over-int loop:

stringscut (whose -diff output we saw earlier) replaces uses of strings.Index and slicing by Go 1.18's strings.Cut:

These mod­ern­iz­ers are in­cluded in go­pls, to pro­vide in­stant feed­back as you type, and in go fix, so that you can mod­ern­ize sev­eral en­tire pack­ages at once in a sin­gle com­mand. In ad­di­tion to mak­ing code clearer, mod­ern­iz­ers may help Go pro­gram­mers learn about newer fea­tures. As part of the process of ap­prov­ing each new change to the lan­guage and stan­dard li­brary, the pro­posal re­view group now con­sid­ers whether it should be ac­com­pa­nied by a mod­ern­izer. We ex­pect to add more mod­ern­iz­ers with each re­lease.

Go 1.26 includes a small but widely useful change to the language specification. The built-in new function creates a new variable and returns its address. Historically, its sole argument was required to be a type, such as new(string), and the new variable was initialized to its "zero" value, such as "" for a string. In Go 1.26, the new function may be called with any value, causing it to create a variable initialized to that value, avoiding the need for an additional statement. For example:

This feature filled a gap that had been discussed for over a decade and resolved one of the most popular proposals for a change to the language. It is especially convenient in code that uses a pointer type *T to indicate an optional value of type T, as is common when working with serialization packages such as json.Marshal or protocol buffers. This is such a common pattern that people often capture it in a helper, such as the newInt function below, saving the caller from the need to break out of an expression context to introduce additional statements:

type RequestJSON struct {
    URL      string
    Attempts *int // (optional)
}

data, err := json.Marshal(&RequestJSON{
    URL:      url,
    Attempts: newInt(10),
})

func newInt(x int) *int { return &x }

Helpers such as newInt are so frequently needed with protocol buffers that the proto API itself provides them as proto.Int64, proto.String, and so on. But Go 1.26 makes all these helpers unnecessary:

data, err := json.Marshal(&RequestJSON{
    URL:      url,
    Attempts: new(10),
})

To help you take advantage of this feature, the go fix command now includes a fixer, newexpr, that recognizes "new-like" functions such as newInt and suggests fixes to replace the function body with return new(x) and to replace every call, whether in the same package or an importing package, with a direct use of new(expr).

To avoid in­tro­duc­ing pre­ma­ture uses of new fea­tures, mod­ern­iz­ers of­fer fixes only in files that re­quire at least the min­i­mum ap­pro­pri­ate ver­sion of Go (1.26 in this in­stance), ei­ther through a go 1.26 di­rec­tive in the en­clos­ing go.mod file or a //go:build go1.26 build con­straint in the file it­self.

Run this com­mand to up­date all calls of this form in your source tree:

$ go fix -newexpr ./...

At this point, with luck, all of your newInt-like helper func­tions will have be­come un­used and may be safely deleted (assuming they aren’t part of a sta­ble pub­lished API). A few calls may re­main where it would be un­safe to sug­gest a fix, such as when the name new is lo­cally shad­owed by an­other de­c­la­ra­tion. You can also use the dead­code com­mand to help iden­tify un­used func­tions.

Applying one modernization may create opportunities to apply another. For example, this snippet of code, which clamps x to the range 0–100, causes the minmax modernizer to suggest a fix to use max. Once that fix is applied, it suggests a second fix, this time to use min.
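A minimal sketch of that clamp, showing the form before the fixes and the form after both have been applied (hand-written here, not go fix output):

```go
package main

import "fmt"

// Before the fixes: two if statements clamp x to 0..100.
func clampBefore(x int) int {
	if x < 0 {
		x = 0
	}
	if x > 100 {
		x = 100
	}
	return x
}

// First fix: the lower bound becomes max(x, 0).
// Second fix, enabled by the first: the result becomes min(..., 100).
func clampAfter(x int) int {
	return min(max(x, 0), 100)
}

func main() {
	for _, x := range []int{-5, 50, 250} {
		fmt.Println(x, clampBefore(x), clampAfter(x))
	}
}
```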

Synergies may also occur between different analyzers. For example, a common mistake is to repeatedly concatenate strings within a loop, resulting in quadratic time complexity—a bug and a potential vector for a denial-of-service attack. The stringsbuilder modernizer recognizes the problem and suggests using Go 1.10's strings.Builder:

Once this fix is applied, a second analyzer may recognize that the WriteString and Sprintf operations can be combined as fmt.Fprintf(&s, "%02x", b), which is both cleaner and more efficient, and offer a second fix. (This second analyzer is QF1012 from Dominik Honnef's staticcheck, which is already enabled in gopls but not yet in go fix, though we plan to add staticcheck analyzers to the go command starting in Go 1.27.)
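A sketch of both forms, assuming a hex-encoding loop of the kind the example alludes to (hand-written, not actual tool output):

```go
package main

import (
	"fmt"
	"strings"
)

// Quadratic concatenation: s is copied on every iteration, O(n²).
func hexQuadratic(data []byte) string {
	s := ""
	for _, b := range data {
		s += fmt.Sprintf("%02x", b)
	}
	return s
}

// After the stringsbuilder fix, plus the follow-up QF1012 fix that
// combines WriteString and Sprintf into a single Fprintf call.
func hexBuilder(data []byte) string {
	var s strings.Builder
	for _, b := range data {
		fmt.Fprintf(&s, "%02x", b)
	}
	return s.String()
}

func main() {
	data := []byte{0xde, 0xad, 0xbe, 0xef}
	fmt.Println(hexQuadratic(data), hexBuilder(data))
}
```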

Consequently, it may be worth run­ning go fix more than once un­til it reaches a fixed point; twice is usu­ally enough.

A sin­gle run of go fix may ap­ply dozens of fixes within the same source file. All fixes are con­cep­tu­ally in­de­pen­dent, anal­o­gous to a set of git com­mits with the same par­ent. The go fix com­mand uses a sim­ple three-way merge al­go­rithm to rec­on­cile the fixes in se­quence, anal­o­gous to the task of merg­ing a set of git com­mits that edit the same file. If a fix con­flicts with the list of ed­its ac­cu­mu­lated so far, it is dis­carded, and the tool is­sues a warn­ing that some fixes were skipped and that the tool should be run again.

This re­li­ably de­tects syn­tac­tic con­flicts aris­ing from over­lap­ping ed­its, but an­other class of con­flict is pos­si­ble: a se­man­tic con­flict oc­curs when two changes are tex­tu­ally in­de­pen­dent but their mean­ings are in­com­pat­i­ble. As an ex­am­ple con­sider two fixes that each re­move the sec­ond-to-last use of a lo­cal vari­able: each fix is fine by it­self, but when both are ap­plied to­gether the lo­cal vari­able be­comes un­used, and in Go that’s a com­pi­la­tion er­ror. Neither fix is re­spon­si­ble for re­mov­ing the vari­able de­c­la­ra­tion, but some­one has to do it, and that some­one is the user of go fix.

A sim­i­lar se­man­tic con­flict arises when a set of fixes causes an im­port to be­come un­used. Because this case is so com­mon, the go fix com­mand ap­plies a fi­nal pass to de­tect un­used im­ports and re­move them au­to­mat­i­cally.

Semantic con­flicts are rel­a­tively rare. Fortunately they usu­ally re­veal them­selves as com­pi­la­tion er­rors, mak­ing them im­pos­si­ble to over­look. Unfortunately, when they hap­pen, they do de­mand some man­ual work af­ter run­ning go fix.

Let’s now delve into the in­fra­struc­ture be­neath these tools.

Since the earliest days of Go, the go command has had two subcommands for static analysis, go vet and go fix, each with its own suite of algorithms: "checkers" and "fixers". A checker reports likely mistakes in your code, such as passing a string instead of an integer as the operand of a fmt.Printf("%d") conversion. A fixer safely edits your code to fix a bug or to express the same thing in a better way, perhaps more clearly, concisely, or efficiently. Sometimes the same algorithm appears in both suites when it can both report a mistake and safely fix it.

In 2017 we redesigned the then-monolithic go vet program to separate the checker algorithms (now called "analyzers") from the "driver", the program that runs them; the result was the Go analysis framework. This separation enables an analyzer to be written once and then run in a diverse range of drivers for different environments, such as:

* unitchecker, which turns a suite of an­a­lyz­ers into a sub­com­mand that can be run by the go com­mand’s scal­able in­cre­men­tal build sys­tem, anal­o­gous to a com­piler in go build. This is the ba­sis of go fix and go vet.

* nogo, the anal­o­gous dri­ver for al­ter­na­tive build sys­tems such as Bazel and Blaze.

* sin­glechecker, which turns an an­a­lyzer into a stand­alone com­mand that loads, parses, and type-checks a set of pack­ages (perhaps a whole pro­gram) and then an­a­lyzes them. We of­ten use it for ad hoc ex­per­i­ments and mea­sure­ments over the mod­ule mir­ror (proxy.golang.org) cor­pus.

* multichecker, which does the same thing for a suite of analyzers with a "swiss-army knife" CLI.

* go­pls, the lan­guage server be­hind VS Code and other ed­i­tors, which pro­vides real-time di­ag­nos­tics from an­a­lyz­ers af­ter each ed­i­tor key­stroke.

* the highly con­fig­urable dri­ver used by the sta­t­ic­check tool. (Staticcheck also pro­vides a large suite of an­a­lyz­ers that can be run in other dri­vers.)

* Tricorder, the batch sta­tic analy­sis pipeline used by Google’s monorepo and in­te­grated with its code re­view sys­tem.

* gopls' MCP server, which makes diagnostics available to LLM-based coding agents, providing more robust "guardrails".

One ben­e­fit of the frame­work is its abil­ity to ex­press helper an­a­lyz­ers that don’t re­port di­ag­nos­tics or sug­gest fixes of their own but in­stead com­pute some in­ter­me­di­ate data struc­ture that may be use­ful to many other an­a­lyz­ers, amor­tiz­ing the costs of its con­struc­tion. Examples in­clude con­trol-flow graphs, the SSA rep­re­sen­ta­tion of func­tion bod­ies, and data struc­tures for op­ti­mized AST nav­i­ga­tion.

Another benefit of the framework is its support for making deductions across packages. An analyzer can attach a "fact" to a function or other symbol so that information learned while analyzing the function's body can be used when later analyzing a call to the function, even if the call appears in another package or the later analysis occurs in a different process. This makes it easy to define scalable interprocedural analyses. For example, the printf checker can tell when a function such as log.Printf is really just a wrapper around fmt.Printf, so it knows that calls to log.Printf should be checked in a similar manner. This process works by induction, so the tool will also check calls to further wrappers around log.Printf, and so on. An example of an analyzer that makes heavy use of facts is Uber's nilaway, which reports potential mistakes resulting in nil pointer dereferences.
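As a sketch of the kind of wrapper the printf checker tracks, here are hypothetical Sprintf-style helpers (the names logf and warnf are invented for this example):

```go
package main

import "fmt"

// logf wraps fmt.Sprintf. The printf analyzer attaches a "fact" to logf
// recording that its first argument is a format string, so calls to logf,
// even from other packages, are checked like calls to fmt.Sprintf itself.
func logf(format string, args ...any) string {
	return fmt.Sprintf("app: "+format, args...)
}

// warnf wraps logf; by induction the analyzer checks it too.
func warnf(format string, args ...any) string {
	return logf("warning: "+format, args...)
}

func main() {
	fmt.Println(warnf("%d retries left", 3))
	// warnf("%d retries left", "three") would be flagged by vet:
	// %d with an argument of type string.
}
```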

The process of "separate analysis" in go fix is analogous to the process of separate compilation in go build. Just as the compiler builds packages starting from the bottom of the dependency graph and passes type information up to importing packages, the analysis framework works from the bottom of the dependency graph up, passing facts (and types) up to importing packages.

In 2019, as we started developing gopls, the language server for Go, we added the ability for an analyzer to suggest a fix when reporting a diagnostic. The printf analyzer, for example, offers to replace fmt.Printf(msg) with fmt.Printf("%s", msg) to avoid misformatting should the dynamic msg value contain a % symbol. This mechanism has become the basis for many of the quick fixes and refactoring features of gopls.

While all these de­vel­op­ments were hap­pen­ing to go vet, go fix re­mained stuck as it was back be­fore the Go com­pat­i­bil­ity promise, when early adopters of Go used it to main­tain their code dur­ing the rapid and some­times in­com­pat­i­ble evo­lu­tion of the lan­guage and li­braries.

The Go 1.26 re­lease brings the Go analy­sis frame­work to go fix. The go vet and go fix com­mands have con­verged and are now al­most iden­ti­cal in im­ple­men­ta­tion. The only dif­fer­ences be­tween them are the cri­te­ria for the suites of al­go­rithms they use, and what they do with com­puted di­ag­nos­tics. Go vet an­a­lyz­ers must de­tect likely mis­takes with low false pos­i­tives; their di­ag­nos­tics are re­ported to the user. Go fix an­a­lyz­ers must gen­er­ate fixes that are safe to ap­ply with­out re­gres­sion in cor­rect­ness, per­for­mance, or style; their di­ag­nos­tics may not be re­ported, but the fixes are di­rectly ap­plied. Aside from this dif­fer­ence of em­pha­sis, the task of de­vel­op­ing a fixer is no dif­fer­ent from that of de­vel­op­ing a checker.

As the num­ber of an­a­lyz­ers in go vet and go fix con­tin­ues to grow, we have been in­vest­ing in in­fra­struc­ture both to im­prove the per­for­mance of each an­a­lyzer and to make it eas­ier to write each new an­a­lyzer.

For example, most analyzers start by traversing the syntax trees of each file in the package looking for a particular kind of node such as a range statement or function literal. The existing inspector package makes this scan efficient by pre-computing a compact index of a complete traversal so that later traversals can quickly skip subtrees that don't contain any nodes of interest. Recently we extended it with the Cursor datatype to allow flexible and efficient navigation between nodes in all four cardinal directions—up, down, left, and right, similar to navigating the elements of an HTML DOM—making it easy and efficient to express a query such as "find each go statement that is the first statement of a loop body":

var curFile inspector.Cursor = …

// Find each go statement that is the first statement of a loop body.
for curGo := range curFile.Preorder((*ast.GoStmt)(nil)) {
    kind, index := curGo.ParentEdge()
    if kind == edge.BlockStmt_List && index == 0 {
        switch curGo.Parent().ParentEdgeKind() {
        case edge.ForStmt_Body, edge.RangeStmt_Body:
            // …
        }
    }
}

Many analyzers start by searching for calls to a specific function, such as fmt.Printf. Function calls are among the most numerous expressions in Go code, so rather than search every call expression and test whether it is a call to fmt.Printf, it is much more efficient to pre-compute an index of symbol references, which is done by typeindex and its helper analyzer. Then the calls to fmt.Printf can be enumerated directly, making the cost proportional to the number of calls instead of to the size of the package. For an analyzer such as hostport that seeks an infrequently used symbol (net.Dial), this can easily make it 1,000× faster.
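To see what such an index buys, here is a minimal stdlib-only sketch of the naive alternative: a full traversal of the syntax tree that tests every call expression. It matches fmt.Printf purely by syntax, whereas the real analyzers use type information and the inspector's precomputed index; the sketch only illustrates the O(package size) cost that the index avoids.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// countPrintfCalls naively visits every node of the parsed file and
// tests whether each call expression is syntactically fmt.Printf.
// A precomputed symbol index instead enumerates only the matching
// calls, making the cost proportional to their number.
func countPrintfCalls(src string) int {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "p.go", src, 0)
	if err != nil {
		panic(err)
	}
	count := 0
	ast.Inspect(file, func(n ast.Node) bool {
		call, ok := n.(*ast.CallExpr)
		if !ok {
			return true // not a call; keep descending
		}
		if sel, ok := call.Fun.(*ast.SelectorExpr); ok {
			if id, ok := sel.X.(*ast.Ident); ok && id.Name == "fmt" && sel.Sel.Name == "Printf" {
				count++
			}
		}
		return true
	})
	return count
}

func main() {
	src := `package p

import "fmt"

func f() {
	fmt.Printf("a %d\n", 1)
	fmt.Println("b")
	fmt.Printf("c %s\n", "x")
}`
	fmt.Println("fmt.Printf calls:", countPrintfCalls(src))
}
```

Every node of the file is visited even though only two are of interest; the indexed approach skips straight to them.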

Some other infrastructural improvements over the past year include:

* a dependency graph of the standard library that analyzers can consult to avoid introducing import cycles. For example, we can't introduce a call to strings.Cut in a package that is itself imported by strings.

...

Read the original on go.dev »

9 359 shares, 18 trendiness

Gentoo on Codeberg – Gentoo Linux

Gentoo now has a presence on Codeberg, and contributions can be submitted for the Gentoo repository mirror at https://codeberg.org/gentoo/gentoo as an alternative to GitHub. Eventually other git repositories will also become available under the Codeberg Gentoo organization. This is part of the gradual mirror migration away from GitHub, as already mentioned in the 2025 end-of-year review. Codeberg is a site based on Forgejo, maintained by a dedicated non-profit organization, and located in Berlin, Germany. Thanks to everyone who has helped make this move possible!

These mirrors exist for convenience of contribution; we continue to host our own repositories, just as we did while using GitHub mirrors.

If you wish to submit pull requests on Codeberg, it is recommended to use the AGit approach, as it is more space-efficient and does not require you to maintain a fork of gentoo.git on your own Codeberg profile. To set it up, clone the upstream URL and check out a branch locally:

Once you're ready to create your PR:

and the PR should be created automatically. To push additional commits, repeat the above command, making sure to use the same topic. If you wish to force-push updates (because you're amending commits), add "-o force-push=true" to the above command.
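The command listings themselves did not survive extraction; as a sketch, the AGit workflow described above typically looks like this on a Forgejo instance (the branch name and topic are illustrative, and the master target branch is an assumption):

```shell
# Clone the upstream mirror and work on a local branch
git clone https://codeberg.org/gentoo/gentoo.git
cd gentoo
git checkout -b my-fix

# ...commit your changes, then push for review. The topic
# identifies your PR; pushing again with the same topic
# updates it instead of opening a new one.
git push origin HEAD:refs/for/master -o topic=my-fix

# If you amended commits and need to force-push:
git push origin HEAD:refs/for/master -o topic=my-fix -o force-push=true
```

No fork is created on your profile; the server turns the push into a pull request directly.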

More documentation can be found on our wiki.

...

Read the original on www.gentoo.org »

10 337 shares, 22 trendiness

Zaneham/BarraCUDA: Open-source CUDA compiler targeting AMD GPUs (and more in the future!). Compiles .cu to GFX11 machine code.

An open-source CUDA compiler that targets AMD GPUs, with more architectures planned. Written in 15,000 lines of C99. Zero LLVM dependency. Compiles .cu files straight to GFX11 machine code and spits out ELF .hsaco binaries that AMD GPUs can actually run.

This is what happens when you look at NVIDIA's walled garden and think "how hard can it be?" The answer is: quite hard, actually, but I did it anyway.

Note: if you're here to test out my current Tenstorrent implementation, you'll have to clone that respective branch :-)

Takes CUDA C source code, the same .cu files you'd feed to nvcc, and compiles them to AMD RDNA 3 (gfx1100) binaries. No LLVM. No HIP translation layer. No "convert your CUDA to something else first." Just a lexer, a parser, an IR, and roughly 1,700 lines of hand-written instruction selection that would make a compiler textbook weep.

Every single encoding has been validated against llvm-objdump with zero decode failures. I didn't use LLVM to compile, but I did use it to check my homework.

# It's C99. It builds with gcc. There are no dependencies.

make

# That's it. No cmake. No autoconf. No 47-step build process.

# If this doesn't work, your gcc is broken, not the Makefile.

* A will to live (optional but recommended)

* LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.

# Compile to AMD GPU binary

./barracuda --amdgpu-bin kernel.cu -o kernel.hsaco

# Dump the IR (for debugging or curiosity)

./barracuda --ir kernel.cu

# Just parse and dump the AST

./barracuda --ast kernel.cu

# Run semantic analysis

./barracuda --sema kernel.cu

The following CUDA features compile to working GFX11 machine code:

* All C control flow: if/else, for, while, do-while, switch/case, goto/label

__global__ void vector_add(float *c, float *a, float *b, int n)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < n)
        c[idx] = a[idx] + b[idx];
}

All data structures use pre-allocated fixed-size arrays. No malloc in hot paths. No recursion. Bounded loops everywhere. The kind of code that would make JPL's coding standards committee nod approvingly before going back to landing things on Mars.

Being honest about limitations is important. Here's what's missing:

* unsigned as a bare type specifier (use unsigned int or just int)

* +=, -=, >>= and friends (compound assignment; spell it out for now)

None of these are architectural blockers. They're all "haven't got round to it yet" items.

* vector_add.cu - The "hello world" of GPU computing

* notgpt.cu - AI-generated CUDA with extremely sarcastic comments (tiled SGEMM, reductions, histograms, prefix scan, stencils, half precision, cooperative groups, and the "kitchen sink" kernel)

* canonical.cu - Canonical patterns from NVIDIA samples adapted for the parser

Fix the known gaps: compound assignment operators, bare unsigned, integer literal suffixes, const, parameter reassignment. These are all small parser/lowerer changes. The goal is to compile real-world .cu files without modifications.

The generated code works but isn't winning any benchmarks. Priorities:

* Better register allocation (currently linear scan; consider graph colouring)

The IR (BIR) is target-independent. The backend is cleanly separated. Adding a new target means writing a new isel + emit pair. Candidates:

* Tenstorrent - RISC-V based AI accelerators. Open ISA. Very different execution model (tile-based, not SIMT) but the IR maps well.

* Intel Arc - Xe architecture. Would give BarraCUDA coverage across all three major GPU vendors.

* RISC-V Vector Extension - For when GPUs are too mainstream and you want to run CUDA on a softcore.

If you're considering writing your own AMDGPU backend, here are the things that will ruin your afternoon:

* SOP1 prefix is 0xBE800000, not what you'd expect from the docs

* VOP3 VDST is at bits [7:0], not [15:8] like a sensible person would assume

* Null SADDR is 0x7C for global memory, 0xFC for scratch

* RDNA 3 is Wave32 by default, not Wave64 like GCN

* The ISA manual is 500 pages and contradicts itself at least twice

All 1,735 lines of amdgpu_emit.c are a testament to reading those pages so you don't have to.

Found a bug? Want to discuss the finer points of AMDGPU instruction encoding? Need someone to commiserate with about the state of GPU computing?

Open an issue if there's anything you want to discuss. Or don't. I'm not your mum.

Based in New Zealand, where it's already tomorrow and the GPUs are just as confused as everywhere else.

Apache 2.0. Do whatever you want. If this compiler somehow ends up in production, I'd love to hear about it, mostly so I can update my LinkedIn with something more interesting than "wrote a CUDA compiler for fun."

* Steven Muchnick for Advanced Compiler Design and Implementation. If this compiler does anything right, that book is why.

* Low Level for the Zero to Hero C course and the YouTube channel. That's where I learnt C.

* Abe Kornelis for being an amazing teacher. His work on the z390 Portable Mainframe Assembler project is well worth your time.

* To the people who've sent messages of kindness and critique, thank you from a forever student and a happy hobbyist.

* My Granny, Grandad, Nana and Baka. Love you x

He aha te mea nui o te ao. He tāngata, he tāngata, he tāngata.

What is the most important thing in the world? It is people, it is people, it is people.

...

Read the original on github.com »
