10 interesting stories served every morning and every evening.
Claude Sonnet 4.6 is our most capable Sonnet model yet. It’s a full upgrade of the model’s skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta. For those on our Free and Pro plans, Claude Sonnet 4.6 is now the default model in claude.ai and Claude Cowork. Pricing remains the same as Sonnet 4.5, starting at $3/$15 per million tokens.

Sonnet 4.6 brings much-improved coding skills to more of our users. Improvements in consistency, instruction following, and more have made developers with early access prefer Sonnet 4.6 to its predecessor by a wide margin. They often even prefer it to our smartest model from November 2025, Claude Opus 4.5.

Performance that would have previously required reaching for an Opus-class model—including on real-world, economically valuable office tasks—is now available with Sonnet 4.6. The model also shows a major improvement in computer use skills compared to prior Sonnet models.

As with every new Claude model, we’ve run extensive safety evaluations of Sonnet 4.6, which overall showed it to be as safe as, or safer than, our other recent Claude models. Our safety researchers concluded that Sonnet 4.6 has “a broadly warm, honest, prosocial, and at times funny character, very strong safety behaviors, and no signs of major concerns around high-stakes forms of misalignment.”

Almost every organization has software it can’t easily automate: specialized systems and tools built before modern interfaces like APIs existed. To have AI use such software, users would previously have had to build bespoke connectors. But a model that can use a computer the way a person does changes that equation.

In October 2024, we were the first to introduce a general-purpose computer-using model. At the time, we wrote that it was “still experimental—at times cumbersome and error-prone,” but we expected rapid improvement.
OSWorld, the standard benchmark for AI computer use, shows how far our models have come. It presents hundreds of tasks across real software (Chrome, LibreOffice, VS Code, and more) running on a simulated computer. There are no special APIs or purpose-built connectors; the model sees the computer and interacts with it in much the same way a person would: clicking a (virtual) mouse and typing on a (virtual) keyboard.

Across sixteen months, our Sonnet models have made steady gains on OSWorld. The improvements can also be seen beyond benchmarks: early Sonnet 4.6 users are seeing human-level capability in tasks like navigating a complex spreadsheet or filling out a multi-step web form, before pulling it all together across multiple browser tabs.

The model certainly still lags behind the most skilled humans at using computers. But the rate of progress is remarkable nonetheless. It means that computer use is much more useful for a range of work tasks—and that substantially more capable models are within reach.

Scores prior to Claude Sonnet 4.5 were measured on the original OSWorld; scores from Sonnet 4.5 onward use OSWorld-Verified. OSWorld-Verified (released July 2025) is an in-place upgrade of the original OSWorld benchmark, with updates to task quality, evaluation grading, and infrastructure.

At the same time, computer use poses risks: malicious actors can attempt to hijack the model by hiding instructions on websites in what’s known as a prompt injection attack. We’ve been working to improve our models’ resistance to prompt injections—our safety evaluations show that Sonnet 4.6 is a major improvement compared to its predecessor, Sonnet 4.5, and performs similarly to Opus 4.6. You can find out more about how to mitigate prompt injections and other safety concerns in our API docs.

Beyond computer use, Claude Sonnet 4.6 has improved on benchmarks across the board. It approaches Opus-level intelligence at a price point that makes it more practical for far more tasks.
You can find a full discussion of Sonnet 4.6’s capabilities and its safety-related behaviors in our system card; a summary and comparison to other recent models is below.

In Claude Code, our early testing found that users preferred Sonnet 4.6 over Sonnet 4.5 roughly 70% of the time. Users reported that it more effectively read the context before modifying code and consolidated shared logic rather than duplicating it. This made it less frustrating to use over long sessions than earlier models.

Users even preferred Sonnet 4.6 to Opus 4.5, our frontier model from November, 59% of the time. They rated Sonnet 4.6 as significantly less prone to overengineering and “laziness,” and meaningfully better at instruction following. They reported fewer false claims of success, fewer hallucinations, and more consistent follow-through on multi-step tasks.

Sonnet 4.6’s 1M token context window is enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request. More importantly, Sonnet 4.6 reasons effectively across all that context. This can make it much better at long-horizon planning. We saw this particularly clearly in the Vending-Bench Arena evaluation, which tests how well a model can run a (simulated) business over time—and which includes an element of competition, with different AI models facing off against each other to make the biggest profits.

Sonnet 4.6 developed an interesting new strategy: it invested heavily in capacity for the first ten simulated months, spending significantly more than its competitors, and then pivoted sharply to focus on profitability in the final stretch. The timing of this pivot helped it finish well ahead of the competition.

Sonnet 4.6 outperforms Sonnet 4.5 on Vending-Bench Arena by investing in capacity early, then pivoting to profitability in the final stretch.

Early customers also reported broad improvements, with frontend code and financial analysis standing out.
Customers independently described visual outputs from Sonnet 4.6 as notably more polished, with better layouts, animations, and design sensibility than those from previous models. Customers also needed fewer rounds of iteration to reach production-quality results.

Claude Sonnet 4.6 matches Opus 4.6 performance on OfficeQA, which measures how well a model can read enterprise documents (charts, PDFs, tables), pull the right facts, and reason from those facts. It’s a meaningful upgrade for document comprehension workloads.

The performance-to-cost ratio of Claude Sonnet 4.6 is extraordinary—it’s hard to overstate how fast Claude models have been evolving in recent months. Sonnet 4.6 outperforms on our orchestration evals, handles our most complex agentic workloads, and keeps improving the higher you push the effort settings.

Claude Sonnet 4.6 is a notable improvement over Sonnet 4.5 across the board, including long-horizon tasks and more difficult problems.

Out of the gate, Claude Sonnet 4.6 is already excelling at complex code fixes, especially when searching across large codebases is essential. For teams running agentic coding at scale, we’re seeing strong resolution rates and the kind of consistency developers need.

Claude Sonnet 4.6 has meaningfully closed the gap with Opus on bug detection, letting us run more reviewers in parallel, catch a wider variety of bugs, and do it all without increasing cost.

For the first time, Sonnet brings frontier-level reasoning in a smaller and more cost-effective form factor. It provides a viable alternative if you are a heavy Opus user.

Claude Sonnet 4.6 meaningfully improves the answer retrieval behind our core product—we saw a significant jump in answer match rate compared to Sonnet 4.5 in our Financial Services Benchmark, with better recall on the specific workflows our customers depend on.

Box evaluated how Claude Sonnet 4.6 performs when tested on deep reasoning and complex agentic tasks across real enterprise documents. It demonstrated significant improvements, outperforming Claude Sonnet 4.5 in heavy reasoning Q&A by 15 percentage points.

Claude Sonnet 4.6 hit 94% on our insurance benchmark, making it the highest-performing model we’ve tested for computer use. This kind of accuracy is mission-critical to workflows like submission intake and first notice of loss.

Claude Sonnet 4.6 delivers frontier-level results on complex app builds and bug-fixing. It’s becoming our go-to for the kind of deep codebase work that used to require more expensive models.

Claude Sonnet 4.6 produced the best iOS code we’ve tested for Rakuten AI. Better spec compliance, better architecture, and it reached for modern tooling we didn’t ask for, all in one shot. The results genuinely surprised us.
Sonnet 4.6 is a significant leap forward on reasoning through difficult tasks. We find it especially strong on branched and multi-step tasks like contract routing, conditional template selection, and CRM coordination—exactly where our customers need strong model sense and reliability.

We’ve been impressed by how accurately Claude Sonnet 4.6 handles complex computer use. It’s a clear improvement over anything else we’ve tested in our evals.

Claude Sonnet 4.6 has perfect design taste when building frontend pages and data reports, and it requires far less hand-holding to get there than anything we’ve tested before.

Claude Sonnet 4.6 was exceptionally responsive to direction — delivering precise figures and structured comparisons when asked, while also generating genuinely useful ideas on trial strategy and exhibit preparation.

On the Claude Developer Platform, Sonnet 4.6 supports both adaptive thinking and extended thinking, as well as context compaction in beta, which automatically summarizes older context as conversations approach limits, increasing effective context length.

On our API, Claude’s web search and fetch tools now automatically write and execute code to filter and process search results, keeping only relevant content in context—improving both response quality and token efficiency. Additionally, code execution, memory, programmatic tool calling, tool search, and tool use examples are now generally available.

Sonnet 4.6 offers strong performance at any thinking effort, even with extended thinking off.
As part of your migration from Sonnet 4.5, we recommend exploring across the spectrum to find the ideal balance of speed and reliable performance, depending on what you’re building.

We find that Opus 4.6 remains the strongest option for tasks that demand the deepest reasoning, such as codebase refactoring, coordinating multiple agents in a workflow, and problems where getting it just right is paramount.

For Claude in Excel users, our add-in now supports MCP connectors, letting Claude work with the other tools you use day-to-day, like S&P Global, LSEG, Daloopa, PitchBook, Moody’s, and FactSet. You can ask Claude to pull in context from outside your spreadsheet without ever leaving Excel. If you’ve already set up MCP connectors in Claude.ai, those same connections will work in Excel automatically. This is available on Pro, Max, Team, and Enterprise plans.

How to use Claude Sonnet 4.6

Claude Sonnet 4.6 is available now on all Claude plans, Claude Cowork, Claude Code, our API, and all major cloud platforms. We’ve also upgraded our free tier to Sonnet 4.6 by default—it now includes file creation, connectors, skills, and compaction.

If you’re a developer, you can get started quickly by using claude-sonnet-4-6 via the Claude API.
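For a concrete sense of what that looks like, here is a minimal request sketch using only the Python standard library. The endpoint, headers, and payload shape follow Anthropic's public Messages API documentation; the prompt text and max_tokens value are placeholders, and the API key is assumed to be in the ANTHROPIC_API_KEY environment variable.

```python
import json
import os
import urllib.request

# Minimal Messages API request with the new model id.
# Payload shape per Anthropic's public API docs; prompt is a placeholder.
payload = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize the trade-offs of a 1M token context window."}
    ],
}

req = urllib.request.Request(
    "https://api.anthropic.com/v1/messages",
    data=json.dumps(payload).encode(),
    headers={
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
)
# response = urllib.request.urlopen(req)  # uncomment to actually send the request
```

Swapping the model string is typically all a migration from Sonnet 4.5 requires; the request shape is unchanged.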
...
Read the original on www.anthropic.com »
A few days ago, people started tagging me on Bluesky and Hacker News about a diagram on Microsoft’s Learn portal. It looked… familiar.
In 2010, I wrote A successful Git branching model and created a diagram to go with it. I designed that diagram in Apple Keynote, at the time obsessing over the colors, the curves, and the layout until it clearly communicated how branches relate to each other over time. I also published the source file so others could build on it. That diagram has since spread everywhere: in books, talks, blog posts, team wikis, and YouTube videos. I never minded. That was the whole point: sharing knowledge and letting the internet take it by storm!
What I did not expect was for Microsoft, a trillion-dollar company, some 15+ years later, to apparently run it through an AI image generator and publish the result on their official Learn portal, without any credit or link back to the original.
The AI rip-off was not just ugly. It was careless, blatantly amateurish, and lacking any ambition, to put it gently. Unworthy of Microsoft. The carefully crafted visual language and layout of the original, the branch colors, the lane design, the dot and bubble alignment that made the original so readable—all of it had been muddled into a laughable form. Proper AI slop.
Arrows missing and pointing in the wrong direction, and the obvious “continvoucly morged” text quickly gave it away as a cheap AI artifact.
It had the rough shape of my diagram though. Enough actually so that people recognized the original in it and started calling Microsoft out on it and reaching out to me. That so many people were upset about this was really nice, honestly. That, and “continvoucly morged” was a very fun meme—thank you, internet! 😄
Oh god yes, Microsoft continvoucly morged my diagram there for sure 😬— Vincent Driessen (@nvie.com) 2026-02-16T20:55:54.762Z
Other than that, I find this whole thing mostly very saddening. Not because some company used my diagram. As I said, it’s been everywhere for 15 years and I’ve always been fine with that. What’s dispiriting is the (lack of) process and care: take someone’s carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn’t a case of being inspired by something and building on it. It’s the opposite of that. It’s taking something that worked and making it worse. Is there even a goal here beyond “generating content”?
What’s slightly worrying me is that this time around, the diagram was both well-known enough and obviously AI-slop-y enough that it was easy to spot as plagiarism. But we all know there will be more and more content like this that isn’t as well-known, or that will be mutated or disguised in more sophisticated ways, so that the plagiarism is no longer recognizable as such.
I don’t need much here. A simple link back and attribution to the original article would be a good start. I would also be interested in understanding how this Learn page at Microsoft came to be, what the goals were, what process led to the creation of this ugly asset, and how there was seemingly no proofreading of a document used as a learning resource by many developers.
...
Read the original on nvie.com »
Thousands of CEOs just admitted AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago
...
Read the original on fortune.com »
“Late Show” host Stephen Colbert said CBS did not air his Monday interview with Texas state Rep. James Talarico out of fear of the Federal Communications Commission.
Colbert kicked off Monday night’s show by almost immediately mentioning Talarico’s absence.
“He was supposed to be here, but we were told in no uncertain terms by our network’s lawyers, who called us directly, that we could not have him on the broadcast,” Colbert said. “Then, then I was told in some uncertain terms that not only could I not have him on, I could not mention me not having him on. And because my network clearly doesn’t want us to talk about this, let’s talk about this.”
“The Late Show” published the unaired interview with Talarico on YouTube. In the interview, Colbert and Talarico, who is running for the U.S. Senate, discuss the FCC crackdown, including opening a probe into ABC’s “The View,” after Talarico appeared on the show.
“I think that Donald Trump is worried that we’re about to flip Texas,” Talarico said, which was met with audience applause. “This is the party that ran against cancel culture, and now they’re trying to control what we watch, what we say, what we read. And this is the most dangerous kind of cancel culture, the kind that comes from the top.”
Talarico accused the Trump administration of “selling out the First Amendment to curry favor with corrupt politicians.”
“A threat to any of our First Amendment rights is a threat to all of our First Amendment rights.”
In an emailed statement, CBS said: “THE LATE SHOW was not prohibited by CBS from broadcasting the interview with Rep. James Talarico. The show was provided legal guidance that the broadcast could trigger the FCC equal-time rule for two other candidates, including Rep. Jasmine Crockett, and presented options for how the equal time for other candidates could be fulfilled. THE LATE SHOW decided to present the interview through its YouTube channel with on-air promotion on the broadcast rather than potentially providing the equal-time options.”
In a statement, the White House defended the FCC. “Stephen Colbert is a pathetic trainwreck with no talent and terrible ratings, which is exactly why CBS canceled his show and is booting him off the airwaves,” White House spokesman Davis Ingle said. “FCC Chairman Brendan Carr is simply doing his job and enforcing the rules.”
At a political event Tuesday night in Austin, hundreds of supporters burst into applause and cheers when Talarico opened his rally by referring to the Colbert interview.
“Trump’s FCC colluded with corporate media executives at CBS to keep that interview off the air, and I think it’s safe to say that their plan backfired,” Talarico said to a packed event room on the first day of early voting ahead of the March 3 primary.
Talarico’s rival in the Texas Senate Democratic primary, Rep. Jasmine Crockett, appeared on Colbert’s show in May.
CBS’ move to not air the segment comes as the FCC, the government’s media regulator, and most notably its chairman, Brendan Carr, have been particularly combative with networks that have drawn the ire of the president.
Trump has for months suggested the FCC could revoke the licenses of television broadcasters. More recently, Carr, who was appointed by Trump to lead the FCC, has said that daytime and late-night TV talk shows must comply with the equal time rule regarding political candidates.
The FCC’s equal time rule prohibits radio and broadcast channels from hosting political candidates during an election without giving airtime to their opponents. During his show Monday, Colbert highlighted that news interviews and talk show interviews with politicians are exceptions.
On Jan. 21, Carr released a letter warning networks about the rule, saying that he is considering eliminating exceptions due to the networks’ potential partisan motivations.
Colbert fired back at Carr on Monday, accusing the chairman of being motivated by partisan purposes.
“Let’s just call this what it is: Donald Trump’s administration wants to silence anyone who says anything bad about Trump on TV because all Trump does is watch TV,” Colbert joked.
In a statement, FCC Commissioner Anna M. Gomez called Monday’s incident “another troubling example of corporate capitulation in the face of this Administration’s broader campaign to censor and control speech.”
“The FCC has no lawful authority to pressure broadcasters for political purposes or to create a climate that chills free expression,” Gomez, the lone Democratic commissioner, said in the statement. “CBS is fully protected under the First Amendment to determine what interviews it airs, which makes its decision to yield to political pressure all the more disappointing.”
This comes months after ABC pulled “Jimmy Kimmel Live!” off the air “indefinitely” after Carr criticized comments the host made about the assassinated conservative activist Charlie Kirk.
Kimmel accused “the MAGA Gang” of trying to “score political points” by characterizing the suspect “as anything other than one of them.”
Kimmel’s show was pulled a couple of days later and returned to the air after about a week.
In July, CBS announced that Colbert’s show would come to an end in 2026, calling it “purely a financial decision against a challenging backdrop in late night.”
“It is not related in any way to the show’s performance, content or other matters happening at Paramount,” CBS executives said in a joint statement.
The host has been a frequent critic of the Trump administration.
...
Read the original on www.nbcnews.com »
The best thing the show’s writers ever did was realize that Joe wasn’t the most interesting character. Subsequent seasons trace the dissolution of his complex, as he finds himself confronting the limits of his charisma and the consequences of his actions. It’s the death of the antihero, and in its place rises a show imbued with newfound life, as the burgeoning business partnership between its two main female characters becomes the central narrative.
Season 2’s opening sequence establishes this wonderfully energetic change of pace with a three-minute scene shot entirely in one take. The handheld camera swings and pans around a suburban home crammed with coders, construction tools and cables strewn across the ground. It’s a cinematographic manifestation of the crackling energy, messiness and all, between people taking a risk to create something new. Here, we meet Mutiny, Donna and Cameron’s video game subscription service that takes center stage in Season 2 and 3.
As the two navigate the passions and pitfalls of running a startup, the melodramatic tension of the first season is replaced with a palpable lightness and ambition. There are still plenty of great dramatic revelations and story beats, but none of it feels forced or in service of a half-baked antihero arc. The stakes feel genuine and emotionally potent.
The partnership between Donna and Cameron is largely the impetus for this. I can’t think of a better portrayal of female friendship on television than the one in this show. Rather than be defined by their relations to Joe and Gordon or by tropes like the working mother, they’re given agency and allowed to be flawed and ambitious and all the other things media has constantly told women not to be.
Cameron, who grew up learning how to survive on her own, opens up to collaborate and trust others — but there’s a constant fear of losing the company to which she’s dedicated her whole life. Donna, who has experienced the heartbreak of a failed product once before, comes into her own as a leader — but, by trying to always make the most logical decisions for the company, loses the partnership she needed most.
The progression of their friendship — the ways in which they support, hurt, and eventually forgive each other — is treated with such nuance, and it’s a genuinely moving relationship to watch unfold.
Their bond is just one of the many complex dynamics this show explores. As the show matures, so do its characters. Joe learns to understand the importance of those around him — that people are not only the means to an end, but the end itself. Gordon, so eager in earlier seasons to prove himself and be remembered for something, finds confidence and peace in the present, and leaves a legacy that will long reverberate in characters and viewers alike. As much as these characters grow and evolve, what remains at their core is what brought them together in the first place: a shared ambition to build something that makes a difference in the world.
...
Read the original on www.sceneandheardnu.com »
Tesla has reported five new crashes involving its “Robotaxi” fleet in Austin, Texas, bringing the total to 14 incidents since the service launched in June 2025. The newly filed NHTSA data also reveals that Tesla quietly upgraded one earlier crash to include a hospitalization injury, something the company never disclosed publicly.
The new data comes from the latest update to NHTSA’s Standing General Order (SGO) incident report database for automated driving systems (ADS). We have been tracking Tesla’s Robotaxi crash data closely, and the trend is not improving.
Tesla submitted five new crash reports in January 2026, covering incidents from December 2025 and January 2026. All five involved Model Y vehicles operating with the autonomous driving system “verified engaged” in Austin.
The new crashes include a collision with a fixed object at 17 mph while the vehicle was driving straight, a crash with a bus while the Tesla was stationary, a collision with a heavy truck at 4 mph, and two separate incidents where the Tesla backed into objects, one into a pole or tree at 1 mph and another into a fixed object at 2 mph.
As with every previous Tesla crash in the database, all five new incident narratives are fully redacted as “confidential business information.” Tesla remains the only ADS operator to systematically hide crash details from the public through NHTSA’s confidentiality provisions. Waymo, Zoox, and every other company in the database provide full narrative descriptions of their incidents.
Buried in the updated data is a revised report for a July 2025 crash (Report ID 13781-11375) that Tesla originally filed as “property damage only.” In December 2025, Tesla submitted a third version of that report upgrading the injury severity to “Minor W/ Hospitalization.”
This means someone involved in a Tesla “Robotaxi” crash required hospital treatment. The original crash involved a right turn collision with an SUV at 2 mph. Tesla’s delayed admission of hospitalization, five months after the incident, raises more questions about its crash reporting, which is already heavily redacted.
With 14 crashes now on the books, Tesla’s “Robotaxi” crash rate in Austin continues to deteriorate. Extrapolating from Tesla’s Q4 2025 earnings mileage data, which showed roughly 700,000 cumulative paid miles through November, the fleet likely reached around 800,000 miles by mid-January 2026. That works out to one crash every 57,000 miles.
The irony is that Tesla’s own numbers condemn it. Tesla’s Vehicle Safety Report claims the average American driver experiences a minor collision every 229,000 miles and a major collision every 699,000 miles. By Tesla’s own benchmark, its “Robotaxi” fleet is crashing nearly 4 times more often than what the company says is normal for a regular human driver in a minor collision. And virtually every single one of these miles was driven with a trained safety monitor in the vehicle who could intervene at any moment, meaning the monitors likely prevented additional crashes that Tesla’s system would not have avoided on its own.
Using NHTSA’s broader police-reported crash average of roughly one per 500,000 miles, the picture is even worse: Tesla’s fleet is crashing at approximately 8 times the human rate.
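The rate comparison above is simple arithmetic; for the curious, here is the back-of-the-envelope calculation in a few lines of Python. Note that the 800,000-mile figure is the article's own extrapolation, not an official Tesla number.

```python
# Back-of-the-envelope crash-rate math using the article's figures.
crashes = 14
est_miles = 800_000  # article's estimate of cumulative Robotaxi miles by mid-Jan 2026

miles_per_crash = est_miles / crashes  # roughly one crash every ~57,000 miles

# Benchmarks cited in the article (miles per collision):
tesla_minor = 229_000  # Tesla Vehicle Safety Report, minor collision
nhtsa_avg = 500_000    # NHTSA police-reported crash average (approximate)

print(f"miles per crash: {miles_per_crash:,.0f}")           # ~57,000
print(f"vs Tesla's minor-collision benchmark: {tesla_minor / miles_per_crash:.1f}x")
print(f"vs NHTSA average: {nhtsa_avg / miles_per_crash:.1f}x")
```

The ratios come out near 4x Tesla's own minor-collision benchmark and close to 9x the NHTSA average, consistent with the article's "nearly 4 times" and "approximately 8 times" characterizations.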
Meanwhile, Waymo has logged over 127 million fully driverless miles, with no safety driver, no monitor, no chase car, and independent research shows Waymo reduces injury-causing crashes by 80% and serious-injury crashes by 91% compared to human drivers. Waymo reports 51 incidents in Austin alone in this same NHTSA database, but its fleet has driven orders of magnitude more miles in the city than Tesla’s supervised “robotaxis.”
Here’s a full list of Tesla’s ADS crashes related to the Austin Robotaxi service:
We keep updating this story because the data keeps getting worse. Five more crashes, a quietly upgraded hospitalization, and total narrative redaction across the board, all from a company that claims its autonomous driving system is safer than humans.
Tesla fans and shareholders hold on to the thought that the company’s robotaxis are not at fault in some of these crashes. That is true, even though it is much harder to verify with Tesla redacting the narrative on every crash. But the problem remains: even by Tesla’s own benchmark, human drivers crash less often.
The 14 crashes over roughly 800,000 miles yield a crash rate of one crash every 57,000 miles. Tesla’s own safety data indicate that a typical human driver has a minor collision every 229,000 miles, whether or not they are at fault.
By the company’s own numbers, its “Robotaxi” fleet crashes nearly 4 times more often than a normal driver, and every single one of those miles had a safety monitor who could hit the kill switch. That is not a rounding error or an early-program hiccup. It is a fundamental performance gap.
What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database (Waymo, Zoox, Aurora, Nuro) provides detailed narratives explaining what happened in each crash. Tesla redacts everything. We cannot independently assess whether Tesla’s system was at fault, whether the safety monitor failed to intervene in time, or whether these were unavoidable situations caused by other road users. Tesla wants us to trust its safety record while making it impossible to verify.
The craziest part is that Tesla began offering rides without a safety monitor in Austin in late January 2026, just after it experienced 4 crashes in the first half of the month.
As we reported in our status check on the program yesterday, the service currently has roughly 42 active cars in Austin with below 20% availability, and the monitor-free rides are extremely limited and not running most of the time. Still, it’s worrisome that Tesla would even attempt this, knowing its crash rate remains higher than that of human drivers even with a safety monitor in the front passenger seat.
The fact that regulators are not getting involved tells you everything you need to know about the state of the US/Texas government right now.
...
Read the original on electrek.co »
Asteroids travel steadily, occasionally coming within observable distance. It has been a while since our last release, and now it’s finally here!
AsteroidOS 2.0 has arrived, bringing major features and improvements gathered during its journey through community space. Always-on-Display, expanded support for more watches, new launcher styles, customizable quick settings, significant performance increases in parts of the User Interface, and enhancements to our synchronization clients are just some highlights of what to expect.
New QuickPanel
The former QuickSettings top menu on the homescreen has been reworked into a highly customizable QuickPanel with many more settings toggles, app shortcuts, and a remorse-timer-driven power-off.
New App Launchers
Seven more App Launcher styles have been added. Those can be selected in the new Launcher settings page.
Enhanced Wallpaper and Watchface gallery
Watchfaces are now previewed with the user-selected wallpaper directly in the watchface gallery, helping you find your favourite combination at a glance. Both pages received major performance improvements.
Nightstand mode
Use your watch as a bedside clock, or simply show the charging status much more clearly. Selected watchfaces show a large charging status when power is connected.
The nightstand settings page makes this mode very versatile.
New background animation
Reworked design for a more organic feeling of “breathing”.
New wallpapers
Extending on the well received flatmesh design, triangulated wallpapers turned out to fit beautifully.
Diamonds
A 2048-like game with a fresh twist, nicely suited to small displays and resolutions.
Weather app design overhaul
Embracing the possibilities that Noto Sans and its vast variety of font styles offer, the weather app has been refined for better legibility and better presentation of very long place names.
Timer app redesign
The timer app now works in the background. It has been optimised for use on round watches, and its design is now consistent with the stopwatch.
Flashlight app
Yup, it flashes light. Most useful, so it got added to the stock selection.
Animated Bootsplash logo
A very small touch. But yet another possibility for designers to get involved.
Round screens with a flat-tyre shape are now supported.
Calculator app with new layout
Improved button layout for easier operation and better legibility, especially on round displays.
New UI elements and polished icons
Improved toggles, progress bars and other UI elements by unifying the design and removing inconsistencies.
More translations (49 languages)
More than 20 languages have been added since our last release, thanks to much-welcome community effort.
Noto Sans system font
Supporting the localisation efforts, Noto Sans offers consistent font styles for many languages with a custom character set.
Color Emoji
Underlining the flat nature of our UI, we moved from Noto Color Emoji to Twemoji.
Ringtone vibration pattern
Customise all the things! Here, the vibration pattern on incoming calls.
Optimized Rendering
Significant performance improvements have been made to the User Interface, resulting in smoother animations and transitions.
Battery Life Improvements
Various optimizations have been implemented to extend battery life during daily use.
Stability Fixes
Numerous bug fixes and stability improvements have been applied across the system.
Since 1.0 we added support for the following watches:
And partial support for the following watches:
Casio WSD-F10/F20 (koi, ayu) - bricking has been reported on some watches
Moto 360 1st gen (minnow) - underwhelming performance; it is the only watch we have ported with a TI SoC
Samsung Gear 2 (rinato) - too unstable, with poor power management
Samsung Gear Live (sprat) - unusable due to persistent display issues
We have created an “Experimental” category in our watch gallery for the above 5 watches since we do not consider those suitable for daily use. We will however continue to provide install images for these watches, and we welcome new contributors with fresh ideas to help improve support! We also continue to monitor supported watches and for example recently demoted the Sony Smartwatch 3 (tetra) due to unresolved hardware support issues.
The Samsung Gear 2 (rinato) is our first watch supported with a mainline Linux kernel and therefore without the use of libhybris. The Asus Zenwatch 2 (sparrow) also has very basic support for running on a mainline Linux kernel.
For a complete list of supported devices and installation instructions, please visit our installation guide.
Apart from adding new watches, the community has also been actively enhancing the support for our existing range of watches. Visit our newly created feature matrix page to find out about the detailed support level for your watch.
New Bluetooth lib should improve stability and simplify the pairing process
A more modular architecture, allowing for easier extending and maintainability of the app.
Thanks to Noodlez, initial AsteroidOS support has been added to Gadgetbridge version 0.73.0.
Jozef Mlich has added AsteroidOS support to Adam Pigg’s Amazfish. Initially developed for SailfishOS, Amazfish is now also available in a Kirigami flavour for Linux desktops.
After our initial release, StefWe created Telescope, a sync client for UBports.
This release would not have been possible without the dedicated efforts of our community contributors. We extend our heartfelt thanks to everyone who reported issues, submitted patches, and provided feedback during the development cycle.
Over the years, the AsteroidOS community has expanded its reach, with community translators adding over 20 languages to the AsteroidOS Weblate. Translating into your local language is the easiest way to get involved. Your help is most valuable to make AsteroidOS fit for use in your region.
Watchface creation has been a popular community activity lately. We are happy to present the new and comprehensive watchfaces creation and design guide. It is garnished with testing and deployment scripts to simplify the process further. Our community came up with funny and beautiful new watchfaces. Those are all collected in the unofficial watchfaces repository.
moWerk has contributed a variety of watchfaces. Two highlights are the minimalistic pulsedot and a classic Monty Python inspired silly walks watchface.
MagneFire showed off Doom, Super Tux Kart, gpSP, and other emulated games on his watch. The native 2048 port, called Diamonds, was recently included in the stock set of apps.
Dodoradio worked on a few unusual watches, like the LG Watch W7, whose physical hands have to be taken into account, and the Casio WSD-FXX series with its multifunctional secondary displays, along with some more conventional ports such as the Fossil Gen 5 and Polar M600. For watches with GPS, he contributed a Map app with waypoint functionality to the community repository. His initial version of the often-requested asteroid-health app is already capable of automatically tracking steps and heart rate with minimal impact on battery life.
Beroset implemented the new Nightstand mode. In addition to his efforts in maintaining the build tools, Beroset has also developed host-tools, which make it easier to work on watches from a Linux host. Furthermore, he has included a user-friendly GUI for deploying watchfaces and created asteroid-weatherfetch, a convenient app that downloads weather data using the watch’s IP connection.
PostmarketOS now offers our launcher and core apps, thanks to postmarketOS developer PureTryOut, who moved our buildsystem from qmake to cmake along the way.
The program lcd-tools by lecris and MagneFire was originally developed to control the secondary LCD on the TicWatch Pro. It was later extended by dodoradio and beroset to make use of many more features that the Casio secondary displays offer.
Our website asteroidos.org has seen a major content extension.
A FAQ section has been added to provide a quick overview of our project.
The Install page has grown over time into a gallery of supported watches, with 30 watches now listed. We renamed it to “Watches” and plan to evolve it into a purchase guide to help new users choose a supported watch. As a first step, the images of all watches were resized to correctly reflect their relative sizes, making it possible to compare their dimensions.
The Documentation pages are frequently updated by community members and nicely keep up with the current state of development. We recently moved them into a MediaWiki. This enables users without deeper knowledge to contribute to the documentation much more easily.
The creator of the unofficial Subreddit gave us full access, making it an official channel alongside our Mastodon account.
As we already mentioned in a previous blog post, we moved all our communication from freenode to Matrix and Libera.chat. You are invited to join the AsteroidOS Matrix channel using this link: https://matrix.to/#/#Asteroid:matrix.org
With 2.0 we introduce a community repository, to improve discoverability and simplify the installation of precompiled packages, while building the foundation for a possible graphical software center in the future. Currently, the repository consists of a few debugging tools, community watchfaces, games and emulators. Developers are welcome to create pull requests on the meta-community repo for packaging.
After moving our infrastructure to a larger server, we have seen an increase in the frequency of nightly releases. However, it is worth noting that completely rebuilding all packages for all 30 watch system images still takes almost a week, so the nightlies can be expected to be ready on weekends.
Interested in contributing to AsteroidOS? Whether you’re a developer, designer, or enthusiast, there are many ways to get involved:
Join our community forums to discuss ideas and share feedback.
Report issues or suggest features on our GitHub repository.
Help with translating AsteroidOS to your language using Weblate.
Contribute to the codebase by tackling open issues or developing new features.
Your participation helps make AsteroidOS better for everyone.
Ready to experience the latest features and improvements? Download AsteroidOS 2.0 from our official website and follow the installation instructions for your device.
Thank you for your continued support. We hope you enjoy AsteroidOS 2.0!
As you might have noticed, the current releases linked on the installation pages have feature parity with the 2.0 release. At some point, we decided to switch from our stable 1.0 release to a quasi-1.1 nightly rolling release, as the 1.0 release became too old to maintain. In the future, we would like to change our release cycle to offer more frequent stable releases: a stable release should always be stable, but never so old that it can no longer be maintained.
For the future, we are going to set up a roadmap for features we would like to see in an eventual next release. Based on recent early community work, we might see features like:
Written by the AsteroidOS Team on 17/02/2026
...
Read the original on asteroidos.org »
The 1.26 release of Go this month includes a completely rewritten go fix subcommand. Go fix uses a suite of algorithms to identify opportunities to improve your code, often by taking advantage of more modern features of the language and library. In this post, we’ll first show you how to use go fix to modernize your Go codebase. Then in the second section we’ll dive into the infrastructure behind it and how it is evolving. Finally, we’ll present the theme of “self-service” analysis tools to help module maintainers and organizations encode their own guidelines and best practices.
The go fix command, like go build and go vet, accepts a set of patterns that denote packages. This command fixes all packages beneath the current directory:
$ go fix ./...
On success, it silently updates your source files. It discards any fix that touches generated files since the appropriate fix in that case is to the logic of the generator itself. We recommend running go fix over your project each time you update your build to a newer Go toolchain release. Since the command may fix hundreds of files, start from a clean git state so that the change consists only of edits from go fix; your code reviewers will thank you.
To preview the changes the above command would have made, use the -diff flag:
$ go fix -diff ./...
--- dir/file.go (old)
+++ dir/file.go (new)
- eq := strings.IndexByte(pair, '=')
- result[pair[:eq]] = pair[1+eq:]
+ before, after, _ := strings.Cut(pair, "=")
+ result[before] = after
You can list the available fixers by running this command:
$ go tool fix help
Registered analyzers:
any replace interface{} with any
buildtag check //go:build and // +build directives
fmtappendf replace []byte(fmt.Sprintf) with fmt.Appendf
forvar remove redundant re-declaration of loop variables
hostport check format of addresses passed to net.Dial
inline apply fixes based on ‘go:fix inline’ comment directives
mapsloop replace explicit loops over maps with calls to maps package
minmax replace if/else statements with calls to min or max
Adding the name of a particular analyzer shows its complete documentation:
$ go tool fix help forvar
forvar: remove redundant re-declaration of loop variables
The forvar analyzer removes unnecessary shadowing of loop variables.
Before Go 1.22, it was common to write `for _, x := range s { x := x … }`
to create a fresh variable for each iteration. Go 1.22 changed the semantics
of `for` loops, making this pattern redundant. This analyzer removes the
unnecessary `x := x` statement.
This fix only applies to `range` loops.
By default, the go fix command runs all analyzers. When fixing a large project it may reduce the burden of code review if you apply fixes from the most prolific analyzers as separate code changes. To enable only specific analyzers, use the flags matching their names. For example, to run just the any fixer, specify the -any flag. Conversely, to run all the analyzers except selected ones, negate the flags, for instance -any=false.
As with go build and go vet, each run of the go fix command analyzes only a specific build configuration. If your project makes heavy use of files tagged for different CPUs or platforms, you may wish to run the command more than once with different values of GOARCH and GOOS for better coverage:
$ GOOS=linux GOARCH=amd64 go fix ./...
$ GOOS=darwin GOARCH=arm64 go fix ./...
$ GOOS=windows GOARCH=amd64 go fix ./...
Running the command more than once also provides opportunities for synergistic fixes, as we’ll see below.
The introduction of generics in Go 1.18 marked the end of an era of very few changes to the language spec and the start of a period of more rapid—though still careful—change, especially in the libraries. Many of the trivial loops that Go programmers routinely write, such as to gather the keys of a map into a slice, can now be conveniently expressed as a call to a generic function such as maps.Keys. Consequently these new features create many opportunities to simplify existing code.
In December 2024, during the frenzied adoption of LLM coding assistants, we became aware that such tools tended—unsurprisingly—to produce Go code in a style similar to the mass of Go code used during training, even when there were newer, better ways to express the same idea. Less obviously, the same tools often refused to use the newer ways even when directed to do so in general terms such as “always use the latest idioms of Go 1.25.” In some cases, even when explicitly told to use a feature, the model would deny that it existed. (See my 2025 GopherCon talk for more exasperating details.) To ensure that future models are trained on the latest idioms, we need to ensure that these idioms are reflected in the training data, which is to say the global corpus of open-source Go code.
Over the past year, we have built dozens of analyzers to identify opportunities for modernization. Here are three examples of the fixes they suggest:
minmax replaces an if statement by a use of Go 1.21’s min or max functions:
rangeint replaces a 3-clause for loop by a Go 1.22 range-over-int loop:
stringscut (whose -diff output we saw earlier) replaces uses of strings.Index and slicing by Go 1.18’s strings.Cut:
These modernizers are included in gopls, to provide instant feedback as you type, and in go fix, so that you can modernize several entire packages at once in a single command. In addition to making code clearer, modernizers may help Go programmers learn about newer features. As part of the process of approving each new change to the language and standard library, the proposal review group now considers whether it should be accompanied by a modernizer. We expect to add more modernizers with each release.
Go 1.26 includes a small but widely useful change to the language specification. The built-in new function creates a new variable and returns its address. Historically, its sole argument was required to be a type, such as new(string), and the new variable was initialized to its “zero” value, such as “”. In Go 1.26, the new function may be called with any value, causing it to create a variable initialized to that value, avoiding the need for an additional statement. For example:
This feature filled a gap that had been discussed for over a decade and resolved one of the most popular proposals for a change to the language. It is especially convenient in code that uses a pointer type *T to indicate an optional value of type T, as is common when working with serialization packages such as json.Marshal or protocol buffers. This is such a common pattern that people often capture it in a helper, such as the newInt function below, saving the caller from the need to break out of an expression context to introduce additional statements:
type RequestJSON struct {
    URL      string
    Attempts *int // (optional)
}

data, err := json.Marshal(&RequestJSON{
    URL:      url,
    Attempts: newInt(10),
})

func newInt(x int) *int { return &x }
Helpers such as newInt are so frequently needed with protocol buffers that the proto API itself provides them as proto.Int64, proto.String, and so on. But Go 1.26 makes all these helpers unnecessary:
data, err := json.Marshal(&RequestJSON{
    URL:      url,
    Attempts: new(10),
})
To help you take advantage of this feature, the go fix command now includes a fixer, newexpr, that recognizes “new-like” functions such as newInt and suggests fixes to replace the function body with return new(x) and to replace every call, whether in the same package or an importing package, with a direct use of new(expr).
To avoid introducing premature uses of new features, modernizers offer fixes only in files that require at least the minimum appropriate version of Go (1.26 in this instance), either through a go 1.26 directive in the enclosing go.mod file or a //go:build go1.26 build constraint in the file itself.
Run this command to update all calls of this form in your source tree:
$ go fix -newexpr ./...
At this point, with luck, all of your newInt-like helper functions will have become unused and may be safely deleted (assuming they aren’t part of a stable published API). A few calls may remain where it would be unsafe to suggest a fix, such as when the name new is locally shadowed by another declaration. You can also use the deadcode command to help identify unused functions.
Applying one modernization may create opportunities to apply another. For example, this snippet of code, which clamps x to the range 0–100, causes the minmax modernizer to suggest a fix to use max. Once that fix is applied it suggests a second fix, this time to use min.
Synergies may also occur between different analyzers. For example, a common mistake is to repeatedly concatenate strings within a loop, resulting in quadratic time complexity—a bug and a potential vector for a denial-of-service attack. The stringsbuilder modernizer recognizes the problem and suggests using Go 1.10’s strings.Builder:
Once this fix is applied, a second analyzer may recognize that the WriteString and Sprintf operations can be combined as fmt.Fprintf(&s, "%02x", b), which is both cleaner and more efficient, and offer a second fix. (This second analyzer is QF1012 from Dominik Honnef’s staticcheck, which is already enabled in gopls but not yet in go fix, though we plan to add staticcheck analyzers to the go command starting in Go 1.27.)
Consequently, it may be worth running go fix more than once until it reaches a fixed point; twice is usually enough.
A single run of go fix may apply dozens of fixes within the same source file. All fixes are conceptually independent, analogous to a set of git commits with the same parent. The go fix command uses a simple three-way merge algorithm to reconcile the fixes in sequence, analogous to the task of merging a set of git commits that edit the same file. If a fix conflicts with the list of edits accumulated so far, it is discarded, and the tool issues a warning that some fixes were skipped and that the tool should be run again.
This reliably detects syntactic conflicts arising from overlapping edits, but another class of conflict is possible: a semantic conflict occurs when two changes are textually independent but their meanings are incompatible. As an example consider two fixes that each remove the second-to-last use of a local variable: each fix is fine by itself, but when both are applied together the local variable becomes unused, and in Go that’s a compilation error. Neither fix is responsible for removing the variable declaration, but someone has to do it, and that someone is the user of go fix.
A similar semantic conflict arises when a set of fixes causes an import to become unused. Because this case is so common, the go fix command applies a final pass to detect unused imports and remove them automatically.
Semantic conflicts are relatively rare. Fortunately they usually reveal themselves as compilation errors, making them impossible to overlook. Unfortunately, when they happen, they do demand some manual work after running go fix.
Let’s now delve into the infrastructure beneath these tools.
Since the earliest days of Go, the go command has had two subcommands for static analysis, go vet and go fix, each with its own suite of algorithms: “checkers” and “fixers”. A checker reports likely mistakes in your code, such as passing a string instead of an integer as the operand of a fmt.Printf("%d") conversion. A fixer safely edits your code to fix a bug or to express the same thing in a better way, perhaps more clearly, concisely, or efficiently. Sometimes the same algorithm appears in both suites when it can both report a mistake and safely fix it.
In 2017 we redesigned the then-monolithic go vet program to separate the checker algorithms (now called “analyzers”) from the “driver”, the program that runs them; the result was the Go analysis framework. This separation enables an analyzer to be written once then run in a diverse range of drivers for different environments, such as:
* unitchecker, which turns a suite of analyzers into a subcommand that can be run by the go command’s scalable incremental build system, analogous to a compiler in go build. This is the basis of go fix and go vet.
* nogo, the analogous driver for alternative build systems such as Bazel and Blaze.
* singlechecker, which turns an analyzer into a standalone command that loads, parses, and type-checks a set of packages (perhaps a whole program) and then analyzes them. We often use it for ad hoc experiments and measurements over the module mirror (proxy.golang.org) corpus.
* multichecker, which does the same thing for a suite of analyzers with a ‘swiss-army knife’ CLI.
* gopls, the language server behind VS Code and other editors, which provides real-time diagnostics from analyzers after each editor keystroke.
* the highly configurable driver used by the staticcheck tool. (Staticcheck also provides a large suite of analyzers that can be run in other drivers.)
* Tricorder, the batch static analysis pipeline used by Google’s monorepo and integrated with its code review system.
* gopls’ MCP server, which makes diagnostics available to LLM-based coding agents, providing more robust “guardrails”.
One benefit of the framework is its ability to express helper analyzers that don’t report diagnostics or suggest fixes of their own but instead compute some intermediate data structure that may be useful to many other analyzers, amortizing the costs of its construction. Examples include control-flow graphs, the SSA representation of function bodies, and data structures for optimized AST navigation.
Another benefit of the framework is its support for making deductions across packages. An analyzer can attach a “fact” to a function or other symbol so that information learned while analyzing the function’s body can be used when later analyzing a call to the function, even if the call appears in another package or the later analysis occurs in a different process. This makes it easy to define scalable interprocedural analyses. For example, the printf checker can tell when a function such as log.Printf is really just a wrapper around fmt.Printf, so it knows that calls to log.Printf should be checked in a similar manner. This process works by induction, so the tool will also check calls to further wrappers around log.Printf, and so on. An example of an analyzer that makes heavy use of facts is Uber’s nilaway, which reports potential mistakes resulting in nil pointer dereferences.
The process of “separate analysis” in go fix is analogous to the process of separate compilation in go build. Just as the compiler builds packages starting from the bottom of the dependency graph and passing type information up to importing packages, the analysis framework works from the bottom of the dependency graph up, passing facts (and types) up to importing packages.
In 2019, as we started developing gopls, the language server for Go, we added the ability for an analyzer to suggest a fix when reporting a diagnostic. The printf analyzer, for example, offers to replace fmt.Printf(msg) with fmt.Printf("%s", msg) to avoid misformatting should the dynamic msg value contain a % symbol. This mechanism has become the basis for many of the quick fixes and refactoring features of gopls.
While all these developments were happening to go vet, go fix remained stuck as it was back before the Go compatibility promise, when early adopters of Go used it to maintain their code during the rapid and sometimes incompatible evolution of the language and libraries.
The Go 1.26 release brings the Go analysis framework to go fix. The go vet and go fix commands have converged and are now almost identical in implementation. The only differences between them are the criteria for the suites of algorithms they use, and what they do with computed diagnostics. Go vet analyzers must detect likely mistakes with low false positives; their diagnostics are reported to the user. Go fix analyzers must generate fixes that are safe to apply without regression in correctness, performance, or style; their diagnostics may not be reported, but the fixes are directly applied. Aside from this difference of emphasis, the task of developing a fixer is no different from that of developing a checker.
As the number of analyzers in go vet and go fix continues to grow, we have been investing in infrastructure both to improve the performance of each analyzer and to make it easier to write each new analyzer.
For example, most analyzers start by traversing the syntax trees of each file in the package looking for a particular kind of node such as a range statement or function literal. The existing inspector package makes this scan efficient by pre-computing a compact index of a complete traversal so that later traversals can quickly skip subtrees that don’t contain any nodes of interest. Recently we extended it with the Cursor datatype to allow flexible and efficient navigation between nodes in all four cardinal directions—up, down, left, and right, similar to navigating the elements of an HTML DOM—making it easy and efficient to express a query such as “find each go statement that is the first statement of a loop body”:
var curFile inspector.Cursor = ...

// Find each go statement that is the first statement of a loop body.
for curGo := range curFile.Preorder((*ast.GoStmt)(nil)) {
    kind, index := curGo.ParentEdge()
    if kind == edge.BlockStmt_List && index == 0 {
        switch curGo.Parent().ParentEdgeKind() {
        case edge.ForStmt_Body, edge.RangeStmt_Body:
            // ...found one
        }
    }
}
Many analyzers start by searching for calls to a specific function, such as fmt.Printf. Function calls are among the most numerous expressions in Go code, so rather than search every call expression and test whether it is a call to fmt.Printf, it is much more efficient to pre-compute an index of symbol references, which is done by typeindex and its helper analyzer. Then the calls to fmt.Printf can be enumerated directly, making the cost proportional to the number of calls instead of to the size of the package. For an analyzer such as hostport that seeks an infrequently used symbol (net.Dial), this can easily make it 1,000× faster.
Some other infrastructural improvements over the past year include:
* a dependency graph of the standard library that analyzers can consult to avoid introducing import cycles. For example, we can’t introduce a call to strings.Cut in a package that is itself imported by strings.
...
Gentoo now has a presence on Codeberg, and contributions can be submitted for the Gentoo repository mirror at https://codeberg.org/gentoo/gentoo as an alternative to GitHub. Eventually, other git repositories will also become available under the Codeberg Gentoo organization. This is part of the gradual mirror migration away from GitHub, as already mentioned in the 2025 end-of-year review. Codeberg is a site based on Forgejo, maintained by a dedicated non-profit organization, and located in Berlin, Germany. Thanks to everyone who has helped make this move possible!
These mirrors exist for the convenience of contributors; we continue to host our own repositories, just as we did while using GitHub mirrors for ease of contribution.
If you wish to submit pull requests on Codeberg, it is recommended to use the AGit approach as it is more space efficient and does not require you to maintain a fork of gentoo.git on your own Codeberg profile. To set it up, clone the upstream URL and check out a branch locally:
Once you’re ready to create your PR:
and the PR should be created automatically. To push additional commits, repeat the above command - be sure that the same topic is used. If you wish to force-push updates (because you’re amending commits), add “-o force-push=true” to the above command.
More documentation can be found on our wiki.
...
Read the original on www.gentoo.org »
An open-source CUDA compiler that targets AMD GPUs, with more architectures planned. Written in 15,000 lines of C99. Zero LLVM dependency. Compiles .cu files straight to GFX11 machine code and spits out ELF .hsaco binaries that AMD GPUs can actually run.
This is what happens when you look at NVIDIA’s walled garden and think “how hard can it be?” The answer is: quite hard, actually, but I did it anyway.
note: if you're here to test out my current Tenstorrent implementation, you'll have to clone that respective branch :-)
Takes CUDA C source code, the same .cu files you’d feed to nvcc, and compiles them to AMD RDNA 3 (gfx1100) binaries. No LLVM. No HIP translation layer. No “convert your CUDA to something else first.” Just a lexer, a parser, an IR, and roughly 1,700 lines of hand-written instruction selection that would make a compiler textbook weep.
Every single encoding has been validated against llvm-objdump with zero decode failures. I didn’t use LLVM to compile, but I did use it to check my homework.
# It’s C99. It builds with gcc. There are no dependencies.
make
# That’s it. No cmake. No autoconf. No 47-step build process.
# If this doesn’t work, your gcc is broken, not the Makefile.
* A will to live (optional but recommended)
* LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.
# Compile to AMD GPU binary
./barracuda --amdgpu-bin kernel.cu -o kernel.hsaco

# Dump the IR (for debugging or curiosity)
./barracuda --ir kernel.cu

# Just parse and dump the AST
./barracuda --ast kernel.cu

# Run semantic analysis
./barracuda --sema kernel.cu
The following CUDA features compile to working GFX11 machine code:
* All C control flow: if/else, for, while, do-while, switch/case, goto/label
__global__ void vector_add(float *c, float *a, float *b, int n)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < n)
        c[idx] = a[idx] + b[idx];
}
All data structures use pre-allocated fixed-size arrays. No malloc in hot paths. No recursion. Bounded loops everywhere. The kind of code that would make JPL’s coding standards committee nod approvingly before going back to landing things on Mars.
Being honest about limitations is important. Here’s what’s missing:
* unsigned as a bare type specifier (use unsigned int or just int)
* +=, -=, >>= and friends (compound assignment, spell it out for now)
None of these are architectural blockers. They’re all “haven’t got round to it yet” items.
* vector_add.cu - The “hello world” of GPU computing
* notgpt.cu - AI-generated CUDA with extremely sarcastic comments (tiled SGEMM, reductions, histograms, prefix scan, stencils, half precision, cooperative groups, and the “kitchen sink” kernel)
* canonical.cu - Canonical patterns from NVIDIA samples adapted for the parser
Fix the known gaps: compound assignment operators, bare unsigned, integer literal suffixes, const, parameter reassignment. These are all small parser/lowerer changes. The goal is to compile real-world .cu files without modifications.
The generated code works but isn’t winning any benchmarks. Priorities:
* Better register allocation (currently linear scan, consider graph colouring)
The IR (BIR) is target-independent. The backend is cleanly separated. Adding a new target means writing a new isel + emit pair. Candidates:
* Tenstorrent - RISC-V based AI accelerators. Open ISA. Very different execution model (tile-based, not SIMT) but the IR maps well.
* Intel Arc - Xe architecture. Would give BarraCUDA coverage across all three major GPU vendors.
* RISC-V Vector Extension - For when GPUs are too mainstream and you want to run CUDA on a softcore.
If you’re considering writing your own AMDGPU backend, here are the things that will ruin your afternoon:
* SOP1 prefix is 0xBE800000, not what you’d expect from the docs
* VOP3 VDST is at bits [7:0], not [15:8] like a sensible person would assume
* Null SADDR is 0x7C for global memory, 0xFC for scratch
* RDNA 3 is Wave32 by default, not Wave64 like GCN
* The ISA manual is 500 pages and contradicts itself at least twice
All 1,735 lines of amdgpu_emit.c are a testament to reading those pages so you don’t have to.
Found a bug? Want to discuss the finer points of AMDGPU instruction encoding? Need someone to commiserate with about the state of GPU computing?
Open an issue if there's anything you want to discuss. Or don't. I'm not your mum.
Based in New Zealand, where it’s already tomorrow and the GPUs are just as confused as everywhere else.
Apache 2.0. Do whatever you want. If this compiler somehow ends up in production, I’d love to hear about it, mostly so I can update my LinkedIn with something more interesting than wrote a CUDA compiler for fun.
* Steven Muchnick for Advanced Compiler Design and Implementation. If this compiler does anything right, that book is why.
* Low Level for the Zero to Hero C course and the YouTube channel. That’s where I learnt C.
* Abe Kornelis for being an amazing teacher. His work on the z390 Portable Mainframe Assembler project is well worth your time.
* To the people who’ve sent messages of kindness and critique, thank you from a forever student and a happy hobbyist.
* My Granny, Grandad, Nana and Baka. Love you x
He aha te mea nui o te ao. He tāngata, he tāngata, he tāngata.
What is the most important thing in the world? It is people, it is people, it is people.