10 interesting stories served every morning and every evening.
The topic of the Rust experiment was just discussed at the annual
Maintainers Summit. The consensus among the assembled developers is that
Rust in the kernel is no longer experimental — it is now a core part of the
kernel and is here to stay. So the “experimental” tag will be coming off.
Congratulations are in order for all of the Rust for Linux team.
(Stay tuned for details in our Maintainers Summit coverage.)
...
Read the original on lwn.net »
I complained about this on the socials, but I didn’t get it all out of my system. So now I write a blog post.
I’ve never liked the philosophy of “put an icon in every menu item by default”.
Google Sheets, for example, does this. Go to “File” or “Edit” or “View” and you’ll see a menu with a list of options, every single one having an icon (same thing with the right-click context menu).
It’s extra noise to me. It’s not that I think menu items should never have icons. I think they can be incredibly useful (more on that below). It’s more that I don’t like the idea of “give each menu item an icon” being the default approach.
This posture lends itself to a practice where designers have an attitude of “I need an icon to fill up this space” instead of an attitude of “Does the addition of an icon here, and the cognitive load of parsing and understanding it, help or hurt how someone would use this menu system?”
The former doesn’t require thinking. It’s just templating — they all have icons, so we need to put something there. The latter requires care and thoughtfulness for each use case and its context.
To defend my point, one of the examples I always pointed to was macOS. For the longest time, Apple’s OS-level menus seemed to avoid this default approach of sticking icons in every menu item.
That is, until macOS Tahoe shipped.
Tahoe now has icons in menus everywhere. For example, here’s the Apple menu:
Let’s look at others. As I’m writing this I have Safari open. Let’s look at the “Safari” menu:
Hmm. Interesting. Ok so we’ve got an icon for like half the menu items. I wonder why some get icons and others don’t?
For example, the “Settings” menu item (third from the top) has an icon. But the other item in its grouping, “Privacy Report,” does not. I wonder why? Especially when Safari has an icon for Privacy Report. If you go to customize the toolbar, you’ll see it:
Hmm. Who knows? Let’s keep going.
Let’s look at the “File” menu in Safari:
Some groupings have icons and get inset, while other groupings don’t have icons and don’t get inset. Interesting…again I wonder what the rationale is here? How do you choose? It’s not clear to me.
Let’s keep going. Let’s go to the “View” menu:
Oh boy, now we’re really in it. Some of these menu items have the notion of a toggle (indicated by the checkmark), so now you’ve got all kinds of alignment things to deal with. The visual symbols are doubling up when there’s a toggle and an icon.
The “View” menu in Mail is a similar mix of:
You know what would be a fun game? Get a bunch of people in a room, show them menus where the textual labels are gone, and see who can get the most right.
In so many of these cases, I honestly can’t intuit why some menus have icons and others do not. What are so many of these icons affording me at the cost of extra visual and cognitive parsing? I don’t know.
To be fair, there are some menus where these visual symbols are incredibly useful. Take this menu from Finder:
The visual depiction of how those are going to align is actually incredibly useful because it’s way easier for my brain to parse the symbol and understand where the window is going to go than it is to read the text and imagine in my head what “Top Left” or “Bottom & Top” or “Quarters” will mean. But a visual symbol? I instantly get it!
Those are good icons in menus. I like those.
What I find really interesting about this change on Apple’s part is how it seemingly goes against their own previous human interface guidelines (as pointed out to me by Peter Gassner).
They have an entire section in their 2005 guidelines (and 1992 and 2020) titled “Using Symbols in Menus”:
See what it says? “There are a few standard symbols you can use to indicate additional information in menus… Don’t use other, arbitrary symbols in menus, because they add visual clutter and may confuse people.”
They even have an example of what not to do and guess what it looks like? A menu in macOS Tahoe.
It’s pretty obvious how I feel. I’m tired of all this visual noise in my menus.
And now that Apple has seemingly thrown in with the “stick an icon in every menu by default” crowd, it’s harder than ever for me to convince people otherwise. To persuade them: “Hey, unless you can articulate a really good reason to add this, maybe our default posture should be no icons in menus?”
So I guess this is the world I live in now. Icons in menus. Icons in menus everywhere.
...
Read the original on blog.jim-nielsen.com »
On September 14, 2015, our first publicly-trusted certificate went live. We were proud that we had issued a certificate that a significant majority of clients could accept, and had done it using automated software. Of course, in retrospect this was just the first of billions of certificates. Today, Let’s Encrypt is the largest certificate authority in the world in terms of certificates issued, the ACME protocol we helped create and standardize is integrated throughout the server ecosystem, and we’ve become a household name among system administrators. We’re closing in on protecting one billion web sites.
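For readers who haven’t seen that automation up close, here is a minimal sketch of the step an ACME client handles during an HTTP-01 challenge: serving a key authorization at a well-known URL so the CA can verify control of the domain. This is an illustration only, not Let’s Encrypt code, and the token values are placeholders a real client would obtain from the CA while processing an order.

```js
// Illustrative sketch of the HTTP-01 challenge step that ACME clients automate.
// The token and key authorization are placeholders; a real client receives
// them from the CA (e.g. Let's Encrypt) during the certificate order flow.
import http from 'node:http';

const challenges = new Map([
  ['PLACEHOLDER_TOKEN', 'PLACEHOLDER_TOKEN.PLACEHOLDER_ACCOUNT_KEY_THUMBPRINT'],
]);

http.createServer((req, res) => {
  const prefix = '/.well-known/acme-challenge/';
  if (req.url?.startsWith(prefix)) {
    const keyAuth = challenges.get(req.url.slice(prefix.length));
    if (keyAuth) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end(keyAuth); // the CA checks this to confirm control of the domain
      return;
    }
  }
  res.writeHead(404);
  res.end();
}).listen(80); // HTTP-01 validation is performed over port 80
```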
In 2023, we marked the tenth anniversary of the creation of our nonprofit, Internet Security Research Group, which continues to host Let’s Encrypt and other public benefit infrastructure projects. Now, in honor of the tenth anniversary of Let’s Encrypt’s public certificate issuance and the start of the general availability of our services, we’re looking back at a few milestones and factors that contributed to our success.
A conspicuous part of Let’s Encrypt’s history is how thoroughly our vision of scalability through automation has succeeded.
In March 2016, we issued our one millionth certificate. Just two years later, in September 2018, we were issuing a million certificates every day. In 2020 we reached a billion total certificates issued and as of late 2025 we’re frequently issuing ten million certificates per day. We’re now on track to reach a billion active sites, probably sometime in the coming year. (The “certificates issued” and “certificates active” metrics are quite different because our certificates regularly expire and get replaced.)
The steady growth of our issuance volume shows the strength of our architecture, the validity of our vision, and the great efforts of our engineering team to scale up our own infrastructure. It also reminds us of the confidence that the Internet community is placing in us, making the use of a Let’s Encrypt certificate a normal and, dare we say, boring choice. But I often point out that our ever-growing issuance volumes are only an indirect measure of value. What ultimately matters is improving the security of people’s use of the web, which, as far as Let’s Encrypt’s contribution goes, is not measured by issuance volumes so much as by the prevalence of HTTPS encryption. For that reason, we’ve always emphasized the graph of the percentage of encrypted connections that web users make (here represented by statistics from Firefox).
(These graphs are snapshots as of the date of this post; a dynamically updated version is found on our stats page.) Our biggest goal was to make a concrete, measurable security impact on the web by getting HTTPS connection prevalence to increase—and it’s worked. It took five years or so to get the global percentage from below 30% to around 80%, where it’s remained ever since. In the U.S. it has been close to 95% for a while now.
A good amount of the remaining unencrypted traffic probably comes from internal or private organizational sites (intranets), but other than that we don’t know much about it; this would be a great topic for Internet security researchers to look into.
We believe our present growth in certificate issuance volume is essentially coming from growth in the web as a whole. In other words, if we protect 20% more sites over some time period, it’s because the web itself grew by 20%.
We’ve blogged about most of Let’s Encrypt’s most significant milestones as they’ve happened, and I invite everyone in our community to look over those blog posts to see how far we’ve come. We’ve also published annual reports for the past seven years, which offer elegant and concise summaries of our work.
As I personally think back on the past decade, just a few of the many events that come to mind include:
Telling the world about the project in November 2014
Our one millionth certificate in March 2016, then our 100 millionth certificate in June 2017, and then our billionth certificate in 2020
Along the way, issuing one million certificates in a single day for the first time (in September 2018), driven in significant part by the Squarespace and Shopify Let’s Encrypt integrations
Just at the end of September 2025, we issued more than ten million certificates in a day for the first time.
We’ve also periodically rolled out new features such as internationalized domain name support (2016), wildcard support (2018), and short-lived and IP address (2025) certificates. We’re always working on more new features for the future.
There are many technical milestones like our database server upgrades in 2021, where we found we needed a serious server infrastructure boost because of the tremendous volumes of data we were dealing with. Similarly, our original infrastructure was using Gigabit Ethernet internally, and, with the growth of our issuance volume and logging, we found that our Gigabit Ethernet network eventually became too slow to synchronize database instances! (Today we’re using 25-gig Ethernet.) More recently, we’ve experimented with architectural upgrades to our ever-growing Certificate Transparency logs, and decided to go ahead with deploying those upgrades—to help us not just keep up with, but get ahead of, our continuing growth.
These kinds of growing pains and successful responses to them are nice to remember because they point to the inexorable increase in demands on our infrastructure as we’ve become a more and more essential part of the Internet. I’m proud of our technical teams which have handled those increased demands capably and professionally.
I also recall the ongoing work involved in making sure our certificates would be as widely accepted as possible, which has meant managing the original cross-signature from IdenTrust, and subsequently creating and propagating our own root CA certificates. This process has required PKI engineering, key ceremonies, root program interactions, documentation, and community support associated with certificate migrations. Most users never have reason to look behind the scenes at our chains of trust, but our engineers have kept them up to date as root and intermediate certificates have been replaced. We’ve engaged with the browser root programs at the CA/B Forum, the IETF, and other venues to help shape the web PKI as a technical leader.
As I wrote in 2020, our ideal of complete automation of the web PKI aims at a world where most site owners wouldn’t even need to think about certificates at all. We continue to get closer and closer to that world, which creates a risk that people will take us and our services for granted, as the details of certificate renewal occupy less of site operators’ mental energy. As I said at the time,
When your strategy as a nonprofit is to get out of the way, to offer services that people don’t need to think about, you’re running a real risk that you’ll eventually be taken for granted. There is a tension between wanting your work to be invisible and the need for recognition of its value. If people aren’t aware of how valuable our services are then we may not get the support we need to continue providing them.
I’m also grateful to our communications and fundraising staff who help make clear what we’re doing every day and how we’re making the Internet safer.
Our community continually recognizes our work in tangible ways by using our certificates—now by the tens of millions per day—and by sponsoring us.
We were honored to be recognized with awards including the 2022 Levchin Prize for Real-World Cryptography and the 2019 O’Reilly Open Source Award. In October of this year some of the individuals who got Let’s Encrypt started were honored to receive the IEEE Cybersecurity Award for Practice.
We documented the history, design, and goals of the project in an academic paper at the ACM CCS ’19 conference, which has subsequently been cited hundreds of times in academic research.
Ten years later, I’m still deeply grateful to the five initial sponsors that got Let’s Encrypt off the ground: Mozilla, EFF, Cisco, Akamai, and IdenTrust. When they committed significant resources to the project, it was just an ambitious idea. They saw the potential and believed in our team, and because of that we were able to build the service we operate today.
I’d like to particularly recognize IdenTrust, a PKI company that worked as a partner from the outset and enabled us to issue publicly-trusted certificates via a cross-signature from one of their roots. We would simply not have been able to launch our publicly-trusted certificate service without them. Back when I first told them that we were starting a new nonprofit certificate authority that would give away millions of certificates for free, there wasn’t any precedent for this arrangement, and there wasn’t necessarily much reason for IdenTrust to pay attention to our proposal. But the company really understood what we were trying to do and was willing to engage from the beginning. Ultimately, IdenTrust’s support made our original issuance model a reality.
I’m proud of what we have achieved with our staff, partners, and donors over the past ten years. I hope to be even more proud of the next ten years, as we use our strong footing to continue to pursue our mission to protect Internet users by lowering monetary, technological, and informational barriers to a more secure and privacy-respecting Internet.
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit. You can help us make the next ten years great as well by donating or becoming a sponsor.
...
Read the original on letsencrypt.org »
The students at America’s elite universities are supposed to be the smartest, most promising young people in the country. And yet, shocking percentages of them are claiming academic accommodations designed for students with learning disabilities.
In an article published this week in The Atlantic, education reporter Rose Horowitch lays out some shocking numbers. At Brown and Harvard, 20 percent of undergraduate students are disabled. At Amherst College, that’s 34 percent. At Stanford University, it’s a galling 38 percent. Most of these students are claiming mental health conditions and learning disabilities, like anxiety, depression, and ADHD.
Obviously, something is off here. The idea that some of the most elite, selective universities in America—schools that require 99th percentile SATs and sterling essays—would be educating large numbers of genuinely learning disabled students is clearly bogus. A student with real cognitive struggles is much more likely to end up in community college, or not in higher education at all, right?
The professors Horowitch interviewed largely back up this theory. “You hear ‘students with disabilities’ and it’s not kids in wheelchairs,” one professor told Horowitch. “It’s just not. It’s rich kids getting extra time on tests.” Talented students get to college, start struggling, and run for a diagnosis to avoid bad grades. Ironically, the very schools that cognitively challenged students are most likely to attend—community colleges—have far lower rates of disabled students, with only three to four percent of such students getting accommodations.
To be fair, some of the students receiving these accommodations do need them. But the current language of the Americans with Disabilities Act (ADA) allows students to get expansive accommodations with little more than a doctor’s note.
While some students are no doubt seeking these accommodations as semi-conscious cheaters, I think most genuinely identify with the mental health condition they’re using to get extra time on tests. Over the past few years, there’s been a rising push to see mental health and neurodevelopmental conditions not just as medical facts, but as identity markers. Will Lindstrom, the director of the Regents’ Center for Learning Disorders at the University of Georgia, told Horowitch that he sees a growing number of students with this perspective. “It’s almost like it’s part of their identity,” Lindstrom told her. “By the time we see them, they’re convinced they have a neurodevelopmental disorder.”
What’s driving this trend? Well, the way conditions like ADHD, autism, and anxiety get talked about online—the place where most young people first learn about these conditions—is probably a contributing factor. Online creators tend to paint a very broad picture of the conditions they describe. A quick scroll of TikTok reveals creators labeling everything from always wearing headphones, to being bad at managing your time, to doodling in class as a sign that someone may have a diagnosable condition. According to these videos, who isn’t disabled?
The result is a deeply distorted view of “normal.” If ever struggling to focus or experiencing boredom is a sign you have ADHD, the implication is that a “normal,” nondisabled person has essentially no problems. A “neurotypical” person, the thinking goes, can churn out a 15-page paper with no hint of procrastination, maintain perfect focus during a boring lecture, and never experience social anxiety or awkwardness. This view is bolstered by the current way many of these conditions are diagnosed. As Horowitch points out, when the latest edition of the DSM, the manual psychiatrists use to diagnose patients, was released in 2013, it significantly lowered the bar for an ADHD diagnosis. When the definition of these conditions is set so liberally, it’s easy to imagine a highly intelligent Stanford student becoming convinced that any sign of academic struggle proves they’re learning disabled, and any problems making friends are a sign they have autism.
Risk-aversion, too, seems like a compelling factor driving bright students to claim learning disabilities. Our nation’s most promising students are also its least assured. So afraid of failure—of bad grades, of a poorly-received essay—they take any sign of struggle as a diagnosable condition. A few decades ago, a student who entered college and found the material harder to master and their time less easily managed than in high school would have been seen as relatively normal. Now, every time she picks up her phone, a barrage of influencers is clamoring to tell her this is a sign she has ADHD. Discomfort and difficulty are no longer perceived as typical parts of growing up.
In this context, it’s easy to read the rise of academic accommodations among the nation’s most intelligent students as yet another manifestation of the risk-aversion endemic in the striving children of the upper middle class. For most of the elite-college students who receive them, academic accommodations are a protection against failure and self-doubt. Unnecessary accommodations are a two-front form of cheating—they give you an unjust leg-up on your fellow students, but they also allow you to cheat yourself out of genuine intellectual growth. If you mask learning deficiencies with extra time on tests, soothe social anxiety by forgoing presentations, and neglect time management skills with deadline extensions, you might forge a path to better grades. But you’ll also find yourself less capable of tackling the challenges of adult life.
...
Read the original on reason.com »
My name is Bruno Simon, and I’m a creative developer (mostly for the web).
This is my portfolio. Please drive around to learn more about me and discover the many secrets of this world.
Thank you for visiting my portfolio!
If you are curious about the stack and how I built it, here’s everything you need to know.
Three.js is the library I’m using to render this 3D world.
It was created by mr.doob (X, GitHub), followed by hundreds of awesome developers, one of whom is Sunag (X, GitHub), who added TSL, enabling the use of both WebGL and WebGPU and making this portfolio possible.
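For context, a minimal Three.js scene looks something like the sketch below. It is a generic illustration, not the portfolio’s actual code (which uses TSL and the WebGPU-capable renderer):

```js
// A minimal, generic Three.js scene: camera, renderer, one spinning cube.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A single cube standing in for the cars, roads, and secrets of the real world.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(cube);

renderer.setAnimationLoop(() => {
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```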
If you want to learn Three.js, I got you covered with this huge course.
It contains everything you need to start building awesome stuff with Three.js (and much more).
I’ve been making devlogs since the very start of this portfolio and you can find them on my YouTube channel.
Even though the portfolio is out, I’m still working on the last videos so that the series is complete.
The code is available on GitHub under MIT license. Even the Blender files are there, so have fun!
For security reasons, I’m not sharing the server code, but the portfolio works without it.
The music you hear was made especially for this portfolio by the awesome Kounine (Linktree).
The tracks are now under a CC0 license, meaning you can do whatever you want with them!
Download them here.
Come hang out with the community, show us your projects and ask us anything.
Contact me directly.
I have to warn you, I try to answer everyone, but it might take a while.
...
Read the original on bruno-simon.com »
Update 2025/02/04: Oracle asks the USPTO to dismiss our petition. Read more
Update 2024/11/22: We’ve filed a petition to cancel with the USPTO. Read more
You have long ago abandoned the JavaScript trademark, and it is causing widespread, unwarranted confusion and disruption.
JavaScript is the world’s most popular programming language, powering websites everywhere. Yet, few of the millions who program in it realize that JavaScript is a trademark you, Oracle, control. The disconnect is glaring: JavaScript has become a general-purpose term used by countless individuals and companies, independent of any Oracle product.
Oracle’s hold on the JavaScript trademark clearly fits the legal definition of trademark abandonment. A previous blog post addressed this issue, requesting that you, Oracle, release the trademark. Unsurprisingly, the request was met with silence. It is therefore time to take active steps in order to bring the JavaScript trademark into the public domain, where it belongs.
A mark shall be deemed to be “abandoned” if either of the following occurs:
(1) When its use has been discontinued with intent not to resume such use. Intent not to resume may be inferred from circumstances. Nonuse for 3 consecutive years shall be prima facie evidence of abandonment. “Use” of a mark means the bona fide use of such mark made in the ordinary course of trade, and not made merely to reserve a right in a mark.
(2) When any course of conduct of the owner, including acts of omission as well as commission, causes the mark to become the generic name for the goods or services on or in connection with which it is used or otherwise to lose its significance as a mark. Purchaser motivation shall not be a test for determining abandonment under this paragraph.
In the case of JavaScript, both criteria apply.
The JavaScript trademark is currently held by Oracle America, Inc. (US Serial Number: 75026640, US Registration Number: 2416017). How did this come to be?
In 1995, Netscape partnered with Sun Microsystems to create interactive websites. Brendan Eich famously spent only 10 days creating the first version of JavaScript, a dynamic programming language with a rough syntactic lineage from Sun’s Java language. As a result of this partnership, Sun held the JavaScript trademark. In 2009, Oracle acquired Sun Microsystems and the JavaScript trademark as a result.
The trademark is simply a relic of this acquisition. Neither Sun nor Oracle has ever built a product using the mark. Legal staff, year after year, have renewed the trademark without question. It’s likely that only a few within Oracle even know they possess the JavaScript trademark, and even if they do, they likely don’t understand the frustration it causes within the developer community.
Oracle has abandoned the JavaScript trademark through nonuse.
Oracle has never seriously offered a product called JavaScript. In the 1990s and early 2000s, Netscape Navigator, which supported JavaScript as a browser feature, was a key player. However, Netscape’s usage and influence faded by 2003, and the browser saw its final release in 2008. JavaScript, meanwhile, evolved into a widely used, independent programming language, embedded in multiple browsers, entirely separate from Oracle.
The most recent specimen, filed with the USPTO in 2019, references nodejs.org (a project created by Ryan Dahl, the author of this letter) and Oracle’s JavaScript Extension Toolkit (JET). But Node.js is not an Oracle product, and JET is merely a set of JavaScript libraries for Oracle services, particularly Oracle Cloud. There are millions of JavaScript libraries; JET is not special.
Oracle also offers GraalVM, a JVM that can execute JavaScript, among other languages. But GraalVM is far from a canonical JavaScript implementation; engines like V8, JavaScriptCore, and SpiderMonkey hold that role. GraalVM’s product page doesn’t even mention “JavaScript”; you must dig into the documentation to find its support.
Oracle’s use of JavaScript in GraalVM and JET does not reflect genuine use of the trademark. These weak connections do not satisfy the requirement for consistent, real-world use in trade.
A mark can also be considered abandoned if it becomes a generic term.
In 1996, Netscape announced a meeting of the ECMA International standards organization to standardize the JavaScript programming language. Sun (now Oracle) refused to give up the “JavaScript” mark for this use though, so it was decided that the language would be called “ECMAScript” instead. (Microsoft happily offered up “JScript”, but no-one else wanted that.) Brendan Eich, the creator of JavaScript and a co-signatory of this letter, wrote in 2006 that “ECMAScript was always an unwanted trade name that sounds like a skin disease.”
Ecma International formed TC39, a technical steering committee, which publishes ECMA-262, the specification for JavaScript. This committee includes participants from all major browsers, like Google’s Chrome, Apple’s Safari, and Mozilla’s Firefox, as well as representatives from server-side JavaScript runtimes like Node.js and Deno.
Oracle’s ownership of the JavaScript trademark only causes confusion. The term “JavaScript” is used freely by millions of developers, companies, and organizations around the world, with no interference from Oracle. Oracle has done nothing to assert its rights over the JavaScript name, likely because they do not believe their claim to the mark would hold up in court. Unlike typical trademark holders who protect their trademarks by extracting licensing fees or enforcing usage restrictions, Oracle has allowed the JavaScript name to be used by anyone. This inaction further supports the argument that the trademark has lost its significance and has become generic.
Programmers working with JavaScript have formed innumerable community organizations. These organizations, like the standards bodies, have been forced to painstakingly avoid naming the programming language they are built around—for example, JSConf. Sadly, without risking a legal trademark challenge against Oracle, there can be no “JavaScript Conference” nor a “JavaScript Specification.” The world’s most popular programming language cannot even have a conference in its name.
There is a vast misalignment between the trademark’s ownership and its widespread, generic use.
By law, a trademark is abandoned if it is either not used or becomes a generic term. Both apply to JavaScript.
It’s time for the USPTO to end the JavaScript trademark and recognize it as a generic name for the world’s most popular programming language, which has multiple implementations across the industry.
Oracle, you likely have no real business interest in the mark. It’s renewed simply because legal staff are obligated to renew all trademarks, regardless of their relevance or use.
We urge you to release the mark into the public domain. However, asking nicely has been tried before, and it was met with silence. If you do not act, we will challenge your ownership by filing a petition for cancellation with the USPTO.
...
Read the original on javascript.tm »
Today, we’re releasing Devstral 2—our next-generation coding model family available in two sizes: Devstral 2 (123B) and Devstral Small 2 (24B). Devstral 2 ships under a modified MIT license, while Devstral Small 2 uses Apache 2.0. Both are open-source and permissively licensed to accelerate distributed intelligence.
Devstral 2 is currently free to use via our API.
We are also introducing Mistral Vibe, a native CLI built for Devstral that enables end-to-end code automation.
Devstral 2: SOTA open model for code agents, achieving 72.2% on SWE-bench Verified with a fraction of the parameters of its competitors.
Up to 7x more cost-efficient than Claude Sonnet at real-world tasks.
Devstral Small 2: 24B parameter model available via API or deployable locally on consumer hardware.
Devstral 2 is a 123B-parameter dense transformer supporting a 256K context window. It reaches 72.2% on SWE-bench Verified—establishing it as one of the best open-weight models while remaining highly cost efficient. Released under a modified MIT license, Devstral sets the open state-of-the-art for code agents.
Devstral Small 2 scores 68.0% on SWE-bench Verified, and places firmly among models up to five times its size while being capable of running locally on consumer hardware.
Devstral 2 (123B) and Devstral Small 2 (24B) are 5x and 28x smaller than DeepSeek V3.2, and 8x and 41x smaller than Kimi K2—proving that compact models can match or exceed the performance of much larger competitors. Their reduced size makes deployment practical on limited hardware, lowering barriers for developers, small businesses, and hobbyists.
Devstral 2 supports exploring codebases and orchestrating changes across multiple files while maintaining architecture-level context. It tracks framework dependencies, detects failures, and retries with corrections—solving challenges like bug fixing and modernizing legacy systems.
The model can be fine-tuned to prioritize specific languages or optimize for large enterprise codebases.
We evaluated Devstral 2 against DeepSeek V3.2 and Claude Sonnet 4.5 using human evaluations conducted by an independent annotation provider, with tasks scaffolded through Cline. Devstral 2 shows a clear advantage over DeepSeek V3.2, with a 42.8% win rate versus 28.6% loss rate. However, Claude Sonnet 4.5 remains significantly preferred, indicating a gap with closed-source models persists.
“Devstral 2 is at the frontier of open-source coding models. In Cline, it delivers a tool-calling success rate on par with the best closed models; it’s a remarkably smooth driver. This is a massive contribution to the open-source ecosystem.” — Cline.
“Devstral 2 was one of our most successful stealth launches yet, surpassing 17B tokens in the first 24 hours. Mistral AI is moving at Kilo Speed with a cost-efficient model that truly works at scale.” — Kilo Code.
Devstral Small 2, a 24B-parameter model with the same 256K context window and released under Apache 2.0, brings these capabilities to a compact, locally deployable form. Its size enables fast inference, tight feedback loops, and easy customization—with fully private, on-device runtime. It also supports image inputs, and can power multimodal agents.
Mistral Vibe CLI is an open-source command-line coding assistant powered by Devstral. It explores, modifies, and executes changes across your codebase using natural language—in your terminal or integrated into your preferred IDE via the Agent Communication Protocol. It is released under the Apache 2.0 license.
Vibe CLI provides an interactive chat interface with tools for file manipulation, code searching, version control, and command execution. Key features:
Project-aware context: Automatically scans your file structure and Git status to provide relevant context
Smart references: Reference files with @ autocomplete, execute shell commands with !, and use slash commands for configuration changes
Multi-file orchestration: Understands your entire codebase—not just the file you’re editing—enabling architecture-level reasoning that can halve your PR cycle time
You can run Vibe CLI programmatically for scripting, toggle auto-approval for tool execution, configure local models and providers through a simple config.toml, and control tool permissions to match your workflow.
Devstral 2 is currently offered free via our API. After the free period, the API pricing will be $0.40/$2.00 per million tokens (input/output) for Devstral 2 and $0.10/$0.30 for Devstral Small 2.
We’ve partnered with leading open agent tools Kilo Code and Cline to bring Devstral 2 to where you already build.
Mistral Vibe CLI is available as an extension in Zed, so you can use it directly inside your IDE.
Devstral 2 is optimized for data center GPUs and requires a minimum of 4 H100-class GPUs for deployment. You can try it today on build.nvidia.com. Devstral Small 2 is built for single-GPU operation and runs across a broad range of NVIDIA systems, including DGX Spark and GeForce RTX. NVIDIA NIM support will be available soon.
Devstral Small runs on consumer-grade GPUs as well as CPU-only configurations with no dedicated GPU required.
For optimal performance, we recommend a temperature of 0.2 and following the best practices defined for Mistral Vibe CLI.
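As a rough illustration of that recommendation, here is a minimal sketch of a chat completions request against Mistral’s API with temperature 0.2. The model identifier is a placeholder, not a confirmed name; check Mistral’s documentation for the exact value.

```js
// Illustrative sketch (Node 18+ for built-in fetch): calling Devstral 2 via
// Mistral's chat completions API with the recommended temperature of 0.2.
// The model id "devstral-2" is a placeholder; consult the docs for the real name.
const response = await fetch('https://api.mistral.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'devstral-2',   // placeholder model id
    temperature: 0.2,      // recommended setting from the post
    messages: [
      { role: 'user', content: 'Fix the failing test in utils/date.test.ts' },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```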
We’re excited to see what you will build with Devstral 2, Devstral Small 2, and Vibe CLI!
Share your projects, questions, or discoveries with us on X/Twitter, Discord, or GitHub.
If you’re interested in shaping open-source research and building world-class interfaces that bring truly open, frontier AI to users, we welcome you to apply to join our team.
...
Read the original on mistral.ai »
...
Read the original on grapheneos.social »
While LLMs are adept at reading and can be terrific at editing, their writing is much more mixed. At best, writing from LLMs is hackneyed and cliché-ridden; at worst, it brims with tells that reveal that the prose is in fact automatically generated.
What’s so bad about this? First, to those who can recognize an LLM’s reveals (an expanding demographic!), it’s just embarrassing — it’s as if the writer is walking around with their intellectual fly open. But there are deeper problems: LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too? The reader can’t be sure — and increasingly, the hallmarks of LLM generation cause readers to turn off (or worse).
Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
If, however, prose is LLM-generated, this social contract becomes ripped up: a reader cannot assume that the writer understands their ideas because they might not have so much as read the product of the LLM that they tasked to write it. If one is lucky, these are LLM hallucinations: obviously wrong and quickly discarded. If one is unlucky, however, it will be a kind of LLM-induced cognitive dissonance: a puzzle in which pieces don’t fit because there is in fact no puzzle at all. This can leave a reader frustrated: why should they spend more time reading prose than the writer spent writing it?
This can be navigated, of course, but it is truly perilous: our writing is an important vessel for building trust — and that trust can be quickly eroded if we are not speaking with our own voice. For us at Oxide, there is a more mechanical reason to be jaundiced about using LLMs to write: because our hiring process very much selects for writers, we know that everyone at Oxide can write — and we have the luxury of demanding of ourselves the kind of writing that we know that we are all capable of.
So our guideline is to generally not use LLMs to write, but this shouldn’t be thought of as an absolute — and it doesn’t mean that an LLM can’t be used as part of the writing process. Just please: consider your responsibility to yourself, to your own ideas — and to the reader.
...
Read the original on rfd.shared.oxide.computer »