10 interesting stories served every morning and every evening.
Claude Opus 4.6 and Sonnet 4.6 now include the full 1M-token context window at standard pricing on the Claude Platform — $5/$25 per million tokens for Opus 4.6 and $3/$15 for Sonnet 4.6. There’s no long-context multiplier: a 900K-token request is billed at the same per-token rate as a 9K one.
* Full rate limits at every context length. Your standard account throughput applies across the entire window.
* 6x more media per request. Up to 600 images or PDF pages, up from 100. Available today on Claude Platform natively, Microsoft Azure Foundry, and Google Cloud’s Vertex AI.
* No beta header required. Requests over 200K tokens work automatically. If you’re already sending the beta header, it’s ignored, so no code changes are required.
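To make the flat-rate claim concrete, here is a small sketch that computes request cost from the announced per-million-token figures. The helper name and dictionary layout are mine, not an official SDK:

```python
# Sketch of the announced flat pricing; RATES holds the $/million-token
# figures quoted above. With no long-context multiplier, cost is
# strictly linear in token count.
RATES = {
    "opus-4.6": {"input": 5.00, "output": 25.00},
    "sonnet-4.6": {"input": 3.00, "output": 15.00},
}

def request_cost(model, input_tokens, output_tokens):
    """Return the dollar cost; the per-token rate is the same whether
    the request carries 9K or 900K tokens of context."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# A 900K-token Opus 4.6 prompt costs 900_000 * $5 / 1M = $4.50 in input tokens.
```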
1M context is now included in Claude Code for Max, Team, and Enterprise users on Opus 4.6. Opus 4.6 sessions use the full 1M context window automatically, meaning fewer compactions and more of the conversation kept intact. Previously, 1M context consumed extra usage.
A million tokens of context only matters if the model can recall the right details and reason across them. Opus 4.6 scores 78.3% on MRCR v2, the highest among frontier models at that context length.
That means you can load an entire codebase, thousands of pages of contracts, or the full trace of a long-running agent — tool calls, observations, intermediate reasoning — and use it directly. The engineering work, lossy summarization, and context clearing that long-context work previously required are no longer needed. The full conversation stays intact.
...
Read the original on claude.com »
A lightweight, open-source, fully local alternative to Logitech Options+ for remapping every programmable button on the Logitech MX Master 3S mouse.
No telemetry. No cloud. No Logitech account required.
* macOS support — full macOS compatibility added thanks to andrew-sz, using CGEventTap for mouse hooking, Quartz CGEvent for key simulation, and NSWorkspace for app detection. See macOS Setup Guide for details.
* Per-application profiles — automatically switch button mappings when you switch apps (e.g., different bindings for Chrome vs. VS Code)
* DPI / pointer speed control — slider from 200–8000 DPI with quick presets, synced to the device via HID++
* Auto-reconnection — automatically detects when the mouse is turned off/on or disconnected/reconnected and restores full functionality without restarting the app
* Live connection status — the UI shows a real-time “Connected” / “Not Connected” badge that updates as the mouse connects or disconnects
* System tray — runs in background, hides to tray on close, toggle remapping on/off from tray menu
* Zero external services — config is a local JSON file, all processing happens on your machine
The UI shows an interactive diagram of the MX Master 3S. Click any button’s hotspot dot to change its action.
Note: The architecture is designed to be extensible to other Logitech HID++ mice, but only the MX Master 3S is tested.
Extract the zip to any folder (Desktop, Documents, wherever you like)
That’s it — the app will open and start remapping your mouse buttons immediately.
* Closing the window doesn’t quit the app — it keeps running in the tray
* Windows SmartScreen may show a warning the first time → click More info → Run anyway
* Logitech Options+ must not be running (it conflicts with HID++ access)
# 1. Clone the repository
git clone https://github.com/TomBadash/MouseControl.git
cd MouseControl
# 2. Create a virtual environment
python -m venv .venv
# 3. Activate it
.venv\Scripts\activate # Windows (PowerShell / CMD)
source .venv/bin/activate # macOS / Linux
# 4. Install dependencies
pip install -r requirements.txt
# Option A: Run directly
python main_qml.py
# Option B: Use the batch file (shows a console window)
Mouser.bat
# Option C: Use the desktop shortcut (no console window)
# Double-click Mouser.lnk
Tip: To run without a console window, use pythonw.exe main_qml.py or the .lnk shortcut.
A Mouser.lnk shortcut is included. To create one manually:
$s = (New-Object -ComObject WScript.Shell).CreateShortcut("$([Environment]::GetFolderPath('Desktop'))\Mouser.lnk")
$s.TargetPath = "C:\path\to\mouser\.venv\Scripts\pythonw.exe"
$s.Arguments = "main_qml.py"
$s.WorkingDirectory = "C:\path\to\mouser"
$s.IconLocation = "C:\path\to\mouser\images\logo.ico, 0"
$s.Save()
To produce a standalone Mouser.exe that anyone can download and run without Python:
# 1. Install PyInstaller (inside your venv)
pip install pyinstaller
# 2. Build using the included spec file
pyinstaller Mouser.spec --noconfirm
# — or simply run the build script —
build.bat
The output is in dist\Mouser\. Zip that entire folder and distribute it.
A low-level Windows mouse hook (SetWindowsHookExW with WH_MOUSE_LL) runs on a dedicated background thread with its own Win32 message pump. It intercepts:
Intercepted events are either blocked (hook returns 1) and replaced with an action, or passed through to the application.
HID++ 2.0 (primary, Bluetooth) — Opens the Logitech HID collection, discovers REPROG_CONTROLS_V4 (feature 0x1B04), and diverts CID 0x00C3 (gesture button). Best reliability.
Raw Input (fallback) — Registers for raw mouse input and detects extra button bits beyond the standard 5.
Middle-click fallback — When gesture button has an action but middle-click is unassigned, middle-click events route to the gesture action.
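The routing described above can be sketched as a pure decision function. The names (`route`, the string constants) are mine; the real project presumably works on Win32 event structures rather than strings:

```python
GESTURE = "gesture"
MIDDLE = "middle"

def route(button, mapping):
    """Decide what the hook does with an intercepted button: return
    ("block", action) when the event is consumed and replaced with a
    mapped action (the LL hook would return 1), or ("pass", None) to
    hand it to CallNextHookEx unchanged."""
    action = mapping.get(button)
    # Middle-click fallback: if the gesture button has an action but
    # middle-click is unassigned, middle clicks fire the gesture action.
    if action is None and button == MIDDLE:
        action = mapping.get(GESTURE)
    return ("block", action) if action is not None else ("pass", None)
```

For example, `route("middle", {"gesture": "mission_control"})` blocks the click and fires the gesture action, while an unmapped button passes through untouched.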
Polls the foreground window every 300ms using GetForegroundWindow → GetWindowThreadProcessId → process name. Handles UWP apps by resolving ApplicationFrameHost.exe to the actual child process.
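A minimal sketch of that polling step, assuming Windows and `ctypes` (the UWP resolution rule is split into a pure helper; the real code also maps the PID to an executable name):

```python
import ctypes

def resolve_uwp(name, child_name=None):
    """UWP windows belong to ApplicationFrameHost.exe; the real app is
    the hosted child process, so resolve to that when it is known."""
    if name.lower() == "applicationframehost.exe" and child_name:
        return child_name
    return name

def foreground_pid():
    """Windows-only: GetForegroundWindow -> owning process id."""
    user32 = ctypes.windll.user32  # attribute exists only on Windows
    hwnd = user32.GetForegroundWindow()
    pid = ctypes.c_ulong(0)
    user32.GetWindowThreadProcessId(hwnd, ctypes.byref(pid))
    return pid.value  # a poller would turn this into a process name
```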
The central orchestrator. On app change, it performs a lightweight profile switch — clears and re-wires hook callbacks without tearing down the hook thread or HID++ connection. This avoids the latency and instability of a full hook restart.
* HID++ layer — HidGestureListener detects device disconnection (read errors) and enters a reconnect loop, retrying every 2–5 seconds until the device is back
* Hook layer — MouseHook listens for WM_DEVICECHANGE notifications and reinstalls the low-level mouse hook when devices are added or removed
* UI layer — connection state flows from HID++ → MouseHook → Engine → Backend (cross-thread safe via Qt signals) → QML, updating the status badge in real time
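The HID++ layer’s “retrying every 2–5 seconds” can be modeled as a delay schedule. The exact ramp below is my assumption; the source only gives the 2–5 s range:

```python
def reconnect_delays(base=2.0, cap=5.0, step=1.0):
    """Yield retry delays that grow from `base` up to `cap` and then
    stay there, staying inside the 2-5 s window described above."""
    delay = base
    while True:
        yield delay
        delay = min(cap, delay + step)
```

A caller would `time.sleep(next(delays))` between reconnect attempts until the device reads succeed again.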
All settings are stored in %APPDATA%\Mouser\config.json (Windows) or ~/Library/Application Support/Mouser/config.json (macOS). The config supports:
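The per-platform location can be resolved with a small helper (a sketch; the function and its parameters are mine):

```python
from pathlib import Path

def config_path(platform, home, appdata=None):
    """Return the config.json location quoted above. `platform` follows
    sys.platform ("win32" or "darwin"); `appdata` is %APPDATA%."""
    if platform == "win32":
        return Path(appdata) / "Mouser" / "config.json"
    return Path(home) / "Library" / "Application Support" / "Mouser" / "config.json"
```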
The app has two pages accessible from a slim sidebar:
* Left panel: List of profiles. The “Default (All Apps)” profile is always present. Per-app profiles show the app icon and name. Select a profile to edit its mappings.
* Right panel: Interactive mouse diagram with clickable hotspot dots on each button. Click a dot to expand an action picker with categorized chips. Changes save instantly to the selected profile.
* Add profile: ComboBox at the bottom lists known apps (Chrome, Edge, VS Code, VLC, etc.). Click “+” to create a per-app profile.
* DPI slider: 200–8000 with quick presets (400, 800, 1000, 1600, 2400, 4000, 6000, 8000). Reads the current DPI from the device on startup.
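Clamping and preset-snapping for that slider might look like this (a sketch; whether the app snaps to a preset or sends the raw clamped value over HID++ is not stated):

```python
PRESETS = (400, 800, 1000, 1600, 2400, 4000, 6000, 8000)

def snap_dpi(value, lo=200, hi=8000):
    """Clamp a requested DPI into the slider's 200-8000 range, then
    snap it to the nearest quick preset."""
    value = max(lo, min(hi, value))
    return min(PRESETS, key=lambda p: abs(p - value))
```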
* Windows & macOS only — Linux is not yet supported
* MX Master 3S only — HID++ feature indices and CIDs are hardcoded for this device (PID 0xB034)
* Bluetooth recommended — HID++ gesture button divert works best over Bluetooth; USB receiver has partial support
* Conflicts with Logitech Options+ — both apps fight over HID++ access; quit Options+ before running Mouser
* Scroll inversion is experimental — uses coalesced PostMessage injection to avoid LL hook deadlocks; may not work perfectly in all apps
* Admin not required — but some games or elevated windows may not receive injected keystrokes
* More devices — support other Logitech HID++ mice (MX Master 3, MX Anywhere 3, etc.)
* Start with Windows — autostart via registry or Task Scheduler
* Per-app profile auto-creation — detect new apps and prompt to create a profile
* macOS support — added via CGEventTap, Quartz CGEvent, and NSWorkspace (thanks @andrew-sz)
Contributions are welcome! To get started:
Set up the dev environment (see Installation)
Make your changes and test with an MX Master 3S
If Mouser saves you from installing Logitech Options+, consider supporting development:
Every bit helps keep the project going — thank you!
This project is licensed under the MIT License.
Mouser is not affiliated with or endorsed by Logitech. “Logitech”, “MX Master”, and “Options+” are trademarks of Logitech International S.A.
...
Read the original on github.com »
Your iPhone (or any other smartphone) is a computer capable of running a complete desktop operating system, and has been so for quite some time.
A week ago, Apple asked us to say hello to the MacBook Neo. It’s a very reasonably priced entrant to the Mac laptop line at $599, and perfect for students at $499 with an education discount.
I have no arguments against this device’s existence. But I couldn’t help noticing that it comes equipped with an A18 Pro chip, the very same chip that powers the iPhone 16 Pro I carry in my pocket. I’m bothered, as I have been since the original iPad introduction 16 years ago, by the unnecessary restrictions corporate powers place on running third-party software and operating systems on devices we own.
“Love at first Mac.” Welcome to the family! Don’t ever think of putting MacOS on your iPad though!
On the MacBook Neo, I can ostensibly go to a browser, any browser, and click links to download whatever software I’d like.
On my iPhone, there is no such ability. As a US citizen, I must go through the Apple-approved App Store to download / install third-party software. Smells like freedom.
On the MacBook Neo, I can run code and build software with no restrictions.
On my iPhone, this ability is heavily sandboxed and there is no full access to the filesystem through a user accessible shell, as much as I may want one.
On the MacBook Neo, I can even opt to not use MacOS at all and instead install Asahi Linux if I so choose (assuming Apple continues to allow custom kernel booting as it has in M-series Macs).
On my iPhone, the only operating system I can use is iOS — the boot loader for iPhones and iPads is locked down so as to restrict “jailbreaking”, otherwise known as modifications that allow a device owner to install software outside of Apple-vetted channels.
Both this new MacBook offering and my iPhone use the same system on a chip, down to the number of CPU cores, GPU cores and gigabytes of RAM. Apple would like to have everyone believe these devices are fundamentally different, that the iPhone must be locked in the name of “user safety”, because clicking on links is dangerous.
That is an insult to the average person’s intelligence. Despite the form factor differences, these devices are basically the exact same thing. The iPhone / iPad have been restricted in their ability to download software straight from the internet, not due to “safety”, but because Apple has a significant profit motive in doing so.
The MacBook is capable of running iOS, iPad OS, MacOS, or any other OS for that matter. The latest iPhones are equally capable of running MacOS, demonstrated beyond a reasonable doubt by this latest product announcement.
An iPhone is a MacBook, is an iPad, is an iPhone. But not actually, because Apple has artificially deemed so through strict hardware-level controls.
In right to root access, I laid out the case for why the right to choose the software loaded on devices we own must be advocated for in the broader “right to repair” discussion.
Nearly two decades after the iPhone was first announced, the chips Apple developed have grown into computing beasts, so much so that even ones meant for iPhones can run MacOS.
The mechanisms employed over these same two decades to restrict software loading on mobile devices are unacceptable. These restrictions are boons for corporate and sovereign powers that would like to control what you can do with your computing devices.
I want to work towards the precedent that every device you own should have the capacity to load the software of your choice. Mobile devices are not special, they are general computers in a handheld form factor, artificially restricted from loading software to keep corporate profits and government controls intact. The narrative that mobile devices are somehow different enough to warrant restrictions on software loading mainly benefits actors that don’t have the average person’s best interests at heart.
Now that I know my iPhone has the capacity to run MacOS, I would very much like to do so. I’d love to consider the possibility of switching to a less intrusive phone and repurposing this iPhone into a web server. Perhaps that seems silly, but it’s really not. I’ve already paid for the device, it’s clearly a full blown computer, and why should I not be able to modify it as I see fit?
A right to root access would make all of this possible. And I really want to run MacOS on my iPhone now, damn it.
...
Read the original on medhir.com »
This is a tool for powerful automation of OS X. At its core, Hammerspoon is just a bridge between the operating system and a Lua scripting engine.
What gives Hammerspoon its power is a set of extensions that expose specific pieces of system functionality to the user. With these, you can write Lua scripts to control many aspects of your OS X environment.
* Drag Hammerspoon.app from your Downloads folder to Applications
Out of the box, Hammerspoon does nothing - you will need to create ~/.hammerspoon/init.lua and fill it with useful code. There are several resources which can help you:
Hammerspoon is a fork of Mjolnir. Mjolnir aims to be a very minimal application, with its extensions hosted externally and managed using a Lua package manager. We wanted to provide a more integrated experience.
Our intentions for Hammerspoon broadly fall into these categories:
* Ever wider coverage of system APIs in Extensions
...
Read the original on github.com »
A Hard Reset, and What Comes Next

Building on the internet in 2026 is different. We learned that the hard way. Today we’re sharing difficult news: we’ve made the decision to significantly downsize the Digg team. This wasn’t a decision made lightly, and it’s important to say clearly: this is one of the strongest groups of people we’ve ever had the privilege of working with. This is not a reflection of their talent, their effort, or their belief in what we were building. It’s a reflection of the brutal reality of finding product-market fit in an environment that has fundamentally changed.
When the Digg beta launched, we immediately noticed posts from SEO spammers noting that Digg still carried meaningful Google link authority. Within hours, we got a taste of what we’d only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn’t appreciate the scale, sophistication, or speed at which they’d find us. We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can’t trust that the votes, the comments, and the engagement you’re seeing are real, you’ve lost the foundation a community platform is built on.
This isn’t just a Digg problem. It’s an internet problem. But it hit us harder because trust is the product.
We underestimated the gravitational pull of existing platforms. Network effects aren’t just a moat, they’re a wall. The loyalty users have to the communities they’ve already built elsewhere is profound. Getting people to move is a hard enough problem. Getting them to move and bring their people with them is something else entirely.
We’re not giving up. Digg isn’t going away.
A small but determined team is stepping up to rebuild with a completely reimagined angle of attack. Positioning Digg as simply an alternative to incumbents wasn’t imaginative enough. That’s a race we were never going to win. What comes next needs to be genuinely different.
We’re also announcing something we’re excited about: Kevin Rose, Digg’s founder who started the company back in 2004, is returning to join the team full-time. Starting the first week of April, Kevin will be putting his focus back on the company he built twenty+ years ago. He’ll continue as an advisor to True Ventures, but Digg will be his primary focus. We couldn’t think of a better person to help figure out what Digg needs to become.
Also, Diggnation, our official Digg podcast, will continue recording monthly while we work on the re-reboot.
Lastly, and most importantly, thank you…To the team members we’re saying goodbye to today: thank you. You took a bet on a hard problem and showed up every day. The work you did laid the groundwork for what comes next, even if it doesn’t feel that way right now.
To the community who came back to Digg, submitted links, argued in the comments, and emailed us with what you wanted: we haven’t forgotten why we’re doing this. We know how frustrating this is, and we hope you’ll give us another look once we have something to show, we’ll save your usernames!
Ultimately, the internet needs a place where we can trust the content and the people behind it. We’re going to figure out how to build it.
More soon
–@justin, CEO
...
Read the original on digg.com »
Wired headphones were supposed to die with the headphone jack. Instead, they’re making a comeback as consumers seek out better sound quality and technology from a simpler time.
When Apple ditched the headphone jacks on iPhones in 2016, I went into exile. I wasn’t about to let a giant company dictate my listening habits, so I bought an Android and plugged in for dear life. But eventually, my phone took its last breath in the same exact month that Google — one of the last holdouts — said it was getting rid of headphone jacks on its phones too. It felt like a cosmic sign of defeat. So I went back to iPhone, tossed my wired earbuds in a drawer and joined the Bluetooth hordes.
Maybe I gave up too easily. Recently, a quiet movement has grown in the shadows based on a controversial truth: wired headphones are better than Bluetooth. Sales are through the roof in recent months. You can often get better sound for the money with a wired pair, but it’s not just audio snobs either. Wired headphones are a full-blown cultural trend — a resurgence some tie to a broader anti-tech backlash. Whether it’s practical, political or aesthetic, one thing is clear. Wired headphones are back.
“I’m converted,” says Aryn Grusin, a wired-headphone-loving social worker from Portland, Oregon in the US. A few months ago, she borrowed her fiancé’s old-fashioned wired earbuds and never looked back. “I just think it feels comforting. I like that it signifies to the world that I’m listening to something.”
...
Read the original on www.bbc.com »
“In reality, it will largely be down to parents to educate themselves on why these changes have been brought in and decide for themselves if they deem the games (and their loot box mechanics) suitable for their child,” she said.
...
Read the original on www.bbc.com »
It’s tough to make predictions, especially about the future.
I’ve been an Emacs fanatic for over 20 years. I’ve built and maintained some of the most popular Emacs packages, contributed to Emacs itself, and spent countless hours tweaking my configuration. Emacs isn’t just my editor — it’s my passion, and my happy place.
Over the past year, I’ve also been spending a lot of time with Vim and Neovim, relearning them from scratch and having a blast contrasting how the two communities approach similar problems. It’s been a fun and refreshing experience.
And lately, like everyone else in our industry, I’ve been playing with AI tools — Claude Code in particular — watching the impact of AI on the broader programming landscape, and pondering what it all means for the future of programming. Naturally, I keep coming back to the same question: what happens to my beloved Emacs and its “arch nemesis” Vim in this brave new world?
I think the answer is more nuanced than either “they’re doomed” or “nothing changes”. Predicting the future is obviously hard work, but it’s so fun to speculate on it.
My reasoning is that every major industry shift presents plenty of risks and opportunities for those involved in it, so I want to spend a bit of time ruminating over the risks and opportunities for Emacs and Vim.
The only thing that is constant is change.
Every major technological disruption presents both challenges and opportunities. Things are never simply black and white — just some shade of gray. The AI revolution is no different, and I think it’s worth examining both sides honestly before drawing any conclusions.
VS Code is already the dominant editor by a wide margin, and it’s going to get first-class integrations with every major AI tool — Copilot (obviously), Codex, Claude, Gemini, you name it. Microsoft has every incentive to make VS Code the best possible host for AI-assisted development, and the resources to do it.
On top of that, purpose-built AI editors like Cursor, Windsurf, and others are attracting serious investment and talent. These aren’t adding AI to an existing editor as an afterthought — they’re building the entire experience around AI workflows. They offer integrated context management, inline diffs, multi-file editing, and agent loops that feel native rather than bolted on.
Every developer who switches to one of these tools is a developer who isn’t learning Emacs or Vim keybindings, isn’t writing Elisp, and isn’t contributing to our ecosystems. The gravity well is real.
I’ve never tried Cursor or Windsurf, simply because they are essentially forks of VS Code, and I can’t stand VS Code. I tried it several times over the years and never felt productive in it, for a variety of reasons.
Part of the case for Emacs and Vim has always been that they make you faster at writing and editing code. The keybindings, the macros, the extensibility — all of it is in service of making the human more efficient at the mechanical act of coding.
But if AI is writing most of your code, how much does mechanical editing speed matter? When you’re reviewing and steering AI-generated diffs rather than typing code character by character, the bottleneck shifts from “how fast can I edit” to “how well can I specify intent and evaluate output.” That’s a fundamentally different skill, and it’s not clear that Emacs or Vim have an inherent advantage there.
The learning curve argument gets harder to justify too. “Spend six months learning Emacs and you’ll be 10x faster” is a tough sell when a junior developer with Cursor can scaffold an entire application in an afternoon.
VS Code has Microsoft. Cursor has venture capital. Emacs has… a small group of volunteers and the FSF. Vim had Bram, and now has a community of maintainers. Neovim has a small but dedicated core team.
This has always been the case, of course, but AI amplifies the gap. Building deep AI integrations requires keeping up with fast-moving APIs, models, and paradigms. Well-funded teams can dedicate engineers to this full-time. Volunteer-driven projects move at the pace of people’s spare time and enthusiasm.
Let’s go all the way: what if programming as we know it is fully automated within the next decade? If AI agents can take a specification and produce working, tested, deployed software without human intervention, we won’t need coding editors at all. Not Emacs, not Vim, not VS Code, not Cursor. The entire category becomes irrelevant.
I don’t think this is likely in the near term, but it’s worth acknowledging as a possibility. The trajectory of AI capabilities has surprised even the optimists (and I was initially an AI skeptic, but the rapid advancements last year eventually changed my mind).
That paints a grim picture, but here’s the thing — Emacs and Vim have been written off more times than I can count. Eclipse was going to kill them. IntelliJ was going to kill them. VS Code was going to kill them. Sublime Text, Atom, TextMate — all were supposedly the final nail in the coffin. Most of those “killers” are themselves dead or declining, while Emacs and Vim keep chugging along. There’s a resilience to these editors that’s easy to underestimate.
So let’s look at the other side of the coin.
Here’s the thing almost nobody is talking about: Emacs and Vim have always suffered from the obscurity of their extension languages. Emacs Lisp is a 1980s Lisp dialect that most programmers have never seen before. VimScript is… VimScript. Even Lua, which Neovim adopted specifically because it’s more approachable, is niche enough that most developers haven’t written a line of it.
This has been the single biggest bottleneck for both ecosystems. Not the editors themselves — they’re incredibly powerful — but the fact that customizing them requires learning an unfamiliar language, and most people never make it past copying snippets from blog posts and READMEs.
I felt incredibly overwhelmed by Elisp and VimScript when I was learning Emacs and Vim for the first time, and I imagine I wasn’t the only one. I started to feel very productive in Emacs only after putting in quite a lot of time to actually learn Elisp properly. (Never bothered to do the same for VimScript, though, and admittedly I’m not too eager to master Lua either.)
AI changes this overnight. You can now describe what you want in plain English and get working Elisp, VimScript, or Lua. “Write me an Emacs function that reformats the current paragraph to 72 columns and adds a prefix” — done. “Configure lazy.nvim to set up LSP with these keybindings” — done. The extension language barrier, which has been the biggest obstacle to adoption for decades, is suddenly much lower.
After 20+ years in the Emacs community, I often have the feeling that a relatively small group — maybe 50 to 100 people — is driving most of the meaningful progress. The same names show up in MELPA, on the mailing lists, and in bug reports. This isn’t a criticism of those people (I’m proud to be among them), but it’s a structural weakness. A community that depends on so few contributors is fragile.
And it’s not just Elisp and VimScript. The C internals of both Emacs and Vim (and Neovim’s C core) are maintained by an even smaller group. Finding people who are both willing and able to hack on decades-old C codebases is genuinely hard, and it’s only getting harder as fewer developers learn C at all.
AI tools can help here in two ways. First, they lower the barrier for new contributors — someone who understands the concept of what they want to build can now get AI assistance with the implementation in an unfamiliar language. Second, they help existing maintainers move faster. I’ve personally found that AI is excellent at generating test scaffolding, writing documentation, and handling the tedious parts of package maintenance that slow everything down.
The Emacs and Neovim communities aren’t sitting idle. There are already impressive AI integrations:
* ellama — an Emacs interface for interacting with LLMs via llama.cpp and Ollama
* copilot.el — GitHub Copilot integration (I happen to be the current maintainer of the project)
* agent-shell — a native Emacs buffer for interacting with LLM agents (Claude Code, Gemini CLI, etc.) via the Agent Client Protocol
* gp.nvim — ChatGPT-like sessions in Neovim with support for multiple providers
And this is just a sample. Building these integrations isn’t as hard as it might seem — the APIs are straightforward, and the extensibility of both editors means you can wire up AI tools in ways that feel native. With AI assistance, creating new integrations becomes even easier. I wouldn’t be surprised if the pace of plugin development accelerates significantly.
Here’s an irony that deserves more attention: many of the most powerful AI coding tools are terminal-native. Claude Code, Aider, and various Copilot CLI tools all run in the terminal. And what lives in the terminal? Emacs and Vim.
Running Claude Code in an Emacs vterm buffer or a Neovim terminal split is a perfectly natural workflow. You get the AI agent in one pane and your editor in another, with all your keybindings and tools intact. There’s no context switching to a different application — it’s all in the same environment.
This is actually an advantage over GUI-based AI editors, where the AI integration is tightly coupled to the editor’s own interface. With terminal-native tools, you get to choose your own editor and your own AI tool, and they compose naturally.
There’s another angle worth considering: if programming is increasingly about writing prompts rather than code, you still benefit from a great text editor for that. Prompts are text, and crafting them well matters. I find it ironic that Claude Code — a tool I otherwise love — doesn’t use readline, so my Emacs keybindings don’t work properly in it, and its vim emulation is fairly poor. I still think using React for CLI apps is a mistake, and I suspect many people would enjoy running Claude Code inside their Emacs or Vim instead. That’s exactly what the Agent Client Protocol (ACP) enables — it lets editors like Emacs (via agent-shell) act as first-class clients for AI agents, giving you proper editing, keybindings, and all the power of your editor while interacting with tools like Claude Code. The best prompt editor might just be the one you’ve been using for decades.
Emacs’s “editor as operating system” philosophy is uniquely well-suited to AI integration. It’s not just a code editor — it’s a mail client (Gnus, mu4e), a note-taking system (Org mode), a Git interface (Magit), a terminal emulator, a file manager, an RSS reader, and much more.
AI can be integrated at every one of these layers. Imagine an AI assistant that can read your org-mode agenda, draft email replies in mu4e, help you write commit messages in Magit, and refactor code in your source buffers — all within the same environment, sharing context. No other editor architecture makes this kind of deep, cross-domain integration as natural as Emacs does.
Admittedly, I stopped using Emacs as my OS a long time ago, and these days I use it mostly for programming and blogging. (I’m writing this article in Emacs with the help of markdown-mode.) Still, I’m only one Emacs user, and many others probably use it in a more holistic manner.
One of the most underappreciated benefits of AI for Emacs and Vim users is mundane: troubleshooting. Both editors have notoriously steep learning curves and opaque error messages. “Wrong type argument: stringp, nil” has driven more people away from Emacs than any competitor ever did.
AI tools are remarkably good at explaining cryptic error messages, diagnosing configuration issues, and suggesting fixes. They can read your init file and spot the problem. They can explain what a piece of Elisp does. They can help you understand why your keybinding isn’t working. This dramatically flattens the learning curve — not by making the editor simpler, but by giving every user access to a patient, knowledgeable guide.
I don’t really need any AI assistance to troubleshoot anything in my Emacs setup, but it’s been handy occasionally in Neovim-land, where my knowledge is relatively modest by comparison.
There’s at least one documented case of someone returning to Emacs after years away, specifically because Claude Code made it painless to fix configuration issues. They’d left for IntelliJ because the configuration burden got too annoying — and came back once AI removed that barrier. “Happy f*cking days I’m home again,” as they put it. If AI can bring back lapsed Emacs users, that’s a good thing in my book.
Let’s revisit the doomsday scenario. Say programming is fully automated and nobody writes code anymore. Does Emacs die?
Not necessarily. Emacs is already used for far more than programming. People use Org mode to manage their entire lives — tasks, notes, calendars, journals, time tracking, even academic papers. Emacs is a capable writing environment for prose, with excellent support for LaTeX, Markdown, AsciiDoc, and plain text. You can read email, browse the web, manage files, and yes, play Tetris.
Vim, similarly, is a text editing paradigm as much as a program. Vim keybindings have colonized every text input in the computing world — VS Code, IntelliJ, browsers, shells, even Emacs (via Evil mode). Even if the Vim program fades, the Vim idea is immortal.
And who knows — maybe there’ll be a market for artisanal, hand-crafted software one day, the way there’s a market for vinyl records and mechanical watches. “Organic, small-batch code, lovingly typed by a human in Emacs — one character at a time.” I’d buy that t-shirt. And I’m fairly certain those artisan programmers won’t be using VS Code.
So even in the most extreme scenario, both editors have a life beyond code. A diminished one, perhaps, but a life nonetheless.
I think what’s actually happening is more interesting than “editors die” or “editors are fine.” The role of the editor is shifting.
For decades, the editor was where you wrote code. Increasingly, it’s becoming where you review, steer, and refine code that AI writes. The skills that matter are shifting from typing speed and editing gymnastics to specification clarity, code reading, and architectural judgment.
In this world, the editor that wins isn’t the one with the best code completion — it’s the one that gives you the most control over your workflow. And that has always been Emacs and Vim’s core value proposition.
The question is whether the communities can adapt fast enough. The tools are there. The architecture is there. The philosophy is right. What’s needed is people — more contributors, more plugin authors, more documentation writers, more voices in the conversation. AI can help bridge the gap, but it can’t replace genuine community engagement.
Not everyone in the Emacs and Vim communities is enthusiastic about AI, and the objections go beyond mere technophobia. There are legitimate ethical concerns that are going to be debated for a long time:
* Energy consumption. Training and running large language models requires enormous amounts of compute and electricity. For communities that have long valued efficiency and minimalism — Emacs users who pride themselves on running a 40-year-old editor, Vim users who boast about their sub-second startup times — the environmental cost of AI is hard to ignore.
* Copyright and training data. LLMs are trained on vast corpora of code and text, and the legality and ethics of that training remain contested. Some developers are uncomfortable using tools that may have learned from copyrighted code without explicit consent. This concern hits close to home for open-source communities that care deeply about licensing.
* Job displacement. If AI makes developers significantly more productive, fewer developers might be needed. This is an uncomfortable thought for any programming community, and it’s especially pointed for editors whose identity is built around empowering human programmers.
These concerns are already producing concrete action. The Vim community recently saw the creation of EVi, a fork of Vim whose entire raison d’etre is to provide a text editor free from AI-assisted (generated?) code contributions. Whether you agree with the premise or not, the fact that people are forking established editors over this tells you how strongly some community members feel.
I don’t think these concerns should stop anyone from exploring AI tools, but they’re real and worth taking seriously. I expect to see plenty of spirited debate about this on emacs-devel and the Neovim issue tracker in the years ahead.
The future ain’t what it used to be.
I won’t pretend I’m not worried. The AI wave is moving fast, the incumbents have massive advantages in funding and mindshare, and the very nature of programming is shifting under our feet. It’s entirely possible that Emacs and Vim will gradually fade into niche obscurity, used only by a handful of diehards who refuse to move on.
But I’ve been hearing that Emacs is dying for 20 years, and it’s still here. The community is small but passionate, the editor is more capable than ever, and the architecture is genuinely well-suited to the AI era. Vim’s situation is similar — the core idea is so powerful that it keeps finding new expression (Neovim being the latest and most vigorous incarnation).
The editors that survive won’t be the ones with the flashiest AI features. They’ll be the ones whose users care enough to keep building, adapting, and sharing. That’s always been the real engine of open-source software, and no amount of AI changes that.
So if you’re an Emacs or Vim user: don’t panic, but don’t be complacent either. Learn the new AI tools (if you’re not fundamentally opposed to them, that is). Pimp your setup and make it awesome. Write about your workflows. Help newcomers. The best way to ensure your editor survives the AI age is to make it thrive in it.
Maybe the future ain’t what it used to be — but that’s not necessarily a bad thing.
I’ll admit this essay turned out longer and more disjointed than I’d hoped. I’ve had all these thoughts rattling around in my head for a while now and I wanted to cover them all, but I definitely struggled to weave them into a cohesive narrative. Programming may be hard, but writing prose remains harder. Thanks for bearing with me.
That’s all I have for you today. Keep hacking!
P. S. There’s an interesting Hacker News discussion about this article. Check it out if you want to see what the broader community thinks!
...
Read the original on batsov.com »
Yesterday, the IRS announced the release of the project I’ve been the engineering lead on since this summer, its new Tax Withholding Estimator (TWE). Taxpayers enter in their income, expected deductions, and other relevant info to estimate what they’ll owe in taxes at the end of the year, and adjust the withholdings on their paycheck. It’s free, open source, and, in a major first for the IRS, open for public contributions.
TWE is full of exciting learnings about the field of public sector software. Being me, I’m going to start by writing about by far the driest one: XML.
XML is widely considered clunky at best, obsolete at worst. It evokes memories of SOAP configs and J2EE (it’s fine, even good, if those acronyms don’t mean anything to you). My experience with the Tax Withholding Estimator, however, has taught me that XML absolutely has a place in modern software development, and it should be considered a leading option for any cross-platform declarative specification.
TWE is a static site generated from two XML configurations. The first of these configs is the Fact Dictionary, our representation of the US Tax Code; the second will be the subject of a later blog post.
We use the Fact Graph, a logic engine, to calculate the taxpayer’s tax obligations (and their withholdings) based on the facts defined in the Fact Dictionary. The Fact Graph was originally built for IRS Direct File and now we use it for TWE. I’m going to introduce you to the Fact Graph the way that I was introduced to it: by example.
Put aside any preconceptions you might have about XML for a moment and ask yourself what this fact describes, and how well it describes it.
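Here is a sketch of the fact, using the element names that appear in the s-expression example later in this post (the exact attribute syntax is an approximation):

```xml
<Fact path="/totalOwed">
  <Description>
    The amount the taxpayer will need to pay the IRS at the end of the year.
  </Description>
  <Derived>
    <Subtract>
      <Minuend>
        <Dependency path="/totalTax" />
      </Minuend>
      <Subtrahend>
        <Dependency path="/totalPayments" />
      </Subtrahend>
    </Subtract>
  </Derived>
</Fact>
```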
This fact describes a /totalOwed fact that’s derived by subtracting /totalPayments from /totalTax. In tax terms, this fact describes the amount you will need to pay the IRS at the end of the year. That amount, “total owed,” is the difference between the total taxes due for your income (“total tax”) and the amount you’ve already paid (“total payments”).
My initial reaction to this was that it’s quite verbose, but also reasonably clear. That’s more or less how I still feel.
You only need to look at a few of these to intuit the structure. Take the refundable credits calculation, for example. A refundable credit is a tax credit that can lead to a negative tax balance—if you qualify for more refundable credits than you owe in taxes, the government just gives you some money. TWE calculates the total value of refundable credits by adding up the values of the Earned Income Credit, the Child Tax Credit (CTC), American Opportunity Credit, the refundable portion of the Adoption Credit, and some other stuff from the Schedule 3.
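Sketched in the same style, using the fact names from the JavaScript comparison later in this post (the summation element is assumed to be an <Add>; the real name may differ):

```xml
<Fact path="/totalRefundableCredits">
  <Derived>
    <Add>
      <Dependency path="/earnedIncomeCredit" />
      <Dependency path="/additionalCtc" />
      <Dependency path="/americanOpportunityCredit" />
      <Dependency path="/adoptionCreditRefundable" />
      <Dependency path="/schedule3OtherPaymentsAndRefundableCreditsTotal" />
    </Add>
  </Derived>
</Fact>
```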
By contrast, non-refundable tax credits can bring your tax burden down to zero, but won’t ever make it negative. TWE models that by subtracting non-refundable credits from the tentative tax burden while making sure it can’t go below zero, using the <GreaterOf> operator.
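Expressed as a fact, that calculation looks roughly like this (a sketch reconstructed from the JSON and s-expression renderings shown later; the exact attribute syntax may differ):

```xml
<Fact path="/tentativeTaxNetNonRefundableCredits">
  <Description>
    Total tentative tax after applying non-refundable credits,
    but before applying refundable credits.
  </Description>
  <Derived>
    <GreaterOf>
      <Dollar>0</Dollar>
      <Subtract>
        <Minuend>
          <Dependency path="/totalTentativeTax" />
        </Minuend>
        <Subtrahend>
          <Dependency path="/totalNonRefundableCredits" />
        </Subtrahend>
      </Subtract>
    </GreaterOf>
  </Derived>
</Fact>
```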
While admittedly very verbose, the nesting is straightforward to follow. The tax after non-refundable credits is derived by saying “give me the greater of these two numbers: zero, or the difference between tentative tax and the non-refundable credits.”
Finally, what about inputs? Obviously we need places for the taxpayer to provide information, so that we can calculate all the other values.
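A sketch of an input fact for the estimated taxes the taxpayer has already paid (the path is borrowed from the JavaScript comparison below; exact syntax may differ):

```xml
<Fact path="/totalEstimatedTaxesPaid">
  <Description>
    Estimated tax payments the taxpayer has already made this year.
  </Description>
  <Writable>
    <Dollar />
  </Writable>
</Fact>
```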
Okay, so instead of <Derived> we use <Writable>. Because the value is… writable. Fair enough. The nested element denotes what type of value this fact takes. True-or-false questions use <Boolean>, like this one that records whether the taxpayer is 65 or older.
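A sketch of such a true-or-false fact (the path here is hypothetical):

```xml
<Fact path="/is65OrOlder">
  <Writable>
    <Boolean />
  </Writable>
</Fact>
```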
There are some (much) longer facts, but these are a fair representation of what the median fact looks like. Facts depend on other facts, sometimes derived and sometimes writable, and they all add up to some final tax numbers at the end. But why encode math this way when it seems far clunkier than traditional notation?
Countless mainstream programming languages would instead let you write this calculation in a notation that looks more like normal math. Take this JavaScript example, which looks like elementary algebra:
const totalOwed = totalTax - totalPayments
That seems better! It’s far more concise, easier to read, and doesn’t make you explicitly label the “minuend” and “subtrahend.”
Let’s add in the definitions for totalTax and totalPayments.
const totalTax = tentativeTaxNetNonRefundableCredits + totalOtherTaxes
const totalPayments = totalEstimatedTaxesPaid +
totalTaxesPaidOnSocialSecurityIncome +
totalRefundableCredits
const totalOwed = totalTax - totalPayments
Still not too bad. Total tax is calculated by adding the tax after non-refundable credits (discussed earlier) to whatever’s in “other taxes.” Total payments is the sum of estimated taxes you’ve already paid, taxes you’ve paid on social security, and any refundable credits.
The problem with the JavaScript representation is that it’s imperative. It describes actions you take in a sequence, and once the sequence is done, the intermediate steps are lost. The issues with this get more obvious when you go another level deeper, adding the definitions of all the values that totalTax and totalPayments depend on.
// Total tax calculation
const totalOtherTaxes = selfEmploymentTax + additionalMedicareTax + netInvestmentIncomeTax
const tentativeTaxNetNonRefundableCredits = Math.max(totalTentativeTax - totalNonRefundableCredits, 0)
const totalTax = tentativeTaxNetNonRefundableCredits + totalOtherTaxes
// Total payments calculation
const totalEstimatedTaxesPaid = getInput()
const totalTaxesPaidOnSocialSecurityIncome = socialSecuritySources
.map(source => source.totalTaxesPaid)
.reduce((acc, val) => { return acc+val }, 0)
const totalRefundableCredits = earnedIncomeCredit +
additionalCtc +
americanOpportunityCredit +
adoptionCreditRefundable +
schedule3OtherPaymentsAndRefundableCreditsTotal
const totalPayments = totalEstimatedTaxesPaid +
totalTaxesPaidOnSocialSecurityIncome +
totalRefundableCredits
// Total owed
const totalOwed = totalTax - totalPayments
We are quickly arriving at a situation that has a lot of subtle problems.
One problem is the execution order. The hypothetical getInput() function solicits an answer from the taxpayer, which has to happen before the program can continue. Calculations that don’t depend on knowing “total estimated taxes” are still held up waiting for the user; calculations that do depend on knowing that value had better be specified after it.
Or, take a close look at how we add up all the social security income:
const totalTaxesPaidOnSocialSecurityIncome = socialSecuritySources
.map(source => source.totalTaxesPaid)
.reduce((acc, val) => { return acc+val }, 0)
All of a sudden we are really in the weeds with JavaScript. These are not complicated code concepts—map and reduce are both in the standard library and basic functional paradigms are widespread these days—but they are not tax math concepts. Instead, they are implementation details.
Compare it to the Fact representation of that same value.
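A sketch of that fact, assuming a <CollectionSum> operator and a wildcard path segment (the real element name may differ):

```xml
<Fact path="/totalTaxesPaidOnSocialSecurityIncome">
  <Derived>
    <CollectionSum>
      <Dependency path="/socialSecuritySources/*/totalTaxesPaid" />
    </CollectionSum>
  </Derived>
</Fact>
```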
This isn’t perfect—the * that represents each social security source is a little hacky—but the meaning is much clearer. What are the total taxes paid on social security income? The sum of the taxes paid on each social security income source. How do you add all the items in a collection? With a single sum over the collection.
Plus, it reads like all the other facts; needing to add up all items in a collection didn’t suddenly kick us into a new conceptual realm.
The philosophical difference between these two is that, unlike JavaScript, which is imperative, the Fact Dictionary is declarative. It doesn’t describe exactly what steps the computer will take or in what order; it describes a bunch of named calculations and how they depend on each other. The engine decides automatically how to execute that calculation.
Besides being (relatively) friendlier to read, the most important benefit of a declarative tax model is that you can ask the program how it calculated something. Per the Fact Graph’s original author, Chris Given:
The Fact Graph provides us with a means of proving that none of the unasked questions would have changed the bottom line of your tax return and that you’re getting every tax benefit to which you’re entitled.
Suppose you get a value for totalOwed that doesn’t seem right. You can’t ask the JavaScript version “how did you arrive at that number?” because those intermediate values have already been discarded. Imperative programs are generally debugged by adding log statements or stepping through with a debugger, pausing to check each value. This works fine when the number of intermediate values is small; it does not scale at all for the US Tax Code, where the final value is calculated based on hundreds upon hundreds of calculations of intermediate values.
With a declarative graph representation, we get auditability and introspection for free, for every single calculation.
Intuit, the company behind TurboTax, came to the same conclusion, and published a whitepaper about their “Tax Knowledge Graph” in 2020. Their implementation is not open source, however (or at least I can’t find it). The IRS Fact Graph is open source and public domain, so it can be studied, shared, and extended by the public.
If we accept the need for a declarative data representation of the tax code, what should it be?
In many of the places where people used to encounter XML, such as network data transfer and configuration files, it has been replaced by JSON. I find JSON to be a reasonably good wire format and a painful configuration format, but in neither case would I rather be using XML (although it’s a close call on the latter).
The Fact Dictionary is different. It’s not a pile of settings or key-value pairs. It’s a custom language that models a unique and complex problem space. In programming we call this a domain-specific language, or DSL for short.
As an exercise, I tried to come up with a plausible JSON representation of the /tentativeTaxNetNonRefundableCredits fact from earlier.
{
  "path": "/tentativeTaxNetNonRefundableCredits",
  "description": "Total tentative tax after applying non-refundable credits, but before applying refundable credits.",
  "definition": {
    "type": "Expression",
    "kind": "GreaterOf",
    "children": [
      {
        "type": "Value",
        "kind": "Dollar",
        "value": 0
      },
      {
        "type": "Expression",
        "kind": "Subtract",
        "minuend": {
          "type": "Dependency",
          "path": "/totalTentativeTax"
        },
        "subtrahend": {
          "type": "Dependency",
          "path": "/totalNonRefundableCredits"
        }
      }
    ]
  }
}
I think this XML representation could be improved, but even in its current form, it is clearly better than JSON. (It’s also, amusingly, a couple lines shorter.) Attributes and named children give you just enough expressive power to make choices about what your language should or should not emphasize. Not being tied to a specific set of data types makes it reasonable to define your own, such as a distinction between “dollars” and “integers.”
A lot of minor frustrations we’ve all internalized as inevitable with JSON are actually JSON-specific. XML has comments, for instance. That’s nice. It also has sane whitespace and newline handling, which is important when your descriptions are often long. For text that has any length or shape to it, XML is far more pleasant to read and edit by hand than JSON.
There are still verbosity gains to be had, particularly with switch statements (omitted here out of respect for page length). I’d certainly remove the explicit “minuend” and “subtrahend,” for starters.
I believe that the original team didn’t do this because they didn’t want the order of the children to have semantic consequence. I get it, but order is guaranteed in XML and I think the additional nesting and words do more harm than good.
What about YAML? Chris Given again:
whatever you do, don’t try to express the logic of the Internal Revenue Code as YAML
Finally, there’s a good case to be made that you could build this DSL with s-expressions. In a lot of ways, this is the nicest syntax to read and edit.
(Fact
  (Path "/tentativeTaxNetNonRefundableCredits")
  (Description "Total tentative tax after applying non-refundable
                credits, but before applying refundable credits.")
  (Derived
    (GreaterOf
      (Dollar 0)
      (Subtract
        (Minuend (Dependency "/totalTentativeTax"))
        (Subtrahends (Dependency "/totalNonRefundableCredits"))))))
HackerNews user ok123456 asks: “Why would I want to use this over Prolog/Datalog?”
I’m a Prolog fan! This is also possible.
...
Read the original on unplannedobsolescence.com »
What It Is, Why I’m Doing It Now, and How It Came About
Thanks to all the backers who have contributed to the campaign so far, plus a special shout-out to those who have generously donated to support my work! As a subscriber to the “Dabao” campaign, you’re already aware of the Baochip-1x. This update fills in the backstory of what it is, why I’m doing it now, and how it came about.
In my mind, the Baochip-1x’s key differentiating feature is the inclusion of a Memory Management Unit (MMU). No other microcontroller in this performance/integration class has this feature, to the best of my knowledge. For those not versed in OS-nerd speak, the MMU is what sets the software that runs on your phone or desktop apart from the software that runs in your toaster oven. It facilitates secure, loadable apps by sticking every application in its own virtual memory space.
The MMU is a venerable piece of technology, dating back to the 1960’s. Its page-based memory protection scheme is well-understood and has passed the test of time; I’ve taught its principles to hundreds of undergraduates, and it continues to be a cornerstone of modern OSes.
Diagram illustrating an early virtual memory scheme from Kilburn, et al, ‘One-level storage system’, IRE Transactions, EC-11(2):223-235, 1962
When it comes to evaluating security-oriented features, older is not always worse; in fact, withstanding the test of time is a positive signal. For example, the AES cipher is about 26 years old. This seems ancient for computer technology, yet many cryptographers recommend it over newer ciphers explicitly because AES has withstood the test of hundreds of cryptographers trying to break it, with representation from every nation state, over years and years.
I’m aware of newer memory protection technologies, such as CHERI, PMPs, MPUs… and as a nerd, I love thinking about these sorts of things. In fact, in my dissertation, I even advocated for the use of CHERI-style hardware capabilities and tagged pointers in new CPU architectures.
However, as a pragmatic system architect, I see no reason to eschew the MMU in favor of any of these. In fact, the MMU is composable with all of these primitives — it’s valid to have both a PMP and an MMU in the same RISC-V CPU. And, even if you’re using a CHERI-like technology for hardware-enforced bounds checking on pointers, it still doesn’t allow for transparent address space relocation. Without page-based virtual memory, each program would need to be linked to a distinct, non-overlapping region of physical address space at compile time, and you couldn’t have swap memory.
This raises the question: if the MMU is such an obvious addition, why isn’t it more prevalent? Wouldn’t more players include it in their chips?
“Small” CPUs such as those found in embedded SoCs have lacked this feature since their inception. I trace this convention back to the introduction of the ARM7TDMI core in the 1990s. Back then, transistors were scarce, memory even more so, and so virtual memory was not a great product/market fit for devices with just a couple kilobytes of RAM, not even enough to hold a page table. The ARM7TDMI core’s efficiency and low cost made it a runaway success, shipping over a billion units and establishing ARM as the dominant player in the embedded SoC space.
Fast forward 30 years, and Moore’s Law has given us tens of thousands of times more capability; today, a fleck of silicon smaller than your pinky nail contains more transistors than a full-sized PC desktop from the 1990s. Despite the progress, these small flecks of silicon continue to adhere to the pattern that was established in the 1990s: small systems get flat memory spaces with no address isolation.
Die shot of a modern 22nm system-on-chip (SoC). This fleck of silicon is about 4mm on a side and contains more transistors than a desktop PC from the 1990’s. Despite this, the logic region is more empty space by area than active gates
The root cause turns out to be precisely that MMUs are so valuable: without one, you can’t run Linux, BSD, or Mach. Thus, when ARM split their IP portfolio into the A, R, and M-series cores, the low-cost M-series cores were forbidden from having an MMU to prevent price erosion of their high-end A-series cores. Instead, a proprietary hack known as the “MPU” was introduced that gives some memory security, but without an easy path to benefits such as swap memory.
We’ve been locked into this convention for so long that we simply forgot to challenge the assumptions.
Thanks to the rise of open architecture specifications such as RISC-V, and fully-open implementations of the RISC-V spec such as the Vexriscv, I’m not bound by anyone’s rules for what can or can’t go onto an SoC. And so, I am liberated to make the choice to include an MMU in the Baochip-1x.
This naturally empowers enthusiasts to try and run Linux on the Baochip-1x, but we (largely Sean ‘xobs’ Cross and me) already wrote a pure-Rust OS called “Xous” which incorporates an MMU but in a framework that is explicitly targeted towards small memory footprint devices like the Baochip-1x. The details of Xous are beyond the scope of this post, but if you’re interested, check out the talk we gave at 39C3.
This couples into the core argument as to why a “mostly open RTL” SoC is the right thing for this moment in time. As a staunch advocate for open-source technologies, I would love to see a fully-open silicon stack, from the fabs-up. I’m heartened to see multiple initiatives working on fixing this problem, but it’s a hard problem. I estimate it could take more than a decade before we have a sufficiently robust open source silicon ecosystem to market economically competitive SoCs.
For those of us looking to create an embedded product today, that leaves only one practical option: continue to use Cortex-M ARM devices, and if we want hardware memory protection, we have to tune our software to their proprietary MPU. This means further entrenching our code bases in a proprietary standard. Do I really want to spend my time porting Xous to use ARM’s proprietary flavor of memory protection? Surely not.
Thus, I would argue that we simply can’t afford to wait for fully open source PDKs to come along. Given the opportunity to do a partially-open RTL tapeout today, versus waiting for the perfect, fully-open source solution, the benefit of taping out partially-open RTL SoCs today is crystal clear to me.
A partially-open SoC available today empowers a larger community that is interested in an open source future, even if they aren’t hardware experts. As a larger community, we can begin the process of de-leveraging ARM together, so that when economically viable, “truly open” silicon alternatives come to market, they can drop directly into a mature application stack. After all, software drives demand for silicon, not the other way around.
The good news is that on the Baochip-1x, everything that can “compute” on data is available for simulation and inspection. The parts that are closed are components such as the AXI bus framework, USB PHY, and analog components such as the PLL, voltage regulators, and I/O pads.
Thus, while certain portions of the Baochip-1x SoC are closed-source, none of them are involved in the transformation of data. In other words, all the closed source components are effectively “wires”: the data that goes in on one side should match the data coming out the other side. While this is dissatisfying from the “absolute trust” perspective (one can’t definitively rule out the possibility of back doors in black-box wires), we can inspect its perimeter and confirm that, for a broad range of possibilities, it behaves correctly. It’s not perfect transparency, but it’s far better than the fully-NDA SoCs we currently use to handle our secrets, and more importantly, it allows us to start writing code for open architectures, paving a roadmap to an eventually fully-open silicon-to-software future.
Those with a bit of silicon savvy would note that it’s not cheap to produce such a chip, yet, I have not raised a dollar of venture capital. I’m also not independently wealthy. So how is this possible?
The short answer is I “hitchhiked” on a 22 nm chip designed primarily by Crossbar, Inc. I was able to include a CPU of my choice, along with a few other features, in some unused free space on the chip’s floorplan. By switching off which CPU is active, you can effectively get two chips for the price of one mask set.
Floorplan of the Baochip, illustrating the location and relative sizes of its 5 open-source CPU cores
For those who haven’t peeked under the hood of a System-on-Chip (SoC), the key fact to know is that the cost of modern SoCs is driven largely by peripherals and memory. The CPU itself is often just a small fraction of the area, just a couple percent in the case of the Baochip-1x. Furthermore, all peripherals are “memory mapped”: flashing an LED, for example, entails tickling some specific locations in memory. Who does the tickling doesn’t matter — whether ARM or RISC-V CPU, or even a state machine — the peripherals respond just the same. Thus, one can effectively give the same “body” two different “personalities” by switching out their “brains”; by switching out their CPU cores, you can have the same physical piece of silicon run vastly different code bases.
The long answer starts a couple years ago, with Crossbar wanting to build a high-performance secure enclave that would differentiate itself in several ways, notably by fabricating in a relatively advanced (compared to other security chips) 22 nm process and by using their RRAM technology for non-volatile storage. RRAM is similar to FLASH memory in that it retains data without power but with faster write times and smaller (32-byte) page sizes, and it can scale below 40 nm — a limit below which FLASH has not been able to scale.
In addition to flexing their process superiority, they wanted to differentiate by being pragmatically open source about the design; security chips have traditionally been wrapped behind NDAs, despite calls from users for transparency.
Paradoxically, open source security chips are harder to certify because certification standards such as Common Criteria evaluate closed-source flaws as “more secure” than open-source flaws. My understanding is that the argument goes something along the lines of, “hacking chips is hard, so any barrier you can add to the up-front cost of exploiting the chip increases the effective security of the chip overall”. Basically, if the pen tester doing a security evaluation judges that a bug is easier to find and exploit if the source code is public, then sharing the source code lowers your score. As a result, the certification scores of open source chips are likely much worse than those of closed source chips. And, since you can’t sell security chips to big customers without certifications, security chips end up being mostly closed source.
Kind of a crazy system, right? But if you consider that the people buying oodles and oodles of security chips are institutions like banks and governments, filled with non-technical managers whose primary focus is risk management, plus they are outsourcing the technical evaluation anyways, the status quo makes a little more sense. What’s a banker going to do with the source code of a chip, anyway?
Crossbar wanted to buck the trend and heed the call for open source transparency in security chips and approached me to help advise on strategy. I agreed to help them, but under one condition: that I would be allowed to add a CPU core of my own choice and sell a version of the chip under my own brand. Part of the reason was that Crossbar, for risk reduction reasons, wanted to go with a proprietary ARM CPU. Having designed chips in a prior life, I can appreciate the desire for risk reduction and going with a tape-out proven core.
However, as an open source strategy advisor, I argued that users who viewed open source as a positive feature would likely also expect, at a minimum, that the CPU would be open source. Thus I offered to add the battle-tested CPU core from the Precursor SoC — the Vexriscv — to the tape-out, and I promised I would implement the core in such a way that even if it didn’t work, we could just switch it off and there would be minimal impact on the chip’s power and area budget.
Out of this arrangement was born the Baochip-1x.
At the time of writing, wafers containing the Baochip-1x design have been fabricated, and hundreds of the chips have been handed out through an early sampling program. These engineering samples were all hand-screened by me.
However, that’s about to change. There’s currently a pod of wafers hustling through a fab in Hsinchu, and two of them are earmarked to become fully production-qualified Baochip-1x silicon. These will go through a fully automated screening flow. Assuming this process completes smoothly, I’ll have a few thousand Baochip-1x’s available to sell. More chips are planned for later in the year, but a combination of capital constraints, risk mitigation, and the sheer time it takes to go from blank silicon to fully assembled devices puts further inventory out until late in 2026.
Rather than waiting until the supply chain was fully stabilized, I decided to take a small risk and share the first few thousand chips with developers like you through the Dabao evaluation board pre-order campaign. After working on this project for a couple years in total silence, I’m excited to start building a community around it. Despite the effort I’ve put into the project, there’s loads more to be done: documentation to be written, drivers to bring up, and more programming languages and OSes to support. While I personally champion the Rust-based “Xous” OS, it doesn’t have to be the only, or even the primary, code base for Baochip. I’m hoping to empower a few good open source community leaders with accessible hardware security primitives, so we can all build a brighter, more secure, and more trustable open source future together.
...
Read the original on www.crowdsupply.com »