10 interesting stories served every morning and every evening.




1 1,007 shares, 52 trendiness

1M context is now generally available for Opus 4.6 and Sonnet 4.6

Claude Opus 4.6 and Sonnet 4.6 now include the full 1M context window at standard pricing on the Claude Platform. Standard pricing applies across the full window — $5/$25 per million tokens for Opus 4.6 and $3/$15 for Sonnet 4.6. There's no multiplier: a 900K-token request is billed at the same per-token rate as a 9K one.
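With flat per-token pricing, request cost is just tokens times rate, with no long-context surcharge. A quick sketch using the prices from the announcement (token counts are illustrative):

```python
# Per-million-token prices from the announcement (input / output).
PRICES = {
    "opus-4.6":   {"input": 5.00,  "output": 25.00},
    "sonnet-4.6": {"input": 3.00,  "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Flat per-token pricing: no multiplier above 200K context."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A 900K-token prompt is billed at the same per-token rate as a 9K one:
big = request_cost("opus-4.6", 900_000, 2_000)    # 4.55 dollars
small = request_cost("opus-4.6", 9_000, 2_000)    # 0.095 dollars
```

The only variable is token count; there is no separate long-context tier to model.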

* Full rate lim­its at every con­text length. Your stan­dard ac­count through­put ap­plies across the en­tire win­dow.

* 6x more me­dia per re­quest. Up to 600 im­ages or PDF pages, up from 100. Available to­day on Claude Platform na­tively, Microsoft Azure Foundry, and Google Cloud’s Vertex AI.

* No beta header required. Requests over 200K tokens work automatically. If you're already sending the beta header, it's ignored, so no code changes are required.
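As a sketch of what "no beta header" means in practice, here is a hypothetical long-context request payload. The field layout follows the public Messages API shape, but the model identifier and content are placeholders, not official values:

```python
# Hypothetical request sketch; model id and prompt are placeholders.
payload = {
    "model": "claude-opus-4-6",   # assumed identifier, not from the source
    "max_tokens": 4096,
    "messages": [
        {"role": "user", "content": "...roughly 900K tokens of context..."},
    ],
}
headers = {
    "x-api-key": "YOUR_API_KEY",        # placeholder
    "anthropic-version": "2023-06-01",  # standard API version header
    # No "anthropic-beta" long-context header is needed: requests over
    # 200K tokens are accepted automatically, and a leftover beta header
    # is simply ignored by the API.
}
```

If your client still sends the old beta header, nothing breaks; it is a no-op.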

1M context is now included in Claude Code for Max, Team, and Enterprise users with Opus 4.6. Opus 4.6 sessions can use the full 1M context window automatically, meaning fewer compactions and more of the conversation kept intact. Previously, 1M context required extra usage.

A mil­lion to­kens of con­text only mat­ters if the model can re­call the right de­tails and rea­son across them. Opus 4.6 scores 78.3% on MRCR v2, the high­est among fron­tier mod­els at that con­text length.

That means you can load an en­tire code­base, thou­sands of pages of con­tracts, or the full trace of a long-run­ning agent — tool calls, ob­ser­va­tions, in­ter­me­di­ate rea­son­ing — and use it di­rectly. The en­gi­neer­ing work, lossy sum­ma­riza­tion, and con­text clear­ing that long-con­text work pre­vi­ously re­quired are no longer needed. The full con­ver­sa­tion stays in­tact.

...

Read the original on claude.com »

2 397 shares, 18 trendiness

TomBadash/MouseControl: A lightweight, open-source alternative to Logitech Options+ for remapping buttons on the Logitech MX Master 3S mouse

A light­weight, open-source, fully lo­cal al­ter­na­tive to Logitech Options+ for remap­ping every pro­gram­ma­ble but­ton on the Logitech MX Master 3S mouse.

No teleme­try. No cloud. No Logitech ac­count re­quired.

* ma­cOS sup­port — full ma­cOS com­pat­i­bil­ity added thanks to an­drew-sz, us­ing CGEventTap for mouse hook­ing, Quartz CGEvent for key sim­u­la­tion, and NSWorkspace for app de­tec­tion. See ma­cOS Setup Guide for de­tails.

* Per-application pro­files — au­to­mat­i­cally switch but­ton map­pings when you switch apps (e.g., dif­fer­ent bind­ings for Chrome vs. VS Code)

* DPI / pointer speed con­trol — slider from 200–8000 DPI with quick pre­sets, synced to the de­vice via HID++

* Auto-reconnection — au­to­mat­i­cally de­tects when the mouse is turned off/​on or dis­con­nected/​re­con­nected and re­stores full func­tion­al­ity with­out restart­ing the app

* Live connection status — the UI shows a real-time "Connected" / "Not Connected" badge that updates as the mouse connects or disconnects

* System tray — runs in back­ground, hides to tray on close, tog­gle remap­ping on/​off from tray menu

* Zero ex­ter­nal ser­vices — con­fig is a lo­cal JSON file, all pro­cess­ing hap­pens on your ma­chine
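The README does not show the config schema at this point, but a local JSON profile file might look something like this (all field names here are hypothetical, for illustration only):

```json
{
  "dpi": 1600,
  "profiles": {
    "default": { "gesture_button": "play_pause", "middle_click": "copy" },
    "chrome.exe": { "gesture_button": "ctrl+w" }
  }
}
```

Because it is a plain local file, the config can be versioned, backed up, or edited by hand with no account or cloud sync involved.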

The UI shows an in­ter­ac­tive di­a­gram of the MX Master 3S. Click any but­ton’s hotspot dot to change its ac­tion.

Note: The ar­chi­tec­ture is de­signed to be ex­ten­si­ble to other Logitech HID++ mice, but only the MX Master 3S is tested.

Extract the zip to any folder (Desktop, Documents, wher­ever you like)

That’s it — the app will open and start remap­ping your mouse but­tons im­me­di­ately.

* Closing the win­dow does­n’t quit the app — it keeps run­ning in the tray

* Windows SmartScreen may show a warn­ing the first time → click More info → Run any­way

* Logitech Options+ must not be running (it conflicts with HID++ access)

# 1. Clone the repos­i­tory

git clone https://github.com/TomBadash/MouseControl.git

cd MouseControl

# 2. Create a vir­tual en­vi­ron­ment

python -m venv .venv

# 3. Activate it

.venv\Scripts\activate # Windows (PowerShell / CMD)

source .venv/bin/activate # ma­cOS / Linux

# 4. Install de­pen­den­cies

pip install -r requirements.txt

# Option A: Run di­rectly

python main_qml.py

# Option B: Use the batch file (shows a con­sole win­dow)

Mouser.bat

# Option C: Use the desk­top short­cut (no con­sole win­dow)

# Double-click Mouser.lnk

Tip: To run with­out a con­sole win­dow, use pythonw.exe main_qml.py or the .lnk short­cut.

A Mouser.lnk short­cut is in­cluded. To cre­ate one man­u­ally:

$s = (New-Object -ComObject WScript.Shell).CreateShortcut("$([Environment]::GetFolderPath('Desktop'))\Mouser.lnk")
$s.TargetPath = "C:\path\to\mouser\.venv\Scripts\pythonw.exe"
$s.Arguments = "main_qml.py"
$s.WorkingDirectory = "C:\path\to\mouser"
$s.IconLocation = "C:\path\to\mouser\images\logo.ico, 0"
$s.Save()

To pro­duce a stand­alone Mouser.exe that any­one can down­load and run with­out Python:

# 1. Install PyInstaller (inside your venv)

pip install pyinstaller

# 2. Build us­ing the in­cluded spec file

pyinstaller Mouser.spec --noconfirm

# — or sim­ply run the build script —

build.bat

The out­put is in dist\Mouser\. Zip that en­tire folder and dis­trib­ute it.

A low-level Windows mouse hook (SetWindowsHookExW with WH_MOUSE_LL) runs on a ded­i­cated back­ground thread with its own Win32 mes­sage pump. It in­ter­cepts:

Intercepted events are ei­ther blocked (hook re­turns 1) and re­placed with an ac­tion, or passed through to the ap­pli­ca­tion.
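In outline, the hook callback's decision is a lookup: if the intercepted button has a mapped action, consume the event (the low-level hook returns 1); otherwise pass it through. A simplified, platform-free sketch of that dispatch (names are illustrative, not the app's actual code):

```python
# Simplified dispatch sketch. In the real hook, "block" corresponds to the
# WH_MOUSE_LL callback returning 1 instead of calling CallNextHookEx.
def handle_event(button: str, mappings: dict, run_action) -> bool:
    """Return True if the event was blocked and replaced with an action."""
    action = mappings.get(button)
    if action is None:
        return False       # pass through to the application
    run_action(action)     # replace the click with the mapped action
    return True            # block the original event

# Usage with a hypothetical profile:
fired = []
mappings = {"forward": "next_tab"}
assert handle_event("forward", mappings, fired.append) is True   # remapped
assert handle_event("middle", mappings, fired.append) is False   # passed through
assert fired == ["next_tab"]
```

Keeping the decision logic separate from the Win32 hook plumbing is what lets the per-app profile switch later on simply swap the `mappings` dict.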

HID++ 2.0 (primary, Bluetooth) — Opens the Logitech HID col­lec­tion, dis­cov­ers REPROG_CONTROLS_V4 (feature 0x1B04), and di­verts CID 0x00C3 (gesture but­ton). Best re­li­a­bil­ity.

Raw Input (fallback) — Registers for raw mouse in­put and de­tects ex­tra but­ton bits be­yond the stan­dard 5.

Middle-click fall­back — When ges­ture but­ton has an ac­tion but mid­dle-click is unas­signed, mid­dle-click events route to the ges­ture ac­tion.

Polls the fore­ground win­dow every 300ms us­ing GetForegroundWindow → GetWindowThreadProcessId → process name. Handles UWP apps by re­solv­ing ApplicationFrameHost.exe to the ac­tual child process.
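The polling logic itself is simple: every 300ms, read the foreground process name and fire a callback only when it changes. A minimal sketch with the Win32 chain abstracted behind an injectable function (class and function names are illustrative, so the loop can be exercised off-Windows):

```python
import time

class AppWatcher:
    """Polls a foreground-app getter and reports changes.

    `get_app` stands in for the GetForegroundWindow ->
    GetWindowThreadProcessId -> process-name chain described above.
    """
    def __init__(self, get_app, on_change, interval=0.3):
        self.get_app = get_app
        self.on_change = on_change
        self.interval = interval
        self._last = None

    def poll_once(self):
        app = self.get_app()
        if app != self._last:       # fire only on transitions
            self._last = app
            self.on_change(app)

    def run(self):
        while True:
            self.poll_once()
            time.sleep(self.interval)

# Usage with a fake getter: repeated values do not re-fire the callback.
seen = []
apps = iter(["chrome.exe", "chrome.exe", "code.exe"])
w = AppWatcher(lambda: next(apps), seen.append)
for _ in range(3):
    w.poll_once()
# seen is now ["chrome.exe", "code.exe"]
```

Debouncing on transitions rather than every poll is what keeps a 300ms loop cheap: the expensive work (profile switching) only happens when the app actually changes.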

The cen­tral or­ches­tra­tor. On app change, it per­forms a light­weight pro­file switch — clears and re-wires hook call­backs with­out tear­ing down the hook thread or HID++ con­nec­tion. This avoids the la­tency and in­sta­bil­ity of a full hook restart.

* HID++ layer — HidGestureListener de­tects de­vice dis­con­nec­tion (read er­rors) and en­ters a re­con­nect loop, retry­ing every 2–5 sec­onds un­til the de­vice is back

* Hook layer — MouseHook lis­tens for WM_DEVICECHANGE no­ti­fi­ca­tions and re­in­stalls the low-level mouse hook when de­vices are added or re­moved

* UI layer — con­nec­tion state flows from HID++ → MouseHook → Engine → Backend (cross-thread safe via Qt sig­nals) → QML, up­dat­ing the sta­tus badge in real time
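The HID++ reconnect behavior in the first bullet can be sketched as a bounded-backoff retry loop. The 2-5 second window comes from the README; everything else (names, structure, backoff factor) is illustrative:

```python
import time

def reconnect(open_device, is_transient_error, sleep=time.sleep,
              min_delay=2.0, max_delay=5.0, max_attempts=None):
    """Retry opening the device, backing off between 2 and 5 seconds."""
    delay = min_delay
    attempt = 0
    while True:
        attempt += 1
        try:
            return open_device()
        except Exception as e:
            if not is_transient_error(e):
                raise                       # real failures propagate
            if max_attempts is not None and attempt >= max_attempts:
                raise
            sleep(delay)
            delay = min(delay * 1.5, max_delay)  # gentle backoff, capped at 5s

# Usage with fakes: the "device" comes back on the third attempt.
calls = {"n": 0}
def fake_open():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("read error")
    return "device"

waits = []
result = reconnect(fake_open, lambda e: isinstance(e, IOError),
                   sleep=waits.append)
# result == "device", after sleeping 2.0s then 3.0s
```

Injecting `sleep` keeps the loop testable and lets the real app cancel retries cleanly on shutdown.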

All set­tings are stored in %APPDATA%\Mouser\config.json (Windows) or ~/Library/Application Support/Mouser/config.json (macOS). The con­fig sup­ports:

The app has two pages ac­ces­si­ble from a slim side­bar:

* Left panel: List of profiles. The "Default (All Apps)" profile is always present. Per-app profiles show the app icon and name. Select a profile to edit its mappings.

* Right panel: Interactive mouse di­a­gram with click­able hotspot dots on each but­ton. Click a dot to ex­pand an ac­tion picker with cat­e­go­rized chips. Changes save in­stantly to the se­lected pro­file.

* Add profile: ComboBox at the bottom lists known apps (Chrome, Edge, VS Code, VLC, etc.). Click "+" to create a per-app profile.

* DPI slider: 200–8000 with quick pre­sets (400, 800, 1000, 1600, 2400, 4000, 6000, 8000). Reads the cur­rent DPI from the de­vice on startup.

* Windows & ma­cOS only — Linux is not yet sup­ported

* MX Master 3S only — HID++ fea­ture in­dices and CIDs are hard­coded for this de­vice (PID 0xB034)

* Bluetooth rec­om­mended — HID++ ges­ture but­ton di­vert works best over Bluetooth; USB re­ceiver has par­tial sup­port

* Conflicts with Logitech Options+ — both apps fight over HID++ ac­cess; quit Options+ be­fore run­ning Mouser

* Scroll in­ver­sion is ex­per­i­men­tal — uses co­a­lesced PostMessage in­jec­tion to avoid LL hook dead­locks; may not work per­fectly in all apps

* Admin not re­quired — but some games or el­e­vated win­dows may not re­ceive in­jected key­strokes

* More de­vices — sup­port other Logitech HID++ mice (MX Master 3, MX Anywhere 3, etc.)

* Start with Windows — au­tostart via reg­istry or Task Scheduler

* Per-app pro­file auto-cre­ation — de­tect new apps and prompt to cre­ate a pro­file

* ma­cOS sup­port — added via CGEventTap, Quartz CGEvent, and NSWorkspace (thanks @andrew-sz)

Contributions are wel­come! To get started:

Set up the dev en­vi­ron­ment (see Installation)

Make your changes and test with an MX Master 3S

If Mouser saves you from in­stalling Logitech Options+, con­sider sup­port­ing de­vel­op­ment:

Every bit helps keep the pro­ject go­ing — thank you!

This pro­ject is li­censed un­der the MIT License.

Mouser is not affiliated with or endorsed by Logitech. "Logitech", "MX Master", and "Options+" are trademarks of Logitech International S.A.

...

Read the original on github.com »

3 339 shares, 13 trendiness

Your Phone is an Entire Computer

Your iPhone (or any other smart­phone) is a com­puter ca­pa­ble of run­ning a com­plete desk­top op­er­at­ing sys­tem, and has been so for quite some time.

A week ago, Apple asked us to say hello to MacBook Neo. It’s a very rea­son­ably priced en­trant to the Mac lap­top line, just $599. It’s per­fect for stu­dents, priced at just $499 with an ed­u­ca­tion dis­count.

I have no ar­gu­ments against this de­vice’s ex­is­tence. But I could­n’t help but also no­tice it comes equipped with an A18 Pro chip, the very same chip that pow­ers the iPhone 16 Pro I carry in my pocket. I’m both­ered, as I have been since the orig­i­nal iPad in­tro­duc­tion 16 years ago, by the un­nec­es­sary re­stric­tions placed by cor­po­rate pow­ers to run third-party soft­ware and op­er­at­ing sys­tems on de­vices we own.

"Love at first Mac." Welcome to the family! Don't ever think of putting MacOS on your iPad though!

On the MacBook Neo, I can os­ten­si­bly go to a browser, any browser, and click links to down­load what­ever soft­ware I’d like.

On my iPhone, there is no such abil­ity. As a US cit­i­zen, I must go through the Apple-approved App Store to down­load / in­stall third-party soft­ware. Smells like free­dom.

On the MacBook Neo, I can run code and build soft­ware with no re­stric­tions.

On my iPhone, this abil­ity is heav­ily sand­boxed and there is no full ac­cess to the filesys­tem through a user ac­ces­si­ble shell, as much as I may want one.

On the MacBook Neo, I can even opt to not use MacOS at all and in­stead in­stall Asahi Linux if I so choose (assuming Apple con­tin­ues to al­low cus­tom ker­nel boot­ing as it has in M-series Macs).

On my iPhone, the only operating system I can use is iOS — the boot loader for iPhones and iPads is locked down to restrict "jailbreaking", otherwise known as modifications that allow a device owner to install software outside of Apple-vetted channels.

Both this new MacBook offering and my iPhone use the same system on a chip, down to the number of CPU cores, GPU cores and gigabytes of RAM. Apple would like to have everyone believe these devices are fundamentally different, that the iPhone must be locked in the name of "user safety", because clicking on links is dangerous.

That is an insult to the average person's intelligence. Despite the form factor differences, these devices are basically the exact same thing. The iPhone / iPad have been restricted in their ability to download software straight from the internet, not due to "safety", but because Apple has a significant profit motive in doing so.

The MacBook is ca­pa­ble of run­ning iOS, iPad OS, MacOS, or any other OS for that mat­ter. The lat­est iPhones are equally ca­pa­ble of run­ning MacOS, demon­strated be­yond a rea­son­able doubt by this lat­est prod­uct an­nounce­ment.

An iPhone is a MacBook, is an iPad, is an iPhone. But not ac­tu­ally, be­cause Apple has ar­ti­fi­cially deemed so through strict hard­ware-level con­trols.

In "right to root access", I laid out the case for why the right to choose the software loaded on devices we own must be advocated for in the broader "right to repair" discussion.

Nearly two decades af­ter the iPhone was first an­nounced, the chips Apple de­vel­oped have grown into com­put­ing beasts, so much so that even ones meant for iPhones can run MacOS.

The mech­a­nisms em­ployed over these same two decades to re­strict soft­ware load­ing on mo­bile de­vices are un­ac­cept­able. These re­stric­tions are boons for cor­po­rate and sov­er­eign pow­ers that would like to con­trol what you can do with your com­put­ing de­vices.

I want to work to­wards the prece­dent that every de­vice you own should have the ca­pac­ity to load the soft­ware of your choice. Mobile de­vices are not spe­cial, they are gen­eral com­put­ers in a hand­held form fac­tor, ar­ti­fi­cially re­stricted from load­ing soft­ware to keep cor­po­rate prof­its and gov­ern­ment con­trols in­tact. The nar­ra­tive that mo­bile de­vices are some­how dif­fer­ent enough to war­rant re­stric­tions on soft­ware load­ing mainly ben­e­fits ac­tors that don’t have the av­er­age per­son’s best in­ter­ests at heart.

Now that I know my iPhone has the capacity to run MacOS, I would very much like to do so. I'd love to consider the possibility of switching to a less intrusive phone and repurposing this iPhone into a web server. Perhaps that seems silly, but it's really not. I've already paid for the device, and it's clearly a full-blown computer; why should I not be able to modify it as I see fit?

A right to root ac­cess would make all of this pos­si­ble. And I re­ally want to run MacOS on my iPhone now, damn it.

...

Read the original on medhir.com »

4 326 shares, 14 trendiness

Hammerspoon/hammerspoon: Staggeringly powerful macOS desktop automation with Lua

This is a tool for pow­er­ful au­toma­tion of OS X. At its core, Hammerspoon is just a bridge be­tween the op­er­at­ing sys­tem and a Lua script­ing en­gine.

What gives Hammerspoon its power is a set of ex­ten­sions that ex­pose spe­cific pieces of sys­tem func­tion­al­ity, to the user. With these, you can write Lua scripts to con­trol many as­pects of your OS X en­vi­ron­ment.

* Drag Hammerspoon.app from your Downloads folder to Applications

Out of the box, Hammerspoon does noth­ing - you will need to cre­ate ~/.hammerspoon/init.lua and fill it with use­ful code. There are sev­eral re­sources which can help you:

Hammerspoon is a fork of Mjolnir. Mjolnir aims to be a very min­i­mal ap­pli­ca­tion, with its ex­ten­sions hosted ex­ter­nally and man­aged us­ing a Lua pack­age man­ager. We wanted to pro­vide a more in­te­grated ex­pe­ri­ence.

Our in­ten­tions for Hammerspoon broadly fall into these cat­e­gories:

* Ever wider cov­er­age of sys­tem APIs in Extensions

...

Read the original on github.com »

5 323 shares, 13 trendiness

Digg

A Hard Reset, and What Comes Next

Building on the internet in 2026 is different. We learned that the hard way. Today we're sharing difficult news: we've made the decision to significantly downsize the Digg team. This wasn't a decision made lightly, and it's important to say clearly: this is one of the strongest groups of people we've ever had the privilege of working with. This is not a reflection of their talent, their effort, or their belief in what we were building. It's a reflection of the brutal reality of finding product-market fit in an environment that has fundamentally changed.

When the Digg beta launched, we im­me­di­ately no­ticed posts from SEO spam­mers not­ing that Digg still car­ried mean­ing­ful Google link au­thor­ity. Within hours, we got a taste of what we’d only heard ru­mors about. The in­ter­net is now pop­u­lated, in mean­ing­ful part, by so­phis­ti­cated AI agents and au­to­mated ac­counts. We knew bots were part of the land­scape, but we did­n’t ap­pre­ci­ate the scale, so­phis­ti­ca­tion, or speed at which they’d find us. We banned tens of thou­sands of ac­counts. We de­ployed in­ter­nal tool­ing and in­dus­try-stan­dard ex­ter­nal ven­dors. None of it was enough. When you can’t trust that the votes, the com­ments, and the en­gage­ment you’re see­ing are real, you’ve lost the foun­da­tion a com­mu­nity plat­form is built on.

This is­n’t just a Digg prob­lem. It’s an in­ter­net prob­lem. But it hit us harder be­cause trust is the prod­uct.

We un­der­es­ti­mated the grav­i­ta­tional pull of ex­ist­ing plat­forms. Network ef­fects aren’t just a moat, they’re a wall. The loy­alty users have to the com­mu­ni­ties they’ve al­ready built else­where is pro­found. Getting peo­ple to move is a hard enough prob­lem. Getting them to move and bring their peo­ple with them is some­thing else en­tirely.

We’re not giv­ing up. Digg is­n’t go­ing away.

A small but de­ter­mined team is step­ping up to re­build with a com­pletely reimag­ined an­gle of at­tack. Positioning Digg as sim­ply an al­ter­na­tive to in­cum­bents was­n’t imag­i­na­tive enough. That’s a race we were never go­ing to win. What comes next needs to be gen­uinely dif­fer­ent.

We’re also an­nounc­ing some­thing we’re ex­cited about: Kevin Rose, Digg’s founder who started the com­pany back in 2004, is re­turn­ing to join the team full-time. Starting the first week of April, Kevin will be putting his fo­cus back on the com­pany he built twenty+ years ago. He’ll con­tinue as an ad­vi­sor to True Ventures, but Digg will be his pri­mary fo­cus. We could­n’t think of a bet­ter per­son to help fig­ure out what Digg needs to be­come.

Also, Diggnation, our official Digg podcast, will continue recording monthly while we work on the re-reboot.

Lastly, and most importantly, thank you…

To the team members we're saying goodbye to today: thank you. You took a bet on a hard problem and showed up every day. The work you did laid the groundwork for what comes next, even if it doesn't feel that way right now.

To the com­mu­nity who came back to Digg, sub­mit­ted links, ar­gued in the com­ments, and emailed us with what you wanted: we haven’t for­got­ten why we’re do­ing this. We know how frus­trat­ing this is, and we hope you’ll give us an­other look once we have some­thing to show, we’ll save your user­names!

Ultimately, the in­ter­net needs a place where we can trust the con­tent and the peo­ple be­hind it. We’re go­ing to fig­ure out how to build it.

More soon

–@justin, CEO

...

Read the original on digg.com »

6 311 shares, 15 trendiness

Wired headphone sales are exploding. What's with the Bluetooth backlash?

Wired head­phones were sup­posed to die with the head­phone jack. Instead, they’re mak­ing a come­back as con­sumers seek out bet­ter sound qual­ity and tech­nol­ogy from a sim­pler time.

When Apple ditched the head­phone jacks on iPhones in 2016, I went into ex­ile. I was­n’t about to let a gi­ant com­pany dic­tate my lis­ten­ing habits, so I bought an Android and plugged in for dear life. But even­tu­ally, my phone took its last breath in the same ex­act month that Google — one of the last hold­outs — said it was get­ting rid of head­phone jacks on its phones too. It felt like a cos­mic sign of de­feat. So I went back to iPhone, tossed my wired ear­buds in a drawer and joined the Bluetooth hordes.

Maybe I gave up too eas­ily. Recently, a quiet move­ment has grown in the shad­ows based on a con­tro­ver­sial truth: wired head­phones are bet­ter than Bluetooth. Sales are through the roof in re­cent months. You can of­ten get bet­ter sound for the money with a wired pair, but it’s not just au­dio snobs ei­ther. Wired head­phones are a full-blown cul­tural trend — a resur­gence some tie to a broader anti-tech back­lash. Whether it’s prac­ti­cal, po­lit­i­cal or aes­thetic, one thing is clear. Wired head­phones are back.

"I'm converted," says Aryn Grusin, a wired-headphone-loving social worker from Portland, Oregon in the US. A few months ago, she borrowed her fiancé's old-fashioned wired earbuds and never looked back. "I just think it feels comforting. I like that it signifies to the world that I'm listening to something."

...

Read the original on www.bbc.com »

7 284 shares, 17 trendiness

PEGI to give 16 age rating to UK games with loot boxes

"In reality, it will largely be down to parents to educate themselves on why these changes have been brought in and decide for themselves if they deem the games (and their loot box mechanics) suitable for their child," she said.

...

Read the original on www.bbc.com »

8 184 shares, 13 trendiness

Emacs and Vim in the Age of AI

It’s tough to make pre­dic­tions, es­pe­cially about the fu­ture.

I’ve been an Emacs fa­natic for over 20 years. I’ve built and main­tained some of the most pop­u­lar Emacs pack­ages, con­tributed to Emacs it­self, and spent count­less hours tweak­ing my con­fig­u­ra­tion. Emacs is­n’t just my ed­i­tor — it’s my pas­sion, and my happy place.

Over the past year, I’ve also been spend­ing a lot of time with Vim and Neovim, re­learn­ing them from scratch and hav­ing a blast con­trast­ing how the two com­mu­ni­ties ap­proach sim­i­lar prob­lems. It’s been a fun and re­fresh­ing ex­pe­ri­ence.

And lately, like every­one else in our in­dus­try, I’ve been play­ing with AI tools — Claude Code in par­tic­u­lar — watch­ing the im­pact of AI on the broader pro­gram­ming land­scape, and pon­der­ing what it all means for the fu­ture of pro­gram­ming. Naturally, I keep com­ing back to the same ques­tion: what hap­pens to my beloved Emacs and its arch neme­sis” Vim in this brave new world?

I think the answer is more nuanced than either "they're doomed" or "nothing changes". Predicting the future is obviously hard, but it's so fun to speculate on it.

My rea­son­ing is that every ma­jor in­dus­try shift pre­sents plenty of risks and op­por­tu­ni­ties for those in­volved in it, so I want to spend a bit of time ru­mi­nat­ing over the risks and op­por­tu­ni­ties for Emacs and Vim.

The only thing that is con­stant is change.

Every ma­jor tech­no­log­i­cal dis­rup­tion pre­sents both chal­lenges and op­por­tu­ni­ties. Things are never sim­ply black and white — just some shade of gray. The AI rev­o­lu­tion is no dif­fer­ent, and I think it’s worth ex­am­in­ing both sides hon­estly be­fore draw­ing any con­clu­sions.

VS Code is al­ready the dom­i­nant ed­i­tor by a wide mar­gin, and it’s go­ing to get first-class in­te­gra­tions with every ma­jor AI tool — Copilot (obviously), Codex, Claude, Gemini, you name it. Microsoft has every in­cen­tive to make VS Code the best pos­si­ble host for AI-assisted de­vel­op­ment, and the re­sources to do it.

On top of that, pur­pose-built AI ed­i­tors like Cursor, Windsurf, and oth­ers are at­tract­ing se­ri­ous in­vest­ment and tal­ent. These aren’t adding AI to an ex­ist­ing ed­i­tor as an af­ter­thought — they’re build­ing the en­tire ex­pe­ri­ence around AI work­flows. They of­fer in­te­grated con­text man­age­ment, in­line diffs, multi-file edit­ing, and agent loops that feel na­tive rather than bolted on.

Every de­vel­oper who switches to one of these tools is a de­vel­oper who is­n’t learn­ing Emacs or Vim key­bind­ings, is­n’t writ­ing Elisp, and is­n’t con­tribut­ing to our ecosys­tems. The grav­ity well is real.

I never tried Cursor and Windsurf sim­ply be­cause they are es­sen­tially forks of VS Code and I can’t stand VS Code. I tried it sev­eral times over the years and I never felt pro­duc­tive in it for a va­ri­ety of rea­sons.

Part of the case for Emacs and Vim has al­ways been that they make you faster at writ­ing and edit­ing code. The key­bind­ings, the macros, the ex­ten­si­bil­ity — all of it is in ser­vice of mak­ing the hu­man more ef­fi­cient at the me­chan­i­cal act of cod­ing.

But if AI is writing most of your code, how much does mechanical editing speed matter? When you're reviewing and steering AI-generated diffs rather than typing code character by character, the bottleneck shifts from "how fast can I edit" to "how well can I specify intent and evaluate output." That's a fundamentally different skill, and it's not clear that Emacs or Vim have an inherent advantage there.

The learning curve argument gets harder to justify too. "Spend six months learning Emacs and you'll be 10x faster" is a tough sell when a junior developer with Cursor can scaffold an entire application in an afternoon.

VS Code has Microsoft. Cursor has ven­ture cap­i­tal. Emacs has… a small group of vol­un­teers and the FSF. Vim had Bram, and now has a com­mu­nity of main­tain­ers. Neovim has a small but ded­i­cated core team.

This has al­ways been the case, of course, but AI am­pli­fies the gap. Building deep AI in­te­gra­tions re­quires keep­ing up with fast-mov­ing APIs, mod­els, and par­a­digms. Well-funded teams can ded­i­cate en­gi­neers to this full-time. Volunteer-driven pro­jects move at the pace of peo­ple’s spare time and en­thu­si­asm.

Let’s go all the way: what if pro­gram­ming as we know it is fully au­to­mated within the next decade? If AI agents can take a spec­i­fi­ca­tion and pro­duce work­ing, tested, de­ployed soft­ware with­out hu­man in­ter­ven­tion, we won’t need cod­ing ed­i­tors at all. Not Emacs, not Vim, not VS Code, not Cursor. The en­tire cat­e­gory be­comes ir­rel­e­vant.

I don’t think this is likely in the near term, but it’s worth ac­knowl­edg­ing as a pos­si­bil­ity. The tra­jec­tory of AI ca­pa­bil­i­ties has sur­prised even the op­ti­mists (and I was ini­tially an AI skep­tic, but the rapid ad­vance­ments last year even­tu­ally changed my mind).

That paints a grim picture, but here's the thing — Emacs and Vim have been written off more times than I can count. Eclipse was going to kill them. IntelliJ was going to kill them. VS Code was going to kill them. Sublime Text, Atom, TextMate — all were supposedly the final nail in the coffin. Most of those "killers" are themselves dead or declining, while Emacs and Vim keep chugging along. There's a resilience to these editors that's easy to underestimate.

So let’s look at the other side of the coin.

Here’s the thing al­most no­body is talk­ing about: Emacs and Vim have al­ways suf­fered from the ob­scu­rity of their ex­ten­sion lan­guages. Emacs Lisp is a 1980s Lisp di­alect that most pro­gram­mers have never seen be­fore. VimScript is… VimScript. Even Lua, which Neovim adopted specif­i­cally be­cause it’s more ap­proach­able, is niche enough that most de­vel­op­ers haven’t writ­ten a line of it.

This has been the sin­gle biggest bot­tle­neck for both ecosys­tems. Not the ed­i­tors them­selves — they’re in­cred­i­bly pow­er­ful — but the fact that cus­tomiz­ing them re­quires learn­ing an un­fa­mil­iar lan­guage, and most peo­ple never make it past copy­ing snip­pets from blog posts and READMEs.

I felt in­cred­i­bly over­whelmed by Elisp and VimScript when I was learn­ing Emacs and Vim for the first time, and I imag­ine I was­n’t the only one. I started to feel very pro­duc­tive in Emacs only af­ter putting in quite a lot of time to ac­tu­ally learn Elisp prop­erly. (Never both­ered to do the same for VimScript, though, and ad­mit­tedly I’m not too ea­ger to mas­ter Lua ei­ther.)

AI changes this overnight. You can now describe what you want in plain English and get working Elisp, VimScript, or Lua. "Write me an Emacs function that reformats the current paragraph to 72 columns and adds a prefix" — done. "Configure lazy.nvim to set up LSP with these keybindings" — done. The extension language barrier, which has been the biggest obstacle to adoption for decades, is suddenly much lower.

After 20+ years in the Emacs com­mu­nity, I of­ten have the feel­ing that a rel­a­tively small group — maybe 50 to 100 peo­ple — is dri­ving most of the mean­ing­ful progress. The same names show up in MELPA, on the mail­ing lists, and in bug re­ports. This is­n’t a crit­i­cism of those peo­ple (I’m proud to be among them), but it’s a struc­tural weak­ness. A com­mu­nity that de­pends on so few con­trib­u­tors is frag­ile.

And it’s not just Elisp and VimScript. The C in­ter­nals of both Emacs and Vim (and Neovim’s C core) are main­tained by an even smaller group. Finding peo­ple who are both will­ing and able to hack on decades-old C code­bases is gen­uinely hard, and it’s only get­ting harder as fewer de­vel­op­ers learn C at all.

AI tools can help here in two ways. First, they lower the bar­rier for new con­trib­u­tors — some­one who un­der­stands the con­cept of what they want to build can now get AI as­sis­tance with the im­ple­men­ta­tion in an un­fa­mil­iar lan­guage. Second, they help ex­ist­ing main­tain­ers move faster. I’ve per­son­ally found that AI is ex­cel­lent at gen­er­at­ing test scaf­fold­ing, writ­ing doc­u­men­ta­tion, and han­dling the te­dious parts of pack­age main­te­nance that slow every­thing down.

The Emacs and Neovim com­mu­ni­ties aren’t sit­ting idle. There are al­ready im­pres­sive AI in­te­gra­tions:

* ellama — an Emacs interface for interacting with LLMs via llama.cpp and Ollama

* copilot.el — GitHub Copilot integration (I happen to be the current maintainer of the project)

* agent-shell — a na­tive Emacs buffer for in­ter­act­ing with LLM agents (Claude Code, Gemini CLI, etc.) via the Agent Client Protocol

* gp.nvim — ChatGPT-like ses­sions in Neovim with sup­port for mul­ti­ple providers

And this is just a sam­ple. Building these in­te­gra­tions is­n’t as hard as it might seem — the APIs are straight­for­ward, and the ex­ten­si­bil­ity of both ed­i­tors means you can wire up AI tools in ways that feel na­tive. With AI as­sis­tance, cre­at­ing new in­te­gra­tions be­comes even eas­ier. I would­n’t be sur­prised if the pace of plu­gin de­vel­op­ment ac­cel­er­ates sig­nif­i­cantly.

Here’s an irony that de­serves more at­ten­tion: many of the most pow­er­ful AI cod­ing tools are ter­mi­nal-na­tive. Claude Code, Aider, and var­i­ous Copilot CLI tools all run in the ter­mi­nal. And what lives in the ter­mi­nal? Emacs and Vim.

Running Claude Code in an Emacs vterm buffer or a Neovim ter­mi­nal split is a per­fectly nat­ural work­flow. You get the AI agent in one pane and your ed­i­tor in an­other, with all your key­bind­ings and tools in­tact. There’s no con­text switch­ing to a dif­fer­ent ap­pli­ca­tion — it’s all in the same en­vi­ron­ment.

This is ac­tu­ally an ad­van­tage over GUI-based AI ed­i­tors, where the AI in­te­gra­tion is tightly cou­pled to the ed­i­tor’s own in­ter­face. With ter­mi­nal-na­tive tools, you get to choose your own ed­i­tor and your own AI tool, and they com­pose nat­u­rally.

There’s an­other an­gle worth con­sid­er­ing: if pro­gram­ming is in­creas­ingly about writ­ing prompts rather than code, you still ben­e­fit from a great text ed­i­tor for that. Prompts are text, and craft­ing them well mat­ters. I find it ironic that Claude Code — a tool I oth­er­wise love — does­n’t use read­line, so my Emacs key­bind­ings don’t work prop­erly in it, and its vim em­u­la­tion is fairly poor. I still think us­ing React for CLI apps is a mis­take, and I sus­pect many peo­ple would en­joy run­ning Claude Code in­side their Emacs or Vim in­stead. That’s ex­actly what the Agent Client Protocol (ACP) en­ables — it lets ed­i­tors like Emacs (via agent-shell) act as first-class clients for AI agents, giv­ing you proper edit­ing, key­bind­ings, and all the power of your ed­i­tor while in­ter­act­ing with tools like Claude Code. The best prompt ed­i­tor might just be the one you’ve been us­ing for decades.

Emacs’s “editor as operating system” philosophy is uniquely well-suited to AI integration. It’s not just a code editor — it’s a mail client (Gnus, mu4e), a note-taking system (Org mode), a Git interface (Magit), a terminal emulator, a file manager, an RSS reader, and much more.

AI can be in­te­grated at every one of these lay­ers. Imagine an AI as­sis­tant that can read your org-mode agenda, draft email replies in mu4e, help you write com­mit mes­sages in Magit, and refac­tor code in your source buffers — all within the same en­vi­ron­ment, shar­ing con­text. No other ed­i­tor ar­chi­tec­ture makes this kind of deep, cross-do­main in­te­gra­tion as nat­ural as Emacs does.

Admittedly, I stopped using Emacs as my OS a long time ago, and these days I use it mostly for programming and blogging. (I’m writing this article in Emacs with the help of markdown-mode.) Still, I’m only one Emacs user, and many others are probably using it in a more holistic manner.

One of the most underappreciated benefits of AI for Emacs and Vim users is mundane: troubleshooting. Both editors have notoriously steep learning curves and opaque error messages. “Wrong type argument: stringp, nil” has driven more people away from Emacs than any competitor ever did.

AI tools are re­mark­ably good at ex­plain­ing cryp­tic er­ror mes­sages, di­ag­nos­ing con­fig­u­ra­tion is­sues, and sug­gest­ing fixes. They can read your init file and spot the prob­lem. They can ex­plain what a piece of Elisp does. They can help you un­der­stand why your key­bind­ing is­n’t work­ing. This dra­mat­i­cally flat­tens the learn­ing curve — not by mak­ing the ed­i­tor sim­pler, but by giv­ing every user ac­cess to a pa­tient, knowl­edge­able guide.

I don’t re­ally need any AI as­sis­tance to trou­bleshoot any­thing in my Emacs setup, but it’s been handy oc­ca­sion­ally in Neovim-land, where my knowl­edge is rel­a­tively mod­est by com­par­i­son.

There’s at least one documented case of someone returning to Emacs after years away, specifically because Claude Code made it painless to fix configuration issues. They’d left for IntelliJ because the configuration burden got too annoying — and came back once AI removed that barrier. “Happy f*cking days I’m home again,” as they put it. If AI can bring back lapsed Emacs users, that’s a good thing in my book.

Let’s re­visit the dooms­day sce­nario. Say pro­gram­ming is fully au­to­mated and no­body writes code any­more. Does Emacs die?

Not nec­es­sar­ily. Emacs is al­ready used for far more than pro­gram­ming. People use Org mode to man­age their en­tire lives — tasks, notes, cal­en­dars, jour­nals, time track­ing, even aca­d­e­mic pa­pers. Emacs is a ca­pa­ble writ­ing en­vi­ron­ment for prose, with ex­cel­lent sup­port for LaTeX, Markdown, AsciiDoc, and plain text. You can read email, browse the web, man­age files, and yes, play Tetris.

Vim, sim­i­larly, is a text edit­ing par­a­digm as much as a pro­gram. Vim key­bind­ings have col­o­nized every text in­put in the com­put­ing world — VS Code, IntelliJ, browsers, shells, even Emacs (via Evil mode). Even if the Vim pro­gram fades, the Vim idea is im­mor­tal.

And who knows — maybe there’ll be a market for artisanal, hand-crafted software one day, the way there’s a market for vinyl records and mechanical watches. “Organic, small-batch code, lovingly typed by a human in Emacs — one character at a time.” I’d buy that t-shirt. And I’m fairly certain those artisan programmers won’t be using VS Code.

So even in the most ex­treme sce­nario, both ed­i­tors have a life be­yond code. A di­min­ished one, per­haps, but a life nonethe­less.

I think what’s actually happening is more interesting than “editors die” or “editors are fine.” The role of the editor is shifting.

For decades, the ed­i­tor was where you wrote code. Increasingly, it’s be­com­ing where you re­view, steer, and re­fine code that AI writes. The skills that mat­ter are shift­ing from typ­ing speed and edit­ing gym­nas­tics to spec­i­fi­ca­tion clar­ity, code read­ing, and ar­chi­tec­tural judg­ment.

In this world, the ed­i­tor that wins is­n’t the one with the best code com­ple­tion — it’s the one that gives you the most con­trol over your work­flow. And that has al­ways been Emacs and Vim’s core value propo­si­tion.

The ques­tion is whether the com­mu­ni­ties can adapt fast enough. The tools are there. The ar­chi­tec­ture is there. The phi­los­o­phy is right. What’s needed is peo­ple — more con­trib­u­tors, more plu­gin au­thors, more doc­u­men­ta­tion writ­ers, more voices in the con­ver­sa­tion. AI can help bridge the gap, but it can’t re­place gen­uine com­mu­nity en­gage­ment.

Not every­one in the Emacs and Vim com­mu­ni­ties is en­thu­si­as­tic about AI, and the ob­jec­tions go be­yond mere techno­pho­bia. There are le­git­i­mate eth­i­cal con­cerns that are go­ing to be de­bated for a long time:

* Energy con­sump­tion. Training and run­ning large lan­guage mod­els re­quires enor­mous amounts of com­pute and elec­tric­ity. For com­mu­ni­ties that have long val­ued ef­fi­ciency and min­i­mal­ism — Emacs users who pride them­selves on run­ning a 40-year-old ed­i­tor, Vim users who boast about their sub-sec­ond startup times — the en­vi­ron­men­tal cost of AI is hard to ig­nore.

* Copyright and train­ing data. LLMs are trained on vast cor­pora of code and text, and the le­gal­ity and ethics of that train­ing re­main con­tested. Some de­vel­op­ers are un­com­fort­able us­ing tools that may have learned from copy­righted code with­out ex­plicit con­sent. This con­cern hits close to home for open-source com­mu­ni­ties that care deeply about li­cens­ing.

* Job dis­place­ment. If AI makes de­vel­op­ers sig­nif­i­cantly more pro­duc­tive, fewer de­vel­op­ers might be needed. This is an un­com­fort­able thought for any pro­gram­ming com­mu­nity, and it’s es­pe­cially pointed for ed­i­tors whose iden­tity is built around em­pow­er­ing hu­man pro­gram­mers.

These concerns are already producing concrete action. The Vim community recently saw the creation of EVi, a fork of Vim whose entire raison d’être is to provide a text editor free from AI-assisted (generated?) code contributions. Whether you agree with the premise or not, the fact that people are forking established editors over this tells you how strongly some community members feel.

I don’t think these con­cerns should stop any­one from ex­plor­ing AI tools, but they’re real and worth tak­ing se­ri­ously. I ex­pect to see plenty of spir­ited de­bate about this on emacs-de­vel and the Neovim is­sue tracker in the years ahead.

The fu­ture ain’t what it used to be.

I won’t pre­tend I’m not wor­ried. The AI wave is mov­ing fast, the in­cum­bents have mas­sive ad­van­tages in fund­ing and mind­share, and the very na­ture of pro­gram­ming is shift­ing un­der our feet. It’s en­tirely pos­si­ble that Emacs and Vim will grad­u­ally fade into niche ob­scu­rity, used only by a hand­ful of diehards who refuse to move on.

But I’ve been hear­ing that Emacs is dy­ing for 20 years, and it’s still here. The com­mu­nity is small but pas­sion­ate, the ed­i­tor is more ca­pa­ble than ever, and the ar­chi­tec­ture is gen­uinely well-suited to the AI era. Vim’s sit­u­a­tion is sim­i­lar — the core idea is so pow­er­ful that it keeps find­ing new ex­pres­sion (Neovim be­ing the lat­est and most vig­or­ous in­car­na­tion).

The ed­i­tors that sur­vive won’t be the ones with the flashiest AI fea­tures. They’ll be the ones whose users care enough to keep build­ing, adapt­ing, and shar­ing. That’s al­ways been the real en­gine of open-source soft­ware, and no amount of AI changes that.

So if you’re an Emacs or Vim user: don’t panic, but don’t be com­pla­cent ei­ther. Learn the new AI tools (if you’re not fun­da­men­tally op­posed to them, that is). Pimp your setup and make it awe­some. Write about your work­flows. Help new­com­ers. The best way to en­sure your ed­i­tor sur­vives the AI age is to make it thrive in it.

Maybe the fu­ture ain’t what it used to be — but that’s not nec­es­sar­ily a bad thing.

I’ll ad­mit this es­say turned out longer and more dis­jointed than I’d hoped. I’ve had all these thoughts rat­tling around in my head for a while now and I wanted to cover them all, but I def­i­nitely strug­gled to weave them into a co­he­sive nar­ra­tive. Programming may be hard, but writ­ing prose re­mains harder. Thanks for bear­ing with me.

That’s all I have for you to­day. Keep hack­ing!

P. S. There’s an in­ter­est­ing Hacker News dis­cus­sion about this ar­ti­cle. Check it out if you want to see what the broader com­mu­nity thinks!

...

Read the original on batsov.com »

9 182 shares, 35 trendiness

XML is a Cheap DSL

Yesterday, the IRS announced the release of the project I’ve been the engineering lead on since this summer, its new Tax Withholding Estimator (TWE). Taxpayers enter their income, expected deductions, and other relevant info to estimate what they’ll owe in taxes at the end of the year, and adjust the withholdings on their paycheck. It’s free, open source, and, in a major first for the IRS, open for public contributions.

TWE is full of ex­cit­ing learn­ings about the field of pub­lic sec­tor soft­ware. Being me, I’m go­ing to start by writ­ing about by far the dri­est one: XML.

XML is widely con­sid­ered clunky at best, ob­so­lete at worst. It evokes mem­o­ries of SOAP con­figs and J2EE (it’s fine, even good, if those acronyms don’t mean any­thing to you). My ex­pe­ri­ence with the Tax Withholding Estimator, how­ever, has taught me that XML ab­solutely has a place in mod­ern soft­ware de­vel­op­ment, and it should be con­sid­ered a lead­ing op­tion for any cross-plat­form de­clar­a­tive spec­i­fi­ca­tion.

TWE is a sta­tic site gen­er­ated from two XML con­fig­u­ra­tions. The first of these con­figs is the Fact Dictionary, our rep­re­sen­ta­tion of the US Tax Code; the sec­ond will be the sub­ject of a later blog post.

We use the Fact Graph, a logic en­gine, to cal­cu­late the tax­pay­er’s tax oblig­a­tions (and their with­hold­ings) based on the facts de­fined in the Fact Dictionary. The Fact Graph was orig­i­nally built for IRS Direct File and now we use it for TWE. I’m go­ing to in­tro­duce you to the Fact Graph the way that I was in­tro­duced to it: by ex­am­ple.

Put aside any pre­con­cep­tions you might have about XML for a mo­ment and ask your­self what this fact de­scribes, and how well it de­scribes it.
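(The snippet itself didn’t survive this page, so here is my reconstruction — the structure follows the description below and the s-expression rendering of a sibling fact later in the post, and the exact element names are inferred rather than copied from the TWE source.)

```xml
<Fact path="/totalOwed">
  <Description>
    The amount the taxpayer will need to pay the IRS at the end of the year.
  </Description>
  <Derived>
    <Subtract>
      <Minuend>
        <Dependency path="/totalTax" />
      </Minuend>
      <Subtrahends>
        <Dependency path="/totalPayments" />
      </Subtrahends>
    </Subtract>
  </Derived>
</Fact>
```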

This fact de­scribes a /totalOwed fact that’s de­rived by sub­tract­ing /totalPayments from /totalTax. In tax terms, this fact de­scribes the amount you will need to pay the IRS at the end of the year. That amount, total owed,” is the dif­fer­ence be­tween the to­tal taxes due for your in­come (“total tax”) and the amount you’ve al­ready paid (“total pay­ments”).

My ini­tial re­ac­tion to this was that it’s quite ver­bose, but also rea­son­ably clear. That’s more or less how I still feel.

You only need to look at a few of these to in­tuit the struc­ture. Take the re­fund­able cred­its cal­cu­la­tion, for ex­am­ple. A re­fund­able credit is a tax credit that can lead to a neg­a­tive tax bal­ance—if you qual­ify for more re­fund­able cred­its than you owe in taxes, the gov­ern­ment just gives you some money. TWE cal­cu­lates the to­tal value of re­fund­able cred­its by adding up the val­ues of the Earned Income Credit, the Child Tax Credit (CTC), American Opportunity Credit, the re­fund­able por­tion of the Adoption Credit, and some other stuff from the Schedule 3.

By contrast, non-refundable tax credits can bring your tax burden down to zero, but won’t ever make it negative. TWE models that by subtracting non-refundable credits from the tentative tax burden while making sure it can’t go below zero, using the <GreaterOf> operator.

While admittedly very verbose, the nesting is straightforward to follow. The tax after non-refundable credits is derived by saying “give me the greater of these two numbers: zero, or the difference between tentative tax and the non-refundable credits.”

Finally, what about in­puts? Obviously we need places for the tax­payer to pro­vide in­for­ma­tion, so that we can cal­cu­late all the other val­ues.
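Something like this (the paths and descriptions here are illustrative — /isAge65OrOlder in particular is a name I made up):

```xml
<Fact path="/totalEstimatedTaxesPaid">
  <Description>Estimated tax payments already made this year.</Description>
  <Writable>
    <Dollar />
  </Writable>
</Fact>

<Fact path="/isAge65OrOlder">
  <Description>Whether the taxpayer is 65 or older.</Description>
  <Writable>
    <Boolean />
  </Writable>
</Fact>
```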

Okay, so instead of <Derived> we use <Writable>. Because the value is… writable. Fair enough. The <Dollar /> denotes what type of value this fact takes. True-or-false questions use <Boolean />, like this one that records whether the taxpayer is 65 or older.

There are some (much) longer facts, but these are a fair rep­re­sen­ta­tion of what the me­dian fact looks like. Facts de­pend on other facts, some­times de­rived and some­times writable, and they all add up to some fi­nal tax num­bers at the end. But why en­code math this way when it seems far clunkier than tra­di­tional no­ta­tion?

Countless main­stream pro­gram­ming lan­guages would in­stead let you write this cal­cu­la­tion in a no­ta­tion that looks more like nor­mal math. Take this JavaScript ex­am­ple, which looks like el­e­men­tary al­ge­bra:

const totalOwed = totalTax - totalPayments

That seems better! It’s far more concise, easier to read, and doesn’t make you explicitly label the “minuend” and “subtrahend.”

Let’s add in the de­f­i­n­i­tions for to­tal­Tax and to­tal­Pay­ments.

const totalTax = tentativeTaxNetNonRefundableCredits + totalOtherTaxes

const totalPayments = totalEstimatedTaxesPaid +
  totalTaxesPaidOnSocialSecurityIncome +
  totalRefundableCredits

const totalOwed = totalTax - totalPayments

Still not too bad. Total tax is calculated by adding the tax after non-refundable credits (discussed earlier) to whatever’s in “other taxes.” Total payments is the sum of estimated taxes you’ve already paid, taxes you’ve paid on social security, and any refundable credits.

The prob­lem with the JavaScript rep­re­sen­ta­tion is that it’s im­per­a­tive. It de­scribes ac­tions you take in a se­quence, and once the se­quence is done, the in­ter­me­di­ate steps are lost. The is­sues with this get more ob­vi­ous when you go an­other level deeper, adding the de­f­i­n­i­tions of all the val­ues that to­tal­Tax and to­tal­Pay­ments de­pend on.

// Total tax calculation
const totalOtherTaxes = selfEmploymentTax + additionalMedicareTax + netInvestmentIncomeTax
const tentativeTaxNetNonRefundableCredits = Math.max(totalTentativeTax - totalNonRefundableCredits, 0)
const totalTax = tentativeTaxNetNonRefundableCredits + totalOtherTaxes

// Total payments calculation
const totalEstimatedTaxesPaid = getInput()
const totalTaxesPaidOnSocialSecurityIncome = socialSecuritySources
  .map(source => source.totalTaxesPaid)
  .reduce((acc, val) => acc + val, 0)
const totalRefundableCredits = earnedIncomeCredit +
  additionalCtc +
  americanOpportunityCredit +
  adoptionCreditRefundable +
  schedule3OtherPaymentsAndRefundableCreditsTotal
const totalPayments = totalEstimatedTaxesPaid +
  totalTaxesPaidOnSocialSecurityIncome +
  totalRefundableCredits

// Total owed
const totalOwed = totalTax - totalPayments

We are quickly ar­riv­ing at a sit­u­a­tion that has a lot of sub­tle prob­lems.

One problem is the execution order. The hypothetical getInput() function solicits an answer from the taxpayer, which has to happen before the program can continue. Calculations that don’t depend on knowing “total estimated taxes” are still held up waiting for the user; calculations that do depend on knowing that value had better be specified after it.

Or, take a close look at how we add up all the so­cial se­cu­rity in­come:

const totalTaxesPaidOnSocialSecurityIncome = socialSecuritySources
  .map(source => source.totalTaxesPaid)
  .reduce((acc, val) => acc + val, 0)

All of a sud­den we are re­ally in the weeds with JavaScript. These are not com­pli­cated code con­cepts—map and re­duce are both in the stan­dard li­brary and ba­sic func­tional par­a­digms are wide­spread these days—but they are not tax math con­cepts. Instead, they are im­ple­men­ta­tion de­tails.

Compare it to the Fact rep­re­sen­ta­tion of that same value.
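A reconstruction of that fact is below — the <CollectionSum> element name and the exact path are my inference from the surrounding discussion, not copied from the TWE source:

```xml
<Fact path="/totalTaxesPaidOnSocialSecurityIncome">
  <Derived>
    <CollectionSum>
      <Dependency path="/socialSecuritySources/*/totalTaxesPaid" />
    </CollectionSum>
  </Derived>
</Fact>
```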

This isn’t perfect—the * that represents each social security source is a little hacky—but the meaning is much clearer. What are the total taxes paid on social security income? The sum of the taxes paid on each social security income. How do you add all the items in a collection? With <CollectionSum>.

Plus, it reads like all the other facts; need­ing to add up all items in a col­lec­tion did­n’t sud­denly kick us into a new con­cep­tual realm.

The philo­soph­i­cal dif­fer­ence be­tween these two is that, un­like JavaScript, which is im­per­a­tive, the Fact Dictionary is de­clar­a­tive. It does­n’t de­scribe ex­actly what steps the com­puter will take or in what or­der; it de­scribes a bunch of named cal­cu­la­tions and how they de­pend on each other. The en­gine de­cides au­to­mat­i­cally how to ex­e­cute that cal­cu­la­tion.

Besides be­ing (relatively) friend­lier to read, the most im­por­tant ben­e­fit of a de­clar­a­tive tax model is that you can ask the pro­gram how it cal­cu­lated some­thing. Per the Fact Graph’s orig­i­nal au­thor, Chris Given:

The Fact Graph pro­vides us with a means of prov­ing that none of the unasked ques­tions would have changed the bot­tom line of your tax re­turn and that you’re get­ting every tax ben­e­fit to which you’re en­ti­tled.

Suppose you get a value for to­talOwed that does­n’t seem right. You can’t ask the JavaScript ver­sion how did you ar­rive at that num­ber?” be­cause those in­ter­me­di­ate val­ues have al­ready been dis­carded. Imperative pro­grams are gen­er­ally de­bugged by adding log state­ments or step­ping through with a de­bug­ger, paus­ing to check each value. This works fine when the num­ber of in­ter­me­di­ate val­ues is small; it does not scale at all for the US Tax Code, where the fi­nal value is cal­cu­lated based on hun­dreds upon hun­dreds of cal­cu­la­tions of in­ter­me­di­ate val­ues.

With a de­clar­a­tive graph rep­re­sen­ta­tion, we get au­ditabil­ity and in­tro­spec­tion for free, for every sin­gle cal­cu­la­tion.
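To make the distinction concrete, here’s a toy sketch — mine, not the actual Fact Graph — of how a declarative graph keeps every intermediate value around and inspectable:

```javascript
// A minimal declarative fact graph: each fact declares its dependencies
// and a pure compute function, and the engine decides evaluation order.
const facts = {
  totalTax:      { deps: [], compute: () => 1200 },
  totalPayments: { deps: [], compute: () => 1000 },
  totalOwed:     { deps: ["totalTax", "totalPayments"],
                   compute: (tax, paid) => tax - paid },
};

const cache = {};

// Evaluate a fact by recursively evaluating its dependencies first.
function evaluate(name) {
  if (!(name in cache)) {
    const fact = facts[name];
    cache[name] = fact.compute(...fact.deps.map(evaluate));
  }
  return cache[name];
}

// Ask the graph *how* it arrived at a value: every intermediate
// result is still addressable after evaluation, nothing is discarded.
function explain(name) {
  return {
    name,
    value: evaluate(name),
    inputs: facts[name].deps.map(explain),
  };
}
```

Calling evaluate("totalOwed") yields 200, and explain("totalOwed") returns the whole derivation tree — the auditability that an imperative script throws away.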

Intuit, the company behind TurboTax, came to the same conclusion, and published a whitepaper about their “Tax Knowledge Graph” in 2020. Their implementation is not open source, however (or at least I can’t find it). The IRS Fact Graph is open source and public domain, so it can be studied, shared, and extended by the public.

If we ac­cept the need for a de­clar­a­tive data rep­re­sen­ta­tion of the tax code, what should it be?

In many of the places where people used to encounter XML, such as network data transfer and configuration files, it has been replaced by JSON. I find JSON to be a reasonably good wire format and a painful configuration format, but in neither case would I rather be using XML (although it’s a close call on the latter).

The Fact Dictionary is dif­fer­ent. It’s not a pile of set­tings or key-value pairs. It’s a cus­tom lan­guage that mod­els a unique and com­plex prob­lem space. In pro­gram­ming we call this a do­main-spe­cific lan­guage, or DSL for short.

As an ex­er­cise, I tried to come up with a plau­si­ble JSON rep­re­sen­ta­tion of the /tentativeTaxNetNonRefundableCredits fact from ear­lier.

{
  "description": "Total tentative tax after applying non-refundable credits, but before applying refundable credits.",
  "definition": {
    "type": "Expression",
    "kind": "GreaterOf",
    "children": [
      {
        "type": "Value",
        "kind": "Dollar",
        "value": 0
      },
      {
        "type": "Expression",
        "kind": "Subtract",
        "minuend": {
          "type": "Dependency",
          "path": "/totalTentativeTax"
        },
        "subtrahend": {
          "type": "Dependency",
          "path": "/totalNonRefundableCredits"
        }
      }
    ]
  }
}

This is not a terribly complicated fact, but it’s immediately apparent that JSON does not handle arbitrary nested expressions well. The only complex data structures available in JSON are objects and arrays, so every child object has to declare what “kind” of object it is. Contrast that with XML, where the “kind” of the object is embedded in its delimiters.
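For comparison, here is the XML version of the same fact, reconstructed from the s-expression rendering near the end of this post (so treat the exact element names as inferred):

```xml
<Fact path="/tentativeTaxNetNonRefundableCredits">
  <Description>
    Total tentative tax after applying non-refundable credits,
    but before applying refundable credits.
  </Description>
  <Derived>
    <GreaterOf>
      <Dollar>0</Dollar>
      <Subtract>
        <Minuend>
          <Dependency path="/totalTentativeTax" />
        </Minuend>
        <Subtrahends>
          <Dependency path="/totalNonRefundableCredits" />
        </Subtrahends>
      </Subtract>
    </GreaterOf>
  </Derived>
</Fact>
```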

I think this XML representation could be improved, but even in its current form, it is clearly better than JSON. (It’s also, amusingly, a couple lines shorter.) Attributes and named children give you just enough expressive power to make choices about what your language should or should not emphasize. Not being tied to a specific set of data types makes it reasonable to define your own, such as a distinction between “dollars” and “integers.”

A lot of mi­nor frus­tra­tions we’ve all in­ter­nal­ized as in­evitable with JSON are ac­tu­ally JSON-specific. XML has com­ments, for in­stance. That’s nice. It also has sane white­space and new­line han­dling, which is im­por­tant when your de­scrip­tions are of­ten long. For text that has any length or shape to it, XML is far more pleas­ant to read and edit by hand than JSON.

There are still verbosity gains to be had, particularly with switch statements (omitted here out of respect for page length). I’d certainly remove the explicit “minuend” and “subtrahend,” for starters.

I believe that the original team didn’t do this because they didn’t want the order of the children to have semantic consequence. I get it, but order is guaranteed in XML and I think the additional nesting and words do more harm than good.

What about YAML? Chris Given again:

what­ever you do, don’t try to ex­press the logic of the Internal Revenue Code as YAML

Finally, there’s a good case to be made that you could build this DSL with s-expressions. In a lot of ways, this is the nicest syntax to read and edit.

(Fact
  (Path "/tentativeTaxNetNonRefundableCredits")
  (Description "Total tentative tax after applying non-refundable
                credits, but before applying refundable credits.")
  (Derived
    (GreaterOf
      (Dollar 0)
      (Subtract
        (Minuend (Dependency "/totalTentativeTax"))
        (Subtrahends (Dependency "/totalNonRefundableCredits"))))))

HackerNews user ok123456 asks: “Why would I want to use this over Prolog/Datalog?”

I’m a Prolog fan! This is also pos­si­ble.

...

Read the original on unplannedobsolescence.com »

10 181 shares, 28 trendiness

What It Is, Why I'm Doing It Now, and How It Came About


Thanks to all the backers who have contributed to the campaign so far, plus a special shout-out to those who have generously donated to support my work! As a subscriber to the “Dabao” campaign, you’re already aware of the Baochip-1x. This update fills in the backstory of what it is, why I’m doing it now, and how it came about.

In my mind, the Baochip-1x’s key dif­fer­en­ti­at­ing fea­ture is the in­clu­sion of a Memory Management Unit (MMU). No other mi­cro­con­troller in this per­for­mance/​in­te­gra­tion class has this fea­ture, to the best of my knowl­edge. For those not versed in OS-nerd speak, the MMU is what sets the soft­ware that runs on your phone or desk­top apart from the soft­ware that runs in your toaster oven. It fa­cil­i­tates se­cure, load­able apps by stick­ing every ap­pli­ca­tion in its own vir­tual mem­ory space.

The MMU is a venerable piece of technology, dating back to the 1960s. Its page-based memory protection scheme is well understood and has passed the test of time; I’ve taught its principles to hundreds of undergraduates, and it continues to be a cornerstone of modern OSes.

Diagram illustrating an early virtual memory scheme from Kilburn, et al., “One-level storage system”, IRE Transactions, EC-11(2):223-235, 1962
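The core of the page-based scheme fits in a few lines. Here is a toy single-level translation in Rust — my illustration of the textbook mechanism, not the Baochip’s actual MMU:

```rust
const PAGE_SIZE: usize = 4096;

// A toy single-level page table: entry i maps virtual page i to a
// physical page number, or None if that page is unmapped.
fn translate(page_table: &[Option<usize>], vaddr: usize) -> Option<usize> {
    let vpn = vaddr / PAGE_SIZE;        // virtual page number
    let offset = vaddr % PAGE_SIZE;     // offset within the page
    let ppn = (*page_table.get(vpn)?)?; // out of range or unmapped => fault
    Some(ppn * PAGE_SIZE + offset)
}
```

Each process gets its own table, so two programs can both believe they live at address zero while touching disjoint physical memory — the property that makes secure, loadable apps possible.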

When it comes to eval­u­at­ing se­cu­rity-ori­ented fea­tures, older is not al­ways worse; in fact, with­stand­ing the test of time is a pos­i­tive sig­nal. For ex­am­ple, the AES ci­pher is about 26 years old. This seems an­cient for com­puter tech­nol­ogy, yet many cryp­tog­ra­phers rec­om­mend it over newer ci­phers ex­plic­itly be­cause AES has with­stood the test of hun­dreds of cryp­tog­ra­phers try­ing to break it, with rep­re­sen­ta­tion from every na­tion state, over years and years.

I’m aware of newer mem­ory pro­tec­tion tech­nolo­gies, such as CHERI, PMPs, MPUs… and as a nerd, I love think­ing about these sorts of things. In fact, in my dis­ser­ta­tion, I even ad­vo­cated for the use of CHERI-style hard­ware ca­pa­bil­i­ties and tagged point­ers in new CPU ar­chi­tec­tures.

However, as a prag­matic sys­tem ar­chi­tect, I see no rea­son to es­chew the MMU in fa­vor of any of these. In fact, the MMU is com­pos­able with all of these prim­i­tives — it’s valid to have both a PMP and an MMU in the same RISC-V CPU. And, even if you’re us­ing a CHERI-like tech­nol­ogy for hard­ware-en­forced bounds check­ing on point­ers, it still does­n’t al­low for trans­par­ent ad­dress space re­lo­ca­tion. Without page-based vir­tual mem­ory, each pro­gram would need to be linked to a dis­tinct, non-over­lap­ping re­gion of phys­i­cal ad­dress space at com­pile time, and you could­n’t have swap mem­ory.

This raises the question: if the MMU is such an obvious addition, why isn’t it more prevalent? Wouldn’t more players include it in their chips?

“Small” CPUs such as those found in embedded SoCs have lacked this feature since their inception. I trace this convention back to the introduction of the ARM7TDMI core in the 1990s. Back then, transistors were scarce, memory even more so, and so virtual memory was not a great product/market fit for devices with just a couple kilobytes of RAM, not even enough to hold a page table. The ARM7TDMI core’s efficiency and low cost made it a runaway success, shipping over a billion units and establishing ARM as the dominant player in the embedded SoC space.

Fast for­ward 30 years, and Moore’s Law has given us tens of thou­sands of times more ca­pa­bil­ity; to­day, a fleck of sil­i­con smaller than your pinky nail con­tains more tran­sis­tors than a full-sized PC desk­top from the 1990s. Despite the progress, these small flecks of sil­i­con con­tinue to ad­here to the pat­tern that was es­tab­lished in the 1990s: small sys­tems get flat mem­ory spaces with no ad­dress iso­la­tion.

Die shot of a modern 22nm system-on-chip (SoC). This fleck of silicon is about 4mm on a side and contains more transistors than a desktop PC from the 1990s. Despite this, the logic region is more empty space by area than active gates

The root cause turns out ex­plic­itly to be be­cause MMUs are so valu­able: with­out one, you can’t run Linux, BSD, or Mach. Thus, when ARM split their IP port­fo­lio into the A, R, and M-series cores, the low-cost M-series cores were for­bid­den from hav­ing an MMU to pre­vent price ero­sion of their high-end A-series cores. Instead, a pro­pri­etary hack known as the MPU was in­tro­duced that gives some mem­ory se­cu­rity, but with­out an easy path to ben­e­fits such as swap mem­ory.

We’ve been locked into this con­ven­tion for so long that we sim­ply for­got to chal­lenge the as­sump­tions.

Thanks to the rise of open ar­chi­tec­ture spec­i­fi­ca­tions such as RISC-V, and fully-open im­ple­men­ta­tions of the RISC-V spec such as the Vexriscv, I’m not bound by any­one’s rules for what can or can’t go onto an SoC. And so, I am lib­er­ated to make the choice to in­clude an MMU in the Baochip-1x.

This naturally empowers enthusiasts to try and run Linux on the Baochip-1x, but we (largely Sean “xobs” Cross and me) already wrote a pure-Rust OS called “Xous” which incorporates an MMU but in a framework that is explicitly targeted towards small memory footprint devices like the Baochip-1x. The details of Xous are beyond the scope of this post, but if you’re interested, check out the talk we gave at 39C3.

This cou­ples into the core ar­gu­ment as to why a mostly open RTL SoC is the right thing for this mo­ment in time. As a staunch ad­vo­cate for open-source tech­nolo­gies, I would love to see a fully-open sil­i­con stack, from the fabs-up. I’m heart­ened to see mul­ti­ple ini­tia­tives work­ing on fix­ing this prob­lem, but it’s a hard prob­lem. I es­ti­mate it could take more than a decade be­fore we have a suf­fi­ciently ro­bust open source sil­i­con ecosys­tem to mar­ket eco­nom­i­cally com­pet­i­tive SoCs.

For those of us looking to create an embedded product today, that leaves only one practical option: continue to use Cortex-M ARM devices, and if we want hardware memory protection, we have to tune our software to their proprietary MPU. This means further entrenching our code bases in a proprietary standard. Do I really want to spend my time porting Xous to use ARM’s proprietary flavor of memory protection? Surely not.

Thus, I would ar­gue that we sim­ply can’t af­ford to wait for fully open source PDKs to come along. Given the op­por­tu­nity to do a par­tially-open RTL tape­out to­day, ver­sus wait­ing for the per­fect, fully-open source so­lu­tion, the ben­e­fit of tap­ing out par­tially-open RTL SoCs to­day is crys­tal clear to me.

A partially-open SoC available today empowers a larger community that is interested in an open source future, even if they aren’t hardware experts. As a larger community, we can begin the process of de-leveraging ARM together, so that when economically viable, “truly open” silicon alternatives come to market, they can drop directly into a mature application stack. After all, software drives demand for silicon, not the other way around.

The good news is that on the Baochip-1x, everything that can “compute” on data is available for simulation and inspection. The parts that are closed are components such as the AXI bus framework, USB PHY, and analog components such as the PLL, voltage regulators, and I/O pads.

Thus, while certain portions of the Baochip-1x SoC are closed-source, none of them are involved in the transformation of data. In other words, all the closed source components are effectively “wires”: the data that goes in on one side should match the data coming out the other side. While this is dissatisfying from the “absolute trust” perspective, since one can’t definitively rule out the possibility of back doors in black-box wires, we can inspect its perimeter and confirm that, for a broad range of possibilities, it behaves correctly. It’s not perfect transparency, but it’s far better than the fully-NDA SoCs we currently use to handle our secrets, and more importantly, it allows us to start writing code for open architectures, paving a roadmap to an eventually fully-open silicon-to-software future.

Those with a bit of sil­i­con savvy would note that it’s not cheap to pro­duce such a chip, yet, I have not raised a dol­lar of ven­ture cap­i­tal. I’m also not in­de­pen­dently wealthy. So how is this pos­si­ble?

The short answer is I “hitchhiked” on a 22 nm chip designed primarily by Crossbar, Inc. I was able to include a CPU of my choice, along with a few other features, in some unused free space on the chip’s floorplan. By switching off which CPU is active, you can effectively get two chips for the price of one mask set.

Floorplan of the Baochip, il­lus­trat­ing the lo­ca­tion and rel­a­tive sizes of its 5 open-source CPU cores

For those who haven’t peeked under the hood of a System-on-Chip (SoC), the key fact to know is that the cost of modern SoCs is driven largely by peripherals and memory. The CPU itself is often just a small fraction of the area, just a couple percent in the case of the Baochip-1x. Furthermore, all peripherals are “memory mapped”: flashing an LED, for example, entails tickling some specific locations in memory. Who does the tickling doesn’t matter — whether ARM or RISC-V CPU, or even a state machine — the peripherals respond just the same. Thus, one can effectively give the same “body” two different “personalities” by switching out their “brains”; by switching out their CPU cores, you can have the same physical piece of silicon run vastly different code bases.
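In Rust (the language Xous is written in), the tickling looks something like this — the register address is hypothetical, and the point is only that the peripheral sees a load or a store, not who issued it:

```rust
use core::ptr::{read_volatile, write_volatile};

// Hypothetical GPIO output register address from some SoC's memory map.
// On real hardware you'd call toggle_led(GPIO_OUT as *mut u32).
#[allow(dead_code)]
const GPIO_OUT: usize = 0x4000_0000;

// Flip bit 0 of a memory-mapped register: a read-modify-write sequence
// that any "brain" -- ARM, RISC-V, or a state machine -- performs identically.
fn toggle_led(reg: *mut u32) {
    unsafe {
        let bits = read_volatile(reg);
        write_volatile(reg, bits ^ 1);
    }
}
```

The volatile accesses keep the compiler from optimizing away reads and writes that have hardware side effects.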

The long answer starts a couple of years ago, with Crossbar wanting to build a high-performance secure enclave that would differentiate itself in several ways, notably by fabricating in a relatively advanced (compared to other security chips) 22 nm process and by using their RRAM technology for non-volatile storage. RRAM is similar to FLASH memory in that it retains data without power, but it has faster write times and smaller (32-byte) page sizes, and it can scale below 40 nm, a limit FLASH has not been able to cross.

In addition to flexing their process superiority, they wanted to differentiate by being pragmatically open source about the design; security chips have traditionally been wrapped behind NDAs, despite calls from users for transparency.

Paradoxically, open source security chips are harder to certify, because certification standards such as Common Criteria evaluate closed-source flaws as more “secure” than open-source flaws. My understanding is that the argument goes something along the lines of, “hacking chips is hard, so any barrier you can add to the up-front cost of exploiting the chip increases the effective security of the chip overall”. Basically, if the pen tester doing a security evaluation judges that a bug is easier to find and exploit when the source code is public, then sharing the source code lowers your score. As a result, open source chips are likely to certify with much worse scores than closed source chips. And, since you can’t sell security chips to big customers without certifications, security chips end up being mostly closed source.

Kind of a crazy system, right? But if you consider that the people buying oodles and oodles of security chips are institutions like banks and governments, staffed with non-technical managers whose primary focus is risk management, and that they outsource the technical evaluation in any case, the status quo makes a little more sense. What’s a banker going to do with the source code of a chip, anyway?

Crossbar wanted to buck the trend and heed the call for open source transparency in security chips, and they approached me to help advise on strategy. I agreed to help, but under one condition: that I would be allowed to add a CPU core of my own choosing and sell a version of the chip under my own brand. Part of the reason was that Crossbar, for risk reduction, wanted to go with a proprietary ARM CPU. Having designed chips in a prior life, I can appreciate the desire to reduce risk by going with a tape-out-proven core.

However, as an open source strategy advisor, I argued that users who viewed open source as a positive feature would likely also expect, at a minimum, that the CPU would be open source. Thus I offered to add the battle-tested CPU core from the Precursor SoC — the Vexriscv — to the tape-out, and I promised I would implement the core in such a way that even if it didn’t work, we could just switch it off, with minimal impact on the chip’s power and area budget.

Out of this arrangement was born the Baochip-1x.

At the time of writing, wafers containing the Baochip-1x design have been fabricated, and hundreds of the chips have been handed out through an early sampling program. These engineering samples were all hand-screened by me.

However, that’s about to change. There’s currently a pod of wafers hustling through a fab in Hsinchu, and two of them are earmarked to become fully production-qualified Baochip-1x silicon. These will go through a fully automated screening flow. Assuming this process completes smoothly, I’ll have a few thousand Baochip-1x’s available to sell. More chips are planned for later in the year, but a combination of capital constraints, risk mitigation, and the sheer time it takes to go from blank silicon to fully assembled devices puts further inventory out until late 2026.

Rather than waiting until the supply chain was fully stabilized, I decided to take a small risk and share the first few thousand chips with developers like you through the Dabao evaluation board pre-order campaign. After working on this project in total silence for a couple of years, I’m excited to start building a community around it. Despite the effort I’ve put into the project, there’s loads more to be done: documentation to be written, drivers to bring up, and more programming languages and OSes to support. While I personally champion the Rust-based “Xous” OS, it doesn’t have to be the only, or even the primary, code base for Baochip. I’m hoping to empower a few good open source community leaders with accessible hardware security primitives, so we can all build a brighter, more secure, and more trustable open source future together.


...

Read the original on www.crowdsupply.com »
