
Linux gaming is getting faster because Windows APIs are becoming Linux kernel features

www.xda-developers.com

Published May 10, 2026, 12:30 PM EDT


In March 2026, Linux crossed five per­cent of Steam’s user base for the first time, an all-time high for an op­er­at­ing sys­tem that spent two decades as a nov­elty when it came to any kind of gam­ing. Microsoft’s end-of-sup­port dead­line for Windows 10 last October pushed many users to look at al­ter­na­tives, and the Steam Deck has qui­etly turned mil­lions of peo­ple into Linux gamers with­out them re­ally think­ing about it, lead­ing to more wide­spread adop­tion on desk­top ma­chines.

Most of that progress used to happen inside a piece of software called Wine, the translation layer that convinces Windows games they're running on Windows. Valve's tuned version of Wine, called Proton, is what makes Steam Play and the Steam Deck work. For years, every meaningful improvement to Linux gaming came from changes to Wine and Proton themselves. That's still true, but increasingly the most important changes are happening one layer deeper, inside the Linux kernel. The latest example is NTSYNC, a kernel-level driver that has delivered large performance gains over unpatched Wine and is loaded by default on every up-to-date Steam Deck.


What NTSYNC ac­tu­ally is

An ad­di­tional piece of the per­for­mance puz­zle

NTSYNC is a small driver built directly into the Linux kernel that gives it a native implementation of the Windows-specific synchronization primitives games depend on to coordinate their work.

Modern games jug­gle dozens of things at once. While you’re play­ing, your CPU man­ages the ren­der­ing pipeline, load­ing as­sets, run­ning physics, pro­cess­ing au­dio, han­dling AI NPC rou­tines, and track­ing in­puts, all in par­al­lel across mul­ti­ple cores. All those jobs con­stantly have to co­or­di­nate so they don’t trip over each other.


Windows handles this coordination with a specific set of mechanisms, and before NTSYNC, Wine had to mimic them with workarounds like esync and fsync, which worked but didn't always match Windows exactly. NTSYNC builds these mechanisms straight into the Linux kernel for the first time, which means Wine no longer has to emulate anything. The developer-facing API calls don't change; Linux simply knows how to answer them natively.
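
On a desktop distribution you can check whether the driver is present yourself. This probe is a generic sketch; the device node is the one the upstream driver creates on kernels 6.14 and newer:

```shell
# Probe for the NTSYNC character device that Wine opens when the native
# driver is in use (kernels 6.14+ with the ntsync module loaded).
if [ -e /dev/ntsync ]; then
  ntsync_status="available"
else
  ntsync_status="not present"
fi
echo "ntsync: $ntsync_status"
```

If the device is missing on a recent kernel, loading the module as root (modprobe ntsync) usually brings it up.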


NTSYNC is part of a grow­ing pat­tern

Not the first time Linux has in­her­ited fea­tures be­cause of Windows

NTSYNC isn't the first time Linux has gained a feature specifically because Windows games needed it. A few years back, Linux added a way for software to wait on several events at once (the futex_waitv() system call, added in kernel 5.16), something Windows had offered for decades but Linux lacked. Wine had been working around the gap with awkward tricks until the kernel finally got native support.

This work is driven by Valve, by CodeWeavers (the company that employs many of the core Wine developers, including NTSYNC's author Elizabeth Figura), and by a steady stream of contributors who want Linux to be a real gaming platform without depending on out-of-tree patches forever.


These aren’t mag­i­cal per­for­mance gains

fsync was al­ready pretty good

The headline performance gains look great, but they need context. The eye-catching 40 to 200 percent FPS gains cited in NTSYNC's original benchmarks were measured against unmodified upstream Wine, which almost nobody uses for gaming on Linux anymore. Most Linux gamers, including every Steam Deck owner, use Proton, which already has fsync. Compared to fsync, NTSYNC's gains are far more modest. The games that benefit most from NTSYNC are the ones that were really struggling before; anything that already ran at decent framerates will still run fine.


Valve adopted it any­way

It’s a great sign

Pierre-Loup Griffais, an engineer at Valve, has said on the record that fsync was already fast enough. Valve still shipped NTSYNC in stable SteamOS in March anyway, which speaks to the fact that fsync remains a workaround at its core and can cause problems beyond raw FPS.

These old workarounds got subtle edge cases wrong in ways that produced occasional hitches, deadlocks, or odd behavior in specific games: bugs that never show up on benchmark charts but can absolutely ruin individual experiences. NTSYNC fixes them at the source by matching Windows behavior exactly, and that means as soon as your favorite distro moves to the new kernel version, whether that's Bazzite, CachyOS, Fedora, or a flavor of Ubuntu, it gets this much-needed fix.


Gaming on Linux con­tin­ues to im­prove by the month

Linux has come a long way as a gaming platform. Where there was once nothing but clever Wine patches and community workarounds, there is now backing from gaming behemoths like Valve, driving changes to the Linux kernel itself. NTSYNC won't be the last time a piece of Windows gets rebuilt inside Linux because gamers needed it, and with more than five percent of Steam's user base now running Linux, the incentive to keep doing it has never been stronger.

Why I'm leaving GitHub for Forgejo

jorijn.com

I moved my code from GitHub to a self-hosted Forgejo. Not because of the outages, but because of who owns the platform my code runs on. The Dutch government just made the same call.

On April 27, 2026 the Dutch Ministry of the Interior soft-launched code.overheid.nl, a self-hosted Forgejo instance for Dutch government source code. Project manager Boris Van Hoytema said the platform was born from the requirement that the ministry has to legally “publish [its] source code on a place that [it] owns,” and that Forgejo was picked over GitLab because it is fully open source and “offers all the freedoms needed for digital autonomy.”

The week be­fore, I qui­etly moved my own code in the same di­rec­tion. My canon­i­cal Git host is now code.jorijn.com, run­ning Forgejo v15 LTS on a sin­gle NUC in a hard­ened setup. Some of my repos­i­to­ries al­ready live there; the rest are queued. The longer-term plan is to archive my pub­lic GitHub repos­i­to­ries once the mi­gra­tion is com­plete and point each archive at the new home.

Most pieces about leaving GitHub lead with the outages. Outages are real. They are not why I'm leaving. The outages, the train-on-your-data default, and the fact that GitHub no longer has its own CEO are all symptoms of one underlying fact: I do not own this. The Dutch government just published the same conclusion. So this is the long version of that thinking, and what the move actually looks like once you decide to make it.

TL;DR

GitHub logged 257 incidents from May 2025 to April 2026, 48 of them major. The CTO publicly apologised and said capacity needs to scale 30x to keep up with AI-driven load.

In August 2025 GitHub stopped hav­ing its own CEO. It is now a unit of Microsoft’s CoreAI di­vi­sion, the same group build­ing Copilot and the broader AI stack.

On April 24, 2026 GitHub started using Copilot Free, Pro, and Pro+ user-interaction data for AI training by default. The only opt-out is per user; there is no repository-level opt-out.

US-jurisdictional risk un­der FISA Section 702 and the CLOUD Act is un­re­solved. Microsoft’s own at­tor­ney told the French Senate un­der oath he could not guar­an­tee EU data was safe from silent US gov­ern­ment ac­cess.

The Dutch gov­ern­ment picked Forgejo for code.over­heid.nl in April 2026 for the same set of rea­sons. I’m do­ing the same for my work.

code.jorijn.com runs Forgejo v15 LTS on a sin­gle NUC with a KVM-isolated, weekly-re­built Actions run­ner. Public GitHub repos­i­to­ries will be archived and pointed at the new home as the mi­gra­tion com­pletes.

Why out­ages aren’t ac­tu­ally the rea­son

The April 2026 outages were the kind that make engineers angry. On April 23 the merge queue's squash-merge code path silently reverted previously merged commits across 658 repositories and 2,092 pull requests after a feature flag was rolled out incompletely. Companies including Modal and Zipline did manual data recovery. Four days later, an overloaded Elasticsearch cluster took Pull Requests, Issues, and Packages offline for over six hours.

But pick any month and the pic­ture is the same kind of bad. February 2026 alone logged 37 in­ci­dents, in­clud­ing a 3-hour 40-minute out­age that took Actions, the Copilot Coding Agent, Code Review, CodeQL, Dependabot, and Pages down at once. October 1, 2025 was a ten-hour ma­cOS-run­ner out­age. The IncidentHub ag­gre­ga­tion puts the May 2025 to April 2026 to­tal at 257 in­ci­dents and 48 ma­jor out­ages, with roughly 112 hours of to­tal down­time.

The right way to read this list is not “GitHub is unreliable.” Big systems break. The right way to read it is the framing GitHub itself put on it. CTO Vlad Fedorov apologised on April 28 and said capacity has to grow 30x to keep up with the load. He attributed that load directly to “agentic AI workflow growth” since December 2025. The reliability story is downstream of the AI story. GitHub is not slowing down on AI features. It is doubling down on them. The outages are what doubling down looks like in production.

The Pragmatic Engineer pointed out that GitLab, Bitbucket, Vercel, Linear, and Sentry did­n’t have the same year. They serve de­vel­op­ers un­der the same over­all de­mand pres­sure. Whatever GitHub is wrestling with is spe­cific to GitHub.

GitHub no longer has its own CEO

The big­ger fact is older than the apol­ogy and got a lot less press. On August 11, 2025 Thomas Dohmke stepped down as GitHub’s CEO. Microsoft did not re­place him. Instead, GitHub was ab­sorbed into Microsoft’s CoreAI di­vi­sion, a group Satya Nadella in­tro­duced in January 2025 with the stated mis­sion to build the end-to-end Copilot and AI stack for both first-party and third-party cus­tomers.

GitHub’s rev­enue, en­gi­neer­ing, and sup­port now re­port into Microsoft’s de­vel­oper di­vi­sion un­der Julia Liuson. GitHub’s CPO re­ports to Microsoft’s AI plat­form VP. The brand still ex­ists. The in­de­pen­dent lead­er­ship does not.

This mat­ters be­cause the older ar­gu­ment for stay­ing on GitHub was that Microsoft kept it at ar­m’s length. From 2018 through 2024 that was sub­stan­tively true. Dohmke had a real seat. Product de­ci­sions were vis­i­bly GitHub’s, not Microsoft’s. After August 2025 that ar­gu­ment no longer holds. When you push code to github.com to­day, you are push­ing it to a unit of Microsoft’s AI or­ga­ni­za­tion. Whether that both­ers you de­pends on how much you trust Microsoft’s AI or­ga­ni­za­tion to make the same de­ci­sions about your repos­i­tory that the older GitHub would have made. I no longer do, and the rea­son for that dis­trust shows up in the next sec­tion.

The train­ing-data de­fault flipped

On March 25, 2026 GitHub announced a privacy-statement change effective April 24. From that date, interaction data (inputs, outputs, code snippets, and associated context) from Copilot Free, Pro, and Pro+ users will be used to “train and improve our AI models” unless they opt out.

Three things about that state­ment mat­ter, in or­der.

First: opt-out, not opt-in. The de­fault flipped. Anyone us­ing Copilot for free, on Pro, or on Pro+ is now con­tribut­ing to model train­ing un­less they go to the Copilot set­tings page and turn it off.

Second: there is no repository-level switch. As a maintainer, I cannot tell GitHub “don't train on interactions inside my repository.” The opt-out is per user account, so each contributor has to make their own choice. In effect, my codebase becomes training material whenever anyone using Copilot Free/Pro/Pro+ touches it, no matter how I license it.

Third: the carve-out for private repositories is narrower than it sounds. GitHub says it does not use private-repo content “at rest” for training, but it does collect “code snippets and interaction context” generated while Copilot is being used inside a private repo. The line between the code at rest and the snippets generated while editing it is, charitably, blurry.

Copilot Business and Copilot Enterprise cus­tomers are ex­empt be­cause they are gov­erned by sep­a­rate Data Protection Agreements. The split is clean: pay enough and your in­ter­ac­tions are not train­ing data. Otherwise they are.

I wrote about agen­tic GitHub Actions a few weeks ago, and at the time the se­cu­rity model was the head­line. The train­ing-data flip is the sec­ond half of the same story: GitHub’s strate­gic in­ter­est in your in­ter­ac­tion data is struc­tural now, not op­tional. I am not in­ter­ested in ar­gu­ing about the mer­its of that strat­egy on some­one else’s plat­form. I would rather not be on the plat­form.

Then there’s the ju­ris­dic­tion

Underneath all of this is a layer that does­n’t shift when the pri­vacy state­ment does. GitHub Inc. and Microsoft Corp. are US com­pa­nies. Anything they hold sits in scope of US law, in­clud­ing FISA Section 702 and the CLOUD Act of 2018. Both ap­ply re­gard­less of where data phys­i­cally sits.

Section 702 was reau­tho­rised in April 2024 for two years and is cur­rently run­ning on a 45-day ex­ten­sion signed at the end of April 2026 while Congress ar­gues over a longer re­newal. It au­tho­rises US in­tel­li­gence col­lec­tion against non-US per­sons through elec­tronic com­mu­ni­ca­tions ser­vice providers domi­ciled in the US. The CLOUD Act lets US law en­force­ment com­pel a US-headquartered com­pany to pro­duce data stored any­where in the world.

GitHub an­nounced EU data res­i­dency for Enterprise Cloud in October 2024. That solves data lo­ca­tion. It does not solve ju­ris­dic­tion. CLOUD Act ex­po­sure fol­lows cor­po­rate con­trol, not ge­og­ra­phy.

The most hon­est ar­tic­u­la­tion of this came not from a reg­u­la­tor but from Microsoft’s own at­tor­ney, who told a French Senate hear­ing in June 2025, un­der oath, that he could not guar­an­tee French data stored in European Microsoft dat­a­cen­tres was safe from silent US gov­ern­ment ac­cess.

I covered the broader legal picture in my earlier piece on why “hosted in Frankfurt” doesn't mean GDPR-compliant, and the operational implications for hosting providers in my piece on NIS2, so I'll keep the detail there. The point that matters here is narrow. As long as your code lives at github.com, your code lives in US legal territory. EU data residency is a comfort, not a fix.

The Dutch gov­ern­men­t’s call: code.over­heid.nl

This is where the Dutch government's choice deserves more attention than it got. The legal driver is the Netherlands' “Open, tenzij” (“open, unless”) policy, in force since 2020: software developed with public funds is open source by default unless security or confidentiality requires otherwise. To comply, the ministry needed somewhere to publish code that it actually controlled. Code.overheid.nl is the answer.

The piece worth paus­ing on is which forge they chose. The European Commission runs code.eu­ropa.eu on self-hosted GitLab, live since September 2022. Germany’s open­Code is also GitLab. France’s code.gouv.fr is an ag­gre­ga­tor that in­dexes re­pos hosted else­where, not a forge in it­self.

The Dutch government's choice of Forgejo, not GitLab, was deliberate. As the OSOR article put it, the rationale was that Forgejo is fully open source, with no open-core split, and offers all the freedoms needed for digital autonomy. Van Hoytema added that Forgejo's roadmap was “way more aligned” with theirs than the alternatives. The government did not just want a sovereign forge. They wanted a sovereign forge that wasn't gated behind a commercial vendor's premium tier.

So the in­sti­tu­tional pat­tern mat­ters: a na­tional gov­ern­ment with se­ri­ous lawyers and a long mem­ory looked at the same pic­ture I was look­ing at, made the same de­ci­sion, and shipped it the week be­fore I did. That is­n’t proof that the de­ci­sion is right. It is, at min­i­mum, proof that the de­ci­sion is no longer fringe.

Why Forgejo, and not GitLab

I weighed GitLab se­ri­ously. Self-hosted GitLab CE is a known quan­tity, with a much larger com­mer­cial ecosys­tem and, frankly, a more pol­ished UI. Two things tipped the choice.

First: li­cens­ing. GitLab is open core. The Community Edition is MIT-licensed, but many of the fea­tures I’d ac­tu­ally want in pro­duc­tion live in the Enterprise tiers un­der a non-free li­cense. Forgejo went the other way. As of v9.0 in August 2024 the pro­ject re­li­censed from MIT to GPLv3+, with the ex­plicit goal of stay­ing copy­left and re­sist­ing fu­ture com­mer­cial cap­ture of the code­base. The fork from Gitea in December 2022 hap­pened pre­cisely be­cause Gitea Ltd took con­trol of the trade­marks and do­mains in a way the com­mu­nity had not con­sented to. The les­son learned shows up in the li­cense.

Second: gov­er­nance. Forgejo lives un­der Codeberg e.V., a non-profit reg­is­tered in Berlin since September 2018, with a mem­ber-elected board, pub­lic bud­gets, and 300,000+ repos­i­to­ries on its hosted in­stance. Members vote on the bud­get an­nu­ally; the 2025 plan was ac­cepted with 88 in favour, zero against, one ab­sten­tion. That is not a mar­ket­ing claim about com­mu­nity gov­er­nance. That is a German Verein do­ing what Vereine do.

Forgejo v15.0 LTS shipped on April 16, 2026. It is the pro­jec­t’s 100th re­lease. Long-term sup­port runs through July 15, 2027. Forgejo Actions reached the ma­tu­rity I needed (ephemeral run­ners, OpenID Connect, reusable work­flow ex­pan­sion) in v15. Releases since the fork have been steady, with ac­tive monthly re­ports.

The hon­est caveat: the com­mer­cial Forgejo ecosys­tem is real but thin. The clean­est com­mer­cial of­fer­ing to­day is Codey by VSHN, a Swiss-hosted man­aged Forgejo from 19 CHF per month, launched on Servala in March 2025. There is no Red-Hat-style en­ter­prise sup­port sub­scrip­tion. If you need 24/7 phone sup­port and a ven­dor to point at, you will need to build that your­self, or wait. I am will­ing to wait, be­cause I would rather own the plat­form.

What I built, and why it looks the way it does

Code.jorijn.com runs on a sin­gle Intel NUC with 64 GB of RAM in my home of­fice. Forgejo v15 LTS, Postgres 17, and Traefik live in­side Docker. An Incus-managed KVM vir­tual ma­chine sits be­side them and runs my Forgejo Actions run­ner. That is the whole plat­form.
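
A minimal docker-compose sketch of that stack, for orientation only; image tags, passwords, and paths are illustrative, not the actual configuration, and Traefik's routing labels are omitted for brevity:

```yaml
# Sketch: Forgejo v15 LTS backed by Postgres 17, as docker-compose services.
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:15   # v15 LTS line (tag illustrative)
    environment:
      FORGEJO__database__DB_TYPE: postgres
      FORGEJO__database__HOST: db:5432
      FORGEJO__database__NAME: forgejo
      FORGEJO__database__USER: forgejo
      FORGEJO__database__PASSWD: change-me
    volumes:
      - ./forgejo-data:/data
    depends_on:
      - db
  db:
    image: postgres:17
    environment:
      POSTGRES_DB: forgejo
      POSTGRES_USER: forgejo
      POSTGRES_PASSWORD: change-me
    volumes:
      - ./pg-data:/var/lib/postgresql/data
```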

The in­ter­est­ing de­ci­sion is not in the Forgejo de­ploy­ment. Forgejo plus Postgres plus a re­verse proxy is not in­ter­est­ing. The de­ci­sion that took the most thought is the run­ner.

Where the dan­ger ac­tu­ally is

If you self-host a forge, the forge itself is the easy part. The hard part is whatever runs the CI jobs. My runner has to execute npm install, composer install, and pip install on a daily Renovate schedule, against lockfiles generated by my own repositories. That means it executes lifecycle scripts. It means every job potentially runs untrusted code, of the same general shape that recent npm-worm and axios supply-chain attacks exploited by riding dependency bots that auto-merged within an hour.
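
One complementary mitigation, separate from the sandboxing described below, is to stop the package managers from executing lifecycle scripts in the first place. The flags in this sketch are the real npm and Composer switches; the wrapper around them is illustrative:

```shell
# Install dependencies without letting packages execute lifecycle scripts.
install_deps() {
  if command -v npm >/dev/null 2>&1; then
    npm ci --ignore-scripts        # skips preinstall/postinstall hooks
  elif command -v composer >/dev/null 2>&1; then
    composer install --no-scripts  # the Composer equivalent
  else
    echo "no supported package manager found"
  fi
}
install_deps || echo "install failed (expected outside a project directory)"
demo_status="finished"
echo "ignore-scripts demo: $demo_status"
```

The trade-off is that packages which genuinely need a build step must then be built explicitly, where the command is visible and auditable.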

The run­ner’s job, in other words, is not to run code. The run­ner’s job is to con­tain the code while it runs. Everything in the run­ner ar­chi­tec­ture ex­ists for that rea­son. The same logic I de­scribed in my piece on un­main­tained de­pen­den­cies ap­plies here: as­sume any sin­gle layer can fail, and de­sign so the next one ab­sorbs the fail­ure.

The de­fenses, weak­est to strongest

The run­ner uses five lay­ers, in or­der from soft­est to hard­est.

A per­sis­tent KVM vir­tual ma­chine. The run­ner lives in its own VM, not in a con­tainer on the host. The host’s ker­nel is not shared with the job en­vi­ron­ment. A Linux ker­nel CVE in­side the run­ner has to break the KVM bound­ary be­fore it can touch the NUC.

gVi­sor as the de­fault Docker run­time in­side that VM. Job con­tain­ers run un­der runsc, which in­ter­cepts sys­tem calls in user space rather than pass­ing them to the host ker­nel. A con­tainer es­cape has to break gVi­sor and the sur­round­ing KVM.

A weekly de­struc­tive re­build. Every Monday at 02:00 UTC the en­tire VM is de­stroyed and recre­ated from a freshly baked Ubuntu base im­age, with new per­sis­tent run­ner reg­is­tra­tions minted against Forgejo. The base im­age it­self re­builds on Sundays, so the new VM con­sumes that week’s apt and ker­nel patches. Persistent state can­not live longer than seven days.

An nfta­bles egress fil­ter on the run­ner’s bridge. The run­ner can reach :443, :80, :22, and :53 to pub­lic des­ti­na­tions (npm, pypi, ghcr, my own Forgejo via the pub­lic host­name through the router’s hair­pin NAT). It can­not reach 192.168.0.0/16, 10.0.0.0/8, or 172.16.0.0/12. A com­pro­mised job can­not scan my LAN, can­not reach the router ad­min in­ter­face, and can­not reach the host’s other ser­vices.

Scope-bound run­ner to­kens, never ad­min-scoped. The two per­sis­tent run­ner reg­is­tra­tions are tied to a sin­gle user scope and a sin­gle org scope re­spec­tively, with write:user,write:or­ga­ni­za­tion PAT scopes for man­age­ment. A leaked to­ken can­not reg­is­ter run­ners out­side its scope, and def­i­nitely can­not do any­thing ad­min-scoped.

The com­bi­na­tion is de­lib­er­ately over­lap­ping. Each layer is a fence. Together they are a perime­ter with depth. None of this is novel, in the sense that all the prim­i­tives are up­stream and well-doc­u­mented. What is new is wiring them to­gether for a sin­gle-user home­lab where the en­tire plat­form fits on one NUC and re­verts cleanly when some­thing goes wrong.
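
The egress layer can be sketched as an nftables ruleset; the table, chain, and bridge names here are hypothetical, and the real rules depend on the host's topology:

```nft
# Egress policy sketch for the runner bridge: default drop, RFC1918 blocked,
# only git/registry/DNS ports allowed out.
table inet runner_egress {
    chain forward {
        type filter hook forward priority filter; policy drop;

        ct state established,related accept   # replies to allowed flows
        iifname "runner-br0" ip daddr { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 } drop
        iifname "runner-br0" tcp dport { 22, 80, 443 } accept   # git + registries
        iifname "runner-br0" udp dport 53 accept                # DNS
        iifname "runner-br0" tcp dport 53 accept
    }
}
```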

The un­der­ly­ing prim­i­tives, KVM iso­la­tion, gVi­sor, weekly re­builds, and scope-bound run­ner reg­is­tra­tions, are all things Forgejo and Incus sup­port na­tively. I just had to com­bine them.
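
The weekly rebuild reduces to a short cron-driven script. The VM and image names below are hypothetical; the commands are standard Incus CLI:

```shell
#!/bin/sh
# Weekly destructive rebuild of the runner VM (cron: 0 2 * * 1, Monday 02:00 UTC).
set -eu
VM="actions-runner"   # hypothetical VM name
IMAGE="runner-base"   # Ubuntu base image, itself rebuilt on Sundays
if command -v incus >/dev/null 2>&1; then
  incus delete --force "$VM" 2>/dev/null || true            # discard last week's state
  incus launch "$IMAGE" "$VM" --vm || echo "launch failed"  # fresh VM from patched base
else
  echo "incus not available; dry run"
fi
rebuild_done="$VM"
echo "rebuild cycle complete for $rebuild_done"
```

A real version would also mint a fresh runner registration against Forgejo after the launch step.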

What I gave up

This is the section I have to write because every “both sides” article I respect has one. So: what does moving to Forgejo cost me, honestly?

Discovery and the so­cial graph. GitHub is where my con­trib­u­tors find me. When some­one pushes a small fix to a pub­lic repos­i­tory, they ex­pect to do it on github.com, not on a do­main they have never heard of. The plan I’m work­ing to­wards is to archive each pub­lic GitHub repos­i­tory once the move is done and point its README at code.jorijn.com. The dis­cov­ery path stays in­tact: peo­ple still find me via GitHub, see the archive no­tice, and fol­low the link to the canon­i­cal home. I’m not there yet — a few repos­i­to­ries al­ready live on code.jorijn.com, the rest are queued. Until then, the gap is real, and I ac­cept it.
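
The archive-and-point step can be scripted with the GitHub CLI; the repository name and target URL in this sketch are placeholders:

```shell
#!/bin/sh
# Archive a migrated GitHub repository and point visitors at the new home.
set -eu
REPO="example-user/example-project"                    # placeholder
NEW_HOME="https://code.example.com/example-project"    # placeholder
if command -v gh >/dev/null 2>&1; then
  gh repo edit "$REPO" --description "Moved to $NEW_HOME" || echo "edit failed"
  gh repo archive "$REPO" --yes || echo "archive failed"
else
  echo "gh CLI not available; dry run"
fi
migrate_step="archived"
echo "migration step: $migrate_step ($REPO)"
```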

GitHub Actions ecosystem fragility. Forgejo Actions deliberately aims for familiarity, not compatibility. Most things work. Some don't. permissions: blocks at the workflow level are silently ignored. actions/checkout@v6 broke authenticated checkout on non-GitHub runners in early 2026, so I pinned everything to v5. actions/upload-artifact@v4 requires the Forgejo-hosted fork. OIDC works but uses a different workflow key (enable-openid-connect: true) than GitHub's permissions: id-token: write. None of these are blockers. They are all friction. If your workflows lean heavily on GitHub-specific features, the migration is a project, not an evening.
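
Put together, a Forgejo-flavored workflow with those pins might look like this sketch; the enable-openid-connect key name and the v5 pin come from the friction above, while the runs-on label, the key's job-level placement, and the action resolution are assumptions:

```yaml
# Sketch of a Forgejo Actions workflow with the pins discussed above.
on: [push]
jobs:
  build:
    runs-on: docker
    enable-openid-connect: true       # Forgejo's key, vs GitHub's id-token permission
    steps:
      - uses: actions/checkout@v5     # pinned: v6 broke authenticated checkout off GitHub
      - run: npm ci --ignore-scripts
      - uses: actions/upload-artifact@v4   # must resolve to the Forgejo-hosted fork
        with:
          name: dist
          path: dist/
```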

Dependabot. Forgejo does­n’t have it. I run Renovate on the same self-hosted run­ner, on a 3-hour sched­ule. It does the same job. It has more con­fig­u­ra­tion. The setup took me a day.
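
Self-hosted Renovate talks to Forgejo through its Gitea-compatible API; in this sketch the endpoint, token, and repository are placeholders:

```shell
#!/bin/sh
# Run self-hosted Renovate against a Forgejo instance (cron: 0 */3 * * *).
export RENOVATE_PLATFORM="gitea"                            # Forgejo uses the gitea driver
export RENOVATE_ENDPOINT="https://code.example.com/api/v1"  # placeholder
export RENOVATE_TOKEN="${RENOVATE_TOKEN:-placeholder-token}"
if command -v renovate >/dev/null 2>&1; then
  renovate "example-user/example-project" || echo "renovate run failed"
else
  echo "renovate CLI not installed; run it from a scheduled CI job instead"
fi
renovate_run="attempted"
echo "renovate: $renovate_run"
```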

24/7 ven­dor sup­port. GitHub Enterprise gives you a phone num­ber and an SLA. Forgejo gives you an is­sue tracker and a chat room. For a one-per­son op­er­a­tion that is fine. For a 200-engineer or­gan­i­sa­tion it might not be, and that is a real rea­son to wait.

When this is­n’t worth it

I would not move to self-hosted Forgejo if any of the fol­low­ing are true.

The team has zero ap­petite or ca­pac­ity for run­ning in­fra­struc­ture. A man­aged Forgejo (Codey, or Codeberg for FOSS) closes most of that gap, but you still own the mi­gra­tion cost.

You are heav­ily in­vested in GitHub-specific fea­tures: GitHub Apps mar­ket­place, Codespaces, Copilot Workspace, Advanced Security. Forgejo is a forge, not a de­vel­oper-plat­form-as-a-ser­vice.

Your con­trib­u­tor base is the GitHub so­cial graph. If dis­cov­er­abil­ity mat­ters more than own­er­ship, stay where the con­trib­u­tors are. Or ac­cept the fric­tion, archive your pub­lic repos­i­to­ries with a pointer to your new home once the move is fin­ished, and re­visit the de­ci­sion later.

You don’t have a cred­i­ble op­er­a­tional an­swer for the run­ner. The run­ner is the part where this gets se­ri­ous. If you are not pre­pared to think about KVM iso­la­tion, gVi­sor, nfta­bles, and weekly re­builds, run your CI jobs on a man­aged run­ner host, or stay on GitHub.

The Dutch gov­ern­men­t’s pat­tern is the right model here too. They did not mi­grate every­thing in one step. Code.overheid.nl is a soft-launch plat­form for min­istries to share open-source code, not a whole­sale re­place­ment for every­thing else they use. My setup has the same shape: Forgejo is canon­i­cal for my work, GitHub is a mir­ror, and I am will­ing to re­visit the mir­ror later.

Key take­aways

GitHub is no longer an in­de­pen­dent com­pany with its own CEO. Since August 2025 it has been a unit of Microsoft’s CoreAI di­vi­sion.

The April 2026 out­ages and the Copilot train­ing-data de­fault flip are down­stream of the same shift. Both are pre­dictable from the struc­ture.

US-jurisdictional risk un­der FISA 702 and the CLOUD Act is real and un­re­solv­able from the cus­tomer side. EU data res­i­dency is a com­fort, not a fix.

The Dutch gov­ern­ment picked Forgejo for code.over­heid.nl in April 2026 for the same set of rea­sons. The in­sti­tu­tional pat­tern is form­ing.

A de­fen­si­ble self-hosted Forgejo de­ploy­ment is achiev­able on a sin­gle NUC, but the run­ner is the part that re­quires real care: KVM iso­la­tion, gVi­sor, weekly re­builds, scope-bound to­kens, and an egress fil­ter that says no to your LAN.

Migration fric­tion is real. Archiving your pub­lic GitHub repos­i­to­ries with a pointer to the new home keeps the dis­cov­ery path in­tact while you com­plete the move.


Setting up a free *.city.state.us locality domain

fredchan.org

tl;dr

In the US, you can get a do­main name like som­e­name.city.state.us for free. If your town has its own do­main, you can get name­servers from Amazon Lightsail, send the Interim .US Domain Template to the del­e­gated man­ager for your lo­cal­ity to reg­is­ter one, then point DNS en­tries at your web­host.

What’s a lo­cal­ity do­main?

A lo­cal­ity do­main is a do­main name that’s as­so­ci­ated with a lo­ca­tion in the United States, such as fred­er­ick.seat­tle.wa.us (which cur­rently redi­rects to fred­chan.org). Locality do­mains were first cre­ated in 1992, and the in­fra­struc­ture has been main­tained un­der gov­ern­ment con­tract ever since.

To reg­is­ter one, you must be a US cit­i­zen or per­ma­nent res­i­dent, an or­ga­ni­za­tion in­cor­po­rated in the US, or an or­ga­ni­za­tion with a bona fide pres­ence in the US that reg­u­larly en­gages in law­ful ac­tiv­i­ties or has an of­fice in the US. (For the full ver­biage, see the bot­tom of this form).

Step 1: Choose a lo­cal­ity do­main

Registration of many lo­cal­ity do­mains has been del­e­gated to var­i­ous com­pa­nies that ac­tu­ally han­dle the do­main reg­is­tra­tion. See the list of del­e­gated sub­do­mains for do­mains you can reg­is­ter un­der, which also lists the con­tact e-mail for the cor­re­spond­ing reg­is­trar as of 2009.

Since this list is quite old and some com­pa­nies may have re­struc­tured/​re­named, you may need to hunt for an e-mail for the cur­rent in­car­na­tion of the reg­is­trar. For ex­am­ple, the e-mail listed for seat­tle.wa.us is do­mainrq@nwnexus.com be­long­ing to NW Nexus, which is now NuOz Corporation, so the e-mail I con­tacted was sup­port@nuoz.com.

If you do not live in one of these lo­cal­i­ties, you can try reg­is­ter­ing a do­main un­der gen.your-state.us, like next.gen.oh.us, which is of­fi­cially for gen­eral in­de­pen­dent en­ti­ties. The con­tact should be in the same list.

If the lo­cal­ity do­main you’re look­ing for is­n’t on the list (i.e. reg­is­tra­tion has not been del­e­gated), you’re prob­a­bly screwed. The man­ager of all un­del­e­gated do­mains, NeuStar, will only al­low lo­cal gov­ern­ment agen­cies to reg­is­ter them due to gov­ern­ment pol­icy. This is sup­posed to be tem­po­rary, but the pol­icy has re­mained since 2002.

Step 2: Acquire name­servers

When you get a nor­mal do­main, like fred­chan.org, your do­main reg­is­trar usu­ally pro­vides name­servers for you af­ter you buy. These name­servers are where you put DNS records that point your do­main to an IP ad­dress, like your web host’s IP ad­dress. However, in or­der to reg­is­ter a lo­cal­ity do­main, you need to al­ready have name­servers.

The only place I could find that pro­vides free name­servers for non-top level do­mains (e.g. every lo­cal­ity do­main) is Amazon Lightsail. Lightsail is Amazon’s low cost AWS web host­ing ser­vice. You’re nor­mally sup­posed to rent a web host­ing server from them, but you don’t ac­tu­ally need to do that.

Create an AWS ac­count and go to the Lightsail con­sole.

On the left nav­i­ga­tion panel, click Domains & DNS.

Click the Create DNS zone but­ton.

Select Use a do­main from an­other reg­is­trar and type in the do­main you in­tend to reg­is­ter later.

Click Create DNS zone.

Take note of the do­main names in the Name servers sec­tion. You need these when you fill out the do­main reg­is­tra­tion form.
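If you prefer a terminal over the console, the same zone can probably be created with the AWS CLI. This is a sketch under assumptions: it presumes the AWS CLI is installed and configured, and the domain name is an example.

```shell
# Create a Lightsail DNS zone for the domain you plan to register
# (equivalent to the console steps above).
aws lightsail create-domain --domain-name frederick.seattle.wa.us

# Inspect the zone and its records; the assigned name servers are
# shown in the Lightsail console's Name servers section.
aws lightsail get-domain --domain-name frederick.seattle.wa.us
```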

Step 3: Fill out the do­main reg­is­tra­tion form

Now that you have name servers, you can fill out the Interim .US Domain Template v2.0. In this sec­tion, I’ll walk you through some of the trick­ier parts of this form, as­sum­ing you are reg­is­ter­ing a do­main for your­self.

2. FULLY-QUALIFIED DOMAIN NAME:

This is the do­main you want to reg­is­ter, e.g. fred­er­ick.seat­tle.wa.us.

3. ORGANIZATION INFORMATION

If you are a hu­man be­ing and not an or­ga­ni­za­tion, you can fill out sec­tions 3a-e with your own ad­dress.

4. DESCRIPTION OF ORGANIZATION/DOMAIN:

Describe what you’re do­ing with this do­main. For ex­am­ple, if you ex­pect to host a web­site on it, you can say that. You can also use the do­main later for pur­poses you did­n’t write in the form.

5. Date Operational……:

You can use your birth date here.

6. ADMINISTRATIVE CONTACT OF ORGANIZATION/DOMAIN and 7. TECHNICAL AND ZONE CONTACT

Both of these can be you. 6i, 7i, and 7j can all be your e-mail ad­dress, and if you don’t have a fax num­ber, leave 7k blank.

8. PRIMARY SERVER: HOSTNAME, NETADDRESS and 9. SECONDARY SERVER: HOSTNAME, NETADDRESS

This is where you fill in your name server ad­dresses.

Lightsail will have given you two to four name server ad­dresses, and you need to know the IP ad­dress of each. You can find their IP ad­dresses with an on­line DNS lookup tool or with the dig com­mand in your ter­mi­nal.

Any of the name­servers can be the pri­mary server. Then, for the rest of the servers, you can re­peat sec­tion 9 as many times as you need un­til you’ve added all the name­servers.
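Looking up those IP addresses with dig might look like the following. The name server hostname below is a made-up example of the kind Lightsail assigns; run the command once per name server.

```shell
# +short prints just the answer (the IP address) for each query.
dig +short A ns-1234.awsdns-56.org

# If you also want the IPv6 address, where one exists:
dig +short AAAA ns-1234.awsdns-56.org
```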

10. US NEXUS REQUIREMENTS

Instructions for sec­tion 10 ap­pear at the bot­tom of the form. For in­stance, if the do­main is for per­sonal use and you are a US Citizen, your ap­pli­ca­tion pur­pose is (iii) per­sonal use, your Nexus Category is (category 11) Natural per­son who is a United States Citizen, and you leave Nexus Validator blank.

Step 4: Send the form and wait

Send the form to the do­main reg­is­trar you iden­ti­fied be­fore for your lo­cal­ity do­main. When reg­is­ter­ing, I wrote “I’d like to reg­is­ter a new lo­cal­ity do­main with the fol­low­ing in­for­ma­tion” and pasted the en­tire form con­tents into the e-mail, so they know what the form is for.

This can take days or pos­si­bly weeks, since they’re not usu­ally au­to­mated. If suc­cess­ful, you’ll get an e-mail con­firm­ing that your do­main has been reg­is­tered.

Step 5: Finish set­ting up DNS in Lightsail

You can now go back to the DNS zone you cre­ated in Lightsail and, in the DNS records tab, cre­ate DNS records to point your do­main at what­ever server you want: web servers, Minecraft servers, FTP servers… any­thing!

For free web host­ing, I use GitHub Pages, which has a guide to con­fig­ur­ing cus­tom do­mains. Different web hosts will have slightly dif­fer­ent in­struc­tions, but they will all in­volve cre­at­ing DNS records.
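For GitHub Pages specifically, the records might look something like the fragment below. The domain is an example, and the A-record addresses are GitHub's published Pages IPs as of this writing; always copy the current set from GitHub's own documentation rather than from here.

```text
; Example records for pointing a locality domain at GitHub Pages.
frederick.seattle.wa.us.      A      185.199.108.153
frederick.seattle.wa.us.      A      185.199.109.153
frederick.seattle.wa.us.      A      185.199.110.153
frederick.seattle.wa.us.      A      185.199.111.153
; www subdomain points at your GitHub Pages hostname.
www.frederick.seattle.wa.us.  CNAME  username.github.io.
```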

Once your DNS records are con­fig­ured, you should be able to visit your new lo­cal­ity do­main and see your web­site!

FAQ

Do I ac­tu­ally have to live in my lo­cal­ity do­main’s area?

I’m not re­ally sure. Honestly, I doubt any­one cares enough to ac­tu­ally check that you live at the ad­dress you sup­ply in the reg­is­tra­tion form. I know some­one who used to live in their lo­cal­ity, moved out of the coun­try, and still has their lo­cal­ity do­main. However, your mileage may vary.

Will WHOIS re­quests leak my ad­dress?

Nope. Even though you must sup­ply your ad­dress in the reg­is­tra­tion form, a WHOIS re­quest for your lo­cal­ity do­main will only show in­for­ma­tion about the reg­is­trar.

Special thanks

Big thanks go to sleep­less and Minh Nguyen for their guides on reg­is­ter­ing lo­cal­ity do­mains. After see­ing their guides, I wanted to write my own that would clar­ify some of the ques­tions I had while fol­low­ing them, which led to the cre­ation of this ar­ti­cle!

Twin brothers wipe 96 gov't databases minutes after being fired

arstechnica.com

“Yeah, they could,” Sohaib agreed.

Muneeb noted that an em­ployee they knew would have “some work to do” when the de­struc­tion was re­vealed.

Sohaib fed Muneeb more sug­ges­tions.

“Delete their filesys­tem as well?” he said.

“Smart idea,” said Muneeb.

Sohaib then won­dered if they had been too hasty. Perhaps, he said, “You shoulda had a kill script. Like, black­mail­ing them for some money would have been—”

“No, you do not do that, that’s proof of guilt, man,” Muneeb said.

“No, but the thing was, you al­ways have your opin­ion,” Sohaib com­plained, and the two then bick­ered about whether they might try to black­mail their com­pa­ny’s cus­tomers in­stead.

As the data de­struc­tion went on, Sohaib said, “They’re gonna prob­a­bly raid this place.”

“I’ll clean this shit up,” Muneeb said.

After wip­ing out the data­bases and event logs, the broth­ers re­in­stalled the op­er­at­ing sys­tems on their cor­po­rate lap­tops with the help of an un­named co-con­spir­a­tor.

God guide my words

Sohaib was right—the feds did raid them. It just took three weeks.

On March 12, 2025, a search war­rant was ex­e­cuted at Sohaib’s home in Alexandria. Agents grabbed plenty of tech gear but also turned up seven firearms and 370 rounds of .30 cal­iber am­mu­ni­tion. Given his prior con­vic­tions, Sohaib should have had none of this.

The broth­ers re­mained free for an­other nine months as the in­ves­ti­ga­tion pro­ceeded, but both were even­tu­ally ar­rested on December 3 and in­dicted for a host of crimes (you can read the in­dict­ment here).

Muneeb signed a plea deal on April 15, 2026, ad­mit­ting to the ma­jor al­le­ga­tions in the in­dict­ment.

Sohaib took his case to trial. He lost. On May 7, 2026, a jury found him guilty of con­spir­acy to com­mit com­puter fraud, pass­word traf­fick­ing, and pos­ses­sion of a firearm by a pro­hib­ited per­son. He will be sen­tenced in September.

A History of IDEs at Google

laurent.le-brun.eu

I pre­vi­ously dis­cussed how the main code­base at Google en­forces strict tool­ing and con­ven­tions to al­low the code­base to scale. For many years, there was one glar­ing ex­cep­tion: the IDE.

Context: I worked at Google from 2011 to 2024. Some of the in­for­ma­tion might be ap­prox­i­mate; I’ll up­date it if er­rors are re­ported. This blog post fo­cuses on the main monorepo at Google (google3).

A frag­mented ecosys­tem

As in many com­pa­nies, en­gi­neers at Google have been able to pick their IDE of choice, and this re­sulted in a lot of frag­men­ta­tion. In 2011, some of the most se­nior en­gi­neers were asked a ques­tion: “Is there a way to get a good uni­form IDE for all Googlers?” The an­swer was es­sen­tially “No.” Among oth­ers, Jeff Dean replied:

“Trying to get a group of de­vel­op­ers to all agree on a com­mon ed­i­tor is a recipe for un­hap­pi­ness. Everyone has dif­fer­ent opin­ions about what is im­por­tant here, and the ad­van­tages and dis­ad­van­tages of dif­fer­ent sys­tems are weighed dif­fer­ently by dif­fer­ent peo­ple. In the end, it does­n’t mat­ter that much.”

This was the preva­lent opin­ion for years. After all, it does­n’t mat­ter which IDE your col­leagues use, as long as their code is good. But I worked at Google for 12 years on de­vel­oper tools, and I some­times won­dered about it.

If you look at it from a com­pany pro­duc­tiv­ity stand­point: you don’t want each en­gi­neer to spend too much time set­ting up their ed­i­tor. Although en­gi­neers used dif­fer­ent IDEs, use­ful in­te­gra­tions even­tu­ally had to be reim­ple­mented every­where: Bazel sup­port, Starlark tool­ing, code for­mat­ters, code search in­te­gra­tion, and so on. Google’s in­ter­nal cul­ture made this man­age­able. Engineers would of­ten start tool­ing pro­jects or­gan­i­cally, oth­ers would dis­cover them through the shared code­base and con­tribute. This kind of con­tri­bu­tion is gen­er­ally en­cour­aged (through 20% time and peer bonuses). Critical pro­jects would even­tu­ally be­come of­fi­cially staffed. As an ex­am­ple, a team ded­i­cated to the IntelliJ in­te­gra­tion was formed around 2015.

Some peo­ple might won­der why you’d need a full ded­i­cated team for this. Was the IDE not good enough in the first place? Part of the rea­son is that Google has a set of unique tools, and it just makes en­gi­neers more pro­duc­tive if you can give them a nice IDE in­te­gra­tion. But also, some prob­lems were caused by the sheer size of the monorepo. Traditional IDEs as­sumed that source code, build meta­data, in­dex­ing and analy­sis all hap­pened lo­cally. At Google scale, that as­sump­tion starts to break down.

A Cloud IDE

Around 2013(?), some­thing hap­pened that I had­n’t an­tic­i­pated. Some peo­ple started build­ing a web-based ed­i­tor, named Cider. The name is a ref­er­ence to “Cloud IDE”, with a trail­ing “r” to get a more mem­o­rable name.

In a com­pany where most tools are web-based, where peo­ple spend time in their browser to do code-re­views, nav­i­gate the code­base us­ing Code Search… in a com­pany that uses Chromebooks, it ac­tu­ally makes sense to have a quick way to edit files from the browser.

What sur­prised me though is that Cider even­tu­ally be­came pop­u­lar across en­gi­neers. At first, it was mostly used by tech­ni­cal writ­ers who wanted to edit mark­down files with­out hav­ing to deal with ver­sion con­trol. The work­flow was very ef­fi­cient for fix­ing ty­pos. In one click, you would send the pull re­quest, with an op­tion to au­to­mat­i­cally merge it once ap­proved. Nowadays GitHub has this kind of fea­ture too, but at that time, it felt new to me.

Over time the team added more and more de­vel­oper-ori­ented fea­tures. The turn­ing point came when they added sup­port for code com­ple­tion, through the lan­guage-server pro­to­col.
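The language-server protocol itself is editor-agnostic JSON-RPC over a simple framed transport, which is what makes it a natural fit for a thin web client talking to a remote backend. A minimal sketch of how any LSP client frames a completion request on the wire (the method and field names follow the LSP spec; the file URI and cursor position are made up):

```python
import json


def frame_lsp_message(payload: dict) -> bytes:
    """Encode a JSON-RPC payload with the LSP base-protocol header."""
    body = json.dumps(payload).encode("utf-8")
    # The base protocol prefixes each message with a Content-Length header
    # followed by a blank line, then the JSON body.
    return b"Content-Length: %d\r\n\r\n%b" % (len(body), body)


# A textDocument/completion request asking for completions at line 10, col 4.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        "textDocument": {"uri": "file:///src/example.go"},
        "position": {"line": 10, "character": 4},
    },
}

message = frame_lsp_message(request)
print(message.decode("utf-8").split("\r\n\r\n")[0])  # prints the header line
```

The server's response reuses the same framing in the other direction, so the client only ever needs one tiny codec regardless of language or editor frontend.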

Cider was a light client that opened much faster than tra­di­tional IDEs. All the magic hap­pened on a back­end that in­dexes the en­tire code­base, so that all the data was ready when­ever some­one opened the web­page.

Code in­tel­li­gence re­quires con­nect­ing each iden­ti­fier with its type and ref­er­ences. This forms a huge lan­guage graph that has to be up­dated at every com­mit. And well… the code­base re­ceives many com­mits per sec­ond. But the IDE also needs ac­cess to his­tor­i­cal data. If I’m work­ing on a pro­ject and my col­league merges their code, I don’t want to pick up the changes im­me­di­ately. So my ed­i­tor needs to use the graph cor­re­spond­ing to my last sync date… aug­mented with my lo­cal changes, ob­vi­ously.

With this kind of fea­ture, the pop­u­lar­ity of Cider con­tin­ued to rise among cer­tain de­mo­graph­ics. For ex­am­ple, it was much eas­ier to con­vince Go de­vel­op­ers to switch than Java de­vel­op­ers (who ex­pected a much more ad­vanced ed­i­tor). But the joy of search­ing and hav­ing cross-ref­er­ences across a bil­lion files is real.

Cider V: Using VSCode as a fron­tend

The in­vest­ment in the back­end could be jus­ti­fied: it was solv­ing Google-specific prob­lems and there was no good al­ter­na­tive to it. But the fron­tend felt quite lim­ited: it was good for quick fixes, but it could­n’t com­pete with ac­tual IDEs.

The di­rec­tion changed in 2020, when I joined the team as one of the tech leads. At that time, Cider was the dom­i­nant IDE in the com­pany and the ques­tion of its fu­ture came up. It was de­cided to use the VSCode fron­tend in Cider. It was a nat­ural fit: VSCode was al­ready dom­i­nat­ing the IDE land­scape, it was lan­guage-ag­nos­tic, ex­ten­si­ble and built for the web.

By switch­ing to the VSCode fron­tend, we in­her­ited a ma­ture ed­i­tor, a large ex­ten­sion ecosys­tem and years of ex­ist­ing fea­tures. Many Cider fea­ture re­quests were al­ready solved prob­lems in VSCode. More im­por­tantly, the ex­ten­sion sys­tem would un­lock teams across the com­pany and re­move the Cider team from the crit­i­cal path.

Screenshot of Cider V, 2022

Even with a dozen en­gi­neers in the fron­tend team, it took a cou­ple of years to build a com­plete suc­ces­sor to Cider. In 2021, the open beta was used by 5000 en­gi­neers, but a lot of work re­mained to in­te­grate every­thing and pol­ish the ex­pe­ri­ence. The team had to sup­port ver­sion con­trol; in­te­grate the code re­view tool; pro­vide code com­ple­tion and refac­tor­ing fea­tures us­ing the Cider back­end; re­design the way ex­ten­sions are shipped and up­dated; etc.

Many users were pas­sion­ate and used to the Cider ed­i­tor, and ex­pected every lit­tle de­tail to be the same in Cider V. Small work­flow changes or an ex­tra click here and there could be­come an adop­tion blocker for some users. So the pol­ish part of the pro­ject re­quired months of it­er­a­tions. Even color schemes gen­er­ated an ab­surd amount of dis­cus­sion. As Joshua Bloch ob­served back in 2011, “the only thing that gen­er­ates more re­li­gious fer­vour than pro­gram­ming lan­guages is text ed­i­tors and IDEs.”

I could also write about the in­ter­ac­tions with the VSCode en­gi­neers and how we con­tributed changes back to VSCode, but this blog post is long enough. I’ll try to write more about it one day. But let’s say that we had to main­tain our lo­cal fork, up­date monthly, and we tried as much as pos­si­ble to re­duce our lo­cal hacks and align with the up­stream code.

Design ex­plo­ration for the code re­view in­te­gra­tion, 2022

A Uniform IDE

I started the blog post with a ques­tion about “a uni­form IDE for all Googlers”. It did­n’t com­pletely hap­pen but, by 2023, 80% of the de­vel­op­ment in the main Google code­base hap­pened in Cider V (and the num­ber kept in­creas­ing).

Each IDE has its pros and cons, but Cider at­tracted users by hav­ing the best in­te­gra­tions with the com­pany tools, such as ex­cel­lent ver­sion con­trol sup­port and a code re­view in­te­gra­tion where the re­viewer com­ments are shown in­line in the ed­i­tor.

What I found most ex­cit­ing was the side ef­fects of hav­ing most users us­ing the same tool. It meant that we could in­vest more re­sources in the tool (because each change has more im­pact). I was tech lead for the IDE ex­ten­si­bil­ity and, soon, teams across the com­pany reached out and started de­vel­op­ing their own ex­ten­sions to im­prove their spe­cific work­flows. After two years, around 100 in­ter­nal ex­ten­sions were be­ing de­vel­oped. This en­abled many sce­nar­ios that were pre­vi­ously in­fea­si­ble.

In 2023, the man­age­ment pushed all the teams to in­te­grate more and more AI fea­tures. This led to cool fea­tures such as Resolving Code Review Comments with Machine Learning and Smart Paste for con­text-aware ad­just­ments to pasted code. And of course AI code com­ple­tion.

As more AI fea­tures are in­te­grated into the IDE, the ad­van­tages of hav­ing a sin­gle, ex­ten­si­ble plat­form be­come even more ob­vi­ous. Of course, it was very ex­pen­sive and very few com­pa­nies can jus­tify this kind of work. But I be­lieve that the move to a “standard” (even if it’s not man­dated) IDE has been very im­pact­ful.

In the end, stan­dard tool­ing cre­ates lever­age.

Comments are closed, but feed­back is wel­come. You can dis­cuss on Hackernews or Mastodon. If you like this kind of con­tent, you can sub­scribe through RSS. To get email no­ti­fi­ca­tions, try a third-party tool like Feedrabbit.


Kickstarter Is The Latest Platform Seemingly Forced To Ban Adult Content By Payment Processors

kotaku.com

Kickstarter, the crowd­fund­ing plat­form used to fund the likes of Larian Studios’ Divinity: Original Sin 2 and Warhorse Studios’ Kingdom Come: Deliverance, has re­cently up­dated its “Mature Content” guide­lines to pro­hibit con­tent that is deemed “violent,” “derogatory,” and “sexually photo-realistic,” and, based on emails sent to cre­ators on the plat­form, the pay­ment proces­sor Stripe may be to blame for the sud­den shift away from NSFW con­tent.

At some point over the past few days, seem­ingly around May 11, based on Internet Archive snap­shots, Kickstarter’s “Rules” page was up­dated to specif­i­cally pro­hibit sev­eral forms of NSFW con­tent. While the pre­vi­ous ver­sion of the page sim­ply pro­hib­ited “Pornographic con­tent,” it now con­tains some oddly spe­cific re­stric­tions, in­clud­ing, but not lim­ited to, “implied sex acts,” “MILF/DILF” con­tent, “implied nu­dity,” and any­thing fea­tur­ing “female nip­ples/​are­o­las,” “genitalia,” and “anuses.” Good heav­ens, they’ve even banned “buttocks.”

Why? According to a re­port by The Daily Cartoonist, Kickstarter may be un­der pres­sure from its pay­ment proces­sor, Stripe, which Palantir Chairman Peter Thiel and X pro­pri­etor Elon Musk par­tially own. Kickstarter and Stripe did not im­me­di­ately re­spond to a re­quest for com­ment.

According to artist and writer Mike Wolfer, Kickstarter be­gan email­ing cre­ators as early as March 2026 to in­form them that Stripe will “conduct its own re­view” of any pro­jects fea­tur­ing “adult/NSFW con­tent,” and may choose to shut down any crowd­sourced pro­jects while they’re live, “or even af­ter” they’ve been suc­cess­fully funded. As The Daily Cartoonist notes, the choice to limit NSFW con­tent on Kickstarter feels es­pe­cially sud­den, con­sid­er­ing the plat­form launched its “Kickstarter After Dark” newslet­ter in September 2025, which aims to show­case “the best adult-ori­ented pro­jects” fea­tured on the web­site.

If Kickstarter’s newly im­ple­mented rules have in­deed been forced upon them by Stripe, it would­n’t be the first time that a pay­ment proces­sor has had a hand in lim­it­ing NSFW con­tent. Both Steam and Itch.io had to purge count­less NSFW games from their plat­forms in 2025 fol­low­ing pres­sure from bank­ing part­ners and pay­ment proces­sors like Visa and Mastercard. The Australian anti-porn group Collective Shout claimed credit for the cen­sor­ship, but the move in­creas­ingly seems to be part of a larger shift by com­pa­nies that move money around the in­ter­net to crack down on what can ac­tu­ally be bought, sold, and shared on it.

Correction 5/13/26, 11:22 a.m. ET: Pillars of Eternity 2 was made by Obsidian.

Princeton faculty mandate proctoring for in-person exams, upending 133 years of precedent

www.dailyprincetonian.com

All in-per­son ex­am­i­na­tions at Princeton will be proc­tored start­ing July 1, rep­re­sent­ing the most sig­nif­i­cant change to the honor sys­tem since it was es­tab­lished in 1893. The fac­ulty passed a pro­posal re­quir­ing in­struc­tor su­per­vi­sion at Monday’s fac­ulty meet­ing, with one op­pos­ing vote.

The his­toric vote was the cul­mi­na­tion of months of de­lib­er­a­tion within the ad­min­is­tra­tion and stu­dent gov­ern­ing bod­ies about how to ad­dress in­creas­ing con­cerns over aca­d­e­mic in­tegrity vi­o­la­tions, in­clud­ing the pro­lif­er­a­tion of AI us­age. The pro­posal cleared a full fac­ulty vote as the fi­nal of three re­quired rounds of ap­proval, hav­ing al­ready been passed unan­i­mously by the Committee on Examinations and Standing and the Faculty Advisory Committee on Policy.

According to the pol­icy pro­posal, pre­vi­ously sent by Dean of the College Michael Gordin to the Faculty Advisory Committee and in­cluded in Monday’s meet­ing notes, in­struc­tors will re­main pre­sent in exam rooms “as a wit­ness to what hap­pens,” but are in­structed not to in­ter­fere with stu­dents. If a sus­pected Honor Code vi­o­la­tion oc­curs, proc­tors will doc­u­ment their ob­ser­va­tions and sub­mit a re­port to the stu­dent-run Honor Committee, where they may later tes­tify un­der the same stan­dards used for other wit­nesses.

The pro­posal notes that ad­di­tional de­tails, in­clud­ing proc­tor-to-stu­dent ra­tios and guide­lines re­gard­ing mon­i­tor­ing prac­tices, will be fi­nal­ized in con­sul­ta­tion with fac­ulty and stu­dent rep­re­sen­ta­tives be­fore the pol­icy takes ef­fect.

Princeton’s honor sys­tem dates back to 1893, when the fac­ulty first in­sti­tuted the Honor Code fol­low­ing a stu­dent pe­ti­tion to elim­i­nate proc­tor­ing dur­ing ex­am­i­na­tions. Since then, the honor sys­tem has re­lied on in­di­vid­ual ac­count­abil­ity, with stu­dents pledg­ing both to re­frain from aca­d­e­mic dis­hon­esty and to re­port those they wit­ness in vi­o­la­tion.

Following the Honor Code’s orig­i­nal im­ple­men­ta­tion, proc­tor­ing was ex­plic­itly banned in the Rules and Procedures of the Faculty and the Rights, Rules, Responsibilities of the University, which re­mained in ef­fect for 133 years up un­til Monday’s vote.

The pol­icy pro­posal cites AI and per­sonal elec­tronic de­vices as ma­jor cat­a­lysts be­hind the pol­icy shift. “The ease of ac­cess of these [AI] tools on a small per­sonal de­vice have also changed the ex­ter­nal ap­pear­ance of mis­con­duct dur­ing an ex­am­i­na­tion,” it reads, mak­ing cheat­ing “much harder for other stu­dents to ob­serve (and hence to re­port).”

The pro­posal also points to a grow­ing re­luc­tance among stu­dents to re­port peers di­rectly. The pro­posal claims that anony­mous re­port­ing of al­le­ga­tions has in­creased in re­cent years, fu­eled by fears of “doxxing or sham­ing among their peer groups” on­line.


In The Daily Princetonian’s 2025 Senior Survey of over 500 se­niors, 29.9 per­cent of re­spon­dents re­ported that they had cheated on an as­sign­ment or exam dur­ing their time at Princeton. 44.6 per­cent of se­nior re­spon­dents re­ported knowl­edge of Honor Code vi­o­la­tions that they chose not to re­port. Only 0.4 per­cent of se­niors re­sponded say­ing that they had re­ported a peer for an Honor Code vi­o­la­tion.

An Undergraduate Student Government sur­vey of stu­dents cited in the pro­posal re­port­edly found that a ma­jor­ity would “favor proc­tor­ing or are in­dif­fer­ent to any change,” though a sizeable mi­nor­ity op­poses it “on the grounds that stu­dents should be­have hon­or­ably, and that fac­ulty and stu­dents should trust each other given the 1893 Honor Code com­pact.”

Similarly, stu­dents and fac­ulty pre­vi­ously in­ter­viewed by the ‘Prince’ ex­pressed di­vided views on the pol­i­cy’s im­ple­men­ta­tion. Some cited the in­ad­e­quacy of the cur­rent stu­dent re­port­ing model, while oth­ers said the in­tro­duc­tion of proc­tors could erode the trust that de­fines Princeton’s aca­d­e­mic cul­ture.

The his­toric change comes in the wake of a November pol­icy change man­dat­ing proc­tor­ing for all in­di­vid­ual and small-group ex­ams, in­clud­ing make-up ex­ams, ex­ams taken by stu­dent-ath­letes while trav­el­ing, and ex­ams taken with dis­abil­ity ac­com­mo­da­tions.

In a March guest Opinion col­umn in the ‘Prince,’ Honor Committee Chair Emerita Nadia Makuc ’26 wrote that the Honor Committee, which ad­ju­di­cates sus­pected vi­o­la­tions of the Honor Code dur­ing in-per­son ex­am­i­na­tions, had long dis­cussed in­tro­duc­ing proc­tors as an ad­di­tional wit­ness and re­porter in exam rooms, and that the time had come to take that step.

“The Honor Committee has ex­pe­ri­enced new strains, in­clud­ing an uptick in cases in the last year and chal­lenges such as gen­er­a­tive AI, and stu­dent sen­ti­ment has rec­og­nized that its pro­ce­dures need to bet­ter re­flect the cur­rent chal­lenges to aca­d­e­mic in­tegrity,” Makuc wrote.


Honor Committee hear­ings are con­fi­den­tial, stu­dent-led pro­ceed­ings ad­dress­ing po­ten­tial vi­o­la­tions of the Honor Code. Accused stu­dents can pre­sent de­fenses, call wit­nesses, and be as­sisted by a Peer Representative. If stu­dents are found re­spon­si­ble for Honor Code vi­o­la­tions, the max­i­mum penalty that can be as­signed is ex­pul­sion.

William Aepli ’26, for­mer co-chair of the Peer Representatives, which ad­vises stu­dents ac­cused of aca­d­e­mic in­tegrity vi­o­la­tions, pre­vi­ously told the ‘Prince’ that his or­ga­ni­za­tion would likely see changes in the type of ev­i­dence pre­sented in Honor Committee hear­ings.

The Honor Committee Constitution and the Honor Code it­self will not need to be changed fol­low­ing the in­sti­tu­tion of proc­tor­ing. Gordin pre­vi­ously con­firmed to the ‘Prince’ that just the Rules and Procedures of the Faculty and Rights, Rules, and Responsibilities will need to be up­dated.

According to the proposal, the lines in the Rules and Procedures of the Faculty that previously banned proctoring will be replaced with language mandating instructor supervision during in-person examinations. A one-sentence revision to Rights, Rules, and Responsibilities will be made before the start of the new academic year.

The proposal states that Gordin met with and received endorsements on the policy from “current and former student chairs of the Honor Committee; colleagues from the Office of the Dean of Undergraduate Students and the McGraw Center for Teaching and Learning; the Faculty-Student Committee on Discipline; and the Academics Chair of the Undergraduate Student Government.”

“Undergraduates and faculty are realistic in understanding that having an instructor supervising examinations will not eradicate cheating,” the proposal notes. “However, they believe that there will be a significant deterrent effect, and that having an additional witness in the room will reduce pressure on students to notice and report concerns while they are themselves completing exams.”

Multiple faculty members declined to comment on the new policy following the meeting. Professor of English and Theater Jill Dolan, who served as dean of the college from 2015 to 2024, briefly discussed the change in an interview with the ‘Prince.’

“I think it’s a shame, but it’s necessary,” Dolan said. “But I also do understand why it passed. I think we need some different practices in this day and age, but it does mark a moment.”

Devon Williams is a News contributor for the ‘Prince’ from Menlo Park, Calif. She can be reached at dw9268[at]princeton.edu.

Luke Grippo contributed reporting.

The Emacsification of Software

sockpuppet.org

You want a good Markdown viewer more than you think you do.

We’re all reading a ton of Markdown. It’s been the lingua franca of software development since long before LLMs. But now agents have led us into a cursed renaissance of TUI tooling, and the reading experience has become intolerable. I’m certain that at least 14% of the agita about AI code is driven by exhaustion over incessantly scrolling terminal Markdown.

There are good TUI Markdown viewers. The Charm folks built glow, which I used & enjoyed. My friend Josh built Markless, which is handsome and bristling with features, most notably a table-of-contents nav. These tools are great. But they’re hamstrung by the terminal itself, which is almost always monospaced and thus fatiguing to read.

There are good graphical UI Markdown editors. On macOS, where I live, there’s Obsidian, Typora, and Bear, my personal daily driver. Native UI Markdown editors are attractive and legible. Good reading experiences. But they’re editors. My editors live in particular virtual desktops, with windows arranged just-so, and it drives me up a fucking wall when I click a random .md file and it messes with my editing environment.

So I took a trip to the App Store, where there are in fact Markdown viewers. They’re fine-ish. None of them is good. All I want is for something sane to happen when I double-click a .md file, and the viewers you can grab off the App Store do at first seem sane. It’s only after you live with them for a bit that the problems become apparent. Several of them lack text search. Some have in-app purchases(?!). I settled on one, only to discover a couple days later that it didn’t support copying text into the paste buffer. At that point, I was done.

Suddenly, I realized: a good Markdown viewer was a dumb thing to waste time looking for. It’s 2026. I can just have one extruded for me.

It took several hours to generate a better Markdown viewer than I could find on the App Store, but only about 30 minutes of that was interactive. The rest of it was spent yelling about zoning reform on Facebook while Claude chugged away. Behold, MDV.app:

Now, I’m cheating a little bit with that timeline, because I’d done some preparation weeks before. I recruited an old MacBook to run Claude on. I set up Xcode and git. I got Claude configured, and tracked down some Swift and macOS design skills. But the viewer itself, to a viable state, better than what was on the App Store: about 30 minutes.

MDV isn’t the best macOS application ever built. Or even a particularly competent piece of software (although: it might well be the best dedicated macOS Markdown viewer). But it has improved my quality of life immensely.

It does all sorts of cool things. Claude and I have cracked the code on selecting and copying text out of documents, and on finding fixed strings inside of them. Also: MDV keeps a SQLite FTS index of all the Markdown files in its (editable) history, along with hot-keyed bookmarks and a table-of-contents nav. It remembers my place, across restarts, in documents as I toggle between them. And it has fussy color themes and decent typography, which is the most important feature a dedicated Markdown viewer can have. All this stuff just works now, any time I click a .md. It’s great.
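
A full-text history like that is exactly the kind of feature an agent can scaffold in minutes. As a rough illustration of the idea — not MDV’s actual code; the `docs` table, schema, and function names here are invented — a SQLite FTS5 index over a folder of Markdown files can be this small:

```python
# Sketch of an FTS-indexed Markdown history, in the spirit of MDV's.
# Hypothetical schema: one FTS5 virtual table mapping file path -> body text.
import sqlite3
from pathlib import Path

def build_index(db_path, md_dir):
    """Index every .md file under md_dir into an FTS5 virtual table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(path, body)"
    )
    for md in Path(md_dir).rglob("*.md"):
        conn.execute(
            "INSERT INTO docs (path, body) VALUES (?, ?)",
            (str(md), md.read_text(encoding="utf-8")),
        )
    conn.commit()
    return conn

def search(conn, query):
    """Return paths of documents matching an FTS5 query, best match first."""
    rows = conn.execute(
        "SELECT path FROM docs WHERE docs MATCH ? ORDER BY rank", (query,)
    )
    return [path for (path,) in rows]
```

That’s the whole trick: the viewer owns a tiny database, and “find that doc I read last week” becomes one `MATCH` query instead of a Spotlight expedition.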

Here’s how I know this is a big deal: because every time someone sends me a Signal message, my screen flickers. It doesn’t stop until I explicitly hide the Signal app, which I always forget to do until I’ve been driven 30% of the way to a migraine by subtle flickering.

This is happening because Signal is an Electron app, which means that even though it looks like a native macOS app, it’s not. It’s a whole-ass copy of Chromium rendering a secret web page. It shares this property with virtually every UI app shipped in the last 10 years, each of which carries its own flickering copy of Chromium.

Electron isn’t good. But it’s always been good enough. Building real native user interfaces has historically been a difficult problem, beginning with finding even replacement-level talent to do the work. Capable macOS native UI developers are rare birds.

But Claude isn’t just a replacement-level SwiftUI developer. Claude is actually good.

This isn’t a post about the impending death of Electron (if only). It’s also not about getting you to use my awesome Markdown viewer, which is trivially easy to install and better than any viewer on the App Store and which you should definitely use.

Actually, no! Stop. Don’t install it. Treat my awesome Markdown viewer, which is awesome, in the same way an Emacs user treats a particularly shiny .emacs. Steal the idea and make a better one.

For those unfamiliar, here’s how Emacs culture works: its lifers build whole applications in elisp (one of the world’s great awful languages). These “applications” are always started to scratch a personal itch related to text editing, and invariably expand in ambition and scope past any reasonable boundaries of what text editors should do. If you look at /r/emacs, it’s 0% Product Hunt, 100% show-and-tell.

There are popular elisp packages lots of people use. But except for Magit, nerds are alarmingly apt to replace them with their own shinier versions (and then to show them off, transitioning to the spore-forming phase of the elisp lifecycle). Everything in Emacs is malleable.

Until now, the Achilles heel of Emacs culture has been that, except for Magit, its packages tend to be wretched user experiences. Ugly, slow, and discoverable only after inflicting years of elisp cortical injuries on yourself.

But AI agents have fracked Emacs culture, and it’s leaking out into the wider world. Given access to a screen and inputs, agents reliably build native user interfaces. Native UI was the province of professionally packaged programs. Now it’s all as bespoke as your editor configuration. And, while I’m sure there’s an upper limit to how good those interfaces can be (with current frontier models), that ceiling is higher than anything you can do in a TUI.

What does it mean for software to be Emacsified? Let’s get into it.

First, it’s personal software. Most of it will be useful only to its creator, and then forgotten, just like the dozens of obsolete little elisp programs littering my .emacs. Personal software defines the ethos of Emacs, which was carefully designed over decades to nurture these kinds of tools. “Emacsification” clocks that everything now works this way, not just baroque text editors.

Still, every once in a while, one of these programs will escape containment. It’ll be useful enough for more than one person to install. But even then, the released artifact won’t be the most important thing about it. The source code won’t be either. If an agent wrote all the SwiftUI code in my project, what do you have to gain from closely reading it?

I’m probably only a little bit right about this, but I think a significant driver of new Emacs packages is a catalytic reaction between your messy local configuration and everyone else’s elisp code. Once you know how to get things done in elisp, it can be easier to build your own solution than to package-install an existing one. In that kind of environment, the code is of passing interest. What matters are the ideas, the observation that “yeah, you can do that, and it’ll work well”.

For the kinds of software I’m talking about, you want the prompts more than you want the source code.

If you’re a nerd comfortable with the idea of rolling your own software, everything is now programmable, not merely in a technical sense but a practical one. And that gets to a feeling I think a lot of people have when creating software with agents: what does it mean to say you’re “building” it? “Building” implies more effort than you’re expending. What you’re doing feels a lot more like configuring, on a platform that has suddenly become vastly more configurable. A platform that feels a lot more like Emacs.

The first thing an AI-pilled developer tells you after taking the plunge is how they’re finally finishing all the random side projects they’d collected over the years.

That was an exciting prospect on its own. But it is now also the case that those things, hyperspecific as they might be, can also be pleasant to use. The irony of Emacsification undercutting many of the arguments for putting up with Emacs itself, and its janky user interfaces, is not lost on me. Magit is still the best thing going. For now.

I don’t have a grand pronouncement to offer about the Future of Software. But I’m pretty sure nerd software is going to get a lot more interesting. How many clanky terminal apps can we drastically (and easily) improve? I’ll finally be able to understand iostat! Across a fleet of hosts, even. And bpftrace! Have you seen the shit Brendan Gregg had to put up with to do terminal visualizations from bpftrace? You don’t have to put up with any of this anymore. In fact, neither do I.

I’m a vulnerability researcher, and I’ve been like a kid in the candy shop for the first half of 2026 with all the exploit development breakthroughs in agent coding. But I understand that makes me a weirdo, and that for most of you all that comes with those advancements is dread.

So I’m glad to have something new to talk about that actually feels like an unalloyed good. Building native UI is now fun; a lot more fun than building web interfaces ever was. Give it a shot; make something stupidly specific to your own problems, enjoy it for a little while, and then share it somewhere — or, better yet, just a screenshot and the prompts you used to make it.

Introducing Claude for Small Business

www.anthropic.com

We’re launching Claude for Small Business—a package of connectors and ready-to-run workflows that put Claude inside the tools small businesses depend on—to help small business owners take full advantage of AI and cross items off the to-do list.

Small businesses account for 44% of U.S. GDP and employ nearly half the private-sector workforce, but their adoption of AI has lagged behind larger enterprises. Tools and training are rarely tailored to the ways small businesses operate, and as a result their use often stops at the chat window. As part of our public benefit mission, we are committed to helping business owners harness AI more fully and effectively for their most important work.

Claude for Small Business is a toggle install that puts Claude to work inside the tools small business owners already use: Intuit QuickBooks, PayPal, HubSpot, Canva, Docusign, Google Workspace, and Microsoft 365. From these tools, it can plan payroll, close the month, run a sales campaign, chase invoices, and more.

“Small businesses make up nearly half the American economy, but they’ve never had the resources of bigger companies. AI is the first technology that can finally close that gap, which is why we’re launching Claude for Small Business, alongside training and partnerships to make sure AI shows up for the entrepreneurs and communities who need it most. Claude for Small Business runs inside the tools owners already rely on, like QuickBooks, PayPal, and HubSpot, and takes on the work that piles up after hours, like planning payroll, chasing invoices, or kicking off a marketing project. People run the business, and Claude helps take the late-night work off their plates.”

—Daniela Amodei, Co-founder and President of Anthropic

How it works

Toggle on Claude for Small Business inside Claude Cowork, connect the tools you already use, and pick the job. Claude does the work; you approve before anything sends, posts, or pays.

It ships with 15 ready-to-run agentic workflows across finance, operations, sales, marketing, HR, and customer service. It also includes 15 skills built on the repeatable tasks owners told us slow them down most.

These in­clude:

Planning payroll with confidence. Settle your QuickBooks cash position against incoming PayPal settlements, build a 30-day forecast, rank what’s overdue, and queue the reminders for you to approve and send.

Closing the month with fewer errors. Reconcile your books against settlements, flag what doesn’t match, write a plain-English P&L, and export a close packet you can forward straight to your accountant through Intuit QuickBooks.

Getting a pulse on your business. Surface your most important business insights on a schedule, all on one page: view your cash position through Intuit QuickBooks, sales trend, pipeline movement, this week’s commitments, and more.

Running your next campaign. Find the slow stretch in your revenue, analyze your HubSpot campaign performance, draft the promo strategy, and generate the assets in Canva to prepare your next send.

There’s also an invoice chaser, margin analyzer, month-end prepper, tax-season organizer, contract reviewer, lead triager, content strategist, and more.

Not only could it problem-solve for me, it also showed me problems I didn’t know I had.

What we used to think were the constraints are just not constraints anymore. It’s empowering. Hours of looking at stuff that doesn’t matter are gone. I want an entire organization where everybody is using these tools daily.

It’s freeing up things that used to be a lot of very tedious clerical work for more value-add tasks.


Connect to your stack

Running through Claude Cowork, each connected tool handles a specific job:

PayPal powers settlements, invoicing, disputes, and refunds inside Claude.

Intuit QuickBooks handles payroll planning, the monthly close, and cash flow, along with tools to help businesses prepare for tax season, and reconciliation work that touches every other system.

HubSpot runs lead triage, customer pulse, and campaign attribution.

Canva generates content for every channel, with the ability to collaborate and edit with your team, publish assets, and track performance.

Docusign sends contracts out for signature, tracks status, and files the executed copy back where it belongs.

The full list of skills, automations, and connectors is available on the solutions page.

Small and mid-market businesses fuel our economies, and for decades, QuickBooks has been proud to be their trusted financial partner. By integrating the agentic AI capabilities of our QuickBooks platform into Claude for Small Business, we’re providing small businesses with AI-powered automations and experiences that allow them to remove the complexities of managing their finances, accelerate payroll workflows, and generate data-backed insights that help them grow and scale with speed and confidence.

At HubSpot, our mission is to help scaling companies grow with AI. We partnered with Anthropic to build the first CRM connector for Claude so go-to-market teams can access their HubSpot context wherever they work. For small businesses, that means getting tailored answers, summaries, and visualizations directly from their customer platform so they can segment smarter, run better campaigns, and drive more leads.

Small businesses need AI that moves at the speed they do. With Canva powering content creation in Claude for Small Business, a business owner can go from idea to published, on-brand design in one flow, while AI streamlines the work in between. It’s part of our vision to make complex AI workflows simple, so we can help people achieve their goals through design.


Built for trust

In a survey we ran with small business owners, half named data security as their single biggest hesitation about AI.

With Claude for Small Business:

You stay in the loop. Every task and workflow you run within Claude is initiated by you. You approve the plan first or, when you’re ready, let it run end-to-end.

Your existing permissions hold. If an employee can’t see something in QuickBooks or Drive today, they can’t see it through Claude.

We don’t train on your data by default on our Team and Enterprise Plans.

Full details are in the Trust Center.

AI Fluency for Small Business

Tools aren’t enough on their own. Owners and their teams need to know when and how to use them, and most haven’t had the opportunity to learn.

That’s why we partnered with PayPal on AI Fluency for Small Business, a free online course on using AI to run a small business. It’s taught by owners who’ve built it into their own operations—Prospect Butcher Co. in Brooklyn, MAKS TIPM Rebuilders in California, and others—with step-by-step guidance on how to use AI in your business safely, responsibly, and ethically. We’ll cover topics like knowing which tasks in your business are right for AI and how you can get started.

“PayPal is proud to partner with Anthropic to help small and medium-sized businesses harness the full potential of the AI-led economy. Together, we are equipping these business owners and entrepreneurs with the tools, expertise, and trusted infrastructure they need to compete and thrive in a rapidly evolving digital economy, and creating new opportunities for them to innovate, grow, and better serve their customers.” — Amy Bonitatibus, Chief Corporate Affairs Officer at PayPal

The course is available on demand starting today.

The Claude SMB Tour

Starting May 14 in Chicago, we’re taking Claude for Small Business on the road. The tour is a free, half-day live AI fluency training and hands-on workshop for 100 local small business leaders per stop. Anthropic and partner Tenex.co are hosting the tour, with local partners at each stop. Attendees get a one-month Claude Max subscription to start integrating AI into their day-to-day workflows.

Spring stops include: Chicago, Tulsa, Dallas, Hamilton Township, Baton Rouge, Birmingham, Salt Lake City, Baltimore, San Jose, and Indianapolis.

Thank you to the Greater Cleveland Partnership and the National Talent Collaborative for piloting the concept with us in March. More cities will be added in the fall.

Partnering with small business–focused nonprofits

As a public benefit corporation, part of Anthropic’s mission is to make sure the gains from AI reach all people and communities, especially those who have historically been last in line for new technology. Small business owners—and the local institutions that fund and advise them—are exactly that audience. So alongside Claude for Small Business, we’re investing in partnerships that put Claude directly in the hands of small business owners and the organizations that help them grow.

We believe AI can meaningfully expand what’s possible for the smallest businesses, including solo entrepreneurs. Together with Workday and the Local Initiatives Support Corporation (LISC), we’re supporting the Workday Foundation Solopreneurship Accelerator Program, which in 2026 will equip an initial cohort of 15 aspiring solopreneurs with seed funding from the Workday Foundation, Claude credits from Anthropic, and an AI-first entrepreneurship curriculum developed by LISC.

Small businesses also depend on an enabling environment, including access to capital. That’s why we’re partnering with three Community Development Financial Institutions (CDFIs) that are deploying AI in their own operations and services: Accion Opportunity Fund, Community Reinvestment Fund USA, and Pacific Community Ventures. With Claude credits and hands-on technical support from our team, these CDFIs are building tools that help more small businesses get funded. Pacific Community Ventures, for example, is using Claude to power its Radiant Data Hub—a shared resource for a network of CDFIs—to collect and synthesize voice-based feedback from its small business clients and their workers to improve products and services.

Getting started

To learn more about Claude for Small Business and access the AI Fluency for Small Business course, get started here.


Scorched Earth 2000 HTML Port

www.scorch2000.com
