
How I Moved My Digital Stack to Europe

monokai.com

On digital sovereignty, and why European cloud is better than you think

April 29, 2026 · 10 min read · Digital Sovereignty, Digital Infrastructure, Digital Autonomy, European Cloud, Europe

There’s a version of this post that starts with a spreadsheet and ends with a quiet sense of satisfaction. That’s mostly how it went. But underneath the practical exercise of swapping one SaaS tool for another was something that felt more urgent: a growing discomfort with how much of my digital infrastructure sat on servers I didn’t control, in a jurisdiction increasingly prone to unpredictability, operated by companies whose incentives don’t always align with mine.

Digital sovereignty sounds like a buzzword until you think carefully about what it means. It means knowing where your data lives. It means not being one policy change, one acquisition, or one executive’s bad mood away from losing access to tools your business depends on. It means choosing infrastructure based on values, not just convenience.

So I started migrating.

Analytics

Google Analytics was the obvious first target. It’s the canonical example of a service that’s free because you are the product: your visitors’ behavior is funneled back into Google’s advertising machinery.

Self-hosting Matomo solved this cleanly. The data stays on my own server, and I’m fully GDPR-compliant without the cookie consent theater that Google Analytics typically requires. The reporting is comprehensive, the interface is familiar enough, and I own everything.

The main downside is maintenance overhead. You’re now responsible for updates, backups, and keeping the server healthy. For most setups this is low-friction, but it’s not zero friction.
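The point of self-hosting shows up in where the tracking requests go. A minimal sketch of Matomo’s HTTP tracking endpoint, with an illustrative hostname and site ID:

```python
from urllib.parse import urlencode

# Sketch: with self-hosted Matomo, pageviews are reported to your own
# server via Matomo's HTTP tracking endpoint (matomo.php). The
# analytics.example.com hostname and site ID are illustrative.
def tracking_url(matomo_base: str, site_id: int, page_url: str) -> str:
    """Build a minimal Matomo tracking request URL."""
    params = urlencode({"idsite": site_id, "rec": 1, "url": page_url})
    return f"{matomo_base}/matomo.php?{params}"

print(tracking_url("https://analytics.example.com", 1,
                   "https://example.com/post"))
```

Every hit lands on a server you administer, which is the whole privacy argument in one line.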

Email

Proton Mail is based in Switzerland, not EU territory, but Swiss privacy law is closely aligned with GDPR and arguably stronger in some respects. Proton builds its business model around privacy rather than advertising, and end-to-end encryption is baked in at the protocol level rather than bolted on. The email client is solid, the calendar works well, and for anyone moving away from US-based services, it sits comfortably in the same spirit as the rest of this stack.

One adjustment is getting used to Proton’s filter system, which is a bit more limited than Gmail’s. Gmail lets you write filters against virtually anything, including the full body of the message. Proton doesn’t support filtering on email content at all. So if you’ve built a workflow around catching specific phrases or keywords in message bodies, you’ll have to rethink it. For most people this won’t be a dealbreaker, but it’s worth knowing before you migrate.

There’s also a practical limitation worth flagging: Proton caps custom domains at three, even on the Duo plan. If you run several domains, like separate addresses for different projects or businesses, you’ll hit that ceiling quickly and need to rethink how you route and send mail. I ended up consolidating, which was probably overdue anyway, but it wasn’t a choice I made entirely freely.

Proton isn’t free, and its fees are substantial compared to other options, though the subscription does cover the whole suite of Proton apps.

Password Management

Once I was in the Proton ecosystem, moving password management there as well made sense. Proton Pass is end-to-end encrypted, open source, and benefits from the same Swiss jurisdiction as the rest of Proton’s stack.

1Password, which I left behind, is a genuinely great product, and this was a lateral move more than an upgrade. Proton Pass’s interface is simple, the browser extension works reliably, and having passwords, email, and calendar under one encrypted roof has a certain satisfying coherence to it.

Compute

DigitalOcean has earned its reputation by doing one thing exceptionally well: getting out of your way. The UI is clean, the mental model is simple, and spinning up infrastructure never feels like a chore. It’s the platform that proved developer experience could be a competitive moat.

Scaleway was a pleasant surprise. I expected a capable-but-rough European alternative, but what I found was a platform that’s genuinely well thought out. Servers spun up quickly inside a private network I configured myself, the control panel is clean, and the available options matched everything I actually needed. Scaleway even displays projected CO₂ emissions alongside server location choices, a nice touch.

Object Storage

Scaleway’s object storage is S3-compatible, which makes migration mechanical rather than painful: update your endpoint and credentials, and existing code works unchanged.
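The endpoint swap can be sketched as a configuration difference rather than a code change. These are boto3-style client arguments; the region names and endpoint URL are illustrative, so check your provider’s console for the real values:

```python
# Sketch: an S3-compatible migration is a config change, not a code
# change. Region names and the endpoint URL are illustrative.
def s3_client_kwargs(provider: str) -> dict:
    """boto3-style client arguments; only the endpoint differs."""
    if provider == "aws":
        # boto3 derives the AWS endpoint from the region automatically.
        return {"service_name": "s3", "region_name": "eu-west-1"}
    if provider == "scaleway":
        # Same API, different endpoint and region.
        return {
            "service_name": "s3",
            "region_name": "fr-par",
            "endpoint_url": "https://s3.fr-par.scw.cloud",
        }
    raise ValueError(f"unknown provider: {provider}")

print(s3_client_kwargs("scaleway"))
```

Everything else (bucket operations, presigned URLs, multipart uploads) rides on the same S3 API surface.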

I used a tool called rclone to sync my old AWS S3 storage buckets to the new Scaleway S3 buckets. This took a little more than a week of constant syncing, as these buckets were quite large.

Offsite Backups

OVH is the largest European cloud provider and brings the reliability and pricing you’d expect at that scale. Their object storage works well as a backup destination and ends up cheaper than Backblaze B2 once you configure lifecycle rules to move older backups to the cold storage class.

Getting there, however, requires some patience. The OVHcloud control panel is a labyrinth: the lifecycle rule configuration is buried deep in the documentation, and setting it up involves some work in the terminal. Once it’s set up, it works reliably, and the cost difference is meaningful.
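The lifecycle rule itself is small once you find where to put it. A sketch of the JSON you’d pass to an S3-compatible put-bucket-lifecycle-configuration call; the prefix, day threshold, and storage-class name are placeholders, not OVH’s documented values:

```python
import json

# Sketch of an S3-style lifecycle rule that moves older backups to a
# cold storage class. The prefix, day threshold, and storage-class
# name are placeholders; check your provider's docs for real values.
lifecycle_config = {
    "Rules": [
        {
            "ID": "cold-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "COLD_ARCHIVE"}
            ],
        }
    ]
}

print(json.dumps(lifecycle_config, indent=2))
```

This is the part the control panel hides; applying it from the terminal with an S3-compatible CLI is the “work in the terminal” mentioned above.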

Transactional Emails

Lettermint is a European transactional email service that does the job without the bloat. Deliverability is solid, the API is clean, and the pricing is straightforward.

Compared to SendGrid, the analytics are leaner and the ecosystem integrations are fewer. SendGrid has years of tooling, documentation, and community answers behind it; Lettermint is newer and smaller. For most transactional sending use cases (password resets, notifications, receipts) that doesn’t matter much. But if you’re building complex multi-stream email infrastructure, you’ll want to audit the feature set carefully first.
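For a sense of what “clean API” means in practice: a transactional send is one authenticated POST. This sketch only builds the request object, and the endpoint URL, payload fields, and auth header are hypothetical, not Lettermint’s documented API:

```python
import json
import urllib.request

# Sketch only: builds (does not send) a transactional-email request.
# The endpoint URL, payload fields, and auth header are hypothetical.
def build_send_request(api_token: str, payload: dict) -> urllib.request.Request:
    return urllib.request.Request(
        "https://api.example-esp.com/v1/send",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_send_request("api-token", {"to": "user@example.com",
                                       "subject": "Password reset"})
print(req.get_method(), req.full_url)
```

Swapping providers mostly means changing that URL, the auth header, and the payload shape.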

Error Tracking

Bugsink is a self-hosted error tracking tool that accepts Sentry’s SDK, which means the migration path is almost frictionless: change one line of configuration and you’re done.

To be honest: Bugsink is bare-bones. There’s no performance monitoring, no session replays, no advanced alerting. It’s not a Sentry replacement for teams that use Sentry properly. For me, it’s a simple remote error log: when something breaks in production I get a stack trace, and that’s enough. Sentry’s cloud product is genuinely excellent if you need the full feature set, and for larger engineering teams the breadth almost certainly justifies the cost. But if your use case is “tell me when something broke and show me the stack trace”, self-hosted Bugsink does exactly that with no data leaving your infrastructure.
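Concretely, “change one line” means pointing the SDK’s DSN somewhere else. A sketch with illustrative hostnames and project IDs:

```python
from urllib.parse import urlsplit

# Sketch of the one-line migration: the Sentry SDK keeps working, only
# the DSN changes to point at a self-hosted Bugsink instance. The
# hostnames and project IDs here are illustrative, not real endpoints.
SENTRY_DSN = "https://examplekey@o12345.ingest.sentry.io/67890"
BUGSINK_DSN = "https://examplekey@bugsink.example.com/1"

def dsn_host(dsn: str) -> str:
    """Where error reports for a given DSN will be sent."""
    return urlsplit(dsn).hostname

# With the Sentry SDK, this is the only line that changes:
#   sentry_sdk.init(dsn=BUGSINK_DSN)
print(dsn_host(SENTRY_DSN), "->", dsn_host(BUGSINK_DSN))
```

Everything else in the instrumented application stays untouched.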

AI API Integration

For my AI API integrations, I switched from OpenAI to Mistral. It worked out perfectly, as I was mostly using simpler models anyway.

Mistral is headquartered in Paris and has published compelling open-weight models alongside its API offering. The API is clean, the models are fast and capable, and there’s something coherent about a European AI provider that leans into openness rather than away from it. For my inference workloads, the switch was lateral in quality and meaningfully better in terms of where the money goes.

CDN

Exception № 1

Not everything moved. Cloudflare is a US company, I still use it, and I’m at peace with that.

Here’s the reasoning: Cloudflare sits in front of my public-facing websites. Its job is to cache, protect against DDoS attacks, and make content load fast for visitors around the world. The data flowing through it is already public by definition. I’m not routing private communications or sensitive application data through Cloudflare; I’m using it to serve pages that anyone on the internet can read. The sovereignty calculus is different when the thing you’re protecting is already public.

I did try Bunny CDN, which is European-based and has a great reputation. For straightforward CDN use it’s excellent. But it doesn’t match Cloudflare’s feature set (security rules, the Workers platform, the breadth of configuration options) closely enough to justify the switch for my specific needs. Sometimes the pragmatic answer wins.

Payments

Exception № 2

Stripe is one of the few services I haven’t moved yet, even though payment infrastructure is exactly the kind of thing I care about having in a jurisdiction I trust. Mollie is a Dutch payment processor with full EU incorporation, strong GDPR compliance by design, and a product that has matured considerably in recent years. Its API has converged toward parity with Stripe’s for most common payment flows, and for a European business the regional payment method coverage (iDEAL, Bancontact, SEPA) is arguably better.

The migration is on the list. It’s just not a trivial one. Payment integrations touch billing logic, webhooks, tax invoicing, and customer-facing flows in ways that require careful testing and a good moment to cut over. Mollie is also more expensive than Stripe for my use case.

AI Code Assistance

Exception № 3

This one felt overdue. OpenAI works fine, but the company’s trajectory doesn’t align with my own views anymore. After a period of deliberate drift, I felt the need to switch. Ideally I wanted to use Mistral Vibe here, but it didn’t make the cut, as it couldn’t compete with Claude.

Claude Code is now my day-to-day AI assistant for coding. The reasoning quality is strong, the context handling is genuinely impressive, and Anthropic’s approach to safety and transparency feels more structurally grounded.

Anthropic is a US company, so this doesn’t satisfy the jurisdictional criterion I applied elsewhere. But it satisfies something else: the sense that the organization building the thing has given serious thought to what it’s building and why.

It’s also worth noting that local models are becoming increasingly viable. Qwen, Alibaba’s open-weight model family, is a strong example: capable enough for many real workloads, running entirely on your own hardware, with no data leaving your machine. The gap between frontier API models and what you can run locally is narrowing faster than most people realize.

Not everything is ideal. Most data centers still sit outside Europe, and “open” means different things to different organizations. But the direction is right. A world where capable AI runs on your own hardware, with published weights and transparent training, is a much better world for digital autonomy than one where all inference routes through a handful of closed API providers. We’re not there yet, but the trajectory is encouraging.

Git Version Control

Exception № 4

GitLab also remains for now. GitLab is headquartered in the US but offers self-hosted options, and the company has long had a strong commitment to transparency and open source. A self-hosted instance is on the roadmap, but moving source control is a more significant undertaking than most of these migrations.

GitHub stays in the picture for one specific purpose: public-facing NPM packages and issue tracking for open source software. When you publish a package or maintain public tooling, GitHub is where developers expect to find it. The network effects are real: it’s where the forks, stars, and issue reports come from. For the public-facing surface of open source work, there’s no meaningful sovereignty concern and a lot of practical upside.

Was it worth it?

The practical friction was real but manageable. Most migrations were an afternoon of work: update a credential here, point a DNS record there, export and import some data. A few took longer. None were catastrophic. All in all it took longer than expected, but most of the time went into research and into planning when to do what. Two months in, everything is running without incident. No fires, no regrets.

Digital sovereignty isn’t about paranoia. It’s about being conscious of your infrastructure: you decide who holds your data, who can reach it, and what happens when politics shift. The tools are there. The ecosystem is mostly mature. The only thing stopping me was inertia. It’s entirely possible to run a reliable, capable, professional digital stack mostly on European infrastructure. This migration was proof of that.

GitHub - FULU-Foundation/OrcaSlicer-bambulab

github.com

This version of OrcaSlicer restores full BambuNetwork support for Bambu Lab printers.

You are not limited to LAN only. It works over the internet just like before, through BambuNetwork, with full functionality for normal use and printing.

Installation

Windows

Windows requires WSL 2.

Before first launch, open Command Prompt or PowerShell as Administrator and run:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

Restart Windows, then launch Orca Studio.

Linux

On Linux, a normal installation is enough.

macOS

Work in progress.

BMCU

I also encourage you to use BMCU.

You can find BMCU firmware in my repositories.

Why I'm leaving GitHub for Forgejo

jorijn.com

I moved my code from GitHub to a self-hosted Forgejo. Not because of the outages, but because of who owns what runs on top of them. The Dutch government just made the same call.

On April 27, 2026 the Dutch Ministry of the Interior soft-launched code.overheid.nl, a self-hosted Forgejo instance for Dutch government source code. Project manager Boris Van Hoytema said the platform was born from the requirement that the ministry “has to legally publish [its] source code on a place that [it] owns,” and that Forgejo was picked over GitLab because it is fully open source and offers all the freedoms needed for digital autonomy.

The week before, I quietly moved my own code in the same direction. My canonical Git host is now code.jorijn.com, running Forgejo v15 LTS on a single NUC in a hardened setup. Some of my repositories already live there; the rest are queued. The longer-term plan is to archive my public GitHub repositories once the migration is complete and point each archive at the new home.

Most pieces about leaving GitHub lead with the outages. Outages are real. They are not why I’m leaving. The outages, the AI-training-by-default flip, and the fact that GitHub no longer has its own CEO are all symptoms of one underlying fact: I do not own this. The Dutch government just published the same conclusion. So this is the long version of that thinking, and what the move actually looks like once you decide to make it.

TL;DR

GitHub logged 257 incidents from May 2025 to April 2026, 48 of them major. The CTO publicly apologised and said capacity needs to scale 30x to keep up with AI-driven load.

In August 2025 GitHub stopped having its own CEO. It is now a unit of Microsoft’s CoreAI division, the same group building Copilot and the broader AI stack.

On April 24, 2026 GitHub began using Copilot Free, Pro, and Pro+ user-interaction data for AI training by default, with only a per-user opt-out. There is no repository-level opt-out.

US-jurisdictional risk under FISA Section 702 and the CLOUD Act is unresolved. Microsoft’s own attorney told the French Senate under oath he could not guarantee EU data was safe from silent US government access.

The Dutch government picked Forgejo for code.overheid.nl in April 2026 for the same set of reasons. I’m doing the same for my work.

code.jorijn.com runs Forgejo v15 LTS on a single NUC with a KVM-isolated, weekly-rebuilt Actions runner. Public GitHub repositories will be archived and pointed at the new home as the migration completes.

Why outages aren’t actually the reason

The April 2026 outages were the kind that make engineers angry. On April 23 the merge queue’s squash-merge code path silently reverted previously merged commits across 658 repositories and 2,092 pull requests after a feature flag was rolled out incompletely. Companies including Modal and Zipline did manual data recovery. Four days later, an overloaded Elasticsearch cluster took Pull Requests, Issues, and Packages offline for over six hours.

But pick any month and the picture is the same kind of bad. February 2026 alone logged 37 incidents, including a 3-hour 40-minute outage that took Actions, the Copilot Coding Agent, Code Review, CodeQL, Dependabot, and Pages down at once. October 1, 2025 was a ten-hour macOS-runner outage. The IncidentHub aggregation puts the May 2025 to April 2026 total at 257 incidents and 48 major outages, with roughly 112 hours of total downtime.

The right way to read this list is not “GitHub is unreliable.” Big systems break. The right way to read it is the framing GitHub itself put on it. CTO Vlad Fedorov apologised on April 28 and said capacity has to grow 30x to keep up with the load. He attributed that load directly to “agentic AI workflow growth” since December 2025. The reliability story is downstream of the AI story. GitHub is not slowing down on AI features. It is doubling down on them. The outages are what doubling down looks like in production.

The Pragmatic Engineer pointed out that GitLab, Bitbucket, Vercel, Linear, and Sentry didn’t have the same year. They serve developers under the same overall demand pressure. Whatever GitHub is wrestling with is specific to GitHub.

GitHub no longer has its own CEO

The bigger fact is older than the apology and got a lot less press. On August 11, 2025 Thomas Dohmke stepped down as GitHub’s CEO. Microsoft did not replace him. Instead, GitHub was absorbed into Microsoft’s CoreAI division, a group Satya Nadella introduced in January 2025 with the stated mission to build the end-to-end Copilot and AI stack for both first-party and third-party customers.

GitHub’s revenue, engineering, and support now report into Microsoft’s developer division under Julia Liuson. GitHub’s CPO reports to Microsoft’s AI platform VP. The brand still exists. The independent leadership does not.

This matters because the older argument for staying on GitHub was that Microsoft kept it at arm’s length. From 2018 through 2024 that was substantively true. Dohmke had a real seat. Product decisions were visibly GitHub’s, not Microsoft’s. After August 2025 that argument no longer holds. When you push code to github.com today, you are pushing it to a unit of Microsoft’s AI organization. Whether that bothers you depends on how much you trust Microsoft’s AI organization to make the same decisions about your repository that the older GitHub would have made. I no longer do, and the reason for that distrust shows up in the next section.

The training-data default flipped

On March 25, 2026 GitHub announced a privacy-statement change effective April 24. From that date, interaction data (specifically inputs, outputs, code snippets, and associated context) from Copilot Free, Pro, and Pro+ users will be used to “train and improve our AI models” unless those users opt out.

Three things about that statement matter, in order.

First: opt-out, not opt-in. The default flipped. Anyone using Copilot for free, on Pro, or on Pro+ is now contributing to model training unless they go to the Copilot settings page and turn it off.

Second: there is no repository-level switch. As a maintainer, I cannot tell GitHub “don’t train on interactions inside my repository.” The opt-out is per user account, so each contributor has to make their own choice. In effect, my codebase becomes training material whenever anyone using Copilot Free/Pro/Pro+ touches it, no matter how I license it.

Third: the carve-out for private repositories is narrower than it sounds. GitHub says it does not use private-repo content “at rest” for training, but it does collect “code snippets and interaction context” generated while Copilot is being used inside a private repo. The line between the code at rest and the snippets generated while editing it is, charitably, blurry.

Copilot Business and Copilot Enterprise customers are exempt because they are governed by separate Data Protection Agreements. The split is clean: pay enough and your interactions are not training data. Otherwise they are.

I wrote about agentic GitHub Actions a few weeks ago, and at the time the security model was the headline. The training-data flip is the second half of the same story: GitHub’s strategic interest in your interaction data is structural now, not optional. I am not interested in arguing about the merits of that strategy on someone else’s platform. I would rather not be on the platform.

Then there’s the jurisdiction

Underneath all of this is a layer that doesn’t shift when the privacy statement does. GitHub Inc. and Microsoft Corp. are US companies. Anything they hold sits in scope of US law, including FISA Section 702 and the CLOUD Act of 2018. Both apply regardless of where data physically sits.

Section 702 was reauthorised in April 2024 for two years and is currently running on a 45-day extension signed at the end of April 2026 while Congress argues over a longer renewal. It authorises US intelligence collection against non-US persons through electronic communications service providers domiciled in the US. The CLOUD Act lets US law enforcement compel a US-headquartered company to produce data stored anywhere in the world.

GitHub announced EU data residency for Enterprise Cloud in October 2024. That solves data location. It does not solve jurisdiction. CLOUD Act exposure follows corporate control, not geography.

The most honest articulation of this came not from a regulator but from Microsoft’s own attorney, who told a French Senate hearing in June 2025, under oath, that he could not guarantee French data stored in European Microsoft datacentres was safe from silent US government access.

I covered the broader legal picture in my earlier piece on why “hosted in Frankfurt” doesn’t mean GDPR-compliant, and the operational implications for hosting providers in my piece on NIS2, so I’ll keep the detail there. The point that matters here is narrow. As long as your code lives at github.com, your code lives in US legal territory. EU data residency is a comfort, not a fix.

The Dutch government’s call: code.overheid.nl

This is where the Dutch government’s choice deserves more attention than it got. The legal driver is the Netherlands’ “Open, tenzij” (“open, unless”) policy, in force since 2020: software developed with public funds is open source by default unless security or confidentiality requires otherwise. To comply, the ministry needed somewhere to publish code that it actually controlled. Code.overheid.nl is the answer.

The piece worth pausing on is which forge they chose. The European Commission runs code.europa.eu on self-hosted GitLab, live since September 2022. Germany’s openCode is also GitLab. France’s code.gouv.fr is an aggregator that indexes repos hosted elsewhere, not a forge in itself.

The Dutch government’s choice of Forgejo, not GitLab, was deliberate. As the OSOR article put it, the rationale was that Forgejo is fully open source, with no open-core split, and offers all the freedoms needed for digital autonomy. Van Hoytema added that Forgejo’s roadmap was “way more aligned” with theirs than the alternatives’. The government did not just want a sovereign forge. They wanted a sovereign forge that wasn’t gated behind a commercial vendor’s premium tier.

So the institutional pattern matters: a national government with serious lawyers and a long memory looked at the same picture I was looking at, made the same decision, and shipped it the week before I did. That isn’t proof that the decision is right. It is, at minimum, proof that the decision is no longer fringe.

Why Forgejo, and not GitLab

I weighed GitLab seriously. Self-hosted GitLab CE is a known quantity, with a much larger commercial ecosystem and, frankly, a more polished UI. Two things tipped the choice.

First: licensing. GitLab is open core. The Community Edition is MIT-licensed, but many of the features I’d actually want in production live in the Enterprise tiers under a non-free license. Forgejo went the other way. As of v9.0 in August 2024 the project relicensed from MIT to GPLv3+, with the explicit goal of staying copyleft and resisting future commercial capture of the codebase. The fork from Gitea in December 2022 happened precisely because Gitea Ltd took control of the trademarks and domains in a way the community had not consented to. The lesson learned shows up in the license.

Second: governance. Forgejo lives under Codeberg e.V., a non-profit registered in Berlin since September 2018, with a member-elected board, public budgets, and 300,000+ repositories on its hosted instance. Members vote on the budget annually; the 2025 plan was accepted with 88 in favour, zero against, one abstention. That is not a marketing claim about community governance. That is a German Verein doing what Vereine do.

Forgejo v15.0 LTS shipped on April 16, 2026. It is the project’s 100th release. Long-term support runs through July 15, 2027. Forgejo Actions reached the maturity I needed (ephemeral runners, OpenID Connect, reusable workflow expansion) in v15. Releases since the fork have been steady, with active monthly reports.

The honest caveat: the commercial Forgejo ecosystem is real but thin. The cleanest commercial offering today is Codey by VSHN, a Swiss-hosted managed Forgejo from 19 CHF per month, launched on Servala in March 2025. There is no Red-Hat-style enterprise support subscription. If you need 24/7 phone support and a vendor to point at, you will need to build that yourself, or wait. I am willing to wait, because I would rather own the platform.

What I built, and why it looks the way it does

Code.jorijn.com runs on a single Intel NUC with 64 GB of RAM in my home office. Forgejo v15 LTS, Postgres 17, and Traefik live inside Docker. An Incus-managed KVM virtual machine sits beside them and runs my Forgejo Actions runner. That is the whole platform.

The interesting decision is not in the Forgejo deployment. Forgejo plus Postgres plus a reverse proxy is not interesting. The decision that took the most thought is the runner.

Where the danger actually is

If you self-host a forge, the forge itself is the easy part. The hard part is whatever runs the CI jobs. My runner has to execute npm install, composer install, and pip install on a daily Renovate schedule, against lockfiles generated by my own repositories. That means it executes lifecycle scripts. It means every job potentially runs untrusted code, of the same general shape that recent npm-worm and axios supply-chain attacks used when they rode dependency bots that auto-merged within an hour.

The runner’s job, in other words, is not to run code. The runner’s job is to contain the code while it runs. Everything in the runner architecture exists for that reason. The same logic I described in my piece on unmaintained dependencies applies here: assume any single layer can fail, and design so the next one absorbs the failure.

The defenses, weakest to strongest

The runner uses five layers, in order from softest to hardest.

A persistent KVM virtual machine. The runner lives in its own VM, not in a container on the host. The host’s kernel is not shared with the job environment. A Linux kernel CVE inside the runner has to break the KVM boundary before it can touch the NUC.

gVisor as the default Docker runtime inside that VM. Job containers run under runsc, which intercepts system calls in user space rather than passing them to the host kernel. A container escape has to break gVisor and then the surrounding KVM.

A weekly destructive rebuild. Every Monday at 02:00 UTC the entire VM is destroyed and recreated from a freshly baked Ubuntu base image, with new persistent runner registrations minted against Forgejo. The base image itself rebuilds on Sundays, so the new VM consumes that week’s apt and kernel patches. Persistent state cannot live longer than seven days.

An nftables egress filter on the runner’s bridge. The runner can reach ports 443, 80, 22, and 53 on public destinations (npm, pypi, ghcr, and my own Forgejo via its public hostname through the router’s hairpin NAT). It cannot reach 192.168.0.0/16, 10.0.0.0/8, or 172.16.0.0/12. A compromised job cannot scan my LAN, cannot reach the router admin interface, and cannot reach the host’s other services.

Scope-bound runner tokens, never admin-scoped. The two persistent runner registrations are tied to a single user scope and a single org scope respectively, with write:user and write:organization PAT scopes for management. A leaked token cannot register runners outside its scope, and definitely cannot do anything admin-scoped.
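The intent of the egress layer can be expressed compactly. A sketch of the same policy in Python, using the ports and private ranges described above (the actual nftables rules are not reproduced here):

```python
import ipaddress

# Sketch of the egress policy: CI jobs may reach a few ports on public
# destinations, and never RFC1918 space. Mirrors the nftables intent.
PRIVATE_NETS = [ipaddress.ip_network(n) for n in
                ("192.168.0.0/16", "10.0.0.0/8", "172.16.0.0/12")]
ALLOWED_PORTS = {443, 80, 22, 53}

def egress_allowed(dst_ip: str, dst_port: int) -> bool:
    """Would the filter let a connection to (dst_ip, dst_port) out?"""
    addr = ipaddress.ip_address(dst_ip)
    if any(addr in net for net in PRIVATE_NETS):
        return False  # LAN / router / other host services: blocked
    return dst_port in ALLOWED_PORTS

print(egress_allowed("192.168.1.1", 443))   # LAN destination: blocked
print(egress_allowed("151.101.1.1", 443))   # public HTTPS: allowed
```

The rule order matters: the private-range check wins even for otherwise-allowed ports, which is what keeps a compromised job from probing the router’s admin interface over HTTPS.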

The combination is deliberately overlapping. Each layer is a fence. Together they are a perimeter with depth. None of this is novel, in the sense that all the primitives are upstream and well-documented. What is new is wiring them together for a single-user homelab where the entire platform fits on one NUC and reverts cleanly when something goes wrong.

The underlying primitives (KVM isolation, gVisor, weekly rebuilds, and scope-bound runner registrations) are all things Forgejo and Incus support natively. I just had to combine them.

What I gave up

This is the sec­tion I have to write be­cause every both sides” ar­ti­cle I re­spect has one. So: what does mov­ing to Forgejo cost me, hon­estly?

Discovery and the so­cial graph. GitHub is where my con­trib­u­tors find me. When some­one pushes a small fix to a pub­lic repos­i­tory, they ex­pect to do it on github.com, not on a do­main they have never heard of. The plan I’m work­ing to­wards is to archive each pub­lic GitHub repos­i­tory once the move is done and point its README at code.jorijn.com. The dis­cov­ery path stays in­tact: peo­ple still find me via GitHub, see the archive no­tice, and fol­low the link to the canon­i­cal home. I’m not there yet — a few repos­i­to­ries al­ready live on code.jorijn.com, the rest are queued. Until then, the gap is real, and I ac­cept it.

GitHub Actions ecosys­tem fragility. Forgejo Actions de­lib­er­ately aims for fa­mil­iar­ity, not com­pat­i­bil­ity. Most things work. Some don’t. per­mis­sions: blocks at the work­flow level are silently ig­nored. ac­tions/​check­out@v6 broke au­then­ti­cated check­out on non-GitHub run­ners in early 2026, so I pinned every­thing to v5. ac­tions/​up­load-ar­ti­fact@v4 re­quires the Forgejo-hosted fork. OIDC works but uses a dif­fer­ent work­flow key (enable-openid-connect: true) than GitHub’s per­mis­sions: id-to­ken: write. None of these are block­ers. They are all fric­tion. If your work­flows lean heav­ily on GitHub-specific fea­tures, the mi­gra­tion is a pro­ject, not an evening.

Dependabot. Forgejo does­n’t have it. I run Renovate on the same self-hosted run­ner, on a 3-hour sched­ule. It does the same job. It has more con­fig­u­ra­tion. The setup took me a day.
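For ref­er­ence, a sched­uled work­flow of the kind I mean might look like this. The im­age tag, end­point, and se­cret name are il­lus­tra­tive place­hold­ers, not my ac­tual con­fig­u­ra­tion:

```yaml
# Illustrative: run Renovate every 3 hours from Forgejo Actions.
on:
  schedule:
    - cron: "0 */3 * * *"
jobs:
  renovate:
    runs-on: docker
    container:
      image: renovate/renovate:latest
    steps:
      - run: renovate
        env:
          RENOVATE_PLATFORM: gitea        # Renovate's Forgejo-compatible platform
          RENOVATE_ENDPOINT: https://code.example.com/api/v1
          RENOVATE_TOKEN: ${{ secrets.RENOVATE_TOKEN }}
          RENOVATE_AUTODISCOVER: "true"
```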

24/7 ven­dor sup­port. GitHub Enterprise gives you a phone num­ber and an SLA. Forgejo gives you an is­sue tracker and a chat room. For a one-per­son op­er­a­tion that is fine. For a 200-engineer or­gan­i­sa­tion it might not be, and that is a real rea­son to wait.

When this is­n’t worth it

I would not move to self-hosted Forgejo if any of the fol­low­ing are true.

The team has zero ap­petite or ca­pac­ity for run­ning in­fra­struc­ture. A man­aged Forgejo (Codey, or Codeberg for FOSS) closes most of that gap, but you still own the mi­gra­tion cost.

You are heav­ily in­vested in GitHub-specific fea­tures: GitHub Apps mar­ket­place, Codespaces, Copilot Workspace, Advanced Security. Forgejo is a forge, not a de­vel­oper-plat­form-as-a-ser­vice.

Your con­trib­u­tor base is the GitHub so­cial graph. If dis­cov­er­abil­ity mat­ters more than own­er­ship, stay where the con­trib­u­tors are. Or ac­cept the fric­tion, archive your pub­lic repos­i­to­ries with a pointer to your new home once the move is fin­ished, and re­visit the de­ci­sion later.

You don’t have a cred­i­ble op­er­a­tional an­swer for the run­ner. The run­ner is the part where this gets se­ri­ous. If you are not pre­pared to think about KVM iso­la­tion, gVi­sor, nfta­bles, and weekly re­builds, run your CI jobs on a man­aged run­ner host, or stay on GitHub.
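To il­lus­trate what “an egress fil­ter that says no to your LAN” means in prac­tice, here is a sketch of the sort of nfta­bles pol­icy in ques­tion. The in­ter­face name and sub­net are place­hold­ers, not my ac­tual rules:

```
# Illustrative nftables egress policy for a CI runner bridge:
# refuse anything aimed at the LAN, allow DNS and outbound web.
table inet ci_egress {
  chain forward {
    type filter hook forward priority 0; policy drop;
    iifname "ci-br0" ip daddr 192.168.0.0/16 drop
    iifname "ci-br0" udp dport 53 accept
    iifname "ci-br0" tcp dport { 80, 443 } accept
  }
}
```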

The Dutch gov­ern­men­t’s pat­tern is the right model here too. They did not mi­grate every­thing in one step. Code.overheid.nl is a soft-launch plat­form for min­istries to share open-source code, not a whole­sale re­place­ment for every­thing else they use. My setup has the same shape: Forgejo is canon­i­cal for my work, GitHub is a mir­ror, and I am will­ing to re­visit the mir­ror later.

Key take­aways

GitHub is no longer an in­de­pen­dent com­pany with its own CEO. Since August 2025 it is a unit of Microsoft’s CoreAI di­vi­sion.

The April 2026 out­ages and the Copilot train­ing-data de­fault flip are down­stream of the same shift. Both are pre­dictable from the struc­ture.

US-jurisdictional risk un­der FISA 702 and the CLOUD Act is real and un­re­solv­able from the cus­tomer side. EU data res­i­dency is a com­fort, not a fix.

The Dutch gov­ern­ment picked Forgejo for code.over­heid.nl in April 2026 for the same set of rea­sons. The in­sti­tu­tional pat­tern is form­ing.

A de­fen­si­ble self-hosted Forgejo de­ploy­ment is achiev­able on a sin­gle NUC, but the run­ner is the part that re­quires real care: KVM iso­la­tion, gVi­sor, weekly re­builds, scope-bound to­kens, and an egress fil­ter that says no to your LAN.

Migration fric­tion is real. Archiving your pub­lic GitHub repos­i­to­ries with a pointer to the new home keeps the dis­cov­ery path in­tact while you com­plete the move.


Setting up a free *.city.state.us locality domain

fredchan.org

tl;dr

In the US, you can get a do­main name like som­e­name.city.state.us for free. If your town has its own do­main, you can get name­servers from Amazon Lightsail, send the Interim .US Domain Template to the del­e­gated man­ager for your lo­cal­ity to reg­is­ter one, then point DNS en­tries at your web­host.

What’s a lo­cal­ity do­main?

A lo­cal­ity do­main is a do­main name that’s as­so­ci­ated with a lo­ca­tion in the United States, such as fred­er­ick.seat­tle.wa.us (which cur­rently redi­rects to fred­chan.org). Locality do­mains were first cre­ated in 1992, and the in­fra­struc­ture has been main­tained un­der gov­ern­ment con­tract ever since.

To reg­is­ter one, you must be a US cit­i­zen or per­ma­nent res­i­dent, an or­ga­ni­za­tion in­cor­po­rated in the US, or an or­ga­ni­za­tion with a bona fide pres­ence in the US that reg­u­larly en­gages in law­ful ac­tiv­i­ties or has an of­fice in the US. (For the full ver­biage, see the bot­tom of this form).

Step 1: Choose a lo­cal­ity do­main

Registration of many lo­cal­ity do­mains has been del­e­gated to var­i­ous com­pa­nies that ac­tu­ally han­dle the do­main reg­is­tra­tion. See the list of del­e­gated sub­do­mains for do­mains you can reg­is­ter un­der, which also has the con­tact e-mail for the cor­re­spond­ing reg­is­trar as of 2009.

Since this list is quite old and some com­pa­nies may have re­struc­tured/​re­named, you may need to hunt for an e-mail for the cur­rent in­car­na­tion of the reg­is­trar. For ex­am­ple, the e-mail listed for seat­tle.wa.us is do­mainrq@nwnexus.com be­long­ing to NW Nexus, which is now NuOz Corporation, so the e-mail I con­tacted was sup­port@nuoz.com.

If you do not live in one of these lo­cal­i­ties, you can try reg­is­ter­ing a do­main un­der gen.your-state.us, like next.gen.oh.us, which is of­fi­cially for gen­eral in­de­pen­dent en­ti­ties. The con­tact should be in the same list.

If the lo­cal­ity do­main you’re look­ing for is­n’t on the list (i.e. reg­is­tra­tion has not been del­e­gated), you’re prob­a­bly screwed. The man­ager of all un­del­e­gated do­mains, NeuStar, will only al­low lo­cal gov­ern­ment agen­cies to reg­is­ter them due to gov­ern­ment pol­icy. This is sup­posed to be tem­po­rary, but the pol­icy has re­mained since 2002.

Step 2: Acquire name­servers

When you get a nor­mal do­main, like fred­chan.org, your do­main reg­is­trar usu­ally pro­vides name­servers for you af­ter you buy. These name­servers are where you put DNS records that point your do­main to an IP ad­dress, like your web host’s IP ad­dress. However, in or­der to reg­is­ter a lo­cal­ity do­main, you need to al­ready have name­servers.

The only place I could find that pro­vides free name­servers for non-top level do­mains (e.g. every lo­cal­ity do­main) is Amazon Lightsail. Lightsail is Amazon’s low cost AWS web host­ing ser­vice. You’re nor­mally sup­posed to rent a web host­ing server from them, but you don’t ac­tu­ally need to do that.

Create an AWS ac­count and go to the Lightsail con­sole.

On the left nav­i­ga­tion panel, click Domains & DNS.

Click the Create DNS zone but­ton.

Select Use a do­main from an­other reg­is­trar and type in the do­main you in­tend to reg­is­ter later.

Click Create DNS zone.

Take note of the do­main names in the Name servers sec­tion. You need these when you fill out the do­main reg­is­tra­tion form.

Step 3: Fill out the do­main reg­is­tra­tion form

Now that you have name servers, you can fill out the Interim .US Domain Template v2.0. In this sec­tion, I’ll walk you through some of the trick­ier parts of this form, as­sum­ing you are reg­is­ter­ing a do­main for your­self.

2. FULLY-QUALIFIED DOMAIN NAME:

This is the do­main you want to reg­is­ter, e.g. fred­er­ick.seat­tle.wa.us.

3. ORGANIZATION INFORMATION

If you are a hu­man be­ing and not an or­ga­ni­za­tion, you can fill out sec­tions 3a-e with your own ad­dress.

4. DESCRIPTION OF ORGANIZATION/DOMAIN:

Describe what you’re do­ing with this do­main. For ex­am­ple, if you ex­pect to host a web­site on it, you can say that. You can use it for pur­poses that you don’t write in the form later on as well.

5. Date Operational……:

You can use your birth date here.

6. ADMINISTRATIVE CONTACT OF ORGANIZATION/DOMAIN and 7. TECHNICAL AND ZONE CONTACT

Both of these can be you. 6i, 7i, and 7j can all be your e-mail ad­dress, and if you don’t have a fax num­ber, leave 7k blank.

8. PRIMARY SERVER: HOSTNAME, NETADDRESS and 9. SECONDARY SERVER: HOSTNAME, NETADDRESS

This is where you fill in your name server ad­dresses.

Lightsail will have given you two to four name server ad­dresses, and you need the IP ad­dress of each. You can find them with an on­line DNS lookup tool or with the dig com­mand in your ter­mi­nal.

Any of the name­servers can be the pri­mary server. Then, for the rest of the servers, you can re­peat sec­tion 9 as many times as you need un­til you’ve added all the name­servers.

10. US NEXUS REQUIREMENTS

Instructions for sec­tion 10 ap­pear at the bot­tom of the form. For in­stance, if the do­main is for per­sonal use and you are a US Citizen, your ap­pli­ca­tion pur­pose is (iii) per­sonal use, your Nexus Category is (category 11) Natural per­son who is a United States Citizen, and you leave Nexus Validator blank.

Step 4: Send the form and wait

Send the form to the do­main reg­is­trar you iden­ti­fied be­fore for your lo­cal­ity do­main. When reg­is­ter­ing, I wrote “I’d like to reg­is­ter a new lo­cal­ity do­main with the fol­low­ing in­for­ma­tion” and pasted the en­tire form con­tents into the e-mail, so they know what the form is for.

This can take days or pos­si­bly weeks, since they’re not usu­ally au­to­mated. If suc­cess­ful, you’ll get an e-mail con­firm­ing that your do­main has been reg­is­tered.

Step 5: Finish set­ting up DNS in Lightsail

You can now go back to the DNS zone you cre­ated in Lightsail, and in the DNS records tab, cre­ate DNS records to point your do­main to what­ever server you want: web servers, Minecraft servers, FTP servers… any­thing!

For free web host­ing, I use GitHub Pages, which has a guide to con­fig­ur­ing cus­tom do­mains. Different web hosts will have slightly dif­fer­ent in­struc­tions, but they will all in­volve cre­at­ing DNS records.
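As an il­lus­tra­tion, GitHub’s Pages doc­u­men­ta­tion lists four apex A records; in Lightsail’s DNS records tab, the re­sult­ing zone for a lo­cal­ity do­main would look roughly like this (hostname reuses the ar­ti­cle’s ex­am­ple do­main):

```
; Example records pointing a locality domain at GitHub Pages
frederick.seattle.wa.us.     3600  IN  A      185.199.108.153
frederick.seattle.wa.us.     3600  IN  A      185.199.109.153
frederick.seattle.wa.us.     3600  IN  A      185.199.110.153
frederick.seattle.wa.us.     3600  IN  A      185.199.111.153
www.frederick.seattle.wa.us. 3600  IN  CNAME  frederick.seattle.wa.us.
```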

Once your DNS records are con­fig­ured, you should be able to visit your new lo­cal­ity do­main and see your web­site!

FAQ

Do I ac­tu­ally have to live in my lo­cal­ity do­main’s area?

I’m not re­ally sure. Honestly, I doubt any­one cares enough to ac­tu­ally check that you live at the ad­dress you sup­ply in the reg­is­tra­tion form. I know some­one who used to live in their lo­cal­ity, moved out of the coun­try, and still has their lo­cal­ity do­main. However, your mileage may vary.

Will WHOIS re­quests leak my ad­dress?

Nope. Even though you must sup­ply your ad­dress in the reg­is­tra­tion form, a WHOIS re­quest for your lo­cal­ity do­main will only show in­for­ma­tion about the reg­is­trar.

Special thanks

Big thanks goes to sleep­less and Minh Nguyen for their guides on reg­is­ter­ing lo­cal­ity do­mains. After see­ing their guides, I wanted to write my own that would clar­ify some of the ques­tions I had while fol­low­ing them, which led to the cre­ation of this ar­ti­cle!

Tell New York Times, The Atlantic, and USA Today to keep the crucial work of journalists in the Wayback Machine!

www.savethearchive.com

Petition Text

Dear lead­ers of ma­jor me­dia out­lets,

The free­dom of jour­nal­ists is­n’t only the free­dom to write, it’s also the free­dom to have your work read and re­mem­bered for gen­er­a­tions to come. 2026 is the first World Press Freedom Day in 30 years that jour­nal­ists’ work at ma­jor me­dia out­lets in­clud­ing New York Times, The Atlantic, and USA Today is not be­ing pre­served by the in­de­pen­dent, non­profit Internet Archive. We are call­ing on you and on all news out­lets to pub­licly com­mit to work­ing with the Internet Archive to keep the news in the Wayback Machine.

Since February of this year, the New York Times has told the Internet Archive to stop its Wayback Machine from pre­serv­ing the work of its jour­nal­ists. Meanwhile, Wired re­cently re­ported how USA Today is pub­lish­ing pow­er­ful re­port­ing that re­lies on the Wayback Machine, while iron­i­cally block­ing it from archiv­ing that same re­port­ing. And when over 100 jour­nal­ists de­liv­ered a let­ter cel­e­brat­ing the Internet Archive for their re­spect­ful preser­va­tion of jour­nal­ism, gen­er­at­ing a wave of tech-vi­ral angst, the CEO of The Atlantic weighed in but did­n’t com­mit to find­ing a so­lu­tion. The con­cerns about AI that these pub­li­ca­tions cited as a rea­son to ban the Wayback Machine are wholly hy­po­thet­i­cal. Journalists, and this non­profit pub­lic good that they rely on, de­serve bet­ter.

Though other web­sites use the word “archive” and try to style them­selves as sim­i­lar to the Internet Archive, the Wayback Machine is­n’t a flash-in-the-pan ser­vice that skips over pay­walls. It has been pre­serv­ing the news longer than many peo­ple who sign this pe­ti­tion have been alive. Generative AI is the worst ex­cuse to hide prin­ci­pled re­port­ing from fact-check­ers. If any­thing, AI is the top rea­son why the Wayback Machine is more cru­cial than ever. The truth is that AI com­pa­nies can eas­ily do what knock­off archiv­ing sites are do­ing: ig­nore the rules and grab the news off of pub­lish­ers’ web­sites with­out their con­sent. There is lit­tle to stop them. There’s only one rea­son that the Internet Archive is­n’t do­ing what most of Silicon Valley is: in­tegrity. This in­tegrity shows us that the Internet Archive is trust­wor­thy and aims to op­er­ate for a very long time.

It should. Censorship and au­thor­i­tar­i­an­ism are grow­ing, along with pres­sure to al­ter re­port­ing and erase facts. Journalists fre­quently face death threats, and many have died across the past year for their work. The least we can do out of re­spect dur­ing these hor­rors is to shore up the Wayback Machine’s neu­tral third party preser­va­tion ef­forts so these brave jour­nal­ists’ work is not lost. Their re­port­ing must re­main ac­ces­si­ble not only to their col­leagues and loved ones, but to the eyes of his­tory.

The Wayback Machine makes every on­line news out­let it archives more re­silient against pres­sure to re­move sto­ries that threaten the pow­er­ful. It is in the in­ter­est of any news out­let that still does real jour­nal­ism to cham­pion such an ally in times like these. It should­n’t be this hard to find a way to in­de­pen­dently pre­serve the news. We call on the lead­er­ship of ma­jor me­dia out­lets to com­mit to work­ing with the Internet Archive and get­ting all the news in the Wayback Machine now!

Sincerely,

The Undersigned

Release scrcpy 4.0 · Genymobile/scrcpy

github.com


scr­cpy v4.0

Changes since v3.3.4:

Migrate from SDL2 to SDL3 (#6216)

Add flex dis­play sup­port (#6772)

Add cam­era torch and zoom sup­port (#6243)

Enforce win­dow as­pect ra­tio (#6761, #6774, #2317, #2387, #3460)

Add --keep-active (#6792, #6787)

Add --background-color (#6807, #5855)

Set de­fault back­ground color to dark gray (#6807)

Display dis­con­nected icon be­fore clos­ing (#6662, #6651)

Add F11 as fullscreen short­cut (#6777)

Add Mod+q short­cut to quit (#6780, #6612)

Fix Meta Quest flick­er­ing (#5913)

Fix phys­i­cal vs log­i­cal size con­fu­sion (#6772)

Fix copy-paste on rooted de­vice (#6224)

Fix col­or­space con­ver­sion is­sue (#1868)

Fix high CPU us­age with au­dio si­lence de­coded from OPUS (#6715)

Add ses­sion meta­data for the video stream (#6159)

Respect video ca­pa­bil­i­ties con­straints (#6766)

Set Windows con­sole code page to UTF-8 (#6663)

Fix sup­port for de­vice se­r­ial con­tain­ing spaces (#6663, #6664, #3537)

Detect TCP de­vices pro­vided by mDNS (#6665, #6248)

Keep Windows ter­mi­nal open on er­ror (#6667)

Set MediaCodec KEY_PRIORITY and KEY_LATENCY to min­i­mum val­ues (#6670)

Open the scr­cpy win­dow ear­lier (#6694, #6546)

Fix de­vice ro­ta­tion short­cut (5fedc79)

Use op­ti­mal size align­ment (#6746, #4949, #6236)

Add --min-size-alignment (#6746)

Fix screen­saver dis­abled un­ex­pect­edly with­out video play­back (#6754)

Add --no-window-aspect-ratio-lock (#6761)

Fix ro­ta­tion of square dis­plays (#6770)

Align the vir­tual dis­play size (#6771)

Add --render-fit (#6772)

Set de­fault au­dio-out­put-buffer to 10ms (#6775, #3793)

Fix turn­ing vir­tual dis­play on via right-click (#6788)

Fix OpenGL run­ner shut­down dead­lock (#6794)

Share SDL hints be­tween nor­mal and OTG modes (#6809, #6808)

Upgrade plat­form-tools (adb) to 37.0.0

Upgrade FFmpeg to 8.1.1 (#6715)

Upgrade SDL to 3.4.8

Upgrade dav1d to 1.5.3

Various tech­ni­cal fixes

Highlights

SDL3

This re­lease mi­grates from SDL2 to SDL3 to ben­e­fit from ac­tive main­te­nance, bug fixes, and con­tin­ued up­stream sup­port. SDL3 also en­ables new fea­tures, such as as­pect-ra­tio lock­ing when re­siz­ing the win­dow.

Thanks to the SDL main­tain­ers for their work and for their sup­port and fixes!

See #6216 for de­tails.

Flex dis­play

A vir­tual dis­play can now be made flex us­ing --flex-display (or -x), mean­ing it can be re­sized dy­nam­i­cally along with the client win­dow.

Here is a demo:

scrcpy --new-display=/192 -x --start-app=org.mozilla.firefox --keep-active --no-vd-system-decorations

Here are more ex­am­ples:

# Start Android Settings in a window
scrcpy --new-display=1024x768/160 --start-app=com.android.settings --flex-display

# -x is equivalent to --flex-display
scrcpy --new-display=1024x768/160 --start-app=com.android.settings -x

# By default, the display size/dpi is 1280x960/160
scrcpy --new-display --start-app=com.android.settings --flex-display

Use --keep-active to pre­vent the screen from turn­ing off (see be­low):

scrcpy --new-display -x --keep-active

Increase the bit rate and/​or change the codec to main­tain good qual­ity even with large win­dows:

scrcpy --new-display -x --video-codec=h265 -b16M

See #6772 for more de­tails.

Camera torch and zoom

The cam­era can be con­trolled dy­nam­i­cally:

MOD+t: turn on the cam­era torch

MOD+Shift+t: turn off the cam­era torch

MOD+↑ (up): zoom in

MOD+↓ (down): zoom out

The cam­era torch can also be turned on at startup by --camera-torch:

scrcpy --video-source=camera --camera-torch

The cam­era zoom level can be set with --camera-zoom:

scrcpy --video-source=camera --camera-zoom=1.5

The sup­ported zoom range for each cam­era is given by --list-cameras (any value out­side the sup­ported range will be clamped).

Aspect ra­tio

Previously, the win­dow could be freely re­sized, and black bor­ders were added to main­tain the con­tent as­pect ra­tio.

Thanks to a new API in SDL3, the win­dow as­pect ra­tio is now pre­served while re­siz­ing, avoid­ing black bor­ders.

The old be­hav­ior can be re­stored us­ing --no-window-aspect-ratio-lock.

Keep ac­tive

To pre­vent the de­vice from turn­ing off due to in­ac­tiv­ity, --keep-active pe­ri­od­i­cally sig­nals user ac­tiv­ity to the sys­tem:

scrcpy --keep-active

Unlike --stay-awake and --screen-off-timeout, this does not change any global set­tings, and it works whether the de­vice is plugged in or not.

Background color

The de­fault back­ground was pure black; it is now dark gray.

It can be changed with --background-color, which ac­cepts hexa­dec­i­mal color codes (in 3-digit or 6-digit for­mat):

scrcpy --background-color=#234567
scrcpy --background-color=234567   # leading '#' is optional
scrcpy --background-color=#567     # equivalent to #556677

Disconnected icon

When the con­nec­tion to the de­vice is lost while mir­ror­ing, the win­dow pre­vi­ously closed im­me­di­ately, which could in­cor­rectly sug­gest that scr­cpy had crashed.

To make dis­con­nec­tions clearer, a dis­con­nected icon is now dis­played for 2 sec­onds be­fore clos­ing the win­dow.

The icon re­places the screen con­tent im­me­di­ately.

More de­tails in #6662.

Meta Quest

Since a Meta Quest firmware up­grade, flick­er­ing oc­curred when mir­ror­ing the screen with scr­cpy.

A workaround was im­ple­mented, so mir­ror­ing a Meta Quest now works again.

See the tech­ni­cal de­tails in #5913 (comment).

High CPU us­age with si­lence

A funny bug: play­ing si­lence used much more CPU than play­ing non-si­lence, dur­ing re­sam­pling of au­dio sam­ples de­coded from an OPUS au­dio stream (resampling was about 40× slower).

It turns out it was caused by de­nor­mals: the OPUS de­coder did not pro­duce ex­act ze­ros, but tiny de­nor­mal num­bers, which can cause per­for­mance is­sues.

This was fixed di­rectly in FFmpeg: #6715 (comment)

More short­cuts

F11 now tog­gles fullscreen (like MOD+f), and MOD+q now quits scr­cpy.

BlueSky: @scrcpy.bsky.social

Kickstarter Is The Latest Platform Seemingly Forced To Ban Adult Content By Payment Processors

kotaku.com

Kickstarter, the crowd­fund­ing plat­form used to fund the likes of Larian Studios’ Divinity: Original Sin 2 and Warhorse Studios’ Kingdom Come: Deliverance, has re­cently up­dated its “Mature Content” guide­lines to pro­hibit con­tent that is deemed “violent,” “derogatory,” and “sexually photo-realistic,” and, based on emails sent to cre­ators on the plat­form, the pay­ment proces­sor Stripe may be to blame for the sud­den shift away from NSFW con­tent.

At some point over the past few days, seem­ingly around May 11, based on Internet Archive snap­shots, Kickstarter’s “Rules” page was up­dated to specif­i­cally pro­hibit sev­eral forms of NSFW con­tent. While the pre­vi­ous ver­sion of the page sim­ply pro­hib­ited “Pornographic con­tent,” it now con­tains some oddly spe­cific re­stric­tions, in­clud­ing, but not lim­ited to, “implied sex acts,” “MILF/DILF” con­tent, “implied nu­dity,” and any­thing fea­tur­ing “female nip­ples/​are­o­las, gen­i­talia,” and “anuses.” Good heav­ens, they’ve even banned “buttocks.”

Why? According to a re­port by The Daily Cartoonist, Kickstarter may be un­der pres­sure from its pay­ment proces­sor, Stripe, which Palantir Chairman Peter Thiel and X pro­pri­etor Elon Musk par­tially own. Kickstarter and Stripe did not im­me­di­ately re­spond to a re­quest for com­ment.

According to artist and writer Mike Wolfer, Kickstarter be­gan email­ing cre­ators as early as March 2026 to in­form them that Stripe will con­duct “its own re­view” of any pro­jects fea­tur­ing “adult/NSFW con­tent,” and may choose to shut down any crowd­sourced pro­jects while they’re live, “or even af­ter” they’ve been suc­cess­fully funded. As The Daily Cartoonist notes, the choice to limit NSFW con­tent on Kickstarter feels es­pe­cially sud­den, con­sid­er­ing the plat­form launched its “Kickstarter After Dark” newslet­ter in September 2025, which aims to show­case “the best adult-ori­ented pro­jects” fea­tured on the web­site.

If Kickstarter’s newly im­ple­mented rules have in­deed been forced upon them by Stripe, it would­n’t be the first time that a pay­ment proces­sor has had a hand in lim­it­ing NSFW con­tent. Both Steam and Itch.io had to purge count­less NSFW games from their plat­forms in 2025 fol­low­ing pres­sure from bank­ing part­ners and pay­ment proces­sors like Visa and Mastercard. The Australian anti-porn group Collective Shout claimed credit for the cen­sor­ship, but the move in­creas­ingly seems to be part of a larger shift by com­pa­nies that move money around the in­ter­net to crack down on what can ac­tu­ally be bought, sold, and shared on it.

Correction 5/13/26, 11:22 a.m. ET: Pillars of Eternity 2 was made by Obsidian.

SpaceX

www.spacex.com

Linux gaming is getting faster because Windows APIs are becoming Linux kernel features

www.xda-developers.com

Published May 10, 2026, 12:30 PM EDT


In March 2026, Linux crossed five per­cent of Steam’s user base for the first time, an all-time high for an op­er­at­ing sys­tem that spent two decades as a nov­elty when it came to any kind of gam­ing. Microsoft’s end-of-sup­port dead­line for Windows 10 last October pushed many users to look at al­ter­na­tives, and the Steam Deck has qui­etly turned mil­lions of peo­ple into Linux gamers with­out them re­ally think­ing about it, lead­ing to more wide­spread adop­tion on desk­top ma­chines.

Most of that progress used to hap­pen in­side a piece of soft­ware called Wine, the trans­la­tion layer that con­vinces Windows games they’re run­ning on Windows. Valve’s tuned ver­sion of Wine, called Proton, is what makes Steam Play and the Steam Deck work. For years, every mean­ing­ful im­prove­ment to Linux gam­ing came from changes to Wine and Proton them­selves. That’s still true, but in­creas­ingly the most im­por­tant changes are hap­pen­ing one layer deeper, in­side the Linux ker­nel. The lat­est ex­am­ple of that is some­thing called NTSYNC, a ker­nel-level dri­ver that has of­fered great per­for­mance gains over pre­vi­ous ver­sions of Wine, and is loaded by de­fault on every Steam Deck that’s up-to-date.


What NTSYNC ac­tu­ally is

An ad­di­tional piece of the per­for­mance puz­zle

NTSYNC is a small dri­ver added di­rectly to the Linux ker­nel that gives it a na­tive im­ple­men­ta­tion of a set of Windows-specific syn­chro­niza­tion mech­a­nisms that games de­pend on to co­or­di­nate them­selves.

Modern games jug­gle dozens of things at once. While you’re play­ing, your CPU man­ages the ren­der­ing pipeline, load­ing as­sets, run­ning physics, pro­cess­ing au­dio, han­dling AI NPC rou­tines, and track­ing in­puts, all in par­al­lel across mul­ti­ple cores. All those jobs con­stantly have to co­or­di­nate so they don’t trip over each other.

Quiz

8 Questions · Test Your Knowledge

The his­tory of LinuxTrivia chal­lenge

From a Finnish stu­den­t’s side pro­ject to pow­er­ing the world — how well do you know the story of Linux?

OriginsKernelDistrosPioneersMilestones

Begin

01 / 8

Origins

In what year did Linus Torvalds first an­nounce the Linux ker­nel to the world?

A1989B1991C1993D1995

Correct! Linus Torvalds posted his now-fa­mous mes­sage to the comp.os.minix news­group on August 25, 1991, de­scrib­ing Linux as just a hob­by’ pro­ject. Few could have pre­dicted it would one day run the ma­jor­ity of the world’s servers and smart­phones.

Not quite — Torvalds made his an­nounce­ment in 1991. He was a 21-year-old com­puter sci­ence stu­dent at the University of Helsinki at the time, and his mod­est post de­scribed the pro­ject as some­thing that won’t be big and pro­fes­sion­al’ like GNU.

Continue

02 / 8

Pioneers

Which uni­ver­sity was Linus Torvalds at­tend­ing when he cre­ated the first ver­sion of the Linux ker­nel?

AStockholm UniversityBAalto UniversityCUniversity of HelsinkiDMIT

Correct! Torvalds was study­ing at the University of Helsinki in Finland when he be­gan work­ing on Linux as a per­sonal pro­ject, ini­tially in­spired by MINIX, a small Unix-like sys­tem used for ed­u­ca­tional pur­poses.

Not quite — Torvalds was a stu­dent at the University of Helsinki in Finland. He started Linux partly out of frus­tra­tion with the lim­i­ta­tions of MINIX, which his pro­fes­sor Andrew Tanenbaum had de­signed de­lib­er­ately to be sim­ple for teach­ing.

Continue

03 / 8

Kernel

What op­er­at­ing sys­tem pri­mar­ily in­spired Linus Torvalds to cre­ate the Linux ker­nel?

AMS-DOSBMINIXCBSD UnixDSolaris

Correct! MINIX, cre­ated by pro­fes­sor Andrew Tanenbaum, was the di­rect in­spi­ra­tion for Linux. Torvalds used MINIX on his new Intel 386 PC but found it too re­stricted for his needs, which pushed him to write his own ker­nel.

Not quite — the an­swer is MINIX. Torvalds was us­ing MINIX when he started Linux, and even held a fa­mous on­line de­bate with its cre­ator Andrew Tanenbaum about ker­nel de­sign phi­los­o­phy, specif­i­cally mono­lithic ver­sus mi­cro­ker­nel ar­chi­tec­tures.

Continue

04 / 8

Milestones

What was the ver­sion num­ber of the first pub­licly re­leased Linux ker­nel in 1991?

A0.01B0.1C1.0D0.99

Correct! Linux ver­sion 0.01 was the first ker­nel Torvalds re­leased pub­licly in September 1991. It was a rough, early build that could only run on Intel 386 hard­ware and had very lim­ited func­tion­al­ity, but it marked the true be­gin­ning of the Linux pro­ject.

Not quite — the first pub­lic re­lease was ver­sion 0.01 in September 1991. The ker­nel did­n’t reach ver­sion 1.0 un­til March 1994, by which point it had grown sig­nif­i­cantly in ca­pa­bil­ity and had at­tracted con­tri­bu­tions from de­vel­op­ers around the world.

Continue

05 / 8

Distros

Which Linux dis­tri­b­u­tion, first re­leased in 1993, is one of the old­est still ac­tively main­tained to­day?

AUbuntuBFedoraCSlackwareDDebian

Correct! Slackware, cre­ated by Patrick Volkerding, was first re­leased in July 1993, mak­ing it one of the old­est sur­viv­ing Linux dis­tri­b­u­tions. It is known for its sim­plic­ity and Unix-like phi­los­o­phy, and it con­tin­ues to be main­tained to this day.

Not quite — the an­swer is Slackware, re­leased in 1993 by Patrick Volkerding. While Debian was also founded in 1993, Slackware nar­rowly edges it out as the older re­lease. Ubuntu did­n’t ar­rive un­til 2004, and Fedora launched in 2003.

Continue

06 / 8

Origins

The GNU Project, which pro­vided many tools that paired with the Linux ker­nel, was founded by which de­vel­oper?

AEric RaymondBRichard StallmanCBruce PerensDIan Murdock

Correct! Richard Stallman founded the GNU Project in 1983 with the goal of cre­at­ing a com­pletely free Unix-like op­er­at­ing sys­tem. When the Linux ker­nel ap­peared in 1991, it filled the miss­ing piece GNU needed, and the com­bi­na­tion be­came what many call GNU/Linux.

Not quite — it was Richard Stallman who founded the GNU Project in 1983. Stallman is also known for cre­at­ing the GPL (GNU General Public License) and found­ing the Free Software Foundation, two pil­lars that shaped the le­gal and philo­soph­i­cal foun­da­tion of free soft­ware.

Continue

07 / 8

Milestones

Which com­pany re­leased a land­mark com­mer­cial Linux dis­tri­b­u­tion in 1994, help­ing bring Linux into the en­ter­prise world?

A) Canonical
B) SUSE
C) Red Hat
D) Mandriva

Correct! Red Hat re­leased its first Linux dis­tri­b­u­tion in 1994 and be­came one of the most in­flu­en­tial com­mer­cial Linux com­pa­nies in his­tory. It pi­o­neered the en­ter­prise Linux mar­ket and was even­tu­ally ac­quired by IBM in 2019 for ap­prox­i­mately $34 bil­lion.

Not quite — Red Hat is the an­swer. Founded by Marc Ewing and Bob Young, Red Hat helped prove that com­pa­nies could build sus­tain­able busi­nesses around open-source soft­ware. SUSE Linux also launched in 1994, mak­ing it a close ri­val, but Red Hat be­came the more glob­ally dom­i­nant en­ter­prise force.

08 / 8

Distros

Ubuntu Linux, one of the most pop­u­lar desk­top dis­tri­b­u­tions, is based on which other Linux dis­tri­b­u­tion?

A) Arch Linux
B) Fedora
C) Debian
D) Gentoo

Correct! Ubuntu is based on Debian and was first re­leased in October 2004 by Mark Shuttleworth’s com­pany Canonical. It was de­signed to make Linux more ac­ces­si­ble to every­day users, and its six-month re­lease cy­cle and long-term sup­port ver­sions made it a fa­vorite for both desk­tops and servers.

Not quite — Ubuntu is built on top of Debian. Debian it­self was founded in 1993 by Ian Murdock and is known for its strict com­mit­ment to free soft­ware and sta­bil­ity. Ubuntu in­her­its Debian’s pack­age man­age­ment sys­tem (APT and .deb pack­ages) but adds its own user-friendly layer on top.

Challenge Complete

Thanks for playing!

Windows handles this coordination with a specific set of mechanisms, and before NTSYNC, Wine had to mimic them using workarounds like esync and fsync, which both worked but didn't always match Windows exactly. NTSYNC builds these mechanisms straight into the Linux kernel for the first time, which means Wine no longer has to emulate anything. The developer-facing API calls don't change; Linux simply knows how to answer them natively.
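To make concrete what Wine has to reproduce, here is a rough Python model of one of those Windows primitives, the auto-reset event: when signaled, it wakes exactly one waiter and atomically resets itself. This is an illustrative sketch of the semantics only, not Wine's or the kernel's actual code.

```python
import threading

class AutoResetEvent:
    """Conceptual model of a Windows auto-reset event (illustrative only).

    When signaled, it releases exactly ONE waiter and resets itself.
    That atomic consume-on-wake detail is the kind of behavior
    esync/fsync had to approximate and NTSYNC now handles in-kernel.
    """
    def __init__(self):
        self._cond = threading.Condition()
        self._signaled = False

    def set(self):
        with self._cond:
            self._signaled = True
            self._cond.notify()   # wake at most one waiter

    def wait(self, timeout=None):
        with self._cond:
            if not self._cond.wait_for(lambda: self._signaled, timeout):
                return False      # timed out, signal never arrived
            self._signaled = False  # auto-reset: consume the signal
            return True
```

If two threads are waiting and `set()` is called once, exactly one of them wakes and the signal is gone; a second `wait()` times out until the event is signaled again.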

NTSYNC is part of a grow­ing pat­tern

Not the first time Linux has in­her­ited fea­tures be­cause of Windows

NTSYNC isn't the first time Linux has gained a new feature specifically because Windows games needed it. A few years back, Linux added futex_waitv(), a way for software to wait on several events at once; Windows had that built in for decades, but Linux didn't. Wine had been working around the gap with awkward tricks until the kernel finally got native support.
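As a toy illustration of why that gap was awkward, here is a hypothetical sketch of a wait-any built by polling each event in a loop, the general shape of workaround userspace had to fall back on before a native multi-wait existed. The function name and poll interval are invented for illustration; this is not Wine's actual code.

```python
import threading
import time

def wait_any_polling(events, timeout):
    """Approximate a wait-any over several events by polling each one.

    Correctness is roughly fine, but the loop burns wakeups and adds
    latency between a signal being set and the waiter noticing it,
    which is exactly what a native kernel multi-wait avoids.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for i, ev in enumerate(events):
            if ev.is_set():
                return i          # index of the first signaled event
        time.sleep(0.001)         # poll interval = added latency
    return None                   # timed out, nothing was signaled
```

A native multi-wait replaces the whole polling loop with a single blocking call that the kernel wakes directly, with no sleep interval in the way.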

This work is driven by Valve, by CodeWeavers (the company that employs many of the core Wine developers, including NTSYNC's author Elizabeth Figura), and by a steady stream of contributors who want Linux to be a real gaming platform without depending on out-of-tree patches forever.

These aren’t mag­i­cal per­for­mance gains

fsync was al­ready pretty good

The headline performance gains look great, but they need context. The eye-catching 40 to 200 percent FPS gains cited in NTSYNC's original benchmarks were measured against unmodified upstream Wine, which almost nobody uses to play games on Linux anymore. Most Linux gamers, including every Steam Deck owner, use Proton, which already ships fsync. Compared to fsync, NTSYNC's gains are far more modest: the games that benefit most are the ones that were really struggling before, and anything that already ran at decent framerates will keep running fine.

Valve adopted it any­way

It’s a great sign

Pierre-Loup Griffais, an engineer at Valve, has gone on record saying that fsync was already fast enough, and despite that, Valve still shipped NTSYNC in stable SteamOS in March. That speaks to the fact that fsync is still a workaround at its core and can cause issues beyond poor raw FPS.

These old workarounds got subtle edge cases wrong in ways that produced occasional hitches, deadlocks, or weird behavior in specific games, bugs that don't show up on benchmark charts but can absolutely ruin individual experiences. NTSYNC fixes those at the source by matching Windows behavior exactly, which means that as soon as your favorite distro moves to the new kernel version, whether that's Bazzite, CachyOS, Fedora, or a flavor of Ubuntu, it gets this much-needed fix.

Gaming on Linux con­tin­ues to im­prove by the month

Linux has grown enormously in the gaming department. Where there once was nothing but clever Wine patches and community workarounds, there is now backing from gaming behemoths like Valve, driving changes to the Linux kernel itself. NTSYNC won't be the last time a piece of Windows gets rebuilt inside Linux because gamers needed it, and with more than five percent of Steam's user base now running Linux, the incentive to keep doing it has never been stronger.

Deterministic Fully-Static Whole-Binary Translation without Heuristics

arxiv.org
