
Internet Archive Switzerland

internetarchive.ch

Welcome

Welcome to Internet Archive Switzerland.

Universal Access to ALL Knowledge.

Internet Archive Switzerland is an independent Swiss foundation operating as a non-profit organisation based in St. Gallen. Our primary goal is Universal Access to All Knowledge.

Together with like-minded partners, we collect and preserve digital information for learning and research. Our objective is to ensure that people can find any kind of helpful digital material, today and in the future.

Although at first glance the digital content available on the internet might seem inexhaustible and endlessly growing, it is also evident that digital information is actually short-lived.

We are facing constant changes in file formats, sudden failures of storage media, rapid deletion processes (accidental or deliberate) and an increasing tendency to hide knowledge behind paywalls. All of this jeopardises easy access to information, learning and the shaping of opinions on the basis of facts.

All of this has led us to launch two initiatives in the foundation’s early stages: We are partnering with the University of St. Gallen to build the Gen Artificial Intelligence (AI) Archive, preserving today’s AI models for future generations. And through our Endangered Archives initiative, we invite global partners to explore ways of rescuing vulnerable collections from conflict, disaster, and suppression before they are lost.

Projects

What we are WORKING on.

Project 01 · Research

Gen AI Archive

Artificial Intelligence, Generative AI, and Large Language Models (LLMs) are fundamentally reshaping how humanity creates and shares knowledge. To document this evolution, the University of St. Gallen and Internet Archive Switzerland partner in preserving today’s most profound models for future generations in the Gen AI Archive.

Project 02 · Preservation

Endangered Archives

Cultural heritage and historical records worldwide face ever-growing threats from conflict, instability, and natural disasters. To prevent the loss of this collective memory, Internet Archive Switzerland seeks to establish an initiative called Endangered Archives. In cooperation with UNESCO and other well-established organisations, we aim to rescue vulnerable materials by providing a secure digital haven.

About

The foundation.

Organization

An independent non-profit foundation in St. Gallen, Switzerland. Mission-aligned with the Internet Archive, Internet Archive Canada and Internet Archive Europe, sharing a common goal: Universal Access to All Knowledge.

Our charter states (excerpt): The purpose of the Foundation is to advance the preservation and universal accessibility of all knowledge, as inspired by the United Nations Universal Declaration of Human Rights (Articles 19, 26, and 27) and the United Nations’ Sustainable Development Goal 4, which strives to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all.

Board & Advisers

Executive Director

Roman Griesfelder

Internet Archive Switzerland is led by Roman Griesfelder as Executive Director. Roman is an Austrian citizen who has been living in Switzerland since 1998. The sociologist and business administrator worked for many years in senior roles as a project manager and management consultant, among other things, before holding leading positions at cultural institutions in Switzerland. His wide-ranging interests converge at the points where social, cultural and technological developments intersect and affect the lives of many people, or even just a few individuals.

Location

St. Gallen.

47.4245° N · 9.3767° E

St. Gallen is no stranger to the idea that preserving the record is a form of civic responsibility. Its archival tradition stretches back over a thousand years, a fitting symbolic home for this new chapter of the Internet Archive.

The existence of the Abbey Archives proves that, with conviction and perseverance, it is possible to preserve the foundations of our knowledge about society. This conviction motivates Internet Archive Switzerland to embark boldly on its mission and to pursue it unwaveringly.

We are also delighted to be partnering with the University of St. Gallen to establish the world’s first comprehensive AI archive.

Blog

Latest NEWS.

May 5, 2026 • 5 minutes of reading

Internet Archive Switzerland Launches in St. Gallen

A Thousand Years of Memory, and a New Chapter

On May 5th, 2026, Internet Archive Switzerland celebrates its launch at the exhibition hall of the Abbey Archives of St. Gallen, one of the oldest continuously active archives in the world. We are grateful to Peter Erhart and the Abbey Archives of St. Gallen for hosting us: two institutions, one a millennium old and one…

Read more: Internet Archive Switzerland Launches in St. Gallen

Internet Archive Switzerland: Expanding a Global Mission to Preserve Knowledge

blog.archive.org

Thirty years ago, Brewster Kahle founded the Internet Archive with an ambitious goal: Universal Access to All Knowledge. Today, that mission continues to grow with an exciting new chapter: the launch of the Internet Archive Switzerland, a non-profit foundation based in St. Gallen.

The Internet Archive Switzerland, online at https://internetarchive.ch/, is a newly formed Swiss non-profit foundation that will operate independently within its national context. Its efforts will initially focus on preserving endangered archives from around the world and collecting the generative AI wave that is currently upon us all. With a UNESCO conference planned for November 2026 in Paris, Internet Archive Switzerland is taking a concrete step to explore how endangered archives can be protected.

In parallel, the Swiss foundation is working in partnership with the School of Computer Science at the University of St. Gallen on the Gen AI Archive project, led by Prof. Dr. Damian Borth. Together, they aim to begin archiving AI models, an emerging frontier for preservation.

The choice of St. Gallen is no coincidence. With a thousand-year tradition of archiving and scholarship, the city offers a fitting home for this next phase of digital preservation. Its strong academic environment, including collaboration with the University of St. Gallen, makes it an ideal place to establish a 21st-century memory organization.

“St. Gallen is a very suitable place to take the preservation of our universal knowledge a step further. Stability and innovation go hand in hand here and are embedded in a deep understanding of the importance of cultural heritage,” said Roman Griesfelder, the executive director of Internet Archive Switzerland.

Internet Archive Switzerland joins a growing group of mission-aligned organizations, alongside Internet Archive, Internet Archive Canada, and Internet Archive Europe. Together, these independent libraries strengthen a shared vision: building a distributed, resilient digital library for the world.

Contact Internet Archive Switzerland: Roman Griesfelder, Executive Director, office@internetarchive.ch

I Will Not Add Query Strings to Your URLs

susam.net

By Susam Pal on 09 May 2026

Last evening, a short blog post appeared in my feed reader that felt as if it spoke directly to me. It is Chris Morgan’s excellent post I’ve banned query strings.

Contents

Wisdom on the Web

Wander on the Web

Misfeature

Broken URLs

Qualms

Conclusion

Wisdom on the Web

Chris is someone whose Internet comments I have been reading for about half a decade now. I first stumbled upon his comments on Hacker News, where he left very detailed feedback on a small collection of boilerplate CSS rules I had shared there. I am by no means a web developer. I have spent most of my professional life doing systems programming in C and C++. However, developing websites and writing small HTML tools has been a long-time hobby for me. I have learnt most of my web development skills as a hobbyist by studying what other people do: first by viewing the source of websites I liked in the early 2000s, and later by occasionally getting possessed by the urge to implement a new game or tool and searching MDN Web Docs to learn whatever I needed to make it work. One problem with learning a skill this way is that you sometimes pick up habits and practices that are fashionable but not necessarily optimal or correct. So it was really valuable to me when Chris commented on my collection of boilerplate CSS rules. It helped me improve my CSS a lot. In fact, a few of the lessons from his comment have really stuck with me; I keep them in mind whenever I make a hobby HTML project: always retain underlines in links and retain purple for visited links.

I have been following Chris’s posts and comments on web-related topics since then. He often posts great feedback on web-related projects. Whenever I come across one, I make sure to read it carefully, even when the project isn’t mine. I always end up learning something nice and useful from his comments. Here is one such recent example from the Lobsters story Adding author context to RSS.

Wander on the Web

A couple of months ago, I created a new project called Wander Console. It is a small, decentralised, self-hosted web console that lets visitors to your website explore interesting websites and pages recommended by a community of independent personal website owners. For example, my console is here: susam.net/wander/. If you click the ‘Wander’ button there, the tool loads a random personal web page recommended by the Wander community.

The tool consists of one HTML file that implements the console and one JavaScript file where the website owner defines a list of neighbouring consoles along with a list of web pages they recommend. If you copy these two files to your web server, you instantly have a Wander console live on the Web. You don’t need any server-side logic or server-side software beyond a basic web server to run Wander Console. You can even host it in constrained environments like Codeberg Pages or GitHub Pages. When you click the ‘Wander’ button, the console connects to other remote consoles, fetches web page recommendations, picks one randomly and loads it in your web browser. It is a bit like the now-defunct StumbleUpon, but it is completely decentralised. It is also a bit like web rings, except that the community network is not restricted to being a cycle; it is a graph and it is flexible.
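
To make that owner-defined configuration concrete, here is a minimal sketch of what such a JavaScript file could look like. The property names below are hypothetical, invented for illustration; the actual schema is documented in the project repository.

// Hypothetical sketch of a Wander configuration file. The real
// property names may differ; see codeberg.org/susam/wander.
const wanderConfig = {
  // Neighbouring consoles to fetch recommendations from.
  consoles: [
    'https://example.org/wander/',
    'https://example.net/wander/'
  ],
  // Web pages this website's owner recommends to visitors.
  pages: [
    'https://midnight.pub/',
    'https://int10h.org/oldschool-pc-fonts/fontlist/'
  ]
};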

There are currently over 50 websites hosting this tool. Together, they recommend over 1500 web pages. You can find a recent snapshot of the list of known consoles and the pages they recommend at susam.codeberg.page/wcn/. To learn more about this tool or to set it up on your website, please see codeberg.org/susam/wander.

Misfeature

In case you were wondering why I suddenly plugged my project in the previous section, it is because I recently added a dubious feature to it, one that I myself was not entirely convinced about. That misfeature is relevant to this post.

In version 0.4.0 of Wander Console, I added support for a via= query parameter when loading web pages. For example, if you encountered midnight.pub while using the console at susam.net/wander/, the console loaded the page using the following URL:

https://midnight.pub/?via=https://susam.net/wander/
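
Mechanically, the appending is a one-liner with the standard URL API. A rough sketch of the idea (illustrative only, not the actual Wander code):

// Illustrative only: appending a via= parameter with the URL API.
const target = new URL('https://midnight.pub/');
target.searchParams.set('via', 'https://susam.net/wander/');
console.log(target.href);
// https://midnight.pub/?via=https%3A%2F%2Fsusam.net%2Fwander%2F
// (searchParams percent-encodes the value it is given)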

This allowed the owner of the recommended website to see, via their access logs, that the visit originated from a Wander Console. Chris’s recent blog post is critical of features like this. He writes:

I don’t like people adding tracking stuff to URLs. Still less do I like people adding tracking stuff to my URLs.

https://chrismorgan.info/no-query-strings?ref=example.com? Did I ask? If I wanted to know I’d look at the Referer header; and if it isn’t there, it’s probably for a good reason. You abuse your users by adding that to the link.

I mentioned earlier that I was not entirely convinced that adding a referral query string was a good thing to do. Why did I add it anyway? I succumbed to popular demand. Let me briefly describe my frame of mind when I considered and implemented that feature. When I first saw the feature request on Codeberg, my initial reaction was reluctance. I wasn’t convinced it was a good feature. But I was too busy with some ongoing algebraic graph theory research, another recent hobby, with a looming deadline, so I didn’t have a lot of time to think about it clearly. In fact, everything about Wander Console has been made in very little time during the short breaks I used to take from my research. I made the first version of the console in about one and a half hours one early morning when my brain was too tired to read more algebraic graph theory literature and I really needed a break. During another such break, I revisited that feature request and, despite my reservations, decided to implement it anyway. During yet another such break, I am writing this post.

Normally, I don’t like adding too many new features to my little projects. I want them to have a limited scope. I also want them to become stable over time. After a project has fulfilled some essential requirements I had, I just want to call it feature complete and never add another feature to it again. I’ll fix bugs, of course. But I don’t like to keep adding new features endlessly. That’s my style of maintaining my hobby projects. So it should have been very easy for me to ignore the feature request for adding a referral query string to URLs loaded by the console tool. But I think a tired body and mind, worn down by long and intense research work, took a toll on me.

Although my gut feeling was telling me that it was not a good feature, I couldn’t articulate to myself exactly why. So I implemented the referral query string feature anyway. While doing so, I added an opt-out mechanism to the configuration, so that if someone else didn’t like the feature, they could disable it for themselves. This was another mistake. A questionable feature like this should be implemented as an opt-in feature, not an opt-out feature, if implemented at all. The fact that I didn’t have a lot of time to reason through the implications of this feature meant that I just went ahead and implemented it without thinking about it critically. As the famous quote from Jurassic Park goes:

Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.

Broken URLs

It soon turned out that my gut feeling was correct. After I implemented that feature, a page from one of my favourite websites refused to load in the console. To illustrate the problem, here are a few similar but slightly different URLs for that page:

https://int10h.org/oldschool-pc-fonts/fontlist/

https://int10h.org/oldschool-pc-fonts/fontlist/?2

https://int10h.org/oldschool-pc-fonts/fontlist/?foo

The first and second URLs load fine, but the third URL returns an HTTP 404 error page. The website uses the query string to determine which one of its several font collections to show. So when we add an arbitrary query string to the URL, the website tries to interpret it as a font collection identifier and the page fails to load. That is why, when my tool added the via= query parameter to the first URL, the page failed to load.

Later, with a little time to breathe and some hindsight, I could articulate why adding referral query strings to a working URL was such a bad idea. Altering a URL gives you a new URL. The new URL could point to a completely different resource, or to no resource at all, even if the alteration is as small as adding a seemingly harmless query string. By adding the referral query string, I had effectively broken a working URL from a website I am very fond of.

Qualms

It is also worth asking whether an HTML tool should concern itself with referral query strings at all when web browsers already have a mechanism for this: the HTTP Referer header, governed by Referrer-Policy. That policy can be set at the server level, the document level or even on individual links. The Web standards already provide deliberate controls to decide how much referrer information should be sent. Appending referral query strings to URLs bypasses those controls. It moves a privacy and attribution concern out of the referrer mechanism and embeds it into the destination URL instead. I don’t think an HTML tool should do that.
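
To see how fine-grained those controls are, here is a small sketch of the per-link level. The midnight.pub URL is just the earlier example reused, and the receiving-side line assumes the page was reached through such a link:

// The linking page decides how much referrer detail to send,
// per link, via the standard referrerPolicy attribute:
const link = document.createElement('a');
link.href = 'https://midnight.pub/';
link.referrerPolicy = 'origin'; // send only the origin, not the full URL
link.textContent = 'midnight.pub';
document.body.appendChild(link);

// The receiving page reads whatever the policy allowed through:
console.log(document.referrer); // e.g. 'https://susam.net/' or ''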

There is also a moral question here about whether it is okay to modify a given URL on behalf of the user in order to insert a referral query string into it. I think it isn’t.

Conclusion

In the end, I decided to remove the referral query string feature from Wander Console. One might wonder why I couldn’t simply leave the feature in as an opt-in. Well, the answer is that once I had deemed the feature misguided, I no longer wanted it to be part of my software in any form. The project is still new and we are still in the days of 0.x releases, so if there is a good time to remove features, this is it. But my ongoing research work left me with no time to do it. Finally, when the post I’ve banned query strings appeared in my feed reader last evening, it nudged me just enough to take a little time away from my academic hobby and devote it to removing that ill-considered feature. The feature is now gone. See commit b26d77c for details. The latest release, version 0.6.0, does not have it anymore.

This is a lesson I’ll remember for any new hobby projects I happen to make in the future. If I ever load URLs again, I’ll load them exactly as the website’s author intended. I will never add query strings to your URLs.

I’ve banned query strings — Chris Morgan

chrismorgan.info

🗓️ 2026-05-08 • Tagged /web, /opinions, /meta=only

I don’t like people adding tracking stuff to URLs. Still less do I like people adding tracking stuff to my URLs.

https://chrismorgan.info/no-query-strings?ref=example.com? Did I ask? If I wanted to know I’d look at the Referer header; and if it isn’t there, it’s probably for a good reason. You abuse your users by adding that to the link.

https://chrismorgan.info/no-query-strings?utm_source=example&utm_&c.? Hey! That one’s even worse: UTM parameters are for me to use, not you. Leave my URLs alone.

So I’ve decided to try a blanket ban for this site: no unauthorised query strings.

At present I don’t use any query strings. If I ever start using any query strings, I’ll allow only known parameters. (In past times I used ?t=… and ?h=… cache-busting query strings on stylesheet URLs; and I decided I’m okay breaking such requests; there shouldn’t be any legitimate ones.)

Want to see what happens if you add a query string? Go ahead, try it.

It’s my website: I can do what I want with it.

And you can do what you want with yours!

This is currently implemented in my Caddyfile.
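
As an illustration of the same policy in a different tool, here is a hedged sketch of roughly equivalent behaviour as a tiny Node.js handler. This is my illustration of the idea, not Chris’s actual Caddyfile implementation:

// Sketch only: reject any request that carries a query string.
// Chris's real version is a Caddyfile rule, not this code.
import http from 'node:http';

http.createServer((req, res) => {
  const url = new URL(req.url, 'http://localhost');
  if (url.search !== '') {
    res.writeHead(400, { 'Content-Type': 'text/plain' });
    res.end('No unauthorised query strings.\n');
    return;
  }
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('OK\n');
}).listen(8080);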

LLMs Corrupt Your Documents When You Delegate

arxiv.org

The Intolerable Hypocrisy of Cyberlibertarianism

matduggan.com

I like the Internet. I am old enough to remember the pre-Internet era, and despite the younger generations pining for those simpler days, I was there. Paper maps were absolutely horrible: just you and a compass in your car on the side of the road in the middle of the night, trying to figure out where you are and where you are going. Once, when driving from Michigan to Florida, I got so lost in the middle of the night in Kentucky that I had to pull over to sleep and wait for the sun so I could figure out where I was. I awoke to an old man staring unblinkingly into my car, shirtless, breathing heavy enough to fog the windows. To say I floored that 1991 Honda Civic is an understatement.

You would leave your house and then just disappear. This is presented as kind of romantic now, as if we were just free spirits on the wind who could stop and really watch a sunset. In practice it was mostly an annoying game of attempting to guess where people were. You’d call their job; they had left. You’d call their house; they weren’t home yet. Presumably they were in transit, but you actually had no idea. As a child, my response to people asking me where my parents were was often a shrug as I resumed attempting to eat my weight in shoplifted candy or make homemade napalm with gasoline and styrofoam. Sometimes I shudder as a parent remembering how young I was putting pennies on train tracks and hiding dangerously close so that we could get the cool squished penny afterwards.

Cassettes are the worst way to listen to music ever invented. Tapes squealed. Tapes slowed down for no reason, like they were depressed. Multiple times in my life I would set off on a long road trip, pop in a tape, and within fifteen minutes watch as it shot from the deck, unspooled like the guts from the tauntaun in Star Wars. You’d then spend forty-five minutes at a Sunoco trying to wind it back in with a Bic pen, knowing in your heart you were performing CPR on a corpse. Then you’d put it back in the player out of pure stubbornness, and it would chew itself again immediately, and you’d drive the next six hours in silence with your own thoughts, which were not as good as Pearl Jam.

So I am, mostly, grateful for the bounty the internet has provided. But there is something wrong, deeply wrong, with what we built. The wrongness was there at the start. It was baked into the foundation by people who told themselves a story about freedom, and that story was a lie, and we are all, every one of us, paying their tab.

To understand what happened, we need to go back to the 90s.

A Declaration of the Independence of Cyberspace

One of the first and most classic examples of the ideology that powered and continues to power tech is “A Declaration of the Independence of Cyberspace” by John Perry Barlow, written in 1996. You can find the full text here. I remember thinking it was genius when I first read it. I was young enough that I also thought “Snow Crash” was a serious political document. Today the Declaration reads like one of those sovereign citizen TikToks where someone in traffic court is claiming diplomatic immunity under maritime law.

It helps to know who Barlow was. Barlow was a Grateful Dead lyricist. He was also a Wyoming cattle rancher. He was also, briefly, the campaign manager for Dick Cheney’s first run for Congress. (You did not misread that.) He spent his later years as a fixture at Davos, the World Economic Forum, where the very wealthy gather each January to remind each other that they are interesting. It was at Davos, in February 1996, fueled by champagne and grievance over the Telecommunications Act, that Barlow banged out the Declaration on a laptop and emailed it to a few hundred friends. From there it became, somehow, one of the founding documents of the modern internet.

These increasingly hostile and colonial measures place us in the same position as those previous lovers of freedom and self-determination who had to reject the authorities of distant, uninformed powers. We must declare our virtual selves immune to your sovereignty, even as we continue to consent to your rule over our bodies. We will spread ourselves across the Planet so that no one can arrest our thoughts.

Many of the pillars of the “modern Internet” are here. Identity isn’t a fixed concept based on government ID but a more fluid one. We don’t need centralized control, or really any form of control, because those things are unnecessary. It was this, and the famous earlier “Cyberspace and the American Dream: A Magna Carta for the Knowledge Age”, that laid a familiar foundation for a lot of the culture we now have.

The Magna Carta is also our introduction to the (now familiar) creed of “catch up or get left behind”. The adoption of new technology must be done at the absolute fastest speed possible, with no regulations or checks. You don’t need to worry about the consequences of technology because these problems correct themselves. If you told me the following was written two weeks ago by OpenAI, I would have believed you.

If this analysis is correct, copyright and patent protection of knowledge (or at least many forms of it) may no longer be necessary. In fact, the marketplace may already be creating vehicles to compensate creators of customized knowledge outside the cumbersome copyright/patent process.

The cumbersome copyright/patent process. Cumbersome to whom, exactly? This is always the move. The thing your industry would prefer not to deal with is reframed as an obsolete burden. Your refusal to do it is rebranded as innovation. Your inability to imagine a world where you don’t get exactly what you want becomes a manifesto.

Winner Saw It Coming

So there are dozens of these pieces, and they all read the same: if you don’t regulate these technologies, humanity will only benefit. Education, healthcare, industry, etc. We don’t need regulations because the transformation from the medium of paper to digital has transformed the human spirit. But one piece was extremely surprising to me. Langdon Winner wrote something almost prophetic back in 1997. You can read it here.

He coins the term cyberlibertarianism (or at least his is the first mention of it I could find) and then goes on to describe an almost eerily accurate set of events.

In this perspective, the dynamism of digital technology is our true destiny. There is no time to pause, reflect or ask for more influence in shaping these developments. Enormous feats of quick adaptation are required of all of us just to respond to the requirements the new technology casts upon us each day. In the writings of cyberlibertarians those able to rise to the challenge are the champions of the coming millennium. The rest are fated to languish in the dust.

Characteristic of this way of thinking is a tendency to conflate the activities of freedom-seeking individuals with the operations of enormous, profit-seeking business firms. In the Magna Carta for the Knowledge Age, concepts of rights, freedoms, access, and ownership justified as appropriate to individuals are marshaled to support the machinations of enormous transnational firms. We must recognize, the manifesto argues, that “Government does not own cyberspace, the people do.” One might read this as a suggestion that cyberspace is a commons in which people have shared rights and responsibilities. But that is definitely not where the writers carry their reasoning. What “ownership by the people” means, the Magna Carta insists, is simply “private ownership.” And it eventually becomes clear that the private entities they have in mind are actually large, transnational business firms, especially those in communications. Thus, after praising the market competition as the pathway to a better society, the authors announce that some forms of competition are distinctly unwelcome. In fact, the writers fear that the government will regulate in a way that requires cable companies and phone companies to compete. Needed instead, they argue, is the reduction of barriers to collaboration of already large firms, a step that will encourage the creation of a huge, commercial, interactive multimedia network as the formerly separate kinds of communication merge.

In all, he lays out four pillars of this ideology.

Technological determinism. The new technology is going to transform everything, it cannot be stopped, and your only job is to keep up. Stewart Brand’s actual quote, which Winner pulls out and lets sit there like a body on display, is “Technology is rapidly accelerating and you have to keep up.” There’s no room to ask whether we want any of this. The wave is coming. Surf or drown.

It does not occur to anyone in this discourse that ‘drown’ is a choice the wave is making, not a natural law. Waves do not have intentions. Destroying your livelihood and leaving you to rot isn’t a requirement of the natural order, as much as that would be convenient.

Radical individualism. The point of all this technology is personal liberation. Anything that gets in the way of the individual maximizing themselves, be it government, regulation, social obligation, or your annoying neighbors, is an obstacle to be removed. Winner notes, with what I imagine was a very dry expression, that the writers of the “Magna Carta for the Knowledge Age” cited Ayn Rand approvingly. In 1994. As intellectual grounding. For a document about computers.

There is something deeply funny about a movement claiming to invent the future and grounding its case in a Russian émigré’s airport novels about steel barons in love with their own reflections.

Free-market absolutism. Specifically the Milton Friedman, Chicago School, supply-side flavor. The market will sort it out. Regulation is theft. Wealth is virtue. George Gilder, who co-wrote the Magna Carta, had previously written a book called Wealth and Poverty that helped sell Reaganomics to the masses. He then wrote Microcosm, which argued that microprocessors plus deregulated capitalism would liberate humanity. He was very serious about this.

Don’t worry, Gilder is still out there. He loves the blockchain and crypto now. He now writes about how Bitcoin will save the soul of capitalism, which it is somehow doing while also destroying the planet. Both can be true in his cosmology. The ideology is flexible like that.

A fantasy of communitarian outcomes. This is the part that should make you laugh out loud. After establishing that government is bad, regulation is theft, and the individual is sovereign, the cyberlibertarians then promise that the result of all this will be… rich, decentralized, harmonious community life. Negroponte: “It can flatten organizations, globalize society, decentralize control, and help harmonize people.” Democracy will flourish. The gap between rich and poor will close. The lion will lie down with the lamb, and the lamb will have a Pentium II.

We also have the advantage of hindsight and know, without question, that all of these predicted outcomes were wrong. Not ‘directionally wrong’ or ‘wrong in the details’. Wrong the way it would be wrong to predict that if you set your kitchen on fire, the result will be a renovation.

You have to hold these four ideas in your head at the same time to see the trick. The cyberlibertarians wanted you to believe that radical individualism plus deregulated capitalism plus inevitable technology would produce communitarian utopia. This is, on its face, insane. It is the economic equivalent of claiming that if everyone punches each other really hard, eventually we’ll all be hugging.

But Winner’s sharpest observation, the one I keep coming back to, isn’t about any of the four pillars individually. It’s about the move underneath them. He writes:

“Characteristic of this way of thinking is a tendency to conflate the activities of freedom-seeking individuals with the operations of enormous, profit-seeking business firms.”

This is the entire game. This is how “don’t tread on me” becomes “Meta should be allowed to do whatever it wants.” This is how the rights of the lone hacker working in their garage become indistinguishable from the rights of a multinational with a market cap larger than most countries’ GDP. The Magna Carta literally argues that the government should reduce barriers to collaboration between cable companies and phone companies in the name of individual freedom and social equality. Winner caught this in 1997.

That is why obstructing such collaboration — in the cause of forcing a competition between the cable and phone industries — is socially elitist. To the extent it prevents collaboration between the cable industry and the phone companies, present federal policy actually thwarts the Administration’s own goals of access and empowerment.

What makes the essay uncomfortable to read now is that Winner wasn’t even predicting the future. He was just describing what was already happening and noting where it would obviously lead. He saw the media mergers and asked the question nobody in the industry wanted to answer: what happened to the predicted collapse of large centralized structures in the age of electronic media? Where, exactly, did the decentralization go? He saw that the cyberlibertarians were going to deliver the opposite of everything they promised, and that they were going to keep getting paid to promise it anyway.

He was writing before Google. Before Facebook. Before the iPhone. Before YouTube. Before Twitter, Bitcoin, Uber, AirBnB, OpenAI, and the entire app economy. Before any of the actual examples that would eventually prove him right existed. He just looked at the people doing the talking, listened to what they were saying, and wrote down where it ended. It is not a long essay. He didn’t need a long essay. The future was right there on the page, in their own words. He just had to read it back to them.

The essay closes with a question that has, to my knowledge, never been seriously answered by the industry it was aimed at:

“Are the practices, relationships and institutions affected by people’s involvement with networked computing ones we wish to foster? Or are they ones we must try to modify or even oppose?”

Twenty-eight years later, the industry still treats this question as somewhere between naive and seditious. It’s the question Barlow’s declaration was specifically designed to make unaskable. And it remains, to this day, the only question that actually matters.

Caveat emptor

When you look at these early formative writings, so much of what we see now becomes clear. The cyberlibertarian deal was always the same: you’re on your own. The industry would build the infrastructure, take the profits, and shove every consequence, every harm, every cost, every responsibility, onto somebody else.

There is no greater example to me than the moderator. Anyone who has ever moderated a forum or a subreddit knows that adding the word “cyber” to a space doesn’t suddenly turn people into better humans. People are still people. They flame each other, they post slurs, they doxx, they harass, they spam, they post CSAM, they radicalize each other, they grief, they coordinate, they lie. A space with humans in it requires governance.

These spaces produce, with frightening regularity, the exact behavior any kindergarten teacher could have predicted. Then the platforms act surprised.

But the cyberlibertarian model required pretending it was unforeseeable. The platforms couldn’t acknowledge that they needed governance, because acknowledging it would mean acknowledging responsibility, and acknowledging responsibility would mean acknowledging liability, and acknowledging liability would mean the entire economic model collapses. So instead the industry invented a beautiful fiction: governance happens, but it happens by magic, performed by volunteers, for free, who we will simultaneously rely on and mock.

Reddit is run by unpaid moderators. Wikipedia is run by unpaid editors. Stack Overflow was run by unpaid experts and is now a ghost town. On TikTok and Twitter it is the unknowable “algorithm” that is the cause of and solution to every problem, backed by capricious moderators who delight in stopping free speech. Unless you don’t like it; then it’s negligent moderation in defense of your enemies.

Open source is run by unpaid maintainers having nervous breakdowns. The platforms collect the rent. The people doing the actual work of making the platforms livable get nothing, and when they ask for anything, like recognition, tools, basic protection from harassment, they’re told they’re power-tripping nerds who should touch grass.

This is also the crypto story, just with the masks off. What if we made worse money on purpose, money that bypassed every protection consumers had won over the previous century, money that couldn’t be reversed when stolen, money that funded ransomware attacks on hospitals and pump-and-dumps targeting people’s retirement accounts? The cyberlibertarian answer was: that’s freedom. The losses were real. People killed themselves. Hospitals had to turn away patients. The architects became billionaires and bought yachts and now sit on the boards of AI companies, where they are reinventing the same con with a new vocabulary.

Now, Winner got one thing wrong, and it’s worth pausing on, because it’s the most interesting wrinkle in all of this. What actually happened was weirder and worse. The cyberlibertarians became the corporations. They didn’t sell out. They didn’t betray their principles at the first offer of money. They simply scaled until their principles became inconvenient, and then they stopped mentioning them.

Once the platforms got large enough to be unstoppable, once they captured enough of the regulatory apparatus to write their own rules, the libertarian rhetoric got quietly shelved like a college poster you took down before your in-laws came over. Meta no longer pretends it stands for free speech and seemingly takes delight in putting its thumb on the scale. TikTok users have invented an entire euphemistic shadow language to evade automated censorship, like “unalive,” “le dollar bean,” and “graped,” that would have made 1996 Barlow weep into his bolo tie.

Copyright and patents matter when they’re Apple’s copyright and patents. Or Google’s. Or OpenAI’s. Go try to make a Facebook+ website and see how quickly Meta is capable of responding to content it finds objectionable.

Cyberlibertarianism was the ladder. Once they were on the roof, they kicked it away and started charging admission to look at the view.

So the Internet is Doomed?

Remember, I like the Internet. I said it in the beginning and it is still true. I love the Fediverse. I love the weird Discords about small tabletop RPGs I’m in. I spend hours in the MiSTer FPGA forums. There are corners that are good. But they’re mostly good because they’re not big enough to be worth breaking up.

It feels increasingly like I’m hanging out in the old neighborhood dive bar after most of the regulars have moved away. The lighting is the same. The bartender remembers your order. But you can hear yourself think now, and that’s mostly because the room is half empty and the jukebox finally died. The new clientele is from out of town. They are taking pictures of the menu.

If we want to have a serious conversation about why we are in the situation we’re in, it is no longer possible to pretend that the broken ideology that put us on this trajectory is still somehow compatible with the harsh realities that surround us. It is not clear to me if democracy can survive a deregulated Internet. A deregulated Internet filled with LLMs that can perfectly impersonate human beings, powered by unregulated corporations with zero ethical guidelines, seems like a somewhat obvious problem. Like an episode of Star Trek where you, the viewer, are like “well, clearly the Zorkians can’t keep the Killbots as pets.” It doesn’t take some giant intellect to see the pretty fucking obvious problem.

If we want to save the parts of the internet worth saving, we have to evolve. We have to find some sort of ethical code that says: just because I can do something and it makes money, that is not sufficient justification to unleash it on the world. Or, more simply: just because I want to do something and you cannot actively stop me, that does not make doing it a good idea. We have waited thirty years for the cyberlibertarian future to arrive and produce the promised harmonious community. It’s time to face the facts. It’s never coming. The bus left in 1996. The bus was never real.

People did not get better because they went online. Giving everyone access to a raw, unfiltered pipeline of every fact and lie ever produced did not turn them into better-educated people. It broke them. It allowed them to choose the reality they now inhabit, like ordering off a menu. If I want to believe the world is flat, TikTok will gladly serve me that content all day. Meta will recommend supportive groups. There will be hashtags. There will be Discords. There will be a guy named Trent who runs a podcast. I will never have to face the deeply uncomfortable possibility that I might be wrong about anything, ever, until the day I die, surrounded by people who agree with me about everything, including which of the other mourners are secretly lizards.

That is the internet we built. It was not an accident. It was the product of a specific ideology, written down by specific people, at a specific cocktail party in Davos, in 1996. Winner watched it happen and told us where it was going. We did not listen. There is still time, maybe, to start.

Apple is increasing my cortisol levels

blog.kronis.dev

Date: 2026-05-09

I’m creating a simple developer utility to make managing Claude Code profiles (e.g. running it with DeepSeek, or some OpenRouter models) a little bit easier.

Edit: I just did the first release, which you can check out on ccode.kronis.dev, or go directly to the Itch.io page to either download or buy the pre-built binaries or look at the source code. It’s a simple utility and it’s early on (consider getting it for free first and only paying later, if it feels useful), but currently the code is not signed.

The utility is written in the Go language, and the tooling there makes it really easy to compile for various platforms - I get a static executable that I can put anywhere I want. Even before the release, I wanted to see how easy it would be to ship it.

It works just fine for distributing Linux software (same deal, after chmod +x).

It works sort of fine for distributing Windows software (I get an .exe; SmartScreen might have a word or two, though you can click through it in the same pop-up).

Distributing Mac software

It does not just work for macOS; my MacBook instead shows me this:

What you see is their quarantine kicking in for downloaded software, even if I share it with myself over Nextcloud.

Technically, you can ask your users to override it manually in the terminal, presumably with something like the following (the binary name here is a placeholder):
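
xattr -d com.apple.quarantine ./your-binary  # './your-binary' is a placeholder name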

Most developers might be willing to do that. It is not, however, a good user experience and might raise some eyebrows.

Doesn’t seem like such a big deal, right? I’ll just enroll in their Apple Developer Program, sign the executable and be on my way, right?

Giving Apple money, and failing

Wait, they want how much money for the account?

And it’s a yearly subscription? My brother in Christ, I intend to release a utility that maybe a dozen or two dozen people are going to download, tops, for like 7 USD on Itch.io with a pay-what-you-want model, meaning that most of those people will probably choose the price of 0 USD instead (since I don’t intend to be like Apple; people have various circumstances).

That means that even if it earns that much, there’s going to be VAT and Itch.io will also take a cut, so out of those maybe 50 USD I’ll get about 25 USD, which funds about 3 months of that Apple Developer Program price. I guess the reason for it being priced like that lies somewhere between greed and wanting to gatekeep hobbyists out and only support Serious Users™, but it seems a bit stupid. Oh well, I already had to get the overpriced MacBook for another freelance thing, because they also won’t let me compile macOS/iOS apps on Windows or Linux, so I guess this is just them spitting on me after slapping me in the face.

What I get from that is that articles like “An app can be a home-cooked meal” are cool but don’t take the economics of wanting to release something publicly into account - unless you’re developing something that you’ll add a bunch of monetization to, you’ll be losing money. For desktop software there is Homebrew, but that also means that you couldn’t charge a few bucks for it even if you wanted to (or that you’d need to add mac-homebrew-install-instructions.txt to the Itch.io downloads page when doing the pay-what-you-want approach, which would feel awkward).

I don’t like that the economics are pushing software and app development in a direction where releasing a package (that might be non-open-source or just source-available, but you want to release binaries) costs money, though I also acknowledge that there would be other issues, like insane amounts of spam, with not doing that.

Then, we get to the actual verification process - it’s understandable that they’d want to verify my ID. The problem is that on the MacBook they also expect me to use its webcam to take a picture. I will admit that my M1 MacBook Air is getting dated at this point, but regardless of what lighting I tried, I could just not get a good picture of the document. It’s not like they were like “Oh hey, we’ve detected that your own iPhone is connected to the same local network as this MacBook, would you like to use it as a camera?”, so for about 10 attempts, this is what I saw:

Eventually, I moved over to trying to use my main webcam for that, since their built-in one just doesn’t work:

Why they can’t just let me upload a scan of the document eludes me. I mean, I guess I can imagine a few reasons why, but it’d probably be easier to forge my own ID so it’s not as glossy than to turn my small kitchen table into this. Pictured, for maximum frustration, a dongle that I needed:

Even that wasn’t good enough, because, understandably, it doesn’t have autofocus for something that you hold close. Not only that, but every 2nd failure seemed to just give me a generic error and I’d have to start the whole enrollment process from the beginning again:

Luckily, I realized that I can install the app on my iPhone directly. There, it worked on the first try. I guess it must really suck if you don’t have an iPhone or a fancy webcam; better spend some more money so you can give them money! The payment went through okay, and soon after I had an activated developer account.

Except of course I didn’t. Look, the app tells me to await an e-mail (which I seemingly already received?):

And the desktop app doesn’t care at all either; it doesn’t even know that I’ve tried the enrollment, and offers to let me start the whole thing over again, despite me being signed into the exact same account:

It’s probably a case of eventual consistency and some background processes or whatever, but it’s also quite frustrating and, in a word, stupid.

Apple is kind of frustrating

Apple, I think you make hardware with pretty good build quality, and the M-series chips made for pretty much the perfect notebook for me - and I’m sure they’re great main dev machines for those that can afford the higher-spec versions.

I think that’s nice and I genuinely enjoy having the iPhone SE 2022, at least before learning that you killed off the budget series altogether (your new e-series is more expensive), removed the nice silent mode toggle on the side and removed Touch ID. That’s before we even start talking about the 3.5mm jack, and frankly all of that makes me question whether my next phone shouldn’t just be an Android again.

I can deal with needing software like AutoRaise and Rectangle and DiscreteScroll, alongside others, to customize your OS to my liking, because you won’t let me do that myself like most Linux distros do. I can even deal with your window focus needing an extra click across multiple monitors, and with AutoRaise being nice but perhaps too aggressive, since the developers are at least trying to make the experience nicer!

I can deal with your keyboard shortcuts being odd and with Finder not even having a “Cut” option.

I can deal with your weird Control/Command button setup, which even breaks remote desktop software.

I can deal with your weird “programs you close aren’t actually closed” approach, even though you sold me a MacBook with 8 GB of RAM just so I could develop software in your walled-garden ecosystem.

But to first vendor-lock me into your ecosystem for developing apps, then demand a whole bunch of money so I can sign my software and not have it quarantined, all while I’m not too well off financially, then refuse to let me submit my documents because your hardware produces pictures that are not good enough, making me install the app on a phone that’s also expensive and that not everyone has, and then to still make me wait, with your apps not even showing that I’ve submitted my application?

You know what? Apple, fuck you and your forsaken ecosystem. This sucks.

A more sane world

I can use Smart-ID to verify my ID (and age) in about 20 seconds when buying an energy drink at the local grocery store.

I can use eParaksts to digitally sign documents in about a minute, from either my PC with a card reader (using my government-issued ID card) or my phone with their app, ending up with a proper cryptographic signature either attached to the EDOC container (ASiC-E) or a PDF file directly.

I’m sure that other countries also have plenty of similar services for ID and age verification, signing documents, and other digital services. I acknowledge that that’s not all of them, and that things are all over the place in this regard (alongside the credit card mafia holding a lot of the world’s payment infrastructure hostage), but come on, surely it’s possible to create something that works better than my experience did.

Having a bunch of scrappy Baltic software packages working better than those by a multi-billion dollar company feels silly.

Update

You know what, Apple isn’t the only company where things are somewhat messed up.

If you want to sign some code for Windows, you can find Certum offering code signing, but that costs around 209 EUR per year, all while they’re supposed to be one of the affordable options out there! I’m not singling them out as much as acknowledging that they’re one of the cheaper options and that many others out there are worse. What the fuck?

And then you look at Azure Artifact Signing, notice that its basic tier only costs 8.54 EUR per month, and for a bit feel happy that someone has at least tried to disrupt the extortionate prices - until you set up your Azure account and notice that you cannot sign code as an individual if you’re outside of the US & Canada; in the EU, only organizations can sign code through them.

This feels about as bad as getting TLS certs was before Let’s Encrypt displaced most of that rent-seeking behavior - the problem being that there aren’t many alternatives or competitors to them, which on one hand means that we’re moving towards a massive point of failure, and on the other hand means that if they ever decide to demand money, a large part of the Internet will be straight up fucked.

I thought that maybe I was overreacting a bit, since my previous issues were mostly about the Apple signup process being annoying and the way they’ve treated their users being worse than it could be, but no, I should be more angry: the whole code signing space is stupidly expensive for what it is. You have arguments in the opposite direction? Just know that they said the exact same kind of stuff about TLS before Let’s Encrypt, and how you had to pay 100 EUR a year for a cert that’s not even a wildcard, because of reasons™.

Just let me sign my code with my governmental ID card and be done with it, jeez.

ymawky

imtomt.github.io

building a web server in aarch64 assembly to give my life (a lack of) meaning

ymawky is a small, static http web server written entirely in aarch64 assembly for macos. it uses raw darwin syscalls with no libc wrappers, serves static files, supports GET, HEAD, PUT, OPTIONS, DELETE, byte ranges, directory listing, custom error pages, and tries to be as hardened as possible.

why? why not? the dream of the 80s is alive in ymawky. everybody has nginx. having apache makes you a square. so why not strip every single convenience layer that computer science has given us since 1957? i wanted to understand how a web server actually works, something i know little about, coming from a low-level/systems background: the risks that come up, the problems that need to be solved, the things you don’t think about when you’re writing python or c.

this (probably) won’t replace nginx, but it is doing something in the most difficult way possible.

con­straints

i gave my­self some con­straints for this pro­ject:

aarch64 as­sem­bly only

ma­cos/​dar­win, not linux. only be­cause that’s the sys­tem i have right now. sorry lin­ux­heads :(

raw syscalls only: no libc wrap­pers

sta­tic files only

no pre­ex­ist­ing parsers

ab­solutely no ex­ter­nal li­braries

as­sem­bly, my beloved

assembly language is the layer between machine code and other languages. c gets compiled into assembly, which then gets assembled into an executable binary. assembly is essentially human-readable mnemonics that directly correspond to raw executable bytes: mov, add, ldr, str, cmp, among others. svc #0x80 is the human-readable equivalent of the 32-bit instruction word 0xD4001001, stored little-endian as the bytes 01 10 00 D4 in the executable binary.

you get al­most no ab­strac­tions. you move val­ues around be­tween cpu reg­is­ters and mem­ory, com­pare them, jump to dif­fer­ent por­tions of your code, and call the ker­nel for syscalls. it makes sim­ple things look com­pli­cated, but it also makes al­most every step the cpu takes vis­i­ble and un­der your con­trol. it does ex­actly what you tell it to, with­out warn­ings, and with­out any help. if it’s be­hav­ing in­cor­rectly, it’s be­cause you wrote it in­cor­rectly.

writ­ing a web server in as­sem­bly means there are no http li­braries. no au­to­matic cleanup. no string types: strings are just re­gions of mem­ory that hold in­di­vid­ual bytes se­quen­tially. a struct as it ex­ists in c does­n’t re­ally ex­ist as a lan­guage fea­ture. you have to know the ex­act off­set in bytes be­tween each field, and the to­tal size of the struct, or the cpu will hap­pily read the wrong mem­ory.
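for a concrete feel of what “knowing the offsets” means, here’s a C mirror of darwin’s struct sockaddr_in (the address struct the server later hands to bind()), renamed so it can’t clash with the real one in netinet/in.h - in assembly, each of these fields is just a hardcoded byte offset from a base pointer:

#include <stdint.h>

/* mirror of darwin's struct sockaddr_in; 16 bytes total */
struct sockaddr_in_layout {
    uint8_t  sin_len;        /* offset 0: total struct length */
    uint8_t  sin_family;     /* offset 1: AF_INET = 2 */
    uint16_t sin_port;       /* offset 2: port, big-endian */
    uint32_t sin_addr;       /* offset 4: IPv4 address, big-endian */
    uint8_t  sin_zero[8];    /* offset 8: unused padding */
};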

raw syscalls

ymawky does­n’t use any libc wrap­pers, it just uses raw calls to the ker­nel. take, for ex­am­ple, this snip­pet of code that opens a file:

mov  x16, #5                  ; SYS_open syscall number
adrp x0, filename@PAGE
add  x0, x0, filename@PAGEOFF
mov  x1, #0x0                 ; O_RDONLY is just 0x0000
svc  #0x80
b.cs open_failed

in darwin, the syscall number goes in the x16 register (in aarch64 linux, it goes in x8). syscall number 5 is open(), which takes a couple of arguments: the path and the flags. you put each argument in the registers by hand, then call the kernel with svc #0x80.

if open() fails, the carry flag is set. we check that with b.cs open_failed, which means “if the carry flag is set, branch to open_failed”. then we have to write open_failed to do whatever cleanup and response handling is needed.

this happens a lot. assembly doesn’t have “exceptions” or “objects”, it just sets a cpu flag that you have to check and deal with.

gen­eral overview

at its most basic, a web server receives a request, processes it, returns a status code, and maybe a file. a lot goes into that “receives a request” bit (sketched in C right after this list):

set up sock­ets with socket(AF_INET, SOCK_STREAM, 0)

con­fig­ure the socket with set­sock­opt(serverfd, SOL_SOCKET, SO_REUSEADDR, &buf, sizeof(int))

bind a file de­scrip­tor to an ad­dress with bind(sockfd, &addr, 16)

lis­ten to the socket for new con­nec­tions with lis­ten(sockfd, 5)

ac­cept a con­nec­tion with ac­cept(sockfd, NULL, NULL)
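here’s the same sequence as a minimal C sketch, one level up from the registers (the port is a placeholder; ymawky builds the sockaddr by hand and issues each of these calls as a raw syscall instead):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int one = 1;
    int serverfd = socket(AF_INET, SOCK_STREAM, 0);
    setsockopt(serverfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(int));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                    /* placeholder port */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* 127.0.0.1 */

    bind(serverfd, (struct sockaddr *)&addr, 16);   /* sizeof(struct sockaddr_in) == 16 */
    listen(serverfd, 5);

    for (;;) {
        int clientfd = accept(serverfd, NULL, NULL);
        /* ... parse the request, respond, then ... */
        close(clientfd);
    }
}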

ymawky is a fork-on-request server. that means for each new inbound connection, it calls the fork() syscall (a C sketch follows the lists below). this has some advantages:

mem­ory is not shared be­tween re­quest han­dlers

it’s eas­ier to un­der­stand

it’s eas­ier to write

but it also has some pretty sig­nif­i­cant dis­ad­van­tages:

bloat

each process has its own mem­ory space

it fundamentally handles fewer concurrent connections than nginx’s event-driven, async, non-blocking model

with more concurrent connections, the kernel spends more time switching between processes than actually running them

did i men­tion the bloat? and mem­ory con­sump­tion?
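and a C sketch of that fork-on-request loop, assuming serverfd was set up as above (handle_request is a hypothetical stand-in for everything described below):

#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

void handle_request(int clientfd);       /* hypothetical */

void serve_forever(int serverfd) {
    signal(SIGCHLD, SIG_IGN);            /* let the kernel reap exited children */
    for (;;) {
        int clientfd = accept(serverfd, NULL, NULL);
        if (clientfd < 0)
            continue;
        if (fork() == 0) {               /* child: owns this one connection */
            close(serverfd);             /* the child never accepts */
            handle_request(clientfd);
            close(clientfd);
            _exit(0);
        }
        close(clientfd);                 /* parent: the child kept its own copy */
    }
}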

bind­ing to sock­ets and lis­ten­ing is the easy part. the real soul-crush­ing task is pro­cess­ing re­quests. a lot goes into this:

de­ter­min­ing re­quest type: GET, HEAD, OPTIONS, PUT, or DELETE

ex­tract­ing the re­quested path

nor­mal­iz­ing the path, like de­cod­ing %20 into a space

per­form­ing safety checks on the path

pars­ing header fields the client sent over

get­ting in­for­ma­tion about the re­quested file

fig­ur­ing out whether it is a di­rec­tory or a reg­u­lar file

writ­ing up­load bod­ies to tem­po­rary files for PUT

build­ing re­sponse head­ers

writ­ing the re­sponse, which is some­how not straight­for­ward

clos­ing any open files

han­dling er­rors with­out crash­ing the server

pars­ing http by hand

i hate string pars­ing. es­pe­cially in as­sem­bly. un­for­tu­nately, an http re­quest is just a string ask­ing a server to do some­thing, and the server has to un­der­stand it.

let’s walk through an ex­am­ple http re­quest:

GET /index.html HTTP/1.0\r\n
Range: bytes=3-5\r\n\r\n

that first line tells us a lot. it’s a GET request, which means the client would like us to send over index.html. HTTP/1.0 tells the server what version of http the client is using. the \r\n sequence, carriage return plus linefeed, tells the server “that’s the end of this line, please process the next one”. the \r\n\r\n at the end tells the server that’s the end of the header. if we never receive \r\n\r\n, we have to bail with 400 Bad Request.

then there is Range: bytes=3-5, which means “from this file, only give me bytes 3 through 5, ignore the rest.” if a file is 500gb, but you only request bytes 3 through 5, you only receive 3 bytes back. yay! unfortunately for me, i have to process that header. boo!

first, ymawky de­ter­mines the re­quest type by com­par­ing the first few bytes against every method it sup­ports, then it ex­tracts the path. we scan along the header one byte at a time un­til we find a / or *. but we can’t as­sume every / is the re­quested path. if some­body sends:

GET HTTP/1.0\r\n
\r\n

there is a / in HTTP/1.0. once we hit a /, we check that the pre­vi­ous byte was a space. if it was­n’t, we re­ply with 400 Bad Request.

once we find the path, we need some­where to store it. on most sys­tems, PATH_MAX is 4096 bytes, so ymawky has a 4096 byte file­name buffer plus one byte for the null ter­mi­na­tor:

.bss
filename_buffer:
    .skip 4097
    .align 3

copy­ing the file­name is just a loop, but the loop has to con­stantly check both sides: don’t read past the header, and don’t write past the file­name buffer. if the client re­quests GET /aa…[5000 A]…a HTTP/1.0, they should get 414 URI Too Long rather than over­writ­ing 5KB of ar­bi­trary mem­ory.

in python, this is some­thing like:

text.split("GET /")[1].split(" ")[0]

in as­sem­bly, it’s ~200 lines long, in­clud­ing en­sur­ing HTTP le­gal­ity. is­n’t as­sem­bly the best?

then the path has to be percent-decoded. if the parser sees %, it has to read the next two bytes, verify that they are valid hex characters (0-9, a-f, A-F), convert them into the byte they represent, and continue.
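as a C sketch, that walk looks something like this (the assembly version does the same thing with explicit registers; a malformed escape gets answered with 400 Bad Request):

/* decode %XX escapes in place; returns 0 on success, -1 on a bad escape */
static int hexval(unsigned char c) {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

int percent_decode(char *s) {
    char *out = s;
    while (*s) {
        if (*s == '%') {
            int hi = hexval((unsigned char)s[1]);
            int lo = (hi >= 0) ? hexval((unsigned char)s[2]) : -1;
            if (lo < 0) return -1;            /* not two valid hex digits */
            *out++ = (char)(hi * 16 + lo);    /* e.g. "%20" becomes ' ' */
            s += 3;
        } else {
            *out++ = *s++;
        }
    }
    *out = '\0';
    return 0;
}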

GET re­quests can have a Range: header, and PUT re­quests re­quire Content-Length:. un­like the re­quested URL, these can ap­pear at any line in the header. we have to it­er­ate through the header char­ac­ter by char­ac­ter. if we find a \r, we need to check if the next char­ac­ter is \n. if it’s not, it’s a mal­formed header, and we have to send a 400 Bad Request. like­wise, if we find a \n with­out a pre­ced­ing \r, it’s also mal­formed. once we find \r\n, that marks the end of the cur­rent line, and the be­gin­ning of the next. we check if this new line starts with a space, and send a 400 Bad Request if it does (header fields can­not start with white­space). then we check for Range: (or Content-Length:, de­pend­ing on the method), us­ing a lit­tle string com­par­i­son func­tion:

streqn:
    ldrb w3, [x0]
    ldrb w4, [x1]
    cmp  w3, w4
    b.ne Lstreqn_no_match

    cbz  w3, Lstreqn_match    ;; both equal and both NULL = end of string = match

    ;; if we've reached the end, it's a match yeah?
    subs x2, x2, #1
    b.eq Lstreqn_match

    add  x0, x0, #1
    add  x1, x1, #1
    b    streqn

Lstreqn_match:
    mov  x0, #1
    ret

Lstreqn_no_match:
    mov  x0, #0
    ret

this takes two string point­ers, x0 and x1, a max length in x2, and checks if each char­ac­ter is the same.
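the same check as a C sketch, for reference:

/* compare two strings, at most n bytes; 1 = match, 0 = mismatch */
int streqn_c(const char *a, const char *b, unsigned long n) {
    for (;;) {
        if (*a != *b) return 0;      /* mismatch */
        if (*a == '\0') return 1;    /* both ended together */
        if (--n == 0) return 1;      /* n equal bytes is enough */
        a++; b++;
    }
}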

let’s see what a Range: header can look like:

Range: bytes=10-
Range: bytes=-10
Range: bytes=5-10

both sides of the range are optional, but at least one of them is required. since “10” is a string and not a literal 10, each side has to be converted from ascii digits into an integer. we have to write an atoi-style function, being careful to check for an integer overflow:

;; x0 -> pointer to string
atoi:
    mov  x1, #0
    mov  x3, #10
    mov  x4, #0
1:
    ; if the number is >=19 digits long, it could overflow the 64-bit registers
    cmp  x4, #19
    b.hs Latoi_error

    ldrb w2, [x0]
    cbz  w2, 2f

    cmp  w2, #'0'
    b.lo Latoi_error
    cmp  w2, #'9'
    b.hi Latoi_error

    ; result = (result * 10) + current digit
    mul  x1, x1, x3
    sub  w2, w2, #'0'
    add  x1, x1, x2
    add  x0, x0, #1
    add  x4, x4, #1

    b    1b
2:
    cmn  xzr, xzr    ; clear carry to signal success
    mov  x0, x1
    ret

Latoi_error:
    cmp  xzr, xzr    ; set carry to signal failure
    mov  x0, #0
    ret

in python, that would be int(string). is­n’t as­sem­bly mag­i­cal?
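for comparison, a C sketch of the same overflow-guarded conversion (mirroring the assembly’s conservative rule of rejecting anything 19 digits or longer):

#include <stdint.h>

/* parse ascii digits at s into *out; returns 0 on success, -1 on error */
int parse_u64(const char *s, uint64_t *out) {
    uint64_t v = 0;
    int digits = 0;
    if (*s < '0' || *s > '9') return -1;    /* need at least one digit */
    while (*s >= '0' && *s <= '9') {
        if (++digits >= 19) return -1;      /* could overflow 64 bits */
        v = v * 10 + (uint64_t)(*s - '0');
        s++;
    }
    *out = v;
    return 0;
}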

put

PUT is in­ter­est­ing. it’s idem­po­tent, mean­ing the end re­sult on the server is the same re­gard­less of how many times you send the same re­quest. PUT /file.txt will cre­ate file.txt, or com­pletely over­write it if it al­ready ex­ists. putting 1234 to file.txt twice in a row re­sults in one file that con­tains 1234, not 12341234.

this makes PUT hon­estly pretty dan­ger­ous to have open glob­ally, but hey, who cares?

there are a few things to con­sider when han­dling PUT:

what if the process crashes in the mid­dle of han­dling the re­quest?

what if the client says the Content-Length is 2kb, but only sends 100 bytes?

what if the client says the Content-Length is huge, like 50gb?

that last one is easy to fix. con­fig­ure a max­i­mum file size. in con­fig.S, MAX_BODY_SIZE is 1gb by de­fault. if Content-Length is larger than that, ymawky re­fuses the re­quest with 413 Content Too Large. easy peasy.

the first two have the same ba­sic fix. if we blindly opened file.txt and started writ­ing into it, the file could be left half-writ­ten if some­thing goes wrong. so in­stead, ymawky writes to a tem­po­rary file:

.ymawky_tmp_<pid>

to get the pid, we use getpid() (syscall #20), then a custom itoa() to convert the number to a string (while checking for buffer overflow, of course). then the content the client sent gets written to the temp file. if everything goes smoothly, the temp file is renamed into place, and file.txt now exists on the server. if the client disconnects unexpectedly, times out, or sends a malformed body, the temp file is unlink()’d (syscall #10, or #472 for unlinkat()). existing files are only overwritten after a complete request has been received successfully.
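a C sketch of that write-then-rename dance (body stands in for a request body that has already been read and length-checked):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int put_file(const char *path, const char *body, size_t len) {
    char tmp[64];
    snprintf(tmp, sizeof tmp, "www/.ymawky_tmp_%d", (int)getpid());

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;

    if (write(fd, body, len) != (ssize_t)len) {  /* short write = failure */
        close(fd);
        unlink(tmp);        /* never leave a half-written file behind */
        return -1;
    }
    close(fd);
    return rename(tmp, path);   /* atomic replace on the same filesystem */
}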

di­rec­tory list­ing and more string pars­ing yay

have you ever no­ticed some­times you visit a di­rec­tory on a web­site, and it lists all the files with links you can click on? it seems like pretty ba­sic func­tion­al­ity, and it’s not too com­pli­cated. but like every­thing in as­sem­bly, you have to do every­thing by hand.

if you GET /somedir/, we check if di­rec­tory list­ing is en­abled (ALLOW_DIR_LISTING in con­fig.S). if it’s not, we send a 403 Forbidden and call it a day.

GitHub - imtomt/ymawky: MacOS Web Server written entirely in ARM64 assembly

github.com

This is ymawky (yuh maw kee), a web server written entirely in ARM64 assembly. ymawky is a syscall-only, no-libc, fork-per-connection web server written by hand. While it is developed for MacOS, I’ve tried to make it as portable as possible - however, it’s likely you will still need to make some (hopefully minor) tweaks to get this to run on Linux/other Unix systems. See Implementation Notes for more details.

Building

Requires Xcode Command Line Tools. Install with xcode-select --install. ymawky only runs on Apple Silicon (arm64).

Run make to build.

Ensure there is a www/ di­rec­tory next to the ymawky ex­e­cutable. That’s the doc­u­ment root where ymawky searches for files. GET with an empty file­name (GET /) will search for www/​in­dex.html, so you might want to make sure there’s an in­dex.html as well.

ymawky will try to serve static error pages when a client’s request results in an error, eg 404. The pages it searches for are in err/(code).html, so ensure err/ exists alongside ymawky and www/. See Configuration to modify the default file and docroot.

Running

./ymawky to start run­ning the web server on 127.0.0.1:8080.

./ymawky [port] to start run­ning the web server on 127.0.0.1:[port]

./ymawky [literally-any-character-other-than-0-9] to start running the web server on 127.0.0.1:8080 in debug mode. Debug mode disables forking and makes ymawky only handle one request. (I needed this because lldb wasn’t letting me debug the children, ugh.)

Unfortunately, while custom ports are supported, custom addresses are not. As of right now, ymawky can only run on 127.0.0.1. This is solely because I haven’t implemented it - but if you’d like to consider this a safety feature, then I guess it could be intentional.

To see ymawky in action, start it with ./ymawky (or ./ymawky [port]). Then open your web browser of choice (or use curl), and visit 127.0.0.1:8080/ or 127.0.0.1:8080/pretty/index.html, substituting your port if you picked one. Bask in the warmth of assembly.

What can it do?

ymawky is a sta­tic-file web server. It does­n’t sup­port server-side code to gen­er­ate con­tent on-the-fly, or more ad­vanced URL pars­ing, such as /search?query=term. That’s not to say it’s non-func­tional, though.

Supported HTTP meth­ods:

GET

PUT

DELETE

OPTIONS

HEAD

Basic pro­tec­tion from slowloris-like Denial of Service at­tacks

Decodes % hex encoding, eg, %20 decodes to a space in filenames, and %61 decodes to the letter a

Smart path traversal detection and prevention. Blocks .. from traversing paths, while not disallowing multiple periods when they’re part of a filename (a C sketch of this check follows the feature list):

GET /../../../etc/passwd -> 403 Forbidden

GET /ohwell...txt -> 200 OK

GET /../src/ymawky.S -> 403 Forbidden

GET /hehe..txt -> 200 OK

Automatically prepends www/ to re­quested files. GET /index.html will re­trieve www/​in­dex.html

Empty GET / re­quests de­fault to GET www/​in­dex.html

PUT re­quests sup­port up­loads of up to 1GiB, though this can be con­fig­ured for larger files

PUT is atomic due to writ­ing to a tem­po­rary file then re­nam­ing, al­low­ing con­cur­rent PUT re­quests with­out leav­ing par­tially-writ­ten files

Content-Length: pars­ing and ver­i­fi­ca­tion in PUT re­quests

MIME type de­tec­tion, giv­ing Content-Type in the re­sponse header with the cor­re­spond­ing MIME type

Accepts Range: bytes= ranges in GET re­quests, sup­port­ing full ranges bytes=X-N, suf­fix ranges bytes=-N, and open-ended ranges bytes=X-. Video scrub­bing is well sup­ported

Basic HTTP version parsing. Requests need to specify HTTP/1.1 or HTTP/1.0, and if requesting HTTP/1.1, a Host: field needs to be present in the header. Currently, ymawky doesn’t do anything with Host, but per RFC 9112 Section 3.2, the header must be sent.

Serves cus­tom HTML pages for er­ror codes, such as 404, or 500. Look in the err/ di­rec­tory for an ex­am­ple

If the re­quested re­source is a di­rec­tory, list all files and sub­dirs in the di­rec­tory. Note that this ex­cludes www/ (or what­ever your doc­root is): GET / will al­ways search for in­dex.html if no file is given.
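The path traversal rule above - block bare .. segments while allowing filenames that merely contain dots - can be sketched in C like this (illustration only; the real check is assembly):

/* 1 if the path contains a bare ".." segment, 0 otherwise */
int path_is_traversal(const char *p) {
    while (*p) {
        const char *seg = p;
        while (*p && *p != '/') p++;         /* find end of this segment */
        if (p - seg == 2 && seg[0] == '.' && seg[1] == '.')
            return 1;                        /* a bare ".." segment */
        if (*p == '/') p++;                  /* skip the separator */
    }
    return 0;
}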

“Safety”

This is a web server writ­ten en­tirely by-hand in ARM64 as­sem­bly as a fun pro­ject. It’s prob­a­bly got a lot of vul­ner­a­bil­i­ties I’m un­aware of. However, I did do my best to make it safer. Here are some safety pre­cau­tions ymawky takes.

Rejects paths >= PATH_MAX (4096 bytes)

Rejects any path that includes path traversal, like /../..

Rejects any request that does not contain a path within the first 16 bytes

Confined to www/. Any path re­quested gets www/ prepended to it

Rejects any path con­tain­ing sym­links, with O_NOFOLLOW_ANY

PUT writes to a tem­po­rary file, www/.​ymawky_tmp_<pid>. Upon suc­cess­fully re­ceiv­ing the whole file, this tem­po­rary file is then re­named to the re­quested file­name. This pre­vents par­tial or cor­rupted PUT re­quests from over­writ­ing ex­ist­ing files.

Rejects any request whose path starts with www/.ymawky_tmp_. This prevents someone from GETing a temporary file, and prevents someone from sending PUT /.ymawky_tmp_4533 or something.

Must receive data within 10 seconds: if the client is slower, or the entire header hasn’t arrived within 10 seconds total, the connection is closed. This is to prevent slowloris-like attacks.
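The README doesn’t spell out the mechanism, but one common way to get this behavior (a sketch, not necessarily ymawky’s exact approach) is a receive timeout on the client socket:

#include <sys/socket.h>
#include <sys/time.h>

static void arm_read_timeout(int clientfd) {
    /* after 10 idle seconds, recv() fails with EAGAIN/EWOULDBLOCK and the
       server can close the slow (slowloris-style) connection */
    struct timeval tv = { .tv_sec = 10, .tv_usec = 0 };
    setsockopt(clientfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
}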

HTTP Status Codes

ymawky cur­rently sup­ports and can re­ply with the fol­low­ing sta­tus codes:

200 OK

201 Created

204 No Content

206 Partial Content

400 Bad Request

403 Forbidden

404 Not Found

408 Request Timeout

409 Conflict

411 Length Required

413 Content Too Large

414 URI Too Long

416 Range Not Satisfiable

418 I’m a teapot

431 Request Header Fields Too Large

500 Internal Server Error

501 Not Implemented

503 Service Unavailable

505 HTTP Version Not Supported

507 Insufficient Storage

Custom HTML pages will be served along­side the er­ror codes (400+). These HTML files are lo­cated in err/(​code).html. You can use build_er­r_­pages.sh to cre­ate a page for each code, with dif­fer­ent text at your leisure. Edit the source code of build_er­r_­pages.sh to mod­ify the text per-page, and mod­ify err/​tem­plate.html to mod­ify the base tem­plate. In err/​tem­plate.html:

{{CODE}} - HTTP Code: eg, 404

{{TITLE}} - Title text: eg, “Not Found”

{{MSG}} - Custom message: eg, “the rats ate this page”

MIME Types

MIME types are de­tected by an­a­lyz­ing the file ex­ten­sion. The fol­low­ing MIME types are rec­og­nized.

Web-related files:

.html -> text/​html; charset=utf-8

.htm -> text/​html; charset=utf-8

.css -> text/​css; charset=utf-8

.csv -> text/​csv; charset=utf-8

.xml -> text/​xml; charset=utf-8

.js -> text/​javascript; charset=utf-8

.json -> ap­pli­ca­tion/​json

.wasm -> ap­pli­ca­tion/​wasm

.mjs -> text/​javascript; charset=utf-8

.map -> ap­pli­ca­tion/​json

Image files:

.png -> im­age/​png

.jpg -> im­age/​jpeg

.jpeg -> im­age/​jpeg

.gif -> im­age/​gif

.svg -> im­age/​svg+xml

.ico -> im­age/​x-icon

.webp -> im­age/​webp

.avif -> im­age/​avif

.bmp -> im­age/​bmp

.tiff -> im­age/​tiff

.apng -> im­age/​apng
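A lookup table like the one above maps naturally onto a small C sketch (the octet-stream fallback is an assumption, not documented here):

#include <string.h>

struct mime { const char *ext, *type; };
static const struct mime mimes[] = {
    { ".html", "text/html; charset=utf-8" },
    { ".css",  "text/css; charset=utf-8" },
    { ".png",  "image/png" },
    /* ... the rest of the table ... */
};

const char *mime_type(const char *path) {
    const char *dot = strrchr(path, '.');    /* last extension wins */
    if (dot)
        for (size_t i = 0; i < sizeof mimes / sizeof mimes[0]; i++)
            if (strcmp(dot, mimes[i].ext) == 0)
                return mimes[i].type;
    return "application/octet-stream";       /* assumed default */
}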
