10 interesting stories served every morning and every evening.




1 1,514 shares, 53 trendiness

gorhill/uBlock

This document explains why uBO works best in Firefox.

Ability to uncloak 3rd-party servers disguised as 1st-party through the use of CNAME records. The effect of this is to make uBO on Firefox the most efficient at blocking 3rd-party trackers relative to other browser/blocker pairs:

The dark green/red bars are uBO before/after it gained the ability to uncloak CNAMEs on Firefox.

Source: "Characterizing CNAME Cloaking-Based Tracking on the Web" at Asia Pacific Network Information Centre, August 2020.
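To make the mechanism concrete, here is a minimal Python sketch of the idea (uBO itself relies on a Firefox extension DNS API rather than anything like this, and the domain names below are made up):

```python
import socket

# Assumed blocklist suffix for a known tracker; purely illustrative.
TRACKER_SUFFIXES = (".evil-tracker.example",)

def canonical_name(hostname: str) -> str:
    # gethostbyname_ex follows CNAME records and returns the canonical name,
    # revealing where a "first-party" subdomain really points.
    canonical, _aliases, _addresses = socket.gethostbyname_ex(hostname)
    return canonical

# e.g. stats.shop.example might CNAME to something under evil-tracker.example
name = canonical_name("stats.shop.example")
if name.endswith(TRACKER_SUFFIXES):
    print(f"cloaked third-party tracker detected: {name}")
```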

HTML filtering is the ability to filter the response body of HTML documents before it is parsed by the browser.

For example, this allows the removal of specific tags in HTML documents before they are parsed and executed by the browser, something not possible in a reliable manner in other browsers. This feature requires the webRequest.filterResponseData() API, currently only available in Firefox.

At browser launch, Firefox will wait for uBO to be up and ready before network requests are fired from already opened tab(s).

This is not the case with Chromium-based browsers, i.e. tracker/advertisement payloads may find their way into already opened tabs before uBO is up and ready in Chromium-based browsers, while these are properly filtered in Firefox.

Reliably blocking at browser launch is especially important for whoever uses default-deny mode for 3rd-party resources and/or JavaScript.

Pre-fetching, which is disabled by default in uBO, is reliably prevented in Firefox, while this is not the case in Chromium-based browsers.

Chromium-based browsers give precedence to websites over user settings when it comes to deciding whether pre-fetching is disabled or not.

The Firefox version of uBO makes use of WebAssembly code for core filtering code paths. This is not the case with Chromium-based browsers because this would require an extra permission in the extension manifest, which could cause friction when publishing the extension in the Chrome Web Store.

For more about this, see: https://github.com/WebAssembly/content-security-policy/issues/7#issuecomment-441259729.

The Firefox version of uBO uses LZ4 compression by default to store raw filter lists, compiled list data, and memory snapshots to disk storage.

LZ4 compression requires the use of IndexedDB, which is problematic with Chromium-based browsers in incognito mode — where instances of IndexedDB are always reset, causing uBO to always launch inefficiently and with out-of-date filter lists (see #399). An IndexedDB instance is required because it supports storing Blob-based data, a capability not available to the browser.storage.local API.

...

Read the original on github.com »

2 1,157 shares, 44 trendiness

ACLU News & Commentary

...

Read the original on www.aclu.org »

3 1,148 shares, 47 trendiness

The Architecture Behind A One-Person Tech Startup

This is a long-form post breaking down the setup I use to run a SaaS. From load balancing to cron job monitoring to payments and subscriptions. There's a lot of ground to cover, so buckle up!

As grandiose as the title of this article might sound, I should clarify we're talking about a low-stress, one-person company that I run from my flat here in Germany. It's fully self-funded, and I like to take things slow. It's probably not what most people imagine when I say "tech startup".

I wouldn't be able to do this without the vast amount of open-source software and managed services at my disposal. I feel like I'm standing on the shoulders of giants, who did all the hard work before me, and I'm very grateful for that.

For context, I run a one-man SaaS, and this is a more detailed version of my post on the tech stack I use. Please consider your own circumstances before following my advice; your own context matters when it comes to technical choices, and there's no holy grail.

I use Kubernetes on AWS, but don't fall into the trap of thinking you need this. I learned these tools over several years, mentored by a very patient team. I'm productive because this is what I know best, and I can focus on shipping stuff instead. Your mileage may vary.

By the way, I drew inspiration for the format of this post from Wenbin Fang's blog post. I really enjoyed reading his article, and you might want to check it out too!

With that said, let's jump right into the tour.

My infrastructure handles multiple projects at once, but to illustrate things I'll use Panelbear, my most recent SaaS, as a real-world example of this setup in action.

Browser Timings chart in Panelbear, the example project I'll use for this tour.

From a technical point of view, this SaaS processes a large amount of requests per second from anywhere in the world, and stores the data in an efficient format for real-time querying.

Business-wise it's still in its infancy (I launched six months ago), but it has grown rather quickly relative to my own expectations, especially as I originally built it for myself as a Django app using SQLite on a single tiny VPS. For my goals at the time, it worked just fine and I could have probably pushed that model quite far.

However, I grew increasingly frustrated having to reimplement a lot of the tooling I was so accustomed to: zero-downtime deploys, autoscaling, health checks, automatic DNS / TLS / ingress rules, and so on. Kubernetes spoiled me: I was used to dealing with higher-level abstractions, while retaining control and flexibility.

Fast forward six months and a couple of iterations: even though my current setup is still a Django monolith, I'm now using Postgres as the app DB, ClickHouse for analytics data, and Redis for caching. I also use Celery for scheduled tasks, and a custom event queue for buffering writes. I run most of these things on a managed Kubernetes cluster (EKS).

It may sound complicated, but it's practically an old-school monolithic architecture running on Kubernetes. Replace Django with Rails or Laravel and you know what I'm talking about. The interesting part is how everything is glued together and automated: autoscaling, ingress, TLS certificates, failover, logging, monitoring, and so on.

It's worth noting I use this setup across multiple projects, which helps keep my costs down and lets me launch experiments really easily (write a Dockerfile and git push). And since I get asked this a lot: contrary to what you might be thinking, I actually spend very little time managing the infrastructure, usually 0-2 hours per month total. Most of my time is spent developing features, doing customer support, and growing the business.

That said, these are the tools I've been using for several years now and I'm pretty familiar with them. I consider my setup simple for what it's capable of, but it took many years of production fires at my day job to get here. So I won't say it's all sunshine and roses.

I don't know who said it first, but what I tell my friends is: "Kubernetes makes the simple stuff complex, but it also makes the complex stuff simpler".

Now that you know I have a managed Kubernetes cluster on AWS and I run various projects in it, let's make the first stop of the tour: how to get traffic into the cluster.

My cluster is in a private network, so you won't be able to reach it directly from the public internet. There are a couple of pieces in between that control access and load balance traffic to the cluster.

Essentially, I have Cloudflare proxying all traffic to an NLB (AWS L4 Network Load Balancer). This Load Balancer is the bridge between the public internet and my private network. Once it receives a request, it forwards it to one of the Kubernetes cluster nodes. These nodes are in private subnets spread across multiple availability zones in AWS. It's all managed by the way, but more on that later.

Traffic gets cached at the edge, or forwarded to the AWS region where I operate.

"But how does Kubernetes know which service to forward the request to?" - That's where ingress-nginx comes in. In short: it's an NGINX cluster managed by Kubernetes, and it's the entrypoint for all traffic inside the cluster.

NGINX applies rate-limiting and other traffic shaping rules I define before sending the request to the corresponding app container. In Panelbear's case, the app container is Django being served by Uvicorn.

It's not much different from a traditional nginx/gunicorn/Django-in-a-VPS approach, with added horizontal scaling benefits and an automated CDN setup. It's also a "set up once and forget" kind of thing, mostly a few files between Terraform/Kubernetes, and it's shared by all deployed projects.

When I deploy a new project, it's essentially 20 lines of ingress configuration and that's it:

Those annotations describe that I want a DNS record, with traffic proxied by Cloudflare, a TLS certificate via letsencrypt, and that it should rate-limit the requests per minute by IP before forwarding the request to my app.

Kubernetes takes care of making those infra changes to reflect the desired state. It's a little verbose, but it works well in practice.

The chain of actions that occur when I push a new commit.

Whenever I push to master in one of my projects, it kicks off a CI pipeline on GitHub Actions. This pipeline runs some codebase checks and end-to-end tests (using Docker Compose to set up a complete environment), and once these checks pass it builds a new Docker image that gets pushed to ECR (the Docker registry in AWS).

As far as the application repo is concerned, a new version of the app has been tested and is ready to be deployed as a Docker image:

"So what happens next? There's a new Docker image, but no deploy?" - My Kubernetes cluster has a component called flux. It automatically keeps in sync what is currently running in the cluster and the latest image for my apps.

Flux automatically keeps track of new releases in my infrastructure monorepo.

Flux automatically triggers an incremental rollout when there's a new Docker image available, and keeps a record of these actions in an "Infrastructure Monorepo".

I want version-controlled infrastructure, so that whenever I make a new commit on this repo, between Terraform and Kubernetes, they will make the necessary changes on AWS, Cloudflare and the other services to synchronize the state of my repo with what is deployed.

It's all version-controlled with a linear history of every deployment made. This means less stuff for me to remember over the years, since I have no magic settings configured via clicky-clicky on some obscure UI.

Think of this monorepo as deployable documentation, but more on that later.

A few years ago I used the Actor model of concurrency for various company projects, and fell in love with many of the ideas around its ecosystem. One thing led to another and soon I was reading books about Erlang, and its philosophy around letting things crash.

I might be stretching the idea too much, but in Kubernetes I like to think of liveness probes and automatic restarts as a means to achieve a similar effect.

From the Kubernetes documentation: "The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs."

In practice this has worked pretty well for me. Containers and nodes are meant to come and go, and Kubernetes will gracefully shift the traffic to healthy pods while healing the unhealthy ones (more like killing them). Brutal, but effective.

My app containers auto-scale based on CPU/Memory usage. Kubernetes will try to pack as many workloads per node as possible to fully utilize it.

If there are too many Pods per node in the cluster, it will automatically spawn more servers to increase the cluster capacity and ease the load. Similarly, it will scale down when there's not much going on.

Here's what a Horizontal Pod Autoscaler might look like:

In this example, it will automatically adjust the number of panelbear-api pods based on the CPU usage, starting at 2 replicas but capping at 8.

When defining the ingress rules for my app, the annotation "cloudflare-proxied: true" is what tells Kubernetes that I want to use Cloudflare for DNS, and to proxy all requests via its CDN and DDoS protection too.

From then on, it's pretty easy to make use of it. I just set standard HTTP cache headers in my applications to specify which requests can be cached, and for how long.

Cloudflare will use those response headers to control the caching behavior at the edge servers. It works amazingly well for such a simple setup.
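As a concrete illustration, here is a hedged sketch of what that can look like in a Django view (the view name is made up, and this is not Panelbear's actual code; cache_control simply sets the Cache-Control response header):

```python
from django.http import HttpResponse
from django.views.decorators.cache import cache_control

# The decorator adds "Cache-Control: public, max-age=3600" to the response,
# which Cloudflare then honors at its edge servers.
@cache_control(public=True, max_age=3600)
def pricing_page(request):
    return HttpResponse("cacheable by the CDN for up to an hour")
```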

I use Whitenoise to serve static files directly from my app container. That way I avoid needing to upload static files to Nginx/Cloudfront/S3 on each deployment. It has worked really well so far, and most requests will get cached by the CDN as it gets filled. It's performant, and keeps things simple.

I also use NextJS for a few static websites, such as the landing page of Panelbear. I could serve it via Cloudfront/S3 or even Netlify or Vercel, but it was easy to just run it as a container in my cluster and let Cloudflare cache the static assets as they are being requested. There's zero added cost for me to do this, and I can re-use all tooling for deployment, logging and monitoring.

Besides static file caching, there's also application data caching (e.g. results of heavy calculations, Django models, rate-limiting counters, etc.).

On one hand I leverage an in-memory Least Recently Used (LRU) cache to keep frequently accessed objects in memory, benefiting from zero network calls (pure Python, no Redis involved).

However, most endpoints just use the in-cluster Redis for caching. It's still fast and the cached data can be shared by all Django instances, even after re-deploys, while an in-memory cache would get wiped.

My Pricing Plans are based on analytics events per month. For this, some sort of metering is necessary to know how many events have been consumed within the current billing period and enforce limits. However, I don't interrupt the service immediately when a customer crosses the limit. Instead a "Capacity depleted" email is automatically sent, and a grace period is given to the customer before the API starts rejecting new data.

This is meant to give customers enough time to decide if an upgrade makes sense for them, while ensuring no data is lost. For example during a traffic spike in case their content goes viral, or if they're just enjoying the weekend and not checking their emails. If the customer decides to stay in the current plan and not upgrade, there is no penalty and things will go back to normal once usage is back within their plan limits.

So for this feature I have a function that applies the rules above, which requires several calls to the DB and ClickHouse, but the result gets cached for 15 minutes to avoid recomputing it on every request. It's good enough and simple. Worth noting: the cache gets invalidated on plan changes, otherwise it might take 15 minutes for an upgrade to take effect.
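A minimal sketch of that pattern using Django's cache API; the function names, cache key format, and the ClickHouse lookup are placeholders, not Panelbear's actual code:

```python
from django.core.cache import cache

USAGE_CACHE_TTL = 15 * 60  # seconds

def query_usage_from_clickhouse(account_id: int) -> int:
    # Placeholder for the expensive part: several DB and ClickHouse queries.
    return 0

def events_used_this_period(account_id: int) -> int:
    """Billing-period usage, cached for 15 minutes to avoid recomputing it per request."""
    key = f"usage:{account_id}"
    usage = cache.get(key)
    if usage is None:
        usage = query_usage_from_clickhouse(account_id)
        cache.set(key, usage, USAGE_CACHE_TTL)
    return usage

def on_plan_changed(account_id: int) -> None:
    # Invalidate right away so an upgrade takes effect immediately.
    cache.delete(f"usage:{account_id}")
```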

While I enforce global rate limits at the nginx-ingress on Kubernetes, I sometimes want more specific limits on a per endpoint/method basis.

For that I use the excellent Django Ratelimit library to easily declare the limits per Django view. It's configured to use Redis as a backend for keeping track of the clients making the requests to each endpoint (it stores a hash based on the client key, and not the IP).
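Something along these lines; the view name, key choice, and import path are assumptions on my part (the package moved from `ratelimit` to `django_ratelimit` in later releases), so treat it as a sketch rather than the article's original snippet:

```python
from django.http import JsonResponse
from ratelimit.decorators import ratelimit  # "django_ratelimit.decorators" in newer releases

@ratelimit(key="user_or_ip", rate="5/m", method="POST", block=False)
def collect_event(request):
    # With block=False the decorator only annotates the request,
    # so we can return a friendly 429 ourselves.
    if getattr(request, "limited", False):
        return JsonResponse({"detail": "Too many requests, slow down."}, status=429)
    return JsonResponse({"ok": True})
```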

In the example above, if the client attempts to POST to this particular endpoint more than 5 times per minute, the subsequent call will get rejected with an HTTP 429 Too Many Requests status code.

The friendly error message you'd get when being rate-limited.

Django gives me an admin panel for all my models for free. It's built-in, and it's pretty handy for inspecting data for customer support work on the go.

Django's built-in admin panel is very useful for doing customer support on the go.

I added actions to help me manage things from the UI. Things like blocking access to suspicious accounts, sending out announcement emails, and approving full account deletion requests (first a soft delete, and within 72 hours a full destroy).

Security-wise: only staff users are able to access the panel (that's just me), and I'm planning to add 2FA for extra security on all accounts.

Additionally, every time a user logs in, I send an automatic security email with details about the new session to the account's email. Right now I send it on every new login, but I might change it in the future to skip known devices. It's not a very "MVP feature", but I care about security and it was not complicated to add. At least I'd be warned if someone logged in to my account.

Of course, there's a lot more to hardening an application than this, but that's out of the scope of this post.

Example security activity email you might receive when logging in.

Another interesting use case is that I run a lot of different scheduled jobs as part of my SaaS. These are things like generating daily reports for my customers, calculating usage stats every 15 minutes, sending staff emails (I get a daily email with the most important metrics) and whatnot.

My setup is actually pretty simple: I just have a few Celery workers and a Celery beat scheduler running in the cluster. They are configured to use Redis as the task queue. It took me an afternoon to set it up once, and luckily I haven't had any issues so far.
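For flavor, a hedged sketch of what such a beat schedule can look like; the task paths and broker URL are made up, not the author's actual configuration:

```python
from celery import Celery
from celery.schedules import crontab

# Broker URL assumes an in-cluster Redis service; adjust to your own setup.
app = Celery("panelbear", broker="redis://my-redis.panelbear:6379/0")

app.conf.beat_schedule = {
    "calculate-usage-stats": {
        "task": "analytics.tasks.calculate_usage_stats",  # hypothetical task
        "schedule": crontab(minute="*/15"),               # every 15 minutes
    },
    "send-daily-staff-email": {
        "task": "staff.tasks.send_daily_report",          # hypothetical task
        "schedule": crontab(hour=7, minute=0),            # once a day
    },
}
```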

I want to get notified via SMS/Slack/Email when a scheduled task is not running as expected. For example when the weekly reports task is stuck or significantly delayed. For that I use Healthchecks.io, but check out Cronitor and CronHub too; I've been hearing great things about them as well.

To abstract their API, I wrote a small Python snippet to automate the monitor creation and status pinging:
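The original snippet isn't reproduced here, so the following is only a rough reconstruction against the Healthchecks.io Management API as I understand it; double-check the exact endpoint and field names in their docs:

```python
import os
import requests

HC_API = "https://healthchecks.io/api/v1/checks/"
HC_API_KEY = os.environ["HEALTHCHECKS_API_KEY"]

def ensure_monitor(name: str, schedule: str) -> str:
    """Create or update a check for a scheduled task and return its ping URL."""
    resp = requests.post(
        HC_API,
        headers={"X-Api-Key": HC_API_KEY},
        json={"name": name, "schedule": schedule, "unique": ["name"]},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["ping_url"]

def ping(ping_url: str) -> None:
    """Tell Healthchecks.io the task ran; a missed ping triggers an alert."""
    requests.get(ping_url, timeout=10)

# Usage around a scheduled job:
# url = ensure_monitor("weekly-reports", "0 8 * * MON")
# ... run the job ...
# ping(url)
```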

All my applications are configured via environment variables. Old school, but portable and well supported. For example, in my Django settings.py I'd set up a variable with a default value:
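For example, something along these lines, with a hypothetical setting name:

```python
# settings.py
import os

# Read from the environment, falling back to a sane default.
ANALYTICS_EVENT_LIMIT = int(os.environ.get("ANALYTICS_EVENT_LIMIT", "100000"))
```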

And use it anywhere in my code like this:
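A hypothetical usage sketch, continuing the setting name from above:

```python
from django.conf import settings

def over_limit(events_used: int) -> bool:
    # The value always comes from django.conf.settings, regardless of
    # whether it was overridden by the environment.
    return events_used > settings.ANALYTICS_EVENT_LIMIT
```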

I can override the environment variable in my Kubernetes configmap:

The way secrets are handled is pretty interesting: I want to also commit them to my infrastructure repo, alongside other config files, but secrets should be encrypted.

For that I use kubeseal in Kubernetes. This component uses asymmetric crypto to encrypt my secrets, and only a cluster authorized to access the decryption keys can decrypt them.

For example this is what you might find in my infrastructure repo:

The cluster will automatically decrypt the secrets and pass them to the corresponding container as an environment variable:

To protect the secrets within the cluster, I use AWS-managed encryption keys via KMS, which are rotated regularly. This is a single setting when creating the Kubernetes cluster, and it's fully managed.

Operationally what this means is that I write the secrets as environment variables in a Kubernetes manifest, I then run a command to encrypt them before committing, and push my changes.

The secrets are deployed within a few seconds, and the cluster will take care of automatically decrypting them before running my containers.

For experiments I run a vanilla Postgres container within the cluster, and a Kubernetes cronjob that does daily backups to S3. This helps keep my costs down, and it's pretty simple for just starting out.

However, as a project grows, like Panelbear, I move the database out of the cluster into RDS, and let AWS take care of encrypted backups, security updates and all the other stuff that's no fun to mess up.

For added security, the databases managed by AWS are still deployed within my private network, so they're unreachable via the public internet.

I rely on ClickHouse for efficient storage and (soft) real-time queries over the analytics data in Panelbear. It's a fantastic columnar database, incredibly fast, and when you structure your data well you can achieve high compression ratios (less storage costs = higher margins).

I currently self-host a ClickHouse instance within my Kubernetes cluster. I use a StatefulSet with encrypted volume keys managed by AWS. I have a Kubernetes CronJob that periodically backs up all data in an efficient columnar format to S3. In case of disaster recovery, I have a couple of scripts to manually back up and restore the data from S3.

ClickHouse has been rock-solid so far, and it's an impressive piece of software. It's the only tool I wasn't already familiar with when I started my SaaS, but thanks to their docs I was able to get up and running pretty quickly.

I think there's a lot of low hanging fruit in case I wanted to squeeze out even more performance (e.g. optimizing the field types for better compression, pre-computing materialized tables and tuning the instance type), but it's good enough for now.

Besides Django, I also run containers for Redis, ClickHouse, and NextJS, among other things. These containers have to talk to each other somehow, and that somehow is via the built-in service discovery in Kubernetes.

It's pretty simple: I define a Service resource for the container and Kubernetes automatically manages DNS records within the cluster to route traffic to the corresponding service.

For example, given a Redis service exposed within the cluster:

I can access this Redis instance anywhere from my cluster via the following URL:

Notice the service name and the project namespace are part of the URL. That makes it really easy for all your cluster services to talk to each other, regardless of where in the cluster they run.

For example, here's how I'd configure Django via environment variables to use my in-cluster Redis:
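A hedged sketch of that configuration; the service/namespace DNS name and the django-redis backend are assumptions, not necessarily what Panelbear uses:

```python
# settings.py
import os

# In-cluster DNS name: service "my-redis" in the "panelbear" namespace.
REDIS_URL = os.environ.get("REDIS_URL", "redis://my-redis.panelbear:6379/0")

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",  # assumes the django-redis package
        "LOCATION": REDIS_URL,
    }
}
```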

Kubernetes will automatically keep the DNS records in sync with healthy pods, even as containers get moved across nodes during autoscaling. The way this works behind the scenes is pretty interesting, but out of the scope of this post. Here's a good explanation in case you find it interesting.

I want version-controlled, reproducible infrastructure that I can create and destroy with a few simple commands.

To achieve this, I use Docker, Terraform and Kubernetes manifests in a monorepo that contains all things infrastructure, even across multiple projects. And for each application/project I use a separate git repo, but this code is not aware of the environment it will run on.

If you're familiar with The Twelve-Factor App this separation may ring a bell or two. Essentially, my application has no knowledge of the exact infrastructure it will run on, and is configured via environment variables.

By describing my infrastructure in a git repo, I don't need to keep track of every little resource and configuration setting in some obscure UI. This enables me to restore my entire stack with a single command in case of disaster recovery.

Here's an example folder structure of what you might find in the infra monorepo:

Another advantage of this setup is that all the moving pieces are described in one place. I can configure and manage reusable components like centralized logging, application monitoring, and encrypted secrets to name a few.

...

Read the original on anthonynsimon.com »

4 1,011 shares, 41 trendiness

Embrace the Grind

There's this card trick I saw that I still think about all the time. It's a simple presentation (which I've further simplified here for clarity): a volunteer chooses a card and seals the card in an envelope. Then, the magician invites the volunteer to choose some tea. There are dozens of boxes of tea, all sealed in plastic. The volunteer chooses one, rips the plastic, and chooses one of the sealed packets containing the tea bags. When the volunteer rips open the packet … inside is their card.

⚠️ If you don't want to know how the trick is done, stop reading now.

The secret is mundane, but to me it's thrilling. The card choice is a force. But the choice from those dozens of boxes of tea really is a free choice, and the choice of tea bag within that box is also a free choice. There's no sleight-of-hand: the magician doesn't touch the tea boxes or the teabag that the volunteer chooses. The card really is inside of that sealed tea packet.

The trick is all in the preparation. Before the trick, the magician buys dozens of boxes of tea, opens every single one, unwraps each tea packet. Puts a Three of Clubs into each packet. Reseals the packet. Puts the packets back in the box. Re-seals each box. And repeats this hundreds of times. This takes hours — days, even.

The only "trick" is that this preparation seems so boring, so impossibly tedious, that when we see the effect we can't imagine that anyone would do something so tedious just for this simple effect.

Teller writes about this in an article about the seven secrets of magic:

You will be fooled by a trick if it involves more time, money and practice than you (or any other sane onlooker) would be willing to invest. My partner, Penn, and I once produced 500 live cockroaches from a top hat on the desk of talk-show host David Letterman. To prepare this took weeks. We hired an entomologist who provided slow-moving, camera-friendly cockroaches (the kind from under your stove don't hang around for close-ups) and taught us to pick the bugs up without screaming like preadolescent girls. Then we built a secret compartment out of foam-core (one of the few materials cockroaches can't cling to) and worked out a devious routine for sneaking the compartment into the hat. More trouble than the trick was worth? To you, probably. But not to magicians.

I often have people newer to the tech industry ask me for secrets to success. There aren't many, really, but this secret — being willing to do something so terrifically tedious that it appears to be magic — works in tech too.

We're an industry obsessed with automation, with streamlining, with efficiency. One of the foundational texts of our engineering culture, Larry Wall's virtues of the programmer, includes laziness:

Laziness: The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful and document what you wrote so you don't have to answer so many questions about it.

I don't disagree: being able to offload repetitive tasks to a program is one of the best things about knowing how to code. However, sometimes problems can't be solved by automation. If you're willing to embrace the grind you'll look like a magician.

For example, I once joined a team maintaining a system that was drowning in bugs. There were something like two thousand open bug reports. Nothing was tagged, categorized, or prioritized. The team couldn't agree on which issues to tackle. They were stuck essentially pulling bugs at random, but it was never clear if that issue was important. New bug reports couldn't be triaged effectively because finding duplicates was nearly impossible. So the open ticket count continued to climb. The team had been stalled for months. I was tasked with solving the problem: get the team unstuck, reverse the trend in the open ticket count, and come up with a way to eventually drive it down to zero.

So I used the same trick as the magician, which is no trick at all: I did the work. I printed out all the issues - one page of paper for each issue. I read each page. I took over a huge room and started making piles on the floor. I wrote tags on sticky notes and stuck them to piles. I shuffled pages from one stack to another. I wrote ticket numbers on whiteboards in long columns; I imagined I was Ben Affleck in The Accountant. I spent almost three weeks in that room, and emerged with every bug report reviewed, tagged, categorized, and prioritized.

The trend reversed immediately after that: we were able to close several hundred tickets immediately as duplicates, and triaging new issues now took minutes instead of a day. It took I think a year or more to drive the count to zero, but it was all fairly smooth sailing. People said I did the impossible, but that's wrong: I merely did something so boring that nobody else had been willing to do it.

Sometimes, programming feels like magic: you chant some arcane incantation and a fleet of robots do your bidding. But sometimes, magic is mundane. If you're willing to embrace the grind, you can pull off the impossible.

...

Read the original on jacobian.org »

5 946 shares, 38 trendiness

Screw it, I'll host it myself

It's all fun and games until someone loses an eye. Likewise, it's all fun and games until someone loses access to their private and/or business data because they trusted it to someone else.

You don't have to be an expert seeker to be able to quickly duck out (it's like the verb 'googling', but used to describe searching the interwebs through a decent search engine, like DuckDuckGo) all the stories about little guys being fucked over by "don't be evil" types of corporate behemoths.

You know what? Let me duck it out for you:

A drinking game recommendation (careful, it may and probably will lead to alcoholism): take a shot every time you find out how someone's data has been locked and their business was jeopardized because they didn't own, or at least back up, their data.

Owning your data is more than just having backup copies of your digital information. It's also about control and privacy. It's about trust. I don't know about you, but I don't trust a lot of services with my data (the ones I do are few and far between).

As this is a post about self-hosting, I won't start preaching (trust me, it's hard for me not to) how you should consider switching from WhatsApp to Signal, Google Maps to OpenStreetMap, or how you should quit Instagram and Facebook. You're creating a lot of data there, and they don't do pretty things with it. Fuck, I'm already preaching. Sorry about that.

Note: I'm not fully off social media. I'm using Twitter and LinkedIn. Everything on Twitter is public/disposable and I don't use their private messaging feature. LinkedIn is there for professional correspondence and I will start to taper it off slowly, but that one is tough to quit.

I'm aware most people are not power users, and not everyone will want to spend time learning to set up their own alternatives to the services mentioned and create the backup strategies as I've done. It does take some time (mostly to set everything up) and some money. If you'll take anything from this post, it should be to always back up your data (yes, even though it's replicated across 5 of Google's datacenters). If shit hits the fan, it may take you a while to adopt new tools or flows, but you will still have your backup. Do your backups early and often.

I created a simple diagram to roughly show how my personal setup works. Before you say anything — I'm aware that there's a group of people that wouldn't consider my self-hosting as pure self-hosting. I'm using Vultr* to host my web-facing applications and not a server in my house. Unfortunately, the current situation doesn't allow me to do that (yet).

So, here's the diagram. The detailed explanation continues below.

I've separated the diagram into 4 parts — each part represents a different physical location of the data.

The part that gets the most action is the yellow part, living in the cloud.

I'm living in Germany, so the obvious choice was to spin up my instances in Vultr's* data center in Frankfurt, as ping is the lowest to that center for me.

Right now, I have six compute instances running there. You can see the types of cloud compute instances in the picture below. It's pretty similar to what you would get from DigitalOcean or AWS EC2.

Why did I choose Vultr*? They have pretty good customer service there, and I just happened to stumble on them before DigitalOcean got big and popular and AWS became the leader in the cloud computing game. Having said that, for purely private use, I wouldn't opt for AWS even if I had to choose now. I'll leave it at that.

The breakdown looks like this:

* 2 x $10/mo VPS — several web projects that I run for myself and friends

Everything combined costs me $55 per month.

Nextcloud is the powerhouse of my everyday data flow and manipulation. With the addition of apps, it's a really powerful all-in-one solution that serves as an alternative to the widely popular offerings of the FAANG crowd. Once properly set up, not much maintenance is needed.

* Tasks are the alternative to Todoist or Any.do which I used previously.

* Notes are the alternative to Google Keep. Not as fully featured as Evernote or OneNote, which I also tried out at one point, but it's good enough for me.

* Calendar is the alternative to Google Calendar I used previously.

* Contacts are the alternative to Google/Samsung contacts I used previously.

I'm also able to stream music from Nextcloud to my phone, using Nextcloud Music. For the client, you can use anything compatible with Ampache or Subsonic. My choice is Power Ampache. I'm not streaming a lot of music though. I always have 30-40 GB of MP3s on my phone that I rotate every now and then.

All the data from Nextcloud is in sync with the Synology at my home via CloudSync. A big plus is a nice dark theme for the web UI:

I'm a developer, and more than I need air to breathe and coffee to drink, I need version control. My weapon of choice is git, which is lucky because there are a lot of hosting solutions for it out there. I was thinking about this one for a while and it boiled down to GitLab vs Gitea.

For my needs, GitLab was overkill, so I went with Gitea. It's lightweight, easy to update, and just works. Its interface is clean and easy to understand, and because the UI is similar to that of GitHub, people that I collab with find it a seamless switch. On the negative side, if you want to customize it, it can be a pain in the butt.

Monica is a personal CRM. Some people think I'm weird because I'm using a personal CRM. I find it awesome. After I meet people, I often write down some information about them that I would otherwise forget. I sometimes make notes about longer phone calls if I know the information from the call will come in handy later on. Colleagues' and friends' birthdays, ideas for their presents, things like that — they go into the CRM.

I mention Monica in my post on how you should not ignore rejection emails, where you can see another example of how it helps with my flow.

Kanboard is a free and open-source Kanban project management software. I use it to manage my side projects, but I also use it for keeping track of books I read, some financial planning, study progress tracking, etc. It's written in PHP, it's easily customizable, and it supports multiple users. Usually, when I do some collaborations, I will immediately create an account for that person on both Gitea and Kanboard.

Plausible is my choice for analytics that I use on several websites that I own. It's lightweight, it's open-source, and most importantly — it respects your privacy. There's a how-to that I wrote on how you can install it on an Ubuntu machine yourself. The bonus thing is that I really like the developers' approach to running a business. They have a cool blog where you can read about it.

The development tools that I'm mentioning are basically a bunch of scripts I have developed and gathered over time. Text encoders/decoders, color pickers, WYSIWYG layout builders, a Swagger editor, etc. If I use something often and it's trivial to implement on my own, I'll do it.

The desktop PC and NAS are part of my 'home' region.

The desktop is nothing special. I don't play video games, and I don't do any work that needs a lot of computing performance. It's an 8th-generation i5 with integrated graphics, a 1TB SSD, and 16 gigs of RAM. The OS I'm using is Ubuntu — the latest LTS version. It's installed on both the desktop and laptop.

Everything except the OS and the apps is synced in real time to the Synology by using the Synology Drive Client.

The Synology NAS I'm using is the DS220j. It's not the fastest thing, but again, it works for me. I have two Western Digital Red drives (shocking, huh?), 2TB each.

Every last weekend of the month, I will manually back up all the data to Blu-ray discs. Not once, but twice. One copy goes to a safe storage space at home and the other one ends up at a completely different location.

This is my 'everything is fucked, burnt or stolen' situation remedy. I'm not particularly happy with the physical security I've set up at home, so one of the concerns is the theft of disks and backups. Other than moving to a different location where it would be easier to work on upgrading the physical security, my hands are tied regarding this subject (not for long hopefully).

Other things could happen, like fire, flood, etc. Of course, it's a bit of a hassle, but I believe in being prepared for any type of situation, no matter how improbable it may seem.

When you're self-hosting, it will, naturally, also reflect on the apps that you use on your portable devices. Previously, my phone's home screen was filled with mostly Google apps — Calendar, Keep, Maps, Drive. There were also Dropbox and Spotify/Deezer. It's different now.

I have de-Googled my phone, using /e/ and F-Droid. There are compromises you'll have to make if you choose to go down this path. Sometimes it will go smoothly, but sometimes it will frustrate the hell out of you. It was worth it for me. I value my freedom and privacy so much more than an occasional headache caused by buggy software.

This is the list of the apps I use frequently that are related to self-hosting:

* Power Ampache — lets me stream music from my cloud

* PulseMusic — my main music app that I use to listen to the music collection stored directly on my phone (30-40GB at any time that I rotate from time to time)

* Nextcloud — this is the sync client for the phone and a file browser

* K-9 Mail — a really, really ugly-looking email client that is also the best email client for Android I have ever used

As I mentioned previously, my laptop is running the latest Ubuntu LTS, just like the desktop PC. To have things partially synced to Nextcloud, I'm using the official desktop client. Listing the other tools that I use as a developer that may be tangentially related to self-hosting would be another two thousand words, so I won't go into that right now.

Is it worth the time and hassle? Only you can answer that for yourself.

Researching alternatives to commercial cloud offerings and setting everything up surely took some time. I didn't track it, so I can't say precisely how much time, but it was definitely in the double digits. If I had to guess, I would say ~40 hours.

Luckily, after that phase, things run (mostly) without any interruption. I have a monthly reminder to check for updates and apply them to the software I'm running. I don't bother with minor updates, so if it's not broken, I'm not fixing it.

If I motivate even one person to at least consider the option of self-hosting, I will be happy. Feel free to drop me a message if you decide to do that!

* Links to Vultr contain my referral code, which means that if you choose to subscribe to Vultr after you clicked on that link, it will earn me a small commission.

...

Read the original on www.markozivanovic.com »

6 843 shares, 146 trendiness

Just Be Rich 🤷‍♂️

No one wants to be the bad guy.

When narratives begin to shift and the once good guys are labelled as bad, it's not surprising they fight back. They'll point to criticisms as exaggerations. Their faults as misunderstandings.

Today's freshly ordained bad guys are the investors and CEOs of Silicon Valley.

Once championed as the flagbearers of innovation and democratization, now they're viewed as new versions of the monopolies of old, and they're fighting back.

The title of Paul Graham's essay, How People Get Rich Now, didn't prepare me for the real goal of his words. It's less a tutorial or analysis and more a thinly veiled attempt to ease concerns about wealth inequality.

What he fails to mention is that concerns about wealth inequality aren't concerned with how wealth was generated but rather the growing wealth gap that has accelerated in recent decades. Tech has made startups both cheaper and easier, but only for a small percentage of people. And when a select group of people have an advantage that others don't, it's compounded over time.

Paul paints a rosy picture but doesn't mention that incomes for lower and middle-class families have fallen since the 80s. This golden age of entrepreneurship hasn't benefitted the vast majority of people, and the increase in the Gini coefficient isn't simply due to more companies being started. The rich are getting richer and the poor are getting poorer.

And there we have it. The slight injection of his true ideology, relegated to the notes section and vague enough that some might ignore it. But keep in mind this is the same guy who argued against a wealth tax. His seemingly impartial and logical writing attempts to hide his true intentions.

Is this really about how people get rich, or about why we should all be happy that people like PG are getting richer while tons of people are struggling to meet their basic needs? Wealth inequality is just a radical left fairy tale to villainize the hard-working 1%. We could all be rich too, it's so much easier now. Just pull yourself up by your bootstraps.

There's no question that it's easier now than ever to start a new business and reach your market. The internet has had a democratizing effect in this regard. But it's also obvious to anyone outside the SV bubble that it's still only accessible to a small minority of people. Most people don't have the safety net or mental bandwidth to even consider entrepreneurship. It is not a panacea for the masses.

But to use that fact to push the false claim that wealth inequality is solely due to more startups and not a real problem says a lot. This essay is less about how people get rich and more about why it's okay that people like PG are getting rich. They're better than the richest people of 1960. And we can join them. We just need to stop complaining and just be rich instead.

...

Read the original on keenen.xyz »

7 841 shares, 26 trendiness

Et tu, Signal?

Many technologists viscerally felt yesterday's announcement as a punch to the gut when we heard that the Signal messaging app was bundling an embedded cryptocurrency. This news really cut to the heart of what many technologists have felt before when we as loyal users have been exploited and betrayed by corporations, but this time it felt much deeper because it introduced a conflict of interest from our fellow technologists that we truly believed were advancing a cause many of us also believed in. So many of us have spent significant time and social capital moving our friends and family away from the exploitative data siphon platforms that Facebook et al offer, and on to Signal in the hopes of breaking the cycle of commercial exploitation of our online relationships. And some of us feel used.

Signal users are overwhelmingly tech-savvy consumers and we're not idiots. Do they think we don't see through the thinly veiled pump and dump scheme that's proposed? It's an old scam with a new face.

Allegedly the controlling entity prints 250 million units of some artificially scarce trashcoin called MOB (coincidence?), of which the issuing organization controls 85% of the supply. This token then floats on a shady offshore cryptocurrency exchange hiding in the Cayman Islands or the Bahamas, where users can buy and exchange the token. The token is wash traded back and forth by insiders and the exchange itself to artificially pump up the price before it's dumped on users in the UK to buy and allegedly use as "payments". All of this while insiders are free to silently use information asymmetry to cash out on the influx of pumped hype-driven buys before the token crashes in value. Did I mention that the exchange that floats the token is the primary investor in the company itself? Does anyone else see a major conflict of interest here?

Let it be said that everything here is probably entirely legal, or there simply is no precedent yet. The question everyone is asking before these projects launch now though is: should it be?

I think I speak for many technologists when I say that any bolted-on cryptocurrency monetization scheme smells like a giant pile of rubbish and feels enormously user-exploitative. We've seen this before; after all, Telegram tried the same thing in an ICO that imploded when the SEC shut them down, and Facebook famously tried and failed to monetize WhatsApp through their decentralized-but-not-really digital money market fund project.

The whole Libra/Diem token (or whatever they're calling its remains this week) was a failed Facebook initiative exploiting the gaping regulatory loophole where if you simply call yourself a cryptocurrency platform (regardless of any technology) you can effectively function as a shadow bank and money transmitter with no license, all while performing roughly the same function as a bank but with magic monopoly money that you can print with no oversight while your customers assume full counterparty risk. If that sounds like a terrible idea, it's because it is. But we fully expect that level of evil behavior from Facebookers because that's kind of their thing.

The sad part of this entire project is that it launched in the UK—likely because it would be blatantly illegal in the United States—where retail transfers are ubiquitous, instant, and cheap. The digital sterling held at our high street bank or mobile banking app works very well as a currency, and nobody needs or wants to buy a dedicated trashcoin for every single app on our mobiles. This is all just bringing us back to some stone age barter system for no reason.

The larger trend is of activist investors trying to turn every app with a large userbase into a coin-operated slot machine which forces users to buy from a supply of penny-stock-like tokens that are thinly traded and which investors and market makers collude on to manipulate prices for their own gain. As we saw with the downfall of Keybase, users don't want a tokenized pay-to-play dystopia, and most technical users rightly associate cryptocurrency with scams, and this association will act like a lightning rod for legal scrutiny. This association weakens the entire core value proposition of the Signal app for no reason other than making a few insiders richer.

Signal is still a great piece of software. Just do one thing and do it well: be the trusted de facto platform for private messaging that empowers dissidents, journalists and grandma all to communicate freely with the same guarantees of privacy. Don't become a dodgy money transmitter business. This is not the way.

...

Read the original on www.stephendiehl.com »

8 707 shares, 33 trendiness

How People Get Rich Now

April 2021

Every year since 1982, Forbes magazine has published a list of the richest Americans. If we compare the 100 richest people in 1982 to the 100 richest in 2020, we notice some big differences.

In 1982 the most common source of wealth was inheritance. Of the 100 richest people, 60 inherited from an ancestor. There were 10 du Pont heirs alone. By 2020 the number of heirs had been cut in half, accounting for only 27 of the biggest 100 fortunes.

Why would the percentage of heirs decrease? Not because inheritance taxes increased. In fact, they decreased significantly during this period. The reason the percentage of heirs has decreased is not that fewer people are inheriting great fortunes, but that more people are making them.

How are people making these new fortunes? Roughly 3/4 by starting companies and 1/4 by investing. Of the 73 new fortunes in 2020, 56 derive from founders' or early employees' equity (52 founders, 2 early employees, and 2 wives of founders), and 17 from managing investment funds.

There were no fund managers among the 100 richest Americans in 1982. Hedge funds and private equity firms existed in 1982, but none of their founders were rich enough yet to make it into the top 100. Two things changed: fund managers discovered new ways to generate high returns, and more investors were willing to trust them with their money.

But the main source of new fortunes now is starting companies, and when you look at the data, you see big changes there too. People get richer from starting companies now than they did in 1982, because the companies do different things.

In 1982, there were two dominant sources of new wealth: oil and real estate. Of the 40 new fortunes in 1982, at least 24 were due primarily to oil or real estate. Now only a small number are: of the 73 new fortunes in 2020, 4 were due to real estate and only 2 to oil.

By 2020 the biggest source of new wealth was what are sometimes called "tech" companies. Of the 73 new fortunes, about 30 derive from such companies. These are particularly common among the richest of the rich: 8 of the top 10 fortunes in 2020 were new fortunes of this type.

Arguably it's slightly misleading to treat tech as a category. Isn't Amazon really a retailer, and Tesla a car maker? Yes and no. Maybe in 50 years, when what we call tech is taken for granted, it won't seem right to put these two businesses in the same category. But at the moment at least, there is definitely something they share in common that distinguishes them. What retailer starts AWS? What car maker is run by someone who also has a rocket company?

The tech companies behind the top 100 fortunes also form a well-differentiated group in the sense that they're all companies that venture capitalists would readily invest in, and the others mostly not. And there's a reason why: these are mostly companies that win by having better technology, rather than just a CEO who's really driven and good at making deals.

To that extent, the rise of the tech companies represents a qualitative change. The oil and real estate magnates of the 1982 Forbes 400 didn't win by making better technology. They won by being really driven and good at making deals. And indeed, that way of getting rich is so old that it predates the Industrial Revolution. The courtiers who got rich in the (nominal) service of European royal houses in the 16th and 17th centuries were also, as a rule, really driven and good at making deals.

People who don't look any deeper than the Gini coefficient look back on the world of 1982 as the good old days, because those who got rich then didn't get as rich. But if you dig into how they got rich, the old days don't look so good. In 1982, 84% of the richest 100 people got rich by inheritance, extracting natural resources, or doing real estate deals. Is that really better than a world in which the richest people get rich by starting tech companies?

Why are people starting so many more new companies than they used to, and why are they getting so rich from it? The answer to the first question, curiously enough, is that it's misphrased. We shouldn't be asking why people are starting companies, but why they're starting companies again.

In 1892, the New York Herald Tribune compiled a list of all the millionaires in America. They found 4047 of them. How many had inherited their wealth then? Only about 20% — less than the proportion of heirs today. And when you investigate the sources of the new fortunes, 1892 looks even more like today. Hugh Rockoff found that "many of the richest … gained their initial edge from the new technology of mass production."

So it's not 2020 that's the anomaly here, but 1982. The real question is why so few people had gotten rich from starting companies in 1982. And the answer is that even as the Herald Tribune's list was being compiled, a wave of consolidation was sweeping through the American economy. In the late 19th and early 20th centuries, financiers like J. P. Morgan combined thousands of smaller companies into a few hundred giant ones with commanding economies of scale. By the end of World War II, as Michael Lind writes, "the major sectors of the economy were either organized as government-backed cartels or dominated by a few oligopolistic corporations."

In 1960, most of the people who start startups today would have gone to work for one of them. You could get rich from starting your own company in 1890 and in 2020, but in 1960 it was not really a viable option. You couldn't break through the oligopolies to get at the markets. So the prestigious route in 1960 was not to start your own company, but to work your way up the corporate ladder at an existing one.

Making everyone a corporate employee decreased economic inequality (and every other kind of variation), but if your model of normal

...

Read the original on paulgraham.com »

9 651 shares, 27 trendiness

Some of the reasons I hope work from home continues and I never have to return to an office — Ryan Mercer's Thoughts

Please don't make me go back: Some of the reasons I hope work from home continues and I never have to return to an office

I just hope my employer continues to allow me to work remotely.

* Have been exceeding production for the past year.
* Don't have to share a keyboard/mouse/desk with another shift (gross).
* Do not spend an hour a day (+/- 10 minutes) driving, and save all the money associated with that commute.
* Do not have to worry about driving on snowy/icy roads, as my job requires us to be there unless there is a county emergency making travel by road illegal.
* Do not have to go to work sick, because they act like you murdered 30 children if you call in sick. I can work comfortably while isolated from my coworkers.
* Do not have to be exposed to multiple sick coworkers at any given time.
* Do not have to fight 100-130 people for 1 of 3 microwaves on my 30-minute lunch break, assuming someone hasn't taken my food out of the refrigerator and eaten it/thrown it away/left it sitting out to make room for theirs.
* Do not have to use a restroom that, with frequency (to the point of management threatening, multiple times over the years, supervised restroom usage for adults), has boogers/urine/feces/blood on the floors/walls/toilet seats.
* Don't have to pack into a noisy open office where I can't focus as people are constantly talking, walking by, burning popcorn, or microwaving ox tail and greens that smell worse than burnt popcorn.
* Don't have to deal with ear-damaging fire drills, or cram into tiny offices with 20 co-workers for tornado drills/warnings.
* Don't have to deal with an HVAC system that regularly gifts temperatures in the mid-60s Fahrenheit in the winter and mid-80s Fahrenheit in the summer inside an office where business casual is required.
* Don't have to drink milky-white water from the tap AFTER it comes out of a filter that has algae growing in the transparent tube feeding the faucet.
* Don't have to deal with coworkers damaging vehicles. Just about everyone's car has multiple door dings. Personally, I have no less than 1 dozen that, I know for a fact, occurred at work.
* Don't have to deal with random coworkers coming to my desk/cornering me at the urinal and blabbing incessantly about their divorce/kids/wife's boyfriend/vet bill; this was a regular occurrence in person and has happened exactly once via Teams since work from home started.
* Don't have to breathe jet exhaust that inevitably makes its way into our building through the cracks around the doors that you can pass multiple pieces of paper through.
* Can be here to sign for packages.
* Can eat healthier because I have an entire kitchen at my disposal during my half-hour lunch.
* Am far more likely to work overtime when needed as I'm saving an hour commuting!
* Don't need to spend tens of seconds inspecting the toilet seat and/or constructing an elaborate ring of squares to ease my mind if I need to poop.
* Don't have a desk and chair forced upon me; instead, I have the freedom to pick a desk and chair that I like, that fits my needs.

...

Read the original on www.ryanmercer.com »

10 644 shares, 26 trendiness, words and minutes reading time

Docker without Docker · Fly


We’re Fly.io. We take con­tainer im­ages and run them on our hard­ware around the world. It’s pretty neat, and you should check it out ; with an al­ready-work­ing Docker con­tainer, you can be up and run­ning on Fly in well un­der 10 min­utes.

Even though most of our users de­liver soft­ware to us as Docker con­tain­ers, we don’t use Docker to run them. Docker is great, but we’re high-den­sity mul­ti­tenant, and de­spite strides, Docker’s iso­la­tion is­n’t strong enough for that. So, in­stead, we trans­mo­grify con­tainer im­ages into Firecracker mi­cro-VMs.

They do their best to make it look a lot more com­pli­cated, but OCI im­ages — OCI is the stan­dard­ized con­tainer for­mat used by Docker — are pretty sim­ple. An OCI im­age is just a stack of tar­balls.

Backing up: most people build images from Dockerfiles. A useful way to look at a Dockerfile is as a series of shell commands, each generating a tarball; we call these “layers”. To rehydrate a container from its image, we just start with the first layer and unpack one on top of the next.

You can write a shell script to pull a Docker con­tainer from its reg­istry, and that might clar­ify. Start with some con­fig­u­ra­tion; by de­fault, we’ll grab the base im­age for golang:

We need to au­then­ti­cate to pull pub­lic im­ages from a Docker reg­istry — this is bor­ing but rel­e­vant to the next sec­tion — and that’s easy:

That to­ken will al­low us to grab the manifest” for the con­tainer, which is a JSON in­dex of the parts of a con­tainer.

The first query we make gives us the manifest list”, which gives us point­ers to im­ages for each sup­ported ar­chi­tec­ture:

Pull the di­gest out of the match­ing ar­chi­tec­ture en­try and per­form the same fetch again with it as an ar­gu­ment, and we get the man­i­fest: JSON point­ers to each of the layer tar­balls:

It’s as easy to grab the ac­tual data as­so­ci­ated with these en­tries as you’d hope:

And with those pieces in place, pulling an im­age is sim­ply:

Unpack the tar­balls in or­der and you’ve got the filesys­tem lay­out the con­tainer ex­pects to run in. Pull the config” JSON and you’ve got the en­try­point to run for the con­tainer; you could, I guess, pull and run a Docker con­tainer with noth­ing but a shell script, which I’m prob­a­bly the 1,000th per­son to point out. At any rate, here’s the whole thing.
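
To make that concrete, here is a rough, self-contained Go sketch of the same flow — token, manifest list, manifest, layer blobs — against Docker Hub’s public endpoints. It is not the script the article links to; error handling is mostly elided, and it just saves the layer tarballs to disk rather than unpacking them:

```go
// Sketch of pulling an OCI image straight from a registry:
// 1) anonymous pull token, 2) manifest list, 3) per-arch manifest, 4) layer blobs.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

const (
	repo = "library/golang" // image to pull
	tag  = "latest"
)

// getJSON fetches a URL (with optional bearer token and Accept header) and decodes JSON.
func getJSON(url, token, accept string, out interface{}) error {
	req, _ := http.NewRequest("GET", url, nil)
	if token != "" {
		req.Header.Set("Authorization", "Bearer "+token)
	}
	if accept != "" {
		req.Header.Set("Accept", accept)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return json.NewDecoder(resp.Body).Decode(out)
}

func main() {
	// 1. Anonymous pull token for the repository. (Error handling elided throughout.)
	var auth struct{ Token string }
	getJSON("https://auth.docker.io/token?service=registry.docker.io&scope=repository:"+repo+":pull", "", "", &auth)

	// 2. Manifest list: one entry per architecture/OS.
	var list struct {
		Manifests []struct {
			Digest   string
			Platform struct{ Architecture, OS string }
		}
	}
	getJSON("https://registry-1.docker.io/v2/"+repo+"/manifests/"+tag, auth.Token,
		"application/vnd.docker.distribution.manifest.list.v2+json", &list)

	var digest string
	for _, m := range list.Manifests {
		if m.Platform.Architecture == "amd64" && m.Platform.OS == "linux" {
			digest = m.Digest
		}
	}

	// 3. The architecture-specific manifest: a config blob plus the layer tarballs.
	var manifest struct {
		Config struct{ Digest string }
		Layers []struct {
			Digest string
			Size   int64
		}
	}
	getJSON("https://registry-1.docker.io/v2/"+repo+"/manifests/"+digest, auth.Token,
		"application/vnd.docker.distribution.manifest.v2+json", &manifest)

	// 4. Each layer is just a (gzipped) tarball; unpack them in order and
	//    you have the container's root filesystem.
	for i, layer := range manifest.Layers {
		req, _ := http.NewRequest("GET", "https://registry-1.docker.io/v2/"+repo+"/blobs/"+layer.Digest, nil)
		req.Header.Set("Authorization", "Bearer "+auth.Token)
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		f, _ := os.Create(fmt.Sprintf("layer-%d.tar.gz", i))
		io.Copy(f, resp.Body)
		f.Close()
		resp.Body.Close()
	}
}
```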

You’re likely of one of two mind­sets about this: (1) that it’s ex­tremely Unixy and thus ex­cel­lent, or (2) that it’s ex­tremely Unixy and thus hor­ri­fy­ing.

Unix tar is prob­lem­atic. Summing up Aleksa Sarai: tar is­n’t well stan­dard­ized, can be un­pre­dictable, and is bad at ran­dom ac­cess and in­cre­men­tal up­dates. Tiny changes to large files be­tween lay­ers point­lessly du­pli­cate those files; the poor job tar does man­ag­ing con­tainer stor­age is part of why peo­ple burn so much time op­ti­miz­ing con­tainer im­age sizes.

Another fun de­tail is that OCI con­tain­ers share a se­cu­rity foot­gun with git repos­i­to­ries: it’s easy to ac­ci­den­tally build a se­cret into a pub­lic con­tainer, and then in­ad­ver­tently hide it with an up­date in a later im­age.

We’re of a third mind­set re­gard­ing OCI im­ages, which is that they are hor­ri­fy­ing, and that’s lib­er­at­ing. They work pretty well in prac­tice! Look how far they’ve taken us! Relax and make crap­pier de­signs; they’re all you prob­a­bly need.

Back to Fly.io. Our users need to give us OCI con­tain­ers, so that we can un­pack and run them. There’s stan­dard Docker tool­ing to do that, and we use it: we host a Docker reg­istry our users push to.

Running an instance of the Docker registry is very easy. You can do it right now; docker pull registry && docker run registry. But our needs are a little more complicated than the standard Docker registry’s: we need multi-tenancy, and authorization that wraps around our API. This turns out not to be hard, and we can walk you through it.

A thing to know off the bat: our users drive Fly.io with a com­mand line util­ity called fly­ctl. fly­ctl is a Go pro­gram (with pub­lic source) that runs on Linux, ma­cOS, and Windows. A nice thing about work­ing in Go in a con­tainer en­vi­ron­ment is that the whole ecosys­tem is built in the same lan­guage, and you can get a lot of stuff work­ing quickly just by im­port­ing it. So, for in­stance, we can drive our Docker repos­i­tory clientside from fly­ctl just by call­ing into Docker’s clientside li­brary.

If you’re building your own platform and you have the means, I highly recommend the CLI-first tack we took. It is so choice. flyctl made it very easy to add new features, like databases, private networks, volumes, and our bonkers SSH access system.

On the server­side, we started out sim­ple: we ran an in­stance of the stan­dard Docker reg­istry with an au­tho­riz­ing proxy in front of it. fly­ctl man­ages a bearer to­ken and uses the Docker APIs to ini­ti­ate Docker pushes that pass that to­ken; the to­ken au­tho­rizes repos­i­to­ries server­side us­ing calls into our API.
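
For illustration — this is not flyctl’s actual code — driving such a push from Go with Docker’s client library looks roughly like this. The repository name, tag, and environment variable are made up, and the platform bearer token simply rides along as the registry credential:

```go
// Illustrative sketch: push a locally built image to a registry using
// Docker's Go client, passing a platform bearer token as the credential.
package main

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// The registry sees a normal Docker login; the "password" is the platform's
	// API token, which the server side resolves to an organization.
	authJSON, _ := json.Marshal(types.AuthConfig{
		Username: "x",                       // placeholder; the authorizer only cares about the token
		Password: os.Getenv("FLY_API_TOKEN"), // hypothetical env var holding the bearer token
	})

	// Hypothetical repository name and tag.
	out, err := cli.ImagePush(ctx, "registry.fly.io/my-app:deployment-1", types.ImagePushOptions{
		RegistryAuth: base64.URLEncoding.EncodeToString(authJSON),
	})
	if err != nil {
		panic(err)
	}
	defer out.Close()

	io.Copy(os.Stdout, out) // stream push progress to the terminal
}
```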

What we do now is­n’t much more com­pli­cated than that. Instead of run­ning a vanilla Docker reg­istry, we built a cus­tom repos­i­tory server. As with the client, we get a Docker reg­istry im­ple­men­ta­tion just by im­port­ing Docker’s reg­istry code as a Go de­pen­dency.

We’ve ex­tracted and sim­pli­fied some of the Go code we used to build this here, just in case any­one wants to play around with the same idea. This is­n’t our pro­duc­tion code (in par­tic­u­lar, all the ac­tual au­then­ti­ca­tion is ripped out), but it’s not far from it, and as you can see, there’s not much to it.

Our cus­tom server is­n’t ar­chi­tec­turally that dif­fer­ent from the vanilla reg­istry/​proxy sys­tem we had be­fore. We wrap the Docker reg­istry API han­dlers with au­tho­rizer mid­dle­ware that checks to­kens, ref­er­ences, and rewrites repos­i­tory names. There are some very mi­nor gotchas:

* Docker is con­tent-ad­dressed, with blobs named” for their SHA256 hashes, and at­tempts to reuse blobs shared be­tween dif­fer­ent repos­i­to­ries. You need to catch those cross-repos­i­tory mounts and rewrite them.

* Docker’s reg­istry code gen­er­ates URLs with _state pa­ra­me­ters that em­bed ref­er­ences to repos­i­to­ries; those need to get rewrit­ten too. _state is HMAC-tagged; our code just shares the HMAC key be­tween the reg­istry and the au­tho­rizer.

In both cases, the source of truth for who has which repos­i­to­ries and where is the data­base that backs our API server. Your push car­ries a bearer to­ken that we re­solve to an or­ga­ni­za­tion ID, and the name of the repos­i­tory you’re push­ing to, and, well, our de­sign is what you’d prob­a­bly come up with to make that work. I sup­pose my point here is that it’s pretty easy to slide into the Docker ecosys­tem.
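
As a rough sketch of the shape of that wrapper — not our production code; lookupOrgForToken is a made-up stand-in for the call to the platform API, and the reference/_state rewriting is omitted — an authorizing middleware around the registry handlers can look like this:

```go
// Sketch: wrap Docker registry HTTP handlers with an authorizer that
// resolves a bearer token to an organization and scopes repository access.
package main

import (
	"net/http"
	"strings"
)

// lookupOrgForToken is a hypothetical stand-in for resolving a bearer token
// to an organization slug via the platform API ("" means unauthorized).
func lookupOrgForToken(token string) string {
	if token == "test-token" {
		return "personal"
	}
	return ""
}

func authorizer(registry http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		org := lookupOrgForToken(token)
		if org == "" {
			w.WriteHeader(http.StatusUnauthorized)
			return
		}

		// Registry paths look like /v2/<name>/manifests/<ref> or /v2/<name>/blobs/<digest>.
		// Only let an org touch repositories under its own prefix; /v2/ itself is the version check.
		const apiPrefix = "/v2/"
		if strings.HasPrefix(r.URL.Path, apiPrefix) && r.URL.Path != apiPrefix {
			name := strings.TrimPrefix(r.URL.Path, apiPrefix)
			if !strings.HasPrefix(name, org+"/") {
				w.WriteHeader(http.StatusForbidden)
				return
			}
		}

		registry.ServeHTTP(w, r)
	})
}

func main() {
	// In the real thing, the inner handler comes from importing Docker's
	// registry code as a Go dependency; a bare mux stands in for it here.
	inner := http.NewServeMux()
	inner.HandleFunc("/v2/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	http.ListenAndServe(":5000", authorizer(inner))
}
```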

The pieces are on the board:

* We can ac­cept con­tain­ers from users

* We can store and man­age con­tain­ers for dif­fer­ent or­ga­ni­za­tions.

* We’ve got a VMM en­gine, Firecracker, that we’ve writ­ten about al­ready.

What we need to do now is arrange those pieces so that we can run con­tain­ers as Firecracker VMs.

As far as we’re con­cerned, a con­tainer im­age is just a stack of tar­balls and a blob of con­fig­u­ra­tion (we layer ad­di­tional con­fig­u­ra­tion in as well). The tar­balls ex­pand to a di­rec­tory tree for the VM to run in, and the con­fig­u­ra­tion tells us what bi­nary in that filesys­tem to run when the VM starts.

Meanwhile, what Firecracker wants is a set of block de­vices that Linux will mount as it boots up.

There’s an easy way on Linux to take a di­rec­tory tree and turn it into a block de­vice: cre­ate a file-backed loop de­vice, and copy the di­rec­tory tree into it. And that’s how we used to do things. When our or­ches­tra­tor asked to boot up a VM on one of our servers, we would:

* Pull the matching container from the registry.

* Create a loop device to store the container’s filesystem on.

* Unpack the container (in this case, using Docker’s Go libraries) into the mounted loop device.

* Create a second block device and inject our init, kernel, configuration, and other goop into it.

* Track down any persistent volumes attached to the application, unlock them with LUKS, and collect their unlocked block devices.

* Create a TAP device, configure it for our network, and attach BPF code to it.

* Hand all this stuff off to Firecracker and tell it to boot.

This is all a few thou­sand lines of Go.
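
To give a flavor of the loop-device step, here is a stripped-down Go sketch that shells out to standard Linux tools. The paths, the 8G size, and the layer names are placeholders, and all the init/volume/TAP wiring is skipped:

```go
// Sketch of the old approach: turn unpacked container layers into a
// file-backed ext4 block device that Firecracker can use as the VM's root drive.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and panics with its output on failure.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return string(out)
}

func main() {
	img := "/var/vm/rootfs.img" // placeholder path
	mnt := "/mnt/rootfs"

	run("truncate", "-s", "8G", img)  // sparse backing file; size is a placeholder
	run("mkfs.ext4", "-q", "-F", img) // make a filesystem directly on the file

	// Attach the file as a loop device and mount it.
	loopDev := strings.TrimSpace(run("losetup", "--find", "--show", img))
	run("mkdir", "-p", mnt)
	run("mount", loopDev, mnt)

	// Unpack the image's layer tarballs, in order, into the mount.
	for _, layer := range []string{"layer-0.tar.gz", "layer-1.tar.gz"} {
		run("tar", "-xzf", layer, "-C", mnt)
	}

	// Detach; rootfs.img can now be handed to Firecracker as the root drive.
	run("umount", mnt)
	run("losetup", "--detach", loopDev)

	fmt.Println("root drive ready:", img)
}
```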

This sys­tem worked, but was­n’t es­pe­cially fast. Part of the point of Firecracker is to boot so quickly that you (or AWS) can host Lambda func­tions in it and not just long-run­ning pro­grams. A big prob­lem for us was caching; a server in, say, Dallas that’s asked to run a VM for a cus­tomer is very likely to be asked to run more in­stances of that server (Fly.io apps scale triv­ially; if you’ve got 1 of some­thing run­ning and would be hap­pier with 10 of them, you just run fly­ctl scale count 10). We did some caching to try to make this faster, but it was of du­bi­ous ef­fec­tive­ness.

The sys­tem we’d been run­ning was, as far as con­tainer filesys­tems are con­cerned, not a whole lot more so­phis­ti­cated than the shell script at the top of this post. So Jerome re­placed it.

What we do now is run, on each of our servers, an instance of containerd. containerd does a whole bunch of stuff, but we use it as a cache.

If you’re a Unix per­son from the 1990s like I am, and you just re­cently started pay­ing at­ten­tion to how Linux stor­age works again, you’ve prob­a­bly no­ticed that a lot has changed. Sometime over the last 20 years, the block de­vice layer in Linux got in­ter­est­ing. LVM2 can pool raw block de­vices and cre­ate syn­thetic block de­vices on top of them. It can treat block de­vice sizes as an ab­strac­tion, chop­ping a 1TB block de­vice into 1,000 5GB syn­thetic de­vices (so long as you don’t ac­tu­ally use 5GB on all those de­vices!). And it can cre­ate snap­shots, pre­serv­ing the blocks on a de­vice in an­other syn­thetic de­vice, and shar­ing those blocks among re­lated de­vices with copy-on-write se­man­tics.

con­tain­erd knows how to drive all this LVM2 stuff, and while I guess it’s out of fash­ion to use the de­vmap­per back­end these days, it works beau­ti­fully for our pur­poses. So now, to get an im­age, we pull it from the reg­istry into our server-lo­cal con­tain­erd, con­fig­ured to run on an LVM2 thin pool. con­tain­erd man­ages snap­shots for every in­stance of a VM/container that we run. Its API pro­vides a sim­ple lease”-based garbage col­lec­tion scheme; when we boot a VM, we take out a lease on a con­tainer snap­shot (which syn­the­sizes a new block de­vice based on the im­age, which con­tain­erd un­packs for us); LVM2 COW means mul­ti­ple con­tain­ers don’t step on each other. When a VM ter­mi­nates, we sur­ren­der the lease, and con­tain­erd even­tu­ally GCs.
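
Here is a rough sketch of that flow using containerd’s Go client, assuming a containerd instance already configured with a devmapper thin pool. The namespace, snapshot key, and image reference are illustrative, and error handling is reduced to panics:

```go
// Sketch: pull an image through a server-local containerd using the
// devmapper (LVM2 thin pool) snapshotter, take out a lease, and prepare a
// copy-on-write snapshot whose mounts back the new VM's root drive.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/opencontainers/image-spec/identity"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "fly") // illustrative namespace

	// Everything created below is tied to this lease; when the VM exits we
	// surrender it and containerd's GC eventually reclaims the snapshot.
	ctx, done, err := client.WithLease(ctx)
	if err != nil {
		panic(err)
	}
	defer done(ctx)

	// First deployment on this host pulls and unpacks; later ones hit the cache.
	// A public image stands in here for the app's image from the platform registry.
	image, err := client.Pull(ctx, "docker.io/library/golang:latest",
		containerd.WithPullUnpack,
		containerd.WithPullSnapshotter("devmapper"))
	if err != nil {
		panic(err)
	}

	// The unpacked layers form a chain; preparing a snapshot on top of the
	// chain ID synthesizes a fresh, writable, copy-on-write device.
	diffIDs, err := image.RootFS(ctx)
	if err != nil {
		panic(err)
	}
	parent := identity.ChainID(diffIDs).String()

	sn := client.SnapshotService("devmapper")
	mounts, err := sn.Prepare(ctx, "vm-1234-rootfs", parent) // illustrative key
	if err != nil {
		panic(err)
	}

	// With devmapper, the mount source is a device node we can hand to Firecracker.
	for _, m := range mounts {
		fmt.Println(m.Type, m.Source, m.Options)
	}
}
```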

The first de­ploy­ment of a VM/container on one of our servers does some lift­ing, but sub­se­quent de­ploy­ments are light­ning fast (the VM build-and-boot process on a sec­ond de­ploy­ment is faster than the log­ging that we do).

Jerome wrote our init in Rust, and, af­ter be­ing ca­joled by Josh Triplett, we re­leased the code, which you can go read.

The filesys­tem that Firecracker is mount­ing on the snap­shot check­out we cre­ate is pretty raw. The first job our init has is to fill in the blanks to fully pop­u­late the root filesys­tem with the mounts that Linux needs to run nor­mal pro­grams.

We in­ject a con­fig­u­ra­tion file into each VM that car­ries the user, net­work, and en­try­point in­for­ma­tion needed to run the im­age. init reads that and con­fig­ures the sys­tem. We use our own DNS server for pri­vate net­work­ing, so init over­rides re­solv.conf. We run a tiny SSH server for user lo­gins over WireGuard; init spawns and mon­i­tors that process. We spawn and mon­i­tor the en­try point pro­gram. That’s it; that’s an init.

So, that’s about half the idea be­hind Fly.io. We run server hard­ware in racks around the world; those servers are tied to­gether with an or­ches­tra­tion sys­tem that plugs into our API. Our CLI, fly­ctl, uses Docker’s tool­ing to push OCI im­ages to us. Our or­ches­tra­tion sys­tem sends mes­sages to servers to con­vert those OCI im­ages to VMs. It’s all pretty neato, but I hope also kind of easy to get your head wrapped around.

The other half” of Fly is our Anycast net­work, which is a CDN built in Rust that uses BGP4 Anycast rout­ing to di­rect traf­fic to the near­est in­stance of your ap­pli­ca­tion. About which: more later.

...

Read the original on fly.io »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.