10 interesting stories served every morning and every evening.




1. 636 shares, 32 trendiness

How big was the Tonga eruption?

The explosive eruption of the Hunga Tonga-Hunga Ha’apai volcano may be one of the largest ever recorded in such detail. The blast was visible from space, with images of the massive ash plume going viral over the following days. But just how big was it?

The underwater volcano erupted with a deafening explosion on Jan. 15, triggering deadly tsunamis, covering islands in ash, and knocking out communications for Tonga’s 105,000 people.

The event was captured in astonishing detail by satellites, including the NOAA GOES-West satellite, shown below.

Breaking down the stages of the eruption into intervals allows us to plot the expansion of the enormous plume of material that volcanologists call an “umbrella cloud”.

Around the time of the initial eruption, a cloud measuring 38 km (24 miles) wide is thrust into the atmosphere. Its diameter already measures almost twice the length of Manhattan, New York. One hour later, it appears to measure around 650 km wide, including shock waves around its edge.

The scale of the umbrella cloud is comparable to the 1991 Pinatubo eruption in the Philippines and is one of the largest of the satellite era, according to Michigan Tech volcanologist Simon Carn in a NASA blog post.

The satellite images of the event show mostly ocean, with the scattered islands of Tonga and Fiji barely noticeable. Gauging the actual size of the eruption is difficult in such a remote part of the South Pacific.

Here, we take the cloud of volcanic material and place it over well-known land masses and coastlines to get a true sense of just how big the eruption was.

At 650 km in diameter, the cloud would obscure most of Great Britain and the east coast of Ireland. It is almost the same size as mainland Spain.

When compared to parts of the U.S., it would cover a large part of Florida or a section of California from San Francisco to Los Angeles.

If we compare the scale of the eruption to Southeast Asia, it would cover Cambodia and part of Laos, Vietnam, and Thailand, or obscure almost all of North and South Korea.

If we placed the scale of the eruption over Egypt’s Mount Sinai region, it would cover Israel, spilling over into Jordan and the Mediterranean Sea. It is also large enough to obscure the Horn of Africa.

Scientists are still determining the Volcanic Explosivity Index (VEI) of the eruption, a scale from 0 to 8 that measures the explosivity of eruptions. Pinatubo, which is considered similar to the Hunga Tonga-Hunga Haʻapai eruption, scored a 6 on the index.

“Pinatubo produced an eruption column of gas and ash that rose 40 km into the atmosphere, whereas preliminary data on the Tongan eruption is that the gas and ash column was at least 20 km high,” said Raymond Cas, a volcanologist at Monash University in Australia.

All maps and satellite imagery reprojected to orthographic projections for accuracy of measurements.

...

Read the original on graphics.reuters.com »

2. 602 shares, 29 trendiness

max-sixty/prql: PRQL is a modern language for transforming data — a simpler and more powerful SQL

...

Read the original on github.com »

3. 551 shares, 21 trendiness

The Curse of NixOS

I’ve used NixOS as the only OS on my laptop for around three years at this point. Installing it has felt sort of like a curse: on the one hand, it’s so clearly the only operating system that actually gets how package management should be done. After using it, I can’t go back to anything else. On the other hand, it’s extremely complicated, constantly changing software that requires configuration with the second-worst homegrown config programming language I’ve ever used1.

I don’t think that NixOS is the future, but I do absolutely think that the ideas in it are, so I want to write about what I think it gets right and what it gets wrong, in the hopes that other projects can take note. As such, this post will not assume knowledge of NixOS — if you’ve used NixOS significantly, there probably isn’t anything new in here for you.

The fundamental thing that NixOS gets right is that software is never installed globally. All packages are stored in a content-addressable store — for instance, my editor is stored in the directory “/nix/store/frlxim9yz5qx34ap3iaf55caawgdqkip-neovim-0.5.1/” — the binary, global default configuration, libraries, and everything else included in the vim package exists in that directory. Just downloading that doesn’t “install” it, though — there isn’t really such a thing as “installation” in the traditional sense. Instead, I can open a shell that has a $PATH variable set so that it can see neovim. This is quite simple to do — I can run nix-shell -p neovim, and I’ll get dropped into a shell that has neovim in the $PATH.

Crucially, this doesn’t affect any software that doesn’t have its $PATH changed. This means that it’s possible to have as many different versions of the same software package coexisting at the same time as you want, which is impossible with most distributions! You can have one shell with Python 3.7, another with Python 3.9, and install a different set of libraries on both of them. If you have two different pieces of software that have the same dependency, you don’t need to make sure they’re compatible with the same version, since each one can use the version of the dependency that it wants to.

Almost all of the good things about NixOS are natural consequences of this single decision.

For instance, once you have this, rollbacks are trivial — since multiple versions of the same software can coexist, rolling back just means changing which version of the software is used by default. As long as you save the information about what versions you used to be on (which is a tiny amount of information), rolling back is essentially just changing some symlinks. Since the kernel is a package like any other, you can have the bootloader remember the list of different versions, and let people boot into previous configurations just by selecting an older version on the boot menu.

This also makes running patched versions of software much simpler — I don’t need to worry about fucking up my system by patching something like the Python interpreter, since I know that my patched version will only run when I specifically want it. But at the same time, I can patch the Python interpreter and then have some software running on my system actually use the patched version, since all of this stuff is configured through the same configuration system.
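
As a sketch of what that can look like (this is not the post’s own configuration, and the patch file is hypothetical), a nixpkgs overlay in configuration.nix can swap in a patched Python that dependent packages then pick up:

    # Sketch: apply a hypothetical local patch to the Python interpreter.
    # Anything in the system configuration that depends on python3 is then
    # rebuilt against this patched version.
    nixpkgs.overlays = [
      (final: prev: {
        python3 = prev.python3.overrideAttrs (old: {
          patches = (old.patches or [ ]) ++ [ ./my-python-fix.patch ];
        });
      })
    ];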

Another advantage of this system is that it makes zero-downtime deploys significantly simpler, since you can have multiple versions of the same software running at the same time. You don’t need to take down the current version of the software before you install the new one; instead you can install the new version of the software, run both at the same time, and then cut over once you’re confident that the new version works2.

Mobile phones and embedded devices have had to build a less general version of this in order to avoid occasionally bricking themselves when they update, in the form of an A/B partitioning scheme. So far, desktop computers, and particularly Linux distributions, have largely accepted that occasionally bricking themselves on update is basically fine, but it doesn’t have to be this way! Using a NixOS-style system eliminates this problem in a clean, unified manner3.

One clear reason to believe that this is the future is that language package managers (which are more plentiful and can iterate faster) have largely converged on essentially this solution — virtualenv, Poetry, Yarn, Cargo, and many others follow basically this model. Most use version numbers instead of content-addressable storage, due to the language ecosystems that they’re built around, but the fundamentals are the same, and it’s pretty clear from looking at trends in package managers that this model tends to be successful.

There are essentially two fundamental design mistakes in NixOS that lead to the problems with it.

The first is relatively simple: they developed their own programming language to do configuration, which is not very good and is extremely difficult to learn. The vast majority of people using NixOS do not understand the language, and simply copy/paste example configurations, which mostly works until you need to do something complicated, at which point you’re completely high and dry. There seem to be a handful of people with a deep understanding of the language who do most of the infrastructural work, and then a long tail of people with no clue what’s going on. This is exacerbated by poor documentation — there are docs for learning Nix as a language, and docs for using NixOS, but the connection between those two things is essentially undocumented. One of the things that’s theoretically nice about having everything defined in the Nix language is that it’s easily understandable once you learn Nix. Unfortunately, Nix is difficult enough to learn that I couldn’t tell you if this is true or not. Nix needs more docs explaining deeply how practical applications of the Nix language actually work. It could also do with less ugly syntax, but I think that ship has sailed.

There are many other minor complaints about NixOS that stem from this — patching packages is theoretically easy, but annoying to figure out how to do in practice, for instance, and configuration tends to have a lot of spooky action-at-a-distance.

The second flaw is that NixOS does not actually provide real isolation. Running bash -c 'type $0' will get you “bash is /nix/store/90y23lrznwmkdnczk1dzdsq4m35zj8ww-bash-interactive-5.1-p8/bin/bash” — bash knows that it’s running from the Nix store. This means that all software needs to be recompiled to work on NixOS, often with some terrifying hacks involved. It also means that it’s impossible to statically know what other packages a given package might depend on. Currently, the way this is implemented is essentially grepping a package for /nix/store/ to try to figure out what the dependencies are, which is obviously… not great. It also means that binaries that link against /lib/ld-linux.so.2 or scripts that use #!/bin/bash won’t work without patching.

Unfortunately, the tools for fixing this are not really there yet. Last fall, I prototyped a Linux distribution trying to combine a nix-store style package repository with overlayfs4. Unfortunately, overlayfs becomes very unhappy when you try to overlay too many different paths (with three distinct failure modes, interestingly), which severely limits this approach. I still think that there’s a lot of potential here — overlayfs could be fast for arbitrary numbers of paths if that was a design goal — but it’s not there yet. This means that trying to build a content-addressable store that is transparent to the apps installed in it requires essentially building a container image for every composition of packages (this is the approach that Silverblue takes), which is fundamentally unsatisfying to me.

The advantage to this approach is that you can piggyback off of existing package repositories. One of the main barriers to adoption of new Linux distributions is packaging, but a distribution taking a content-addressable store + overlay approach could automatically get all the benefits of NixOS along with all of the packages from Debian, Ubuntu, RedHat, Arch, NixOS, and any other distributions it fancies.

NixOS very clearly has the correct way of thinking about dependency management, but is hampered by a few poor technical decisions made long ago. I’m going to keep using it, since I can’t stand anything else after having a taste of NixOS, but I’m rooting for something new to rise up and take its place, something that learns from the lessons of NixOS and implements its features in a more user-friendly way.

...

Read the original on blog.wesleyac.com »

4. 457 shares, 36 trendiness

Multimode Image Viewer

...

Read the original on hyper-resolution.org »

5. 420 shares, 17 trendiness

Why we're migrating (many of) our servers from Linux to FreeBSD

I’ve been a Linux (or GNU/Linux, for the purists) user since 1996. I’ve been a FreeBSD user since 2002. I have always successfully used both operating systems, each for specific purposes. I have found, on average, BSD systems to be more stable than their Linux equivalents. By stability, I don’t mean uptime (too much uptime means too few kernel security updates, which is wrong). I mean that things work as they should, that they don’t “break” from one update to the next, and that you don’t have to revise everything because of a missing or modified basic command.

I’ve always been for development and innovation as long as it doesn’t (necessarily, automatically and unreasonably) break everything that is already in place. And the road that the various Linux distributions are taking seems to be that of modifying things that work just for the sake of it, or to follow the diktats of the kernel and those who manage it - but not only them.

Some time ago we started a complex, continuous, and not always linear operation: migrating, where possible, most of the servers (ours and our customers’) from Linux to FreeBSD.

There are many alternative operating systems to Linux, and the *BSD family is varied and complete. FreeBSD, in my opinion, today is the “all-rounder” system par excellence, i.e. well refined and suitable both for use on large servers and on small embedded systems. The other BSDs have strengths that, in some fields, make them particularly suitable, but FreeBSD, in my humble opinion, is suitable for (almost) every purpose.

So back to the main topic of this article: why am I migrating many of the servers we manage to FreeBSD? The reasons are many; I will list some of them with corresponding explanations.

One of the fundamental problems with Linux is that (we should remember) it is a kernel; everything else is created by different people/companies. On more than one occasion Linus Torvalds, as well as other leading Linux kernel developers, have remarked that they care about the development of the kernel itself, not how users will use it. In their technical decisions, therefore, they don’t take into account what the real use of the systems is; the kernel will go its own path. This is a good thing, as the development of the Linux kernel is not “held back” by the struggle between distributions and software solutions, but at the same time it is also a disadvantage. In FreeBSD, the kernel and its userland (i.e. all the components of the base operating system) are developed by the same team, and there is, therefore, a strong cohesion between the two. In many Linux distributions it was necessary to “deprecate” ifconfig in favor of ip because new developments in the kernel could no longer be supported by ifconfig without breaking compatibility with previous kernel versions or having functions (on the same network interface) managed by different tools. In FreeBSD, with each release of the operating system, there are both kernel and userland updates, so these changes are consistently incorporated and documented, making the tools compatible with their kernel-side updates.

In other words, in FreeBSD there is no need to “revolutionise” everything every few years, and changes are made primarily in the form of additions that can enrich (and not break) each update. If a modification were to change the way the kernel interacts with network devices, ifconfig would be modified to take advantage of that and remain compatible with the “old” syntax. In the long term, this kind of approach is definitely appreciated by system administrators, who find themselves with a linear, consistent, and always well-documented update path.

Linux and related distributions now have contributions from many companies, many of which (e.g. Red Hat) push (justifiably) in the direction of what is convenient for them, their products, and their services. Being big contributors to the project, they have big clout, so, indeed, their solutions often become de-facto standards. Consider systemd - was there really a need for such a system? While it brought some advantages, it added some complexity to an otherwise extremely simple and functional system. It remains divisive to this day, with many asking, “but was it really necessary? Did the advantages it brought balance the disadvantages?”. 70 binaries just for initialising and logging, and a million and a half lines of code just for that? But Red Hat threw the rock… and many followed along, because sometimes it’s nice to follow the trend, the hype of a specific solution.

Even FreeBSD has big companies behind it, collaborating in a more or less direct way. The license is more permissive, so not everyone who uses it commercially contributes to it, but knowing that FreeBSD is at the base of Netflix CDNs, WhatsApp servers (waiting for Meta to replace them, for internal coherence reasons, with Linux servers), Sony PlayStations and, in part, macOS, iOS, iPadOS, etc. surely gives confidence in its quality. These companies, however, do not have enough clout to drive the development of the core team.

FreeBSD jails are very powerful tools for jailing and separating services. There is controversy about Docker not running on FreeBSD, but I believe (like many others) that FreeBSD has a more powerful tool. Jails are older and more mature - and by far - than any containerization solution on Linux. Jails are efficient and are well integrated throughout the operating system. All major commands (ps, kill, top, etc.) are able to display jail information as well. There are many management tools but, in fact, they all do the same thing: they interact with the FreeBSD base and create custom configuration files. Personally I’m very comfortable with BastilleBSD, but there are a lot of very good tools, as well as sufficiently simple manual management. When I need Docker I launch a Linux machine - often Alpine, which I think is a great minimalist distribution, or Debian. But I’m moving a lot of services from Docker to a dedicated jail on FreeBSD. Docker containers are a great tool for rapid (and consistent) software deployment, but it’s not all fun and games. Containers, for example, rely on images that sometimes age and are no longer updated. This is a security issue that should not be overlooked.

UFS2 is still a very good and efficient file system and, when configured to use softupdates, is capable of performing live snapshots of the file system. This is great for backups. Ext4 and XFS do not support snapshots except through external tools (like DattoBD or snapshots through the volume manager). This works, of course, but it is not native. Btrfs is great in its intentions but still not as stable as it should be after all these years of development. FreeBSD supports ZFS natively in the base system and this brings many advantages: separation of datasets for jails, as well as Boot Environments, to take snapshots before upgrades/changes and to be able to boot (from the bootloader) into a different BE, etc.

Linux has always used excellent tools such as grub, lilo (now outdated), etc. FreeBSD has always used a very linear and consistent boot system, with its own bootloader and dedicated boot partition. Whether on MBR, GPT, etc., things are very similar and consistent. I’ve never had a problem getting a FreeBSD system to boot after a move or recovery from backup. On Linux, however, grub has sometimes given me problems, even after a simple kernel security update.

Meta has been trying to bring the performance of the Linux network stack up to the level of FreeBSD’s for years. Many will ask why, then, not move services to FreeBSD. Large companies with huge datacenters can’t change solutions overnight, and their engineers, at every level, are Linux experts. They have invested heavily in btrfs, in Linux, in their specifics. Clearly, upon acquiring WhatsApp, they preferred to migrate the “few” WhatsApp servers to Linux and move them to their datacenters. Regarding real system performance (i.e. disregarding benchmarks, useful only up to a certain point), FreeBSD shines, especially under high load conditions. Where Linux starts to gasp (e.g. waiting for I/O) at 100% CPU, FreeBSD has lower processor load and room for more stuff. In the real world (of my servers and load types), I sometimes experienced severe system slowdowns due to high I/O, even if the data to be processed was not read/write dependent. On FreeBSD this does not happen, and if something is blocking, it blocks THAT operation, not the rest of the system. When performing backups or other important operations this factor becomes extremely important to ensure proper (and stable) system performance.

FreeBSD, in the base system, has all the tools to analyze possible problems and system loads. “vmstat”, in a single line, tells me if the machine is struggling for CPU, for I/O, or for RAM. “gstat -a” shows me, disk by disk and partition by partition, how active the storage is, including as a percentage of its performance. “top”, then, also has support for figuring out, process by process, how much I/O is being used (the “m” option). On Linux, to get the same results, you have to install specific applications, different from distribution to distribution.

For my purposes, Bhyve is a great virtualization tool. KVM is definitely more complete, but since I don’t have any special or specific needs not covered by Bhyve on FreeBSD, I found (on average) better performance with this combination. On FreeBSD, however, KSM is missing, which, in some cases, can be very useful.

Will I abandon Linux for FreeBSD? Obviously not, just as I haven’t for the last 20 years. Both have their uses, their space, their strengths. But if up to now I have had 80% Linux and 20% FreeBSD, the plan is to invert those percentages and, where possible, directly implement solutions based on FreeBSD.

NOTE: this article has been translated from its Italian original version. Even though it has been reviewed and adapted, there might be some errors.

...

Read the original on it-notes.dragas.net »

6. 405 shares, 19 trendiness

GitHub Actions by Example

GitHub Actions by Example is an introduction to the service through annotated examples.

...

Read the original on www.actionsbyexample.com »

7. 309 shares, 13 trendiness

daniel.haxx.se

On Friday January 21, 2022 I received this email. I tweeted about it and it took off like crazy.

The email comes from a Fortune 500 multi-billion dollar company that apparently might be using a product that contains my code, or maybe they have customers who do. Who knows?

My guess is that they do this for some compliance reasons and they “forgot” that their open source components are not automatically provided by “partners” they can just demand this information from.

I answered the email very briefly and said I will be happy to answer with details as soon as we have a support contract signed.

I think maybe this serves as a good example of the open source pyramid and users in the upper layers not at all thinking of how the lower layers are maintained. Building a house without a care about the ground the house stands on.

In my tweet and here in my blog post I redact the name of the company. I most probably have the right to tell you who they are, but I still prefer not to. (Especially if I manage to land a profitable business contract with them.) I suspect we can find this level of entitlement in many companies.

The level of ignorance and incompetence shown in this single email is mind-boggling.

While they don’t even specifically say which product they are using, no code I’ve ever been involved with or hold the copyright to uses log4j, and any rookie or better engineer could easily verify that.

In the picture version of the email I padded the name fields to better anonymize the sender, and in the text below I replaced them with NNNN.

Continue down for the reply.

Dear Haxx Team Partner,

You are receiving this message because NNNN uses a product you developed. We request you review and respond within 24 hours of receiving this email. If you are not the right person, please forward this message to the appropriate contact.

As you may already be aware, a newly discovered zero-day vulnerability is currently impacting Java logging library Apache Log4j globally, potentially allowing attackers to gain full control of affected servers.

The security and protection of our customers’ confidential information is our top priority. As a key partner in serving our customers, we need to understand your risk and mitigation plans for this vulnerability.

Please respond to the following questions using the template provided below.

1. If you utilize a Java logging library for any of your application, what Log4j versions are running?

2. Have there been any confirmed security incidents to your company?

3. If yes, what applications, products, services, and associated versions are impacted?

4. Were any NNNN product and services impacted?

5. Has NNNN non-public or personal information been affected?

6. If yes, please provide details of affected information NNNN immediately.

7. What is the timeline (MM/DD/YY) for completing remediation? List the NNNN steps, including dates for each.

8. What action is required from NNNN to complete this remediation?

In an effort to maintain the integrity of this inquiry, we request that you do not share information relating to NNNN outside of your company and to keep this request to pertinent personnel only.

Thank you in advance for your prompt attention to this inquiry and your partnership!

Sincerely,

NNNN Information Security

The information contained in this message may be CONFIDENTIAL and is for the intended addressee only. Any unauthorized use, dissemination of the information, or copying of this message is prohibited. If you are not the intended addressee, please notify the sender immediately and delete this message.

On January 24th I received this response, from the same address, and it quotes my reply so I know they got it fine.

Hi David,

Thank you for your reply. Are you saying that we are not a customer of your organization?

/ [a first name]

I replied again (22:29 CET on Jan 24) to this mail that identified me as “David”. Now there’s this great story about a David and some giant so I couldn’t help myself…

Hi Goliath,

No, you have no established contract with me or anyone else at Haxx whom you addressed this email to, asking for a lot of information. You are not our customer, we are not your customer. Also, you didn’t detail what product it was regarding.

So, we can either establish such a relationship or you are free to search for answers to your questions yourself.

I can only presume that you got our email address and contact information into your systems because we produce a lot of open source software that is used widely.

Best wishes,

Daniel

The image version of the initial email

What you read here is my personal opinions and views. You may think differently. Organizations I’m involved with may have different standpoints and people I work with or know may think differently.

...

Read the original on daniel.haxx.se »

8. 303 shares, 17 trendiness

Saving over 100x on egress switching from AWS to Hetzner

Our AWS CloudFront bill spiked to $2,457 in October 2021 from $370 in September. When we dug into the bill, we saw that egress in the EU region accounted for most of this increase, with egress in the US making up the rest.

This wasn’t an indication of some misconfiguration on our end, but rather, a symptom of success. Our primary product is Fleet, an open core platform for device management built on osquery. We offer an update server for agent updates that is freely accessible to both community users and our paying customers. Getting these costs under control became a priority so that we could continue to offer free access.

Our needs for this server are pretty simple. We generate and sign static metadata files with The Update Framework, then serve those along with the binary artifacts. We don’t have any strict requirements around latency, as these are background processes being updated.

At first we looked at Cloudflare’s free tier; free egress is pretty appealing. Digging into Cloudflare’s terms, we found that they only allow for free tier caching to be used on website assets. To avoid risking a production outage by violating these terms, we got in touch with them for a quote. This came out to about a 2x savings over AWS. But we knew we needed orders of magnitude savings in order to expand our free offering.

Having heard of Hetzner’s low egress costs (20TB free + €1.19/TB/month), we investigated what it would take to run our own server. We stood up a Caddy file server with automatic HTTPS via Let’s Encrypt over the course of a few hours.
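
For a static file server like this, the Caddy configuration stays tiny. A minimal sketch of a Caddyfile along these lines (the domain and path here are hypothetical, not Fleet’s actual setup); Caddy obtains and renews the Let’s Encrypt certificate automatically when a site is named by domain:

    # Serve the signed metadata and binary artifacts as plain static files.
    updates.example.com {
        root * /srv/updates
        file_server
    }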

Our December Hetzner bill came out to €36.75 ($41.63). This represents a savings of 59x over our prior AWS bill, putting us solidly in the range to continue offering the free update server. We can still double our egress with Hetzner before incurring additional charges, which will render a savings of over 118x from AWS. Beyond that, the additional egress costs should remain reasonable.

DIYing it does come with additional maintenance burden, but so far we’ve found this manageable. Caddy on Hetzner has proved exceptionally reliable, with well over 99% uptime in the last two months and no manual interventions required.

...

Read the original on blog.fleetdm.com »

9. 286 shares, 21 trendiness

Nova

Can a native Mac code editor really be that much better?

If we’re being honest, Mac apps are a bit of a lost art.

There are great reasons to make cross-platform apps — to start, they’re cross-platform — but it’s just not who we are. Founded as a Mac software company in 1997, our joy at Panic comes from building things that feel truly, well, Mac-like.

Long ago, we created Coda, an all-in-one Mac web editor that broke new ground. But when we started work on Nova, we looked at where the web was today, and where we needed to be. It was time for a fresh start.

It all starts with our first-class text editor.

It’s new, hyper-fast, and flexible, with all the features you want: smart autocomplete, multiple cursors, a Minimap, editor overscroll, tag pairs and brackets, and way, way more.

For the curious, Nova has built-in support for CoffeeScript, CSS, Diff, ERB, Haml, HTML, INI, JavaScript, JSON, JSX, Less, Lua, Markdown, Perl, PHP, Python, Ruby, Sass, SCSS, Smarty, SQL, TSX, TypeScript, XML, and YAML.

It’s also very expandable, with a robust API and a built-in extension browser.

But even the best text engine in the world means nothing unless you actually enjoy spending your time in the app. So, how does Nova look?

You can make Nova look exactly the way you want, while still feeling Mac-like. Bright, dark, cyberpunk, it’s all you. Plus, themes are CSS-like and easy to write. Nova can even automatically change your theme when your Mac switches from light to dark mode.

Nova doesn’t just help you code. It helps your code run.

You can easily create build and run tasks for your projects. We didn’t have them in Coda, but boy do we have them now. They’re custom scripts that can be triggered at any time by toolbar buttons or keyboard shortcuts.

Imagine building content, and with the single click of a button watching as Nova fires up your local server, grabs the appropriate URL, and opens a browser for you, instantly. Just think of the time you’ll save.

Nova supports separate Build, Run, and Clean tasks. It can open a report when run. And the scripts can be written in a variety of languages.

A Nova extension can do lots of things, like add support for new languages, extend the sidebar, draw beautiful new themes and syntax colors, validate different code, and much more.

Even better, extensions are written in JavaScript, so anyone can write them. And Nova includes built-in extension templates for fast development.

Check out some of this week’s popular extensions…

Displays tags, such as TODO and FIXME, in a sidebar.

Format Javascript, JSON, CSS, SCSS, LESS, HTML and XML using JS-Beautify.

Automatically format PHP files using php-cs-fixer with HTML, Blade and Twig supp…

And we’re here to help. Nova has a whole host of settings. We have easily customizable key bindings. We have custom, quickly-switchable workspace layouts. And we have loads of editor tweaks, from matching brackets to overscroll.


...

Read the original on nova.app »

10. 281 shares, 10 trendiness

A surprisingly hard CS problem

Consider the following problem: Given two sequences of natural numbers, compute the sums of the square roots of the respective lists, and then return which of the sums was larger. So if your lists were [1,4] and [9,9], you’d get 1 + 2 compared to 3 + 3, and you’d say that the second was larger.

How quickly can we compute this, as a function of the length of the input encoded in binary? You might enjoy taking a second to think about this.

The best known algorithm for this problem is in PSPACE. That is, it takes exponential time but only polynomial space. That means that as far as we know, this problem is crazily hard—among other things, PSPACE is harder than NP.

When I first heard this claim, I thought it was unbelievable. After all, isn’t there an obvious algorithm that works? Namely, in Haskell:
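
    -- The obvious attempt, as a sketch: convert to Doubles, sum the square
    -- roots, and compare the two totals.  (Fixed 64-bit precision is exactly
    -- what goes wrong below.)
    compareSumsOfSqrts :: [Integer] -> [Integer] -> Ordering
    compareSumsOfSqrts xs ys = compare (sumSqrts xs) (sumSqrts ys)
      where
        sumSqrts :: [Integer] -> Double
        sumSqrts = sum . map (sqrt . fromIntegral)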

And that looks to me like it takes linear time.

But it doesn’t work. The problem is that this algorithm assumes finite precision, which isn’t necessarily enough to answer this question. Suppose that I can find some lists of numbers whose sums of square roots are equal for the first ten million decimal places and then start being different. If we run my Haskell algorithm, we’ll erroneously be told that the sums are equal when actually they’re not. So we need to do something smarter.

The correct algorithm acknowledges that square roots (and therefore sums of square roots) aren’t finite precision numbers, they’re infinite streams of decimals. So it tells us to take our two lists and start iteratively computing more and more digits of their sums-of-square-roots until we find a place where they disagree. And then we’re done. (If the sums of square roots are equal, this algorithm of course won’t halt. There’s a known, reasonably fast algorithm (in BPP) for checking equality of sums of square roots, so we shouldn’t worry about that too much.)
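
One way to make the digit-by-digit idea concrete (a sketch; this isn’t how the PSPACE bound is actually obtained): work with exact integer square roots at ever-higher precision, and stop as soon as the interval bounds for the two sums separate. This never gives a wrong answer, but nothing here bounds how many rounds it needs, and it loops forever on equal sums:

    -- Compare the sums of square roots of xs and ys using exact integer
    -- arithmetic at k digits of precision, retrying with more digits until
    -- the bounds separate.  Does not terminate if the sums are exactly equal.
    compareExactly :: [Integer] -> [Integer] -> Ordering
    compareExactly xs ys = go 1
      where
        go k
          | loX > hiY = GT
          | loY > hiX = LT
          | otherwise = go (k + 1)
          where
            scale = 10 ^ (2 * k)
            loX = sum [isqrt (x * scale) | x <- xs]   -- lower bound on 10^k * sum
            hiX = loX + fromIntegral (length xs)      -- each floor is off by less than 1
            loY = sum [isqrt (y * scale) | y <- ys]
            hiY = loY + fromIntegral (length ys)

    -- Integer square root (floor of the real square root), by Newton's method.
    isqrt :: Integer -> Integer
    isqrt 0 = 0
    isqrt n = go (n `div` 2 + 1)
      where
        go x =
          let x' = (x + n `div` x) `div` 2
          in if x' >= x then x else go x'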

But how long will we need to look through these sequences of digits before we find the disagreeing digit? It feels intuitively like we should be able to establish some kind of bound on this. Like, maybe we should be able to say “if you add two lists of n numbers, each of which has d digits, then they can’t disagree for more than k * n * d digits” for some k. But no-one’s been able to prove anything like this.

This comes down to “are you able to embed complicated relationships inside of sums of square roots”. Like, we’re basically asking whether you can construct lists with absurdly close sums of square roots. This feels to me like a pretty deep question about square roots. There are other domains where this problem is obviously hard and obviously easy. For example, suppose you want to know whether two programs encode the same bit string, and all you can do is run them a step at a time and see what they output. It’s really easy for me to construct short programs that take an extremely long time before they disagree: for example “always print 0” and “output Graham’s number of zeros, then ones”. On the other hand, comparing the sums of fractions is pretty easy, because division is nice and well behaved. So the question is how complicated square roots are.
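
For contrast, here is the easy case in the same style (a sketch): with fractions, exact rational arithmetic settles the comparison outright, so there is no analogous “how many digits do we need?” question.

    import Data.Ratio ((%))

    -- Compare sums of fractions exactly; each pair (num, den) stands for num/den.
    -- Rational arithmetic loses no information, so a single pass decides it.
    compareSumsOfFractions :: [(Integer, Integer)] -> [(Integer, Integer)] -> Ordering
    compareSumsOfFractions xs ys = compare (total xs) (total ys)
      where
        total = sum . map (\(num, den) -> num % den)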

My guess is that this problem is actually in P, and we’re just stuck on proving it because irrational numbers are confusing and hard to prove things about, and most of the people who would be good at working on this are doing something more useful instead.

But if it turns out that I’m wrong, and that sums of square roots can get very close indeed, I’m going to update towards thinking that integers and square roots are much scarier, richer objects than I’d thought. I’ve updated to being more scared of real numbers than I used to be—they have all these sketchy properties like “almost none of them have finite descriptions”. Real numbers, and sets, and logical statements, have all started feeling to me like Cthulhuesque monstrosities whose appearances are only tolerable because we only look at the pretty parts of them and don’t let ourselves look at the horrors that lurk below.

Incidentally, comparing sums of square roots is actually kind of a common subtask in e.g. computational geometry, so a bunch of their problems (including “shortest path through a graph in Euclidean space”!) are as-far-as-we-know extremely hard.

Like all great mind-blowing facts, this one sounded initially preposterous and then after a few minutes seemed completely obvious. I love it.

To read more about this, see here.

EDIT: I think that Edward Kmett and Paul Crowley might have figured out how to solve this problem in the comments on my Facebook post; see here. I’ll investigate further and update.

EDIT 2: actually we didn’t solve the problem, but it might still be a good direction for future research.

...

Read the original on shlegeris.com »
