10 interesting stories served every morning and every evening.




1 470 shares, 25 trendiness

Delivering SSL/TLS Everywhere

Vital per­sonal and busi­ness in­for­ma­tion flows over the Internet more fre­quently than ever, and we don’t al­ways know when it’s hap­pen­ing. It’s clear at this point that en­crypt­ing is some­thing all of us should be do­ing. Then why don’t we use TLS (the suc­ces­sor to SSL) every­where? Every browser in every de­vice sup­ports it. Every server in every data cen­ter sup­ports it. Why don’t we just flip the switch?

The chal­lenge is server cer­tifi­cates. The an­chor for any TLS-protected com­mu­ni­ca­tion is a pub­lic-key cer­tifi­cate which demon­strates that the server you’re ac­tu­ally talk­ing to is the server you in­tended to talk to. For many server op­er­a­tors, get­ting even a ba­sic server cer­tifi­cate is just too much of a has­sle. The ap­pli­ca­tion process can be con­fus­ing. It usu­ally costs money. It’s tricky to in­stall cor­rectly. It’s a pain to up­date.

Let’s Encrypt is a new free cer­tifi­cate au­thor­ity, built on a foun­da­tion of co­op­er­a­tion and open­ness, that lets every­one be up and run­ning with ba­sic server cer­tifi­cates for their do­mains through a sim­ple one-click process.

Mozilla Corporation, Cisco Systems, Inc., Akamai Technologies, Electronic Frontier Foundation, IdenTrust, Inc., and researchers at the University of Michigan are working through the Internet Security Research Group (“ISRG”), a California public benefit corporation, to deliver this much-needed infrastructure in Q2 2015. The ISRG welcomes other organizations dedicated to the same ideal of ubiquitous, open Internet security.

The key prin­ci­ples be­hind Let’s Encrypt are:

* Free: Anyone who owns a do­main can get a cer­tifi­cate val­i­dated for that do­main at zero cost.

* Automatic: The en­tire en­roll­ment process for cer­tifi­cates oc­curs pain­lessly dur­ing the server’s na­tive in­stal­la­tion or con­fig­u­ra­tion process, while re­newal oc­curs au­to­mat­i­cally in the back­ground.

* Secure: Let’s Encrypt will serve as a plat­form for im­ple­ment­ing mod­ern se­cu­rity tech­niques and best prac­tices.

* Transparent: All records of cer­tifi­cate is­suance and re­vo­ca­tion will be avail­able to any­one who wishes to in­spect them.

* Open: The au­to­mated is­suance and re­newal pro­to­col will be an open stan­dard and as much of the soft­ware as pos­si­ble will be open source.

* Cooperative: Much like the un­der­ly­ing Internet pro­to­cols them­selves, Let’s Encrypt is a joint ef­fort to ben­e­fit the en­tire com­mu­nity, be­yond the con­trol of any one or­ga­ni­za­tion.

If you want to help these or­ga­ni­za­tions in mak­ing TLS Everywhere a re­al­ity, here’s how you can get in­volved:

To learn more about the ISRG and our part­ners, check out our About page.

...

Read the original on letsencrypt.org »

2 458 shares, 23 trendiness

Epic Allows Internet Archive To Distribute For Free ‘Unreal’ & ‘Unreal Tournament’ Forever

One of the most frustrating aspects in the ongoing conversation around the preservation of older video games, also known as cultural output, is the collision of IP rights with some publishers’ unwillingness either to keep supporting and making these older games available or to release them into the public domain so that others can do so. It creates this crazy situation in which a company insists on retaining its copyrights over a video game that it has effectively disappeared, with no good or legitimate way for the public to preserve it. As I’ve argued for some time now, this breaks the copyright contract with the public and should come with repercussions. The whole bargain that is copyright law is that creative works are granted a limited monopoly on the production of that work, with that work eventually arriving in the public domain. If that arrival is not allowed to occur, the bargain is broken, and not by anyone who would supposedly “infringe” on the copyright of that work.

Why would game publishers do this sort of thing? There are plenty of theories. The fad of retro-gaming is such that publishers can claim they are reserving their rights for an eventual remastered version, or otherwise a re-released version, of these games. Sometimes they even follow through on those plans. In other cases, some companies are so steeped in IP protectionism that they can’t see past their own noses (hi there, Nintendo!). In still other cases the companies that published the game no longer exist, and unraveling who now holds the rights to their games can be an absolute nightmare.

But it just does­n’t have to be like this. Companies could be will­ing to give up their iron-fisted con­trol over their IP for these older games they aren’t will­ing to sup­port or pre­serve them­selves and let oth­ers do it for them. And if you need a real world ex­am­ple of that, you need look only at how Epic is work­ing with The Internet Archive to do ex­actly that.

Epic, now pri­mar­ily known for Fort­nite and the Unreal Engine, has given per­mis­sion for two of the most sig­nif­i­cant video games ever made, Un­real and Un­real Tournament, to be freely ac­cessed via the Internet Archive. As spot­ted by RPS, via Re­setEra, the OldUnreal group an­nounced the move on their Discord, along with in­struc­tions for how to eas­ily down­load and play them on mod­ern ma­chines.

Huge ku­dos to Epic for be­ing cool with this, be­cause while it should­n’t be un­usual to hap­pily let peo­ple freely share a three-decade-old game you don’t sell any more, it’s van­ish­ingly rare. And if you re­main in any doubt, we just got word back from Epic con­firm­ing they’re on board.

“We can confirm that Unreal 1 and Unreal Tournament are available on archive.org,” a spokesperson told us by email, “and people are free to independently link to and play these versions.”

Importantly, OldUnreal and The Internet Archive very much know what they’re do­ing here. Grabbing the ZIP file for the game sleekly pulls the ISO di­rectly from The Internet Archive, in­stalls it, and there are in­struc­tions for how to get the game up and run­ning on mod­ern hard­ware. This is ob­vi­ously a la­bor of love from fans ded­i­cated to­ward keep­ing these two ex­cel­lent games alive.

And the size and suc­cess of these games is im­por­tant, too. It would be all too easy for Epic to keep this IP to it­self with a plan for a re­mas­tered ver­sion of each game, or for a forth­com­ing se­quel, or any­thing like that. Instead, Epic has just opened up and al­lowed the in­ter­net to do its thing in pre­serv­ing these im­por­tant ti­tles us­ing one of the most trust­wor­thy sources to do so.

But this is just two games. What would be re­ally nice to see is this be­come a trend, or, bet­ter yet, a pro­gram run by The Internet Archive. Don’t want to bother to pre­serve your old game? No prob­lem, let the IA do it for you!

...

Read the original on www.techdirt.com »

3 430 shares, 42 trendiness

What is the origin of the lake tank image that has become a meme?

It’s a Panzer IV D of the 31st Panzer Regiment, assigned to the 5th Panzer Division and commanded by Lt. Heinz Zobel, lost on May 13th, 1940. The “lake” is the Meuse River. The man is a German pioneer.

All credit to find­ing the Panzer of the Lake goes to ConeOfArc for co­or­di­nat­ing the search, and miller786 and their team for find­ing the Panzer. Full sources and de­tails are in Panzer Of The Lake - Meuse River Theory

The photo was taken at approximately coordinates 50.29092467073664, 4.893099128823844, near modern Wallonia, Belgium, on the Meuse River. The tank was not recovered until much later, in 1941. The man is an unnamed German pioneer, likely photographed at the time of recovery.

Comparison of an al­ter­na­tive orig­i­nal photo and the most re­cent im­age avail­able of the lo­ca­tion (July 2020, Google Street View)

On May 12th, 1940 the 31st Panzer Regiment, as­signed to the 5th Panzer Division, at­tempted to cap­ture a bridge over the Meuse River at Yvoir. The bridge was de­mol­ished by 1st Lieutenant De Wispelaere of the Belgian Engineers.

Werner Advance Detachment (under Oberst Paul Hermann Werner, com­man­der, 31st Panzer Regiment), which be­longed to the 5th Panzer Division, un­der Rommel’s com­mand… Werner re­ceived a mes­sage from close sup­port air re­con­nais­sance in the af­ter­noon that the bridge at Yvoir (seven kilo­me­ters north of Dinant) was still in­tact. He (Werner) im­me­di­ately or­dered Leutnant [Heinz] Zobel’s ar­mored as­sault team of two ar­mored scout cars and one Panzer pla­toon to head to the bridge at top speed… Belgian en­gi­neers un­der the com­mand of 1st Lieutenant de Wispelaere had pre­pared the bridge for de­mo­li­tion while a pla­toon of Ardennes Light Infantry and el­e­ments of a French in­fantry bat­tal­ion screened the bridge… Although the last sol­diers had al­ready passed the bridge, de Wispelaere de­layed the de­mo­li­tion be­cause civil­ian refugees were still ap­proach­ing… two German ar­mored scout cars charged to­ward the bridge while the fol­low­ing three Panzers opened fire. De Wispelaere im­me­di­ately pushed the elec­tri­cal ig­ni­tion, but there was no ex­plo­sion… Wispelaere now left his shel­ter and worked the man­ual ig­ni­tion de­vice. Trying to get back to his bunker, he was hit by a burst from a German ma­chine gun and fell to the ground, mor­tally wounded. At the same time, the ex­plo­sive charge went off. After the gi­gan­tic smoke cloud had drifted away, only the rem­nants of the pil­lars could be seen.

A few kilo­me­ters south at Houx, the Germans used a por­tion of a pon­toon bridge (Bruckengerat B) rated to carry 16 tons to ferry their 25 ton tanks across.

By noon on May 13, Pioniere completed an eight-ton ferry and crossed twenty anti-tank guns to the west bank; however, to maintain the tempo of his division’s advance, he needed armor and motorized units across the river. Rommel personally ordered the ferry converted to a heavier sixteen-ton variant to facilitate the crossing of the light Panzers and armored cars. Simultaneously, the Pioniere began construction on a bridge capable of crossing the division’s heavier Panzers and motorized units.

Major Erich Schnee in “The German Pionier: Case Study of the Combat Engineer’s Employment During Sustained Ground Combat”

On the evening of the 13th, Lt. Zobel’s tank is cross­ing. Approaching the shore, the ferry lifts, the load shifts, and the tank falls into the river.

The panzer IV of Lieutenant Zabel [sic] of the 31. Panzer Regiment of the 5. Panzer-Division, on May 13, 1940, in Houx, as good as un­der­wa­ter ex­cept for the ve­hi­cle com­man­der’s cupola. Close to the west bank, at the pon­toon cross­ing site and later site of 5. Panzer Division bridge, a 16 tonne ferry (Bruckengerat B) gave way to the ap­proach­ing shore­line, likely due to the ro­tat­ing move­ment of the panzer, which turned right when dis­em­bark­ing (the only pos­si­ble di­rec­tion to quickly leave the Meuse’s shore due to the wall cre­ated by the rail line). The tank would be fished out in 1941 dur­ing the re­con­struc­tion of the bridge.

Sometime later the pho­to­graph was taken of a German pi­o­neer in­fantry­man look­ing at the tank. Later the tank was re­cov­ered and its ul­ti­mate fate is un­known.

Available evidence suggests the soldier in the photo is a pioneer/tank recovery crewman, holding a Kar98k and wearing an EM/NCO’s drill & work uniform, more commonly known as “Drillich”.

His role is suggested by the presence of pontoon ferries on the Meuse river, used by the 5th Panzer Division, and by his uniform, which, as evidence suggests, was worn during work to prevent damage to the standard woolen uniform.

An early ver­sion of the Drillich

While I can’t iden­tify the photo, I can nar­row down the tank. I be­lieve it is a Panzer IV D.

It has the short bar­relled 7.5 cm KwK 37 nar­row­ing it down to a Panzer IV Ausf. A through F1 or a Panzer III N.

Both had very sim­i­lar tur­rets, but the Panzer III N has a wider gun mant­let, a more an­gu­lar shroud, and lacked (or cov­ered) the dis­tinc­tive an­gu­lar view ports (I be­lieve they’re view ports) on ei­ther side of the tur­ret face.

This leaves the Panzer IV. The dis­tinc­tive cupola was added in model B. The ex­ter­nal gun mant­let was added in model D.

Panzer IV model D in France 1940 with the ex­ter­nal gun mant­let and periscope. source

Note the front half of the tur­ret top is smooth. There is a pro­tru­sion to the front left of the cupola (I be­lieve it’s a periscope sight) and an­other cir­cu­lar open­ing to the front right. Finally, note the large ven­ti­la­tion hatch just in front of the cupola.

Model E would elim­i­nate the ven­ti­la­tion hatch and re­place it with a fan. The periscope was re­placed with a hatch for sig­nal flags.

Panzer IV model D en­tered mass pro­duc­tion in October 1939 which means it would be too late for Poland, but could have seen ser­vice in France, Norway, or the Soviet Union.

As for the sol­dier…

The rifle has a turned-down bolt handle, a bayonet lug (missing from late rifles), a distinctive disassembly disc on the side of the stock (also missing from late rifles), no front sight hood (indicative of an early rifle), and you can just about make out extra detail in the nose cap (also early). This is likely an early Karabiner 98k which is missing its cleaning rod. See Forgotten Weapons: Evolution of the Karabiner 98k, From Prewar to Kriegsmodell.

ConeOfArc posted a video The Search for Panzer of the Lake.

He broke down what he could identify about the soldier, probably German.

For the tank, he confirms it’s a Panzer IV D using criteria similar to mine, and he found two additional photos of what appear to be the same tank, claimed to be from the Western front in 1940.

He then found a Russian source claim­ing it was found in Romania at the on­set of Barbarossa in 1941.

Unfortunately that’s all for now. ConeOfArc has put a bounty of $100 US for de­fin­i­tive proof of the tank’s lo­ca­tion. More de­tail can be had on ConeOfArc’s Discord.

...

Read the original on history.stackexchange.com »

4 425 shares, 27 trendiness

auonsson (@auonsson.bsky.social)

Chinese-flagged cargo ship Yi Peng 3 crossed both submarine cables C-Lion 1 and BSC at times matching when they broke.

She was shad­owed by Danish navy for a while dur­ing night and is now in Danish Straits leav­ing Baltics.

No signs of board­ing. AIS-caveats ap­ply.

...

Read the original on bsky.app »

5 411 shares, 42 trendiness

Analytical Anti-Aliasing

Today’s jour­ney is Anti-Aliasing and the des­ti­na­tion is Analytical Anti-Aliasing. Getting rid of ras­ter­i­za­tion jag­gies is an art-form with decades upon decades of maths, cre­ative tech­niques and non-stop in­no­va­tion. With so many years of re­search and de­vel­op­ment, there are many fla­vors.

From the simple but resource-intensive SSAA, over theory-dense SMAA, to using machine learning with DLAA. Same goal - vastly different approaches. We’ll take a look at how they work, before introducing a new way to look at the problem - the ✨analytical🌟 way. The perfect Anti-Aliasing exists and is simpler than you think.

Having im­ple­mented it mul­ti­ple times over the years, I’ll also share some juicy se­crets I have never read any­where be­fore.

To understand the Anti-Aliasing algorithms, we will implement them along the way! The following WebGL canvases draw a moving circle. Anti-Aliasing cannot be fully understood with just images; movement is essential. The red box has 4x zoom. Rendering is done at the native resolution of your device, which is important to judge sharpness.

Please pixel-peep to judge sharp­ness and alias­ing closely. Resolution of your screen too high to see alias­ing? Lower the res­o­lu­tion with the fol­low­ing but­tons, which will in­te­ger-scale the ren­der­ing.

Let’s start out sim­ple. Using GLSL Shaders we tell the GPU of your de­vice to draw a cir­cle in the most sim­ple and naive way pos­si­ble, as seen in cir­cle.fs above: If the length() from the mid­dle point is big­ger than 1.0, we dis­card the pixel.

The circle is blocky, especially at smaller resolutions. More painfully, there is strong “pixel crawling”, an artifact that’s very obvious when there is any kind of movement. As the circle moves, rows of pixels pop in and out of existence and the stair steps of the pixelation move along the side of the circle like beads of different speeds.

The low ¼ and ⅛ res­o­lu­tions aren’t just there for ex­treme pixel-peep­ing, but also to rep­re­sent small el­e­ments or ones at large dis­tance in 3D.

At lower resolutions these artifacts come together to destroy the circular form. The combination of slow movement and low resolution causes one side’s pixels to come into existence before the other side’s pixels disappear, causing a wobble. Axis-alignment with the pixel grid causes “plateaus” of pixels at every 90° and 45° position.

Understanding the GPU code is not necessary to follow this article, but it will help to grasp what’s happening when we get to the analytical bits.

4 vertices making up a quad are sent to the GPU in the vertex shader circle.vs, where they are received as attribute vec2 vtx. The coordinates are of a “unit quad”, meaning the coordinates look like the following image. With one famous exception, all GPUs use triangles, so the quad is actually made up of two triangles.

The vertices are passed to the fragment shader circle.fs via varying vec2 uv. The fragment shader is called per fragment (here fragments are pixel-sized) and the varying is interpolated linearly with perspective-corrected barycentric coordinates, giving us a uv coordinate per pixel from -1 to +1 with zero at the center.

By performing the check if (length(uv) < 1.0) we draw our color for fragments inside the circle and reject fragments outside of it. What we are doing is known as “Alpha testing”. Without diving too deeply and just to hint at what’s to come: what we have created with length(uv) is the signed distance field of a point.

Just to clarify, the circle isn’t drawn “with geometry”, which would have finite resolution of the shape, depending on how many vertices we use. It’s drawn “by the shader”.
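What the naive alpha-tested shader does can be mimicked on the CPU. This is a minimal illustrative sketch (function and variable names are my own, not from the article’s code): each pixel center gets a uv coordinate in [-1, +1] and is kept only if length(uv) < 1.0, producing the hard, aliased edge described above.

```python
import math

def rasterize_circle_alpha_test(size):
    """Return a size x size grid of 0.0/1.0 coverage values."""
    image = []
    for y in range(size):
        row = []
        for x in range(size):
            # uv at the pixel center, mapped from the pixel grid to [-1, +1]
            u = (x + 0.5) / size * 2.0 - 1.0
            v = (y + 0.5) / size * 2.0 - 1.0
            # the alpha test: fully opaque inside, discarded outside
            row.append(1.0 if math.hypot(u, v) < 1.0 else 0.0)
        image.append(row)
    return image

img = rasterize_circle_alpha_test(8)
# Only two values ever occur - hence the blocky edge and pixel crawling.
print(sorted({p for row in img for p in row}))
```

Because coverage is binary, no amount of movement smoothing is possible: a pixel flips between fully opaque and fully discarded as the circle moves.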

SSAA stands for Super Sampling Anti-Aliasing. Render it big­ger, down­sam­ple to be smaller. The idea is as old as 3D ren­der­ing it­self. In fact, the first movies with CGI all re­lied on this with the most naive of im­ple­men­ta­tions. One ex­am­ple is the 1986 movie Flight of the Navigator”, as cov­ered by Captain Disillusion in the video be­low.

1986 did it, so can we. Implemented in mere sec­onds. Easy, right?

cir­cleSSAA.js draws at twice the res­o­lu­tion to a tex­ture, which frag­ment shader post.fs reads from at stan­dard res­o­lu­tion with GL_LINEAR to per­form SSAA. So we have four in­put pix­els for every one out­put pixel we draw to the screen. But it’s some­what strange: There is def­i­nitely Anti-Aliasing hap­pen­ing, but less than ex­pected.

There should be 4 steps of trans­parency, but we only get two!

Especially at lower resolutions, we can see the circle does actually have 4 steps of transparency, but mainly at the “45° diagonals” of the circle. A circle has of course no sides, but at the axis-aligned “bottom” there are only 2 steps of transparency: fully opaque and 50% transparent; the 25% and 75% transparency steps are missing.

We aren’t sampling against the circle shape at twice the resolution, we are sampling against the quantized result of the circle shape. Twice the resolution, but discrete pixels nonetheless. The combination of pixelation and sample placement doesn’t hold enough information where we need it the most: at the axis-aligned “flat parts”.

Four times the mem­ory and four times the cal­cu­la­tion re­quire­ment, but only a half-assed re­sult.

Implementing SSAA prop­erly is a minute craft. Here we are draw­ing to a 2x res­o­lu­tion tex­ture and down-sam­pling it with lin­ear in­ter­po­la­tion. So ac­tu­ally, this im­ple­men­ta­tion needs 5x the amount of VRAM. A proper im­ple­men­ta­tion sam­ples the scene mul­ti­ple times and com­bines the re­sult with­out an in­ter­me­di­ary buffer.

With our implementation, we can’t even do more than 2xSSAA with one texture read, as linear interpolation happens only across 2x2 samples.
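The 2x render-then-downsample scheme can be sketched on the CPU. This is illustrative only (names are my own): a binary coverage test at double resolution, then a 2x2 average standing in for the GL_LINEAR downsample.

```python
import math

def coverage(x, y, size):
    # binary alpha test at a supersampled pixel center
    u = (x + 0.5) / size * 2.0 - 1.0
    v = (y + 0.5) / size * 2.0 - 1.0
    return 1.0 if math.hypot(u, v) < 1.0 else 0.0

def ssaa2x(size):
    hi = 2 * size
    out = []
    for y in range(size):
        row = []
        for x in range(size):
            # average the four high-resolution samples under this output pixel
            s = sum(coverage(2 * x + dx, 2 * y + dy, hi)
                    for dy in (0, 1) for dx in (0, 1))
            row.append(s / 4.0)
        out.append(row)
    return out

img = ssaa2x(16)
levels = sorted({p for row in img for p in row})
# Up to five blend levels (0, .25, .5, .75, 1) are possible, but as the
# article notes, axis-aligned edges often land on only 0, .5 and 1.
print(levels)
```

Averaging four quantized samples is exactly why information goes missing at the flat parts: the two samples in a column straddling an axis-aligned edge agree with each other, so only half-steps survive there.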

To com­bat axis-align­ment ar­ti­facts like with our cir­cle above, we need to place our SSAA sam­ples bet­ter. There are mul­ti­ple ways to do so, all with pros and cons. To im­ple­ment SSAA prop­erly, we need deep in­te­gra­tion with the ren­der­ing pipeline. For 3D prim­i­tives, this hap­pens be­low API or en­gine, in the realm of ven­dors and dri­vers.

In fact, some of the best im­ple­men­ta­tions were dis­cov­ered by ven­dors on ac­ci­dent, like SGSSAA. There are also ways in which SSAA can make your scene look worse. Depending on im­ple­men­ta­tion, SSAA messes with mip-map cal­cu­la­tions. As a re­sult the mip-map lod-bias may need ad­just­ment, as ex­plained in the ar­ti­cle above.

WebXR UI package three-mesh-ui, a package mature enough to be used by Meta, uses shader-based rotated grid super sampling to achieve sharp text rendering in VR, as seen in the code.

MSAA is super sampling, but only at the silhouettes of models, overlapping geometry, and texture edges if “Alpha to Coverage” is enabled. MSAA is implemented in hardware by the graphics vendors, and what is supported depends on hardware. In the select box below you can choose different MSAA levels for our circle.

There is up to MSAA x64, but what is avail­able is im­ple­men­ta­tion de­fined. WebGL 1 has no sup­port, which is why the next can­vas ini­tial­izes a WebGL 2 con­text. In WebGL, NVIDIA lim­its MSAA to 8x on Windows, even if more is sup­ported, whilst on Linux no such limit is in place. On smart­phones you will only get ex­actly 4x, as dis­cussed be­low.

What is edge smoothing and how does MSAA even know what to sample against? For now we skip the shader code and implementation. First let’s take a look at MSAA’s pros and cons in general.
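The core idea of multi-sampled coverage can be sketched on the CPU. A minimal illustrative sketch, assuming a 4-point rotated-grid pattern (actual hardware sample positions are implementation defined, as discussed below): the inside/outside test runs at several sample points per pixel, and coverage is the fraction of samples inside.

```python
import math

# an illustrative rotated-grid pattern, offsets in pixel units
SAMPLES_4X = [(-0.125, -0.375), (0.375, -0.125),
              (-0.375, 0.125), (0.125, 0.375)]

def msaa_coverage(size):
    out = []
    for y in range(size):
        row = []
        for x in range(size):
            inside = 0
            for sx, sy in SAMPLES_4X:
                # evaluate the circle test at each sample position
                u = (x + 0.5 + sx) / size * 2.0 - 1.0
                v = (y + 0.5 + sy) / size * 2.0 - 1.0
                inside += math.hypot(u, v) < 1.0
            # coverage = fraction of samples inside the shape
            row.append(inside / 4.0)
        out.append(row)
    return out

img = msaa_coverage(16)
print(sorted({p for row in img for p in row}))
```

Unlike full super sampling, real MSAA shades each pixel once and only the coverage test runs per sample, which is where the performance win comes from.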

We rely on hardware to do the Anti-Aliasing, which obviously leads to the problem that user hardware may not support what we need. The sampling patterns MSAA uses may also do things we don’t expect. Depending on what your hardware does, you may see the circle’s edge transparency steps appearing in the “wrong order”.

When MSAA became required with the OpenGL 3 & DirectX 10 era of hardware, support was especially hit & miss. Even the latest Intel GMA iGPUs expose the OpenGL extension EXT_framebuffer_multisample, but don’t in fact support MSAA, which led to confusion. And also in more recent smartphones, support just wasn’t that clear-cut.

Mobile chips support exactly MSAAx4 and things are weird. Android will let you pick 2x, but the driver will force 4x anyway. iPhones & iPads do something rather stupid: choosing 2x will make it 4x, but transparency will be rounded to the nearest 50% multiple, leading to double edges in our example. There is a hardware-specific reason:

Looking at mod­ern video games, one might be­lieve that MSAA is of the past. It usu­ally brings a hefty per­for­mance penalty af­ter all. Surprisingly, it’s still the king un­der cer­tain cir­cum­stances and in very spe­cific sit­u­a­tions, even per­for­mance free.

As a gamer, this goes against in­stinct…

Rahul Prasad: Use MSAA […] It’s ac­tu­ally not as ex­pen­sive on mo­bile as it is on desk­top, it’s one of the nice things you get on mo­bile. […] On some (mobile) GPUs 4x (MSAA) is free, so use it when you have it.

As ex­plained by Rahul Prasad in the above talk, in VR 4xMSAA is a must and may come free on cer­tain mo­bile GPUs. The spe­cific rea­son would de­rail the blog post, but in case you want to go down that par­tic­u­lar rab­bit hole, here is Epic Games’ Niklas Smedberg giv­ing a run-down.

In short, this is possible under the condition of forward rendering with geometry that is not too dense and the GPU having a tile-based rendering architecture, which allows the GPU to perform MSAA calculations without heavy memory access and thus latency-hide the cost of the calculation. Here’s a deep dive, if you are interested.

MSAA gives you access to the samples, making custom MSAA filtering curves a possibility. It also allows you to merge both standard mesh-based and signed-distance-field rendering via alpha to coverage. This complex feature set made possible the most out-of-the-box thinking I ever witnessed in graphics programming:

Assassin’s Creed Unity used MSAA to render at half resolution and reconstruct only some buffers to full-res from MSAA samples, as described on page 48 of the talk “GPU-Driven Rendering Pipelines” by Ulrich Haar and Sebastian Aaltonen. Kinda like variable rate shading, but implemented with duct-tape and without vendor support.

The brain-melt­ing lengths to which graph­ics pro­gram­mers go to uti­lize hard­ware ac­cel­er­a­tion to the last drop has me some­times in awe.

In 2009 a pa­per by Alexander Reshetov struck the graph­ics pro­gram­ming world like a ton of bricks: take the blocky, aliased re­sult of the ren­dered im­age, find edges and clas­sify the pix­els into tetris-like shapes with per-shape fil­ter­ing rules and re­move the blocky edge. Anti-Aliasing based on the mor­phol­ogy of pix­els - MLAA was born.

Computationally cheap and easy to implement. Later it was refined with more emphasis on removing sub-pixel artifacts to become SMAA. It became a fan favorite, with an injector being developed early on to put SMAA into games that didn’t support it. Some considered these too blurry; the saying “vaseline on the screen” was coined.

It was the future, a sign of things to come. No more shaky hardware support. Like fixed-function pipelines died in favor of programmable shaders, Anti-Aliasing too became “shader based”.

We’ll take a close look at an algorithm that was inspired by MLAA, developed by Timothy Lottes: “Fast approximate anti-aliasing”, FXAA. In fact, when it came into wide circulation, it received some incredible press. Among others, Jeff Atwood pulled neither bold fonts nor punches in his 2011 blog post, later republished by Kotaku.

Jeff Atwood: The FXAA method is so good, in fact, it makes all other forms of full-screen anti-alias­ing pretty much ob­so­lete overnight. If you have an FXAA op­tion in your game, you should en­able it im­me­di­ately and ig­nore any other AA op­tions.

Let’s see what the hype was about. The fi­nal ver­sion pub­licly re­leased was FXAA 3.11 on August 12th 2011 and the fol­low­ing demos are based on this. First, let’s take a look at our cir­cle with FXAA do­ing the Anti-Aliasing at de­fault set­tings.

A bit of a weird result. It looks good if the circle didn’t move. Perfectly smooth edges. But the circle distorts as it moves. The axis-aligned top and bottom especially have a little nub that appears and disappears. And switching to lower resolutions, the circle even loses its round shape, wobbling like PlayStation 1 graphics.

Per-pixel, FXAA considers only the 3x3 neighborhood, so it can’t possibly know that this area is part of a big shape. But it also doesn’t just “blur edges”, as is often said. As explained in the official whitepaper, it finds the edge’s direction and shifts the pixel’s coordinates to let the performance-free linear interpolation do the blending.
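To make the first of those steps concrete, here is a rough sketch of local-contrast edge detection on a luminance channel. The helper names, the luma weights and the threshold are my own illustrative choices; real FXAA 3.11 goes much further, with a directional search and sub-pixel blending.

```python
def luma(rgb):
    # a common luminance approximation; FXAA variants use green-weighted luma
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def is_edge(image, x, y, threshold=0.25):
    """image: 2D grid of (r, g, b). Flags a pixel as an edge when the
    contrast across its N/S/E/W neighborhood exceeds the threshold."""
    center = luma(image[y][x])
    neighbors = [luma(image[y - 1][x]), luma(image[y + 1][x]),
                 luma(image[y][x - 1]), luma(image[y][x + 1])]
    local_range = max(neighbors + [center]) - min(neighbors + [center])
    return local_range > threshold

# a tiny test image: black left column, white elsewhere
black, white = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
img = [[black, white, white] for _ in range(3)]
print(is_edge(img, 1, 1))  # strong contrast against the black neighbor
```

Pixels that fail this early-exit test are left untouched, which is how FXAA stays cheap on mostly flat images.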

For our demo here, wrong tool for the job. Really, we did­n’t do FXAA jus­tice with our ex­am­ple. FXAA was cre­ated for an­other use case and has many set­tings and pre­sets. It was cre­ated to anti-alias more com­plex scenes. Let’s give it a fair shot!

A scene from my fa­vorite piece of soft­ware in ex­is­tence: NeoTokyo°. I cre­ated a bright area light in an NT° map and moved a bench to cre­ate an area of strong alias­ing. The fol­low­ing demo uses the aliased out­put from NeoTokyo°, cal­cu­lates the re­quired lu­mi­nance chan­nel and ap­plies FXAA. All FXAA pre­sets and set­tings at your fin­ger tips.

This has a fixed resolution and will only be at your device’s native resolution if your device has no dpi scaling and the browser is at 100% zoom.

Just looking at the full FXAA 3.11 source, you can see the passion in every line. Portable across OpenGL and DirectX, a PC version, an Xbox 360 version, two finely optimized PS3 versions fighting for every GPU cycle, including shader disassembly. Such a level of professionalism and dedication, shared with the world in plain text.

The shar­ing and open­ness is why I’m in love with graph­ics pro­gram­ming.

It may be cheap in performance, but only if you already have post-processing in place or do deferred shading. Especially in mobile graphics, memory access is expensive, so saving the framebuffer to perform post-processing is not always a given. If you need to set up render-to-texture in order to have FXAA, then the “F” in FXAA evaporates.

In this ar­ti­cle we won’t jump into mod­ern tem­po­ral anti-alias­ing, but be­fore FXAA was even de­vel­oped, TAA was al­ready ex­per­i­mented with. In fact, FXAA was sup­posed to get a new ver­sion 4 and in­cor­po­rate tem­po­ral anti alias­ing in ad­di­tion to the stan­dard spa­tial one, but in­stead it evolved fur­ther and re­branded into TXAA.

Now we get to the good stuff. Analytical Anti-Aliasing ap­proaches the prob­lem back­wards - it knows the shape you need and draws the pixel al­ready Anti-Aliased to the screen. Whilst draw­ing the 2D or 3D shape you need, it fades the shape’s bor­der by ex­actly one pixel.

Always smooth with­out ar­ti­facts and you can ad­just the amount of fil­ter­ing. Preserves shape even at low res­o­lu­tions. No ex­tra buffers or ex­tra hard­ware re­quire­ments.

Even runs on ba­sic WebGL 1.0 or OpenGLES 2.0, with­out any ex­ten­sions.

With the above buttons, you can set the smoothing to be equal to one pixel. This gives a sharp result, but comes with the caveat that axis-aligned 90° sides may still be perceived as “flat” in specific combinations of screen resolution, size and circle position.

Filtering based on the diagonal pixel size of √2 px = 1.4142… ensures the “tip” of the circle in axis-aligned pixel rows and columns is always non-opaque. This removes the perception of flatness, but makes its shape ever so slightly more blurry.

Or in other words: as soon as the bor­der has an opaque pixel, there is al­ready a trans­par­ent pixel in front” of it.
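The core of the approach can be sketched on the CPU. This is a minimal illustrative sketch with names of my own, not circle-analytical.fs itself: alpha comes from the signed distance to the circle’s edge measured in pixels, faded linearly over a one-pixel (or √2-pixel) band centered on the edge.

```python
import math

def analytical_circle(size, radius_px, smoothing_px=1.0):
    """Alpha from the signed distance to the circle's edge, faded over
    `smoothing_px` pixels (use 2 ** 0.5 for the diagonal variant)."""
    cx = cy = size / 2.0
    out = []
    for y in range(size):
        row = []
        for x in range(size):
            # signed distance from the pixel center to the edge, in pixels:
            # negative inside the circle, positive outside
            dist = math.hypot(x + 0.5 - cx, y + 0.5 - cy) - radius_px
            # linear fade across the smoothing band, centered on the edge
            alpha = min(1.0, max(0.0, 0.5 - dist / smoothing_px))
            row.append(alpha)
        out.append(row)
    return out

img = analytical_circle(16, 6.0)
# the edge now carries a continuous ramp of alpha values, no quantized steps
print(any(0.0 < p < 1.0 for row in img for p in row))
```

Because the distance is exact rather than sampled, the ramp is identical no matter where the circle sits relative to the pixel grid, which is why the shape stays stable under movement.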

This style of Anti-Aliasing is usu­ally im­ple­mented with 3 in­gre­di­ents:

But if you look at the code box above, you will find cir­cle-an­a­lyt­i­cal.fs hav­ing none of those. And this is the se­cret sauce we will look at. Before we dive into the im­ple­men­ta­tion, let’s clear the ele­phants in the room…

In graph­ics pro­gram­ming, Analytical refers to ef­fects cre­ated by know­ing the make-up of the in­tended shape be­fore­hand and per­form­ing cal­cu­la­tions against the rigid math­e­mat­i­cal de­f­i­n­i­tion of said shape. This term is used very loosely across com­puter graph­ics, sim­i­lar to su­per sam­pling re­fer­ring to mul­ti­ple things, de­pend­ing on con­text.

Very soft soft-shadows which include contact-hardening, implemented by algorithms like percentage-closer soft shadows, are very computationally intense and require high resolution shadow maps and/or very aggressive filtering to not produce shimmering during movement.

This is why Naughty Dog’s The Last of Us relied on getting soft-shadows on the main character by calculating the shadow from a rigidly defined formula of a stretched sphere, multiple of which were arranged in the shape of the main character, shown in red. An improved implementation with shader code can be seen in this Shadertoy demo by romainguy, with the more modern capsule, as opposed to a stretched sphere.

This is now an in­te­gral part of mod­ern game en­gines, like Unreal. As op­posed to stan­dard shadow map­ping, we don’t ren­der the scene from the per­spec­tive of the light with fi­nite res­o­lu­tion. We eval­u­ate the shadow per-pixel against the math­e­mat­i­cal equa­tion of the stretched sphere or cap­sule. This makes cap­sule shad­ows an­a­lyt­i­cal.

Staying with the Last of Us, The Last of Us Part II uses the same logic for blurry real-time re­flec­tions of the main char­ac­ter, where Screen Space Reflections aren’t de­fined. Other op­tions like ray­trac­ing against the scene, or us­ing a real-time cube­map like in GTA V are ei­ther noisy and low res­o­lu­tion or high res­o­lu­tion, but low per­for­mance.

Here the re­flec­tion cal­cu­la­tion is part of the ma­te­r­ial shader, ren­der­ing against the rigidly de­fined math­e­mat­i­cal shape of the cap­sule per-pixel, mul­ti­ple of which are arranged in the shape of the main char­ac­ter. This makes cap­sule re­flec­tions an­a­lyt­i­cal.

An online demo is worth at least a million…

…yeah the joke is get­ting old.

Ambient Occlusion is essential in modern rendering, bringing contact shadows and approximating global illumination. Another topic as deep as the ocean, with so many implementations. Usually implemented by some form of “raytrace a bunch of rays and blur the result”.

In this Shadertoy demo, the floor is eval­u­ated per-pixel against the rigidly de­fined math­e­mat­i­cal de­scrip­tion of the sphere to get a soft, non-noisy, non-flick­er­ing oc­clu­sion con­tri­bu­tion from the hov­er­ing ball. This im­ple­men­ta­tion is an­a­lyt­i­cal. Not just spheres, there are an­a­lyt­i­cal ap­proaches also for com­plex geom­e­try.

By ex­ten­sion, Unreal Engine has dis­tance field ap­proaches for Soft Shadows and Ambient Occlusion, though one may ar­gue, that this type of signed dis­tance field ren­der­ing does­n’t fit the de­scrip­tion of an­a­lyt­i­cal, con­sid­er­ing the dis­tance field is pre­cal­cu­lated into a 3D tex­ture.

Let’s dive into the sauce. We work with signed distance fields, where for every point that we sample, we know the distance to the desired shape. This information may be baked into a texture as done for SDF text rendering, or may be derived per-pixel from a mathematical formula for simpler shapes like bezier curves or hearts.

Based on that distance we fade out the border of the shape. If we fade by the size of one pixel, we get perfectly smooth edges, without any strange side effects. The secret sauce is in the implementation and under the sauce is where the magic is. How does the shader know the size of a pixel? How do we blend based on distance?

This ap­proach gives mo­tion-sta­ble pixel-per­fec­tion, but does­n’t work with tra­di­tional ras­ter­i­za­tion. The full shape re­quires a signed dis­tance field.

Specifically, by how much do we fade the bor­der? If we hard­code a sta­tic value, eg. fade at 95% of the cir­cle’s ra­dius, we may get a pleas­ing re­sult for that cir­cle size at that screen res­o­lu­tion, but too much smooth­ing when the cir­cle is big­ger or closer to the cam­era and alias­ing if the cir­cle be­comes small.
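The blend itself can be sketched outside the shader. Here is a minimal Python version of the idea (my own toy helpers; in the real thing this runs per-fragment in GLSL, and `pixel_size` stands in for whatever pixel-size estimate is used):

```python
import math

def circle_sdf(x, y, cx, cy, r):
    # signed distance to a circle: negative inside, zero on the edge,
    # positive outside
    return math.hypot(x - cx, y - cy) - r

def coverage(dist, pixel_size):
    # fade across exactly one pixel of signed distance: fully opaque half
    # a pixel inside the edge, fully transparent half a pixel outside,
    # and 50% opacity exactly on the mathematical edge
    return min(max(0.5 - dist / pixel_size, 0.0), 1.0)

# a sample point sitting exactly on the edge of a radius-10 circle
print(coverage(circle_sdf(60.0, 50.0, 50.0, 50.0, 10.0), 1.0))  # 0.5
```

Because the fade width is tied to `pixel_size` rather than to the shape, the edge stays one pixel wide no matter how large the circle is drawn.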

We need to know the size of a pixel. This is in part what Screen Space derivatives were created for. Shader functions like dFdx, dFdy and fwidth allow you to get the size of a screen pixel relative to some vector. In the above circle-analyticalCompare.fs we determine by how much the distance changes via two methods:

pixelSize = fwidth(dist);

/* or */

pixelSize = length(vec2(dFdx(dist), dFdy(dist)));

Relying on Screen Space derivatives has the benefit that we get the pixel size delivered to us by the graphics pipeline. It properly respects any transformations we might throw at it, including 3D perspective.

The downside is that it is not supported by the WebGL 1 standard and has to be pulled in via the extension GL_OES_standard_derivatives, or requires the jump to WebGL 2.

Luckily I have never wit­nessed any de­vice that sup­ported WebGL 1, but not the Screen Space de­riv­a­tives. Even the GMA based Thinkpad X200 & T500 I hard­ware mod­ded do.

Generally, there are some nasty pit­falls when us­ing Screen Space de­riv­a­tives: how the cal­cu­la­tion hap­pens is up to the im­ple­men­ta­tion. This led to the split into dFdxFine() and dFdx­Coarse() in later OpenGL re­vi­sions. The de­fault case can be set via GL_FRAGMENT_SHADER_DERIVATIVE_HINT, but the stan­dard hates you:

OpenGL Docs: The im­ple­men­ta­tion may choose which cal­cu­la­tion to per­form based upon fac­tors such as per­for­mance or the value of the API GL_FRAGMENT_SHADER_DERIVATIVE_HINT hint.

Why do we have stan­dards again? As a graph­ics pro­gram­mer, any­thing with hint has me trau­ma­tized.

Luckily, neither case concerns us, as the difference doesn’t show itself in the context of Anti-Aliasing. Performance-wise, dFdx and dFdy are technically free (or rather, their cost is already part of the rendering pipeline), though the pixel size calculation using length() or fwidth() is not: it is performed per-pixel.

This is why there ex­ist two ways of do­ing this: get­ting the length() of the vec­tor that dFdx and dFdy make up, a step in­volv­ing the his­tor­i­cally per­for­mance ex­pen­sive sqrt() func­tion or us­ing fwidth(), which is the ap­prox­i­ma­tion abs(dFdx()) + abs(dFdy()) of the above.

It de­pends on con­text, but on semi-mod­ern hard­ware a call to length() should be per­for­mance triv­ial though, even per-pixel.
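The diagonal bias of the approximation can be checked numerically. This Python sketch (not shader code, just the same math) compares the two methods for a distance field changing at exactly 1 unit per pixel in different directions:

```python
import math

def exact_pixel_size(dfdx, dfdy):
    # length(vec2(dFdx(dist), dFdy(dist))): the true rate of change
    return math.hypot(dfdx, dfdy)

def fwidth_pixel_size(dfdx, dfdy):
    # fwidth(dist): the cheap approximation abs(dFdx) + abs(dFdy)
    return abs(dfdx) + abs(dfdy)

for deg in (0, 45, 90):
    theta = math.radians(deg)
    # derivatives of a distance field changing at 1 unit/pixel along theta
    dfdx, dfdy = math.cos(theta), math.sin(theta)
    print(deg, exact_pixel_size(dfdx, dfdy), round(fwidth_pixel_size(dfdx, dfdy), 4))
```

Axis-aligned, both methods agree at 1.0, but at 45° fwidth() reports √2 ≈ 1.414: the edge gets roughly 41% more smoothing on diagonals, which is the slight rhombous bias described below.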

To show­case the dif­fer­ence, the above Radius ad­just slider works off of the Pixel size method and ad­justs the SDF dis­tance. If you go with fwidth() and a strong ra­dius shrink, you’ll see some­thing weird.

The diagonals shrink more than they should, as the approximation using addition scales too much diagonally. We’ll talk about professional implementations further below in a moment, but using fwidth() for AAA is what the Unity extension “Shapes” by Freya Holmér calls “Fast Local Anti-Aliasing”, with the following text:

Fast LAA has a slight bias in the di­ag­o­nal di­rec­tions, mak­ing cir­cu­lar shapes ap­pear ever so slightly rhom­bous and have a slightly sharper cur­va­ture in the or­thog­o­nal di­rec­tions, es­pe­cially when small. Sometimes the edges in the di­ag­o­nals are slightly fuzzy as well.

This affects our fading, which will fade more on diagonals. Luckily, we fade by the amount of one pixel and thus the difference is really only visible when flicking between the methods. What to choose depends on what you care more about: performance or accuracy? But what if I told you you can have your cake and eat it too…

…Calculate it yourself! For the 2D case, this is trivial and easily abstracted away. We know the size our context is rendering at and how big our quad is that we draw on. Calculating the size of the pixel is thus done per-object, not per-pixel. This is what happens in the above circleAnalyticalComparison.js.

/* Calculate pixel size based on height.

   Simple case: assumes square pixels and a square quad. */

gl.uniform1f(pixelSizeCircle, (2.0 / (canvas.height / resDiv)));

No WebGL 2, no ex­ten­sions, works on an­cient hard­ware.

...

Read the original on blog.frost.kiwi »

6 393 shares, 18 trendiness

How Tiny Glade 'built' its way to >600k sold in a month!

[The GameDiscoverCo game dis­cov­ery newslet­ter is writ­ten by how peo­ple find your game’ ex­pert & company founder Simon Carless, and is a reg­u­lar look at how peo­ple dis­cover and buy video games in the 2020s.]

We’re back for a new week, and thanks for the feed­back on our news up front, main fea­ture in the back’ newslet­ter struc­ture, which seems to have gone down well. (We spend a lot of time pick­ing the right news - not all the news - for that sec­tion.)

Before we start, we’re go­ing to ask you an im­por­tant ques­tion - should you be sink­ing your 401k into vTu­ber stocks? Dungeon Investing is try­ing to an­swer that by a deep fi­nan­cial dive into Hololive par­ent com­pany Cover Corp, whom you might know from hit free Steam fangame HoloCure and that L. A. Dodgers base­ball col­lab. Huh!

[HEADS UP: you can sup­port GameDiscoverCo by sub­scrib­ing to GDCo Plus right now. You get base ac­cess to a su­per-de­tailed Steam back -end for un­re­leased & re­leased games, full ac­cess to a sec­ond weekly newslet­ter, Discord ac­cess, eight game dis­cov­ery eBooks & lots more.]

We’re start­ing out with the lat­est game plat­form & dis­cov­ery news, as is now our rule. And here’s what we’d like to point out:

We’re guessing you might have seen Pounce Light’s glorious “relaxing building game” Tiny Glade ($15), which Ana Opara and Tomasz Stachowiak launched on Steam on Sept. 23rd, after a two-year dev period and a Top 10 appearance in June’s Next Fest.

We expected the game to do well - but at 10,000+ CCU on launch, and >1,000 CCU even now, it’s doing amazing for a micro-indie. Why? It appealed to the cozy demo, like The Sims streamers (above), wider “city builder” influencers - and has UGC galore, since players are building Helm’s Deep from The Lord Of The Rings in the game.

So we had to con­tact the devs for a Q&A. They were kind enough to be trans­par­ent with their num­bers - as of a few days ago - in­clud­ing this Steam back-end overview:

A few things stand out there as im­pres­sive or dif­fer­ent to the norm:

* the 616,000 copies sold in less than a month, pretty much guar­an­tee­ing Tiny Glade is sell­ing a mil­lion or two over time. (Blimey.)

* the big DAU (daily ac­tive) num­ber com­pared to CCU (concurrents) - ~30x, ver­sus 8-10x for stick­ier ti­tles. (But the game is still 97% Positive in user re­views.)

* the me­dian time played of just 1 hour 4 min­utes - rel­a­tively low, though we know some out­liers build for hun­dreds of hours.

Just flag­ging: we don’t re­ally see low play time as a neg­a­tive here. Tiny Glade is, at its heart, a gor­geous soft­ware toy. It does­n’t have in-game goals - it’s a sand­box. The peo­ple who bought it love it, and want to sup­port it, and don’t have any re­grets. Neat!

The Tiny Glade team also passed along the coun­try-based stats for Steam buy­ers, which are in­trigu­ing: United States (32%), Germany (9%), France (7%), UK (7%), China (7%), Canada (4%), Russian Federation (4%), Australia (3%), Netherlands (2%) and Japan (2%). So - less Asia-centric than a num­ber of other re­cent PC hits…

Switching to GameDiscoverCo data: here’s our Steam “Deep Dive Affinity” data, showing medium-sized (or above) games which have a high player overlap with Tiny Glade, and are >10x more likely than a “normal” Steam player to own that game:

This gives a re­ally good fla­vor of the kinds of play­ers who pick up Tiny Glade. They’re:

But… why did peo­ple buy Tiny Glade? The an­swer is - in our view - that every sin­gle video (or demo) that the game has ever put out, from early vi­ral Tweets to the Future Games Show 2023 trailer and be­yond, screams this’ll be so fun to build things in, play me!’

With the devs being so good at putting out new WIP work and trailers, the game was rarely not viral. It launched with a mindblowing 1,375,441 Steam wishlists - the team notes that “the big spike at ~20k [daily additions] around May 2024 is Steam Next Fest”:

Due to the sheer number of players, streamers and influencers recommending the game at launch - hence that Overwhelmingly Positive review score - and a Steam “takeover” feature - Tiny Glade also had a visibly good post-launch “long tail”:

Listen, we know that “incredibly well-made game sells” is self-evident, and perhaps not news. But the kind of game this is, goals-wise - and the fact the devs could charge $15 for it, despite being so freeform - is super interesting. So don’t dismiss it out of hand.

To fin­ish up, here’s a brief Q&A we had with Ana & Tom. We don’t gen­er­ally reprint these in full. But the an­swers they had were so fas­ci­nat­ing, we felt we had to. Ta da:

Q: There’s a trend recently for games that really don’t have strong failure states or put any pressure on the player. Sometimes game designers don’t want to “design games” like that. Can you explain why you decided to make Tiny Glade like that?

I think it de­pends on what kind of ex­pe­ri­ence you’re try­ing to achieve… We wanted to craft a serene space that you can es­cape to, the child­hood feel­ing that you have all the time in the world. Sometimes you want a high in­ten­sity game - but some­times you just want to kick back and see where your imag­i­na­tion takes you.

Q: How much did you it­er­ate with al­pha/​beta testers pre-re­lease to pol­ish the game, or did you end up do­ing a lot of the UI/UX it­er­a­tion your­self?

Oh, we it­er­ated a lot. Some tools went through 6 or 7 fully fleshed out pro­to­types be­fore we set­tled on what you can see in the game to­day. We first do a sim­ple ver­sion that we can test on our­selves. Sometimes that stage alone can take mul­ti­ple at­tempts.

If it does pass our in­ter­nal eval­u­a­tion, we pol­ish it up a bit, and then we run a playtest. If we’re lucky, then that ver­sion works and then it’s about smooth­ing the rough edges, do­ing mi­cro it­er­a­tions, so to say. But of­ten things don’t work like you’d ex­pect, and you need to go back to the draw­ing board and try again.

Sometimes you can only tell if some­thing works when the rest of the pieces are at a cer­tain level of com­ple­tion. It’s a very, very it­er­a­tive process, where you work on all the pieces to­gether, flesh­ing them all out lit­tle by lit­tle. Before we shipped, we had 5 ex­ter­nal playtests in the two year de­vel­op­ment pe­riod.

Q: Do you have two or three rules of “game feel” that you think you did great in Tiny Glade? It’s clear that “game feel” is a big part of its success!

Yes! We actually outlined design pillars in the very beginning of the development. They were “a lot from little effort”, “no wrong answers”, and “it’s alive” (the latter referring to the world reacting to what you’ve built, such as ivy, birds, sheep, etc).

For the “game feel”, I think “a lot from little effort” is probably the biggest one. Whenever you draw a wall, change roof shape, drag out fences, a lot of stuff is being generated right here and now, just on your whim. Each brick, pebble and plank is carefully placed by the game.

With any­thing that’s gen­er­ated, we aim for it to feel hand-crafted and per­fectly im­per­fect, as if some­one man­u­ally con­structed all these things just for you. You can hear it from the sound de­sign too. We wanted it to be very tac­tile, and have an as­so­ci­a­tion with real ma­te­ri­als, as if you’re build­ing a dio­rama in real life.

Q: Tech-wise, I was blown away that [often high-end fo­cused game tech eggheads] Digital Foundry gave you a rave video re­view! Congrats on that - the tech is stand­out. Do you have any tech in­spi­ra­tions, and do you think pro­ce­dural el­e­ments are still un­der-used in games?

Thank you :D From a ren­der­ing per­spec­tive, the biggest in­spi­ra­tions were Limbo & Inside. There, you don’t need to tweak a mil­lion set­tings in op­tions to get a beau­ti­ful ex­pe­ri­ence from the start. You launch the game, and you’re im­me­di­ately in it.

We strived for the full ex­pe­ri­ence to be the same across all ma­chines, so that you could ex­pe­ri­ence beau­ti­ful light­ing even on low-end PCs. When it comes to light­ing tech­nolo­gies, af­ter many it­er­a­tions, Tiny Glade ac­tu­ally ended up be­ing sim­i­lar to Metro Exodus :D

I think we’re used to seeing procedural techniques used for generating huge worlds, or “infinite” content. So one could say that procedural is used to a narrow extent. But that might be just a matter of semantics, because one could also draw parallels between procedural generation and systemic gameplay.

Many games amplify your input via multiple layered systems; they might just not be labeled as “procedural generation”. You could even say that the wand design system in Noita is procedural generation. We happen to use it to make the act of creation satisfying and responsive instead.

It’s true that a dom­i­nant plat­form-led sta­tus quo can be smoth­er­ing. But for prac­ti­cal rea­sons, grum­bling about it of­ten gets sup­pressed. So it’s fas­ci­nat­ing to see a spin­off of Disco Elysium stu­dio ZA/UM - a game very much forged in rad­i­cal pol­i­tics - go straight for the jugu­lar about the work­ers vs. the rul­ing (platform) par­ties.

Banger quote #1, from Summer Eternal’s Aleksandar Gavrilović: “I am still eagerly awaiting a second crisis [beyond the current layoffs], one which would spotlight the largest structural issue in game development… one third of all PC revenue from all developers (from indies to AAA) is syphoned to digital fiefdoms, of which Valve is the most egregious example.

“I can imagine a near future with more worker power, but I lack the imagination to envision the replacement of Valve with a community-owned alternative. That ‘winter castle’ will not fall as easily, but we should at least start openly discussing alternatives.”

Banger quote #2, from the company’s Dora Klindžić: “It’s true, Summer Eternal will not fix the games industry, although as a byproduct of our operation we might generate a panacea for agriculture, astronomy, inaccurate bus timetables, those hoax messages that target your mom, local elections, and syphilis. I think this industry is finished. But fortunately for everyone, video games are not.” Now that’s a soundbite….

[We’re GameDiscoverCo, an agency based around one sim­ple is­sue: how do play­ers find, buy and en­joy your PC or con­sole game? We run the newslet­ter you’re read­ing, and pro­vide con­sult­ing ser­vices for pub­lish­ers, funds, and other smart game in­dus­try folks.]

...

Read the original on newsletter.gamediscover.co »

7 341 shares, 27 trendiness

Pandas but 100x faster

My main background is as a hedge fund professional, so I deal with finance data all the time, and so far the Pandas library has been an indispensable tool in my workflow and my most-used Python library.

Then along came Polars (written in Rust, btw!), which shook the ground of the Python ecosystem due to its speed and efficiency; you can check some Polars benchmarks here.

I have around +/- 30 thou­sand lines of Pandas code, so you can un­der­stand why I’ve been hes­i­tant to rewrite them to Polars, de­spite my en­thu­si­asm for speed and op­ti­miza­tion. The sheer scale of the task has led to re­peated de­lays, as I weigh the po­ten­tial ben­e­fits of a faster and more ef­fi­cient li­brary against the sig­nif­i­cant ef­fort re­quired to refac­tor my ex­ist­ing code.

There has al­ways been this thought in the back of my mind:

Pandas is written in C and Cython, which means the main engine is King C… there’s got to be a way to optimize Pandas and leverage the C engine!

Here comes FireDucks, the answer to my prayer. It was launched in October 2023 by a team of programmers from NEC Corporation who have 30+ years of experience developing supercomputers; read the announcement here.

Quick check: the benchmark page here! I’ll let the numbers speak for themselves.

* This is the cra­zi­est bench, FireDucks even beat DuckDB! Also check Pandas & Polars ranks.

* It’s even faster than Polars!

Alrighty, those bench numbers from FireDucks look amazing, but a good rule of thumb is to never take numbers for granted… don’t trust, verify! Hence I’m running my own set of benchmarks on my machine.

Yes, the last two benchmark numbers are 130x and 200x faster than Pandas… are you not amused by these performance numbers?! So yeah, the title of this post is not clickbait, it’s real. Another key point I need to highlight, the most important one:

you can just plug FireDucks into your existing Pandas code and expect massive speed improvements… impressive indeed!

I’m lost for words… frankly! What else would Pandas users want?

A note for that group of people bashing Python for being slow… yes, pure Python is super slow, I agree. But it has been proven time and again that it can be optimized, and once it’s been properly optimized (FireDucks, Codon, Cython, etc.) it can be speedy as well, since these Python backends use C engines!

Be smart, folks! No one sane would use “pure Python” for serious workloads… leverage the vast ecosystem!

...

Read the original on hwisnu.bearblog.dev »

8 267 shares, 15 trendiness

Understanding the BM25 full text search algorithm

BM25, or Best Match 25, is a widely used al­go­rithm for full text search. It is the de­fault in Lucene/Elasticsearch and SQLite, among oth­ers. Recently, it has be­come com­mon to com­bine full text search and vec­tor sim­i­lar­ity search into hybrid search”. I wanted to un­der­stand how full text search works, and specif­i­cally BM25, so here is my at­tempt at un­der­stand­ing by re-ex­plain­ing.

For a quick bit of con­text on why I’m think­ing about search al­go­rithms, I’m build­ing a per­son­al­ized con­tent feed that scours noisy sources for con­tent re­lated to your in­ter­ests. I started off us­ing vec­tor sim­i­lar­ity search and wanted to also in­clude full-text search to im­prove the han­dling of ex­act key­words (for ex­am­ple, a friend has Solid.js” as an in­ter­est and us­ing vec­tor sim­i­lar­ity search alone, that turns up more con­tent re­lated to React than Solid).

The ques­tion that mo­ti­vated this deep dive into BM25 was: can I com­pare the BM25 scores of doc­u­ments across mul­ti­ple queries to de­ter­mine which query the doc­u­ment best matches?

Initially, both ChatGPT and Claude told me no — though an­noy­ingly, af­ter do­ing this deep dive and for­mu­lat­ing a more pre­cise ques­tion, they both said yes 🤦‍♂️. Anyway, let’s get into the de­tails of BM25 and then I’ll share my con­clu­sions about this ques­tion.

At the most ba­sic level, the goal of a full text search al­go­rithm is to take a query and find the most rel­e­vant doc­u­ments from a set of pos­si­bil­i­ties.

However, we don’t re­ally know which doc­u­ments are relevant”, so the best we can do is guess. Specifically, we can rank doc­u­ments based on the prob­a­bil­ity that they are rel­e­vant to the query. (This is called The Probability Ranking Principle.)

How do we cal­cu­late the prob­a­bil­ity that a doc­u­ment is rel­e­vant?

For full text or lex­i­cal search, we are only go­ing to use qual­i­ties of the search query and each of the doc­u­ments in our col­lec­tion. (In con­trast, vec­tor sim­i­lar­ity search might use an em­bed­ding model trained on an ex­ter­nal cor­pus of text to rep­re­sent the mean­ing or se­man­tics of the query and doc­u­ment.)

BM25 uses a cou­ple of dif­fer­ent com­po­nents of the query and the set of doc­u­ments:

* Query terms: if a search query is made up of mul­ti­ple terms, BM25 will cal­cu­late a sep­a­rate score for each term and then sum them up.

* Inverse Document Frequency (IDF): how rare is a given search term across the en­tire doc­u­ment col­lec­tion? We as­sume that com­mon words (such as the” or and”) are less in­for­ma­tive than rare words. Therefore, we want to boost the im­por­tance of rare words.

* Term fre­quency in the doc­u­ment: how many times does a search term ap­pear in a given doc­u­ment? We as­sume that more rep­e­ti­tion of a query term in a given doc­u­ment in­creases the like­li­hood that that doc­u­ment is re­lated to the term. However, BM25 also ad­justs this so that there are di­min­ish­ing re­turns each time a term is re­peated.

* Document length: how long is the given doc­u­ment com­pared to oth­ers? Long doc­u­ments might re­peat the search term more, just by virtue of be­ing longer. We don’t want to un­fairly boost long doc­u­ments, so BM25 ap­plies some nor­mal­iza­tion based on how the doc­u­men­t’s length com­pares to the av­er­age.

These four com­po­nents are what make up BM25. Now, let’s look at ex­actly how they’re used.

The BM25 al­go­rithm might look scary to non-math­e­mati­cians (my eyes glazed over the first time I saw it), but I promise, it’s not too hard to un­der­stand!

Here is the full equation:

\text{score}(D, Q) = \sum_{i=1}^{n} \text{IDF}(q_i) \cdot \frac{f(q_i, D) \cdot (k_1 + 1)}{f(q_i, D) + k_1 \cdot \left(1 - b + b \cdot \frac{|D|}{\text{avgdl}}\right)}

Now, let’s go through it piece-by-piece.

* Q is the full query, potentially composed of multiple query terms

* n is the number of query terms

* q_i is each of the query terms

This part of the equa­tion says: given a doc­u­ment and a query, sum up the scores for each of the query terms.

Now, let’s dig into how we cal­cu­late the score for each of the query terms.

The first component of the score calculates how rare the query term is within the whole collection of documents using the Inverse Document Frequency (IDF):

\text{IDF}(q_i) = \ln\left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5} + 1\right)

The key el­e­ments to fo­cus on in this equa­tion are:

* N is the total number of documents in our collection

* n(q_i) is the number of documents that contain the query term

* N - n(q_i) therefore is the number of documents that do not contain the query term

In simple language, this part boils down to the following: common terms will appear in many documents. If the term appears in many documents, we will have a small number (N - n(q_i), the number of documents that do not have the term) divided by a large n(q_i). As a result, common terms will have a small effect on the score.

In contrast, rare terms will appear in few documents, so n(q_i) will be small and N - n(q_i) will be large. Therefore, rare terms will have a greater impact on the score.

The 0.5 constants in the numerator and denominator are there to smooth out the equation and ensure that we don’t end up with wildly varying results if the term is either very rare or very common.
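To make the behavior concrete, here is the IDF as a small Python function (my own sketch of the smoothed formulation, with toy document counts):

```python
import math

def idf(N, n_q):
    # N   = total number of documents in the collection
    # n_q = number of documents that contain the query term
    # the 0.5 constants smooth the ratio for very rare or common terms
    return math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)

# a common term appearing in 95 of 100 documents barely contributes...
print(round(idf(100, 95), 3))  # 0.056
# ...while a rare term appearing in only 2 documents is boosted heavily
print(round(idf(100, 2), 3))   # 3.699
```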

In the previous step, we looked at how rare the term is across the whole set of documents. Now, let’s look at how frequent the given query term is in the given document:

\frac{f(q_i, D) \cdot (k_1 + 1)}{f(q_i, D) + k_1}

The terms in this equa­tion are:

* f(q_i, D) is the frequency of the given query term q_i in the given document D

* k_1 is a tuning parameter that is generally set between 1.2 and 2.0

This equation takes the term frequency within the document into account, but ensures that term repetition has diminishing returns. The intuition here is that, at some point, the document is probably related to the query term and we don’t want an infinite amount of repetition to be weighted too heavily in the score.

The parameter k_1 controls how quickly the returns to term repetition diminish. You can see how the slope changes based on this setting:
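The diminishing returns are easy to see in numbers. A small Python sketch of just this component (my own helper, with length normalization left out):

```python
def tf_component(freq, k1=1.2):
    # term-frequency score with diminishing returns; the value can never
    # exceed k1 + 1, no matter how often the term repeats
    return freq * (k1 + 1) / (freq + k1)

for freq in (1, 2, 5, 20, 100):
    print(freq, round(tf_component(freq), 3))
```

Going from 1 to 2 occurrences adds about 0.375 to the score, while going from 20 to 100 adds less than 0.1; repetition saturates toward k_1 + 1.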

The last thing we need is to com­pare the length of the given doc­u­ment to the lengths of the other doc­u­ments in the col­lec­tion.

1 - b + b \cdot \frac{|D|}{\text{avgdl}}

From right to left this time, the parameters are:

* |D| is the length of the given document

* avgdl is the average document length in our collection

* b is another tuning parameter that controls how much we normalize by the document length

Long documents are likely to contain the search term more frequently, just by virtue of being longer. Since we don’t want to unfairly boost long documents, this whole term is going to go in the denominator of our final equation. That is, a document that is longer than average (|D| > avgdl) will be penalized by this adjustment.

b can be adjusted by the user. Setting b = 0 turns off document length normalization, while setting b = 1 applies it fully. It is normally set to 0.75.
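A quick numeric check of the normalization factor (my own toy helper; this factor multiplies the denominator of the final score, so values above 1 reduce the score):

```python
def length_norm(doc_len, avgdl, b=0.75):
    # 1 - b + b * |D| / avgdl: greater than 1 for longer-than-average
    # documents, less than 1 for shorter ones, always 1 when b = 0
    return 1 - b + b * doc_len / avgdl

print(length_norm(200, 100))       # 1.75  -> long document penalized
print(length_norm(50, 100))        # 0.625 -> short document boosted
print(length_norm(100, 100))       # 1.0   -> average length, unchanged
print(length_norm(200, 100, b=0))  # 1.0   -> normalization turned off
```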

If we take all of the components we’ve just discussed and put them together, we arrive back at the full BM25 equation:

\text{score}(D, Q) = \sum_{i=1}^{n} \text{IDF}(q_i) \cdot \frac{f(q_i, D) \cdot (k_1 + 1)}{f(q_i, D) + k_1 \cdot \left(1 - b + b \cdot \frac{|D|}{\text{avgdl}}\right)}

Reading from left to right, you can see that we are sum­ming up the scores for each query term. For each, we are tak­ing the Inverse Document Frequency, mul­ti­ply­ing it by the term fre­quency in the doc­u­ment (with di­min­ish­ing re­turns), and then nor­mal­iz­ing by the doc­u­ment length.
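Putting all four components together, a minimal BM25 scorer can be sketched in Python. This is my own unoptimized toy (real engines like Lucene or SQLite use inverted indexes and precomputed statistics), but it follows the full equation term by term:

```python
import math

def bm25(query_terms, doc, docs, k1=1.2, b=0.75):
    """Score one tokenized document `doc` against `query_terms`,
    relative to the whole collection `docs` (a list of token lists)."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    score = 0.0
    for term in query_terms:
        n_q = sum(1 for d in docs if term in d)            # document frequency
        idf = math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)  # rarity boost
        f = doc.count(term)                                # term frequency
        norm = 1 - b + b * len(doc) / avgdl                # length normalization
        score += idf * f * (k1 + 1) / (f + k1 * norm)
    return score

docs = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "chased", "the", "cat"],
    ["solid", "js", "signals", "tutorial"],
]
# the rare term "solid" scores only the document that contains it
print([round(bm25(["solid"], d, docs), 3) for d in docs])
```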

We’ve just gone through the com­po­nents of the BM25 equa­tion, but I think it’s worth paus­ing to em­pha­size two of its most in­ge­nious as­pects.

As men­tioned ear­lier, BM25 is based on an idea called the Probability Ranking Principle. In short, it says:

If re­trieved doc­u­ments are or­dered by de­creas­ing prob­a­bil­ity of rel­e­vance on the data avail­able, then the sys­tem’s ef­fec­tive­ness is the best that can be ob­tained for the data.

Unfortunately, calculating the “true” probability that a document is relevant to a query is nearly impossible.

However, we re­ally care about the or­der of the doc­u­ments more than we care about the ex­act prob­a­bil­ity. Because of this, re­searchers re­al­ized that you could sim­plify the equa­tions and make it prac­ti­ca­ble. Specifically, you could drop terms from the equa­tion that would be re­quired to cal­cu­late the full prob­a­bil­ity but where leav­ing them out would not af­fect the or­der.

Even though we are using the Probability Ranking Principle, we are actually calculating a “weight” instead of a probability.

This equation calculates the weight using term frequencies:

w = \log \frac{p(tf \mid r) \cdot p(0 \mid \bar{r})}{p(tf \mid \bar{r}) \cdot p(0 \mid r)}

Specifically:

* w is the weight for a given document

* p(tf \mid r) is the probability that the query term would appear in the document with a given frequency (tf) if the document is relevant (r); \bar{r} denotes a non-relevant document

The var­i­ous terms boil down to the prob­a­bil­ity that we would see a cer­tain query term fre­quency within the doc­u­ment if the doc­u­ment is rel­e­vant or not rel­e­vant, and the prob­a­bil­i­ties that the term would not ap­pear at all if the doc­u­ment is rel­e­vant or not.

The Robertson/Sparck Jones Weight is a way of estimating these probabilities but only using the counts of different sets of documents:

w^{RSJ} = \log \frac{(r + 0.5) / (R - r + 0.5)}{(n - r + 0.5) / (N - n - R + r + 0.5)}

The terms here are:

* r is the number of relevant documents that contain the query term

* N is the total number of documents in the collection

* R is the number of relevant documents in the collection

* n is the number of documents that contain the query term

The big, glar­ing prob­lem with this equa­tion is that you first need to know which doc­u­ments are rel­e­vant to the query. How are we go­ing to get those?

The question of how to make use of the Robertson/Sparck Jones weight apparently stumped the entire research field for about 15 years. The equation was built up from a solid theoretical foundation, but relying on already having relevance information made it nearly impossible to put to use.

The BM25 de­vel­op­ers made a very clever as­sump­tion to get to the next step.

For any given query, we can as­sume that most doc­u­ments are not go­ing to be rel­e­vant. If we as­sume that the num­ber of rel­e­vant doc­u­ments is so small as to be neg­li­gi­ble, we can just set those num­bers to zero!

If we substitute this into the Robertson/Sparck Jones Weight equation, we get nearly the IDF term used in BM25:

\log \frac{0.5 / 0.5}{(n + 0.5) / (N - n + 0.5)} = \log \frac{N - n + 0.5}{n + 0.5}

Not relying on relevance information made BM25 much more useful, while keeping the same theoretical underpinnings. Victor Lavrenko described this as “a very impressive leap of faith”, and I think this is quite a neat bit of BM25’s backstory.

As I mentioned at the start, my motivating question was whether I could compare BM25 scores for a document across queries to understand which query the document best matches.

In general, BM25 scores cannot be directly compared (and this is what ChatGPT and Claude stressed to me in response to my initial inquiries 🙂‍↔️). The algorithm does not produce a score from 0 to 1 that is easy to compare across systems, and it doesn't even try to estimate the probability that a document is relevant. It only focuses on ranking documents within a certain collection in an order that approximates the probability of their relevance to the query. A higher BM25 score means the document is likely to be more relevant, but it isn't the actual probability that it is relevant.

As far as I understand now, it is possible to compare the BM25 scores across queries for the same document within the same collection of documents.

My hint that this was the case was the fact that BM25 sums the scores of each query term. There should be no semantic difference between comparing the scores of two query terms and comparing the scores of two whole queries.

The important caveat to stress, however, is the same document within the same collection. BM25 uses the IDF or rarity of terms as well as the average document length within the collection. Therefore, you cannot necessarily compare scores across time, because any modifications to the overall collection could change the scores.

For my purposes, though, this is useful enough. It means that I can do a full text search for each of a user's interests in my collection of content and compare the BM25 scores to help determine which pieces best match their interests.
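That use case can be sketched as follows. This is a minimal BM25 implementation of my own for illustration, using the common k1 = 1.2 and b = 0.75 defaults and the Lucene-style "+1" inside the IDF log to keep weights non-negative; a real system would use a search engine's scorer:

```python
import math
from collections import Counter

def bm25_score(query: str, doc: str, docs: list[str], k1: float = 1.2, b: float = 0.75) -> float:
    """Score one document for a query against a fixed collection."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    words = doc.lower().split()
    tf = Counter(words)
    score = 0.0
    for term in query.lower().split():
        n = sum(1 for d in tokenized if term in d)       # docs containing the term
        idf = math.log((N - n + 0.5) / (n + 0.5) + 1)    # +1 is a common variant avoiding negative IDF
        f = tf[term]
        score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(words) / avgdl))
    return score

docs = [
    "rust borrow checker and async programming",
    "sourdough bread baking at home",
    "async rust web services in production",
]
# Same document, same collection: the scores are comparable across queries.
article = docs[2]
print(bm25_score("rust async", article, docs))
print(bm25_score("sourdough baking", article, docs))
```

Because both scores come from the same document and the same collection, the higher one indicates which interest the article better matches.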

I'll write more about ranking algorithms and how I'm using the relevance scores in future posts, but in the meantime I hope you've found this background on BM25 useful or interesting!

Thanks to Alex Kesling and Natan Last for feedback on drafts of this post.

If you are interested in diving further into the theory and history of BM25, I would highly recommend watching Elastic engineer Britta Weber's 2016 talk Improved Text Scoring with BM25 and reading The Probabilistic Relevance Framework: BM25 and Beyond by Stephen Robertson and Hugo Zaragoza.

Also, I had initially included comparisons between BM25 and some other algorithms in this post. But, as you know, it was already a bit long 😅. So, you can now find those in this other post: Comparing full text search algorithms: BM25, TF-IDF, and Postgres.

...

Read the original on emschwartz.me »

9 235 shares, 11 trendiness

4.3 — blender.org

With light linking, lights can be set to affect only specific objects in the scene.

Shadow linking additionally gives control over which objects act as shadow blockers for a light.

This reaches feature parity with Cycles.

...

Read the original on www.blender.org »

10 192 shares, 8 trendiness

leaningtech/webvm: Virtual Machine for the Web

This repository hosts the source code for https://webvm.io, a Linux virtual machine that runs in your browser.

Try out the new Alpine / Xorg / i3 graphical environment: https://webvm.io/alpine.html

WebVM is a server-less virtual environment running fully client-side in HTML5/WebAssembly. It's designed to be Linux ABI-compatible. It runs an unmodified Debian distribution including many native development toolchains.

WebVM is powered by the CheerpX virtualization engine, and enables safe, sandboxed client-side execution of x86 binaries on any browser. CheerpX includes an x86-to-WebAssembly JIT compiler, a virtual block-based file system, and a Linux syscall emulator.

Modern browsers do not provide APIs to directly use TCP or UDP. WebVM provides networking support by integrating with Tailscale, a VPN network that supports WebSockets as a transport layer.

* Open the "Networking" panel from the side-bar

* Click "Connect to Tailscale" from the panel

* Log in to Tailscale (create an account if you don't have one)

* If you are unfamiliar with Tailscale or would like additional information, see WebVM and Tailscale.

* Enable GitHub Pages in settings.

    * Go to the Pages section.

    * Select GitHub Actions as the source.

    * If you are using a custom domain, ensure Enforce HTTPS is enabled.

* Run the workflow.

    * Accept the prompt. This is required only once to enable Actions for your fork.

    * Click "Run workflow" and then once more "Run workflow" in the menu.

* After a few seconds a new Deploy workflow will start; click on it to see details.

* After the workflow completes, which takes a few minutes, it will show the URL below the deploy_to_github_pages job.

You can now customize dockerfiles/debian_mini to suit your needs, or make a new Dockerfile from scratch. Use the Path to Dockerfile workflow parameter to select it.

* Download the debian_mini Ext2 image from https://github.com/leaningtech/webvm/releases/

    * You can also build your own by selecting the "Upload GitHub release" workflow option

* Place the image in the repository root folder

* Edit config_github_terminal.js

    * Uncomment the default values for CMD, ARGS, ENV and CWD

    * Replace IMAGE_URL with the URL (absolute or relative) for the Ext2 image. For example "/debian_mini_20230519_5022088024.ext2"

* Build WebVM using npm; the output will be placed in the build directory

* Start NGINX; it automatically points to the build directory just created

The Deploy workflow takes into account the CMD specified in the Dockerfile. To build a REPL you can simply apply this patch and deploy.

```diff
diff --git a/dockerfiles/debian_mini b/dockerfiles/debian_mini
index 2878332..1f3103a 100644
--- a/dockerfiles/debian_mini
+++ b/dockerfiles/debian_mini
@@ -15,4 +15,4 @@ WORKDIR /home/user/
 # We set env, as this gets extracted by Webvm. This is optional.
 ENV HOME="/home/user" TERM="xterm" USER="user" SHELL="/bin/bash" EDITOR="vim" LANG="en_US.UTF-8" LC_ALL="C"
 RUN echo 'root:password' | chpasswd
-CMD [ "/bin/bash" ]
+CMD [ "/usr/bin/python3" ]
```

Please use Issues to report any bugs, or come say hello and share your feedback on Discord.

WebVM depends on the CheerpX x86-to-WebAssembly virtualization technology, which is included in the project via NPM.

The NPM package is updated on every release.

Every build is immutable: if a specific version works well for you today, it will keep working forever.

WebVM is released under the Apache License, Version 2.0.

You are welcome to use, modify, and redistribute the contents of this repository.

The public CheerpX deployment is provided as-is and is free to use for technological exploration, testing and use by individuals. Any other use by organizations, including non-profit, academia and the public sector, requires a license. Downloading a CheerpX build for the purpose of hosting it elsewhere is not permitted without a commercial license.

If you want to build a product on top of CheerpX/WebVM, please get in touch: sales@leaningtech.com

...

Read the original on github.com »
