10 interesting stories served every morning and every evening.




1 465 shares, 19 trendiness

Federal Register

Due to aggressive automated scraping of FederalRegister.gov and eCFR.gov, programmatic access to these sites is limited to our extensive developer APIs.

If you are a human user receiving this message, we can add your IP address to a set of IPs that can access FederalRegister.gov & eCFR.gov; complete the CAPTCHA (bot test) below and click "Request Access". This process will be necessary for each IP address you wish to access the site from, and requests are valid for approximately one quarter (three months), after which the process may need to be repeated.

An official website of the United States government.

If you want to request a wider IP range, first request access for your current IP, and then use the "Site Feedback" button found in the lower left-hand side to make the request.

...

Read the original on www.federalregister.gov »

2 441 shares, 40 trendiness

Open Sourcing DOS 4

See the canonical version of this blog post at the Microsoft Open Source Blog!

Ten years ago, Microsoft released the source for MS-DOS 1.25 and 2.0 to the Computer History Museum, and then later republished them for reference purposes. This code holds an important place in history and is a fascinating read of an operating system that was written entirely in 8086 assembly code nearly 45 years ago.

Today, in partnership with IBM and in the spirit of open innovation, we're releasing the source code to MS-DOS 4.00 under the MIT license. There's a somewhat complex and fascinating history behind the 4.0 versions of DOS, as Microsoft partnered with IBM for portions of the code but also created a branch of DOS called Multitasking DOS that did not see a wide release.

A young English researcher named Connor "Starfrost" Hyde recently corresponded with former Microsoft Chief Technical Officer Ray Ozzie about some of the software in his collection. Amongst the floppies, Ray found unreleased beta binaries of DOS 4.0 that he was sent while he was at Lotus. Starfrost reached out to the Microsoft Open Source Programs Office (OSPO) to explore releasing DOS 4 source, as he is working on documenting the relationship between DOS 4, MT-DOS, and what would eventually become OS/2. Some later versions of these Multitasking DOS binaries can be found around the internet, but these new Ozzie beta binaries appear to be much earlier, unreleased, and also include the ibmbio.com source.

Scott Hanselman, with the help of internet archivist and enthusiast Jeff Sponaugle, has imaged these original disks and carefully scanned the original printed documents from this "Ozzie Drop". Microsoft, along with our friends at IBM, think this is a fascinating piece of operating system history worth sharing.

Jeff Wilcox and OSPO went to the Microsoft Archives, and while they were unable to find the full source code for MT-DOS, they did find MS-DOS 4.00, which we're releasing today, alongside these additional beta binaries, PDFs of the documentation, and disk images. We will continue to explore the archives and may update this release if more is discovered.

Thank you to Ray Ozzie, Starfrost, Jeff Sponaugle, Larry Osterman, our friends at the IBM OSPO, as well as the makers of such digital archeology software including, but not limited to, Greaseweazle, Fluxengine, Aaru Data Preservation Suite, and the HxC Floppy Emulator. Above all, thank you to the original authors of this code, some of whom still work at Microsoft and IBM today!

If you'd like to run this software yourself and explore, we have successfully run it directly on an original IBM PC XT, a newer Pentium, and within the open source PCem and 86box emulators.

...

Read the original on www.hanselman.com »

3 435 shares, 22 trendiness

catdad/canvas-confetti: 🎉 performant confetti animation in the browser

You can install this module as a component from NPM:

npm install --save canvas-confetti

You can then require('canvas-confetti'); to use it in your project build. Note: this is a client component, and will not run in Node. You will need to build your project with something like webpack in order to use this.

You can also include this library in your HTML page directly from a CDN:

Note: you should use the latest version available at the time you include this in your project. You can see all versions on the releases page.

Thank you for joining me in this very important message about motion on your website. See, not everyone likes it, and some actually prefer no motion. They have ways to tell us about it and we should listen. While I don't want to go as far as to tell you not to have confetti on your page just yet, I do want to make it easy for you to respect what your users want. There is a disableForReducedMotion option you can use so that users that have trouble with chaotic animations don't need to struggle on your website. This is disabled by default, but I am considering changing that in a future major release. If you have strong feelings about this, please let me know. For now, please confetti responsibly.
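For instance, a minimal sketch of opting in to this behavior, using just the option described above:

confetti({
  particleCount: 100,
  disableForReducedMotion: true
});

With this set, users who have requested reduced motion get no confetti, and the returned promise resolves immediately.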

When installed from npm, this library can be required as a client component in your project build. When using the CDN version, it is exposed as a confetti function on window.

confetti takes a single optional object. When window.Promise is available, it will return a Promise to let you know when it is done. When promises are not available (like in IE), it will return null. You can polyfill promises using any of the popular polyfills. You can also provide a promise implementation to confetti through:

const MyPromise = require('some-promise-lib');
const confetti = require('canvas-confetti');
confetti.Promise = MyPromise;

If you call confetti multiple times before it is done, it will return the same promise every time. Internally, the same canvas element will be reused, continuing the existing animation with the new confetti added. The promise returned by each call to confetti will resolve once all animations are done.
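To illustrate, a small sketch (assuming a Promise-capable browser; the logged message is my own):

var p1 = confetti({ particleCount: 50 });
var p2 = confetti({ particleCount: 50, angle: 120 });
// Both calls draw on the same canvas, and the promise resolves
// once every particle from both launches has finished animating.
p2.then(function () { console.log('all confetti done'); });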

The confetti parameter is a single optional options object, which has the following properties (a combined example follows this list):

* particleCount Integer (default: 50): The number of confetti to launch. More is always fun... but be cool, there's a lot of math involved.

* angle Number (default: 90): The angle in which to launch the confetti, in degrees. 90 is straight up.

* spread Number (default: 45): How far off center the confetti can go, in degrees. 45 means the confetti will launch at the defined angle plus or minus 22.5 degrees.

* startVelocity Number (default: 45): How fast the confetti will start going, in pixels.

* decay Number (default: 0.9): How quickly the confetti will lose speed. Keep this number between 0 and 1, otherwise the confetti will gain speed. Better yet, just never change it.

* gravity Number (default: 1): How quickly the particles are pulled down. 1 is full gravity, 0.5 is half gravity, etc., but there are no limits. You can even make particles go up if you'd like.

* drift Number (default: 0): How much to the side the confetti will drift. The default is 0, meaning that they will fall straight down. Use a negative number for left and positive number for right.

* flat Boolean (default: false): Optionally turns off the tilt and wobble that three dimensional confetti would have in the real world. Yeah, they look a little sad, but y'all asked for them, so don't blame me.

* ticks Number (default: 200): How many times the confetti will move. This is abstract... but play with it if the confetti disappear too quickly for you.

* origin Object: Where to start firing confetti from. Feel free to launch off-screen if you'd like.

  * origin.x Number (default: 0.5): The x position on the page, with 0 being the left edge and 1 being the right edge.

  * origin.y Number (default: 0.5): The y position on the page, with 0 being the top edge and 1 being the bottom edge.

* colors Array<String>: An array of color strings, in the HEX format... you know, like #bada55.

* shapes Array<String|Shape>: An array of shapes for the confetti. There are 3 built-in values of square, circle, and star. The default is to use both squares and circles in an even mix. To use a single shape, you can provide just one shape in the array, such as ['star']. You can also change the mix by providing a value such as ['circle', 'circle', 'square'] to use two-thirds circles and one-third squares. You can also create your own shapes using the confetti.shapeFromPath or confetti.shapeFromText helper methods.

* scalar Number (default: 1): Scale factor for each confetti particle. Use decimals to make the confetti smaller. Go on, try teeny tiny confetti, they are adorable!

* zIndex Integer (default: 100): The confetti should be on top, after all. But if you have a crazy high page, you can set it even higher.

* disableForReducedMotion Boolean (default: false): Disables confetti entirely for users that prefer reduced motion. The confetti() promise will resolve immediately in this case.
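Here is the combined example promised above, a minimal sketch using several of the documented options (the values themselves are arbitrary):

confetti({
  particleCount: 150,
  angle: 60,
  spread: 55,
  origin: { x: 0, y: 0.5 },
  colors: ['#bada55', '#c0ffee']
});

This launches a burst from the middle of the left edge, angled up and to the right.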

This helper method lets you create a custom confetti shape using an SVG Path string. Any valid path should work, though there are a few caveats:

* All paths will be filled. If you were hoping to have a stroke path, that is not implemented.

* Paths are limited to a single color, so keep that in mind.

* All paths need a valid transform matrix. You can pass one in, or you can leave it out and use this helper to calculate the matrix for you. Do note that calculating the matrix is a bit expensive, so it is best to calculate it once for each path in development and cache that value, so that production confetti remain fast. The matrix is deterministic and will always be the same given the same path value.

* For best forward compatibility, it is best to re-generate and re-cache the matrix if you update the canvas-confetti library.

* Support for path-based confetti is limited to browsers which support Path2D, which should really be all major browsers at this point.

This method will return a Shape; it's really just a plain object with some properties, but shhh... we'll pretend it's a shape. Pass this Shape object into the shapes array directly.

As an example, here's how you might do a triangle confetti:

var triangle = confetti.shapeFromPath({ path: 'M0 10 L5 0 L10 10z' });

confetti({
  shapes: [triangle]
});

This is the highly anticipated feature to render emoji confetti! Use any standard unicode emoji. Or other text, but... maybe don't use other text.

While any text should work, there are some caveats:

* For flailing confetti, something that is mostly square works best. That is, a single character, especially an emoji.

* Rather than rendering text every time a confetti is drawn, this helper actually rasterizes the text. Therefore, it does not scale well after it is created. If you plan to use the scalar value to scale your confetti, use the same scalar value here when creating the shape. This will make sure the confetti are not blurry.

The options for this method are:

* options Object:

  * text String: the text to be rendered as a confetti. If you can't make up your mind, I suggest 🐈.

  * scalar Number, optional, default: 1: a scale value relative to the default size. It matches the scalar value in the confetti options.

  * color String, optional, default: #000000: the color used to render the text.

  * fontFamily String, optional, default: native emoji: the font family name to use when rendering the text. The default follows best practices for rendering the native OS emoji of the device, falling back to sans-serif. If using a web font, make sure this font is loaded before rendering your confetti.

var scalar = 2;
var pineapple = confetti.shapeFromText({ text: '🍍', scalar });

confetti({
  shapes: [pineapple],
  scalar
});

This method creates an instance of the confetti function that uses a custom canvas. This is useful if you want to limit the area on your page in which confetti appear. By default, this method will not modify the canvas in any way (other than drawing to it).

Canvas can be misunderstood a bit though, so let me explain why you might want to let the module modify the canvas just a bit. By default, a canvas is a relatively small image, somewhere around 300x150, depending on the browser. When you resize it using CSS, this sets the display size of the canvas, but not the image being represented on that canvas. Think of it as loading a 300x150 jpeg image in an img tag and then setting the CSS for that tag to 1500x600; your image will end up stretched and blurry. In the case of a canvas, you need to also set the width and height of the canvas image itself. If you don't want to do that, you can allow confetti to set it for you.
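To make that sizing pitfall concrete, here's a quick sketch in plain DOM code (this is not part of the library; the resize: true option below does the equivalent for you):

var canvas = document.querySelector('canvas');
// CSS alone only stretches the default ~300x150 bitmap (blurry):
canvas.style.width = '500px';
canvas.style.height = '300px';
// Setting the canvas image size itself keeps the drawing sharp:
canvas.width = 500;
canvas.height = 300;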

Note also that you should persist the custom instance and avoid initializing an instance of confetti with the same canvas element more than once.

The following global options are available:

* resize Boolean (default: false): Whether to allow setting the canvas image size, as well as keep it correctly sized if the window changes size (e.g. resizing the window, rotating a mobile device, etc.). By default, the canvas size will not be modified.

* useWorker Boolean (default: false): Whether to use an asynchronous web worker to render the confetti animation, whenever possible. This is turned off by default, meaning that the animation will always execute on the main thread. If turned on and the browser supports it, the animation will execute off of the main thread so that it is not blocking any other work your page needs to do. Using this option will also modify the canvas, but more on that directly below; do read it. If it is not supported by the browser, this value will be ignored.

* disableForReducedMotion Boolean (default: false): Disables confetti entirely for users that prefer reduced motion. When set to true, use of this confetti instance will always respect a user's request for reduced motion and disable confetti for them.

Important: If you use useWorker: true, I own your canvas now. It's mine now and I can do whatever I want with it (don't worry... I'll just put confetti inside it, I promise). You must not try to use the canvas in any way (other than I guess removing it from the DOM), as it will throw an error. When using workers for rendering, control of the canvas must be transferred to the web worker, preventing any usage of that canvas on the main thread. If you must manipulate the canvas in any way, do not use this option.

var myCanvas = document.createElement('canvas');
document.body.appendChild(myCanvas);

var myConfetti = confetti.create(myCanvas, {
  resize: true,
  useWorker: true
});

myConfetti({
  particleCount: 100,
  spread: 160
  // any other options from the global
  // confetti function
});

Stops the animation and clears all confetti, as well as immediately resolves any outstanding promises. In the case of a separate confetti instance created with confetti.create, that instance will have its own reset method.

confetti();

setTimeout(() => {
  confetti.reset();
}, 100);

var myCanvas = document.createElement('canvas');
document.body.appendChild(myCanvas);

var myConfetti = confetti.create(myCanvas, { resize: true });

myConfetti();

setTimeout(() => {
  myConfetti.reset();
}, 100);

Launch some confetti the default way:

confetti();

Launch a bunch of confetti:

confetti({
  particleCount: 150
});

Launch some confetti really wide:

confetti({
  spread: 180
});

Get creative. Launch a small poof of confetti from a random part of the page:

confetti({

...

Read the original on github.com »

4 355 shares, 18 trendiness

TSMC unveils 1.6nm process technology with backside power delivery, rivals Intel's competing design

TSMC announced its leading-edge 1.6nm-class process technology today, a new A16 manufacturing process that will be the company's first Angstrom-class production node and promises to outperform its predecessor, N2P, by a significant margin. The technology's most important innovation will be its backside power delivery network (BSPDN).

Just like TSMC's 2nm-class nodes (N2, N2P, and N2X), the company's 1.6nm-class fabrication process will rely on gate-all-around (GAA) nanosheet transistors, but unlike the current and next-generation nodes, this one uses backside power delivery dubbed Super Power Rail. Transistor and BSPDN innovations enable tangible performance and efficiency improvements compared to TSMC's N2P: the new node promises an up to 10% higher clock rate at the same voltage and a 15%–20% lower power consumption at the same frequency and complexity. In addition, the new technology could enable 7%–10% higher transistor density, depending on the actual design.

The most important innovation of TSMC's A16 process, which was unveiled at the company's North American Technology Symposium 2024, is the introduction of the Super Power Rail (SPR), a sophisticated backside power delivery network (BSPDN). This technology is tailored specifically for AI and HPC processors that tend to have both complex signal wiring and dense power delivery networks.

Backside power delivery will be implemented in many upcoming process technologies, as it allows for an increase in transistor density and improved power delivery, which affects performance. Meanwhile, there are several ways to implement a BSPDN. TSMC's Super Power Rail connects the backside power delivery network to each transistor's source and drain using a special contact that also reduces resistance to get the maximum performance and power efficiency possible. From a production perspective, this is one of the most complex BSPDN implementations, more complex than Intel's PowerVia.

The choice of backside power rail implementation is perhaps why TSMC decided not to add this feature to its N2P and N2X process technologies, as it would make those production nodes considerably more expensive. Meanwhile, by offering a 1.6nm-class node with GAA nanosheet transistors and SPR as well as 2nm-class nodes with GAAFETs only, the company will now have two distinct nodes that will not compete with each other directly but offer distinctive advantages to different customers.

TSMC's production timeline indicates that volume production of A16 will commence in the second half of 2026, so actual A16-made products will likely debut in 2027. This timeline positions A16 to potentially compete with Intel's 14A node, which will be Intel's most advanced node at the time.

...

Read the original on www.tomshardware.com »

5 286 shares, 20 trendiness

ArhanChaudhary/NAND: NAND is a logic simulator suite made entirely from NAND gates

NAND is a Turing-equivalent 16-bit computer made entirely from a clock and NAND gates, emulated on the web. NAND features its own CPU, machine code language, assembly language, assembler, virtual machine language, virtual machine translator, programming language, compiler, IDE, and user interface. NAND is based on the Jack-VM-Hack platform specified in the Nand to Tetris course and its associated book.

A simple program that inputs some numbers and computes their average, showing off control flow, arithmetic operations, I/O, and dynamic memory allocation.

This program was supplied by the Nand to Tetris software suite.

The game of Pong, showing off the language's object-oriented model. Use the arrow keys to move the paddle left and right to bounce a ball. Every bounce, the paddle becomes smaller, and the game ends when the ball hits the bottom of the screen.

This program was supplied by the Nand to Tetris software suite.

The game of 2048, showing off recursion and complex application logic. Use the arrow keys to move the numbers around the 4x4 grid. The same numbers combine into their sum when moved into each other. Once the 2048 tile is reached, you win the game, though you can keep playing on until you lose. You lose the game when the board is full and you can't make any more moves.

A program that deliberately causes a stack overflow via infinite recursion to perform a virtual machine escape. It leverages the fact that there are no runtime checks to prevent a stack overflow. No other modern platform will let you do this :-)

Upon running, the program will constantly print the stack pointer to the screen. Once this displayed value exceeds 2048, the stack will have reached the end of its intended memory space and spill onto the heap memory space, causing the print statement to malfunction in explosive fashion:

Two noteworthy things are worth pointing out.

If you reload the page and run this program on an empty RAM (a RAM full of zeroes), you will notice that the program resets itself halfway through its execution despite not pressing the "Reset" button. Why this happens is simple: the jailbroken runtime executes an instruction that sets the program counter's value to 0, effectively telling the program to jump to the first instruction and start over.

If you run the GeneticAlgorithm example program and then run this immediately afterwards, the program in its rampage reads old RAM memory that was simply never overwritten.

A program that exploits the fact that the runtime doesn't prevent stack smashing to call a function that would otherwise be inaccessible. In order to understand how this works, let's examine this illustration of NAND's stack frame layout,

taken from the Nand to Tetris book.

If you're unfamiliar with stack layouts, here's the main idea behind the exploit. Whenever a function returns, it needs to know where (which machine code instruction memory address) it should go to proceed with execution flow. So, when the function is first called, this memory address, along with some other unimportant data, is temporarily stored on the stack in a memory region referred to as the stack frame as a reference for where to return. The illustration describes the exact position of this return address relative to the function call, a position that can be reverse engineered.

The program enables the user to overwrite a single memory address in the RAM with any value. Putting two and two together, if the user were to overwrite the return address of a stack frame with the address of another function, they essentially gain the ability to execute arbitrary code included in the program.

Indeed, if you enter 267 as the memory location and 1715 as the value to overwrite, two numbers reverse engineered by manually inspecting the stack memory space and the assembler, you'll see this idea in working action.

This isn't a vulnerability unique to NAND. It exists in C as well! How cool!

Believe it or not, out of the many, many different components of NAND, this single-handedly took the longest to develop!

This program is a creature simulation that utilizes simple machine learning. It follows the artificial intelligence coded series (parts one and two) from Code Bullet. Make sure to check out his channel, he makes some really cool stuff!

Every dot has its own "brain" of acceleration vectors, and they evolve to reach a goal through natural selection. Every generation, dots that "die" closer to the goal are more likely to be selected as the parents for the next generation. Reproduction inherently causes some of the brain to mutate, effectively simulating natural evolution.

Nevertheless, there is much to be desired. Due to performance, the only factor dots use to evolve is their closeness to the goal upon death, endowing the natural selection algorithm with low entropy. Due to memory usage, there are smaller than satisfactory limits on the number of dots and the sizes of their brains. Lastly, due to technical complexity, re-placing obstacles during the simulation does not guarantee that the dots will have large enough brains to reach the goal. Brain sizes are only determined at the beginning of the program.

I've utilized a myriad of optimization techniques to snake around the following hardware restrictions and make this possible:

* NAND has a limited ROM memory space, meaning the program won't compile if there's too much code. In fact, the final version of this program uses 99.2% of the instruction memory space.

* NAND has a limited RAM memory space, meaning the program has to be careful to optimize heap memory usage. In fact, the reason why the screen fills with static between generations is to use the screen memory space as temporary swap memory for the next generation: the RAM is already completely full!

* NAND has no floating point type (decimal numbers) and can only represent the integers between -32768 and 32767, making calculating fitness less precise and more challenging to implement. Integer overflows must also be accounted for.

To avoid beating around the bush, I've stuck to documenting these techniques and additional insights in this program's codebase for those interested.

Before we start, the most important detail to remember about writing programs in Jack is that there is no operator priority; this is probably why your program isn't working.

For example, you should change:

4 * 2 + 3 to (4 * 2) + 3

if (~x & y) to if ((~x) & y)

but you can keep if (y & ~x) the same, as there is no operator ambiguity.

Without parentheses, the evaluation value of an ambiguous expression is undefined.

NAND boasts its own complete tech stack. As a consequence, NAND can only be programmed in Jack, its weakly typed object-oriented programming language. In layman's terms, Jack is C with Java's syntax.

Let's take the approach of example-based learning and dive right in.

/**
 * This program prompts the user to enter a phrase
 * and an energy level. Program output:
 *
 * What's on your mind? Superman
 * What's your energy level? 3
 * Superman!
 * Superman!
 * Superman!
 */
class Main {
    function void main() {
        var String s;
        var int energy, i;
        let s = Keyboard.readLine("What's on your mind? ");
        let energy = Keyboard.readInt("What's your energy level? ");
        let i = 0;
        let s = s.appendChar(33); // Appends the character '!'
        while (i < energy) {
            do Output.printString(s);
            do Output.println();
            let i = i + 1;
        }
        return;
    }
}

taken from the Nand to Tetris lecture slides.

If you've already had some experience with programming, this should look very familiar; it is clear that Jack was heavily inspired by Java. Main.main is the entry point to the program. The code segment demonstrates basic usage of variables as well as the while loop for control flow.

Additionally, it uses Keyboard.readLine and Keyboard.readInt to read input from the user, and Output.printString and Output.println to print output to the screen, all of which are defined in detail in the Jack OS Reference. By default, the Jack OS is bundled with your program during compilation to enable interfacing with strings, memory, hardware, and more.

Every programming language has a fixed set of primitive data types. Jack supports three: int, char, and boolean. You can extend this basic repertoire with your own abstract data types as needed. Prior knowledge about object-oriented programming directly carries over to this section.

/** Represents a point in a 2D plane. */
class Point {
    // The coordinates of the current point instance:
    field int x, y;

    // The number of point objects constructed so far:
    static int pointCount;

    /** Constructs a point and initializes
        it with the given coordinates */
    constructor Point new(int ax, int ay) {
        let x = ax;
        let y = ay;
        let pointCount = pointCount + 1;
        return this;
    }

    /** Returns the x coordinate of the current point instance */
    method int getx() { return x; }

    /** Returns the y coordinate of the current point instance */
    method int gety() { return y; }

    /** Returns the number of Points constructed so far */
    function int getPointCount() {
        return pointCount;
    }

    /** Returns a point which is this
        point plus the other point */
    method Point plus(Point other) {
        return Point.new(x + other.getx(),
                         y + other.gety());
    }

    /** Returns the Euclidean distance between the
        current point instance and the other point */
    method int distance(Point other) {
        var int dx, dy;
        let dx = x - other.getx();
        let dy = y - other.gety();
        return Math.sqrt((dx * dx) + (dy * dy));
    }

    /** Prints the current point instance, as "(x, y)" */
    method void print() {
        var String tmp;
        let tmp = "(";
        do Output.printString(tmp);
        do tmp.dispose();
        do Output.printInt(x);
        let tmp = ", ";
        do Output.printString(tmp);
        do tmp.dispose();
        do Output.printInt(y);

...

Read the original on github.com »

6 255 shares, 11 trendiness

adam-maj/tiny-gpu: A minimal GPU design in Verilog to learn how GPUs work from the ground up

A minimal GPU implementation in Verilog optimized for learning about how GPUs work from the ground up.

If you want to learn how a CPU works all the way from architecture to control signals, there are many resources online to help you.

GPUs are not the same.

Because the GPU market is so competitive, low-level technical details for all modern architectures remain proprietary.

While there are lots of resources to learn about GPU programming, there's almost nothing available to learn about how GPUs work at a hardware level.

The best option is to go through open-source GPU implementations like Miaow and VeriGPU and try to figure out what's going on. This is challenging since these projects aim at being feature complete and functional, so they're quite complex.

This is why I built tiny-gpu!

With this motivation in mind, we can simplify GPUs by cutting out the majority of complexity involved with building a production-grade graphics card, and focus on the core elements that are critical to all of these modern hardware accelerators.

This project is primarily focused on exploring:

Architecture - What does the architecture of a GPU look like? What are the most important elements?

Parallelization - How is the SIMD programming model implemented in hardware?

Memory - How does a GPU work around the constraints of limited memory bandwidth?

After understanding the fundamentals laid out in this project, you can check out the advanced functionality section to understand some of the most important optimizations made in production-grade GPUs (that are more challenging to implement) which improve performance.

tiny-gpu is built to execute a single kernel at a time.

In order to launch a kernel, we need to do the following:

Load data memory with the necessary data

Specify the number of threads to launch in the device control register

Launch the kernel by setting the start signal to high.

The GPU itself consists of the following units:

The device control register usually stores metadata specifying how kernels should be executed on the GPU.

In this case, the device control register just stores the thread_count - the total number of threads to launch for the active kernel.

Once a kernel is launched, the dispatcher is the unit that actually manages the distribution of threads to different compute cores.

The dispatcher organizes threads into groups called blocks that can be executed in parallel on a single core, and sends these blocks off to be processed by available cores.

Once all blocks have been processed, the dispatcher reports back that the kernel execution is done.

The GPU is built to interface with an external global memory. Here, data memory and program memory are separated out for simplicity.

tiny-gpu data memory has the following specifications:

* 8 bit data (stores values of <256 for each row)

tiny-gpu program memory has the following specifications:

* 16 bit data (each instruction is 16 bits as specified by the ISA)

Global memory has fixed read/write bandwidth, but there may be far more incoming requests across all cores to access data from memory than the external memory is actually able to handle.

The memory controllers keep track of all the outgoing requests to memory from the compute cores, throttle requests based on actual external memory bandwidth, and relay responses from external memory back to the proper resources.

Each memory controller has a fixed number of channels based on the bandwidth of global memory.

The same data is often requested from global memory by multiple cores. Repeatedly accessing global memory is expensive, and since the data has already been fetched once, it would be more efficient to store it on the device in SRAM to be retrieved much quicker on later requests.

This is exactly what the cache is used for. Data retrieved from external memory is stored in cache and can be retrieved from there on later requests, freeing up memory bandwidth to be used for new data.

Each core has a number of compute resources, often built around a certain number of threads it can support. In order to maximize parallelization, these resources need to be managed optimally to maximize resource utilization.

In this simplified GPU, each core processes one block at a time, and for each thread in a block, the core has a dedicated ALU, LSU, PC, and register file. Managing the execution of thread instructions on these resources is one of the most challenging problems in GPUs.

Each core has a single scheduler that manages the execution of threads.

The tiny-gpu scheduler executes instructions for a single block to completion before picking up a new block, and it executes instructions for all threads in-sync and sequentially.

In more advanced schedulers, techniques like pipelining are used to stream the execution of subsequent instructions to maximize resource utilization before previous instructions are fully complete. Additionally, warp scheduling can be used to execute multiple batches of threads within a block in parallel.

The main constraint the scheduler has to work around is the latency associated with loading & storing data from global memory. While most instructions can be executed synchronously, these load-store operations are asynchronous, meaning the rest of the instruction execution has to be built around these long wait times.

Asynchronously fetches the instruction at the current program counter from program memory (most should actually be fetching from cache after a single block is executed).

Decodes the fetched instruction into control signals for thread execution.

Each thread has its own dedicated set of register files. The register files hold the data that each thread is performing computations on, which enables the single-instruction multiple-data (SIMD) pattern.

Importantly, each register file contains a few read-only registers holding data about the current block & thread being executed locally, enabling kernels to be executed with different data based on the local thread id.

Dedicated arithmetic-logic unit for each thread to perform computations. Handles the ADD, SUB, MUL, DIV arithmetic instructions.

Also handles the CMP comparison instruction, which outputs whether the result of the difference between two registers is negative, zero, or positive, and stores the result in the NZP register in the PC unit.

Dedicated load-store unit for each thread to access global data memory.

Handles the LDR & STR instructions, and handles async wait times for memory requests to be processed and relayed by the memory controller.

Dedicated program counter for each thread to determine the next instructions to execute on each thread.

By default, the PC increments by 1 after every instruction.

With the BRnzp instruction, the PC unit checks whether the NZP register (set by a previous CMP instruction) matches some case, and if it does, it will branch to a specific line of program memory. This is how loops and conditionals are implemented.

Since threads are processed in parallel, tiny-gpu assumes that all threads "converge" to the same program counter after each instruction, which is a naive assumption for the sake of simplicity.

In real GPUs, individual threads can branch to different PCs, causing branch divergence, where a group of threads initially being processed together has to split out into separate execution.

tiny-gpu implements a simple 11 instruction ISA built to enable simple kernels for proof-of-concept like matrix addition & matrix multiplication (implementation further down on this page).

For these purposes, it supports the following instructions:

* NOP - No operation; simply advances to the next instruction.

* BRnzp - Branch instruction to jump to another line of program memory if the NZP register matches the nzp condition in the instruction.

* CMP - Compare the value of two registers and store the result in the NZP register to use for a later BRnzp instruction.

* ADD, SUB, MUL, DIV - Basic arithmetic instructions to enable tensor math.

* LDR - Load data from global data memory.

* STR - Store data into global data memory.

* CONST - Load a constant value into a register.

* RET - Signal that the current thread has reached the end of execution.

Each register is specified by 4 bits, meaning that there are 16 total registers. The first 13 registers, R0 - R12, are free registers that support read/write. The last 3 registers are special read-only registers used to supply the %blockIdx, %blockDim, and %threadIdx values critical to SIMD.

Each core follows the following control flow, going through different stages to execute each instruction:

FETCH - Fetch the next instruction at the current program counter from program memory.

DECODE - Decode the fetched instruction into control signals.

REQUEST - Request data from global memory if necessary (if LDR or STR instruction).

WAIT - Wait for data from global memory if applicable.

EXECUTE - Execute any computations on the data.

UPDATE - Update register files and the program counter.

The control flow is laid out like this for the sake of simplicity and understandability.

In practice, several of these steps could be compressed to optimize processing times, and the GPU could also use pipelining to stream and coordinate the execution of many instructions on a core's resources without waiting for previous instructions to finish.

Each thread within each core follows the above execution path to perform computations on the data in its dedicated register file.

This resembles a standard CPU diagram, and is quite similar in functionality as well. The main difference is that the %blockIdx, %blockDim, and %threadIdx values lie in the read-only registers for each thread, enabling SIMD functionality.

I wrote a matrix addition and matrix multiplication kernel using my ISA as a proof of concept to demonstrate SIMD programming and execution with my GPU. The test files in this repository are capable of fully simulating the execution of these kernels on the GPU, producing data memory states and a complete execution trace.

This matrix addition kernel adds two 1 x 8 matrices by performing 8 element-wise additions in separate threads.

This demonstration makes use of the %blockIdx, %blockDim, and %threadIdx registers to show SIMD programming on this GPU. It also uses the LDR and STR instructions, which require async memory management.

.threads 8
.data 0 1 2 3 4 5 6 7          ; matrix A (1 x 8)
.data 0 1 2 3 4 5 6 7          ; matrix B (1 x 8)

MUL R0, %blockIdx, %blockDim
ADD R0, R0, %threadIdx         ; i = blockIdx * blockDim + threadIdx

CONST R1, #0                   ; baseA (matrix A base address)
CONST R2, #8                   ; baseB (matrix B base address)
CONST R3, #16                  ; baseC (matrix C base address)

ADD R4, R1, R0                 ; addr(A[i]) = baseA + i
LDR R4, R4                     ; load A[i] from global memory

ADD R5, R2, R0                 ; addr(B[i]) = baseB + i
LDR R5, R5                     ; load B[i] from global memory

ADD R6, R4, R5                 ; C[i] = A[i] + B[i]

ADD R7, R3, R0                 ; addr(C[i]) = baseC + i
STR R7, R6                     ; store C[i] in global memory

RET                            ; end of kernel

The matrix multiplication kernel multiplies two 2x2 matrices. It performs element-wise calculation of the dot product of the relevant row and column and uses the CMP and BRnzp instructions to demonstrate branching within the threads (notably, all branches converge, so this kernel works on the current tiny-gpu implementation).

.threads 4
.data 1 2 3 4                  ; matrix A (2 x 2)
.data 1 2 3 4                  ; matrix B (2 x 2)

MUL R0, %blockIdx, %blockDim
ADD R0, R0, %threadIdx         ; i = blockIdx * blockDim + threadIdx

CONST R1, #1                   ; increment
CONST R2, #2                   ; N (matrix inner dimension)
CONST R3, #0                   ; baseA (matrix A base address)
CONST R4, #4                   ; baseB (matrix B base address)
CONST R5, #8                   ; baseC (matrix C base address)

DIV R6, R0, R2                 ; row = i // N
MUL R7, R6, R2
SUB R7, R0, R7                 ; col = i % N

...

Read the original on github.com »

7 254 shares, 13 trendiness

Home

Tribler is a Bittorrent-compatible alternative to Youtube. It is designed to protect your privacy, build a web-of-trust, be attack-resilient, and reward content creators directly. We are building a micro-economy without banks, without advertisers, and without any government. Together with Harvard University, the Tribler team deployed one of the first fully distributed ledgers in August 2007; see the BBC News coverage and a New Scientist article. In coming years we will further expand our micro-economy based on bandwidth tokens. We aim to become the key place where audiences find their torrents, creative talents get discovered, and artists get financial rewards from their fans. Tribler is the place where 100 percent of the money goes to artists and the people that run the infrastructure.

Over 2 million people have used Tribler over the years. The Tribler project was started in 2005 at Delft University of Technology, and over 100+ developers have contributed code to it. We are continuously improving it and further expanding the scientific developers team.

Technical foundations of Tribler are the Bittorrent protocol, an overlay for P2P communication across NATs/firewalls, gradual building of trust in public keys with Bittorrent seeding, and our token economy with incentives for Tor-like relaying and hidden seeding. For 12 years we have been building a very robust self-organising peer-to-peer system. Today Tribler is robust: the only way to take Tribler down is to "take The Internet down" (but a single software bug could end everything).

This wiki page contains our main technical documentation. Highlights:

* Trustchain: our 10,000 transactions per second ledger

Open projects for new TUDelft master's thesis students: Tor-like streaming, self-sovereign identity and authentication on Android, relevance ranking of search results (+swarm popularity), perfect metadata through distributed crowdsourcing, self-reinforcing trust, and perfect network connectivity using NAT/Firewall traversal. Speculative projects with a long-term focus: a prediction market for climate change. A market designed against frontrunners and high-frequency trading abusers in general.

Social media today is obsessed with profit, filled with advertisements, overflowing with falsehoods, and infested with fake news. We're trying to fix these hard problems in a unique way: by building trust. Our audacious ambition is a clean-slate re-creation of The Internet itself with foundations of trust. Craigslist and eBay showed us in 1995 that trustworthy trade was possible online. Uber, Etsy, and AirBnB show that entire industries can be disrupted by a single platform with a natural monopoly.

For the past 18 years we have built and deployed platforms to create trust. Before Wikipedia and Youtube existed, we studied the mechanisms behind trust and user-generated content on a small scale. Several years before Wikipedia emerged, we deployed a music encyclopedia with unconstrained write access; it never became popular because we focused too much on software instead of community growth.

Today we keep a narrow focus and continuously expand Tribler with trustworthy decentralized technology. We launched sub-second keyword search for Bittorrent swarms without any server back in 2010 (see our old Google Tech Talk on this topic). One of our operational trust browsing prototypes:

* Our work from 2004, a 2-year in-depth measurement and analysis of Bittorrent (.pdf 25 pages), the largest measurement to date. Covers eight months of the BitTorrent/Suprnova.org file sharing ecosystem. In particular, we show measurement results of the popularity and the availability of BitTorrent, of its download performance, of the content lifetime, and of the structure of the community responsible for verifying uploaded content.

Tribler supports torrent search without websites, anonymous downloading, torrent streaming, channels of torrents, and sharing content for tokens. Overview of Tribler (.html 5 pages). All Tribler features are implemented in a completely distributed manner, not relying on any centralized component. Still, Tribler manages to remain fully backwards compatible with BitTorrent. The 2006 overview of Tribler (.pdf 6 pages) features taste groups, friends, friends-of-friends, and faster downloads by donating bandwidth to friends (protocol spec of friend boosting). Note that the 2006-2009 Tribler protocol specification (.pdf 47 pages) is now mostly outdated, as we switched to our new synchronization protocol called Dispersy (see below).

Trust in social media content is essential for a sustainable ecosystem. We introduced channels of Bittorrent swarms in 2009 with the Tribler 4.x release. Each user can vote on channels to increase their visibility and tell everybody the channel owner is not a spammer and not spreading fake items. The reputation of both the voters and the channel owner are important.

Tribler protects your privacy by not storing anything on any server. To protect your privacy even more, we have prototyped search algorithms based on homomorphic cryptography. We presented a new algorithm system for privacy-respecting, scalable Gnutella-like search in 2014. Our approach to scalability is a similarity function in the encrypted domain (homomorphic), enabling semantic clustering with privacy.

Back in 2006 we introduced long-lived identities to separate trustworthy peers from freeriders and spammers (PermID). To protect your privacy further, we also devised an alternative to onion routing which potentially could have stronger security guarantees (correlation attack). See the details in the thesis "Multi-core architecture for anonymous Internet streaming", which includes a performance analysis of running code.

We deployed one of the world's first fully distributed ledgers in August of 2007. For over a decade we meticulously measured, analysed, improved, and enhanced this live system. Today it defines the state-of-the-art in blockchain research, but in the early days it barely functioned at all. A total of five Ph.D. students of Delft contributed key parts and upgrades.

At launch we called our initiative "bandwidth-as-a-currency". Today we have specific terminology for what we did: a token economy. We are making Internet bandwidth a tradable commodity without any middleman or need for any centralised governance. Our efforts span over a decade, making us the veterans in the field. Our ledger provides an incentive for Bittorrent seeding and Tor-like relaying. For numerous years the tit-for-tat algorithm provided the only incentive for contributions in Bittorrent. No incentive for seeding existed, except when central servers kept track of your uploads and downloads. We measured closed invite-only communities for numerous years and mathematically showed their rich-get-richer properties. For details see "Fast download but eternal seeding: the reward and punishment of sharing ratio enforcement" and our measurement paper on understanding bandwidth economics and ratio enforcement (.pdf 5 pages). We measured 508,269 peers in 444 swarms within five BitTorrent communities, ranging from public to highly elite. We observe download performance, connectability, seeder/leecher ratios, seeding duration, and statistics regarding the resource supply.

We got inspiration for a novel blockchain design based on operating our own ledger and studying token economies. Our current work is called Trustchain, a unique design from 2012 where all participants have their own personal blockchain and create their own genesis block. Our older work used a graph-based approach and graph-based reputation algorithms. Trustchain records transactions in a tamper-proof and scalable manner. It does not require mining and does not try to solve the double spending problem. Our primitive 2007 ledger pre-dates Bitcoin; additionally, our 2012 DAG-based approach pre-dates IOTA and the Texas DAG patents.

We are fans of Bitcoin, but also showed in an early analysis the flaws in this concept. Our approach to digital signatures is the essential difference which sets us apart from others. Mono-signatures form the foundation of all other projects we have seen in the past decade. Meaning, in systems such as Bitcoin a transaction is already valid with a single signature. Our Trustchain design does not permit transactions with merely a single signature. Trustchain only supports multi-party agreement recording; others are not valid. We believe that we created a more powerful system by removing single-signature transactions. Only time can tell the usefulness of this academically pure and minimal design.

The foundation of our approach is making repeated successful interactions between actors explicit and durable. Cryptographically signed records of successful encounters serve as proof-of-work certificates. The validity and value of these certificates is determined by a trust and reputation system. Relaying for anonymity and seeding in Tribler constitutes work which is rewarded with a signed certificate. Helping others and uploading in Bittorrent swarms is rewarded with bandwidth tokens (e.g. signed certificates). Mining in our system becomes downloading parts of a swarm and uploading them to multiple interested parties. In 2013 we got the credit mining part of our system operational in early beta. The screenshot below from November 2013 shows the boosting of various swarms. Note the investment yields of "struck gold" and "poor" in the right column.

For our narrow focus of a Bittorrent client we are exploring the fundamentals of identity, trust, and trade. With over 1 billion users of Youtube and Bittorrent, we know there is a mass audience ready for something better.

Our approach has very boring foundations when compared to newer and more sexy work, like IPFS, FileCoin, or Storj. We first measured Bittorrent in 2002; it is a flourishing, mature ecosystem and ready for an upgrade. Bootstrapping an ecosystem is hard: we designed and deployed a superior alternative to Bittorrent, and while it became an official IETF Internet Standard, it completely flopped. This formed our preference for simplicity and elegance, and our allergy to bloatware, clean-slate work, and over-engineering. Numerous other projects try to create a generic approach, using an ICO for funding and promising the early adopters a dazzling return-on-investment. Tribler is different. (Rant warning.) We are non-profit academics. We do not want to replace the old elite with a new crypto-currency elite. What is changed if we replace backroom deals, lobbyists, middlemen, and legal monopolies with the tools of the new elite: algorithms, early investor rewards, proof-of-dominating-stake, and smart contracts? Replacing the analog world and breeding digital-native inequality does not make the world a better place. We are creating a micro-economy based on fairness, trust, equality, and self-governance. By design we banish rent-seeking. Critical infrastructure rarely makes profit. We are trying to build critical infrastructure.

As of December 2014, Tribler has a built-in version of a Tor-like anonymity system. This is completely disconnected from "the" Tor network. It is still ongoing work. It gives you probably superior protection compared to a VPN, but no protection against resourceful spying agencies.

We have implemented the main parts of the Tor wire protocol within Tribler. Instead of the TCP protocol that "the" Tor network uses, we use UDP. This enables us to do NAT puncturing and traversal. We have created our own network using this Tor variant; our code is not compatible with normal Tor. Work started as a small trial in December 2013 with anonymous Bittorrent downloading. An essential part of our work is that everybody who downloads anonymously also becomes a relay. This brings the Bittorrent tit-for-tat idea to darknets. With this ongoing work we aim to offer, in 2018 with Tribler V7.0, proxied downloading for any Bittorrent swarm.

Lengthy documentation in the form of two master's theses is available. The first is a general documentation of the tunnel and relay mechanism, "Anonymous HD video streaming" (.pdf 68 pages). The second is focused on the encryption part, called "Anonymous Internet: Anonymizing peer-to-peer traffic using applied cryptography" (.pdf 85 pages). In addition, there are the specifications for the protocols for anonymous downloading and hidden seeding on this wiki.

The cur­rent foun­da­tion of Tribler is the Dispersy over­lay. Dispersy func­tion­al­ity in­cludes: mak­ing con­nec­tions, send­ing mes­sages, punc­tur­ing NAT boxes, and dis­trib­uted data­base syn­chro­niza­tion. Every 5 sec­onds Dispersy sends out a mes­sage to es­tab­lish a new con­nec­tion or re-con­nect to a known peer. Note that we are tran­si­tion­ing to a new over­lay for the du­ra­tions of 2018.

Overlay communication, peer discovery and content discovery (keyword search) are essential building blocks of a peer-to-peer system. Tribler preserves the content and peers it discovered in the past. Every Tribler client runs a full SQL database engine. Several times per second each Tribler peer sends and receives updates for this database. Our protocol for distributed database synchronization is called Dispersy. A simple messaging client written in just a few lines of code once served as a tutorial example; that tutorial is now outdated and broken.

Dispersy is a fully decentralized system for synchronization (.pdf), capable of running in challenged network environments. Key features of Dispersy are stateless synchronization using Bloom filters, decentralized NAT traversal, and data bundle selection algorithms that allow the system to scale beyond 100,000 bundles in the presence of high churn and high-load scenarios.
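As an illustration of stateless synchronization, here is a toy Bloom filter exchange: a peer advertises a compact filter over the bundles it already holds, and a neighbour replies with only the bundles missing from that filter. The parameters and message shapes are assumptions for the sketch, not Dispersy's actual ones.

```python
# Toy Bloom-filter sync: no per-peer state is kept, only the filter travels.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=3):
        self.m, self.k, self.bits = m_bits, k_hashes, 0

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))

# Peer A advertises what it has; peer B answers with the difference.
a_filter = BloomFilter()
for bundle in [b"msg-1", b"msg-2"]:
    a_filter.add(bundle)

b_bundles = [b"msg-1", b"msg-2", b"msg-3"]
to_send = [b for b in b_bundles if b not in a_filter]
print(to_send)  # [b'msg-3'], modulo the small false-positive chance
```

The exchange is "stateless" in the sense that neither side needs to remember what it previously told the other; each round carries everything needed to compute the difference.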

Dispersy uses a simple database schema: the sync table contains, in its packet field, the data bundles to synchronize across peers.
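Here is a guess at what such a schema might look like, via Python's sqlite3; apart from the sync table and its packet field, which the description above names, the columns are assumptions for illustration:

```python
# Hypothetical shape of the sync table; only `sync` and `packet`
# come from the text above, the other columns are assumed.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE sync (
        id          INTEGER PRIMARY KEY,
        community   INTEGER NOT NULL,  -- which overlay the bundle belongs to
        global_time INTEGER NOT NULL,  -- logical ordering of bundles
        packet      BLOB NOT NULL      -- the signed, serialized message
    )
""")
db.execute("INSERT INTO sync (community, global_time, packet) VALUES (?, ?, ?)",
           (1, 42, b"serialized-bundle"))
```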

Android porting teams are working on the downloading and Tor-like protocol part of Tribler, and on the overlay, channels and search portions. As of June 2014 there is initial running code. The focus is on stability and creating a mature build environment using Jenkins. See below two actual screenshots of current running code. Download the alpha .APK here: https://jenkins.tribler.org/job/Build-Tribler_Android-Python/lastBuild/

The following work is ongoing. We have an operational Android app that can spread itself via NFC. The app can spread virally via friends, even if it is blocked from a central app store.

Original student assignment: the aim is to create an open source Android smartphone app to help bypass restrictions by non-democratic governments. The Arab Spring showed the importance of video recordings of mass protests. However, possession of a video recording on your phone of human rights violations and mass uprisings brings grave danger. The idea is to make this app "checkpoint-proof", meaning that a somewhat knowledgeable person will not detect the presence of the app and will not discover any video content. The app itself should be hidden; you can make a "stealth" app by somehow removing the app icon from your app list (sadly it still shows up in the uninstall app list). The app is activated simply by "dialing" a secret telephone number or another method you deem secure. A starting point for your work can be found here: http://stackoverflow.com/questions/5921071/how-to-create-a-stealth-like-android-app. Your stealth app needs to be able to spread virally and to bypass any government restrictions on the official app store. Include the feature for NFC and direct-wifi transfer of the .apk with an easy on-screen manual and steps, so users can pass your app along to their friends.

Peer-to-Peer (P2P) net­works work on the pre­sump­tion that all nodes in the net­work are con­nectable. However, NAT boxes and fire­walls pre­vent con­nec­tions to many nodes on the Internet. We cre­ated a method to punc­ture NATs which does not re­quire a server. Our method is there­fore a sim­ple no-server-needed al­ter­na­tive to the com­plex STUN, TURN and ICE ap­proaches.

We conducted one of the largest measurements of NAT/firewall behavior and puncture efficiency in the wild. Our method is a UDP hole-punching technique. We measured the success rate using volunteers running Tribler; our two trials involved 907 and 1,531 users. Our results show that UDP hole punching is an effective method to increase the connectability of peers on the Internet: approximately 64% of all peers are behind a NAT box or firewall, and more than 80% of hole-punching attempts between these peers succeed.

A brief description of our UDP puncture method is available as an IETF draft.
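For intuition, here is a bare-bones sketch of the punching step itself, assuming both peers have already learned each other's public endpoint through the overlay (the serverless part); this is illustrative rather than the exact method in the draft:

```python
# Minimal UDP hole punch: both peers fire packets at each other's public
# endpoint so that each NAT creates an outbound mapping for the flow.
import socket

def punch(local_port, peer_addr, attempts=10):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", local_port))
    sock.settimeout(1.0)
    for _ in range(attempts):
        sock.sendto(b"punch", peer_addr)   # outgoing packet opens our NAT
        try:
            data, addr = sock.recvfrom(1500)
            if addr == peer_addr:          # the peer's packet got through
                return sock                # hole is open; reuse this socket
        except socket.timeout:
            continue
    return None

# Both peers run this at roughly the same time with each other's endpoint:
# punch(6421, ("203.0.113.7", 6421))
```

Each side's outgoing packets create the NAT mapping that lets the other side's packets in, which is why the attempts need to happen at roughly the same time.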

As Tribler scientists and engineers we are actively trying to make a better world. Our micro-economy is our living lab for experimenting with alternative models for capitalism. We aim to re-invent money by creating the first sustainable economy without any moral hazards from bankers, politicians, and megacorporations. Citizens, and only the citizens, are in control through self-governance.

Our grand vision is laid out in a 1+ hour lecture given at Stanford University, available via their YouTube channel. We want to do more than be a YouTube alternative. Our grand vision is liberating both media and money. See the talk abstract and slides (.pdf, 78 pages). Keywords: transform money, "Bank-of-Bits", isolation from global financial meltdowns. Use cooperation and stability, not volatility and greed. Alter the essence of capitalism (the rich get richer) by abolishing compound interest and facilitating safe zero-cost money transfers and lending. We aim for a direct assault on the essence of capitalism, aiming even further than the Bitcoin accomplishment (bypassing the central bank).

* 2014: Test network goes live for anonymous Tor-like downloading (not connected in any way with "the" Tor project)

* 2010: Wikipedia.org uses our tech­nol­ogy for live trial

* 2007: Our dis­trib­uted ledger launched in the wild

* 2004: On Slashdot for the first time, with the largest Bittorrent study

...

Read the original on github.com »

8 241 shares, 18 trendiness

an Increase API design principle — Increase

Resources are the nouns of your API. Deciding how to name and model these nouns is arguably the hardest and most important part of designing an API. The resources you expose organize your users' mental model of how your product works and what it can do. At Increase, our team has used a principle called "no abstractions" to help. What do we mean by this?

Much of our team came from Stripe, and when designing our API we considered the same values that have been successful there. Stripe excels at designing abstractions in their API — extracting the essential features of a complex domain into something their users can easily understand and work with. In their case this most notably means modeling payments across many different networks into an API resource called a PaymentIntent. For example, Visa and Mastercard have subtly different reason codes for why a chargeback can be initiated, but Stripe combines those codes into a single enum so that their users don't need to consider the two networks separately.

This makes sense because many of Stripe's users are early startups working on products totally unrelated to payments. They don't necessarily know, or need to know, about the nuances of credit cards. They want to integrate Stripe quickly, get back to building their product, and stop thinking about payments.

Increase's users are not like this. They often have deep existing knowledge of payment networks, think about financial technology all the time, and come to us because of our direct network connections and the depth of integration that lets them build. They want to know exactly when the FedACH window closes and when transfers will land. They understand that setting a different Standard Entry Class code on an ACH transfer can result in different return timing. Trying to hide the underlying complexity of these networks (by, for example, modeling ACH transfers and wire transfers with a single API resource) would irritate them, not simplify their lives.

Early conversations with these users helped us articulate what we dubbed the "no abstractions" principle as we built the first version of our API. Some examples of the way this mindset has subsequently affected its design:

* Instead of inventing our own names for API resources and their attributes, we tend to use the vocabulary of the underlying networks. For example, the parameters we expose when making an ACH transfer via our API are named after fields in the Nacha specification.

* Similar to how we use network nomenclature, we try to model our resources after real-world events like an action taken or a message sent. This results in more of our API resources being immutable.

* An approach that's worked well for our API is to take a cluster of these immutable resources (all of the network messages that can be sent as part of the ACH transfer lifecycle, for example) and group them together under a state machine "lifecycle object". For example, the ach_transfer object in our API has a field called status that changes over time, and several immutable sub-objects that are created as the transfer moves through its lifecycle. A newly-minted ach_transfer object looks like:

```json
{
  "id": "ach_transfer_abc123",
  "created_at": "2024-04-24T00:00:00+00:00",
  "amount": 1000,
  "status": "pending_approval",
  "approval": null,
  "submission": null,
  "acknowledgement": null
  // other fields omitted here for clarity
}
```

After that same transfer has moved through our pipeline and we've submitted it to FedACH, it looks like:

```json
{
  "id": "ach_transfer_abc123",
  "created_at": "2024-04-24T00:00:00+00:00",
  "amount": 1000,
  "status": "submitted",
  // immutable, populated when the transfer is approved
  "approval": {
    "approved_by": "administrator@yourcompany.com",
    "approved_at": "2024-04-24T01:00:00+00:00"
  },
  // immutable, populated when the transfer is submitted
  "submission": {
    "trace_number": "058349238292834",
    "submitted_at": "2024-04-24T02:00:00+00:00"
  },
  // immutable, populated when the transfer is acknowledged
  "acknowledgement": {
    "acknowledged_at": "2024-04-24T03:00:00+00:00"
  }
  // other fields omitted for clarity
}
```

* If, for a given API resource, the set of actions a user can take on different instances of the resource varies a lot, we tend to split it into multiple resources. For example, the set of actions you can take on an originated ACH transfer is different (the complete opposite, really) than the actions you can take on a received ACH transfer, so we separate these into ach_transfer and inbound_ach_transfer resources.

This ap­proach can make our API more ver­bose and in­tim­i­dat­ing at first glance — there are a lot of re­sources on the left-hand side of our doc­u­men­ta­tion page! We think it makes things more pre­dictable over the long-term, though.

Importantly, our engineering team has committed to this approach. When you design a complex API over several years, you make small incremental decisions all the time. Committing to foundational principles upfront has reduced the cognitive load for these decisions. For example, when sending a wire transfer to the Federal Reserve, there's a required field called Input Message Accountability Data which serves as a globally-unique ID for that transfer. When building support for wire transfers, an engineer in an abstraction-heavy API might have to deliberate over how to name this field in a "user-friendly" way: trace_number? reference_number? id? At Increase that hypothetical engineer names the field input_message_accountability_data and moves on. When an Increase user encounters this field, while it might not be the most recognizable name at first, it helps them understand immediately how it maps to the underlying system.

"No abstractions" isn't right for every API, but considering the level of abstraction that's appropriate for the developers integrating against it is a valuable exercise. This will depend on their level of experience working with your product domain and the amount of energy they'll be committing to the integration, among other things. If you're building an abstraction-heavy API, be prepared to think hard before adding new features. If you're building an abstraction-light API, commit to it and resist the temptation to add abstractions when it comes along.

...

Read the original on increase.com »

9 209 shares, 11 trendiness

Corporate Open Source is Dead

That's four months after HashiCorp rugpulled their entire development community and ditched open source for the "Business Source License."

As some­one on Hacker News pointed out so elo­quently:

"IBM is like a juicer that takes all the delicious flavor out of a fruit.

HashiCorp already did a great job pre-draining all their flavor."

Some peo­ple won­der if HashiCorp’s de­ci­sion to drop open source was be­cause they wanted to juice the books for a higher price. I mean, six bil­lion dol­lars? And they’re not even a point­less AI com­pany!

This blog post is a tran­script of the video I posted to­day, Corporate Open Source is Dead. You can watch it on YouTube.

Meanwhile, Redis dropped the open BSD license and invented their own "Source Available" license.

And last year, I cov­ered how Red Hat found a way to just barely com­ply with the open source GPL li­cense for their Enterprise Linux dis­tro.

Other companies like MongoDB, Cockroach Labs, Confluent, Elasticsearch, and Sentry also went "Source Available". It started with some of the smaller players, but as rot sets in at even the biggest "open source" companies, open source devs are choosing the nuclear option.

Terraform, HashiCorp’s bread and but­ter, was forked into OpenTofu, and adopted by the Linux Foundation. Companies who built their busi­nesses on top of Terraform quickly switched over. Even juicier, OpenBao—a fork of HashiCorp’s other big pro­ject Vault—is backed by IBM! What’s go­ing to hap­pen with that fork now?

At least forks seem pretty straight­for­ward in Hashi-land. In the wake of Redis’ wan­ton de­struc­tion, it seems like there’s a new fork every week!

And some developers are even exploring ditching the Redis code entirely, like redka, an API-compatible wrapper on top of SQLite!

After Red Hat closed its door most of the way (at least they didn't try pulling a switcheroo on the license itself!), Oracle, SUSE, and CIQ scraped together the OpenELA alliance to maintain forks of Enterprise Linux. And CentOS users, who'll be left in the lurch as June marks the end of CentOS 7 support, have to decide whether to use AlmaLinux or one of the OpenELA projects now.

All these moves shat­tered the play­book star­tups and mega­corps used—and now we’re see­ing, abused—to build up bil­lions in rev­enue over the past decade.

It was all in the name of "open source".

As free money dries up and prof­its slow, com­pa­nies slash head­count al­most as fast as com­mu­nity trust.

2024 is the year Corporate Open Source—or at least any re­main­ing il­lu­sions about it—fi­nally died.

It’s one thing to build a prod­uct with a pro­pri­etary code­base, and charge for li­censes. You can still build com­mu­ni­ties around that model, and it’s worked for decades.

But it’s to­tally dif­fer­ent when you build your prod­uct un­der an open source li­cense, fos­ter a com­mu­nity of users who then build their own busi­nesses on top of that soft­ware, then yoink the li­cense when your rev­enue is af­fected.

Bryan Cantrill’s been sound­ing the alarm for years—yes, that Bryan Cantrill, the one who posted this gem:

Bryan's presentation from 12 years ago is worth a watch, and the bottom line is summed up by Drew DeVault:

[Contributor License Agreements are] a strat­egy em­ployed by com­mer­cial com­pa­nies with one pur­pose only: to place a rug un­der the pro­ject, so that they can pull at the first sign of a bad quar­ter. This strat­egy ex­ists to sub­vert the open source so­cial con­tract.

By work­ing on a pro­ject with a CLA, where you sign away your code, you’re giv­ing carte blanche for the com­pany to take away your free­dom to use their soft­ware.

From a com­pa­ny’s per­spec­tive, if they want CLAs or if they want to use an anti-open-source li­cense, they do not care about your free­doms. They’re pro­tect­ing rev­enue streams. They’ll of­ten talk about free­load­ers, whether it’s Amazon build­ing a com­pet­ing hosted so­lu­tion, or some startup that found a way to mon­e­tize sup­port.

But in the end, even if you have GPL code and you charge people to get it, it's not truly free as in freedom if the company restricts how you can use, modify, and share the code.

But there's a distinction here, and I know a few people watching this are already yelling at me. There's "free" software, and there's "open source."

People in the free software community correctly identified the danger of calling free software "open source."

I don’t think we have to be so dog­matic about it, but there is a fun­da­men­tal philo­soph­i­cal dif­fer­ence be­tween the free soft­ware com­mu­nity, with or­ga­ni­za­tions like the Free Software Foundation and Software Freedom Conservancy be­hind it, and the more busi­ness-ori­ented open source’ cul­ture.

Open source cul­ture re­lies on trust. Trust that com­pa­nies you and I helped build (even with­out be­ing on the pay­roll) would­n’t rug­pull.

But time and time again, that trust is shat­tered.

Is this slow death of cor­po­rate open source bad? Well, it’s cer­tainly been an­noy­ing, es­pe­cially for devs like me who felt con­nected to these com­mu­ni­ties in the past. But it’s not all bad.

In fact, this could be a huge op­por­tu­nity; what hap­pened to the spunky star­tups like Ansible, HashiCorp, Elasticsearch, or Redis? They were light­ing their in­dus­tries on fire with great new soft­ware.

What hap­pened to build­ing up com­mu­ni­ties of de­vel­op­ers, cross­ing cul­tural and eco­nomic bar­ri­ers to make soft­ware that changed the world?

There are still pro­jects do­ing that, but so many suc­cumb to en­ter­prise money, where eye-wa­ter­ing amounts of rev­enue puts profit over phi­los­o­phy.

But as money dries up, as more de­vel­op­ers get laid off af­ter the in­sane hir­ing trends of the past five years, maybe small dev teams can move the nee­dle.

The AI bub­ble has­n’t popped yet, so some great peo­ple are get­ting sucked into that vor­tex.

But some­one else could be on the cusp of the next great open source pro­ject. Just… don’t add a CLA, okay?

And it’s not just devs; big com­pa­nies can join in. Historically bad play­ers like Microsoft and maybe even Oracle—man, it pains me to say that. They’ve even made strides in the past decade!

IBM could even mend some wounds, like they could re­unite OpenTofu and Terraform. There’s prece­dent, like when IO.js merged back into Node.js af­ter a fork in 2015.

People asked what Red Hat could do to get me interested in Enterprise Linux again. It's simple: stop treating people who don't bring revenue to the table like garbage. Freeloaders are part of open source, whether they're running a homelab or a competing business.

Companies who want to be­friend open source devs need to show they care about more than just money. Unfortunately, the trend right now is to rug­pull to juice the quar­ter­lies, be­cause money line al­ways goes up!

But you know what? I’d just pre­fer hon­esty. If rev­enue is so de­pen­dent on sell­ing soft­ware, just… make the soft­ware pro­pri­etary. Don’t be so coy!

But to any­one who’s not a multi-bil­lion dol­lar cor­po­ra­tion, don’t be a vic­tim of the next rug­pull. The warn­ing signs are clear: Don’t sign a CLA. Stay away from pro­jects that re­quire them.

Stick to open source li­censes that re­spect your free­dom, not li­censes writ­ten to juice rev­enue and prep a com­pany for a bil­lion-dol­lar-buy­out.

Maybe it’s time for a new open source re­bel­lion. Maybe this time, money won’t change com­pany cul­ture as new pro­jects arise from the ash heap. Maybe not, but at least we can try.

...

Read the original on www.jeffgeerling.com »

10 193 shares, 11 trendiness

rust-magic-patterns/rust-stream-visualized/Readme.md at master · alexpusch/rust-magic-patterns


...

Read the original on github.com »
