10 interesting stories served every morning and every evening.

1 770 shares, 150 trendiness

Just Be Rich 🤷‍♂️

No one wants to be the bad guy.

When narratives begin to shift and the once good guys are labelled as bad, it's not surprising they fight back. They'll point to criticisms as exaggerations, and to their faults as misunderstandings.

Today's freshly ordained bad guys are the investors and CEOs of Silicon Valley.

Once championed as the flagbearers of innovation and democratization, they're now viewed as new versions of the monopolies of old, and they're fighting back.

The title of Paul Graham's essay, How People Get Rich Now, didn't prepare me for the real goal of his words. It's less a tutorial or analysis and more a thinly veiled attempt to ease concerns about wealth inequality.

What he fails to mention is that concerns about wealth inequality aren't about how wealth was generated but about the growing wealth gap that has accelerated in recent decades. Tech has made startups both cheaper and easier, but only for a small percentage of people. And when a select group of people has an advantage that others don't, it compounds over time.

Paul paints a rosy picture but doesn't mention that incomes for lower- and middle-class families have fallen since the 80s. This golden age of entrepreneurship hasn't benefited the vast majority of people, and the increase in the Gini coefficient isn't simply because more companies are being started. The rich are getting richer and the poor are getting poorer.

And there we have it. The slight injection of his true ideology, relegated to the notes section and vague enough that some might ignore it. But keep in mind this is the same guy who argued against a wealth tax. His seemingly impartial and logical writing attempts to hide his true intentions.

Is this really about how people get rich, or about why we should all be happy that people like PG are getting richer while tons of people are struggling to meet their basic needs? Wealth inequality is just a radical left fairy tale to villainize the hard-working 1%. We could all be rich too; it's so much easier now. Just pull yourself up by your bootstraps.

There's no question that it's easier now than ever to start a new business and reach your market. The internet has had a democratizing effect in this regard. But it's also obvious to anyone outside the SV bubble that it's still only accessible to a small minority of people. Most people don't have the safety net or mental bandwidth to even consider entrepreneurship. It is not a panacea for the masses.

But to use that fact to push the false claim that wealth inequality is solely due to more startups, and not a real problem, says a lot. This essay is less about how people get rich and more about why it's okay that people like PG are getting rich. They're better than the richest people of 1960. And we can join them. We just need to stop complaining and just be rich instead.


Read the original on keenen.xyz »

2 705 shares, 34 trendiness

How People Get Rich Now

April 2021

Every year since 1982, Forbes magazine has published a list of the richest Americans. If we compare the 100 richest people in 1982 to the 100 richest in 2020, we notice some big differences.

In 1982 the most common source of wealth was inheritance. Of the 100 richest people, 60 inherited from an ancestor. There were 10 du Pont heirs alone. By 2020 the number of heirs had been cut in half, accounting for only 27 of the biggest 100 fortunes.

Why would the percentage of heirs decrease? Not because inheritance taxes increased. In fact, they decreased significantly during this period. The reason the percentage of heirs has decreased is not that fewer people are inheriting great fortunes, but that more people are making them.

How are people making these new fortunes? Roughly 3/4 by starting companies and 1/4 by investing. Of the 73 new fortunes in 2020, 56 derive from founders' or early employees' equity (52 founders, 2 early employees, and 2 wives of founders), and 17 from managing investment funds.

There were no fund managers among the 100 richest Americans in 1982. Hedge funds and private equity firms existed in 1982, but none of their founders were rich enough yet to make it into the top 100. Two things changed: fund managers discovered new ways to generate high returns, and more investors were willing to trust them with their money.

But the main source of new fortunes now is starting companies, and when you look at the data, you see big changes there too. People get richer from starting companies now than they did in 1982, because the companies do different things.

In 1982, there were two dominant sources of new wealth: oil and real estate. Of the 40 new fortunes in 1982, at least 24 were due primarily to oil or real estate. Now only a small number are: of the 73 new fortunes in 2020, 4 were due to real estate and only 2 to oil.

By 2020 the biggest source of new wealth was what are sometimes called "tech" companies. Of the 73 new fortunes, about 30 derive from such companies. These are particularly common among the richest of the rich: 8 of the top 10 fortunes in 2020 were new fortunes of this type.

Arguably it's slightly misleading to treat tech as a category. Isn't Amazon really a retailer, and Tesla a car maker? Yes and no. Maybe in 50 years, when what we call tech is taken for granted, it won't seem right to put these two businesses in the same category. But at the moment at least, there is definitely something they share in common that distinguishes them. What retailer starts AWS? What car maker is run by someone who also has a rocket company?

The tech companies behind the top 100 fortunes also form a well-differentiated group in the sense that they're all companies that venture capitalists would readily invest in, and the others mostly not. And there's a reason why: these are mostly companies that win by having better technology, rather than just a CEO who's really driven and good at making deals.

To that extent, the rise of the tech companies represents a qualitative change. The oil and real estate magnates of the 1982 Forbes 400 didn't win by making better technology. They won by being really driven and good at making deals. And indeed, that way of getting rich is so old that it predates the Industrial Revolution. The courtiers who got rich in the (nominal) service of European royal houses in the 16th and 17th centuries were also, as a rule, really driven and good at making deals.

People who don't look any deeper than the Gini coefficient look back on the world of 1982 as the good old days, because those who got rich then didn't get as rich. But if you dig into how they got rich, the old days don't look so good. In 1982, 84% of the richest 100 people got rich by inheritance, extracting natural resources, or doing real estate deals. Is that really better than a world in which the richest people get rich by starting tech companies?

Why are people starting so many more new companies than they used to, and why are they getting so rich from it? The answer to the first question, curiously enough, is that it's misphrased. We shouldn't be asking why people are starting companies, but why they're starting companies again.

In 1892, the New York Herald Tribune compiled a list of all the millionaires in America. They found 4047 of them. How many had inherited their wealth then? Only about 20%, less than the proportion of heirs today. And when you investigate the sources of the new fortunes, 1892 looks even more like today. Hugh Rockoff found that "many of the richest … gained their initial edge from the new technology of mass production."

So it's not 2020 that's the anomaly here, but 1982. The real question is why so few people had gotten rich from starting companies in 1982. And the answer is that even as the Herald Tribune's list was being compiled, a wave of consolidation was sweeping through the American economy. In the late 19th and early 20th centuries, financiers like J. P. Morgan combined thousands of smaller companies into a few hundred giant ones with commanding economies of scale. By the end of World War II, as Michael Lind writes, "the major sectors of the economy were either organized as government-backed cartels or dominated by a few oligopolistic corporations."

In 1960, most of the people who start startups today would have gone to work for one of them. You could get rich from starting your own company in 1890 and in 2020, but in 1960 it was not really a viable option. You couldn't break through the oligopolies to get at the markets. So the prestigious route in 1960 was not to start your own company, but to work your way up the corporate ladder at an existing one.

Making everyone a corporate employee decreased economic inequality (and every other kind of variation), but if your model of normal


Read the original on paulgraham.com »

3 635 shares, 35 trendiness

Introducing OpenSearch

Today, we are introducing the OpenSearch project, a community-driven, open source fork of Elasticsearch and Kibana. We are making a long-term investment in OpenSearch to ensure users continue to have a secure, high-quality, fully open source search and analytics suite with a rich roadmap of new and innovative functionality. This project includes OpenSearch (derived from Elasticsearch 7.10.2) and OpenSearch Dashboards (derived from Kibana 7.10.2). Additionally, the OpenSearch project is the new home for our previous distribution of Elasticsearch (Open Distro for Elasticsearch), which includes features such as enterprise security, alerting, machine learning, SQL, index state management, and more. All of the software in the OpenSearch project is released under the Apache License, Version 2.0 (ALv2). We invite you to check out the code for OpenSearch and OpenSearch Dashboards on GitHub, and join us and the growing community around this effort.

We welcome individuals and organizations who are users of Elasticsearch, as well as those who are building products and services based on Elasticsearch. Our goal with the OpenSearch project is to make it easy for as many people and organizations as possible to use OpenSearch in their business, their products, and their projects. Whether you are an independent developer, an enterprise IT department, a software vendor, or a managed service provider, the ALv2 license grants you well-understood usage rights for OpenSearch. You can use, modify, extend, embed, monetize, resell, and offer OpenSearch as part of your products and services. We have also published permissive usage guidelines for the OpenSearch trademark, so you can use the name to promote your offerings. Broad adoption benefits all members of the community.

We plan to rename our existing Amazon Elasticsearch Service to Amazon OpenSearch Service. Aside from the name change, customers can rest assured that we will continue to deliver the same great experience without any impact to ongoing operations, development methodology, or business use. Amazon OpenSearch Service will offer a choice of open source engines to deploy and run, including the currently available 19 versions of ALv2 Elasticsearch (7.9 and earlier, with 7.10 coming soon) as well as new versions of OpenSearch. We will continue to support and maintain the ALv2 Elasticsearch versions with security and bug fixes, and we will deliver all new features and functionality through OpenSearch and OpenSearch Dashboards. The Amazon OpenSearch Service APIs will be backward compatible with the existing service APIs to eliminate any need for customers to update their current client code or applications. Additionally, just as we did for previous versions of Elasticsearch, we will provide a seamless upgrade path from existing Elasticsearch 6.x and 7.x managed clusters to OpenSearch.

We are not alone in our commitment to OpenSearch. Organizations as diverse as Red Hat, SAP, Capital One, and Logz.io have joined us in support.

"At Red Hat, we believe in the power of open source, and that community collaboration is the best way to build software," said Deborah Bryant, Senior Director, Open Source Program Office, Red Hat. "We appreciate Amazon's commitment to OpenSearch being open and we are excited to see continued support for open source at Amazon."

"SAP customers expect a unified, business-centric and open SAP Business Technology Platform," said Jan Schaffner, SVP and Head of BTP Foundational Plane. "Our observability strategy uses Elasticsearch as a major enabler. OpenSearch provides a true open source path and community-driven approach to move this forward."

"At Capital One, we take an open source-first approach to software development, and have seen that we're able to innovate more quickly by leveraging the talents of developer communities worldwide," said Nureen D'Souza, Sr. Manager for Capital One's Open Source Program Office. "When our teams chose to use Elasticsearch, the freedoms provided by the Apache-v2.0 license was central to that choice. We're very supportive of the OpenSearch project, as it will give us greater control and autonomy over our data platform choices while retaining the freedom afforded by an open source license."

"At Logz.io we have a deep belief that community driven open source is an enabler for innovation and prosperity," said Tomer Levy, co-founder and CEO of Logz.io. "We have the highest commitment to our customers and the community that relies on open source to ensure that OpenSearch is available, thriving, and has a strong path forward for the community and led by the community. We have made a commitment to work with AWS and other members of the community to innovate and enable every organization around the world to enjoy the benefits of these critical open source projects."

We are truly excited about the potential for OpenSearch to be a community endeavor, where anyone can contribute to it, influence it, and make decisions together about its future. Community development, at its best, lets people with diverse interests have a direct hand in guiding and building products they will use; this results in products that meet their needs better than anything else. It seems we aren't alone in this interest; there's been an outpouring of excitement from the community to drive OpenSearch, and questions about how we plan to work together.

We've taken a number of steps to make it easy to collaborate on OpenSearch's development. The entire code base is under the Apache 2.0 license, and we don't ask for a contributor license agreement (CLA). This makes it easy for anyone to contribute. We're also keeping the code base well-structured and modular, so everyone can easily modify and extend it for their own uses.

Amazon is the primary steward and maintainer of OpenSearch today, and we have proposed guiding principles for development that make it clear that anyone can be a valued stakeholder in the project. We invite everyone to provide feedback and start contributing to OpenSearch. As we work together in the open, we expect to uncover the best ways to collaborate and empower all interested stakeholders to share in decision making. Cultivating the right governance approach for an open source project requires thoughtful deliberation with the community. We're confident that we can find the best approach together over time.

Getting OpenSearch to this point required substantial work to remove Elastic commercial licensed features, code, and branding. The OpenSearch repos we made available today are a foundation on which everyone can build and innovate. You should consider the initial code to be at an alpha stage: it is not complete, not thoroughly tested, and not suitable for production use. We are planning to release a beta in the next few weeks, and expect it to stabilize and be ready for production by early summer (mid-2021).

The code base is ready, however, for your contributions, feedback, and participation. To get going with the repos, grab the source from GitHub and build it yourself:
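The announcement doesn't reproduce the exact commands here, so the following is only a sketch of what getting the source might look like; the repo URLs under the opensearch-project GitHub organization and the Gradle build step are assumptions, and each repo's developer guide is the authoritative reference.

```shell
# Shallow-clone both repos to keep the downloads small (URLs assumed).
git clone --depth 1 https://github.com/opensearch-project/OpenSearch.git
git clone --depth 1 https://github.com/opensearch-project/OpenSearch-Dashboards.git

# OpenSearch, like the Elasticsearch it forks, builds with Gradle.
# (Requires a suitable JDK; see the repo's developer guide.)
cd OpenSearch && ./gradlew assemble
```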

Once you've cloned the repos, see what you can do. These repos are under active construction, so what works or doesn't work will change from moment to moment. Some tasks you can do to help include:

* See what you can get running in your environment.

* Debug any issues you do find and submit PRs.

* Take a look at the contributing guides (OpenSearch, OpenSearch Dashboards) and developer guides (OpenSearch, OpenSearch Dashboards) to make sure they are clear and understandable to you.

Once you have OpenSearch and OpenSearch Dashboards running:

* Test any custom plugins or code you use and report what breaks.

* Run a sample workload and get in touch if it behaves differently from your previous setup.

* Connect it to any external tools/libraries and find out what works as expected.

We encourage everybody to engage with the OpenSearch community. We have launched a community site at opensearch.org. Our forums are where we collaborate and make decisions. We welcome pull requests through GitHub to fix bugs, improve performance and stability, or add new features. Keep an eye out for "help-wanted" tags on issues.

We're so thrilled to have you along with us on this journey, and we can't wait to see where it leads. We look forward to being part of a growing community that drives OpenSearch to become software that everyone wants to innovate on and use.


Read the original on aws.amazon.com »

4 621 shares, 33 trendiness

Use console.log() like a pro

Using console.log() for JavaScript debugging is the most common practice among developers. But there is more…

The console object provides access to the browser's debugging console. The specifics of how it works vary from browser to browser, but there is a de facto set of features that are typically provided.
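A few of those de facto features go well beyond plain console.log(). A quick sketch using standard Console API methods, runnable in any modern browser or in Node:

```javascript
// Log levels: visually distinct and filterable in browser devtools.
console.info("Server started");
console.warn("Cache miss rate is high");
console.error("Failed to load config");

// Tabular data: far more readable than logging raw arrays of objects.
console.table([
  { name: "Alice", role: "admin" },
  { name: "Bob", role: "viewer" },
]);

// Grouping: nests related messages under one collapsible header.
console.group("Request /api/users");
console.log("status: 200");
console.log("duration: 42ms");
console.groupEnd();

// Timing: measures elapsed time between time() and timeEnd().
console.time("parse");
JSON.parse('{"a": 1}');
console.timeEnd("parse");

// Assertions: logs only when the condition is false.
console.assert(1 + 1 === 2, "math is broken");
```

The names and data here are illustrative; the methods themselves (table, group, time, assert, and friends) are the standard Console API surface the article alludes to.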


Read the original on markodenic.com »

5 618 shares, 27 trendiness

Microsoft buys Nuance for nearly $20 billion as it readies deal frenzy

Microsoft announced Monday it would buy Nuance Communications, a software company that focuses on speech recognition through artificial intelligence, in an all-cash transaction valued at $19.7 billion (including debt assumption).

Why it matters: This is Microsoft's second-largest acquisition, behind the $26.2 billion deal for LinkedIn in 2016. Microsoft is trying to leapfrog competitors like Google and Amazon as they face record antitrust scrutiny. Its cloud business has also been booming during the pandemic: Microsoft's stock hit an all-time high on Friday as it approaches $2 trillion in market value.

Details: Nuance makes money by selling software tools that help to transcribe speech. The Burlington, Massachusetts-based company, for example, powered the speech recognition engine behind Apple's voice assistant, Siri.

The big picture: The deals Microsoft is eyeing are significantly larger than its usual targets. Microsoft tried to buy TikTok's U.S. operations last year in a deal reportedly valued between $10 billion and $30 billion. Reports suggest it's in advanced talks with gaming chat app Discord for a deal worth more than $10 billion. A report in February suggested Microsoft was eyeing a takeover of Pinterest, worth $53 billion on the public market. Last September, it bought gaming giant ZeniMax Media for $7.5 billion.

Bottom line: Aside from its acquisition of LinkedIn, Microsoft's biggest deals have all been worth less than $10 billion. (The company purchased aQuantive for roughly $6 billion in 2007 and Skype for $8.5 billion in 2011. It bought Nokia for $7.2 billion in 2014 and GitHub for $7.5 billion in 2018.)


Read the original on www.axios.com »

6 488 shares, 25 trendiness

RMS addresses the free software community – Free Software Foundation – Working together for free software

Ever since my teenage years, I felt as if there were a filmy curtain separating me from other people my age. I understood the words of their conversations, but I could not grasp why they said what they did. Much later I realized that I didn't understand the subtle cues that other people were responding to.

Later in life, I discovered that some people had negative reactions to my behavior, which I did not even know about. Tending to be direct and honest with my thoughts, I sometimes made others uncomfortable or even offended them, especially women. This was not a choice: I didn't understand the problem enough to know which choices there were.

Sometimes I lost my temper because I didn't have the social skills to avoid it. Some people could cope with this; others were hurt. I apologize to each of them. Please direct your criticism at me, not at the Free Software Foundation.

Occasionally I learned something about relationships and social skills, so over the years I've found ways to get better at these situations. When people help me understand an aspect of what went wrong, and that shows me a way of treating people better, I teach myself to recognize when I should act that way. I keep making this effort, and over time, I improve.

Some have described me as being "tone-deaf," and that is fair. With my difficulty in understanding social cues, that tends to happen. For instance, I defended Professor Minsky on an MIT mailing list after someone leaped to the conclusion that he was just as guilty as Jeffrey Epstein. To my surprise, some thought my message defended Epstein. As I had stated previously, Epstein is a serial rapist, and rapists should be punished. I wish for his victims and those harmed by him to receive justice.

False accusations (real or imaginary, against me or against others) especially anger me. I knew Minsky only distantly, but seeing him unjustly accused made me spring to his defense. I would have done it for anyone. Police brutality makes me angry, but when the cops lie about their victims afterwards, that false accusation is the ultimate outrage for me. I condemn racism and sexism, including their systemic forms, so when people say I don't, that hurts too.

It was right for me to talk about the injustice to Minsky, but it was tone-deaf that I didn't acknowledge as context the injustice that Epstein did to women or the pain that caused.

I've learned something from this about how to be kind to people who have been hurt. In the future, that will help me be kind to people in other situations, which is what I hope to do.


Read the original on www.fsf.org »

7 384 shares, 16 trendiness

'The first spaceman landed in my field'

Sixty years ago, a man went into space for the very first time.

For the USSR, Yuri Gagarin's single orbit of the Earth was a huge achievement and propaganda coup. There will be celebrations across Russia to mark the anniversary.

Our Moscow correspondent Steve Rosenberg reports on the day a new Russian hero was born, and meets the little girl who witnessed it.


Read the original on www.bbc.co.uk »

8 323 shares, 15 trendiness

Cloudflare Pages is now Generally Available

In December, we announced the beta of Cloudflare Pages: a fast, secure, and free way for frontend developers to build, host, and collaborate on Jamstack sites.

It's been incredible to see what happens when you put a powerful tool in developers' hands. In just a few months of beta, thousands of developers have deployed over ten thousand projects, reaching millions of people around the world.

Today, we're excited to announce that Cloudflare Pages is now available for anyone and ready for your production needs. We're also excited to show off some of the new features we've been working on over the course of the beta, including: web analytics, built-in redirects, protected previews, live previews, and optimized images (oh, my!). Lastly, we'll give you a sneak peek into what we'll be working on next to make Cloudflare Pages your go-to platform for deploying not just static sites, but full-stack applications.

Cloudflare Pages radically simplifies the process of developing and deploying sites by taking care of all the tedious parts of web development. Now, developers can focus on the fun and creative parts instead.

Getting started with Cloudflare Pages is as easy as connecting your repository and selecting your framework and build commands.

Once you're set up, the only magic words you'll need are `git commit` and `git push`. We'll take care of building and deploying your sites for you, so you won't ever have to leave your current workflow.

With every change, Cloudflare Pages generates a new preview link and posts it to the associated pull request. The preview link makes sharing your work with others easy, whether they're reviewing the code or the content for each change.

Every site developed with Cloudflare Pages is deployed to Cloudflare's network of data centers in over 100 countries, the same network we've been building out for the past 10 years with the best performance and security for our customers in mind.

Over the past few months, our developers have been busy too, hardening our offering from beta to general availability, fixing bugs, and working on new features to make building powerful websites even easier.

Here are some of the new and improved features you may have missed.

With Cloudflare Pages, we set out to make developing and deploying sites easy at every step, and that doesn't stop at production. Launch day is usually when the real work begins.

Built-in, free web analytics

Speaking of launch days, if there's one question that's on my mind on a launch day, it's: how are things going?

As soon as the go-live button is pressed, I want to know: how many views are we getting? Was the effort worth it? Are users running into errors?

In the weeks or months after the launch, I still want to know: is our growth steady? Where is the traffic coming from? Is there anything we can do to improve the user's experience?

With Pages, Cloudflare's privacy-first Web Analytics is available to answer all of these essential questions for successfully running your website at scale. Later this week, you will be able to enable analytics with a single click and start tracking your site's progress and performance, including metrics about your traffic and web core vitals.

_redirects file support

Websites are living projects. As you make updates to your product names, blog post titles, and site layouts, your URLs are bound to change as well. The challenge is not letting those changes leave dead URLs behind for your users to stumble over.

To avoid leaving behind a trail of dead URLs, you should create redirects that automatically lead the user to your content's new home. The challenge with creating redirects is coordinating the code change that changes the URL in tandem with the creation of the redirect.

You can now do both with one swift commit.

By adding a _redirects file to the build output directory for your project, you can easily redirect users to the right URL. Just add the redirects into the file in the following format:


/contact-me /contact 301

/blog https://www.ghost.org 301

Cloudflare Pages makes it easy to create new redirects and import existing redirects using our new support for _redirects files.
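To make the semantics of the format concrete, here is a minimal Python sketch of how rules like the ones above could be parsed and matched. This is only an illustration of the file format (source path, destination, status code per line), not Cloudflare's actual implementation:

```python
# Minimal illustration of a _redirects file's semantics:
# each line is "<source path> <destination> <status code>".
# A sketch only, not Cloudflare's implementation.

def parse_redirects(text):
    rules = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3:
            source, destination, status = parts
            rules.append((source, destination, int(status)))
    return rules

def match(rules, path):
    """Return (destination, status) for the first matching rule, or None."""
    for source, destination, status in rules:
        if path == source:
            return destination, status
    return None

rules = parse_redirects("""
/contact-me /contact 301
/blog https://www.ghost.org 301
""")

print(match(rules, "/contact-me"))  # -> ('/contact', 301)
```

First match wins here, which is the usual convention for redirect files: order your most specific rules first.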

When we started Cloudflare, people believed performance and security were at odds with each other, and tradeoffs had to be made between the two. We set out to show that that was wrong.

Today, we similarly believe that working collaboratively and moving fast are at odds with each other. As the saying goes, "If you want to go fast, go alone. If you want to go far, go together."

We previously discussed the ways in which Cloudflare Pages allows developers and their stakeholders to move fast together, and we've built two additional improvements to make it even easier!

Protected previews with Cloudflare Access integration

One of the ways Cloudflare Pages simplifies collaboration is by generating unique preview URLs for each commit. The preview URLs make it easy for anyone on your team to check out your work in progress, take it for a spin, and provide feedback before the changes go live.

While it's great to shop ideas with your coworkers and stakeholders ahead of the big day, the surprise element is what makes the big day, The Big Day.

With our new Cloudflare Access integration, restricting access to your preview deployments is as easy as clicking a button.

Cloudflare Access is a Zero Trust solution: think of it like a bouncer checking each request to your site at the door. By default, we add the members of your Cloudflare organization, so when you send them a new preview link, they're prompted with a one-time PIN that is sent to their email for authentication. However, you can modify the policy to integrate with your preferred SSO provider.

Cloudflare Access comes with 50 seats included in the free tier, enough to make sure no one leaks your new "dark mode" feature before you want them to.

Live previews with Cloudflare Tunnel

While preview deployments make it easy to share progress when you're working asynchronously, sometimes a live collaboration session is the best way to crank out those finishing touches and last-minute copy changes.

With Cloudflare Tunnel, you can expose your localhost through a secure tunnel to an easily shareable URL, so you can get live feedback from your teammates before you commit (pun intended).
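In practice this is typically a one-liner with the cloudflared CLI. As a sketch, assuming cloudflared is installed and your dev server is on a local port, the command could be built and launched like so (the port is an example):

```python
# Sketch: build the cloudflared invocation that exposes a local dev server
# through a quick tunnel. Assumes the cloudflared CLI is installed; running
# it prints a shareable tunnel URL to hand to teammates.
def tunnel_command(port):
    return ["cloudflared", "tunnel", "--url", f"http://localhost:{port}"]

print(tunnel_command(8000))
# Launch with: subprocess.run(tunnel_command(8000))
```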

It's easy to get excited about all the new features you can play with, but one of the real killer features of Pages is its performance and reliability.

We got a bit of a head start on performance because we built Pages on the same network we've spent the past ten years optimizing. As a result, we learned a thing or two about accelerating web performance along the way.

One of the best ways to improve your site's performance is to serve smaller content, which takes less time to transfer. One way to make your content smaller is via compression. We've recently introduced two types of compression to Pages:

Image compression: Since images represent some of the largest types of content we serve, serving them efficiently can have a great impact on performance. To improve efficiency, we now use Polish to compress your images and serve fewer bytes over the wire. When possible, we'll also serve a WebP version of your image (and AVIF too, coming soon).

Gzip and Brotli: Even smaller assets, such as HTML or JavaScript, can benefit from compression. Pages will now serve content compressed with gzip or Brotli, based on the type of compression the client can support.
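The client advertises the schemes it supports via the Accept-Encoding request header, and the server picks the best one it can produce. A simplified sketch of that negotiation (ignoring q-values and other real-world subtleties, and preferring Brotli since it usually compresses better than gzip):

```python
# Simplified Accept-Encoding negotiation: prefer Brotli when the client
# supports it, fall back to gzip, else serve uncompressed ("identity").
# Ignores q-values; an illustration, not Cloudflare's implementation.

def choose_encoding(accept_encoding):
    offered = {token.strip().split(";")[0] for token in accept_encoding.split(",")}
    for preferred in ("br", "gzip"):
        if preferred in offered:
            return preferred
    return "identity"

print(choose_encoding("gzip, deflate, br"))  # -> 'br'
print(choose_encoding("gzip, deflate"))      # -> 'gzip'
```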

While we've been offering compression for a long time now (dating all the way back to 2012), this is the first time we're able to pre-process the assets at build time, rather than on the fly, resulting in even better compression.

Another way to make content smaller is by literally shrinking it.

Device-based resizing: To make users' experiences even smoother, especially on less reliable mobile devices, we want to make sure we're not sending large images that will only get previewed on a small screen. Our new optimization will appropriately resize the image based on whether the device is mobile or desktop.

If you're interested in more image optimization features, we have some announcements planned for later in the week, so stay tuned.

While today's milestone marks Cloudflare Pages as a production-ready product, like I said, that's where our true work begins, not ends.

There are so many features we're excited to support in the future, and we wanted to give you a small glimpse into what it holds:

GitLab / Bitbucket support

We started out by offering direct integration with GitHub to reach as many developers as we possibly could, but we want to continuously grow out the ecosystem we interact with.


If you're managing all of your code and content through source control, it's sufficient to rely on committing your code as a way to trigger a new preview. However, if you're managing your code in one place but the content in another, such as a CMS, you may still want to preview your content changes before they go live.

To enable you to do so, we'll be providing an endpoint you'll be able to call in order to trigger a brand new deployment via a webhook.
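Once such an endpoint exists, a CMS's "publish" hook could trigger a rebuild with a single authenticated POST. A hedged sketch of what that call might look like; the URL and token below are made-up placeholders, since the endpoint hadn't shipped at the time of writing:

```python
# Hypothetical sketch of calling a deploy-hook endpoint. The URL and
# bearer token are invented placeholders, not a real Cloudflare API.
import urllib.request

def build_deploy_request(hook_url, token):
    return urllib.request.Request(
        hook_url,
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
        data=b"",  # an empty body is enough to trigger a rebuild
    )

req = build_deploy_request("https://example.com/pages/deploy-hook", "SECRET")
print(req.get_method())  # -> POST
# Send it with: urllib.request.urlopen(req)
```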

A/B testing

No matter how much local testing you've done, or how many co-workers you've received feedback from, some unexpected behavior (whether a bug or a typo) is eventually bound to slip through, only to get caught in production. Your reviewers are human too, after all.

When the inevitable happens, however, you don't want it impacting all of your users at once.

To give you better control of rolling out changes into production, we're looking forward to offering you the ability to roll out your changes to a percentage of your traffic, so you can gain confidence before you go to 100%.
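One common way to implement a percentage rollout is deterministic bucketing: hash a stable request attribute (say, a user or session ID) into 100 buckets and route the low buckets to the new version, so each user consistently sees the same variant. A sketch of that idea, not Cloudflare's actual mechanism:

```python
# Sketch of deterministic percentage rollout: hash a stable identifier
# into one of 100 buckets; users in buckets below the rollout percentage
# get the new deployment. Illustrative only, not Cloudflare's design.
import hashlib

def bucket(user_id):
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % 100

def serve_new_version(user_id, rollout_percent):
    return bucket(user_id) < rollout_percent

# The same user always lands in the same bucket, so their experience
# stays stable as the rollout percentage ramps up.
print(serve_new_version("user-42", 100))  # -> True
print(serve_new_version("user-42", 0))    # -> False
```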

Supporting static sites is just the beginning of the journey for Cloudflare Pages. With redirects support, we're starting to introduce the first bit of dynamic functionality to Pages, but our ambitions extend far beyond.

Our long-term goal with Pages is to make full-stack application development as breezy an experience as static site development is today. We want to make Pages the deployment target for your static assets and the APIs that make them dynamic. With Workers and Durable Objects, we believe we have just the toolset to build upon.

We'll be starting by allowing you to deploy a Worker function by including it in your /api or /functions directory. Over time, we'll be introducing new ways for you to deploy Durable Objects or utilize KV namespaces in the same way.

Imagine: your entire application (frontend, APIs, storage, data) all deployed with a single commit, easily testable in staging, and a single merge to deploy to production.

Sign up or check out our docs to get started.

The best part about this is getting to see what you build, so if you're building something cool, make sure to pop into our Discord and tell us all about it.


Read the original on blog.cloudflare.com »

9 308 shares, 15 trendiness, words and minutes reading time

A High-Performance Arm Server CPU For Use In Big AI Systems

NVIDIA Unveils Grace: A High-Performance Arm Server CPU For Use In Big AI Systems

Kicking off another busy Spring GPU Technology Conference for NVIDIA, this morning the graphics and accelerator designer is announcing that they are going to once again design their own Arm-based CPU/SoC. Dubbed Grace (after Grace Hopper, the computer programming pioneer and US Navy rear admiral), the CPU is NVIDIA's latest stab at more fully vertically integrating their hardware stack by being able to offer a high-performance CPU alongside their regular GPU wares. According to NVIDIA, the chip is being designed specifically for large-scale neural network workloads, and is expected to become available in NVIDIA products in 2023.

With two years to go until the chip is ready, NVIDIA is playing things relatively coy at this time. The company is offering only limited details for the chip (it will be based on a future iteration of Arm's Neoverse cores, for example), as today's announcement is a bit more focused on NVIDIA's future workflow model than it is speeds and feeds. If nothing else, the company is making it clear early on that, at least for now, Grace is an internal product for NVIDIA, to be offered as part of their larger server offerings. The company isn't directly gunning for the Intel Xeon or AMD EPYC server market, but instead they are building their own chip to complement their GPU offerings, creating a specialized chip that can directly connect to their GPUs and help handle enormous, trillion-parameter AI models.

More broadly speaking, Grace is designed to fill the CPU-sized hole in NVIDIA's AI server offerings. The company's GPUs are incredibly well-suited for certain classes of deep learning workloads, but not all workloads are purely GPU-bound, if only because a CPU is needed to keep the GPUs fed. NVIDIA's current server offerings, in turn, typically rely on AMD's EPYC processors, which are very fast for general compute purposes, but lack the kind of high-speed I/O and deep learning optimizations that NVIDIA is looking for. In particular, NVIDIA is currently bottlenecked by the use of PCI Express for CPU-GPU connectivity; their GPUs can talk quickly amongst themselves via NVLink, but not back to the host CPU or system RAM.

The solution to the problem, as was the case even before Grace, is to use NVLink for CPU-GPU communications. Previously NVIDIA worked with the OpenPOWER Foundation to get NVLink into POWER9 for exactly this reason; however, that relationship is seemingly on its way out, both as POWER's popularity wanes and as POWER10 skips NVLink. Instead, NVIDIA is going their own way by building an Arm server CPU with the necessary NVLink functionality.

The end result, according to NVIDIA, will be a high-performance and high-bandwidth CPU designed to work in tandem with a future generation of NVIDIA server GPUs. With NVIDIA talking about pairing each NVIDIA GPU with a Grace CPU on a single board (similar to today's mezzanine cards), not only do CPU performance and system memory scale up with the number of GPUs, but in a roundabout way, Grace will serve as a co-processor of sorts to NVIDIA's GPUs. This, if nothing else, is a very NVIDIA solution to the problem, not only improving their performance, but giving them a counter should the more traditionally integrated AMD or Intel try some sort of similar CPU+GPU fusion play.

By 2023 NVIDIA will be up to NVLink 4, which will offer at least 900GB/sec of cumulative (up + down) bandwidth between the SoC and GPU, and over 600GB/sec cumulative between Grace SoCs. Critically, this is greater than the memory bandwidth of the SoC, which means that NVIDIA's GPUs will have a cache-coherent link to the CPU that can access the system memory at full bandwidth, and it also allows the entire system to have a single shared memory address space. NVIDIA describes this as balancing the amount of bandwidth available in a system, and they're not wrong, but there's more to it. Having an on-package CPU is a major means towards increasing the amount of memory NVIDIA's GPUs can effectively access and use, as memory capacity continues to be the primary constraining factor for large neural networks: you can only efficiently run a network as big as your local memory pool.

And this memory-focused strategy is reflected in the memory pool design of Grace, as well. Since NVIDIA is putting the CPU on a shared package with the GPU, they're going to put the RAM down right next to it. Grace-equipped GPU modules will include a to-be-determined amount of LPDDR5x memory, with NVIDIA targeting at least 500GB/sec of memory bandwidth. Besides being what's likely to be the highest-bandwidth non-graphics memory option in 2023, NVIDIA is touting the use of LPDDR5x as a gain for energy efficiency, owing to the technology's mobile-focused roots and very short trace lengths. And, since this is a server part, Grace's memory will be ECC-enabled, as well.

As for CPU performance, this is actually the part where NVIDIA has said the least. The company will be using a future generation of Arm's Neoverse CPU cores, where the initial N1 design has already been turning heads. But other than that, all the company is saying is that the cores should break 300 points on the SPECrate2017_int_base throughput benchmark, which would be comparable to some of AMD's second-generation 64-core EPYC CPUs. The company also isn't saying much about how the CPUs are configured or what optimizations are being added specifically for neural network processing. But since Grace is meant to support NVIDIA's GPUs, I would expect it to be stronger where GPUs in general are weaker.

Otherwise, as mentioned earlier, NVIDIA's big vision for Grace is significantly cutting down the time required to train the largest neural network models. NVIDIA is gunning for 10x higher performance on 1-trillion-parameter models, and their performance projections for a 64-module Grace+A100 system (with theoretical NVLink 4 support) would bring training such a model down from a month to three days. Alternatively, an 8-module system would be able to do real-time inference on a 500-billion-parameter model.

Overall, this is NVIDIA's second real stab at the data center CPU market, and the first that is likely to succeed. NVIDIA's Project Denver, which was originally announced just over a decade ago, never really panned out as NVIDIA expected. The family of custom Arm cores was never good enough, and never made it out of NVIDIA's mobile SoCs. Grace, in contrast, is a much safer project for NVIDIA; they're merely licensing Arm cores rather than building their own, and those cores will be in use by numerous other parties as well. So NVIDIA's risk is reduced to largely getting the I/O and memory plumbing right, as well as keeping the final design energy efficient.

If all goes according to plan, expect to see Grace in 2023. NVIDIA is already confirming that Grace modules will be available for use in HGX carrier boards, and by extension DGX and all the other systems that use those boards. So while we haven't seen the full extent of NVIDIA's Grace plans, it's clear that they are planning to make it a core part of future server offerings.

And even though Grace isn't shipping until 2023, NVIDIA has already lined up their first customers for the hardware, and they're supercomputer customers, no less. Both the Swiss National Supercomputing Centre (CSCS) and Los Alamos National Laboratory are announcing today that they'll be ordering supercomputers based on Grace. Both systems will be built by HPE's Cray group, and are set to come online in 2023.

CSCS's system, dubbed Alps, will be replacing their current Piz Daint system, a Xeon plus NVIDIA P100 cluster. According to the two companies, Alps will offer 20 ExaFLOPS of AI performance, which is presumably a combination of CPU, CUDA core, and tensor core throughput. When it's launched, Alps should be the fastest AI-focused supercomputer in the world.

Interestingly, however, CSCS's ambitions for the system go beyond just machine learning workloads. The institute says that they'll be using Alps as a general-purpose system, working on more traditional HPC-type tasks as well as AI-focused tasks. This includes CSCS's traditional research into weather and the climate, which the pre-AI Piz Daint is already used for as well.

As previously mentioned, Alps will be built by HPE, who will be basing it on their previously announced Cray EX architecture. This would make NVIDIA's Grace the second CPU option for Cray EX, along with AMD's EPYC processors.

Meanwhile, Los Alamos' system is being developed as part of an ongoing collaboration between the lab and NVIDIA, with LANL set to be the first US-based customer to receive a Grace system. LANL is not discussing the expected performance of their system beyond the fact that it's expected to be "leadership-class," though the lab is planning on using it for 3D simulations, taking advantage of the largest data set sizes afforded by Grace. The LANL system is set to be delivered in early 2023.


Read the original on www.anandtech.com »

10 297 shares, 29 trendiness, words and minutes reading time

Add chrome 0day · r4j0x00/exploits@7ba55e5





Read the original on github.com »
