10 interesting stories served every morning and every evening.
No one wants to be the bad guy.
When narratives begin to shift and the once-good guys are labelled as bad, it’s not surprising they fight back. They’ll dismiss criticisms as exaggerations and their faults as misunderstandings.
Today’s freshly ordained bad guys are the investors and CEOs of Silicon Valley.
Once championed as the flagbearers of innovation and democratization, now they’re viewed as new versions of the monopolies of old and they’re fighting back.
The title of Paul Graham’s essay, How People Get Rich Now, didn’t prepare me for the real goal of his words. It’s less a tutorial or analysis and more a thinly veiled attempt to ease concerns about wealth inequality.
What he fails to mention is that concerns about wealth inequality aren’t about how wealth was generated but rather about the growing wealth gap that has accelerated in recent decades. Tech has made startups both cheaper and easier to launch, but only for a small percentage of people. And when a select group of people has an advantage that others don’t, it compounds over time.
Paul paints a rosy picture but doesn’t mention that incomes for lower- and middle-class families have fallen since the 80s. This golden age of entrepreneurship hasn’t benefitted the vast majority of people, and the increase in the Gini coefficient isn’t simply a result of more companies being started. The rich are getting richer and the poor are getting poorer.
And there we have it. The slight injection of his true ideology, relegated to the notes section and vague enough that some might miss it. But keep in mind this is the same guy who argued against a wealth tax. His seemingly impartial and logical writing attempts to hide his true intentions.
Is this really about how people get rich, or about why we should all be happy that people like PG are getting richer while tons of people are struggling to meet their basic needs? Wealth inequality is just a radical-left fairy tale to villainize the hard-working 1%. We could all be rich too; it’s so much easier now. Just pull yourself up by your bootstraps.
There’s no question that it’s easier now than ever to start a new business and reach your market. The internet has had a democratizing effect in this regard. But it’s also obvious to anyone outside the SV bubble that it’s still only accessible to a small minority of people. Most people don’t have the safety net or mental bandwidth to even consider entrepreneurship. It is not a panacea for the masses.
But to use that fact to push the false claim that wealth inequality is simply the result of more startups, and not a real problem, says a lot. This essay is less about how people get rich and more about why it’s okay that people like PG are getting rich. They’re better than the richest people of 1960. And we can join them. We just need to stop complaining and be rich instead.
...
Read the original on keenen.xyz »
April 2021
Every year since 1982, Forbes magazine has published a list of the richest Americans. If we compare the 100 richest people in 1982 to the 100 richest in 2020, we notice some big differences.

In 1982 the most common source of wealth was inheritance. Of the 100 richest people, 60 inherited from an ancestor. There were 10 du Pont heirs alone. By 2020 the number of heirs had been cut in half, accounting for only 27 of the biggest 100 fortunes.

Why would the percentage of heirs decrease? Not because inheritance taxes increased. In fact, they decreased significantly during this period. The reason the percentage of heirs has decreased is not that fewer people are inheriting great fortunes, but that more people are making them.

How are people making these new fortunes? Roughly 3/4 by starting companies and 1/4 by investing. Of the 73 new fortunes in 2020, 56 derive from founders’ or early employees’ equity (52 founders, 2 early employees, and 2 wives of founders), and 17 from managing investment funds.

There were no fund managers among the 100 richest Americans in 1982. Hedge funds and private equity firms existed in 1982, but none of their founders were rich enough yet to make it into the top 100. Two things changed: fund managers discovered new ways to generate high returns, and more investors were willing to trust them with their money.

But the main source of new fortunes now is starting companies, and when you look at the data, you see big changes there too. People get richer from starting companies now than they did in 1982, because the companies do different things.

In 1982, there were two dominant sources of new wealth: oil and real estate. Of the 40 new fortunes in 1982, at least 24 were due primarily to oil or real estate. Now only a small number are: of the 73 new fortunes in 2020, 4 were due to real estate and only 2 to oil.

By 2020 the biggest source of new wealth was what are sometimes called “tech” companies. Of the 73 new fortunes, about 30 derive from such companies. These are particularly common among the richest of the rich: 8 of the top 10 fortunes in 2020 were new fortunes of this type.

Arguably it’s slightly misleading to treat tech as a category. Isn’t Amazon really a retailer, and Tesla a car maker? Yes and no. Maybe in 50 years, when what we call tech is taken for granted, it won’t seem right to put these two businesses in the same category. But at the moment at least, there is definitely something they share in common that distinguishes them. What retailer starts AWS? What car maker is run by someone who also has a rocket company?

The tech companies behind the top 100 fortunes also form a well-differentiated group in the sense that they’re all companies that venture capitalists would readily invest in, and the others mostly not. And there’s a reason why: these are mostly companies that win by having better technology, rather than just a CEO who’s really driven and good at making deals.

To that extent, the rise of the tech companies represents a qualitative change. The oil and real estate magnates of the 1982 Forbes 400 didn’t win by making better technology. They won by being really driven and good at making deals.

And indeed, that way of getting rich is so old that it predates the Industrial Revolution. The courtiers who got rich in the (nominal) service of European royal houses in the 16th and 17th centuries were also, as a rule, really driven and good at making deals.

People who don’t look any deeper than the Gini coefficient look back on the world of 1982 as the good old days, because those who got rich then didn’t get as rich. But if you dig into how they got rich, the old days don’t look so good. In 1982, 84% of the richest 100 people got rich by inheritance, extracting natural resources, or doing real estate deals. Is that really better than a world in which the richest people get rich by starting tech companies?

Why are people starting so many more new companies than they used to, and why are they getting so rich from it? The answer to the first question, curiously enough, is that it’s misphrased. We shouldn’t be asking why people are starting companies, but why they’re starting companies again.

In 1892, the New York Herald Tribune compiled a list of all the millionaires in America. They found 4047 of them. How many had inherited their wealth then? Only about 20% — less than the proportion of heirs today. And when you investigate the sources of the new fortunes, 1892 looks even more like today. Hugh Rockoff found that “many of the richest … gained their initial edge from the new technology of mass production.”

So it’s not 2020 that’s the anomaly here, but 1982. The real question is why so few people had gotten rich from starting companies in 1982. And the answer is that even as the Herald Tribune’s list was being compiled, a wave of consolidation was sweeping through the American economy. In the late 19th and early 20th centuries, financiers like J. P. Morgan combined thousands of smaller companies into a few hundred giant ones with commanding economies of scale. By the end of World War II, as Michael Lind writes, “the major sectors of the economy were either organized as government-backed cartels or dominated by a few oligopolistic corporations.”

In 1960, most of the people who start startups today would have gone to work for one of them. You could get rich from starting your own company in 1890 and in 2020, but in 1960 it was not really a viable option. You couldn’t break through the oligopolies to get at the markets. So the prestigious route in 1960 was not to start your own company, but to work your way up the corporate ladder at an existing one.

Making everyone a corporate employee decreased economic inequality (and every other kind of variation), but if your model of normal
...
Read the original on paulgraham.com »
Today, we are introducing the OpenSearch project, a community-driven, open source fork of Elasticsearch and Kibana. We are making a long-term investment in OpenSearch to ensure users continue to have a secure, high-quality, fully open source search and analytics suite with a rich roadmap of new and innovative functionality. This project includes OpenSearch (derived from Elasticsearch 7.10.2) and OpenSearch Dashboards (derived from Kibana 7.10.2). Additionally, the OpenSearch project is the new home for our previous distribution of Elasticsearch (Open Distro for Elasticsearch), which includes features such as enterprise security, alerting, machine learning, SQL, index state management, and more. All of the software in the OpenSearch project is released under the Apache License, Version 2.0 (ALv2). We invite you to check out the code for OpenSearch and OpenSearch Dashboards on GitHub, and join us and the growing community around this effort.
We welcome individuals and organizations who are users of Elasticsearch, as well as those who are building products and services based on Elasticsearch. Our goal with the OpenSearch project is to make it easy for as many people and organizations as possible to use OpenSearch in their business, their products, and their projects. Whether you are an independent developer, an enterprise IT department, a software vendor, or a managed service provider, the ALv2 license grants you well-understood usage rights for OpenSearch. You can use, modify, extend, embed, monetize, resell, and offer OpenSearch as part of your products and services. We have also published permissive usage guidelines for the OpenSearch trademark, so you can use the name to promote your offerings. Broad adoption benefits all members of the community.
We plan to rename our existing Amazon Elasticsearch Service to Amazon OpenSearch Service. Aside from the name change, customers can rest assured that we will continue to deliver the same great experience without any impact to ongoing operations, development methodology, or business use. Amazon OpenSearch Service will offer a choice of open source engines to deploy and run, including the currently available 19 versions of ALv2 Elasticsearch (7.9 and earlier, with 7.10 coming soon) as well as new versions of OpenSearch. We will continue to support and maintain the ALv2 Elasticsearch versions with security and bug fixes, and we will deliver all new features and functionality through OpenSearch and OpenSearch Dashboards. The Amazon OpenSearch Service APIs will be backward compatible with the existing service APIs to eliminate any need for customers to update their current client code or applications. Additionally, just as we did for previous versions of Elasticsearch, we will provide a seamless upgrade path from existing Elasticsearch 6.x and 7.x managed clusters to OpenSearch.
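To make that compatibility promise concrete, here is a minimal sketch: standard Elasticsearch REST calls like the ones below are the kind of requests that should keep working unchanged against the renamed service. The domain is a placeholder for your own endpoint, not anything from the announcement.

```
# Standard Elasticsearch 7.x REST calls; <your-domain> is a placeholder.
curl -X GET "https://<your-domain>/_cluster/health?pretty"
curl -X POST "https://<your-domain>/my-index/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"title": "hello, opensearch"}'
```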
We are not alone in our commitment to OpenSearch. Organizations as diverse as Red Hat, SAP, Capital One, and Logz.io have joined us in support.
“At Red Hat, we believe in the power of open source, and that community collaboration is the best way to build software,” said Deborah Bryant, Senior Director, Open Source Program Office, Red Hat. “We appreciate Amazon’s commitment to OpenSearch being open and we are excited to see continued support for open source at Amazon.”
“SAP customers expect a unified, business-centric and open SAP Business Technology Platform,” said Jan Schaffner, SVP and Head of BTP Foundational Plane. “Our observability strategy uses Elasticsearch as a major enabler. OpenSearch provides a true open source path and community-driven approach to move this forward.”
“At Capital One, we take an open source-first approach to software development, and have seen that we’re able to innovate more quickly by leveraging the talents of developer communities worldwide,” said Nureen D’Souza, Sr. Manager for Capital One’s Open Source Program Office. “When our teams chose to use Elasticsearch, the freedoms provided by the Apache-v2.0 license were central to that choice. We’re very supportive of the OpenSearch project, as it will give us greater control and autonomy over our data platform choices while retaining the freedom afforded by an open source license.”
“At Logz.io we have a deep belief that community driven open source is an enabler for innovation and prosperity,” said Tomer Levy, co-founder and CEO of Logz.io. “We have the highest commitment to our customers and the community that relies on open source to ensure that OpenSearch is available, thriving, and has a strong path forward for the community and led by the community. We have made a commitment to work with AWS and other members of the community to innovate and enable every organization around the world to enjoy the benefits of these critical open source projects.”
We are truly excited about the potential for OpenSearch to be a community endeavor, where anyone can contribute to it, influence it, and make decisions together about its future. Community development, at its best, lets people with diverse interests have a direct hand in guiding and building products they will use; this results in products that meet their needs better than anything else. It seems we aren’t alone in this interest; there’s been an outpouring of excitement from the community to drive OpenSearch, and questions about how we plan to work together.
We’ve taken a number of steps to make it easy to collaborate on OpenSearch’s development. The entire code base is under the Apache 2.0 license, and we don’t ask for a contributor license agreement (CLA). This makes it easy for anyone to contribute. We’re also keeping the code base well-structured and modular, so everyone can easily modify and extend it for their own uses.
Amazon is the primary steward and maintainer of OpenSearch today, and we have proposed guiding principles for development that make it clear that anyone can be a valued stakeholder in the project. We invite everyone to provide feedback and start contributing to OpenSearch. As we work together in the open, we expect to uncover the best ways to collaborate and empower all interested stakeholders to share in decision making. Cultivating the right governance approach for an open source project requires thoughtful deliberation with the community. We’re confident that we can find the best approach together over time.
Getting OpenSearch to this point required substantial work to remove Elastic commercial licensed features, code, and branding. The OpenSearch repos we made available today are a foundation on which everyone can build and innovate. You should consider the initial code to be at an alpha stage — it is not complete, not thoroughly tested, and not suitable for production use. We are planning to release a beta in the next few weeks, and expect it to stabilize and be ready for production by early summer (mid-2021).
The code base is ready, however, for your contributions, feedback, and participation. To get going with the repos, grab the source from GitHub and build it yourself:
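The original post links the repos directly; as a sketch, cloning and building them looks roughly like the following. The repo locations under the opensearch-project GitHub organization are an assumption here, and since OpenSearch inherits Elasticsearch’s Gradle tooling, defer to each repo’s developer guide for the exact build steps.

```
# Clone the two repos (assumed to live under the opensearch-project org):
git clone https://github.com/opensearch-project/OpenSearch.git
git clone https://github.com/opensearch-project/OpenSearch-Dashboards.git
# OpenSearch inherits Elasticsearch's Gradle build, so roughly:
cd OpenSearch && ./gradlew assemble
```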
Once you’ve cloned the repos, see what you can do. These repos are under active construction, so what works or doesn’t work will change from moment to moment. Some tasks you can do to help include:
* See what you can get running in your environment.
* Debug any issues you do find and submit PRs.
* Take a look at the contributing guides (OpenSearch, OpenSearch Dashboards) and developer guides (OpenSearch, OpenSearch Dashboards) to make sure they are clear and understandable to you.
Once you have OpenSearch and OpenSearch Dashboards running:
* Test any custom plugins or code you use and report what breaks.
* Run a sample workload and get in touch if it behaves differently from your previous setup.
* Connect it to any external tools / libraries and find out what works as expected.
We encourage everybody to engage with the OpenSearch community. We have launched a community site at opensearch.org. Our forums are where we collaborate and make decisions. We welcome pull requests through GitHub to fix bugs, improve performance and stability, or add new features. Keep an eye out for “help-wanted” tags on issues.
We’re so thrilled to have you along with us on this journey, and we can’t wait to see where it leads. We look forward to being part of a growing community that drives OpenSearch to become software that everyone wants to innovate on and use.
...
Read the original on aws.amazon.com »
Using console.log() for JavaScript debugging is the most common practice among developers. But there is more…
The console object provides access to the browser’s debugging console. The specifics of how it works vary from browser to browser, but there is a de facto set of features that are typically provided.
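The article’s body is truncated here, but a few members of that de facto feature set (all long-standing, widely supported console methods) give the flavor:

```javascript
// Beyond console.log(): a few widely supported console methods.
console.warn("cache miss rate is high");        // warning-level message
console.error("request failed");                // error-level message
console.table([{ id: 1, lang: "JS" }, { id: 2, lang: "TS" }]); // tabular view
console.group("startup");                       // collapsible group of messages
console.log("config loaded");
console.groupEnd();
console.time("parse");                          // start a named timer
JSON.parse('{"ok": true}');
console.timeEnd("parse");                       // log the elapsed time
console.assert(1 + 1 === 2, "only logs if the assertion fails");
```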
...
Read the original on markodenic.com »
Microsoft buys Nuance for nearly $20 billion as it readies deal frenzy

Microsoft announced Monday it would buy Nuance Communications, a software company that focuses on speech recognition through artificial intelligence, in an all-cash transaction valued at $19.7 billion (including debt assumption).

Why it matters: This is Microsoft’s second-largest acquisition, behind the $26.2 billion deal for LinkedIn in 2016. Microsoft is trying to leapfrog competitors like Google and Amazon as they face record antitrust scrutiny. Its cloud business has also been booming during the pandemic. Microsoft’s stock hit an all-time high on Friday, as it approaches $2 trillion in market value.

Details: Nuance makes money by selling software tools that help to transcribe speech. The Burlington, Massachusetts-based company, for example, powered the speech recognition engine behind Apple’s voice assistant, Siri.

The big picture: The deals Microsoft is eyeing are significantly larger than its usual targets. Microsoft tried to buy TikTok’s U.S. operations last year in a deal reportedly valued between $10 billion and $30 billion. Reports suggest it’s in advanced talks with gaming chat app Discord for a deal worth more than $10 billion. A report in February suggested Microsoft was eyeing a takeover of Pinterest, worth $53 billion on the public market. Last September, it bought gaming giant ZeniMax Media for $7.5 billion.

Bottom line: Aside from its acquisition of LinkedIn, Microsoft’s biggest deals have all been worth less than $10 billion. (The company purchased aQuantive for roughly $6 billion in 2007 and Skype for $8.5 billion in 2011. It bought Nokia for $7.2 billion in 2014 and GitHub for $7.5 billion in 2018.)
...
Read the original on www.axios.com »
Ever since my teenage years, I felt as if there were a filmy curtain separating me from other people my age. I understood the words of their conversations, but I could not grasp why they said what they did. Much later I realized that I didn’t understand the subtle cues that other people were responding to.
Later in life, I discovered that some people had negative reactions to my behavior, which I did not even know about. Tending to be direct and honest with my thoughts, I sometimes made others uncomfortable or even offended them — especially women. This was not a choice: I didn’t understand the problem enough to know which choices there were.
Sometimes I lost my temper because I didn’t have the social skills to avoid it. Some people could cope with this; others were hurt. I apologize to each of them. Please direct your criticism at me, not at the Free Software Foundation.
Occasionally I learned something about relationships and social skills, so over the years I’ve found ways to get better at these situations. When people help me understand an aspect of what went wrong, and that shows me a way of treating people better, I teach myself to recognize when I should act that way. I keep making this effort, and over time, I improve.
Some have described me as being “tone-deaf,” and that is fair. With my difficulty in understanding social cues, that tends to happen. For instance, I defended Professor Minsky on an M.I.T. mailing list after someone leaped to the conclusion that he was just as guilty as Jeffrey Epstein. To my surprise, some thought my message defended Epstein. As I had stated previously, Epstein is a serial rapist, and rapists should be punished. I wish for his victims and those harmed by him to receive justice.
False accusations — real or imaginary, against me or against others — especially anger me. I knew Minsky only distantly, but seeing him unjustly accused made me spring to his defense. I would have done it for anyone. Police brutality makes me angry, but when the cops lie about their victims afterwards, that false accusation is the ultimate outrage for me. I condemn racism and sexism, including their systemic forms, so when people say I don’t, that hurts too.
It was right for me to talk about the injustice to Minsky, but it was tone-deaf that I didn’t acknowledge as context the injustice that Epstein did to women or the pain that caused.
I’ve learned something from this about how to be kind to people who have been hurt. In the future, that will help me be kind to people in other situations, which is what I hope to do.
...
Read the original on www.fsf.org »
Sixty years ago, a man went into space for the very first time.
For the USSR, Yuri Gagarin’s single orbit of the Earth was a huge achievement and propaganda coup. There will be celebrations across Russia to mark the anniversary.
Our Moscow correspondent Steve Rosenberg reports on the day a new Russian hero was born, and meets the little girl who witnessed it.
...
Read the original on www.bbc.co.uk »
In December, we announced the beta of Cloudflare Pages: a fast, secure, and free way for frontend developers to build, host, and collaborate on Jamstack sites.
It’s been incredible to see what happens when you put a powerful tool in developers’ hands. In just a few months of beta, thousands of developers have deployed over ten thousand projects, reaching millions of people around the world.
Today, we’re excited to announce that Cloudflare Pages is now available for anyone and ready for your production needs. We’re also excited to show off some of the new features we’ve been working on over the course of the beta, including: web analytics, built-in redirects, protected previews, live previews, and optimized images (oh, my!). Lastly, we’ll give you a sneak peek into what we’ll be working on next to make Cloudflare Pages your go-to platform for deploying not just static sites, but full-stack applications.
Cloudflare Pages radically simplifies the process of developing and deploying sites by taking care of all the tedious parts of web development. Now, developers can focus on the fun and creative parts instead.
Getting started with Cloudflare Pages is as easy as connecting your repository and selecting your framework and build commands.
Once you’re set up, the only magic words you’ll need are `git commit` and `git push`. We’ll take care of building and deploying your sites for you, so you won’t ever have to leave your current workflow.
With every change, Cloudflare Pages generates a new preview link and posts it to the associated pull request. The preview link makes sharing your work with others easy, whether they’re reviewing the code or the content for each change.
Every site developed with Cloudflare Pages is deployed to Cloudflare’s network of data centers in over 100 countries — the same network we’ve been building out for the past 10 years with the best performance and security for our customers in mind.
Over the past few months, our developers have been busy too, hardening our offering from beta to general availability, fixing bugs, and working on new features to make building powerful websites even easier.
Here are some of the new and improved features you may have missed.
With Cloudflare Pages, we set out to make developing and deploying sites easy at every step, and that doesn’t stop at production. Launch day is usually when the real work begins.
Built-in, free web analytics
Speaking of launch days, if there’s one question that’s on my mind on a launch day, it’s: how are things going?
As soon as the go-live button is pressed, I want to know: how many views are we getting? Was the effort worth it? Are users running into errors?
The weeks, or months after the launch, I still want to know: is our growth steady? Where is the traffic coming from? Is there anything we can do to improve the user’s experience?
With Pages, Cloudflare’s privacy-first Web Analytics are available to answer all of these essential questions for successfully running your website at scale. Later this week, you will be able to enable analytics with a single click and start tracking your site’s progress and performance, including metrics about your traffic and Core Web Vitals.
_redirects file support
Websites are living projects. As you make updates to your product names, blog post titles, and site layouts, your URLs are bound to change as well. The challenge is not letting those changes leave dead URLs behind for your users to stumble over.
To avoid leaving behind a trail of dead URLs, you should create redirects that automatically lead the user to your content’s new home. The challenge with creating redirects is coordinating the code change that changes the URL in tandem with the creation of the redirect.
You can now do both with one swift commit.
By adding a _redirects file to the build output directory for your project, you can easily redirect users to the right URL. Just add the redirects into the file in the following format:
/home / 301
/contact-me /contact 301
/blog https://www.ghost.org 301
Cloudflare Pages makes it easy to create new redirects and import existing redirects using our new support for _redirects files.
When we started Cloudflare, people believed performance and security were at odds with each other, and tradeoffs had to be made between the two. We set out to show that that was wrong.
Today, we similarly believe that working collaboratively and moving fast are at odds with each other. As the saying goes, “If you want to go fast, go alone. If you want to go far, go together.”
We previously discussed the ways in which Cloudflare Pages allows developers and their stakeholders to move fast together, and we’ve built two additional improvements to make it even easier!
Protected previews with Cloudflare Access integration
One of the ways Cloudflare Pages simplifies collaboration is by generating unique preview URLs for each commit. The preview URLs make it easy for anyone on your team to check out your work in progress, take it for a spin, and provide feedback before the changes go live.
While it’s great to shop ideas with your coworkers and stakeholders ahead of the big day, the surprise element is what makes the big day, The Big Day.
With our new Cloudflare Access integration, restricting access to your preview deployments is as easy as clicking a button.
Cloudflare Access is a Zero Trust solution — think of it like a bouncer checking each request to your site at the door. By default, we add the members of your Cloudflare organization, so when you send them a new preview link, they’re prompted with a one-time PIN that is sent to their email for authentication. However, you can modify the policy to integrate with your preferred SSO provider.
Cloudflare Access comes with 50 seats included in the free tier — enough to make sure no one leaks your new “dark mode” feature before you want them to.
Live previews with Cloudflare Tunnel
While preview deployments make it easy to share progress when you’re working asynchronously, sometimes a live collaboration session is the best way to crank out those finishing touches and last minute copy changes.
With Cloudflare Tunnel, you can expose your localhost through a secure tunnel to an easily shareable URL, so you can get live feedback from your teammates before you commit (pun intended).
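As a rough sketch (check the cloudflared docs for your version, since flags can change), exposing a local dev server looks like:

```
# Expose a local dev server on port 8000 via a shareable tunnel URL.
cloudflared tunnel --url http://localhost:8000
```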
It’s easy to get excited about all the new features you can play with, but one of the real killer features of Pages is its performance and reliability.
We got a bit of a head start on performance because we built Pages on the same network we’ve been optimizing for the past ten years. As a result, we learned a thing or two about accelerating web performance along the way.
One of the best ways to improve your site’s performance is to serve smaller content, which takes less time to transfer. One way to make your content smaller is via compression. We’ve recently introduced two types of compression to Pages:
Image compression: Since images represent some of the largest types of content we serve, serving them efficiently can have great impact on performance. To improve efficiency, we now use Polish to compress your images, and serve fewer bytes over the wire. When possible, we’ll also serve a WebP version of your image (and AVIF too, coming soon).
Gzip and Brotli: Even smaller assets, such as HTML or JavaScript can benefit from compression. Pages will now serve content compressed with gzip or Brotli, based on the type of compression the client can support.
While we’ve been offering compression for a long time now (dating all the way back to 2012), this is the first time we’re able to pre-process the assets, at the time of the build step, rather than on the fly, resulting in even better compression.
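A quick way to see the negotiation at work (the hostname here is a stand-in for your own Pages project): the client advertises the encodings it supports, and the edge responds with a matching one.

```
# Ask for Brotli or gzip and inspect which encoding the edge chose.
curl -sI https://my-site.pages.dev/ -H "Accept-Encoding: br, gzip" | grep -i content-encoding
# content-encoding: br
```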
Another way to make content smaller is by literally shrinking it.
Device-based resizing: To make users’ experiences even smoother, especially on less reliable mobile devices, we want to make sure we’re not sending large images that will only get previewed on a small screen. Our new optimization will appropriately resize the image based on whether the device is mobile or desktop.
If you’re interested in more image optimization features, we have some announcements planned for later in the week, so stay tuned.
While today’s milestone marks Cloudflare Pages as a production-ready product, like I said, that’s where our true work begins, not ends.
There are so many features we’re excited to support in the future, and we wanted to give you a small glimpse into what it holds:
GitLab / Bitbucket support
We started out by offering direct integration with GitHub to reach as many developers as we possibly could, but we want to continuously grow out the ecosystem we interact with.
Webhooks
If you’re managing all of your code and content through source control, it’s sufficient to rely on committing your code as a way to trigger a new preview. However, if you’re managing your code in one place, but the content in another, such as a CMS, you may still want to preview your content changes before they go live.
To enable you to do so, we’ll be providing an endpoint you’ll be able to call in order to trigger a brand new deployment via a webhook.
A/B testing
No matter how much local testing you’ve done, or how many co-workers you’ve received feedback from, some unexpected behavior (whether a bug or a typo) is eventually bound to slip, only to get caught in production. Your reviewers are human too, after all.
When the inevitable happens, however, you don’t want it impacting all of your users at once.
To give you better control of rolling out changes into production, we’re looking forward to offering you the ability to roll out your changes to a percentage of your traffic to gain confidence in your changes before you go to 100%.
Supporting static sites is just the beginning of the journey for Cloudflare Pages. With redirects support, we’re starting to introduce the first bit of dynamic functionality to Pages, but our ambitions extend far beyond.
Our long term goal with Pages is to make full-stack application development as breezy an experience as static site development is today. We want to make Pages the deployment target for your static assets, and the APIs that make them dynamic. With Workers and Durable Objects, we believe we have just the toolset to build upon.
We’ll be starting by allowing you to deploy a Worker function by including it in your /api or /functions directory. Over time, we’ll be introducing new ways for you to deploy Durable Objects or utilize the KV namespaces in the same way.
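The exact file layout and export convention weren’t final at the time of this announcement, so the following is only a sketch of a module-format Worker, the kind of function Pages could deploy from such a directory. The route and file placement are assumptions, not the shipped API.

```javascript
// Hypothetical handler for a Pages /functions (or /api) directory,
// written as a standard module-format Worker.
export default {
  async fetch(request) {
    const { searchParams } = new URL(request.url);
    const name = searchParams.get("name") ?? "world";
    // Return a small JSON payload, the typical shape of an API route.
    return new Response(JSON.stringify({ greeting: `Hello, ${name}!` }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```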
Imagine: your entire application — frontend, APIs, storage, data — all deployed with a single commit, easily testable in staging, and a single merge to deploy to production.
Sign up or check out our docs to get started.
The best part about this is getting to see what you build, so if you’re building something cool, make sure to pop into our Discord and tell us all about it.
...
Read the original on blog.cloudflare.com »
NVIDIA Unveils Grace: A High-Performance Arm Server CPU For Use In Big AI Systems
Kicking off another busy Spring GPU Technology Conference for NVIDIA, this morning the graphics and accelerator designer is announcing that they are going to once again design their own Arm-based CPU/SoC. Dubbed Grace — after Grace Hopper, the computer programming pioneer and US Navy rear admiral — the CPU is NVIDIA’s latest stab at more fully vertically integrating their hardware stack by being able to offer a high-performance CPU alongside their regular GPU wares. According to NVIDIA, the chip is being designed specifically for large-scale neural network workloads, and is expected to become available in NVIDIA products in 2023.
With two years to go until the chip is ready, NVIDIA is playing things relatively coy at this time. The company is offering only limited details for the chip — it will be based on a future iteration of Arm’s Neoverse cores, for example — as today’s announcement is a bit more focused on NVIDIA’s future workflow model than it is speeds and feeds. If nothing else, the company is making it clear early on that, at least for now, Grace is an internal product for NVIDIA, to be offered as part of their larger server offerings. The company isn’t directly gunning for the Intel Xeon or AMD EPYC server market, but instead they are building their own chip to complement their GPU offerings, creating a specialized chip that can directly connect to their GPUs and help handle enormous, trillion parameter AI models.
More broadly speaking, Grace is designed to fill the CPU-sized hole in NVIDIA’s AI server offerings. The company’s GPUs are incredibly well-suited for certain classes of deep learning workloads, but not all workloads are purely GPU-bound, if only because a CPU is needed to keep the GPUs fed. NVIDIA’s current server offerings, in turn, typically rely on AMD’s EPYC processors, which are very fast for general compute purposes, but lack the kind of high-speed I/O and deep learning optimizations that NVIDIA is looking for. In particular, NVIDIA is currently bottlenecked by the use of PCI Express for CPU-GPU connectivity; their GPUs can talk quickly amongst themselves via NVLink, but not back to the host CPU or system RAM.
The solution to the problem, as was the case even before Grace, is to use NVLink for CPU-GPU communications. Previously NVIDIA has worked with the OpenPOWER foundation to get NVLink into POWER9 for exactly this reason, however that relationship is seemingly on its way out, both as POWER’s popularity wanes and POWER10 is skipping NVLink. Instead, NVIDIA is going their own way by building an Arm server CPU with the necessary NVLink functionality.
The end result, according to NVIDIA, will be a high-performance and high-bandwidth CPU that is designed to work in tandem with a future generation of NVIDIA server GPUs. With NVIDIA talking about pairing each NVIDIA GPU with a Grace CPU on a single board — similar to today’s mezzanine cards — not only does CPU performance and system memory scale up with the number of GPUs, but in a roundabout way, Grace will serve as a co-processor of sorts to NVIDIA’s GPUs. This, if nothing else, is a very NVIDIA solution to the problem, not only improving their performance, but giving them a counter should the more traditionally integrated AMD or Intel try some sort of similar CPU+GPU fusion play.
By 2023 NVIDIA will be up to NVLink 4, which will offer at least 900GB/sec of cumulative (up + down) bandwidth between the SoC and GPU, and over 600GB/sec cumulative between Grace SoCs. Critically, this is greater than the memory bandwidth of the SoC, which means that NVIDIA’s GPUs will have a cache-coherent link to the CPU that can access the system memory at full bandwidth, and also allows the entire system to have a single shared memory address space. NVIDIA describes this as balancing the amount of bandwidth available in a system, and they’re not wrong, but there’s more to it. Having an on-package CPU is a major means towards increasing the amount of memory NVIDIA’s GPUs can effectively access and use, as memory capacity continues to be the primary constraining factor for large neural networks — you can only efficiently run a network as big as your local memory pool.
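A back-of-envelope sketch (our arithmetic, not NVIDIA’s figures) shows why capacity is the binding constraint:

```javascript
// Weights alone for a trillion-parameter model, ignoring
// activations, gradients, and optimizer state.
const params = 1e12;
const bytesPerParam = 2;                    // FP16
const tb = (params * bytesPerParam) / 1e12; // = 2 TB
console.log(`${tb} TB of weights alone`);
// A single 80 GB GPU holds roughly 40 billion FP16 parameters, so a
// full-bandwidth, cache-coherent path to large LPDDR5x pools is what
// lets the GPUs treat system memory as usable model memory.
```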
And this memory-focused strategy is reflected in the memory pool design of Grace, as well. Since NVIDIA is putting the CPU on a shared package with the GPU, they’re going to put the RAM down right next to it. Grace-equipped GPU modules will include a to-be-determined amount of LPDDR5x memory, with NVIDIA targeting at least 500GB/sec of memory bandwidth. Besides being what’s likely to be the highest-bandwidth non-graphics memory option in 2023, NVIDIA is touting the use of LPDDR5x as a gain for energy efficiency, owing to the technology’s mobile-focused roots and very short trace lengths. And, since this is a server part, Grace’s memory will be ECC-enabled, as well.
As for CPU performance, this is actually the part where NVIDIA has said the least. The company will be using a future generation of Arm’s Neoverse CPU cores, where the initial N1 design has already been turning heads. But other than that, all the company is saying is that the cores should break 300 points on the SPECrate2017_int_base throughput benchmark, which would be comparable to some of AMD’s second-generation 64 core EPYC CPUs. The company also isn’t saying much about how the CPUs are configured or what optimizations are being added specifically for neural network processing. But since Grace is meant to support NVIDIA’s GPUs, I would expect it to be stronger where GPUs in general are weaker.
Otherwise, as mentioned earlier, NVIDIA’s big-picture goal for Grace is significantly cutting down the time required for the largest neural network models. NVIDIA is gunning for 10x higher performance on 1 trillion parameter models, and their performance projections for a 64-module Grace+A100 system (with theoretical NVLink 4 support) would bring down training such a model from a month to three days. Or alternatively, being able to do real-time inference on a 500 billion parameter model on an 8-module system.
Overall, this is NVIDIA’s second real stab at the data center CPU market — and the first that is likely to succeed. NVIDIA’s Project Denver, which was originally announced just over a decade ago, never really panned out as NVIDIA expected. The family of custom Arm cores was never good enough, and never made it out of NVIDIA’s mobile SoCs. Grace, in contrast, is a much safer project for NVIDIA; they’re merely licensing Arm cores rather than building their own, and those cores will be in use by numerous other parties, as well. So NVIDIA’s risk is reduced to largely getting the I/O and memory plumbing right, as well as keeping the final design energy efficient.
If all goes according to plan, expect to see Grace in 2023. NVIDIA is already confirming that Grace modules will be available for use in HGX carrier boards, and by extension DGX and all the other systems that use those boards. So while we haven’t seen the full extent of NVIDIA’s Grace plans, it’s clear that they are planning to make it a core part of future server offerings.
And even though Grace isn’t shipping until 2023, NVIDIA has already lined up their first customers for the hardware — and they’re supercomputer customers, no less. Both the Swiss National Supercomputing Centre (CSCS) and Los Alamos National Laboratory are announcing today that they’ll be ordering supercomputers based on Grace. Both systems will be built by HPE’s Cray group, and are set to come online in 2023.
CSCS’s system, dubbed Alps, will be replacing their current Piz Daint system, a Xeon plus NVIDIA P100 cluster. According to the two companies, Alps will offer 20 ExaFLOPS of AI performance, which is presumably a combination of CPU, CUDA core, and tensor core throughput. When it’s launched, Alps should be the fastest AI-focused supercomputer in the world.
Interestingly, however, CSCS’s ambitions for the system go beyond just machine learning workloads. The institute says that they’ll be using Alps as a general purpose system, working on more traditional HPC-type tasks as well as AI-focused tasks. This includes CSCS’s traditional research into weather and the climate, which the pre-AI Piz Daint is already used for as well.
As previously mentioned, Alps will be built by HPE, who will be basing it on their previously announced Cray EX architecture. This would make NVIDIA’s Grace the second CPU option for Cray EX, along with AMD’s EPYC processors.
Meanwhile Los Alamos’ system is being developed as part of an ongoing collaboration between the lab and NVIDIA, with LANL set to be the first US-based customer to receive a Grace system. LANL is not discussing the expected performance of their system beyond the fact that it’s expected to be “leadership-class,” though the lab is planning on using it for 3D simulations, taking advantage of the largest data set sizes afforded by Grace. The LANL system is set to be delivered in early 2023.
...
Read the original on www.anandtech.com »
...
Read the original on github.com »
To add this web app to your iOS home screen tap the share button and select "Add to the Home Screen".
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN, please leave feedback and share.
Visit pancik.com for more.