10 interesting stories served every morning and every evening.
Over the years, Flutter has attracted millions of developers who have built user interfaces across every platform. Flutter began as a UI toolkit for mobile (iOS and Android) only. Then Flutter added support for the web. Finally, Flutter expanded to Mac, Windows, and Linux. Across this massive expansion of scope and responsibility, the Flutter team has only marginally increased its size. To help expand Flutter’s available labor and accelerate development, we’re creating a fork of Flutter, called Flock.
Let’s do some back-of-the-napkin math to appreciate the Flutter team’s labor shortage.
How many Flutter developers exist in the world, today? My guess is that it’s on the order of 1,000,000 developers. The real number is probably higher, but one million should be reasonably conservative.
How large is the Flutter team, today? Google doesn’t publish this information, but my guess is that the team is about 50 people strong.
That’s 50 people serving the needs of 1,000,000. Doing a little bit of division, that means that every single member of the Flutter team is responsible for the needs of 20,000 Flutter developers! That ratio is clearly unworkable for any semblance of customer support.
A labor shortage can always be fixed through hiring. However, due to company-wide issues at Google, the Flutter team’s head count was frozen circa 2023, and then earlier in 2024 we learned of a small number of layoffs. It seems that the team may now be expanding again, through outsourcing, but we’re not likely to see the Flutter team double or quadruple its size any time soon.
To make matters worse, Google’s corporate re-focus on AI caused the Flutter team to de-prioritize all desktop platforms. As we speak, the Flutter team is in maintenance mode for 3 of its 6 supported platforms. Desktop is quite possibly the greatest untapped value for Flutter, but it’s now mostly stagnant.
Limited labor comes at a great cost for a toolkit that has rapidly expanded its user base, along with its overall scope.
With so few developers to work on tickets, many tickets linger in the backlog. They can easily linger for years, if they’re ever addressed at all.
By the time a member of the Flutter team begins to investigate a ticket, the ticket might be years old. At that point, the Flutter team developer typically asks for further information from the person who filed the ticket. In my experience, when this happens to me, I’ve long since stopped working with the client who had the initial issue. I’ve written hundreds of thousands of lines of code since then. I often don’t even remember filing the issue, let alone the obscure details related to the original issue. The team can’t fix the bug without information from me, and it’s been too long for me to provide information to the team. So the bug gets buried for a future developer to rediscover.
Timing isn’t just an issue for eventually root-causing and fixing bugs. It’s also a major product problem. Imagine that you’re the engineering director or CTO of a company whose next release is blocked by some Flutter bug. What do you do if the team won’t work on that bug for 2 years? Well, if it’s a serious bug for your company, then you stop using Flutter. You don’t have a choice. You need to keep moving forward. Your team doesn’t know how to work on the Flutter framework, and the Flutter framework team is either unresponsive or completely non-committal about a fix. Oh well - can’t use Flutter any more. Flutter won’t survive if these kinds of experiences become common.
Flutter has two very valuable qualities. First, it’s open source, so any developer can see how any part of Flutter is implemented, and can even change it. Second, the Flutter framework is written in the same language as Flutter apps. Because of these two qualities, experienced Flutter app developers and package developers can contribute to the Flutter framework.
How many Flutter developers exist in the world today who are capable of contributing at a productive level to the Flutter framework? Conservatively, I would guess there are about 1,000 of them. In other words, there are at least 1,000 Flutter developers in the world who could conceivably be hired to the Flutter team, if the team wanted to hire that many developers.
Remember that ratio of 1 Flutter team member per 20,000 developers? If every capable Flutter framework contributor in the world regularly contributed to Flutter, that ratio of 1:20,000 would drop to 1:1,000. That’s still a big ratio, but it’s much better than what it is now.
Moreover, as more external contributors get comfortable submitting fixes and features to Flutter, they’ll tend to help train others to do the same. Thus, the support ratio would continue to move in a better direction.
If increased external contributions is the path to a better Flutter world, then why fork Flutter when everyone could just work directly with the Flutter team?
It’s a tempting proposition to set up a concerted effort to contribute directly to Flutter. After all, the Flutter team regularly touts the number of external contributions that it rolls into each release. According to the Flutter public relations effort, they’d love all those external contributions!
But, sadly, trying to work with the Flutter team delivers a different reality. While some developers have had success working with the Flutter team, many other developers have found it frustrating, if not unworkable. There are, no doubt, a number of factors that contribute to this result. Different developers will experience different issues. But here are some of them:
* Limited review labor: the developers who don’t have enough time to write code are the same developers tapped to review contributions. Therefore, it can take a long time for review or updates.
* The time crunch also seems to lend itself to contentious review conversations.
* Everything takes forever, and it always seems to be about non-critical details.
* Communication monoculture - most of the team seems to expect a certain way of communicating, which doesn’t match the variety of personalities in the world. Thus, some people have an exceptionally difficult time navigating otherwise quick and simple conversations.
The result of the aforementioned issues, and probably others that aren’t listed, is that the total number of people who have ever contributed to the Flutter framework is currently less than 1,500. That number includes people who dropped by, one time, to fix a typo in a Dart Doc and then never contributed again. That’s not the number of regular contributors who add significant value.
Whatever your experience with contributions to Flutter, one has to critically assess why a team that loves external contributions has only managed to merge contributions from 1,500 developers over a span of nearly a decade. My humble suggestion is that it’s because the inviting message of the PR team doesn’t match the experience of actually pushing a change through the team’s policies, developer availability, and technical culture.
The only people who can change this reality are the people within the Flutter organization. However, most of those people don’t actually think any of this is a problem. I know, because a number of them have expressed this to me, directly. There are a number of significant blind spots for the Flutter team, which largely revolve around the fact that members of the team have never been responsible for routinely delivering app features and fixes that are built upon Flutter. In other words, I believe there are blind spots because Flutter team members don’t actually use Flutter. Thus, the urgency around many issues isn’t appreciated, nor is the urgency and time cost associated with submitting fixes directly to Flutter as an external contributor.
If the Flutter team doesn’t recognize the contribution problem, and therefore won’t take steps to address it, then what else can be done? That’s where we find ourselves in this post, and in this effort. We’ve decided that the one thing we can do to help the labor issue is to fork Flutter.
Our fork of Flutter is called Flock. We describe Flock as “Flutter+”. In other words, we do not want, or intend, to fork the Flutter community. Flock will remain constantly up to date with Flutter, and it will add important bug fixes and popular community features that the Flutter team either can’t or won’t implement.
By forking Flutter, we get to decide what gets merged. We won’t lower the quality bar, but by controlling merge decisions, we do gain the following opportunities:
* Recruit a much larger PR review team than the Flutter team. This means faster review times.
* Recruit PR reviewers who are ready to facilitate contributions, instead of merely tolerating them.
This means support for a wider contributor audience.
* Optimize policies. E.g., don’t blindly demand design docs and conference calls when they won’t
substantially add to the effectiveness of the task at hand.
* Use contribution successes to socially promote more contributions.
* We’re all Flutter users - leverage team and company relationships to identify market priority.
As Flock ships important bug fixes and features, the Flutter team can then choose to add those to Flutter, on their schedule. The community will no longer be limited by the Flutter team’s availability, nor will the community need to beg the Flutter team to please accept a change. The Flutter team can use Flock’s solutions, or not, but all Flock users will have access to them, eliminating the urgency and desperation that your company and team face today.
Flock, as the name implies, will only go as far as the community that supports it. We would love for you to get involved.
Flock’s first step is to mirror Flutter. This means automatically mirroring the master,
beta, and stable branches, along with replicating all release tags. Additionally, once the framework is mirrored, Flock will need to automatically build and upload the engine, and make those engine binaries available to Flock users.
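At its core, this kind of mirroring is ordinary git plumbing. Here is a toy sketch of the idea, using throwaway local repositories in place of flutter/flutter and the Flock remote (the paths and repo names are ours for illustration, not Flock’s actual infrastructure):

```shell
# Toy demonstration: mirror every branch and tag from an "upstream" repo
# into a "flock" repo. Local throwaway repos stand in for the real remotes.
set -e
rm -rf /tmp/flockdemo && mkdir -p /tmp/flockdemo && cd /tmp/flockdemo

# Stand-in for flutter/flutter, with a release tag and an extra branch.
git init -q upstream
git -C upstream -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git -C upstream tag v1.0
git -C upstream branch beta

# A --mirror clone copies all refs (branches and tags), not just HEAD.
git clone -q --mirror upstream upstream-mirror.git

# Pushing with --mirror replicates every ref into the Flock repo, which is
# what a scheduled mirror job would do against the real remotes.
git init -q --bare flock.git
git -C upstream-mirror.git push -q --mirror /tmp/flockdemo/flock.git

git --git-dir=/tmp/flockdemo/flock.git tag   # the mirrored tag is now present
```

A real mirror job would simply re-run the `--mirror` push on a schedule; git makes the ref replication itself cheap, and the harder work is the engine builds layered on top.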
As we work through the mirroring process, it would be a big help if you would try building your apps with Flock. You shouldn’t see any difference between Flock and Flutter, and you can configure Flock with a tiny Flutter Version Manager (FVM) configuration.
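A setup along these lines might look as follows. This is a hypothetical sketch: the repository URL is a placeholder, and it assumes FVM’s environment override for the Flutter repository URL applies to your FVM version - follow Flock’s own instructions for the exact configuration.

```shell
# Hypothetical configuration - verify against Flock's published instructions.
# The URL below is a placeholder for wherever Flock is actually hosted, and
# FVM_FLUTTER_URL is assumed to be supported by your installed FVM version.
export FVM_FLUTTER_URL="https://example.com/flock/flock.git"

# From here, the usual FVM workflow is unchanged:
fvm install stable     # install the mirrored stable channel from the fork
fvm use stable         # pin this project to it
fvm flutter doctor     # should behave identically to stock Flutter
```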
Check our instructions to get started.
Flock needs to recruit dozens of reviewers. Reviewers are responsible for enforcing a quality bar that’s similar to Flutter’s. This includes requiring descriptive class, method, and property names, effective Dart Docs, and appropriate tests.
But we want reviewers to go even further than that. We don’t just want to tolerate contributions - we want to facilitate them. Many of us have had the experience of getting a PR 90% to the finish line only to have a Flutter team reviewer declare that it can’t merge until we do something that we don’t know how to do. It’s an awful experience, and we aim to avoid it with Flock.
We want Flock reviewers who are willing to step in and help a contributor achieve the final 10% of the PR. This doesn’t mean contributors get to be lazy. But if a contributor has done everything that they know how to do, and the PR is close to complete, then we want the reviewer to step in and provide direction for the final 10%. This is how we educate contributors and ensure that the next PR is 100% complete.
If you’d like to become a Flock reviewer, please reach out to us.
Maintaining and extending a long-lived fork of Flutter requires some number of experts who direct specific areas of the project. For example, I’m initially stepping up as the Director of Flock, as well as the Framework Lead. Jesse Ezell has stepped up as the Engine Lead.
We’d like to bring in a Flutter Tool lead, who directs extensions to the flutter CLI tool.
We’d also like to break up the engine responsibilities with a Lead per platform: Android, iOS, Mac, Windows, Linux.
If you’d like to direct efforts for an area of Flock, please reach out to us.
Let’s shift Flutter into overdrive and help make it the universal UI toolkit it should have been. Flutter has the potential to outshine every alternative in the market. But it needs the community to Flock together to help get it there. Let’s do this!
...
Read the original on flutterfoundation.dev »
Apple unveils the new iMac with M4, supercharged by Apple Intelligence and available in fresh colors
The world’s best all-in-one desktop features even more performance, a nano-texture display option, a 12MP Center Stage camera, and Thunderbolt 4 connectivity — all in a strikingly thin design
CUPERTINO, CALIFORNIA Apple today announced the new iMac, featuring the powerful M4 chip and Apple Intelligence, in its stunning, ultra-thin design. With M4, iMac is up to 1.7x faster for daily productivity, and up to 2.1x faster for demanding workflows like photo editing and gaming, compared to iMac with M1.[1] With the Neural Engine in M4, iMac is the world’s best all-in-one for AI and is built for Apple Intelligence, the personal intelligence system that transforms how users work, communicate, and express themselves, while protecting their privacy. The new iMac is available in an array of beautiful new colors, and the 24-inch 4.5K Retina display offers a new nano-texture glass option.[2] iMac features a new 12MP Center Stage camera with Desk View, up to four Thunderbolt 4 ports,[3] and color-matched accessories that include USB-C. Starting at just $1,299, now with 16GB of unified memory, the new iMac is available to pre-order today, with availability beginning Friday, November 8.
“iMac is beloved by millions of users, from families at home to entrepreneurs hard at work. With the incredible features of Apple Intelligence and the powerful performance of Apple silicon, the new iMac changes the game once again,” said John Ternus, Apple’s senior vice president of Hardware Engineering. “With M4 and Apple Intelligence, gorgeous new colors that pop in any space, an advanced 12MP Center Stage camera, and a new nano-texture glass display option, it’s a whole new era for iMac.”
The M4 chip brings a boost in performance to iMac. Featuring a more capable CPU with the world’s fastest CPU core,[4] the new iMac is up to 1.7x faster than iMac with M1. Users will feel this performance across everyday activities like multitasking between their favorite apps and browsing webpages in Safari. And with an immensely powerful GPU featuring Apple’s most advanced graphics architecture, iMac with M4 handles more intense workloads like photo editing and gaming up to 2.1x faster than iMac with M1. This also enables a smoother gameplay experience in titles like the upcoming Civilization VII. The new iMac comes standard with 16GB of faster unified memory — configurable up to 32GB. The Neural Engine in M4 is now over 3x faster than on iMac with M1, making it the world’s best all-in-one for AI, and accelerating the pace at which users can get things done.
* Families, small businesses, and entrepreneurs can fly through daily productivity tasks with up to 1.7x faster performance[1] in apps like Microsoft Excel, and up to 1.5x faster browsing performance[5] in Safari compared to iMac with M1.
* Gamers can enjoy incredibly smooth gameplay, with up to 2x higher frame rates[5] than on iMac with M1.
* Content creators can edit like never before, with up to 2.1x faster photo and video editing performance when applying complex filters and effects in apps like Adobe Photoshop[1] and Adobe Premiere Pro[5] compared to iMac with M1.
* Compared to the most popular 24-inch all-in-one PC with the latest Intel Core 7 processor, the new iMac is up to 4.5x faster.[1]
* Compared to the most popular Intel-based iMac model, the new iMac is up to 6x faster.[1]
A New Era with Apple Intelligence on the Mac
Apple Intelligence ushers in a new era for the Mac, bringing personal intelligence to the personal computer. Combining powerful generative models with industry-first privacy protections, Apple Intelligence harnesses the power of Apple silicon and the Neural Engine to unlock new ways for users to work, communicate, and express themselves on Mac. It is available in U.S. English with macOS Sequoia 15.1. With systemwide Writing Tools, users can refine their words by rewriting, proofreading, and summarizing text nearly everywhere they write. With the newly redesigned Siri, users can move fluidly between spoken and typed requests to accelerate tasks throughout their day, and Siri can answer thousands of questions about Mac and other Apple products. New Apple Intelligence features will be available in December, with additional capabilities rolling out in the coming months. Image Playground gives users a new way to create fun original images, and Genmoji allows them to create custom emoji in seconds. Siri will become even more capable, with the ability to take actions across the system and draw on a user’s personal context to deliver intelligence that is tailored to them. In December, ChatGPT will be integrated into Siri and Writing Tools, allowing users to access its expertise without needing to jump between tools.
Apple Intelligence does all this while protecting users’ privacy at every step. At its core is on-device processing, and for more complex tasks, Private Cloud Compute gives users access to Apple’s even larger, server-based models and offers groundbreaking protections for personal information. In addition, users can access ChatGPT for free without creating an account, and privacy protections are built in — their IP addresses are obscured and OpenAI won’t store requests. For those who choose to connect their account, OpenAI’s data-use policies apply.
The new iMac comes in seven vibrant colors, bringing fresh shades of green, yellow, orange, pink, purple, and blue, alongside silver. The back of iMac features bold colors designed to stand out, while the front expresses subtle shades of the new palette so users can focus on doing their best work. Every iMac comes with a color-matched Magic Keyboard and Magic Mouse or optional Magic Trackpad, all of which now feature a USB-C port, so users can charge their favorite devices with a single cable.
The expansive 24-inch 4.5K Retina display on iMac is its highest-rated feature, and for the first time, it’s available with a nano-texture glass option that drastically reduces reflections and glare, while maintaining outstanding image quality.[2] With nano-texture glass, users can place iMac in even more spaces, such as a sun-drenched living room or bright storefront.
A new 12MP Center Stage camera with support for Desk View makes video calls even more engaging. Center Stage keeps everyone perfectly centered on a video call — great for families gathered on FaceTime. Desk View makes use of the wide-angle lens to simultaneously show the user and a top-down view of their desk, which is useful for educators presenting a lesson to students, or creators showing off their latest DIY project. Rounding out the unrivaled audio and video experience is the beloved studio-quality three-microphone array with beamforming and an immersive six-speaker sound system.
On the new iMac, all four USB-C ports support Thunderbolt 4 for superfast data transfers, so users can connect even more accessories like external storage, docks, and up to two 6K external displays, creating a massive canvas with more than 50M pixels for users to spread out their work.[3] iMac also supports both Wi-Fi 6E and Bluetooth 5.3. And with the advanced security of Touch ID, users can easily and securely unlock their computer, make online purchases with Apple Pay, and download apps.[6] Additionally, Touch ID works with Fast User Switching, so customers can switch between different user profiles with just the press of a finger.
macOS Sequoia completes the new iMac experience with a host of exciting features, including iPhone Mirroring, allowing users to wirelessly interact with their iPhone, its apps, and its notifications directly from their Mac.[7] Safari, the world’s fastest browser,[8] now offers Highlights, which quickly pulls up relevant information from a site; a smarter, redesigned Reader with a table of contents and high-level summary; and a new Video Viewer to watch videos without distractions. With Distraction Control, users can hide items on a webpage that they may find disruptive to their browsing. Gaming gets even more immersive with features like Personalized Spatial Audio and improvements to Game Mode, along with a breadth of exciting titles, including the upcoming Assassin’s Creed Shadows. Easier window tiling means users can stay organized with a windows layout that works best for them. The all-new Passwords app gives convenient access to passwords, passkeys, and other credentials, all stored in one place. And users can apply beautiful new built-in backgrounds for video calls, including a variety of color gradients and system wallpapers, or upload their own photos.
Better for the Environment
The new iMac with M4 is designed with the environment in mind, with 100 percent recycled aluminum in the stand, and 100 percent recycled gold plating, tin soldering, and copper in multiple printed circuit boards. iMac meets Apple’s high standards for energy efficiency, and is free of mercury, brominated flame retardants, and PVC. New this year, the packaging of iMac is entirely fiber-based, bringing Apple closer to its goal to remove plastic from its packaging by 2025.
Today, Apple is carbon neutral for global corporate operations and, as part of its ambitious Apple 2030 goal, plans to be carbon neutral across its entire carbon footprint by the end of this decade.
Customers can pre-order the new iMac with M4 starting today, October 28, on apple.com/store and in the Apple Store app in 28 countries and regions, including the U.S. It will begin arriving to customers, and will be in Apple Store locations and Apple Authorized Resellers, beginning Friday, November 8.
iMac starts at $1,299 (U.S.) and $1,249 (U.S.) for education, and is available in green, yellow, orange, pink, purple, blue, and silver. It features an 8-core CPU, an 8-core GPU, 16GB of unified memory configurable up to 24GB, 256GB SSD configurable up to 1TB, two Thunderbolt/USB 4 ports, Magic Keyboard, and Magic Mouse or Magic Trackpad.
iMac with a 10-core CPU and 10-core GPU starts at $1,499 (U.S.) and $1,399 (U.S.) for education, and is available in green, yellow, orange, pink, purple, blue, and silver. It features 16GB of unified memory configurable up to 32GB, 256GB SSD configurable up to 2TB, four Thunderbolt 4 ports, Magic Keyboard with Touch ID, and Magic Mouse or Magic Trackpad.
Additional technical specifications — including the nano-texture display option, configure-to-order options, and accessories — are available at apple.com/mac.
With Apple Trade In, customers can trade in their current computer and get credit toward a new Mac. Customers can visit apple.com/shop/trade-in to see what their device is worth.
Apple Intelligence is available now as a free software update for Mac with M1 and later, and can be accessed in most regions around the world when the device and Siri language are set to U.S. English. The first set of features is in beta and available with macOS Sequoia 15.1, with more features rolling out in the months to come.
Apple Intelligence is quickly adding support for more languages. In December, Apple Intelligence will add support for localized English in Australia, Canada, Ireland, New Zealand, South Africa, and the U.K., and in April, a software update will deliver expanded language support, with more coming throughout the year. Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, Vietnamese, and other languages will be supported.
AppleCare+ for Mac provides unparalleled service and support. This includes unlimited incidents of accidental damage, battery service coverage, and 24/7 support from the people who know Mac best.
Every customer who buys directly from Apple Retail gets access to Personal Setup. In these guided online sessions, a Specialist can walk them through setup, or focus on features that help them make the most of their new device. Customers can also learn more about getting started with their new device with a Today at Apple session at their nearest Apple Store.
About Apple
Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six software platforms — iOS, iPadOS, macOS, watchOS, visionOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, iCloud, and Apple TV+. Apple’s more than 150,000 employees are dedicated to making the best products on earth and to leaving the world better than we found it.
Testing was conducted by Apple in September and October 2024. See apple.com/imac for more information.
Actual diagonal screen measurement is 23.5 inches. Nano-texture display is an option on models with 10-core CPU and 10-core GPU.
All four USB-C ports support Thunderbolt 4 on models with 10-core CPU and 10-core GPU.
Testing was conducted by Apple in October 2024 using shipping competitive systems and select industry-standard benchmarks.
Results are compared to previous-generation 24-inch iMac systems with Apple M1, 8-core CPU, 8-core GPU, 16GB of RAM, and 2TB SSD.
iMac with 8-core CPU and 8-core GPU can be configured with Magic Keyboard with Touch ID and Numeric Keypad, and iMac with 10-core CPU and 10-core GPU comes standard with Touch ID.
Available on Mac computers with Apple silicon and Intel-based Mac computers with a T2 Security Chip. Requires that the user’s iPhone and Mac are signed in with the same Apple Account using two-factor authentication, their iPhone and Mac are near each other and have Bluetooth and Wi-Fi turned on, and their Mac is not using AirPlay or Sidecar. Some iPhone features (e.g., camera and microphone) are not compatible with iPhone Mirroring.
Testing was conducted by Apple in August 2024. See apple.com/safari for more information.
...
Read the original on www.apple.com »
Dropshipping AliExpress watches, AI-generated SEO spam websites… marginally legal and ethical passive income schemes, which mostly generate that income for their promoters, can feel like a modern phenomenon. The promise of big money for little work is one of the fundamental human weaknesses, though, and it has been exploited by “business coaches” and “investment promoters” for about as long as the concept of investment has existed. We used to refer mostly to the “get rich quick” scheme, but fashions change with the times, and at the moment “passive income” is the watchword of business YouTubers and Instagram advertising.
And what income is more passive than vending machine coin revenue? Automated vending has had a bit of a renaissance, with social media influencers buying old machines and turning them into a business. The split of their revenue between vending machine income and social media sponsorship is questionable, but it’s definitely brought some younger eyes to an industry that is as rife with passive income scams as your average spam folder. Perhaps it’s the enforcement efforts of the SEC, or perhaps today’s youth just need a little more time to advance their art, but I haven’t so far seen a vending machine hustle quite as financialized as the post-divestiture payphone industry.
For much of the history of the telephone system, payphones were owned and operated by telephone carriers. As with the broader telephone monopoly, there were technical reasons for this integration. Payphones, more specifically called coin operated telephones, were “dumb” devices that relied on the telephone exchange for control. In the case of a manual exchange, you would pick up a payphone and ask the operator for your party–and they would advise you of the price and tell you to insert coins. The coin acceptor in the payphone used a simple electrical signaling scheme to notify the operator of which and how many coins you had inserted, and it was up to the operator to check that it was correct and connect the call. If coins needed to be returned after the call, the operator would signal the phone to do so.
With the introduction of electromechanical and then digital exchanges, coin control became automated, but payphones continued to use specialized signaling schemes to communicate with the coin control system. They had to be connected to special loops, usually called “coin lines,” with the equipment to receive and send these signals. The payphone itself was a direct extension of the telephone system, under remote control of the exchange, much like later devices like line concentrators. It was only natural that they would be operated by the same company that operated the control system they relied on.
Well, a lot of things have changed about the payphone industry. The 1968 Carterfone decision revolutionized the telephone industry by allowing the customer to connect their own device. Coin operated telephones in the traditional sense were unaffected, but Carterfone opened the door to a whole new kind of payphone.
In 1970, burglar alarm manufacturer Robotguard blazed the trail into a new telephone business. They imported a Japanese payphone that was a little different from the American models of the time: it implemented coin payment internally. Robotguard connected the payphone through one of their burglar alarm autodialers, a device that was already fully compliant with telephone industry regulations, and then hooked it up to a Southwestern Bell telephone line in a department store in St. Louis. Inserting a dime enabled the phone, and you could make a local call (the autodialer was used, in part, to limit dialing to 7 digits to ensure that only local calls were made).
Robotguard had done their homework, consulting the same law firm that represented Carterfone in the 1968 case. They believed the scheme to be legal, since the modified Japanese payphone behaved, to the telephone company, just like any other customer-owned phone. The New York Times quotes Southwestern Bell, whose attitude is perhaps best described as resignation:
Spokesmen for the Southwestern Bell Telephone Company, the operating company in that area, acknowledge that the equipment is in the store, that it is working as described and that it appears completely legal. There is nothing they can do about it at this time, they say.
There was, indeed, nothing that they could do about it. Robotguard had introduced the Customer-Owned Coin-Operated Telephone, or COCOT, to the United States. Payphones were now a competitive business.
Despite a certain air of inevitability, COCOTs had a slow start. First, there would indeed be an effort by telephone companies to legally restrict COCOTs. This was never entirely successful, but did result in a set of state regulations (and to a lesser extent, federal regulations related to long-distance calls) that made the payphone business harder to get into. More importantly, though, the technical capabilities of COCOTs were limited. The Robotguard design could charge only a fixed fee per call, which made it a practical necessity to limit the payphone to local calls. Telephone company payphones, which allowed long-distance calls at a higher rate, had an advantage. Long-distance calls were also typically billed by minute, which made it important for a payphone to impose a time limit before charging more. These capabilities were difficult to implement in a reasonably compact, robust device in the 1970s.
A number of articles will tell you that COCOTs became far more common as a result of payphone deregulation stemming from the 1984 breakup of AT&T. I would love to hear evidence to the contrary, but from my research I believe this is a misconception, or at least not the entire story. In fact, payphones were deregulated by the Telecommunications Act of 1996, but that was done in large part because COCOTs were already common and telephone companies were unhappy that conventional payphones were subject to rate regulation while COCOTs were not [1].
Divestiture did definitely open the floodgates of COCOTs, although I think that the advances in electronics around that time were also a significant factor in their proliferation. In any case, several manufacturers introduced COCOTs in 1984 and 1985.
These later-generation COCOTs were significantly more sophisticated than the mechanical system used by Robotguard. To the user, they were pretty much indistinguishable from carrier-operated payphones, charging varying rates based on call duration and local or long distance. This local simulation of the telephone exchange’s charging decisions required that each COCOT have, in internal memory, a prefix and rate table to determine charges. Early examples used ROM chips shipped by their manufacturer, but over time the industry shifted to remote programming via modems. These sophisticated, electronically-controlled coin operated phones that did not rely on an exchange-provided coin line came to be known as “smart payphones” and even, occasionally, as “smartphones.”
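The prefix-and-rate lookup a smart payphone performs can be sketched roughly like this. Everything here is invented for illustration — the prefixes, prices, and table layout are assumptions, not any vendor's actual firmware, and real COCOT rate data varied by manufacturer and region:

```python
# Hypothetical in-memory rate table: dialing prefix -> (initial charge,
# per-minute overtime rate), both in cents. Values are illustrative only.
RATE_TABLE = {
    "505": (25, 0),      # local: flat 25 cents, no time limit
    "1505": (50, 10),    # intrastate long distance
    "1": (75, 25),       # interstate long distance (default "1+" rate)
}

def charge_for_call(dialed: str, minutes: int) -> int:
    """Return the charge in cents, using longest-prefix match to pick
    the rate, as a smart payphone's rating logic might."""
    best = max((p for p in RATE_TABLE if dialed.startswith(p)),
               key=len, default=None)
    if best is None:
        raise ValueError("no rate for dialed number")
    initial, per_minute = RATE_TABLE[best]
    # The first minute is covered by the initial deposit; additional
    # minutes accrue the overtime rate.
    return initial + per_minute * max(0, minutes - 1)
```

The longest-prefix match is what lets a single table distinguish local, intrastate, and interstate calls without any help from the exchange.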
Smart payphones greatly simplified payphone operations and were even adopted by the established telephone companies, where they could save money compared to the more complex exchange-controlled system. But they also made COCOTs completely practical, as good to the consumer as any other payphone. As COCOTs became remotely programmable, the payphone business started to feel like a way to generate–dare I say it–passive income. All you had to do was collect the coins. Well, that and keep the phone in working order, which would become a struggle for the thinly staffed and overleveraged Payphone Service Providers (PSPs) that would come to dominate the industry.
One of the new entrants into the payphone business was a company that specialized in exactly the kind of remote management these new smart payphones required: Jaroth Inc., which would do business as Pacific Telemanagement Solutions or PTS. Today, PTS is the largest PSP in the United States, but that isn’t saying a whole lot. They enjoyed great success in the 1990s, though, and were so well-positioned as a PSP in the ’00s that they often purchased the existing payphone fleet from former Bell Operating Companies that decided to abandon the payphone business.
The 1990s were a good time for payphones, and they were also a good time for investment scams. Loose enforcement of regulations around investment offerings, the Dot Com Boom, and a generally strong economy created a lot of opportunities for “telecom entrepreneurs” that were more interested in moving money than information.
The problem of 1990s telecommunications companies funded in unscrupulous ways is not at all unique to payphones, although it did reach a sort of apex there. I will take this opportunity to go on a tangent, one of those things that I have always wanted to write an article about but have never quite had enough material for: MMDS, the Multichannel Multipoint Distribution Service.
MMDS was, essentially, cable television upconverted to a microwave band and then put through directional antennas. It was often marketed as “Wireless Cable,” sort of an odd term, but it was intended as a direct competitor to conventional cable television. I think it’s fair to call it an ancestor of what we now call WISPs, using small roof-mounted parabolic antennas as an alternative to costly CATV outside plant. Some MMDS installations literally were early WISPs: MMDS could carry a modified version of DOCSIS.
Wireless cable got a pretty bad rap, though. If you pay attention to WISPs, you will no doubt have noticed that while the low capital investment required can enable beneficial competition, it also enables a lot of companies that you might call “fly by night.” Some start out with good intentions and just aren’t up to the task, while some come from “entrepreneurs” with a history of fraud, but either way they end up collecting money and then disappearing with it.
MMDS had a huge problem with shady operators, and more often of the “history of fraud” type. Supposed MMDS startups would take out television and newspaper ads nationwide offering an incredible opportunity to invest in this exciting new industry. The scam took different forms in the details, but the most common model was to sell “shares” of a new MMDS company in the four-to-five-digit range. Investors were told that the company was using the capital to build out their network and would shortly have hundreds of customers.
In practice, most of these “MMDS startups” were in cities with powerful incumbent cable companies and, even worse, preexisting MMDS operators using the limited spectrum available for such a wideband service. They never had any chance of getting a license, and didn’t have anyone with the expertise to actually build an MMDS system even if they got one. They just pocketed the money and were next seen on a beach in Mexico or in prison, depending on the whims of fortune.
These wireless cable schemes became so common, and so notorious, that if you asked a lot of people what wireless cable was the two answers you’d get are probably “no idea” and “an old scam.”
It only takes a brief look at newspaper archives to find that the payphone industry was a little sketchy. There are constant, nationwide, near-identical classified ads with text like “buy and retire now” and “$150k yearly potential” and “CALL NOW!”. Sometimes more than one appears back to back, and they’re still nearly identical. None of these ads give a company name or really anything but a phone number, and the phone numbers repeat so infrequently that I suspect the advertisers were intentionally rotating them. This was pretty much the Craigslist “work from home” post of the era.
To understand payphone economics better, let’s talk a little about how the payphone business operated. Telephone companies had long run payphones on the same payment model, by finding a location for the payphone (or being contacted by the proprietor of a location) and then offering the location a portion of revenue. In the case of incumbent telcos, this was often a fixed rate per call. So someone owned the location and the payphone operator paid them in the form of a royalty.
COCOTs enabled a somewhat more complex model. A COCOT might be located in a business, connected to a telephone company line, and remotely programmed by a service provider, all of which were different companies from the person that actually collected the money. The revenue had to get split between all of these parties somehow, but COCOTs weren’t regulated and that was all a matter of negotiation.
Much like the vending machine industry today, one of the most difficult parts of making money with a payphone was actually finding a good location–one that wasn’t already taken by another operator. As more and more PSPs spread across the country, this became more and more of a challenge. So you can imagine the appeal of getting into the payphone hustle without having to do all that location scouting and negotiation. Thus all the ads for payphone routes for sale… ostensibly a turnkey business, ready to go.
Ah, but people with turnkey, profitable businesses don’t tend to sell them. Something is up.
Not all of these were outright scams, or at least I assume some of them weren’t. There probably were some PSPs that financed expansion by selling or leasing rights to some of their devices. But there were also a lot of… well, let’s talk about the second largest PSP of the late ’90s.
Somewhere around 1994, Charles Edwards of Atlanta, Georgia had an idea. His history is obscure, but he seems to have been an experienced salesman, perhaps in the insurance industry. He put his talent for sales to work raising capital for ETS Payphones, Inc., which would place and operate payphones on the behalf of investors.
The deal was something like this: ETS identified locations for payphones and negotiated an agreement to place them. Then, they sold the payphone itself, along with rights to the location, to an investor for five to seven thousand dollars a pop. ETS would then operate and maintain the payphone while paying a fixed monthly lease to the investor who had purchased it–something like $83 a month.
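Back-of-the-napkin arithmetic on those terms shows why the pitch worked: a fixed $83 a month on a $5,000 to $7,000 phone is, on paper, a guaranteed annual yield somewhere between 14% and 20%, before the phone has collected a single coin.

```python
# Implied annual yield of the lease deal described above: a fixed
# $83/month payment on a $5,000-$7,000 purchase price.
monthly_lease = 83.0
for price in (5000.0, 7000.0):
    annual_yield = monthly_lease * 12 / price
    print(f"${price:,.0f} phone -> {annual_yield:.1%} per year")
# -> $5,000 phone -> 19.9% per year
# -> $7,000 phone -> 14.2% per year
```

Returns like that, promised as fixed payments with no work required, are exactly the profile regulators tell you to treat as a red flag.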
It was a great deal for the investors–they didn’t need any expertise or really to do any work, since ETS arranged the location, installed the phones, and even collected the coins. In fact, most investors purchased phones in cities far from where they lived, such was the convenience of the ETS model. There was virtually no risk for investors, either. ETS promised a monthly payment up front, and the contract said that they would refund the investor if the payphone didn’t work out.
The ETS network was far larger than just Edwards could manage. Most of the investment deals were sold by independent representatives, the majority of them insurance agents, who could pick it up as a side business to earn some commission. Edwards sold nearly 50,000 payphones on this basis, many of them in deals of over $100,000. Small-time investors convinced of the value by their insurance agents, many of them retirees, put over $300 million into ETS from 1996 to 2000.
There was, as you might have guessed, a catch. One wonders if the payphones were even real. I think that at least many of them were; ETS ran job listings for payphone technicians in multiple cities and occasionally responded to press inquiries and complaints about malfunctioning payphones bearing their logo. Besides, the telecom industry recognized ETS as a huge PSP in terms of both installed base and call volume.
What definitely wasn’t real was the revenue. ETS was a Ponzi scheme. In 2000, the SEC went after Charles Edwards, showing that ETS had never been profitable. Edwards sponsored a NASCAR team and directed millions of dollars in salary and consulting fees to himself, but in the first half of 2000 ETS lost $33 million. The monthly lease payments to investors were being made from the capital put in by newer investors, and even that was drying up.
SEC v. ETS went on for six years, in good part due to an appeal to the Supreme Court based on ETS’ theory that a contract that paid a fixed, rather than variable, monthly rate could not be considered a security. In 2006, Charles Edwards was convicted of 83 counts of wire fraud and sentenced to thirteen years in prison.
Edwards was far from the only coin-op fraudster. ETS was not unusual except in that it managed to be the largest. When a class-action firm and several state attorneys general went after ETS, their press releases almost always mentioned a few other similar payphone schemes facing similar legal challenges. Remember all of those classified ads? I suspect some of them were ETS, but ETS also had a more sophisticated sales operation than two-line classifieds. Most of them were probably from competitors.
The payphone industry crashed alongside ETS; ETS almost certainly would have collapsed (albeit likely more slowly) even if it had been above board. Increasing cellphone ownership from the ’90s to ’00s made payphones largely obsolete, and more and more established telcos and PSPs decided to drop them. One of the reasons for PTS’s ascent was its willingness to buy out operators who wanted out: in 2008, PTS bought most of AT&T’s fleet. In 2011, they bought most of Verizon’s fleet. Almost every incumbent telephone company got out of the payphone business and most of them sold to PTS.
Given all that, you might think that payphone scams were only a thing of the ’90s. And they mostly were, but you can imagine that there was an opportunity for anyone who could adapt the ETS model to the internet age.
Pantheon Holdings did just that. It’s even more difficult to untangle the early days of Pantheon than those of ETS. Pantheon operated through a variety of shell companies and brands, but “the Internet Machine Company” was perhaps the most to the point. Around 2005, Pantheon built “internet kiosks” where customers could check their email, print documents, and even make phone calls for a nominal cash or credit card payment. Sometimes called “global business centers,” these kiosks were presented as an exciting business opportunity to mostly elderly investors who were given the opportunity to buy one for just $18,000.
Once again, the kiosks were real, but the revenue was not. Pantheon placed the machines in low-traffic locations and did nothing to market them. By 2009, more than a dozen people had been convicted of fraud in relation to the Internet Machines.
Pantheon kiosks still turn up on the junk market.
[1] I spent quite a bit of time researching the history of payphone regulation to try to understand exactly what did change in 1984, how many COCOTs operated and on what legal basis from 1970-1984, etc. I did not have much success. What I can tell is that COCOTs were very rare prior to 1984 (so rare that the FCC apparently didn’t know of any, according to a 1984 memo, despite the 1970 example), and by the late ’80s were very common. The FCC seems to have taken the view, in 1984, that COCOTs had always been legal, and just weren’t being made or used on any significant scale. That’s somewhat inconsistent, though, with the fact that suddenly after 1984 divestiture a bunch of companies started making COCOTs for the first time. My best guess right now is that from 1970-1984 COCOTs were probably legal but were something of a gray area because of the lack of any regulations specifically applying to them. Some combination of divestiture broadly “shaking up” the phone industry, electronics making COCOTs much more feasible, and who knows what else led multiple companies to get into the COCOT business in the mid-’80s. That led the FCC to issue a series of regulatory opinions on COCOTs that consistently upheld them as legal, culminating in the 1996 act dropping payphone regulation entirely.
sincerely,
j. b. crawford
Support me on Ko-Fi and receive EYES ONLY, a monthly or more special edition.
Discuss this, complain, etc: #computer.rip:waffle.tech on Matrix
Me, elsewhere: Mastodon, Pixelfed, YouTube
This website is begrudgingly generated by the use of software. Letters to the
editor are welcome via facsimile to +1 (505) 926-5492 or mail to PO Box 26924,
Albuquerque, NM 87125.
...
Read the original on computer.rip »
A return to hand-written notes by learning to read & write
We present a model to convert photos of handwriting into a digital format that reproduces component pen strokes, without the need for specialized equipment.
Digital note-taking is gaining popularity, offering a durable, editable, and easily indexable way of storing notes in a vectorized form. However, a substantial gap remains between digital note-taking and traditional pen-and-paper note-taking, a practice still favored by a majority of people. Bridging this gap by converting a note taker’s physical writing into a digital form is a process called derendering. The result is a sequence of strokes, or trajectories of a writing instrument like a pen or finger, recorded as points and stored digitally. This is also known as an “online” representation of writing, or “digital ink”.

The conversion to digital ink offers users who still prefer traditional handwritten notes access to their notes in a digital form. Instead of simply using optical character recognition (OCR), which would allow the writing to be transcribed to a text document, capturing the handwritten documents as a collection of strokes makes it possible to reproduce them in a form that can be edited freely by hand in a way that is more natural. It allows the user to create documents with a realistic look that captures their handwriting style, rather than simply a collection of text. This representation allows the user to later inspect, modify or complete their handwritten notes, which gives their notes enhanced durability, seamless organization, and integration with other digital content (images, text, links) or digital assistance.

For these reasons, this field has gained significant interest in both academia and industry, with software solutions that digitize handwriting and hardware solutions that leverage smart pens or special paper for capture.
The need for additional hardware and an accompanying software stack is, however, an obstacle to wider adoption, as it both creates onboarding friction and carries additional expense for the user.

With this in mind, in “InkSight: Offline-to-Online Handwriting Conversion by Learning to Read and Write”, we propose an approach to derendering that can take a picture of a handwritten note and extract the strokes that generated the writing, without the need for specialized equipment. We also remove the reliance on typical geometric constructs, where gradients, contours, and shapes in an image are utilized to extract writing strokes. Instead, we train the model to build an understanding of “reading”, so it can recognize written words, and of “writing”, so it can output strokes that resemble handwriting. This results in a more robust model that performs well across diverse scenarios and appearances, including challenging lighting conditions, occlusions, etc. You can access the model and the inference code on our GitHub repo.
The key goal of this approach is to capture the stroke-level trajectory details of handwriting. The user can then store the resulting strokes in the note taking app of their choice.
Left: Offline handwriting. Right: Output digital ink (online handwriting). In every word, character colors transition from red to purple, following the rainbow sequence, ROYGBIV. Within each stroke, the shade progresses from darker to lighter.
Under the hood, we apply an off-the-shelf OCR model to identify handwritten words, then use the model to convert them to strokes. To foster reproducibility, reusability, and ease of adoption, we combine the widely popular and readily available ViT encoder with an mT5 encoder-decoder.
While the fundamental concept of derendering appears straightforward — training a model that generates digital ink representations from input images — the practical implementation for arbitrary input images presents two significant challenges:

Limited supervised data: Acquiring paired data with corresponding images and ground truth digital ink for supervised training can be expensive and time-consuming. To our knowledge, no datasets with sufficient variety exist for this task.

Scalability to large images: The model must effectively handle arbitrarily large input images with varying resolutions and amounts of content.
To address the first problem while avoiding onerous data collection, we propose a multi-task training setup that combines recognition and derendering tasks. This enables the model to generalize to derendering tasks with various styles of images as input, and injects the model with both semantic understanding and knowledge of the mechanics of writing handwritten text.

This approach thus differs from methods that rely on geometric constructs, where gradients, contours, and shapes in an image are utilized to extract writing strokes. Learning to read enhances the model’s capability to precisely locate and extract textual elements from the images. Learning to write ensures that the resulting vector representation, the digital ink, closely aligns with the typical human approach to writing in terms of physical dynamics and stroke order. Combined, these allow us to train a model in the absence of large amounts of paired samples, which are difficult to obtain.
One solution to the problem of scalability is to train a model with very high-resolution input images and very long output sequences. However, this is computationally prohibitive. Instead, we break down the derendering of a page of notes into three steps: (1) OCR to extract word-level bounding boxes, (2) derendering each of the words separately, and (3) replacing the offline (pixel) representation of the words with the derendered strokes using the color coding described above to improve visualization.

To narrow the domain gap between the synthetic images of rendered inks and the real photos, we augment the data in tasks that take rendered ink as input. Data augmentation is done by randomizing the ink angle, color, stroke width, and by adding Gaussian noise and cluttered backgrounds.
We create a training mixture that comprises five different task types. The first two tasks are derendering tasks (i.e., they generate a digital ink output). One uses only an image as input and the other uses both an image and the accompanying text that has been recognized by the OCR model. The following two tasks are recognition tasks that produce text output, the first of which leverages real images and the latter, synthetic ones. Finally, a fifth task is a combination of recognition and derendering, hence a mixed task with text-and-ink output.

Each type of task utilizes a task-specific input text, enabling the model to distinguish between tasks during both training and inference. Below you will find a recognition and a derendering task.
Derendering with text: Takes an image and a text input and outputs the ink that would generate that text in the style of the image.
Recognition of synthetic images: Takes an image and recognizes what is written within.
To train the system, we pair images of text and corresponding digital ink. The digital ink is sampled from real-time writing trajectories and subsequently represented as a sequence of strokes. Each stroke is represented by a sequence of points, obtained by sampling from the writing or drawing trajectory at a constant rate (e.g., 50 points per second). The corresponding image is created by rendering the ink - creating a bitmap at a prespecified resolution. This creates a pixel-stroke correspondence, that is a precursor for the model input-output pairs.
A further necessary step, and a unique one for this modality, is the ink tokenizer, which represents the points in a format that is friendly to a large language model (LLM). Each point is converted into two tokens, one each encoding its x and y coordinates. The token sequence for this ink begins with b, signifying the beginning of the stroke, followed by the tokens for the coordinates of the sampled points.
Illustration of the ink tokenization for a single-stroke ink. The dark red ink depicts the ink stroke, with numbered circles marking sampled points after time resampling. The color gradient of the sampled points (1–7) indicates the point order. Each point is represented with two tokens encoding its coordinates x (left half of the shaded box) and y (right half). The token sequence for this ink begins with b, signifying the beginning of the stroke, followed by the tokens for coordinates of sampled points.
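To make the two-tokens-per-point scheme concrete, here is a small illustrative sketch. The token spellings (`b`, `x10`, `y6`, …) are invented stand-ins; the model’s real coordinate vocabulary is not shown in this post.

```javascript
// Hypothetical sketch of the ink tokenization described above.
// The 'b' token opens a stroke; each sampled point then contributes
// two tokens, one for its x coordinate and one for its y coordinate.
// The token names themselves are invented for illustration.
function tokenizeStroke(points) {
  const tokens = ['b'];
  for (const { x, y } of points) {
    tokens.push(`x${Math.round(x)}`, `y${Math.round(y)}`);
  }
  return tokens;
}

// a 3-point stroke becomes 1 + 3 * 2 = 7 tokens
const tokens = tokenizeStroke([
  { x: 10.2, y: 5.7 },
  { x: 12, y: 6 },
  { x: 14, y: 8 },
]);
// → ['b', 'x10', 'y6', 'x12', 'y6', 'x14', 'y8']
```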
To evaluate the performance of our approach, we first collected an evaluation dataset. We started with OCR data, and then added paired samples that we collected manually by asking people to trace text images they were shown (human-generated traces).

We then trained three variants of the model: Small-p (∼340M parameters, “-p” for “public” setup), Small-i (“-i” for “in-house”), and Large-i (∼1B parameters). We compared our approach to a General Virtual Sketching (GVS) baseline.

We show that the vector representations produced by our system are both semantically and geometrically similar to the input images, and are similar to human-generated digital ink data, as measured by both automatic and human evaluations.
We show the performance of our models and GVS on two public evaluation datasets, IAM and IMGUR5K, and an out-of-domain dataset of sketches. Our models mostly produce results that accurately reflect the text content, disregarding semantically irrelevant background. They can also handle occlusions, highlighting the benefit of the learned reading prior. In contrast, GVS produces multiple duplicate strokes and has difficulty distinguishing between background and foreground. Our Large-i model is further able to retain more details and accommodate more diverse image styles. See the paper for more examples.
Comparison between performance of GVS, Small-i, Small-p, and Large-i on two public evaluation datasets (rows 1–3, IAM; rows 4–6, IMGUR5K).
Out-of-domain behavior: Sketch derendering for Small-p, Small-i, Large-i and GVS. Our models are mostly able to derender simple sketches, however they do still exhibit significant artifacts like extraneous or misaligned strokes.
At present, the field has not established metrics or benchmarks for quantitative evaluation of this task. So, we conduct both human and automated evaluation to compare the similarity of our model output to the original image and to human-generated digital inks.

Here we present the human evaluation results, with numerous other results derived from automated evaluations and an ablation study in our paper. We performed a human evaluation of the quality of the derendered inks produced by the three model variants. We used the “golden” human traced data from the HierText dataset as the control group and the output of our model on these samples as the experimental group.
Comparison of the performance of our models (Small-p, Small-i and Large-i) and manual tracing on two samples of text of varying difficulty from the HierText dataset.
In the figure above, notice the error in the quote for all models on the top row (the double-quote mark), which the human tracing got correct. On the bottom row the situation is reversed, with the human tracing focusing solely on the main word, missing most other elements. The human tracing is also not perfectly aligned with the underlying image, emphasizing the complexity and tracing difficulty of the handwritten parts of the HierText dataset.

Evaluators were shown the original image alongside a rendered digital ink sample, which was either model-generated or human-traced (unknown to the evaluators). They were asked to answer two questions: (1) Is the digital ink output a reasonable tracing of the input image? (Answers: “Yes, it’s a good tracing,” “It’s an okay tracing, but has some small errors,” “It’s a bad tracing, has some major artifacts.”) (2) Could this digital ink output have been produced by a human? (Answers: “Yes” or “No”.) The evaluation included 16 individuals familiar with digital ink, but not involved in this research. Each sample was evaluated by three raters and aggregated with majority voting.
The results show that a majority of derendered inks generated with the Large-i model perform about as well as human-generated ones. Moreover, 87% of the Large-i outputs are marked as good or having only small errors.
In this work we present a first-of-its-kind approach to convert photos of handwriting into digital ink. We propose a training setup that works without paired training data. We show that our method is robust to a variety of inputs, can work on full handwritten notes, and generalizes to out-of-domain sketches to some extent. Furthermore, our approach does not require complex modeling and can be constructed from standard building blocks.
We want to thank all the authors of this work, Arina Rak, Julian Schnitzler, and Chengkun Li, who formed a student team working with Google Research for the duration of the project, as well as Claudiu Musat, Henry Rowley and Jesse Berent. All authors, with the exception of the student team, are now part of Google DeepMind.
...
Read the original on research.google »
In this article I’ll discuss different strategies for incrementally adding Rust into a server written in another language, such as JavaScript, Python, Java, Go, PHP, or Ruby. The main reason to do this is that you’ve profiled your server and identified a hot function that isn’t meeting your performance requirements because it’s bottlenecked by the CPU, and the usual techniques of memoizing the function or improving its algorithm aren’t feasible or effective in your situation. You’ve concluded that it would be worth investigating swapping out the function implementation for something written in a more CPU-efficient language, like Rust. Great, then this is definitely the article for you.
The strategies are ordered in tiers, where “tier” is short for “tier of Rust adoption.” The first tier would be not using Rust at all. The last tier would be rewriting the entire server in Rust.
The example server which we’ll be applying and benchmarking the strategies on will be implemented in JS, running on the Node.js runtime. The strategies can be generalized to any other language or runtime though.
Let’s say we have a Node.js server with an HTTP endpoint that takes a string of text as a query parameter and returns a 200px by 200px PNG image of the text encoded as a QR code.
Here’s what the server code would look like:
const express = require('express');
const generateQrCode = require('./generate-qr.js');

const app = express();

app.get('/qrcode', async (req, res) => {
  const { text } = req.query;
  if (!text) {
    return res.status(400).send('missing "text" query param');
  }
  if (text.length > 512) {
    return res.status(400).send('text must be
And here’s what the hot function would look like:
const QRCode = require('qrcode');

/**
 * @param {string} text - text to encode
 * @returns {Promise<Buffer>} - QR code
 */
We can hit that endpoint by calling:
Which will correctly produce this QR code PNG:
Anyway, let’s throw tens of thousands of requests at this server for 30 seconds and see how it performs:
Since I haven’t described my benchmarking methodology, these results are meaningless on their own; we can’t say whether this is “good” or “bad” performance. That’s okay, since we don’t care about the absolute numbers; we’re going to use these results as a baseline to compare all of the following implementations against. Every server is tested in the same environment, so relative comparisons will be accurate.
Regarding the abnormally high memory usage, it’s because I’m running Node.js in “cluster mode”, which spawns 12 processes for each of the 12 CPU cores on my test machine, and each process is a standalone Node.js instance which is why it takes up 1300+ MB of memory even though we have a very simple server. JS is single-threaded so this is what we have to do if we want a Node.js server to make full use of a multi-core CPU.
For this strategy we rewrite the hot function in Rust, compile it as a standalone CLI tool, and then call it from our host server.
Let’s start by rewriting the function in Rust:
/** qr_lib/lib.rs **/
use qrcode::{QrCode, EcLevel};
use image::Luma;
use image::codecs::png::{CompressionType, FilterType, PngEncoder};
pub type StdErr = Box<dyn std::error::Error>;
/** qr_cli/main.rs **/
use std::{env, process};
use std::io::{self, BufWriter, Write};
use qr_lib::StdErr;
fn main() -> Result<(), StdErr> {
We can use this CLI like so:
qr-cli https://youtu.be/cE0wfjsybIQ?t=74 > crab-rave.png
Now let’s update the hot function in our host server to call this CLI:
const { spawn } = require('child_process');
const path = require('path');
const qrCliPath = path.resolve(__dirname, './qr-cli');

/**
 * @param {string} text - text to encode
 * @returns {Promise<Buffer>} - QR code
 */
Now let’s see how this change affected performance:
Wow, I was not expecting throughput to increase by 76%! This is a very caveman-brain strategy so it’s funny to see that it was that effective. Average response size also halved, from 1506 bytes to 778 bytes; the compression algo in the Rust library must be better than the one in the JS library. We’re serving significantly more requests per second and returning significantly smaller responses, so I’d say this is a great result.
For this strategy we’ll compile the Rust function into a Wasm module, and then load and run it from the host server using a Wasm runtime. Some links to Wasm runtimes across different languages:
Since we’re integrating into a Node.js server let’s use wasm-bindgen to generate the glue code that our Rust Wasm code and our JS code will use to interact with each other.
/** qr_wasm_bindgen/lib.rs **/
use wasm_bindgen::prelude::*;
#[wasm_bindgen(js_name = generateQrCode)]
pub fn generate_qr_code(text: &str) -> Result<Vec<u8>, JsError> {
After compiling that code using wasm-pack, we can copy the built assets over to our Node.js server and use them in the hot function like this:
const wasm = require('./qr_wasm_bindgen.js');
/**
 * @param {string} text - text to encode
 * @returns {Buffer} - QR code
 */
module.exports = function generateQrCode(text) {
  return Buffer.from(wasm.generateQrCode(text));
};
Using Wasm doubled our throughput compared to the baseline! However, the jump in performance compared to the earlier caveman-brain strategy of calling a CLI tool is smaller than I would have expected.
Anyway, while wasm-bindgen is an excellent JS-to-Rust Wasm binding generator, there’s no equivalent of it for other languages such as Python, Java, Go, PHP, Ruby, etc. I don’t want to leave those folks high and dry, so I’ll explain how to write the bindings by hand. Disclaimer: the code is going to get ugly, so unless you’re really interested in seeing how the sausage is made you can just skip over the next section.
The funny thing about Wasm is that it only supports four data types: i32, i64, f32, and f64. Yet for our use-case we need to pass a string from the host to a Wasm function, and the Wasm function needs to return an array to the host. Wasm doesn’t have strings or arrays. So how are we supposed to solve this problem?
The answer hinges on a couple of insights:
* The Wasm module’s memory is shared between the Wasm instance and the host, both can read and modify it.
* A Wasm module can only request up to 4GB of memory, so every possible memory address fits in an i32, which is why this data type is also used as a memory address pointer.
If we want to pass a string from the host to a Wasm function the host has to directly write the string into the Wasm module’s memory, and then pass two i32s to the Wasm function: one pointing to the string’s memory address and another specifying the string’s byte length.
And if we want to pass an array from a Wasm function to the host, the host first needs to provide the Wasm function an i32 pointing to the memory address where the array should be written, and then when the Wasm function completes it returns an i32 which represents the number of bytes that were written.
However, now we have a new problem: when the host writes to the Wasm module’s memory, how can it ensure it doesn’t overwrite memory that the Wasm module is using? For the host to be able to safely write to memory, it must first ask the Wasm module to allocate space for it.
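Before looking at the real module, the shared-memory idea can be demonstrated self-contained in plain JS, with no compiled Wasm at all: a `WebAssembly.Memory` is just a growable buffer that both sides can read and write.

```javascript
// Self-contained illustration of passing a string via shared Wasm memory.
// No compiled module is involved; this only shows the host's side of the
// (pointer, length) convention described above.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB

// host "passes a string": write its UTF-8 bytes at some offset...
const bytes = new TextEncoder().encode('hello wasm');
new Uint8Array(memory.buffer).set(bytes, 0);
// ...and hands the callee two i32s: a pointer and a byte length
const ptr = 0;
const len = bytes.length;

// a Wasm function receiving (ptr, len) would see exactly these bytes;
// here we read them back from the same buffer to show the round trip
const roundTripped = new TextDecoder().decode(
  new Uint8Array(memory.buffer, ptr, len)
);
// roundTripped === 'hello wasm'
```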
Okay, now with all of that context out of the way we can finally look at this code and actually understand it:
/** qr_wasm/lib.rs **/
use std::{alloc::Layout, mem, slice, str};
// host calls this function to allocate space where
// it can safely write data to
#[no_mangle]
pub unsafe extern "C" fn alloc(size: usize) -> *mut u8 {
    let layout = Layout::from_size_align_unchecked(
        size * mem::size_of::<u8>(),
After compiling this Wasm module here’s how we’d use it from JS:
const path = require('path');
const fs = require('fs');
// fetch Wasm file
const qrWasmPath = path.resolve(__dirname, './qr_wasm.wasm');
const qrWasmBinary = fs.readFileSync(qrWasmPath);
// instantiate Wasm module
const qrWasmModule = new WebAssembly.Module(qrWasmBinary);
const qrWasmInstance = new WebAssembly.Instance(
qrWasmModule,
// JS strings are UTF16, but we need to re-encode them
// as UTF8 before passing them to our Wasm module
const textEncoder = new TextEncoder();
// tell Wasm module to allocate two buffers for us:
// - 1st buffer: an input buffer which we'll write UTF8 strings
//   into that the generateQrCode function will read
// - 2nd buffer: an output buffer that the generateQrCode function
//   will write QR code PNG bytes into and that we'll read
const textMemLen = 1024;
const textMemOffset = qrWasmInstance.exports.alloc(textMemLen);
const outputMemLen = 4096;
...
Read the original on github.com »
This has been a hard post for me to write after participating in WordPress since before I even started a career in tech and, until 3 months ago, for my entire tech career. That said, it has been in the making for quite a while now and it is time that I make it official.
I’ve officially left the WordPress project after 14+ years of contributing, including:
* Over 11 years as mostly the sole moderator for the official WordPress jobs site
It’s true that I had largely been moving away from the WordPress project since at least 2017. I think that is when I realized just how dishonest so much of the “community” around WordPress really is.
I’ve watched people pour their lives into giving back only to have it all tossed out because their important work isn’t what Matt wanted people to focus on.
I’ve watched good people try to make the community stronger and protect users by contributing to privacy, accessibility, governance, and so many more vital areas, only to have not only their contributions ignored, but the contributors themselves abused and pushed out of the community entirely for their basic advocacy.
I’ve watched WordPress company after WordPress company claim to be “better” in how it treats its people. The truth is nearly all of them used WordPress’ virtues as an excuse to underpay and abuse people. From some of the most respected product companies to some of the most prominent agencies, I’ve watched them chew up good people with threats that they “aren’t good enough to leave” and the like to justify low pay and benefits.
I’ve watched a full cult form around Automattic, the company behind wordpress.com. In 2014 I even applied to work there but by that point I was already at a stage where I didn’t trust the org due to abuse I had seen a friend go through. I confess I took the paid trial but I intentionally did not take it seriously and I accepted another role before the trial started. All that is to say, Automattic was never honest about who it is so I really didn’t feel too bad at the time about going through such motions. They had the chance to change my mind then but that not only didn’t happen but the whole experience instead lowered my respect for the org even further.
The list goes on and on but WordPress was never an honest community, yet I stayed, for far too long.
I joined WP Engine in 2018 because it was the one company that really did seem to be honest about who they were. Like every other company they were in it to make money, but unlike every other company they didn’t hide that fact behind abusive language. They didn’t claim I was “family.” They didn’t claim their work was virtuous and therefore somehow “better” than non-WordPress orgs. No, they said they wanted to be the biggest host and went after that with the best pay I saw in the WordPress ecosystem and interesting work on top of it.
I left WP Engine in July of this year for a lot of reasons, but I don’t regret why I joined them. I stand by the fact that they were the most honest company in WordPress, an ecosystem where honesty is often harder to come by than in any other I’ve worked in.
So that brings us to the current WordPress implosion. There are plenty of folks writing the timeline of events of what is going on. That isn’t why I’m writing this post. That said, up until September I was happily still working on Kana, my WordPress development environment, and exploring a few plugin ideas I’ve had, even if WordPress isn’t the best tool for blogging anymore.
The utter hypocrisy of Matt Mullenweg’s actions isn’t really surprising to anyone who has watched Automattic for the last decade or more, but it is my final straw. Yes, I should’ve left earlier when I saw friends hurt. Somehow I always thought my actions would help the situation. It was a naive position, I now realize.
I won’t say WP Engine is blameless in the wider world of WordPress and open source, I didn’t leave there on a whim after all. The nature of the attack on them now, however, is beyond the pale. It finally shows the world what WordPress really is, an abusive, predatory organization/community lying about its virtues to abuse other companies and organizations in an effort to get free work.
The truth is that WordPress is, at best, past its peak and we’re going to see more fights between the companies in its orbit as they each battle for a larger slice of a shrinking pie. That isn’t something anyone can change with its current leadership and project structure.
With this action I finally see that there is no helping the situation and I fear that my continued involvement can only serve to signal to others that their own work and efforts are both safe and worthwhile. They are neither. I won’t be responsible for leading good people to an abusive ecosystem.
With this I’ve archived my last remaining WordPress projects, Kana and the theme for this site, until such a time as there is proper governance in the WordPress project. I’ve also stepped back and will no longer contribute to Meetups or other WordPress events (something I was excited about rejoining just this past summer). I’ve even stopped moderating jobs.wordpress.net, something I had done almost daily since the summer of 2013.
Should the project find new life, preferably with lesser ambitions than a huge share of the whole internet, and competent governance in the spirit of the virtues it claims to represent, I will rethink my position. For now, however, So Long WordPress, and thanks for all the fish.
Finally, a note to the many good people still working in WordPress:
Thank you, all of you, for the years of conversation and support. While I speak of WordPress as a whole there are many of you both individually and in smaller communities that have done so much for the people in WordPress. I look forward to continuing conversations and support for your work in the future, regardless of where we all wind up.
I do, however, ask that you please consider the effects your actions have on others and be careful not to lead new victims to future abuse.
← This Site Now Runs on Hugo
...
Read the original on chriswiegman.com »
There’s a common narrative that Microsoft was moribund under Steve Ballmer and then later saved by the miraculous leadership of Satya Nadella. This is the dominant narrative in every online discussion about the topic I’ve seen and it’s a commonly expressed belief “in real life” as well. While I don’t have anything negative to say about Nadella’s leadership in this post, this narrative underrates Ballmer’s role in Microsoft’s success. Not only did Microsoft’s financials, revenue and profit, look great under Ballmer, Microsoft under Ballmer made deep, long-term bets that set up Microsoft for success in the decades after his reign. At the time, the bets were widely panned, indicating that they weren’t necessarily obvious, but we can see in retrospect that the company made very strong bets despite the criticism at the time.
In addition to overseeing deep investments in areas that people would later credit Nadella for, Ballmer set Nadella up for success by clearing out political barriers for any successor. Much like Gary Bernhardt’s talk, which was panned because he made the problem statement and solution so obvious that people didn’t realize they’d learned something non-trivial, Ballmer set up Microsoft for future success so effectively that it’s easy to criticize him for being a bum because his successor is so successful.
For people who weren’t around before the turn of the century, in the 90s, Microsoft used to be considered the biggest, baddest, company in town. But it wasn’t long before people’s opinions on Microsoft changed — by 2007, many people thought of Microsoft as the next IBM and Paul Graham wrote Microsoft is Dead, in which he noted that Microsoft being considered effective was ancient history:
A few days ago I suddenly realized Microsoft was dead. I was talking to a young startup founder about how Google was different from Yahoo. I said that Yahoo had been warped from the start by their fear of Microsoft. That was why they’d positioned themselves as a “media company” instead of a technology company. Then I looked at his face and realized he didn’t understand. It was as if I’d told him how much girls liked Barry Manilow in the mid 80s. Barry who? Microsoft? He didn’t say anything, but I could tell he didn’t quite believe anyone would be frightened of them.
These kinds of comments often came with comments that Microsoft’s revenue was destined to fall, such as these comments by Graham:
Actors and musicians occasionally make comebacks, but technology companies almost never do. Technology companies are projectiles. And because of that you can call them dead long before any problems show up on the balance sheet. Relevance may lead revenues by five or even ten years.
Graham names Google and the web as primary causes of Microsoft’s death, which we’ll discuss later. Although Graham doesn’t name Ballmer or note his influence in Microsoft is Dead, Ballmer has been a favorite punching bag of techies for decades. Ballmer came up on the business side of things and later became EVP of Sales and Support; techies love belittling non-technical folks in tech. A common criticism, then and now, is that Ballmer didn’t understand tech and was a poor leader because all he knew was sales and the bottom line and all he can do is copy what other people have done. Just for example, if you look at online comments on tech forums (minimsft, HN, slashdot, etc.) when Ballmer pushed Sinofsky out in 2012, Ballmer’s leadership is nearly universally panned. Here’s a fairly typical comment from someone claiming to be an anonymous Microsoft insider:
Dump Ballmer. Fire 40% of the workforce starting with the loser online services (they are never going to get any better). Reinvest the billions in start-up opportunities within the puget sound that can be accretive to MSFT and acquisition targets … Reset Windows - Desktop and Tablet. Get serious about business cloud (like Salesforce …)
To the extent that Ballmer defended himself, it was by pointing out that the market appeared to be undervaluing Microsoft. Ballmer noted that Microsoft’s market cap at the time was extremely low relative to its fundamentals/financials relative to Amazon, Google, Apple, Oracle, IBM, and Salesforce. This seems to have been a fair assessment by Ballmer as Microsoft has outperformed all of those companies since then.
When Microsoft’s market cap took off after Nadella became CEO, it was only natural the narrative would be that Ballmer was killing Microsoft and that the company was struggling until Nadella turned it around. You can pick other discussions if you want, but just for example, if we look at the most recent time Microsoft is Dead hit #1 on HN, a quick ctrl+F has Ballmer’s name showing up 24 times. Ballmer has some defenders, but the standard narrative that Ballmer was holding Microsoft back is there, and one of the defenders even uses part of the standard narrative: Ballmer was an unimaginative hack, but he at least set up Microsoft well financially. If you look at high ranking comments, they’re all dunking on Ballmer.
And if you look on less well-informed forums, like Twitter or Reddit, you see the same attacks, but Ballmer has fewer defenders. On Twitter, when I search for “Ballmer”, the first four results are unambiguously making fun of Ballmer. The fifth hit could go either way, but from the comments, seems to generally be taken as making fun of Ballmer, and as far as I scrolled down, all but one of the remaining videos were making fun of Ballmer (the one that wasn’t was an interview where Ballmer notes that he offered Zuckerberg “$20B+, something like that” for Facebook in 2009, which would’ve been the 2nd largest tech acquisition ever at the time, second only to Carly Fiorina’s acquisition of Compaq for $25B in 2001). Searching reddit (incognito window with no history) is the same story (excluding the stories about him as an NBA owner, where he’s respected by fans). The top story is making fun of him, the next one notes that he’s wealthier than Bill Gates, and the top comment on his performance as a CEO starts with “The irony is that he is Microsofts [sic] worst CEO” and then has the standard narrative that the only reason the company is doing well is due to Nadella saving the day, that Ballmer missed the boat on all of the important changes in the tech industry, etc.
To sum it up, for the past twenty years, people have been dunking on Ballmer for being a buffoon who doesn’t understand tech and who was, at best, some kind of bean counter who knew how to keep the lights on but didn’t know how to foster innovation and caused Microsoft to fall behind in every important market.
The common view is at odds with what actually happened under Ballmer’s leadership. In financially material positive things that happened under Ballmer since Graham declared Microsoft dead, we have:
* 2009: Bing launched. This is considered a huge failure, but the bar here is fairly high. A quick web search finds that Bing allegedly made $1B in profit in 2015 and $6.4B in FY 2024 on $12.6B of revenue (given Microsoft’s PE ratio in 2022, a rough estimate for Bing’s value in 2022 would be $240B)
* 2010: Microsoft creates Azure
  * I can’t say that I personally like it as a product, but in terms of running large scale cloud infrastructure, the three companies that are head-and-shoulders ahead of everyone else in the world are Amazon, Google, and Microsoft. From a business standpoint, the worst thing you could say about Microsoft here is that they’re a solid #2 in terms of the business and the biggest threat to become the #1
  * The enterprise sales arm, built and matured under Ballmer, was and is critical to the success of Azure and Office
* 2010: Office 365 released
  * Microsoft transitioned its enterprise / business suite of software from boxed software to subscription-based software with online options
    * there isn’t really a fixed date for this; the official release of Office 365 seems like as good a year as any
  * Like Azure, I don’t personally like these products, but if Microsoft were to split up into major business units, the enterprise software suite is the business unit that could possibly rival Azure in market cap
There are certainly plenty of big misses as well. From 2010-2015, HoloLens was one of Microsoft’s biggest bets, behind only Azure and then Bing, but no one’s big AR or VR bets have had good returns to date. Microsoft failed to capture the mobile market. Although Windows Phone was generally well received by reviewers who tried it, depending on who you ask, Microsoft was either too late or wasn’t willing to subsidize Windows Phone for long enough. Although .NET is still used today, in terms of marketshare, .NET and Silverlight didn’t live up to early promises and critical parts were hamstrung or killed as a side effect of internal political battles. Bing is, by reputation, a failure and, at least given Microsoft’s choices at the time, probably needed antitrust action against Google to succeed, but this failure still resulted in a business unit worth hundreds of billions of dollars. And despite all of the failures, the biggest bet, Azure, is probably worth on the order of a trillion dollars.
The enterprise sales arm of Microsoft was built out under Ballmer before he was CEO (he was, for a time, EVP for Sales and Support, and actually started at Microsoft as the first business manager) and continued to get built out when Ballmer was CEO. Microsoft’s sales playbook was so effective that, when I was at Microsoft, Google would offer some customers on Office 365 Google’s enterprise suite (Docs, etc.) for free. Microsoft salespeople noted that they would still usually be able to close the sale of Microsoft’s paid product even when competing against a Google that was giving their product away. For the enterprise, the combination of Microsoft’s offering and its enterprise sales team was so effective that Google couldn’t even give its product away.
If you’re reading this and you work at a “tech” company, the company is overwhelmingly likely to choose the Google enterprise suite over the Microsoft enterprise suite, and the enterprise sales pitch Microsoft salespeople give probably sounds ridiculous to you.
An acquaintance of mine who ran a startup had a Microsoft Azure salesperson come in and try to sell them on Azure, opening with “You’re on AWS, the consumer cloud. You need Azure, the enterprise cloud”. For most people in tech companies, enterprise is synonymous with overpriced, unreliable, junk. In the same way it’s easy to make fun of Ballmer because he came up on the sales and business side of the house, it’s easy to make fun of an enterprise sales pitch when you hear it but, overall, Microsoft’s enterprise sales arm does a good job. When I worked in Azure, I looked into how it worked and, having just come from Google, there was a night and day difference. This was in 2015, under Nadella, but the culture and processes that let Microsoft scale this up were built out under Ballmer. I think there were multiple months where Microsoft hired and onboarded more salespeople than Google employed in total and every stage of the sales pipeline was fairly effective.
When people point to a long list of failures like Bing, Zune, Windows Phone, and HoloLens as evidence that Ballmer was some kind of buffoon who was holding Microsoft back, this demonstrates a lack of understanding of the tech industry. This is like pointing to a list of failed companies a VC has funded as evidence the VC doesn’t know what they’re doing. But that’s silly in a hits-based industry like venture capital. If you want to claim the VC is bad, you need to point out poor total return or a lack of big successes, which would imply poor total return. Similarly, a large company like Microsoft has a large portfolio of bets and one successful bet can pay for a huge number of failures. Ballmer’s critics can’t point to a poor total return because Microsoft’s total return was very good under his tenure. Revenue increased from , depending on whether you want to count from when Ballmer became President in July 1998 or when Ballmer became CEO in January 2000. The company was also quite profitable when Ballmer left, recording $27B in profit in the previous four quarters, more than the revenue of the company he took over. By market cap, Azure alone would be in the top 10 largest public companies in the world and the enterprise software suite minus Azure would probably just miss being in the top 10.
As a result, critics also can’t point to a lack of hits when Ballmer presided over the creation of Azure, the conversion of Microsoft’s enterprise software from set of local desktop apps to Office 365 et al., the creation of the world’s most effective enterprise sales org, the creation of Microsoft’s video game empire (among other things, Ballmer was CEO when Microsoft acquired Bungie and made Halo the Xbox’s flagship game on launch in 2001), etc. Even Bing, widely considered a failure, on last reported revenue and current P/E ratio, would be 12th most valuable tech company in the world, between Tencent and ASML. When attacking Ballmer, people cite Bing as a failure that occurred on Ballmer’s watch, which tells you something about the degree of success Ballmer had. Most companies would love to have their successes be as successful as Bing, let alone their failures. Of course it would be better if Ballmer was prescient and all of his bets succeeded, making Microsoft worth something like $10T instead of the lowly $3T market cap it has today, but the criticism of Ballmer that says that he had some failures and some $1T successes is a criticism that he wasn’t the greatest CEO of all time by a gigantic margin. True, but not much of a criticism.
And, unlike Nadella, Ballmer didn’t inherit a company that was easily set up for success. As we noted earlier, it wasn’t long into Ballmer’s tenure that Microsoft was considered a boring, irrelevant company and the next IBM, mostly due to decisions made when Bill Gates was CEO. As a very senior Microsoft employee from the early days, Ballmer was also partially responsible for the state of Microsoft at the time, so Microsoft’s problems are also at least partially attributable to him (but that also means he should get some credit for the success Microsoft had through the 90s). Nevertheless, he navigated Microsoft’s most difficult problems well and set up his successor for smooth sailing.
Earlier, we noted that Paul Graham cited Google and the rise of the web as two causes for Microsoft’s death prior to 2007. As we discussed in this look at antitrust action in tech, these both share a common root cause, antitrust action against Microsoft. If we look at the documents from the Microsoft antitrust case, it’s clear that Microsoft knew how important the internet was going to be and had plans to control the internet. As part of these plans, they used their monopoly power on the desktop to kill Netscape. They technically lost an antitrust case due to this, but if you look at the actual outcomes, Microsoft basically got what they wanted from the courts. The remedies levied against Microsoft are widely considered to have been useless (the initial decision involved breaking up Microsoft, but they were able to reverse this on appeal), the case dragged on for long enough that Netscape was doomed by the time the case was decided, and the remedies that weren’t specifically targeted at the Netscape situation were meaningless.
A later part of the plan to dominate the web, discussed at Microsoft but never executed, was to kill Google. If we’re judging Microsoft by how “dangerous” it is, how effectively it crushes its competitors, like Paul Graham did when he judged Microsoft to be dead, then Microsoft certainly became less dangerous, but the feeling at Microsoft was that their hand was forced due to the circumstances. One part of the plan to kill Google was to redirect users who typed google.com into their address bar to MSN search. This was before Chrome existed and before mobile existed in any meaningful form. Windows desktop marketshare was 97% and IE had between 80% to 95% marketshare depending on the year, with most of the rest of the marketshare belonging to the rapidly declining Netscape. If Microsoft makes this move, Google is killed before it can get Chrome and Android off the ground and, barring extreme antitrust action, such as a breakup of Microsoft, Microsoft owns the web to this day. And then, for dessert, there would have been little stopping Microsoft from going after Amazon as well.
After internal debate, Microsoft declined to kill Google not due to fear of antitrust action, but due to fear of bad PR from the ensuing antitrust action. Had Microsoft redirected traffic away from Google, the impact on Google would’ve been swifter and more severe than their moves against Netscape, and in the time it would take for the DoJ to win another case against Microsoft, Google would suffer the same fate as Netscape. It might be hard to imagine this if you weren’t around at the time, but the DoJ vs. Microsoft case was regular front-page news in a way that we haven’t seen since (in part because companies learned their lesson on this one — Google supposedly killed the 2011-2012 FTC case against them with lobbying and has cleverly maneuvered the more recent case so that it doesn’t dominate the news cycle in the same way). The closest thing we’ve seen since the Microsoft antitrust media circus was the media response to the Crowdstrike outage, but that was a flash in the pan compared to the DoJ vs. Microsoft case.
If there’s a criticism of Ballmer here, perhaps it’s something like Microsoft didn’t pre-emptively learn the lessons its younger competitors learned from its big antitrust case before the big antitrust case. A sufficiently prescient executive could’ve advocated for heavy lobbying to head the antitrust case off at the pass, like Google did in 2011-2012, or maneuvered to make the antitrust case just another news story, like Google has been doing for the current case. Another possible criticism is that Microsoft didn’t correctly read the political tea leaves and realize that there wasn’t going to be serious US tech antitrust for at least two decades after the big case against Microsoft. In principle, Ballmer could’ve overridden the decision to not kill Google if he had the right expertise on staff to realize that the United States was entering a two decade period of reduced antitrust scrutiny in tech.
As criticisms go, I think the former criticism is correct, but not an indictment of Ballmer unless you expect CEOs to be , so as evidence that Ballmer was a bad CEO, this would be a very weak criticism. And it’s not clear that the latter criticism is correct. While Google was able to get away with things ranging from hardcoding the search engine in Android to prevent users from changing their search engine setting, to having badware installers trick users into making Chrome the default browser, they were considered the “good guys” and didn’t get much scrutiny for these sorts of actions; Microsoft wasn’t treated with kid gloves in the same way by the press or the general public. Google didn’t trigger a serious antitrust investigation until 2011, so it’s possible the lack of serious antitrust action between 2001 and 2010 was an artifact of Microsoft being careful to avoid antitrust scrutiny and Google being too small to draw scrutiny, and that a move to kill Google when it was still possible would’ve drawn serious antitrust scrutiny and another PR circus. That’s one way in which the company Ballmer inherited was in a more difficult situation than its competitors — Microsoft’s hands were perceived to be tied and may have actually been tied. Microsoft could and did get severe criticism for taking an action when the exact same action taken by Google would be lauded as clever.
When I was at Microsoft, there was a lot of consternation about this. One funny example was when, in 2011, Google officially called out Microsoft for unethical behavior and the media jumped on this as yet another example of Microsoft behaving badly. A number of people I talked to at Microsoft were upset by this because, according to them, Microsoft got the idea to do this when they noticed that Google was doing it, but reputations take a long time to change and actions taken while Gates was CEO significantly reduced Microsoft’s ability to maneuver.
Another difficulty Ballmer had to deal with on taking over was Microsoft’s intense internal politics. Again, as a very senior Microsoft employee going back to almost the beginning, he bears some responsibility for this, but Ballmer managed to clear the board of the worst bad actors so that Nadella didn’t inherit such a difficult situation. If we look at why Microsoft didn’t dominate the web under Ballmer, in addition to concerns that killing Google would cause a PR backlash, internal political maneuvering killed most of Microsoft’s most promising web products and reduced the appeal and reach of most of the rest of its web products. For example, Microsoft had a working competitor to Google Docs in 1997, one year before Google was founded and nine years before Google acquired , but it was killed for political reasons. And likewise for NetMeeting and other promising products. Microsoft certainly wasn’t alone in having internal political struggles, but it was famous for having more brutal politics than most.
Although Ballmer certainly didn’t do a perfect job at cleaning house, when I was at Microsoft and asked about promising projects that were sidelined or killed due to internal political struggles, I found that the biggest recent sources of those issues had been shown the door under Ballmer, leaving a much more functional company for Nadella to inherit.
Stepping back to look at the big picture, Ballmer inherited a company that was in a financially strong position but hemmed in by internal and external politics in a way that caused outside observers to think the company was overwhelmingly likely to slide into irrelevance, leading to predictions like Graham’s famous prediction that Microsoft is dead, with revenues expected to decline in five to ten years. In retrospect, we can see that moves made under Gates limited Microsoft’s ability to use its monopoly power to outright kill competitors, but there was no inflection point at which a miraculous turnaround was mounted. Instead, Microsoft continued its very strong execution on enterprise products and continued making reasonable bets on the future in a successful effort to supplant revenue streams that were internally viewed as long-term dead ends, even if they were going to be profitable dead ends, such as Windows and boxed (non-subscription) software.
Unlike most companies in that position, Microsoft was willing to very heavily subsidize a series of bets that leadership thought could power the company for the next few decades, such as Windows Phone, Bing, Azure, Xbox, and HoloLens. From the internal and external commentary on these bets, you can see why it’s so hard for companies to use their successful lines of business to subsidize new lines of business when the writing is on the wall for the successful businesses. People panned these bets as stupid moves that would kill the company, saying the company should focus its efforts on its most profitable businesses, such as Windows. Even when there’s very clear data showing that bucking the status quo is the right thing, people usually don’t do it, in part because you look like an idiot when it doesn’t pan out, but Ballmer was willing to make the right bets in the face of decades of ridicule.
Not all of the bets panned out, but if we look at comments from critics who were saying that Microsoft was doomed because it was subsidizing the wrong bets or younger companies would surpass it, well, today, Microsoft is worth 50% more than Google and twice as much as Meta. If we look at the broader history of the tech industry, Microsoft has had sustained strong execution from its founding in 1975 until today, a nearly fifty year run, a run that’s arguably been unmatched in the tech industry. Intel’s been around a bit longer, but they stumbled very badly around the turn of the century and they’ve had a number of problems over the past decade. IBM has a long history, but it just wasn’t all that big during its early history, e.g., when T. J. Watson renamed Computing-Tabulating-Recording Company to International Business Machines, its revenue was still well under $10M a year (inflation adjusted, on the order of $100M a year). Computers started becoming big and IBM was big for a tech company by the 50s, but the antitrust case brought against IBM in 1969 that dragged on until it was dropped for being “without merit” in 1982 hamstrung the company and its culture in ways that are still visible when you look at, for example, why IBM’s various cloud efforts have failed and, in the 90s, the company was on its deathbed and only managed to survive at all due to Gerstner’s turnaround. If we look at older companies that had long sustained runs of strong execution, most of them are gone, like DEC and Data General, or had very bad stumbles that nearly ended the company, like IBM and Apple. There are companies that have had similarly long periods of strong execution, like Oracle, but those companies haven’t been nearly as effective as Microsoft in expanding their lines of business and, as a result, Oracle is worth perhaps two Bings. That makes Oracle the 20th most valuable public company in the world, which certainly isn’t bad, but it’s no Microsoft.
If Microsoft stumbles badly, a younger company like Nvidia, Meta, or Google could overtake Microsoft’s track record, but that would be no fault of Ballmer’s and we’d still have to acknowledge that Ballmer was a very effective CEO, not just in terms of bringing the money in, but in terms of setting up a vision that set Microsoft up for success for the next fifty years.
Besides the headline items mentioned above, off the top of my head, here are a few things I thought were interesting that happened under Ballmer since Graham declared Microsoft to be dead:
* 2011: Sumit Gulwani, at MSR, publishes “Automating string processing in spreadsheets using input-output examples”, named a most influential POPL paper 10 years later
* This paper is about using program synthesis for spreadsheet “autocomplete/inference”
* I’m not a fan of patents, but I would guess that the reason autocomplete/inference works fairly well in Excel and basically doesn’t work at all in Google Sheets is that MS has a patent on this based on this work
* 2012: Microsoft releases TypeScript
* This has to be the most widely used programming language released this century and it’s a plausible candidate for becoming the most widely used language, period (as long as you don’t also count TS usage as JS)
* 2012: Microsoft Surface released
* Things haven’t been looking so good for the Surface line since Panos Panay left in 2022, and this was arguably a failure even in 2022, but this was a $7B/yr line of business in 2022, which goes to show you how big and successful Microsoft is — most companies would love to have something doing as well as a failed $7B/yr business
* 2015: Microsoft releases vscode (after the end of Ballmer’s tenure in 2014, but this work came out of work under Ballmer’s tenure in multiple ways)
* This seems like the most widely used editor among programmers today by a very large margin. When I looked at survey data on this a number of years back, I was shocked by how quickly this happened. It seems like vscode has achieved a level of programmer editor dominance that’s never been seen before. Probably the closest thing was Visual Studio a decade before Paul declared Microsoft dead, but that never achieved the same level of marketshare due to a combination of effectively being Windows only software and also costing quite a bit of money
* Heath Borders notes that Erich Gamma, hired in 2011, was highly influential here
One response to Microsoft’s financial success, both the direct success that happened under Ballmer as well as later success that was set up by Ballmer, is that Microsoft is financially successful but irrelevant for trendy programmers, like IBM. For one thing, rounded to the nearest Bing, IBM is probably worth either zero or one Bings. But even if we put aside the financial aspect and we just look at how much each $1T tech company (Apple, Nvidia, Microsoft, Google, Amazon, and Meta) has impacted programmers, Nvidia, Apple, and Microsoft all have a lot of programmers who are dependent on the company due to some kind of ecosystem dependence (CUDA; iOS; .NET and Windows, the latter of which is still the platform of choice for many large areas, such as AAA games).
You could make a case for the big cloud vendors, but I don’t think that companies have a nearly forced dependency on AWS in the same way that a serious English-language consumer app company really needs an iOS app or an AAA game company has to release on Windows and overwhelmingly likely develops on Windows.
If we look at programmers who aren’t pinned to an ecosystem, Microsoft seems highly relevant to a lot of programmers due to the creation of tools like vscode and TypeScript. I wouldn’t say that it’s necessarily more relevant than Amazon since so many programmers use AWS, but it’s hard to argue that the company that created (among many other things) vscode and TypeScript under Ballmer’s watch is irrelevant to programmers.
Shortly after joining Microsoft in 2015, I bet Derek Chiou that Google would beat Microsoft to $1T market cap. Unlike most external commentators, I agreed with the bets Microsoft was making, but when I looked around at the kinds of internal dysfunction Microsoft had at the time, I thought that would cause them enough problems that Google would win. That was wrong — Microsoft beat Google to $1T and is now worth $1T more than Google.
I don’t think I would’ve made the bet even a year later, after seeing from the inside how effective Microsoft sales was and how good Microsoft was at shipping things that appeal to enterprises, and comparing that to Google’s cloud execution and strategy. But you could say that I made a mistake fairly analogous to the one external commentators made, until I saw how Microsoft operated in detail.
Thanks to Laurence Tratt, Yossi Kreinin, Heath Borders, Justin Blank, and Fabian Giesen for comments/corrections/discussion
...
Read the original on danluu.com »
HTML Forms have powerful validation mechanisms, but they are heavily underused. In fact, not many people even know much about them. Is this because of some flaw in their design? Let’s explore.
Beyond that, there are a bunch of other ways that you can add constraints to your input. Specifically, there are three ways to do it:
* Using specific type attribute values, such as “email”, “number”, or “url”
* Using other input attributes that create constraints, such as “pattern” or “maxlength”
* Using the setCustomValidity DOM method of the input
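For instance, the first two attribute-based mechanisms look like this (the field names are illustrative):

```html
<!-- type-based constraint: the value must parse as an email address -->
<input type="email" name="contact" required>

<!-- attribute-based constraints: a regex pattern plus a length limit -->
<input type="text" name="zip" pattern="[0-9]{5}" maxlength="5">
```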
The last one is the most powerful, as it allows you to create arbitrary validation logic and handle complex cases. Do you notice how it differs from the first two techniques? The first two are defined with attributes, but setCustomValidity is a method.
Here’s a great write-up that explains the differences between DOM attributes and properties: https://jakearchibald.com/2024/attributes-vs-properties/
The fact that the setCustomValidity API is exposed only as a method and doesn’t have an attribute equivalent leads to some terrible ergonomics. I’ll show you with an example.
But first, a very quick intro to how this API works:
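The gist of the API is a single call on the input element. Since there’s no real DOM in this sketch, the example below uses a tiny stand-in object; in the browser you’d call the same methods on the actual input element:

```javascript
// A tiny stand-in for an <input> element, only to illustrate the contract.
class FakeInput {
  constructor() {
    this.validationMessage = "";
  }
  setCustomValidity(message) {
    this.validationMessage = message;
  }
  checkValidity() {
    // An input with a non-empty custom validity message is invalid.
    return this.validationMessage === "";
  }
}

const input = new FakeInput();
input.setCustomValidity("Any text message");
console.log(input.checkValidity()); // false
```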
This would make input invalid and the browser will show the reason as “Any text message”.
Passing an empty string makes the input valid (unless other constraints are applied).
That’s pretty much it! Now let’s apply this knowledge.
Let’s say we want to implement an equivalent of the required attribute. That means that an empty input must prevent the form from being submitted.
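A first attempt might look something like this sketch of a controlled React input (the component name and message string are illustrative, not the article’s exact demo code):

```jsx
import { useState } from "react";

function RequiredInput() {
  const [value, setValue] = useState("");

  return (
    <input
      value={value}
      onChange={(e) => {
        setValue(e.target.value);
        // Mark the input invalid whenever it becomes empty.
        e.target.setCustomValidity(
          e.target.value === "" ? "This field is required" : ""
        );
      }}
    />
  );
}
```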
This kind of looks like we’re done and this code should be enough to accomplish the task. But try to see it in action:
It may seem to work, but there’s just one important edge case: the input is in a valid state initially. If you reset the component and press the “submit” button, the form submission will go through. But surely, before we ever touch the input, it is empty, and therefore must be invalid. But we only ever do something when the input value changes.
How can we fix this?
Let’s execute some code when the component mounts:
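As a hedged sketch, the mount-time validation might use a ref and a layout effect like this (names are illustrative):

```jsx
import { useLayoutEffect, useRef, useState } from "react";

function RequiredInput() {
  const inputRef = useRef(null);
  const [value, setValue] = useState("");

  // Validate the initial (empty) value once, right after mount.
  useLayoutEffect(() => {
    inputRef.current.setCustomValidity(
      inputRef.current.value === "" ? "This field is required" : ""
    );
  }, []);

  return (
    <input
      ref={inputRef}
      value={value}
      onChange={(e) => {
        setValue(e.target.value);
        // The same rule again, duplicated in the change handler.
        e.target.setCustomValidity(
          e.target.value === "" ? "This field is required" : ""
        );
      }}
    />
  );
}
```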
Great! Now everything works as expected. But at what cost?
Let’s look at our clumsy way to validate the initial value:
Ugh! Wouldn’t want to write that one each time. Let’s think about what’s wrong with this.
* The validation logic is duplicated between the onChange handler and the initial render phase
* The initial validation is not co-located with the input, so we’re losing code cohesion.
* It’s fragile: if you update validation logic, you might forget to update code in both places.
* The useRef + useLayoutEffect + onChange combo is just too much ceremony,
especially when a form has a lot of inputs. And it gets even more confusing if only some of those inputs use customValidity
This is what happens when you deal with a purely imperative API in a declarative component.
Unlike validation attributes, setCustomValidity is a purely imperative API. In other words, there’s no input attribute that we can use to set custom validity.
In fact, I would argue that this is the main reason for poor adoption of native form validation. If the API is cumbersome, sometimes it just does not matter how powerful it is.
In essence, this is the attribute we need:
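Something along these lines; note that custom-validity is the article’s proposal, not a real HTML attribute:

```html
<input custom-validity="This field is required">
```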
In a declarative framework, this would let us define input validations in a very powerful way:
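For example, in React-style JSX this could look like the following sketch (the Input component and names are illustrative):

```jsx
<Input
  value={value}
  onChange={(e) => setValue(e.target.value)}
  customValidity={value === "" ? "This field is required" : ""}
/>
```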
Pretty cool! In my opinion, at least. Though you can rightfully argue that this accomplishes only what the existing required attribute is already capable of. Where’s the “power”?
Let me show you, but first, since there’s no actual custom-validity attribute currently in the HTML Spec, let’s implement it in userland.
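Here is one way such a userland implementation might look (a sketch; the component name and details are illustrative):

```jsx
import { useLayoutEffect, useRef } from "react";

// An <Input> wrapper that accepts a declarative customValidity prop
// and applies it imperatively under the hood.
function Input({ customValidity = "", ...props }) {
  const ref = useRef(null);

  // Runs on mount and whenever the message changes, so the initial
  // value is validated too, with no per-form ref/effect ceremony.
  useLayoutEffect(() => {
    ref.current.setCustomValidity(customValidity);
  }, [customValidity]);

  return <input ref={ref} {...props} />;
}
```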
This will work well for our demo purposes.
For a production-ready component check out a more complete implementation.
Now we’ll explore which non-trivial cases this design can help solve.
In real-world apps, validation often gets more complex than local checks. Imagine a username input that should be valid only if the username is not taken. This would require async calls to your server and an intermediate state: the form should not be valid while the check is in progress. Let’s see how our abstraction can handle this.
Play around with this example. It uses the required attribute to prevent empty inputs. But then it relies on customValidity to mark the input as invalid during the loading state and based on the response.
First, we create an async function to check whether the username is unique that imitates a server request with a delay.
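That helper might be sketched as follows (the list of taken names and the delay are made up for illustration):

```javascript
// A fake server-side uniqueness check; a real app would issue a
// network request instead of consulting a local set.
const takenUsernames = new Set(["admin", "root", "guest"]);

function isUsernameTaken(username) {
  return takenUsernames.has(username);
}

function checkUsernameUnique(username) {
  // Imitate server latency with a short delay.
  return new Promise((resolve) => {
    setTimeout(() => resolve(!isUsernameTaken(username)), 300);
  });
}
```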
Next, we’ll create a controlled form component and use react-query to manage the server request when the input value changes:
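A sketch of that component, assuming the userland Input component with a customValidity prop and a checkUsernameUnique helper that resolves to whether the name is free (all names are illustrative):

```jsx
import { useState } from "react";
import { useQuery } from "@tanstack/react-query";

function UsernameForm() {
  const [username, setUsername] = useState("");

  // Re-run the uniqueness check whenever the username changes.
  const { data: isUnique, isFetching } = useQuery({
    queryKey: ["username-unique", username],
    queryFn: () => checkUsernameUnique(username),
    enabled: username !== "",
  });

  // The whole async flow, loading state included, in one expression.
  const customValidity = isFetching
    ? "Checking username availability..."
    : isUnique === false
      ? "This username is already taken"
      : "";

  return (
    <form onSubmit={(e) => e.preventDefault()}>
      <Input
        required
        value={username}
        onChange={(e) => setUsername(e.target.value)}
        customValidity={customValidity}
      />
      <button>Submit</button>
    </form>
  );
}
```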
Great! We have the setup in place. It consists of two crucial parts:
* Our custom component that is capable of taking the customValidity prop
* The react-query request whose loading, error, and success states drive the customValidity value
That’s it! We’re describing the whole async validation flow, including loading, error and success states, in one attribute. You can go back to see the result again if you wish
This one will be shorter, but also interesting, because it covers dependent input fields. Let’s implement a form that requires the user to repeat the entered password:
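One possible sketch, again assuming the userland Input component with a customValidity prop (names are illustrative):

```jsx
import { useState } from "react";

function PasswordForm() {
  const [password, setPassword] = useState("");
  const [repeat, setRepeat] = useState("");

  return (
    <form onSubmit={(e) => e.preventDefault()}>
      <Input
        type="password"
        required
        value={password}
        onChange={(e) => setPassword(e.target.value)}
      />
      {/* The second field's validity depends on the first one's value. */}
      <Input
        type="password"
        required
        value={repeat}
        onChange={(e) => setRepeat(e.target.value)}
        customValidity={repeat === password ? "" : "Passwords do not match"}
      />
      <button>Submit</button>
    </form>
  );
}
```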
You can try it out:
I hope I’ve been able to show you how setCustomValidity can cover validation needs of all kinds.
But the real power comes from great APIs.
And hopefully, you are now equipped with one of those.
And even more hopefully, we will see it natively in the HTML Spec one day.
...
Read the original on expressionstatement.com »
2024.10.28: The sins of the 90s: Questioning a puzzling claim about mass surveillance. #attackers #governments #corporations #surveillance #cryptowars
2024.08.03: Clang vs. Clang: You’re making Clang angry. You wouldn’t like Clang when it’s angry. #compilers #optimization #bugs #timing #security #codescans
2024.06.12: Bibliography keys: It’s as easy as [1], [2], [3]. #bibliographies #citations #bibtex #votemanipulation #paperwriting
2023.11.25: Another way to botch the security analysis of Kyber-512:
2023.10.23: Reducing “gate” counts for Kyber-512: Two algorithm analyses, from first principles, contradicting NIST’s calculation. #xor #popcount #gates #memory #clumping
2023.10.03: The inability to count correctly:
2022.08.05: NSA, NIST, and post-quantum cryptography: Announcing my second lawsuit against the U. S. government. #nsa #nist #des #dsa #dualec #sigintenablingproject #nistpqc #foia
2022.01.29: Plagiarism as a patent amplifier:
2020.12.06: Optimizing for the wrong metric, part 1: Microsoft Word: Review of “An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development” by Knauff and Nejasmic. #latex #word #efficiency #metrics
2019.10.24: Why EdDSA held up better than ECDSA against Minerva:
2019.04.30: An introduction to vectorization: Understanding one of the most important changes in the high-speed-software ecosystem. #vectorization #sse #avx #avx512 #antivectors
2017.11.05: Reconstructing ROCA: A case study of how quickly an attack can be developed from a limited disclosure. #infineon #roca #rsa
2017.10.17: Quantum algorithms to find collisions: Analysis of several algorithms for the collision problem, and for the related multi-target preimage problem. #collision #preimage #pqcrypto
2017.07.23: Fast-key-erasure random-number generators: An effort to clean up several messes simultaneously. #rng #forwardsecrecy #urandom #cascade #hmac #rekeying #proofs
2017.07.19: Benchmarking post-quantum cryptography: News regarding the SUPERCOP benchmarking system, and more recommendations to NIST. #benchmarking #supercop #nist #pqcrypto
2016.10.30: Some challenges in post-quantum standardization: My comments to NIST on the first draft of their call for submissions. #standardization #nist #pqcrypto
2016.06.07: The death of due process: A few notes on technology-fueled normalization of lynch mobs targeting both the accuser and the accused. #ethics #crime #punishment
2016.05.16: Security fraud in Europe’s “Quantum Manifesto”: How quantum cryptographers are stealing a quarter of a billion Euros from the European Commission. #qkd #quantumcrypto #quantummanifesto
2016.03.15: Thomas Jefferson and Apple versus the FBI: Can the government censor how-to books? What if some of the readers are criminals? What if the books can be understood by a computer? An introduction to freedom of speech for software publishers. #censorship #firstamendment #instructions #software #encryption
2015.11.20: Break a dozen secret keys, get a million more for free: Batch attacks are often much more cost-effective than single-target attacks. #batching #economics #keysizes #aes #ecc #rsa #dh #logjam
2015.03.14: The death of optimizing compilers: Abstract of my tutorial at ETAPS 2015. #etaps #compilers #cpuevolution #hotspots #optimization #domainspecific #returnofthejedi
2015.02.18: Follow-You Printing: How Equitrac’s marketing department misrepresents and interferes with your work. #equitrac #followyouprinting #dilbert #officespaceprinter
2014.06.02: The Saber cluster: How we built a cluster capable of computing 3000000000000000000000 multiplications per year for just 50000 EUR. #nvidia #linux #howto
2014.05.17: Some small suggestions for the Intel instruction set: Low-cost changes to CPU architecture would make cryptography much safer and much faster. #constanttimecommitment #vmul53 #vcarry #pipelinedocumentation
2014.04.11: NIST’s cryptographic standardization process: The first step towards improvement is to admit previous failures. #standardization #nist #des #dsa #dualec #nsa
2014.03.23: How to design an elliptic-curve signature system: There are many choices of elliptic-curve signature systems. The standard choice, ECDSA, is reasonable if you don’t care about simplicity, speed, and security. #signatures #ecc #elgamal #schnorr #ecdsa #eddsa #ed25519
2014.02.05: Entropy Attacks! The conventional wisdom says that hash outputs can’t be controlled; the conventional wisdom is simply wrong.
Meredith Whittaker, president of the Signal Foundation, gave an interesting talk at NDSS 2024 titled “AI, Encryption, and the Sins of the 90s”.
I won’t try to summarize everything the talk is saying: go watch the talk video yourself, or at least read through the transcript. But I’ll say something here about what the “sins” part of the talk’s title is referring to.
The talk says that, in the 1990s, “cryptosystems were still classified as munitions and subject to strict export controls”. The talk describes the “crypto wars” as “a series of legal battles, campaigns, and policy debates that played out in the US across the 1990s”, resulting in “the liberalization of strong encryption in 1999”, allowing people to “develop and use strong encryption without being subject to controls”.
OK, that sounds familiar. Which parts are the “sins”?
Answer: the talk claims that “the legacy of the crypto wars was to trade privacy for encryption—and to usher in an age of mass corporate surveillance”.
Wow. That sounds bad, and surprising, definitely something worth understanding better. If cryptographic export controls had instead remained in place after 1999, how would that have improved privacy and reduced corporate surveillance?
Answer: the talk claims that, without strong cryptography, “the metastatic growth of SSL-protected commerce and RSA-protected corporate databases would not have been possible”.
Wait, what? Let’s look at the facts.
Internet commerce was already booming by 1999. Let’s look specifically at the history of Amazon.
Amazon was founded in 1994. Its initial public stock offering was in 1997. Amazon was sued by Barnes & Noble in 1997, and was sued by Wal-Mart in 1998. Bezos was named Time Magazine’s Person of the Year in 1999:
Bezos’ vision of the online retailing universe was so complete, his Amazon.com site so elegant and appealing, that it became from Day One the point of reference for anyone who had anything to sell online. And that, it turns out, is everyone.
Amazon’s revenue was 15.75 million dollars in 1996, 147.79 million dollars in 1997, 609.82 million dollars in 1998, and 1.64 billion dollars in 1999. Amazon was competently executing a business plan that from the outset explicitly prioritized growth.
Where does anyone get the idea that continued cryptographic export controls would have stopped the growth of Internet commerce, rather than simply limiting the security level of Internet commerce? How do we reconcile this idea with the observed facts of Amazon already growing rapidly in the 1990s? The export controls were still in place; to the extent that Internet commerce was encrypted at all, it was encrypted primarily with a weak cryptosystem, namely 512-bit RSA.
Just to emphasize how fast Amazon’s growth was at that point: Amazon’s revenue was more than doubling every year. If that had kept up, Amazon’s revenue in 2023 would have been more than 26,000,000 billion dollars. In reality, Amazon’s revenue in 2023 was only 575 billion dollars.
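The arithmetic is easy to check. Here is a minimal sketch in Python (the 1999 and 2023 revenue figures are from the text; the perpetual yearly doubling is, of course, the hypothetical):

```python
# Hypothetical: Amazon's 1999 revenue kept doubling every year until 2023.
revenue_1999 = 1.64e9            # dollars, from the text
doublings = 2023 - 1999          # 24 years of doubling
extrapolated = revenue_1999 * 2 ** doublings

print(f"Extrapolated 2023 revenue: {extrapolated / 1e9:,.0f} billion dollars")
# Well over 26,000,000 billion dollars, versus the actual ~575 billion.
```

Since revenue was in fact more than doubling per year in the late 1990s, the extrapolation is conservative; the point is simply that such growth rates cannot continue for decades.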
Okay, okay, 575 billion dollars is a lot of money, and Amazon is now fighting antitrust regulators. But how is Amazon’s growth before and after 1999 a story about a change in cryptography regulation, rather than a story about customers liking a convenient shopping site that provided fast, reasonably reliable deliveries of an ever-expanding collection of products at competitive prices?
These are natural questions for anyone checking whether the talk’s claims match the available evidence. But the talk doesn’t answer any of these questions. Look, for example, at the full paragraph containing the “would not have been possible” quote:
It’s not that 1999 wasn’t a win, at least in a narrow sense. Indeed, we can craft a counterfactual history in which the liberalization of encryption didn’t happen, in which we instead accepted some janky, backdoored, government-standard cryptosystem—some sad Clipper chip DES admixture—and that instead became the thing: a world in which strong cryptosystems did not receive the benefit of many eyes and open scrutiny. But of course the future from then to now would have been very different—not least of all because the metastatic growth of SSL-protected commerce and RSA-protected corporate databases would not have been possible.
Aside from irrelevant details, how is the “counterfactual history” of a “janky, backdoored, government-standard cryptosystem” different from the reality of export-controlled cryptography in the late 1990s, when 95% of SSL connections were limited to RSA-512? The explosion of Internet commerce was already happening at that point.
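For a rough sense of why RSA-512 counted as weak: the standard heuristic for the cost of factoring an n-bit modulus with the general number field sieve is exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)). A small sketch comparing key sizes (an editorial illustration, not something from the talk):

```python
import math

def gnfs_cost(bits):
    """Heuristic GNFS factoring cost for a modulus of the given bit size:
    exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

# By this heuristic, RSA-512 is millions of times cheaper to factor
# than RSA-1024; RSA-512 was in fact publicly factored in 1999.
print(gnfs_cost(1024) / gnfs_cost(512))
```

The heuristic ignores constant factors and memory costs, but it conveys the scale: the export-era key size was within reach of late-1990s academic computing power, while larger keys were not.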
Where does the “would not have been possible” claim come from? I’m not allergic to the phrase “of course”, but I try to limit it to cases where things are really obvious, which is definitely not the situation here.
Government regulations are just one of many sources of weak cryptography. Weak cryptography, in turn, is just one of many sources of Internet-security failures.
Companies reported spending more than 0.5% of revenue in 2023 on things labeled as “cybersecurity”. A cybersecurity company named CrowdStrike accidentally took down millions of Windows computers in July 2024, causing long service outages for many other companies. CrowdStrike had been given control over all those computers because it was saying that this would help protect those computers against attacks. Delta Airlines, in a lawsuit filed this month against CrowdStrike, said that the outage “crippled its operations for several days, costing more than $500 million in lost revenue and extra expenses”. Meanwhile there are endless reports of ransomware running rampant, as illustrated by BlackCat disrupting various health-care services for weeks starting in February 2024.
And yet, despite the evident disruptions, Internet commerce continues.
Do we want better security to stop the attacks? Yes. Does not having better security mean that the entire system of Internet commerce will be destroyed?
Um, well, it’s conceivable that there will be such a dramatic increase in attacks that we’ll all retreat to non-Internet commerce (because, y’know, non-Internet commerce is secure). But somehow the attackers don’t seem interested in killing the goose that lays the golden eggs.
Let’s rewind to 1999. The CIH virus had destroyed data on a million computers, and was just one of many examples of attacks. This didn’t stop the Internet from skyrocketing in popularity; it simply prompted effort to fix vulnerabilities.
One of the vulnerabilities at that time was the use of RSA-512. From the perspective of stopping attacks, this vulnerability was important to fix. But, from the same perspective, there were many other vulnerabilities that were also important to fix, including many that were cheaper to exploit than attacking RSA-512. My own experience is that exploitable buffer overflows were very easy to find back then.
Does it sound plausible if someone picks one of the system vulnerabilities in 1999 and claims that fixing this vulnerability is what made the difference between Internet commerce succeeding and Internet commerce failing? I’d expect such a claim to be backed by evidence that Internet commerce was on such a knife’s edge (rather than being a clear win for its convenience) and an explanation of what was so special about this vulnerability. Otherwise the claim sounds like nothing more than wishful thinking about the importance of some particular area of security.
Let’s move on to the second part of the claim that, without strong cryptography, “the metastatic growth of SSL-protected commerce and RSA-protected corporate databases would not have been possible”.
The mass-surveillance industry is much older than 1999. See, for example, the book “IBM and the Holocaust”, which traces how IBM’s punch-card databases were used to “organize nearly everything in Germany and then Nazi Europe, from the identification of the Jews in censuses, registrations, and ancestral tracing programs to the running of railroads and organizing of concentration camp slave labor”.
Does a database not count as a “corporate database” if the decisions of what’s going into the database are being made by a government, in this case the Nazis? Does that make the database less evil? Also, does the level of evil depend on whether this was a database operated by IBM for the Nazis or a database operated by the Nazis using technology provided by IBM? Somehow I don’t think these distinctions mattered for people in the concentration camps.
As the 20th century continued, more and more powerful technology made surveillance less and less expensive. Here’s a quote from a 2007 study, “Engaging Privacy and Information Technology in a Digital Age”, issued by a committee formed by the U.S. National Academies of Sciences, Engineering, and Medicine:
Beginning in the late 1950s, the computer became a central tool of organizational surveillance. It addressed problems of space and time in the management of records and data analysis and fueled the trend of centralization of records. The power of databases to aggregate information previously scattered across diverse locations gave institutions the ability to create comprehensive personal profiles of individuals, frequently without their knowledge or cooperation. The possibility of the use of such power for authoritarian purposes awakened images of Orwellian dystopia in the minds of countless journalists, scholars, writers, and politicians during the 1960s, drawing wide-scale public attention to surveillance and lending urgency to the emerging legal debate over privacy rights.
One of the sectors that immediately benefited from the introduction of computer database technology was the credit-reporting industry. … But the credit and insurance industries were not alone. Banks, utility companies, telephone companies, medical institutions, marketing firms, and many other businesses were compiling national and regional dossiers about their clients and competitors in quantities never before seen in the United States.
Surely the 1960s surveillance dossiers by “the credit-reporting industry” and “marketing firms” and so on count as examples of “corporate databases”.
What’s the mechanism by which continued cryptographic export controls would have supposedly stopped the growth of surveillance? How do we reconcile this with the observed facts of government surveillance and corporate surveillance already exploding in the second half of the 20th century, when cryptographic export controls were in place? Why does it matter for this growth whether databases were “RSA-protected” or not? Or, more to the point, whether they were protected by something stronger than RSA-512?
The talk doesn’t answer any of these questions either.
I was, as the talk mentions, one of the people fighting export controls in the 1990s. The reason I was taking action is that I had studied the situation and was troubled by it. In particular, I had concluded that the export controls were contributing to attacks. If I was wrong about that, then I’d like to understand why.
The talk claims that moving to stronger cryptography had the negative effect of creating attacks: specifically, of creating corporate mass surveillance. I’d like to understand the rationale for this claim. But I don’t see where the talk explains the supposed mechanism, or provides any evidence, or addresses the contrary evidence provided by 20th-century surveillance.
Beyond claiming that the actions against export controls contributed to corporate surveillance, the talk claims that these actions came from a narrow perspective of seeing the government as the only problem. For example:
The talk claims that “strategic mistakes in the 1990s—in particular, the mistake of trusting industry and ‘the free market’ while viewing the government as the sole threat to fundamental rights—is a big part of how we got here”.
The talk criticizes the “dated, market-centric folk wisdom that is content to leave the governance of significant choices about fundamental rights—like expression and privacy—to a handful of private companies, assuming the invisible hand will work some BCorp magic”.
The talk criticizes “conflating encryption with privacy and focusing narrowly on the tech itself—on encryption as the goal, not a means to the goal—while focusing concerns about privacy invasion solely on governments, assumed to be always on the verge of tyranny—while ignoring (or even celebrating) the interests of market actors”.
In context, the talk is attributing these perspectives to those of us fighting the “crypto wars”.
...
Read the original on blog.cr.yp.to »
Why are popularizing educational newsletter-frequency writers of important fields like Matt Levine for finance so rare? Because most fields are too slow or ambiguous, and writers of the right combination of expertise, obsession, and persistence are also rare.
Matt Levine is the most well-known newslettrist (“Money Stuff”) in the financial industry, having blogged or written since , finding his niche in popularization after stints in Wall Street & law. His commentary is influential, people leak to him, he sometimes interviews major figures (notoriously, Sam Bankman-Fried) or recounts inside information, and a number of phrases like his “laws of insider trading” (specifically, how not to) have gained currency to the point where readers can now do much of the work of sourcing an issue for him.
He is read by hundreds of thousands of readers (including myself)—everyone from shoeshine boy to billionaire. The size of his audience is respectable, but perhaps its most remarkable feature is that many of those readers have nothing to do with the financial industry. Though his newsletter is officially a Bloomberg News newsletter which he simply writes, many of his readers will visit Bloomberg solely for him, and indeed, might have little idea who or what a Bloomberg is. Nevertheless, readers loyally tune in for each installment every few days to learn about arcane financial instruments they have never heard of before, and (except for Levine) never will again.
One might ask (and indeed, a billionaire once did), “where are the other Matt Levines?” or “who are the Matt Levines of other fields?” That is, where are the Matt Levines of, say, chemistry or drug development, who explain & popularize other major industries which are vital to modern life and appear directly or indirectly in the news often, yet which people are widely ignorant of and whose fundamental dynamics they deeply misunderstand? Why don’t we have a Matt Levine for every industry? Where is the Levine of, I don’t know, petroleum refining or fracking, of shipping containers? Are we just in need of a good list of recommendations? Or could we just set up a prize to coax out some potential Levines in other industries?
When I first met Matt, the first thing I said was “Matt Levine, only you can do what you do!”
My pessimistic conclusion is that Matt Levines are not made, they are born, and that the Matt Levine formula is largely irreproducible: there are few industries where it makes sense, and there are few people suited for this job, and that is the simple answer why there are not many Levines.
So, what is the formula, exactly? The Matt Levine formula is weighty matters, leavened by humor, with basic explanations of complicated financial matters. As Levine has been doing this online for so long (~13 years), relatively speaking, he can often refer to his previous coverage and comment on how things turned out. Certain themes repeat periodically so often that they receive their own catchphrases, like “worries about bond market liquidity” or the laws of insider trading.
Many people owe most of what they know about stock trading, bonds, arcane but controversial matters like naked shorts, meme stocks etc to Levine; and I would be embarrassed to admit how much of my economics knowledge comes through Levine rather than some more rigorous source like my old economics textbooks. This is because Levine provides 3 key ingredients which foster learning:
cases with known outcomes/answers: to develop expertise in a subject, the subject ideally provides many problems, with known answers, of high accuracy. Most subjects do not. But Levine’s subject (finance & law) does.
A good subject for developing expertise is something like chess: an endless number of chess games can be played rapidly, they all have a clear outcome (win/draw/loss), and one can study each one carefully to understand what went right or wrong. A bad subject is something like military strategy: there are not many large-scale wars which have been documented adequately, each war is unique and unrepeatable and a general may participate in only a few in a lifetime, and the outcomes (never mind any individual’s contribution) are often difficult or impossible to judge. Many areas are more like military strategy than chess—how do you judge the expertise of a CEO, or a Hollywood director, or a scientist forecasting the distant future?
Levine works in an area which does provide many clearcut examples, because he focuses on lawsuits, prosecutions, crimes, and deals. These are examples where the outcome will usually be known in a few years, at most, or at least a major update/development, and where the involved parties do all the research necessary, and where the evidence is often completely unambiguous—Levine just has to read their filings, and excerpt the text message where someone boasts about their insider trading in no uncertain terms.
In relying on reporting & filings for his commentary, Levine is very much like the dying local newspaper crime reporter, who relies on the police blotter & courtroom access to rapidly file their articles. These articles are nearly endless, and mostly forgettable because there are, broadly speaking, not really any principles governing local crime. “Some guy got drunk and into a fight and killed another guy” is something that happens frequently, but it illuminates no universal principle; it sheds no light on anything else. It was just something horrible that often happens at random when people take dangerous drugs like alcohol rather than safe ones like nicotine, and is of no broader importance; true-crime addicts consume it for its entertainment value, like darker versions of Hollywood tabloids—“who murdered who” instead of “who’s sleeping with who”. It’s just “one d—n thing after another”. (The TV show Cops has run for 36 years now, and could run for another 36 years without breaking a sweat, but after 72 seasons, what would you have learned that you didn’t learn after the first few?)
But in Levine’s area, this is not the case. Many of these examples are due to highly intelligent, motivated, competent people and organizations clashing for deep reasons. This means that to understand them, you need…
first-principles explanations: most people experience an “illusion of depth”, in that they believe they understand the causal mechanics of an area far better than they do. But in fact, they have learned only a superficial model of the area. Levine corrects this.
Particularly in economic matters, people believe many intuitive folk-economics claims: eg. that building new houses cannot lower prices, or that businesses raise prices simply “when they feel greedy”, or that voluntary transactions must have a loser & a winner when other people transact (but not when they themselves do), or that a policy will have only its intended effects because everyone will just do what they are ordered to (even though they personally work around policies or rules all the time, for doubtless noble reasons).
Levine patiently gives from first-principles (supply-and-demand, market efficiency & adverse selection, people following incentives, public choice theory) explanations of why some thing about markets or contracts is the way it is, how it operated (or failed to operate) in a particular case, and what (and why) the various counterfactual future outcomes are.
These repeated explanations—however simplified and abstracted—gradually build up genuine knowledge which can transfer to the real world beyond some crammed supply-and-demand schematics in a long-forgotten economics class.
spaced repetition enabled by fast turnover: a newsletter is inherently spaced in time, and by returning to themes repeatedly, with various twists or instantiations, the reader learns due to the spacing effect.
In normal news consumption, as opposed to the drip-feed of a columnist on a steady beat, one might read about some instance of financial malfeasance in great depth in the WSJ or NYT, say—but once.
This coverage might be extremely high quality, but nevertheless, such “massed presentation” is a recipe for forgetting. It is like cramming flashcards the night before the test: no matter how good the flashcards are or how much you remember while taking the test, most of it will be forgotten.
However, in Levine’s case, even if specific cases or events resolve quickly, the same principle will show up again soon enough.
So, that is the Levine formula: the global economy furnishes him many rapidly-resolved examples which he can use to entertainingly illustrate basic principles of economics, and by doing that so regularly over so long, readers gain a genuine durable education in economics which they will remember and where they can apply those principles on their own.
Analyzed into parts, we can see why many areas cannot support a Levine: they lack one of the 3 ingredients:
Crime reporting covers crimes which are numerous and rapidly-resolved, but there is not much to learn.
Logistics areas like fracking or oil or shipping containers may offer many cases governed by broad principles, but those cases are often resolved in secrecy, and, due to the extreme boom-bust cycles of those industries, may take decades to ‘mature’ (eg. an over-extended oil company might not go bust for decades, depending on how exactly the cycles play out).
And areas like drug development may be cursed by all 3: drug development often ends in failure for unknown reasons, in the dark, decades later, and what reasons are known may be totally idiosyncratic to a specific drug or disease; what is known may be at best a loose rule of thumb.
It would be nice to have a blog like Matt Levine covering, say, evolutionary biology or Greco-Roman philosophy, but it’s obvious why that isn’t going to work—you can’t have a very entertaining newsletter when it might take centuries for a debate to resolve, if anyone can agree it was resolved at all, and you certainly aren’t going to be able to provide so many clear illustrations of basic principles that reading the newsletter constitutes an education. (You can read & learn about them, but their natural form would be, well, a textbook or a monograph or something like that, and definitely not a newsletter.)
And most areas are more like drug development than they are like hedge funds suing each other over some contractual gimmick or clerical error.
OK, but surely there are still plenty of areas where the preconditions are met? (Particularly rapidly-developing ones, like cryptocurrency or AI recently?) So where are their Matt Levines?
This brings us to the second half of the equation of the Matt Levine formula: the Matt Levine part.
Consider the implication of the 3 requirements for the author, rather than the reader: they are going to see the same human comedies play out, again and again, and have to shout it into the void again, only to watch it happen yet again. The author feels the weight of the repetition far more than any reader does. It is like a teacher who must teach the same curriculum for the 30th time—and again read & grade each of dozens of assignments on it by hundreds of students.
It is not every person who can do so well, or at all. Often a great expert will make a terrible teacher, because they are unable to endure the repetition, or understand the ignorance of the beginner, or treat the floundering student with kindness.
I personally appreciate Levine’s permanent fascination with finance, his willingness to explain the same things over and over. But I don’t think I (or most people) would be able to do so for a long time without turning it into a dull ticket-punching exercise as part of a mundane job rather than an avocation; and indeed, I have shunned my opportunities to become a Levine of some area, when I felt my interest & patience for fools rapidly waning. (Particularly darknet markets: there was considerable demand for commentators like Eileen Ormsby or DeepDotWeb, but I could see no new principles to maintain my interest and just an endless infosec churn of temporary trivia—a warning that burnout was approaching—and quit the area while I was ahead.)
Indeed, expertise is a reason to stop teaching entirely: if you are really interested in an area, and good at it (good enough to understand and commentate ongoing events), then even if you are a great teacher who is gifted at explaining the area to laymen, why would you settle for teaching about it instead of doing it? But if you do it, then you will struggle to write about it regularly publicly: actually doing something, instead of reading a court summary of it, can take years of hard work, and prohibit you from writing about it in various ways. (It will also usually pay much worse—Matt Levine is doubtless compensated handsomely by Bloomberg, but perhaps not as handsomely as if he had kept rising through Big Law, and superstar outcomes imply most would-be newslettrists are paid peanuts.)
So you have a serious problem: anyone good enough to be ‘the Matt Levine of an area’ is also under considerable pressure to not be him.
Why would anyone want to? Well, if you ask someone like Richard Feynman or Andrej Karpathy or Tom Lehrer why they pursued pedagogy instead of the professional pursuits that brought them fame & wealth, the answer would have to be that “they love to teach”. Which is a fine reason, but a passion for teaching a particular subject is far from common.
So you have a filter with many layers: you need areas which fulfill a stringent set of conditions for such an educational newsletter, and you need a very unusual sort of individual, someone who is expert in the area and has preferably gotten their hands dirty, who is good enough to work professionally in it, but who also is capable of explaining it well, at a beginner level, many times, endlessly without burning out or getting bored, because of their intense interest in the area (but again, not quite intense enough to make them go do it instead of write about it).
Each step here filters out most candidates, and by the end, there’s just not that much left. You can’t fix these filters easily. No prize or Substack tweak will suddenly make drug discovery happen fast and fail for clear reasons, or conjure up a Levine in a specific area when you want one.
So, that’s why there are so few Matt Levines, and explains where the other Matt Levines are: they don’t, and usually can’t, exist.
...
Read the original on gwern.net »