10 interesting stories served every morning and every evening.




1 1,334 shares, 58 trendiness

She Came Out of the Bathroom Naked, Employee Says

"There are also sex scenes filmed with the smart glasses — someone is wearing them while having sex. That is why this is so extremely sensitive. There are cameras everywhere in our office, and you are not allowed to bring your own phones or any device that can record," an employee says.

In order to answer questions and interpret what the camera sees, the glasses require that data be processed via Meta's infrastructure — it is not possible to interact with the AI solely locally on the phone.

We contact Synsam and Synoptik for an interview about what training the sales staff receive and how it can be that the answers they give are so different. Synsam responded in writing that its role is to inform customers about the applicable terms and to provide internal training, but that responsibility for complying with Swedish law and Meta's terms ultimately rests with the wearer. Synoptik responded in similar terms, saying its staff are trained in ethics and emphasizing the user's responsibility.

But for the AI assistant to function, voice, text, image and sometimes video must be processed and may be shared onwards. This data processing is done automatically and cannot be turned off. It is not specified how much data may be analysed or for how long it may be stored. Nor is it specified who is given access to the data.

Where do the images come from? Can private videos from Sweden end up on screens in Kenya? Have those who appear in the images consented to appearing in this way?

"Many believe that data must be stored within the EU to be protected. But under GDPR it does not matter where the server is located — as long as the country meets the EU's requirements. If it does not, data may not be sent there."

"Technically, we have data centres in Sweden, Denmark and Ireland, but the physical location is actually less relevant. The legal responsibility lies with Meta Ireland, which is the European entity. Where the data is actually processed — in Europe or in the US — does not change the regulatory framework."

"For it to be permitted to use a service provider in a third country (outside the EU), it is required that robust agreements with instructions are in place. It must also be ensured that there is legal support for the transfers, so that the data that is transferred receives continued strong and equivalent protection when it is processed in a third country. The protection must therefore not become weaker when it is processed by subcontractors," says Petra Wierup.


...

Read the original on www.svd.se »

2 1,087 shares, 43 trendiness

Reactions mixed as B.C. to end time changes, adopt year-round daylight time

Most British Columbians will change their clocks for the last time this Sunday.

B.C. is permanently adopting daylight time, making its "spring forward" on March 8 the last time change, Premier David Eby announced Monday.

"We are done waiting. British Columbia is going to change our clocks just one more time — and then never again," Eby said.

He said changing the clocks causes "all types of problems," from children and their parents losing sleep, to dogs getting up at the wrong time, to more car accidents.

Residents will have eight months to prepare for Nov. 1, 2026, when the clocks would have been turned back one hour, but will now remain the same.

B.C.'s new time zone will be called "Pacific time," according to the province.

Residents in certain communities in eastern B.C. that recognize some form of mountain time will not be affected by the change.

B.C. passed legislation to make this change possible in 2019, but without a firm timeline. Previously, Eby said he would wait for B.C.'s American neighbours — including Washington state, Oregon and California — to do the same.

While U.S. President Donald Trump expressed interest in eliminating daylight time shortly after his 2024 election, Congress has not passed the necessary legislation.

Eby noted the Yukon has al­ready made the change to per­ma­nent day­light time and added he hopes American neigh­bours will fol­low suit.

CBC News reached out to Washington State Governor Bob Ferguson, but did not hear back im­me­di­ately.

University of B.C. business professor Werner Antweiler said that while he was surprised by B.C.'s decision to make this change without waiting for nearby states, he sees benefits in eliminating time changes.

"Time changes, particularly switching clocks an hour forward in the spring, has been found to be harmful, as it imposes jet lag on the entire population," he said.

"Somebody had to move first — it might as well be B.C. — and then see how it plays out," Antweiler said.

"The question is now, who will follow?"

A public engagement report in 2019 for the B.C. government found that 93 per cent of respondents preferred year-round daylight time.

However, 54 per cent of those who responded said it was important for B.C. to align its daylight time policy with other jurisdictions.

Of those who wanted year-round day­light time, three-quar­ters said it was due to health and well­ness con­cerns.

Switching to per­ma­nent stan­dard time was not a vot­ing op­tion.

Bridgitte Anderson, President and CEO of the Greater Vancouver Board of Trade, expressed concern about B.C.'s decision to move forward alone.

"The unilateral change in time is an unwelcome distraction that will make it more difficult to attract and retain businesses in British Columbia," she said in a statement.

"The choice to change the time unilaterally will create an additional headache for businesses operating on both sides of the border."

The Vancouver Airport Authority was among groups in 2019 that pushed for B.C. to align with other jurisdictions on the Pacific coast, to avoid unnecessary confusion for travellers.

A spokesperson for the Vancouver Airport Authority said in response to the announced change that alignment is "fundamental" for airline operations and efficient scheduling.

"We will work with industry to better understand the potential schedule and passenger implications of moving to permanent daylight saving time without neighbouring jurisdictions also following suit," the statement said.

Ryan Mitton, the B.C. director of legislative affairs for the Canadian Federation of Independent Business, said that while some may be relieved not to reset the clocks each year, it may also cause "confusion and disruption."

"This appears motivated by an attempt to distract from the province's $13-billion deficit, not genuine concern for the impacts of the time change," he said in a statement.

...

Read the original on www.cbc.ca »

3 944 shares, 108 trendiness

p5.js Web Editor

...

Read the original on editor.p5js.org »

4 714 shares, 120 trendiness

I'm struggling to think of any online services for which I'd be willing to verify my identity or age

Identity verification and age verification are increasingly common policy conversations at the moment, in numerous countries.

Often, this is in combination with proposals to ban children from varying concepts of "social media", which generally means that everyone would have to prove that they were not a child.

I have yet to see a well-con­sid­ered pro­posal.

Worse, the question that they are trying to answer is rarely stated clearly and concisely.

And it is unusual to see any consideration of broader sociological issues, let alone an emphasis on this, with a focus instead on perceived "quick win" technosolutionism.

I was pon­der­ing last night for which ser­vices I, per­son­ally, would ac­tu­ally be will­ing to ver­ify my age or iden­tity.

None, at least, that I can think of at the moment.

I ap­pre­ci­ate that I com­pute in an un­usual way (when com­pared with most com­puter users), and that much of what I do on­line is about ac­cess­ing my own ser­vices.

Some of those - my fedi server, my RSS server, my messaging services - are built around enjoying stuff from other people's services.

Would I be will­ing to ver­ify my iden­tity or age to read some­one’s RSS feed? No. While I en­joy the myr­iad blogs that I fol­low, none are cru­cial to me.

I oc­ca­sion­ally watch videos (which started on YouTube, but which I down­load into my Jellyfin in­stance), and per­haps YouTube will be forced to do age ver­i­fi­ca­tion. It would be a shame, but again, I’ll just not watch YouTube videos. Not a big loss. Mostly, I buy sec­ond­hand DVDs, rip them, and watch them from my Jellyfin in­stance. I haven’t been asked to ver­ify my age for a DVD pur­chase (online or of­fline) in a very long time.

Friends have had to attempt to block access to their sites from the UK. I can still access their sites via Tor, and that's what I tend to do. I feel sorry for them for the likely significant drop in visitors, likely affecting their enjoyment and in some cases their revenue, and, probably, their incentive to continue to write / post / record stuff.

I don’t use any in­di­vid­ual fo­rums any more (their demise is a shame; I’d pre­fer this over cen­tralised dis­cus­sion sites), nor do I use Reddit. I oc­ca­sion­ally look at the com­ments on HN if one of my posts is sur­faced there, but if HN forced iden­tify or age ver­i­fi­ca­tion, I’d just stop do­ing it. No big deal for me.

Websites with com­ments sec­tions? I don’t want to see the com­ments any­way, so I block those, which makes for a very pleas­ant brows­ing ex­pe­ri­ence. I don’t com­ment my­self.

Code forges / places to contribute to FOSS? Most of my FOSS contributions are non-code, but even so, I use some organisation's GitLab repos, and occasionally I contribute to projects on other forges. I doubt that my contributions are meaningful in themselves, and it may not be an option to switch infrastructure in any case (that might not make the requirement go away), but since I am not a massive, or particularly valuable, contributor, I'd feel less bad about simply stepping away.

For Wikipedia, I’d prob­a­bly re­build my Kiwix in­stance and use that in­stead. Yes, ar­ti­cles would not be quite so up to date, but I rarely ac­cess Wikipedia for rapidly-chang­ing in­for­ma­tion. In any case, there are trade­offs, and per­son­ally I would pre­fer my pri­vacy, the se­cu­rity of my per­sonal data, and, well, just not be­ing part of this kind of cen­sor­ship.

Signal? That would be a pain. I don’t have a workaround for that. I’m hap­pily us­ing XMPP, but as a com­ple­ment to Signal, not an al­ter­na­tive.

Teams/Zoom? I don’t have ac­counts on those ser­vices, but I do join, via my browser, when a client sends me a link. If I was faced with a choice of hav­ing to ver­ify my iden­tity/​age for these ser­vices, then I’d have to con­sider the po­si­tion care­fully. Realistically, I am not in a po­si­tion to say no, I will not use Teams”, as some long-term clients are not go­ing to change their cor­po­rate ap­proach just be­cause Neil does­n’t like some­thing, and I’d rather not lose them as clients. So that could be a pain, if those ser­vices were within scope.

I'll still object to these measures - "I'm okay, Jack" would be a selfish stance - but, in practice, yes, I'd be surprised if they impacted me.

Self-imposed (or, at least, self-con­trolled) dig­i­tal iso­la­tion­ism, per­haps.

Or per­haps, in the fu­ture, some ser­vice will pop up that I will re­ally, re­ally want to use, de­spite it re­quir­ing iden­tity / age ver­i­fi­ca­tion.

...

Read the original on neilzone.co.uk »

5 549 shares, 31 trendiness

Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes

The Condé Nast-owned Ars Technica has ter­mi­nated se­nior AI re­porter Benj Edwards fol­low­ing a con­tro­versy over his role in the pub­li­ca­tion and re­trac­tion of an ar­ti­cle that in­cluded AI-fabricated quotes, Futurism has con­firmed.

Earlier this month, Ars retracted the story after it was found to include fake quotes attributed to a real person. The article — a write-up of a viral incident in which an AI agent seemingly published a hit piece about a human engineer named Scott Shambaugh — was initially published on February 13. After Shambaugh pointed out that he'd never said the quotes attributed to him, Ars' editor-in-chief Ken Fisher apologized in an editor's note, in which he confirmed that the piece included "fabricated quotations generated by an AI tool and attributed to a source who did not say them" and characterized the error as a "serious failure of our standards." He added that, upon further review, the error appeared to be an "isolated incident." (404 Media first reported on the retraction.)

Shortly after Fisher's editor's note was published, Edwards, one of the report's two bylined authors, took to Bluesky to take "full responsibility" for the inclusion of the fabricated quotes.

In the post, Edwards said that he was sick at the time, and while working from bed "with a fever and very little sleep," he "unintentionally made a serious journalistic error" as he attempted to use an "experimental Claude Code-based AI tool" to help him "extract relevant verbatim source material." He said the tool wasn't being used to generate the article, but was instead designed to help list "structured references" to put in an outline. When the tool failed to work, said Edwards, he decided to try and use ChatGPT to help him understand why.

"I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words," Edwards continued. He emphasized that the text of the article was "human-written by us, and this incident was isolated and is not representative of Ars' editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that."

Edwards also stressed that his colleague Kyle Orland, the site's senior gaming editor who co-bylined the retracted story, had "no role in this error."

The controversy was met with a wave of pushback and speculation from Ars readers, many of whom expressed deep frustration and disappointment in a lengthy comment thread on the website. On February 27, Ars creative director Aurich Lawson, while closing the comment thread, said that Ars has "completed its review of this matter" and that "the appropriate internal steps have been taken."

"In the coming weeks, we'll publish a reader-facing guide explaining how we use and do not use AI in our work," Lawson wrote. "We do not comment on personnel decisions."

As of February 28, Edwards' bio on Ars was changed to past tense, according to an archived version of the webpage. It now reads that Edwards "was a reporter at Ars, where he covered artificial intelligence and technology history."

Futurism reached out to Ars, Condé Nast, and Edwards to in­quire about the re­porter’s em­ploy­ment sta­tus. Neither the pub­li­ca­tion nor its owner replied. Edwards said he was un­able to com­ment at this time.

Ars' retraction isn't the first AI controversy to rock a newsroom, nor to anger a publication's readers. It also comes at a moment in which many media bosses are pushing staff to find uses for AI — as are executives across most industries — even while clear guidelines around use of the technology that uphold editorial ethics remain elusive.

These edicts to integrate AI, meanwhile, are backdropped by a complicated, ever-shifting landscape: contentious copyright battles between news giants and AI companies; simultaneous deal-striking by news giants and AI companies; an internet increasingly full of AI-generated slop news and misinformation; and a traffic cliff tied to Google's "AI Overviews," which now paraphrase news instead of pointing readers to a list of blue links.

It’s a com­bus­tive, dis­ori­ent­ing mo­ment in the his­tory of me­dia and tech­nol­ogy, when lines in the sand are be­ing drawn by both jour­nal­ists and their au­di­ences. And the Ars fall­out un­der­lines a phe­nom­e­non we’ve seen again and again, as even peo­ple who are deeply fa­mil­iar with AI and its short­com­ings can end up re­ly­ing on it at a crit­i­cal mo­ment — and in the process, fall vic­tim to some­thing much older than gen­er­a­tive AI: hu­man er­ror.

"The irony of an AI reporter being tripped up by AI hallucination is not lost on me," Edwards said in his February 15 Bluesky post. "I take accuracy in my work very seriously and this is a painful failure on my part."

More on AI and me­dia: Google’s AI Is Actively Destroying the News Media

...

Read the original on futurism.com »

6 545 shares, 24 trendiness

How I built a sub-500ms latency voice agent from scratch

I’ve spent the last six months work­ing on a startup, build­ing agent pro­to­types for one of the largest con­sumer pack­aged goods com­pa­nies in the world. As part of that work, our team re­lied on off-the-shelf voice agent plat­forms to help the com­pany op­er­ate more ef­fec­tively. Though I can’t go into the busi­ness de­tails, the tech­ni­cal take­away was clear: voice agents are pow­er­ful, and there are bril­liant off-the-shelf ab­strac­tions like Vapi and ElevenLabs that make spin­ning up voice agents a breeze. But: these ab­strac­tions also hide a sur­pris­ing amount of com­plex­ity.

Just a few days be­fore I started writ­ing this, ElevenLabs raised one of the largest fund­ing rounds in the space, and new fron­tier mod­els like GPT-5.3 and Claude 4.6 dropped. This made me won­der: could I ac­tu­ally build the or­ches­tra­tion layer of a voice agent my­self? Not just a toy ex­per­i­ment, but some­thing that could have close to the same per­for­mance as an all-in-one plat­form like Vapi?

To my surprise, I could. It took about a day and roughly $100 in API credits - and the result outperformed Vapi's equivalent setup on latency, achieving ~400ms end-to-end response times.

This es­say walks through the full build: why voice agents are de­cep­tively hard, how the turn-tak­ing loop works, how I wired to­gether STT, LLM, and TTS into a stream­ing pipeline, and how ge­og­ra­phy and model se­lec­tion made the biggest dif­fer­ence. Along the way, you can lis­ten to au­dio demos and play with in­ter­ac­tive di­a­grams of the ar­chi­tec­ture.

Voice agents are a big step-change in com­plex­ity com­pared to agen­tic chat.

Text agents are relatively simple, because the end-user's actions coordinate everything. The model produces text, the user reads it, types a reply, and hits "send." That action defines the turn boundary. Nothing needs to happen until the user explicitly advances the flow.

Voice does­n’t work that way. The or­ches­tra­tion is con­tin­u­ous, real-time, and must care­fully man­age mul­ti­ple mod­els at once. At any mo­ment, the sys­tem must de­cide: is the user speak­ing, or are they lis­ten­ing? And the tran­si­tions be­tween those two states are where all the dif­fi­culty lives.

When the user starts speak­ing, the agent must im­me­di­ately stop talk­ing - can­cel gen­er­a­tion, can­cel speech syn­the­sis, flush any buffered au­dio. When the user stops speak­ing, the sys­tem must con­fi­dently de­cide that they’re done, and start re­spond­ing with min­i­mal de­lay. Get ei­ther wrong and the con­ver­sa­tion feels bro­ken.

This is­n’t as sim­ple as mea­sur­ing loud­ness. Human speech in­cludes pauses, hes­i­ta­tions, filler sounds, back­ground noise, and non-ver­bal ac­knowl­edge­ments that should­n’t in­ter­rupt the agent. Downstream from this are the things every­one no­tices: end-to-end la­tency, awk­ward si­lences, agents cut­ting you off, or talk­ing over you.

We judge the qual­ity of voice com­mu­ni­ca­tion sub­con­sciously, as it is so deeply in­grained in who we are. Small tim­ing er­rors that would be ac­cept­able in text - a pause here, a de­lay there - im­me­di­ately feel wrong in speech.

In prac­tice, a good voice agent is not about any sin­gle model. It’s an or­ches­tra­tion prob­lem. You string to­gether mul­ti­ple com­po­nents, and the qual­ity of the ex­pe­ri­ence de­pends al­most en­tirely on how those pieces are co­or­di­nated in time.

The is­sue with all-in-one SDKs is that you get a long list of pa­ra­me­ters to tune, with­out re­ally un­der­stand­ing which ones mat­ter or why. When some­thing feels off, it’s hard to know where the prob­lem lives. That’s what pushed me to go one layer deeper and build the core loop my­self.

Before writ­ing any code, I spent time it­er­at­ing on the ar­chi­tec­ture with ChatGPT out­side of my ed­i­tor. I’ve found this use­ful when work­ing in un­fa­mil­iar do­mains: build a men­tal model first, then im­ple­ment.

My goal with agen­tic cod­ing is al­ways the same. I want to un­der­stand the struc­ture of what I’m build­ing well enough that I can open any file and im­me­di­ately see why it ex­ists and how it fits into the sys­tem.

After a few it­er­a­tions, I re­duced the en­tire prob­lem to a sin­gle loop and a tiny state ma­chine. At the core, a voice agent only needs to an­swer one ques­tion: is the user speak­ing, or lis­ten­ing?

There are two states:

* The user is speaking.

* The user is listening.

And two tran­si­tions where every­thing hap­pens:

* When the user starts speak­ing, we must stop all agent au­dio and gen­er­a­tion im­me­di­ately.

* When the user stops speak­ing, we must start gen­er­at­ing and stream­ing the agent re­sponse with as lit­tle la­tency as pos­si­ble.

This turn-de­tec­tion logic is the core of every voice sys­tem, so I de­cided to start there.

For the first im­ple­men­ta­tion, I de­lib­er­ately avoided tran­scrip­tion, lan­guage mod­els, and text-to-speech. I wanted the sim­plest check­point that still felt di­rec­tion­ally like a voice agent.

The setup was min­i­mal. A small FastAPI server han­dles an in­com­ing WebSocket con­nec­tion from Twilio, which streams base64-en­coded μ-law au­dio pack­ets at 8kHz in ~20ms frames. Each packet was de­coded and fed into a Voice Activity Detection model - in my case, Silero VAD.

Silero is a tiny, open-source model (around 2MB) that can quickly de­ter­mine whether a short chunk of au­dio con­tains speech. Turn-taking is a much harder prob­lem than speech de­tec­tion, but VAD is still a use­ful prim­i­tive, es­pe­cially for de­cid­ing whether au­dio should be for­warded to more ex­pen­sive down­stream sys­tems.

On top of this, I built a triv­ial state ma­chine: a boolean flag rep­re­sent­ing whether the user was cur­rently speak­ing or lis­ten­ing. When the sys­tem de­tected the end of speech, it played a pre-recorded WAV file back to the caller. When speech re­sumed, it sent a clear sig­nal over the Twilio WebSocket to flush any buffered au­dio and stop play­back im­me­di­ately.
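
As a concrete sketch of that first loop: the snippet below decodes Twilio's μ-law frames, feeds them to Silero VAD, and flips a single boolean. It assumes the `silero-vad` pip package and the standard-library `audioop` module (removed in Python 3.13); the thresholds and the `handle_media_frame` helper are illustrative, not the project's actual code.

```python
# Minimal sketch of the VAD-only loop: decode Twilio mu-law frames, run Silero VAD,
# and track a single speaking/listening flag. Thresholds and names are illustrative.
import audioop   # stdlib; removed in Python 3.13
import base64
import torch
from silero_vad import load_silero_vad

vad = load_silero_vad()     # ~2MB model
SAMPLE_RATE = 8000          # Twilio media streams are 8kHz mu-law
CHUNK = 256                 # Silero expects 256-sample chunks at 8kHz

pcm_buffer = bytearray()
user_speaking = False       # the entire "state machine"

def handle_media_frame(payload_b64: str) -> str | None:
    """Feed one ~20ms Twilio frame; return 'speech_start' or 'speech_end' on a transition."""
    global user_speaking
    mulaw = base64.b64decode(payload_b64)
    pcm_buffer.extend(audioop.ulaw2lin(mulaw, 2))       # mu-law -> 16-bit linear PCM

    event = None
    while len(pcm_buffer) >= CHUNK * 2:                 # 2 bytes per 16-bit sample
        chunk = pcm_buffer[:CHUNK * 2]
        del pcm_buffer[:CHUNK * 2]
        samples = torch.frombuffer(chunk, dtype=torch.int16).float() / 32768.0
        prob = vad(samples, SAMPLE_RATE).item()         # speech probability for this chunk
        # A real system debounces these transitions (e.g. require ~500ms of silence).
        if prob > 0.6 and not user_speaking:
            user_speaking, event = True, "speech_start"   # -> send Twilio "clear", stop playback
        elif prob < 0.4 and user_speaking:
            user_speaking, event = False, "speech_end"    # -> play the pre-recorded WAV
    return event
```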

I started this way to iso­late the hard­est part of the prob­lem - turn de­tec­tion - with­out wiring up the rest of the sys­tem.

The re­sult, while ba­sic, was al­ready im­pres­sive:

VAD-only test - the agent plays a pre-recorded clip when­ever I stop talk­ing, and cuts off in­stantly when I in­ter­rupt.

The agent responds immediately when I stop speaking, and shuts up the instant I interrupt it. Even without transcription or generation, the loop feels somewhat conversational.

This also gave me a use­ful base­line for la­tency. With ea­ger turn-end­ing and a pre-recorded re­sponse, the sys­tem rep­re­sented a lower bound on how fast a voice agent could pos­si­bly feel.

This first pass was valu­able, but its lim­i­ta­tions were ob­vi­ous.

Detecting the pres­ence of speech is not the same as know­ing when a user has fin­ished their thought. A slow speaker might pause for sev­eral sec­onds mid-sen­tence. A pure VAD would ea­gerly de­cide the turn had ended and start talk­ing too early.

In prac­tice, real turn-tak­ing re­quires com­bin­ing low-level au­dio sig­nals with higher-level se­man­tic cues from the tran­script it­self. That meant the VAD-only ap­proach could­n’t scale to a real sys­tem.

What it did give me was a clean con­trol-flow model and a solid la­tency base­line to com­pare against. With that in place, it was time to wire in the full pipeline.

The next step was re­plac­ing my hand-rolled turn de­tec­tion with some­thing de­signed for pro­duc­tion: Deepgram’s Flux.

Flux is a streaming API that combines transcription and turn detection in a single model. You feed it a continuous audio stream, and it emits events - most importantly, "start of turn" and "end of turn," with the final transcript included at the end.

This re­placed the core of my ar­chi­tec­ture. Flux be­came the source of truth for when the agent should speak and when it should im­me­di­ately stop and lis­ten.
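
In code, the dispatch around Flux is a small loop over its event stream. The event and field names below ("StartOfTurn", "EndOfTurn", "transcript") are placeholders for whatever Flux actually sends over its WebSocket; the point is that one socket drives both barge-in and the agent turn.

```python
# Sketch of the Flux-driven dispatch loop. Event/field names are placeholders;
# on_barge_in and on_user_turn_end are callbacks into the rest of the system.
import json

async def flux_dispatch(flux_ws, on_barge_in, on_user_turn_end) -> None:
    async for message in flux_ws:                 # Flux pushes JSON events as they happen
        event = json.loads(message)
        kind = event.get("type")
        if kind == "StartOfTurn":                 # user began speaking: silence the agent now
            await on_barge_in()
        elif kind == "EndOfTurn":                 # user finished: transcript is final, start replying
            await on_user_turn_end(event.get("transcript", ""))
```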

On top of that, I built a ded­i­cated agent-turn pipeline. When Flux sig­nals the end of a user turn, this pipeline kicks off a real-time se­quence:

* The tran­script and con­ver­sa­tion his­tory are sent to an LLM to be­gin gen­er­a­tion.

* As soon as the first to­ken ar­rives, it is streamed into a text-to-speech ser­vice over WebSocket.

* Every au­dio packet pro­duced by TTS is for­warded di­rectly to the out­bound Twilio socket.

The core idea is to pipeline every stream so as to maximally reduce latency.
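
Here is a sketch of that pipelined agent turn, using the OpenAI Python SDK's streaming interface. `open_tts_socket`, `tts_audio_frames`, and `send_to_twilio` are hypothetical stand-ins for the ElevenLabs and Twilio plumbing, and the model name is simply the one used elsewhere in the post.

```python
# Sketch of a fully pipelined agent turn: LLM tokens stream into TTS as they arrive,
# and TTS audio frames are relayed to the caller immediately. Helper names are stand-ins.
import asyncio
from openai import AsyncOpenAI

llm = AsyncOpenAI()

async def run_agent_turn(history: list[dict], twilio_ws) -> None:
    tts_ws = await open_tts_socket()        # ideally pulled from a warm pool (see below)

    async def pump_tokens() -> None:
        stream = await llm.chat.completions.create(
            model="gpt-4o-mini", messages=history, stream=True)
        async for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                await tts_ws.send_text(chunk.choices[0].delta.content)  # forward tokens immediately
        await tts_ws.flush()                # tell TTS there is no more text coming

    async def pump_audio() -> None:
        async for frame in tts_audio_frames(tts_ws):    # base64 mu-law frames from TTS
            await send_to_twilio(twilio_ws, frame)      # relay to the caller as they arrive

    # Run both pumps concurrently so generation never blocks playback.
    await asyncio.gather(pump_tokens(), pump_audio())
```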

One im­por­tant de­tail here was keep­ing text-to-speech con­nec­tions warm. Establishing a fresh WebSocket to ElevenLabs adds a few hun­dred mil­lisec­onds of la­tency, so I kept a small pool of pre-con­nected sock­ets alive. That alone shaved roughly 300ms off the re­sponse time.
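
One way to keep those connections warm, sketched with the `websockets` package: a small queue of pre-connected sockets that is topped up in the background. The URL, pool size, and the omitted keepalive logic are all simplifications.

```python
# Warm-pool sketch: keep a few TTS WebSockets pre-connected so an agent turn
# never pays the connection handshake. A real pool also needs keepalives,
# since idle sockets get closed by the server.
import asyncio
import websockets

class WarmSocketPool:
    def __init__(self, url: str, size: int = 3):
        self.url, self.size = url, size
        self.pool: asyncio.Queue = asyncio.Queue()

    async def fill(self) -> None:
        while self.pool.qsize() < self.size:
            self.pool.put_nowait(await websockets.connect(self.url))

    async def acquire(self):
        ws = await self.pool.get()
        asyncio.create_task(self.fill())    # replace the socket we just handed out
        return ws
```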

Barge-ins were han­dled sym­met­ri­cally. When Flux de­tects that the user starts speak­ing, the agent pipeline is im­me­di­ately can­celled: in-flight LLM gen­er­a­tion is stopped, TTS is torn down, and a clear mes­sage is sent to Twilio to flush any queued au­dio. The agent falls silent in­stantly, and Flux re­sumes lis­ten­ing for the next end-of-turn.
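
The barge-in path can be sketched with asyncio task cancellation: cancel the pipelined turn (its cleanup stops LLM generation and tears down TTS), then send Twilio's "clear" event so any already-buffered audio is dropped. How `current_turn` and `stream_sid` are tracked is an assumption about the surrounding code.

```python
# Barge-in sketch: cancel the in-flight agent turn, then flush Twilio's playback buffer.
import asyncio
import json

async def handle_barge_in(current_turn: asyncio.Task | None, twilio_ws, stream_sid: str) -> None:
    # 1. Cancel the pipelined turn; its teardown closes the TTS socket and stops generation.
    if current_turn and not current_turn.done():
        current_turn.cancel()
        try:
            await current_turn
        except asyncio.CancelledError:
            pass
    # 2. Tell Twilio to drop any audio it has already queued for playback.
    await twilio_ws.send_text(json.dumps({"event": "clear", "streamSid": stream_sid}))
```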

The full architecture - Twilio streams audio to Deepgram Flux for turn detection, which triggers either a barge-in (cancel everything) or an agent turn (LLM → TTS → audio back to the caller).

My first test was to run the or­ches­tra­tion en­tirely lo­cally, mainly to un­der­stand how ge­o­graphic place­ment af­fects la­tency. I built most of this pro­ject from a re­mote wooden cabin in south­ern Turkey, while trav­el­ing and hik­ing, so this setup was far from ideal.

Latency trace run­ning lo­cally from south­ern Turkey. TTFT av­er­ages ~1.3s, with first au­dio ar­riv­ing ~1.6s af­ter the turn ends.

End-to-end la­tency av­er­aged around 1.6 sec­onds, mea­sured from my server. According to Twilio, their me­dia edge adds roughly ~100ms on top of that, bring­ing to­tal per­ceived la­tency to about 1.7s.

That’s still quite far from Vapi’s ~840ms la­tency for a com­pa­ra­ble con­fig­u­ra­tion - more than twice as slow. At that point, the de­lay be­comes no­tice­able. Conversations start to feel hes­i­tant. Pauses stretch just long enough to feel awk­ward.

This was a use­ful re­minder: even with a cor­rect ar­chi­tec­ture, ge­og­ra­phy mat­ters.

In our ar­chi­tec­ture, every packet of au­dio hops to and from three ex­ter­nal ser­vices. If you want to min­i­mize la­tency, the or­ches­tra­tion layer needs to live phys­i­cally close to them.

To further improve latency, I deployed the system on Railway in the EU region and configured Twilio, Deepgram, and ElevenLabs to use their EU deployments as well. (Note: ElevenLabs automatically chooses the nearest region by default.)

The dif­fer­ence was im­me­di­ate:

Latency trace af­ter de­ploy­ing to Railway EU. TTFT drops to ~300-500ms, with first au­dio at ~550-750ms.

The av­er­age la­tency mea­sured at the server dropped to ~690ms, which trans­lates to a to­tal end-to-end la­tency of roughly ~790ms once Twilio’s edge is in­cluded - more than 2x im­prove­ment!

For com­par­i­son, the equiv­a­lent con­fig­u­ra­tion in Vapi - us­ing the same STT, LLM, and TTS mod­els - es­ti­mates around ~840ms. In this setup, the cus­tom or­ches­tra­tion ac­tu­ally beats Vapi’s own es­ti­mates by about 50ms.

More im­por­tantly, the sub­jec­tive dif­fer­ence is ob­vi­ous. The con­ver­sa­tion feels re­spon­sive. Interruptions work cleanly. The agent no longer feels like it’s hes­i­tat­ing be­fore every re­ply.

Hosted pipeline - the con­ver­sa­tion feels nat­ural, with clean in­ter­rup­tions and fast re­sponses.

So far in this project, I'd been using gpt-4o-mini, which seemed to be the lowest-latency model available from OpenAI. However, after digging a bit deeper, I discovered that the inference latency of Groq's llama-3.3-70b could be substantially lower.

I wanted to ver­ify this for my­self, so I set up a small test har­ness on my pro­duc­tion server. It ran 360 chat com­ple­tions across a range of mod­els, can­celling each re­quest im­me­di­ately af­ter the first to­ken was re­ceived. Below are the re­sult­ing first-to­ken la­tency mea­sure­ments:
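
The harness itself is only a few lines. This sketch uses the OpenAI Python SDK (Groq exposes an OpenAI-compatible endpoint, so the same client works with a different base URL); the prompt, per-model run count, and summary statistic are illustrative rather than the exact methodology.

```python
# TTFT harness sketch: start the clock, stream a completion, stop at the first
# content token, then abandon the rest of the generation.
import statistics
import time
from openai import AsyncOpenAI

async def ttft_once(client: AsyncOpenAI, model: str) -> float:
    start = time.perf_counter()
    stream = await client.chat.completions.create(
        model=model, stream=True,
        messages=[{"role": "user", "content": "Say hello."}])
    async for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            break                           # first token received: stop the clock
    elapsed = time.perf_counter() - start
    await stream.close()                    # cancel the remainder of the request
    return elapsed

async def benchmark(model: str, base_url: str | None = None, runs: int = 60) -> None:
    client = AsyncOpenAI(base_url=base_url)  # e.g. Groq's OpenAI-compatible endpoint
    samples = [await ttft_once(client, model) for _ in range(runs)]
    print(f"{model}: median TTFT {statistics.median(samples) * 1000:.0f}ms")
```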

First-token la­tency across providers - Groq’s mod­els are in a league of their own.

As you can see, Groq’s mod­els leave every­thing from OpenAI in the dust. As far as I can tell, this is the low­est achiev­able la­tency with­out run­ning your own in­fer­ence in­fra­struc­ture. It’s gen­uinely im­pres­sive - ~80ms is faster than a hu­man blink, which is usu­ally quoted at around 100ms.

I swapped out gpt-4o-mini for Groq’s llama-3.3-70b, and the re­sults hon­estly sur­prised me:

Aside from the very first turn, the con­ver­sa­tion felt smooth and snappy. With av­er­age end-to-end la­tency hov­er­ing around ~400ms, I was strug­gling to keep up - lis­ten­ing back to the record­ing, it sounds like I was tak­ing longer to re­ply than the agent.

Latency trace with Groq - end-to-end la­tency av­er­ag­ing ~400ms, with first au­dio of­ten ar­riv­ing in un­der 500ms.

At this la­tency, in­ter­rup­tion han­dling also feels dra­mat­i­cally bet­ter. The agen­t’s voice cuts out al­most im­me­di­ately af­ter I start speak­ing, mak­ing the in­ter­ac­tion feel far closer to a real con­ver­sa­tion than any­thing I’d ex­pe­ri­enced be­fore.

I was re­ally sur­prised that I could beat off-the-shelf providers by a full mul­ti­ple. From ex­ten­sive ex­pe­ri­ence work­ing with both Vapi and Elevenlabs agent SDKs on a real pro­duc­tion use case, I found that my ini­tial pro­to­type is able to re­li­ably achieve a 2x la­tency im­prove­ment, which is a huge deal when it comes to serv­ing nat­ural-sound­ing and pleas­ant voice agent in­ter­ac­tions.

Building a voice agent from scratch taught me what ac­tu­ally mat­ters in get­ting AI voice con­ver­sa­tions to feel snappy:

Latency

What users ex­pe­ri­ence as responsiveness” is the time from when they stop speak­ing to when they hear the first syl­la­ble of the agen­t’s re­sponse. That path runs through turn de­tec­tion, tran­scrip­tion, LLM time-to-first-to­ken, text-to-speech syn­the­sis, out­bound au­dio buffer­ing, and net­work hops be­tween all of them. You op­ti­mize this by iden­ti­fy­ing which stages sit on the crit­i­cal path and mak­ing sure noth­ing blocks un­nec­es­sar­ily.

Model choice and TTFT

In voice sys­tems, re­ceiv­ing the first LLM to­ken is the mo­ment the en­tire pipeline can be­gin mov­ing. The TTFT ac­counts for more than half of the to­tal la­tency, so choos­ing a la­tency-op­ti­mised in­fer­ence setup like Groq made the biggest dif­fer­ence. Model size also seems to mat­ter: larger mod­els may be re­quired for some com­plex use cases, but they also im­pose a la­tency cost that’s very no­tice­able in con­ver­sa­tional set­tings. The right model de­pends on the job, but TTFT is the met­ric that ac­tu­ally mat­ters.

Pipelining the agent turn.

A production voice agent cannot be built as STT → LLM → TTS run as three sequential steps. The agent turn must be a streaming pipeline: LLM tokens flow into TTS as soon as they arrive, and audio frames flow to the phone immediately. The goal is to never unnecessarily block generation. Anything that waits for a full response before moving on is wasting time.

Cancelling in-flight calls.

Interruption han­dling must prop­a­gate to all parts of the agent turn, im­me­di­ately. When a user starts speak­ing, the sys­tem must can­cel LLM gen­er­a­tion, tear down TTS, and flush any buffered out­bound au­dio si­mul­ta­ne­ously. Missing any one of those makes barge-ins feel bro­ken.

Geography is a first-class de­sign pa­ra­me­ter.

Once you or­ches­trate mul­ti­ple ex­ter­nal ser­vices - tele­phony, STT, TTS, LLM - place­ment dom­i­nates every­thing. If those ser­vices aren’t co-lo­cated, la­tency com­pounds quickly. Moving the or­ches­tra­tion layer and us­ing the cor­rect re­gional end­points cut e2e la­tency in half. Service place­ment makes a huge dif­fer­ence.

Taken to­gether, these lessons ex­plain why voice feels de­cep­tively hard. Real-time sys­tems are un­for­giv­ing, and hu­mans are ex­tremely sen­si­tive to tim­ing er­rors.

Off-the-shelf vs. be­spoke

This is­n’t an ar­gu­ment against plat­forms like Vapi or ElevenLabs. Those sys­tems of­fer far more than or­ches­tra­tion: APIs, ob­serv­abil­ity, re­li­a­bil­ity, and deep con­fig op­tions that would take real ef­fort to re­pro­duce. For most teams, re­build­ing all of that would be a mis­take - be­ing able to test and val­i­date a voice agent app with­out get­ting to this level of tech­ni­cal depth is truly amaz­ing, and that’s how I first got ex­cited about the tech­nol­ogy.

But build­ing your voice agent your­self - even a stripped-down one - is still a worth­while ex­er­cise. It forces you to un­der­stand what the pa­ra­me­ters ac­tu­ally con­trol, why cer­tain de­faults ex­ist, and where the real bot­tle­necks live. That un­der­stand­ing makes you bet­ter at con­fig­ur­ing the off-the-shelf plat­forms, and in some cases lets you build some­thing more be­spoke when your use case de­mands it.

Voice is an or­ches­tra­tion prob­lem. Once you see the loop clearly, it be­comes a solv­able en­gi­neer­ing prob­lem.

The full source code is avail­able on GitHub: github.com/​Nick­Tikhonov/​shuo

Follow me on X for more.

...

Read the original on www.ntik.me »

7 506 shares, 76 trendiness

Apple introduces MacBook Pro with all-new M5 Pro and M5 Max


Apple in­tro­duces MacBook Pro with all‑new M5 Pro and M5 Max, de­liv­er­ing break­through pro per­for­mance and next-level on-de­vice AI

The world’s best pro lap­top raises the bar again with blaz­ing-fast CPU and GPU per­for­mance, plus up to 2x faster SSD speeds and 1TB of start­ing stor­age

CUPERTINO, CALIFORNIA Apple to­day an­nounced the lat­est 14- and 16-inch MacBook Pro with the all-new M5 Pro and M5 Max, bring­ing game-chang­ing per­for­mance and AI ca­pa­bil­i­ties to the world’s best pro lap­top. With M5 Pro and M5 Max, MacBook Pro fea­tures a new CPU with the world’s fastest CPU core,1 a next-gen­er­a­tion GPU with a Neural Accelerator in each core, and higher uni­fied mem­ory band­width, al­to­gether de­liv­er­ing up to 4x AI per­for­mance com­pared to the pre­vi­ous gen­er­a­tion, and up to 8x AI per­for­mance com­pared to M1 mod­els.2 This al­lows de­vel­op­ers, re­searchers, busi­ness pro­fes­sion­als, and cre­atives to un­lock new AI-enabled work­flows right on MacBook Pro. It now comes with up to 2x faster SSD per­for­mance2 and starts at 1TB of stor­age for M5 Pro and 2TB for M5 Max. The new MacBook Pro in­cludes N1, an Apple-designed wire­less net­work­ing chip that en­ables Wi-Fi 7 and Bluetooth 6, bring­ing im­proved per­for­mance and re­li­a­bil­ity to wire­less con­nec­tions. It also of­fers up to 24 hours of bat­tery life; a gor­geous Liquid Retina XDR dis­play with a nano-tex­ture op­tion; a wide ar­ray of con­nec­tiv­ity, in­clud­ing Thunderbolt 5; a 12MP Center Stage cam­era; stu­dio-qual­ity mics; an im­mer­sive six-speaker sound sys­tem; Apple Intelligence fea­tures; and the power of ma­cOS Tahoe. The new MacBook Pro comes in space black and sil­ver, and is avail­able to pre-or­der start­ing to­mor­row, March 4, with avail­abil­ity be­gin­ning Wednesday, March 11.

"MacBook Pro with M5 Pro and M5 Max redefines what's possible on a pro laptop, now up to 4x faster than the previous generation," said John Ternus, Apple's senior vice president of Hardware Engineering. "With Neural Accelerators in the GPU, the new MacBook Pro enables professionals to run advanced LLMs on device and unlock capabilities that no other laptop can do — all while maintaining exceptional battery life. Combined with even faster unified memory and storage, it empowers users to take their work even further, unleashing new possibilities and pushing the boundaries of what they can do."

M5 Pro and M5 Max are built us­ing the new Apple-designed Fusion Architecture and en­gi­neered from the ground up for AI. This in­no­v­a­tive de­sign com­bines two dies into a sin­gle sys­tem on a chip, pro­vid­ing tremen­dous per­for­mance boosts. M5 Pro and M5 Max fea­ture a new up-to-18-core CPU with 6 su­per cores, the world’s fastest CPU core,1 and 12 all-new per­for­mance cores, op­ti­mized for power-ef­fi­cient, mul­ti­threaded pro work­loads — al­to­gether de­liv­er­ing up to 30 per­cent faster per­for­mance.3 M5 Pro is de­signed for users run­ning com­plex work­flows, like coders op­ti­miz­ing al­go­rithms and pho­tog­ra­phers pro­cess­ing mas­sive im­age li­braries, while M5 Max is for those push­ing the ab­solute lim­its, such as en­gi­neers run­ning rig­or­ous sim­u­la­tions.

Scaling up performance from M5 and offering the same breakthrough GPU architecture with a Neural Accelerator in each core, M5 Pro and M5 Max deliver up to 4x faster LLM prompt processing than M4 Pro and M4 Max, and up to 8x faster AI image generation than M1 Pro and M1 Max.2 This enables AI researchers and developers to train custom models locally, and creative professionals to leverage AI-powered tools for video editing, music production, and design work. Both chips also bring up to a 50 percent increase in graphics performance compared to M4 Pro and M4 Max,3,4 enabling motion designers to work with complex 3D scenes in real time and VFX artists to preview effects instantly. And with a faster and more power-efficient Neural Engine and higher unified memory bandwidth, the new MacBook Pro delivers incredible on-device AI capabilities. The increase in unified memory bandwidth enables complex workflows like intensive AI model training and massive video projects. M5 Pro supports up to 64GB of unified memory with up to 307GB/s of memory bandwidth, while M5 Max supports up to 128GB of unified memory with up to 614GB/s of memory bandwidth.

Up to 7.8x faster AI im­age gen­er­a­tion per­for­mance when com­pared to MacBook Pro with M1 Pro, and up to 3.7x faster than MacBook Pro with M4 Pro.

Up to 6.9x faster LLM prompt pro­cess­ing when com­pared to MacBook Pro with M1 Pro, and up to 3.9x faster than MacBook Pro with M4 Pro.

Up to 5.2x faster 3D ren­der­ing in Maxon Redshift when com­pared to MacBook Pro with M1 Pro, and up to 1.4x faster than MacBook Pro with M4 Pro.

Up to 1.6x faster gam­ing per­for­mance with ray trac­ing in games like Cyberpunk 2077: Ultimate Edition when com­pared to MacBook Pro with M4 Pro.

Up to 8x faster AI im­age gen­er­a­tion per­for­mance when com­pared to MacBook Pro with M1 Max, and up to 3.8x faster than MacBook Pro with M4 Max.

Up to 6.7x faster LLM prompt pro­cess­ing when com­pared to MacBook Pro with M1 Max, and up to 4x faster than MacBook Pro with M4 Max.

Up to 5.4x faster video ef­fects ren­der­ing per­for­mance in Blackmagic DaVinci Resolve Studio when com­pared to MacBook Pro with M1 Max, and up to 3x faster than MacBook Pro with M4 Max.

Up to 3.5x faster AI video-en­hanc­ing per­for­mance in Topaz Video when com­pared to MacBook Pro with M4 Max.

The new MacBook Pro de­liv­ers up to 2x faster read/​write per­for­mance com­pared to the pre­vi­ous gen­er­a­tion,4 reach­ing speeds of up to 14.5GB/s5 and ac­cel­er­at­ing work­flows for pro­fes­sion­als work­ing across 4K and 8K video pro­jects, LLMs, and com­plex datasets. MacBook Pro with M5 Pro now comes stan­dard with 1TB of stor­age, while MacBook Pro with M5 Max now comes stan­dard with 2TB. And the 14-inch MacBook Pro with M5 now comes stan­dard with 1TB of stor­age.

Even More Value for Upgraders

The new 14- and 16-inch MacBook Pro with M5 Pro and M5 Max mark a ma­jor leap for pro users. There’s never been a bet­ter time for cus­tomers to up­grade from a pre­vi­ous gen­er­a­tion of MacBook Pro with Apple sil­i­con or an Intel-based Mac.

Enhanced AI per­for­mance with Neural Accelerators in the GPU: Users up­grad­ing from M1 mod­els will ex­pe­ri­ence up to 8x faster AI per­for­mance.2

Exceptional bat­tery life: The new MacBook Pro gets up to 24 hours of bat­tery life, giv­ing Intel-based up­graders up to 13 ad­di­tional hours, and users com­ing from M1 mod­els will get up to three more hours, so they can get more done on a sin­gle charge.2 And un­like many PC lap­tops, MacBook Pro de­liv­ers the same in­cred­i­ble per­for­mance whether plugged in or on bat­tery. Users will be able to fast-charge up to 50 per­cent in just 30 min­utes us­ing a 96W or higher USB-C power adapter.2

Best dis­play in a pro lap­top: Upgraders will en­joy the Liquid Retina XDR dis­play, which fea­tures 1600 nits peak HDR bright­ness and up to 1000 nits for SDR con­tent, and of­fers a nano-tex­ture op­tion.

Comprehensive con­nec­tiv­ity: The new MacBook Pro has a wide ar­ray of con­nec­tiv­ity op­tions, in­clud­ing three Thunderbolt 5 ports for high-speed data trans­fer, HDMI that sup­ports up to 8K res­o­lu­tion, an SDXC card slot for quick me­dia im­port, and MagSafe 3 with fast-charge ca­pa­bil­ity. Upgraders can also drive up to two high-res­o­lu­tion ex­ter­nal dis­plays with M5 Pro, and up to four high-res­o­lu­tion dis­plays with M5 Max, pro­vid­ing the flex­i­bil­ity to cre­ate ex­pan­sive work­spaces.

Wi-Fi 7 and Bluetooth 6: With the Apple N1 chip, Wi-Fi 7 and Bluetooth 6 bring im­proved per­for­mance and re­li­a­bil­ity to wire­less con­nec­tions.

Advanced cam­era, mics, and speak­ers: Featuring a 12MP Center Stage cam­era with Desk View sup­port and stu­dio-qual­ity mics, the new MacBook Pro will al­low users to look and sound their best while tak­ing calls. They will also ex­pe­ri­ence an im­mer­sive six-speaker sound sys­tem with sup­port for Spatial Audio.

ma­cOS Tahoe trans­forms the MacBook Pro ex­pe­ri­ence with pow­er­ful ca­pa­bil­i­ties that tur­bocharge pro­duc­tiv­ity.6 Major up­dates to Spotlight make it eas­ier to find rel­e­vant apps and files and im­me­di­ately take ac­tion right from the search bar. Apple Intelligence is even more ca­pa­ble while pro­tect­ing users’ pri­vacy at every step.7 Shortcuts get even more pow­er­ful with in­tel­li­gent ac­tions and the abil­ity to tap di­rectly in to Apple Intelligence mod­els. Integrated into Messages, FaceTime, and the Phone app, Live Translation helps users eas­ily com­mu­ni­cate across lan­guages, trans­lat­ing text and au­dio.7 Additionally, de­vel­op­ers can bring Apple Intelligence ca­pa­bil­i­ties into their ap­pli­ca­tions or tap in to the Foundation Models frame­work for spe­cial­ized on-de­vice in­tel­li­gence tasks. Continuity fea­tures in­clude the Phone app on Mac, which lets users re­lay cel­lu­lar calls from their nearby iPhone, and with Live Activities from iPhone, they can stay on top of things hap­pen­ing in real time.6 ma­cOS Tahoe also fea­tures a beau­ti­ful new de­sign with Liquid Glass, and users can per­son­al­ize their Mac in even more ways with an up­dated Control Center, in ad­di­tion to new color op­tions for fold­ers, app icons, and wid­gets.

MacBook Pro was built with the en­vi­ron­ment in mind, and brings Apple even closer to reach­ing its am­bi­tious plan to be car­bon neu­tral across its en­tire foot­print by 2030. It is made with 45 per­cent re­cy­cled con­tent,8 in­clud­ing 100 per­cent re­cy­cled alu­minum in the en­clo­sure and 100 per­cent re­cy­cled cobalt in the bat­tery. It is man­u­fac­tured with 50 per­cent re­new­able elec­tric­ity, such as wind and so­lar, across the sup­ply chain. The new MacBook Pro is de­signed to be durable and re­pairable, and also of­fers in­dus­try-lead­ing soft­ware sup­port, while meet­ing Apple’s high stan­dards for en­ergy ef­fi­ciency and safer chem­istry. The pa­per pack­ag­ing is 100 per­cent fiber-based and can be eas­ily re­cy­cled.9

Customers can pre-or­der the new 14- and 16-inch MacBook Pro mod­els with M5 Pro and M5 Max start­ing to­mor­row, March 4, on ap­ple.com/​store and in the Apple Store app in 33 coun­tries and re­gions, in­clud­ing the U.S. All mod­els will be­gin ar­riv­ing to cus­tomers, and will be in Apple Store lo­ca­tions and Apple Authorized Resellers, start­ing Wednesday, March 11.

The 14‑inch MacBook Pro with M5 Pro starts at $2,199 (U.S.) and $2,049 (U.S.) for ed­u­ca­tion; and the 16‑inch MacBook Pro with M5 Pro starts at $2,699 (U.S.) and $2,499 (U.S.) for ed­u­ca­tion.

The 14‑inch MacBook Pro with M5 Max starts at $3,599 (U.S.) and $3,299 (U.S.) for ed­u­ca­tion; and the 16‑inch MacBook Pro with M5 Max starts at $3,899 (U.S.) and $3,599 (U.S.) for ed­u­ca­tion. All mod­els are avail­able in space black and sil­ver.

Additional tech­ni­cal spec­i­fi­ca­tions, con­fig­ure-to-or­der op­tions, and ac­ces­sories are avail­able at ap­ple.com/​mac.

The 14-inch MacBook Pro with M5 now comes stan­dard with 1TB of stor­age, and is avail­able in space black and sil­ver, start­ing at $1,699 (U.S.) and $1,599 (U.S.) for ed­u­ca­tion.

With Apple Trade In, cus­tomers can trade in their cur­rent com­puter and get credit to­ward a new Mac. Customers can visit ap­ple.com/​shop/​trade-in to see what their de­vice is worth.

AppleCare de­liv­ers ex­cep­tional ser­vice and sup­port, with flex­i­ble op­tions for Apple users. Customers can choose AppleCare+ to cover their new Mac, or in the U.S., AppleCare One to pro­tect mul­ti­ple prod­ucts in one sim­ple plan. Both plans in­clude cov­er­age for ac­ci­dents like drops and spills, theft and loss pro­tec­tion on el­i­gi­ble prod­ucts, bat­tery re­place­ment ser­vice, and 24/7 sup­port from Apple Experts. For more in­for­ma­tion, visit ap­ple.com/​ap­ple­care.

Every cus­tomer who buys di­rectly from Apple Retail gets ac­cess to Personal Setup. In these guided on­line ses­sions, a Specialist can walk them through setup or fo­cus on fea­tures that will help them make the most of their new de­vice. Customers can also learn more about get­ting started and go­ing fur­ther with their new de­vice with a Today at Apple ses­sion at their near­est Apple Store.

Customers in the U.S. who shop at Apple us­ing Apple Card can pay monthly at 0 per­cent APR when they choose to check out with Apple Card Monthly Installments, and they’ll get 3 per­cent Daily Cash back — all up front. More in­for­ma­tion — in­clud­ing de­tails on el­i­gi­bil­ity, ex­clu­sions, and Apple Card terms — is avail­able at ap­ple.com/​ap­ple-card/​monthly-in­stall­ments.

About Apple

Apple rev­o­lu­tion­ized per­sonal tech­nol­ogy with the in­tro­duc­tion of the Macintosh in 1984. Today, Apple leads the world in in­no­va­tion with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six soft­ware plat­forms — iOS, iPa­dOS, ma­cOS, watchOS, vi­sionOS, and tvOS — pro­vide seam­less ex­pe­ri­ences across all Apple de­vices and em­power peo­ple with break­through ser­vices in­clud­ing the App Store, Apple Music, Apple Pay, iCloud, and Apple TV. Apple’s more than 150,000 em­ploy­ees are ded­i­cated to mak­ing the best prod­ucts on earth and to leav­ing the world bet­ter than we found it.

Testing was con­ducted by Apple in February 2026 us­ing ship­ping com­pet­i­tive sys­tems and se­lect in­dus­try-stan­dard bench­marks.

Testing was con­ducted by Apple in January and February 2026. See ap­ple.com/​mac­book-pro for more in­for­ma­tion.

Results are com­pared to pre­vi­ous-gen­er­a­tion 16-inch MacBook Pro with Apple M4 Pro, 14-core CPU, 20-core GPU, 48GB of uni­fied mem­ory, and a 4TB SSD.

Results are com­pared to pre­vi­ous-gen­er­a­tion 16-inch MacBook Pro with Apple M4 Max, 16-core CPU, 40-core GPU, 128GB of uni­fied mem­ory, and an 8TB SSD.

Testing was con­ducted by Apple in January and February 2026 us­ing pre­pro­duc­tion 16-inch MacBook Pro sys­tems with Apple M5 Max, 18-core CPU, 40-core GPU, 128GB of uni­fied mem­ory, and an 8TB SSD. Tested with FIO 3.41, 1024KB re­quest size, 10GB test file and IO depth=8. Performance tests are con­ducted us­ing spe­cific com­puter sys­tems and re­flect the ap­prox­i­mate per­for­mance of MacBook Pro.

ma­cOS Tahoe is avail­able as a free soft­ware up­date. Some fea­tures may not be avail­able in all re­gions or in all lan­guages. See re­quire­ments at ap­ple.com/​os/​ma­cos.

Apple Intelligence is avail­able in beta with sup­port for these lan­guages: English, Danish, Dutch, French, German, Italian, Norwegian, Portuguese, Spanish, Swedish, Turkish, Vietnamese, Chinese (simplified), Chinese (traditional), Japanese, and Korean. Some fea­tures may not be avail­able in all re­gions or lan­guages. For fea­ture and lan­guage avail­abil­ity and sys­tem re­quire­ments, see sup­port.ap­ple.com/​en-us/​121115.

Product re­cy­cled or re­new­able con­tent is the mass of cer­ti­fied re­cy­cled ma­te­r­ial rel­a­tive to the over­all mass of the de­vice, not in­clud­ing pack­ag­ing or in-box ac­ces­sories.

Breakdown of U.S. re­tail pack­ag­ing by weight. Adhesives, inks, and coat­ings are ex­cluded from cal­cu­la­tions.

* Up to 7.8x faster AI im­age gen­er­a­tion per­for­mance when com­pared to MacBook Pro with M1 Pro, and up to 3.7x faster than MacBook Pro with M4 Pro.

* Up to 6.9x faster LLM prompt pro­cess­ing when com­pared to MacBook Pro with M1 Pro, and up to 3.9x faster than MacBook Pro with M4 Pro.

* Up to 5.2x faster 3D ren­der­ing in Maxon Redshift when com­pared to MacBook Pro with M1 Pro, and up to 1.4x faster than MacBook Pro with M4 Pro.

* Up to 1.6x faster gam­ing per­for­mance with ray trac­ing in games like Cyberpunk 2077: Ultimate Edition when com­pared to MacBook Pro with M4 Pro.

* Up to 8x faster AI im­age gen­er­a­tion per­for­mance when com­pared to MacBook Pro with M1 Max, and up to 3.8x faster than MacBook Pro with M4 Max.

* Up to 6.7x faster LLM prompt pro­cess­ing when com­pared to MacBook Pro with M1 Max, and up to 4x faster than MacBook Pro with M4 Max.

* Up to 5.4x faster video ef­fects ren­der­ing per­for­mance in Blackmagic DaVinci Resolve Studio when com­pared to MacBook Pro with M1 Max, and up to 3x faster than MacBook Pro with M4 Max.

* Up to 3.5x faster AI video-en­hanc­ing per­for­mance in Topaz Video when com­pared to MacBook Pro with M4 Max.

* Enhanced AI per­for­mance with Neural Accelerators in the GPU: Users up­grad­ing from M1 mod­els will ex­pe­ri­ence up to 8x faster AI per­for­mance.2

* Exceptional bat­tery life: The new MacBook Pro gets up to 24 hours of bat­tery life, giv­ing Intel-based up­graders up to 13 ad­di­tional hours, and users com­ing from M1 mod­els will get up to three more hours, so they can get more done on a sin­gle charge.2 And un­like many PC lap­tops, MacBook Pro de­liv­ers the same in­cred­i­ble per­for­mance whether plugged in or on bat­tery. Users will be able to fast-charge up to 50 per­cent in just 30 min­utes us­ing a 96W or higher USB-C power adapter.2

* Best dis­play in a pro lap­top: Upgraders will en­joy the Liquid Retina XDR dis­play, which fea­tures 1600 nits peak HDR bright­ness and up to 1000 nits for SDR con­tent, and of­fers a nano-tex­ture op­tion.

* Comprehensive con­nec­tiv­ity: The new MacBook Pro has a wide ar­ray of con­nec­tiv­ity op­tions, in­clud­ing three Thunderbolt 5 ports for high-speed data trans­fer, HDMI that sup­ports up to 8K res­o­lu­tion, an SDXC card slot for quick me­dia im­port, and MagSafe 3 with fast-charge ca­pa­bil­ity. Upgraders can also drive up to two high-res­o­lu­tion ex­ter­nal dis­plays with M5 Pro, and up to four high-res­o­lu­tion dis­plays with M5 Max, pro­vid­ing the flex­i­bil­ity to cre­ate ex­pan­sive work­spaces.

* Wi-Fi 7 and Bluetooth 6: With the Apple N1 chip, Wi-Fi 7 and Bluetooth 6 bring im­proved per­for­mance and re­li­a­bil­ity to wire­less con­nec­tions.

* Advanced cam­era, mics, and speak­ers: Featuring a 12MP Center Stage cam­era with Desk View sup­port and stu­dio-qual­ity mics, the new MacBook Pro will al­low users to look and sound their best while tak­ing calls. They will also ex­pe­ri­ence an im­mer­sive six-speaker sound sys­tem with sup­port for Spatial Audio.

...

Read the original on www.apple.com »

8 319 shares, 8 trendiness

Welcome (back) to Macintosh

For at least 10 years, every Time Machine setup I have been in charge of, or tasked with maintaining for someone else, has eventually run into an issue where it stops backing up successfully. The only solution has been to start over, to not inherit backup state and to manually delete the old backup. My experience while looking up this solution repeatedly has been that this is not uncommon, and definitely is not just me holding it wrong; indeed, the steps are marked as recommended solutions on Apple’s own forums.

For sev­eral years, Spotlight’s tag in­dex has been un­re­li­able. Querying for a file type and a tag of­ten re­turns only a sub­set of the most re­cent files with the tags. Rebuilding the en­tire Spotlight in­dex does noth­ing to al­le­vi­ate this. The only cure is to re­launch the Finder, which some­times fixes it. (There’s also, 13 years af­ter its in­tro­duc­tion, no good, sup­ported and doc­u­mented com­mand line in­ter­face or API to enu­mer­ate, add or re­move tags (aside from those as­sum­ing that they are just the seven col­ors of la­bels). Just say­ing.)
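
For context, the nearest thing to a sanctioned route is reading each file’s tags through its URL resource values and doing the filtering yourself — precisely the kind of roll-your-own workaround the parenthetical above is complaining about. A rough sketch in Swift; the folder path and tag name are placeholders:

```swift
import Foundation

// Placeholder path and tag; adjust for your own setup.
let folder = URL(fileURLWithPath: "/Users/me/Documents")
let wantedTag = "Important"

let fm = FileManager.default
if let files = try? fm.contentsOfDirectory(
    at: folder,
    includingPropertiesForKeys: [.tagNamesKey],
    options: [.skipsHiddenFiles]
) {
    for url in files {
        // .tagNames surfaces the same Finder tags Spotlight indexes as kMDItemUserTags.
        let tags = (try? url.resourceValues(forKeys: [.tagNamesKey]))?.tagNames ?? []
        if tags.contains(wantedTag) {
            print(url.lastPathComponent, tags)
        }
    }
}
// Writing tags has no first-class CLI at all; in practice people fall back on
// NSURL's setResourceValue(_:forKey:) or edit the
// com.apple.metadata:_kMDItemUserTags extended attribute directly.
```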

For a few months, Finder has intermittently been having trouble with some Spotlight queries. When it happens, either they seemingly kick off and just hang forever, or they show results and then, as you try to do something, Finder hangs (maybe as a result of a race condition during an update of the results). The only cure is to relaunch the Finder. Rebuilding the entire Spotlight index does nothing to alleviate this.

For several years, Finder has been having trouble keeping windows up to date with folder changes. It is most commonly seen when a program I’m coding in my day-to-day work writes files to a folder: those files often don’t show up at all. Sometimes you can force it to reload by backing out of the folder, using Go To Folder, entering the full path, leaving it for a few seconds and letting the go-to panel “autocomplete” the name - for some reason, that seems to clear out the cobwebs momentarily and sync with the current state of the file system. Sometimes even that workaround doesn’t work, and the only cure is to relaunch the Finder. Leaving the entire computer be for hours on end, then closing all windows, then opening a new window showing the same folder still does not refresh the state of the folder. If you create a new folder in the Finder inside the affected folder, that new folder does appear - but things that did not originate from within the Finder do not.

For a year or so, lis­ten­ing to some­thing with AirPods Pro and then open­ing Quick Look in the Finder to see a video with an au­dio track of­ten glitches out the au­dio af­ter a sec­ond or two. The mixed au­dio is fine be­fore and af­ter, but there is a very un­pleas­ant glitch. No firmware or OS up­grade has re­solved the is­sue.

For several years, switching to a window in a full screen space often does not actually focus the window. If you cmd+tab to a window that has its own full screen space, you can see that the window does not activate, and that hitting a keyboard shortcut that is intended to be valid just gives you a beep. You have to manually click in the window to activate it. This especially affects Safari in full screen windows with video, where hitting space to pause or left/right to scan is no longer effective.

With the possible exception of individual dodgy Time Machine protocol implementations from third parties, all of these issues are directly traceable to components fully in Apple’s control. None of these issues is impossible for Apple to fix, and it is incumbent on Apple to fix them. Nearly all of them have persisted across at least two major OS releases and multiple Macs.

In the middle of all this, what Apple chooses to focus on is implementing a redesign that no one asked for, one that butchers both the most fundamental usability conceivable and the visual pleasantness its user base has self-selected its platforms for; its only saving grace is that it is half-assed enough to not actually change some things too badly, compared to what it could have been like. Although, had I upgraded to macOS Tahoe, chances are that on top of the visual change, I would have been treated to basic Apple Event infrastructure falling apart and stopping working, causing hangs, instability and unpredictability.

I have a MacBook Pro M1 Max from 2021, and be­cause it is an ex­cel­lent piece of hard­ware that still per­forms its func­tion ad­mirably, I have been hold­ing on. The cur­rent state of ma­cOS Tahoe is abysmal - I know be­cause I lis­ten to peo­ple who are long-time Mac users, who say so. If it was more sta­ble, maybe I would do what I did with the last few up­grades, all of which made non-pro­duc­tive tweaks to the user in­ter­face in the in­ter­est of unity across plat­forms, and just up­grade. As it stands now, there’s no chance.

The hardware is great, and no doubt M5 and M6 variants will run circles around M1, but if I have to sink down further into this bog, that price is too high to pay - a common enough sentiment that it is a matter of public interest to document downgradability or to attempt to block dark-pattern upgrades. (And lest you think this is just Mac-related, the disease has spread at least to the wrist, but I am scoping it down somewhat for clarity.)

At some point, enough has to be enough. Apparently, the up­com­ing ver­sions are go­ing to be bug fix/​im­prove­ment fo­cused re­leases in the vein of the leg­endary Snow Leopard re­lease (which it­self was not un­con­tro­ver­sial at the time of its re­lease for the num­ber of things it re­moved). But with things hav­ing got­ten this bad over so many years, my ques­tion is: does Apple even have the or­ga­ni­za­tional acu­ity to un­der­stand, value and fo­cus on these prob­lems?

Clearly, if not breaking developers and users were a priority, we would not be where we are today - or would we? Even ignoring bugs and design changes, in what way does it serve users to phase out Rosetta 2, which in a container-heavy world is more or less required for developers, because the ecosystem of ARM64 Linux containers is nowhere near as widespread as that of AMD x86-64 ones, and which keeps many applications runnable that otherwise wouldn’t be?

My hope is that there are Mac lovers within Apple who bite their tongue at every silly icon re­design, ti­tle bar-shrink­ing de­sign re­or­ga­ni­za­tion, misty shower win­dow—re-skin—love let­ter that they are forced to en­act in­stead of tak­ing what was once (and par­tially still is) that rarest of jew­els, a long-term, con­tin­u­ously up­dated, well-de­signed user in­ter­face and mass mar­ket op­er­at­ing sys­tem and it­er­at­ing on it to make it bet­ter. That there are peo­ple who, in so many words, bleed six col­ors; who have found not just a ster­ile tool, but a cul­ture and a com­mu­nity. And who, like so many oth­ers in so many other sit­u­a­tions, re­cently had to see what they love, what they value, what shaped them, torn apart by shift­ing, switch­ing or re­cently starkly ex­posed pri­or­i­ties of peo­ple above them who ei­ther don’t know what they have, or are happy to use it as a ve­hi­cle and ride it to places they care more about.

My hope is that Macintosh is not just one of these em­pires that was at the height of its power and then dis­in­te­grated be­cause of war­ring fac­tions, sa­ti­ated and un­cu­ri­ous rulers, and droughts for which no one was pre­pared, ru­in­ing crops no one re­al­ized were es­sen­tial for sur­vival.

My hope is that there re­mains a pri­mor­dial spark, a glimpse of ge­nius, to re­dis­cover, to re­con­nect to - to serve not an­nual trends or con­stant phoni­fi­ca­tion, but the needs of the user to use the com­puter as a tool to get some­thing done.

My hope is that, just as Apple crawled out of a hardware nightmare pockmarked by thermal throttling, keyboards incapacitated by strands of human hair, and a lack of respect for its users’ needs, it can choose to refocus its software and its humility too, and stop doing this to the bicycle for the mind.

...

Read the original on take.surf »

9 314 shares, 39 trendiness

India's top court angry after junior judge cites fake AI-generated orders

The high court had also sought a report from the junior judge who had used the AI-generated rulings. She told the court that this was her first time using an AI tool and that she had believed the citations to be “genuine”. She had no intention to misquote or misrepresent the rulings, and the mistake occurred solely due to “the reliance on an automatic source”, the high court wrote.

...

Read the original on www.bbc.com »

10 300 shares, 57 trendiness

- YouTube

...

Read the original on www.youtube.com »

10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.