10 interesting stories served every morning and every evening.
“There are also sex scenes filmed with the smart glasses — someone is wearing them while having sex. That is why this is so extremely sensitive. There are cameras everywhere in our office, and you are not allowed to bring your own phones or any device that can record,” an employee says.
In order to answer questions and interpret what the camera sees, the glasses require that data be processed via Meta’s infrastructure — it is not possible to interact with the AI solely locally on the phone.
We contacted Synsam and Synoptik for an interview about what training the sales staff receive and how it can be that the answers they give are so different. Synsam responded in writing that its role is to inform customers about the applicable terms and to provide internal training, but that responsibility for complying with Swedish law and Meta’s terms ultimately rests with the wearer. Synoptik responded in similar terms, saying its staff are trained in ethics, and emphasized the user’s responsibility.
But for the AI assistant to function, voice, text, image and sometimes video must be processed and may be shared onwards. This data processing is done automatically and cannot be turned off. It is not specified how much data may be analysed or for how long it may be stored. Nor is it specified who is given access to the data.
Where do the images come from? Can private videos from Sweden end up on screens in Kenya? Have those who appear in the images consented to appearing in this way?
“Many believe that data must be stored within the EU to be protected. But under GDPR it does not matter where the server is located — as long as the country meets the EU’s requirements. If it does not, data may not be sent there.”
“Technically, we have data centres in Sweden, Denmark and Ireland, but the physical location is actually less relevant. The legal responsibility lies with Meta Ireland, which is the European entity. Where the data is actually processed — in Europe or in the US — does not change the regulatory framework.”
“For it to be permitted to use a service provider in a third country (outside the EU), robust agreements with instructions must be in place. It must also be ensured that there is legal support for the transfers, so that the data that is transferred receives continued strong and equivalent protection when it is processed in a third country. The protection must therefore not become weaker when it is processed by subcontractors,” says Petra Wierup.
...
Read the original on www.svd.se »
Most British Columbians will change their clocks for the last time this Sunday.
B.C. is permanently adopting daylight time, making its “spring forward” on March 8 the last time change, Premier David Eby announced Monday.
“We are done waiting. British Columbia is going to change our clocks just one more time — and then never again,” Eby said.
He said changing the clocks causes “all types of problems,” from children and their parents losing sleep, to dogs getting up at the wrong time, to more car accidents.
Residents will have eight months to prepare for Nov. 1, 2026, when the clocks would have been turned back one hour, but will now remain the same.
B.C.’s new time zone will be called “Pacific time,” according to the province.
Residents in certain communities in eastern B.C. that recognize some form of mountain time will not be affected by the change.
B.C. passed legislation to make this change possible in 2019, but without a firm timeline. Previously, Eby said he would wait for B.C.’s American neighbours — including Washington state, Oregon and California — to do the same.
While U.S. President Donald Trump expressed interest in eliminating daylight time shortly after his 2024 election, Congress has not passed the necessary legislation.
Eby noted the Yukon has already made the change to permanent daylight time and added he hopes American neighbours will follow suit.
CBC News reached out to Washington State Governor Bob Ferguson, but did not hear back immediately.
University of B.C. business professor Werner Antweiler said that while he was surprised by B.C.’s decision to make this change without waiting for nearby states, he sees benefits in eliminating time changes.
Time changes, particularly switching clocks an hour forward in the spring, have been found to be harmful, imposing “jet lag on the entire population,” he said.
“Somebody had to move first — it might as well be B.C. — and then see how it plays out,” Antweiler said.
“The question is now, who will follow?”
A public engagement report in 2019 for the B. C. government found that 93 per cent of respondents preferred year-round daylight time.
However, 54 per cent of those who responded said it was important for B. C. to have aligning policies on daylight time with other jurisdictions.
Of those who wanted year-round daylight time, three-quarters said it was due to health and wellness concerns.
Switching to permanent standard time was not a voting option.
Bridgitte Anderson, President and CEO of the Greater Vancouver Board of Trade, expressed concern about B.C.’s decision to move forward alone.
“The unilateral change in time is an unwelcome distraction that will make it more difficult to attract and retain businesses in British Columbia,” she said in a statement.
“The choice to change the time unilaterally will create an additional headache for businesses operating on both sides of the border.”
The Vancouver Airport Authority was among groups in 2019 that pushed for B.C. to align with other jurisdictions on the Pacific coast, to avoid unnecessary confusion for travellers.
A spokesperson for the Vancouver Airport Authority said in response to the announced change that alignment is “fundamental” for airline operations and efficient scheduling.
“We will work with industry to better understand the potential schedule and passenger implications of moving to permanent daylight saving time without neighbouring jurisdictions also following suit,” the statement said.
Ryan Mitton, the B.C. director of legislative affairs for the Canadian Federation of Independent Business, said that while some may be relieved not to reset the clocks each year, it may also cause “confusion and disruption.”
“This appears motivated by an attempt to distract from the province’s $13-billion deficit, not genuine concern for the impacts of the time change,” he said in a statement.
...
Read the original on www.cbc.ca »
Identity verification and age verification are increasingly common policy conversations at the moment, in numerous countries.
Often, this is in combination with proposals to ban children from varying concepts of “social media”, which generally means that everyone would have to prove that they were not a child.
I have yet to see a well-considered proposal.
Worse, the question that they are trying to answer is rarely stated clearly and concisely.
And it is unusual to see any consideration of broader sociological issues, let alone an emphasis on this, with a focus instead on perceived “quick win” technosolutionism.
I was pondering last night which services I, personally, would actually be willing to verify my age or identity for.
At least, none that I can think of at the moment.
I appreciate that I compute in an unusual way (when compared with most computer users), and that much of what I do online is about accessing my own services.
Some of those - my fedi server, my RSS server, my messaging services - are built around enjoying stuff from other people’s services.
Would I be willing to verify my identity or age to read someone’s RSS feed? No. While I enjoy the myriad blogs that I follow, none are crucial to me.
I occasionally watch videos (which started on YouTube, but which I download into my Jellyfin instance), and perhaps YouTube will be forced to do age verification. It would be a shame, but again, I’ll just not watch YouTube videos. Not a big loss. Mostly, I buy secondhand DVDs, rip them, and watch them from my Jellyfin instance. I haven’t been asked to verify my age for a DVD purchase (online or offline) in a very long time.
Friends have had to attempt to block access to their sites from the UK. I can still access their sites via Tor, so that’s what I tend to do. I feel sorry for them for the likely significant drop in visitors, affecting their enjoyment and in some cases their revenue, and, probably, their incentive to continue to write / post / record stuff.
I don’t use any individual forums any more (their demise is a shame; I’d prefer them over centralised discussion sites), nor do I use Reddit. I occasionally look at the comments on HN if one of my posts is surfaced there, but if HN forced identity or age verification, I’d just stop doing it. No big deal for me.
Websites with comments sections? I don’t want to see the comments anyway, so I block those, which makes for a very pleasant browsing experience. I don’t comment myself.
Code forges / places to contribute to FOSS? Most of my FOSS contributions are non-code, but even so, I use some organisations’ GitLab repos, and occasionally I contribute to projects on other forges. I doubt that my contributions are meaningful in themselves, and it may not be an option to switch infrastructure in any case (that might not make the requirement go away), but since I am not a massive, or particularly valuable, contributor, I’d feel less bad about simply stepping away.
For Wikipedia, I’d probably rebuild my Kiwix instance and use that instead. Yes, articles would not be quite so up to date, but I rarely access Wikipedia for rapidly-changing information. In any case, there are tradeoffs, and personally I would prefer my privacy, the security of my personal data, and, well, just not being part of this kind of censorship.
Signal? That would be a pain. I don’t have a workaround for that. I’m happily using XMPP, but as a complement to Signal, not an alternative.
Teams/Zoom? I don’t have accounts on those services, but I do join, via my browser, when a client sends me a link. If I was faced with a choice of having to verify my identity/age for these services, then I’d have to consider the position carefully. Realistically, I am not in a position to say “no, I will not use Teams”, as some long-term clients are not going to change their corporate approach just because Neil doesn’t like something, and I’d rather not lose them as clients. So that could be a pain, if those services were within scope.
I’ll still object to these measures - “I’m okay, Jack” would be a selfish stance - but, in practice, yes, I’d be surprised if they impacted me.
Self-imposed (or, at least, self-controlled) digital isolationism, perhaps.
Or perhaps, in the future, some service will pop up that I will really, really want to use, despite it requiring identity / age verification.
...
Read the original on neilzone.co.uk »
The Condé Nast-owned Ars Technica has terminated senior AI reporter Benj Edwards following a controversy over his role in the publication and retraction of an article that included AI-fabricated quotes, Futurism has confirmed.
Earlier this month, Ars retracted the story after it was found to include fake quotes attributed to a real person. The article — a write-up of a viral incident in which an AI agent seemingly published a hit piece about a human engineer named Scott Shambaugh — was initially published on February 13. After Shambaugh pointed out that he’d never said the quotes attributed to him, Ars‘ editor-in-chief Ken Fisher apologized in an editor’s note, in which he confirmed that the piece included “fabricated quotations generated by an AI tool and attributed to a source who did not say them” and characterized the error as a “serious failure of our standards.” He added that, upon further review, the error appeared to be an “isolated incident.” (404 Media first reported on the retraction.)
Shortly after Fisher’s editor’s note was published, Edwards, one of the report’s two bylined authors, took to Bluesky to take “full responsibility” for the inclusion of the fabricated quotes.
In the post, Edwards said that he was sick at the time, and “while working from bed with a fever and very little sleep,” he “unintentionally made a serious journalistic error” as he attempted to use an “experimental Claude Code-based AI tool” to help him “extract relevant verbatim source material.” He said the tool wasn’t being used to generate the article, but was instead designed to “help list structured references” to put in an outline. When the tool failed to work, said Edwards, he decided to try and use ChatGPT to help him understand why.
“I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words,” Edwards continued. He emphasized that the “text of the article was human-written by us, and this incident was isolated and is not representative of Ars‘ editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that.”
Edwards also stressed that his colleague Kyle Orland, the site’s senior gaming editor who co-bylined the retracted story, had “no role in this error.”
The controversy was met with a wave of pushback and speculation from Ars readers, many of whom expressed deep frustration and disappointment in a lengthy comment thread on the website. On February 27, Ars creative director Aurich Lawson, while closing the comment thread, said that “Ars has completed its review of this matter” and that “the appropriate internal steps have been taken.”
“In the coming weeks, we’ll publish a reader-facing guide explaining how we use and do not use AI in our work,” Lawson wrote. “We do not comment on personnel decisions.”
As of February 28, Edwards’ bio on Ars was changed to past tense, according to an archived version of the webpage. It now reads that Edwards “was a reporter at Ars, where he covered artificial intelligence and technology history.”
Futurism reached out to Ars, Condé Nast, and Edwards to inquire about the reporter’s employment status. Neither the publication nor its owner replied. Edwards said he was unable to comment at this time.
Ars‘ retraction isn’t the first AI controversy to rock a newsroom, nor to anger a publication’s readers. It also comes at a moment in which many media bosses are pushing staff to find uses for AI — as are executives across most industries — even while clear guidelines around use of the technology that uphold editorial ethics remain elusive.
These edicts to integrate AI, meanwhile, are backdropped by a complicated, ever-shifting landscape: contentious copyright battles between news giants and AI companies; simultaneous deal-striking by news giants and AI companies; an internet increasingly full of AI-generated slop news and misinformation; and a traffic cliff tied to Google’s “AI Overviews,” which now paraphrase news instead of pointing readers to a list of blue links.
It’s a combustive, disorienting moment in the history of media and technology, when lines in the sand are being drawn by both journalists and their audiences. And the Ars fallout underlines a phenomenon we’ve seen again and again, as even people who are deeply familiar with AI and its shortcomings can end up relying on it at a critical moment — and in the process, fall victim to something much older than generative AI: human error.
“The irony of an AI reporter being tripped up by AI hallucination is not lost on me,” Edwards said in his February 15 Bluesky post. “I take accuracy in my work very seriously and this is a painful failure on my part.”
More on AI and media: Google’s AI Is Actively Destroying the News Media
...
Read the original on futurism.com »
I’ve spent the last six months working on a startup, building agent prototypes for one of the largest consumer packaged goods companies in the world. As part of that work, our team relied on off-the-shelf voice agent platforms to help the company operate more effectively. Though I can’t go into the business details, the technical takeaway was clear: voice agents are powerful, and there are brilliant off-the-shelf abstractions like Vapi and ElevenLabs that make spinning up voice agents a breeze. But: these abstractions also hide a surprising amount of complexity.
Just a few days before I started writing this, ElevenLabs raised one of the largest funding rounds in the space, and new frontier models like GPT-5.3 and Claude 4.6 dropped. This made me wonder: could I actually build the orchestration layer of a voice agent myself? Not just a toy experiment, but something that could have close to the same performance as an all-in-one platform like Vapi?
To my surprise, I could. It took about a day and roughly $100 in API credits - and the result outperformed Vapi’s equivalent setup by 2× on latency, achieving ~400ms end-to-end response times.
This essay walks through the full build: why voice agents are deceptively hard, how the turn-taking loop works, how I wired together STT, LLM, and TTS into a streaming pipeline, and how geography and model selection made the biggest difference. Along the way, you can listen to audio demos and play with interactive diagrams of the architecture.
Voice agents are a big step-change in complexity compared to agentic chat.
Text agents are relatively simple, because the end-user’s actions coordinate everything. The model produces text, the user reads it, types a reply, and hits “send.” That action defines the turn boundary. Nothing needs to happen until the user explicitly advances the flow.
Voice doesn’t work that way. The orchestration is continuous, real-time, and must carefully manage multiple models at once. At any moment, the system must decide: is the user speaking, or are they listening? And the transitions between those two states are where all the difficulty lives.
When the user starts speaking, the agent must immediately stop talking - cancel generation, cancel speech synthesis, flush any buffered audio. When the user stops speaking, the system must confidently decide that they’re done, and start responding with minimal delay. Get either wrong and the conversation feels broken.
This isn’t as simple as measuring loudness. Human speech includes pauses, hesitations, filler sounds, background noise, and non-verbal acknowledgements that shouldn’t interrupt the agent. Downstream from this are the things everyone notices: end-to-end latency, awkward silences, agents cutting you off, or talking over you.
We judge the quality of voice communication subconsciously, as it is so deeply ingrained in who we are. Small timing errors that would be acceptable in text - a pause here, a delay there - immediately feel wrong in speech.
In practice, a good voice agent is not about any single model. It’s an orchestration problem. You string together multiple components, and the quality of the experience depends almost entirely on how those pieces are coordinated in time.
The issue with all-in-one SDKs is that you get a long list of parameters to tune, without really understanding which ones matter or why. When something feels off, it’s hard to know where the problem lives. That’s what pushed me to go one layer deeper and build the core loop myself.
Before writing any code, I spent time iterating on the architecture with ChatGPT outside of my editor. I’ve found this useful when working in unfamiliar domains: build a mental model first, then implement.
My goal with agentic coding is always the same. I want to understand the structure of what I’m building well enough that I can open any file and immediately see why it exists and how it fits into the system.
After a few iterations, I reduced the entire problem to a single loop and a tiny state machine. At the core, a voice agent only needs to answer one question: is the user speaking, or listening?
There are two states:
* The user is speaking.
* The user is listening (and the agent may be speaking).
And two transitions where everything happens:
* When the user starts speaking, we must stop all agent audio and generation immediately.
* When the user stops speaking, we must start generating and streaming the agent response with as little latency as possible.
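In code, that machine is small enough to fit on a napkin. Here is a minimal sketch; the class and method names are my own illustration, not taken from the project:

```python
import asyncio
from enum import Enum, auto

class Turn(Enum):
    USER_SPEAKING = auto()   # agent must be silent
    USER_LISTENING = auto()  # agent may generate and speak

class TurnMachine:
    def __init__(self):
        self.state = Turn.USER_LISTENING
        self.agent_task: asyncio.Task | None = None

    def on_user_speech_start(self):
        """Transition 1: stop all agent audio and generation immediately."""
        self.state = Turn.USER_SPEAKING
        if self.agent_task and not self.agent_task.done():
            self.agent_task.cancel()  # cancels LLM + TTS + buffered audio

    def on_user_speech_end(self, transcript: str):
        """Transition 2: start the agent response with minimal latency."""
        self.state = Turn.USER_LISTENING
        self.agent_task = asyncio.create_task(self.agent_turn(transcript))

    async def agent_turn(self, transcript: str):
        ...  # LLM -> TTS -> outbound audio, wired up later in the post
```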
This turn-detection logic is the core of every voice system, so I decided to start there.
For the first implementation, I deliberately avoided transcription, language models, and text-to-speech. I wanted the simplest checkpoint that still felt directionally like a voice agent.
The setup was minimal. A small FastAPI server handles an incoming WebSocket connection from Twilio, which streams base64-encoded μ-law audio packets at 8kHz in ~20ms frames. Each packet was decoded and fed into a Voice Activity Detection model - in my case, Silero VAD.
Silero is a tiny, open-source model (around 2MB) that can quickly determine whether a short chunk of audio contains speech. Turn-taking is a much harder problem than speech detection, but VAD is still a useful primitive, especially for deciding whether audio should be forwarded to more expensive downstream systems.
On top of this, I built a trivial state machine: a boolean flag representing whether the user was currently speaking or listening. When the system detected the end of speech, it played a pre-recorded WAV file back to the caller. When speech resumed, it sent a clear signal over the Twilio WebSocket to flush any buffered audio and stop playback immediately.
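Putting those pieces together, the first checkpoint looks roughly like this. It assumes FastAPI, torch, and Silero VAD loaded via torch.hub; the Twilio Media Streams message fields ("media", "payload", "streamSid", and the "clear" event) are as Twilio documents them, while the threshold and control flow are simplified illustrations:

```python
import audioop  # stdlib; removed in Python 3.13 (use the audioop-lts package there)
import base64, json
import torch
from fastapi import FastAPI, WebSocket

app = FastAPI()
vad_model, _utils = torch.hub.load("snakers4/silero-vad", "silero_vad")

@app.websocket("/twilio")
async def twilio_stream(ws: WebSocket):
    await ws.accept()
    speaking = False  # the whole state machine: one boolean
    pcm_buf = b""
    async for raw in ws.iter_text():
        msg = json.loads(raw)
        if msg["event"] != "media":
            continue
        # Twilio sends base64-encoded mu-law at 8 kHz in ~20 ms frames;
        # convert to 16-bit linear PCM for the VAD.
        pcm_buf += audioop.ulaw2lin(base64.b64decode(msg["media"]["payload"]), 2)
        # Silero consumes fixed 256-sample (32 ms at 8 kHz) chunks = 512 bytes.
        while len(pcm_buf) >= 512:
            chunk, pcm_buf = pcm_buf[:512], pcm_buf[512:]
            samples = torch.frombuffer(bytearray(chunk), dtype=torch.int16)
            prob = vad_model(samples.float() / 32768.0, 8000).item()
            if prob > 0.5 and not speaking:
                speaking = True
                # Barge-in: tell Twilio to flush any buffered agent audio.
                await ws.send_text(json.dumps(
                    {"event": "clear", "streamSid": msg["streamSid"]}))
            elif prob < 0.5 and speaking:
                speaking = False  # eager turn end: play the canned WAV here
```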
I started this way to isolate the hardest part of the problem - turn detection - without wiring up the rest of the system.
The result, while basic, was already impressive:
VAD-only test - the agent plays a pre-recorded clip whenever I stop talking, and cuts off instantly when I interrupt.
The agent responds immediately when I stop speaking, and shuts up the instant I interrupted it. Even without transcription or generation, the loop feels somewhat conversational.
This also gave me a useful baseline for latency. With eager turn-ending and a pre-recorded response, the system represented a lower bound on how fast a voice agent could possibly feel.
This first pass was valuable, but its limitations were obvious.
Detecting the presence of speech is not the same as knowing when a user has finished their thought. A slow speaker might pause for several seconds mid-sentence. A pure VAD would eagerly decide the turn had ended and start talking too early.
In practice, real turn-taking requires combining low-level audio signals with higher-level semantic cues from the transcript itself. That meant the VAD-only approach couldn’t scale to a real system.
What it did give me was a clean control-flow model and a solid latency baseline to compare against. With that in place, it was time to wire in the full pipeline.
The next step was replacing my hand-rolled turn detection with something designed for production: Deepgram’s Flux.
Flux is a streaming API that combines transcription and turn detection in a single model. You feed it a continuous audio stream, and it emits events - most importantly, “start of turn” and “end of turn,” with the final transcript included at the end.
This replaced the core of my architecture. Flux became the source of truth for when the agent should speak and when it should immediately stop and listen.
On top of that, I built a dedicated agent-turn pipeline. When Flux signals the end of a user turn, this pipeline kicks off a real-time sequence:
* The transcript and conversation history are sent to an LLM to begin generation.
* As soon as the first token arrives, it is streamed into a text-to-speech service over WebSocket.
* Every audio packet produced by TTS is forwarded directly to the outbound Twilio socket.
The core idea is to pipeline every stream so as to minimize latency.
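Here is roughly what that pipelining looks like. The ElevenLabs websocket shapes ({"text": ...} in, {"audio": <base64>} out, with an empty string closing the input) follow its documented streaming TTS API; llm.stream_chat and the surrounding plumbing are illustrative:

```python
import asyncio, json

async def agent_turn(history, llm, tts_ws, twilio_ws, stream_sid):
    """One agent turn: LLM tokens -> TTS -> caller, all streaming."""
    async def pump_audio():
        # Forward every synthesized packet to the caller the moment it lands.
        # (TTS must be configured for ulaw_8000 so Twilio can play it as-is.)
        async for raw in tts_ws:
            audio = json.loads(raw).get("audio")
            if audio:  # already base64-encoded
                await twilio_ws.send_text(json.dumps({
                    "event": "media",
                    "streamSid": stream_sid,
                    "media": {"payload": audio},
                }))

    pump = asyncio.create_task(pump_audio())
    # Stream LLM tokens into TTS the instant each one arrives.
    async for token in llm.stream_chat(history):  # illustrative LLM wrapper
        await tts_ws.send(json.dumps({"text": token}))
    await tts_ws.send(json.dumps({"text": ""}))   # end-of-input signal
    await pump
```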
One important detail here was keeping text-to-speech connections warm. Establishing a fresh WebSocket to ElevenLabs adds a few hundred milliseconds of latency, so I kept a small pool of pre-connected sockets alive. That alone shaved roughly 300ms off the response time.
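The pool itself can be tiny. A sketch, assuming the websockets package and glossing over authentication and keepalive details:

```python
import asyncio
import websockets  # third-party: pip install websockets

class WarmSocketPool:
    """Keep a few TTS websockets pre-connected so an agent turn
    never pays the connection handshake on the critical path."""

    def __init__(self, url: str, size: int = 3):
        self.url, self.size = url, size
        self.ready: asyncio.Queue = asyncio.Queue()

    async def replenish(self):
        while self.ready.qsize() < self.size:
            await self.ready.put(await websockets.connect(self.url))

    async def acquire(self):
        ws = await self.ready.get()
        asyncio.create_task(self.replenish())  # refill in the background
        return ws
```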
Barge-ins were handled symmetrically. When Flux detects that the user starts speaking, the agent pipeline is immediately cancelled: in-flight LLM generation is stopped, TTS is torn down, and a clear message is sent to Twilio to flush any queued audio. The agent falls silent instantly, and Flux resumes listening for the next end-of-turn.
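Sketched out, the cancellation path fires all three teardown steps concurrently rather than in sequence (names again illustrative):

```python
import asyncio, json

async def handle_barge_in(agent_task, tts_ws, twilio_ws, stream_sid):
    # Stop in-flight LLM generation immediately.
    agent_task.cancel()
    # Tear down TTS and flush Twilio's playback buffer concurrently;
    # running these one after another would add audible lag.
    await asyncio.gather(
        tts_ws.close(),
        twilio_ws.send_text(json.dumps(
            {"event": "clear", "streamSid": stream_sid})),
        return_exceptions=True,
    )
```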
The full architecture - Twilio streams audio to Deepgram Flux for turn detection, which triggers either a barge-in (cancel everything) or an agent turn (LLM → TTS → audio back to the caller).
My first test was to run the orchestration entirely locally, mainly to understand how geographic placement affects latency. I built most of this project from a remote wooden cabin in southern Turkey, while traveling and hiking, so this setup was far from ideal.
Latency trace running locally from southern Turkey. TTFT averages ~1.3s, with first audio arriving ~1.6s after the turn ends.
End-to-end latency averaged around 1.6 seconds, measured from my server. According to Twilio, their media edge adds roughly ~100ms on top of that, bringing total perceived latency to about 1.7s.
That’s still quite far from Vapi’s ~840ms latency for a comparable configuration - more than twice as slow. At that point, the delay becomes noticeable. Conversations start to feel hesitant. Pauses stretch just long enough to feel awkward.
This was a useful reminder: even with a correct architecture, geography matters.
In our architecture, every packet of audio hops to and from three external services. If you want to minimize latency, the orchestration layer needs to live physically close to them.
To further improve latency, I deployed the system on Railway in the EU region and configured Twilio, Deepgram, and ElevenLabs to use their EU deployments as well. (Note: ElevenLabs automatically chooses the nearest region by default.)
The difference was immediate:
Latency trace after deploying to Railway EU. TTFT drops to ~300-500ms, with first audio at ~550-750ms.
The average latency measured at the server dropped to ~690ms, which translates to a total end-to-end latency of roughly ~790ms once Twilio’s edge is included - more than 2x improvement!
For comparison, the equivalent configuration in Vapi - using the same STT, LLM, and TTS models - estimates around ~840ms. In this setup, the custom orchestration actually beats Vapi’s own estimates by about 50ms.
More importantly, the subjective difference is obvious. The conversation feels responsive. Interruptions work cleanly. The agent no longer feels like it’s hesitating before every reply.
Hosted pipeline - the conversation feels natural, with clean interruptions and fast responses.
So far in this project, I’d been using gpt-4o-mini, which seemed to be the lowest-latency model available from OpenAI. However, after digging a bit deeper, I discovered that the inference latency of Groq’s llama-3.3-70b could be up to 3× faster.
I wanted to verify this for myself, so I set up a small test harness on my production server. It ran 360 chat completions across a range of models, cancelling each request immediately after the first token was received. Below are the resulting first-token latency measurements:
First-token latency across providers - Groq’s models are in a league of their own.
As you can see, Groq’s models leave everything from OpenAI in the dust. As far as I can tell, this is the lowest achievable latency without running your own inference infrastructure. It’s genuinely impressive - ~80ms is faster than a human blink, which is usually quoted at around 100ms.
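The harness itself reduces to a few lines. This sketch uses the OpenAI-compatible streaming interface that both OpenAI and Groq expose; model names and per-model trial counts are whatever you configure:

```python
import statistics, time
from openai import OpenAI  # Groq exposes the same API shape

def measure_ttft(client: OpenAI, model: str, trials: int) -> float:
    """Median time-to-first-token: start a streamed completion and
    cancel it as soon as the first chunk arrives."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        stream = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Say hello."}],
            stream=True,
        )
        next(iter(stream))                       # block until first token
        samples.append(time.perf_counter() - t0)
        stream.close()                           # abandon the rest
    return statistics.median(samples)

# Pointing the same harness at Groq is just a base_url swap:
# client = OpenAI(base_url="https://api.groq.com/openai/v1", api_key="...")
```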
I swapped out gpt-4o-mini for Groq’s llama-3.3-70b, and the results honestly surprised me:
Aside from the very first turn, the conversation felt smooth and snappy. With average end-to-end latency hovering around ~400ms, I was struggling to keep up - listening back to the recording, it sounds like I was taking longer to reply than the agent.
Latency trace with Groq - end-to-end latency averaging ~400ms, with first audio often arriving in under 500ms.
At this latency, interruption handling also feels dramatically better. The agent’s voice cuts out almost immediately after I start speaking, making the interaction feel far closer to a real conversation than anything I’d experienced before.
I was really surprised that I could beat off-the-shelf providers by a full multiple. From extensive experience working with both Vapi and ElevenLabs agent SDKs on a real production use case, I found that my initial prototype is able to reliably achieve a 2x latency improvement, which is a huge deal when it comes to serving natural-sounding and pleasant voice agent interactions.
Building a voice agent from scratch taught me what actually matters in getting AI voice conversations to feel snappy:
Latency
What users experience as “responsiveness” is the time from when they stop speaking to when they hear the first syllable of the agent’s response. That path runs through turn detection, transcription, LLM time-to-first-token, text-to-speech synthesis, outbound audio buffering, and network hops between all of them. You optimize this by identifying which stages sit on the critical path and making sure nothing blocks unnecessarily.
Model choice and TTFT
In voice systems, receiving the first LLM token is the moment the entire pipeline can begin moving. The TTFT accounts for more than half of the total latency, so choosing a latency-optimised inference setup like Groq made the biggest difference. Model size also seems to matter: larger models may be required for some complex use cases, but they also impose a latency cost that’s very noticeable in conversational settings. The right model depends on the job, but TTFT is the metric that actually matters.
Pipelining the agent turn
A production voice agent cannot be built as STT → LLM → TTS as three sequential steps. The agent turn must be a streaming pipeline: LLM tokens flow into TTS as soon as they arrive, and audio frames flow to the phone immediately. The goal is to never unnecessarily block generation. Anything that waits for a full response before moving on is wasting time.
Cancelling in-flight calls
Interruption handling must propagate to all parts of the agent turn, immediately. When a user starts speaking, the system must cancel LLM generation, tear down TTS, and flush any buffered outbound audio simultaneously. Missing any one of those makes barge-ins feel broken.
Geography is a first-class design parameter
Once you orchestrate multiple external services - telephony, STT, TTS, LLM - placement dominates everything. If those services aren’t co-located, latency compounds quickly. Moving the orchestration layer and using the correct regional endpoints cut e2e latency in half. Service placement makes a huge difference.
Taken together, these lessons explain why voice feels deceptively hard. Real-time systems are unforgiving, and humans are extremely sensitive to timing errors.
Off-the-shelf vs. bespoke
This isn’t an argument against platforms like Vapi or ElevenLabs. Those systems offer far more than orchestration: APIs, observability, reliability, and deep config options that would take real effort to reproduce. For most teams, rebuilding all of that would be a mistake - being able to test and validate a voice agent app without getting to this level of technical depth is truly amazing, and that’s how I first got excited about the technology.
But building your voice agent yourself - even a stripped-down one - is still a worthwhile exercise. It forces you to understand what the parameters actually control, why certain defaults exist, and where the real bottlenecks live. That understanding makes you better at configuring the off-the-shelf platforms, and in some cases lets you build something more bespoke when your use case demands it.
Voice is an orchestration problem. Once you see the loop clearly, it becomes a solvable engineering problem.
The full source code is available on GitHub: github.com/NickTikhonov/shuo
Follow me on X for more.
...
Read the original on www.ntik.me »
Apple introduces MacBook Pro with all‑new M5 Pro and M5 Max, delivering breakthrough pro performance and next-level on-device AI
The world’s best pro laptop raises the bar again with blazing-fast CPU and GPU performance, plus up to 2x faster SSD speeds and 1TB of starting storage
CUPERTINO, CALIFORNIA — Apple today announced the latest 14- and 16-inch MacBook Pro with the all-new M5 Pro and M5 Max, bringing game-changing performance and AI capabilities to the world’s best pro laptop. With M5 Pro and M5 Max, MacBook Pro features a new CPU with the world’s fastest CPU core,1 a next-generation GPU with a Neural Accelerator in each core, and higher unified memory bandwidth, altogether delivering up to 4x AI performance compared to the previous generation, and up to 8x AI performance compared to M1 models.2 This allows developers, researchers, business professionals, and creatives to unlock new AI-enabled workflows right on MacBook Pro. It now comes with up to 2x faster SSD performance2 and starts at 1TB of storage for M5 Pro and 2TB for M5 Max. The new MacBook Pro includes N1, an Apple-designed wireless networking chip that enables Wi-Fi 7 and Bluetooth 6, bringing improved performance and reliability to wireless connections. It also offers up to 24 hours of battery life; a gorgeous Liquid Retina XDR display with a nano-texture option; a wide array of connectivity, including Thunderbolt 5; a 12MP Center Stage camera; studio-quality mics; an immersive six-speaker sound system; Apple Intelligence features; and the power of macOS Tahoe. The new MacBook Pro comes in space black and silver, and is available to pre-order starting tomorrow, March 4, with availability beginning Wednesday, March 11.
“MacBook Pro with M5 Pro and M5 Max redefines what’s possible on a pro laptop, now up to 4x faster than the previous generation,” said John Ternus, Apple’s senior vice president of Hardware Engineering. “With Neural Accelerators in the GPU, the new MacBook Pro enables professionals to run advanced LLMs on device and unlock capabilities that no other laptop can do — all while maintaining exceptional battery life. Combined with even faster unified memory and storage, it empowers users to take their work even further, unleashing new possibilities and pushing the boundaries of what they can do.”
M5 Pro and M5 Max are built using the new Apple-designed Fusion Architecture and engineered from the ground up for AI. This innovative design combines two dies into a single system on a chip, providing tremendous performance boosts. M5 Pro and M5 Max feature a new up-to-18-core CPU with 6 super cores, the world’s fastest CPU core,1 and 12 all-new performance cores, optimized for power-efficient, multithreaded pro workloads — altogether delivering up to 30 percent faster performance.3 M5 Pro is designed for users running complex workflows, like coders optimizing algorithms and photographers processing massive image libraries, while M5 Max is for those pushing the absolute limits, such as engineers running rigorous simulations.
Scaling up performance from M5 and offering the same breakthrough GPU architecture with a Neural Accelerator in each core, M5 Pro and M5 Max deliver up to 4x faster LLM prompt processing than M4 Pro and M4 Max, and up to 8x AI image generation than M1 Pro and M1 Max.2 This enables AI researchers and developers to train custom models locally, and creative professionals to leverage AI-powered tools for video editing, music production, and design work. Both chips also bring up to a 50 percent increase in graphics performance compared to M4 Pro and M4 Max,3,4 enabling motion designers to work with complex 3D scenes in real time and VFX artists to preview effects instantly. And with a faster and more power-efficient Neural Engine and higher unified memory bandwidth, the new MacBook Pro delivers incredible on-device AI capabilities. The increase in unified memory bandwidth enables complex workflows like intensive AI model training and massive video projects. M5 Pro supports up to 64GB of unified memory with up to 307GB/s of memory bandwidth, while M5 Max supports up to 128GB of unified memory with up to 614GB/s of memory bandwidth.
* Up to 7.8x faster AI image generation performance when compared to MacBook Pro with M1 Pro, and up to 3.7x faster than MacBook Pro with M4 Pro.
* Up to 6.9x faster LLM prompt processing when compared to MacBook Pro with M1 Pro, and up to 3.9x faster than MacBook Pro with M4 Pro.
* Up to 5.2x faster 3D rendering in Maxon Redshift when compared to MacBook Pro with M1 Pro, and up to 1.4x faster than MacBook Pro with M4 Pro.
* Up to 1.6x faster gaming performance with ray tracing in games like Cyberpunk 2077: Ultimate Edition when compared to MacBook Pro with M4 Pro.
* Up to 8x faster AI image generation performance when compared to MacBook Pro with M1 Max, and up to 3.8x faster than MacBook Pro with M4 Max.
* Up to 6.7x faster LLM prompt processing when compared to MacBook Pro with M1 Max, and up to 4x faster than MacBook Pro with M4 Max.
* Up to 5.4x faster video effects rendering performance in Blackmagic DaVinci Resolve Studio when compared to MacBook Pro with M1 Max, and up to 3x faster than MacBook Pro with M4 Max.
* Up to 3.5x faster AI video-enhancing performance in Topaz Video when compared to MacBook Pro with M4 Max.
The new MacBook Pro delivers up to 2x faster read/write performance compared to the previous generation,4 reaching speeds of up to 14.5GB/s5 and accelerating workflows for professionals working across 4K and 8K video projects, LLMs, and complex datasets. MacBook Pro with M5 Pro now comes standard with 1TB of storage, while MacBook Pro with M5 Max now comes standard with 2TB. And the 14-inch MacBook Pro with M5 now comes standard with 1TB of storage.
Even More Value for Upgraders
The new 14- and 16-inch MacBook Pro with M5 Pro and M5 Max mark a major leap for pro users. There’s never been a better time for customers to upgrade from a previous generation of MacBook Pro with Apple silicon or an Intel-based Mac.
* Enhanced AI performance with Neural Accelerators in the GPU: Users upgrading from M1 models will experience up to 8x faster AI performance.2
* Exceptional battery life: The new MacBook Pro gets up to 24 hours of battery life, giving Intel-based upgraders up to 13 additional hours, and users coming from M1 models will get up to three more hours, so they can get more done on a single charge.2 And unlike many PC laptops, MacBook Pro delivers the same incredible performance whether plugged in or on battery. Users will be able to fast-charge up to 50 percent in just 30 minutes using a 96W or higher USB-C power adapter.2
* Best display in a pro laptop: Upgraders will enjoy the Liquid Retina XDR display, which features 1600 nits peak HDR brightness and up to 1000 nits for SDR content, and offers a nano-texture option.
* Comprehensive connectivity: The new MacBook Pro has a wide array of connectivity options, including three Thunderbolt 5 ports for high-speed data transfer, HDMI that supports up to 8K resolution, an SDXC card slot for quick media import, and MagSafe 3 with fast-charge capability. Upgraders can also drive up to two high-resolution external displays with M5 Pro, and up to four high-resolution displays with M5 Max, providing the flexibility to create expansive workspaces.
* Wi-Fi 7 and Bluetooth 6: With the Apple N1 chip, Wi-Fi 7 and Bluetooth 6 bring improved performance and reliability to wireless connections.
* Advanced camera, mics, and speakers: Featuring a 12MP Center Stage camera with Desk View support and studio-quality mics, the new MacBook Pro will allow users to look and sound their best while taking calls. They will also experience an immersive six-speaker sound system with support for Spatial Audio.
macOS Tahoe transforms the MacBook Pro experience with powerful capabilities that turbocharge productivity.6 Major updates to Spotlight make it easier to find relevant apps and files and immediately take action right from the search bar. Apple Intelligence is even more capable while protecting users’ privacy at every step.7 Shortcuts get even more powerful with intelligent actions and the ability to tap directly in to Apple Intelligence models. Integrated into Messages, FaceTime, and the Phone app, Live Translation helps users easily communicate across languages, translating text and audio.7 Additionally, developers can bring Apple Intelligence capabilities into their applications or tap in to the Foundation Models framework for specialized on-device intelligence tasks. Continuity features include the Phone app on Mac, which lets users relay cellular calls from their nearby iPhone, and with Live Activities from iPhone, they can stay on top of things happening in real time.6 macOS Tahoe also features a beautiful new design with Liquid Glass, and users can personalize their Mac in even more ways with an updated Control Center, in addition to new color options for folders, app icons, and widgets.
MacBook Pro was built with the environment in mind, and brings Apple even closer to reaching its ambitious plan to be carbon neutral across its entire footprint by 2030. It is made with 45 percent recycled content,8 including 100 percent recycled aluminum in the enclosure and 100 percent recycled cobalt in the battery. It is manufactured with 50 percent renewable electricity, such as wind and solar, across the supply chain. The new MacBook Pro is designed to be durable and repairable, and also offers industry-leading software support, while meeting Apple’s high standards for energy efficiency and safer chemistry. The paper packaging is 100 percent fiber-based and can be easily recycled.9
Customers can pre-order the new 14- and 16-inch MacBook Pro models with M5 Pro and M5 Max starting tomorrow, March 4, on apple.com/store and in the Apple Store app in 33 countries and regions, including the U.S. All models will begin arriving to customers, and will be in Apple Store locations and Apple Authorized Resellers, starting Wednesday, March 11.
The 14‑inch MacBook Pro with M5 Pro starts at $2,199 (U.S.) and $2,049 (U.S.) for education; and the 16‑inch MacBook Pro with M5 Pro starts at $2,699 (U.S.) and $2,499 (U.S.) for education.
The 14‑inch MacBook Pro with M5 Max starts at $3,599 (U.S.) and $3,299 (U.S.) for education; and the 16‑inch MacBook Pro with M5 Max starts at $3,899 (U.S.) and $3,599 (U.S.) for education. All models are available in space black and silver.
Additional technical specifications, configure-to-order options, and accessories are available at apple.com/mac.
The 14-inch MacBook Pro with M5 now comes standard with 1TB of storage, and is available in space black and silver, starting at $1,699 (U.S.) and $1,599 (U.S.) for education.
With Apple Trade In, customers can trade in their current computer and get credit toward a new Mac. Customers can visit apple.com/shop/trade-in to see what their device is worth.
AppleCare delivers exceptional service and support, with flexible options for Apple users. Customers can choose AppleCare+ to cover their new Mac, or in the U.S., AppleCare One to protect multiple products in one simple plan. Both plans include coverage for accidents like drops and spills, theft and loss protection on eligible products, battery replacement service, and 24/7 support from Apple Experts. For more information, visit apple.com/applecare.
Every customer who buys directly from Apple Retail gets access to Personal Setup. In these guided online sessions, a Specialist can walk them through setup or focus on features that will help them make the most of their new device. Customers can also learn more about getting started and going further with their new device with a Today at Apple session at their nearest Apple Store.
Customers in the U.S. who shop at Apple using Apple Card can pay monthly at 0 percent APR when they choose to check out with Apple Card Monthly Installments, and they’ll get 3 percent Daily Cash back — all up front. More information — including details on eligibility, exclusions, and Apple Card terms — is available at apple.com/apple-card/monthly-installments.
About Apple
Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six software platforms — iOS, iPadOS, macOS, watchOS, visionOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, iCloud, and Apple TV. Apple’s more than 150,000 employees are dedicated to making the best products on earth and to leaving the world better than we found it.
1. Testing was conducted by Apple in February 2026 using shipping competitive systems and select industry-standard benchmarks.
2. Testing was conducted by Apple in January and February 2026. See apple.com/macbook-pro for more information.
3. Results are compared to previous-generation 16-inch MacBook Pro with Apple M4 Pro, 14-core CPU, 20-core GPU, 48GB of unified memory, and a 4TB SSD.
4. Results are compared to previous-generation 16-inch MacBook Pro with Apple M4 Max, 16-core CPU, 40-core GPU, 128GB of unified memory, and an 8TB SSD.
5. Testing was conducted by Apple in January and February 2026 using preproduction 16-inch MacBook Pro systems with Apple M5 Max, 18-core CPU, 40-core GPU, 128GB of unified memory, and an 8TB SSD. Tested with FIO 3.41, 1024KB request size, 10GB test file and IO depth=8. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.
6. macOS Tahoe is available as a free software update. Some features may not be available in all regions or in all languages. See requirements at apple.com/os/macos.
7. Apple Intelligence is available in beta with support for these languages: English, Danish, Dutch, French, German, Italian, Norwegian, Portuguese, Spanish, Swedish, Turkish, Vietnamese, Chinese (simplified), Chinese (traditional), Japanese, and Korean. Some features may not be available in all regions or languages. For feature and language availability and system requirements, see support.apple.com/en-us/121115.
8. Product recycled or renewable content is the mass of certified recycled material relative to the overall mass of the device, not including packaging or in-box accessories.
9. Breakdown of U.S. retail packaging by weight. Adhesives, inks, and coatings are excluded from calculations.
...
Read the original on www.apple.com »
For at least 10 years, every Time Machine setup I have been in charge of, or tasked with maintaining for someone else, has eventually run into an issue where it stops backing up successfully. The only solution has been to start over: not inherit the backup state, and manually delete the old backup. My experience from looking up this solution repeatedly is that it is not uncommon, and definitely not just me holding it wrong; indeed, the steps are marked as recommended solutions on Apple's own forums.
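For the record, the start-over dance goes roughly like this sketch; the destination ID and volume path are placeholders, and exact steps vary by macOS version and destination type.

```
# A sketch of the "start over" procedure; DEST_ID and the volume path
# are placeholders for your own setup.
tmutil destinationinfo                 # note the failing destination's ID
sudo tmutil removedestination DEST_ID  # detach the broken destination
# ...manually delete the old backup store on the disk here...
sudo tmutil setdestination /Volumes/Backups
# Deliberately skip `tmutil inheritbackup`, so the fresh destination
# does not adopt the broken backup history.
```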
For several years, Spotlight's tag index has been unreliable. Querying for a file type and a tag often returns only a subset of the matching files, typically just the most recent ones. Rebuilding the entire Spotlight index does nothing to alleviate this. The only cure is to relaunch the Finder, which sometimes fixes it. (There is also, 13 years after their introduction, no good, supported and documented command-line interface or API to enumerate, add or remove tags, aside from those that assume tags are just the seven colors of labels. Just saying.)
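The closest thing available today is poking at Spotlight metadata and the raw extended attribute from the shell; a sketch, with placeholder paths and tag names:

```
# Read a file's tags via Spotlight metadata (display-only, not an API);
# the file path is a placeholder:
mdls -name kMDItemUserTags ~/Documents/report.pdf

# The underlying storage is a binary plist in an extended attribute:
xattr -px com.apple.metadata:_kMDItemUserTags ~/Documents/report.pdf

# Query by tag and file type; exactly the kind of query that can
# silently return only a subset of matches:
mdfind 'kMDItemUserTags == "Important" && kMDItemContentType == "com.adobe.pdf"'
```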
For a few months, Finder has been having intermittent trouble with some Spotlight queries. When it happens, they either seemingly kick off and just hang forever, or they show results and then, as you try to do something, Finder hangs (possibly as a result of a race condition while the results update). The only cure is to relaunch the Finder. Rebuilding the entire Spotlight index does nothing to alleviate this.
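For completeness, the rebuild that fails to help here is the standard erase-and-reindex:

```
# Erase and rebuild the Spotlight index for the boot volume; in my
# experience it does not fix the hangs described above:
sudo mdutil -E /
```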
For several years, Finder has had trouble keeping windows up to date with folder changes. Most commonly, when a program I'm coding in my day-to-day work writes files to a folder, those files often don't show up at all. Sometimes you can force a reload by backing out of the folder, using Go To Folder, entering the full path, leaving it for a few seconds and letting the panel "autocomplete" the name; for some reason, that seems to clear out the cobwebs momentarily and sync with the current state of the file system. Sometimes even that workaround fails, and the only cure is to relaunch the Finder. Leaving the entire computer alone for hours on end, closing all windows, then opening a new window on the same folder still does not refresh its state. If you create a new folder in the Finder inside the affected folder, that new folder does appear; but things that did not originate from within the Finder do not.
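When I script around it, the available nudges look like the sketch below; the path is a placeholder, and whether the AppleScript `update` verb actually takes effect seems to vary.

```
# Nudge Finder, via its scripting interface, to re-read a folder from
# disk; the path is a placeholder:
osascript -e 'tell application "Finder" to update (POSIX file "/Users/me/Projects/output" as alias)'

# The cure of last resort: relaunch the Finder.
killall Finder
```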
For a year or so, listening to something with AirPods Pro and then opening Quick Look in the Finder to see a video with an audio track often glitches out the audio after a second or two. The mixed audio is fine before and after, but there is a very unpleasant glitch. No firmware or OS upgrade has resolved the issue.
For several years, switching to a window in a full screen space often does not actually focus the window. If you cmd+tab to a window that has its own full screen space, you can see that the window does not activate, and hitting a keyboard shortcut that should be valid just gives you a beep. You have to manually click in the window to activate it. This especially affects Safari in full screen windows with video, where hitting space to pause or left/right to scan is no longer effective.
With the possible exception of individual dodgy Time Machine protocol implementations from third parties, all of these issues are directly traceable to components fully in Apple's control. None of them are impossible for Apple to fix, and fixing them is incumbent on Apple. Nearly all of them have persisted for at least two major OS releases and across multiple Macs.
In the middle of all this, what Apple chooses to focus on is a redesign that no one asked for, one that butchers both the most fundamental usability imaginable and the visual pleasantness its user base has self-selected its platforms for; its only saving grace is that it is half-assed enough to not actually change some things too badly, compared to what it could have been. And had I upgraded to macOS Tahoe, chances are that on top of the visual change, I would have been treated to basic Apple Event infrastructure falling apart and stopping working, causing hangs, instability and unpredictability.
I have a MacBook Pro M1 Max from 2021, and because it is an excellent piece of hardware that still performs its function admirably, I have been holding on. The current state of macOS Tahoe is abysmal - I know because the long-time Mac users I listen to say so. If it were more stable, maybe I would do what I did with the last few upgrades, all of which made non-productive tweaks to the user interface in the interest of unity across platforms, and just upgrade. As it stands now, there's no chance.
The hardware is great, and no doubt M5 and M6 variants will run circles around M1, but if I have to sink further into this bog, that price is too high to pay - a common enough sentiment that it is a matter of public interest to document downgradability and attempts to block dark-pattern upgrades. (And lest you think this is just Mac-related, the disease has spread at least to the wrist, but I am scoping this down somewhat for clarity.)
At some point, enough has to be enough. Apparently, the upcoming versions are going to be bug-fix and improvement focused releases in the vein of the legendary Snow Leopard (which itself was not uncontroversial in its day for the number of things it removed). But with things having gotten this bad over so many years, my question is: does Apple even have the organizational acuity to understand, value and focus on these problems?
Clearly, if not breaking developers and users were a priority, we would not be where we are today - or would we? Even ignoring bugs and design changes, in what way does it serve users to phase out Rosetta 2? In a container-heavy world it is more or less required for developers, since the ecosystem of ARM64 Linux containers is nowhere near as widespread as the x86-64 one, and it keeps many applications runnable that otherwise wouldn't be.
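Concretely, on Apple silicon the x86-64 container path leans on Rosetta via Docker Desktop's Rosetta-based emulation; a sketch, with the image name as a placeholder:

```
# Rosetta 2 is installed on demand on Apple silicon:
softwareupdate --install-rosetta --agree-to-license

# Run an amd64-only image on an arm64 Mac; the image name is a
# placeholder. With Rosetta-backed emulation this is one flag; without
# it, you depend on slower QEMU emulation or on an arm64 build existing.
docker run --platform linux/amd64 --rm -it some-amd64-only-image:latest
```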
My hope is that there are Mac lovers within Apple who bite their tongue at every silly icon redesign, every title-bar-shrinking design reorganization, every misty-shower-window re-skin "love letter" that they are forced to enact, instead of taking what was once (and partially still is) that rarest of jewels - a long-term, continuously updated, well-designed user interface and mass-market operating system - and iterating on it to make it better. That there are people who, in so many words, bleed six colors; who have found not just a sterile tool, but a culture and a community. And who, like so many others in so many other situations, have recently had to watch what they love, what they value, what shaped them, be torn apart by the shifting, switching or recently starkly exposed priorities of people above them who either don't know what they have, or are happy to use it as a vehicle and ride it to places they care more about.
My hope is that the Macintosh is not just another of those empires that stood at the height of their power and then disintegrated because of warring factions, satiated and uncurious rulers, and droughts no one was prepared for, ruining crops no one realized were essential for survival.
My hope is that there remains a primordial spark, a glimpse of genius, to rediscover, to reconnect to - to serve not annual trends or constant phonification, but the needs of the user to use the computer as a tool to get something done.
My hope is that, just as Apple crawled out of a hardware nightmare pockmarked by thermal throttling, keyboards incapacitated by strands of human hair, and a lack of respect for its users' needs, it can choose to refocus its software, and its humility too, and stop doing this to the bicycle for the mind.
...
Read the original on take.surf »
The high court had also sought a report from the junior judge who had used the AI-generated rulings. She told the court that this was her first time using an AI tool and that she had believed the citations to be “genuine”. She had no intention to misquote or misrepresent the rulings, and “the mistake occurred solely due to the reliance on an automatic source”, the high court wrote.
...
Read the original on www.bbc.com »