10 interesting stories served every morning and every evening.
Android has always been a fairly open platform, especially if you were deliberate about getting it that way, but we’ve seen in recent months an extremely rapid devolution of the Android ecosystem:
The closing of development of an increasing number of components in AOSP.
Samsung, Xiaomi and OnePlus have removed the option of bootloader unlocking on all of their devices. I suspect Google is not far behind.
Google implementing the Play Integrity API and encouraging developers to adopt it. Notably, the EU’s own identity verification wallet requires this, in stark contrast to the EU’s own laws and policies, and despite the protests of hundreds on GitHub.
And finally, the mandatory implementation of developer verification across Android systems. Yes, if you’re running a 3rd-party OS like GOS you won’t be directly affected by this, but it will impact 99.9% of devices, and I foresee many open source developers simply opting out of developing apps for Android entirely as a result. We’ve already seen Syncthing discontinue its official Android app for this reason, citing issues with the Google Play Store. Google has also repeatedly denied updates to Nextcloud with no explanation, only restoring them after mass outcry. And we’ve already seen Google targeting any software intended to circumvent ads, labeling it in the system as “dangerous” and “untrusted”. This will most certainly carry over into their new “verification” system.
Google once competed with Apple for customers. But in a world where Google walks away from the biggest antitrust trial since 1998 with yet another slap on the wrist, competition is dead, and Google is taking notes from Apple about what they can legally get away with.
Android as we know it is dead. And/or will be dead very soon. We need an open replacement.
...
Read the original on feddit.org »
React-by-default has hidden costs. Here’s a case for making deliberate choices to select the right framework for the job.
React is no longer winning by technical merit. Today it is winning by default. That default is now slowing innovation across the frontend ecosystem.
When teams need a new frontend, the conversation rarely starts with “What are the constraints and which tool best fits them?” It often starts with “Let’s use React; everyone knows React.” That reflex creates a self-perpetuating cycle where network effects, rather than technical fit, decide architecture.
Meanwhile, frameworks with real innovations struggle for adoption. Svelte compiles away framework overhead. Solid delivers fine-grained reactivity without virtual-DOM tax. Qwik achieves instant startup via resumability. These approaches can outperform React’s model in common scenarios, but they rarely get a fair evaluation because React is chosen by default.
React is excellent at many things. The problem isn’t React itself, it’s the React-by-default mindset.
React’s technical foundations explain some of today’s friction. The virtual DOM was a clever solution for 2013’s problems, but as Rich Harris outlined in “Virtual DOM is pure overhead”, it introduces work modern compilers can often avoid.
Hooks addressed class component pain but introduced new kinds of complexity: dependency arrays, stale closures, and misused effects. Even React’s own docs emphasize restraint: “You Might Not Need an Effect”. Server Components improve time-to-first-byte, but add architectural complexity and new failure modes.
The React Compiler is a smart solution that automates patterns like useMemo/useCallback. Its existence is also a signal: we’re optimizing around constraints baked into the model.
Contrast this with alternative approaches: Svelte 5’s Runes simplify reactivity at compile time; Solid’s fine-grained reactivity updates exactly what changed; Qwik’s resumability eliminates traditional hydration. These aren’t incremental tweaks to React’s model—they’re different models with different ceilings.
Innovation without adoption doesn’t change outcomes. Adoption can’t happen when the choice is made by reflex.
Defaulting to React often ships a runtime and reconciliation cost we no longer question. Even when it’s fast enough, the ceiling is lower than compile-time or fine-grained models. Developer time is spent managing re-renders, effect dependencies, and hydration boundaries instead of shipping value. The broader lesson from performance research is consistent: JavaScript is expensive on the critical path (The Cost of JavaScript).
We’ve centered mental models around “React patterns” instead of web fundamentals, reducing portability of skills and making architectural inertia more likely.
The loss isn’t just performance, it’s opportunity cost when better-fit alternatives are never evaluated. For instance, benchmarks like the JS Framework Benchmark show alternatives like Solid achieving up to 2-3x faster updates in reactivity-heavy scenarios compared to React.
Svelte shifts work to compile time: no virtual DOM, minimal runtime. Components become targeted DOM operations. The mental model aligns with web fundamentals.
But “not enough jobs” keeps Svelte adoption artificially low despite its technical superiority for most use cases. Real-world examples, like The Guardian’s adoption of Svelte for their frontend, demonstrate measurable gains in performance and developer productivity, with reported reductions in bundle sizes and faster load times. For instance, as detailed in Wired’s article on Svelte, developer Shawn Wang (@swyx on X/Twitter) reduced his site’s size from 187KB in React to just 9KB in Svelte by leveraging its compile-time optimizations, which shift framework overhead away from runtime. This leads to faster, more efficient apps especially on slow connections.
Solid delivers fine-grained reactivity with JSX familiarity. Updates flow through signals directly to affected DOM nodes, bypassing reconciliation bottlenecks. Strong performance characteristics, limited mindshare. As outlined in Solid’s comparison guide, this approach enables more efficient updates than React’s virtual DOM, with precise reactivity that minimizes unnecessary work and improves developer experience through simpler state management.
While prominent case studies are scarcer than for more established frameworks, this is largely due to Solid’s lower adoption. Yet anecdotal reports from early adopters suggest similar transformative gains in update efficiency and code simplicity, waiting to be scaled and shared as more teams experiment.
Qwik uses resumability instead of hydration, enabling instant startup by loading only what the current interaction needs. Ideal for large sites, long sessions, or slow networks. According to Qwik’s Think Qwik guide, this is achieved through progressive loading and serializing both state and code. Apps can thus resume execution instantly without heavy client-side bootstrapping, resulting in superior scalability and reduced initial load times compared to traditional frameworks.
Success stories for Qwik may be less visible simply because fewer teams have broken from defaults to try it. But those who have report dramatic improvements in startup times and resource efficiency, indicating a wealth of untapped potential if adoption grows.
All three are under-adopted not for lack of merit, but because the default choice keeps them from being tried at all.
Furthermore, React’s API surface area is notably larger and more complex than its alternatives, encompassing concepts like hooks, context, reducers, and memoization patterns that require careful management to avoid pitfalls. This expansive API contributes to higher cognitive load for developers, often leading to bugs from misunderstood dependencies or over-engineering. For example, in Cloudflare’s September 12, 2025 outage, a useEffect hook with a problematic dependency array triggered repeated API calls, overwhelming their Tenant Service and causing widespread failures. In contrast, frameworks like Svelte, Solid, and Qwik feature smaller, more focused APIs that emphasize simplicity and web fundamentals, reducing the mental overhead and making them easier to master and maintain.
React’s dominance creates self-reinforcing barriers. Job postings ask for “React developers” rather than “frontend engineers,” limiting skill diversity. Component libraries and team muscle memory create institutional inertia.
Risk-averse leaders choose the “safe” option. Schools teach what jobs ask for. The cycle continues independent of technical merit.
Escaping requires deliberate action at multiple levels. Technical leaders should choose based on constraints and merits, not momentum. Companies can allocate a small innovation budget to trying alternatives. Developers can upskill beyond a single mental model.
Educators can teach framework-agnostic concepts alongside specific tools. Open source contributors can help alternative ecosystems mature.
To make deliberate choices, use this simple checklist when starting a new project:
* Assess Performance Needs: Evaluate metrics like startup time, update efficiency, and bundle size. Prioritize frameworks with compile-time optimizations if speed is critical.
* Team Skills and Learning Curve: Consider existing expertise but factor in migration paths; many alternatives offer gentle ramps (e.g., Solid’s JSX compatibility with React).
* Scaling and Cost of Ownership: Calculate long-term costs, including maintenance, dependency management, and tech debt. Alternatives often reduce runtime overhead, lowering hosting costs and improving scalability.
* Ecosystem Fit: Balance maturity with innovation; pilot in non-critical areas to test migration feasibility and ROI.
“But ecosystem maturity!” Maturity is valuable, and can also entrench inertia. Age isn’t the same as fitness for today’s constraints.
Additionally, a mature ecosystem often means heavy reliance on third-party packages, which can introduce maintenance burdens like keeping dependencies up-to-date, dealing with security vulnerabilities, and bloating bundles with unused code. While essential in some cases, this flexibility can lead to over-dependence; custom solutions tailored to specific needs are often leaner and more maintainable in the long run. Smaller ecosystems in alternative frameworks encourage building from fundamentals, fostering deeper understanding and less technical debt. Moreover, with AI coding assistants now able to generate precise, custom functions on demand, the barrier to creating bespoke utilities has lowered dramatically. This makes it feasible to avoid generic libraries like lodash or date libraries like Moment or date-fns entirely in favor of lightweight, app-specific implementations.
“But hiring!” Hiring follows demand. You can de‑risk by piloting alternatives in non‑critical paths, then hiring for fundamentals plus on‑the‑job training.
“But stability!” React’s evolution from classes to hooks to Server Components demonstrates constant churn, not stability. Alternative frameworks often provide more consistent APIs.
“But proven at scale!” jQuery was proven at scale too. Past success doesn’t guarantee future relevance.
Monoculture slows web evolution when one framework’s constraints become de facto limits. Talent spends cycles solving framework-specific issues rather than pushing the platform forward. Investment follows incumbents regardless of technical merit.
Curricula optimize for immediate employability over fundamentals, creating framework-specific rather than transferable skills. Platform improvements get delayed because “React can handle it” becomes a default answer.
Healthy ecosystems require diversity, not monocultures. Innovation emerges when different approaches compete and cross-pollinate. Developers grow by learning multiple mental models. The platform improves when several frameworks push different boundaries.
Betting everything on one model creates a single point of failure. What happens if it hits hard limits? What opportunities are we missing by not exploring alternatives?
It’s time to choose frameworks based on constraints and merit rather than momentum. Your next project deserves better than React-by-default. The ecosystem deserves the innovation only diversity can provide.
Stop planting the same seed by default. The garden we could cultivate through diverse framework exploration would be more resilient and more innovative than the monoculture we’ve drifted into.
The choice is ours to make.
...
Read the original on www.lorenstew.art »
A new design with Liquid Glass. Beautiful, delightful, and instantly familiar.
Now with the Phone app and Live Activities from iPhone for next‑level Continuity.
Take hundreds of actions in Spotlight without lifting your hands off the keyboard.
Create more powerful shortcuts than ever with Apple Intelligence.
Reimagined with Liquid Glass, macOS Tahoe is at once fresh and familiar. Apps bring more focus to your content. You can personalize your Mac like never before. And everything just flows into place.
Liquid Glass refracts and reflects content in real time, bringing even more clarity to navigation and controls — and even more vitality to everything you do.
Personalize your Mac with new options including updated light or dark appearances, new color-tinted icons, or a stunning clear look.
Your display feels even larger with the transparent menu bar. And you have more ways to customize the controls and layout in the menu bar and Control Center, even those from third parties.
Sidebars and toolbars in apps reflect the depth of your workspace and offer a subtle hint of the content within reach as you scroll.
Get more done, from even more places.
Now integrated into even more apps and experiences, Apple Intelligence helps you get things done effortlessly and communicate across languages.
Automatically translate texts in Messages, display live translated captions in FaceTime, and get spoken translations for calls in the Phone app.
Intelligent actions in Shortcuts can summarize text, create images, or tap directly into Apple Intelligence models to provide responses that feed into your shortcut.
More ways to express yourself with images.
Mix emoji and descriptions to make something brand-new. In Image Playground, discover additional ChatGPT styles. And have even more control when making images inspired by family and friends using Genmoji and Image Playground.
Continuity helps you work seamlessly across Apple devices. And with the Phone app and Live Activities coming to Mac, it’s even easier to stay on top of things happening in real time.
The menu bar now features the Live Activities from your iPhone. And when you click one, the app opens in iPhone Mirroring so you can take action.
Make and take calls with a click. Conveniently access your synced content like Recents, Contacts, and Voicemail — and enjoy the familiar features from iPhone.
For unknown numbers, Call Screening finds out who’s calling and why. Once the caller shares their name and the reason for their call, your phone rings and you can decide if you want to pick up.
Hold Assist keeps your spot in line while you wait for a live agent and notifies you when they’re ready.
Make quick work of everyday tasks, jump into your favorite activities, and turbocharge pro workflows — all with a whole lot less effort.
Spotlight lets you take hundreds of actions without lifting your hands off the keyboard. And new quick keys help you perform actions even faster.
You can now keep all your apps and most accessed files within easy reach, including intelligent suggestions based on your routines.
Now you can run shortcuts automatically — at a specific time of day or when you take specific actions, like saving a file to a particular folder or connecting to a display.
And so much more.
Magnifier lets you zoom in on your surroundings using a connected camera. Accessibility Reader provides a systemwide, customized reading and listening experience. Braille Access creates an all-new interface for braille displays. And Vehicle Motion Cues help reduce motion sickness in moving vehicles.
Parents can take advantage of a wide set of parental controls designed to keep children safe. These include new enhancements across Communication Limits, Communication Safety, and the App Store.
Now on Mac for the most comfortable writing experience, Journal makes it easy to capture and write about everyday moments and special events using photos, videos, audio recordings, places, and more.
An updated design lets you quickly access filtering and sorting options and customize the size of Collections tiles so you can view your library just how you like. And with Pinned Collections, you can keep your most-visited ones right at your fingertips.
Celebrate the people who matter most with a new tiled design that features beautiful and personalized Contact Posters.
With Apple Intelligence, Reminders can suggest tasks, grocery items, and follow-ups based on emails or other text on your device. It can also automatically categorize related reminders into sections within a list.
The new Games app brings together all the games you have on your Mac. In the Game Overlay, you can adjust system settings, chat with friends, or invite them to play — all without leaving the game. And for developers, Metal 4 brings even more advanced graphics and rendering technologies, like MetalFX Frame Interpolation and Denoising.
Create polls and personalize conversations with backgrounds. Redesigned conversation details feature designated sections for contact info, photos, links, location, and more. Typing indicators in groups let you know exactly who is about to chime in. Screening tools detect spam and give you control. And the Add Contact button now appears next to an unknown number in a group.
Easily refer to changes you’ve made to your accounts. Find previous versions of passwords, along with when they were changed.
Capture conversations in the Phone app as audio recordings with transcriptions. You can also export a note into a Markdown file.
...
Read the original on www.apple.com »
I recently bought a cheap Tapo indoor camera to see what my dog gets up to when I am out of the house.
What actually followed? I ended up reverse-engineering onboarding flows, decompiling an APK, MITMing TLS sessions, and writing cryptographic scripts.
My main motivation for this project really stemmed from the fact that the camera annoyed me from day one. Setting the camera up in Frigate was quite painful, and no one online really seemed to know how these cameras worked.
SIDENOTE: If you want two-way audio to work in Frigate you must use the tapo:// go2rtc configuration for your main stream instead of the usual rtsp://. TP-Link are lazy and only implement two-way audio on their own proprietary API.
One undocumented behavior that tripped me up was that the device’s API is supposed to accept the credentials admin:<cloud password> after onboarding. However, after banging my head against a wall for a few hours, I later discovered that if you change your cloud password after onboarding, paired devices don’t get the memo 🙂.
This implied a few things to me that started the cogs turning:
* There must be a call made during on-boarding that syncs the device password with the cloud password
* The device must either allow unauthenticated calls before this step or have some sort of default password.
So considering my onboarding woes and the fact that I was starting to recoil every time the tapo app tried to jam a “Tapo Care” subscription down my throat, a cloudless onboarding solution for the device was beginning to look more and more desirable.
The first step to cracking this egg was to be able to snoop on what the app and the camera are saying to each other during onboarding. I.e., establish a man-in-the-middle.
To man-in-the-middle a phone app, you must be able to route all HTTP(S) traffic via a proxy server you control. Historically this has been quite simple to achieve: spin up a proxy on a computer, add the proxy’s self-signed certificate to the phone’s truststore, and configure the phone to point at the proxy.
However, modern phone apps can use a few nasty tricks to render this approach ineffective. Namely, they will blatantly ignore proxies, throw the system truststore to the wind, and make liberal use of certificate pinning.
The most foolproof technique for generically MITMing an app has therefore become dynamic instrumentation via tools like frida. What this allows us to do is force an app to use the proxies and certificates that we tell it to, whilst batting aside its attempts to do things like certificate pinning.
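(For context, a minimal way to drive frida from Python for this kind of capture looks roughly like the sketch below. The package id com.tplink.iot and the unpin.js hook script are assumptions made for illustration, a generic SSL-unpinning/proxy-forcing script, rather than details taken from the setup guide linked below.)

# Minimal frida bootstrap sketch; package id and hook script name are illustrative.
import sys
import frida

PACKAGE = "com.tplink.iot"             # assumed package id of the Tapo app

device = frida.get_usb_device()        # phone over USB with frida-server running
pid = device.spawn([PACKAGE])          # spawn suspended so hooks land before app code runs
session = device.attach(pid)

with open("unpin.js") as f:            # generic certificate-unpinning / proxy-forcing hooks
    script = session.create_script(f.read())
script.load()

device.resume(pid)                     # start the app with hooks in place
print("Hooks loaded; capture away in mitmproxy. Ctrl+C to detach.")
sys.stdin.read()                       # keep the session alive while capturing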
So the setup ended up looking like this (full setup guide here):
---
config:
  theme: 'base'
  themeVariables:
    primaryColor: '#00000000'
    primaryTextColor: '#fff'
    primaryBorderColor: '#ffffff8e'
    lineColor: '#fff'
    secondaryColor: '#fff'
    tertiaryColor: '#fff'
---
sequenceDiagram
    participant A as Tapo App (with frida hooks)
    participant L as Laptop (mitmproxy)
    participant C as Tapo Camera
    A->>L: Request
    L->>L: Record request
    L->>C: Forward request
    C-->>L: Response
    L->>L: Record response
    L-->>A: Forward response
After spinning up mitmproxy, injecting the frida scripts, and onboarding the camera, we finally see an initial login flow — before the admin password ever gets changed:
However, subsequent requests look like this:
And responses look like this:
So from this initial dive we have learned that:
* Tapo 100% has a default password due to the fact that it performs a full login before it knows anything about the cloud password.
* Tapo has an encrypted securePassthrough channel for its API calls to prevent peeping toms such as myself from spilling the beans.
The next logical step is to decompile the APK in JADX and start rummaging around for a default password.
The initial login call that we captured references an admin username:
Searching for “admin” in JADX gives us many hits but there are a few concentrated in a CameraOnboardingViewModel class that look interesting:
The function m98131y2 appears to be returning a password that is then passed to the new Account() call. Following this function up the chain, we hit gold:
We already know that the device is using encrypt_type: 3, so that means our default password is:
With the default password now revealed, we have the cards in our hand to derive session keys and decode the securePassthrough messages.
The only thing that would help us further is if we had a reference implementation for the authentication flow. This is where PyTapo really came in handy.
Using PyTapo as a reference, we could dump the session state and encrypted messages from mitmproxy and write a script to do some static analysis on the decrypted requests and responses, but a really cool feature of mitmproxy is that it supports scripting itself.
What this means is that we can pass a Python script to mitmproxy and have it decrypt request and response payloads inline whilst running a capture. The script I ended up writing does exactly that (a stripped-down sketch follows the list below). It:
* Pretty-prints them inline in mitmproxy’s UI in request_decrypted and response_decrypted fields
* Dumps them to JSON files for later analysis
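The sketch below only illustrates the addon mechanism: the decryption is left as a pass-through placeholder (the real key/IV derivation follows the PyTapo login handshake, which isn’t reproduced here), and the securePassthrough field names and output paths are assumptions rather than verbatim details from the original script.

# tapo_decrypt.py - illustrative mitmproxy addon skeleton; run with: mitmproxy -s tapo_decrypt.py
import json
from pathlib import Path
from mitmproxy import http

OUT_DIR = Path("decrypted")            # assumed dump location
OUT_DIR.mkdir(exist_ok=True)

def decrypt_payload(ciphertext_b64: str) -> str:
    # Placeholder: the real addon derives the AES key/IV from the login
    # handshake (as PyTapo does) and decrypts here. This sketch just
    # passes the encrypted payload through untouched.
    return ciphertext_b64

class TapoDecrypt:
    def response(self, flow: http.HTTPFlow) -> None:
        try:
            req = json.loads(flow.request.get_text() or "{}")
            resp = json.loads(flow.response.get_text() or "{}")
        except ValueError:
            return
        if req.get("method") != "securePassthrough":
            return
        decrypted = {
            "request_decrypted": decrypt_payload(req.get("params", {}).get("request", "")),
            "response_decrypted": decrypt_payload(resp.get("result", {}).get("response", "")),
        }
        # Dump to disk for later analysis; the real script also surfaces these
        # fields inline in mitmproxy's UI.
        (OUT_DIR / f"{flow.id}.json").write_text(json.dumps(decrypted, indent=2))

addons = [TapoDecrypt()]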
The complete list of calls made by the Tapo app during onboarding was:
This boiled down to just four important calls:
changeAdminPassword — change from default password to the cloud password
Everything else was fluff: timezones, record plans, binding to cloud.
In the end, the prize for all this nonsense was a scrappy little Bash script, tapo_onboard.sh, which:
* Logs in with the default admin password,
* Switches off the obnoxious OSD logo on the camera feed,
Peeling this onion left me with a few observations on Tapo’s firmware.
* Some endpoints use SHA-256 for hashing, while others cling to MD5 like it’s 2003.
* There are two public keys used to send passwords to the device — one that is shared with the client and another super secret one that’s hardcoded in the app. The easiest way to figure out which one to use is to flip a coin.
* Password syncing between the app and its managed devices is strictly vibe-based.
The whole thing feels like it was cobbled together by a consortium of couch-cryptographers. But then again, it was the cheapest indoor camera on Amazon, so what did I expect?
And with all this said I did finally manage to figure out what the dog does when I am away.
She sleeps. On the sofa. Sometimes even in her bed.
...
Read the original on kennedn.com »
On the heels of the PayPal World announcement, a global platform connecting the world’s largest digital payment systems and wallets, PayPal today introduced PayPal links, a new way to send and receive money through a personalized, one-time link that can be shared in any conversation.
PayPal users in the U.S. can begin creating personalized payment links today, with international expansion to the UK, Italy, and other markets starting later this month. By making payments this simple and universal, PayPal links helps drive new customer acquisition and brings more users into the PayPal ecosystem.
The peer-to-peer (P2P) experience is about to go even further. Crypto will soon be directly integrated into PayPal’s new P2P payment flow in the app. This will make it more convenient for PayPal users in the U.S. to send Bitcoin, Ethereum, PYUSD, and more, to PayPal, Venmo, and a rapidly growing number of digital wallets across the world that support crypto and stablecoins.
Expanding what people can do with PayPal also comes with reassurance around how personal payments are handled. As always, friends-and-family transfers through Venmo and PayPal are exempt from 1099-K reporting. Users won’t receive tax forms for gifts, reimbursements, or splitting expenses, helping ensure that personal payments stay personal.
“For 25 years, PayPal has revolutionized how money moves between people. Now, we’re taking the next major step,” said Diego Scotti, General Manager, Consumer Group at PayPal. “Whether you’re texting, messaging, or emailing, now your money follows your conversations. Combined with PayPal World, it’s an unbeatable value proposition, showing up where people connect, making it effortless to pay your friends and family, no matter where they are or what app they’re using.”
P2P is a cornerstone of PayPal’s consumer experience, driving engagement and bringing more users into the ecosystem. P2P and other consumer total payment volume saw solid growth in the second quarter, increasing 10% year-over-year as the company focused on improving the experience and increasing user discoverability to make it easier than ever to move money globally. Plus, Venmo saw its highest TPV growth in three years. With PayPal World unlocking seamless interoperability, P2P is poised for even greater momentum in the future as PayPal and Venmo connect to billions of wallets worldwide.
* Create a personalized link — Open the PayPal app, enter the details of your payment or request, and generate a unique, one-time link to share.
* Always the right person — Each link is private, one-time use, and created for a specific transaction.
* Drop it anywhere — Send your link in a text, DM, email, or chat. Add a note, emoji, or payment note.
* Manage payment activity: Unclaimed links expire after 10 days. Using the PayPal app, users can send a reminder or even cancel the payment or request before the link is claimed.
* Tap and done — The recipient taps the link and either completes or accepts the payment within the PayPal App with their PayPal account.
* Funds are instant — the recipient will get immediate access to their funds with a PayPal Balance account once accepted.
About PayPal
PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. For more information, visit https://www.paypal.com, https://about.pypl.com/ and https://investor.pypl.com/.
About PayPal USD (PYUSD)
PayPal USD is issued by Paxos Trust Company, LLC, a fully chartered limited purpose trust company. Paxos is licensed to engage in Virtual Currency Business Activity by the New York State Department of Financial Services. Reserves for PayPal USD are fully backed by U.S. dollar deposits, U.S. Treasuries and similar cash equivalents, and PayPal USD can be bought or sold through PayPal and Venmo at a rate of $1.00 per PayPal USD.
PayPal, Inc. (NMLS ID #: 910457) is licensed to engage in Virtual Currency Business Activity by the New York State Department of Financial Services.
...
Read the original on newsroom.paypal-corp.com »
A first version of this piece was almost ready to be published two days ago, but after writing more than 2,000 words, I grew increasingly angry and exasperated, and that made the article become too meandering and rant-like, so I deleted everything, and started afresh several hours later.
This, of course, is about Awe-dropping, Apple’s September 9 event, where they presented the new iPhone lineup, the new AirPods Pro, and the new Apple Watches. And the honest truth here is that I’m becoming less and less inclined to talk about Apple, because it’s a company that I feel has lost its alignment with me and other long-time Apple users and customers.
The more Apple talks and moves like other big tech companies, the less special it gets; the less special and distinctive it gets, the less I’m interested in finding ways to talk about it. Yes, I have admitted that Apple makes me mad lately, so they still elicit a response that isn’t utter indifference on my part. And yes, you could argue that if Apple makes me mad, it means that in the end I still care.
But things aren’t this clear-cut. I currently don’t really care about Apple — I care that their bad software design decisions and their constant user-interface dumbing down may become trends and get picked up by other tech companies. So, what I still care about that’s related to Apple is essentially the consequences of their actions.
The event kicked off with the famous Steve Jobs quote,
Design is not just what it looks like and feels like. Design is how it works.
Why that quote? Why now, after months of criticism towards the new design æsthetic of Liquid Glass? I gave this choice three possible interpretations — I still may be missing something here; I’m sure my readers will let me know.
It’s Apple’s way of trolling the critics, who have repeatedly resorted to Steve Jobs’s words to criticise the several misguided UI choices in Liquid Glass. It’s the same kind of response as Phil Schiller famously blurting, Can’t innovate anymore, my ass! in 2013 during the presentation of the then-redesigned Mac Pro. But it feels like a less genuine, more passive-aggressive response (if this is the way we’re supposed to read their use of that quote).
Apple used the quote in earnest. As in, they really believe that what they’re doing is in line with Jobs’s words. If that’s the case, this is utter self-deception. The quote doesn’t reflect at all what Apple is doing in the UI and software department — the Liquid Glass design is more ‘look and feel’ than ‘work’. And the very introduction of the iPhone Air proves that Jobs’s words are falling on deaf ears on the hardware front as well.
Apple used the quote ‘for effect’. As if Meta started a keynote by saying, Our mission is to connect people, no more no less. You know, something that makes you sound great and noble, but not necessarily something you truly believe (or something that is actually true, for that matter).
I can’t know for sure which of these might be the correct interpretation. I think it heavily depends on whose Apple executive came up with the idea. Whatever the case may be, the effect was the same — it felt really jarring and tone-deaf.
If you’re not new here, you’ll know that these are the Apple products I care about the least, together with HomePods and Apple TV. I always tune out when Apple presents these, so if you want the details, browse Apple’s website or go read the technical breakdowns elsewhere. Personally, I’m too into traditional horology, and therefore the design of the Apple Watch has always felt unimaginative at best, and plain ugly at worst.
From a UI standpoint, the Apple Watch continues to feel too complicated to use, and too overburdened with features. I wouldn’t say it’s design by committee, but more like designed to appeal to a whole committee. Apple wants the watch to appeal to a wide range of customers, and therefore this little device comes stuffed with all kinds of bells and whistles. As I said more than once, the real feature I would love to see implemented is the ability to just turn off entire feature sets, so that if you only want to use it as a step counter and heart rate monitor, you can tell the watch to be just that; this would be more than just having a watchface that shows you time, steps, heart rate — it would be like having a watch that does only that. With all the features you deem unnecessary effectively disabled, imagine how much simpler interacting with it would be, and imagine how much longer its battery life would be.
What really got on my nerves during the Apple Watch segment of the event, though, is this: Apple always, always inserts a montage of sob stories about how the Apple Watch has saved lives, and what an indispensable life-saving device it is. Don’t get me wrong, I’m glad those lives were saved. But this kind of ‘showcase’ every year is made in such poor taste. It’s clear to me that it’s all marketing above everything else, that they just want to sell the product, and these people’s stories end up being used as a marketing tactic. It’s depressing.
As for the AirPods, and true wireless earbuds in general, I find this product category to be the most wasteful. Unless someone comes up with a type of earbuds that have easily replaceable batteries, I’m not interested in buying something that’s bound to become e‑waste in a relatively short period of time.
Don’t buy them. Don’t waste your money, unless you have money to waste and don’t care about a company with this kind of leadership. Read How Tim Cook sold out Steve Jobs by Anil Dash to understand how I feel. I couldn’t have said it better myself.
I’d wrap up my article here, but then I’d receive a lot of emails asking me why I didn’t talk about the iPhones, so here are a few stray observations:
One user-friendly move Apple made with this new iPhone lineup, perhaps involuntarily, is that we now have three very distinct iPhone models, whose nature and price should really help people decide which to purchase.
The regular iPhone 17 is the safe, iterative option. It looks like an iPhone 16, and it works like an iPhone 16 with better features. It’s the ideal phone for the average user (tech-savvy or not). It’s the safe choice and the best-value iPhone overall.
The iPhone 17 Pro is possibly the most Pro iPhone to date. During its presentation, I felt like Apple wants you to consider this more like a pro camera for videographers and filmmakers rather than just a smartphone with a good camera array. People who have no use for all these pro video recording features shouldn’t waste their money on it. Unless they want a big chunky iPhone with the best camera array and/or have money to burn. In my country (Spain), the 6.3‑inch iPhone 17 Pro starts at €1,319 with 256GB of storage, and goes up to €1,819 with 1TB of storage. For the bigger iPhone 17 Pro, those prices become €1,469 and €1,969 respectively, and if you want the iPhone 17 Pro Max with 2TB of storage, it’ll cost you €2,469. You do you, but I think these are insane prices for phones (and SSDs).
The iPhone Air is just… odd. I was curious to know about other techies’ reactions, and of all the major tech YouTubers, I think the one whose first impressions of the iPhone Air I agree with most is Marques Brownlee. At this point in his video, he says:
I really think this phone is gonna be a hard sell, because if you subtract emotions from it, it’s just… the worst one. This is gonna jump in the lineup at $999 — it replaces essentially the Plus phones in the lineup — and it is surrounded by other iPhones that are better than it in basically every way, other than being super thin and light. So it’s a fascinating gamble.
This phone has the same A19 Pro chip in it as the Pro phones, minus one GPU core. Interesting choice: apparently it’s a bit more efficient than the base A19, so that’s good for battery life. But we also just heard a whole long list of choices Apple made with the Pro phones to make them more thermally efficient to not overheat — switching from titanium to aluminium, and adding a vapour chamber to the back. But this phone is still titanium, and absolutely does not have room for an advanced thermal solution or any sort of vapour chamber, so it sounds like this phone could get much hotter and throttle performance much quicker. It’s a red flag.
Now we also know that ultra-thin phones have a tendency to be a little bit less durable. They’ve bent over the years. And I’m not gonna be the first one to point this out. […] And Apple of course has thought about this. They’ve for sure tested this, and they’re telling us it’s the most durable iPhone ever. But, I mean, I’m looking at the phone and I think it qualifies also as a red flag. And then we already know there is just no way battery life can be good on this phone, right? There’s just no way. I’ve been reviewing phones for more than a decade, and all signs point to it being trash.
There was a slide in the keynote today about how they were still proud to achieve ‘all-day battery life’. But, like, come on. Really? I mean they still do the thing where they rearranged the components up into the little plateau at the top to make room for more battery at the bottom. But there’s just absolutely not enough room in this phone for a large battery. And it doesn’t appear to be silicon-carbon, or any sort of a special ultra-high density battery.
And Apple also announced it alongside a special dedicated MagSafe battery accessory, just for this phone, that adds 3,149 mAh, and just barely, combined, will match the 17 Pro in terms of quoted video playback. So if that doesn’t scream red flag, I don’t know what to tell you.
It is also e‑SIM-only, globally, ’cause there’s no room in any version of this phone for a plastic SIM card. There’s also no millimeter-wave 5G. And like I said, it’s coming in at $1,000, which is more expensive than the base iPhone, which will have a better camera system, and better battery life, and may overheat less.
So look, I think there’s two ways to look at this phone. This is either Apple just throwing something new at the wall and seeing if it sticks. […] Or you can see this as a visionary, long-time-in-the-making preview at the future of all phones. Like, maybe someday in the future every phone will be this thin. And Apple is just now, today, getting the tech together with the battery and display and modem and Apple Silicon to make this phone possible. Maybe kind of like how the first MacBook Air sucked, and was underpowered, but then eventually all laptops became that thin. Maybe that’s also what’s gonna happen to smartphones. And maybe the same way Samsung made the ultra-thin S25 Edge, and then a few months later they came out with their super-thin foldable, the Z Fold7, and I felt like the Edge phone was one half of that foldable. Maybe that’s also what Apple’s doing. Maybe we’re gonna see an ultra-thin foldable iPhone next year. Maybe.
Yeah, I’m firmly in the “Apple throwing something new at the wall and seeing if it sticks” camp. Because what’s that innovative in having thin smartphones? What’s the usefulness when the other two dimensions keep increasing? Making a thin and light and relatively compact MacBook and calling it ‘Air’ made sense back when virtually no other laptop was that thin and light. It was, and is, a great solution for when you’re out and about or travelling, and space is at a premium; and you also don’t want a bulky computer to lug around.
Then Apple applied the ‘Air’ moniker to the iPad, and that started to make less sense. It’s not that a regular or Pro iPad were and are that cumbersome to begin with. And then Apple felt the need to have MacBook Airs that are 13- and 15-inch in size, instead of 11- and 13-inch. A 15-inch MacBook Air makes little sense, too, as an ‘Air’ laptop. It may be somewhat thin, somewhat light, but it’s not exactly compact.
And now we have the iPhone Air — which is just thin for thinness’ sake. It’s still a big 6.5‑inch phone that’s hardly pocketable. I still happen to handle and use a few older iPhones in the household, and the dimensions of the iPhone 5/5S/SE make those older iPhones feel more ‘Air’ than the iPhone Air. If you want a slightly more recent example, the iPhone 12 mini and 13 mini have the real lightness that could make sense in a phone. Perhaps you’ll once again remind me that the iPhone 12 mini and 13 mini weren’t a success, but I keep finding people telling me they would favour a more compact phone over a big-but-thin phone. I’ll be truly surprised if the iPhone Air turns out to be a bigger success than the ‘mini’ iPhones. It is a striking device in person, no doubt, but once this first impact is gone and you start thinking it over and making your decision, what Marques Brownlee said above is kind of hard to deny.
I find particularly hilarious the whole MagSafe battery accessory affair. Apple creates a super-thin, super-light phone, proudly showcases its striking design, and immediately neutralises this bold move and thin design by offering an accessory 1) that you’ll clearly need if you want to have a decently-lasting battery (thus admitting that that thinness certainly came with an important compromise); and 2) that instantly defeats the purpose of a thin design by returning the bulk that was shaved away in making the phone.
What should I be in awe of?
I found a lot of reactions to these products to be weirdly optimistic. Either I’m becoming more cynical with age and general tech fatigue, or certain people are easily impressed. What usually impresses me is some technological breakthrough I didn’t see coming, or a clever new device, or some clever system software features and applications that give new purposes to a device I’ve known well for a while. This event, and what was presented, didn’t show any of this.
Didn’t you expect Apple to be able to produce yet another iteration of Apple Watches and AirPods that were better than the previous one? Didn’t you expect Apple to be able to make a unibody iPhone after years of making unibody computers? Didn’t you expect Apple to be able to have iPhones with better cameras and recording capabilities than last year’s iPhones? Didn’t you expect Apple to be able to make a thinner iPhone? To come up with better chips? Or a vapour chamber to prevent overheating? Or a ‘centre stage’ feature for the selfie camera? Are these things I should be in awe of?
I will probably be genuinely amazed when Apple is finally able to come up with a solution that entirely removes the dynamic island from the front of the iPhone while still having a front-facing camera up there.
I’ll be similarly amazed when Apple finally gets rid of people who have shown to know very little about software design and user interfaces, and comes up with operating systems that are, once again, intuitive, discoverable, easy to use, and that both look and work well. Because the iOS, iPadOS, and Mac OS 26 releases are not it — and these new iPhones might be awe-inspiring all you want, but you’ll still have to deal with iOS 26 on them. These new iPhones may have a fantastic hardware and all, but what makes any hardware tick is the software. You’ve probably heard that famous quote by Alan Kay, People who are really serious about software should make their own hardware. Steve Jobs himself quoted it, adding that “this is how we feel about it” at his Apple. Today’s Apple needs to hear a revised version of that quote, something like, People who are this serious about their hardware should make better software for it.
The level of good-enough-ism Apple has reached today in software is downright baffling. This widening gap between their hardware and software competence is going to be really damaging if the course isn’t corrected. The tight integration between hardware and software has always been what made Apple platforms stand out. This integration is going to get lost if Apple keeps having wizards for hardware engineers on one side, and software and UI people producing amateurish results on the other side. Relying on legacy and unquestioning fanpeople, for whom everything Apple does is good and awesome and there’s nothing wrong with it, can only go so far. Steve Jobs always knew that software is comparatively more important than the hardware. In a 1994 interview with Jeff Goodell, published by Rolling Stone in 2010 (archived link), Jobs said:
The problem is, in hardware you can’t build a computer that’s twice as good as anyone else’s anymore. Too many people know how to do it. You’re lucky if you can do one that’s one and a third times better or one and a half times better. And then it’s only six months before everybody else catches up. But you can do it in software.
But not if you keep crippling it because you want to bring all your major platforms to the lowest common denominator.
Writer. Translator. Mac consultant. Enthusiast photographer. • If you like what I write, please consider supporting my writing by purchasing my short stories, Minigrooves or by making a donation. Thank you!
...
Read the original on morrick.me »
I have an incredibly boring summer hobby: looking at the changelog for the WebKit GitHub repo. Why? Because I spend a chunk of my professional life working with webviews inside mobile apps and I like to get an early peek into what’s coming in the next version of iOS. Since Tim Cook has yet to stand up at WWDC and announce “one more thing… Service Worker support in WKWebView, provided you add the correct entry to the WKAppBoundDomains array in your Info.plist” (and you know what, he should), manual research is the order of the day.
So I was really interested to see, the day after WWDC finished, a pull request named:
Liquid Glass was one of the big takeaways from 2025’s WWDC. Probably the biggest change in iOS UI since iOS 7 ditched the skeuomorphic look of the past. But that’s all native UI, so what does any of that have to do with webviews?
A poke around the context of the PR revealed something really interesting: Apple has a custom CSS property named -apple-visual-effect. Not only does it allow the use of Liquid Glass in iOS 26 (via values like -apple-system-glass-material) but all versions support using standard materials with values like -apple-system-blur-material-thin.
Before you, like me, fire up Safari and start editing some CSS, I have bad news: no, it doesn’t work on the web. As well it shouldn’t. But it also doesn’t work by default in an app using WKWebView: you have to toggle a setting in WKPreferences called useSystemAppearance… and it’s private. So if you use it, say goodbye to App Store approval.
I wanted to try it out all the same, so I hacked around to set useSystemAppearance to true and set my CSS to:
.toolbar {
  border-radius: 50%;
  -apple-visual-effect: -apple-system-glass-material;
  height: 75px;
  width: 450px;
}
Whoever it was at Apple that decided to make this a CSS property is a genius because it makes it incredibly easy to provide different rules based on Liquid Glass support:
.toolbar {
  border-radius: 50%;
  height: 75px;
  width: 450px;
  background: rgba(204, 204, 204, 0.7);

  @supports (-apple-visual-effect: -apple-system-glass-material) {
    background: transparent;
    -apple-visual-effect: -apple-system-glass-material;
  }
}
It’s an interesting piece of trivia, but no one outside of Apple can use it. So what does it matter? It doesn’t. Except for the implication for what I’ll call The Toupée Theory of In-App Webviews (thanks to graypegg on Hacker News for the rename). Industry-wide, webviews don’t have a great reputation. But my suggestion is this: the main reason webviews in apps have such a bad reputation is that you don’t notice the webviews that are integrated seamlessly.
It stands to reason that Apple wouldn’t have developed this feature if they weren’t using it. Where? We have no idea. But they must be using it somewhere. The fact that none of us have noticed exactly where suggests that we’re interacting with webviews in our daily use of iOS without ever even realising it.
...
Read the original on alastair.is »
I’m happy to announce the release of asciinema CLI 3.0!
This is a complete rewrite of asciinema in Rust, upgrading the recording file format, introducing terminal live streaming, and bringing numerous improvements across the board.
In this post, I’ll go over the highlights of the release. For a deeper overview of new features and improvements, see the release notes and the detailed changelog.
First, let’s get the Rust rewrite topic out of the way. I did it because I felt like it. But seriously, I felt like it because I prefer working with Rust 100x more than with Python these days. And this type of code, with syscalls and concurrency, is way easier to deal with in Rust than in Python. That’s my experience, YMMV. Anyway, in addition to making me enjoy working with this component of asciinema again, the rewrite resulted in faster startup, easier installation (a static binary), and made many new features possible by integrating the asciinema virtual terminal (also Rust) into the CLI.
Let’s look at what’s cool and new now.
The new asciicast v3 file format is an evolution of the good old asciicast v2. It addresses several shortcomings of the previous format that were discovered over the years.
The major change in the new format is the use of intervals (deltas) for timing session events. v2 used absolute timestamps (measured since session start), which had its own pros and cons. One often-brought-up issue was the difficulty of editing the recordings - timestamps of all following events had to be adjusted when adding/removing/updating events.
Other than timing, the header has been restructured, grouping related things together, e.g. all terminal-related metadata is now under term. There’s also support for the new “x” (exit) event type, for storing the session exit status. Finally, line comments are allowed by using the # character as the first character on a line.
Here’s an example of a short recording in asciicast v3 format:
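(The snippet below is an illustrative reconstruction based on the features described above: interval timing, the term header group, # comments, and the “x” exit event. Treat the exact field names as approximate and check the format docs for the authoritative definition.)

{"version": 3, "term": {"cols": 80, "rows": 24, "type": "xterm-256color"}, "timestamp": 1695000000}
# output events carry the interval since the previous event, not an absolute time
[0.25, "o", "hello "]
[1.0, "o", "world\r\n"]
[0.5, "x", "0"]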
The new format is already supported by asciinema server and asciinema player.
The new CLI allows for live streaming of terminal sessions, and provides two modes for doing so.
Local mode uses a built-in HTTP server, allowing people to view the stream on trusted networks (e.g. a LAN). In this mode no data is sent anywhere, except to the viewers’ browsers, which may require opening a firewall port. The CLI bundles the latest version of the asciinema player, and uses it to connect to the stream from the page served by the built-in server.
$ asciinema stream --local
::: asciinema session started
::: Live streaming at http://127.0.0.1:37881
::: Press
Remote mode publishes the stream through an asciinema server (either asciinema.org or a self-hosted one), which acts as a relay, delivering the stream to the viewers at a shareable URL.
$ asciinema stream --remote
::: asciinema session started
::: Live streaming at https://asciinema.org/s/TQGS82DwiBS1bYAY
::: Press
The two modes can be used together as well.
Here’s a live stream of btop running on one of the asciinema.org servers:
You can also watch it directly on asciinema.org at asciinema.org/s/olesiD03BIFH6Yz1.
Read more about the streaming architecture and supported protocols here.
asciinema player (seen above) supports all the described protocols. To make the viewing experience smooth and glitch-free, it implements an adaptive buffering mechanism. It measures network latency in real-time and adjusts the buffer size constantly, aiming for a good balance between low latency and buffer-underrun protection.
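(As a toy illustration of that idea, not the player’s actual code: a buffer can keep a smoothed latency estimate and derive its target size from it. The constants and smoothing below are made up for the example.)

# Toy sketch of latency-adaptive buffering; constants and smoothing are illustrative.
class AdaptiveBuffer:
    def __init__(self, min_ms=50.0, headroom=2.0, alpha=0.1):
        self.min_ms = min_ms        # lower bound on buffered time
        self.headroom = headroom    # buffer a multiple of observed latency
        self.alpha = alpha          # EWMA smoothing factor
        self.latency_ms = min_ms    # smoothed latency estimate

    def observe_latency(self, sample_ms):
        # Exponentially weighted moving average of measured network latency.
        self.latency_ms = (1 - self.alpha) * self.latency_ms + self.alpha * sample_ms

    def target_buffer_ms(self):
        # Higher measured latency -> bigger buffer; quiet networks -> lower-latency playback.
        return max(self.min_ms, self.headroom * self.latency_ms)

buf = AdaptiveBuffer()
for sample in (40.0, 55.0, 120.0, 80.0):
    buf.observe_latency(sample)
print(round(buf.target_buffer_ms(), 1))   # grows as latency samples arrive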
asciinema server can now record every live stream and turn it into a regular recording. At the moment, asciinema server running at asciinema.org has stream recording disabled and a concurrent live stream limit of 1, but you can self-host the server where recording is enabled and there’s no concurrent stream limit by default. The limits on asciinema.org may change. I’d like to first see how the streaming feature affects resource usage (btw, shout-out to Brightbox, which provides cloud services for asciinema.org).
In the early versions of asciinema, asciinema rec didn’t support saving to a file - the recording was saved to a tmp file, uploaded to asciinema.org, and the tmp file was removed. Later on, the CLI got the ability to specify a filename, which allowed you to save the result of a recording session to a file in asciicast v1 format and decide whether you want to keep it local only or publish.
Although optional, the filename argument had long been available. However, many, many tutorials on the internet (probably including asciinema’s own docs) showed examples of recording and publishing in one go with asciinema rec. That was fine - many people loved this short path from recording to sharing.
Over the years, I started seeing two problems with this. The first one is that lots of people still think you must upload to asciinema.org, which is not true. You can save locally and nothing leaves your machine. The second one is that the optionality of the filename made it possible to unintentionally publish a recording, and potentially leak sensitive data. And it’s a completely valid concern!
Because of that, on several occasions I’ve seen negative comments saying “asciinema is shady” /m\. It was never shady. It’s just a historical thing. I just kept the original behavior for backward compatibility. asciinema.org is not a commercial product - it’s an instance of asciinema server, which is meant to give users an easy way to share, and to give a taste of what you get when you self-host the server. In fact, I encourage everyone to self-host it, as the recordings uploaded to asciinema.org are a liability for me (while being a good promotion of the project :)).
I hope this clears up any confusion and suspicion.
Anyway, many things have changed since the original behavior of asciinema rec was implemented, including my approach to sharing my data with cloud services. These days I self-host lots of services on a server at home, and I try to avoid cloud services if I can (I’m pragmatic about it though).
The streaming feature was built from the ground up to support the local mode, which came first, and the remote mode followed.
In asciinema CLI 2.4, released 2 years ago, I made the upload command show a prompt where you have to explicitly make a decision on what to do with the recording. It looked like this:
$ asciinema rec
asciinema: recording asciicast to /tmp/tmpo8_612f8-ascii.cast
asciinema: press
It was a stopgap and a way to prepare users for further changes that are coming now.
In 3.0, the filename is always required, and the rec command no longer has upload capability. To publish a recording to asciinema.org or a self-hosted asciinema server, use the explicit asciinema upload <filename> command.
A related improvement introduced in this release is the new server URL prompt.
When using a command that integrates with asciinema server (upload, stream, auth) for the first time, a prompt is shown, pre-filled with https://asciinema.org (for convenience). This lets you choose an asciinema server instance explicitly and intentionally. The choice is saved for future invocations.
It was always possible to point the CLI to another asciinema server with a config file or environment variable, but this new prompt should come in handy especially when running the CLI in a non-workstation/non-laptop yet interactive environment, such as a fresh VM or a dev container.
This change should make it easier to use the CLI with your own asciinema server, and at the same time it doubles as an additional guard preventing unintended data leaks (to asciinema.org).
I’m really excited about this release. It’s been in the making for a while, but it’s out now, and I’m looking forward to seeing what new use-cases and workflows people will discover with it.
It’s going to take a moment until 3.0 shows up in package repositories for all supported platforms/distros. Meanwhile, you can download prebuilt binaries for GNU/Linux and macOS from the GitHub release, or build it from source.
Thanks for reading to this point!
Did you like it? Feel free to send me an email with your feedback. You can also reach me on Mastodon at @ku1ik@hachyderm.io.
Thanks!
...
Read the original on blog.asciinema.org »
When referring to the user’s stuff, which is better out of these:
It’s a trick question because often you don’t need any prefix and can just use:
Amazon is a good example of this in action because it’s obvious that it’s your account and your orders:
But what if your product contains things that belong to you and to others — for example, a case working system that contains your cases and everyone else’s?
You could use “My cases” in a navigation menu like this:
This seems fine on the face of it.
But screens are not only accessed or referred to through a menu.
For example, you might need to sign post users to their cases in an onboarding flow, email notification or help article.
Saying something like “Go to my cases” is awkward and unnatural — if I told you to go to my cases, you’d think I was telling you to go to my cases, not yours.
Similarly, a support agent might tell you to “Go to your cases” over webchat or a phone call. This is confusing if the UI says “My cases”.
These issues just don’t come up when you use “your” — I’ve used this approach in multiple products over the years, and seen exactly zero issues in user research.
This is easy if we look at an example: imagine a button labelled something like “Share your profile”.
This doesn’t make sense because it sounds like you’re instructing the computer to share their profile, not yours.
But it’s clear if you use “my”: “Share my profile”.
* Use “your” when communicating to the user
* Use “my” when the user is communicating to us
If you’d like to design forms that nail basic details like this, as well as complex problems found in enterprise systems, you might like my course, Form Design Mastery:
...
Read the original on adamsilver.io »
Imagine you’re vibing to “Teardrop” when suddenly your face appears on the massive LED screen behind the band. Not as a fun crowd shot—as processed data in Massive Attack’s real-time facial recognition system. Welcome to the most uncomfortable concert experience of 2025.
The band deployed live facial recognition technology that captured and analyzed attendees during their recent performance.
During their latest tour stop, Massive Attack shocked fans by integrating facial recognition into the show itself. Live video feeds captured audience faces, processing them through recognition software and projecting the results as part of the visual experience. This wasn’t subtle venue security—your biometric data became part of the artistic statement, whether you consented or not.
Social media erupted with bewildered reactions from attendees. Some praised the band for forcing a conversation about surveillance that most people avoid, while others expressed discomfort with the unexpected data capture. The split reactions confirmed the band’s provocative intent had landed exactly as designed.
This stunt aligns with the band’s decades-long critique of surveillance culture and digital control systems.
This provocation fits Massive Attack’s DNA perfectly. The Bristol collective has spent years weaving political commentary into their performances, particularly around themes of surveillance and control. Their collaboration with filmmaker Adam Curtis and consistent engagement with privacy issues positioned them as natural provocateurs for this moment.
Unlike typical concert technology that enhances your experience, this facial recognition system explicitly confronted attendees with the reality of data capture. The band made visible what usually happens invisibly—your face being recorded, analyzed, and potentially stored by systems you never explicitly agreed to interact with.
Details about data storage and participant consent remain unclear, adding to both artistic ambiguity and ethical concerns.
Here’s where things get murky. Massive Attack hasn’t released official details about what happened to the captured biometric data or whether permanent records were kept. This opacity intensifies the artistic statement while raising legitimate privacy questions about conducting surveillance to critique surveillance.
The audience split predictably along ideological lines. Privacy advocates called it a boundary violation disguised as art. Others viewed it as necessary shock therapy for our sleepwalking acceptance of facial recognition in everyday spaces. Both reactions prove the intervention achieved its disruptive goal.
Your relationship with facial recognition technology just got more complicated. Every venue, every event, every public space potentially captures your likeness. Massive Attack simply made the invisible visible—and deeply uncomfortable. The question now isn’t whether this was art or privacy violation, but whether you’re ready to confront how normalized surveillance has become in your daily life.
...
Read the original on www.gadgetreview.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.