10 interesting stories served every morning and every evening.
If you live in Germany, you have been treated like livestock by Deutsche Bahn (DB). Almost all of my friends have a story: they traveled with DB, got thrown out in the middle of the night in some cow village, and had to wait hours for the next train.
I have something better. I was kidnapped.
I am taking the RE5 (ID 28521) to my grandmother’s house in Meckenheim. Scheduled departure: 15:32. Scheduled arrival in Bonn: 15:54. From there, the S23 to Meckenheim. A journey of 35 kilometers, or, in DB units, somewhere between forty-five minutes and the heat death of the universe.
I wanted to arrive early to spend more time with her. My father, who lives near Troisdorf, was supposed to join us later.
I board the train. It is twenty minutes late. I consider this early. At least the train showed up. In DB’s official statistics, a train counts as “on time” if it’s less than six minutes late. Cancelled trains are not counted at all. If a train doesn’t exist, it cannot be late.
The train starts moving. The driver announces there are “issues around Bonn.” He does not specify what kind. No one asks. We have learned not to ask. He suggests we exit at Cologne South and take the subway, or continue to Troisdorf and catch a bus from there.
I decide to continue to Troisdorf. My father can just pick me up there and we drive together. The plan adapts.
The driver announces the full detour: from Cologne South to Troisdorf to Neuwied to Koblenz. The entire left bank of the Rhine is unavailable. Only then do I notice: the driver has been speaking only German. If you were a tourist who got on in Cologne to visit Brühl, thirteen minutes away, you were about to have a very confusing Christmas in Troisdorf.
A woman near me is holding chocolates and flowers. She is on the phone with her mother. “Sorry Mama, I’ll be late.” Pause. “Deutsche Bahn.” Pause. Her mother understood.
Twenty minutes later. We are approaching Troisdorf. I stand up. I gather my things. My father texts me: he is at the station, waiting.
The driver comes back on: “Hello everyone. Apparently we were not registered at Troisdorf station, so we are on the wrong tracks. We cannot stop.”
He says this the way someone might say “the coffee machine is broken.”
I watch Troisdorf slide past the window. Somewhere in the parking lot outside the station, my father is sitting in his car, watching his son pass by as livestock.
I was trying to travel 35 kilometers. I was now 63 kilometers from my grandmother’s house. Further away than when I started.
There are fifteen stations between Troisdorf and Neuwied. We pass all of them.
At some point you stop being a passenger and start being cargo. A cow transporter. Mooohhhhh. A cow transporter going to a cow village. (Germany has a word for this: Kuhdorf. The cows are metaphorical. Usually.) I reached this point around Oberkassel.
DB once operated a bus to Llucalcari, a Mallorcan village of seventeen people. I wanted to take it home.
An English speaker near the doors is getting agitated. “What is happening? Why didn’t we stop?”
“We are not registered for this track.”
“But where will we stop?”
“Fifty-five minutes.” He said it again, quieter. “I am being kidnapped.”
My seatmate, who had not looked up from his book in forty minutes, turned a page. “Deutsche Bahn.”
I had been kidnapped at a loss.
...
Read the original on www.theocharis.dev »
It’s anecdotal, I know, but revenue from my main entertainment business is down 50% over the past 3 months. Our main paid source of leads was Google Ads, which had served us well for the past 10 years or so — I think I know what I am doing in AdWords by now.
Once per month I check the analytics, updating keywords and tweaking ad campaigns. Over the past year we increased our budget, and then I started looking at it once per week, running simultaneous campaigns with different settings, just trying to get SOMETHING.
Last month Google gave us a bonus — free money! It was 5x our monthly ad spend, to use just when we needed it most — over the December holidays. I added another new campaign and updated the budgets for the existing ones. Still no change. In the last week there was still money to burn, left over from unused ad spend, so I increased our budget to 10x. ZERO RETURN.
The money ran out. I am not putting more in. Where do we go from here?
Research shows that many young people are getting their information from short video platforms like TikTok and Instagram. We are trying ads on there.
Our customer base is 50% returning customers (I am proud of that statistic!). We have an email newsletter, and we have started sending it out regularly over the past 2 months. Remember us?
We also plan to do some actual physical advertising — I am going to a market next weekend, doing a free show or two, handing out cards.
Also, we are branching out — I have some projects I want to make, related to the Magic Poi project, and hopefully sell. We ordered supplies last week.
Right now, though — I’m broke. Anyone need a website or IoT project built? I am AI-assisted, very fast!
...
Read the original on www.circusscientist.com »
Today, Michał Kiciński, one of the co-founders of CD PROJEKT and a co-founder of GOG, has acquired GOG from CD PROJEKT.
We believe the games that shaped us deserve to stay alive: easy to find, buy, download, and play forever. But time is annoyingly good at erasing them. Rights get tangled, compatibility breaks, builds disappear, and a nostalgic evening often turns into a troubleshooting session. That’s the difference between “I’m playing today” (the game lives on) and “I’ll play someday” (the game dies).
As Michał put it: “GOG stands for freedom, independence, and genuine control.”
GOG has always been built on strong values and clear principles. When Marcin Iwiński and Michał Kiciński first came up with the idea for GOG in 2007, the vision was simple: bring classic games back to players, and make sure that once you buy a game, it truly belongs to you, forever. In a market increasingly defined by mandatory clients and closed ecosystems, that philosophy feels more relevant than ever.
This new chapter is about doubling down on that vision. We want to do more to preserve the classics of the past, celebrate standout games of today, and help shape the classics of tomorrow, including new games with real retro spirit.
First of all, DRM-free is more central to GOG than ever. Your library stays yours to enjoy: same access, same offline installers, same sense of ownership. Your data stays with GOG, and GOG GALAXY remains optional.
We’ll keep our relationship with CD PROJEKT. CD PROJEKT RED games will continue to be available on GOG, and upcoming titles from the studio will also be released on the platform.
If you’re a GOG Patron, or you donate to support the Preservation Program, those funds stay within GOG. Your support has been huge this year, and we think that with your help, we can undertake even more ambitious rescue missions in 2026 and 2027. We’ll have more to say about that sometime in 2026.
GOG will remain independent in its operations. We will continue building a platform that’s ethical, non-predatory, and made to last, while helping indie developers reach the world. We’re also committed to giving the community a stronger voice, with new initiatives planned for 2026.
Thanks for being the reason this all matters.
A lot of companies sell games. Fewer do the unglamorous work of making sure the games that shaped people’s lives don’t quietly rot into incompatibility.
Thanks for caring about this mission with us. We’ll keep you posted as we ship, and in the meantime, you can dig into the full FAQ for the detailed answers.
...
Read the original on www.gog.com »
A major link in Tesla’s 4680 battery supply chain has just snapped. South Korean battery material supplier L&F Co. announced today that the value of its massive supply deal with Tesla has been slashed by over 99%, signaling a catastrophic drop in demand for the automaker’s in-house battery cells.
This is arguably the strongest evidence yet that Tesla’s 4680 program, and the Cybertruck that relies on it, are in serious trouble.
In early 2023, L&F announced a $2.9 billion contract to supply high-nickel cathode materials directly to Tesla.
At the time, the industry saw this as a major move by Tesla to secure materials for its ramp-up of the 4680 battery cell, which Elon Musk had touted as the key to halving battery costs and enabling cheaper electric vehicles, a plan he later scrapped.
Right now, Tesla’s Cybertruck is the only vehicle using the automaker’s own 4680 cells.
In a regulatory filing today, L&F revealed that the contract’s value has been written down to just $7,386.
No, that is not a typo. $2.9 billion to roughly $7,400.
L&F did not explicitly state the reason for the cut, citing only a “change in supply quantity,” but the dots are easy to connect. The high-nickel cathode was destined for Tesla’s 4680 cells, and the primary consumer of those cells is the Cybertruck.
We have been reporting on the Cybertruck’s demand issues for the better part of this year. In March, we noted that the truck was turning out to be a flop as Tesla began offering discounted financing to move inventory. By June, Tesla became desperate, launching 0% APR incentives as inventory piled up in lots across the US.
Despite a production capacity of 250,000 units per year at Giga Texas, the Cybertruck is currently selling at a run rate of roughly 20,000 to 25,000 units annually. We even saw Tesla discontinue the cheapest Cybertruck in September because, frankly, no one wanted a gutted version of a truck that was already struggling to find buyers.
If Tesla isn’t building Cybertrucks, it doesn’t need 4680 cells. And if it doesn’t need 4680 cells, L&F has no one to sell its cathode material to.
This is not a good look for Tesla’s 4680 program.
For years, we’ve been told that the 4680 cell was the “holy grail” that would allow Tesla to produce a $25,000 electric car. But five years after Battery Day, the cells are still reportedly difficult to manufacture at scale due to the dry electrode process, and their only application is a low-volume pickup truck that has become a commercial failure.
The math here is brutal. A 99% reduction in a supply contract basically means the contract was cancelled. It means Tesla is not ramping 4680 production; if anything, they might be winding it down.
The ‘Cybercab’ is also supposed to be using the 4680 cells, but we will have to wait and see how that goes.
It’s also a vehicle program that could go the way of the Cybertruck. CEO Elon Musk is insisting that it will launch in early 2026 without a steering wheel, but Tesla has yet to solve level 4 autonomous driving.
If it does launch without a steering wheel, it will be a program even more limited in volume than the Cybertruck.
The battery supply situation and the critical minerals behind it are evolving fast, and China controls most of it. In a new Substack, I shared a full list of the years of reserve remaining for each mineral.
...
Read the original on electrek.co »
This post went unexpectedly viral on Hacker News. There’s an interesting discussion there if you want to take a look.
I own more books than I can read. Not in a charming, aspirational way, but in the practical sense that at some point I stopped knowing what I owned. Somewhere around 500 books, memory stopped being a reliable catalog.
For years, I told myself I would fix this. Nothing elaborate, nothing worthy of a startup idea. A spreadsheet would have been enough. I never did it, not because it was hard, but because it was tedious.
The gap between intention and execution was small, but it was enough to keep the project permanently parked in the someday pile.
By the end of 2025, I had been working with AI agents long enough that this kind of project finally felt possible. Not because they made things more impressive, but because they removed the part I always stalled on. Execution.
The bookshelf project is where I clearly understood what my role becomes once execution stops being the bottleneck.
I tried the obvious tools first. ISBN scanner apps failed on Romanian editions, and Goodreads could not identify obscure publishers or antiquarian finds. Anything even slightly nonstandard came back incomplete or wrong. Partial data felt worse than no data at all, so every attempt ended the same way: a few entries filled in, followed by abandonment.
What I needed was not a better app, but a way to tolerate imperfection without the whole system falling apart.
Every project starts with bad data, and this one started with worse data. One afternoon, I photographed every book I own: spines, covers, duplicates, and the occasional blurry thumb. Four hundred and seventy photos in total. Once the images were on my laptop, I opened Claude.
The first steps were mechanical. Renaming files. Converting HEIC to JPG. Then I asked for something real: a script that sends each image to OpenAI’s vision API, extracts author, title, and publisher, normalizes names, resizes images to avoid wasting tokens, and writes everything to a JSON file.
Claude wrote the script and ran it. It worked. Not perfectly, but well enough to matter.
{
  "id": "ZfEPBCMZDaCKm6k0NVJ8F",
  "title": "Simulacre și simulare",
  "author": "Jean Baudrillard",
  "publisher": "Colectia Panopticon",
  "source": "dataset/83.jpg"
}
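The post doesn’t include the script itself. As a rough sketch of the shape it probably takes (assuming the official openai Node package and an arbitrary vision-capable model, neither of which is confirmed by the post), the core loop might look like this:

```typescript
// catalog.ts: a rough sketch, not the author's actual script.
// Assumes Node 18+, `npm install openai`, and photos already converted to JPG.
import OpenAI from "openai";
import { readFileSync, readdirSync, writeFileSync } from "node:fs";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function extractBook(path: string) {
  const image = readFileSync(path).toString("base64");
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // the specific model here is a guess
    response_format: { type: "json_object" },
    messages: [{
      role: "user",
      content: [
        { type: "text", text: "Identify the book in this photo. Answer as JSON with keys: title, author, publisher." },
        { type: "image_url", image_url: { url: `data:image/jpeg;base64,${image}` } },
      ],
    }],
  });
  return { ...JSON.parse(res.choices[0].message.content ?? "{}"), source: path };
}

async function main() {
  const books: Record<string, string>[] = [];
  for (const file of readdirSync("dataset").filter((f) => f.endsWith(".jpg"))) {
    books.push(await extractBook(`dataset/${file}`));
  }
  writeFileSync("books.json", JSON.stringify(books, null, 2));
}

main();
```

The response_format hint keeps the model’s answer parseable as JSON; the resizing and name normalization the author mentions would slot in before and after this call.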
Roughly 90 percent of the books came back correct. The failures were predictable: poor lighting, damaged covers, unreadable spines. One novel was confidently identified as a 1987 Soviet agricultural manual.
I fixed the rest by hand. That decision was not technical, it was judgment. Ninety percent accuracy was enough. Chasing the remaining ten percent would have meant days of edge cases for very little additional value. That was the first moment where my role became clear.
Later, when I received a few books for Christmas, we added a second script that runs the same pipeline for new additions. Photo in, metadata and images out.
With metadata sorted, covers were still missing. My photos showed spines, not artwork, and I wanted a clean visual representation. Claude suggested using Open Library’s API to fetch covers, which mostly worked. Half the covers were low quality or incorrect, and Romanian editions barely existed in the database.
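For reference, that first pass comes down to a couple of calls against Open Library’s public search and covers endpoints. A sketch (the quality-scoring pass and the SerpAPI fallback described next are separate and not shown here):

```typescript
// fetch-cover.ts: sketch of a first pass against Open Library.
// Search for the title, take the first hit's cover ID, download the large cover.
async function fetchCover(title: string, author: string): Promise<Buffer | null> {
  const q = encodeURIComponent(`${title} ${author}`);
  const search = await fetch(`https://openlibrary.org/search.json?q=${q}&limit=1`);
  const data = await search.json();
  const coverId = data.docs?.[0]?.cover_i;
  if (!coverId) return null; // nothing indexed, fall back to another source

  const img = await fetch(`https://covers.openlibrary.org/b/id/${coverId}-L.jpg`);
  return img.ok ? Buffer.from(await img.arrayBuffer()) : null;
}
```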
We iterated. Claude wrote a second pass, another model call that scored cover quality and flagged bad matches. For flagged books, it fell back to Google Images via SerpAPI. That handled most cases. A few remained: antiquarian finds and obscure Soviet boxing manuals that no database was ever going to have clean assets for.
I opened Photoshop and fixed ten covers by hand. For a collection of 460 books, ten manual edits felt like a win.
Once the data and covers were in place, the UI came next. The obvious solution was a grid of covers. It was correct, and it was lifeless. I kept looking at my physical bookshelf instead. What makes it interesting is not the covers, but the spines. Different widths, uneven pressure, colors blending into a single texture.
That was the thing I wanted to recreate.
Claude did not invent that idea. It executed it. It wrote a script to extract dominant colors from each cover using color quantization, then computed contrasting text colors for readability. The result was better, but still wrong. Every book had the same width, and real books are not like that.
Open Library had page counts. We mapped page count to spine width and added slight variation to break the uniformity. At that point, it finally looked like a bookshelf.
{
  "id": "ZfEPBCMZDaCKm6k0NVJ8F",
  "title": "Simulacre si simulare",
  "author": "Jean Baudrillard",
  "backgroundColor": "#f0f0ff",
  "color": "#1f1f2e",
  "paddingLeft": 13,
  "paddingRight": 13,
  "height": 384,
  "cover": "/images/bookshelf/simulacre-si-simulare@2x.webp",
  "source": "dataset/83.jpg"
}
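The two calculations behind those fields, page count to spine width and background color to readable text color, are not shown in the post. A plausible sketch, with the ranges and the luminance cutoff chosen by me rather than taken from the project:

```typescript
// spines.ts: sketch of the spine-geometry and text-color pass.

// Map a page count to a plausible spine width in pixels, with a little
// per-book jitter so the shelf does not look machine-made.
function spineWidth(pages: number): number {
  const clamped = Math.min(Math.max(pages, 80), 1200);
  const base = 28 + (clamped / 1200) * 60;   // roughly 32 to 88 px
  const jitter = (Math.random() - 0.5) * 6;  // plus or minus 3 px of variation
  return Math.round(base + jitter);
}

// Pick dark or light text depending on the background's relative luminance.
function textColor(background: string): string {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(background.slice(i, i + 2), 16) / 255);
  const luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b; // WCAG-style weights, ignoring gamma
  return luminance > 0.5 ? "#1f1f2e" : "#f5f5f5";
}

// textColor("#f0f0ff") returns "#1f1f2e", matching the entry above.
```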
Visually, the shelf worked, but it felt static. A real shelf responds to touch. When you run your finger along the spines, they tilt slightly. I asked Claude for an animation, and it came back with a scroll-based tilt using Framer Motion.
It was close, but wrong. The movement snapped instead of flowing. I did not know why, I just knew it felt off. That was enough.
Claude explained the issue immediately. We were updating React state on every scroll event, causing unnecessary re-renders. The fix was to use motion values and springs that animate outside React’s render cycle. Two minutes later, it was fixed. I spent the next few minutes scrolling back and forth, just watching it move. This was the moment my caution dropped, not because the tool was always right, but because the cost of trying ideas had collapsed.
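The fix described here maps onto a standard Framer Motion pattern: drive the tilt from motion values (useScroll, useVelocity, useSpring) so updates bypass React’s render cycle entirely. A sketch of that pattern, with component names and numbers that are illustrative rather than taken from the project:

```tsx
// Shelf.tsx: scroll-driven tilt on motion values, so no setState per scroll event.
import { useRef, type ReactNode } from "react";
import { motion, useScroll, useVelocity, useSpring, useTransform } from "framer-motion";

export function Shelf({ children }: { children: ReactNode }) {
  const container = useRef<HTMLDivElement>(null);

  // MotionValues update outside React's render cycle.
  const { scrollX } = useScroll({ container });
  const velocity = useVelocity(scrollX);                        // px/s of horizontal scroll
  const smoothed = useSpring(velocity, { stiffness: 200, damping: 40 });
  const tilt = useTransform(smoothed, [-1500, 1500], [6, -6]);  // map speed to a few degrees

  return (
    <div ref={container} style={{ overflowX: "auto" }}>
      {/* Tilting the whole row keeps the sketch short; the real shelf tilts individual spines. */}
      <motion.div style={{ display: "flex", rotate: tilt }}>{children}</motion.div>
    </div>
  );
}
```

Because rotate is fed a MotionValue, scrolling never touches component state, which matches the author’s account of why the snapping stopped.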
That confidence had a downside. I started asking for things I did not need. Infinite scroll seemed sensible. Why render 460 books at once? Claude implemented it, and technically it worked. Memory stayed flat, and the DOM updated correctly.
But scrolling broke. The container height desynced, the last books were unreachable, and every attempted fix introduced new jank. The feature worked, but the experience did not. So we removed it. Not because it was broken, but because it was unnecessary. Four hundred and sixty books is not a scale problem. Knowing when to delete working code is not something an AI can decide for you.
The shelf looked great on desktop, but on mobile, horizontal scrolling felt cramped. I wanted an alternative layout: books lying flat, stacked vertically, readable without tilting your head. I pointed Claude at the shelf implementation and asked for a stack view.
It read the code, inferred the patterns, and reused them: animation timing, color extraction, scroll based opacity, the same data shape. It built the new component and wired up a toggle between layouts. It worked without explanation. That surprised me more than anything else.
Claude wrote all the code. So what did I do?
* I decided that 90 percent accuracy was enough.
* I fixed the ten covers no API could find.
* I deleted infinite scroll because I did not need it.
* I kept scrolling the animation until it felt right.
After years of false starts, my bookshelf finally exists. Four hundred and sixty books, cataloged and displayed on my bookshelf page. I almost dismissed Claude Code as hype. Now, the times when I wrote everything by hand feel distant, almost strange.
Execution keeps getting cheaper. Taste still does not.
...
Read the original on balajmarius.com »
Only the engineers who work on a large software system can meaningfully participate in the design process. That’s because you cannot do good software design without an intimate understanding of the concrete details of the system. In other words, generic software design advice is typically useless for most practical software design problems.
What is generic software design? It’s “designing to the problem”: the kind of advice you give when you have a reasonable understanding of the domain, but very little knowledge of the existing codebase. Unfortunately, this is the only kind of advice you’ll read in software books and blog posts. Engineers love giving generic software design advice for the same reason that all technical professionals love “talking shop”. However, you should be very careful about applying generic advice to your concrete day-to-day work problems.
When you’re doing real work, concrete factors dominate generic factors. Having a clear understanding of what the code looks like right now is far, far more important than having a good grasp on general design patterns or principles. For instance:
* In large codebases, consistency is more important than “good design”. I won’t argue that point here, but I wrote about it at length in Mistakes engineers make in large established codebases.
* Real codebases are typically full of complex, hard-to-predict consequences. If you want to make your change safely, that typically constrains your implementation choices down to a bare handful of possibilities.
* Large shared codebases never reflect a single design, but are always in some intermediate state between different software designs. How the codebase will hang together after an individual change is thus way more important than what ideal “north star” you’re driving towards.
In a world where you could rewrite the entire system at will, generic software design advice would be much more practical. Some projects are like this! But the majority of software engineering work is done on systems that cannot be safely rewritten. These systems cannot rely on “software design”, but must instead rely on internal consistency and the carefulness of their engineers.
What does good software design look like, then?
In my experience, the most useful software design happens in conversations between a small group of engineers who all have deep understanding of the system, because they’re the ones working on it every day. These design discussions are often really boring to outsiders, because they revolve around arcane concrete details of the system, not around general principles that any technical person can understand and have an opinion on.
The kinds of topics being discussed are not “is DRY better than WET”, but instead “could we put this new behavior in subsystem A? No, because it needs information B, which isn’t available to that subsystem in context C, and we can’t expose that without rewriting subsystem D, but if we split up subsystem E here and here…”.
Deep philosophical points about design are rarely important to the discussion. Instead, the most critical contributions point out small misunderstandings of concrete points, like: “oh, you thought B wasn’t available in context C, but we recently refactored C so now we could thread in B if we needed to”.
Generic software design advice is not useful for practical software design problems, but that doesn’t mean it’s totally useless.
Generic software design advice is useful for building brand-new projects. As I argued above, when you’re designing a new feature in an existing system, concrete factors of the system dominate. But when you’re designing a new system, there are no concrete factors, so you can be entirely guided by generic advice.
Generic software design advice is useful for tie-breaking concrete design decisions. I don’t think you should start with a generic design, but if you have a few candidate concrete pathways that all seem acceptable, generic principles can help you decide between them.
This is particularly true at the level of the entire company. In other words, generic software design advice can help ensure consistency across different codebases. This is one of the most useful functions of an official “software architect” role: to provide a set of general principles so that individual engineers can all tie-break their concrete decisions in the same direction.
Generic software design principles can also guide company-wide architectural decisions. Should we run our services in our own datacenter, or in the cloud? Should we use k8s? AWS or Azure? Once you get broad enough, the concrete details of individual services almost don’t matter, because it’s going to be a huge amount of work either way. Still, even for these decisions, concrete details matter a lot. There are certain things you just can’t do in the cloud (like rely on bespoke hardware setups), or that you can’t do in your own datacenter (like deploy your service to the edge in twelve different regions). If the concrete details of your codebase rely on one of those things, you’ll be in for a bad time if you ignore them when making company-wide architectural decisions.
Those are all good reasons to do generic software design. One bad reason companies do generic software design is that it just sounds like a really good idea to people who aren’t working software engineers. Once you’re doing it, the incentives make it hard to stop. Many tech companies fall into this local minimum.
Why not have your highest-paid software engineers spend their time exclusively making the most abstract, highest-impact decisions? You want your structural engineers to be drawing, not laying bricks, after all. I don’t know if structural engineering works like this, but I do know that software engineering doesn’t. In practice, software architecture advice often has to be ignored by the people on the ground. There’s simply no way to actually translate it into something they can implement, in the context of the current system as it exists.
However, for a practice that doesn’t work, “have your top engineers just do generic design” is surprisingly robust. Architects don’t have any skin in the game, because their designs are handed off to actual engineering teams to implement. Because those designs can never be implemented perfectly, architects can both claim credit for successes (after all, it was their design) and disclaim failures (if only those fools had followed my design!)
When working on large existing codebases, useful software design discussions are way, way more concrete than many people believe. They typically involve talking about individual files or even lines of code. You thus can’t do useful software design without being intimately familiar with the codebase (in practice, that almost always means being an active contributor).
Purely generic architecture is not useless, but its role should be restricted to (a) setting out paved paths for brand new systems, (b) tie-breaking decisions on existing systems, and (c) helping companies make broad technology choices.
In my opinion, formal “big-picture software architect” roles that spend all their time laying out the initial designs for projects are doomed to failure. They sound like a good idea (and they’re a good deal for the architect, who can claim credit without risking blame), but they provide very little value to the engineering teams that are tasked with actually writing the code.
Personally, I believe that if you come up with the design for a software project, you ought to be responsible for the project’s success or failure. That would rapidly ensure that the people designing software systems are the people who know how to ship software systems. It would also ensure that the real software designers - the ones that have to take into account all the rough edges and warts of the codebase - get credit for the difficult design work they do.
edit: this post got some comments on Hacker News. I was surprised to see some commenters disagreeing with my point about consistency. I remember the reception of Mistakes engineers make in large established codebases being quite positive. I was not surprised to see some commenters make the “haha, this is hypocritical because it is itself generic advice” point. I addressed this in the “when generic design is useful” section above.
...
Read the original on www.seangoedecke.com »
This page gives you direct access to all 1,038 delisted Steam titles on the site. Below each title are the companies it relates to. An * in the title denotes a placeholder page that contains basic details.
...
Read the original on delistedgames.com »
I switched to eSIM in 2025, and I am full of regret
Swapping SIM cards used to be easy, and then came eSIM.
NanoSIMs are tiny, but phone makers still want to get rid of them.
SIM cards, the small slips of plastic that have held your mobile subscriber information since time immemorial, are on the verge of extinction. In an effort to save space for other components, device makers are finally dropping the SIM slot, and Google is the latest to move to embedded SIMs with the Pixel 10 series. After long avoiding eSIM, I had no choice but to take the plunge when the time came to review Google’s new phones. And boy, do I regret it.
SIM cards have existed in some form since the ’90s. Back then, they were credit card-sized chunks of plastic that occupied a lot of space inside the clunky phones of the era. They slimmed down over time, going through the miniSIM, microSIM, and finally nanoSIM eras. A modern nanoSIM is about the size of your pinky nail, but space is at a premium inside smartphones. Enter, eSIM.
The eSIM standard was introduced in 2016, slowly gaining support as a secondary option in smartphones. Rather than holding your phone number on a removable card, an eSIM is a programmable, non-removable component soldered to the circuit board. This allows you to store multiple SIMs and swap between them in software, and no one can swipe your SIM card from the phone. They also take up half as much space compared to a removable card, which is why OEMs have begun dropping the physical slot.
Apple was the first major smartphone maker to force the use of eSIM with the release of the iPhone 14, and it makes use of that space. The international iPhone 17 with a SIM card slot has a smaller battery than the eSIM-only version, but the difference is only about 8 percent. Google didn’t make the jump until this year with the Pixel 10 series—the US models are eSIM-only, but they unfortunately don’t have more of anything compared to the international versions.
In advance of the shift, Android got system-level support for downloading and transferring eSIMs. But whatever can go wrong will go wrong, and it’s extremely annoying when eSIM goes wrong.
There have been times when I swapped between phones on an almost daily basis—such was the nature of reviewing phones back when there were a dozen of them coming out every month. Never once in all those countless device swaps did I have a problem with my SIM card. As such, I managed to avoid contacting carrier support for years at a time.
In the three months since Google forced me to give up my physical SIM card, I’ve only needed to move my eSIM occasionally. Still, my phone number has ended up stuck in limbo on two occasions. Android’s built-in tools work better than they used to, and I can’t say what is responsible for the eSIM corruption. However, carriers bear the responsibility for how annoying this is to fix.
The first time, I was logged in to the mobile app for my carrier (T-Mobile). After a few minutes of back and forth with support, I was able to use the app to authenticate and get a new eSIM pushed to the phone. It was annoying but relatively painless. The second time a SIM transfer went sideways, I was not logged in to the app, and that was a problem.
When a mobile carrier needs to verify your identity for an account change, they all do the same thing: send a text message. And what happens if you don’t have a working SIM? That’s right—nothing. Without access to my account or phone number, I was stuck with no way to download a new eSIM. The only course of action was to go to a physical store to download an electronic SIM card. What should have been 30 seconds of fiddling with a piece of plastic turned into an hour standing around a retail storefront.
Many people have had the same phone number for years—even decades at this point. These numbers aren’t just a way for people to get in touch because, stupidly, we have also settled on phone numbers as a means of authentication. Banks, messaging apps, crypto exchanges, this very website’s publishing platform, and even the carriers managing your number rely on SMS multifactor codes. And those codes aren’t even very secure.
So losing access to your phone number doesn’t just lock you out of your phone. Key parts of your digital life can also become inaccessible, and that could happen more often now due to the fungible nature of eSIMs.
Most people won’t need to move their phone number very often, but the risk that your eSIM goes up in smoke when you do is very real. Compare that to a physical SIM card, which will virtually never fail unless you damage the card. Swapping that tiny bit of plastic takes a few seconds, and it never requires you to sit on hold with your carrier’s support agents or drive to a store. In short, a physical SIM is essentially foolproof, and eSIM is not.
Obviously, the solution is not to remove multifactor authentication—your phone number is, unfortunately, too important to be unguarded. However, carriers’ use of SMS to control account access is self-defeating and virtually guarantees people are going to have bad experiences in the era of eSIM. Enshittification has truly come for SIM cards.
If this future is inevitable, there ought to be a better way to confirm account ownership when your eSIM glitches. It doesn’t matter what that is as long as SMS isn’t the default. Google actually gets this right with Fi. You can download an eSIM at any time via the Fi app, and it’s secured with the same settings as your Google account. That’s really as good as it gets for consumer security. Between Google Authenticator, passkeys, and push notifications, it’s pretty hard to get locked out of Google, even if you take advantage of advanced security features.
We gave up the headphone jack. We gave up the microSD card. Is all this worthwhile to boost battery capacity by 8 percent? That’s a tough sell.
...
Read the original on arstechnica.com »
Replaces the standard Go runtime with one designed for the Dreamcast’s constraints: 16 MB of RAM, a single-core SH-4 CPU, and no operating system. Provides garbage collection, goroutines, channels, and the core runtime functions.
Prerequisites: Go 1.25.3+, make, and git must be installed.
go install github.com/drpaneas/godc@latest
godc setup
godc doctor # to check (optional)
Note: The godc CLI tool is a separate project that handles toolchain setup and builds.
mkdir myproject && cd myproject
godc init
# write your main.go and other *.go files
godc build
godc run
See the Quick Start Guide for your first program.
* hello_screen — Hello World on screen using BIOS font
...
Read the original on github.com »