Just a year ago, I was really deep into the Apple ecosystem. It seemed like there was no turning back from the orchard for me. Phone, laptop, watch, tablet, video and music streaming, cloud storage, and even a key tracker. All from one manufacturer. Plus shared family photo albums, calendars, and even shopping lists.
However, at some point, I discovered Plenti, a company that rents a really wide range of devices at quite reasonable prices. Casually, I threw the phrase "samsung fold" into the search engine on their website, and it turned out that the Samsung Galaxy Z Fold 6 could be rented for just 250-300 PLN per month. That was quite an interesting option, as I was insanely curious about what it's like to live with a foldable phone that unfolds into the equivalent of a tablet. Plus, I would never dare to buy this type of device, because firstly, their prices are astronomical, and secondly, I have serious doubts about the longevity of the folding screen. I checked the rental conditions from Plenti and nothing raised my suspicions. Renting seemed like a really cool option, so I decided to get the Fold 6 for half a year. That's how I broke out of the orchard and slightly reopened my heart to solutions without the apple logo. I even wrote a post about the whole process - I betrayed #TeamApple for broken phone. What I'm getting at is that this is how Android returned to my living room, and I started liking it all over again.
My adventure with Samsung ended after the planned 6 months. The Galaxy Z Fold 6 is a good phone, and the ability to unfold it to the size of a tablet is an amazing feature. However, what bothered me about it was:
paying 300 PLN (~80 USD) per month in rent is a fine short-term way to test something, but it doesn't make sense in the long run.
All the points above made me give up on extending the rental and start wondering what to do next. Interestingly, I liked Android enough that I didn't necessarily want to go back to iOS. Around this time, an article hit my RSS reader: Creators of the most secure version of Android fear France. Travel ban for the whole team (I think it was this one, but I'm not entirely sure; it doesn't really matter). It described how France wants to get its hands on the GrapheneOS system and thereby carry out a very serious attack on the privacy of its users. I thought then, "Hey! A European country wants to force a backdoor into the system because it's too well secured for them to surveil its users. Either the topic is being artificially blown out of proportion, or there's actually something special about this system!" At that moment, a somewhat forgotten nerd gene ignited in me. I decided to abandon not only iOS but also mainstream Android, and try a completely alternative system.
GrapheneOS is a custom, open-source operating system designed with the idea of providing users with the highest level of privacy and security. It is based on the Android Open Source Project (AOSP), but differs significantly from standard software versions found in smartphones. Its creators completely eliminated integration with Google services at the system level, which avoids tracking and data collection by corporations, while offering a modern and stable working environment.
The system is distinguished by advanced “hardening” of the kernel and key components, which minimizes vulnerability to hacking attacks and exploits. A unique feature of GrapheneOS is the ability to run Google Play Services in an isolated environment (sandbox), allowing the user to use popular applications without granting them broad system permissions. Currently, the project focuses on supporting Google Pixel series phones, utilizing their dedicated Titan M security chips for full data protection.
When I used to read about GrapheneOS, the list of compatible devices included items from several different manufacturers. Now it's Google Pixel devices only. This doesn't mean you can't run the system on, say, a Samsung, but the creators simply don't guarantee it will work properly, and you'd have to handle porting it yourself. There's a certain irony in a system freed from Google services running best on Google's own devices. If anyone wants to read more about why Pixels are the best fit for GrapheneOS, I recommend looking up the following keywords: Verified Boot, Titan M, IOMMU, MTE.
At the stage of choosing a device to test GrapheneOS on, I wasn’t yet sure if such a solution would work for me at all and if I’d last with it in the long run. So it would be unreasonable to lay out a significant amount of money. Because of this, probably the only sensible choice was the Google Pixel 9a. This was a few months ago, when not enough time had passed since the premiere of the 10 series models for them to make it onto the fully supported devices list. At that time, the Pixel 9a was the freshest device on the list (offering up to 7 YEARS of support!) and on top of that, it was very attractively priced, as I bought it for around 1600 PLN (~450 USD).
In retrospect, I still consider it a good choice and definitely recommend this path to anyone who is currently at the stage of deciding on what hardware to start their GrapheneOS adventure. The only thing that bothers me a bit about the Pixel 9a is the quality of the photos it takes. I switched to it having previously had the iPhone 15 Pro and Samsung Galaxy Z Fold 6, which are excellent in this regard, so it’s no wonder I’m a bit spoiled, because I was simply used to a completely different level of cameras. Now I also know that GrapheneOS will stay with me for longer, so it’s possible that knowing then what I know now, I would have opted for some more expensive gear. However, this isn’t important to me now, because for the time being I don’t plan to switch to another device, and by the time that changes, the market situation and the list of available options will certainly have changed too. Besides, I’m positively surprised by the battery life and overall performance of this phone.
A suitable smartphone - in my case, it’s a Google Pixel 9a.
A cable to connect the phone to a computer; it can’t be just any cable, but one that is used not only for charging but also for data transmission. It’s best to just use the cable that came with the phone.
A computer with a Chromium-based browser (e.g., Google Chrome, Brave, Microsoft Edge, or Vivaldi). Unfortunately, I have to recommend Windows 10/11 here, because then you don't have to mess around with any drivers; it's simply the easiest option.
If it’s new, we take it out of the box and turn it on. If it was previously used, we restore it to factory settings (Settings -> System -> Reset options -> Erase all data (factory reset) -> Erase all data). I think it’s stating the obvious, but I’ll write it anyway - a factory reset results in the deletion of all user data from the device, so if you have anything important on it, you need to back it up.
We must go through the basic setup until we see the home screen. We do the absolute minimum. Here is a breakdown of the steps:
we don’t connect to Wi-Fi, so we skip this step too
we don’t need to do anything with the warranty terms, so just the Next button
there is no need to waste time setting up biometrics, so we politely decline and skip fingerprint and face scan
First of all, we need to make sure that our phone’s software is updated to the latest available version. For this purpose, we go to Settings -> System -> System update. If necessary, we update.
Next, we go to Settings -> About phone -> find the Build number field and tap it 7 times until we see the message You are now a developer. In the meantime, the phone will ask for the PIN we set during the phone setup.
We go back and now enter Settings -> System -> Developer options -> turn on the OEM unlocking option. The phone will ask for the PIN again. After entering it, we still have to confirm that we definitely want to remove the lock.
When the screen goes completely dark, we simultaneously press and hold the power and volume down buttons until the text-based Fastboot Mode interface appears. If the phone starts up normally, it means we performed one of the earlier steps incorrectly.
We go to the computer and open the browser (based on the Chromium engine) to the address https://grapheneos.org/install/web.
A window with a list of devices to choose from will pop up in the browser. There should basically be only one item on it, and that should be our Pixel. We select it and press the Connect button.
Changes will occur on the phone's display. A message will appear asking us to confirm that we actually want to unlock the bootloader. To do this, we must press one of the volume buttons so that Unlock the bootloader appears instead of Do not unlock the bootloader. At this point, we can confirm by pressing the power button.
On the GrapheneOS website, we scroll down to the Obtaining factory images section and press the Download release button. If the phone is still connected to the computer, the website will decide on its own which system image to download.
We wait for the download to finish; the time this takes depends directly on the speed of the internet connection. Once it's done, we scroll to the Flashing factory images section, press the Flash release button, and wait for the flashing process to complete.
Locking the bootloader is crucial because it enables the full operation of the Verified Boot feature. It also prevents the use of fastboot mode to flash, format, or wipe partitions. Verified Boot detects any modifications to the OS partitions and blocks the reading of any altered or corrupted data. If changes are detected, the system uses error correction data to attempt to recover the original data, which is then verified again — thanks to this mechanism, the system is resilient to accidental (non-malicious) file corruption.
Being in Fastboot Mode, when we see the Start message, we press the power button, which will cause the system to start normally. If we don’t see Start at the height of the power button, we have to press the volume buttons and find this option.
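A side note before we move on to the initial setup: if you prefer the command line over the web installer, GrapheneOS also documents a CLI install that follows the same unlock, flash, lock sequence using Android platform-tools. Here's a rough sketch of that flow with placeholder paths; treat it as illustrative only, since the authoritative steps live in the official CLI install guide:

```python
# Rough sketch of the GrapheneOS CLI install flow, shelling out to Android
# platform-tools. Directory and file names are placeholders; follow the
# official CLI install guide for the real, current steps.
import subprocess

def run(cmd, **kwargs):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, **kwargs)

run(["fastboot", "flashing", "unlock"])      # confirm on the phone's screen
# flash-all.sh ships inside the unpacked factory image and flashes every partition
run(["bash", "flash-all.sh"], cwd="pixel-9a-factory-image")  # placeholder directory
run(["fastboot", "flashing", "lock"])        # re-lock so Verified Boot is enforced
```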
This is a standard procedure, so we will only go through it briefly:
I recommend turning off the location service, because it’s better to configure it calmly later by granting permissions only to apps that really need it
securing the phone with a fingerprint; I'm personally an advocate of this solution, so I recommend using it. GrapheneOS does not (yet) support face unlock, so a fingerprint and a standard PIN/password are the only methods to choose from (pattern unlock I reject out of hand as a form of screen lock that can't in good conscience be called security at all)
I assume that if you are reading this post, you are a graphene freshman and you have no backup to restore, so we just skip this step
We land back in Fastboot Mode. I assume the phone was connected to the computer the whole time (if not, reconnect it). We return to the browser on the computer. We find the Locking the bootloader section and press the Lock bootloader button.
Again, confirmation of this operation on the phone is required. It looks analogous to unlocking, except this time, using the volume buttons, we have to make the Lock the bootloader option active and confirm it with the power button.
Just like when removing the lock, we go to Settings -> About phone -> find the Build number field and tap it 7 times until we see the message You are now a developer. In the meantime, the phone will ask for the PIN we set during the phone setup.
We go back and now enter Settings -> System -> Developer options -> turn off the OEM unlocking option. The phone will ask us to restart to change this setting, but for now we cancel this request, because we still want to completely turn off Developer options, which is done by unchecking the box next to the first option at the very top, Use developer options.
Now the real fun begins. You'll hear/read as many opinions on what you should and shouldn't do when hardening GrapheneOS as there are people. Some are conservative, while others approach the topic more liberally. In my opinion, there is no one right path; everyone should dig around, test things out, and decide what suits them and fits their security profile. You'll quickly find out that GrapheneOS is really one big compromise between convenience and privacy. The same rule applies to everything in the digital world, but only here will you truly notice it, because GrapheneOS shows you how many things you can control that you simply can't on conventional Android.

I don't intend to use this post to promote some "one and only" method of using GrapheneOS. I'll simply present how I use the system. This way, I'll show the basics to people fresh to the topic; maybe I'll manage to suggest an interesting trick to those who have been users for a while; and third, maybe some expert will show up who, after reading my ramblings, will suggest something interesting or point out what I'm doing wrong or could do better. I'm sure there's something, since my adventure with GrapheneOS has only been going on for about 3 months. I warn you right away that I may not maintain a perfectly logical train of thought, as I'll probably jump between topics a bit. The subject of GrapheneOS is vast, and in today's post I'll only manage to scratch the surface.
One of the first things I did after booting up the freshly installed system was to create a second user profile. This is done in Settings -> System -> Multiple users. The idea is for this feature to allow two (or more) people to use one phone, each having a separate profile with their own settings, apps, etc. Who in their right mind does that? While I can imagine sharing a home tablet, sharing a phone completely eludes me. It therefore seems like a dead feature, but nothing could be further from the truth.
For me, it works like this: on the Owner user, because that’s the name of the main account created automatically with the system, I installed the Google Play Store along with Google Play services and GmsCompatConfig. This is done through the App Store application, which is a component of the GrapheneOS system. Please don’t confuse this with Apple’s app store, even though the name is the same. From the Play Store I only installed the following applications:
And that’s it. As you can see, this profile serves me only for apps that absolutely require integration with Google services. In practice, I switch to it only when I want to pay contactlessly in a store, which I actually do rarely lately, because if there’s an option, I pay using BLIK codes. Right after switching from Samsung there were more apps on this profile, but one by one I successively gave up on those that made me dependent on the big G.
It’s on the second profile, which let’s assume I called Tommy, that I keep my entire digital life. What does this give me? For instance, the main profile cannot be easily deleted, but the additional one can. Let’s imagine a situation where I need to quickly wipe my phone, but in a way that its basic functions still work, i.e., without a full factory reset. An example could be, say, arriving in the USA and undergoing immigration control. They want access to my phone, so I delete the Tommy user, switch to the Owner user, and hand them the phone. It makes calls, sends SMS messages, even has a banking app, so theoretically it shouldn’t arouse suspicion. However, it lacks all my contacts, a browser with my visited pages history, a password manager, and messengers with chat histories. This is rather a drastic scenario, but not really that improbable, as actions like searching a phone upon arrival in the States are something that happens on a daily basis. Besides, the basic rule of security is not to use an account with administrator privileges on a daily basis.
On GrapheneOS, Obtainium is my primary aggregator for obtaining .apk installation files and automating app updates. It’s like the Google Play Store, but privacy-respecting and for open-source applications. It would be a sin to use GrapheneOS and not at least try to switch to open-source apps. Below I present a list of apps that I use. Additionally, I’m tossing in links to the source code repositories of each of them.
To understand how Obtainium works and how to use it, I recommend checking out this video guide.
I have a few apps that are not open-source but that I still need. In that case, I don't download them from the Google Play Store, but from the Aurora Store, which I mentioned above.
Aurora Store is an open-source client of the Google Play store (I guess you could call it a frontend) that allows downloading applications from Google servers without needing Google services (GMS) on the phone.
* Privacy - you don’t need to log in with a Google account to download free apps (you can use built-in anonymous accounts).
The thing with these anonymous accounts is that sometimes they work and sometimes they don't. The limits involved are unreachable for a normal account used by one person, but when a thousand people download apps through one shared account at once, it starts to look suspicious and the limits get exceeded quite quickly. Using the Aurora Store violates the Google Play Store terms of service, so if we log in with our own Google account instead, it might be temporarily blocked or permanently banned. One option is to create a "burner" account just for this, but that takes away some of our privacy, because Google can still profile us based on what we downloaded. Anonymous accounts, by contrast, provide almost complete anonymity, because then we are just a drop in the ocean.
When it comes to security: yes, in theory we download .apk files from a verified source, but only on the condition that the Aurora Store creators don't pull a man-in-the-middle attack on us. Whether you trust the creators of this app is a decision that's up to you.
Below I present a list of applications that I downloaded from the Aurora Store, checked, and can confirm that they work without GMS (Google Mobile Services).
* My municipality’s app - because I need to know when they’ll collect my trash :)
* OpenVPN - I use it as a tunnel to my home network
* Perplexity - I switched to Gemini, but I confirm it works
* Synology Photos - my home photo gallery on a NAS
* Pocket Casts - podcasts, I plan to migrate to AntennaPod
* TickTick - to-do lists, it’s hard for me to find a sensible alternative that is multiplatform and has all the features I need
Has anyone ever wondered whether all the apps on a phone need Internet access? Admittedly, in the vast majority of cases a mobile app without network access is useless, but you can't generalize like that. For example, the previously mentioned FUTO Voice Input uses a local LLM to convert speech to text, which works offline on the device. Why would such an app need Internet access, then? It wouldn't, so it shouldn't have that permission. Now take apps like FairScan (document scanning), Catima (loyalty card aggregator), Collabora Office (office suite), or Librera (ebook reader). They don't need Internet access either!
The situation looks even more bizarre when you look at which apps actually need access to all of our device’s sensors. If we think about it calmly, we’ll conclude that in this specific case it’s completely the opposite of the previous one, meaning practically no app needs this information. And I remind you that by default on Android with Google services, all apps have such permissions.
To manage a given application’s permissions, just tap and hold on its icon, select App info from the pop-up menu, and find the Permissions tab. A list categorized by things like - Allowed, Ask every time, and Not allowed will appear. I recommend reviewing this list for each app separately right after installing it. This is the foundation of GrapheneOS hardening.
A collective menu where you can view specific permissions and which apps have them granted is available in Settings -> Security & privacy -> Privacy -> Permission manager. Another interesting place is the Privacy dashboard available in the same location. It’s a tool that shows not only app permissions, but also how often a given app reaches for the permissions granted to it.
In GrapheneOS we don’t only have user profiles, but each user can also have something called a Private space. I encountered something similar on Samsung, where it was called Secure Folder, so I assume this might just be an Android feature implemented differently by each manufacturer.
Private space is turned on in Settings -> Security & privacy -> Private space. It acts like a sort of separated sandbox that is part of the environment you use, but at the same time is isolated from it. For me, it’s a place that gives me quick access to apps that nevertheless require Google services. You might ask - why then do I keep the mBank and T-Mobile apps on the Owner user if I could keep them here? Well, for reasons unknown to me, I’m unable to configure my private space so that paying with contactless BLIK via NFC works correctly in it. The same goes for Magenta Moments from T-Mobile, which don’t work correctly despite GMS being installed in the private space.
* Google Drive - I use it as a cloud to share files with clients
* mObywatel - at first I kept this app in the main profile, downloaded from the Aurora Store, and everything more or less worked, but every now and then the app would freeze completely and stop responding. I think it may be related to the fact that it sends some Google-services-related requests in the background and doesn't respond until such a request times out; I have this on my list to investigate
* Play Store - I have to download all these apps from somewhere, doing it via Aurora Store in the private space wouldn’t make sense since I have the whole Google services package installed here anyway
* XTB - another investing app… works without GMS, but like I said, I keep all financial ones in one place
Oof… I did it again, sorry. I'm counting the characters and it comes out to just under 35,000… and I'll probably break that barrier with these next few sentences. So yes, it's long again, but it's all meat again, so I don't think anyone has reason to complain. As I mentioned earlier, I've only touched on the subject of GrapheneOS; it's an extensive one, and that's a good thing, because this is a great system, and the people behind the project deserve the utmost respect. It's thanks to them that we even have the option of at least partially freeing ourselves from Google (Android) and Apple (iOS). With that, I warmly invite you to the final chapter of this post.
Finally, I would like to encourage you to support the GrapheneOS project. The developers behind it are doing a really great job and in my opinion deserve to have some money thrown at them. Information on where and how this can be done can be found here.
...
Read the original on blog.tomaszdunia.pl »
Flaming Alamos were not visible on the outside of any of the homes, because the properties were clad in other materials. But the team asked Harp to assess - by looking at their style and exterior - if these properties were likely to have been built during a period when Flaming Alamos had been on sale.
...
Read the original on www.bbc.com »
Claude Sonnet 4.6 is our most capable Sonnet model yet. It's a full upgrade of the model's skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta. For those on our Free and Pro plans, Claude Sonnet 4.6 is now the default model in claude.ai and Claude Cowork. Pricing remains the same as Sonnet 4.5, starting at $3/$15 per million tokens.

Sonnet 4.6 brings much-improved coding skills to more of our users. Improvements in consistency, instruction following, and more have made developers with early access prefer Sonnet 4.6 to its predecessor by a wide margin. They often even prefer it to our smartest model from November 2025, Claude Opus 4.5.

Performance that would have previously required reaching for an Opus-class model—including on real-world, economically valuable office tasks—is now available with Sonnet 4.6. The model also shows a major improvement in computer use skills compared to prior Sonnet models.

As with every new Claude model, we've run extensive safety evaluations of Sonnet 4.6, which overall showed it to be as safe as, or safer than, our other recent Claude models. Our safety researchers concluded that Sonnet 4.6 has "a broadly warm, honest, prosocial, and at times funny character, very strong safety behaviors, and no signs of major concerns around high-stakes forms of misalignment."

Almost every organization has software it can't easily automate: specialized systems and tools built before modern interfaces like APIs existed. To have AI use such software, users would previously have had to build bespoke connectors. But a model that can use a computer the way a person does changes that equation.

In October 2024, we were the first to introduce a general-purpose computer-using model. At the time, we wrote that it was "still experimental—at times cumbersome and error-prone," but we expected rapid improvement. OSWorld, the standard benchmark for AI computer use, shows how far our models have come. It presents hundreds of tasks across real software (Chrome, LibreOffice, VS Code, and more) running on a simulated computer. There are no special APIs or purpose-built connectors; the model sees the computer and interacts with it in much the same way a person would: clicking a (virtual) mouse and typing on a (virtual) keyboard.

Across sixteen months, our Sonnet models have made steady gains on OSWorld. The improvements can also be seen beyond benchmarks: early Sonnet 4.6 users are seeing human-level capability in tasks like navigating a complex spreadsheet or filling out a multi-step web form, before pulling it all together across multiple browser tabs.

The model certainly still lags behind the most skilled humans at using computers. But the rate of progress is remarkable nonetheless. It means that computer use is much more useful for a range of work tasks—and that substantially more capable models are within reach.

(Chart caption: Scores prior to Claude Sonnet 4.5 were measured on the original OSWorld; scores from Sonnet 4.5 onward use OSWorld-Verified. OSWorld-Verified, released July 2025, is an in-place upgrade of the original OSWorld benchmark, with updates to task quality, evaluation grading, and infrastructure.)

At the same time, computer use poses risks: malicious actors can attempt to hijack the model by hiding instructions on websites in what's known as a prompt injection attack.
We've been working to improve our models' resistance to prompt injections—our safety evaluations show that Sonnet 4.6 is a major improvement compared to its predecessor, Sonnet 4.5, and performs similarly to Opus 4.6. You can find out more about how to mitigate prompt injections and other safety concerns in our API docs.

Beyond computer use, Claude Sonnet 4.6 has improved on benchmarks across the board. It approaches Opus-level intelligence at a price point that makes it more practical for far more tasks. You can find a full discussion of Sonnet 4.6's capabilities and its safety-related behaviors in our system card; a summary and comparison to other recent models is below.

In Claude Code, our early testing found that users preferred Sonnet 4.6 over Sonnet 4.5 roughly 70% of the time. Users reported that it more effectively read the context before modifying code and consolidated shared logic rather than duplicating it. This made it less frustrating to use over long sessions than earlier models.

Users even preferred Sonnet 4.6 to Opus 4.5, our frontier model from November, 59% of the time. They rated Sonnet 4.6 as significantly less prone to overengineering and "laziness," and meaningfully better at instruction following. They reported fewer false claims of success, fewer hallucinations, and more consistent follow-through on multi-step tasks.

Sonnet 4.6's 1M token context window is enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request. More importantly, Sonnet 4.6 reasons effectively across all that context. This can make it much better at long-horizon planning. We saw this particularly clearly in the Vending-Bench Arena evaluation, which tests how well a model can run a (simulated) business over time—and which includes an element of competition, with different AI models facing off against each other to make the biggest profits.

Sonnet 4.6 developed an interesting new strategy: it invested heavily in capacity for the first ten simulated months, spending significantly more than its competitors, and then pivoted sharply to focus on profitability in the final stretch. The timing of this pivot helped it finish well ahead of the competition.

(Chart caption: Sonnet 4.6 outperforms Sonnet 4.5 on Vending-Bench Arena by investing in capacity early, then pivoting to profitability in the final stretch.)

Early customers also reported broad improvements, with frontend code and financial analysis standing out. Customers independently described visual outputs from Sonnet 4.6 as notably more polished, with better layouts, animations, and design sensibility than those from previous models. Customers also needed fewer rounds of iteration to reach production-quality results.

Claude Sonnet 4.6 matches Opus 4.6 performance on OfficeQA, which measures how well a model can read enterprise documents (charts, PDFs, tables), pull the right facts, and reason from those facts. It's a meaningful upgrade for document comprehension workloads.

Feedback from early customers and partners:

* The performance-to-cost ratio of Claude Sonnet 4.6 is extraordinary—it's hard to overstate how fast Claude models have been evolving in recent months.
* Sonnet 4.6 outperforms on our orchestration evals, handles our most complex agentic workloads, and keeps improving the higher you push the effort settings.
* Claude Sonnet 4.6 is a notable improvement over Sonnet 4.5 across the board, including long-horizon tasks and more difficult problems.
* Out of the gate, Claude Sonnet 4.6 is already excelling at complex code fixes, especially when searching across large codebases is essential. For teams running agentic coding at scale, we're seeing strong resolution rates and the kind of consistency developers need.
* Claude Sonnet 4.6 has meaningfully closed the gap with Opus on bug detection, letting us run more reviewers in parallel, catch a wider variety of bugs, and do it all without increasing cost.
* For the first time, Sonnet brings frontier-level reasoning in a smaller and more cost-effective form factor. It provides a viable alternative if you are a heavy Opus user.
* Claude Sonnet 4.6 meaningfully improves the answer retrieval behind our core product—we saw a significant jump in answer match rate compared to Sonnet 4.5 in our Financial Services Benchmark, with better recall on the specific workflows our customers depend on.
* Box evaluated how Claude Sonnet 4.6 performs when tested on deep reasoning and complex agentic tasks across real enterprise documents. It demonstrated significant improvements, outperforming Claude Sonnet 4.5 in heavy reasoning Q&A by 15 percentage points.
* Claude Sonnet 4.6 hit 94% on our insurance benchmark, making it the highest-performing model we've tested for computer use. This kind of accuracy is mission-critical to workflows like submission intake and first notice of loss.
* Claude Sonnet 4.6 delivers frontier-level results on complex app builds and bug-fixing. It's becoming our go-to for the kind of deep codebase work that used to require more expensive models.
* Claude Sonnet 4.6 produced the best iOS code we've tested for Rakuten AI. Better spec compliance, better architecture, and it reached for modern tooling we didn't ask for, all in one shot. The results genuinely surprised us.
* Sonnet 4.6 is a significant leap forward on reasoning through difficult tasks. We find it especially strong on branched and multi-step tasks like contract routing, conditional template selection, and CRM coordination—exactly where our customers need strong model sense and reliability.
* We've been impressed by how accurately Claude Sonnet 4.6 handles complex computer use. It's a clear improvement over anything else we've tested in our evals.
* Claude Sonnet 4.6 has perfect design taste when building frontend pages and data reports, and it requires far less hand-holding to get there than anything we've tested before.
* Claude Sonnet 4.6 was exceptionally responsive to direction — delivering precise figures and structured comparisons when asked, while also generating genuinely useful ideas on trial strategy and exhibit preparation.

On the Claude Developer Platform, Sonnet 4.6 supports both adaptive thinking and extended thinking, as well as context compaction in beta, which automatically summarizes older context as conversations approach limits, increasing effective context length.

On our API, Claude's web search and fetch tools now automatically write and execute code to filter and process search results, keeping only relevant content in context—improving both response quality and token efficiency. Additionally, code execution, memory, programmatic tool calling, tool search, and tool use examples are now generally available.

Sonnet 4.6 offers strong performance at any thinking effort, even with extended thinking off. As part of your migration from Sonnet 4.5, we recommend exploring across the spectrum to find the ideal balance of speed and reliable performance, depending on what you're building.

We find that Opus 4.6 remains the strongest option for tasks that demand the deepest reasoning, such as codebase refactoring, coordinating multiple agents in a workflow, and problems where getting it just right is paramount.

For Claude in Excel users, our add-in now supports MCP connectors, letting Claude work with the other tools you use day-to-day, like S&P Global, LSEG, Daloopa, PitchBook, Moody's, and FactSet. You can ask Claude to pull in context from outside your spreadsheet without ever leaving Excel. If you've already set up MCP connectors in Claude.ai, those same connections will work in Excel automatically. This is available on Pro, Max, Team, and Enterprise plans.

How to use Claude Sonnet 4.6

Claude Sonnet 4.6 is available now on all Claude plans, Claude Cowork, Claude Code, our API, and all major cloud platforms. We've also upgraded our free tier to Sonnet 4.6 by default—it now includes file creation, connectors, skills, and compaction. If you're a developer, you can get started quickly by using claude-sonnet-4-6 via the Claude API.
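For illustration, a minimal sketch of such a call with the official Python SDK; the prompt is just a placeholder, and ANTHROPIC_API_KEY is assumed to be set in the environment:

```python
# Minimal sketch: one Messages API call to Sonnet 4.6 using the official
# Python SDK (pip install anthropic). The prompt is a placeholder.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-sonnet-4-6",  # model ID given in the announcement
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this codebase's build system."}],
)
print(message.content[0].text)
```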
...
Read the original on www.anthropic.com »
During the rapid technological advancements of the early 1990s, the video game industry was on the cusp of a massive addition - another dimension. With console shenanigans like the Super FX chip giving players a taste of 3D, hype was at an all-time high. But the games released for home consoles were nothing compared to what arcade developers were capable of doing. By employing gigantic budgets and cutting-edge hardware, the arcade gave players a chance to see the future, today.
But the future eventually arrived with the launch of the 5th generation of consoles. All of a sudden, the revolutionary 3D hardware features that were once exclusive to arcades were now available in home consoles. Without next-generation hype pushing players into the arcade, powerful but expensive arcade machines were no longer sustainable to develop. The industry adjusted by moving toward more cost effective solutions, with many turning to the inexpensive, already proven 3D-capable hardware available in 5th gen home consoles.
Rather than turning around the decline of the arcade, the cheaper hardware may have helped accelerate it. There were fewer unique experiences to pull players into the arcade, and previous hit exclusives were now seeing high quality home console ports that allowed them to be enjoyed without munching quarters. When the 6th generation arrived with the Dreamcast and the PlayStation 2, many arcade stalwarts waved the white flag and started to shift their arcade divisions to home console projects, with mixed success.
Sega was among those hit hardest by this era. They produced some of the greatest arcade thrills of the 1990s and enjoyed massive success in the home console market with the Sega Genesis/Mega Drive. But a string of mistakes and miscalculations combined with the slumping arcade industry sent them to the brink of bankruptcy. By 2002, the Dreamcast had been soundly defeated by the launch of the PlayStation 2, and Sega began porting some of their hits to their former rivals’ hardware just to stay afloat.
The home market was lost, but the languishing arcade scene presented Sega with an opportunity. They still had legendary arcade development teams, and if Sega could leverage them to produce a wave of arcade hits, they would be in a position to dominate a new era of arcades when most others were changing gears. There was just one problem: Sega didn’t have the resources that they once did. If they were going to do this, they needed some help.
And so they did something that would have been considered unthinkable just five years prior. Sega teamed up with Nintendo to develop a GameCube-based arcade platform. Bolstering their ranks was Namco, another coin-op stalwart with tons of arcade veterans.
While the Triforce was a collaborative project, it still feels like a very Sega-coded arcade system. It can even use certain NAOMI-style components! Along with the Xbox-based Chihiro, the Triforce is sometimes considered a successor to the NAOMI 2.
Inside of this metal shell is… a GameCube! Quite literally, actually.
The Triforce hardware is built around a stock GameCube motherboard, with two Triforce-specific boards attached to it: the AM-Baseboard and AM-Mediaboard. The AM (Amusement Machine) boards are the secret sauce of the Triforce and transform the stock GameCube into something capable of producing arcade experiences.
The early boot process is the same as a retail console, but a modified GameCube IPL (sometimes referred to as the GameCube BIOS) is used to initialize the Triforce hardware and load the Triforce equivalent of a “home menu”, Segaboot.
Segaboot is a special disc image on the Mediaboard that can be loaded by the Triforce at will through special commands. It is responsible for loading the actual game and for providing the Service Menu, where the operator can run hardware tests and change settings on the machine.
By using Picoboot to override the boot process, it is possible to load a standard GameCube IPL or homebrew like Swiss. And since all of the pins are still on the mainboard, we can also connect a standard GameCube front panel and even load full GameCube games from microSD over Serial Port 2!
The Baseboard is primarily responsible for input and output. It handles translation between JVS I/O devices (more on those later) and the GameCube’s SI bus. It also takes the GameCube’s digital video output and feeds it to two VGA ports on the back of the main unit.
The Mediaboard’s most important responsibility is storing and serving the game software to the GameCube. It is also used to handle other tasks, such as networking, through special commands.
The Triforce Baseboard was mostly unchanged throughout the Triforce’s lifespan, but the Mediaboard could vary depending on the developer, game, and when the game was released. In fact, games weren’t guaranteed to even come out on the same storage medium!
A spinning disc and active laser were not normally considered reliable enough for an arcade environment. These machines will be on all day, every day, for years, and players were often rough on machines they didn't own. So the Triforce eschews the GameCube's standard mini-DVD format in favor of its own storage solutions.
Most games were designed for the DIMM (Dual In-line Memory Module) variant of the Triforce, where game data is shipped on GD-ROM and loaded into RAM on the first boot. GD (Gigabyte Disc) was a format initially devised by Sega and Yamaha for use in the Dreamcast. By increasing the data density of ordinary compact disc technology, the 12cm GD-ROM had somewhat comparable capacity to the GameCube’s DVD-based 8cm disc (1GiB versus 1.46GiB).
Were GD-ROM drives more reliable than early DVD drives? Maybe! By this point, GD-ROM was an established technology that Sega was already using in arcades for years. Perhaps even more importantly, it was cheaper. Sega designed it so they could even reuse GD-ROM drives designed for their other arcade platforms, since they used a generic SCSI-style connector.
DIMM variant Triforces came with stickers advertising the amount of DIMM RAM on the Mediaboard. These stickers caused some confusion in the enthusiast community, as people would often mistake the amount listed as the total RAM accessible to the game. In reality, the DIMM RAM was mostly intended for use as a read-only RAM drive, rather than for general purpose use. As previously mentioned, the Triforce hardware is based around a stock GameCube motherboard, so games can only access the same 24+16 MiB of RAM that a retail GameCube uses.
Once the game was loaded into memory, it was intended to stay there. And thanks to a battery backup that maintained the data even in the event of a power failure, the GD-ROM may only be needed once in the entire lifetime of the machine. This was their secret toward making the Triforce GD-ROM drive reliable for the arcade. One of the main exceptions would be if a new disc were inserted. Many Triforce games saw updates, which could be shipped on new GD-ROMs.
Namco’s Triforce games ditched the GD-ROM and DIMM RAM and instead used 512MB NAND cartridges to store game data. The NAND retains its contents even if the system loses power and the backup battery runs dry, which eliminates the need for GD-ROMs. These games also saw updates through SD card or over the internet, with updates able to directly modify the NAND contents.
Both methods of storing Triforce game data have the same goal in the end: deliver a disc image to the internal GameCube. In addition to the GD-ROM or NAND cartridge, each game also has a corresponding security key that must be inserted into the Triforce unit in order for the game to run.
There are two variants of Triforce I/O: Type 1 and Type 3. These refer to the Sega JVS Type 1 and Sega JVS Type 3. JVS stands for JAMMA Video Standard, a common standard created by a group of Japanese game companies for connecting various accessories and controllers to arcade systems. It’s easiest to think of JVS I/O as the arcade equivalent to USB. Other Sega JVS I/O compatible devices can work with the Triforce even if they were originally designed for other arcade platforms, but it’s up to the game developers to actually add support for a particular piece of hardware. Type 3 Triforces also have the capability to support more complicated analog input devices.
Whether it was Type 1 or Type 3, Sega had a trick that was instrumental to their efforts to revive the arcade scene and almost every Triforce game would use it. It was a revolutionary idea that had taken hold in the home console market but was still rare to see in arcades: saving and continuing.
By using cheap cards that could hold a small amount of data, players could buy what amounted to a small memory card directly from the arcade machine using a built-in vendor. These cards could be bought for as cheap as a single credit in some cases, and had enough storage to save progress, preferences, and other unlocks. Because the data wasn’t locked to the machine, these cards allowed the player to continue their progress from any arcade that had the game and a working card slot.
The end goal of this was to get players more invested in arcade experiences by having them progress and unlock content. Some Triforce games are full of so many unlockables that it’d be impossible to see everything in a single session at the arcade.
Triforce games can support two types of cards for saving: Magnetic Cards (magcards) and Integrated Circuit (IC) cards. Magcards are cheaper, fragile, and can only survive so many writes before failing. They have the added bonus of having a printable side, where the game can print a player’s achievements and more. IC cards are more like old credit cards with a thicker plastic. They weren’t printable, but were much sturdier.
A limit of 50 writes was imposed on magcards, likely to recoup printing costs and because the cards would eventually wear out. This meant that after 50 writes, the player would have to spend more money on a new card in order to continue saving their data. If an arcade was feeling generous, the operator could choose to make buying and/or refreshing cards free.
Regardless of the card type, if the card were somehow destroyed outside of the machine for any reason, the save data would be lost and the player would have to start over with a fresh card.
Outside of the various cards and their readers, there were plenty of other fairly generic JVS I/O devices, such as coin mechanisms, arcade sticks, buttons, steering wheels, and pedals. Because there were so few Triforce games released, we’ll take a look at unique JVS I/O devices on a game-by-game basis when we start spotlighting the games.
Hypothetically, let’s say you have a vested interest in GameCube hardware and decided to purchase a Triforce arcade unit with a game to see how it works first-hand and write an article about it. Without a cabinet and all of the additional hardware that is required to run a game, the core Triforce is just a fancy paperweight, right? Actually, no!
Using a Raspberry Pi, we can convert USB controllers into JVS devices that the games will recognize, thanks to JVS I/O emulation! JVS I/O uses a USB-A style connector but arranges the pins differently. Compared to USB, JVS I/O's differential serial signal is closer to the RS485 standard (aka the last serial port standard). It's not exactly the same, but by using an RS485 adapter connected over USB, with D- and D+ hooked up as the differential pair and VBUS hooked to the sense line, USB devices can communicate with JVS I/O. Combine that with OpenJVS, and you can have a computer interface with a Triforce to emulate JVS I/O devices.
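To make the wire format a bit more concrete, here's a minimal sketch of JVS frame encoding in Python. It's based on publicly documented protocol details, not the OpenJVS source, and the serial device path is an assumption for a typical USB RS485 adapter:

```python
# A rough sketch of JVS frame encoding, based on publicly documented
# protocol details (NOT OpenJVS source code). Frames start with a sync
# byte, and a checksum covers everything after it; sync/escape bytes
# occurring in the body are escaped as 0xD0 followed by (byte - 1).
import serial  # pyserial

SYNC, ESCAPE = 0xE0, 0xD0

def jvs_frame(dest: int, payload: bytes) -> bytes:
    body = bytes([dest, len(payload) + 1]) + payload   # length = payload + checksum byte
    body += bytes([sum(body) & 0xFF])                  # simple additive checksum
    out = bytearray([SYNC])
    for b in body:
        if b in (SYNC, ESCAPE):
            out += bytes([ESCAPE, b - 1])
        else:
            out.append(b)
    return bytes(out)

port = serial.Serial("/dev/ttyUSB0", 115200)           # JVS runs at 115200 8N1
port.write(jvs_frame(0xFF, bytes([0xF0, 0xD9])))       # broadcast bus reset command
```

The broadcast frame at the end is the standard bus reset a JVS master sends before enumerating the devices on the chain.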
In our hypothetical, we suggested that we only purchased one Triforce. In reality, we ended up with four over the past few years: A Type 1 DIMM, a Type 3 DIMM, and two Type 3 NANDs. We also bought a few JVS I/O devices that popped up, including a Virtua Striker 4 Card Reader and a Chihiro/Triforce/NAOMI 2 compatible magcard reader/printer/distributor. However, our real JVS I/O devices ended up being pretty useless due to the fact we were still missing too much hardware to hook them up. JVS I/O emulation was mandatory, and was used to fake enough of the devices to get the games into a working state. To replace the Triforce’s JVS power supply, we used an ATX power supply with the 20+4 pin power connector carefully modified to match its pinout. Do not attempt this at home!
OpenJVS does a good enough job of faking devices that most Triforce games can be made to run under it. More importantly, it also let us map the various input devices the games expect onto a DS4 controller. As a bonus, we used some of the extra buttons on the controller for actions like inserting coins, to make general play easier.
All of this tinkering was just enough to let us control and play real Triforce games on real hardware.
Now that we could play Triforce games, we had to give it a spin.
Given that Nintendo hardware powers the Triforce, one might expect it to have some Nintendo-developed games. But there aren’t any. Despite Nintendo’s pedigree for creating appealing and accessible games, they had no interest in making arcade games for the Triforce. Hits like Donkey Kong and Mario Bros were eons ago and the market had drastically changed since then. Instead, Nintendo opted to license out their IPs to the more experienced arcade developers at Sega and Namco.
This partnership resulted in a golden opportunity for the two companies. Their experienced arcade developers had access to some extremely popular IPs, and the GameCube base meant they had a powerful core machine that was also affordable. In the end, though, the Triforce only had nine games released for it and several of those saw home ports.
With so few titles released for the system, it affords us the rare opportunity to go through each and every one. The games range from fairly typical arcade titles to high-budget monstrosities that would be the crown jewel of any arcade. We'll be looking at obscure games, legendary games, and everything in between, while doing our best to see how they took advantage of the Triforce hardware. Let's begin.
Did you know that the Triforce has not one, but two Mario Kart games? Mario Kart Arcade GP (2005) and Mario Kart Arcade GP 2 (2007) are often forgotten when people talk about the phenomenal Mario Kart series due to their limited release, especially internationally. Both games are built off the Mario Kart: Double Dash!! engine, but have more of a focus on arcade simplicity and play closer to the style of the original Super Mario Kart.
Those that have played a Mario Kart game know what to expect at the surface level. This is an arcade kart racer with tons of wacky items, popular characters, and colorful tracks to race on. This time around, some popular characters from Namco properties join Mario Kart veterans, such as Pac-Man!
The first game launched with twelve race tracks spread across six cups. Each cup has four stages that use two of the tracks. The second time you race a track in the cup, it will be remixed slightly. Sometimes this just means some different visuals or items, but other times it might have some slight alterations to make driving the track more difficult.
Mario Kart Arcade GP 2 has all of the tracks from the first game and four brand new ones spread out between two new cups. If this was a home console game, the amount of reused content would have been very disappointing. In the arcade setting, it’s not nearly as big of a deal. Most players wouldn’t have had a lot of experience on every course, and many might not have played the first game at all! That being said, Mario Kart Arcade GP 2 still feels more like an improved version 2.0 rather than a full fledged sequel.
On that note, Mario Kart Arcade GP has some very puzzling omissions that were fixed by the sequel. For instance, only Mario Kart Arcade GP 2 has the iconic 50cc, 100cc, and 150cc difficulty options available from the start! Both games have the same three gameplay modes: Grand Prix, Time Trial, and Versus. Grand Prix has players racing through cups one round at a time; winning a race in a cup unlocks the next race. Time Trial should be familiar to anyone: players are given a triple mushroom and a solo run on a course to set the best time possible. Versus mode can only happen in multi-cabinet setups when multiple players are around. In this mode, up to four players can race one another on any track.
Regardless of mode, races have a time limit to keep people moving, but they are relaxed enough that they usually won’t come into play.
In order to record progress, Mario Kart Arcade GP and Mario Kart Arcade GP 2 rely on magcards. When the game starts, it’ll ask the player to insert or create a license profile to save their progress. On some cabinets, a camera (known as the “namcam2”) will be present to take a picture that will be used during the race. Players’ faces will show up in the heads-up display and with various distraction items, so making a goofy face could be an advantage in multiplayer. Note that these features are optional, and a player can always choose to play without taking a picture or using a magcard.
There is one rather egregious oversight that is only present in the first game. Mario Kart Arcade GP locks a player to a single character once they’ve created a license card. That means before the player even gets a chance to play the game, they have to choose a character and are forced to use that character unless they start over! Characters have different driving characteristics, so this is a rather important decision!
Regardless, the developers must have realized how awkward this was and changed it so that swapping characters is possible in Mario Kart Arcade GP 2 even when using a magcard.
Whether the driving model for a Mario Kart game is good or not mostly comes down to player preference. Some players love Mario Kart DS, others swear by Mario Kart Wii, or even Double Dash. The GP games are definitely on the slippery side of the series, especially when using the “difficult” characters at higher CCs.
Controls are simple even compared to the already casual home Mario Kart games. The game uses a racing wheel, gas and brake pedals, and an item button. Additionally, there's a Versus Cancel button to opt out of multiplayer and focus on winning the cups. Despite this, it takes some time to get acclimated to the arcade exclusives after coming from modern Mario Kart games. The harder courses pull no punches and will relentlessly throw tight corners at you. The Grand Prix mode even adds hindrances to certain tracks on their reruns. On Bowser's Castle, Kamek invades and blocks some of the racing lines on the later laps!
To win on harder difficulties, the player needs every advantage they can get. Items can be the advantage that players need. Both games feature over 100 items, but during a race, each player has access to a pool of three items. In harder Grand Prix cups (and sometimes later stages in earlier ones), players get the option to create their own unique item pool from their unlocked items. Even though a lot of them share properties, a surprising number of them have their own wrinkles. For example, dropping a banana can cause a spinout and immediate time loss, but dropping tacks will cause a kart to pop a tire and lean to one side, making overall driving temporarily more difficult for that player. Items aren’t very balanced so unlocking powerful items gives an undeniable edge.
Thrown items are simple. Aside from the green shell, almost all forward-throwing items feature a powerful lock-on effect. Lock-on is automatic and happens after keeping another driver in front of you for a couple of seconds. Once locked on, that item will head toward its target regardless of what they do to avoid it.
In the first game, players must win all four stages of a cup and the minigame that follows. These minigames are short solo challenges that test a player’s control over the game in unusual situations. Sometimes this means pushing an object, getting big air over jumps, driving backwards, hitting tons of pedestrians (they’re Koopa Troopas, that makes it OK), or even facing off with Bowser outside of his castle. In the sequel, the bonus games are no longer required for cup completion and only award bonus coins.
By winning all of the cups, players unlock a Special mode that varies per game. In GP 1, that is 150cc mode. Mario Kart Arcade GP 2 has 150cc mode unlocked by default, so it goes in a different direction: players instead unlock new track layouts in reverse mode. Unlike the mirror mode present in other games, reverse mode significantly changes some of the tracks beyond just ruining muscle memory. Fun fact: reverse mode was also planned for Mario Kart: Double Dash!! before being cut in favor of mirror mode.
To handle the plethora of tricky corners and tracks, GP games have a drifting mechanic. By tapping the brake, players can initiate a hop. By turning in the air before landing, players can initiate a drift that allows sharp corners to be taken at higher speeds. Because of the powerful lock-on that most items have, drifting has been given an additional benefit. During a drift, players will reflect most items with a shield. An unexpected drift will cost some time, but could be used to block an item at the last moment. Some items also provide a shield, such as the Invincibility Star and Shield items.
Much like the original Super Mario Kart and more recent Mario Kart entries, the GP games have coins strewn across the track. Collecting coins increases a kart's top speed, adding a layer of strategy, as just driving the optimal lines isn't enough. During a race, holding 15 coins pushes a kart to its maximum speed. But driving at that speed can also be dangerous, as hitting walls, bouncing off other karts, or being hit by items can cause the player to lose coins.
Mario Kart Arcade GP 2 changes all of the coins on the track to Mario Coins. The first time these coins are collected, they count as currency toward unlocks. If a player drops coins after being hit, the Mario Coin currency isn’t lost; the dropped coins respawn as regular golden coins. Up to 25 Mario Coins can be picked up on the track, along with bonuses from race ranking and minigames.
Collecting Mario Coins allows for unlocking certain karts, items, portraits, and kart upgrades that will make the veteran players much faster than players just starting out.
Lastly, Mario Kart Arcade GP 2 also adds one more major feature: a “live” announcer that gives updates throughout the race. This feature is proudly demonstrated during the attract menu, even! As corny as it sounds, it’s rather entertaining to leave on at least a few times. Players that don’t want the announcer can turn it off and their preference will be saved to their magcard.
Overall, both of these games are best as multiplayer arcade spectacles. The depth and content of these games don’t quite rival contemporary home releases like Mario Kart Wii. But none of that matters in the arcade with friends, where loud and bombastic multiplayer experiences really shine.
The Mario Kart Arcade GP series would continue with Mario Kart Arcade GP DX in 2013 and Mario Kart Arcade GP VR in 2017, but those would run on newer and more standard PC-based arcade hardware.
These Mario Kart titles were the only two games Namco released on the Triforce hardware. But they had planned at least one other game.
Announced in 2002 as a dual GameCube and arcade release, Star Fox was originally planned to launch before either of the Mario Kart Arcade GP games in 2003. As part of a push for games that could easily be ported between GameCube and arcade, Star Fox would have had connectivity between the two versions through GameCube memory card slots included on the machines. That way, players could bring their own memory cards to transfer progress and/or unlockables between the home version and the arcade version of the game.
Considering the arcade style action featured in Star Fox and Star Fox 64, this seemed like a natural choice for an arcade hit. Players were already chasing high scores in Star Fox 64 and the overall game design would need little modification to work in an arcade. If rumors were true, Namco wasn’t planning to skimp on the hardware, either. They were going to use the massively impressive and incredibly expensive O.R.B.S. cabinet, which was designed specifically for on-rails shooters. Essentially, the player would be locked in a fully immersive orb that would place them squarely in the cockpit of an Arwing with a semi-spherical screen acting as a bubble canopy. On top of that, the cabinet could rotate and slide to reflect what was going on in-game.
Unfortunately, Star Fox Arcade was quietly cancelled and the O.R.B.S. cabinet itself would never actually be used for any arcade game. The GameCube version did eventually see the light of day, however. Released as Star Fox Assault in 2005, the game was heavily reworked and padded out with third-person on-foot sections. Perhaps as a nod to its origins, players can unlock a port of the arcade classic Xevious by collecting all silver medals.
With that side quest complete, we’ve now covered the entirety of Namco’s contributions to the Triforce library. Thankfully, we’re not done yet, as Sega developed a variety of Triforce arcade games.
Gekitou Pro Yakyuu is a rather unique baseball game that combines characters from various baseball manga created by Shinji Mizushima with real-life Japanese professional baseball players of the era. The game also has a faithful home console port of the same name for the GameCube and PlayStation 2.
The main draw of this baseball game is that it can provide a faithful simulation style game between professional players or a zany arcade experience with special pitches, strong batters, and manga cutaways featuring the illustrated characters. What makes the game so interesting is that these two things aren’t separated - both teams can be filled with a mix of illustrated and professional players, letting their contrasting styles clash right on the field.
At its core, Gekitou Pro Yakyuu is a fairly standard early-2000s baseball game. Pitchers can roughly place their pitches anywhere in and around the strike zone, and batters in turn try to guess where the pitch will be to get a solid hit. Pitchers have a variety of pitches at their disposal that add movement to the ball, and batters have multiple swing types that can counter them. Players with better stats generally have more options at their disposal. If the batter guesses the pitch right, their aiming reticle turns red, giving them advance warning that they guessed correctly.
When playing in the arcade, both teams are filled with a mix of real players and manga players. This creates the interesting scenario where many manga players often feel like superstars that can break the game if not carefully played around. Most of them have special quirks and often have access to special abilities. Manga pitchers can make the ball disappear, zig zag, and confound the batter. Manga batters can also counter this as they have active and passive abilities of their own. One player has his contact range and power grow further out from the center of the strike zone, making him incredibly powerful if the pitcher is painting the corners.
For those interested in playing this game without a Triforce, there’s good news. The home console port is incredibly faithful and even adds some additional modes and features for depth. The GameCube controller also affords players analog control, whereas the arcade original uses an eight-way gate. Once you get in game, though, it’s very apparent that this is the same game.
The home port, as far as we could tell, is missing one small thing. The Triforce version has a scoring system for putting up high scores on the machine. Rather than just trying to win baseball games, players are instead challenged to get a high score across a nine-inning game. Doing positive things like getting hits and getting the opponent out will give the player points. Big moments like double plays and grand slams will give even bigger bonuses, pushing players to the top of the leaderboards.
Players get two innings per credit or can pay 4 credits for a full nine-inning game. Players aiming for a high score need to pay for the full game, as those extra innings give more opportunities for scoring points, and there’s a large swath of bonus points for winning the baseball game outright. After nine innings, win or lose, the game ends. The game also lists high scores for a home run contest, but we couldn’t figure out how to get to that mode.
This game suggests that it has some kind of save card support in the Service Menu, but we weren’t able to find any cards for it to be sure. In all likelihood, the cards would have been used to save team data and other preferences for a player. Overall, Gekitou Pro Yakyuu is an effective, if somewhat simple, baseball game that lends itself well to the pick-up-and-play nature of the arcade.
While it was developed by a different team within Sega, Virtua Striker 3 ver. 2002 is very similar to Gekitou Pro Yakyuu in some ways. It is a simple-to-pick-up, easy-to-play sports game with an incredibly faithful home port that brings the same experience to console players, with modes that add extra depth. Virtua Striker 3 ver. 2002 is a three-button game: short pass (tackle on defense), long pass, and shoot. That’s it.
The gooooooal of the game is to win five matches in a row against the AI to secure the championship, while fending off potential intruders jumping in on the second-player side in standard mode. This is a king-of-the-hill style arcade game, so whoever wins gets to keep playing while the loser is knocked out. This remains true when playing against the AI, so a strong player can play up to five games against the AI before reaching the credits and having to put in more money.
The game follows the rules of soccer closely. There are yellow cards, red cards, offsides, corner kicks, penalty kicks, free kicks and injury time. As an arcade game, it even captures some of the pageantry of the sport with a bombastic opening as the players march onto the field. However, once you’re in game it is a very no frills experience.
The arcade operator could adjust the cabinet’s settings to make things more or less unfair to optimize their profits. In addition to difficulty settings, Golden Goal (short overtime period) and Penalty Kicks could be disabled to give the players less opportunities to break a tie. And this matters a lot, because the AI wins in the event of a tie, forcing the player to plunk in more money to continue.
For competitive events and tournaments, there’s an aptly named tournament mode present in the settings. This mode kicks both players off the game after a match, regardless of who wins. It wasn’t (just) added to allow the operator to maximize profits; rather, it was intended for holding in-person tournaments where players would swap in and out after every match.
The simplicity of the controls is both the game’s selling point and an annoyance. When on defense in particular, the defender will sometimes rush to get into a particular position regardless of the direction being held on the arcade stick. This lack of control is only worsened by the fact that there’s no switch-player button… on the arcade version, at least. The home port is mostly faithful gameplay-wise, but it does take advantage of the extra buttons on the controller to give players the ability to change tactics and swap defenders.
One thing that we should mention is that we were playing on revision 0001 of Virtua Striker 3 ver. 2002. Most games on the Triforce have multiple revisions or updates, with some revisions coming with significant upgrades. Later revisions may have addressed the problems in this revision, especially if the supposed Type 3 version exists.
Virtua Striker 3 ver. 2002 was a tad underwhelming in our opinion. If you’re a huge fan of these games and are seething at our mini-review, we’re fully aware that a lot of our frustrations might simply boil down to a skill issue. But since we were familiar with the rich history of the veterans at Amusement Vision and their legendary track record of arcade games, this one was a little disappointing.
Originally released in 2003, The Key of Avalon: The Wizard Master is a strange and very, very expensive arcade game. It was not just expensive for the players; it was also expensive for the operator! The game is powered by five Triforce cabinets: one central Triforce server for the main game screen, and four additional satellite Triforce pedestals for the players.
The Key of Avalon: The Wizard Master is an arcade trading card board game. The objective of the game is simple - players scan in their decks and see their monsters on the big screen while battling up to three other players for supremacy.
Before playing the game, players need to purchase a starter deck of 30 random trading cards. This deck also comes with an IC card so that players can save their progress. Each satellite Triforce comes with a deck reader to allow a player to scan in their deck of cards. But how would you control the game after scanning in your cards? Why, a touchscreen, of course! And if that wasn’t enough, the game also came with a separate card kiosk specifically for purchasing starter decks and booster packs.
There are at least six revisions of The Key of Avalon. It is important to be aware of what revision a cabinet is, as cards from newer sets will not work with older revisions. Thankfully, cards are marked with what set they came from, making it fairly easy to know which revisions each card is compatible with.
* The Key of Avalon: The Wizard Master - Supports the initial 100 cards. There is a 1.10 revision with small adjustments.
* The Key of Avalon 1.20: Summon The New Monsters - The first major update adds support for 52 new cards in the N series. Earlier prints of these cards may have different stats and are missing some information on the back of the card.
* The Key of Avalon 1.30: Chaotic Sabbat - Adds support for 35 more cards in the C series. Much like Summon The New Monsters, reprints of these cards have additional information and may have slightly different stats.
* The Key of Avalon 2: Eutaxy Commandment - An update big enough to be called a sequel. It has 61 new cards, a single-player mode, and much more. The cards for this revision are in the E series. These cards do not appear to have any changes between early and late prints. This game uses the updated stats for older cards.
* The Key of Avalon 2.5: War of the Key - The final revision adds support for 40 new cards in the W series. There are also additional Legend cards separate from the main catalogue.
In the end, nearly 300 total cards were released spread out over five rarities: Common, Uncommon, Rare, Very Rare, and Super Rare. Some cards are undoubtedly stronger than others, and those cards are mostly the rarer ones.
Like other collectible card games, players were expected to buy packs and trade with others to build the best possible deck. To prevent someone from getting clever with a printer and suddenly owning all the rare cards, Avalon cards have a barcode embedded in their top edge, from which the game reads the card data. Though nearly invisible in normal circumstances, if held up to the light just right, the material of the barcode stands out against the rest of the card.
Cards weren’t all about utility, though. These cards were beautifully illustrated by a myriad of artists, and each monster is represented by a detailed 3D model in game. If someone was lucky, they might’ve stumbled upon alternate art or holofoil versions of cards. Players could also be rewarded with unique Ex cards that were only distributed through events.
Of all of the Triforce games, this was the only one we couldn’t play. Even if we had five Triforces, five GD-ROM drives, and JVS I/O emulation for the cards, it still wouldn’t be enough. The game can be booted with fewer Triforces, but the touchscreen is a total mystery and there was no way to bypass it without having a working Avalon Satellite Pedestal.
We’ve researched the game, bought manuals, and obtained a ton of cards and understand the gameflow, but without having played it we can’t really say if it’s fun or not. However, based on existing sales data and the number of updates, we know that The Key of Avalon was moderately successful despite its high price. Sega would go on to make more trading card arcade games, including a suspiciously similar Chihiro game, Quest of D.
Two years after Virtua Striker 3 ver. 2002, Sega released the next game in the Virtua Striker lineup with Virtua Striker 4. With dramatic upgrades to the controls, support for saving progress, team configuration, rank, and more, this game is often considered the best in the series by fans. And it only got better with Virtua Striker 4 ver. 2006, which updated the rosters and added additional online events.
Like most Triforce games, the newer Virtua Striker games take advantage of cards for saving. Virtua Striker 4 uses IC cards to track player progress, similar to The Key of Avalon. These cards are nifty, as not only are they more durable than magcards, but they also contain an ID for logging in to Sega ALL.Net. The internet was enough of a thing at this point that Sega started experimenting with it for tracking player data and progress.
This meant that instead of a local arcade leaderboard, Virtua Striker 4 could have a global leaderboard tracking players on ALL.Net-connected machines across the world. By playing against other players that had registered online, players could be promoted to higher ranks or be demoted to lower ranks. On the surface, this is an upgrade over traditional magcards, but the obvious downside is that these games rely on servers hosted by Sega. Unfortunately, support for these machines ended in 2017, meaning that the online features no longer work. Thankfully, these games can still be played offline without the online services, albeit without the special features and events.
...
Read the original on dolphin-emu.org »
Over the weekend Ars Technica retracted an article because the AI a writer used hallucinated quotes from an open source library maintainer.
The irony here is the maintainer in question, Scott Shambaugh, was harassed by someone’s AI agent over not merging its AI slop code.
It’s likely the bot was running through someone’s local ‘agentic AI’ instance (probably using OpenClaw). The guy who built OpenClaw was just hired by OpenAI to “work on bringing agents to everyone.” You’ll have to forgive me if I’m not enthusiastic about that.
This blog post is a lightly-edited transcript of the video I published to YouTube today. Scroll past the video embed if you’re like me, and you’d rather read the text :)
Last month, even before OpenClaw’s release, curl maintainer Daniel Stenberg dropped bug bounties because AI slop resulted in actual useful vulnerability reports going from 15% of all submissions down to 5%.
And that’s not the worst of it—the authors of these bug reports seem to have a more entitled attitude:
These “helpers” try too hard to twist whatever they find into something horribly bad and a critical vulnerability, but they rarely actively contribute to actually improve curl. They can go to extreme efforts to argue and insist on their specific current finding, but not to write a fix or work with the team on improving curl long-term etc. I don’t think we need more of that.
These agentic AI users don’t care about curl. They don’t care about Daniel or other open source maintainers. They just want to grab quick cash bounties using their private AI army.
I manage over 300 open source projects, and while many are more niche than curl or matplotlib, I’ve seen my own increase in AI slop PRs.
It’s gotten so bad, GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular. And now we’ll see that feature closed off in more and more repos.
AI slop generation is getting easier, but it’s not getting smarter. From what I’ve seen, models have hit a plateau where code generation is pretty good…
But it’s not improving like it did the past few years. The problem is the humans who review the code—who are responsible for the useful software that keeps our systems going—don’t have infinite resources (unlike AI companies).
Some people suggest AI could take over code review too, but that’s not the answer.
If you’re running a personal weather dashboard or building a toy server for your Homelab, fine. But I wouldn’t run my production apps—that actually make money or could cause harm if they break—on unreviewed AI code.
If this was a problem already, OpenClaw’s release, and this hiring by OpenAI to democratize agentic AI further, will only make it worse. Right now the AI craze feels the same as the crypto and NFT boom, with the same signs of insane behavior and reckless optimism.
The difference is there are more useful purposes for LLMs and machine learning, so scammers can point to those uses as they bring down everything good in the name of their AI god.
Since my video The RAM Shortage Comes for Us All in December, hard drives have become the next looming AI-related shortage, as Western Digital just announced they’ve already sold through their inventory for 2026.
Some believe the AI bubble isn’t a bubble, but those people are misguided, just like the AI that hallucinated the quotes in that Ars Technica article.
And they say “this time it’s different”, but it’s not. The same signs are there from other crashes. The big question I have is: how many other things will AI companies destroy before they have to pay their dues?
...
Read the original on www.jeffgeerling.com »
“Late Show” host Stephen Colbert said CBS did not air his Monday interview with Texas state Rep. James Talarico out of fear of the Federal Communications Commission.
Colbert kicked off Monday night’s show by almost immediately mentioning Talarico’s absence.
“He was supposed to be here, but we were told in no uncertain terms by our network’s lawyers, who called us directly, that we could not have him on the broadcast,” Colbert said. “Then, then I was told in some uncertain terms that not only could I not have him on, I could not mention me not having him on. And because my network clearly doesn’t want us to talk about this, let’s talk about this.”
“The Late Show” published the unaired interview with Talarico on YouTube. In the interview, Colbert and Talarico, who is running for the U.S. Senate, discuss the FCC crackdown, including the probe it opened into ABC’s “The View” after Talarico appeared on that show.
“I think that Donald Trump is worried that we’re about to flip Texas,” Talarico said, which was met with audience applause. “This is the party that ran against cancel culture, and now they’re trying to control what we watch, what we say, what we read. And this is the most dangerous kind of cancel culture, the kind that comes from the top.”
Talarico accused the Trump administration of “selling out the First Amendment to curry favor with corrupt politicians.”
“A threat to any of our First Amendment rights is a threat to all of our First Amendment rights.”
In an emailed statement, CBS said: “THE LATE SHOW was not prohibited by CBS from broadcasting the interview with Rep. James Talarico. The show was provided legal guidance that the broadcast could trigger the FCC equal-time rule for two other candidates, including Rep. Jasmine Crockett, and presented options for how the equal time for other candidates could be fulfilled. THE LATE SHOW decided to present the interview through its YouTube channel with on-air promotion on the broadcast rather than potentially providing the equal-time options.”
Talarico’s rival in the Texas Senate Democratic primary, Rep. Jasmine Crockett, appeared on Colbert’s show in May 2025.
CBS’ move to not air the segment comes as the FCC, the government’s media regulator, and most notably its chairman, Brendan Carr, have been particularly combative with networks that have drawn the ire of the president.
Trump has for months suggested the FCC could revoke the licenses of television broadcasters. More recently, Carr, who was appointed by Trump to lead the FCC, has said that daytime and late-night TV talk shows must comply with the equal time rule regarding political candidates.
The FCC’s equal time rule prohibits radio and broadcast channels from hosting political candidates during an election without giving airtime to their opponents. During his show Monday, Colbert highlighted that news interviews and talk show interviews with politicians are exceptions.
On Jan. 21, Carr released a letter warning networks about the rule, saying that he is considering eliminating exceptions due to the networks’ potential partisan motivations.
Colbert fired back at Carr on Monday, accusing the chairman of being motivated by partisan purposes.
“Let’s just call this what it is: Donald Trump’s administration wants to silence anyone who says anything bad about Trump on TV because all Trump does is watch TV,” Colbert joked.
In a statement, FCC Commissioner Anna M. Gomez called Monday’s incident “another troubling example of corporate capitulation in the face of this Administration’s broader campaign to censor and control speech.”
“The FCC has no lawful authority to pressure broadcasters for political purposes or to create a climate that chills free expression,” Gomez, the lone Democratic commissioner, said in the statement. “CBS is fully protected under the First Amendment to determine what interviews it airs, which makes its decision to yield to political pressure all the more disappointing.”
This comes months after ABC pulled “Jimmy Kimmel Live!” off the air “indefinitely” after Carr criticized comments the host made about the assassinated conservative activist Charlie Kirk.
Kimmel accused “the MAGA Gang” of trying to “score political points” by characterizing the suspect “as anything other than one of them.”
Kimmel’s show was pulled a couple of days later and returned to the air after about a week.
...
Read the original on www.nbcnews.com »
...
Read the original on arxiv.org »
A few days ago I posted to Show HN. I had good fun building that useless little internet experience. The post quickly disappeared from Show HN’s first page, amongst the rest of the vibecoded pulp. And to be clear, I’m fine with that.
The behavior on Show HN was interesting to see though. So I pulled the data.
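The post doesn’t say exactly how the data was gathered; one plausible route is the public Algolia HN Search API. A minimal Python sketch of that kind of query (my illustration, not the author’s actual script):

    import requests

    # Fetch the most recent Show HN submissions from the Algolia HN Search API.
    resp = requests.get(
        "https://hn.algolia.com/api/v1/search_by_date",
        params={"tags": "show_hn", "hitsPerPage": 100},
        timeout=10,
    )
    resp.raise_for_status()
    for hit in resp.json()["hits"]:
        print(hit["created_at"], hit["points"], hit["num_comments"], hit["title"])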
Show HN of course isn’t dead. You could even say it’s more alive than ever. What has changed is the volume of posts and engagement per post. It’s only natural when more projects are being built in a single weekend. There’s less “Proof of Work”.
From the business side of this, Johan Halse recently called this the Sideprocalypse: the end of the small indie developer’s dream. Every idea has been built, marketed better, and SEO’d into oblivion by someone with more money.
Some cool projects aren’t getting through this noise, which is a pity. Here are a few I thought were interesting:
Now, let’s look at some data.
Show HN started out better than regular submissions. Now it’s significantly worse.
How long does a Show HN post stay on page 1 before being pushed off? During peak hours (US daytime):
No. There’s just more noise, and less opportunity to get attention and have a discussion with other folks on HN about your project. Some gems go completely unnoticed. Maybe something for HN to think about: how do these subjective “gems” get more spotlight? How does HN remain the coolest place to talk about the coolest tech?
...
Read the original on www.arthurcnops.blog »
I found this gem on Hacker News the other day. User soneil posted a link to a four-column version of the ASCII table that blew my mind. I just wanted to repost this here so it is easier to discover.
Here’s an excerpt from the comment:
I always thought it was a shame the ascii table is rarely shown in columns (or rows) of 32, as it makes a lot of this quite obvious. eg, http://pastebin.com/cdaga5i1
It becomes immediately obvious why, eg, ^[ becomes escape. Or that the alphabet is just 40h + the ordinal position of the letter (or 60h for lower-case). Or that we shift between upper & lower-case with a single bit.
You know in ASCII there are 32 characters at the beginning of the table that don’t represent a written symbol. Backspace, newline, escape - that sort of thing. These are called control characters.
In the terminal you can type these control characters by holding the CTRL (control characters, get it?) key in combination with another key. For example, as many experienced vim users know pressing CTRL+[ in the terminal (which is ^[ in caret notation) is the same as pressing the ESC key. But why is the escape key triggered by the [ character? Why not another character? This is the insight soneil shares with us.
Remember that ASCII is a 7-bit encoding. Let’s say the following:
* The first two bits denote the group of the character (2^2 so 4 possible values)
* The remaining five bits describe a character (2^5 so 32 possible values)
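In code, that split is just a shift and a mask. A quick Python illustration (mine, not from the original comment):

    group = ord('[') >> 5        # 0b10    -> group 2
    value = ord('[') & 0b11111   # 0b11011 -> 27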
In the linked table, which I reproduce below, the four groups are represented by the columns and the rows represent the values:

  0 NUL    32 SP    64 @    96 `
  1 SOH    33 !     65 A    97 a
  2 STX    34 "     66 B    98 b
  3 ETX    35 #     67 C    99 c
  4 EOT    36 $     68 D   100 d
  5 ENQ    37 %     69 E   101 e
  6 ACK    38 &     70 F   102 f
  7 BEL    39 '     71 G   103 g
  8 BS     40 (     72 H   104 h
  9 HT     41 )     73 I   105 i
 10 LF     42 *     74 J   106 j
 11 VT     43 +     75 K   107 k
 12 FF     44 ,     76 L   108 l
 13 CR     45 -     77 M   109 m
 14 SO     46 .     78 N   110 n
 15 SI     47 /     79 O   111 o
 16 DLE    48 0     80 P   112 p
 17 DC1    49 1     81 Q   113 q
 18 DC2    50 2     82 R   114 r
 19 DC3    51 3     83 S   115 s
 20 DC4    52 4     84 T   116 t
 21 NAK    53 5     85 U   117 u
 22 SYN    54 6     86 V   118 v
 23 ETB    55 7     87 W   119 w
 24 CAN    56 8     88 X   120 x
 25 EM     57 9     89 Y   121 y
 26 SUB    58 :     90 Z   122 z
 27 ESC    59 ;     91 [   123 {
 28 FS     60 <     92 \   124 |
 29 GS     61 =     93 ]   125 }
 30 RS     62 >     94 ^   126 ~
 31 US     63 ?     95 _   127 DEL
Now in this table, look for ESC. It’s in the first group, fifth from the bottom. It’s in the first column so its group has bits ‘00’, the row has bits ‘11011’. Now look on the same line, what else is there? Yep, the ‘[’ character is there, be it in a different column:
So when you type CTRL+[ for ESC, you’re asking for the equivalent of the character 11011 ([) out of the control set. Pressing CTRL simply sets all bits but the last 5 to zero in the character that you typed. You can imagine it as a bitwise AND:
10 11011 ([)
& 00 11111 (CTRL)
= 00 11011 (ESC)
This is why ^J types a newline, ^H types a backspace and ^I types a tab. This is why if you cat -A a Windows text file, it has ^M printed all over (meaning CR, because newlines are CR+LF on Windows).
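That masking is easy to check in Python. A minimal sketch (the ctrl helper is my own illustration, not from the original comment):

    def ctrl(ch):
        # CTRL clears all but the low 5 bits of the 7-bit ASCII code:
        # a bitwise AND with 0b0011111 (0x1F).
        return chr(ord(ch) & 0b0011111)

    assert ctrl('[') == '\x1b'  # ESC
    assert ctrl('J') == '\n'    # newline (LF)
    assert ctrl('H') == '\x08'  # backspace
    assert ctrl('I') == '\t'    # tab
    assert ctrl('M') == '\r'    # carriage return, the ^M in Windows files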
...
Read the original on garbagecollected.org »
Free and open source alternative to Wispr Flow, Superwhisper, and Monologue.
I like the concept of apps like Wispr Flow, Superwhisper, and Monologue that use AI to add accurate and easy-to-use transcription to your computer, but they all charge fees of ~$10/month when the underlying AI models are free to use or cost pennies.
So over the weekend I vibe-coded my own free version!
It’s called FreeFlow. Here’s how it works:
Download the app from above or click here
Press and hold Fn anytime to start recording and have whatever you say pasted into the current text field
One of the cool features is that it’s context aware. If you’re replying to an email, it’ll read the names of the people you’re replying to and make sure to spell their names correctly. The same goes for dictating into a terminal or another app. This is the same thing as Monologue’s “Deep Context” feature.
An added bonus is that there’s no FreeFlow server, so no data is stored or retained, making it more privacy-friendly than the SaaS apps. The only data that leaves your computer is the API calls to Groq’s transcription and LLM APIs (the LLM post-processes the transcription to adapt it to context).
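For a sense of what those calls look like: Groq’s API is OpenAI-compatible, so a transcription request is roughly the sketch below (not FreeFlow’s actual code; the key and file name are placeholders):

    from openai import OpenAI

    # Groq exposes an OpenAI-compatible endpoint, so the standard openai
    # client works with a different base_url and your own Groq API key.
    client = OpenAI(
        api_key="YOUR_GROQ_API_KEY",
        base_url="https://api.groq.com/openai/v1",
    )

    with open("recording.wav", "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-large-v3",  # a Whisper model hosted by Groq
            file=audio,
        )

    print(transcript.text)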
Why does this use Groq instead of a local transcription model?
I love this idea, and originally planned to build FreeFlow using local models, but to have post-processing (that’s where you get correctly spelled names when replying to emails / etc), you need to have a local LLM too.
If you do that, the total pipeline takes too long for the UX to be good (5-10 seconds per transcription instead of
...
Read the original on github.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.