10 interesting stories served every morning and every evening.
🚨 The Conservatives (EPP) are attempting to force a new vote on Thursday (26th), seeking to reverse Parliament’s NO on indiscriminate scanning. This is a direct attack on democracy and blatant disregard for your right to privacy. No means no. Take action now!
...
Read the original on fightchatcontrol.eu »
Linux gaming has come a long way. When Valve launched Proton back in 2018, it felt like a turning point, turning the Linux gaming experience from “technically possible if you’re okay with a lot of pain” to something that more or less worked. Since then, we’ve seen incremental Wine releases, each one chipping away at compatibility issues and improving performance bit by bit. Wine 9, Wine 10, and so on; each one a collection of bug fixes and small improvements that kept the ecosystem moving forward.
Wine 11 is different. This isn’t just another yearly release with a few hundred bug fixes and some compatibility tweaks. It still carries a huge number of changes and fixes, but it also ships with NTSYNC support, a feature years in the making that rewrites how Wine handles one of the most performance-sensitive operations in modern gaming. On top of that, the WoW64 architecture overhaul is finally complete, the Wayland driver has grown up a lot, and there’s a big list of smaller improvements that collectively make this feel like an all-new project.
I should be clear: not every game is going to see a night-and-day difference. Some titles will run identically to before. But for the games that do benefit from these changes, the improvements range from noticeable to absurd. And because Proton, SteamOS, and every downstream project builds on top of Wine, those gains trickle down to everyone.
Everything up until now was a workaround
Esync and fsync worked, but they weren’t ideal
If you’ve spent any time tweaking Wine or Proton settings, you’ve probably encountered the terms “esync” and “fsync” before. Maybe you toggled them on in Lutris, or noticed them in Proton launch options, without fully understanding what they do. To understand why NTSYNC matters, you need to understand the problem these solutions were all trying to solve.
Windows games, especially modern ones, are heavily multi-threaded. Your CPU isn’t just running one thing at a time, and instead, it’s juggling rendering, physics calculations, asset streaming, audio processing, AI routines, and more, all in parallel across multiple threads. These threads need to coordinate with each other constantly. One thread might need to wait for another to finish loading a texture before it can render a frame. Another might need exclusive access to a shared resource so two threads don’t try to modify it simultaneously.
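The kind of coordination described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not Wine code) where a render thread blocks until a loader thread signals that a texture is ready; on Windows, that wait would be an NT event plus a kernel wait call under the hood:

```python
import threading

texture_ready = threading.Event()
assets = {}
results = []

def loader():
    # Simulate streaming a texture from disk, then signal any waiters.
    assets["texture"] = b"\x00" * 16
    texture_ready.set()

def renderer():
    # Block until the loader signals readiness, then "render".
    texture_ready.wait()
    results.append(len(assets["texture"]))

threads = [threading.Thread(target=loader), threading.Thread(target=renderer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[0])  # 16
```

A real game does this thousands of times per second across many threads, which is why the cost of each individual wait matters so much.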
Windows handles this coordination through what are called NT synchronization primitives: mutexes, semaphores, events, and the like. They’re baked deep into the Windows kernel, and games rely on them heavily. The problem is that Linux doesn’t have native equivalents that behave exactly the same way. Wine has historically had to emulate these synchronization mechanisms, and the way it did so was, to put it simply, not ideal.
The original approach involved making a round-trip RPC call to a dedicated “kernel” process called wineserver every single time a game needed to synchronize between threads. For a game making thousands of these calls per second, that overhead added up fast and became a bottleneck, one that manifested as subtle frame stutters, inconsistent frame pacing, and games that just felt a little bit off even when the raw FPS numbers looked fine.
Esync was the first attempt at a workaround. Developed by Elizabeth Figura at CodeWeavers, it used Linux’s eventfd system call to handle synchronization without bouncing through the wineserver. It worked, and it helped, but it had quirks. Some distros ran into issues with file descriptor limits, since every synchronization object needed its own file descriptor, and games that opened a lot of them could hit the system’s ceiling quite quickly.
Fsync came next, using Linux futexes for even better performance. It was faster than esync in most cases, but it required out-of-tree kernel patches that never made it into the mainline Linux kernel or into upstream Wine out of the box. That meant you needed a custom or patched kernel to use it, which is fine for enthusiasts running CachyOS or Proton-GE, but not exactly accessible for the average user on Ubuntu or Fedora. Futex2, often referred to interchangeably with fsync, did land in Linux kernel 5.16 as futex_waitv, but the original fsync implementation used the out-of-tree futex_wait_multiple call, not futex_waitv. Applications such as Lutris still label the futex_waitv path “fsync”, even though it isn’t the original implementation.
Here’s the thing about both esync and fsync: they were workarounds. Clever ones, but workarounds nonetheless. They approximated NT synchronization behavior using Linux primitives that weren’t designed for the job, and certain edge cases simply couldn’t be handled correctly. Operations like NtPulseEvent() and the “wait-for-all” mode in NtWaitForMultipleObjects() require direct control over the underlying wait queues in ways that user-space implementations just can’t reliably provide.
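To see why “wait-for-all” is hard to fake in user space, consider what the semantics demand: either every object is acquired, atomically, or none is. A toy model (purely illustrative, with hypothetical names) makes the contrast clear; an emulation that acquires objects one at a time can consume a semaphore and then stall on the next one, changing observable state even though the wait as a whole failed:

```python
import threading

class WaitAll:
    """Toy model of NT wait-for-all semantics: consume every object
    atomically, or consume nothing. Hypothetical, for illustration."""

    def __init__(self):
        self.lock = threading.Lock()

    def try_acquire_all(self, semaphores):
        # The whole check-and-consume happens under one lock. A naive
        # user-space emulation that decrements semaphores one by one
        # could grab "gpu", then block on "audio", leaving the system
        # in a state the Windows API says is impossible.
        with self.lock:
            if all(count > 0 for count in semaphores.values()):
                for name in semaphores:
                    semaphores[name] -= 1
                return True
            return False

sems = {"gpu": 1, "audio": 0}
w = WaitAll()
print(w.try_acquire_all(sems))  # False: nothing was consumed
sems["audio"] = 1
print(w.try_acquire_all(sems))  # True: both consumed atomically
print(sems)                     # {'gpu': 0, 'audio': 0}
```

The real NT primitives live in the kernel, where the wait queues themselves can be manipulated atomically; that is exactly the capability esync and fsync lacked.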
Synchronization at the kernel level, rather than in user space
NTSYNC takes a completely different approach. Instead of trying to shoehorn Windows synchronization behavior into existing Linux primitives, it adds a new kernel driver that directly models the Windows NT synchronization object API. It exposes a /dev/ntsync device that Wine can talk to, and the kernel itself handles the coordination. No more round trips to wineserver, no more approximations: the synchronization happens in the kernel, where it belongs, with proper queue management, proper event semantics, and proper atomic operations.
What makes this even better is that NTSYNC was developed by the same person who created esync and fsync in the first place. Elizabeth Figura has been working on this problem for years, iterating through multiple kernel patch revisions, presenting the work at the Linux Plumbers Conference in 2023, and pushing through multiple versions of the patch set before it was finally merged into the mainline Linux kernel with version 6.14.
The numbers are wild. In developer benchmarks, Dirt 3 went from 110.6 FPS to 860.7 FPS, an improvement of roughly 678%. Resident Evil 2 jumped from 26 FPS to 77 FPS. Call of Juarez went from 99.8 FPS to 224.1 FPS. Tiny Tina’s Wonderlands saw gains from 130 FPS to 360 FPS. And Call of Duty: Black Ops is now actually playable on Linux, too. Those benchmarks compare Wine with NTSYNC against upstream vanilla Wine, with no fsync or esync in the mix, so gamers who already use fsync won’t see such a leap in most games.
The games that benefit most from NTSYNC are the ones that were struggling before, such as titles with heavy multi-threaded workloads where the synchronization overhead was a genuine bottleneck. For those games, the difference is night and day. And unlike fsync, NTSYNC is in the mainline kernel, meaning you don’t need any custom patches or out-of-tree modules for it to work. Any distro shipping kernel 6.14 or later, which at this point includes Fedora 42, Ubuntu 25.04, and more recent releases, will support it. Valve has already added the NTSYNC kernel driver to SteamOS 3.7.20 beta, loading the module by default, and an unofficial Proton fork, Proton GE, already has it enabled. When Valve’s official Proton rebases on Wine 11, every Steam Deck owner gets this for free.
All of this is what makes NTSYNC such a big deal, as it’s not simply a run-of-the-mill performance patch. Instead, it’s something much bigger: this is the first time Wine’s synchronization has been correct at the kernel level, implemented in the mainline Linux kernel, and available to everyone without jumping through hoops.
If NTSYNC is the headline feature, the completion of Wine’s WoW64 architecture is the change that will quietly improve everyone’s life going forward. On Windows, WoW64 (Windows 32-bit on Windows 64-bit) is the subsystem that lets 32-bit applications run on 64-bit systems. Wine has been working toward its own implementation of this for years, and Wine 11 marks the point where it’s officially done.
What this means in practice is that you no longer need 32-bit system libraries installed on your 64-bit Linux system to run 32-bit Windows applications. Wine handles the translation internally, using a single unified binary that automatically detects whether it’s dealing with a 32-bit or 64-bit executable. The old days of installing multilib packages, configuring ia32-libs, or fighting with 32-bit dependencies on your 64-bit distro are thankfully over.
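Detecting the bitness of a Windows executable is itself straightforward: it’s stored in the Machine field of the PE/COFF header. Here’s a simplified sketch (not Wine’s actual code, which handles many more machine types and malformed files) that parses the field from raw bytes:

```python
import struct

# Machine values from the PE/COFF specification.
IMAGE_FILE_MACHINE_I386 = 0x014C    # 32-bit x86
IMAGE_FILE_MACHINE_AMD64 = 0x8664   # 64-bit x86-64

def pe_bitness(data: bytes) -> int:
    """Return 32 or 64 for a PE image (simplified sketch)."""
    assert data[:2] == b"MZ", "not a DOS/PE executable"
    # Offset 0x3C of the DOS header holds the file offset of "PE\0\0".
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    assert data[pe_off:pe_off + 4] == b"PE\x00\x00", "missing PE signature"
    # The Machine field immediately follows the 4-byte signature.
    machine = struct.unpack_from("<H", data, pe_off + 4)[0]
    return 32 if machine == IMAGE_FILE_MACHINE_I386 else 64

# Build a minimal fake header just to exercise the parser.
hdr = bytearray(0x48)
hdr[:2] = b"MZ"
struct.pack_into("<I", hdr, 0x3C, 0x40)   # PE signature lives at 0x40
hdr[0x40:0x44] = b"PE\x00\x00"
struct.pack_into("<H", hdr, 0x44, IMAGE_FILE_MACHINE_I386)
print(pe_bitness(bytes(hdr)))  # 32
```

Once the loader knows the answer, the unified binary can route the process through the 32-bit or 64-bit code paths without needing any 32-bit system libraries.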
This might sound like a small quality-of-life improvement, but it’s a massive piece of engineering work. The WoW64 mode now handles OpenGL memory mappings, SCSI pass-through, and even 16-bit application support. Yes, 16-bit! If you’ve got ancient Windows software from the ’90s that you need to run for whatever reason, Wine 11 has you covered.
For gaming specifically, this matters because a surprising number of games, especially older ones, are 32-bit executables. Previously, getting these to work often meant wrestling with your distro’s multilib setup, which varied in quality and ease depending on whether you were on Ubuntu, Arch, Fedora, or something else entirely. Now, Wine just handles it for you.
The rest of Wine 11 isn’t just filler
There are more fixes, too
It’s easy to let NTSYNC and WoW64 steal the spotlight, but Wine 11 is packed to the gills with other stuff worth talking about.
The Wayland driver has come a long way. Clipboard support now works bidirectionally between Wine and native Wayland applications, which is one of those things you don’t think about until it doesn’t work and it drives you mad. Drag-and-drop from Wayland apps into Wine windows is supported. Display mode changes are now emulated through compositor scaling, which means older games that try to switch to lower resolutions like 640x480 actually behave properly instead of leaving you with a broken desktop. If you’ve been holding off on switching from X11 to Wayland because of Wine compatibility concerns, Wine 11 removes a lot of those barriers.
On the graphics front, EGL is now the default backend for OpenGL rendering on X11, replacing the older GLX path. Vulkan support has been bumped to API version 1.4, and there’s initial support for hardware-accelerated H.264 decoding through Direct3D 11 video APIs using Vulkan Video. That last one is particularly interesting for games and applications that use video playback for things like cutscenes or in-game streaming.
Force feedback support has been improved for racing wheels and flight sticks, which is great news if you’re running a sim setup on Linux. Bluetooth has also received a new driver with BLE services and proper pairing support, MIDI soundfont handling has been improved for legacy game music, and there are a couple of minor extras like Zip64 compression support, Unicode 17.0.0 support, TWAIN 2.0 scanning for 64-bit apps, and IPv6 ping functionality.
Thread priority management has been improved on both Linux and macOS, which helps with multi-threaded application performance beyond just the NTSYNC gains. ARM64 devices can now simulate 4K page sizes on systems with larger native pages, which keeps the door open for Wine on Arm hardware. And with more Arm-based Linux devices showing up every year, that matters more than it used to.
Plus, there are a ton of bug fixes. Games like Nioh 2, StarCraft 2, The Witcher 2, Call of Duty: Black Ops II, Final Fantasy XI, and Battle.net all received specific compatibility fixes, on top of the broader improvements made across the board that will lift performance and compatibility for significantly more titles.
Wine 11 is a big release, and NTSYNC isn’t the only reason. Sure, NTSYNC alone would have made it worth paying attention to, but combined with the WoW64 completion, the Wayland improvements, and the sheer volume of fixes, it’s the most important Wine release since Proton made Linux gaming viable. Everything built on top of Wine, from Proton to Lutris to Bottles, gets better because of it. If you play games on Linux at all, Wine 11 is worth trying out.
...
Read the original on www.xda-developers.com »
It’s been about a year since coding agents appeared on the scene that could actually build you full projects. There were precursors like Aider and early Cursor, but they were more assistant than agent. The new generation is enticing, and a lot of us have spent a lot of free time building all the projects we always wanted to build but never had time to.
And I think that’s fine. Spending your free time building things is super enjoyable, and most of the time you don’t really have to care about code quality and maintainability. It also gives you a way to learn a new tech stack if you so want.
During the Christmas break, both Anthropic and OpenAI handed out some freebies to hook people to their addictive slot machines. For many, it was the first time they experienced the magic of agentic coding. The fold’s getting bigger.
Coding agents are now also being introduced into production codebases. After 12 months, we are beginning to see the effects of all that “progress”. Here’s my current view.
While all of this is anecdotal, it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services. And user interfaces have the weirdest fucking bugs that you’d think a QA team would catch. Granted, that’s been the case for longer than agents have existed. But we seem to be accelerating.
We don’t have access to the internals of companies. But every now and then something slips through to some news reporter. Like this supposed AI-caused outage at AWS. Which AWS immediately “corrected”. Only to then follow up internally with a 90-day reset.
Satya Nadella, the CEO of Microsoft, has been going on about how much code is now being written by AI at Microsoft. While we don’t have direct evidence, there sure is a feeling that Windows is going down the shitter. Microsoft itself seems to agree, based on this fine blog post.
Companies claiming 100% of their product’s code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes: that is not the seal of quality they think it is. And it’s definitely not good advertising for the fever dream of having your agents do all the work for you.
Through the grapevine you hear more and more people, from software companies small and large, saying they have agentically coded themselves into a corner. No code review, design decisions delegated to the agent, a gazillion features nobody asked for. That’ll do it.
We have basically given up all discipline and agency for a sort of addiction, where your highest goal is to produce the largest amount of code in the shortest amount of time. Consequences be damned.
You’re building an orchestration layer to command an army of autonomous agents. You installed Beads, completely oblivious to the fact that it’s basically uninstallable malware. The internet told you to. That’s how you should work or you’re ngmi. You’re ralphing the loop. Look, Anthropic built a C compiler with an agent swarm. It’s kind of broken, but surely the next generation of LLMs can fix it. Oh my god, Cursor built a browser with a battalion of agents. Yes, of course, it’s not really working and it needed a human to spin the wheel a little bit every now and then. But surely the next generation of LLMs will fix it. Pinky promise! Distribute, divide and conquer, autonomy, dark factories, software is solved in the next 6 months. SaaS is dead, my grandma just had her Claw build her own Shopify!
Now again, this can work for your side project barely anyone is using, including yourself. And hey, maybe there’s somebody out there who can actually make this work for a software product that’s not a steaming pile of garbage and is used by actual humans in anger.
If that’s you, more power to you. But at least among my circle of peers I have yet to find evidence that this kind of shit works. Maybe we all have skill issues.
The problem with agents is that they make errors. Which is fine, humans also make errors. Maybe they are just correctness errors. Easy to identify and fix. Add a regression test on top for bonus points. Or maybe it’s a code smell your linter doesn’t catch. A useless method here, a type that doesn’t make sense, duplicated code over there. On their own, these are harmless. A human will also do such booboos.
But clankers aren’t humans. A human makes the same error a few times. Eventually they learn not to make it again. Either because someone starts screaming at them or because they’re on a genuine learning path.
An agent has no such learning ability. At least not out of the box. It will continue making the same errors over and over again. Depending on the training data it might also come up with glorious new interpolations of different errors.
Now you can try to teach your agent. Tell it to not make that booboo again in your AGENTS.md. Concoct the most complex memory system and have it look up previous errors and best practices. And that can be effective for a specific category of errors. But it also requires you to actually observe the agent making that error.
There’s a much more important difference between clanker and human. A human is a bottleneck. A human cannot shit out 20,000 lines of code in a few hours. Even if the human creates such booboos at high frequency, there’s only so many booboos the human can introduce in a codebase per day. The booboos will compound at a very slow rate. Usually, if the booboo pain gets too big, the human, who hates pain, will spend some time fixing up the booboos. Or the human gets fired and someone else fixes up the booboos. So the pain goes away.
With an orchestrated army of agents, there is no bottleneck, no human pain. These tiny little harmless booboos suddenly compound at a rate that’s unsustainable. You have removed yourself from the loop, so you don’t even know that all the innocent booboos have formed a monster of a codebase. You only feel the pain when it’s too late.
Then one day you turn around and want to add a new feature. But the architecture, which is largely booboos at this point, doesn’t allow your army of agents to make the change in a functioning way. Or your users are screaming at you because something in the latest release broke and deleted some user data.
You realize you can no longer trust the codebase. Worse, you realize that the gazillions of unit, snapshot, and e2e tests you had your clankers write are equally untrustworthy. The only thing that’s still a reliable measure of “does this work” is manually testing the product. Congrats, you fucked yourself (and your company).
You have zero fucking idea what’s going on because you delegated all your agency to your agents. You let them run free, and they are merchants of complexity. They have seen many bad architectural decisions in their training data and throughout their RL training. You have told them to architect your application. Guess what the result is?
An immense amount of complexity, an amalgam of terrible cargo cult “industry best practices”, that you didn’t rein in before it was too late. But it’s worse than that.
Your agents never see each other’s runs, never get to see all of your codebase, never get to see all the decisions that were made by you or other agents before they make a change. As such, an agent’s decisions are always local, which leads to the exact booboos described above. Immense amounts of code duplication, abstractions for abstractions’ sake.
All of this compounds into an unrecoverable mess of complexity. The exact same mess you find in human-made enterprise codebases. Those arrive at that state because the pain is distributed over a massive amount of people. The individual suffering doesn’t pass the threshold of “I need to fix this”. The individual might not even have the means to fix things. And organizations have super high pain tolerance. But human-made enterprise codebases take years to get there. The organization slowly evolves along with the complexity in a demented kind of synergy and learns how to deal with it.
With agents and a team of 2 humans, you can get to that complexity within weeks.
So now you hope your agents can fix the mess, refactor it, make it pristine. But your agents can also no longer deal with it. Because the codebase and complexity are too big, and they only ever have a local view of the mess.
And I’m not just talking about context window size or long context attention mechanisms failing at the sight of a 1 million lines of code monster. Those are obvious technical limitations. It’s more devious than that.
Before your agent can try and help fix the mess, it needs to find all the code that needs changing and all existing code it can reuse. We call that agentic search. How the agent does that depends on the tools it has. You can give it a Bash tool so it can ripgrep its way through the codebase. You can give it some queryable codebase index, an LSP server, a vector database. In the end it doesn’t matter much. The bigger the codebase, the lower the recall. Low recall means that your agent will, in fact, not find all the code it needs to do a good job.
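Recall here is the standard retrieval metric: of all the code that is actually relevant to the change, what fraction did the search surface? A toy computation with made-up file names shows what “low recall” means concretely:

```python
def recall(found: set[str], relevant: set[str]) -> float:
    """Fraction of the truly relevant files the agent's search found."""
    return len(found & relevant) / len(relevant)

# Hypothetical example: the files a change actually touches versus
# what a grep-style search surfaced in a large repository.
relevant = {
    "billing/api.py",
    "billing/models.py",
    "shared/currency.py",
    "tests/test_billing.py",
}
found = {"billing/api.py", "billing/models.py"}

print(recall(found, relevant))  # 0.5
```

At 0.5 recall, the agent is editing half-blind: it never saw shared/currency.py, so it reimplements the currency logic inline, and the duplication described below begins.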
This is also why those code smell booboos happen in the first place. The agent misses existing code, duplicates things, introduces inconsistencies. And then they blossom into a beautiful shit flower of complexity.
How do we avoid all of this?
Coding agents are sirens, luring you in with their speed of code generation and jagged intelligence, often completing a simple task with high quality at breakneck velocity. Things start falling apart when you think: “Oh golly, this thing is great. Computer, do my work!”.
There’s nothing wrong with delegating tasks to agents, obviously. Good agent tasks share a few properties: they can be scoped so the agent doesn’t need to understand the full system. The loop can be closed, that is, the agent has a way to evaluate its own work. The output isn’t mission critical, just some ad hoc tool or internal piece of software nobody’s life or revenue depends on. Or you just need a rubber duck to bounce ideas against, which basically means bouncing your idea against the compressed wisdom of the internet and synthetic training data. If any of that applies, you found the perfect task for the agent, provided that you as the human are the final quality gate.
Karpathy’s auto-research applied to speeding up startup time of your app? Great! As long as you understand that the code it spits out is not production-ready at all. Auto-research works because you give it an evaluation function that lets the agent measure its work against some metric, like startup time or loss. But that evaluation function only captures a very narrow metric. The agent will happily ignore any metrics not captured by the evaluation function, such as code quality, complexity, or even correctness, if your evaluation function is foobar.
The point is: let the agent do the boring stuff, the stuff that won’t teach you anything new, or try out different things you’d otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation. Yes, sure, you can also use an agent for that final step.
And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you’re actually building and why. Give yourself an opportunity to say, fuck no, we don’t need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.
Anything that defines the gestalt of your system, that is architecture, API, and so on, write it by hand. Maybe use tab completion for some nostalgic feels. Or do some pair programming with your agent. Be in the code. Because the simple act of having to write the thing or seeing it being built up step by step introduces friction that allows you to better understand what you want to build and how the system “feels”. This is where your experience and taste come in, something the current SOTA models simply cannot yet replace. And slowing the fuck down and suffering some friction is what allows you to learn and grow.
The end result will be systems and codebases that continue to be maintainable, at least as maintainable as our old systems before agents. Yes, those were not perfect either. Your users will thank you, as your product now sparks joy instead of slop. You’ll build fewer features, but the right ones. Learning to say no is a feature in itself.
You can sleep well knowing that you still have an idea what the fuck is going on, and that you have agency. Your understanding allows you to fix the recall problem of agentic search, leading to better clanker outputs that need less massaging. And if shit hits the fan, you are able to go in and fix it. Or if your initial design has been suboptimal, you understand why it’s suboptimal, and how to refactor it into something better. With or without an agent, don’t fucking care.
All of this requires discipline and agency.
All of this requires humans.
...
Read the original on mariozechner.at »
Microsoft just announced a 7-point plan to fix Windows 11, and the tech press is treating it like a redemption arc. Pavan Davuluri, the Windows president, admitted in January 2026 that “Windows 11 had gone off track” and said Microsoft was entering a mode called “swarming” where engineers would be pulled off new features to fix existing problems.
I saw this headline and my first thought was: it’s like being in an abusive relationship. They beat you, then show up with flowers saying they’ve changed. And everyone around you says “see, they’re getting better.” But the bruises are still there and the apology only covers the hits people noticed.
I want to walk through what Microsoft actually did to Windows 11 over the past four years, because this “fix” announcement only makes sense when you see the full damage list and realize that the worst offenses aren’t even part of the repair plan.
The Copilot invasion started September 26, 2023, when Microsoft pushed their AI chatbot into Windows 11 ahead of the formal 23H2 release. The icon appeared between your Start menu and system tray, you couldn’t move it, you couldn’t remove it through normal settings, and it hijacked the Win+C keyboard shortcut. Over the next two years, Copilot buttons metastasized into Snipping Tool, Photos, Notepad, Widgets, File Explorer context menus, Start menu search, and system Settings. Microsoft even planned to force-install the Microsoft 365 Copilot app directly onto Start menus of “eligible PCs.” The new plan promises to remove all of that. They want credit for pulling their hand out of your pocket.
On April 24, 2024, Microsoft shipped update KB5036980, which injected advertisements into the Windows 11 Start menu’s “Recommended” section. These showed up labeled “Promoted” and pushed apps like Opera browser and some password manager nobody asked for. And the Start menu was just one surface, they also placed ads on the lock screen, in the Settings homepage hawking Game Pass subscriptions, inside File Explorer pushing OneDrive, and through “tip” notifications that were thinly veiled product pitches. The “fix” promises “fewer ads.” Fewer. The operating system you paid $139 for at retail should have exactly zero ads, and the fact that “fewer” is supposed to impress anyone shows how thoroughly Microsoft has lowered the bar.
The privacy angle is where this gets dangerous. When Windows 11 launched in October 2021, Home edition required a Microsoft account during setup. By October 2025, Microsoft had systematically hunted down and killed every single workaround for creating a local account, the `oobe\bypassnro` command, the BypassNRO registry toggle, the `ms-cxh:localonly` trick, even the old fake email method. Amanda Langowski from Microsoft stated it plainly: they were “removing known mechanisms for creating a local account in the Windows Setup experience.”
A Microsoft account means your identity is tied to your OS from first boot. Your activity, your app usage, your browsing through Edge, your files through OneDrive, all funneled into a profile Microsoft controls. And this particular abuse is nowhere in the 7-point fix plan.
OneDrive got the same treatment. Microsoft silently changed Windows 11 setup in 2024 so that OneDrive folder backup enables automatically with no consent dialog, syncing your Desktop, Documents, Pictures, Music, and Videos to Microsoft’s cloud. When people discovered this and tried to turn it off, their files disappeared from their local machine because OneDrive had moved them, effectively transferring ownership of your personal files to Microsoft’s cloud service without asking. Author Jason Pargin went viral describing how OneDrive activated itself, moved his files, then started deleting them when he hit the free 5GB storage limit. Microsoft’s response to this was silence. Also not in the fix plan.
Windows Recall is worth lingering on. Announced May 2024, it’s an AI feature that screenshots everything on your screen every few seconds and makes it searchable. Security researcher Kevin Beaumont demonstrated that the entire Recall database was stored in plaintext in an AppData folder where any malware could extract it. Bank numbers, Social Security numbers, passwords, all sitting in an unencrypted SQLite database.
The UK’s Information Commissioner’s Office got involved. Microsoft delayed it, made it opt-in, added encryption, and quietly relaunched it for Insiders in November 2024. They built a surveillance feature, shipped it broken, got caught, and called the patch “responding to feedback.”
But the abuse pattern goes back way further than Windows 11. In 2015 and 2016, Microsoft ran the GWX (Get Windows 10) campaign, full-screen nag dialogs that pushed Windows 10 upgrades on Windows 7 and 8 users. In May 2016, they changed the behavior of the red X button so that clicking it, which for decades had meant “close” or “cancel”, instead scheduled the Windows 10 upgrade. Microsoft’s own security advice told users to close suspicious dialogs using the X button, and they weaponized that trained behavior against their own customers. A woman named Teri Goldstein sued after the forced upgrade bricked her travel agency PC and won $10,000. Microsoft appealed, then dropped the appeal and paid. They eventually admitted they “went too far.”
And right now, Microsoft is about to force 240 million PCs into the landfill. Windows 10 hit end of life on October 14, 2025, and Windows 11 requires TPM 2.0, specific CPU generations, UEFI Secure Boot, hardware requirements that excluded roughly 20% of all PCs worldwide. Perfectly functional machines, rendered “obsolete” by arbitrary software restrictions. If you want to keep getting security patches on Windows 10, Microsoft will charge you $30 per year, paying for patches to an operating system you already bought a license for. Enterprise customers pay $61 per device for Year 1, $122 for Year 2, and $244 for Year 3, with the price doubling each year.
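The enterprise ESU math is worth spelling out, since the per-year figures hide the cumulative cost. Using the pricing cited above ($61 for Year 1, doubling each year):

```python
# Enterprise Extended Security Updates pricing from the article:
# $61 per device in Year 1, doubling each year for up to three years.
base = 61
yearly = [base * 2 ** year for year in range(3)]
print(yearly)       # [61, 122, 244]
print(sum(yearly))  # 427
```

That’s $427 per device over three years just to keep receiving patches for an OS the organization already licensed.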
Edge is its own disaster. Mozilla commissioned an independent report titled “Over the Edge” that documented specific dark patterns including confirmshaming (pop-ups implying you’re “shopping in a dumb way” if you don’t use Edge), disguised ads injected into Google.com and the Chrome Web Store, and default browser settings that hijack back to Edge without notification. Certain Windows web links still force-open in Edge regardless of your default browser setting. Despite all this manipulation, Edge holds just 5.35% global market share. Even with the full weight of an operating system monopoly forcing their browser on people, almost nobody chooses to use it.
And the telemetry question. On Windows 11 Home and Pro, you cannot fully disable telemetry. Setting `AllowTelemetry` to 0 in the registry on non-Enterprise editions gets silently overridden back to 1. Only Enterprise and Education editions can actually turn it off. The operating system you paid for reports data about you to Microsoft, and the setting to stop it is a lie on consumer editions. Also not in the fix plan.
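For reference, the policy value in question lives under the widely documented DataCollection policy key. A .reg fragment that attempts to disable telemetry, and which, per the above, Home and Pro editions silently override back to 1, looks like this:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DataCollection]
"AllowTelemetry"=dword:00000000
```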
I haven’t even mentioned the EU fining Microsoft over 2.2 billion euros across multiple antitrust rulings, including 561 million euros specifically for breaking a browser ballot promise: a Windows 7 update silently removed the choice screen for 14 months, affecting 15 million users, and it was the first time the EU fined a company for violating a “commitment decision.” Or the _NSAKEY controversy from 1999, when a second crypto key labeled literally `_NSAKEY` was found embedded in Windows NT. Or the time in August 2024 when a Microsoft update bricked Linux dual-boot systems across Ubuntu, Mint, and other distros, and it took 9 months to fully fix.
Ok so here’s the table that tells the whole story:
The bottom four rows are the ones that matter. The privacy-hostile changes, the forced Microsoft accounts, the telemetry that lies about being disabled, OneDrive hijacking your files, the pre-installed garbage, none of that is part of the fix plan. Microsoft’s “swarming” effort targets the most visible UI annoyances, the ones that generate bad headlines. Data collection, vendor lock-in, forced accounts, those stay because those are the revenue model.
Microsoft spent four years deliberately degrading an operating system that people paid $139 or more for, and now they’re announcing the removal of their own damage as if it’s a gift. The “fix” is them taking their foot off your neck and expecting applause. The ads should have never been there, the Copilot buttons should have never been forced, and the taskbar should have never been crippled in the first place. And the things they’re choosing to keep, the telemetry, the forced accounts, the data harvesting, those are the real product, because at this point, you are.
...
Read the original on www.sambent.com »
Hello! This is a long, hopefully fun one! If you’re reading this in your email, you may need to click “expand” to read all the way to the end of this post. Thank you!
When I lived in Nashville, my girlfriends and I would take ourselves on “field trips” across the state. We once went on a tour to spot bald eagles in West Tennessee, and upon arrival, a woman with fluffy hair in the state park bathroom told us she had seen 113 bald eagles the day before. We ended up seeing (counts on one hand)…2.
In the summer of 2017, we went on another field trip to the National Park Service’s Manhattan Project site in Oak Ridge, TN. In 1942, Oak Ridge, TN, was chosen as the site for a plutonium and uranium enrichment plant as part of the Manhattan Project, a top-secret WWII effort to develop the first atomic bomb. Once a small, rural farming community settled in a valley of East Tennessee, the crash effort to create a nuclear bomb grew the secret settlement, code-named “Site X,” from 3,000 people in 1942 to 75,000 by 1945. Alongside the population growth, enormously complex buildings rose.
A Note: The Manhattan Project created the nuclear bomb that caused extreme devastation in Japan and ended the war. There’s a lot of U. S. history that’s awful and indefensible. Today, though, I’d like to talk about the industrial design and color theory from that era.
Our first stop on the tour was the X-10 Graphite Reactor room and its control panel room. The X-10 Graphite Reactor, a 24-foot-square block of graphite, was the world’s second full-scale nuclear reactor. The plutonium produced from uranium there was shipped to Los Alamos, New Mexico, for research into the atomic bomb Fat Man.
What caught my eye as a designer, as with most industrial plants and control rooms of that time, besides the knobs, levers, and buttons, was the use of a very specific seafoam green, seen here on the reactor’s walls and in the control panel room.
Thus began my day-long search, traipsing through the internet for historical information about this specific shade of seafoam green.
Thankfully, this path led me to the work of color theorist Faber Birren.
In the fall of 1919, Faber Birren entered the Art Institute at the University of Chicago, only to drop out in the spring of 1921 to commit himself to self-education in color, as such a program didn’t exist. He spent his days interviewing psychologists and physicists and conducted his own color studies, which were considered unconventional at the time. He painted his bedroom walls red vermillion to test if it would make him go mad.
In 1933, he moved to New York City and became a self-appointed color consultant, approaching major corporations to sell the idea that appropriate use of color could boost sales. He convinced a Chicago wholesale meat company that the company’s white walls made the meat unappealing. He studied the steaks on various colored backgrounds and determined that a blue/green background would make the beef appear redder. Sales went up, and soon a number of industries hired Faber to bring color theory into their work, including DuPont, the leading chemical and wartime contractor, which also designed the Manhattan Project’s buildings.
With the increase in wartime production in the US during WWII, Birren and DuPont created a master color safety code for the industrial plant industry, with the aim of reducing accidents and increasing efficiency within plants. These color codes were approved by the National Safety Council in 1944 and are now internationally recognized, having been mandatory practice since 1948. The color coding went as such:
* Fire Red: All fire protection, emergency stop buttons, and flammable liquids should be red.
* Solar Yellow: Signifies caution and physical hazards such as falling.
* Safety Green: Indicates safety features such as first-aid equipment, emergency exits, and eyewash stations.
* Light Green: Used on walls to reduce visual fatigue.
My industrial “seafoam” light green mystery has finally been solved thanks to this article from UChicago Magazine.
Keeping in theme with “control rooms”, I researched the second Manhattan Project plant, the Hanford Site, home to the B Reactor, the first full-scale plutonium production reactor in the world. To my surprise, this site looked like an ode to Birren’s light green and color codes, which makes sense, since his client, DuPont, was also responsible for the design and construction of Hanford.
In Birren’s 1963 book Color for Interiors: Historical and Modern, he writes about research undertaken to measure eye fatigue in the industrial workplace and the effects of interior color on human efficiency and well-being. Using the color chart above, he states that the proper use of color hues can reduce accidents, raise standards of machine maintenance, and improve labor morale.
“The importance of color in factories is first to control brightness in the general field of view for an efficient seeing condition. Interiors can then be conditioned for emotional pleasure and interest, using warm, cool, or luminous hues as working conditions suggest. Color should be functional and not merely decorative.” - Faber Birren
Now, looking at the interiors of the Manhattan Project control rooms and plants, the broad use of Light and Medium Green makes sense. One mistake and mass devastation could have occurred within these towns. Birren writes, “Note that most of the standards are soft in tone. This is deliberate and intended to establish a non-distracting environment. Green is a restful and natural-looking color for average factory interiors. Light Green with Medium Green is suggested.”
Let’s put these theories to work with this photo of the B-Reactor room found at the Hanford Site of the Manhattan Project. In Birren’s book, he directed the following color applications for small industrial areas:
* ✔️ Medium Gray is proposed for machinery, equipment, and racks
* ✔️ Beige walls may be applied to interiors deprived of natural light
As we can see, his color theory was followed to a T.
Other US Industrial Plants that Used these Color Methods
This color theory research just opened a whole can of design worms for me, and I’m excited to dive into them more. For example, Germany developed its own seafoam green, specifically designed for bridges, called Cologne Bridge Green. That’s a post for another day.
And finally, if you enjoy this sort of design, I designed a font called “Parts List” that is meant to evoke the feeling of sitting in an oil change waiting room, with the smell of burnt coffee. I created this font out of old auto parts lists, and it’s a perfectly wobbly typeface that will give you that ‘Is it a typewriter or handwriting?’ feeling. It’s now available on my website.
PS: I have an old friend whose dad still works at the Uranium plant in Oak Ridge. I told him that I was surprised that almost all of the facilities had been torn down, and he just looked at me straight in the face and said, “Who said it’s actually gone?” Noted. ✌️
Thanks for being here!
...
Read the original on bethmathews.substack.com »
...
Read the original on rpastro.square.site »
For various reasons, I have decided to move as many services and subscriptions as possible from non-EU countries to the EU or to switch to European service providers. The reasons for this are the current global political situation and improved data protection. I don’t want to go into the first point any further for various reasons, but the second point should be immediately obvious, since the EU currently has the most user-friendly laws when it comes to data protection. Below, I will list both the old and new service providers; this is not an advertisement, but simply the result of my research, which was aimed at achieving the same or better quality at affordable prices.
I would call this post an interim report, and I will expand on it if I end up migrating more services.
In my opinion, Fastmail is one of the best email providers. In all the years I’ve had my email accounts there, I’ve never had any problems. I paid 10 euros a month for two accounts, could use an unlimited number of my own domains, and could not only set up catch-all addresses but also send emails from any email address I wanted. This is important for my email setup. The calendar is also solid and was used within the family. All of this was also available in a well-designed Android app. Finding a European alternative that offers all of this proved difficult. First, I tried mailbox.org, which I can generally recommend without reservation. Unfortunately, you can’t send emails from any address on your own domain without a workaround, so the search continued. Eventually, I landed on Uberspace. This “pay what you want” provider offers a shell account, web hosting, email hosting, and more at fair prices. In addition, you can use as many of your own domains as you like for both web and email, and send emails from any sender address. There isn’t a dedicated app, which is why I now use Thunderbird for Android and am very satisfied with it.
Uberspace doesn’t offer a built-in calendar solution. So I tried installing various CalDAV servers, but none of them really convinced me. In the end, I simply installed NextCloud on my Uberspace Asteroid, which has CalDAV and CardDAV built in. On my desktop, I use Thunderbird as a client; on Android, I use DAVx5 and Fossil Calendar. It works great, even if NextCloud does come with some overhead. In return, I can now easily share files with others and, in theory, also use NextCloud’s online office functionality.
Now that I’m already using Uberspace for my email and calendar, I was able to host this website there as well. I previously had a VPS with Hetzner for this purpose, which I no longer need. The only minor hurdle was that I use SSI on this site to manage the header centrally. I had previously used Nginx, but Uberspace hosts on Apache, where the SSI implementation is handled slightly differently. However, adapting my HTML code was quite simple, so I was able to quickly migrate the site to Uberspace.
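One difference commonly hit in exactly this Nginx-to-Apache SSI migration: Apache’s mod_include wants the SSI comment to start with `<!--#` and no space after the hash, while Nginx’s ssi module documents its commands with a space (`<!--# include … -->`). A sketch of the Apache side (hypothetical file names; Uberspace’s actual defaults may differ):

```
# .htaccess: enable server-side includes for .html files
Options +Includes
AddOutputFilter INCLUDES .html
```

In the pages themselves, the shared header is then pulled in with `<!--#include virtual="/header.html" -->`, with no space after the `#`.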
For a long time, I was a satisfied Namecheap customer. They offer good prices, a wide selection of available domains, their DNS management has everything you need, and their support team has helped me quickly on several occasions. But now it was time to look for a comparable provider in the EU. In the end, I settled on hosting.de. Some of the reasons were the prices, reviews, the location in Germany, and the availability of .is domains. So far, everything has been running smoothly; support helped me quickly and competently with one issue; and while prices for non-German domains are slightly higher, they’re still within an acceptable range.
At some point, pretty much everyone had their code on GitHub (or still does). I was no exception, though I had also hosted my own Gitea instance. Eventually, I got tired of that too and migrated all my Git repositories to codeberg.org. Codeberg is a German-based nonprofit organization, and it’s hard to imagine going wrong with this choice.
No changes here. I’ve always been a happy Mullvad customer. For 5 euros a month, I pay a Swedish company that has proven it doesn’t log any data and doesn’t even require me to create an account. No subscription traps, no weird Black Friday deals, no discounts: just 5 euros a month for a reliable, trustworthy service.
For many years, I used my work smartphone for personal use as well. I was more than satisfied with the Pixel 6, but understandably, I wasn’t allowed to install a custom ROM or use alternative app stores like F-Droid. That’s why I decided to buy a separate personal smartphone. I chose the Pixel 9a, which is supported by Graphene OS. I still installed the Google Play Store so I could install a significant number of apps that are only available there. However, I can now use alternative app stores, which allows me to install and use apps like NewPipe. This way, I can enjoy YouTube ad-free and without an account.
For casual use on the couch, a Chromebook has been unbeatable for me so far. It’s affordable, the battery lasts forever, and it wakes up from sleep mode extremely quickly. To break away from Google here as well, I recently bought a cheap used 11-inch MacBook Air (A1465) to install MX Linux with Fluxbox on it and use it for browsing and watching videos. I haven’t had a chance to test it out yet, but I’m hoping it will be able to replace the Chromebook.
...
Read the original on rz01.org »
Here are three stories about the state of gambling in America.
In November 2025, two pitchers for the Cleveland Guardians, Emmanuel Clase and Luis Ortiz, were charged in a conspiracy for “rigging pitches.” Frankly, I had never heard of rigged pitches before, but the federal indictment describes a scheme so simple that it’s a miracle that this sort of thing doesn’t happen all the time. Three years ago, a few corrupt bettors approached the pitchers with a tantalizing deal: (1) We’ll bet that certain pitches will be balls; (2) you throw those pitches into the dirt; (3) we’ll win the bets and give you some money.
The plan worked. Why wouldn’t it? There are hundreds of pitches thrown in a baseball game, and nobody cares about one bad pitch. The bets were so deviously clever because they offered enormous rewards for bettors and only incidental inconvenience for players and viewers. Before their plan was snuffed out, the fraudsters won $450,000 from pitches that not even the most ardent Cleveland baseball fan would ever remember the next day. Nobody watching America’s pastime could have guessed that they were witnessing a six-figure fraud.
On the morning of February 28th, someone logged onto the prediction market website Polymarket and made an unusually large bet. This bet wasn’t placed on a baseball game. It wasn’t placed on any sport. This was a bet that the United States would bomb Iran on a specific day, despite extremely low odds of such a thing happening.
A few hours later, bombs landed in Iran. This one bet was part of a $553,000 payday for a user named “Magamyman.” And it was just one of dozens of suspicious, perfectly-timed wagers, totaling millions of dollars, placed in the hours before a war began.
It is almost impossible to believe that, whoever Magamyman is, he didn’t have inside information from members of the administration. The term war profiteering typically refers to arms dealers who get rich from war. But we now live in a world not only where online bettors stand to profit from war, but also where key decision makers in government have the tantalizing option to make hundreds of thousands of dollars by synchronizing military engagements with their gambling positions.
On March 10, several days into the Iran War, the journalist Emanuel Fabian reported that a warhead launched from Iran struck a site outside Jerusalem.
Meanwhile on Polymarket, users had placed bets on the precise location of missile strikes on March 10. Fabian’s article was therefore poised to determine payouts of $14 million in betting. As The Atlantic’s Charlie Warzel reported, bettors encouraged him to rewrite his story to produce the outcome that they’d bet on. Others threatened to make his life “miserable.”
A clever dystopian novelist might conceive of a future where poorly paid journalists for news wires are offered six-figure deals to report fictions that cash out bets from online prediction markets. But just how fanciful is that scenario when we have good reason to believe that journalists are already being pressured, bullied, and threatened to publish specific stories that align with multi-thousand dollar bets about the future?
Put it all together: rigged pitches, rigged war bets, and attempts to rig wartime journalism. Without context, each story would sound like a wacky conspiracy theory. But these are not conspiracy theories. These are things that have happened. These are conspiracies—full stop.
“If you’re not paranoid, you’re not paying attention” has historically been one of those bumper stickers you find on the back of a car with so many other bumper stickers that you worry for the sanity of its occupants. But in this weird new reality where every event on the planet has a price, and behind every price is a shadowy counterparty, the jittery gambler’s paranoia—is what I’m watching happening because somebody more powerful than me bet on it?—is starting to seem, eerily, like a kind of perverse common sense.
What’s remarkable is not just the fact that online sports books have taken over sports, or that betting markets have metastasized in politics and culture, but the speed with which both have taken place.
For most of the last century, the major sports leagues were vehemently against gambling, as the Atlantic staff writer McKay Coppins explained in his recent feature. In 1992, NFL commissioner Paul Tagliabue told Congress that “nothing has done more to despoil the games Americans play and watch than widespread gambling on them.” In 2012, NBA commissioner David Stern loudly threatened New Jersey Gov. Chris Christie for signing a bill to legalize sports betting in the Garden State, reportedly screaming, “we’re going to come after you with everything we’ve got.”
So much for that. Following the 2018 Supreme Court decision Murphy v. NCAA, sports gambling was unleashed into the world, and the leagues haven’t looked back. Last year, the NFL saw $30 billion gambled on football games, and the league itself made half a billion dollars in advertising, licensing, and data deals.
Nine years ago, Americans bet less than $5 billion on sports. Last year, that number rose to at least $160 billion. Big numbers mean nothing to me, so let me put that statistic another way: $5 billion is roughly the amount Americans spend annually at coin-operated laundromats, and $160 billion is nearly what Americans spent last year on domestic airline tickets. So, in under a decade, online sports gambling has grown from the scale of coin laundromats to rival the entire airline industry.
And now here come the prediction markets, such as Polymarket and Kalshi, whose combined 2025 revenue came in around $50 billion. “These predictive markets are the logical endpoint of the online gambling boom,” Coppins told me on my podcast Plain English. “We have taught the entire American population how to gamble with sports. We’ve made it frictionless and easy and put it on everybody’s phone. Why not extend the logic and culture of gambling to other segments of American life?” He continued:
Why not let people gamble on who’s going to win the Oscar, when Taylor Swift’s wedding will be, how many people will be deported from the United States next year, when the Iranian regime will fall, whether a nuclear weapon will be detonated in the year 2026, or whether there will be a famine in Gaza? These are not things that I’m making up. These are all bets that you can make on these predictive markets.
Indeed, why not let people gamble on whether there will be a famine in Gaza? The market logic is cold and simple: More bets means more information, and more informational volume is more efficiency in the marketplace of all future happenings. But from another perspective—let’s call it, baseline morality?—the transformation of a famine into a windfall event for prescient bettors seems so grotesque as to require no elaboration. One imagines a young man sending his 1099 documents to a tax accountant the following spring: “right, so here are my dividends, these are the cap gains, and, oh yeah, here’s my $9,000 payout for totally nailing when all those kids would die.”
It is a comforting myth that dystopias happen when obviously bad ideas go too far. Comforting, because it plays to our naive hope that the world can be divided into static categories of good versus evil and that once we stigmatize all the bad people and ghettoize all the bad ideas, some utopia will spring into view. But I think dystopias more likely happen because seemingly good ideas go too far. “Pleasure is better than pain” is a sensible notion, and a society devoted to its implications created Brave New World. “Order is better than disorder” sounds alright to me, but a society devoted to the most grotesque vision of that principle takes us to 1984. Sports gambling is fun, and prediction markets can forecast future events. But extended without guardrails or limitations, those principles lead to a world where ubiquitous gambling leads to cheating, cheating leads to distrust, and distrust leads ultimately to cynicism or outright disengagement.
“The crisis of authority that has kind of already visited every other American institution in the last couple of decades has arrived at professional sports,” Coppins said. Two-thirds of Americans now believe that professional athletes sometimes change their performance to influence gambling outcomes. “Not to overstate it, but that’s a disaster,” he said. And not just for sports.
There are four reasons to worry about the effect of gambling in sports and culture.
The first is the risk to individual bettors. Every time we create 1,000 new gamblers, we create dozens of new addicts and a handful of new bankruptcies. As I’ve reported, there is evidence that about one in five men under 25 is on the spectrum of having a gambling problem, and calls to the National Problem Gambling Helpline have roughly tripled since sports gambling was broadly legalized in 2018. Research from UCLA and USC found that bankruptcies increased by 10 percent in states that legalized online sports betting between 2018 and 2023. People will sometimes ask me what business I have worrying about online gambling when people should be free to spend their money however they like. My response is that wise rules place guardrails around economic activity with a certain rate of personal harm. For alcohol, we have licensing requirements, minimum drinking ages, boundaries around hours of sale, and rules about public consumption. As alcohol consumption is declining among young people, gambling is surging; Gen Z has replaced one (often fun) vice with a meaningful chance of addiction with another (often fun) vice with a meaningful chance of addiction. But whereas we have centuries of experience curtailing excessive drinking with rules and customs, we are currently in a free-for-all era of gambling.
The second risk is to individual players and practitioners. One reason why sports commissioners might have wanted to keep gambling out of their business is that gambling turns some people into complete psychopaths, and that’s not a very nice experience for folks on the receiving end. In his feature, McKay Coppins reports on the experience of Caroline Garcia, a top-ranked tennis player, who said she received torrents of abusive messages from gamblers both for losing games and for winning games. “This has become a very common experience for athletes at the professional level, even at the college level too,” Coppins said. As the experience of journalist Emanuel Fabian shows, gambling can turn ordinary people into mini mob bosses, who go around threatening players and practitioners who they believe are costing them thousands of dollars.
The third risk is to the integrity of sports—or any other institution. At the end of 2025, in addition to its indictment of the Cleveland Guardians pitchers, the FBI announced 30 arrests involving gambling schemes in the NBA. This cavalcade of arrests has dramatically reduced trust in sports; recall that two-thirds of Americans now believe professional athletes change their performance to influence gambling outcomes. It does not require extraordinary creativity to imagine how this dynamic could extend to other domains and institutions. If more people start to believe that things only happen in the world as a direct result of shadowy interests in vast betting markets, it’s going to be a permanent open season for conspiracy theories.
The ultimate risk is almost too dark to contemplate in much detail. As the logic and culture of casinos moves from sports to politics, the scandals that have visited baseball and basketball might soon arrive in politics. Is it really so unbelievable that a politician might tip off a friend, or assuage an enemy, by giving them inside information that would allow them to profit on betting markets? Is it really so incredible to believe that a government official would try to align policy with a betting position that stood to earn them, or an allied group, hundreds of thousands of dollars? That is what a “rigged pitch” in politics would look like. It’s not just wagering on a policy outcome that you suspect will happen. It’s changing policy outcomes based on what can be wagered.
Gambling is flourishing because it meets the needs of our moment: a low-trust world, where lonely young people are seeking high-risk opportunities to launch them into wealth and comfort. In such an environment, financialization might seem to be the last form of civic participation that feels honest to a large portion of the country. Voting is compromised, and polling is manipulated, and news is algorithmically curated. But a bet settles. A game ends. There is comfort in that. In an uncertain and illegible world, it doesn’t get much more certain and legible than this: You won, or you lost.
A 2023 Wall Street Journal poll found that Americans are pulling away from practically every value that once defined national life—patriotism, religion, community, family. Young people care less than their parents about marriage, children, or faith. But nature, abhorring a vacuum, is filling the moral void left by retreating institutions with the market. Money has become our final virtue.
I often find myself thinking about the philosopher Alasdair MacIntyre, who argued in the introduction of After Virtue that modernity had destroyed the shared moral language once supplied by traditions and religion, leaving us with only the language of individual preference. Virtue did not disappear, I think, so much as it died and was reincarnated as the market. It is now the market that tells us what things are worth, what events matter, whose predictions are correct, who is winning, who counts. Money has, in a strange way, become the last moral arbiter standing—the final universal language that a pluralistic, distrustful, post-institutional society can use to communicate with itself.
As this moral vocabulary scales across culture, it also corrodes culture. In sports, when you have money on a game, you’re not rooting for a team. You’re rooting for a proposition. The social function of fandom—shared identity, inherited loyalty, something larger than yourself—dissolves into individual risk. In politics, I fear the consequences will be worse. Prediction markets can be useful for those who want to know the future, but their utility recruits participants into a relationship with the news cycle that is adversarial, and even misanthropic. A young man betting on a terrorist attack or a famine is not acting as a mere concerned citizen whose participation improves the efficiency of global prediction markets. He’s just a dude, on his phone, alone in a room, choosing to root for death.
If that doesn’t bother you, I don’t know how to make it bother you. Based on economic and market efficiency principles alone, this young man’s behavior is defensible. But there is morality outside of markets. There is more to life than the efficiency of information networks. But will we rediscover it, any time soon? Don’t bet on it.
...
Read the original on www.derekthompson.org »
Tesla runs a bug bounty program that invites researchers to find security vulnerabilities in their vehicles. To participate, I needed the actual hardware, so I started looking for Tesla Model 3 parts on eBay. My goal was to get a Tesla car computer and touchscreen running on my desk, booting the car’s operating system.
The car computer consists of two parts - the MCU (Media Control Unit) and the autopilot computer (AP) layered on top of each other. In the car, the computer is located in front of the passenger seat, roughly behind the glovebox. The part itself is the size of an iPad and the thickness of a ~500 page book and is covered in a water-cooled metal casing:
By searching for “Tesla Model 3 MCU” on Ebay, I found quite a lot of results in the $200 - $300 USD price range. Looking at the listings, I found that many of these sellers are “salvaging” companies who buy crashed cars, take them apart, and list all parts for sale individually. Sometimes, they even include a photo of the original crashed car and a way to filter their listings for parts extracted from the same vehicle.
To boot the car up and interact with it, I needed a few more things:
* An adjustable bench power supply
* A Model 3 touchscreen
* The display cable to connect them together
For the power supply, I went with an adjustable 0-30V model from Amazon. There was a 5 ampere and a 10A version available, at the time, I figured it’s safer to have some headroom and went with the 10A version — it was a very good decision, as it later turned out, the full setup could consume up to 8A at peak times. The Model 3 screens were surprisingly expensive on Ebay, I assume that is because it is a popular part to replace. I found a pretty good deal for 175 USD.
The last and most difficult part to order was the cable that connects the MCU to the screen. I needed it because both the computer and the screen were sold with their cables cut a few centimeters past the connector (interestingly, most sellers did that instead of just unplugging the cables).
This is when I discovered that Tesla publishes the wiring “Electrical Reference” for all of its cars publicly. On their service website, you can look up a specific car model, search for a component (such as the display), and it will show you exactly how the part should be wired up, what cables/connectors are used, and even what the different pins are responsible for inside a single connector:
Turns out the display uses a 6-pin cable (2 for 12V and ground, 4 for data) with a special Rosenberger 99K10D-1D5A5-D connector. I soon discovered that unless you are a car manufacturer ordering in bulk, there is no way you are buying a single Rosenberger cable like this. No Ebay listings, nothing on Aliexpress, essentially no search results at all.
After digging around a bit, I found that this cable is very similar to a more widely used automotive cable called “LVDS”, which is used to transfer video in BMW cars. At first sight, the connectors looked like a perfect match to my Rosenberger, so I placed an order:
The computer arrived first. To attempt to power it on, I looked up which pin of which connector I needed to attach 12V and ground to using the Tesla schematics & the few pictures online of people doing the same desk-MCU setup. Since the computer included the shortly cut cables, I was able to strip the relevant wires and attach the power supply’s clips to the right ones:
I saw a couple of red LEDs start flashing, and the computer started up! Since I had no screen yet, there were not many ways to interact with the car. Reading @lewurm’s previous research on GitHub, I knew that, at least in older car versions, there was a network inside the car, with some components having their own webserver. I connected an Ethernet cable to the port next to the power connector and to my laptop.
This network does not have DHCP, so you have to manually set your IP address. The IP you select has to be in 192.168.90.X/24, and should be higher than 192.168.90.105 to not conflict with other hosts on the network. On Reddit, I found the contents of an older /etc/hosts file from a car which shows the hosts that are normally associated with specific IPs:
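On Linux, that manual setup is a couple of commands; a sketch assuming the laptop's interface is called `eth0` (yours will likely differ) and picking .110 as the host part:

```shell
# Assign a static address on the car's subnet - no DHCP server will hand one out.
# "eth0" is an assumed interface name; check yours with `ip link`.
sudo ip addr add 192.168.90.110/24 dev eth0
sudo ip link set eth0 up
```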
@lewurm’s blog mentioned that SSH on port :22 and a webserver on :8080 were open on 192.168.90.100, the MCU. Was this still the case on newer models? Yes!
I had already found 2 services to explore on the MCU:
* An SSH server which states “SSH allowed: vehicle parked” - quite funny given the circumstances. This SSH server requires specially signed SSH keys which only Tesla is supposed to be able to generate. Interestingly, Tesla offers a “Root access program” on their bug bounty program: researchers who find at least one valid “rooting” vulnerability will receive a permanent SSH certificate for their own car, allowing them to log in as root and continue their research further. A nice perk, as it is much easier to find additional vulnerabilities once you are on the inside.
* A REST-like API on :8080 which returned a history of “tasks”. This service is called “ODIN” (On-Board Diagnostic Interface Network), and is intentionally exposed to be used by Tesla’s diagnostics tool “Toolbox”.
Around this time, I also removed the metal shielding to see exactly what the boards look like inside. You can see the two different boards which were stacked on top of each other:
Once the screen and the BMW LVDS cable arrived, it unfortunately became clear that the connector was not going to fit. The BMW connector was much thicker on the sides, and it was not possible to plug it into the screen. This led to some super sketchy improvised attempts to strip the two original “tail” cables from the MCU and the screen and connect the individual wires together. The wires were really sensitive and thin. The setup worked for a couple of seconds, but caused wire debris to fall on the PCB and short it, burning one of the power controller chips:
It was extremely hard to find the name/model of the chip that got burned, especially since part of the text printed on it had become unreadable due to the damage. To be able to continue with the project, I had to order a whole other car computer.
In the meantime, my friend Yasser (@n3r0li) somehow pulled off the impossible and identified it as the “MAX16932CATIS/V+T” step-down controller, responsible for converting power down to lower voltages. We ordered the chip and took the board to a local PCB repair shop, where they successfully replaced it and fixed the MCU. Now I had two computers to work with.
So I really did need that Rosenberger cable, there was no getting around it.
After having no luck finding it online and even visiting a Tesla service center in London (an odd encounter, to say the least), I had to accept what I had been trying to avoid: buying an entire Dashboard Wiring Harness.
Back in the Tesla Electrical Reference, in addition to the connectors, one can find every part number. Looking at the cable which connects the MCU to the screen, the part number 1067960-XX-E appears. Searching for it on Ebay brings up this monstrosity:
Turns out that actual cars don’t have individual cables. Instead, they have these big “looms”, which bundle many cables from a nearby area into a single harness. This is why I could not find the individual cable earlier: it simply isn’t manufactured on its own. Unfortunately, I had no choice but to buy the entire loom for 80 USD.
Despite how bulky it was, the loom worked perfectly. The car booted, the touch screen started up, and I had a working car computer on my desk, running the car’s operating system!
With the system running, I can now start playing with the user interface, interacting with the exposed network interfaces, exploring the CAN buses, and perhaps even attempting to extract the firmware.
...
Read the original on bugs.xdavidhu.me »
Last year, I visited my grandmother’s house for the first time after the pandemic and came across a cupboard full of loose old photos. I counted 1,351 of them spanning all the way from my grandparents in their early 20s, my mom as a baby, to me in middle school, just around the time when we got our first smartphone and all photos since then were backed up online.
Everything was all over the place so I spent some time going through them individually and organizing them into groups. Some of the initial groups were based on the physical attributes of the photograph like similar aspect ratios or film stock. For example, there was a group of black/white 32mm square pictures that were taken around the time when my grandfather was in his mid 20s.
As I got done grouping all of them, I was able to see flashes of stories in my head, but they were ephemeral and fragile. For instance, there was a group of photos that looked like they were taken during my grandparents’ wedding, but I didn’t know the chronological order in which they were taken, because EXIF metadata didn’t exist around that time.
So I sat down with my grandmother and asked her to reorder the photos and tell me everything she could remember about her wedding. Her face lit up as she narrated the backstory behind the occasion, going from photo to photo, resurfacing details that had been dormant for decades. I wrote everything down, recorded the names of people in some of the photos, some of whom I recognized as younger versions of my uncles and aunts.
After the “interview”, I had multiple pages of notes connecting the photos to events that happened 50 years ago. Since the account was historical, as an inside joke I wanted to see if I could clean it up and present it as a page on Wikipedia so I could print it and give it to her. So I cloned MediaWiki, spun up a local instance, and began my editorial work. I used the 2011 Royal Wedding as reference and drafted a page starting with the classic infobox and the lead paragraph.
I split up the rest of the content into sections and filled them with everything I could verify, like dates, names, places, who sat where. I scanned all the photos and spent some time figuring out what to place where. Every photo placement was followed up with a descriptive caption too.
Whenever I mentioned a person, I linked them to an empty stub page. After I found out I could also link to the real Wikipedia, I was able to link things to real pages that provided wider context to things like venues, rituals, and the political climate around that time, like for instance a legal amendment that was relevant to the wedding ceremony.
In two evenings, I was able to document a full backstory for the photos into a neat article. These two evenings also made me realize just how powerful encyclopedia software is to record and preserve media and knowledge that would’ve otherwise been lost over time.
This was so much fun that I spent the following months writing pages to account for all the photos that needed to be stitched together.
I got help from r/genealogy about how to approach recording oral history and I was given resources to better conduct interviews, shoutout to u/stemmatis! I would get on calls with my grandmother and people in the family, ask them a couple of questions, and then write. It was also around this time that I began using audio transcription and language models to make the editorial process easier.
Over time, I managed to write a lot of pages connecting people to different life events. The encyclopedia format made it easy to connect dots I would have never found on my own, like discovering that one of the singers at my grandparents’ wedding was the same nurse who helped deliver me.
After finding all the stories behind the physical photos, I started to work on digital photos and videos that I had stored on Google Photos. The wonderful thing about digital photos is that they come with EXIF metadata that can reveal extra information like date, time, and sometimes geographical coordinates.
This time, without any interviews, I wanted to see if I could use a language model to create a page based on just browsing through the photos. As my first experiment, I created a folder with 625 photos of a family trip to Coorg back in 2012.
I pointed Claude Code at the directory and asked it to draft a wiki page by browsing through the images. I hinted at using ImageMagick to create contact sheets so it would help with browsing through multiple photos at once.
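The contact-sheet step can be sketched with ImageMagick's `montage` tool; the directory name, tile layout, and thumbnail geometry here are my own illustrative choices, not what was actually run:

```shell
# Tile 25 thumbnails per sheet so the model can scan many photos in one read.
# ./photos/ and the geometry values are assumptions for illustration.
montage ./photos/*.jpg -tile 5x5 -geometry 320x240+4+4 contact_sheet_%02d.jpg
```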
A few minutes and a couple of tokens later, it had created a compelling draft with a detailed account of everything we did during the trip, by time of day. The model had no location data to work with, just timestamps and visual content, but it was able to identify the places from the photos alone, including ones that I had forgotten by now. It picked up details on the modes of transportation we used to get between places just from what it could see.
After I clarified who some of the people in the pictures were, it went on to identify them automatically in the captions. With a detailed outline ready, the page still only contained what the available data could support, so to fill in the gaps I shared a list of anecdotes from my point of view and the model inserted them where the narrative called for them.
The Coorg trip only had photos to work with. My trip to Mexico City in 2022 had a lot more. I had taken 291 photos and 343 videos with an iPhone 12 Pro that included geographical coordinates as part of the EXIF metadata.
On top of that, I exported my location timeline from Google Maps, my Uber trips, my bank transactions, and Shazam history. I would ask Claude Code to start with the photos and then gradually give it access to the different data exports.
Here are some of the things it did across multiple runs:
* It cross-referenced my bank transactions with location data to ascertain the restaurants I went to.
* Some of the photos and videos showed me in attendance at a soccer match, however, it was unknown which teams were playing. The model looked up my bank transactions and found a Ticketmaster invoice with information about the teams and the name of the tournament.
* It looked up my Uber trips to figure out travel times and the exact pickup and drop-off locations.
* It used my Shazam tracks to write about the kinds of songs that were playing at a place, like Cuban songs at a Cuban restaurant.
* In a follow-up, I mentioned remembering an evening dinner with a guitarist playing in the background. It filtered my media to evening captures, found a frame in a video with the guitarist, uploaded it, and referenced the moment in the page.
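As a rough illustration of the transaction/location cross-referencing, here is a minimal Python sketch; the records, field names, and the one-hour matching window are all my assumptions, not the actual export schemas:

```python
from datetime import datetime, timedelta

# Hypothetical export records - field names and values are made up for illustration.
transactions = [
    {"time": datetime(2022, 3, 14, 20, 5), "merchant": "Ticketmaster", "amount": 42.0},
    {"time": datetime(2022, 3, 15, 13, 30), "merchant": "Taqueria Orinoco", "amount": 18.5},
]
locations = [
    {"time": datetime(2022, 3, 15, 13, 25), "place": "Roma Norte"},
    {"time": datetime(2022, 3, 14, 19, 50), "place": "Estadio Azteca"},
]

def nearest_place(t, window=timedelta(hours=1)):
    """Return the location sample closest in time to t, if within the window."""
    best = min(locations, key=lambda loc: abs(loc["time"] - t))
    return best["place"] if abs(best["time"] - t) <= window else None

# Attach a place to each transaction, mimicking the cross-referencing step.
for tx in transactions:
    tx["place"] = nearest_place(tx["time"])
```

A real run would parse the CSV/JSON exports first, but the join itself is just this nearest-timestamp match.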
The MediaWiki architecture worked well with the edits, since for every new data source it would make amendments like a real Wikipedia contributor would. I leaned heavily on features that already existed. Talk pages to clarify gaps and consolidate research notes, categories to group pages by theme, revision history to track how a page evolved as new data came in. I didn’t have to build any of this, it was all just there.
What started as me helping the model fill in gaps from my memory gradually inverted. The model was now surfacing things I had completely forgotten, cross-referencing details across data sources in ways I never would have done manually.
So I started pointing Claude Code at other data exports. My Facebook, Instagram, and WhatsApp archives held around 100k messages and a couple thousand voice notes exchanged with close friends over a decade.
The model traced the arc of our friendships through the messages, pulled out the life episodes we had talked each other through, and wove them into multiple pages that read like they were written by someone who knew us both. When I shared the pages with my friends, they wanted to read every single one.
This is when I realized I was no longer working on a family history project. What I had been building, page by page, was a personal encyclopedia. A structured, browsable, interconnected account of my life compiled from the data I already had lying around.
I’ve been working on this as whoami.wiki. It uses MediaWiki as its foundation, which turns out to be a great fit because language models already understand Wikipedia conventions deeply from their training data. You bring your data exports, and agents draft the pages for you to review.
A page about your grandmother’s wedding works the same way as a page about a royal wedding. A page about your best friend works the same way as a page about a public figure.
Oh, and it’s genuinely fun! Putting together the encyclopedia felt like the early days of the Facebook timeline: browsing through finished pages, following links between people and events, and stumbling on details I had forgotten.
But more than the technology, it’s the stories that stayed with me. Writing about my grandmother’s life surfaced things I’d never known, her years as a single mother, the decisions she had to make, the resilience it took. She was a stronger woman than I ever realized. Going through my friendships, I found moments of endearment that I had nearly forgotten, the days friends went the extra mile to be good to me. Seeing those moments laid out on a page made me pick up the phone and call a few of them. The encyclopedia didn’t just organize my data, it made me pay closer attention to the people in my life.
Today I’m releasing whoami.wiki as an open source project. The encyclopedia is yours, it runs on your machine, your data stays with you, and any model can read it. The project is early and I’m still figuring a lot of it out, but if this sounds interesting, you can get started here and tell me what you think!
...
Read the original on whoami.wiki »