10 interesting stories served every morning and every evening.
🚨 The Conservatives (EPP) are attempting to force a new vote on Thursday (26th), seeking to reverse Parliament’s NO on indiscriminate scanning. This is a direct attack on democracy and blatant disregard for your right to privacy. No means no. Take action now!
...
Read the original on fightchatcontrol.eu »
After a team member summoned Copilot to correct a typo in a PR of mine, Copilot edited my PR description to include an ad for itself and Raycast.
This is horrific. I knew this kind of bullshit would happen eventually, but I didn’t expect it so soon.
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
...
Read the original on notes.zachmanson.com »
I’ve taken agency in the treatment of my bone cancer (osteosarcoma in the T5 vertebra of the upper spine). After I ran out of standard-of-care treatment options and there were no trials available to me, I started pursuing maximum diagnostics, creating new treatments, running treatments in parallel, and scaling this approach for others.
Elliot Hershberg wrote a great and extensive article about my cancer journey.
My cancer journey deck is embedded below, and there is also a recording of an OpenAI Forum presentation. The companies we are building to scale this approach for others can be found at evenone.ventures. Please scroll further on this page for my data and other information.
I think the medical industry can be more patient-first; see this great article by Ruxandra: https://www.writingruxandrabio.com/p/the-bureaucracy-blocking-the-chance
For my data please see https://osteosarc.com/ that includes my treatment timeline and a data overview doc with 25TB of publicly readable Google Cloud buckets.
Please subscribe to my mailing list
...
Read the original on sytse.com »
Linux gaming has come a long way. When Valve launched Proton back in 2018, it felt like a turning point, turning the Linux gaming experience from “technically possible if you’re okay with a lot of pain” to something that more or less worked. Since then, we’ve seen incremental Wine releases, each one chipping away at compatibility issues and improving performance bit by bit. Wine 10, Wine 9, and so on; each one a collection of bug fixes and small improvements that kept the ecosystem moving forward.
Wine 11 is different. This isn’t just another yearly release with a few hundred bug fixes and some compatibility tweaks. It still brings a huge number of changes and fixes, but it also ships with NTSYNC support, a feature that has been years in the making and that rewrites how Wine handles one of the most performance-sensitive operations in modern gaming. On top of that, the WoW64 architecture overhaul is finally complete, the Wayland driver has grown up a lot, and there’s a big list of smaller improvements that collectively make this feel like an all-new project.
I should be clear: not every game is going to see a night-and-day difference. Some titles will run identically to before. But for the games that do benefit from these changes, the improvements range from noticeable to absurd. And because Proton, SteamOS, and every downstream project builds on top of Wine, those gains trickle down to everyone.
Everything up until now was a workaround
Esync and fsync worked, but they weren’t ideal
If you’ve spent any time tweaking Wine or Proton settings, you’ve probably encountered the terms “esync” and “fsync” before. Maybe you toggled them on in Lutris, or noticed them in Proton launch options, without fully understanding what they do. To understand why NTSYNC matters, you need to understand the problem these solutions were all trying to solve.
Windows games, especially modern ones, are heavily multi-threaded. Your CPU isn’t just running one thing at a time, and instead, it’s juggling rendering, physics calculations, asset streaming, audio processing, AI routines, and more, all in parallel across multiple threads. These threads need to coordinate with each other constantly. One thread might need to wait for another to finish loading a texture before it can render a frame. Another might need exclusive access to a shared resource so two threads don’t try to modify it simultaneously.
Windows handles this coordination through what are called NT synchronization primitives… mutexes, semaphores, events, and the like. They’re baked deep into the Windows kernel, and games rely on them heavily. The problem is that Linux doesn’t have native equivalents that behave exactly the same way. Wine has historically had to emulate these synchronization mechanisms, and the way it did so was, to put it simply, not ideal.
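As a rough Python analog (using `threading.Event`, not the actual Windows API), here is the texture-loading handshake described above in miniature: one thread blocks until another signals that a shared resource is ready.

```python
import threading

# A Python analog of an NT event object: the "render thread" (main
# thread here) blocks until the loader thread signals readiness.
texture_ready = threading.Event()
loaded = []

def loader():
    loaded.append("texture.dds")   # pretend to stream an asset from disk
    texture_ready.set()            # signal, like setting an NT event

t = threading.Thread(target=loader)
t.start()
texture_ready.wait()               # render thread blocks here until signaled
t.join()
print(loaded)  # → ['texture.dds']
```

Games do this kind of wait/signal dance thousands of times per second, which is why the cost of each individual wait matters so much.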
The original approach involved making a round-trip RPC call to a dedicated “kernel” process called wineserver every single time a game needed to synchronize between threads. For a game making thousands of these calls per second, that overhead added up fast and became a bottleneck, one that manifested as subtle frame stutters, inconsistent frame pacing, and games that just felt a little bit off even when the raw FPS numbers looked fine.
Esync was the first attempt at a workaround. Developed by Elizabeth Figura at CodeWeavers, it used Linux’s eventfd system call to handle synchronization without bouncing through the wineserver. It worked, and it helped, but it had quirks. Some distros ran into issues with file descriptor limits, since every synchronization object needed its own file descriptor, and games that opened a lot of them could hit the system’s ceiling quite quickly.
Fsync came next, using Linux futexes for even better performance. It was faster than esync in most cases, but it required out-of-tree kernel patches that never made it into the mainline Linux kernel, nor into upstream Wine out of the box. That meant you needed a custom or patched kernel to use it, which is fine for enthusiasts running CachyOS or Proton-GE, but not exactly accessible for the average user on Ubuntu or Fedora. Futex2, often referred to interchangeably with fsync, did make it into Linux kernel 5.16 as futex_waitv, but that’s not the original fsync implementation: fsync used futex_wait_multiple, while Futex2 uses futex_waitv. Applications such as Lutris still label it fsync, though. It’s still fsync in spirit, but it’s not the original fsync.
Here’s the thing about both esync and fsync: they were workarounds. Clever ones, but workarounds nonetheless. They approximated NT synchronization behavior using Linux primitives that weren’t designed for the job, and certain edge cases simply couldn’t be handled correctly. Operations like NtPulseEvent() and the “wait-for-all” mode in NtWaitForMultipleObjects() require direct control over the underlying wait queues in ways that user-space implementations just can’t reliably provide.
Synchronization at the kernel level, rather than in user space
NTSYNC takes a completely different approach. Instead of trying to shoehorn Windows synchronization behavior into existing Linux primitives, it adds a new kernel driver that directly models the Windows NT synchronization object API. It exposes a /dev/ntsync device that Wine can talk to, and the kernel itself handles the coordination. No more round trips to wineserver, no more approximations, and the synchronization happens in the kernel, which is where it should be. And it has proper queue management, proper event semantics, and proper atomic operations.
What makes this even better is that NTSYNC was developed by the same person who created esync and fsync in the first place. Elizabeth Figura has been working on this problem for years, iterating through multiple kernel patch revisions, presenting the work at the Linux Plumbers Conference in 2023, and pushing through multiple versions of the patch set before it was finally merged into the mainline Linux kernel with version 6.14.
The numbers are wild. In developer benchmarks, Dirt 3 went from 110.6 FPS to 860.7 FPS, an improvement of roughly 678%. Resident Evil 2 jumped from 26 FPS to 77 FPS. Call of Juarez went from 99.8 FPS to 224.1 FPS. Tiny Tina’s Wonderlands saw gains from 130 FPS to 360 FPS. And Call of Duty: Black Ops I is now actually playable on Linux. Those benchmarks compare Wine with NTSYNC against upstream vanilla Wine, which means no fsync or esync either, so gamers who already use fsync won’t see such a leap in most games.
The games that benefit most from NTSYNC are the ones that were struggling before: titles with heavy multi-threaded workloads where the synchronization overhead was a genuine bottleneck. For those games, the difference is night and day. And unlike fsync, NTSYNC is in the mainline kernel, meaning you don’t need any custom patches or out-of-tree modules for it to work. Any distro shipping kernel 6.14 or later, which at this point includes Fedora 42, Ubuntu 25.04, and more recent releases, will support it. Valve has already added the NTSYNC kernel driver to the SteamOS 3.7.20 beta, loading the module by default, and the unofficial Proton fork Proton GE already has it enabled. When Valve’s official Proton rebases on Wine 11, every Steam Deck owner gets this for free.
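If you want to check your own system, a rough probe looks like this (a sketch; on some distros the device only appears after the module is loaded, e.g. with `modprobe ntsync`):

```python
import os
import re

# Kernel 6.14+ ships the NTSYNC driver; Wine looks for /dev/ntsync.
release = os.uname().release                    # e.g. "6.14.0-63-generic"
m = re.match(r"(\d+)\.(\d+)", release)
kernel_ok = m is not None and (int(m[1]), int(m[2])) >= (6, 14)
device_ok = os.path.exists("/dev/ntsync")

print(f"kernel {release}: {'new enough' if kernel_ok else 'older than 6.14'}")
print(f"/dev/ntsync: {'present' if device_ok else 'missing'}")
```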
All of this is what makes NTSYNC such a big deal, as it’s not simply a run-of-the-mill performance patch. Instead, it’s something much bigger: this is the first time Wine’s synchronization has been correct at the kernel level, implemented in the mainline Linux kernel, and available to everyone without jumping through hoops.
If NTSYNC is the headline feature, the completion of Wine’s WoW64 architecture is the change that will quietly improve everyone’s life going forward. On Windows, WoW64 (Windows 32-bit on Windows 64-bit) is the subsystem that lets 32-bit applications run on 64-bit systems. Wine has been working toward its own implementation of this for years, and Wine 11 marks the point where it’s officially done.
What this means in practice is that you no longer need 32-bit system libraries installed on your 64-bit Linux system to run 32-bit Windows applications. Wine handles the translation internally, using a single unified binary that automatically detects whether it’s dealing with a 32-bit or 64-bit executable. The old days of installing multilib packages, configuring ia32-libs, or fighting with 32-bit dependencies on your 64-bit distro are thankfully over.
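That 32-vs-64-bit decision comes down to one field in the executable’s PE/COFF header. A self-contained sketch of the check (illustrative only; Wine’s real loader does far more, and the fake header built below is not a runnable EXE):

```python
import struct

# Machine IDs from the PE/COFF specification.
MACHINES = {0x14C: "i386 (32-bit)", 0x8664: "x86-64 (64-bit)", 0xAA64: "ARM64"}

def pe_machine(data: bytes) -> str:
    assert data[:2] == b"MZ", "not a DOS/PE executable"
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]    # e_lfanew field
    assert data[pe_off:pe_off + 4] == b"PE\0\0", "missing PE signature"
    machine = struct.unpack_from("<H", data, pe_off + 4)[0]
    return MACHINES.get(machine, hex(machine))

# Build a minimal fake header just to exercise the check:
hdr = bytearray(0x48)
hdr[:2] = b"MZ"
struct.pack_into("<I", hdr, 0x3C, 0x40)                 # PE header at 0x40
hdr[0x40:0x44] = b"PE\0\0"
struct.pack_into("<H", hdr, 0x44, 0x8664)               # x86-64 machine ID
print(pe_machine(bytes(hdr)))  # → x86-64 (64-bit)
```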
This might sound like a small quality-of-life improvement, but it’s a massive piece of engineering work. The WoW64 mode now handles OpenGL memory mappings, SCSI pass-through, and even 16-bit application support. Yes, 16-bit! If you’ve got ancient Windows software from the ’90s that you need to run for whatever reason, Wine 11 has you covered.
For gaming specifically, this matters because a surprising number of games, especially older ones, are 32-bit executables. Previously, getting these to work often meant wrestling with your distro’s multilib setup, which varied in quality and ease depending on whether you were on Ubuntu, Arch, Fedora, or something else entirely. Now, Wine just handles it for you.
The rest of Wine 11 isn’t just filler
There are more fixes, too
It’s easy to let NTSYNC and WoW64 steal the spotlight, but Wine 11 is packed to the gills with other stuff worth talking about.
The Wayland driver has come a long way. Clipboard support now works bidirectionally between Wine and native Wayland applications, which is one of those things you don’t think about until it doesn’t work and it drives you mad. Drag-and-drop from Wayland apps into Wine windows is supported. Display mode changes are now emulated through compositor scaling, which means older games that try to switch to lower resolutions like 640x480 actually behave properly instead of leaving you with a broken desktop. If you’ve been holding off on switching from X11 to Wayland because of Wine compatibility concerns, Wine 11 removes a lot of those barriers.
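Conceptually, that mode-change emulation is just letterboxed scaling: the game believes the display switched modes, while the compositor scales the game’s buffer into the native resolution. Illustrative math only, not Wine’s actual code:

```python
def scaled_viewport(native, requested):
    """Fit a requested mode inside the native mode, preserving aspect
    ratio and centering the result (letterboxing/pillarboxing)."""
    nw, nh = native
    rw, rh = requested
    scale = min(nw / rw, nh / rh)
    w, h = round(rw * scale), round(rh * scale)
    return ((nw - w) // 2, (nh - h) // 2, w, h)   # x, y, width, height

# A 640x480 game on a 1920x1080 panel gets pillarboxed:
print(scaled_viewport((1920, 1080), (640, 480)))  # → (240, 0, 1440, 1080)
```

The real display never leaves its native mode, which is why the desktop stays intact when the game exits.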
On the graphics front, EGL is now the default backend for OpenGL rendering on X11, replacing the older GLX path. Vulkan support has been bumped to API version 1.4, and there’s initial support for hardware-accelerated H.264 decoding through Direct3D 11 video APIs using Vulkan Video. That last one is particularly interesting for games and applications that use video playback for things like cutscenes or in-game streaming.
Force feedback support has been improved for racing wheels and flight sticks, which is great news if you’re running a sim setup on Linux. As well, Bluetooth has received a new driver with BLE services and proper pairing support, MIDI soundfont handling has been improved for legacy game music, and there are a couple of minor extras like Zip64 compression support, Unicode 17.0.0 support, TWAIN 2.0 scanning for 64-bit apps, and IPv6 ping functionality.
Thread priority management has been improved on both Linux and macOS, which helps with multi-threaded application performance beyond just the NTSYNC gains. ARM64 devices can now simulate 4K page sizes on systems with larger native pages, which keeps the door open for Wine on Arm hardware. And with more Arm-based Linux devices showing up every year, that matters more than it used to.
Plus, there are a ton of bug fixes. Games like Nioh 2, StarCraft 2, The Witcher 2, Call of Duty: Black Ops II, Final Fantasy XI, and Battle.net all received specific compatibility fixes, on top of the broader improvements made across the board that will lift performance and compatibility for many more titles.
Wine 11 is a big release, and not just because of NTSYNC. Sure, NTSYNC alone would have made it worth paying attention to, but combined with the WoW64 completion, the Wayland improvements, and the sheer volume of fixes, it’s the most important Wine release since Proton made Linux gaming viable. Everything built on top of Wine, from Proton to Lutris to Bottles, gets better because of it. If you play games on Linux at all, Wine 11 is worth trying out.
...
Read the original on www.xda-developers.com »
It’s been about a year since coding agents appeared on the scene that could actually build you full projects. There were precursors like Aider and early Cursor, but they were more assistant than agent. The new generation is enticing, and a lot of us have spent a lot of free time building all the projects we always wanted to build but never had time to.
And I think that’s fine. Spending your free time building things is super enjoyable, and most of the time you don’t really have to care about code quality and maintainability. It also gives you a way to learn a new tech stack if you so want.
During the Christmas break, both Anthropic and OpenAI handed out some freebies to hook people on their addictive slot machines. For many, it was the first time they experienced the magic of agentic coding. The fold’s getting bigger.
Coding agents are now also being introduced into production codebases. After 12 months, we are beginning to see the effects of all that “progress”. Here’s my current view.
While all of this is anecdotal, it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services. And user interfaces have the weirdest fucking bugs that you’d think a QA team would catch. I give you that that’s been the case for longer than agents exist. But we seem to be accelerating.
We don’t have access to the internals of companies. But every now and then something slips through to a news reporter. Like this supposed AI-caused outage at AWS. Which AWS immediately “corrected”. Only to then follow up internally with a 90-day reset.
Satya Nadella, the CEO of Microsoft, has been going on about how much code is now being written by AI at Microsoft. While we don’t have direct evidence, there sure is a feeling that Windows is going down the shitter. Microsoft itself seems to agree, based on this fine blog post.
Companies claiming 100% of their product’s code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes: that is not the seal of quality they think it is. And it’s definitely not good advertising for the fever dream of having your agents do all the work for you.
Through the grapevine you hear more and more people, from software companies small and large, saying they have agentically coded themselves into a corner. No code review, design decisions delegated to the agent, a gazillion features nobody asked for. That’ll do it.
We have basically given up all discipline and agency for a sort of addiction, where your highest goal is to produce the largest amount of code in the shortest amount of time. Consequences be damned.
You’re building an orchestration layer to command an army of autonomous agents. You installed Beads, completely oblivious to the fact that it’s basically uninstallable malware. The internet told you to. That’s how you should work or you’re ngmi. You’re ralphing the loop. Look, Anthropic built a C compiler with an agent swarm. It’s kind of broken, but surely the next generation of LLMs can fix it. Oh my god, Cursor built a browser with a battalion of agents. Yes, of course, it’s not really working and it needed a human to spin the wheel a little bit every now and then. But surely the next generation of LLMs will fix it. Pinky promise! Distribute, divide and conquer, autonomy, dark factories, software is solved in the next 6 months. SaaS is dead, my grandma just had her Claw build her own Shopify!
Now again, this can work for your side project barely anyone is using, including yourself. And hey, maybe there’s somebody out there who can actually make this work for a software product that’s not a steaming pile of garbage and is used by actual humans in anger.
If that’s you, more power to you. But at least among my circle of peers I have yet to find evidence that this kind of shit works. Maybe we all have skill issues.
The problem with agents is that they make errors. Which is fine, humans also make errors. Maybe they are just correctness errors. Easy to identify and fix. Add a regression test on top for bonus points. Or maybe it’s a code smell your linter doesn’t catch. A useless method here, a type that doesn’t make sense, duplicated code over there. On their own, these are harmless. A human will also do such booboos.
But clankers aren’t humans. A human makes the same error a few times. Eventually they learn not to make it again. Either because someone starts screaming at them or because they’re on a genuine learning path.
An agent has no such learning ability. At least not out of the box. It will continue making the same errors over and over again. Depending on the training data it might also come up with glorious new interpolations of different errors.
Now you can try to teach your agent. Tell it to not make that booboo again in your AGENTS.md. Concoct the most complex memory system and have it look up previous errors and best practices. And that can be effective for a specific category of errors. But it also requires you to actually observe the agent making that error.
There’s a much more important difference between clanker and human. A human is a bottleneck. A human cannot shit out 20,000 lines of code in a few hours. Even if the human creates such booboos at high frequency, there’s only so many booboos the human can introduce in a codebase per day. The booboos will compound at a very slow rate. Usually, if the booboo pain gets too big, the human, who hates pain, will spend some time fixing up the booboos. Or the human gets fired and someone else fixes up the booboos. So the pain goes away.
With an orchestrated army of agents, there is no bottleneck, no human pain. These tiny little harmless booboos suddenly compound at a rate that’s unsustainable. You have removed yourself from the loop, so you don’t even know that all the innocent booboos have formed a monster of a codebase. You only feel the pain when it’s too late.
Then one day you turn around and want to add a new feature. But the architecture, which is largely booboos at this point, doesn’t allow your army of agents to make the change in a functioning way. Or your users are screaming at you because something in the latest release broke and deleted some user data.
You realize you can no longer trust the codebase. Worse, you realize that the gazillions of unit, snapshot, and e2e tests you had your clankers write are equally untrustworthy. The only thing that’s still a reliable measure of “does this work” is manually testing the product. Congrats, you fucked yourself (and your company).
You have zero fucking idea what’s going on because you delegated all your agency to your agents. You let them run free, and they are merchants of complexity. They have seen many bad architectural decisions in their training data and throughout their RL training. You have told them to architect your application. Guess what the result is?
An immense amount of complexity, an amalgam of terrible cargo cult “industry best practices”, that you didn’t rein in before it was too late. But it’s worse than that.
Your agents never see each other’s runs, never get to see all of your codebase, never get to see all the decisions that were made by you or other agents before they make a change. As such, an agent’s decisions are always local, which leads to the exact booboos described above. Immense amounts of code duplication, abstractions for abstractions’ sake.
All of this compounds into an unrecoverable mess of complexity. The exact same mess you find in human-made enterprise codebases. Those arrive at that state because the pain is distributed over a massive amount of people. The individual suffering doesn’t pass the threshold of “I need to fix this”. The individual might not even have the means to fix things. And organizations have super high pain tolerance. But human-made enterprise codebases take years to get there. The organization slowly evolves along with the complexity in a demented kind of synergy and learns how to deal with it.
With agents and a team of 2 humans, you can get to that complexity within weeks.
So now you hope your agents can fix the mess, refactor it, make it pristine. But your agents can also no longer deal with it. Because the codebase and complexity are too big, and they only ever have a local view of the mess.
And I’m not just talking about context window size or long-context attention mechanisms failing at the sight of a one-million-line monster. Those are obvious technical limitations. It’s more devious than that.
Before your agent can try and help fix the mess, it needs to find all the code that needs changing and all existing code it can reuse. We call that agentic search. How the agent does that depends on the tools it has. You can give it a Bash tool so it can ripgrep its way through the codebase. You can give it some queryable codebase index, an LSP server, a vector database. In the end it doesn’t matter much. The bigger the codebase, the lower the recall. Low recall means that your agent will, in fact, not find all the code it needs to do a good job.
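“Recall” here is the standard information-retrieval ratio: of the code that actually needed to be found, how much did the search surface? A toy illustration with made-up file names:

```python
# Files that genuinely need changing for the task (ground truth) versus
# what the agent's search (ripgrep, index, vector DB, ...) surfaced.
relevant = {"auth.py", "session.py", "tokens.py", "middleware.py"}
found = {"auth.py", "session.py", "utils.py"}

recall = len(relevant & found) / len(relevant)
print(f"recall = {recall:.2f}")  # → recall = 0.50
```

At 0.50 recall the agent edits half the relevant code and, never having seen `tokens.py`, happily duplicates what’s already there.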
This is also why those code smell booboos happen in the first place. The agent misses existing code, duplicates things, introduces inconsistencies. And then they blossom into a beautiful shit flower of complexity.
How do we avoid all of this?
Coding agents are sirens, luring you in with their speed of code generation and jagged intelligence, often completing a simple task with high quality at breakneck velocity. Things start falling apart when you think: “Oh golly, this thing is great. Computer, do my work!”.
There’s nothing wrong with delegating tasks to agents, obviously. Good agent tasks share a few properties: they can be scoped so the agent doesn’t need to understand the full system. The loop can be closed, that is, the agent has a way to evaluate its own work. The output isn’t mission critical, just some ad hoc tool or internal piece of software nobody’s life or revenue depends on. Or you just need a rubber duck to bounce ideas against, which basically means bouncing your idea against the compressed wisdom of the internet and synthetic training data. If any of that applies, you found the perfect task for the agent, provided that you as the human are the final quality gate.
Karpathy’s auto-research applied to speeding up startup time of your app? Great! As long as you understand that the code it spits out is not production-ready at all. Auto-research works because you give it an evaluation function that lets the agent measure its work against some metric, like startup time or loss. But that evaluation function only captures a very narrow metric. The agent will happily ignore any metrics not captured by the evaluation function, such as code quality, complexity, or even correctness, if your evaluation function is foobar.
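In miniature, such an evaluation function is a single scalar the agent optimizes, blind to everything it doesn’t measure (a hypothetical sketch, not Karpathy’s actual harness):

```python
import time

def evaluate(startup_fn, runs: int = 5) -> float:
    """Score a candidate purely on best-of-N startup time. Code quality,
    complexity, and correctness outside this path are invisible to it."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        startup_fn()                 # the "app startup" being optimized
        best = min(best, time.perf_counter() - t0)
    return best  # lower is "better", and it is the only signal

# The agent will optimize this number and nothing else:
print(f"best startup: {evaluate(lambda: sum(range(100_000))):.6f}s")
```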
The point is: let the agent do the boring stuff, the stuff that won’t teach you anything new, or try out different things you’d otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation. Yes, sure, you can also use an agent for that final step.
And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you’re actually building and why. Give yourself an opportunity to say, fuck no, we don’t need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.
Anything that defines the gestalt of your system, that is architecture, API, and so on, write it by hand. Maybe use tab completion for some nostalgic feels. Or do some pair programming with your agent. Be in the code. Because the simple act of having to write the thing or seeing it being built up step by step introduces friction that allows you to better understand what you want to build and how the system “feels”. This is where your experience and taste come in, something the current SOTA models simply cannot yet replace. And slowing the fuck down and suffering some friction is what allows you to learn and grow.
The end result will be systems and codebases that continue to be maintainable, at least as maintainable as our old systems before agents. Yes, those were not perfect either. Your users will thank you, as your product now sparks joy instead of slop. You’ll build fewer features, but the right ones. Learning to say no is a feature in itself.
You can sleep well knowing that you still have an idea what the fuck is going on, and that you have agency. Your understanding allows you to fix the recall problem of agentic search, leading to better clanker outputs that need less massaging. And if shit hits the fan, you are able to go in and fix it. Or if your initial design has been suboptimal, you understand why it’s suboptimal, and how to refactor it into something better. With or without an agent, don’t fucking care.
All of this requires discipline and agency.
All of this requires humans.
...
Read the original on mariozechner.at »
Microsoft just announced a 7-point plan to fix Windows 11, and the tech press is treating it like a redemption arc. Pavan Davuluri, the Windows president, admitted in January 2026 that “Windows 11 had gone off track” and said Microsoft was entering a mode called “swarming” where engineers would be pulled off new features to fix existing problems.
I saw this headline and my first thought was: it’s like being in an abusive relationship. They beat you, then show up with flowers saying they’ve changed. And everyone around you says “see, they’re getting better.” But the bruises are still there and the apology only covers the hits people noticed.
I want to walk through what Microsoft actually did to Windows 11 over the past four years, because this “fix” announcement only makes sense when you see the full damage list and realize that the worst offenses aren’t even part of the repair plan.
The Copilot invasion started September 26, 2023, when Microsoft pushed their AI chatbot into Windows 11 ahead of the formal 23H2 release. The icon appeared between your Start menu and system tray, you couldn’t move it, you couldn’t remove it through normal settings, and it hijacked the Win+C keyboard shortcut. Over the next two years, Copilot buttons metastasized into Snipping Tool, Photos, Notepad, Widgets, File Explorer context menus, Start menu search, and system Settings. Microsoft even planned to force-install the Microsoft 365 Copilot app directly onto Start menus of “eligible PCs.” The new plan promises to remove all of that. They want credit for pulling their hand out of your pocket.
On April 24, 2024, Microsoft shipped update KB5036980, which injected advertisements into the Windows 11 Start menu’s “Recommended” section. These showed up labeled “Promoted” and pushed apps like Opera browser and some password manager nobody asked for. And the Start menu was just one surface, they also placed ads on the lock screen, in the Settings homepage hawking Game Pass subscriptions, inside File Explorer pushing OneDrive, and through “tip” notifications that were thinly veiled product pitches. The “fix” promises “fewer ads.” Fewer. The operating system you paid $139 for at retail should have exactly zero ads, and the fact that “fewer” is supposed to impress anyone shows how thoroughly Microsoft has lowered the bar.
The privacy angle is where this gets dangerous. When Windows 11 launched in October 2021, Home edition required a Microsoft account during setup. By October 2025, Microsoft had systematically hunted down and killed every single workaround for creating a local account, the `oobe\bypassnro` command, the BypassNRO registry toggle, the `ms-cxh:localonly` trick, even the old fake email method. Amanda Langowski from Microsoft stated it plainly: they were “removing known mechanisms for creating a local account in the Windows Setup experience.”
A Microsoft account means your identity is tied to your OS from first boot. Your activity, your app usage, your browsing through Edge, your files through OneDrive, all funneled into a profile Microsoft controls. And this particular abuse is nowhere in the 7-point fix plan.
OneDrive got the same treatment. Microsoft silently changed Windows 11 setup in 2024 so that OneDrive folder backup enables automatically with no consent dialog, syncing your Desktop, Documents, Pictures, Music, and Videos to Microsoft’s cloud. When people discovered this and tried to turn it off, their files disappeared from their local machines because OneDrive had moved them, effectively transferring ownership of personal files to Microsoft’s cloud service without asking. Author Jason Pargin went viral describing how OneDrive activated itself, moved his files, then started deleting them when he hit the free 5GB storage limit. Microsoft’s response to this was silence. Also not in the fix plan.
Windows Recall is worth lingering on. Announced May 2024, it’s an AI feature that screenshots everything on your screen every few seconds and makes it searchable. Security researcher Kevin Beaumont demonstrated that the entire Recall database was stored in plaintext in an AppData folder where any malware could extract it. Bank numbers, Social Security numbers, passwords, all sitting in an unencrypted SQLite database.
The UK’s Information Commissioner’s Office got involved. Microsoft delayed it, made it opt-in, added encryption, and quietly relaunched it for Insiders in November 2024. They built a surveillance feature, shipped it broken, got caught, and called the patch “responding to feedback.”
But the abuse pattern goes back way further than Windows 11. In 2015 and 2016, Microsoft ran the GWX (Get Windows 10) campaign, full-screen nag dialogs that pushed Windows 10 upgrades on Windows 7 and 8 users. In May 2016, they changed the behavior of the red X button so that clicking it, which for decades had meant “close” or “cancel”, instead scheduled the Windows 10 upgrade. Microsoft’s own security advice told users to close suspicious dialogs using the X button, and they weaponized that trained behavior against their own customers. A woman named Teri Goldstein sued after the forced upgrade bricked her travel agency PC and won $10,000. Microsoft appealed, then dropped the appeal and paid. They eventually admitted they “went too far.”
And right now, Microsoft is about to force 240 million PCs into the landfill. Windows 10 hit end of life on October 14, 2025, and Windows 11 requires TPM 2.0, specific CPU generations, UEFI Secure Boot, hardware requirements that excluded roughly 20% of all PCs worldwide. Perfectly functional machines, rendered “obsolete” by arbitrary software restrictions. If you want to keep getting security patches on Windows 10, Microsoft will charge you $30 per year, paying for patches to an operating system you already bought a license for. Enterprise customers pay $61 per device for Year 1, $122 for Year 2, and $244 for Year 3, with the price doubling each year.
Edge is its own disaster. Mozilla commissioned an independent report titled “Over the Edge” that documented specific dark patterns including confirmshaming (pop-ups implying you’re “shopping in a dumb way” if you don’t use Edge), disguised ads injected into Google.com and the Chrome Web Store, and default browser settings that hijack back to Edge without notification. Certain Windows web links still force-open in Edge regardless of your default browser setting. Despite all this manipulation, Edge holds just 5.35% global market share. Even with the full weight of an operating system monopoly forcing their browser on people, almost nobody chooses to use it.
And the telemetry question. On Windows 11 Home and Pro, you cannot fully disable telemetry. Setting `AllowTelemetry` to 0 in the registry on non-Enterprise editions gets silently overridden back to 1. Only Enterprise and Education editions can actually turn it off. The operating system you paid for reports data about you to Microsoft, and the setting to stop it is a lie on consumer editions. Also not in the fix plan.
I haven’t even mentioned the EU fining Microsoft over 2.2 billion euros across multiple antitrust rulings, including 561 million euros specifically for breaking a browser ballot promise after a Windows 7 update silently removed the choice screen for 14 months, affecting 15 million users; it was the first time the EU fined a company for violating a “commitment decision.” Or the _NSAKEY controversy from 1999, when a second crypto key literally labeled `_NSAKEY` was found embedded in Windows NT. Or the time in August 2024 when a Microsoft update bricked Linux dual-boot systems across Ubuntu, Mint, and other distros, a break that took 9 months to fully fix.
Ok so here’s the table that tells the whole story:
The bottom four rows are the ones that matter. The privacy-hostile changes, the forced Microsoft accounts, the telemetry that lies about being disabled, OneDrive hijacking your files, the pre-installed garbage, none of that is part of the fix plan. Microsoft’s “swarming” effort targets the most visible UI annoyances, the ones that generate bad headlines. Data collection, vendor lock-in, forced accounts, those stay because those are the revenue model.
Microsoft spent four years deliberately degrading an operating system that people paid $139 or more for, and now they’re announcing the removal of their own damage as if it’s a gift. The “fix” is them taking their foot off your neck and expecting applause. The ads should have never been there, the Copilot buttons should have never been forced, and the taskbar should have never been crippled in the first place. And the things they’re choosing to keep, the telemetry, the forced accounts, the data harvesting, those are the real product, because at this point, you are.
...
Read the original on www.sambent.com »
Hello! This is a long, hopefully fun one! If you’re reading this in your email, you may need to click “expand” to read all the way to the end of this post. Thank you!
When I lived in Nashville, my girlfriends and I would take ourselves on “field trips” across the state. We once went on a tour to spot bald eagles in West Tennessee, and upon arrival, a woman with fluffy hair in the state park bathroom told us she had seen 113 bald eagles the day before. We ended up seeing (counts on one hand)…2.
In the summer of 2017, we went on another field trip to the National Park’s Manhattan Project Site in Oak Ridge, TN. In 1942, Oak Ridge, TN, was chosen as the site for a plutonium and uranium enrichment plant as part of the Manhattan Project, a top-secret WWII effort to develop the first atomic bomb. Once a small, rural farming community settled in a valley of East Tennessee, the secret settlement, dubbed “Site X,” swelled from 3,000 people in 1942 to 75,000 by 1945 under the rush to create a nuclear bomb. Alongside the population growth came enormously complex buildings.
A Note: The Manhattan Project created the nuclear bomb that caused extreme devastation in Japan and ended the war. There’s a lot of U.S. history that’s awful and indefensible. Today, though, I’d like to talk about the industrial design and color theory from that era.
Our first stop on the tour was the X-10 Graphite Reactor room and its control panel room. The X-10 Graphite Reactor, a 24-foot-square block of graphite, was the world’s second full-scale nuclear reactor. The plutonium produced from uranium there was shipped to Los Alamos, New Mexico, for research into the atomic bomb Fat Man.
What caught my eye as a designer, as with most industrial plants and control rooms of that time, besides the knobs, levers, and buttons, was the use of a very specific seafoam green, seen here on the reactor’s walls and in the control panel room.
Thus began my day-long search, traipsing through the internet for historical information about this specific shade of seafoam green.
Thankfully, this path led me to the work of color theorist Faber Birren.
In the fall of 1919, Faber Birren entered the Art Institute at the University of Chicago, only to drop out in the spring of 1921 to commit himself to self-education in color, as such a program didn’t exist. He spent his days interviewing psychologists and physicists and conducted his own color studies, which were considered unconventional at the time. He painted his bedroom walls red vermillion to test if it would make him go mad.
In 1933, he moved to New York City and became a self-appointed color consultant, approaching major corporations to sell the idea that appropriate use of color could boost sales. He convinced a Chicago wholesale meat company that the company’s white walls made the meat unappealing. He studied the steaks on various colored backgrounds and determined that a blue/green background would make the beef appear redder. Sales went up, and soon a number of industries hired Faber to bring color theory into their work, including DuPont, the leading chemical and wartime contractor and the designer of the Manhattan Project buildings.
With the increase in wartime production in the US during WWII, Birren and DuPont created a master color safety code for the industrial plant industry, with the aim of reducing accidents and increasing efficiency within plants. These color codes were approved by the National Safety Council in 1944 and are now internationally recognized, having been mandatory practice since 1948. The color coding went as such:
* Fire Red: All fire protection, emergency stop buttons, and flammable liquids should be red
* Solar Yellow: Signifies caution and physical hazards such as falling
* Safety Green: Indicates safety features such as first-aid equipment, emergency exits, and eyewash stations.
* Light Green: Used on walls to reduce visual fatigue
My industrial “seafoam” light green mystery has finally been solved thanks to this article from UChicago Magazine.
Keeping in theme with “control rooms”, I researched the second Manhattan Project plant, the Hanford Site, home to the B Reactor, the first full-scale plutonium production reactor in the world. To my surprise, this site looked like an ode to Birren’s light green and color codes, which makes sense, since his client, DuPont, was also responsible for the design and construction of Hanford.
In Birren’s 1963 book Color for Interiors: Historical and Modern, he writes about research undertaken to measure eye fatigue in the industrial workplace and the effects of interior color on human efficiency and well-being. Using the color chart above, he states that the proper use of color hues can reduce accidents, raise standards of machine maintenance, and improve labor morale.
“The importance of color in factories is first to control brightness in the general field of view for an efficient seeing condition. Interiors can then be conditioned for emotional pleasure and interest, using warm, cool, or luminous hues as working conditions suggest. Color should be functional and not merely decorative.” - Faber Birren
Now, looking at the interiors of the Manhattan Project control rooms and plants, the broad use of Light and Medium Green makes sense. One mistake and mass devastation could have occurred within these towns. Birren writes, “Note that most of the standards are soft in tone. This is deliberate and intended to establish a non-distracting environment. Green is a restful and natural-looking color for average factory interiors. Light Green with Medium Green is suggested.”
Let’s put these theories to work with this photo of the B-Reactor room found at the Hanford Site of the Manhattan Project. In Birren’s book, he directed the following color applications for small industrial areas:
* ✔️ Medium Gray is proposed for machinery, equipment, and racks
* ✔️ Beige walls may be applied to interiors deprived of natural light
As we can see, his color theory was followed to a T.
Other US Industrial Plants that Used these Color Methods
This color theory research just opened a whole can of design worms for me, and I’m excited to dive into them more. For example, Germany developed its own seafoam green, specifically designed for bridges, called Cologne Bridge Green. That’s a post for another day.
And finally, if you enjoy this sort of design, I designed a font called “Parts List” that is meant to evoke the feeling of sitting in an oil change waiting room, with the smell of burnt coffee. I created this font out of old auto parts lists, and it’s a perfectly wobbly typeface that will give you that ‘Is it a typewriter or handwriting?’ feeling. It’s now available on my website.
PS: I have an old friend whose dad still works at the Uranium plant in Oak Ridge. I told him that I was surprised that almost all of the facilities had been torn down, and he just looked at me straight in the face and said, “Who said it’s actually gone?” Noted. ✌️
Thanks for being here!
...
Read the original on bethmathews.substack.com »
Every ChatGPT message triggers a Cloudflare Turnstile program that runs silently in your browser. I decrypted 377 of these programs from network traffic and found something that goes beyond standard browser fingerprinting.
The program checks 55 properties spanning three layers: your browser (GPU, screen, fonts), the Cloudflare network (your city, your IP, your region from edge headers), and the ChatGPT React application itself (__reactRouterContext, loaderData, clientBootstrap). Turnstile doesn’t just verify that you’re running a real browser. It verifies that you’re running a real browser that has fully booted a specific React application.
A bot that spoofs browser fingerprints but doesn’t render the actual ChatGPT SPA will fail.
The Turnstile bytecode arrives encrypted. The server sends a field called turnstile.dx in the prepare response: 28,000 characters of base64 that change on every request.
The outer layer is XOR’d with the p token from the prepare request. Both travel in the same HTTP exchange, so decrypting it is straightforward:
import base64, json

raw = base64.b64decode(dx)
outer = json.loads(bytes(
    raw[i] ^ p_token[i % len(p_token)]
    for i in range(len(raw))
))  # → 89 VM instructions
Inside those 89 instructions, there is a 19KB encrypted blob containing the actual fingerprinting program. This inner blob uses a different XOR key that is not the p token.
Initially I assumed this key was derived from performance.now() and was truly ephemeral. Then I looked at the bytecode more carefully and found the key sitting in the instructions:
[41.02, 0.3, 22.58, 12.96, 97.35]
The last argument, 97.35, is the XOR key. A float literal, generated by the server, embedded in the bytecode it sent to the browser. I verified this across 50 requests. Every time, the float from the instruction decrypts the inner blob to valid JSON. 50 out of 50.
The full decryption chain requires nothing beyond the HTTP request and response:
1. Read p from prepare request
2. Read turnstile.dx from prepare response
3. XOR(base64decode(dx), p) → outer bytecode
4. Find the 5-arg instruction after the 19KB blob → last arg is the key
5. XOR(base64decode(blob), str(key)) → inner program (417-580 VM instructions)
The key is in the payload.
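The five steps above can be sketched in a few lines of Python. This is a sketch under the article’s description, not Cloudflare’s code: `xor_bytes`, `decrypt_outer`, and `decrypt_inner` are my names, and the demo round-trips synthetic data rather than a captured `dx` blob.

```python
import base64
import json

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR, used for both the outer and inner layers.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def decrypt_outer(dx_b64: str, p_token: bytes) -> list:
    # Steps 1-3: the outer layer is XOR'd with the p token
    # from the same prepare exchange.
    return json.loads(xor_bytes(base64.b64decode(dx_b64), p_token))

def decrypt_inner(blob_b64: str, key_float: float):
    # Steps 4-5: the inner blob is XOR'd with the string form of the
    # float literal found in the 5-arg instruction.
    key = str(key_float).encode()
    return json.loads(xor_bytes(base64.b64decode(blob_b64), key))

# Synthetic round trip (real traffic would supply dx and p):
p = b"example-p-token"
program = [[41.02, 0.3, 22.58]]  # stand-in VM instructions
dx = base64.b64encode(xor_bytes(json.dumps(program).encode(), p)).decode()
assert decrypt_outer(dx, p) == program
```

Because XOR is its own inverse, the same helper encrypts and decrypts, which is why holding the `p` token and the float literal is all an observer needs.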
Each inner program uses a custom VM with 28 opcodes (ADD, XOR, CALL, BTOA, RESOLVE, BIND_METHOD, JSON_STRINGIFY, etc.) and randomized float register addresses that change per request. I mapped the opcodes from the SDK source (sdk.js, 1,411 lines, deobfuscated).
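A float-addressed register machine of this shape is simple to model. The sketch below is illustrative only: the opcodes, encoding, and addresses are invented for the example, not taken from the real 28-opcode instruction set.

```python
def run(program, regs=None):
    # Registers are keyed by arbitrary float addresses, mimicking the
    # real VM, where those addresses are re-randomized per request.
    regs = dict(regs or {})
    for op, *args in program:
        if op == "LOAD":   # LOAD dst, literal
            regs[args[0]] = args[1]
        elif op == "ADD":  # ADD dst, a, b
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "XOR":  # XOR dst, a, b
            regs[args[0]] = regs[args[1]] ^ regs[args[2]]
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return regs

# Float register addresses like 41.02 or 0.3 are just dict keys:
out = run([("LOAD", 41.02, 5), ("LOAD", 0.3, 3), ("XOR", 22.58, 41.02, 0.3)])
# out[22.58] == 5 ^ 3 == 6
```

Randomizing the float keys per request changes nothing semantically; it only defeats naive pattern-matching on the bytecode.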
The program collects 55 properties. No variation across 377 samples. All 55, every time, organized into three layers:
Storage (5): storage, quota, estimate, setItem, usage. Also writes the fingerprint to localStorage under key 6f376b6560133c2c for persistence across page loads.
The Cloudflare network properties are injected server-side by Cloudflare’s edge. They exist only if the request passed through Cloudflare’s network. A bot making direct requests to the origin server or running behind a non-Cloudflare proxy will produce missing or inconsistent values.
The application-layer properties are the part that matters. __reactRouterContext is an internal data structure that React Router v6+ attaches to the DOM. loaderData contains the route loader results. clientBootstrap is specific to ChatGPT’s SSR hydration.
These properties only exist if the ChatGPT React application has fully rendered and hydrated. A headless browser that loads the HTML but doesn’t execute the JavaScript bundle won’t have them. A bot framework that stubs out browser APIs but doesn’t actually run React won’t have them.
This is bot detection at the application layer, not the browser layer.
After collecting all 55 properties, the program hits a 116-byte encrypted blob that decrypts to 4 final instructions:
[96.05, 3.99, 3.99], // JSON.stringify(fingerprint)
[22.58, 46.15, 57.34], // store
[33.34, 3.99, 74.43], // XOR(json, key)
[1.51, 56.88, 3.99] // RESOLVE → becomes the token
The fingerprint is JSON.stringify’d, XOR’d, and resolved back to the parent. The result is the OpenAI-Sentinel-Turnstile-Token header sent with every conversation request.
Turnstile is one of three challenges. The other two:
Signal Orchestrator (271 instructions): Installs event listeners for keydown, pointermove, click, scroll, paste, and wheel. Monitors 36 window.__oai_so_* properties tracking keystroke timing, mouse velocity, scroll patterns, idle time, and paste events. A behavioral biometric layer running underneath the fingerprint.
Proof of Work (25-field fingerprint + SHA-256 hashcash): Difficulty is uniform random (400K-500K), 72% solve under 5ms. Includes 7 binary detection flags (ai, createPRNG, cache, solana, dump, InstallTrigger, data), all zero across 100% of 100 samples. The PoW adds compute cost but is not the real defense.
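A hashcash-style PoW of this kind can be approximated as follows. The exact input format and difficulty encoding aren’t specified in the article, so this sketch assumes the common scheme where a nonce is ground until the SHA-256 digest, read as an integer, falls under a target derived from the difficulty.

```python
import hashlib

def solve_pow(seed: str, difficulty: int) -> int:
    # Brute-force a nonce until the hash clears the difficulty bound.
    # Expected work is ~`difficulty` hashes, so 400K-500K fits the
    # observed range; "72% under 5ms" implies a much faster native loop.
    target = 2**256 // difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{seed}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = solve_pow("example-seed", 400_000)
```

The point the article makes holds in the sketch too: the cost is pure compute, trivially parallelizable, so the PoW raises the price of abuse without distinguishing humans from bots.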
The XOR key for the inner program is a server-generated float embedded in the bytecode. Whoever generated the turnstile.dx knows the key. The privacy boundary between the user and the system operator is a policy decision, not a cryptographic one.
The obfuscation serves real operational purposes: it hides the fingerprint checklist from static analysis, prevents the website operator (OpenAI) from reading raw fingerprint values without reverse-engineering the bytecode, makes each token unique to prevent replay, and allows Cloudflare to change what the program checks without anyone noticing.
But the “encryption” is XOR with a key that’s in the same data stream. It prevents casual inspection. It does not prevent analysis.
No systems were accessed without authorization. No individual user data is disclosed. All traffic was observed from consented participants. The Sentinel SDK was beautified and manually deobfuscated. All decryption was performed offline using Python.
...
Read the original on www.buchodi.com »
...
Read the original on rpastro.square.site »
Here are three stories about the state of gambling in America.
In November 2025, two pitchers for the Cleveland Guardians, Emmanuel Clase and Luis Ortiz, were charged in a conspiracy for “rigging pitches.” Frankly, I had never heard of rigged pitches before, but the federal indictment describes a scheme so simple that it’s a miracle that this sort of thing doesn’t happen all the time. Three years ago, a few corrupt bettors approached the pitchers with a tantalizing deal: (1) We’ll bet that certain pitches will be balls; (2) you throw those pitches into the dirt; (3) we’ll win the bets and give you some money.
The plan worked. Why wouldn’t it? There are hundreds of pitches thrown in a baseball game, and nobody cares about one bad pitch. The bets were so deviously clever because they offered enormous rewards for bettors and only incidental inconvenience for players and viewers. Before their plan was snuffed out, the fraudsters won $450,000 from pitches that not even the most ardent Cleveland baseball fan would ever remember the next day. Nobody watching America’s pastime could have guessed that they were witnessing a six-figure fraud.
On the morning of February 28th, someone logged onto the prediction market website Polymarket and made an unusually large bet. This bet wasn’t placed on a baseball game. It wasn’t placed on any sport. This was a bet that the United States would bomb Iran on a specific day, despite extremely low odds of such a thing happening.
A few hours later, bombs landed in Iran. This one bet was part of a $553,000 payday for a user named “Magamyman.” And it was just one of dozens of suspicious, perfectly-timed wagers, totaling millions of dollars, placed in the hours before a war began.
It is almost impossible to believe that, whoever Magamyman is, he didn’t have inside information from members of the administration. The term war profiteering typically refers to arms dealers who get rich from war. But we now live in a world not only where online bettors stand to profit from war, but also where key decision makers in government have the tantalizing options to make hundreds of thousands of dollars by synchronizing military engagements with their gambling position.
On March 10, several days into the Iran War, the journalist Emanuel Fabian reported that a warhead launched from Iran struck a site outside Jerusalem.
Meanwhile on Polymarket, users had placed bets on the precise location of missile strikes on March 10. Fabian’s article was therefore poised to determine payouts of $14 million in betting. As The Atlantic’s Charlie Warzel reported, bettors encouraged him to rewrite his story to produce the outcome that they’d bet on. Others threatened to make his life “miserable.”
A clever dystopian novelist might conceive of a future where poorly paid journalists for news wires are offered six-figure deals to report fictions that cash out bets from online prediction markets. But just how fanciful is that scenario when we have good reason to believe that journalists are already being pressured, bullied, and threatened to publish specific stories that align with multi-thousand dollar bets about the future?
Put it all together: rigged pitches, rigged war bets, and attempts to rig wartime journalism. Without context, each story would sound like a wacky conspiracy theory. But these are not conspiracy theories. These are things that have happened. These are conspiracies—full stop.
“If you’re not paranoid, you’re not paying attention” has historically been one of those bumper stickers you find on the back of a car with so many other bumper stickers that you worry for the sanity of its occupants. But in this weird new reality where every event on the planet has a price, and behind every price is a shadowy counterparty, the jittery gambler’s paranoia—is what I’m watching happening because somebody more powerful than me bet on it?—is starting to seem, eerily, like a kind of perverse common sense.
What’s remarkable is not just the fact that online sports books have taken over sports, or that betting markets have metastasized in politics and culture, but the speed with which both have taken place.
For most of the last century, the major sports leagues were vehemently against gambling, as the Atlantic staff writer McKay Coppins explained in his recent feature. In 1992, NFL commissioner Paul Tagliabue told Congress that “nothing has done more to despoil the games Americans play and watch than widespread gambling on them.” In 2012, NBA commissioner David Stern loudly threatened New Jersey Gov. Chris Christie for signing a bill to legalize sports betting in the Garden State, reportedly screaming, “we’re going to come after you with everything we’ve got.”
So much for that. Following the 2018 Supreme Court decision Murphy vs. NCAA, sports gambling was unleashed into the world, and the leagues haven’t looked back. Last year, the NFL saw $30 billion gambled on football games, and the league itself made half a billion dollars in advertising, licensing, and data deals.
Nine years ago, Americans bet less than $5 billion on sports. Last year, that number rose to at least $160 billion. Big numbers mean nothing to me, so let me put that statistic another way: $5 billion is roughly the amount Americans spend annually at coin-operated laundromats and $160 billion is nearly what Americans spent last year on domestic airline tickets. So, in a decade, the online sports gambling industry will have risen from the level of coin laundromats to rival the entire airline industry.
And now here come the prediction markets, such as Polymarket and Kalshi, whose combined 2025 revenue came in around $50 billion. “These predictive markets are the logical endpoint of the online gambling boom,” Coppins told me on my podcast Plain English. “We have taught the entire American population how to gamble with sports. We’ve made it frictionless and easy and put it on everybody’s phone. Why not extend the logic and culture of gambling to other segments of American life?” He continued:
Why not let people gamble on who’s going to win the Oscar, when Taylor Swift’s wedding will be, how many people will be deported from the United States next year, when the Iranian regime will fall, whether a nuclear weapon will be detonated in the year 2026, or whether there will be a famine in Gaza? These are not things that I’m making up. These are all bets that you can make on these predictive markets.
Indeed, why not let people gamble on whether there will be a famine in Gaza? The market logic is cold and simple: More bets means more information, and more informational volume is more efficiency in the marketplace of all future happenings. But from another perspective—let’s call it, baseline morality?—the transformation of a famine into a windfall event for prescient bettors seems so grotesque as to require no elaboration. One imagines a young man sending his 1099 documents to a tax accountant the following spring: “right, so here are my dividends, these are the cap gains, and, oh yeah, here’s my $9,000 payout for totally nailing when all those kids would die.”
It is a comforting myth that dystopias happen when obviously bad ideas go too far. Comforting, because it plays to our naive hope that the world can be divided into static categories of good versus evil and that once we stigmatize all the bad people and ghettoize all the bad ideas, some utopia will spring into view. But I think dystopias more likely happen because seemingly good ideas go too far. “Pleasure is better than pain” is a sensible notion, and a society devoted to its implications created Brave New World. “Order is better than disorder” sounds alright to me, but a society devoted to the most grotesque vision of that principle takes us to 1984. Sports gambling is fun, and prediction markets can forecast future events. But extended without guardrails or limitations, those principles lead to a world where ubiquitous gambling leads to cheating, cheating leads to distrust, and distrust leads ultimately to cynicism or outright disengagement.
“The crisis of authority that has kind of already visited every other American institution in the last couple of decades has arrived at professional sports,” Coppins said. Two-thirds of Americans now believe that professional athletes sometimes change their performance to influence gambling outcomes. “Not to overstate it, but that’s a disaster,” he said. And not just for sports.
There are four reasons to worry about the effect of gambling in sports and culture.
The first is the risk to individual bettors. Every time we create 1,000 new gamblers, we create dozens of new addicts and a handful of new bankruptcies. As I’ve reported, there is evidence that about one in five men under 25 is on the spectrum of having a gambling problem, and calls to the National Problem Gambling Helpline have roughly tripled since sports gambling was broadly legalized in 2018. Research from UCLA and USC found that bankruptcies increased by 10 percent in states that legalized online sports betting between 2018 and 2023. People will sometimes ask me what business I have worrying about online gambling when people should be free to spend their money however they like. My response is that wise rules place guardrails around economic activity with a certain rate of personal harm. For alcohol, we have licensing requirements, minimum drinking ages, boundaries around hours of sale, and rules about public consumption. As alcohol consumption is declining among young people, gambling is surging; Gen Z has replaced one (often fun) vice with a meaningful chance of addiction with another (often fun) vice with a meaningful chance of addiction. But whereas we have centuries of experience curtailing excessive drinking with rules and customs, we are currently in a free-for-all era of gambling.
The second risk is to individual players and practitioners. One reason why sports commissioners might have wanted to keep gambling out of their business is that gambling turns some people into complete psychopaths, and that’s not a very nice experience for folks on the receiving end. In his feature, McKay Coppins reports on the experience of Caroline Garcia, a top-ranked tennis player, who said she received torrents of abusive messages from gamblers both for losing games and for winning games. “This has become a very common experience for athletes at the professional level, even at the college level too,” Coppins said. As the experience of journalist Emanuel Fabian shows, gambling can turn ordinary people into mini mob bosses, who go around threatening players and practitioners who they believe are costing them thousands of dollars.
The third risk is to the integrity of sports—or any other institution. At the end of 2025, in addition to its indictment of the Cleveland Guardians pitchers, the FBI announced 30 arrests involving gambling schemes in the NBA. This cavalcade of arrests has dramatically reduced trust in sports. Two-thirds of Americans now believe that professional athletes change their performance to influence gambling outcomes. It does not require extraordinary creativity to imagine how this principle could extend to other domains and institutions. If more people start to believe that things only happen in the world as a direct result of shadowy interests in vast betting markets, it’s going to be a permanent open season for conspiracy theories.
The ultimate risk is almost too dark to contemplate in much detail. As the logic and culture of casinos moves from sports to politics, the scandals that have visited baseball and basketball might soon arrive in politics. Is it really so unbelievable that a politician might tip off a friend, or assuage an enemy, by giving them inside information that would allow them to profit on betting markets? Is it really so incredible to believe that a government official would try to align policy with a betting position that stood to earn them, or an allied group, hundreds of thousands of dollars? That is what a “rigged pitch” in politics would look like. It’s not just wagering on a policy outcome that you suspect will happen. It’s changing policy outcomes based on what can be wagered.
Gambling is flourishing because it meets the needs of our moment: a low-trust world, where lonely young people are seeking high-risk opportunities to launch them into wealth and comfort. In such an environment, financialization might seem to be the last form of civic participation that feels honest to a large portion of the country. Voting is compromised, and polling is manipulated, and news is algorithmically curated. But a bet settles. A game ends. There is comfort in that. In an uncertain and illegible world, it doesn’t get much more certain and legible than this: You won, or you lost.
A 2023 Wall Street Journal poll found that Americans are pulling away from practically every value that once defined national life—patriotism, religion, community, family. Young people care less than their parents about marriage, children, or faith. But nature, abhorring a vacuum, is filling the moral void left by retreating institutions with the market. Money has become our final virtue.
I often find myself thinking about the philosopher Alasdair MacIntyre, who argued in the introduction of After Virtue that modernity had destroyed the shared moral language once supplied by traditions and religion, leaving us with only the language of individual preference. Virtue did not disappear, I think, so much as it died and was reincarnated as the market. It is now the market that tells us what things are worth, what events matter, whose predictions are correct, who is winning, who counts. Money has, in a strange way, become the last moral arbiter standing—the final universal language that a pluralistic, distrustful, post-institutional society can use to communicate with itself.
As this moral vocabulary scales across culture, it also corrodes culture. In sports, when you have money on a game, you’re not rooting for a team. You’re rooting for a proposition. The social function of fandom—shared identity, inherited loyalty, something larger than yourself—dissolves into individual risk. In politics, I fear the consequences will be worse. Prediction markets can be useful for those who want to know the future, but their utility recruits participants into a relationship with the news cycle that is adversarial, and even misanthropic. A young man betting on a terrorist attack or a famine is not acting as a mere concerned citizen whose participation improves the efficiency of global prediction markets. He’s just a dude, on his phone, alone in a room, choosing to root for death.
If that doesn’t bother you, I don’t know how to make it bother you. Based on economic and market efficiency principles alone, this young man’s behavior is defensible. But there is morality outside of markets. There is more to life than the efficiency of information networks. But will we rediscover it, any time soon? Don’t bet on it.
...
Read the original on www.derekthompson.org »