10 interesting stories served every morning and every evening.
🚨 The Conservatives (EPP) are attempting to force a new vote on Thursday (26th), seeking to reverse Parliament’s NO on indiscriminate scanning. This is a direct attack on democracy and blatant disregard for your right to privacy. No means no. Take action now!
...
Read the original on fightchatcontrol.eu »
Linux gaming has come a long way. When Valve launched Proton back in 2018, it felt like a turning point, turning the Linux gaming experience from “technically possible if you’re okay with a lot of pain” to something that more or less worked. Since then, we’ve seen incremental Wine releases, each one chipping away at compatibility issues and improving performance bit by bit. Wine 10, Wine 9, and so on; each one a collection of bug fixes and small improvements that kept the ecosystem moving forward.
Wine 11 is different. This isn’t just another yearly release with a few hundred bug fixes and some compatibility tweaks. It does bring a huge number of changes and fixes, but it also ships with NTSYNC support, a feature that has been years in the making and rewrites how Wine handles one of the most performance-sensitive operations in modern gaming. On top of that, the WoW64 architecture overhaul is finally complete, the Wayland driver has grown up a lot, and there’s a big list of smaller improvements that collectively make this feel like an all-new project.
I should be clear: not every game is going to see a night-and-day difference. Some titles will run identically to before. But for the games that do benefit from these changes, the improvements range from noticeable to absurd. And because Proton, SteamOS, and every downstream project builds on top of Wine, those gains trickle down to everyone.
Everything up until now was a workaround
Esync and fsync worked, but they weren’t ideal
If you’ve spent any time tweaking Wine or Proton settings, you’ve probably encountered the terms “esync” and “fsync” before. Maybe you toggled them on in Lutris, or noticed them in Proton launch options, without fully understanding what they do. To understand why NTSYNC matters, you need to understand the problem these solutions were all trying to solve.
Windows games, especially modern ones, are heavily multi-threaded. Your CPU isn’t just running one thing at a time, and instead, it’s juggling rendering, physics calculations, asset streaming, audio processing, AI routines, and more, all in parallel across multiple threads. These threads need to coordinate with each other constantly. One thread might need to wait for another to finish loading a texture before it can render a frame. Another might need exclusive access to a shared resource so two threads don’t try to modify it simultaneously.
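The coordination pattern described above can be sketched in a few lines of Python (a toy illustration, not Wine or Windows code): a render thread blocks on an event until a loader thread signals that a texture is ready.

```python
import threading
import time

# One event object coordinates two threads: the renderer waits, the loader signals.
texture_ready = threading.Event()
frames = []

def loader():
    time.sleep(0.01)           # pretend to decode a texture from disk
    texture_ready.set()        # signal: texture is now available

def renderer():
    texture_ready.wait()       # block until the loader signals
    frames.append("frame with texture")

threads = [threading.Thread(target=loader), threading.Thread(target=renderer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(frames)   # ['frame with texture']
```

In a Windows game this wait would go through an NT event object, and it’s exactly that kind of call that Wine has to translate to something Linux understands.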
Windows handles this coordination through what are called NT synchronization primitives… mutexes, semaphores, events, and the like. They’re baked deep into the Windows kernel, and games rely on them heavily. The problem is that Linux doesn’t have native equivalents that behave exactly the same way. Wine has historically had to emulate these synchronization mechanisms, and the way it did so was, to put it simply, not ideal.
The original approach involved making a round-trip RPC call to a dedicated “kernel” process called wineserver every single time a game needed to synchronize between threads. For a game making thousands of these calls per second, that overhead added up fast and became a real bottleneck, one that manifested as subtle frame stutters, inconsistent frame pacing, and games that just felt a little bit off even when the raw FPS numbers looked fine.
Esync was the first attempt at a workaround. Developed by Elizabeth Figura at CodeWeavers, it used Linux’s eventfd system call to handle synchronization without bouncing through the wineserver. It worked, and it helped, but it had quirks. Some distros ran into issues with file descriptor limits, since every synchronization object needed its own file descriptor, and games that opened a lot of them could hit the system’s ceiling quite quickly.
Fsync came next, using Linux futexes for even better performance. It was faster than esync in most cases, but it required out-of-tree kernel patches that never made it into the mainline Linux kernel or into upstream Wine, which meant you needed a custom or patched kernel to use it. That’s fine for enthusiasts running CachyOS or Proton-GE, but not exactly accessible for the average user on Ubuntu or Fedora. Futex2, often conflated with fsync, did land in Linux kernel 5.16 as futex_waitv, but that isn’t the original fsync: the original relied on the out-of-tree futex_wait_multiple call, while futex2 provides futex_waitv. Applications such as Lutris still label the futex_waitv path “fsync”, so it’s fsync in spirit, but not the original implementation.
Here’s the thing about both esync and fsync: they were workarounds. Clever ones, but workarounds nonetheless. They approximated NT synchronization behavior using Linux primitives that weren’t designed for the job, and certain edge cases simply couldn’t be handled correctly. Operations like NtPulseEvent() and the “wait-for-all” mode in NtWaitForMultipleObjects() require direct control over the underlying wait queues in ways that user-space implementations just can’t reliably provide.
Synchronization at the kernel level, rather than in user space
NTSYNC takes a completely different approach. Instead of trying to shoehorn Windows synchronization behavior into existing Linux primitives, it adds a new kernel driver that directly models the Windows NT synchronization object API. It exposes a /dev/ntsync device that Wine can talk to, and the kernel itself handles the coordination. No more round trips to wineserver, no more approximations, and the synchronization happens in the kernel, which is where it should be. And it has proper queue management, proper event semantics, and proper atomic operations.
What makes this even better is that NTSYNC was developed by the same person who created esync and fsync in the first place. Elizabeth Figura has been working on this problem for years, iterating through multiple kernel patch revisions, presenting the work at the Linux Plumbers Conference in 2023, and pushing through multiple versions of the patch set before it was finally merged into the mainline Linux kernel with version 6.14.
The numbers are wild. In developer benchmarks, Dirt 3 went from 110.6 FPS to 860.7 FPS, a 678% improvement. Resident Evil 2 jumped from 26 FPS to 77 FPS. Call of Juarez went from 99.8 FPS to 224.1 FPS. Tiny Tina’s Wonderlands saw gains from 130 FPS to 360 FPS. And Call of Duty: Black Ops I is now actually playable on Linux. Those benchmarks compare Wine NTSYNC against upstream vanilla Wine, with no fsync or esync in the mix, so gamers who already use fsync won’t see such a leap in most games.
The games that benefit most from NTSYNC are the ones that were struggling before, such as titles with heavy multi-threaded workloads where the synchronization overhead was a genuine bottleneck. For those games, the difference is night and day. And unlike fsync, NTSYNC is in the mainline kernel, meaning you don’t need any custom patches or out-of-tree modules for it to work. Any distro shipping kernel 6.14 or later, which at this point includes Fedora 42, Ubuntu 25.04, and more recent releases, will support it. Valve has already added the NTSYNC kernel driver to SteamOS 3.7.20 beta, loading the module by default, and an unofficial Proton fork, Proton GE, already has it enabled. When Valve’s official Proton rebases on Wine 11, every Steam Deck owner gets this for free.
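If you want to check whether your own machine is ready, here’s a quick sketch (assuming a Linux box; the device node only appears once the ntsync module is loaded, e.g. via `sudo modprobe ntsync`):

```python
import os
import re
import platform

# The NTSYNC driver shipped with mainline kernel 6.14; older kernels won't have it.
release = platform.release()                 # e.g. "6.14.2-arch1-1"
match = re.match(r"(\d+)\.(\d+)", release)
major, minor = (int(g) for g in match.groups())
kernel_ok = (major, minor) >= (6, 14)

# /dev/ntsync only exists once the module is loaded.
device_present = os.path.exists("/dev/ntsync")

print(f"kernel {release}: {'has the NTSYNC driver' if kernel_ok else 'needs 6.14+'}")
print(f"/dev/ntsync: {'present' if device_present else 'missing (module not loaded?)'}")
```

Wine itself talks to that device through its own ioctl interface; as a user you only need the kernel and module in place.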
All of this is what makes NTSYNC such a big deal, as it’s not simply a run-of-the-mill performance patch. Instead, it’s something much bigger: this is the first time Wine’s synchronization has been correct at the kernel level, implemented in the mainline Linux kernel, and available to everyone without jumping through hoops.
If NTSYNC is the headline feature, the completion of Wine’s WoW64 architecture is the change that will quietly improve everyone’s life going forward. On Windows, WoW64 (Windows 32-bit on Windows 64-bit) is the subsystem that lets 32-bit applications run on 64-bit systems. Wine has been working toward its own implementation of this for years, and Wine 11 marks the point where it’s officially done.
What this means in practice is that you no longer need 32-bit system libraries installed on your 64-bit Linux system to run 32-bit Windows applications. Wine handles the translation internally, using a single unified binary that automatically detects whether it’s dealing with a 32-bit or 64-bit executable. The old days of installing multilib packages, configuring ia32-libs, or fighting with 32-bit dependencies on your 64-bit distro are thankfully over.
This might sound like a small quality-of-life improvement, but it’s a massive piece of engineering work. The WoW64 mode now handles OpenGL memory mappings, SCSI pass-through, and even 16-bit application support. Yes, 16-bit! If you’ve got ancient Windows software from the ’90s that you need to run for whatever reason, Wine 11 has you covered.
For gaming specifically, this matters because a surprising number of games, especially older ones, are 32-bit executables. Previously, getting these to work often meant wrestling with your distro’s multilib setup, which varied in quality and ease depending on whether you were on Ubuntu, Arch, Fedora, or something else entirely. Now, Wine just handles it for you.
The rest of Wine 11 isn’t just filler
There are more fixes, too
It’s easy to let NTSYNC and WoW64 steal the spotlight, but Wine 11 is packed to the gills with other stuff worth talking about.
The Wayland driver has come a long way. Clipboard support now works bidirectionally between Wine and native Wayland applications, which is one of those things you don’t think about until it doesn’t work and it drives you mad. Drag-and-drop from Wayland apps into Wine windows is supported. Display mode changes are now emulated through compositor scaling, which means older games that try to switch to lower resolutions like 640x480 actually behave properly instead of leaving you with a broken desktop. If you’ve been holding off on switching from X11 to Wayland because of Wine compatibility concerns, Wine 11 removes a lot of those barriers.
On the graphics front, EGL is now the default backend for OpenGL rendering on X11, replacing the older GLX path. Vulkan support has been bumped to API version 1.4, and there’s initial support for hardware-accelerated H.264 decoding through Direct3D 11 video APIs using Vulkan Video. That last one is particularly interesting for games and applications that use video playback for things like cutscenes or in-game streaming.
Force feedback support has been improved for racing wheels and flight sticks, which is great news if you’re running a sim setup on Linux. Bluetooth has also received a new driver with BLE services and proper pairing support, MIDI soundfont handling has been improved for legacy game music, and there are a couple of minor extras like Zip64 compression support, Unicode 17.0.0 support, TWAIN 2.0 scanning for 64-bit apps, and IPv6 ping functionality.
Thread priority management has been improved on both Linux and macOS, which helps with multi-threaded application performance beyond just the NTSYNC gains. ARM64 devices can now simulate 4K page sizes on systems with larger native pages, which keeps the door open for Wine on Arm hardware. And with more Arm-based Linux devices showing up every year, that matters more than it used to.
Plus, there are a ton of bug fixes. Games like Nioh 2, StarCraft 2, The Witcher 2, Call of Duty: Black Ops II, Final Fantasy XI, and Battle.net all received specific compatibility fixes, on top of the broader improvements made across the board that boost performance and compatibility for many more titles.
Wine 11 is a big release, and not just because of NTSYNC. Sure, NTSYNC alone would have made it worth paying attention to, but combined with the WoW64 completion, the Wayland improvements, and the sheer volume of fixes, it’s the most important Wine release since Proton made Linux gaming viable. Everything built on top of Wine, from Proton to Lutris to Bottles, gets better because of it. If you play games on Linux at all, Wine 11 is worth trying out.
...
Read the original on www.xda-developers.com »
Zen gives you access to a handpicked set of AI models that OpenCode has tested and benchmarked specifically for coding agents. No need to worry about inconsistent performance and quality across providers; use validated models that work.
...
Read the original on opencode.ai »
It’s been about a year since coding agents appeared on the scene that could actually build you full projects. There were precursors like Aider and early Cursor, but they were more assistant than agent. The new generation is enticing, and a lot of us have spent a lot of free time building all the projects we always wanted to build but never had time to.
And I think that’s fine. Spending your free time building things is super enjoyable, and most of the time you don’t really have to care about code quality and maintainability. It also gives you a way to learn a new tech stack if you so want.
During the Christmas break, both Anthropic and OpenAI handed out some freebies to hook people on their addictive slot machines. For many, it was the first time they experienced the magic of agentic coding. The fold’s getting bigger.
Coding agents are now also being introduced into production codebases. After 12 months, we are beginning to see the effects of all that “progress”. Here’s my current view.
While all of this is anecdotal, it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services. And user interfaces have the weirdest fucking bugs that you’d think a QA team would catch. I’ll grant that that’s been the case for longer than agents have existed. But we seem to be accelerating.
We don’t have access to the internals of companies. But every now and then something slips through to a news reporter. Like this supposed AI-caused outage at AWS. Which AWS immediately “corrected”. Only to then follow up internally with a 90-day reset.
Satya Nadella, the CEO of Microsoft, has been going on about how much code is now being written by AI at Microsoft. While we don’t have direct evidence, there sure is a feeling that Windows is going down the shitter. Microsoft itself seems to agree, based on this fine blog post.
Companies claiming 100% of their product’s code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes: that is not the seal of quality they think it is. And it’s definitely not good advertising for the fever dream of having your agents do all the work for you.
Through the grapevine you hear more and more people, from software companies small and large, saying they have agentically coded themselves into a corner. No code review, design decisions delegated to the agent, a gazillion features nobody asked for. That’ll do it.
We have basically given up all discipline and agency for a sort of addiction, where your highest goal is to produce the largest amount of code in the shortest amount of time. Consequences be damned.
You’re building an orchestration layer to command an army of autonomous agents. You installed Beads, completely oblivious to the fact that it’s basically uninstallable malware. The internet told you to. That’s how you should work or you’re ngmi. You’re ralphing the loop. Look, Anthropic built a C compiler with an agent swarm. It’s kind of broken, but surely the next generation of LLMs can fix it. Oh my god, Cursor built a browser with a battalion of agents. Yes, of course, it’s not really working and it needed a human to spin the wheel a little bit every now and then. But surely the next generation of LLMs will fix it. Pinky promise! Distribute, divide and conquer, autonomy, dark factories, software is solved in the next 6 months. SaaS is dead, my grandma just had her Claw build her own Shopify!
Now again, this can work for your side project that barely anyone is using, including yourself. And hey, maybe there’s somebody out there who can actually make this work for a software product that’s not a steaming pile of garbage and is used by actual humans in anger.
If that’s you, more power to you. But at least among my circle of peers I have yet to find evidence that this kind of shit works. Maybe we all have skill issues.
The problem with agents is that they make errors. Which is fine, humans also make errors. Maybe they are just correctness errors. Easy to identify and fix. Add a regression test on top for bonus points. Or maybe it’s a code smell your linter doesn’t catch. A useless method here, a type that doesn’t make sense, duplicated code over there. On their own, these are harmless. A human will also do such booboos.
But clankers aren’t humans. A human makes the same error a few times. Eventually they learn not to make it again. Either because someone starts screaming at them or because they’re on a genuine learning path.
An agent has no such learning ability. At least not out of the box. It will continue making the same errors over and over again. Depending on the training data it might also come up with glorious new interpolations of different errors.
Now you can try to teach your agent. Tell it to not make that booboo again in your AGENTS.md. Concoct the most complex memory system and have it look up previous errors and best practices. And that can be effective for a specific category of errors. But it also requires you to actually observe the agent making that error.
There’s a much more important difference between clanker and human. A human is a bottleneck. A human cannot shit out 20,000 lines of code in a few hours. Even if the human creates such booboos at high frequency, there’s only so many booboos the human can introduce in a codebase per day. The booboos will compound at a very slow rate. Usually, if the booboo pain gets too big, the human, who hates pain, will spend some time fixing up the booboos. Or the human gets fired and someone else fixes up the booboos. So the pain goes away.
With an orchestrated army of agents, there is no bottleneck, no human pain. These tiny little harmless booboos suddenly compound at a rate that’s unsustainable. You have removed yourself from the loop, so you don’t even know that all the innocent booboos have formed a monster of a codebase. You only feel the pain when it’s too late.
Then one day you turn around and want to add a new feature. But the architecture, which is largely booboos at this point, doesn’t allow your army of agents to make the change in a functioning way. Or your users are screaming at you because something in the latest release broke and deleted some user data.
You realize you can no longer trust the codebase. Worse, you realize that the gazillions of unit, snapshot, and e2e tests you had your clankers write are equally untrustworthy. The only thing that’s still a reliable measure of “does this work” is manually testing the product. Congrats, you fucked yourself (and your company).
You have zero fucking idea what’s going on because you delegated all your agency to your agents. You let them run free, and they are merchants of complexity. They have seen many bad architectural decisions in their training data and throughout their RL training. You have told them to architect your application. Guess what the result is?
An immense amount of complexity, an amalgam of terrible cargo cult “industry best practices”, that you didn’t rein in before it was too late. But it’s worse than that.
Your agents never see each other’s runs, never get to see all of your codebase, never get to see all the decisions that were made by you or other agents before they make a change. As such, an agent’s decisions are always local, which leads to the exact booboos described above. Immense amounts of code duplication, abstractions for abstractions’ sake.
All of this compounds into an unrecoverable mess of complexity. The exact same mess you find in human-made enterprise codebases. Those arrive at that state because the pain is distributed over a massive amount of people. The individual suffering doesn’t pass the threshold of “I need to fix this”. The individual might not even have the means to fix things. And organizations have super high pain tolerance. But human-made enterprise codebases take years to get there. The organization slowly evolves along with the complexity in a demented kind of synergy and learns how to deal with it.
With agents and a team of 2 humans, you can get to that complexity within weeks.
So now you hope your agents can fix the mess, refactor it, make it pristine. But your agents can also no longer deal with it. Because the codebase and complexity are too big, and they only ever have a local view of the mess.
And I’m not just talking about context window size or long-context attention mechanisms failing at the sight of a million-line monster. Those are obvious technical limitations. It’s more devious than that.
Before your agent can try and help fix the mess, it needs to find all the code that needs changing and all existing code it can reuse. We call that agentic search. How the agent does that depends on the tools it has. You can give it a Bash tool so it can ripgrep its way through the codebase. You can give it some queryable codebase index, an LSP server, a vector database. In the end it doesn’t matter much. The bigger the codebase, the lower the recall. Low recall means that your agent will, in fact, not find all the code it needs to do a good job.
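The recall problem is easy to demonstrate with a toy grep-style search over a made-up “codebase” (all filenames and functions here are invented): keyword search finds the exact matches and silently misses the semantic duplicate.

```python
# A minimal sketch of "agentic search": grep the codebase for a keyword and
# hope the hits cover everything relevant. Filenames and code are made up.
codebase = {
    "billing.py":  "def compute_invoice_total(items): ...",
    "checkout.py": "def sum_cart(items): ...",   # same job, different name
    "report.py":   "total = compute_invoice_total(rows)",
}

def agentic_search(keyword):
    """Return every file whose source contains the keyword."""
    return [path for path, src in codebase.items() if keyword in src]

hits = agentic_search("invoice_total")
print(hits)   # ['billing.py', 'report.py'] -- checkout.py is never found
```

The agent reuses compute_invoice_total but never learns sum_cart exists, so the duplication survives. Scale the three files up to thousands and the misses multiply.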
This is also why those code smell booboos happen in the first place. The agent misses existing code, duplicates things, introduces inconsistencies. And then they blossom into a beautiful shit flower of complexity.
How do we avoid all of this?
Coding agents are sirens, luring you in with their speed of code generation and jagged intelligence, often completing a simple task with high quality at breakneck velocity. Things start falling apart when you think: “Oh golly, this thing is great. Computer, do my work!”.
There’s nothing wrong with delegating tasks to agents, obviously. Good agent tasks share a few properties: they can be scoped so the agent doesn’t need to understand the full system. The loop can be closed, that is, the agent has a way to evaluate its own work. The output isn’t mission critical, just some ad hoc tool or internal piece of software nobody’s life or revenue depends on. Or you just need a rubber duck to bounce ideas against, which basically means bouncing your idea against the compressed wisdom of the internet and synthetic training data. If any of that applies, you found the perfect task for the agent, provided that you as the human are the final quality gate.
Karpathy’s auto-research applied to speeding up startup time of your app? Great! As long as you understand that the code it spits out is not production-ready at all. Auto-research works because you give it an evaluation function that lets the agent measure its work against some metric, like startup time or loss. But that evaluation function only captures a very narrow metric. The agent will happily ignore any metrics not captured by the evaluation function, such as code quality, complexity, or even correctness, if your evaluation function is foobar.
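The narrow-metric failure mode is easy to reproduce with a toy hill-climbing loop (entirely hypothetical, not Karpathy’s actual setup): the only thing the loop can see is a simulated startup time, so it optimizes toward disabling features, correctness be damned.

```python
import random

# A metric-driven optimization loop: propose a mutated config, keep whatever
# scores best on ONE narrow metric (simulated startup time, lower is better).
def startup_time(config):
    # pretend measurement: every enabled feature adds 10ms, plus noise
    return 50 + 10 * sum(config.values()) + random.uniform(-1, 1)

random.seed(0)
best = {"cache_warmup": True, "plugin_scan": True, "integrity_check": True}
best_score = startup_time(best)

for _ in range(50):
    candidate = dict(best)
    feature = random.choice(list(candidate))
    candidate[feature] = not candidate[feature]   # random mutation
    score = startup_time(candidate)
    if score < best_score:                        # greedy accept
        best, best_score = candidate, score

print(best, round(best_score, 1))
# Quality, correctness, and complexity are invisible to the evaluation
# function, so the loop happily turns features off to win the metric.
```

The loop does exactly what it’s told: it makes the number go down, including by disabling the integrity check, because nothing in the evaluation function says it shouldn’t.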
The point is: let the agent do the boring stuff, the stuff that won’t teach you anything new, or try out different things you’d otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation. Yes, sure, you can also use an agent for that final step.
And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you’re actually building and why. Give yourself an opportunity to say, fuck no, we don’t need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.
Anything that defines the gestalt of your system, that is architecture, API, and so on, write it by hand. Maybe use tab completion for some nostalgic feels. Or do some pair programming with your agent. Be in the code. Because the simple act of having to write the thing or seeing it being built up step by step introduces friction that allows you to better understand what you want to build and how the system “feels”. This is where your experience and taste come in, something the current SOTA models simply cannot yet replace. And slowing the fuck down and suffering some friction is what allows you to learn and grow.
The end result will be systems and codebases that continue to be maintainable, at least as maintainable as our old systems before agents. Yes, those were not perfect either. Your users will thank you, as your product now sparks joy instead of slop. You’ll build fewer features, but the right ones. Learning to say no is a feature in itself.
You can sleep well knowing that you still have an idea what the fuck is going on, and that you have agency. Your understanding allows you to fix the recall problem of agentic search, leading to better clanker outputs that need less massaging. And if shit hits the fan, you are able to go in and fix it. Or if your initial design has been suboptimal, you understand why it’s suboptimal, and how to refactor it into something better. With or without an agent, don’t fucking care.
All of this requires discipline and agency.
All of this requires humans.
...
Read the original on mariozechner.at »
Microsoft just announced a 7-point plan to fix Windows 11, and the tech press is treating it like a redemption arc. Pavan Davuluri, the Windows president, admitted in January 2026 that “Windows 11 had gone off track” and said Microsoft was entering a mode called “swarming” where engineers would be pulled off new features to fix existing problems.
I saw this headline and my first thought was: it’s like being in an abusive relationship. They beat you, then show up with flowers saying they’ve changed. And everyone around you says “see, they’re getting better.” But the bruises are still there and the apology only covers the hits people noticed.
I want to walk through what Microsoft actually did to Windows 11 over the past four years, because this “fix” announcement only makes sense when you see the full damage list and realize that the worst offenses aren’t even part of the repair plan.
The Copilot invasion started September 26, 2023, when Microsoft pushed their AI chatbot into Windows 11 ahead of the formal 23H2 release. The icon appeared between your Start menu and system tray, you couldn’t move it, you couldn’t remove it through normal settings, and it hijacked the Win+C keyboard shortcut. Over the next two years, Copilot buttons metastasized into Snipping Tool, Photos, Notepad, Widgets, File Explorer context menus, Start menu search, and system Settings. Microsoft even planned to force-install the Microsoft 365 Copilot app directly onto Start menus of “eligible PCs.” The new plan promises to remove all of that. They want credit for pulling their hand out of your pocket.
On April 24, 2024, Microsoft shipped update KB5036980, which injected advertisements into the Windows 11 Start menu’s “Recommended” section. These showed up labeled “Promoted” and pushed apps like Opera browser and some password manager nobody asked for. And the Start menu was just one surface, they also placed ads on the lock screen, in the Settings homepage hawking Game Pass subscriptions, inside File Explorer pushing OneDrive, and through “tip” notifications that were thinly veiled product pitches. The “fix” promises “fewer ads.” Fewer. The operating system you paid $139 for at retail should have exactly zero ads, and the fact that “fewer” is supposed to impress anyone shows how thoroughly Microsoft has lowered the bar.
The privacy angle is where this gets dangerous. When Windows 11 launched in October 2021, Home edition required a Microsoft account during setup. By October 2025, Microsoft had systematically hunted down and killed every single workaround for creating a local account, the `oobe\bypassnro` command, the BypassNRO registry toggle, the `ms-cxh:localonly` trick, even the old fake email method. Amanda Langowski from Microsoft stated it plainly: they were “removing known mechanisms for creating a local account in the Windows Setup experience.”
A Microsoft account means your identity is tied to your OS from first boot. Your activity, your app usage, your browsing through Edge, your files through OneDrive, all funneled into a profile Microsoft controls. And this particular abuse is nowhere in the 7-point fix plan.
OneDrive got the same treatment. Microsoft silently changed Windows 11 setup in 2024 so that OneDrive folder backup enables automatically with no consent dialog, syncing your Desktop, Documents, Pictures, Music, and Videos to Microsoft’s cloud. When people discovered this and tried to turn it off, their files disappeared from their local machines, because OneDrive had moved them; ownership of your personal files had effectively been transferred to a cloud service without asking. Author Jason Pargin went viral describing how OneDrive activated itself, moved his files, then started deleting them when he hit the free 5GB storage limit. Microsoft’s response to this was silence. Also not in the fix plan.
Windows Recall is worth lingering on. Announced May 2024, it’s an AI feature that screenshots everything on your screen every few seconds and makes it searchable. Security researcher Kevin Beaumont demonstrated that the entire Recall database was stored in plaintext in an AppData folder where any malware could extract it. Bank numbers, Social Security numbers, passwords, all sitting in an unencrypted SQLite database.
The UK’s Information Commissioner’s Office got involved. Microsoft delayed it, made it opt-in, added encryption, and quietly relaunched it for Insiders in November 2024. They built a surveillance feature, shipped it broken, got caught, and called the patch “responding to feedback.”
But the abuse pattern goes back way further than Windows 11. In 2015 and 2016, Microsoft ran the GWX (Get Windows 10) campaign, full-screen nag dialogs that pushed Windows 10 upgrades on Windows 7 and 8 users. In May 2016, they changed the behavior of the red X button so that clicking it, which for decades had meant “close” or “cancel”, instead scheduled the Windows 10 upgrade. Microsoft’s own security advice told users to close suspicious dialogs using the X button, and they weaponized that trained behavior against their own customers. A woman named Teri Goldstein sued after the forced upgrade bricked her travel agency PC and won $10,000. Microsoft appealed, then dropped the appeal and paid. They eventually admitted they “went too far.”
And right now, Microsoft is about to force 240 million PCs into the landfill. Windows 10 hit end of life on October 14, 2025, and Windows 11 requires TPM 2.0, specific CPU generations, UEFI Secure Boot, hardware requirements that excluded roughly 20% of all PCs worldwide. Perfectly functional machines, rendered “obsolete” by arbitrary software restrictions. If you want to keep getting security patches on Windows 10, Microsoft will charge you $30 per year, paying for patches to an operating system you already bought a license for. Enterprise customers pay $61 per device for Year 1, $122 for Year 2, and $244 for Year 3, with the price doubling each year.
Edge is its own disaster. Mozilla commissioned an independent report titled “Over the Edge” that documented specific dark patterns including confirmshaming (pop-ups implying you’re “shopping in a dumb way” if you don’t use Edge), disguised ads injected into Google.com and the Chrome Web Store, and default browser settings that hijack back to Edge without notification. Certain Windows web links still force-open in Edge regardless of your default browser setting. Despite all this manipulation, Edge holds just 5.35% global market share. Even with the full weight of an operating system monopoly forcing their browser on people, almost nobody chooses to use it.
And the telemetry question. On Windows 11 Home and Pro, you cannot fully disable telemetry. Setting `AllowTelemetry` to 0 in the registry on non-Enterprise editions gets silently overridden back to 1. Only Enterprise and Education editions can actually turn it off. The operating system you paid for reports data about you to Microsoft, and the setting to stop it is a lie on consumer editions. Also not in the fix plan.
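For the curious, the policy in question lives in a well-known registry key. A sketch of how to inspect and set it from an elevated Command Prompt (the key and value names are the documented `DataCollection` policy settings; the behavior described is what reporting on consumer editions has shown):

```shell
:: Check the telemetry policy value.
:: 0 = Security (honored on Enterprise/Education only), 1 = Basic/Required, 3 = Full.
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection" /v AllowTelemetry

:: Setting it to 0 only "sticks" on Enterprise/Education; on Home and Pro
:: the effective level is silently raised back to Required.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection" /v AllowTelemetry /t REG_DWORD /d 0 /f
```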
I haven’t even mentioned the EU fining Microsoft over 2.2 billion euros across multiple antitrust rulings, including 561 million euros specifically for breaking its browser-ballot commitment after a Windows 7 update silently removed the choice screen for 14 months, affecting 15 million users. That was the first time the EU fined a company for violating a “commitment decision.” Or the _NSAKEY controversy from 1999, when a second crypto key literally labeled `_NSAKEY` was found embedded in Windows NT. Or the time in August 2024 when a Microsoft update bricked Linux dual-boot systems across Ubuntu, Mint, and other distros, and it took 9 months to fully fix.
Ok so here’s the table that tells the whole story:
The bottom four rows are the ones that matter. The privacy-hostile changes, the forced Microsoft accounts, the telemetry that lies about being disabled, OneDrive hijacking your files, the pre-installed garbage, none of that is part of the fix plan. Microsoft’s “swarming” effort targets the most visible UI annoyances, the ones that generate bad headlines. Data collection, vendor lock-in, forced accounts, those stay because those are the revenue model.
Microsoft spent four years deliberately degrading an operating system that people paid $139 or more for, and now they’re announcing the removal of their own damage as if it’s a gift. The “fix” is them taking their foot off your neck and expecting applause. The ads should have never been there, the Copilot buttons should have never been forced, and the taskbar should have never been crippled in the first place. And the things they’re choosing to keep, the telemetry, the forced accounts, the data harvesting, those are the real product, because at this point, you are.
...
Read the original on www.sambent.com »
...
Read the original on rpastro.square.site »
For various reasons, I have decided to move as many services and subscriptions as possible from non-EU countries to the EU or to switch to European service providers. The reasons for this are the current global political situation and improved data protection. I don’t want to go into the first point any further, but the second should be immediately obvious, since the EU currently has the most user-friendly laws when it comes to data protection. Below, I will list both the old and new service providers; this is not an advertisement, but simply the result of my research, which was aimed at achieving the same or better quality at affordable prices.
I would call this post an interim report, and I will expand on it if I end up migrating more services.
In my opinion, Fastmail is one of the best email providers. In all the years I’ve had my email accounts there, I’ve never had any problems. I paid 10 euros a month for two accounts, could use an unlimited number of my own domains, and could not only set up catch-all addresses but also send emails from any email address I wanted. This is important for my email setup. The calendar is also solid and was used within the family. All of this was also available in a well-designed Android app. Finding a European alternative that offers all of this proved difficult. First, I tried mailbox.org, which I can generally recommend without reservation. Unfortunately, you can’t send emails from any address on your own domain without a workaround, so the search continued. Eventually, I landed on Uberspace. This “pay what you want” provider offers a shell account, web hosting, email hosting, and more at fair prices. In addition, you can use as many of your own domains as you like for both web and email, and send emails from any sender address. There isn’t a dedicated app, which is why I now use Thunderbird for Android and am very satisfied with it.
Uberspace doesn’t offer a built-in calendar solution. So I tried installing various CalDAV servers, but none of them really convinced me. In the end, I simply installed NextCloud on my Uberspace Asteroid, which has CalDAV and CardDAV built in. On my desktop, I use Thunderbird as a client; on Android, I use DAVx5 and Fossil Calendar. It works great, even if NextCloud does come with some overhead. In return, I can now easily share files with others and, in theory, also use NextCloud’s online office functionality.
Now that I’m already using Uberspace for my email and calendar, I was able to host this website there as well. I previously had a VPS with Hetzner for this purpose, which I no longer need. The only minor hurdle was that I use SSI on this site to manage the header centrally. I had previously used Nginx, but Uberspace hosts on Apache, where the SSI implementation is handled slightly differently. However, adapting my HTML code was quite simple, so I was able to quickly migrate the site to Uberspace.
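For reference, most of that adaptation is configuration rather than HTML: the include directive itself is the same `<!--#include -->` syntax in both servers, but Apache needs SSI explicitly enabled per directory. A minimal `.htaccess` sketch for Apache shared hosting, assuming the host permits `Options` overrides (Nginx enables the same directive with `ssi on;` in the server block instead):

```apache
# Enable server-side includes for plain .html files in this directory
Options +Includes
AddOutputFilter INCLUDES .html
```

With that in place, a page can pull in a shared header via `<!--#include virtual="/header.html" -->` (the filename here is an illustration, not the author’s actual file).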
For a long time, I was a satisfied Namecheap customer. They offer good prices, a wide selection of available domains, their DNS management has everything you need, and their support team has helped me quickly on several occasions. But now it was time to look for a comparable provider in the EU. In the end, I settled on hosting.de. Some of the reasons were the prices, reviews, the location in Germany, and the availability of .is domains. So far, everything has been running smoothly; support helped me quickly and competently with one issue; and while prices for non-German domains are slightly higher, they’re still within an acceptable range.
At some point, pretty much everyone had their code on GitHub (or still does). I was no exception, though I had also hosted my own Gitea instance. Eventually, I got tired of that too and migrated all my Git repositories to codeberg.org. Codeberg is a German-based nonprofit organization, and it’s hard to imagine going wrong with this choice.
No changes here. I’ve always been a happy Mullvad customer. For 5 euros a month, I pay a Swedish company that has proven it doesn’t log any data and doesn’t even require me to create an account. No subscription traps, no weird Black Friday deals, no discounts: just 5 euros a month for a reliable, trustworthy service.
For many years, I used my work smartphone for personal use as well. I was more than satisfied with the Pixel 6, but understandably, I wasn’t allowed to install a custom ROM or use alternative app stores like F-Droid. That’s why I decided to buy a separate personal smartphone. I chose the Pixel 9a, which is supported by GrapheneOS. I still installed the Google Play Store so I could install a significant number of apps that are only available there. However, I can now use alternative app stores, which allows me to install and use apps like NewPipe. This way, I can enjoy YouTube ad-free and without an account.
For casual use on the couch, a Chromebook has been unbeatable for me so far. It’s affordable, the battery lasts forever, and it wakes up from sleep mode extremely quickly. To break away from Google here as well, I recently bought a cheap used 11-inch MacBook Air (A1465) to install MX Linux with Fluxbox on it and use it for browsing and watching videos. I haven’t had a chance to test it out yet, but I’m hoping it will be able to replace the Chromebook.
...
Read the original on rz01.org »
Tesla runs a bug bounty program that invites researchers to find security vulnerabilities in their vehicles. To participate, I needed the actual hardware, so I started looking for Tesla Model 3 parts on eBay. My goal was to get a Tesla car computer and touchscreen running on my desk, booting the car’s operating system.
The car computer consists of two parts, the MCU (Media Control Unit) and the autopilot computer (AP), layered on top of each other. In the car, the computer is located in front of the passenger seat, roughly behind the glovebox. The part itself is the size of an iPad and the thickness of a ~500-page book, and is covered in a water-cooled metal casing:
By searching for “Tesla Model 3 MCU” on eBay, I found quite a lot of results in the $200 - $300 USD price range. Looking at the listings, I found that many of these sellers are “salvaging” companies who buy crashed cars, take them apart, and list all parts for sale individually. Sometimes, they even include a photo of the original crashed car and a way to filter their listings for parts extracted from the same vehicle.
To boot the car up and interact with it, I needed a few more things:
* The display cable to connect them together
For the power supply, I went with an adjustable 0-30V model from Amazon. There was a 5 A and a 10 A version available; at the time, I figured it was safer to have some headroom and went with the 10 A version. That turned out to be a very good decision, as the full setup could consume up to 8 A at peak times. The Model 3 screens were surprisingly expensive on eBay; I assume that is because it is a popular part to replace. I found a pretty good deal for 175 USD.
The last and most difficult part to order was the cable which connects the MCU to the screen. I needed this because both the computer and the screen were being sold with their cables cut a few centimeters after the connector (interestingly, most sellers did that instead of just unplugging the cables).
This is when I discovered that Tesla publishes the wiring “Electrical Reference” for all of its cars publicly. On their service website, you can look up a specific car model, search for a component (such as the display), and it will show you exactly how the part should be wired up, what cables/connectors are used, and even what the different pins are responsible for inside a single connector:
Turns out the display uses a 6-pin cable (2 for 12V and ground, 4 for data) with a special Rosenberger 99K10D-1D5A5-D connector. I soon discovered that unless you are a car manufacturer ordering in bulk, there is no way you are buying a single Rosenberger cable like this. No eBay listings, nothing on AliExpress, essentially no search results at all.
After digging around a bit, I found that this cable is very similar to a more widely used automotive cable called “LVDS”, which is used to transfer video in BMW cars. At first sight, the connectors looked like a perfect match to my Rosenberger, so I placed an order:
The computer arrived first. To attempt to power it on, I looked up which pin of which connector I needed to attach 12V and ground to using the Tesla schematics & the few pictures online of people doing the same desk-MCU setup. Since the computer included the shortly cut cables, I was able to strip the relevant wires and attach the power supply’s clips to the right ones:
I saw a couple of red LEDs start flashing, and the computer started up! Since I had no screen yet, there were not many ways to interact with the car. Reading @lewurm’s previous research on GitHub I knew that, at least in older car versions, there was a network inside the car, with some components having their own webserver. I connected an Ethernet cable to the port next to the power connector and to my laptop.
This network does not have DHCP, so you have to set your IP address manually. The address has to be in 192.168.90.X/24 and should be higher than 192.168.90.105 to avoid conflicting with other hosts on the network. On Reddit, I found the contents of an older /etc/hosts file from a car, which shows the hosts normally associated with specific IPs:
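On Linux, assigning yourself an address in that range is one command. A sketch, assuming the car is plugged into `eth0` and you pick `.110` as your host address:

```shell
# Static address in the car's subnet, above .105 to stay clear of its own hosts
sudo ip addr add 192.168.90.110/24 dev eth0
ip -br addr show eth0   # confirm the address is assigned
```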
@lewurm’s blog mentioned that an SSH server on port 22 and a webserver on port 8080 were open on 192.168.90.100, the MCU. Was this still the case on newer models? Yes!
I had already found 2 services to explore on the MCU:
* An SSH server which states “SSH allowed: vehicle parked” - quite funny given the circumstances
This SSH server requires specially signed SSH keys which only Tesla is supposed to be able to generate.
Interestingly, Tesla offers a “Root access program” on their bug bounty program. Researchers who find at least one valid “rooting” vulnerability will receive a permanent SSH certificate for their own car, allowing them to log in as root and continue their research further. — A nice perk, as it is much easier to find additional vulnerabilities once you are on the inside.
* A REST-like API on :8080 which returned a history of “tasks”
This service is called “ODIN” (On-Board Diagnostic Interface Network), and is intentionally exposed to be used by Tesla’s diagnostics tool “Toolbox”.
Around this time, I also removed the metal shielding to see exactly what the boards look like inside. You can see the two different boards which were stacked on top of each other:
Once the screen and the BMW LVDS cable arrived, it unfortunately became clear that the connector is not going to fit. The BMW connector was much thicker on the sides and it was not possible to plug it into the screen. This led to some super sketchy improvised attempts to strip the two original “tail” cables from the MCU and the screen and connect the individual wires together. The wires were really sensitive and thin. The setup worked for a couple of seconds, but caused wire debris to fall on the PCB and short it, burning one of the power controller chips:
It was extremely hard to find the name/model of the chip that got burned, especially since part of the text printed on it had become unreadable due to the damage. To be able to continue with the project, I had to order a whole other car computer.
In the meantime, my friend Yasser (@n3r0li) somehow pulled off the impossible and identified it as the “MAX16932CATIS/V+T” step-down controller, responsible for converting power down to lower voltages. We ordered the chip and took the board to a local PCB repair shop, where they successfully replaced it and fixed the MCU. Now I had two computers to work with.
So I really did need that Rosenberger cable, there was no getting around it.
After having no luck finding it online and even visiting a Tesla service center in London (an odd encounter, to say the least), I had to accept what I had been trying to avoid: buying an entire Dashboard Wiring Harness.
Back in the Tesla Electrical Reference, in addition to the connectors, you can find every part number. Looking up the cable that connects the MCU to the screen gives the number 1067960-XX-E. Searching for it on eBay brings up this monstrosity:
Turns out that actual cars don’t have individual cables. Instead, they have these big “looms”, which bundle many cables from a nearby area into a single harness. This is why I could not find the individual cable earlier: they simply don’t manufacture it. Unfortunately, I had no choice but to buy the entire loom for 80 USD.
Despite how bulky it was, the loom worked perfectly. The car booted, the touch screen started up, and I had a working car computer on my desk, running the car’s operating system!
Having the system running, I can now start playing with the user interface, interacting with the exposed network interfaces, exploring the CAN buses, and perhaps even attempting to extract the firmware.
...
Read the original on bugs.xdavidhu.me »
Last year, I visited my grandmother’s house for the first time after the pandemic and came across a cupboard full of loose old photos. I counted 1,351 of them spanning all the way from my grandparents in their early 20s, my mom as a baby, to me in middle school, just around the time when we got our first smartphone and all photos since then were backed up online.
Everything was all over the place so I spent some time going through them individually and organizing them into groups. Some of the initial groups were based on the physical attributes of the photograph like similar aspect ratios or film stock. For example, there was a group of black/white 32mm square pictures that were taken around the time when my grandfather was in his mid 20s.
As I got done with grouping all of them, I was able to see flashes of stories in my head, but they were ephemeral and fragile. For instance, there was a group of photos that looked like it was taken during my grandparents’ wedding but I didn’t know the chronological order they were taken because EXIF metadata didn’t exist around that time.
So I sat down with my grandmother and asked her to reorder the photos and tell me everything she could remember about her wedding. Her face lit up as she narrated the backstory behind the occasion, going from photo to photo, resurfacing details that had been dormant for decades. I wrote everything down, recorded the names of people in some of the photos, some of whom I recognized as younger versions of my uncles and aunts.
After the “interview”, I had multiple pages of notes connecting the photos to events that happened 50 years ago. Since the account was historical, as an inside joke I wanted to see if I could clean it up and present it as a page on Wikipedia so I could print it and give it to her. So I cloned MediaWiki, spun up a local instance, and began my editorial work. I used the 2011 Royal Wedding as reference and drafted a page starting with the classic infobox and the lead paragraph.
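Spinning up a local MediaWiki is quick these days; the official Docker image is probably the shortest route (container name and port are my choices, and a real setup would add volumes so the wiki survives container removal):

```shell
# Official MediaWiki image; the wiki comes up at http://localhost:8080
docker run --name family-wiki -p 8080:80 -d mediawiki
```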
I split up the rest of the content into sections and filled them with everything I could verify like dates, names, places, who sat where. I scanned all the photos and spent some time figuring out what to place where. For every photo placement, there was a follow up to include a descriptive caption too.
Whenever I mentioned a person, I linked them to an empty stub page. After I found out I could also link to the real Wikipedia, I was able to link things to real pages that provided wider context to things like venues, rituals, and the political climate around that time, like for instance a legal amendment that was relevant to the wedding ceremony.
In two evenings, I was able to document a full backstory for the photos into a neat article. These two evenings also made me realize just how powerful encyclopedia software is to record and preserve media and knowledge that would’ve otherwise been lost over time.
This was so much fun that I spent the following months writing pages to account for all the photos that needed to be stitched together.
I got help from r/genealogy about how to approach recording oral history and I was given resources to better conduct interviews, shoutout to u/stemmatis! I would get on calls with my grandmother and people in the family, ask them a couple of questions, and then write. It was also around this time that I began using audio transcription and language models to make the editorial process easier.
Over time, I managed to write a lot of pages connecting people to different life events. The encyclopedia format made it easy to connect dots I would have never found on my own, like discovering that one of the singers at my grandparents’ wedding was the same nurse who helped deliver me.
After finding all the stories behind the physical photos, I started to work on digital photos and videos that I had stored on Google Photos. The wonderful thing about digital photos is that they come with EXIF metadata that can reveal extra information like date, time, and sometimes geographical coordinates.
This time, without any interviews, I wanted to see if I could use a language model to create a page based on just browsing through the photos. As my first experiment, I created a folder with 625 photos of a family trip to Coorg back in 2012.
I pointed Claude Code at the directory and asked it to draft a wiki page by browsing through the images. I hinted at using ImageMagick to create contact sheets so it would help with browsing through multiple photos at once.
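A contact sheet is just a tiled grid of thumbnails, which ImageMagick can produce in one command. A sketch (tile size, labels, and output naming are my illustration, not necessarily what was actually run):

```shell
# Tile the trip photos into numbered contact sheets, 5x5 thumbnails each,
# labeled with filenames so the model can refer back to the originals
montage -label '%f' *.jpg -tile 5x5 -geometry 240x180+4+4 sheet_%d.jpg
```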
A few minutes and more than a couple of tokens later, it had created a compelling draft with a detailed account of everything we did during the trip, by time of day. The model had no location data to work with, just timestamps and visual content, but it was able to identify the places from the photos alone, including ones that I had forgotten by now. It even picked up the modes of transportation we used to get between places just from what it could see.
After I had clarified who some of the people in the pictures were, it went on to identify them automatically in the captions. Now that I had a detailed outline ready, the page still only had content based on the available data, so to fill in the gaps I shared a list of anecdotes from my point of view and the model inserted them into places where the narrative called for them.
The Coorg trip only had photos to work with. My trip to Mexico City in 2022 had a lot more. I had taken 291 photos and 343 videos with an iPhone 12 Pro that included geographical coordinates as part of the EXIF metadata.
On top of that, I exported my location timeline from Google Maps, my Uber trips, my bank transactions, and Shazam history. I would ask Claude Code to start with the photos and then gradually give it access to the different data exports.
Here are some of the things it did across multiple runs:
* It cross-referenced my bank transactions with location data to ascertain the restaurants I went to.
* Some of the photos and videos showed me in attendance at a soccer match; however, it was unknown which teams were playing. The model looked up my bank transactions and found a Ticketmaster invoice with information about the teams and the name of the tournament.
* It looked up my Uber trips to figure out travel times and exact pickup and drop-off locations.
* It used my Shazam tracks to write about the kinds of songs that were playing at a place, like Cuban songs at a Cuban restaurant.
* In a follow-up, I mentioned remembering an evening dinner with a guitarist playing in the background. It filtered my media to evening captures, found a frame in a video with the guitarist, uploaded it, and referenced the moment in the page.
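The cross-referencing described above is essentially a nearest-timestamp join across exports. A minimal sketch in Python, with invented data standing in for the real transaction and location files:

```python
from datetime import datetime, timedelta

# Invented mini-exports: card transactions and location-history pings.
# Real exports would be parsed from CSV/JSON, but the join is the same.
transactions = [
    {"time": datetime(2022, 6, 3, 20, 15), "merchant": "TICKETMASTER MX"},
    {"time": datetime(2022, 6, 4, 13, 40), "merchant": "CONTRAMAR"},
]
pings = [
    {"time": datetime(2022, 6, 3, 20, 10), "place": "Estadio Azteca"},
    {"time": datetime(2022, 6, 4, 13, 35), "place": "Roma Norte"},
]

def nearest_ping(tx, pings, window=timedelta(hours=1)):
    """Return the ping closest in time to a transaction, or None
    if nothing falls within the matching window."""
    best = min(pings, key=lambda p: abs(p["time"] - tx["time"]))
    return best if abs(best["time"] - tx["time"]) <= window else None

matches = {}
for tx in transactions:
    hit = nearest_ping(tx, pings)
    if hit:
        matches[tx["merchant"]] = hit["place"]

print(matches)  # {'TICKETMASTER MX': 'Estadio Azteca', 'CONTRAMAR': 'Roma Norte'}
```

In practice the tolerance window and join keys vary per export, but the pattern is the same: normalize everything to timestamps, then match within a tolerance.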
The MediaWiki architecture worked well with the edits, since for every new data source it would make amendments like a real Wikipedia contributor would. I leaned heavily on features that already existed. Talk pages to clarify gaps and consolidate research notes, categories to group pages by theme, revision history to track how a page evolved as new data came in. I didn’t have to build any of this, it was all just there.
What started as me helping the model fill in gaps from my memory gradually inverted. The model was now surfacing things I had completely forgotten, cross-referencing details across data sources in ways I never would have done manually.
So I started pointing Claude Code at other data exports. My Facebook, Instagram, and WhatsApp archives held around 100k messages and a couple thousand voice notes exchanged with close friends over a decade.
The model traced the arc of our friendships through the messages, pulled out the life episodes we had talked each other through, and wove them into multiple pages that read like it was written by someone who knew us both. When I shared the pages with my friends, they wanted to read every single one.
This is when I realized I was no longer working on a family history project. What I had been building, page by page, was a personal encyclopedia. A structured, browsable, interconnected account of my life compiled from the data I already had lying around.
I’ve been working on this as whoami.wiki. It uses MediaWiki as its foundation, which turns out to be a great fit because language models already understand Wikipedia conventions deeply from their training data. You bring your data exports, and agents draft the pages for you to review.
A page about your grandmother’s wedding works the same way as a page about a royal wedding. A page about your best friend works the same way as a page about a public figure.
Oh and it’s genuinely fun! Putting together the encyclopedia felt like the early days of Facebook timeline, browsing through finished pages, following links between people and events, and stumbling on a detail I forgot.
But more than the technology, it’s the stories that stayed with me. Writing about my grandmother’s life surfaced things I’d never known, her years as a single mother, the decisions she had to make, the resilience it took. She was a stronger woman than I ever realized. Going through my friendships, I found moments of endearment that I had nearly forgotten, the days friends went the extra mile to be good to me. Seeing those moments laid out on a page made me pick up the phone and call a few of them. The encyclopedia didn’t just organize my data, it made me pay closer attention to the people in my life.
Today I’m releasing whoami.wiki as an open source project. The encyclopedia is yours, it runs on your machine, your data stays with you, and any model can read it. The project is early and I’m still figuring a lot of it out, but if this sounds interesting, you can get started here and tell me what you think!
...
Read the original on whoami.wiki »
Age verification is no longer a narrow mechanism for a few adult websites. Across Europe, the USA, the UK, Australia, and elsewhere, it is expanding into social media, messaging, gaming, search, and other mainstream services.
The common framing says these systems exist to protect children. That concern is real. Children are exposed to harmful content, manipulative recommendation systems, predatory behavior, and compulsive platform design. Even adults are manipulated, quite successfully, with techniques that can influence national elections.
But from a technical and political point of view, age verification is not just a child-safety feature. It is an access control architecture. It changes the default condition of the network from open access to permissioned access. Instead of receiving content unless something is blocked, users increasingly have to prove something about themselves before a service is allowed to respond.
That shift becomes clearer when age assurance moves down into the operating system. In some US proposals, the model is no longer a one-off check at a website. It becomes a persistent age-status layer maintained by the OS and exposed to applications through a system-level interface. At that point, age verification stops looking like a limited safeguard and starts looking like a general identity layer for the whole device.
This is no longer only a proprietary-platform story either. Even the Linux desktop stack is beginning to absorb this pressure. systemd has reportedly added an optional birthDate field to userdb in response to age-assurance laws. Regulation is beginning to shape the data model of personal computing, so that higher-level components can build age-aware behavior on top.
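To make the data-model point concrete: systemd user records are JSON documents, so an age attribute slots in as just another field. A purely illustrative sketch (the field names other than `birthDate` are from the standard userdb record format; the exact name and date format of the new field are taken from the reports and should be checked against the current spec):

```json
{
    "userName": "alice",
    "uid": 1001,
    "realName": "Alice Example",
    "homeDirectory": "/home/alice",
    "birthDate": "2014-05-01"
}
```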
Content moderation is about classification and filtering. It asks whether some content should be blocked, labeled, delayed, or handled differently. Guardianship is something else. It is the contextual responsibility of parents, teachers, schools, and other trusted adults to decide what is appropriate for a child, when exceptions make sense, and how supervision should evolve over time. Moderation is partly technical. Guardianship is relational, local, and situated in specific contexts.
I am also a parent. I understand the fear behind these proposals because I live with it too. Children do face real online risks. But recognizing that does not oblige us to accept any solution placed in front of us, least of all one that weakens privacy for everyone while shifting responsibility away from families, schools, and the people who actually have to guide children through digital life.
Age-verification laws collapse these two questions into one centralized answer. The result is predictable. A platform, browser vendor, app store, operating-system provider, or identity intermediary is asked to enforce what is presented as a child-protection policy, even though no centralized actor can replace the judgment of a parent, a school, or a local community.
It also fails on its own terms. The bypasses are obvious: VPNs, borrowed accounts, purchased credentials, fake credentials, and tricks against age-estimation systems. A control that is easy to evade but expensive to impose is not a serious compromise: it is an error or, one may say, a corporate data-grab.
The price is high and paid by everyone. More identity checks. More metadata. More logging. More vendors in the middle. More friction for people who lack the right device, the right papers, or the right digital skills. This is not a minor safety feature. It is a new control layer for the network.
And once that layer exists, it rarely stays confined to age. Infrastructure built for one attribute is easily reused for others: location, citizenship, legal status, platform policy, or whatever the next panic demands. This is how a limited check becomes a general gate.
Keep guardianship where it belongs: with parents, teachers, schools, and communities that can make contextual decisions, authorize exceptions, and adjust over time.
The operating system can help here, but only as a local policy surface under the control of users and guardians. It should not become a universal age-broadcasting layer for apps and remote services. That is the architectural line that matters.
Most of the harms invoked in this debate do not come from the mere existence of content online. They come from recommendation systems, dark patterns, addictive metrics, and business models that reward amplification without responsibility. If the goal is to protect minors, that is where regulation should bite.
If we are serious about reducing harm, we should stop asking how to identify everyone and start asking how to strengthen local control without turning the network into a checkpoint.
It is encouraging to see this article circulating widely, as it may contribute to a shift in how policymakers approach the issue. Given its growing visibility, I will keep a concise record here of the sequence of its coverage across media outlets, as well as pilot implementations across the world.
My first account on the problem emerged from a dialogue with Brave’s developer Kyle den Hartog at a cypherpunk retreat in Berlin. It was right after facilitating the digital identity track of the event that I published a rather technical piece on the topic.
Later, as age verification measures began to take hold, and in alignment with our community facilitators at the Dyne.org foundation, we decided to discontinue Discord as a channel for participation, as the platform moved to impose age verification.
Then the systemd dispute unfolded, and as founder of the project I became the first distro maintainer to state that we would not implement age verification in Devuan GNU/Linux, a Debian fork without systemd that, since 2016, has shown fewer bugs and security advisories. The tech journalist Lunduke picked it up immediately, setting off a wave of similar declarations across the distribution maintainer community.
That was the moment I realised the need to set out, in clear terms, the reasons behind this choice, and the grounds for a form of conscientious objection should such laws ever be enforced on our projects at Dyne.org. I then wrote a piece for Wired Italy, in Italian, my mother tongue, which is due to be published by the magazine in the coming days (link TBD).
While awaiting publication in Wired, I translated the article and published it here, in English, through our think and do tank. The piece you have just read quickly reached the front page of Hacker News, drawing nearly 400 comments from concerned readers and technical experts, a valuable body of material to build on.
As the discussion gains momentum, I am engaging with colleagues at the City of Lugano and the Plan₿ Foundation, where I have recently taken on the role of Scientific Director. The proposal is to move from analysis to action by establishing a city-wide pilot that explores technologies for locally managed guardianship, offering a constructive example for Switzerland.
We are approaching this with confidence and preparing for a rollout in Lugano within the next two years. At the same time, within the Swiss Confederation there are signs of a more grounded direction, as reflected in "The Internet Initiative", which places responsibility on Big Tech and brings together representatives from all major Swiss political parties.
My next steps include reaching out to contacts in Europe to help broaden the discussion and contribute to a more balanced public debate, in the face of sustained pressure from corporate lobbies advancing data-extractive measures.
And you can play a meaningful role as well: engage with the issue, bring your technical and political understanding to it, and help sustain attention so that those who make up the internet are not excluded from decisions that affect it. I hope this material and the reasoning behind it can be useful in that direction. Do let us at Dyne.org know if we can assist in making visible successful local pilots that implement child protection in a sound and proportionate way.
If you would like to read further, I have written more about the problems of the European Digital Identity implementation plans and architecture.
I’ve been working on privacy and identity technology for over a decade, primarily in projects funded by the European Commission.
Among my efforts are decodeproject.eu and reflowproject.eu, as well as various academic papers, including SD-BLS, recently published by IEEE. Additionally, with our team at The Forkbomb Company we have developed digital identity products such as DIDROOM.com and CREDIMI.io.
...
Read the original on news.dyne.org »