10 interesting stories served every morning and every evening.




1 1,402 shares, 49 trendiness

Protect Digital Privacy in the EU


🚨 The Conservatives (EPP) are attempting to force a new vote on Thursday (26th), seeking to reverse Parliament's NO on indiscriminate scanning. This is a direct attack on democracy and blatant disregard for your right to privacy. No means no. Take action now!

...

Read the original on fightchatcontrol.eu »

2 1,177 shares, 48 trendiness

Wine 11 rewrites how Linux runs Windows games at the kernel level, and the speed gains are massive

Linux gaming has come a long way. When Valve launched Proton back in 2018, it felt like a turning point, taking the Linux gaming experience from "technically possible if you're okay with a lot of pain" to something that more or less worked. Since then, we've seen incremental Wine releases, each one chipping away at compatibility issues and improving performance bit by bit. Wine 10, Wine 9, and so on; each one a collection of bug fixes and small improvements that kept the ecosystem moving forward.

Wine 11 is different. This isn't just another yearly release with a few hundred bug fixes and some compatibility tweaks. It does include a huge number of changes and bug fixes, but it also ships with NTSYNC support, a feature that has been years in the making and rewrites how Wine handles one of the most performance-sensitive operations in modern gaming. On top of that, the WoW64 architecture overhaul is finally complete, the Wayland driver has grown up a lot, and there's a big list of smaller improvements that collectively make this feel like an all-new project.

I should be clear: not every game is going to see a night-and-day difference. Some titles will run identically to before. But for the games that do benefit from these changes, the improvements range from noticeable to absurd. And because Proton, SteamOS, and every downstream project builds on top of Wine, those gains trickle down to everyone.

Everything up until now was a workaround

Esync and fsync worked, but they weren’t ideal

If you've spent any time tweaking Wine or Proton settings, you've probably encountered the terms "esync" and "fsync" before. Maybe you toggled them on in Lutris, or noticed them in Proton launch options, without fully understanding what they do. To understand why NTSYNC matters, you need to understand the problem these solutions were all trying to solve.

Windows games, especially modern ones, are heavily multi-threaded. Your CPU isn't just running one thing at a time; it's juggling rendering, physics calculations, asset streaming, audio processing, AI routines, and more, all in parallel across multiple threads. These threads need to coordinate with each other constantly. One thread might need to wait for another to finish loading a texture before it can render a frame. Another might need exclusive access to a shared resource so two threads don't try to modify it simultaneously.

Windows handles this coordination through what are called NT synchronization primitives: mutexes, semaphores, events, and the like. They're baked deep into the Windows kernel, and games rely on them heavily. The problem is that Linux doesn't have native equivalents that behave exactly the same way. Wine has historically had to emulate these synchronization mechanisms, and the way it did so was, to put it simply, not ideal.

The original approach involved making a round-trip RPC call to a dedicated "kernel" process called wineserver every single time a game needed to synchronize between threads. For a game making thousands of these calls per second, that overhead added up fast and became a bottleneck, one that manifested as subtle frame stutters, inconsistent frame pacing, and games that just felt a little bit off even when the raw FPS numbers looked fine.

Esync was the first attempt at a workaround. Developed by Elizabeth Figura at CodeWeavers, it used Linux's eventfd system call to handle synchronization without bouncing through the wineserver. It worked, and it helped, but it had quirks. Some distros ran into issues with file descriptor limits, since every synchronization object needed its own file descriptor, and games that opened a lot of them could hit the system's ceiling quite quickly.
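To make the mechanics concrete, here is a minimal sketch of the eventfd-as-semaphore idea (Python 3.10+ on Linux; illustrative only, not Wine's actual code). The one-fd-per-object cost is visible in the very first line, which is exactly where the file descriptor limit problem came from:

```python
# Minimal sketch of the esync idea: an eventfd in semaphore mode stands in
# for an NT semaphore, so threads can wait/post without a server round trip.
import os
import threading

efd = os.eventfd(0, os.EFD_SEMAPHORE)  # one fd per sync object -> fd limits

def worker() -> None:
    os.eventfd_read(efd)               # blocks until a "post" arrives
    print("worker released")

t = threading.Thread(target=worker)
t.start()
os.eventfd_write(efd, 1)               # post: release exactly one waiter
t.join()
os.close(efd)
```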

Fsync came next, using Linux futexes for even better performance. It was faster than esync in most cases, but it required out-of-tree kernel patches that never made it into the mainline Linux kernel or into upstream Wine out of the box. That meant you needed a custom or patched kernel to use it, which is fine for enthusiasts running CachyOS or Proton-GE, but not exactly accessible for the average user on Ubuntu or Fedora. Futex2, often referred to interchangeably with fsync, did make it into Linux kernel 5.16 as futex_waitv, but the original implementation of fsync isn't that: fsync used futex_wait_multiple, while futex2 provides futex_waitv. Applications such as Lutris still refer to it as fsync, though. It's still fsync in spirit, but it's not the original fsync.
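For the curious, futex_waitv can be poked at directly with a raw syscall. Below is a hedged sketch; the syscall number (449) and the FUTEX_32 flag value are taken from the x86-64 uapi headers, so verify them on your system. It demonstrates the "wait on several futex words at once" shape that fsync needed:

```python
# Sketch of futex_waitv(2): wait on several 32-bit futex words at once.
# Constants assumed from x86-64 uapi headers; check them on your kernel.
import ctypes
import os

SYS_futex_waitv = 449   # x86-64 syscall number (kernel 5.16+)
FUTEX_32 = 2            # waiter operates on a 32-bit futex word

class FutexWaitv(ctypes.Structure):
    _fields_ = [("val", ctypes.c_uint64),     # expected value at uaddr
                ("uaddr", ctypes.c_uint64),   # address of the futex word
                ("flags", ctypes.c_uint32),
                ("__reserved", ctypes.c_uint32)]

libc = ctypes.CDLL(None, use_errno=True)
words = (ctypes.c_uint32 * 2)(0, 0)           # two futex words, both 0

waiters = (FutexWaitv * 2)()
for i in range(2):
    waiters[i] = FutexWaitv(1, ctypes.addressof(words) + 4 * i, FUTEX_32, 0)

# Expected value (1) differs from actual (0), so the kernel returns
# immediately with EAGAIN instead of sleeping; a real waiter would match.
ret = libc.syscall(SYS_futex_waitv, waiters, 2, 0, None, 0)
if ret == -1:
    print("returned:", os.strerror(ctypes.get_errno()))  # expect EAGAIN
```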

Here's the thing about both esync and fsync: they were workarounds. Clever ones, but workarounds nonetheless. They approximated NT synchronization behavior using Linux primitives that weren't designed for the job, and certain edge cases simply couldn't be handled correctly. Operations like NtPulseEvent() and the "wait-for-all" mode in NtWaitForMultipleObjects() require direct control over the underlying wait queues in ways that user-space implementations just can't reliably provide.

Synchronization at the kernel level, rather than in user space

NTSYNC takes a completely different approach. Instead of trying to shoehorn Windows synchronization behavior into existing Linux primitives, it adds a new kernel driver that directly models the Windows NT synchronization object API. It exposes a /dev/ntsync device that Wine can talk to, and the kernel itself handles the coordination. No more round trips to wineserver, no more approximations; the synchronization happens in the kernel, which is where it should be. And it has proper queue management, proper event semantics, and proper atomic operations.

What makes this even better is that NTSYNC was developed by the same person who created esync and fsync in the first place. Elizabeth Figura has been working on this problem for years, iterating through multiple kernel patch revisions, presenting the work at the Linux Plumbers Conference in 2023, and pushing through multiple versions of the patch set before it was finally merged into the mainline Linux kernel in version 6.14.

The numbers are wild. In developer benchmarks, Dirt 3 went from 110.6 FPS to 860.7 FPS, an impressive 678% improvement. Resident Evil 2 jumped from 26 FPS to 77 FPS. Call of Juarez went from 99.8 FPS to 224.1 FPS. Tiny Tina's Wonderlands saw gains from 130 FPS to 360 FPS. And Call of Duty: Black Ops I is now actually playable on Linux. Those benchmarks compare Wine with NTSYNC against upstream vanilla Wine, which means no fsync or esync either; gamers who already use fsync are not going to see such a leap in most games.

The games that benefit most from NTSYNC are the ones that were struggling before: titles with heavy multi-threaded workloads where the synchronization overhead was a genuine bottleneck. For those games, the difference is night and day. And unlike fsync, NTSYNC is in the mainline kernel, meaning you don't need any custom patches or out-of-tree modules for it to work. Any distro shipping kernel 6.14 or later, which at this point includes Fedora 42, Ubuntu 25.04, and more recent releases, will support it. Valve has already added the NTSYNC kernel driver to the SteamOS 3.7.20 beta, loading the module by default, and an unofficial Proton fork, Proton GE, already has it enabled. When Valve's official Proton rebases on Wine 11, every Steam Deck owner gets this for free.
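Checking whether your own setup has it takes two lines; a small sketch (the /dev/ntsync device node appears once the kernel's ntsync module is loaded):

```python
# Quick availability check for the ntsync driver (mainline kernel 6.14+).
import os
import platform

print("kernel:", platform.release())
print("/dev/ntsync present:", os.path.exists("/dev/ntsync"))
```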

All of this is what makes NTSYNC such a big deal, as it's not simply a run-of-the-mill performance patch. Instead, it's something much bigger: this is the first time Wine's synchronization has been correct at the kernel level, implemented in the mainline Linux kernel, and available to everyone without jumping through hoops.

If NTSYNC is the headline feature, the completion of Wine's WoW64 architecture is the change that will quietly improve everyone's life going forward. On Windows, WoW64 (Windows 32-bit on Windows 64-bit) is the subsystem that lets 32-bit applications run on 64-bit systems. Wine has been working toward its own implementation of this for years, and Wine 11 marks the point where it's officially done.

What this means in practice is that you no longer need 32-bit system libraries installed on your 64-bit Linux system to run 32-bit Windows applications. Wine handles the translation internally, using a single unified binary that automatically detects whether it's dealing with a 32-bit or 64-bit executable. The old days of installing multilib packages, configuring ia32-libs, or fighting with 32-bit dependencies on your 64-bit distro are thankfully over.
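The detection itself is unglamorous: a Windows executable declares its target architecture in the PE header's machine field. A minimal sketch of that check, with a hypothetical file path:

```python
# Tell a 32-bit PE from a 64-bit one by reading the COFF "machine" field.
import struct

with open("Game.exe", "rb") as f:            # hypothetical path
    f.seek(0x3C)                             # e_lfanew: offset of PE header
    (pe_offset,) = struct.unpack("<I", f.read(4))
    f.seek(pe_offset + 4)                    # skip the "PE\0\0" signature
    (machine,) = struct.unpack("<H", f.read(2))

print({0x014C: "32-bit (x86)", 0x8664: "64-bit (x86-64)"}
      .get(machine, f"other: {machine:#06x}"))
```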

This might sound like a small quality-of-life improvement, but it's a massive piece of engineering work. The WoW64 mode now handles OpenGL memory mappings, SCSI pass-through, and even 16-bit application support. Yes, 16-bit! If you've got ancient Windows software from the 90s that you need to run for whatever reason, Wine 11 has you covered.

For gaming specifically, this matters because a surprising number of games, especially older ones, are 32-bit executables. Previously, getting these to work often meant wrestling with your distro's multilib setup, which varied in quality and ease depending on whether you were on Ubuntu, Arch, Fedora, or something else entirely. Now, Wine just handles it for you.

The rest of Wine 11 isn't just filler

There are more fixes, too

It's easy to let NTSYNC and WoW64 steal the spotlight, but Wine 11 is packed to the gills with other stuff worth talking about.

The Wayland driver has come a long way. Clipboard support now works bidirectionally between Wine and native Wayland applications, which is one of those things you don't think about until it doesn't work and it drives you mad. Drag-and-drop from Wayland apps into Wine windows is supported. Display mode changes are now emulated through compositor scaling, which means older games that try to switch to lower resolutions like 640x480 actually behave properly instead of leaving you with a broken desktop. If you've been holding off on switching from X11 to Wayland because of Wine compatibility concerns, Wine 11 removes a lot of those barriers.
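If you want to opt into the Wayland driver explicitly, Wine reads its graphics driver order from a registry value; here's a sketch of the commonly documented toggle (double-check against your Wine version's documentation):

```python
# Ask Wine to prefer its Wayland driver, falling back to X11, by setting
# the "Graphics" value under HKCU\Software\Wine\Drivers.
import subprocess

subprocess.run(
    ["wine", "reg", "add", r"HKCU\Software\Wine\Drivers",
     "/v", "Graphics", "/d", "wayland,x11", "/f"],
    check=True,
)
```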

On the graphics front, EGL is now the default backend for OpenGL rendering on X11, replacing the older GLX path. Vulkan support has been bumped to API version 1.4, and there's initial support for hardware-accelerated H.264 decoding through Direct3D 11 video APIs using Vulkan Video. That last one is particularly interesting for games and applications that use video playback for things like cutscenes or in-game streaming.

Force feedback support has been improved for racing wheels and flight sticks, which is great news if you're running a sim setup on Linux. Bluetooth has also received a new driver with BLE services and proper pairing support, MIDI soundfont handling has been improved for legacy game music, and there are a couple of minor extras like Zip64 compression support, Unicode 17.0.0 support, TWAIN 2.0 scanning for 64-bit apps, and IPv6 ping functionality.

Thread priority management has been improved on both Linux and macOS, which helps with multi-threaded application performance beyond just the NTSYNC gains. ARM64 devices can now simulate 4K page sizes on systems with larger native pages, which keeps the door open for Wine on Arm hardware. And with more Arm-based Linux devices showing up every year, that matters more than it used to.

Plus, there are a ton of bug fixes. Games like Nioh 2, StarCraft 2, The Witcher 2, Call of Duty: Black Ops II, Final Fantasy XI, and Battle.net all received specific compatibility fixes, on top of the broader improvements that lift performance and compatibility across significantly more titles.

Wine 11 is a big release, and not just because of NTSYNC. Sure, NTSYNC alone would have made it worth paying attention to, but combined with the WoW64 completion, the Wayland improvements, and the sheer volume of fixes, it's the most important Wine release since Proton made Linux gaming viable. Everything built on top of Wine, from Proton to Lutris to Bottles, gets better because of it. If you play games on Linux at all, Wine 11 is worth trying out.

...

Read the original on www.xda-developers.com »

3 1,162 shares, 46 trendiness

The open source AI coding agent

Zen gives you access to a handpicked set of AI models that OpenCode has tested and benchmarked specifically for coding agents. No need to worry about inconsistent performance and quality across providers; use validated models that work.

...

Read the original on opencode.ai »

4 1,015 shares, 41 trendiness

Thoughts on slowing the fuck down

It's been about a year since coding agents appeared on the scene that could actually build you full projects. There were precursors like Aider and early Cursor, but they were more assistant than agent. The new generation is enticing, and a lot of us have spent a lot of free time building all the projects we always wanted to build but never had time to.

And I think that's fine. Spending your free time building things is super enjoyable, and most of the time you don't really have to care about code quality and maintainability. It also gives you a way to learn a new tech stack if you want to.

During the Christmas break, both Anthropic and OpenAI handed out some freebies to hook people on their addictive slot machines. For many, it was the first time they experienced the magic of agentic coding. The fold's getting bigger.

Coding agents are now also being introduced to production codebases. After 12 months, we are beginning to see the effects of all that "progress". Here's my current view.

While all of this is anecdotal, it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services. And user interfaces have the weirdest fucking bugs that you'd think a QA team would catch. I grant that this has been the case for longer than agents have existed. But we seem to be accelerating.

We don't have access to the internals of companies. But every now and then something slips through to some news reporter. Like this supposed AI-caused outage at AWS. Which AWS immediately "corrected". Only to then follow up internally with a 90-day reset.

Satya Nadella, the CEO of Microsoft, has been going on about how much code is now being written by AI at Microsoft. While we don't have direct evidence, there sure is a feeling that Windows is going down the shitter. Microsoft itself seems to agree, based on this fine blog post.

Companies claiming 100% of their product's code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes: that is not the seal of quality they think it is. And it's definitely not good advertising for the fever dream of having your agents do all the work for you.

Through the grapevine you hear more and more people, from software companies small and large, saying they have agentically coded themselves into a corner. No code review, design decisions delegated to the agent, a gazillion features nobody asked for. That'll do it.

We have basically given up all discipline and agency for a sort of addiction, where your highest goal is to produce the largest amount of code in the shortest amount of time. Consequences be damned.

You're building an orchestration layer to command an army of autonomous agents. You installed Beads, completely oblivious to the fact that it's basically uninstallable malware. The internet told you to. That's how you should work or you're ngmi. You're ralphing the loop. Look, Anthropic built a C compiler with an agent swarm. It's kind of broken, but surely the next generation of LLMs can fix it. Oh my god, Cursor built a browser with a battalion of agents. Yes, of course, it's not really working and it needed a human to spin the wheel a little bit every now and then. But surely the next generation of LLMs will fix it. Pinky promise! Distribute, divide and conquer, autonomy, dark factories, software is solved in the next 6 months. SaaS is dead, my grandma just had her Claw build her own Shopify!

Now again, this can work for your side project barely anyone is using, including yourself. And hey, maybe there's somebody out there who can actually make this work for a software product that's not a steaming pile of garbage and is used by actual humans in anger.

If that's you, more power to you. But at least among my circle of peers I have yet to find evidence that this kind of shit works. Maybe we all have skill issues.

The problem with agents is that they make errors. Which is fine, humans also make errors. Maybe they are just correctness errors. Easy to identify and fix. Add a regression test on top for bonus points. Or maybe it's a code smell your linter doesn't catch. A useless method here, a type that doesn't make sense, duplicated code over there. On their own, these are harmless. A human will also do such booboos.

But clankers aren't humans. A human makes the same error a few times. Eventually they learn not to make it again. Either because someone starts screaming at them or because they're on a genuine learning path.

An agent has no such learning ability. At least not out of the box. It will continue making the same errors over and over again. Depending on the training data it might also come up with glorious new interpolations of different errors.

Now you can try to teach your agent. Tell it to not make that booboo again in your AGENTS.md. Concoct the most complex memory system and have it look up previous errors and best practices. And that can be effective for a specific category of errors. But it also requires you to actually observe the agent making that error.

There's a much more important difference between clanker and human. A human is a bottleneck. A human cannot shit out 20,000 lines of code in a few hours. Even if the human creates such booboos at high frequency, there's only so many booboos the human can introduce in a codebase per day. The booboos will compound at a very slow rate. Usually, if the booboo pain gets too big, the human, who hates pain, will spend some time fixing up the booboos. Or the human gets fired and someone else fixes up the booboos. So the pain goes away.

With an orchestrated army of agents, there is no bottleneck, no human pain. These tiny little harmless booboos suddenly compound at a rate that's unsustainable. You have removed yourself from the loop, so you don't even know that all the innocent booboos have formed a monster of a codebase. You only feel the pain when it's too late.

Then one day you turn around and want to add a new feature. But the architecture, which is largely booboos at this point, doesn't allow your army of agents to make the change in a functioning way. Or your users are screaming at you because something in the latest release broke and deleted some user data.

You realize you can no longer trust the codebase. Worse, you realize that the gazillions of unit, snapshot, and e2e tests you had your clankers write are equally untrustworthy. The only thing that's still a reliable measure of "does this work" is manually testing the product. Congrats, you fucked yourself (and your company).

You have zero fucking idea what's going on because you delegated all your agency to your agents. You let them run free, and they are merchants of complexity. They have seen many bad architectural decisions in their training data and throughout their RL training. You have told them to architect your application. Guess what the result is?

An immense amount of complexity, an amalgam of terrible cargo cult "industry best practices", that you didn't rein in before it was too late. But it's worse than that.

Your agents never see each other's runs, never get to see all of your codebase, never get to see all the decisions that were made by you or other agents before they make a change. As such, an agent's decisions are always local, which leads to the exact booboos described above. Immense amounts of code duplication, abstractions for abstractions' sake.

All of this compounds into an unrecoverable mess of complexity. The exact same mess you find in human-made enterprise codebases. Those arrive at that state because the pain is distributed over a massive amount of people. The individual suffering doesn't pass the threshold of "I need to fix this". The individual might not even have the means to fix things. And organizations have super high pain tolerance. But human-made enterprise codebases take years to get there. The organization slowly evolves along with the complexity in a demented kind of synergy and learns how to deal with it.

With agents and a team of 2 humans, you can get to that complexity within weeks.

So now you hope your agents can fix the mess, refactor it, make it pristine. But your agents can also no longer deal with it. Because the codebase and complexity are too big, and they only ever have a local view of the mess.

And I'm not just talking about context window size or long-context attention mechanisms failing at the sight of a 1 million lines of code monster. Those are obvious technical limitations. It's more devious than that.

Before your agent can try and help fix the mess, it needs to find all the code that needs changing and all existing code it can reuse. We call that agentic search. How the agent does that depends on the tools it has. You can give it a Bash tool so it can ripgrep its way through the codebase. You can give it some queryable codebase index, an LSP server, a vector database. In the end it doesn't matter much. The bigger the codebase, the lower the recall. Low recall means that your agent will, in fact, not find all the code it needs to do a good job.

This is also why those code smell booboos happen in the first place. The agent misses existing code, duplicates things, introduces inconsistencies. And then they blossom into a beautiful shit flower of complexity.

How do we avoid all of this?

Coding agents are sirens, luring you in with their speed of code generation and jagged intelligence, often completing a simple task with high quality at breakneck velocity. Things start falling apart when you think: "Oh golly, this thing is great. Computer, do my work!"

There's nothing wrong with delegating tasks to agents, obviously. Good agent tasks share a few properties: they can be scoped so the agent doesn't need to understand the full system. The loop can be closed, that is, the agent has a way to evaluate its own work. The output isn't mission critical, just some ad hoc tool or internal piece of software nobody's life or revenue depends on. Or you just need a rubber duck to bounce ideas against, which basically means bouncing your idea against the compressed wisdom of the internet and synthetic training data. If any of that applies, you found the perfect task for the agent, provided that you as the human are the final quality gate.

Karpathy's auto-research applied to speeding up the startup time of your app? Great! As long as you understand that the code it spits out is not production-ready at all. Auto-research works because you give it an evaluation function that lets the agent measure its work against some metric, like startup time or loss. But that evaluation function only captures a very narrow metric. The agent will happily ignore any metrics not captured by the evaluation function, such as code quality, complexity, or even correctness, if your evaluation function is foobar.
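In code, that blindness is easy to see. A toy evaluation function like the sketch below (names and flags hypothetical) scores a candidate build only on startup time, so the loop optimizing against it literally cannot perceive anything else:

```python
# A toy auto-research evaluation function: the agent's world is reduced
# to this one number. Code quality, complexity, and correctness outside
# the measured path simply don't exist for the optimizer.
import subprocess
import time

def evaluate(binary: str) -> float:
    """Score a candidate build by wall-clock startup time (lower is better)."""
    start = time.perf_counter()
    subprocess.run([binary, "--exit-after-startup"], check=True)  # hypothetical flag
    return time.perf_counter() - start
```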

The point is: let the agent do the boring stuff, the stuff that won't teach you anything new, or try out different things you'd otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation. Yes, sure, you can also use an agent for that final step.

And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.

Anything that defines the gestalt of your system, that is architecture, API, and so on, write it by hand. Maybe use tab completion for some nostalgic feels. Or do some pair programming with your agent. Be in the code. Because the simple act of having to write the thing or seeing it being built up step by step introduces friction that allows you to better understand what you want to build and how the system "feels". This is where your experience and taste come in, something the current SOTA models simply cannot yet replace. And slowing the fuck down and suffering some friction is what allows you to learn and grow.

The end result will be systems and codebases that continue to be maintainable, at least as maintainable as our old systems before agents. Yes, those were not perfect either. Your users will thank you, as your product now sparks joy instead of slop. You'll build fewer features, but the right ones. Learning to say no is a feature in itself.

You can sleep well knowing that you still have an idea what the fuck is going on, and that you have agency. Your understanding allows you to fix the recall problem of agentic search, leading to better clanker outputs that need less massaging. And if shit hits the fan, you are able to go in and fix it. Or if your initial design has been suboptimal, you understand why it's suboptimal, and how to refactor it into something better. With or without an agent, don't fucking care.

All of this requires discipline and agency.

All of this requires humans.

...

Read the original on mariozechner.at »

5 972 shares, 39 trendiness

Microsoft's "Fix" for Windows 11

Microsoft just announced a 7-point plan to fix Windows 11, and the tech press is treating it like a redemption arc. Pavan Davuluri, the Windows president, admitted in January 2026 that Windows 11 had gone "off track" and said Microsoft was entering a mode called "swarming" where engineers would be pulled off new features to fix existing problems.

I saw this headline and my first thought was: it's like being in an abusive relationship. They beat you, then show up with flowers saying they've changed. And everyone around you says "see, they're getting better." But the bruises are still there and the apology only covers the hits people noticed.

I want to walk through what Microsoft actually did to Windows 11 over the past four years, because this "fix" announcement only makes sense when you see the full damage list and realize that the worst offenses aren't even part of the repair plan.

The Copilot invasion started September 26, 2023, when Microsoft pushed their AI chatbot into Windows 11 ahead of the formal 23H2 release. The icon appeared between your Start menu and system tray, you couldn't move it, you couldn't remove it through normal settings, and it hijacked the Win+C keyboard shortcut. Over the next two years, Copilot buttons metastasized into Snipping Tool, Photos, Notepad, Widgets, File Explorer context menus, Start menu search, and system Settings. Microsoft even planned to force-install the Microsoft 365 Copilot app directly onto Start menus of "eligible PCs." The new plan promises to remove all of that. They want credit for pulling their hand out of your pocket.

On April 24, 2024, Microsoft shipped update KB5036980, which injected advertisements into the Windows 11 Start menu's "Recommended" section. These showed up labeled "Promoted" and pushed apps like Opera browser and some password manager nobody asked for. And the Start menu was just one surface; they also placed ads on the lock screen, in the Settings homepage hawking Game Pass subscriptions, inside File Explorer pushing OneDrive, and through "tip" notifications that were thinly veiled product pitches. The "fix" promises "fewer ads." Fewer. The operating system you paid $139 for at retail should have exactly zero ads, and the fact that "fewer" is supposed to impress anyone shows how thoroughly Microsoft has lowered the bar.

The privacy angle is where this gets dangerous. When Windows 11 launched in October 2021, Home edition required a Microsoft account during setup. By October 2025, Microsoft had systematically hunted down and killed every single workaround for creating a local account: the `oobe\bypassnro` command, the BypassNRO registry toggle, the `ms-cxh:localonly` trick, even the old fake-email method. Amanda Langowski from Microsoft stated it plainly: they were "removing known mechanisms for creating a local account in the Windows Setup experience."

A Microsoft account means your identity is tied to your OS from first boot. Your activity, your app usage, your browsing through Edge, your files through OneDrive, all funneled into a profile Microsoft controls. And this particular abuse is nowhere in the 7-point fix plan.

OneDrive got the same treatment. Microsoft silently changed Windows 11 setup in 2024 so that OneDrive folder backup enables automatically with no consent dialog, syncing your Desktop, Documents, Pictures, Music, and Videos to Microsoft's cloud. When people discovered this and tried to turn it off, their files disappeared from their local machine because OneDrive had moved them, transferring ownership of your personal files to its cloud service without asking. Author Jason Pargin went viral describing how OneDrive activated itself, moved his files, then started deleting them when he hit the free 5GB storage limit. Microsoft's response to this was silence. Also not in the fix plan.

Windows Recall is worth lingering on. Announced May 2024, it's an AI feature that screenshots everything on your screen every few seconds and makes it searchable. Security researcher Kevin Beaumont demonstrated that the entire Recall database was stored in plaintext in an AppData folder where any malware could extract it. Bank numbers, Social Security numbers, passwords, all sitting in an unencrypted SQLite database.

The UK's Information Commissioner's Office got involved. Microsoft delayed it, made it opt-in, added encryption, and quietly relaunched it for Insiders in November 2024. They built a surveillance feature, shipped it broken, got caught, and called the patch "responding to feedback."

But the abuse pattern goes back way further than Windows 11. In 2015 and 2016, Microsoft ran the GWX (Get Windows 10) campaign, full-screen nag dialogs that pushed Windows 10 upgrades on Windows 7 and 8 users. In May 2016, they changed the behavior of the red X button so that clicking it, which for decades had meant "close" or "cancel", instead scheduled the Windows 10 upgrade. Microsoft's own security advice told users to close suspicious dialogs using the X button, and they weaponized that trained behavior against their own customers. A woman named Teri Goldstein sued after the forced upgrade bricked her travel agency PC and won $10,000. Microsoft appealed, then dropped the appeal and paid. They eventually admitted they "went too far."

And right now, Microsoft is about to force 240 million PCs into the landfill. Windows 10 hit end of life on October 14, 2025, and Windows 11 requires TPM 2.0, specific CPU generations, and UEFI Secure Boot: hardware requirements that excluded roughly 20% of all PCs worldwide. Perfectly functional machines, rendered "obsolete" by arbitrary software restrictions. If you want to keep getting security patches on Windows 10, Microsoft will charge you $30 per year, paying for patches to an operating system you already bought a license for. Enterprise customers pay $61 per device for Year 1, $122 for Year 2, and $244 for Year 3, with the price doubling each year.

Edge is its own disaster. Mozilla commissioned an independent report titled "Over the Edge" that documented specific dark patterns, including confirmshaming (pop-ups implying you're "shopping in a dumb way" if you don't use Edge), disguised ads injected into Google.com and the Chrome Web Store, and default browser settings that hijack back to Edge without notification. Certain Windows web links still force-open in Edge regardless of your default browser setting. Despite all this manipulation, Edge holds just 5.35% global market share. Even with the full weight of an operating system monopoly forcing their browser on people, almost nobody chooses to use it.

And the telemetry question. On Windows 11 Home and Pro, you cannot fully disable telemetry. Setting `AllowTelemetry` to 0 in the registry on non-Enterprise editions gets silently overridden back to 1. Only Enterprise and Education editions can actually turn it off. The operating system you paid for reports data about you to Microsoft, and the setting to stop it is a lie on consumer editions. Also not in the fix plan.
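You can reproduce the claim yourself with Python's standard library winreg module (run elevated; the DataCollection policy key is the documented location for this value). A sketch:

```python
# Set AllowTelemetry to 0, then check it again later: on Home/Pro editions
# the claim is that it gets silently forced back to 1.
import winreg

KEY = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "AllowTelemetry", 0, winreg.REG_DWORD, 0)

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    value, _ = winreg.QueryValueEx(key, "AllowTelemetry")
    print("AllowTelemetry =", value)
```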

I haven't even mentioned the EU fining Microsoft over 2.2 billion euros across multiple antitrust rulings, including 561 million euros specifically for breaking a browser ballot promise: a Windows 7 update silently removed the choice screen for 14 months, affecting 15 million users, and it was the first time the EU fined a company for violating a "commitment decision." Or the _NSAKEY controversy from 1999, where a second crypto key labeled literally `_NSAKEY` was found embedded in Windows NT. Or the time in August 2024 when a Microsoft update bricked Linux dual-boot systems across Ubuntu, Mint, and other distros, and it took 9 months to fully fix.

Ok, so here's the table that tells the whole story:

The bottom four rows are the ones that matter. The privacy-hostile changes, the forced Microsoft accounts, the telemetry that lies about being disabled, OneDrive hijacking your files, the pre-installed garbage: none of that is part of the fix plan. Microsoft's "swarming" effort targets the most visible UI annoyances, the ones that generate bad headlines. Data collection, vendor lock-in, forced accounts, those stay, because those are the revenue model.

Microsoft spent four years deliberately degrading an operating system that people paid $139 or more for, and now they're announcing the removal of their own damage as if it's a gift. The "fix" is them taking their foot off your neck and expecting applause. The ads should never have been there, the Copilot buttons should never have been forced, and the taskbar should never have been crippled in the first place. And the things they're choosing to keep, the telemetry, the forced accounts, the data harvesting, those are the real product, because at this point, you are.

...

Read the original on www.sambent.com »

6 875 shares, 36 trendiness

Rod Prazeres Astrophotography in Project Hail Mary End Credits

...

Read the original on rpastro.square.site »

7 874 shares, 34 trendiness

rz01.org

For various reasons, I have decided to move as many services and subscriptions as possible from non-EU countries to the EU or to switch to European service providers. The reasons for this are the current global political situation and improved data protection. I don't want to go into the first point any further for various reasons, but the second point should be immediately obvious, since the EU currently has the most user-friendly laws when it comes to data protection. Below, I will list both the old and new service providers; this is not an advertisement, but simply the result of my research, which was aimed at achieving the same or better quality at affordable prices.

I would call this post an interim report, and I will expand on it if I end up migrating more services.

In my opinion, Fastmail is one of the best email providers. In all the years I've had my email accounts there, I've never had any problems. I paid 10 euros a month for two accounts, could use an unlimited number of my own domains, and could not only set up catch-all addresses but also send emails from any email address I wanted. This is important for my email setup. The calendar is also solid and was used within the family. All of this was also available in a well-designed Android app. Finding a European alternative that offers all of this proved difficult. First, I tried mailbox.org, which I can generally recommend without reservation. Unfortunately, you can't send emails from any address on your own domain without a workaround, so the search continued. Eventually, I landed on Uberspace. This "pay what you want" provider offers a shell account, web hosting, email hosting, and more at fair prices. In addition, you can use as many of your own domains as you like for both web and email, and send emails from any sender address. There isn't a dedicated app, which is why I now use Thunderbird for Android and am very satisfied with it.

Uberspace doesn't offer a built-in calendar solution. So I tried installing various CalDAV servers, but none of them really convinced me. In the end, I simply installed NextCloud on my Uberspace Asteroid, which has CalDAV and CardDAV built in. On my desktop, I use Thunderbird as a client; on Android, I use DAVx5 and Fossil Calendar. It works great, even if NextCloud does come with some overhead. In return, I can now easily share files with others and, in theory, also use NextCloud's online office functionality.

Now that I'm already using Uberspace for my email and calendar, I was able to host this website there as well. I previously had a VPS with Hetzner for this purpose, which I no longer need. The only minor hurdle was that I use SSI on this site to manage the header centrally. I had previously used Nginx, but Uberspace hosts on Apache, where the SSI implementation is handled slightly differently. However, adapting my HTML code was quite simple, so I was able to quickly migrate the site to Uberspace.

For a long time, I was a satisfied Namecheap customer. They offer good prices, a wide selection of available domains, their DNS management has everything you need, and their support team has helped me quickly on several occasions. But now it was time to look for a comparable provider in the EU. In the end, I settled on hosting.de. Some of the reasons were the prices, reviews, the location in Germany, and the availability of .is domains. So far, everything has been running smoothly; support helped me quickly and competently with one issue; and while prices for non-German domains are slightly higher, they're still within an acceptable range.

At some point, pretty much everyone had their code on GitHub (or still does). I was no exception, though I had also hosted my own Gitea instance. Eventually, I got tired of that too and migrated all my Git repositories to codeberg.org. Codeberg is a German-based nonprofit organization, and it's hard to imagine going wrong with this choice.

No changes here. I've always been a happy Mullvad customer. For 5 euros a month, I pay a Swedish company that has proven it doesn't log any data and doesn't even require me to create an account. No subscription traps, no weird Black Friday deals, no discounts: just 5 euros a month for a reliable, trustworthy service.

For many years, I used my work smartphone for personal use as well. I was more than satisfied with the Pixel 6, but understandably, I wasn't allowed to install a custom ROM or use alternative app stores like F-Droid. That's why I decided to buy a separate personal smartphone. I chose the Pixel 9a, which is supported by GrapheneOS. I still installed the Google Play Store so I could install a significant number of apps that are only available there. However, I can now use alternative app stores, which allows me to install and use apps like NewPipe. This way, I can enjoy YouTube ad-free and without an account.

For casual use on the couch, a Chromebook has been unbeatable for me so far. It's affordable, the battery lasts forever, and it wakes up from sleep mode extremely quickly. To break away from Google here as well, I recently bought a cheap used 11-inch MacBook Air (A1465) to install MX Linux with Fluxbox on it and use it for browsing and watching videos. I haven't had a chance to test it out yet, but I'm hoping it will be able to replace the Chromebook.

...

Read the original on rz01.org »

8 833 shares, 31 trendiness

Running Tesla Model 3's Computer on My Desk Using Parts From Crashed Cars

Tesla runs a bug bounty program that invites researchers to find security vulnerabilities in their vehicles. To participate, I needed the actual hardware, so I started looking for Tesla Model 3 parts on eBay. My goal was to get a Tesla car computer and touchscreen running on my desk, booting the car's operating system.

The car computer consists of two parts, the MCU (Media Control Unit) and the autopilot computer (AP), layered on top of each other. In the car, the computer is located in front of the passenger seat, roughly behind the glovebox. The part itself is the size of an iPad and the thickness of a ~500 page book, and is covered in a water-cooled metal casing:

By searching for Tesla Model 3 MCU on eBay, I found quite a lot of results in the $200 - $300 USD price range. Looking at the listings, I found that many of these sellers are "salvaging" companies who buy crashed cars, take them apart, and list all parts for sale individually. Sometimes, they even include a photo of the original crashed car and a way to filter their listings for parts extracted from the same vehicle.

To boot the car up and interact with it, I needed a few more things:

* A power supply

* The touchscreen display

* The display cable to connect them together

For the power supply, I went with an adjustable 0-30V model from Amazon. There was a 5-ampere and a 10A version available; at the time, I figured it was safer to have some headroom and went with the 10A version. It turned out to be a very good decision, as the full setup could consume up to 8A at peak times. The Model 3 screens were surprisingly expensive on eBay; I assume that is because it is a popular part to replace. I found a pretty good deal for 175 USD.

The last and most difficult part to order was the cable which connects the MCU to the screen. I needed this because both the computer and the screen were being sold with the cables cut a few centimeters after the connector (interestingly, most sellers did that instead of just unplugging the cables).

This is when I discovered that Tesla publishes the wiring ("Electrical Reference") for all of its cars publicly. On their service website, you can look up a specific car model, search for a component (such as the display), and it will show you exactly how the part should be wired up, what cables/connectors are used, and even what the different pins are responsible for inside a single connector:

Turns out the display uses a 6-pin cable (2 pins for 12V and ground, 4 for data) with a special Rosenberger 99K10D-1D5A5-D connector. I soon discovered that unless you are a car manufacturer ordering in bulk, there is no way you are buying a single Rosenberger cable like this. No eBay listings, nothing on AliExpress, essentially no search results at all.

After digging around a bit, I found that this cable is very similar to a more widely used automotive cable called LVDS, which is used to transfer video in BMW cars. At first sight, the connectors looked like a perfect match to my Rosenberger, so I placed an order:

The computer arrived first. To attempt to power it on, I looked up which pin of which connector I needed to attach 12V and ground to, using the Tesla schematics and the few pictures online of people doing the same desk-MCU setup. Since the computer included the shortly cut cables, I was able to strip the relevant wires and attach the power supply's clips to the right ones:

I saw a couple of red LEDs start flashing, and the computer started up! Since I had no screen yet, there were not many ways to interact with the car. Reading @lewurm's previous research on GitHub, I knew that, at least in older car versions, there was a network inside the car, with some components having their own webserver. I connected an Ethernet cable to the port next to the power connector and to my laptop.

This network does not have DHCP, so you have to manually set your IP address. The IP you select has to be 192.168.90.X/24, and should be higher than 192.168.90.105 to not conflict with other hosts on the network. On Reddit, I found the contents of an older /etc/hosts file from a car, which shows the hosts that are normally associated with specific IPs:

@lewurm's blog mentioned that SSH on port :22 and a webserver on :8080 were open on 192.168.90.100, the MCU. Was this still the case on newer models? Yes!
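Reproducing that check is straightforward once your interface has a manual address on the car's subnet (per the constraints above: 192.168.90.X/24, above .105). A small sketch:

```python
# Probe the MCU's known services from the laptop. There is no DHCP on this
# network, so the laptop needs a manual address like 192.168.90.123/24 first.
import socket

MCU = "192.168.90.100"
for port in (22, 8080):            # SSH and the ODIN web service
    with socket.socket() as s:
        s.settimeout(2)
        state = "open" if s.connect_ex((MCU, port)) == 0 else "closed"
        print(f"{MCU}:{port} {state}")
```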

I had already found 2 services to explore on the MCU:

* An SSH server which states "SSH allowed: vehicle parked" - quite funny given the circumstances

This SSH server requires specially signed SSH keys, which only Tesla is supposed to be able to generate.

Interestingly, Tesla offers a "Root access program" on their bug bounty program. Researchers who find at least one valid "rooting" vulnerability will receive a permanent SSH certificate for their own car, allowing them to log in as root and continue their research further. A nice perk, as it is much easier to find additional vulnerabilities once you are on the inside.

* A REST-like API on :8080 which returned a history of "tasks"

This service is called ODIN (On-Board Diagnostic Interface Network), and is intentionally exposed to be used by Tesla's diagnostics tool "Toolbox".

Around this time, I also removed the metal shielding to see exactly what the boards look like inside. You can see the two different boards which were stacked on top of each other:

Once the screen and the BMW LVDS cable arrived, it unfortunately became clear that the connector was not going to fit. The BMW connector was much thicker on the sides, and it was not possible to plug it into the screen. This led to some super sketchy improvised attempts to strip the two original "tail" cables from the MCU and the screen and connect the individual wires together. The wires were really sensitive and thin. The setup worked for a couple of seconds, but caused wire debris to fall on the PCB and short it, burning one of the power controller chips:

It was extremely hard to find the name/model of the chip that got burned, especially since part of the text printed on it had become unreadable due to the damage. To be able to continue with the project, I had to order a whole other car computer.

In the meantime, my friend Yasser (@n3r0li) somehow pulled off the impossible and identified it as the "MAX16932CATIS/V+T" step-down controller, responsible for converting power down to lower voltages. We ordered the chip and took the board to a local PCB repair shop, where they successfully replaced it and fixed the MCU. Now I had two computers to work with.

So I really did need that Rosenberger cable; there was no getting around it.

After having no luck finding it online and even visiting a Tesla service center in London (an odd encounter, to say the least), I had to accept what I had been trying to avoid: buying an entire Dashboard Wiring Harness.

Back in the Tesla Electrical Reference, in addition to the connectors, one can find every part number. Looking at the cable which connects the MCU to the screen, the part number 1067960-XX-E shows up. Searching for it on eBay brings up this monstrosity:

Turns out that actual cars don't have individual cables. Instead they have these big "looms", which bundle many cables from a nearby area into a single harness. This is the reason why I could not find the individual cable earlier. They simply don't manufacture it. Unfortunately, I had no other choice but to buy this entire loom for 80 USD.

Despite how bulky it was, the loom worked perfectly. The car booted, the touchscreen started up, and I had a working car computer on my desk, running the car's operating system!

Having the system running, I can now start playing with the user interface, interacting with the exposed network interfaces, exploring the CAN buses, and perhaps even attempting to extract the firmware.

...

Read the original on bugs.xdavidhu.me »

9 828 shares, 32 trendiness

Personal Encyclopedias — whoami.wiki

Last year, I visited my grandmother's house for the first time after the pandemic and came across a cupboard full of loose old photos. I counted 1,351 of them, spanning all the way from my grandparents in their early 20s, to my mom as a baby, to me in middle school, just around the time when we got our first smartphone and all photos since then were backed up online.

Everything was all over the place, so I spent some time going through them individually and organizing them into groups. Some of the initial groups were based on the physical attributes of the photograph, like similar aspect ratios or film stock. For example, there was a group of black-and-white 32mm square pictures that were taken around the time when my grandfather was in his mid 20s.

As I got done with grouping all of them, I was able to see flashes of stories in my head, but they were ephemeral and fragile. For instance, there was a group of photos that looked like it was taken during my grandparents' wedding, but I didn't know the chronological order they were taken in because EXIF metadata didn't exist around that time.

So I sat down with my grandmother and asked her to reorder the photos and tell me everything she could remember about her wedding. Her face lit up as she narrated the backstory behind the occasion, going from photo to photo, resurfacing details that had been dormant for decades. I wrote everything down and recorded the names of people in some of the photos, some of whom I recognized as younger versions of my uncles and aunts.

After the "interview", I had multiple pages of notes connecting the photos to events that happened 50 years ago. Since the account was historical, as an inside joke I wanted to see if I could clean it up and present it as a page on Wikipedia so I could print it and give it to her. So I cloned MediaWiki, spun up a local instance, and began my editorial work. I used the 2011 Royal Wedding as reference and drafted a page starting with the classic infobox and the lead paragraph.

I split up the rest of the content into sections and filled them with everything I could verify, like dates, names, places, who sat where. I scanned all the photos and spent some time figuring out what to place where. Every photo placement also called for a descriptive caption.

Whenever I mentioned a person, I linked them to an empty stub page. After I found out I could also link to the real Wikipedia, I was able to link things to real pages that provided wider context, like venues, rituals, and the political climate around that time, for instance a legal amendment that was relevant to the wedding ceremony.

In two evenings, I was able to document a full backstory for the photos in a neat article. These two evenings also made me realize just how powerful encyclopedia software is for recording and preserving media and knowledge that would've otherwise been lost over time.

This was so much fun that I spent the following months writing pages to account for all the photos that needed to be stitched together.

I got help from r/genealogy about how to approach recording oral history, and I was given resources to better conduct interviews, shoutout to u/stemmatis! I would get on calls with my grandmother and people in the family, ask them a couple of questions, and then write. It was also around this time that I began using audio transcription and language models to make the editorial process easier.

Over time, I managed to write a lot of pages connecting people to different life events. The encyclopedia format made it easy to connect dots I would have never found on my own, like discovering that one of the singers at my grandparents' wedding was the same nurse who helped deliver me.

After finding all the stories behind the physical photos, I started to work on digital photos and videos that I had stored on Google Photos. The wonderful thing about digital photos is that they come with EXIF metadata that can reveal extra information like date, time, and sometimes geographical coordinates.
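Pulling those fields out is a few lines with most EXIF libraries; here's a sketch using Pillow (file name hypothetical; the tag IDs are the standard EXIF ones):

```python
# Read timestamp and GPS coordinates from a photo's EXIF metadata.
from PIL import Image

exif = Image.open("IMG_0001.jpg").getexif()    # hypothetical file
exif_ifd = exif.get_ifd(0x8769)                # Exif sub-IFD
gps_ifd = exif.get_ifd(0x8825)                 # GPS sub-IFD

print("taken:", exif_ifd.get(0x9003))          # DateTimeOriginal
print("lat:", gps_ifd.get(2), gps_ifd.get(1))  # GPSLatitude, GPSLatitudeRef
print("lon:", gps_ifd.get(4), gps_ifd.get(3))  # GPSLongitude, GPSLongitudeRef
```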

This time, without any interviews, I wanted to see if I could use a language model to create a page based on just browsing through the photos. As my first experiment, I created a folder with 625 photos of a family trip to Coorg back in 2012.

I pointed Claude Code at the directory and asked it to draft a wiki page by browsing through the images. I hinted at using ImageMagick to create contact sheets so it would help with browsing through multiple photos at once.
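That contact-sheet step might look something like the sketch below, shelling out to ImageMagick's montage tool (directory and sheet names hypothetical):

```python
# Batch photos into 6x5 contact sheets so the model can "see" 30 at a time.
import glob
import subprocess

photos = sorted(glob.glob("coorg-2012/*.jpg"))   # hypothetical directory
for i in range(0, len(photos), 30):
    subprocess.run(
        ["montage", "-label", "%f",              # label each tile with its filename
         *photos[i:i + 30],
         "-tile", "6x5", "-geometry", "320x240+4+4",
         f"sheet_{i // 30:03d}.jpg"],
        check=True,
    )
```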

A few minutes and a couple of tokens later, it had created a compelling draft with a detailed account of everything we did during the trip, broken down by time of day. The model had no location data to work with, just timestamps and visual content, but it was able to identify the places from the photos alone, including ones I had since forgotten. It even picked up the modes of transportation we used to get between places just from what it could see.

Once I clarified who some of the people in the pictures were, it went on to identify them automatically in the captions. The page now had a detailed outline, but its content was still limited to what the data could show, so to fill in the gaps I shared a list of anecdotes from my point of view, and the model worked them in where the narrative called for them.

The Coorg trip only had pho­tos to work with. My trip to Mexico City in 2022 had a lot more. I had taken 291 pho­tos and 343 videos with an iPhone 12 Pro that in­cluded ge­o­graph­i­cal co­or­di­nates as part of the EXIF meta­data.

On top of that, I ex­ported my lo­ca­tion time­line from Google Maps, my Uber trips, my bank trans­ac­tions, and Shazam his­tory. I would ask Claude Code to start with the pho­tos and then grad­u­ally give it ac­cess to the dif­fer­ent data ex­ports.

Here are some of the things it did across mul­ti­ple runs:

It cross-ref­er­enced my bank trans­ac­tions with lo­ca­tion data to as­cer­tain the restau­rants I went to.

Some of the photos and videos showed me attending a soccer match, but it wasn’t clear which teams were playing. The model searched my bank transactions and found a Ticketmaster invoice naming the teams and the tournament.

It looked up my Uber trips to figure out travel times and the exact pickup and drop-off locations.

It used my Shazam tracks to write about the kinds of songs that were play­ing at a place, like Cuban songs at a Cuban restau­rant.

In a fol­low-up, I men­tioned re­mem­ber­ing an evening din­ner with a gui­tarist play­ing in the back­ground. It fil­tered my me­dia to evening cap­tures, found a frame in a video with the gui­tarist, up­loaded it, and ref­er­enced the mo­ment in the page.
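
Mechanically, much of this cross-referencing boils down to joining records from different exports by timestamp. Here is a minimal sketch of that kind of join, with made-up field names and values, since every export has its own schema:

    from datetime import datetime, timedelta

    # Simplified, hypothetical records; real exports need per-format parsing.
    transactions = [
        {"time": datetime(2022, 3, 12, 20, 41), "merchant": "TICKETMASTER MX"},
    ]
    timeline = [
        {"time": datetime(2022, 3, 12, 20, 30), "lat": 19.3029, "lon": -99.1505},
        {"time": datetime(2022, 3, 12, 22, 10), "lat": 19.4326, "lon": -99.1332},
    ]

    def nearest_fix(when, points, max_gap=timedelta(hours=1)):
        """Return the location point closest in time, or None if none is close enough."""
        best = min(points, key=lambda p: abs(p["time"] - when))
        return best if abs(best["time"] - when) <= max_gap else None

    for tx in transactions:
        fix = nearest_fix(tx["time"], timeline)
        if fix:
            print(f"{tx['merchant']} likely happened near ({fix['lat']}, {fix['lon']})")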

The MediaWiki architecture worked well with these edits, since for every new data source the model would make amendments the way a real Wikipedia contributor would. I leaned heavily on features that already existed: talk pages to clarify gaps and consolidate research notes, categories to group pages by theme, revision history to track how a page evolved as new data came in. I didn’t have to build any of this; it was all just there.

What started as me help­ing the model fill in gaps from my mem­ory grad­u­ally in­verted. The model was now sur­fac­ing things I had com­pletely for­got­ten, cross-ref­er­enc­ing de­tails across data sources in ways I never would have done man­u­ally.

So I started point­ing Claude Code at other data ex­ports. My Facebook, Instagram, and WhatsApp archives held around 100k mes­sages and a cou­ple thou­sand voice notes ex­changed with close friends over a decade.
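
Getting those archives into a usable shape is mostly parsing. Facebook and Instagram exports, for example, store each conversation as JSON; a sketch like the following (paths and field names can vary between export versions) flattens them into a single timeline:

    import json
    from datetime import datetime
    from pathlib import Path

    records = []
    for f in Path("messages/inbox").rglob("message_*.json"):  # typical export layout
        data = json.loads(f.read_text())
        for m in data.get("messages", []):
            records.append({
                "when": datetime.fromtimestamp(m["timestamp_ms"] / 1000),
                "who": m.get("sender_name"),
                "text": m.get("content", ""),
            })

    records.sort(key=lambda r: r["when"])
    print(f"{len(records)} messages from {records[0]['when']:%Y} to {records[-1]['when']:%Y}")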

The model traced the arc of our friendships through the messages, pulled out the life episodes we had talked each other through, and wove them into multiple pages that read like they were written by someone who knew us both. When I shared the pages with my friends, they wanted to read every single one.

This is when I re­al­ized I was no longer work­ing on a fam­ily his­tory pro­ject. What I had been build­ing, page by page, was a per­sonal en­cy­clo­pe­dia. A struc­tured, brows­able, in­ter­con­nected ac­count of my life com­piled from the data I al­ready had ly­ing around.

I’ve been work­ing on this as whoami.wiki. It uses MediaWiki as its foun­da­tion, which turns out to be a great fit be­cause lan­guage mod­els al­ready un­der­stand Wikipedia con­ven­tions deeply from their train­ing data. You bring your data ex­ports, and agents draft the pages for you to re­view.

A page about your grand­moth­er’s wed­ding works the same way as a page about a royal wed­ding. A page about your best friend works the same way as a page about a pub­lic fig­ure.

Oh, and it’s genuinely fun! Putting the encyclopedia together felt like the early days of the Facebook timeline: browsing through finished pages, following links between people and events, and stumbling on details I’d forgotten.

But more than the tech­nol­ogy, it’s the sto­ries that stayed with me. Writing about my grand­moth­er’s life sur­faced things I’d never known, her years as a sin­gle mother, the de­ci­sions she had to make, the re­silience it took. She was a stronger woman than I ever re­al­ized. Going through my friend­ships, I found mo­ments of en­dear­ment that I had nearly for­got­ten, the days friends went the ex­tra mile to be good to me. Seeing those mo­ments laid out on a page made me pick up the phone and call a few of them. The en­cy­clo­pe­dia did­n’t just or­ga­nize my data, it made me pay closer at­ten­tion to the peo­ple in my life.

Today I’m re­leas­ing whoami.wiki as an open source pro­ject. The en­cy­clo­pe­dia is yours, it runs on your ma­chine, your data stays with you, and any model can read it. The pro­ject is early and I’m still fig­ur­ing a lot of it out, but if this sounds in­ter­est­ing, you can get started here and tell me what you think!

...

Read the original on whoami.wiki »

10 801 shares, 31 trendiness

Do Not Turn Child Protection Into Internet Access Control

Age ver­i­fi­ca­tion is no longer a nar­row mech­a­nism for a few adult web­sites. Across Europe, the USA, the UK, Australia, and else­where, it is ex­pand­ing into so­cial me­dia, mes­sag­ing, gam­ing, search, and other main­stream ser­vices.

The common framing says these systems exist to protect children. That concern is real. Children are exposed to harmful content, manipulative recommendation systems, predatory behavior, and compulsive platform design. Even adults are manipulated, quite successfully, with techniques that can influence national elections.

But from a tech­ni­cal and po­lit­i­cal point of view, age ver­i­fi­ca­tion is not just a child-safety fea­ture. It is an ac­cess con­trol ar­chi­tec­ture. It changes the de­fault con­di­tion of the net­work from open ac­cess to per­mis­sioned ac­cess. Instead of re­ceiv­ing con­tent un­less some­thing is blocked, users in­creas­ingly have to prove some­thing about them­selves be­fore a ser­vice is al­lowed to re­spond.

That shift be­comes clearer when age as­sur­ance moves down into the op­er­at­ing sys­tem. In some US pro­pos­als, the model is no longer a one-off check at a web­site. It be­comes a per­sis­tent age-sta­tus layer main­tained by the OS and ex­posed to ap­pli­ca­tions through a sys­tem-level in­ter­face. At that point, age ver­i­fi­ca­tion stops look­ing like a lim­ited safe­guard and starts look­ing like a gen­eral iden­tity layer for the whole de­vice.

This is no longer only a pro­pri­etary-plat­form story ei­ther. Even the Linux desk­top stack is be­gin­ning to ab­sorb this pres­sure. sys­temd has re­port­edly added an op­tional birth­Date field to userdb in re­sponse to age-as­sur­ance laws. Regulation is be­gin­ning to shape the data model of per­sonal com­put­ing, so that higher-level com­po­nents can build age-aware be­hav­ior on top.
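
For concreteness, a systemd JSON user record carrying such a field might look roughly like the sketch below. The userName and realName fields are standard parts of the format; the birthDate field name follows the reports and should be read as illustrative, not authoritative:

    {
        "userName": "alice",
        "realName": "Alice Example",
        "birthDate": "2011-04-02"
    }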

Content mod­er­a­tion is about clas­si­fi­ca­tion and fil­ter­ing. It asks whether some con­tent should be blocked, la­beled, de­layed, or han­dled dif­fer­ently. Guardianship is some­thing else. It is the con­tex­tual re­spon­si­bil­ity of par­ents, teach­ers, schools, and other trusted adults to de­cide what is ap­pro­pri­ate for a child, when ex­cep­tions make sense, and how su­per­vi­sion should evolve over time. Moderation is partly tech­ni­cal. Guardianship is re­la­tional, lo­cal, and sit­u­ated in spe­cific con­texts.

I am also a par­ent. I un­der­stand the fear be­hind these pro­pos­als be­cause I live with it too. Children do face real on­line risks. But rec­og­niz­ing that does not oblige us to ac­cept any so­lu­tion placed in front of us, least of all one that weak­ens pri­vacy for every­one while shift­ing re­spon­si­bil­ity away from fam­i­lies, schools, and the peo­ple who ac­tu­ally have to guide chil­dren through dig­i­tal life.

Age-verification laws col­lapse these two ques­tions into one cen­tral­ized an­swer. The re­sult is pre­dictable. A plat­form, browser ven­dor, app store, op­er­at­ing-sys­tem provider, or iden­tity in­ter­me­di­ary is asked to en­force what is pre­sented as a child-pro­tec­tion pol­icy, even though no cen­tral­ized ac­tor can re­place the judg­ment of a par­ent, a school, or a lo­cal com­mu­nity.

It also fails on its own terms. The by­passes are ob­vi­ous: VPNs, bor­rowed ac­counts, pur­chased cre­den­tials, fake cre­den­tials, and tricks against age-es­ti­ma­tion sys­tems. A con­trol that is easy to evade but ex­pen­sive to im­pose is not a se­ri­ous com­pro­mise: it is an er­ror or, one may say, a cor­po­rate data-grab.

The price is high and paid by every­one. More iden­tity checks. More meta­data. More log­ging. More ven­dors in the mid­dle. More fric­tion for peo­ple who lack the right de­vice, the right pa­pers, or the right dig­i­tal skills. This is not a mi­nor safety fea­ture. It is a new con­trol layer for the net­work.

And once that layer ex­ists, it rarely stays con­fined to age. Infrastructure built for one at­tribute is eas­ily reused for oth­ers: lo­ca­tion, cit­i­zen­ship, le­gal sta­tus, plat­form pol­icy, or what­ever the next panic de­mands. This is how a lim­ited check be­comes a gen­eral gate.

Keep guardian­ship where it be­longs: with par­ents, teach­ers, schools, and com­mu­ni­ties that can make con­tex­tual de­ci­sions, au­tho­rize ex­cep­tions, and ad­just over time.

The op­er­at­ing sys­tem can help here, but only as a lo­cal pol­icy sur­face un­der the con­trol of users and guardians. It should not be­come a uni­ver­sal age-broad­cast­ing layer for apps and re­mote ser­vices. That is the ar­chi­tec­tural line that mat­ters.

Most of the harms in­voked in this de­bate do not come from the mere ex­is­tence of con­tent on­line. They come from rec­om­men­da­tion sys­tems, dark pat­terns, ad­dic­tive met­rics, and busi­ness mod­els that re­ward am­pli­fi­ca­tion with­out re­spon­si­bil­ity. If the goal is to pro­tect mi­nors, that is where reg­u­la­tion should bite.

If we are se­ri­ous about re­duc­ing harm, we should stop ask­ing how to iden­tify every­one and start ask­ing how to strengthen lo­cal con­trol with­out turn­ing the net­work into a check­point.

It is encouraging to see this article circulating widely, as it may contribute to a shift in how policymakers approach the issue. Given its growing visibility, I will keep a concise record here of the sequence of its coverage across media outlets, as well as pilot implementations across the world.

My first account of the problem emerged from a dialogue with Brave developer Kyle den Hartog at a cypherpunk retreat in Berlin. It was right after facilitating the digital identity track of the event that I published a rather technical piece on the topic.

Later, as age ver­i­fi­ca­tion mea­sures be­gan to take hold, and in align­ment with our com­mu­nity fa­cil­i­ta­tors at the Dyne.org foun­da­tion, we de­cided to dis­con­tinue Discord as a chan­nel for par­tic­i­pa­tion, as the plat­form moved to im­pose age ver­i­fi­ca­tion.

Then the systemd dispute unfolded, and I found myself, as founder of the project, the first distro maintainer to state that we would not implement age verification in Devuan GNU/Linux, a Debian fork without systemd that has, since 2016, shown fewer bugs and security advisories. The tech journalist Lunduke picked it up immediately, setting off a wave of similar declarations across the distribution maintainer community.

That was the mo­ment I re­alised the need to set out, in clear terms, the rea­sons be­hind this choice, and the grounds for a form of con­sci­en­tious ob­jec­tion should such laws ever be en­forced on our pro­jects at Dyne.org. I then wrote a piece for Wired Italy, in Italian, my mother tongue, which is due to be pub­lished by the mag­a­zine in the com­ing days (link TBD).

While await­ing pub­li­ca­tion in Wired, I trans­lated the ar­ti­cle and pub­lished it here, in English, through our think and do tank. The piece you have just read quickly reached the front page of Hacker News, draw­ing nearly 400 com­ments from con­cerned read­ers and tech­ni­cal ex­perts, a valu­able body of ma­te­r­ial to build on.

As the dis­cus­sion gains mo­men­tum, I am en­gag­ing with col­leagues at the City of Lugano and the Plan₿ Foundation, where I have re­cently taken on the role of Scientific Director. The pro­posal is to move from analy­sis to ac­tion by es­tab­lish­ing a city-wide pi­lot that ex­plores tech­nolo­gies for lo­cally man­aged guardian­ship, of­fer­ing a con­struc­tive ex­am­ple for Switzerland.

We are approaching this with confidence and preparing for a rollout in Lugano within the next two years. At the same time, within the Swiss Confederation there are signs of a more grounded direction, as reflected in “The Internet Initiative” placing responsibility on Big Tech and bringing together representatives from all major Swiss political parties.

My next steps in­clude reach­ing out to con­tacts in Europe to help broaden the dis­cus­sion and con­tribute to a more bal­anced pub­lic de­bate, in the face of sus­tained pres­sure from cor­po­rate lob­bies ad­vanc­ing data-ex­trac­tive mea­sures.

And you can play a mean­ing­ful role as well: en­gage with the is­sue, bring your tech­ni­cal and po­lit­i­cal un­der­stand­ing to it, and help sus­tain at­ten­tion so that those who make up the in­ter­net are not ex­cluded from de­ci­sions that af­fect it. I hope this ma­te­r­ial and the rea­son­ing be­hind it can be use­ful in that di­rec­tion. Do let us at Dyne.org know if we can as­sist in mak­ing vis­i­ble suc­cess­ful lo­cal pi­lots that im­ple­ment child pro­tec­tion in a sound and pro­por­tion­ate way.

If you’d like to read further, I’ve written more about the problems of European Digital Identity implementation plans and architecture.

I’ve been work­ing on pri­vacy and iden­tity tech­nol­ogy for over a decade, pri­mar­ily in pro­jects funded by the European Commission.

Among my efforts are decodeproject.eu and reflowproject.eu, as well as various academic papers, including SD-BLS, recently published by IEEE. Additionally, with our team at The Forkbomb Company we’ve developed digital identity products such as DIDROOM.com and CREDIMI.io.

...

Read the original on news.dyne.org »
