10 interesting stories served every morning and every evening.

Removing the Modem and GPS from my 2024 RAV4 Hybrid

arkadiyt.com

May 13th, 2026 | 14 minute read

Modern cars are computers on wheels - they have more sensors than you can count and are constantly phoning home with telemetry like your location, speed, fuel levels, sudden accelerations/decelerations, video footage, driver-attention data from eye-monitoring systems, and hundreds of other data points. Cars have inward- and outward-facing cameras. They have microphones. They have always-on modems. It's all enabled by default with difficult or meaningless opt-outs, and your data is monetized through brokers like LexisNexis or Verisk. This brings a host of security and privacy issues - here are some examples from over the years:

In 2025 Subaru had vulnerabilities allowing anyone to remotely unlock customers' cars, as well as access a car's real-time GPS location and location history

Car man­u­fac­tur­ers share your dri­ving data with in­sur­ance com­pa­nies, which then in­crease your pre­mi­ums

In 2023 Tesla em­ploy­ees in­ter­nally shared cam­era footage of naked cus­tomers and other sen­si­tive im­ages

In 2015 Charlie Miller and Chris Valasek fa­mously took over a Jeep Cherokee with full con­trol of the ig­ni­tion, brakes, locks, steer­ing, etc.

Mozilla detailed how 25 car manufacturers scored abysmally on privacy and how they collect data including "sexual activity, immigration status, race, facial expressions, weight and genetic information." They sell this data to third parties and use it to build profiles about you covering "intelligence, abilities, characteristics, preferences, and more."

Tesla had a vul­ner­a­bil­ity in 2017 that al­lowed any­one to re­motely see your car’s lo­ca­tion, man­age other fea­tures, and even sum­mon the car to them­selves

The Car That Watches You Back de­tails how cars are now serv­ing you ads, as well as col­lect­ing vast amounts of data about you. The Hacker News dis­cus­sion about this ar­ti­cle is what prompted this blog post

Now that we’re suf­fi­ciently mo­ti­vated, what can we do about it? In this blog post, rather than re­ly­ing on com­pa­nies’ promises or mean­ing­less opt-outs, we’re go­ing to stop the data at the source by phys­i­cally re­mov­ing the mo­dem (the DCM, or Data Communication Module) as well as the built-in GPS on my 2024 RAV4 Hybrid, so the car will no longer have the ca­pa­bil­ity to send any teleme­try data back home. Let’s dive in:

Will the car still be func­tional?

Yes. Depending on how dif­fer­ent car man­u­fac­tur­ers have wired their cars, how their soft­ware and firmware were writ­ten, etc., vary­ing lev­els of func­tion­al­ity might be af­fected by re­mov­ing the mo­dem and GPS. For this car:

Everything that relies on a data connection will no longer work. This includes things like over-the-air updates as well as Toyota cloud-based services and SOS functionality.

This is a safety tradeoff - you're disabling automatic crash notification and emergency calling

The car’s mi­cro­phone is wired through the DCM, and in the ab­sence of any other changes re­mov­ing the DCM means the in-car mi­cro­phone won’t work, which is in­con­ve­nient if you plan on tak­ing calls in the car. However we’ll in­stall a DCM Bypass Kit (discussed more be­low) to re­store all func­tion­al­ity and have a work­ing mi­cro­phone

CarPlay has a quirk: the phone uses its own GPS but also accepts a location signal from the car's GPS unit. After removing the DCM, the car would get confused about its location and sometimes jump my position to the middle of Nevada (I live in San Francisco), making navigation annoying. To work around this we'll fully disconnect the car's GPS, so it can't send a bad location to the phone.

From the title of the blog post you might have wondered why bother removing the GPS after we've removed the modem - who cares if the car has built-in location when it can't phone home with that data? This is why.

This is a well-documented bug with discussions on Apple Support threads as well as car-specific forums like rav4world. This bug affects more than just Toyotas; it's a generic Apple bug even for people who haven't removed their modem (but anecdotally, removing my modem made the problem worse)

Removing the DCM and GPS may void parts of your war­ranty - just some­thing to be aware of. Thanks to the Magnuson–Moss Warranty Act, it can­not void the whole car war­ranty. It can void cov­er­age re­lated to the work you did (cloud ser­vices, telem­at­ics, etc.) but un­re­lated fail­ures like en­gine prob­lems must still be cov­ered

So thank­fully every­thing in the car re­mains 100% func­tional ex­cept the cloud-based ser­vices men­tioned above, which I did­n’t want any­way. There is also one crit­i­cal caveat about Bluetooth:

No more Bluetooth

Important: Even af­ter the mo­dem is re­moved, if you con­nect your phone to the car via Bluetooth then the car will use your phone as an in­ter­net con­nec­tion and send all the same teleme­try data back to Toyota. However, if you use a wired USB con­nec­tion then it does not do that (see the dis­cus­sion here and else­where), so I ex­clu­sively use CarPlay via USB. I wish I had a way to com­pletely dis­able the car’s Bluetooth func­tion­al­ity, but it’s deeply in­te­grated into the head unit.

If you need USB ca­bles for CarPlay I like these USB-A to Lightning and USB-A to USB-C ca­bles from Anker.

Or, if you pre­fer the con­ve­nience of Bluetooth, you can use a Bluetooth -> wired USB adapter like this one. The adapter re­ceives Bluetooth from your phone and pre­sents it­self to the car as a USB de­vice, so the car treats it like a wired con­nec­tion and won’t tether through your phone.

Now, onto the nec­es­sary tools and parts:

Tools/parts needed

For this pro­ject you’ll need:

A trim re­moval kit (I used this one)

A ratchet, extension, 10mm socket, and 8mm socket.

I've been extremely happy with this set. However, if you're not planning on doing more handyperson-type work, then just borrow these 4 parts from a neighbor instead of spending the money on a whole set

(Optional) A pre­ci­sion flat­head screw­driver (like this one). This can help with dis­con­nect­ing wire plugs

This Telematics DCM Bypass Kit, for fixing the in-car microphone.

$90 is a bit steep for a part that probably costs less than $1 to produce, but the makers of the kit did the work of reading the (paywalled) Toyota diagnostics to produce a working product. If you'd like to build your own version you'll need to subscribe to Toyota TIS to access the car wiring schematics. It's unfortunate that these schematics and other repair manuals aren't public

Overall this was a medium-dif­fi­culty pro­ject that took me a few hours to com­plete. Now, let’s get to work:

Removing the car mo­dem

1) Push down on the leather of your shifter and re­move the pin (don’t lose it!):

2) Remove the shifter top:

3) Use the trim tool to pop out the base of the shifter. Just lean it to the side, no need to dis­con­nect any­thing:

4) Use your hands to pop out the next panel and lean it to the side:

5) Remove these three 10mm bolts:

6) Pull on this light gray trim piece un­til it dis­con­nects slightly:

7) Pull the ra­dio out, dis­con­nect the plug, and put the ra­dio aside. The ra­dio is held on by clips only and can even be pulled out with your hands, but it re­quires a lit­tle force and the trim re­moval tool may be help­ful. When dis­con­nect­ing the plug it may help to use the pre­ci­sion screw­driver to push down on the tab to un­lock it, but you can also do it with your hands:

8) Pull the next panel (the seat warm­ing con­trols) out with your hands. It’s only held on by clips but may re­quire a bit of force to re­move:

9) Take a photo of all the wiring con­nec­tions on the seat warm­ing con­trols so you can as­sem­ble it cor­rectly later, un­plug all the wires, and set the con­trols aside:

10) You now have ac­cess to the DCM:

11) Removing the DCM requires a lot of maneuvering, tight spaces, and patience, but you can do it. There are two 8mm bolts on the right and one 8mm bolt on the left that need to be removed. Getting access to them may require removing some of the other harnesses or components that are in the way - just go slow and steady, take your time, and take photos of things before you move them. After those 3 bolts are removed you have a little more play to pull the unit out, and after disconnecting the wires in the back you can completely remove the DCM. Here's mine out of the car, part number 86741-06130:

12) Now that the modem is removed, we need to install the DCM Bypass Kit so the in-car microphone continues to work. It's extremely straightforward: just plug it into the wiring harness that you removed from the DCM. The plugs will only fit on the correct wires; there's no way to get it wrong:

13) Reassemble every­thing by go­ing in re­verse or­der. Make sure all clips, bolts, etc. are back in their orig­i­nal po­si­tion and every­thing is seated cor­rectly. This part should go much faster than dis­as­sem­bly.

Now you’re done with the hard part. Next we dis­con­nect the GPS from the head unit, which is sig­nif­i­cantly eas­ier:

Removing the GPS an­tenna

1) Use the trim tool to re­move the back panel be­hind the in­fo­tain­ment screen:

2) Unscrew these four 10mm bolts:

3) Pop the head unit out (it's only held on by 2 clips at this point). The part number will vary but for my car it was 86140-0R710.

4) The GPS an­tenna is one of the sin­gle-wire ca­bles (not the multi-wire plugs). I had 3 sin­gle-wire ca­bles in my unit and the GPS wire was the black wire shown in the pic­ture. I was able to de­ter­mine this by process of elim­i­na­tion - un­plug­ging one of the wires dis­con­nected my car’s re­verse cam­era, un­plug­ging an­other one dis­con­nected CarPlay com­pletely, and the last one was the GPS - worked like a charm. Again, with a Toyota TIS sub­scrip­tion you can get ac­cess to the head unit wiring di­a­gram and not have to make guesses about which wire is which, but process of elim­i­na­tion worked fine for me:

5) Reassemble every­thing by go­ing in re­verse or­der. Again, make sure that all the clips seat prop­erly.

Confirming it worked

After you have every­thing re­assem­bled, turn the car on.

1) If you un­plugged the mo­dem suc­cess­fully then:

The in­fo­tain­ment screen will have an icon in the up­per right cor­ner in­di­cat­ing no con­nec­tion

The SOS light in the over­head con­sole will be off:

2) If the DCM Bypass Kit was in­stalled suc­cess­fully then:

Make a phone call through CarPlay. The re­cip­i­ent should be able to hear you / the mi­cro­phone should be work­ing

Congratulations - your car no longer has the capability to transmit telemetry data. Of course, data may still be captured to local storage and could be physically collected later, but for me that was fine.

Conclusion

Overall I'm very happy with this project. Unfortunately, I think it's only a matter of time before the modem and GPS become more deeply integrated into the car (making this blog post infeasible), or cars have more drastic failure modes when the modem/GPS is removed, or anti-right-to-repair laws get passed to further clamp down on this behavior. For now the win stands - no telemetry leaves the car. Strong federal privacy laws would make posts like this unnecessary; that's the world I'd rather live in.

Video transcript: A message from President Kornbluth about funding and the talent pipeline | MIT Office of the President | MIT

president.mit.edu

Hello, every­one.

It’s been a while since I’ve spo­ken with you all.

But the Institute is fac­ing on­go­ing chal­lenges in two re­lated ar­eas: fund­ing, and our tal­ent pipeline.

So I thought you’d ap­pre­ci­ate hear­ing the facts.

First, fund­ing.

For more than a year, we’ve all worked on re­spond­ing to ex­tra­or­di­nary new and sus­tained pres­sures on our bud­get (due largely to the heavy new 8% tax on our en­dow­ment re­turns, a bur­den for MIT and only a few other peer schools).

Across the Institute — cen­trally and in lo­cal units — it was clear to every­one that change was im­per­a­tive, and you all took the chal­lenge head on. That re­quired se­ri­ous ef­fort and sac­ri­fice. Some units are still work­ing through the process, and I know the cuts have been painful.

But please know that your ef­forts have been in­cred­i­bly valu­able — and I ap­pre­ci­ate every­thing you’ve done to get us here.

Through all of this, of course, we’ve kept our eyes on Washington. In February, we heard wel­come news re­gard­ing Congressional ap­pro­pri­a­tions: Funding for many re­search agen­cies had been at least par­tially re­stored.

The news seemed en­cour­ag­ing enough that I started hear­ing peo­ple ask if maybe these new de­vel­op­ments in DC meant that we could step back from some of the bud­get cuts at MIT or at least feel con­fi­dent that we’re past the storm.

I re­ally wish we could! But un­for­tu­nately, the an­swer is no — for a set of rea­sons.

First, al­though Congress re­stored sub­stan­tial agency fund­ing, we can see in the num­bers that fed­eral fund­ing is not ac­tu­ally flow­ing to MIT the way it typ­i­cally has. Relatedly, some fed­eral agen­cies are dis­cussing the pos­si­bil­ity of fac­tor­ing in ge­og­ra­phy when they al­lo­cate their funds, rather than bas­ing de­ci­sions on sci­en­tific merit alone.

Compared to this time last year, MIT has ex­pe­ri­enced a de­cline in cam­pus re­search ac­tiv­ity funded by fed­eral awards of more than 20%. Still more con­cern­ing is that our num­ber of new fed­eral re­search awards is also down more than 20%.

While we’ve seen en­cour­ag­ing growth in re­search fund­ing from other spon­sors, it’s not nearly enough to off­set the fed­eral de­cline.

So here’s the big pic­ture: Counting fed­eral and non-fed­eral sources to­gether, our cam­pus spon­sored-re­search ac­tiv­ity is now 10% smaller than it was a year ago. That is a strik­ing loss for one of the most in­flu­en­tial and pro­duc­tive re­search com­mu­ni­ties in the world.

Now, the sec­ond chal­lenge: Talent.

I’ve said many times that MIT is in the tal­ent busi­ness. Which means we’re very alert to changes in our tal­ent pipeline.

We’ve al­ready seen clear signs that pol­icy changes af­fect­ing in­ter­na­tional stu­dents and schol­ars are dis­cour­ag­ing ex­tremely tal­ented in­di­vid­u­als from ap­ply­ing to join our com­mu­nity.

Right now, we’re com­ing to the end of ad­mis­sions sea­son.

For de­part­ments across the Institute, the fund­ing un­cer­tainty I talked about has made them cau­tious about ad­mit­ting new grad­u­ate stu­dents.

That cau­tion is com­pletely un­der­stand­able: If fed­eral grants con­tinue to de­crease, PIs just won’t have the funds to sup­port ad­di­tional stu­dents!

But the cu­mu­la­tive im­pact di­rectly af­fects our mis­sion of re­search and ed­u­ca­tion: Our grad­u­ate stu­dent en­roll­ment de­creased this year…and we ex­pect that to con­tinue next year.

Outside of Sloan and the EECS MEng program, which are still in the midst of admissions, our departments' new enrollments for next year are down close to 20% compared with 2024.

That means that, in to­tal, out­side of Sloan, we could have about 500 fewer grad­u­ate stu­dents. Which means we’ll have many fewer stu­dents ad­vanc­ing the work of MIT, and un­der­grad­u­ates will have fewer grad stu­dents as men­tors in their re­search.

But to me, far and away the worst im­pact is that hun­dreds of ex­cep­tion­ally tal­ented young peo­ple will not have the ben­e­fit of an MIT ed­u­ca­tion — and we won’t have the ben­e­fit of their cre­ative bril­liance.

As I'm sure you all understand, responding to these new pressures is not just a matter of belt-tightening — and it's not just "trimming around the edges."

Last week, I spoke with sev­eral se­nior fac­ulty mem­bers, in very dif­fer­ent fields, all with long records of win­ning sig­nif­i­cant grants. All of them are now hav­ing to cut grad­u­ate stu­dents, post­docs and par­tic­u­lar av­enues of re­search.

At the Institute level, we are work­ing on plans to help sup­port groups whose op­er­a­tions are se­ri­ously im­pacted by cur­rent fed­eral fund­ing lapses. But that will not be a long-term so­lu­tion.

The fact is that we’re look­ing at a real drop in re­search be­ing done by the peo­ple of MIT. It’s a loss of mo­men­tum for fac­ulty and stu­dents.

And frankly, it’s a loss for the na­tion: When you shrink the pipeline of ba­sic dis­cov­ery re­search, you choke off the flow of fu­ture so­lu­tions, in­no­va­tions and cures — and you shrink the sup­ply of fu­ture sci­en­tists.

I know that hearing these facts all together feels pretty "chilly and overcast."

But I also know that, over its long his­tory, MIT has con­fronted and pushed through many se­ri­ous storms be­fore.

And I take heart in what I see here on cam­pus every day: The same MIT in­ten­sity. The same en­thu­si­asm. The same cre­ativ­ity and drive.

And the peo­ple of MIT are ap­ply­ing that same en­ergy in many dif­fer­ent ways to meet these new chal­lenges to our mis­sion.

•    Our fac­ulty are ris­ing to the mo­ment with ex­cit­ing ideas to meet emerg­ing fed­eral op­por­tu­ni­ties. For the Department of Energy’s new Genesis Mission — in a her­culean ef­fort by our fac­ulty and ad­min­is­tra­tive staff — MIT PIs re­cently sub­mit­ted 176 grant pro­pos­als. These pro­pos­als of­fer a snap­shot of first-class MIT sci­ence and en­gi­neer­ing, in ser­vice to the na­tion.

•    We’re ag­gres­sively pur­su­ing new sources of fund­ing, es­pe­cially from in­dus­try, and we’re build­ing on deep re­la­tion­ships, like the MIT-IBM Computing Research Lab, which we re­cently launched to shape the fu­ture of AI and quan­tum com­put­ing.

•    We're exploring new ways to generate income through educational offerings (like master's-only programs) that match our mission.

•    With a new leader for our Resource Development team, we’re tak­ing a fresh look at how to at­tract more sup­port through phil­an­thropy.

•    And our alumni and friends are step­ping up, both with do­na­tions and as cham­pi­ons for the value of MIT.

We need to ad­vo­cate for our­selves, and for America’s re­search uni­ver­si­ties, in all these ways and more.

Our Washington Office is work­ing en­er­get­i­cally, on both sides of the aisle, to raise aware­ness about the dam­age the en­dow­ment tax is do­ing to MIT and to a hand­ful of our peer schools.

We’re pur­su­ing new ways to en­gage pol­i­cy­mak­ers and the pub­lic around the trans­for­ma­tive im­pact of cu­rios­ity-dri­ven sci­ence.

And I'm meeting frequently with leaders in Congress and the Administration, to make the case for MIT's value to the nation.

I can do this with con­fi­dence be­cause I know you’re all here, work­ing to re­al­ize our mis­sion.

Thank you for that — and for all you’ve done, and will do, to help the Institute nav­i­gate this dif­fi­cult time.

RTX 5090 + M4 MacBook Air: Can it Game?

scottjg.com

What if you could strap a full desk­top GPU to your MacBook Air? Turns out, you can.

Just a quick FTC re­quired note: When you buy through my links, I may earn a com­mis­sion.

Never tell me the odds

As much as I hate to ad­mit it, step one in most of my pro­jects now is to ask AI about it. Maybe it’ll tell me some­thing I don’t know.

Fortunately, bor­der­line-im­prac­ti­cal is kind of my thing.

What’s a Thunderbolt eGPU?

Ok, so the plan is to plug a big PC gam­ing GPU, an NVIDIA RTX 5090, into my M4 MacBook Air. To do that, we plug it into a Thunderbolt dock which adapts PCIe to Thunderbolt, and we plug that into a USB-C port.

Thunderbolt tun­nels PCIe over a USB-C ca­ble, so from the com­put­er’s per­spec­tive a Thunderbolt de­vice re­ally is a PCIe de­vice, not a USB one. You get 4 PCIe lanes at up to 40Gbps on Thunderbolt 4, with a small per­for­mance penalty for the tun­nel­ing. USB4 in­cludes the same PCIe tun­nel­ing as an op­tional fea­ture, so some non-Thun­der­bolt USB4 ports can do this too. You can use this to plug a GPU into a lap­top with a com­pat­i­ble port.

Thunderbolt from the lap­top plugs into the GPU dock. The GPU plugs into the mon­i­tor via DisplayPort. Shortly af­ter this was taken, I broke this dock.

From the com­put­er’s per­spec­tive, the de­vice looks more or less like a slightly slower PCIe de­vice, so you can usu­ally use the same dri­vers you’d nor­mally use for those de­vices. eG­PUs work pretty much out of the box on Linux and Windows. It’s even pos­si­ble to use one on a Raspberry Pi (albeit with Oculink, not Thunderbolt).

The first hur­dle is that ma­cOS does not ship with dri­vers for NVIDIA or AMD GPUs on Apple Silicon.

What about tiny­grad?

tiny­grad re­cently re­leased their own ma­cOS eGPU dri­vers. It’s a whole new AI stack with its own open source dri­ver pipeline for NVIDIA and AMD hard­ware.

Sadly, if your main ob­jec­tive is to run AI in­fer­ence or play games, tiny­grad prob­a­bly is­n’t the so­lu­tion you’re look­ing for. This video by YouTuber Alex Ziskind shows that us­ing an eGPU via tiny­grad for in­fer­ence is about 10 times slower than run­ning na­tive Metal in­fer­ence di­rectly on an M4 Pro with­out an eGPU. You can only use the tiny­grad eGPU dri­ver with the tiny­grad stack, not for any­thing else. It also has very lim­ited sup­port for dif­fer­ent AI mod­els.

Getting NVIDIA PTX code run­ning on the GPU is one thing. Writing a full gen­eral-pur­pose dis­play dri­ver that works with ar­bi­trary soft­ware is a sig­nif­i­cantly harder prob­lem. So for now, what can you ac­tu­ally do with an eGPU and a Mac?

The ex­ist­ing Linux dri­ver

Linux can run on Apple Silicon Macs now. Regrettably, at this time, the Linux ker­nel does not sup­port Thunderbolt on Apple Silicon (only in­ter­nal de­vices and USB3). But…

You can run Linux in a 64-bit ARM VM on a ma­cOS host. ma­cOS sup­ports Thunderbolt de­vices. Linux sup­ports NVIDIA GPUs. Let’s put the pieces to­gether and pass through the GPU into the Linux VM.

At a high level, we’re just go­ing to put the GPU in the Linux VM. The VM is the same ar­chi­tec­ture as the Mac host (arm64), so per­for­mance should be com­pa­ra­ble. Of course, the devil is in the de­tails.

There is no driver for NVIDIA cards on ARM64 Windows. That's why we use Linux.

For a quick video demo of the re­sult, take a look:

In the rest of the post, I’ll go through the long and wind­ing road of get­ting this to ac­tu­ally work. If you just want to see screen­shots and bench­marks, you can prob­a­bly skip to the bench­mark sec­tion.

Engineering PCI Passthrough on ma­cOS

PCI de­vice ba­sics

Let’s look at two things we need work­ing for the VM to talk to the PCI de­vice:

PCI BAR (Base Address Registers) - Each PCI device communicates through chunks of memory that the computer can read and write to. There's basically a reserved region of memory on your computer for each device. Those memory regions have to be mirrored into the VM for PCI passthrough to work.

DMA (Direct Memory Access) - This is how the device can read and write information directly in/out of your computer's memory. Instead of having the CPU burn cycles copying data from the device, the device can copy the memory automatically. For a GPU, it might be used to copy textures directly from the computer's memory into its own video memory.

Mapping PCI BARs

When QEMU starts a VM, it sets up the guest’s mem­ory lay­out. For nor­mal RAM, this boils down to a call to hvf_set_­phys_mem() in QEMU, which uses the Hypervisor.framework method:

hv_vm_map(mem, guest_physical_address, size, HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);

Next, we con­nect to the host PCIDriverKit dri­ver and ask to map the mem­ory from the PCI de­vice into our process. (I’m leav­ing the dri­ver-side code out for now, but it’s very sim­i­lar boil­er­plate.)

// map BAR0 into the current process and set `addr` to the location
// where it was mapped
mach_vm_address_t addr = 0;
mach_vm_size_t size = 0;
IOConnectMapMemory64(driverConnection, 0, mach_task_self(), &addr, &size, kIOMapAnywhere);

Ok, so then we have addr, which now points to the BAR0 mem­ory that we can ac­cess di­rectly in our process. At this point you can just read and write stuff to it, like any other piece of mem­ory.

volatile uint32_t *bar0 = (volatile uint32_t *)addr;
printf("BAR0[0] = %x\n", bar0[0]);
// this would output: BAR0[0] = 0x1b2000a1
// which is a device-specific constant that describes my RTX 5090
//
// BAR0[0] is the BOOT_0 register. The fields break down as:
//   arch      = 0x1b → GB200 GPU family
//   impl      = 0x2  → GB202 die (RTX 5090)
//   major_rev = 0xa  → stepping A
//   minor_rev = 0x1  → revision 1 (together: stepping A1)

Now we just make sure QEMU calls hvf_set_­phys_mem() for our de­vice mem­ory, and we can map that into the guest. When guest code touches that map­ping, it talks di­rectly to the GPU with min­i­mal host over­head. This is the best case for per­for­mance. At least, in the­ory.

In prac­tice, as soon as the VM touched the PCI BAR mem­ory, the host ker­nel crashed.

If you’ve never ex­pe­ri­enced this be­fore, it’s dis­ori­ent­ing. Your en­tire com­puter will hang, and be­cause the track­pad feed­back is con­trolled by soft­ware, sud­denly the track­pad will no longer click. The dogs and cats in your neigh­bor­hood start howl­ing. Pictures fall off the walls of your house. Eventually your com­puter will re­boot, and you will be pre­sented with this di­a­log.

Ok, so we can’t map de­vice mem­ory di­rectly, but we have other tricks up our sleeve. We can trap every ac­cess to the mem­ory, exit the guest back into QEMU, and have QEMU for­ward each read or write to the de­vice. That keeps be­hav­ior cor­rect, but it’s bru­tally slow. In many work­loads the pain is else­where. Most of the per­for­mance-sen­si­tive work is DMA, but some paths still care how fast you can push com­mands through the BAR.
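In QEMU terms, the trap-and-forward fallback looks roughly like the following sketch: MMIO callbacks registered over the BAR region, where driver_read()/driver_write() are hypothetical helpers that relay each access to the macOS PCIDriverKit driver (illustrative only, not the project's actual code):

static uint64_t bar_read(void *opaque, hwaddr addr, unsigned size)
{
    /* every guest load from the BAR exits the VM and lands here */
    return driver_read(opaque, addr, size);
}

static void bar_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
{
    /* every guest store to the BAR exits the VM and lands here */
    driver_write(opaque, addr, val, size);
}

static const MemoryRegionOps bar_ops = {
    .read = bar_read,
    .write = bar_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
};

static void passthrough_init_bar(MemoryRegion *bar_mr, Object *owner, void *opaque, uint64_t bar_size)
{
    /* registering the region with callbacks (instead of as RAM) is what makes
       each access trap: correct, but brutally slow compared to hv_vm_map() */
    memory_region_init_io(bar_mr, owner, &bar_ops, opaque, "passthrough-bar0", bar_size);
}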

I started prepar­ing a bug re­port for Apple and wrote a small re­pro­duc­tion (well, AI-assisted) to demon­strate the is­sue:

In ~100 lines of C, you can spin up a VM, map the de­vice BAR into the guest, and run code that touches it. I’m still not sure whether that was more frus­trat­ing or en­cour­ag­ing, but that ver­sion ran with­out crash­ing, while QEMU was still pan­ick­ing the host. I was stumped for a while. Was it the guest page ta­bles? Was the BAR col­lid­ing with guest RAM in some sub­tle way? Why were the dogs and cats still howl­ing?

Eventually, in my des­per­a­tion, I asked an AI cod­ing as­sis­tant to com­pare my sam­ple and QEMU. It im­me­di­ately flagged that my map­ping used HV_MEMORY_READ | HV_MEMORY_WRITE while QEMU used HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC. Alas, bested again by AI. Not even silly blog pro­jects are safe any­more (mostly kid­ding).

The workaround in QEMU was a small change:
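A minimal sketch of the idea, assuming the fix is simply to leave HV_MEMORY_EXEC off for device BAR mappings (my reconstruction from the flag mismatch above, with is_device_memory() as a hypothetical helper; not the actual QEMU diff):

/* guest RAM keeps exec permission as before; device BARs are mapped with
   only the flags that worked in the standalone reproduction */
hv_memory_flags_t flags = HV_MEMORY_READ | HV_MEMORY_WRITE;
if (!is_device_memory(region)) {
    flags |= HV_MEMORY_EXEC;
}
hv_return_t ret = hv_vm_map(mem, guest_physical_address, size, flags);
assert(ret == HV_SUCCESS);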

It works, but it’s not per­fect. ARM has sev­eral fla­vors of de­vice mem­ory (the Device-nGnRnE/nGnRE/nGRE/GRE fam­ily), with dif­fer­ent rules for whether writes can be gath­ered, re­ordered, or ac­knowl­edged early. It’s roughly anal­o­gous to x86 write-com­bin­ing on the most per­mis­sive end.

On real hard­ware, the prefetch­able BARs on my GPU are sup­posed to al­low gath­er­ing, which makes them sev­eral times faster for bulk writes than BAR0. But hv_vm_map() has no flags to con­fig­ure this, so every de­vice map­ping ends up as the strictest nGn­RnE. There’s noth­ing we can do about it, and it’s still ~30x faster than trap­ping every ac­cess, but it makes writ­ing the BAR ~10x slower than it would be nor­mally.

DMA

This was by far the sketchi­est part of the pro­ject. To start, let’s go over how this works on a PC run­ning Linux with VM PCI-passthrough, and then we’ll com­pare to our chal­lenge on ma­cOS.

When there's just a computer talking to a device (no VM involved), they can talk together directly. The PC will tell the device "hey, I got that DMA buffer ready at this memory address" and the device can access that memory directly (AKA DMA). Easy.

When a VM is involved, it's more complicated. Guest physical addresses don't correspond to host physical addresses. The VM's RAM is just some chunk of host memory allocated wherever it was available. So if the guest tells the device "DMA into 0x00000000," the device will happily scribble over whatever actually lives there on the host. The simplest fix is two things:

Pin all guest mem­ory so it can’t be paged out while the de­vice might touch it.

Put a hard­ware unit called the IOMMU be­tween the de­vice and host mem­ory. The hy­per­vi­sor pro­grams it with the guest → host trans­la­tions, and every DMA re­quest from the de­vice gets remapped on the fly.

[Diagram: a DMA request to read/write guest address 0x00000000 passes through the IOMMU, whose translation table maps the guest range 0x00000000-0x80000000 to the host range 0x20000000-0xA0000000, so the request is translated to host physical address 0x20000000 in host physical memory.]

This is a blunt so­lu­tion. The guest does­n’t have to do any­thing spe­cial, but the host has to keep all guest RAM pinned. There are more ad­vanced ap­proaches (like a vir­tual IOMMU), but they’re out­side the scope of this post.

DMA on Apple Silicon

On Apple Silicon, there’s a hard­ware unit called DART that’s more or less equiv­a­lent to an IOMMU. It’s not spe­cific to VMs; it also acts as a se­cu­rity bound­ary, pre­vent­ing de­vices from ac­cess­ing ar­bi­trary host mem­ory. Ideally we’d just use DART the same way Linux uses the IOMMU in the sim­ple case above.

Unfortunately, DART (at least via PCIDriverKit for Thunderbolt de­vices) has some hard con­straints:

~1.5GB mapping limit. A VM with 1.5GB of RAM can technically boot, but CUDA runs out of memory, and any modern game needs 8-16GB.

~64k map­ping cap. With many small DMA buffers the map­ping table fills up.

No ad­dress or align­ment con­trol. PCIDriverKit as­signs mapped ad­dresses for you. You can’t pick them, or spec­ify align­ment con­straints. This rules out a vir­tual IOMMU, which re­quires the guest to choose its own DMA ad­dresses.

The 1.5GB ceil­ing was the biggest ini­tial blocker. I tried a few workarounds: pre-map­ping ranges where I guessed DMAs might land (obviously did­n’t work), and us­ing a re­stricted-dma-pool de­vice tree at­tribute to force all DMA through a pre-al­lo­cated re­gion. The re­stricted pool ap­proach ac­tu­ally works for sim­pler de­vices, but GPU dri­vers are too weird to fit into that model. (If you’re cu­ri­ous about the specifics, there’s a qemu-de­vel thread where I dis­cuss it.)

ap­ple-dma-pci

I ended up de­sign­ing a new vir­tual PCI de­vice in QEMU called ap­ple-dma-pci. It gets in­serted into the VM along­side the passed-through GPU, and a com­pan­ion ker­nel dri­ver in the guest in­ter­cepts the NVIDIA dri­ver’s DMA map­ping calls. The so­lu­tion is, frankly, a very up­set­ting hack, but it works.

Because map­pings are cre­ated on de­mand per DMA re­quest and torn down when the buffer is freed, we re­duce the amount of mapped mem­ory we need at any given time. Only the work­ing set of live DMA buffers at any given mo­ment has to fit in our 1.5GB limit, as op­posed to the en­tirety of guest mem­ory.

The guest dri­ver is loaded early (via an /etc/modules-load.d/ con­fig), so it can find the GPU at probe time and swap in cus­tom DMA ops be­fore the NVIDIA dri­ver touches it:

static struct dma_map_ops apple_dma_ops = {
    .map_page   = apple_dma_map_page,
    .unmap_page = apple_dma_unmap_page,
    .map_sg     = apple_dma_map_sg,
    .unmap_sg   = apple_dma_unmap_sg,
    .alloc      = apple_dma_alloc,
    .free       = apple_dma_free,
};

static int apple_dma_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    struct pci_dev *gpu = pci_get_device(PCI_VENDOR_NVIDIA, PCI_ANY_ID, NULL);
    if (!gpu)
        return -ENODEV;

    set_dma_ops(&gpu->dev, &apple_dma_ops);
    pci_dev_put(gpu);
    return 0;
}

Each of the cus­tom ops is a thin wrap­per. It mar­shals its ar­gu­ments into a small re­quest, writes it into mem­ory for the ap­ple-dma-pci vir­tual BAR, kicks a door­bell reg­is­ter, and waits for a re­ply. On the host side, QEMU picks up the re­quest, hands it off to the PCIDriverKit dri­ver, which per­forms the ac­tual DART map­ping, and the re­sult­ing DMA ad­dress gets writ­ten back to guest mem­ory. The NVIDIA dri­ver should­n’t know the dif­fer­ence.
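As a sketch of what one of those thin wrappers could look like, here is a hypothetical map_page implementation; the request layout, register locations, and the req_slot/doorbell globals are invented for illustration, not taken from the real apple-dma-pci driver:

#include <linux/pci.h>
#include <linux/io.h>
#include <linux/dma-mapping.h>

/* invented request layout, for illustration only */
struct dma_req {
    u64 guest_phys;   /* guest-physical address of the buffer to map */
    u64 len;          /* buffer length */
    u64 mapped_addr;  /* filled in by the host with the DART-mapped address */
    u32 status;       /* 0 = pending, nonzero = host reply ready */
};

static void __iomem *req_slot;  /* request area in the virtual BAR (ioremapped at probe) */
static void __iomem *doorbell;  /* doorbell register in the virtual BAR */

static dma_addr_t apple_dma_map_page(struct device *dev, struct page *page,
                                     unsigned long offset, size_t size,
                                     enum dma_data_direction dir,
                                     unsigned long attrs)
{
    struct dma_req __iomem *req = req_slot;

    /* marshal the arguments into the request slot */
    writeq(page_to_phys(page) + offset, &req->guest_phys);
    writeq(size, &req->len);
    writel(0, &req->status);

    /* kick the doorbell: the write traps into QEMU on the host side */
    writel(1, doorbell);

    /* wait until QEMU/PCIDriverKit writes the DART-mapped address back */
    while (readl(&req->status) == 0)
        cpu_relax();

    return readq(&req->mapped_addr);  /* the address the NVIDIA driver will use */
}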

[Diagram: request flow. Linux VM (guest): NVIDIA driver → dma_map_page() → apple_dma_ops handler → virtual PCI BAR write → apple-dma-pci virtual device → VM exit. macOS host: QEMU → IOConnectCallMethod() → PCIDriverKit driver → IODMACommand → DART hardware, with the mapped address returned back up the stack.]

NVIDIA align­ment quirk

It did­n’t im­me­di­ately work well, though. While the dri­ver ini­tially loaded and ini­tial­ized the card, I was greeted with this fun ker­nel log mes­sage as soon as I at­tempted to run a CUDA work­load:

[ 456.194883] NVRM: nvAssertOkFailedNoLog: Assertion failed: The offset passed is not valid [NV_ERR_INVALID_OFFSET] (0x00000037) returned from pRmApi->Alloc(pRmApi, device->session->handle, isSystemMemory ? device->handle : device->subhandle, &physHandle, isSystemMemory ? NV01_MEMORY_SYSTEM : NV01_MEMORY_LOCAL_USER, &memAllocParams, sizeof(memAllocParams)) @ nv_gpu_ops.c:4972
[ 456.371282] NVRM: GPU0 nvAssertFailedNoLog: Assertion failed: 0 == (physAddr & (RM_PAGE_SIZE_HUGE - 1)) @ mem_mgr_gm107.c:1312
[ 456.372020] NVRM: nvAssertOkFailedNoLog: Assertion failed: The offset passed is not valid [NV_ERR_INVALID_OFFSET] (0x00000037) returned from pRmApi->Alloc(pRmApi, device->session->handle, isSystemMemory ? device->handle : device->subhandle, &physHandle, isSystemMemory ? NV01_MEMORY_SYSTEM : NV01_MEMORY_LOCAL_USER, &memAllocParams, sizeof(memAllocParams)) @ nv_gpu_ops.c:4972

If you re­call the ear­lier DMA sec­tion, we noted that we can’t con­trol the align­ment of DMA-mapped buffers. Bummer. At this point, I dug into the dri­ver to try to see if there was some­thing sim­ple we could patch.

Here’s the rel­e­vant seg­ment:

if (type == UVM_RM_MEM_TYPE_SYS) {
    if (size >= UVM_PAGE_SIZE_2M)
        alloc_info.pageSize = UVM_PAGE_SIZE_2M;
    else if (size >= UVM_PAGE_SIZE_64K)
        alloc_info.pageSize = UVM_PAGE_SIZE_64K;

    status = uvm_rm_locked_call(nvUvmInterfaceMemoryAllocSys(gpu->rm_address_space, size, &gpu_va, &alloc_info));

    // TODO: Bug 5042223
    if (status == NV_ERR_NO_MEMORY && size >= UVM_PAGE_SIZE_64K) {
        UVM_ERR_PRINT("nvUvmInterfaceMemoryAllocSys alloc failed with big page size, retry with default page size\n");
        alloc_info.pageSize = UVM_PAGE_SIZE_DEFAULT;
        status = uvm_rm_locked_call(nvUvmInterfaceMemoryAllocSys(gpu->rm_address_space, size, &gpu_va, &alloc_info));
    }
}

By adding more debug logging in the module, I could see it was a 16MB allocation of type UVM_RM_MEM_TYPE_SYS, so it uses the largest (2MB) page size. Ironically, there is already a workaround here for when the allocation fails: it just tries again with a smaller page size. It just doesn't take into account the different error code for alignment (NV_ERR_INVALID_OFFSET).
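Given that existing retry path, one plausible fix (my guess at the shape of the patch, not confirmed as what the author actually shipped) is to treat the alignment error the same way as the out-of-memory case:

// TODO: Bug 5042223
// hypothetical tweak: also fall back to the default page size when the RM
// call rejects the unaligned offset, not only when it is out of memory
if ((status == NV_ERR_NO_MEMORY || status == NV_ERR_INVALID_OFFSET) &&
    size >= UVM_PAGE_SIZE_64K) {
    UVM_ERR_PRINT("nvUvmInterfaceMemoryAllocSys alloc failed with big page size, retry with default page size\n");
    alloc_info.pageSize = UVM_PAGE_SIZE_DEFAULT;
    status = uvm_rm_locked_call(nvUvmInterfaceMemoryAllocSys(gpu->rm_address_space, size, &gpu_va, &alloc_info));
}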

God Damn AI is making me dumb

jpain.io

James Pain’s Weblog

14 May, 2026

It’s so god damn tempt­ing to use AI to write. Whether it is ar­ti­cles, code, or doc­u­ments. I feel like us­ing AI is di­min­ish­ing my abil­ity to write my­self.

I did­n’t nec­es­sar­ily feel I was bad at writ­ing. I used to be a some­what tal­ented…well…mediocre soft­ware de­vel­oper, but now, the more I use AI, the more I can feel my own skills get­ting worse.

I think the problem feeds on my self-doubt, my imposter syndrome, the doubt that I can actually produce the work myself. However, when I use AI to write, I read it back and think: God damn, this just looks like AI. It doesn't sound or look like me at all. It doesn't say what I want it to say.

With coding, I've been using AI entirely for a year or two. I've been exclusively prompting and haven't written a single line of code. I have mostly forgotten how to code, which I find very sad and depressing, because coding used to be my life. I'm now teaching myself how to code by hand again.

I’m pretty cer­tain that the skills of soft­ware de­vel­op­ment aren’t go­ing to en­tirely dis­ap­pear with AI. There still need to be peo­ple who know how to read and write code. It will be fewer peo­ple but cer­tainly there will be peo­ple needed.

I'm hoping AI might reverse a trend that's been happening over the past 20-30 years, where there's been more demand than supply of software developers. As Robert Martin (Uncle Bob) lectures: before computer science was a profession, it was physicists and mathematicians and academics who programmed. Professionals. That professionalism has faded away as demand for software developers skyrocketed.

This ar­ti­cle is­n’t writ­ten with AI but I just caught my­self about to copy and paste it into Claude to see what it thinks be­cause I’m wor­ried that it does­n’t make sense or it reads funny or there’s some­thing miss­ing. That’s the self-doubt that it’s feed­ing on and what I need to fight back.

GitHub - DepthFirstDisclosures/Nginx-Rift: exploit for CVE-2026-42945

github.com

RCE proof of concept for CVE-2026-42945, a critical heap buffer overflow in NGINX's ngx_http_rewrite_module, introduced in 2008. The bug enables unauthenticated remote code execution against servers using the rewrite and set directives.

This vulnerability — along with three other memory corruption issues (CVE-2026-42946, CVE-2026-40701, CVE-2026-42934) — was autonomously discovered by depthfirst's security analysis system after a single click to onboard the NGINX source.

Want to find issues like this in your own code? Try the same system at https://depthfirst.com/open-defense.

The Bug (TL;DR)

NGINX's script engine uses a two-pass process: first compute the required buffer size, then copy the data in. The is_args flag is set on the main engine when a rewrite replacement contains ?, but the length-calculation pass runs on a freshly zeroed sub-engine. So:

Length pass sees is_args = 0 → re­turns raw cap­ture length.

Copy pass sees is_args = 1 → calls ngx_es­cape_uri with NGX_ESCAPE_ARGS, ex­pand­ing each es­capable byte to 3 bytes.

The copy over­flows the un­der­sized heap buffer with at­tacker-con­trolled URI data. Exploitation uses cross-re­quest heap feng shui to cor­rupt an ad­ja­cent ngx_pool_t’s cleanup pointer (sprayed via POST bod­ies, since URI bytes can’t con­tain null bytes), redi­rect­ing it to a fake ngx_pool_­cleanup_s in­vok­ing sys­tem() on pool de­struc­tion.
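To make the two-pass mismatch concrete, here is a toy model of the pattern in plain C (illustrative only; the names and the escaping logic are simplified stand-ins, not the actual NGINX source):

#include <stdlib.h>
#include <string.h>

/* sizing pass: runs on a fresh sub-engine where is_args == 0 */
static size_t escaped_len(const char *s, int is_args) {
    /* worst case, each escapable byte becomes "%XX" (3 bytes) */
    return is_args ? 3 * strlen(s) : strlen(s);
}

/* copy pass: runs on the main engine where is_args == 1 */
static void escaped_copy(char *dst, const char *s, int is_args) {
    static const char hex[] = "0123456789ABCDEF";
    while (*s) {
        unsigned char c = (unsigned char)*s++;
        if (is_args) {            /* mimics ngx_escape_uri(..., NGX_ESCAPE_ARGS) */
            *dst++ = '%';
            *dst++ = hex[c >> 4];
            *dst++ = hex[c & 0xf];
        } else {
            *dst++ = (char)c;
        }
    }
}

int main(void) {
    const char *capture = " ";                    /* attacker-controlled, escapable */
    char *buf = malloc(escaped_len(capture, 0));  /* length pass: is_args == 0 -> 1 byte */
    escaped_copy(buf, capture, 1);                /* copy pass: is_args == 1 -> writes 3 bytes */
    free(buf);                                    /* the heap past the buffer is now corrupted */
    return 0;
}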

Read more about this bug in our tech­ni­cal write-up.

Affected & Fixed Versions

Full ven­dor ad­vi­sory: https://​my.f5.com/​man­age/​s/​ar­ti­cle/​K000160932

Usage

Tested on Ubuntu 24.04.3 LTS.

./setup.sh — build the con­tainer.

docker com­pose -f env/​docker-com­pose.yml up — start the vul­ner­a­ble NGINX server.

python3 poc.py --shell — pop a shell.

Mullvad exit IPs as a fingerprinting vector

tmctmt.com

Mullvad is one of the few VPN providers that of­fers mul­ti­ple exit IPs for its servers. If two peo­ple con­nect to the same server, they will usu­ally end up with dif­fer­ent pub­lic IPs.

With only 578 servers (compared to Proton VPN's 20,000), this kind of vertical scaling makes sense to avoid cramming too many users onto one IP, which would be a problem on sites with overzealous IP blocks and rate limits.

Surprisingly, the exit IP you are given is not ran­dom­ized each time you con­nect to the server, but de­ter­min­is­ti­cally picked based on your WireGuard key, which ro­tates every 1 to 30 days (unless you use a third-party client, in which case it never ro­tates).

But wait... if each server assigns you an independently picked static exit IP, wouldn't just a few of those be enough to uniquely identify you among every other Mullvad user?

Putting it to the test

I wrote a script that re­peat­edly changes my pub­key and fetches exit IPs for a set of 9 servers. Leaving it run­ning for a night pro­duced data points for 3650 pub­keys, which is enough to map out the exit IP range for each server:

The pool sizes add up to over 8.2 trillion exit IP combinations for these servers, so you'd think each pubkey would be assigned a unique combination of IPs, since the odds of a collision are so astronomically low. And yet, somehow, all the pubkeys I tested were assigned just one of 284 combinations.

What’s go­ing on here?

Different IPs, same pro­por­tion

You can cal­cu­late a nu­mer­i­cal po­si­tion for an exit IP by count­ing its dis­tance from the pool’s start­ing IP.

For ex­am­ple, the IP 103.136.147.53 as­signed by au-syd-wg-101 would have a 1-based in­dex of 49 (X.X.X.53 - X.X.X.5 + 1).

Now, if you take the IP po­si­tions for any of the 284 com­bi­na­tions linked above, and you di­vide them by pool size, a com­mon ra­tio emerges:

Each IP lands within the same per­centile of its pool, in this case, the 81st.
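As a concrete sketch of that calculation (my own helper; the pool size below is hypothetical, for illustration only):

use std::net::Ipv4Addr;

// 1-based index of an exit IP within its pool, counted from the pool's first IP
fn index_in_pool(ip: Ipv4Addr, pool_start: Ipv4Addr) -> u32 {
    u32::from(ip) - u32::from(pool_start) + 1
}

fn main() {
    // the au-syd-wg-101 example from above: X.X.X.53 in a pool starting at X.X.X.5
    let idx = index_in_pool(
        "103.136.147.53".parse().unwrap(),
        "103.136.147.5".parse().unwrap(),
    );
    let pool_size = 60; // hypothetical pool size, for illustration only
    println!("index {idx} -> {:.1}th percentile", 100.0 * f64::from(idx) / f64::from(pool_size));
}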

This explains the limited number of combinations: Mullvad will only assign neighboring exit IPs across all its servers. But why?

Feature or bug?

Curiously, the servers cl-scl-wg-001 and za-jnb-wg-002 con­sis­tently share IP in­dexes with each other across all 284 ob­served IP com­bi­na­tions.

The thing they have in com­mon is a pool size of 11, and this gives us a clue about what’s hap­pen­ing.

In any language, if you initialize an RNG with a static seed, a rand-between call with the same bounds will always produce the same result:

use rand::{Rng, SeedableRng};
use rand::rngs::StdRng;

fn main() {
    let seed = 1234;
    for _ in 1..100 {
        let mut rng = StdRng::seed_from_u64(seed);
        let number = rng.random_range(0..1000);
        println!("{}", number); // will always print 56
    }
}

So, the shared in­dexes be­tween these two servers in­di­cate that Mullvad is prob­a­bly us­ing some sort of seed-based RNG to pick exit IP in­dexes, where the seed is the pub­key (or pos­si­bly the tun­nel ad­dress) and the up­per bound pa­ra­me­ter is the pool size.

This is fairly straight­for­ward, but what hap­pens when the bounds are changed?

use rand::{Rng, SeedableRng};
use rand::rngs::StdRng;

fn main() {
    let seed = 12345;
    for bound in 10..100 {
        let mut rng = StdRng::seed_from_u64(seed);
        let number = rng.random_range(0..bound);
        let ratio = number as f64 / bound as f64;
        println!("{} {:.3}", number, ratio);
    }
}

5 0.500
5 0.455
6 0.500
6 0.462
7 0.500
7 0.467
8 0.500
9 0.529
9 0.500
10 0.526
10 0.500
11 0.524
11 0.500
12 0.522
12 0.500
13 0.520
13 0.500
14 0.519
14 0.500
15 0.517
…

As it turns out, the entropy pool of the RNG is unaffected by the bounds you provide, and, at least in Rust, the same float is generated on each first call and used as a multiplier for the bounds, like so: min + round((max - min) * float) (this may be a giant oversimplification)

This lines up with the be­hav­ior we’ve seen in Mullvad’s exit IP pick­ing al­go­rithm, so it’s safe to say that this is the cause of it.

Rust as the back­end lan­guage makes sense too, con­sid­er­ing that the client is also writ­ten in it.

The thing is, almost none of my programmer friends were able to accurately describe what random_range would produce in the second code snippet, and the actual behavior took me by surprise too. It's reasonable to assume that each increment to the bounds would skew the result and produce a different number, even though that's not what happens.

Is it pos­si­ble that the Mullvad devs shared this com­mon mis­con­cep­tion, while ac­tu­ally in­tend­ing for there to be an un­bounded num­ber of exit IP com­bi­na­tions? I don’t know, but it’s a funny thought.

Correlating iden­ti­ties

I made a tool that can de­duce the min­i­mum and max­i­mum float value for a given com­bi­na­tion of IPs, avail­able at https://​tm­ctmt.github.io/​mul­l­vad-seed-es­ti­ma­tor/.

This particular set of IPs in the screenshot resolves to a float value between 0.2909 and 0.2943, for a difference of 0.0034, which means that 0.34% of Mullvad users share these IPs. At a ballpark estimate of 100,000 active Mullvad users, this equates to 340 users.

This is def­i­nitely not as unique as I orig­i­nally thought, but at the same time, >99% ac­cu­racy is re­ally not that bad?

As an ex­am­ple, imag­ine that you are a mod­er­a­tor on a fo­rum and you sus­pect that a new face is ac­tu­ally a sock­pup­pet of a user you banned the day prior. You check the IP logs, and de­spite us­ing dif­fer­ent Mullvad servers, both ac­counts re­solve to the over­lap­ping float ranges 0.4334 – 0.4428 and 0.4358 – 0.4423. This gives you a >99% chance that they are the same per­son.
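A tiny sketch of that overlap check (my own code, not part of the estimator tool):

// two float ranges deduced from two sightings; if they overlap, both could
// come from the same seed, and the overlap width bounds the matching share
fn overlap(a: (f64, f64), b: (f64, f64)) -> Option<(f64, f64)> {
    let lo = a.0.max(b.0);
    let hi = a.1.min(b.1);
    if lo <= hi { Some((lo, hi)) } else { None }
}

fn main() {
    // the ranges from the forum example above
    match overlap((0.4334, 0.4428), (0.4358, 0.4423)) {
        Some((lo, hi)) => println!(
            "overlap {lo:.4}..{hi:.4} -> ~{:.2}% of users fit both sightings",
            (hi - lo) * 100.0
        ),
        None => println!("no overlap: different users"),
    }
}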

Now ap­ply this to IP logs ob­tained through data breaches and le­gal chan­nels and you can see how you could get deanonymized be­hind a VPN through sim­i­lar cor­re­la­tion at­tacks.

Protecting your­self

Avoid switch­ing servers more than once per pub­key

Force ro­tate your pub­key by log­ging out of the Mullvad app

First public macOS kernel memory corruption exploit on Apple M5

blog.calif.io

Early this week, we had a meet­ing at Apple Park in Cupertino. While there, we also shared with Apple our lat­est vul­ner­a­bil­ity re­search re­port: the first pub­lic ma­cOS ker­nel mem­ory cor­rup­tion ex­ploit on M5 sil­i­con, sur­viv­ing MIE. It was laser printed, in honor of our hacker friends.

We wanted to re­port it in per­son, in­stead of get­ting buried in the sub­mis­sion flood that some un­for­tu­nate Pwn2Own par­tic­i­pants just ex­pe­ri­enced. Most re­spected hack­ers avoid hu­man in­ter­ac­tion when­ever pos­si­ble, so this phys­i­cal strat­egy may give us a slight edge in the eter­nal race for five min­utes of fame and glory on Twitter.

This is the story of the ex­ploit and our field trip. Full tech­ni­cal de­tails will be shared af­ter Apple fixes the vul­ner­a­bil­i­ties and at­tack path. Hopefully it won’t take our beloved com­pany too long. We only bud­geted one year of do­main reg­is­tra­tion fees for this at­tack.

Memory corruption remains the most common vulnerability class everywhere, including iOS and macOS. In security, if you can't fully prevent something, you accept the risk or mitigate it by making exploitation more expensive.

But mit­i­ga­tions are not cheap. If per­for­mance did­n’t mat­ter, many se­cu­rity prob­lems would be easy to solve. Apple is smart and con­trols the full stack, so they pushed many of these de­fenses di­rectly into hard­ware and made by­pass­ing them sig­nif­i­cantly harder. Many se­cu­rity ex­perts con­sider Apple de­vices to be the most se­cure con­sumer plat­form.

The latest flagship example is MIE (Memory Integrity Enforcement), Apple's hardware-assisted memory safety system built around ARM's MTE (Memory Tagging Extension). It was introduced as the marquee security feature for the Apple M5 and A19, specifically designed to stop memory corruption exploits, the vulnerability class behind many of the most sophisticated compromises on iOS and macOS.

Apple spent five years build­ing it. Probably bil­lions of dol­lars too. According to their re­search, MIE dis­rupts every pub­lic ex­ploit chain against mod­ern iOS, in­clud­ing the re­cently leaked Coruna and Darksword ex­ploit kits.

We’ve been on a fun jour­ney ex­plor­ing how AI can help build ex­ploits that still work un­der MTE. While Apple’s fo­cus is pri­mar­ily iOS, they also brought MIE to the M5, the chip pow­er­ing the lat­est MacBooks.

Our ma­cOS at­tack path was ac­tu­ally an ac­ci­den­tal dis­cov­ery. Bruce Dang found the bugs on April 25th. Dion Blazakis joined Calif on April 27th. Josh Maine built the tool­ing, and by May 1st we had a work­ing ex­ploit.

The ex­ploit is a data-only ker­nel lo­cal priv­i­lege es­ca­la­tion chain tar­get­ing ma­cOS 26.4.1 (25E253). It starts from an un­priv­i­leged lo­cal user, uses only nor­mal sys­tem calls, and ends with a root shell. The im­ple­men­ta­tion path in­volves two vul­ner­a­bil­i­ties and sev­eral tech­niques, tar­get­ing bare-metal M5 hard­ware with ker­nel MIE en­abled.

PoC video:

We did­n’t build the chain alone. Mythos Preview helped iden­tify the bugs and as­sisted through­out ex­ploit de­vel­op­ment.

Mythos Preview is pow­er­ful: once it has learned how to at­tack a class of prob­lems, it gen­er­al­izes to nearly any prob­lem in that class. Mythos dis­cov­ered the bugs quickly be­cause they be­long to known bug classes. But MIE is a new best-in-class mit­i­ga­tion, so au­tonomously by­pass­ing it can be tricky. This is where hu­man ex­per­tise comes in.

Part of our mo­ti­va­tion was to test what’s pos­si­ble when the best mod­els are paired with ex­perts. Landing a ker­nel mem­ory cor­rup­tion ex­ploit against the best pro­tec­tions in a week is note­wor­thy, and says some­thing strong about this pair­ing.

To the best of our knowl­edge, this is the first pub­lic ma­cOS ker­nel ex­ploit on MIE hard­ware. Again, we’ll pub­lish our 55-page re­port af­ter Apple ships a fix.

MIE was never meant to be hacker-proof. With the right vul­ner­a­bil­i­ties, it can be evaded. As we’ve shown through­out the MAD Bugs se­ries, AI sys­tems are al­ready dis­cov­er­ing more and more vul­ner­a­bil­i­ties. It’s in­evitable that some of those bugs will even­tu­ally be pow­er­ful enough to sur­vive even ad­vanced mit­i­ga­tions like MIE. This is ex­actly what we just dis­cov­ered.

This work is a glimpse of what is com­ing. Apple built MIE in a world be­fore Mythos Preview. We’re about to learn how the best mit­i­ga­tion tech­nol­ogy on Earth holds up dur­ing the first AI bug­maged­don.

Epilogue

The Apple space­ship is every bit as breath­tak­ing as peo­ple say. It has a lot of ap­ple trees, ob­vi­ously. We wanted to check out the in­fa­mous Infinite Loop too, but were afraid it could take a long time.

Our hosts shared that "Apple spent $5 billion building this office", then asked about our office. We said, well, ours definitely cost less than $1 billion.

But this is the fun part about AI. Small teams can sud­denly do things that used to re­quire en­tire or­ga­ni­za­tions. With the right strat­egy and peo­ple, even a tiny com­pany can be­come mighty enough that the world’s largest com­pa­nies start ask­ing for its help.

In Vietnamese, we say, "nhỏ mà có võ" (small but mighty).


Bitcoin trader recovers $400,000 using Claude AI after getting 'stoned' and losing wallet password 11 years ago — bot tried 3.5 trillion passwords before decrypting an old wallet backup

www.tomshardware.com

A Bitcoin holder who changed their wallet password while 'stoned' and then forgot it was finally able to recover their wallet with the help of Claude. According to X user cprkrn, they'd been trying to recover their wallet for more than 11 years. Still, they didn't give up, because that wallet contained 5 BTC; this may not sound like much, but it has a value of almost $400,000.

After find­ing a mnemonic that ac­tu­ally turned out to be their old pass­word a few weeks ago, the user dumped their en­tire col­lege com­puter files in Claude in a last-gasp ef­fort. The bot un­cov­ered an old backup wal­let file that it suc­cess­fully de­crypted, while also un­cov­er­ing a bug in the pass­word con­fig­u­ra­tion that was pre­vent­ing re­cov­ery up to that point.

HOLY FUCKING SHIT OMG CLAUDE JUST CRACKED THIS SHIT, THANK YOU @AnthropicAI THANK YOU @DarioAmodei NAMING MY KID AFTER YOU 😍 https://t.co/gObNirRDpS https://t.co/ByTdIM4d20 pic.twitter.com/xB5LUJb6Pe (May 13, 2026)

Cryptocurrency wal­lets dur­ing their early years were com­pletely dif­fer­ent beasts. Mnemonic seed phrases back then gen­er­ated the HD key tree, but wal­lets of­ten mixed them with non-HD and im­ported keys. Those can­not be re­cov­ered by the seed phrase and are stored in a wal­let file that re­quires a pass­word. This is what hap­pened to cprkrn — they changed the pass­word to the wal­let file that con­tained some spe­cific keys while they were stoned and then com­pletely for­got what pass­word they used. This meant that the Bitcoins tied to those keys were com­pletely in­ac­ces­si­ble, and they’ve been try­ing to find their way back in since then.

It seems that the user al­ready had some can­di­date pass­words and mul­ti­ple wal­lets stored on their PC. They’d been try­ing to brute-force their way into the locked file with bt­cre­cover, an open-source Bitcoin wal­let re­cov­ery tool, but to no suc­cess. Their luck changed for the bet­ter when they found an old mnemonic seed phrase writ­ten in an old col­lege note­book. The HD ad­dresses re­cov­ered by the seed phrase matched those of a spe­cific file on their com­puter, con­firm­ing that it was the wal­let that held the 5 BTC, but it re­mained en­crypted.

Out of frustration, cprkrn then dumped their whole college computer into Claude. This was when the AI discovered an older backup file of the wallet from December 2019 hidden in cprkrn's data. Claude also discovered an issue where the shared key and passwords that btcrecover was trying weren't combined properly. With the bug ironed out and an older wallet predating the password change, Claude successfully ran btcrecover and was able to decrypt the private keys, allowing cprkrn to transfer the five "lost" BTC to their current wallet.

This is a happy ending for one user who forgot their wallet password, giving them a massive windfall because of Bitcoin's huge increase in value during the past few years. And while Anthropic's Claude did not magically guess the right set of characters to unlock the file that held the private keys, it fixed one critical issue that cprkrn missed, allowing them to finally regain their crypto. Before AI LLMs became popular, researchers spent at least half a year cracking open a Bitcoin wallet with a forgotten 20-character password. It was well worth the effort, though, as it contained an estimated $1.6 million in BTC back in 2024. Unfortunately, we cannot say the same thing for this poor guy who lost $780 million in Bitcoin after a 2025 court ruling prevented him from rummaging through the local dump after his laptop with 8,000 BTC was discarded in the trash.


A few words on DS4

antirez.com

I didn't expect DwarfStar 4 (https://github.com/antirez/ds4) to become so popular so fast. It is clear that there was a need for a single-model, integration-focused local AI experience, and that a few things happened together: the release of a quasi-frontier model that is large and fast enough to change the game of local inference, and the fact that it works extremely well with an extremely asymmetric quants recipe of 2/8 bit, so that 96 or 128GB of RAM are enough to run it. And, of course: all the experience produced by the local AI movement in the latest years, which can be leveraged more promptly because of GPT 5.5 (otherwise you can't build DS4 in one week — and even with all this help you need to know how to gently talk to LLMs).

The last week was fun and also tiring: I worked 14 hours per day on average. My normal average has been 4 to 6 since early Redis times, but the first few months of Redis were like that too.

So, what's next? Is this a project that starts and ends with DeepSeek v4 Flash? Nope, the model can change over time. The space will be occupied, in my vision, by the best current open weights model that is *practically fast* on a high end Mac or "GPU in a box" gear (like the DGX Spark and other similar setups). I bet that the next contender is DeepSeek v4 Flash itself, in the new checkpoint that will be released and, hopefully, a version specifically tuned for coding, and, who knows, maybe other expert-variants (not in the sense of MoE experts). For local inference, having ds4-coding, ds4-legal, ds4-medical models makes a lot of sense, after all. You just load what you need depending on the question.

It is the first time since I started playing with local inference (and I've played with it since the start) that I find myself using a local model for serious stuff that I would normally ask Claude / GPT. This, I think, is really a big thing. It is also the first time that, using vector steering, I can enjoy an experience where the LLM can be used with more freedom. DeepSeek v4 Flash is really an impressive model, no doubt about that. If you can imagine in your mind the small good local model experience as A, and the frontier model you use online as B, DS4 is a lot more B than A. I can't wait for the new releases, honestly (btw, thank you DeepSeek).

So, after those chaotic first days, I hope the project will focus on: quality benchmarks, potentially adding a coding agent that is also part of the project, a hardware setup here in my home that can run the CI tests in order to ensure long-term quality, more ports, and finally, as a very important point: distributed inference (both serial and parallel).

For now, thank you for all the sup­port: it was re­ally ap­pre­ci­ated :) AI is too crit­i­cal to be just a pro­vided ser­vice.

