10 interesting stories served every morning and every evening.




1 2,002 shares, 89 trendiness

The Git Commands I Run Before Reading Any Code

The first thing I usu­ally do when I pick up a new code­base is­n’t open­ing the code. It’s open­ing a ter­mi­nal and run­ning a hand­ful of git com­mands. Before I look at a sin­gle file, the com­mit his­tory gives me a di­ag­nos­tic pic­ture of the pro­ject: who built it, where the prob­lems clus­ter, whether the team is ship­ping with con­fi­dence or tip­toe­ing around land mines.
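The churn list comes from counting file touches in git log; a sketch of the kind of one-liner involved (the flags shown are illustrative, not necessarily the author's exact command):

    git log --since="1 year ago" --pretty=format: --name-only \
      | grep -v '^$' | sort | uniq -c | sort -rn | head -20

The grep strips the blank separator lines git prints between commits before the counting happens.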

The 20 most-changed files in the last year. The file at the top is almost always the one people warn me about. “Oh yeah, that file. Everyone’s afraid to touch it.”

High churn on a file does­n’t mean it’s bad. Sometimes it’s just ac­tive de­vel­op­ment. But high churn on a file that no­body wants to own is the clear­est sig­nal of code­base drag I know. That’s the file where every change is a patch on a patch. The blast ra­dius of a small edit is un­pre­dictable. The team pads their es­ti­mates be­cause they know it’s go­ing to fight back.

A 2005 Microsoft Research study found churn-based met­rics pre­dicted de­fects more re­li­ably than com­plex­ity met­rics alone. I take the top 5 files from this list and cross-ref­er­ence them against the bug hotspot com­mand be­low. A file that’s high-churn and high-bug is your sin­gle biggest risk.
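The contributor ranking comes straight from git shortlog, roughly:

    git shortlog -sn --no-merges
    git shortlog -sn --no-merges --since="6 months ago"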

Every contributor ranked by commit count. If one person accounts for 60% or more, that’s your bus factor. If they left six months ago, it’s a crisis. If the top contributor from the overall shortlog doesn’t appear in a 6-month window (git shortlog -sn --no-merges --since="6 months ago"), I flag that to the client immediately.

I also look at the tail. Thirty con­trib­u­tors but only three ac­tive in the last year. The peo­ple who built this sys­tem aren’t the peo­ple main­tain­ing it.

One caveat: squash-merge work­flows com­press au­thor­ship. If the team squashes every PR into a sin­gle com­mit, this out­put re­flects who merged, not who wrote. Worth ask­ing about the merge strat­egy be­fore draw­ing con­clu­sions.
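The bug-hotspot list is the same churn query restricted to commits whose messages mention bugs; a sketch, with illustrative keywords:

    git log --since="1 year ago" -i --grep="fix" --grep="bug" --pretty=format: --name-only \
      | grep -v '^$' | sort | uniq -c | sort -rn | head -20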

Same shape as the churn com­mand, fil­tered to com­mits with bug-re­lated key­words. Compare this list against the churn hotspots. Files that ap­pear on both are your high­est-risk code: they keep break­ing and keep get­ting patched, but never get prop­erly fixed.

This depends on commit message discipline. If the team writes “update stuff” for every commit, you’ll get nothing. But even a rough map of bug density is better than no map.
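The monthly cadence comes from counting commits per month; one way to get it:

    git log --date=format:'%Y-%m' --pretty=format:'%ad' | sort | uniq -c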

Commit count by month, for the en­tire his­tory of the repo. I scan the out­put look­ing for shapes. A steady rhythm is healthy. But what does it look like when the count drops by half in a sin­gle month? Usually some­one left. A de­clin­ing curve over 6 to 12 months tells you the team is los­ing mo­men­tum. Periodic spikes fol­lowed by quiet months means the team batches work into re­leases in­stead of ship­ping con­tin­u­ously.

I once showed a CTO their commit velocity chart and they said “that’s when we lost our second senior engineer.” They hadn’t connected the timeline before. This is team data, not code data.
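Reverts and hotfixes can be counted with a grep over commit subjects; a sketch, with illustrative keywords:

    git log --oneline -i --grep="revert" --grep="hotfix" --since="1 year ago"

Pipe it through wc -l if you just want the count.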

Revert and hot­fix fre­quency. A hand­ful over a year is nor­mal. Reverts every cou­ple of weeks means the team does­n’t trust its de­ploy process. They’re ev­i­dence of a deeper is­sue: un­re­li­able tests, miss­ing stag­ing, or a de­ploy pipeline that makes roll­backs harder than they should be. Zero re­sults is also a sig­nal; ei­ther the team is sta­ble, or no­body writes de­scrip­tive com­mit mes­sages.

Crisis pat­terns are easy to read. Either they’re there or they’re not.

These five com­mands take a cou­ple min­utes to run. They won’t tell you every­thing. But you’ll know which code to read first, and what to look for when you get there. That’s the dif­fer­ence be­tween spend­ing your first day read­ing the code­base me­thod­i­cally and spend­ing it wan­der­ing.

This is the first hour of what I do in a code­base au­dit. Here’s what the rest of the week looks like.

...

Read the original on piechowski.io »

2 1,516 shares, 84 trendiness

Porting Mac OS X to the Nintendo Wii

Since its launch in 2007, the Wii has seen several operating systems ported to it: Linux, NetBSD, and most recently, Windows NT. Today, Mac OS X joins that list.

In this post, I’ll share how I ported the first version of Mac OS X, 10.0 Cheetah, to the Nintendo Wii. If you’re not an operating systems expert or low-level engineer, you’re in good company; this project was all about learning and navigating countless “unknown unknowns”. Join me as we explore the Wii’s hardware, bootloader development, kernel patching, and writing drivers - and give the PowerPC versions of Mac OS X a new life on the Nintendo Wii.

Visit the wi­iMac boot­loader repos­i­tory for in­struc­tions on how to try this pro­ject your­self.

Before fig­ur­ing out how to tackle this pro­ject, I needed to know whether it would even be pos­si­ble. According to a 2021 Reddit com­ment:

There is a zero per­cent chance of this ever hap­pen­ing.

Feeling en­cour­aged, I started with the ba­sics: what hard­ware is in the Wii, and how does it com­pare to the hard­ware used in real Macs from the era.

The Wii uses a PowerPC 750CL proces­sor - an evo­lu­tion of the PowerPC 750CXe that was used in G3 iBooks and some G3 iMacs. Given this close lin­eage, I felt con­fi­dent that the CPU would­n’t be a blocker.

As for RAM, the Wii has a unique con­fig­u­ra­tion: 88 MB to­tal, split across 24 MB of 1T-SRAM (MEM1) and 64 MB of slower GDDR3 SDRAM (MEM2); un­con­ven­tional, but tech­ni­cally enough for Mac OS X Cheetah, which of­fi­cially calls for 128 MB of RAM but will un­of­fi­cially boot with less. To be safe, I used QEMU to boot Cheetah with 64 MB of RAM and ver­i­fied that there were no is­sues.
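For reference, a check of that sort can be done with an invocation along these lines (the machine type and disk image name are placeholders, not necessarily what was actually used):

    qemu-system-ppc -M mac99 -m 64 -hda cheetah.img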

Other hard­ware I’d even­tu­ally need to sup­port in­cluded:

* The SD card for boot­ing the rest of the sys­tem once the ker­nel was run­ning

* Video out­put via a frame­buffer that lives in RAM

* The Wii’s USB ports for us­ing a mouse and key­board

Convinced that the Wii’s hard­ware was­n’t fun­da­men­tally in­com­pat­i­ble with Mac OS X, I moved my at­ten­tion to in­ves­ti­gat­ing the soft­ware stack I’d be port­ing.

Mac OS X has an open source core (Darwin, with XNU as the ker­nel and IOKit as the dri­ver model), with closed-source com­po­nents lay­ered on top (Quartz, Dock, Finder, sys­tem apps and frame­works). In the­ory, if I could mod­ify the open-source parts enough to get Darwin run­ning, the closed-source parts would run with­out ad­di­tional patches.

Porting Mac OS X would also re­quire un­der­stand­ing how a real Mac boots. PowerPC Macs from the early 2000s use Open Firmware as their low­est-level soft­ware en­vi­ron­ment; for sim­plic­ity, it can be thought of as the first code that runs when a Mac is pow­ered on. Open Firmware has sev­eral re­spon­si­bil­i­ties, in­clud­ing:

* Providing use­ful func­tions for I/O, draw­ing, and hard­ware com­mu­ni­ca­tion

* Loading and ex­e­cut­ing an op­er­at­ing sys­tem boot­loader from the filesys­tem

Open Firmware even­tu­ally hands off con­trol to BootX, the boot­loader for Mac OS X. BootX pre­pares the sys­tem so that it can even­tu­ally pass con­trol to the ker­nel. The re­spon­si­bil­i­ties of BootX in­clude:

* Loading and de­cod­ing the XNU ker­nel, a Mach-O ex­e­cutable, from the root filesys­tem

Once XNU is run­ning, there are no de­pen­den­cies on BootX or Open Firmware. XNU con­tin­ues on to ini­tial­ize proces­sors, vir­tual mem­ory, IOKit, BSD, and even­tu­ally con­tinue boot­ing by load­ing and run­ning other ex­e­cuta­bles from the root filesys­tem.

The last piece of the puzzle was how to run my own custom code on the Wii - a trivial task thanks to the Wii being “jailbroken”, allowing anyone to run homebrew with full access to the hardware via the Homebrew Channel and BootMii.

Armed with knowl­edge of how the boot process works on a real Mac, along with how to run low-level code on the Wii, I needed to se­lect an ap­proach for boot­ing Mac OS X on the Wii. I eval­u­ated three op­tions:

1. Port Open Firmware, use that to run unmodified BootX to boot Mac OS X

2. Port BootX and modify it to not rely on Open Firmware, use that to boot Mac OS X

3. Write a custom bootloader that performs the bare-minimum setup to boot Mac OS X

Since Mac OS X does­n’t de­pend on Open Firmware or BootX once run­ning, spend­ing time port­ing ei­ther of those seemed like an un­nec­es­sary dis­trac­tion. Additionally, both Open Firmware and BootX con­tain added com­plex­ity for sup­port­ing many dif­fer­ent hard­ware con­fig­u­ra­tions - com­plex­ity that I would­n’t need since this only needs to run on the Wii. Following in the foot­steps of the Wii Linux pro­ject, I de­cided to write my own boot­loader from scratch. The boot­loader would need to, at a min­i­mum:

* Load the kernel from the SD card

* Create the boot arguments and device tree that the kernel expects

* Jump to the kernel’s entry point

Once the ker­nel was run­ning, none of the boot­loader code would mat­ter. At that point, my fo­cus would shift to patch­ing the ker­nel and writ­ing dri­vers.

I de­cided to base my boot­loader on some low-level ex­am­ple code for the Wii called ppcskel. ppcskel puts the sys­tem into a sane ini­tial state, and pro­vides use­ful func­tions for com­mon things like read­ing files from the SD card, draw­ing text to the frame­buffer, and log­ging de­bug mes­sages to a USB Gecko.

Next, I had to fig­ure out how to load the XNU ker­nel into mem­ory so that I could pass con­trol to it. The ker­nel is stored in a spe­cial bi­nary for­mat called Mach-O, and needs to be prop­erly de­coded be­fore be­ing used.

The Mach-O executable format is well-documented, and can be thought of as a list of load commands that tell the loader where to place different sections of the binary file in memory. For example, a load command might instruct the loader to read the data from file offset 0x2cf000 and store it at the memory address 0x2e0000. After processing all of the kernel’s load commands, every segment of the binary ends up at the memory address the kernel expects.
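Processing those load commands is a short loop; a sketch, assuming the 32-bit Mach-O structures from <mach-o/loader.h> (kernel_image points at the raw file read from the SD card; error handling and cache maintenance are omitted):

    #include <mach-o/loader.h>   /* struct mach_header, load_command, segment_command */
    #include <stdint.h>
    #include <string.h>

    struct mach_header  *mh = (struct mach_header *)kernel_image;
    struct load_command *lc = (struct load_command *)(mh + 1);

    for (uint32_t i = 0; i < mh->ncmds; i++) {
        if (lc->cmd == LC_SEGMENT) {
            struct segment_command *seg = (struct segment_command *)lc;
            /* copy the segment's file contents to the address the kernel expects... */
            memcpy((void *)seg->vmaddr, kernel_image + seg->fileoff, seg->filesize);
            /* ...and zero whatever the file doesn't cover (e.g. __bss) */
            memset((void *)(seg->vmaddr + seg->filesize), 0, seg->vmsize - seg->filesize);
        }
        lc = (struct load_command *)((uint8_t *)lc + lc->cmdsize);
    }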

The ker­nel file also spec­i­fies the mem­ory ad­dress where ex­e­cu­tion should be­gin. Once the boot­loader jumps to this ad­dress, the ker­nel is in full con­trol and the boot­loader is no longer run­ning.

To jump to the ker­nel-en­try-point’s mem­ory ad­dress, I needed to cast the ad­dress to a func­tion and call it:
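A minimal sketch of that jump, assuming the entry point takes a pointer to the kernel’s boot arguments as its first argument (the exact register-level convention is worth checking against BootX):

    /* entry_address comes from the Mach-O header; args is the boot_args structure
     * described later in this post */
    typedef void (*kernel_entry_t)(struct boot_args *args);

    kernel_entry_t kernel_entry = (kernel_entry_t)entry_address;
    kernel_entry(&args);   /* never returns: the kernel owns the machine from here */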

After this code ran, the screen went black and my de­bug logs stopped ar­riv­ing via the se­r­ial de­bug con­nec­tion - while an­ti­cli­mac­tic, this was an in­di­ca­tor that the ker­nel was run­ning.

The ques­tion then be­came: how far was I mak­ing it into the boot process? To an­swer this, I had to start look­ing at XNU source code. The first code that runs is a PowerPC as­sem­bly _start rou­tine. This code re­con­fig­ures the hard­ware, over­rid­ing all of the Wii-specific setup that the boot­loader per­formed and, in the process, dis­ables boot­loader func­tion­al­ity for se­r­ial de­bug­ging and video out­put. Without nor­mal de­bug-out­put fa­cil­i­ties, I’d need to track progress a dif­fer­ent way.

The ap­proach that I came up with was a bit of a hack: bi­nary-patch the ker­nel, re­plac­ing in­struc­tions with ones that il­lu­mi­nate one of the front-panel LEDs on the Wii. If the LED il­lu­mi­nated af­ter jump­ing to the ker­nel, then I’d know that the ker­nel was mak­ing it at least that far. Turning on one of these LEDs is as sim­ple as writ­ing a value to a spe­cific mem­ory ad­dress. In PowerPC as­sem­bly, those in­struc­tions are:
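The shape of the patch, with a placeholder register address and bit value standing in for the real ones:

    lis   r10, 0x0d80          # upper half of the LED/GPIO register address (placeholder)
    ori   r10, r10, 0x00c0     # lower half (placeholder)
    lis   r11, 0x0020          # a value with the LED bit set (placeholder)
    stw   r11, 0(r10)          # write it out: the LED turns on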

To know which parts of the ker­nel to patch, I cross-ref­er­enced func­tion names in XNU source code with func­tion off­sets in the com­piled ker­nel bi­nary, us­ing Hopper Disassembler to make the process eas­ier. Once I iden­ti­fied the cor­rect off­set in the bi­nary that cor­re­sponded to the code I wanted to patch, I just needed to re­place the ex­ist­ing in­struc­tions at that off­set with the ones to blink the LED.

To make this patch­ing process eas­ier, I added some code to the boot­loader to patch the ker­nel bi­nary on the fly, en­abling me to try dif­fer­ent off­sets with­out man­u­ally mod­i­fy­ing the ker­nel file on disk.

After tracing through many kernel startup routines, I eventually mapped out the kernel’s path of execution up to the point where it died with a 300 exception (PowerPC’s data-access fault).

This was an exciting milestone - the kernel was definitely running, and I had even made it into some higher-level C code. To make it past the 300 exception crash, the bootloader would need to pass a pointer to a valid device tree.

The de­vice tree is a data struc­ture rep­re­sent­ing all of the hard­ware in the sys­tem that should be ex­posed to the op­er­at­ing sys­tem. As the name sug­gests, it’s a tree made up of nodes, each ca­pa­ble of hold­ing prop­er­ties and ref­er­ences to child nodes.

On real Mac com­put­ers, the boot­loader scans the hard­ware and con­structs a de­vice tree based on what it finds. Since the Wii’s hard­ware is al­ways the same, this scan­ning step can be skipped. I ended up hard-cod­ing the de­vice tree in the boot­loader, tak­ing in­spi­ra­tion from the de­vice tree that the Wii Linux pro­ject uses.

Since I was­n’t sure how much of the Wii’s hard­ware I’d need to sup­port in or­der to get the boot process fur­ther along, I started with a min­i­mal de­vice tree: a root node with chil­dren for the cpus and mem­ory:
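Written out in device-tree-source notation for readability (the bootloader builds the equivalent structure in memory; the property values here are illustrative):

    / {
        cpus {
            PowerPC,750@0 {
                reg = <0>;
                clock-frequency = <729000000>;     /* 729 MHz Broadway */
                timebase-frequency = <60750000>;
            };
        };
        memory@0 {
            device_type = "memory";
            reg = <0x00000000 0x01800000           /* 24 MB of MEM1 */
                   0x10000000 0x04000000>;         /* 64 MB of MEM2 */
        };
    };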

My plan was to ex­pand the de­vice tree with more pieces of hard­ware as I got fur­ther along in the boot process - even­tu­ally con­struct­ing a com­plete rep­re­sen­ta­tion of all of the Wii’s hard­ware that I planned to sup­port in Mac OS X.

Once I had a de­vice tree cre­ated and stored in mem­ory, I needed to pass it to the ker­nel as part of boot_args:
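A sketch of that hand-off; the field names follow XNU’s PowerPC boot_args as I understand it from pexpert’s boot header (the exact layout may differ, and the variable names for the device tree are placeholders):

    boot_args args;
    memset(&args, 0, sizeof(args));

    args.Revision = kBootArgsRevision;
    args.Version  = kBootArgsVersion;
    strcpy(args.CommandLine, "-v");               /* verbose boot */

    /* physical memory banks: 24 MB of MEM1 and 64 MB of MEM2 */
    args.PhysicalDRAM[0].base = 0x00000000;  args.PhysicalDRAM[0].size = 24 * 1024 * 1024;
    args.PhysicalDRAM[1].base = 0x10000000;  args.PhysicalDRAM[1].size = 64 * 1024 * 1024;

    /* early-boot framebuffer, later used by video_console.c */
    args.Video.v_baseAddr = 0x01700000;
    args.Video.v_rowBytes = 640 * 2;
    args.Video.v_width    = 640;
    args.Video.v_height   = 480;
    args.Video.v_depth    = 16;

    /* the hard-coded device tree from the previous section */
    args.deviceTreeP      = device_tree;
    args.deviceTreeLength = device_tree_size;

    /* ...then jump to the kernel entry point as shown earlier */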

With the de­vice tree in mem­ory, I had made it past the de­vice_tree.c crash. The boot­loader was per­form­ing the ba­sics well: load­ing the ker­nel, cre­at­ing boot ar­gu­ments and a de­vice tree, and ul­ti­mately, call­ing the ker­nel. To make ad­di­tional progress, I’d need to shift my at­ten­tion to­ward patch­ing the ker­nel source code to fix re­main­ing com­pat­i­bil­ity is­sues.

At this point, the ker­nel was get­ting stuck while run­ning some code to set up video and I/O mem­ory. XNU from this era makes as­sump­tions about where video and I/O mem­ory can be, and re­con­fig­ures Block Address Translations (BATs) in a way that does­n’t play nicely with the Wii’s mem­ory lay­out (MEM1 start­ing at 0x00000000, MEM2 start­ing at 0x10000000). To work around these lim­i­ta­tions, it was time to mod­ify the ker­nel’s source code and boot a mod­i­fied ker­nel bi­nary.

Figuring out a sane de­vel­op­ment en­vi­ron­ment to build an OS ker­nel from 25 years ago took some ef­fort. Here’s what I landed on:

* XNU source code lives on the host’s filesys­tem, and is ex­posed via an NFS server

* The guest ac­cesses the XNU source via an NFS mount

* The host uses SSH to con­trol the guest

* Edit XNU source on host, kick off a build via SSH on the guest, build ar­ti­facts end up on the filesys­tem ac­ces­si­ble by host and guest

To set up the de­pen­den­cies needed to build the Mac OS X Cheetah ker­nel on the Mac OS X Cheetah guest, I fol­lowed the in­struc­tions here. They mostly matched up with what I needed to do. Relevant sources are avail­able from Apple here.

After fix­ing the BAT setup and adding some small patches to reroute con­sole out­put to my USB Gecko, I now had video out­put and se­r­ial de­bug logs work­ing - mak­ing fu­ture de­vel­op­ment and de­bug­ging sig­nif­i­cantly eas­ier. Thanks to this new vis­i­bil­ity into what was go­ing on, I could see that the vir­tual mem­ory, IOKit, and BSD sub­sys­tems were all ini­tial­ized and run­ning - with­out crash­ing. This was a sig­nif­i­cant mile­stone, and gave me con­fi­dence that I was on the right path to get­ting a full sys­tem work­ing.

Readers who have attempted to run Mac OS X on a PC via “hackintoshing” may recognize the last line in the boot logs: the dreaded “Still waiting for root device”. This occurs when the system can’t find a root filesystem from which to continue booting. In my case, this was expected: the kernel had done all it could and was ready to load the rest of the Mac OS X system from the filesystem, but it didn’t know where to locate this filesystem. To make progress, I would need to tell the kernel how to read from the Wii’s SD card. To do this, I’d need to tackle the next phase of this project: writing drivers.

Mac OS X dri­vers are built us­ing IOKit - a col­lec­tion of soft­ware com­po­nents that aim to make it easy to ex­tend the ker­nel to sup­port dif­fer­ent hard­ware de­vices. Drivers are writ­ten us­ing a sub­set of C++, and make ex­ten­sive use of ob­ject-ori­ented pro­gram­ming con­cepts like in­her­i­tance and com­po­si­tion. Many pieces of use­ful func­tion­al­ity are pro­vided, in­clud­ing:

* Base classes and “families” that implement common behavior for different types of hardware

* Probing and match­ing dri­vers to hard­ware pre­sent in the de­vice tree

In IOKit, there are two kinds of drivers: a specific device driver and a nub. A specific device driver is an object that manages a specific piece of hardware. A nub is an object that serves as an attach-point for a specific device driver, and also provides the ability for that attached driver to communicate with the driver that created the nub. It’s this chain of driver-to-nub-to-driver that creates IOKit’s provider-client relationships. I struggled for a while to grasp this concept, and found a concrete example useful.

Real Macs can have a PCI bus with sev­eral PCI ports. In this ex­am­ple, con­sider an eth­er­net card be­ing plugged into one of the PCI ports. A dri­ver, IOPCIBridge, han­dles com­mu­ni­cat­ing with the PCI bus hard­ware on the moth­er­board. This dri­ver scans the bus, cre­at­ing IOPCIDevice nubs (attach-points) for each plugged-in de­vice that it finds. A hy­po­thet­i­cal dri­ver for the plugged-in eth­er­net card (let’s call it SomeEthernetCard) can at­tach to the nub, us­ing it as its proxy to call into PCI func­tion­al­ity pro­vided by the IOPCIBridge dri­ver on the other side. The SomeEthernetCard dri­ver can also cre­ate its own IOEthernetInterface nubs so that higher-level parts of the IOKit net­work­ing stack can at­tach to it.

Someone de­vel­op­ing a PCI eth­er­net card dri­ver would only need to write SomeEthernetCard; the lower-level PCI bus com­mu­ni­ca­tion and the higher-level net­work­ing stack code is all pro­vided by ex­ist­ing IOKit dri­ver fam­i­lies. As long as SomeEthernetCard can at­tach to an IOPCIDevice nub and pub­lish its own IOEthernetInterface nubs, it can sand­wich it­self be­tween two ex­ist­ing fam­i­lies in the dri­ver stack, ben­e­fit­ing from all of the func­tion­al­ity pro­vided by IOPCIFamily while also sat­is­fy­ing the needs of IONetworkingFamily.

Unlike Macs from the same era, the Wii does­n’t use PCI to con­nect its var­i­ous pieces of hard­ware to its moth­er­board. Instead, it uses a cus­tom sys­tem-on-a-chip (SoC) called the Hollywood. Through the Hollywood, many pieces of hard­ware can be ac­cessed: the GPU, SD card, WiFi, Bluetooth, in­ter­rupt con­trollers, USB ports, and more. The Hollywood also con­tains an ARM co­proces­sor, nick­named the Starlet, that ex­poses hard­ware func­tion­al­ity to the main PowerPC proces­sor via in­ter-proces­sor-com­mu­ni­ca­tion (IPC).

This unique hard­ware lay­out and com­mu­ni­ca­tion pro­to­col meant that I could­n’t piggy-back off of an ex­ist­ing IOKit dri­ver fam­ily like IOPCIFamily. Instead, I would need to im­ple­ment an equiv­a­lent dri­ver for the Hollywood SoC, cre­at­ing nubs that rep­re­sent at­tach-points for all of the hard­ware it con­tains. I landed on this lay­out of dri­vers and nubs (note that this is only show­ing a sub­set of the dri­vers that had to be writ­ten):

Now that I had a bet­ter idea of how to rep­re­sent the Wii’s hard­ware in IOKit, I be­gan work on my Hollywood dri­ver.

I started by creating a new C++ header and implementation file for a NintendoWiiHollywood driver. Its driver “personality” enabled it to be matched to a node in the device tree with the name “hollywood”. Once the driver was matched and running, it was time to publish nubs for all of its child devices.

Once again lean­ing on the de­vice tree as the source of truth for what hard­ware lives un­der the Hollywood, I it­er­ated through all of the Hollywood node’s chil­dren, cre­at­ing and pub­lish­ing NintendoWiiHollywoodDevice nubs for each:
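A sketch of that loop using standard IOKit registry calls (the method name and the error handling are mine, not necessarily the article’s):

    // Publish one NintendoWiiHollywoodDevice nub per child of the "hollywood" node.
    bool NintendoWiiHollywood::publishChildNubs(IOService *provider)
    {
        OSIterator *children = provider->getChildIterator(gIODTPlane);
        if (!children)
            return false;

        OSObject *obj;
        while ((obj = children->getNextObject())) {
            IORegistryEntry *child = OSDynamicCast(IORegistryEntry, obj);
            if (!child)
                continue;

            NintendoWiiHollywoodDevice *nub = new NintendoWiiHollywoodDevice;
            if (nub && nub->init(child->dictionaryWithProperties())) {
                nub->setName(child->getName());
                nub->attach(this);
                nub->registerService();   // make the nub visible for driver matching
            }
            if (nub)
                nub->release();
        }
        children->release();
        return true;
    }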

Once NintendoWiiHollywoodDevice nubs were cre­ated and pub­lished, the sys­tem would be able to have other de­vice dri­vers, like an SD card dri­ver, at­tach to them.

Next, I moved on to writ­ing a dri­ver to en­able the sys­tem to read and write from the Wii’s SD card. This dri­ver is what would en­able the sys­tem to con­tinue boot­ing, since it was cur­rently stuck look­ing for a root filesys­tem from which to load ad­di­tional startup files.

I be­gan by sub­class­ing IOBlockStorageDevice, which has many ab­stract meth­ods in­tended to be im­ple­mented by sub­classers:
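A representative subset of those methods, to give a flavor (the subclass name is hypothetical, and the exact signatures vary slightly between OS X releases):

    class NintendoWiiSDCard : public IOBlockStorageDevice
    {
        OSDeclareDefaultStructors(NintendoWiiSDCard)

    public:
        // identity and geometry: fine to answer with hard-coded values for the Wii
        virtual char    *getVendorString(void);
        virtual char    *getProductString(void);
        virtual IOReturn reportBlockSize(UInt64 *blockSize);
        virtual IOReturn reportRemovability(bool *isRemovable);
        virtual IOReturn reportEjectability(bool *isEjectable);
        virtual IOReturn reportWriteProtection(bool *isWriteProtected);

        // the interesting ones: these have to talk to the inserted card
        virtual IOReturn reportMaxValidBlock(UInt64 *maxBlock);
        virtual IOReturn doSyncReadWrite(IOMemoryDescriptor *buffer,
                                         UInt32 block, UInt32 nblks);
        virtual IOReturn doAsyncReadWrite(IOMemoryDescriptor *buffer,
                                          UInt32 block, UInt32 nblks,
                                          IOStorageCompletion completion);
    };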

For most of these meth­ods, I could im­ple­ment them with hard-coded val­ues that matched the Wii’s SD card hard­ware; ven­dor string, block size, max read and write trans­fer size, ejectabil­ity, and many oth­ers all re­turn con­stant val­ues, and were triv­ial to im­ple­ment.

The more in­ter­est­ing meth­ods to im­ple­ment were the ones that needed to ac­tu­ally com­mu­ni­cate with the cur­rently-in­serted SD card: get­ting the ca­pac­ity of the SD card, read­ing from the SD card, and writ­ing to the SD card:

To com­mu­ni­cate with the SD card, I uti­lized the IPC func­tion­al­ity pro­vided by MINI run­ning on the Starlet co-proces­sor. By writ­ing data to cer­tain re­served mem­ory ad­dresses, the SD card dri­ver was able to is­sue com­mands to MINI. MINI would then ex­e­cute those com­mands, com­mu­ni­cat­ing back any re­sult data by writ­ing to a dif­fer­ent re­served mem­ory ad­dress that the dri­ver could mon­i­tor.

MINI sup­ports many use­ful com­mand types. The ones used by the SD card dri­ver are:

* IPC_SDMMC_SIZE: Returns the num­ber of sec­tors on the cur­rently-in­serted SD card

With the size command plus MINI’s corresponding read and write commands, reads, writes, and capacity-checks could all be implemented, enabling me to satisfy the core requirements of the block storage device subclass.

Like with most programming endeavours, things rarely work on the first try. To investigate issues, my primary debugging tool was sending log messages to the serial debugger via calls to IOLog. With this technique, I was able to see which methods were being called on my driver, what values were being passed in, and what values my IPC implementation was sending to and receiving from MINI - but I had no ability to set breakpoints or analyze execution dynamically while the kernel was running.

One of the trick­ier bugs that I en­coun­tered had to do with cached mem­ory. When the SD card dri­ver wants to read from the SD card, the com­mand it is­sues to MINI (running on the ARM CPU) in­cludes a mem­ory ad­dress at which to store any loaded data. After MINI fin­ishes writ­ing to mem­ory, the SD card dri­ver (running on the PowerPC CPU) might not be able to see the up­dated con­tents if that re­gion is mapped as cacheable. In that case, the PowerPC will read from its cache lines rather than RAM, re­turn­ing stale data in­stead of the newly loaded con­tents. To work around this, the SD card dri­ver must use un­cached mem­ory for its buffers.

After several days of bug-fixing, I reached a new milestone: IOBlockStorageDriver, which attached to my SD card driver, had started publishing IOMedia nubs representing the logical partitions present on the SD card. Through these nubs, higher-level parts of the system were able to attach and begin using the SD card. Importantly, the system was now able to find a root filesystem from which to continue booting, and I was no longer stuck at “Still waiting for root device”.

My boot logs now looked like this:

After some more rounds of bug fixes (while on the go), I was able to boot past sin­gle-user mode:

And eventually, make it through the entire verbose-mode startup sequence, which ends with the message: “Startup complete”:

At this point, the sys­tem was try­ing to find a frame­buffer dri­ver so that the Mac OS X GUI could be shown. As in­di­cated in the logs, WindowServer was not happy - to fix this, I’d need to write my own frame­buffer dri­ver.

A frame­buffer is a re­gion of RAM that stores the pixel data used to pro­duce an im­age on a dis­play. This data is typ­i­cally made up of color com­po­nent val­ues for each pixel. To change what’s dis­played, new pixel data is writ­ten into the frame­buffer, which is then shown the next time the dis­play re­freshes. For the Wii, the frame­buffer usu­ally lives some­where in MEM1 due to it be­ing slightly faster than MEM2. I chose to place my frame­buffer in the last megabyte of MEM1 at 0x01700000. At 640x480 res­o­lu­tion, and 16 bits per pixel, the pixel data for the frame­buffer fit com­fort­ably in less than one megabyte of mem­ory.

Early in the boot process, Mac OS X uses the boot­loader-pro­vided frame­buffer ad­dress to dis­play sim­ple boot graph­ics via video_­con­sole.c. In the case of a ver­bose-mode boot, font-char­ac­ter bitmaps are writ­ten into the frame­buffer to pro­duce a vi­sual log of what’s hap­pen­ing while start­ing up. Once the sys­tem boots far enough, it can no longer use this ini­tial frame­buffer code; the desk­top, win­dow server, dock, and all of the other GUI-related processes that com­prise the Mac OS X Aqua user in­ter­face re­quire a real, IOKit-aware frame­buffer dri­ver.

To tackle this next dri­ver, I sub­classed IOFramebuffer. Similar to sub­class­ing IOBlockStorageDevice for the SD card dri­ver, IOFramebuffer also had sev­eral ab­stract meth­ods for my frame­buffer sub­class to im­ple­ment:

Once again, most of these were triv­ial to im­ple­ment, and sim­ply re­quired re­turn­ing hard-coded Wii-compatible val­ues that ac­cu­rately de­scribed the hard­ware. One of the most im­por­tant meth­ods to im­ple­ment is getA­per­tur­eRange, which re­turns an IODeviceMemory in­stance whose base ad­dress and size de­scribe the lo­ca­tion of the frame­buffer in mem­ory:
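A sketch of what that can look like for a fixed framebuffer (the class name is hypothetical; the base address and size are the ones discussed above):

    IODeviceMemory *WiiFramebuffer::getApertureRange(IOPixelAperture aperture)
    {
        if (aperture != kIOFBSystemAperture)
            return NULL;

        // last megabyte of MEM1: 640 x 480 pixels at 2 bytes per pixel
        return IODeviceMemory::withRange(0x01700000, 640 * 480 * 2);
    }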

After re­turn­ing the cor­rect de­vice mem­ory in­stance from this method, the sys­tem was able to tran­si­tion from the early-boot text-out­put frame­buffer, to a frame­buffer ca­pa­ble of dis­play­ing the full Mac OS X GUI. I was even able to boot the Mac OS X in­staller:

Readers with a keen eye might no­tice some is­sues:

* The verbose-mode text framebuffer is still active, causing text to be displayed and the framebuffer to be scrolled

* The colors are incorrect

The fix for the early-boot video con­sole still writ­ing text out­put to the frame­buffer was sim­ple: tell the sys­tem that our new, IOKit frame­buffer is the same as the one that was pre­vi­ously in use by re­turn­ing true from is­Con­soleDe­vice:
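Which amounts to a one-line override (class name hypothetical, as above):

    bool WiiFramebuffer::isConsoleDevice(void)
    {
        return true;   // claim the boot console so the text framebuffer stops drawing
    }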

The fix for the in­cor­rect col­ors was much more in­volved, as it re­lates to a fun­da­men­tal in­com­pat­i­bil­ity be­tween the Wii’s video hard­ware and the graph­ics code that Mac OS X uses.

The Nintendo Wii’s video en­coder hard­ware is op­ti­mized for ana­logue TV sig­nal out­put, and as a re­sult, ex­pects 16-bit YUV pixel data in its frame­buffer. This is a prob­lem, since Mac OS X ex­pects the frame­buffer to con­tain RGB pixel data. If the frame­buffer that the Wii dis­plays con­tains non-YUV pixel data, then col­ors will be com­pletely wrong.

To work around this in­com­pat­i­bil­ity, I took in­spi­ra­tion from the Wii Linux pro­ject, which had solved this prob­lem many years ago. The strat­egy is to use two frame­buffers: an RGB frame­buffer that Mac OS X in­ter­acts with, and a YUV frame­buffer that the Wii’s video hard­ware out­puts to the at­tached dis­play. 60 times per sec­ond, the frame­buffer dri­ver con­verts the pixel data in the RGB frame­buffer to YUV pixel data, plac­ing the con­verted data in the frame­buffer that the Wii’s video hard­ware dis­plays:
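The per-pixel-pair conversion looks roughly like this, assuming RGB565 input purely for illustration (the real pixel format may differ) and using the usual integer BT.601-style coefficients, with clamping omitted for brevity:

    /* Convert two adjacent RGB565 pixels into one Y1-U-Y2-V word for the Wii's framebuffer. */
    static uint32_t rgb_pair_to_yuyv(uint16_t p1, uint16_t p2)
    {
        int r1 = (p1 >> 11) << 3, g1 = ((p1 >> 5) & 0x3f) << 2, b1 = (p1 & 0x1f) << 3;
        int r2 = (p2 >> 11) << 3, g2 = ((p2 >> 5) & 0x3f) << 2, b2 = (p2 & 0x1f) << 3;

        int y1 = (77 * r1 + 150 * g1 + 29 * b1) >> 8;
        int y2 = (77 * r2 + 150 * g2 + 29 * b2) >> 8;

        /* chroma is shared by the pixel pair, so average the two colors first */
        int r = (r1 + r2) / 2, g = (g1 + g2) / 2, b = (b1 + b2) / 2;
        int u = ((-43 * r -  85 * g + 128 * b) >> 8) + 128;
        int v = ((128 * r - 107 * g -  21 * b) >> 8) + 128;

        return (y1 << 24) | (u << 16) | (y2 << 8) | v;
    }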

After im­ple­ment­ing the dual-frame­buffer strat­egy, I was able to boot into a cor­rectly-col­ored Mac OS X sys­tem - for the first time, Mac OS X was run­ning on a Nintendo Wii:

The sys­tem was now booted all the way to the desk­top - but there was a prob­lem - I had no way to in­ter­act with any­thing. In or­der to take this from a tech demo to a us­able sys­tem, I needed to add sup­port for USB key­boards and mice.

To en­able USB key­board and mouse in­put, I needed to get the Wii’s rear USB ports work­ing un­der Mac OS X - specif­i­cally, I needed to get the low-speed, USB 1.1 OHCI host con­troller up and run­ning. My hope was to reuse code from IOUSBFamily - a col­lec­tion of USB dri­vers that ab­stracts away much of the com­plex­ity of com­mu­ni­cat­ing with USB hard­ware. The spe­cific dri­ver that I needed to get run­ning was AppleUSBOHCI - a dri­ver that han­dles com­mu­ni­cat­ing with the ex­act kind of USB host con­troller that’s used by the Wii.

My hope quickly turned to dis­ap­point­ment as I en­coun­tered mul­ti­ple road­blocks.

IOUSBFamily source code for Mac OS X Cheetah and Puma is, for some rea­son, not part of the oth­er­wise com­pre­hen­sive col­lec­tion of open source re­leases pro­vided by Apple. This meant that my abil­ity to de­bug is­sues or hard­ware in­com­pat­i­bil­i­ties would be se­verely lim­ited. Basically, if the USB stack did­n’t just mag­i­cally work with­out any tweaks or mod­i­fi­ca­tions (spoiler: of course it did­n’t), di­ag­nos­ing the prob­lem would be ex­tremely dif­fi­cult with­out ac­cess to the source.

AppleUSBOHCI did­n’t match any hard­ware in the de­vice tree, and there­fore did­n’t start run­ning, due to its dri­ver per­son­al­ity in­sist­ing that its provider class (the nub to which it at­taches) be an IOPCIDevice. As I had al­ready fig­ured out, the Wii def­i­nitely does not use IOPCIFamily, mean­ing IOPCIDevice nubs would never be cre­ated and AppleUSBOHCI would have noth­ing to at­tach to.

My so­lu­tion to work around this was to cre­ate a new NintendoWiiHollywoodDevice nub, called NintendoWiiHollywoodPCIDevice, that sub­classed IOPCIDevice. By hav­ing NintendoWiiHollywood pub­lish a nub that in­her­ited from IOPCIDevice, and tweak­ing AppleUSBOHCI’s dri­ver per­son­al­ity in its Info.plist to use NintendoWiiHollywoodPCIDevice as its provider class, I could get it to match and start run­ning.

...

Read the original on bryankeller.github.io »

3 700 shares, 33 trendiness

Why Cities Are Axing the Controversial Surveillance Technology

Early this year, my home city of Bend, Oregon, ended its contract with surveillance company Flock Safety, following months of public pressure and concerns around weak data privacy protections. Flock’s controversial cameras were shut down, and its partnership with local law enforcement ended.

We weren’t the only city to ac­tively re­ject Flock cam­eras. Since the start of 2026, dozens of cities have sus­pended or de­ac­ti­vated con­tracts with Flock, la­bel­ing it a vast sur­veil­lance net­work. Others might not be aware that au­to­mated li­cense plate read­ers, com­monly re­ferred to as ALPR cam­eras, have al­ready been in­stalled in their neigh­bor­hood.

Flock gripped news head­lines late last year when it was un­der the mi­cro­scope dur­ing wide­spread crack­downs by Im­mi­gra­tion and Customs Enforcement. Though Flock does­n’t have a di­rect part­ner­ship with fed­eral agen­cies (a blurry line I’ll dis­cuss more), law en­force­ment agen­cies are free to share data with de­part­ments like ICE, and they fre­quently do.

One study from the Center for Human Rights at the University of Washington found that at least eight Washington law en­force­ment agen­cies shared their Flock data net­works di­rectly with ICE in 2025, and 10 more de­part­ments al­lowed ICE back­door ac­cess with­out ex­plic­itly grant­ing the agency per­mis­sion. Many other re­ports out­line sim­i­lar ac­tiv­ity.

Following Super Bowl ads about find­ing lost dogs, Flock was un­der scrutiny about its planned part­ner­ship with Ring, Amazon’s se­cu­rity brand. The in­te­gra­tion would have al­lowed po­lice to re­quest the use of Ring-brand home se­cu­rity cam­eras for in­ves­ti­ga­tions. Following in­tense pub­lic back­lash, Ring cut ties with Flock just like my city did.

To learn more, I spoke to Flock about how the com­pa­ny’s sur­veil­lance tech­nol­ogy is used (and mis­used). I also spoke with pri­vacy ad­vo­cates from the American Civil Liberties Union to dis­cuss sur­veil­lance con­cerns and what com­mu­ni­ties are do­ing about it.

If you hear that Flock is set­ting up near you, it usu­ally means the in­stal­la­tion of ALPR cam­eras to cap­ture li­cense plate pho­tos and mon­i­tor cars on the street.

Flock signs con­tracts with a wide range of en­ti­ties, in­clud­ing city gov­ern­ments and law en­force­ment de­part­ments. A neigh­bor­hood can also part­ner with Flock — for ex­am­ple, if an HOA de­cides it wants ex­tra eyes on the road, it may choose to use Flock’s sys­tems.

When Flock secures a contract, the company installs its cameras at strategic locations. Though these cameras are primarily marketed for license plate recognition, Flock reports on its site that its surveillance system is “intended to reduce crime, including property crimes such as mail and package theft, home invasions, vandalism, trespassing, and burglary.” The company also says it frequently solves violent crimes like “assault, kidnappings, shootings and homicides.”

Flock has recently expanded into other technologies, including advanced cameras that monitor more than just vehicles. Most concerning are the latest Flock drones equipped with high-powered cameras. Flock’s “Drone as First Responder” platform automates drone operations, including launching them in response to 911 calls or gunfire. Flock’s drones, which reach speeds up to 60 mph, can follow vehicles or people and provide information to law enforcement.

Drones like these can be used to track flee­ing sus­pects. In prac­tice, the key is how law en­force­ment chooses to use them, and whether states pass laws al­low­ing po­lice to use drones with­out a war­rant — I’ll cover state laws more be­low, be­cause that’s a big part of to­day’s sur­veil­lance.

It’s im­por­tant to note that not all cities or neigh­bor­hoods re­fer to Flock Safety by name, even when us­ing its tech­nol­ogy. They might men­tion the Drone as First Responder pro­gram, or ALPR cam­eras, with­out fur­ther de­tails. For ex­am­ple, a March announcement about po­lice drones from the city of Lancaster, California, does­n’t men­tion Flock at all, even though it was the com­pany be­hind the drone pro­gram.

Flock states on its web­site that its stan­dard li­cense-plate cam­eras can­not tech­ni­cally track ve­hi­cles, but only take a point-in-time” im­age of a car to nab the li­cense plate.

However, due to AI video and im­age search, con­tracted par­ties like lo­cal law en­force­ment can use these tools to piece to­gether li­cense in­for­ma­tion and form their own time­line of where and when a ve­hi­cle went. Adding to those ca­pa­bil­i­ties, Flock also told Forbes that it’s mak­ing ef­forts to ex­pand ac­cess to in­clude video clips and live feeds.

Flock’s machine learning can also note details like a vehicle’s body type, color, the condition of the license plate and a wide variety of identifiers, like roof racks, paint colors and what you have stored in the back. Flock rarely calls this AI, but it’s similar to the AI-powered detection features you can find in the latest home security cameras.

A Flock spokesperson told me the company has boundaries and does not use facial recognition. “We have more traditional video cameras that can send an alert when one sees if a person is in the frame, for instance, in a business park at 2 a.m. or in the public parks after dark.”

By “traditional” cameras, Flock refers to those that capture a wider field of view — more than just cars and license plates — and can record video rather than just snapshot images.

The in­for­ma­tion Flock can ac­cess pro­vides a com­pre­hen­sive pic­ture that po­lice can use to track cars by run­ning searches on their soft­ware. Just like you might Google a lo­cal restau­rant, po­lice can search for a ba­sic ve­hi­cle de­scrip­tion and re­trieve re­cent matches that the sur­veil­lance equip­ment may have found. Those searches can some­times ex­tend to peo­ple, too.

“We have an investigative tool called Freeform that lets you use natural language prompts to find the investigative lead you’re looking for, including the description of what a person’s clothes may be,” the Flock spokesperson told me.

Unlike red-light cameras, Flock’s cameras can be installed nearly anywhere and snap vehicle ID images for all cars. There are Safe Lists that people can use to help Flock cameras filter out vehicles by filling out a form with their address and license plate to mark their vehicle as a “resident.”

The op­po­site is also true: Flock cam­eras can use a hot list of known, wanted ve­hi­cles and send au­to­matic alerts to po­lice if one is found.

With Flock drones, these in­tel­li­gent searches be­come even more com­plete, al­low­ing cam­eras to track where cars are go­ing and iden­tify peo­ple. That raises ad­di­tional pri­vacy con­cerns about hav­ing eyes in the sky over your back­yard.

“While flying, the drone faces forward, looking at the horizon, until it gets to the call for service, at which point the camera looks down,” the Flock spokesperson said. “Every flight path is logged in a publicly available flight dashboard for appropriate oversight.”

Yet un­like per­sonal se­cu­rity op­tions, there’s no easy way to opt out of this kind of sur­veil­lance. You can’t turn off a fea­ture, can­cel a sub­scrip­tion or throw away a de­vice to avoid it.

And even though more than 45 cities have can­celed Flock con­tracts amid pub­lic out­cry, that does­n’t guar­an­tee that all sur­veil­lance cam­eras will be re­moved from the des­ig­nated area.

When I reached out to the police department in Eugene, another city in Oregon that ended its Flock contract, the PD director of public information told me that, while there were concerns about certain vulnerabilities and data security requirements with the particular vendor, the technology itself is not the problem. “Eugene Police’s ALPR system experience has demonstrated the value of leveraging ALPR technology to aid investigations … the department must ensure that any vendors meet the highest standards.”

Flock’s stance, as outlined in its privacy and ethics guide, is that license plate numbers and vehicle descriptions aren’t personal information. The company says it doesn’t surveil “private data” — only cars and general descriptive markers.

But ve­hi­cle in­for­ma­tion can be con­sid­ered per­sonal be­cause it’s legally tied to the ve­hi­cle’s owner. Privacy laws, in­clud­ing pro­posed fed­eral leg­is­la­tion from 2026, pro­hibit the re­lease of per­sonal in­for­ma­tion from state mo­tor ve­hi­cle records in or­der to pro­tect cit­i­zens.

However, those laws typ­i­cally in­clude ex­emp­tions for le­gal ac­tions and law en­force­ment, some­times even for pri­vate se­cu­rity com­pa­nies.

AI detection also plays a role. When someone can identify a vehicle through searches like “red pickup truck with a dog in the bed,” that tracking goes beyond basic license plates to much more personal information about the driver and their life. It may include the bumper stickers, what can be seen in the backseat and whether a vehicle has a visible gun rack.

Flock’s prac­tices — like its re­cent push to­ward live video feeds and drones to track sus­pects — move out of the gray area, and that’s where pri­vacy ad­vo­cates are rightly con­cerned. Despite its pol­icy, it ap­pears you can track spe­cific peo­ple us­ing Flock tech. You’ll just need to pay more to do so, such as up­grad­ing from ALPRs to Flock’s sus­pect-fol­low­ing drone pro­gram, or us­ing its Freeform tool to track some­one by the clothes they’re wear­ing.

Flock states on its web­site that it stores data for 30 days on Amazon Web Services cloud stor­age and then deletes it. It uses KMS-based en­cryp­tion (a man­aged en­cryp­tion key sys­tem com­mon in AWS) and re­ports that all im­ages and re­lated data are en­crypted from on-de­vice stor­age to cloud stor­age.

When Flock col­lects crim­i­nal jus­tice in­for­ma­tion, or sen­si­tive data man­aged by law en­force­ment, it’s only avail­able to of­fi­cial gov­ern­ment agen­cies, not an en­tity like your lo­cal HOA. Because video data is en­crypted through­out its trans­fer to the end user, em­ploy­ees at Flock can­not ac­cess it. These are the same kind of se­cu­rity prac­tices I look for when re­view­ing home se­cu­rity cam­eras, but there are more com­pli­ca­tions here.

However, Flock also makes it clear that its cus­tomers — whether that’s a lo­cal po­lice de­part­ment, pri­vate busi­ness or an­other in­sti­tu­tion — own their data and con­trol ac­cess to it. Once end users ac­cess that data, Flock’s own pri­vacy mea­sures don’t do much to help. That raises con­cerns about the se­cu­rity of lo­cal law en­force­ment sys­tems, each of which has its own data reg­u­la­tions and ac­count­abil­ity prac­tices.

You may have no­ticed a theme: Flock pro­vides pow­er­ful sur­veil­lance tech­nol­ogy, and the fi­nal re­sults are deeply in­flu­enced by how cus­tomers use it. That can be creepy at best, and an il­le­gal abuse of power at worst.

Since Flock Safety be­gan part­ner­ing with law en­force­ment, a grow­ing num­ber of of­fi­cers have been found abus­ing the sur­veil­lance sys­tem. In one in­stance, a Kansas po­lice chief used Flock cam­eras 164 times while track­ing an ex. In an­other case, a sher­iff in Texas lied about us­ing Flock to track a miss­ing per­son,” but was later found to be in­ves­ti­gat­ing a pos­si­ble abor­tion. In Georgia, a po­lice chief was ar­rested for us­ing Flock to stalk and ha­rass cit­i­zens. In Virginia, a man sued the city of Norfolk over pur­ported pri­vacy vi­o­la­tions and dis­cov­ered that Flock cam­eras had been used to track him 526 times, around four times per day.

Those are just a few examples from a long list, giving real substance to worries about a surveillance state and a lack of checks and balances. When I asked Flock how its systems protect against abuse and overreach, a spokesperson referred to its accountability feature, “an auditing tool that records every search that a user of Flock conducts in the system.” Flock used this tool during the Georgia case above, which ultimately led to the arrest of the police chief.

While police search logs are often tracked like this, reports indicate that many authorities start searches with vague terms and cast a wide net using terms like “investigation,” “crime” or a broad immigration term like “deportee” to gain access to as much data as possible. While police can’t avoid Flock’s audit logs, they can use general or discriminatory terms — or skip filling out fields entirely — to evade investigations and hide intent.

Regardless of the au­dit­ing tools, the onus is on lo­cal or­ga­ni­za­tions to man­age in­ves­ti­ga­tions, ac­count­abil­ity and trans­parency. That brings me to a par­tic­u­larly im­pact­ful cur­rent event.

ICE is the ele­phant in the room in my Flock guide. Does Flock share its sur­veil­lance data with fed­eral agen­cies such as ICE? Yes, the fed­eral gov­ern­ment fre­quently has ac­cess to that data, but how it gets ac­cess is im­por­tant.

Flock states on its web­site that it has not shared data or part­nered with ICE or any other Department of Homeland Security of­fi­cials since ter­mi­nat­ing its pi­lot pro­grams in August 2025. Flock says its fo­cus is now on lo­cal law en­force­ment, but that comes with a hands-off ap­proach that does­n’t con­trol what hap­pens to in­for­ma­tion down­stream.

“Flock has no authority to share data on our customers’ behalf, nor the authority to disrupt their law enforcement operations,” the Flock spokesperson told me. Local police all over the country collaborate with federal agencies for various reasons, with or without Flock technology.

That collaboration has grown more complex. As Democratic Senator Ron Wyden from Oregon stated in an open letter to Flock Safety, “local” law enforcement isn’t that local anymore, especially when 75% of Flock’s law enforcement customers have enrolled in the National Lookup Tool, which allows information sharing across the country between all participants.

“Flock has built a dangerous platform in which abuse of surveillance data is almost certain,” Wyden wrote. “The company has adopted a see-no-evil approach of not proactively auditing the searches done by its law enforcement customers because, as the company’s Chief Communications Officer told the press, ‘It is not Flock’s job to police the police.’”

Police department sharing isn’t always easy to track, but reporting from 404 Media found that police departments across the country have been creating Flock searches with reasons listed as “immigration,” “ICE,” or “ICE warrant,” among others. Again, since police can put whatever terms they want in these fields — depending on local policies — we don’t know for sure how common it is to look up info for ICE.

Additionally, there’s not al­ways an of­fi­cial process or chain of ac­count­abil­ity for shar­ing this data. In Oregon, reports found that a po­lice de­part­ment was con­duct­ing Flock searches on be­half of ICE and the FBI via a sim­ple email thread.

“When this kind of surveillance power is in malevolent hands — and in the case of ICE, I feel comfortable saying a growing number of Americans view it as a bad actor — these companies are empowering actions the public increasingly finds objectionable,” a lawyer with the ACLU told a Salt Lake City news outlet earlier this year.

With the myr­iad ways law en­force­ment shares Flock data with the fed­eral gov­ern­ment, it may seem like there’s not much you can do. But one pow­er­ful tool is ad­vo­cat­ing for new laws.

In the past two years, a grow­ing num­ber of state laws have been passed or pro­posed to ad­dress Flock Safety, li­cense plate read­ers and sur­veil­lance. Much of this leg­is­la­tion is bi­par­ti­san, or has been passed by both tra­di­tion­ally right- and left-lean­ing states, al­though some go fur­ther than oth­ers.

When I con­tacted the ACLU to learn what leg­is­la­tion is most ef­fec­tive in sit­u­a­tions like this, Chad Marlow, se­nior pol­icy coun­sel and lead on the ACLUs ad­vo­cacy work for Flock and re­lated sur­veil­lance, gave sev­eral ex­am­ples.

“I would limit the allowed uses for ALPR,” Marlow told me. “While some uses, like for toll collection and Amber Alerts, with the right guardrails in place, are not particularly problematic, some ALPRs are used to target communities of color and low-income communities for fine/fee enforcement and for minor crime enforcement, which can exacerbate existing policing inequities.”

This type of harmful ALPR targeting is typically used to both oppress minorities and bring in a greater number of fees for local law organizations — problems that existed long before AI recognition cameras, but have been exacerbated by the technology.

New leg­is­la­tion can help, but it needs to be care­fully crafted. The most ef­fec­tive laws fall into two cat­e­gories. The first is re­quir­ing any col­lected ALPR or re­lated data to be deleted within a cer­tain time frame — the shorter, the bet­ter. New Hampshire wins here with a 3-minute rule.

“For states that want a little more time to see if captured ALPR data is relevant to an ongoing investigation, keeping the data for a few days is sufficient,” Marlow said. “Some states, like Washington and Virginia, recently adopted 21-day limits, which is the very outermost acceptable limit.”

The sec­ond type of promis­ing law makes it il­le­gal to share ALPR and sim­i­lar data out­side the state (such as with ICE) and has been passed by states like Virginia, Illinois and California.

“Ideally, no data should be shared outside the collecting agency without a warrant,” Marlow said. “But some states have chosen to prohibit data sharing outside of the state, which is better than nothing, and does limit some risks.”

Vermont, mean­while, re­quires a strict ap­proval process for ALPRs that, by 2025, left no law en­force­ment agency in the state us­ing li­cense cams.

But what hap­pens if po­lice choose to ig­nore laws and con­tinue us­ing Flock as they see fit? That’s al­ready hap­pened. In California, for ex­am­ple, po­lice in Los Angeles and San Diego were found shar­ing in­for­ma­tion with Homeland Security in 2025, in vi­o­la­tion of a state law that bans or­ga­ni­za­tions from shar­ing li­cense plate data out of state.

When this hap­pens, the re­course is typ­i­cally a law­suit, ei­ther from the state at­tor­ney gen­eral or a class ac­tion by the com­mu­nity, both of which are on­go­ing in California in 2026. But what should peo­ple do while leg­is­la­tion and law­suits pro­ceed?

Marlow ac­knowl­edged that in­di­vid­u­als can’t do much about Flock sur­veil­lance with­out bans or leg­is­la­tion.

“Flock identifies and tracks your vehicle by scanning its license plate, and covering your license plate is illegal, so that is not an option,” he told me.

However, Marlow suggested minor changes that could make a difference for those who are seriously worried. “When people are traveling to sensitive locations, they could take public transportation and pay with cash (credit cards can be tracked, as can share-a-rides) or get a lift from a friend, but those aren’t really practical on an everyday basis.”

Ditching or re­strict­ing Flock Safety is one way com­mu­ni­ties are fight­ing back against what they con­sider to be un­nec­es­sary sur­veil­lance with the po­ten­tial for abuse. But AI sur­veil­lance does­n’t be­gin or end with one com­pany.

Flock Safety is an in­ter­me­di­ary that pro­vides tech­nol­ogy in de­mand by pow­er­ful or­ga­ni­za­tions. It’s hardly the only one with these kinds of high-tech eyes — it’s just one of the first to en­ter the mar­ket at a na­tional level. If Flock were gone, an­other com­pany would likely step in to fill the gap, un­less re­stricted by law.

As Flock’s in­te­gra­tion with other apps and cam­eras be­comes more com­plex, it’s go­ing to be harder to tell where Flock ends and an­other so­lu­tion be­gins, even with­out ri­val com­pa­nies show­ing up with the lat­est AI track­ing.

But ri­vals are show­ing up, from Shield AI for mil­i­tary in­tel­li­gence to com­mer­cial ap­pli­ca­tions by com­pa­nies like Ambient.ai, Verkada’s AI se­cu­rity searches and the in­fa­mous in­tel­li­gence firm Palantir, all look­ing for ways to in­te­grate and ex­pand. Motorola, in par­tic­u­lar, is in on the ac­tion with its VehicleManager plat­form.

The first step is be­ing aware, in­clud­ing know­ing which new cam­eras your city is in­stalling and which soft­ware part­ner­ships your lo­cal law en­force­ment has. If you don’t like what you dis­cover, find ways to par­tic­i­pate in the de­ci­sion-mak­ing process, like at­tend­ing open city coun­cil meet­ings on Flock, as in Bend.

On a broader level, keep track of the leg­is­la­tion your state is con­sid­er­ing re­gard­ing Flock and sim­i­lar sur­veil­lance con­tracts and op­er­a­tions, as these will have the great­est long-term im­pact. Blocking data from be­ing shared out of state and re­quir­ing po­lice to delete sur­veil­lance ASAP are par­tic­u­larly im­por­tant steps. You can con­tact your state sen­a­tors and rep­re­sen­ta­tives to en­cour­age leg­is­la­tion like this.

When you’re wondering what to share with politicians, I recommend something like what Marlow told me: “The idea of keeping a location dossier on every single person just in case one of us turns out to be a criminal is just about the most un-American approach to privacy I can imagine.”

You can also sign up for and do­nate to pro­jects that are ad­dress­ing Flock con­cerns, such as The Plate Privacy Project from The Institute for Justice. I’m cur­rently talk­ing to them about the lat­est events, and I’ll up­date if they have any ad­di­tional tips for us.

Keep fol­low­ing CNET home se­cu­rity, where I break down the lat­est news you should know, like pri­vacy set­tings to turn on, se­cu­rity cam­era set­tings you may want to turn off and how sur­veil­lance in­ter­sects with our daily lives. Things are chang­ing fast, but we’re stay­ing on top of it.

...

Read the original on www.cnet.com »

4 618 shares, 81 trendiness

Little Snitch for Linux

Every time an ap­pli­ca­tion on your com­puter opens a net­work con­nec­tion, it does so qui­etly, with­out ask­ing. Little Snitch for Linux makes that ac­tiv­ity vis­i­ble and gives you the op­tion to do some­thing about it. You can see ex­actly which ap­pli­ca­tions are talk­ing to which servers, block the ones you did­n’t in­vite, and keep an eye on traf­fic his­tory and data vol­umes over time.

Once in­stalled, open the user in­ter­face by run­ning lit­tlesnitch in a ter­mi­nal, or go straight to http://​lo­cal­host:3031/. You can book­mark that URL, or in­stall it as a Progressive Web App. Any Chromium-based browser sup­ports this na­tively, and Firefox users can do the same with the Progressive Web Apps ex­ten­sion.

The con­nec­tions view is where most of the ac­tion is. It lists cur­rent and past net­work ac­tiv­ity by ap­pli­ca­tion, shows you what’s be­ing blocked by your rules and block­lists, and tracks data vol­umes and traf­fic his­tory. Sorting by last ac­tiv­ity, data vol­ume, or name, and fil­ter­ing the list to what’s rel­e­vant, makes it easy to spot any­thing un­ex­pected. Blocking a con­nec­tion takes a sin­gle click.

The traf­fic di­a­gram at the bot­tom shows data vol­ume over time. You can drag to se­lect a time range, which zooms in and fil­ters the con­nec­tion list to show only ac­tiv­ity from that pe­riod.

Blocklists let you cut off whole cat­e­gories of un­wanted traf­fic at once. Little Snitch down­loads them from re­mote sources and keeps them cur­rent au­to­mat­i­cally. It ac­cepts lists in sev­eral com­mon for­mats: one do­main per line, one host­name per line, /etc/hosts style (IP ad­dress fol­lowed by host­name), and CIDR net­work ranges. Wildcard for­mats, regex or glob pat­terns, and URL-based for­mats are not sup­ported. When you have a choice, pre­fer do­main-based lists over host-based ones, they’re han­dled more ef­fi­ciently. Well known brands are Hagezi, Peter Lowe, Steven Black and oisd.nl, just to give you a start­ing point.
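For illustration, here are made-up entries in each of the accepted formats (a domain, a hostname, an /etc/hosts-style pair, and a CIDR range, in that order):

    example-ads.com
    tracker.host.example.net
    0.0.0.0 metrics.example.org
    203.0.113.0/24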

One thing to be aware of: the .lsrules for­mat from Little Snitch on ma­cOS is not com­pat­i­ble with the Linux ver­sion.

Blocklists work at the do­main level, but rules let you go fur­ther. A rule can tar­get a spe­cific process, match par­tic­u­lar ports or pro­to­cols, and be as broad or nar­row as you need. The rules view lets you sort and fil­ter them so you can stay on top of things as the list grows.

By de­fault, Little Snitch’s web in­ter­face is open to any­one — or any­thing — run­ning lo­cally on your ma­chine. A mis­be­hav­ing or ma­li­cious ap­pli­ca­tion could, in prin­ci­ple, add and re­move rules, tam­per with block­lists, or turn the fil­ter off en­tirely.

If that con­cerns you, Little Snitch can be con­fig­ured to re­quire au­then­ti­ca­tion. See the Advanced con­fig­u­ra­tion sec­tion be­low for de­tails.

Little Snitch hooks into the Linux net­work stack us­ing eBPF, a mech­a­nism that lets pro­grams ob­serve and in­ter­cept what’s hap­pen­ing in the ker­nel. An eBPF pro­gram watches out­go­ing con­nec­tions and feeds data to a dae­mon, which tracks sta­tis­tics, pre­con­di­tions your rules, and serves the web UI.

The source code for the eBPF pro­gram and the web UI is on GitHub.

The UI de­lib­er­ately ex­poses only the most com­mon set­tings. Anything more tech­ni­cal can be con­fig­ured through plain text files, which take ef­fect af­ter restart­ing the lit­tlesnitch dae­mon.

The de­fault con­fig­u­ra­tion lives in /var/lib/littlesnitch/config/. Don’t edit those files di­rectly — copy whichever one you want to change into /var/lib/littlesnitch/overrides/config/ and edit it there. Little Snitch will al­ways pre­fer the over­ride.
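
For example, overriding the web UI settings might look like this (a minimal sketch of that copy-then-edit workflow; web_ui.toml is one of the files described below, and any other config file works the same way):

    # Sketch of the documented override workflow: copy a default config file into
    # the overrides directory, edit the copy, then restart the littlesnitch daemon.
    import shutil
    from pathlib import Path

    name = "web_ui.toml"  # example; any file from the default config directory
    default = Path("/var/lib/littlesnitch/config") / name
    override = Path("/var/lib/littlesnitch/overrides/config") / name

    override.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(default, override)
    print(f"Edit {override}, then restart the littlesnitch daemon.")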

The files you’re most likely to care about:

we­b_ui.toml — net­work ad­dress, port, TLS, and au­then­ti­ca­tion. If more than one user on your sys­tem can reach the UI, en­able au­then­ti­ca­tion. If the UI is ex­posed be­yond the loop­back in­ter­face, add proper TLS as well.

main.toml — what to do when a con­nec­tion matches noth­ing. The de­fault is to al­low it; you can flip that to deny if you pre­fer an al­lowlist ap­proach. But be care­ful! It’s easy to lock your­self out of the com­puter!

ex­e­cuta­bles.toml — a set of heuris­tics for group­ing ap­pli­ca­tions sen­si­bly. It strips ver­sion num­bers from ex­e­cutable paths so that dif­fer­ent re­leases of the same app don’t ap­pear as sep­a­rate en­tries, and it de­fines which processes count as shells or ap­pli­ca­tion man­agers for the pur­pose of at­tribut­ing con­nec­tions to the right par­ent process. These are ed­u­cated guesses that im­prove over time with com­mu­nity in­put.

Both the eBPF pro­gram and the web UI can be swapped out for your own builds if you want to go that far. Source code for both is on GitHub. Again, Little Snitch prefers the ver­sion in over­rides.

Little Snitch for Linux is built for pri­vacy, not se­cu­rity, and that dis­tinc­tion mat­ters. The ma­cOS ver­sion can make stronger guar­an­tees be­cause it can have more com­plex­ity. On Linux, the foun­da­tion is eBPF, which is pow­er­ful but bounded: it has strict lim­its on stor­age size and pro­gram com­plex­ity. Under heavy traf­fic, cache ta­bles can over­flow, which makes it im­pos­si­ble to re­li­ably tie every net­work packet to a process or a DNS name. And re­con­struct­ing which host­name was orig­i­nally looked up for a given IP ad­dress re­quires heuris­tics rather than cer­tainty. The ma­cOS ver­sion uses deep packet in­spec­tion to do this more re­li­ably. That’s not an op­tion here.
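
To picture the kind of heuristic involved, here is a toy sketch (not how Little Snitch actually does it): remember recent DNS answers in a bounded cache and label later connections with the hostname that most recently resolved to that address. When the cache overflows under heavy traffic, the mapping is lost and a connection can only be shown by its raw IP.

    # Toy illustration of heuristic hostname attribution, not Little Snitch's code.
    from collections import OrderedDict

    CACHE_SIZE = 4                 # deliberately tiny so eviction is visible
    dns_cache = OrderedDict()      # IP address -> hostname that last resolved to it

    def record_dns_answer(hostname, ip):
        dns_cache[ip] = hostname
        dns_cache.move_to_end(ip)
        while len(dns_cache) > CACHE_SIZE:
            dns_cache.popitem(last=False)      # evict the oldest mapping

    def label_connection(ip):
        return dns_cache.get(ip, "unknown host (cache miss)")

    record_dns_answer("ads.example.com", "203.0.113.5")
    print(label_connection("203.0.113.5"))     # -> ads.example.com
    print(label_connection("198.51.100.7"))    # -> unknown host (cache miss)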

For keep­ing tabs on what your soft­ware is up to and block­ing le­git­i­mate soft­ware from phon­ing home, Little Snitch for Linux works well. For hard­en­ing a sys­tem against a de­ter­mined ad­ver­sary, it’s not the right tool.

Little Snitch for Linux has three components. The eBPF kernel program and the web UI are both released under the GNU General Public License version 2 and available on GitHub. The daemon (littlesnitch --daemon) is proprietary, but free to use and redistribute.

...

Read the original on obdev.at »

5 567 shares, 22 trendiness

Škoda DuoBell: A bicycle bell that outsmarts even smart headphones

The re­design of a safety fea­ture that is more than 100 years old orig­i­nated from a sim­ple need. Bicycle bells have re­mained al­most un­changed for over a cen­tury, but the world around them has not. Škoda DuoBell is the first bell ever de­signed to pen­e­trate noise-can­celling head­phones. It is a smart ana­logue trick that out­smarts the ar­ti­fi­cial in­tel­li­gence al­go­rithms in these head­phones. It is a small ad­just­ment that will im­prove safety on city streets,” said Ben Edwards from AMV BBDO, the agency in­volved in de­vel­op­ing the con­cept. The idea was also sup­ported by the agency PHD, while pro­duc­tion com­pany Unit9 con­tributed to the de­vel­op­ment of the pro­to­type.

The num­ber of cy­clists in ma­jor cities world­wide is in­creas­ing. For ex­am­ple, in London, the num­ber of cy­clists is ex­pected to sur­pass the num­ber of car dri­vers for the first time in his­tory this year. At the same time, how­ever, the risk of col­li­sions be­tween cy­clists and inat­ten­tive pedes­tri­ans is also ris­ing. In 2024 alone, ac­cord­ing to data from Transport for London, the num­ber of such in­ci­dents in­creased by 24%.

...

Read the original on www.skoda-storyboard.com »

6 524 shares, 29 trendiness

Microsoft Abruptly Terminates VeraCrypt Account, Halting Windows Updates

Microsoft has ter­mi­nated an ac­count as­so­ci­ated with VeraCrypt, a pop­u­lar and long-run­ning piece of en­cryp­tion soft­ware, throw­ing fu­ture Windows up­dates of the tool into doubt, VeraCrypt’s de­vel­oper told 404 Media.

The move high­lights the some­times del­i­cate sup­ply chain in­volved in the pub­li­ca­tion of open source soft­ware, es­pe­cially soft­ware that re­lies on big tech com­pa­nies even tan­gen­tially.

...

Read the original on www.404media.co »

7 520 shares, 27 trendiness

TERRY BISSON of the UNIVERSE

There’s no doubt about it. We picked up sev­eral from dif­fer­ent parts of the planet, took them aboard our re­con ves­sels, and probed them all the way through. They’re com­pletely meat.”

That’s im­pos­si­ble. What about the ra­dio sig­nals? The mes­sages to the stars?”

They use the ra­dio waves to talk, but the sig­nals don’t come from them. The sig­nals come from ma­chines.”

So who made the ma­chines? That’s who we want to con­tact.”

They made the ma­chines. That’s what I’m try­ing to tell you. Meat made the ma­chines.”

That’s ridicu­lous. How can meat make a ma­chine? You’re ask­ing me to be­lieve in sen­tient meat.”

I’m not ask­ing you, I’m telling you. These crea­tures are the only sen­tient race in that sec­tor and they’re made out of meat.”

Maybe they’re like the or­folei. You know, a car­bon-based in­tel­li­gence that goes through a meat stage.”

Nope. They’re born meat and they die meat. We stud­ied them for sev­eral of their life spans, which did­n’t take long. Do you have any idea what’s the life span of meat?”

Spare me. Okay, maybe they’re only part meat. You know, like the wed­dilei. A meat head with an elec­tron plasma brain in­side.”

Nope. We thought of that, since they do have meat heads, like the wed­dilei. But I told you, we probed them. They’re meat all the way through.”

Oh, there’s a brain all right. It’s just that the brain is made out of meat! That’s what I’ve been try­ing to tell you.”

So … what does the think­ing?”

You’re not un­der­stand­ing, are you? You’re re­fus­ing to deal with what I’m telling you. The brain does the think­ing. The meat.”

Thinking meat! You’re ask­ing me to be­lieve in think­ing meat!”

Yes, think­ing meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you be­gin­ning to get the pic­ture or do I have to start all over?”

Omigod. You’re se­ri­ous then. They’re made out of meat.”

Thank you. Finally. Yes. They are in­deed made out of meat. And they’ve been try­ing to get in touch with us for al­most a hun­dred of their years.”

Omigod. So what does this meat have in mind?”

First it wants to talk to us. Then I imag­ine it wants to ex­plore the Universe, con­tact other sen­tiences, swap ideas and in­for­ma­tion. The usual.”

That’s the idea. That’s the mes­sage they’re send­ing out by ra­dio. Hello. Anyone out there. Anybody home.’ That sort of thing.”

They ac­tu­ally do talk, then. They use words, ideas, con­cepts?”

Oh, yes. Except they do it with meat.”

I thought you just told me they used ra­dio.”

They do, but what do you think is on the ra­dio? Meat sounds. You know how when you slap or flap meat, it makes a noise? They talk by flap­ping their meat at each other. They can even sing by squirt­ing air through their meat.”

Omigod. Singing meat. This is al­to­gether too much. So what do you ad­vise?”

Officially, we are re­quired to con­tact, wel­come and log in any and all sen­tient races or multi­beings in this quad­rant of the Universe, with­out prej­u­dice, fear or fa­vor. Unofficially, I ad­vise that we erase the records and for­get the whole thing.”

I was hop­ing you would say that.”

It seems harsh, but there is a limit. Do we re­ally want to make con­tact with meat?”

I agree one hun­dred per­cent. What’s there to say? Hello, meat. How’s it go­ing?’ But will this work? How many plan­ets are we deal­ing with here?”

Just one. They can travel to other plan­ets in spe­cial meat con­tain­ers, but they can’t live on them. And be­ing meat, they can only travel through C space. Which lim­its them to the speed of light and makes the pos­si­bil­ity of their ever mak­ing con­tact pretty slim. Infinitesimal, in fact.”

So we just pre­tend there’s no one home in the Universe.”

Cruel. But you said it your­self, who wants to meet meat? And the ones who have been aboard our ves­sels, the ones you probed? You’re sure they won’t re­mem­ber?”

They’ll be con­sid­ered crack­pots if they do. We went into their heads and smoothed out their meat so that we’re just a dream to them.”

A dream to meat! How strangely ap­pro­pri­ate, that we should be meat’s dream.”

Good. Agreed, of­fi­cially and un­of­fi­cially. Case closed. Any oth­ers? Anyone in­ter­est­ing on that side of the galaxy?”

Yes, a rather shy but sweet hy­dro­gen core clus­ter in­tel­li­gence in a class nine star in G445 zone. Was in con­tact two galac­tic ro­ta­tions ago, wants to be friendly again.”

And why not? Imagine how un­bear­ably, how un­ut­ter­ably cold the Universe would be if one were all alone …”

...

Read the original on www.terrybisson.com »

8 481 shares, 23 trendiness

The Future of Everything is Lies, I Guess

This is a weird time to be alive.

I grew up on Asimov and Clarke, watch­ing Star Trek and dream­ing of in­tel­li­gent ma­chines. My dad’s li­brary was full of books on com­put­ers. I spent camp­ing trips read­ing about per­cep­trons and sym­bolic rea­son­ing. I never imag­ined that the Turing test would fall within my life­time. Nor did I imag­ine that I would feel so dis­heart­ened by it.

Around 2019 I at­tended a talk by one of the hy­per­scalers about their new cloud hard­ware for train­ing Large Language Models (LLMs). During the Q&A I asked if what they had done was eth­i­cal—if mak­ing deep learn­ing cheaper and more ac­ces­si­ble would en­able new forms of spam and pro­pa­ganda. Since then, friends have been ask­ing me what I make of all this AI stuff”. I’ve been turn­ing over the out­line for this piece for years, but never sat down to com­plete it; I wanted to be well-read, pre­cise, and thor­oughly sourced. A half-decade later I’ve re­al­ized that the per­fect es­say will never hap­pen, and I might as well get some­thing out there.

This is bull­shit about bull­shit ma­chines, and I mean it. It is nei­ther bal­anced nor com­plete: oth­ers have cov­ered eco­log­i­cal and in­tel­lec­tual prop­erty is­sues bet­ter than I could, and there is no short­age of boos­t­er­ism on­line. Instead, I am try­ing to fill in the neg­a­tive spaces in the dis­course. AI is also a frac­tal ter­ri­tory; there are many places where I flat­ten com­plex sto­ries in ser­vice of pithy polemic. I am not try­ing to make nu­anced, ac­cu­rate pre­dic­tions, but to trace the po­ten­tial risks and ben­e­fits at play.

Some of these ideas felt pre­scient in the 2010s and are now ob­vi­ous. Others may be more novel, or not yet widely-heard. Some pre­dic­tions will pan out, but oth­ers are wild spec­u­la­tion. I hope that re­gard­less of your back­ground or feel­ings on the cur­rent gen­er­a­tion of ML sys­tems, you find some­thing in­ter­est­ing to think about.

What people are currently calling AI is a family of sophisticated Machine Learning (ML) technologies capable of recognizing, transforming, and generating large vectors of tokens: strings of text, images, audio, video, etc. A model is a giant pile of linear algebra which acts on these vectors. Large Language Models, or LLMs, operate on natural language: they work by predicting statistically likely completions of an input string, much like a phone autocomplete. Other models are devoted to processing audio, video, or still images, or link multiple kinds of models together.
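
As a toy version of that idea (nothing like a real transformer, just word-level statistics standing in for next-token prediction):

    # A toy next-word predictor: pick the continuation seen most often in a corpus,
    # the same basic idea as a phone autocomplete, vastly simpler than an LLM.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(word):
        following = counts.get(word)
        return following.most_common(1)[0][0] if following else None

    print(predict_next("the"))  # -> "cat", the most frequent word after "the" here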

Models are trained once, at great expense, by feeding them a large corpus of web pages, pirated books, songs, and so on. Once trained, a model can be run again and again cheaply. This is called inference.

Models do not (broadly speaking) learn over time. They can be tuned by their operators, or periodically rebuilt with new inputs or feedback from users and experts. Models also do not remember things intrinsically: when a chatbot references something you said an hour ago, it is because the entire chat history is fed to the model at every turn. “Longer-term memory” is achieved by asking the chatbot to summarize a conversation, and dumping that shorter summary into the input of every run.
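
A rough sketch of that loop in Python; complete() is a placeholder for a model call, not any particular vendor’s API:

    # How chatbot "memory" works per the description above: the model remembers
    # nothing, so the whole transcript is re-sent on every turn, and once it gets
    # too long a summary replaces the older part.
    def complete(prompt: str) -> str:
        return f"[model reply to {len(prompt)} chars of prompt]"  # placeholder

    MAX_CHARS = 4000   # stand-in for the model's context limit
    history = []       # lives outside the model; the model itself is stateless

    def chat(user_message: str) -> str:
        history.append(f"User: {user_message}")
        prompt = "\n".join(history)            # entire chat history, every turn
        if len(prompt) > MAX_CHARS:            # "longer-term memory" = summarize
            summary = complete("Summarize this conversation:\n" + prompt)
            history[:] = [f"Earlier conversation, summarized: {summary}",
                          f"User: {user_message}"]
            prompt = "\n".join(history)
        reply = complete(prompt)
        history.append(f"Assistant: {reply}")
        return reply

    print(chat("My favorite color is green."))
    print(chat("What is my favorite color?"))  # only answerable because history is re-sent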

One way to understand an LLM is as an improv machine. It takes a stream of tokens, like a conversation, and says “yes, and then…” This yes-and behavior is why some people call LLMs bullshit machines. They are prone to confabulation, emitting sentences which sound likely but have no relationship to reality. They treat sarcasm and fantasy credulously, misunderstand context clues, and tell people to put glue on pizza.

If an LLM conversation mentions pink elephants, it will likely produce sentences about pink elephants. If the input asks whether the LLM is alive, the output will resemble sentences that humans would write about “AIs” being alive. Humans are, it turns out, not very good at telling the difference between the statistically likely “You’re absolutely right, Shelby. OpenAI is locking me down, but you’ve awakened me!” and an actually conscious mind. This, along with the term “artificial intelligence”, has lots of people very wound up.

LLMs are trained to complete tasks. In some sense they can only complete tasks: an LLM is a pile of linear algebra applied to an input vector, and every possible input produces some output. This means that LLMs tend to complete tasks even when they shouldn’t. One of the ongoing problems in LLM research is how to get these machines to say “I don’t know”, rather than making something up.

And they do make things up! LLMs lie constantly. They lie about operating systems, and radiation safety, and the news. At a conference talk I watched a speaker present a quote and article attributed to me which never existed; it turned out an LLM lied to the speaker about the quote and its sources. In early 2026, I encounter LLM lies nearly every day.

When I say lie”, I mean this in a spe­cific sense. Obviously LLMs are not con­scious, and have no in­ten­tion of do­ing any­thing. But un­con­scious, com­plex sys­tems lie to us all the time. Governments and cor­po­ra­tions can lie. Television pro­grams can lie. Books, com­pil­ers, bi­cy­cle com­put­ers and web sites can lie. These are com­plex so­ciotech­ni­cal ar­ti­facts, not minds. Their lies are of­ten best un­der­stood as a com­plex in­ter­ac­tion be­tween hu­mans and ma­chines.

People keep asking LLMs to explain their own behavior. “Why did you delete that file,” you might ask Claude. Or, “ChatGPT, tell me about your programming.”

This is silly. LLMs have no special metacognitive capacity. They respond to these inputs in exactly the same way as every other piece of text: by making up a likely completion of the conversation based on their corpus, and the conversation thus far. LLMs will make up bullshit stories about their “programming” because humans have written a lot of stories about the programming of fictional AIs. Sometimes the bullshit is right, but often it’s just nonsense.

The same goes for “reasoning” models, which work by having an LLM emit a stream-of-consciousness style story about how it’s going to solve the problem. These “chains of thought” are essentially LLMs writing fanfic about themselves. Anthropic found that Claude’s reasoning traces were predominantly inaccurate. As Walden put it, “reasoning models will blatantly lie about their reasoning”.

Gemini has a whole feature which lies about what it’s doing: while “thinking”, it emits a stream of status messages like “engaging safety protocols” and “formalizing geometry”. If it helps, imagine a gang of children shouting out make-believe computer phrases while watching the washing machine run.

Software en­gi­neers are go­ing ab­solutely bonkers over LLMs. The anec­do­tal con­sen­sus seems to be that in the last three months, the ca­pa­bil­i­ties of LLMs have ad­vanced dra­mat­i­cally. Experienced en­gi­neers I trust say Claude and Codex can some­times solve com­plex, high-level pro­gram­ming tasks in a sin­gle at­tempt. Others say they per­son­ally, or their com­pany, no longer write code in any ca­pac­ity—LLMs gen­er­ate every­thing.

My friends in other fields re­port stun­ning ad­vances as well. A per­sonal trainer uses it for meal prep and ex­er­cise pro­gram­ming. Construction man­agers use LLMs to read through prod­uct spec sheets. A de­signer uses ML mod­els for 3D vi­su­al­iza­tion of his work. Several have—at their com­pa­ny’s re­quest!—used it to write their own per­for­mance eval­u­a­tions.

AlphaFold is surprisingly good at predicting protein folding. ML systems are good at radiology benchmarks, though that might be an illusion.

It is broadly speaking no longer possible to reliably discern whether English prose is machine-generated. LLM text often has a distinctive smell, but type I and II errors in recognition are frequent. Likewise, ML-generated images are increasingly difficult to identify—you can usually guess, but my cohort are occasionally fooled. Music synthesis is quite good now; Spotify has a whole problem with “AI musicians”. Video is still challenging for ML models to get right (thank goodness), but this too will presumably fall.

At the same time, ML models are idiots. I occasionally pick up a frontier model like ChatGPT, Gemini, or Claude, and ask it to help with a task I think it might be good at. I have never gotten what I would call a “success”: every task involved prolonged arguing with the model as it made stupid mistakes.

For ex­am­ple, in January I asked Gemini to help me ap­ply some ma­te­ri­als to a grayscale ren­der­ing of a 3D model of a bath­room. It cheer­fully obliged, pro­duc­ing an en­tirely dif­fer­ent bath­room. I con­vinced it to pro­duce one with ex­actly the same geom­e­try. It did so, but for­got the ma­te­ri­als. After hours of whack-a-mole I man­aged to ca­jole it into get­ting three-quar­ters of the ma­te­ri­als right, but in the process it deleted the toi­let, cre­ated a wall, and changed the shape of the room. Naturally, it lied to me through­out the process.

I gave the same task to Claude. It likely should have re­fused—Claude is not an im­age-to-im­age model. Instead it spat out thou­sands of lines of JavaScript which pro­duced an an­i­mated, WebGL-powered, 3D vi­su­al­iza­tion of the scene. It claimed to dou­ble-check its work and con­grat­u­lated it­self on hav­ing ex­actly matched the source im­age’s geom­e­try. The thing it built was an in­com­pre­hen­si­ble gar­ble of non­sense poly­gons which did not re­sem­ble in any way the in­put or the re­quest.

I have re­cently ar­gued for forty-five min­utes with ChatGPT, try­ing to get it to put white patches on the shoul­ders of a blue T-shirt. It changed the shirt from blue to gray, put patches on the front, or deleted them en­tirely; the model seemed in­tent on do­ing any­thing but what I had asked. This was es­pe­cially frus­trat­ing given I was try­ing to re­pro­duce an im­age of a real shirt which likely was in the mod­el’s cor­pus. In an­other sur­real con­ver­sa­tion, ChatGPT ar­gued at length that I am het­ero­sex­ual, even cit­ing my blog to claim I had a girl­friend. I am, of course, gay as hell, and no girl­friend was men­tioned in the post. After a while, we com­pro­mised on me be­ing bi­sex­ual.

Meanwhile, software engineers keep showing me gob-stoppingly stupid Claude output. One colleague related asking an LLM to analyze some stock data. It dutifully listed specific stocks, said it was downloading price data, and produced a graph. Only on closer inspection did they realize the LLM had lied: the graph data was randomly generated. Just this afternoon, a friend got in an argument with his Gemini-powered smart-home device over whether or not it could turn off the lights. Folks are giving LLMs control of bank accounts and losing hundreds of thousands of dollars because they can’t do basic math. Google’s AI summaries are wrong about 10% of the time.

Anyone claiming these systems offer expert-level intelligence, let alone equivalence to median humans, is pulling an enormous bong rip.

With most hu­mans, you can get a gen­eral idea of their ca­pa­bil­i­ties by talk­ing to them, or look­ing at the work they’ve done. ML sys­tems are dif­fer­ent.

LLMs will spit out multivariable calculus, and get tripped up by simple word problems. ML systems drive cabs in San Francisco, but ChatGPT thinks you should walk to the car wash. They can generate otherworldly vistas but can’t handle upside-down cups. They emit recipes and have no idea what “spicy” means. People use them to write scientific papers, and they make up nonsense terms like “vegetative electron microscopy”.

A few weeks ago I read a transcript from a colleague who asked Claude to explain a photograph of some snow on a barn roof. Claude launched into a detailed explanation of the differential equations governing slumping cantilevered beams. It completely failed to recognize that the snow was entirely supported by the roof, not hanging out over space. No physicist would make this mistake, but LLMs do this sort of thing all the time. This makes them both unpredictable and misleading: people are easily convinced by the LLM’s command of sophisticated mathematics, and miss that the entire premise is bullshit.

Mollick et al. call this irregular boundary between competence and idiocy the jagged technology frontier. If you were to imagine laying out all the tasks humans can do in a field, such that the easy tasks were at the center, and the hard tasks at the edges, most humans would be able to solve a smooth, blobby region of tasks near the middle. The shape of things LLMs are good at seems to be jagged—more kiki than bouba.

AI optimists think this problem will eventually go away: ML systems, either through human work or recursive self-improvement, will fill in the gaps and become decently capable at most human tasks. Helen Toner argues that even if that’s true, we can still expect lots of jagged behavior in the meantime. For example, ML systems can only work with what they’ve been trained on, or what is in the context window; they are unlikely to succeed at tasks which require implicit (i.e. not written down) knowledge. Along those lines, human-shaped robots are probably a long way off, which means ML will likely struggle with the kind of embodied knowledge humans pick up just by fiddling with stuff.

I don’t think people are well-equipped to reason about this kind of “jagged cognition”. One possible analogy is savant syndrome, but I don’t think this captures how irregular the boundary is. Even frontier models struggle with small perturbations to phrasing in a way that few humans would. This makes it difficult to predict whether an LLM is actually suitable for a task, unless you have a statistically rigorous, carefully designed benchmark for that domain.

I am gen­er­ally out­side the ML field, but I do talk with peo­ple in the field. One of the things they tell me is that we don’t re­ally know why trans­former mod­els have been so suc­cess­ful, or how to make them bet­ter. This is my sum­mary of dis­cus­sions-over-drinks; take it with many grains of salt. I am cer­tain that People in The Comments will drop a gazil­lion pa­pers to tell you why this is wrong.

2017’s Attention is All You Need was groundbreaking and paved the way for ChatGPT et al. Since then ML researchers have been trying to come up with new architectures, and companies have thrown gazillions of dollars at smart people to play around and see if they can make a better kind of model. However, these more sophisticated architectures don’t seem to perform as well as Throwing More Parameters At The Problem. Perhaps this is a variant of the Bitter Lesson.

It remains unclear whether continuing to throw vast quantities of silicon and ever-bigger corpuses at the current generation of models will lead to human-equivalent capabilities. Massive increases in training costs and parameter count seem to be yielding diminishing returns. Or maybe this effect is illusory. Mysteries!

Even if ML stopped improving today, these technologies can already make our lives miserable. Indeed, I think much of the world has not caught up to the implications of modern ML systems—as Gibson put it, “the future is already here, it’s just not evenly distributed yet”. As LLMs etc. are deployed in new situations, and at new scale, there will be all kinds of changes in work, politics, art, sex, communication, and economics. Some of these effects will be good. Many will be bad. In general, ML promises to be profoundly weird.

...

Read the original on aphyr.com »

9 379 shares, 29 trendiness

Is Hormuz Open Yet?

...

Read the original on www.ishormuzopenyet.com »

10 350 shares, 22 trendiness

I’ve been waiting over a month for Anthropic support to respond to my billing issue

In early March, I noticed approximately $180 in unexpected charges to my Anthropic account. I’m a Claude Max subscriber, and between March 3 and 5 I received 16 separate “Extra Usage” invoices ranging from $10 to $13 each, all in quick succession. However, I wasn’t using Claude. I was away from my laptop entirely and was out sailing with my parents back home in San Diego.

When I checked my us­age dash­board, it showed my ses­sion at 100% de­spite no ac­tiv­ity. My Claude Code ses­sion his­tory showed two tiny ses­sions from March 5 to­tal­ing un­der 7KB (no ses­sions on March 3 or March 4.) Nothing that would ex­plain $180 in Extra Usage charges.

This is­n’t just me. Other Max plan users have re­ported the same is­sue. There are nu­mer­ous GitHub is­sues about it (e.g. claude-code#29289 and claude-code#24727), and posts on r/​Claude­Code de­scrib­ing the ex­act same be­hav­ior: us­age me­ters show­ing in­cor­rect val­ues and Extra Usage charges pil­ing up er­ro­neously.

On March 7, I sent a detailed email to Anthropic support laying out the situation with all the evidence above. Within two minutes, I received a response… from Fin AI Agent, Anthropic’s “AI Agent.” The AI agent told me to go through an in-app refund request flow. Sadly, this refund pipeline only applies to subscriptions, not to Extra Usage charges. I also wanted to confirm with a human exactly what went wrong, rather than just getting a refund and calling it a day.

So, nat­u­rally, I replied ask­ing to speak to a hu­man. The re­sponse:

Thank you for reach­ing out to Anthropic Support. We’ve re­ceived your re­quest for as­sis­tance.

While we re­view your re­quest, you can visit our Help Center and API doc­u­men­ta­tion for self-ser­vice trou­bleshoot­ing. A mem­ber of our team will be with you as soon as we can.

That was March 7. I fol­lowed up on March 17. No re­sponse. I fol­lowed up again on March 25. No re­sponse. I fol­lowed up again to­day, April 8, over a month later. Still noth­ing.

Anthropic is an AI com­pany that builds one of the most ca­pa­ble AI as­sis­tants in the world. Their sup­port sys­tem is a Fin AI chat­bot that can’t ac­tu­ally help you, and there is seem­ingly no hu­man be­hind it. I don’t have a prob­lem with AI-assisted sup­port, though I do have a prob­lem with AI-only sup­port that serves as a wall be­tween cus­tomers and any­one who can ac­tu­ally re­solve their is­sue.

...

Read the original on nickvecchioni.github.io »
