10 interesting stories served every morning and every evening.




1 2,030 shares, 86 trendiness

The Git Commands I Run Before Reading Any Code

The first thing I usually do when I pick up a new codebase isn't opening the code. It's opening a terminal and running a handful of git commands. Before I look at a single file, the commit history gives me a diagnostic picture of the project: who built it, where the problems cluster, whether the team is shipping with confidence or tiptoeing around land mines.

The 20 most-changed files in the last year. The file at the top is almost always the one people warn me about. “Oh yeah, that file. Everyone's afraid to touch it.”
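
The post's exact command isn't reproduced in this excerpt; one common way to produce this churn list is:

    git log --since="1 year ago" --pretty=format: --name-only \
        | sort | uniq -c | sort -rn | head -20

Each file is counted once per commit that touched it, so the number on the left is a churn score.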

High churn on a file doesn't mean it's bad. Sometimes it's just active development. But high churn on a file that nobody wants to own is the clearest signal of codebase drag I know. That's the file where every change is a patch on a patch. The blast radius of a small edit is unpredictable. The team pads their estimates because they know it's going to fight back.

A 2005 Microsoft Research study found churn-based metrics predicted defects more reliably than complexity metrics alone. I take the top 5 files from this list and cross-reference them against the bug hotspot command below. A file that's high-churn and high-bug is your single biggest risk.

Every contributor ranked by commit count. If one person accounts for 60% or more, that's your bus factor. If they left six months ago, it's a crisis. If the top contributor from the overall shortlog doesn't appear in a 6-month window (git shortlog -sn --no-merges --since="6 months ago"), I flag that to the client immediately.
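
A minimal sketch of both views, using the flags quoted above:

    git shortlog -sn --no-merges                          # all-time ranking
    git shortlog -sn --no-merges --since="6 months ago"   # recent window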

I also look at the tail. Thirty contributors but only three active in the last year. The people who built this system aren't the people maintaining it.

One caveat: squash-merge workflows compress authorship. If the team squashes every PR into a single commit, this output reflects who merged, not who wrote. Worth asking about the merge strategy before drawing conclusions.

Same shape as the churn command, filtered to commits with bug-related keywords. Compare this list against the churn hotspots. Files that appear on both are your highest-risk code: they keep breaking and keep getting patched, but never get properly fixed.
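
The author's exact keyword list isn't shown here; a plausible reconstruction reuses the churn pipeline with a commit-message filter (multiple --grep patterns are OR'd together, and -i makes them case-insensitive):

    git log --since="1 year ago" -i --grep=fix --grep=bug \
        --pretty=format: --name-only | sort | uniq -c | sort -rn | head -20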

This depends on commit message discipline. If the team writes “update stuff” for every commit, you'll get nothing. But even a rough map of bug density is better than no map.

Commit count by month, for the entire history of the repo. I scan the output looking for shapes. A steady rhythm is healthy. But what does it mean when the count drops by half in a single month? Usually, someone left. A declining curve over 6 to 12 months tells you the team is losing momentum. Periodic spikes followed by quiet months means the team batches work into releases instead of shipping continuously.
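
One way to produce the month-by-month counts:

    git log --pretty=format:"%ad" --date=format:"%Y-%m" | sort | uniq -c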

I once showed a CTO their commit velocity chart and they said “that's when we lost our second senior engineer.” They hadn't connected the timeline before. This is team data, not code data.

Revert and hotfix frequency. A handful over a year is normal. Reverts every couple of weeks means the team doesn't trust its deploy process. They're evidence of a deeper issue: unreliable tests, missing staging, or a deploy pipeline that makes rollbacks harder than they should be. Zero results is also a signal; either the team is stable, or nobody writes descriptive commit messages.
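
A simple version, assuming reverts and hotfixes are named as such in commit messages:

    git log --oneline -i --grep=revert --grep=hotfix --since="1 year ago"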

Crisis patterns are easy to read. Either they're there or they're not.

These five commands take a couple of minutes to run. They won't tell you everything. But you'll know which code to read first, and what to look for when you get there. That's the difference between spending your first day reading the codebase methodically and spending it wandering.

This is the first hour of what I do in a codebase audit. Here's what the rest of the week looks like.

...

Read the original on piechowski.io »

2 1,776 shares, 66 trendiness

Porting Mac OS X to the Nintendo Wii

Since its launch in 2006, the Wii has seen several operating systems ported to it: Linux, NetBSD, and most recently, Windows NT. Today, Mac OS X joins that list.

In this post, I'll share how I ported the first version of Mac OS X, 10.0 Cheetah, to the Nintendo Wii. If you're not an operating systems expert or low-level engineer, you're in good company; this project was all about learning and navigating countless “unknown unknowns”. Join me as we explore the Wii's hardware, bootloader development, kernel patching, and writing drivers - and give the PowerPC versions of Mac OS X a new life on the Nintendo Wii.

Visit the wiiMac bootloader repository for instructions on how to try this project yourself.

Before figuring out how to tackle this project, I needed to know whether it would even be possible. According to a 2021 Reddit comment:

There is a zero percent chance of this ever happening.

Feeling encouraged, I started with the basics: what hardware is in the Wii, and how does it compare to the hardware used in real Macs from the era?

The Wii uses a PowerPC 750CL processor - an evolution of the PowerPC 750CXe that was used in G3 iBooks and some G3 iMacs. Given this close lineage, I felt confident that the CPU wouldn't be a blocker.

As for RAM, the Wii has a unique configuration: 88 MB total, split across 24 MB of 1T-SRAM (MEM1) and 64 MB of slower GDDR3 SDRAM (MEM2); unconventional, but technically enough for Mac OS X Cheetah, which officially calls for 128 MB of RAM but will unofficially boot with less. To be safe, I used QEMU to boot Cheetah with 64 MB of RAM and verified that there were no issues.

Other hardware I'd eventually need to support included:

* The SD card for booting the rest of the system once the kernel was running

* Video output via a framebuffer that lives in RAM

* The Wii's USB ports for using a mouse and keyboard

Convinced that the Wii's hardware wasn't fundamentally incompatible with Mac OS X, I moved my attention to investigating the software stack I'd be porting.

Mac OS X has an open source core (Darwin, with XNU as the kernel and IOKit as the driver model), with closed-source components layered on top (Quartz, Dock, Finder, system apps and frameworks). In theory, if I could modify the open-source parts enough to get Darwin running, the closed-source parts would run without additional patches.

Porting Mac OS X would also require understanding how a real Mac boots. PowerPC Macs from the early 2000s use Open Firmware as their lowest-level software environment; for simplicity, it can be thought of as the first code that runs when a Mac is powered on. Open Firmware has several responsibilities, including:

* Providing useful functions for I/O, drawing, and hardware communication

* Loading and executing an operating system bootloader from the filesystem

Open Firmware eventually hands off control to BootX, the bootloader for Mac OS X. BootX prepares the system so that it can eventually pass control to the kernel. The responsibilities of BootX include:

* Loading and decoding the XNU kernel, a Mach-O executable, from the root filesystem

Once XNU is running, there are no dependencies on BootX or Open Firmware. XNU continues on to initialize processors, virtual memory, IOKit, and BSD, and eventually continues booting by loading and running other executables from the root filesystem.

The last piece of the puzzle was how to run my own custom code on the Wii - a trivial task thanks to the Wii being “jailbroken”, allowing anyone to run homebrew with full access to the hardware via the Homebrew Channel and BootMii.

Armed with knowledge of how the boot process works on a real Mac, along with how to run low-level code on the Wii, I needed to select an approach for booting Mac OS X on the Wii. I evaluated three options:

1. Port Open Firmware, use that to run unmodified BootX to boot Mac OS X

2. Port BootX and modify it to not rely on Open Firmware, use that to boot Mac OS X

3. Write a custom bootloader that performs the bare-minimum setup to boot Mac OS X

Since Mac OS X doesn't depend on Open Firmware or BootX once running, spending time porting either of those seemed like an unnecessary distraction. Additionally, both Open Firmware and BootX contain added complexity for supporting many different hardware configurations - complexity that I wouldn't need since this only needs to run on the Wii. Following in the footsteps of the Wii Linux project, I decided to write my own bootloader from scratch. The bootloader would need to, at a minimum:

* Load the kernel from the SD card

Once the kernel was running, none of the bootloader code would matter. At that point, my focus would shift to patching the kernel and writing drivers.

I decided to base my bootloader on some low-level example code for the Wii called ppcskel. ppcskel puts the system into a sane initial state, and provides useful functions for common things like reading files from the SD card, drawing text to the framebuffer, and logging debug messages to a USB Gecko.

Next, I had to figure out how to load the XNU kernel into memory so that I could pass control to it. The kernel is stored in a special binary format called Mach-O, and needs to be properly decoded before being used.

The Mach-O executable format is well-documented, and can be thought of as a list of load commands that tell the loader where to place different sections of the binary file in memory. For example, a load command might instruct the loader to read the data from file offset 0x2cf000 and store it at the memory address 0x2e0000. After processing all of the kernel's load commands, the kernel's segments end up laid out in memory at their intended addresses.
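
In sketch form, and assuming a 32-bit big-endian Mach-O like the PowerPC kernel's (this is illustrative, not the project's actual loader), processing the LC_SEGMENT commands looks like this:

    #include <stdint.h>
    #include <string.h>

    /* Local copies of the <mach-o/loader.h> structures, since the Wii-side
     * bootloader has no Apple headers available. */
    struct mach_header  { uint32_t magic, cputype, cpusubtype, filetype,
                          ncmds, sizeofcmds, flags; };
    struct load_command { uint32_t cmd, cmdsize; };
    struct segment_command { uint32_t cmd, cmdsize; char segname[16];
                             uint32_t vmaddr, vmsize, fileoff, filesize,
                             maxprot, initprot, nsects, flags; };
    #define LC_SEGMENT 0x1

    static void load_segments(const uint8_t *file) {
        const struct mach_header *mh = (const void *)file;
        const uint8_t *p = file + sizeof(*mh);
        for (uint32_t i = 0; i < mh->ncmds; i++) {
            const struct load_command *lc = (const void *)p;
            if (lc->cmd == LC_SEGMENT) {
                const struct segment_command *seg = (const void *)p;
                /* e.g. copy from file offset 0x2cf000 to address 0x2e0000 */
                memcpy((void *)seg->vmaddr, file + seg->fileoff, seg->filesize);
                /* zero the tail of the segment not backed by file data */
                memset((void *)(seg->vmaddr + seg->filesize), 0,
                       seg->vmsize - seg->filesize);
            }
            p += lc->cmdsize;
        }
    }

The entry point itself comes from a separate LC_UNIXTHREAD load command, whose saved program counter tells the bootloader where to jump.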

The kernel file also specifies the memory address where execution should begin. Once the bootloader jumps to this address, the kernel is in full control and the bootloader is no longer running.

To jump to the kernel entry point's memory address, I needed to cast the address to a function and call it:
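
A minimal sketch of that jump; the variable names here are illustrative:

    /* `entry_address` was read from the kernel's Mach-O load commands. */
    typedef void (*kernel_entry_t)(void);
    kernel_entry_t kernel_entry = (kernel_entry_t)entry_address;
    kernel_entry();   /* hand control to XNU; the bootloader never returns */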

After this code ran, the screen went black and my debug logs stopped arriving via the serial debug connection - while anticlimactic, this was an indicator that the kernel was running.

The question then became: how far was I making it into the boot process? To answer this, I had to start looking at XNU source code. The first code that runs is a PowerPC assembly _start routine. This code reconfigures the hardware, overriding all of the Wii-specific setup that the bootloader performed and, in the process, disables bootloader functionality for serial debugging and video output. Without normal debug-output facilities, I'd need to track progress a different way.

The approach that I came up with was a bit of a hack: binary-patch the kernel, replacing instructions with ones that illuminate one of the front-panel LEDs on the Wii. If the LED illuminated after jumping to the kernel, then I'd know that the kernel was making it at least that far. Turning on one of these LEDs is as simple as writing a value to a specific memory address. In PowerPC assembly, those instructions are:
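
Along these lines; the GPIO register address and LED bit below come from Wii homebrew sources, so treat them as assumptions rather than the article's exact values:

    # Write the slot-LED bit to the Hollywood GPIO output register.
    lis   r3, 0x0d80        # r3 = 0x0d800000 (Hollywood register base, assumed)
    li    r4, 0x20          # GPIO bit for the front-panel slot LED (assumed)
    stw   r4, 0x00c0(r3)    # store to 0x0d8000c0: the LED turns on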

To know which parts of the kernel to patch, I cross-referenced function names in XNU source code with function offsets in the compiled kernel binary, using Hopper Disassembler to make the process easier. Once I identified the correct offset in the binary that corresponded to the code I wanted to patch, I just needed to replace the existing instructions at that offset with the ones to blink the LED.

To make this patching process easier, I added some code to the bootloader to patch the kernel binary on the fly, enabling me to try different offsets without manually modifying the kernel file on disk.

After tracing through many kernel startup routines, I eventually mapped out this path of execution:

This was an exciting milestone - the kernel was definitely running, and I had even made it into some higher-level C code. To make it past the 300 exception crash (on PowerPC, vector 0x300 is the data-access fault), the bootloader would need to pass a pointer to a valid device tree.

The device tree is a data structure representing all of the hardware in the system that should be exposed to the operating system. As the name suggests, it's a tree made up of nodes, each capable of holding properties and references to child nodes.

On real Mac computers, the bootloader scans the hardware and constructs a device tree based on what it finds. Since the Wii's hardware is always the same, this scanning step can be skipped. I ended up hard-coding the device tree in the bootloader, taking inspiration from the device tree that the Wii Linux project uses.

Since I wasn't sure how much of the Wii's hardware I'd need to support in order to get the boot process further along, I started with a minimal device tree: a root node with children for the cpus and memory:
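
In device-tree notation, the starting point looked something like this (the property values here are illustrative, not the project's exact ones):

    / {
        cpus {
            cpu@0 {
                device_type = "cpu";
                clock-frequency = <729000000>;   /* 729 MHz Broadway */
            };
        };
        memory@0 {
            device_type = "memory";
            /* MEM1: 24 MB at 0x00000000, MEM2: 64 MB at 0x10000000 */
            reg = <0x00000000 0x01800000
                   0x10000000 0x04000000>;
        };
    };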

My plan was to expand the device tree with more pieces of hardware as I got further along in the boot process - eventually constructing a complete representation of all of the Wii's hardware that I planned to support in Mac OS X.

Once I had a device tree created and stored in memory, I needed to pass it to the kernel as part of boot_args:
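
A sketch of that hand-off, using field names from XNU's public PowerPC boot_args definition (pexpert/ppc/boot.h); the project's exact code may differ:

    /* `device_tree` and `device_tree_size` are the hard-coded tree from the
     * previous step; `entry_address` came from the Mach-O load commands. */
    boot_args args;
    memset(&args, 0, sizeof(args));
    args.Revision = kBootArgsRevision;
    args.Version  = kBootArgsVersion;
    strcpy(args.CommandLine, "-v");               /* ask for a verbose boot */
    args.deviceTreeP      = (void *)device_tree;
    args.deviceTreeLength = device_tree_size;

    /* XNU's PowerPC entry receives the boot_args pointer in r3, i.e. as the
     * first C argument. */
    void (*kernel_entry)(boot_args *) = (void (*)(boot_args *))entry_address;
    kernel_entry(&args);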

With the device tree in memory, I had made it past the device_tree.c crash. The bootloader was performing the basics well: loading the kernel, creating boot arguments and a device tree, and ultimately, calling the kernel. To make additional progress, I'd need to shift my attention toward patching the kernel source code to fix remaining compatibility issues.

At this point, the kernel was getting stuck while running some code to set up video and I/O memory. XNU from this era makes assumptions about where video and I/O memory can be, and reconfigures Block Address Translations (BATs) in a way that doesn't play nicely with the Wii's memory layout (MEM1 starting at 0x00000000, MEM2 starting at 0x10000000). To work around these limitations, it was time to modify the kernel's source code and boot a modified kernel binary.

Figuring out a sane development environment to build an OS kernel from 25 years ago took some effort. Here's what I landed on:

* XNU source code lives on the host's filesystem, and is exposed via an NFS server

* The guest accesses the XNU source via an NFS mount

* The host uses SSH to control the guest

* Edit XNU source on the host, kick off a build via SSH on the guest, and build artifacts end up on the filesystem accessible by both host and guest

To set up the dependencies needed to build the Mac OS X Cheetah kernel on the Mac OS X Cheetah guest, I followed the instructions here. They mostly matched up with what I needed to do. Relevant sources are available from Apple here.

After fixing the BAT setup and adding some small patches to reroute console output to my USB Gecko, I now had video output and serial debug logs working - making future development and debugging significantly easier. Thanks to this new visibility into what was going on, I could see that the virtual memory, IOKit, and BSD subsystems were all initialized and running - without crashing. This was a significant milestone, and gave me confidence that I was on the right path to getting a full system working.

Readers who have attempted to run Mac OS X on a PC via “hackintoshing” may recognize the last line in the boot logs: the dreaded “Still waiting for root device”. This occurs when the system can't find a root filesystem from which to continue booting. In my case, this was expected: the kernel had done all it could and was ready to load the rest of the Mac OS X system from the filesystem, but it didn't know where to locate this filesystem. To make progress, I would need to tell the kernel how to read from the Wii's SD card. To do this, I'd need to tackle the next phase of this project: writing drivers.

Mac OS X drivers are built using IOKit - a collection of software components that aim to make it easy to extend the kernel to support different hardware devices. Drivers are written using a subset of C++, and make extensive use of object-oriented programming concepts like inheritance and composition. Many pieces of useful functionality are provided, including:

* Base classes and “families” that implement common behavior for different types of hardware

* Probing and matching drivers to hardware present in the device tree

In IOKit, there are two kinds of drivers: a specific device driver and a nub. A specific device driver is an object that manages a specific piece of hardware. A nub is an object that serves as an attach-point for a specific device driver, and also provides the ability for that attached driver to communicate with the driver that created the nub. It's this chain of driver-to-nub-to-driver that creates the aforementioned provider-client relationships. I struggled for a while to grasp this concept, and found a concrete example useful.

Real Macs can have a PCI bus with several PCI ports. In this example, consider an ethernet card being plugged into one of the PCI ports. A driver, IOPCIBridge, handles communicating with the PCI bus hardware on the motherboard. This driver scans the bus, creating IOPCIDevice nubs (attach-points) for each plugged-in device that it finds. A hypothetical driver for the plugged-in ethernet card (let's call it SomeEthernetCard) can attach to the nub, using it as its proxy to call into PCI functionality provided by the IOPCIBridge driver on the other side. The SomeEthernetCard driver can also create its own IOEthernetInterface nubs so that higher-level parts of the IOKit networking stack can attach to it.

Someone developing a PCI ethernet card driver would only need to write SomeEthernetCard; the lower-level PCI bus communication and the higher-level networking stack code is all provided by existing IOKit driver families. As long as SomeEthernetCard can attach to an IOPCIDevice nub and publish its own IOEthernetInterface nubs, it can sandwich itself between two existing families in the driver stack, benefiting from all of the functionality provided by IOPCIFamily while also satisfying the needs of IONetworkingFamily.

Unlike Macs from the same era, the Wii doesn't use PCI to connect its various pieces of hardware to its motherboard. Instead, it uses a custom system-on-a-chip (SoC) called the Hollywood. Through the Hollywood, many pieces of hardware can be accessed: the GPU, SD card, WiFi, Bluetooth, interrupt controllers, USB ports, and more. The Hollywood also contains an ARM coprocessor, nicknamed the Starlet, that exposes hardware functionality to the main PowerPC processor via inter-processor communication (IPC).

This unique hardware layout and communication protocol meant that I couldn't piggyback off of an existing IOKit driver family like IOPCIFamily. Instead, I would need to implement an equivalent driver for the Hollywood SoC, creating nubs that represent attach-points for all of the hardware it contains. I landed on this layout of drivers and nubs (note that this is only showing a subset of the drivers that had to be written):

Now that I had a better idea of how to represent the Wii's hardware in IOKit, I began work on my Hollywood driver.

I started by creating a new C++ header and implementation file for a NintendoWiiHollywood driver. Its driver “personality” enabled it to be matched to a node in the device tree with the name “hollywood”. Once the driver was matched and running, it was time to publish nubs for all of its child devices.

Once again leaning on the device tree as the source of truth for what hardware lives under the Hollywood, I iterated through all of the Hollywood node's children, creating and publishing NintendoWiiHollywoodDevice nubs for each:
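
A sketch of that publishing loop, under assumed names, following the IOKit pattern Apple's own platform code uses (iterate the device-tree plane, attach, register):

    // `provider` is the device-tree entry for the "hollywood" node.
    OSIterator *kids = provider->getChildIterator(gIODTPlane);
    if (kids) {
        IORegistryEntry *child;
        while ((child = (IORegistryEntry *)kids->getNextObject())) {
            NintendoWiiHollywoodDevice *nub = new NintendoWiiHollywoodDevice;
            if (nub && nub->init(child, gIODTPlane)) {
                nub->attach(this);        // place the nub below this driver
                nub->registerService();   // publish it for driver matching
            }
        }
        kids->release();
    }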

Once NintendoWiiHollywoodDevice nubs were created and published, the system would be able to have other device drivers, like an SD card driver, attach to them.

Next, I moved on to writing a driver to enable the system to read and write from the Wii's SD card. This driver is what would enable the system to continue booting, since it was currently stuck looking for a root filesystem from which to load additional startup files.

I began by subclassing IOBlockStorageDevice, which has many abstract methods intended to be implemented by subclassers:
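
The article's exact list isn't reproduced in this excerpt, but the IOBlockStorageDevice interface includes methods along these lines (modern names shown; the Cheetah-era set differs slightly):

    virtual char    *getVendorString(void);
    virtual char    *getProductString(void);
    virtual IOReturn reportBlockSize(UInt64 *blockSize);
    virtual IOReturn reportMaxValidBlock(UInt64 *maxBlock);
    virtual IOReturn reportRemovability(bool *isRemovable);
    virtual IOReturn reportEjectability(bool *isEjectable);
    virtual IOReturn doEjectMedia(void);
    virtual IOReturn doSynchronizeCache(void);
    virtual IOReturn doAsyncReadWrite(IOMemoryDescriptor *buffer,
                                      UInt32 block, UInt32 nblks,
                                      IOStorageCompletion completion);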

For most of these methods, I could implement them with hard-coded values that matched the Wii's SD card hardware: vendor string, block size, max read and write transfer size, ejectability, and many others all return constant values, and were trivial to implement.

The more interesting methods to implement were the ones that needed to actually communicate with the currently-inserted SD card: getting the capacity of the SD card, reading from the SD card, and writing to the SD card:
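
For example, capacity reporting can reduce to a single IPC round-trip; this is a sketch, where wii_sd_ipc_size() is a hypothetical wrapper around the IPC_SDMMC_SIZE command described below:

    IOReturn WiiSDCard::reportMaxValidBlock(UInt64 *maxBlock)
    {
        // Ask MINI how many 512-byte sectors the inserted card has.
        UInt32 sectors = wii_sd_ipc_size();
        if (sectors == 0)
            return kIOReturnNoMedia;
        *maxBlock = sectors - 1;   // blocks are numbered from zero
        return kIOReturnSuccess;
    }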

To communicate with the SD card, I utilized the IPC functionality provided by MINI running on the Starlet co-processor. By writing data to certain reserved memory addresses, the SD card driver was able to issue commands to MINI. MINI would then execute those commands, communicating back any result data by writing to a different reserved memory address that the driver could monitor.

MINI supports many useful command types. The ones used by the SD card driver are:

* IPC_SDMMC_SIZE: Returns the number of sectors on the currently-inserted SD card

With these three command types, reads, writes, and capacity-checks could all be implemented, enabling me to satisfy the core requirements of the block storage device subclass.

As with most programming endeavours, things rarely work on the first try. To investigate issues, my primary debugging tool was sending log messages to the serial debugger via calls to IOLog. With this technique, I was able to see which methods were being called on my driver, what values were being passed in, and what values my IPC implementation was sending to and receiving from MINI - but I had no ability to set breakpoints or analyze execution dynamically while the kernel was running.

One of the trickier bugs that I encountered had to do with cached memory. When the SD card driver wants to read from the SD card, the command it issues to MINI (running on the ARM CPU) includes a memory address at which to store any loaded data. After MINI finishes writing to memory, the SD card driver (running on the PowerPC CPU) might not be able to see the updated contents if that region is mapped as cacheable. In that case, the PowerPC will read from its cache lines rather than RAM, returning stale data instead of the newly loaded contents. To work around this, the SD card driver must use uncached memory for its buffers.
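
One way to get such buffers in IOKit is to map them cache-inhibited; this is a sketch of the idea, not necessarily the project's exact fix:

    // Allocate a physically contiguous buffer and map it uncached, so the
    // PowerPC always sees what the ARM side (MINI) just wrote to RAM.
    IOBufferMemoryDescriptor *buf = IOBufferMemoryDescriptor::withCapacity(
        bufferSize, kIODirectionInOut, /* inContiguous */ true);
    IOMemoryMap *map = buf->map(kIOMapInhibitCache);
    void *dmaBuffer = (void *)map->getVirtualAddress();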

After several days of bug-fixing, I reached a new milestone: IOBlockStorageDriver, which attached to my SD card driver, had started publishing IOMedia nubs representing the logical partitions present on the SD card. Through these nubs, higher-level parts of the system were able to attach and begin using the SD card. Importantly, the system was now able to find a root filesystem from which to continue booting, and I was no longer stuck at “Still waiting for root device”:

My boot logs now looked like this:

After some more rounds of bug fixes (while on the go), I was able to boot past single-user mode:

And eventually, make it through the entire verbose-mode startup sequence, which ends with the message “Startup complete”:

At this point, the system was trying to find a framebuffer driver so that the Mac OS X GUI could be shown. As indicated in the logs, WindowServer was not happy - to fix this, I'd need to write my own framebuffer driver.

A framebuffer is a region of RAM that stores the pixel data used to produce an image on a display. This data is typically made up of color component values for each pixel. To change what's displayed, new pixel data is written into the framebuffer, which is then shown the next time the display refreshes. For the Wii, the framebuffer usually lives somewhere in MEM1 due to it being slightly faster than MEM2. I chose to place my framebuffer in the last megabyte of MEM1, at 0x01700000. At 640x480 resolution and 16 bits per pixel, the pixel data (640 × 480 × 2 bytes = 614,400 bytes) fit comfortably in less than one megabyte of memory.

Early in the boot process, Mac OS X uses the bootloader-provided framebuffer address to display simple boot graphics via video_console.c. In the case of a verbose-mode boot, font-character bitmaps are written into the framebuffer to produce a visual log of what's happening while starting up. Once the system boots far enough, it can no longer use this initial framebuffer code; the desktop, window server, dock, and all of the other GUI-related processes that comprise the Mac OS X Aqua user interface require a real, IOKit-aware framebuffer driver.

To tackle this next driver, I subclassed IOFramebuffer. Similar to subclassing IOBlockStorageDevice for the SD card driver, IOFramebuffer also had several abstract methods for my framebuffer subclass to implement:
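
Representative methods from the IOFramebuffer interface (again not the article's exact list) include:

    virtual IOReturn enableController(void);
    virtual IOItemCount getDisplayModeCount(void);
    virtual IOReturn getDisplayModes(IODisplayModeID *allDisplayModes);
    virtual IOReturn getInformationForDisplayMode(IODisplayModeID mode,
                         IODisplayModeInformation *info);
    virtual IOReturn getPixelInformation(IODisplayModeID mode, IOIndex depth,
                         IOPixelAperture aperture, IOPixelInformation *info);
    virtual IODeviceMemory *getApertureRange(IOPixelAperture aperture);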

Once again, most of these were trivial to implement, and simply required returning hard-coded Wii-compatible values that accurately described the hardware. One of the most important methods to implement is getApertureRange, which returns an IODeviceMemory instance whose base address and size describe the location of the framebuffer in memory:
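
With the fixed 640x480, 16-bits-per-pixel framebuffer placed at 0x01700000 as described above, the implementation can be tiny (a sketch, assuming a class named WiiFramebuffer):

    IODeviceMemory *WiiFramebuffer::getApertureRange(IOPixelAperture aperture)
    {
        if (aperture != kIOFBSystemAperture)
            return NULL;
        // 640 x 480 pixels x 2 bytes each, in the last megabyte of MEM1
        return IODeviceMemory::withRange(0x01700000, 640 * 480 * 2);
    }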

After returning the correct device memory instance from this method, the system was able to transition from the early-boot text-output framebuffer to a framebuffer capable of displaying the full Mac OS X GUI. I was even able to boot the Mac OS X installer:

Readers with a keen eye might notice some issues:

* The verbose-mode text framebuffer is still active, causing text to be displayed and the framebuffer to be scrolled

The fix for the early-boot video console still writing text output to the framebuffer was simple: tell the system that our new IOKit framebuffer is the same as the one that was previously in use, by returning true from isConsoleDevice:
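
A sketch of that override:

    bool WiiFramebuffer::isConsoleDevice(void)
    {
        // Claim the boot console: our framebuffer is the same one the
        // early-boot video console has been drawing into all along.
        return true;
    }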

The fix for the incorrect colors was much more involved, as it relates to a fundamental incompatibility between the Wii's video hardware and the graphics code that Mac OS X uses.

The Nintendo Wii's video encoder hardware is optimized for analogue TV signal output, and as a result, expects 16-bit YUV pixel data in its framebuffer. This is a problem, since Mac OS X expects the framebuffer to contain RGB pixel data. If the framebuffer that the Wii displays contains non-YUV pixel data, then colors will be completely wrong.

To work around this incompatibility, I took inspiration from the Wii Linux project, which had solved this problem many years ago. The strategy is to use two framebuffers: an RGB framebuffer that Mac OS X interacts with, and a YUV framebuffer that the Wii's video hardware outputs to the attached display. Sixty times per second, the framebuffer driver converts the pixel data in the RGB framebuffer to YUV pixel data, placing the converted data in the framebuffer that the Wii's video hardware displays:
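
The per-pixel math is standard BT.601 conversion; a sketch of converting two neighboring RGB pixels into one packed Y1-U-Y2-V pair (the Wii's 16-bit YUV layout) looks like this:

    // Integer BT.601 approximation; chroma is averaged across the pair.
    static inline uint32_t rgb_to_yuy2(int r1, int g1, int b1,
                                       int r2, int g2, int b2)
    {
        int y1 = (299 * r1 + 587 * g1 + 114 * b1) / 1000;
        int y2 = (299 * r2 + 587 * g2 + 114 * b2) / 1000;
        int r = (r1 + r2) / 2, g = (g1 + g2) / 2, b = (b1 + b2) / 2;
        int u = 128 + (-169 * r - 331 * g + 500 * b) / 1000;
        int v = 128 + ( 500 * r - 419 * g -  81 * b) / 1000;
        return (y1 << 24) | (u << 16) | (y2 << 8) | v;
    }

At 640 × 480 with two pixels per group, that is 153,600 conversions per frame, 60 times per second, which is why this loop needs to be cheap.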

After implementing the dual-framebuffer strategy, I was able to boot into a correctly-colored Mac OS X system - for the first time, Mac OS X was running on a Nintendo Wii:

The system was now booted all the way to the desktop - but there was a problem: I had no way to interact with anything. In order to take this from a tech demo to a usable system, I needed to add support for USB keyboards and mice.

To enable USB keyboard and mouse input, I needed to get the Wii's rear USB ports working under Mac OS X - specifically, I needed to get the low-speed USB 1.1 OHCI host controller up and running. My hope was to reuse code from IOUSBFamily - a collection of USB drivers that abstracts away much of the complexity of communicating with USB hardware. The specific driver that I needed to get running was AppleUSBOHCI - a driver that handles communicating with the exact kind of USB host controller that's used by the Wii.

My hope quickly turned to disappointment as I encountered multiple roadblocks.

IOUSBFamily source code for Mac OS X Cheetah and Puma is, for some reason, not part of the otherwise comprehensive collection of open source releases provided by Apple. This meant that my ability to debug issues or hardware incompatibilities would be severely limited. Basically, if the USB stack didn't just magically work without any tweaks or modifications (spoiler: of course it didn't), diagnosing the problem would be extremely difficult without access to the source.

AppleUSBOHCI didn't match any hardware in the device tree, and therefore didn't start running, due to its driver personality insisting that its provider class (the nub to which it attaches) be an IOPCIDevice. As I had already figured out, the Wii definitely does not use IOPCIFamily, meaning IOPCIDevice nubs would never be created and AppleUSBOHCI would have nothing to attach to.

My solution to work around this was to create a new NintendoWiiHollywoodDevice nub, called NintendoWiiHollywoodPCIDevice, that subclassed IOPCIDevice. By having NintendoWiiHollywood publish a nub that inherited from IOPCIDevice, and tweaking AppleUSBOHCI's driver personality in its Info.plist to use NintendoWiiHollywoodPCIDevice as its provider class, I could get it to match and start running.
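
The personality change amounts to swapping one value in the driver's Info.plist; a sketch, using standard IOKit matching keys:

    <key>AppleUSBOHCI</key>
    <dict>
        <key>IOClass</key>
        <string>AppleUSBOHCI</string>
        <key>IOProviderClass</key>
        <!-- was: <string>IOPCIDevice</string> -->
        <string>NintendoWiiHollywoodPCIDevice</string>
    </dict>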

...

Read the original on bryankeller.github.io »

3 1,537 shares, 67 trendiness

Sam Altman May Control Our Future—Can He Be Trusted?

Skip to main con­tentSam Altman May Control Our Future—Can He Be Trusted?New in­ter­views and closely guarded doc­u­ments shed light on the per­sis­tent doubts about the head of OpenAI. Altman promised to be a safe stew­ard for A.I. But some of his col­leagues be­lieved that he was not trust­wor­thy enough to, as one put it, have his fin­ger on the but­ton.”In the fall of 2023, Ilya Sutskever, OpenAI’s chief sci­en­tist, sent se­cret memos to three fel­low-mem­bers of the or­ga­ni­za­tion’s board of di­rec­tors. For weeks, they’d been hav­ing furtive dis­cus­sions about whether Sam Altman, OpenAI’s C.E.O., and Greg Brockman, his sec­ond-in-com­mand, were fit to run the com­pany. Sutskever had once counted both men as friends. In 2019, he’d of­fi­ci­ated Brockman’s wed­ding, in a cer­e­mony at OpenAI’s of­fices that in­cluded a ring bearer in the form of a ro­botic hand. But as he grew con­vinced that the com­pany was near­ing its long-term goal—cre­at­ing an ar­ti­fi­cial in­tel­li­gence that could ri­val or sur­pass the cog­ni­tive ca­pa­bil­i­ties of hu­man be­ings—his doubts about Altman in­creased. As Sutskever put it to an­other board mem­ber at the time, I don’t think Sam is the guy who should have his fin­ger on the but­ton.”At the be­hest of his fel­low board mem­bers, Sutskever worked with like-minded col­leagues to com­pile some sev­enty pages of Slack mes­sages and H.R. doc­u­ments, ac­com­pa­nied by ex­plana­tory text. The ma­te­r­ial in­cluded im­ages taken with a cell­phone, ap­par­ently to avoid de­tec­tion on com­pany de­vices. He sent the fi­nal memos to the other board mem­bers as dis­ap­pear­ing mes­sages, to in­sure that no one else would ever see them. He was ter­ri­fied,” a board mem­ber who re­ceived them re­called. The memos, which we re­viewed, have not pre­vi­ously been dis­closed in full. They al­lege that Altman mis­rep­re­sented facts to ex­ec­u­tives and board mem­bers, and de­ceived them about in­ter­nal safety pro­to­cols. One of the memos, about Altman, be­gins with a list headed Sam ex­hibits a con­sis­tent pat­tern of . . .” The first item is Lying.”Many tech­nol­ogy com­pa­nies is­sue vague procla­ma­tions about im­prov­ing the world, then go about max­i­miz­ing rev­enue. But the found­ing premise of OpenAI was that it would have to be dif­fer­ent. The founders, who in­cluded Altman, Sutskever, Brockman, and Elon Musk, as­serted that ar­ti­fi­cial in­tel­li­gence could be the most pow­er­ful, and po­ten­tially dan­ger­ous, in­ven­tion in hu­man his­tory, and that per­haps, given the ex­is­ten­tial risk, an un­usual cor­po­rate struc­ture would be re­quired. The firm was es­tab­lished as a non­profit, whose board had a duty to pri­or­i­tize the safety of hu­man­ity over the com­pa­ny’s suc­cess, or even its sur­vival. The C.E.O. had to be a per­son of un­com­mon in­tegrity. According to Sutskever, any per­son work­ing to build this civ­i­liza­tion-al­ter­ing tech­nol­ogy bears a heavy bur­den and is tak­ing on un­prece­dented re­spon­si­bil­ity.” But the peo­ple who end up in these kinds of po­si­tions are of­ten a cer­tain kind of per­son, some­one who is in­ter­ested in power, a politi­cian, some­one who likes it.” In one of the memos, he seemed con­cerned with en­trust­ing the tech­nol­ogy to some­one who just tells peo­ple what they want to hear.” If OpenAI’s C.E.O. turned out not to be re­li­able, the board, which had six mem­bers, was em­pow­ered to fire him. 
Some mem­bers, in­clud­ing Helen Toner, an A.I.-policy ex­pert, and Tasha McCauley, an en­tre­pre­neur, re­ceived the memos as a con­fir­ma­tion of what they had al­ready come to be­lieve: Altman’s role en­trusted him with the fu­ture of hu­man­ity, but he could not be trusted.Alt­man was in Las Vegas, at­tend­ing a Formula 1 race, when Sutskever in­vited him to a video call with the board, then read a brief state­ment ex­plain­ing that he was no longer an em­ployee of OpenAI. The board, fol­low­ing le­gal ad­vice, re­leased a pub­lic mes­sage say­ing only that Altman had been re­moved be­cause he was not con­sis­tently can­did in his com­mu­ni­ca­tions.” Many of OpenAI’s in­vestors and ex­ec­u­tives were shocked. Microsoft, which had in­vested some thir­teen bil­lion dol­lars in OpenAI, learned of the plan to fire Altman just mo­ments be­fore it hap­pened. I was very stunned,” Satya Nadella, Microsoft’s C.E.O., later said. I could­n’t get any­thing out of any­body.” He spoke with the LinkedIn co-founder Reid Hoffman, an OpenAI in­vestor and a Microsoft board mem­ber, who be­gan call­ing around to de­ter­mine whether Altman had com­mit­ted a clear of­fense. I did­n’t know what the fuck was go­ing on,” Hoffman told us. We were look­ing for em­bez­zle­ment, or sex­ual ha­rass­ment, and I just found noth­ing.”Other busi­ness part­ners were sim­i­larly blind­sided. When Altman called the in­vestor Ron Conway to say that he’d been fired, Conway held up his phone to Representative Nancy Pelosi, with whom he was hav­ing lunch. You bet­ter get out of here re­ally quick,” she told Conway. OpenAI was on the verge of clos­ing a large in­vest­ment from Thrive, a ven­ture-cap­i­tal firm founded by Josh Kushner, Jared Kushner’s brother, whom Altman had known for years. The deal would value OpenAI at eighty-six bil­lion dol­lars and al­low many em­ploy­ees to cash out mil­lions in eq­uity. Kushner emerged from a meet­ing with Rick Rubin, the mu­sic pro­ducer, to a missed call from Altman. We just im­me­di­ately went to war,” Kushner later said.The day that Altman was fired, he flew back to his twenty-seven-mil­lion-dol­lar man­sion in San Francisco, which has panoramic views of the bay and once fea­tured a can­tilevered in­fin­ity pool, and set up what he called a sort of gov­ern­ment-in-ex­ile.” Conway, the Airbnb co-founder Brian Chesky, and the fa­mously ag­gres­sive cri­sis-com­mu­ni­ca­tions man­ager Chris Lehane joined, some­times for hours a day, by video and phone. Some mem­bers of Altman’s ex­ec­u­tive team camped out in the hall­ways of the house. Lawyers set up in a home of­fice next to his bed­room. During bouts of in­som­nia, Altman would wan­der by them in his pa­ja­mas. When we spoke with Altman re­cently, he de­scribed the af­ter­math of his fir­ing as just this weird fugue.”With the board silent, Altman’s ad­vis­ers built a pub­lic case for his re­turn. Lehane has in­sisted that the fir­ing was a coup or­ches­trated by rogue effective al­tru­ists”—ad­her­ents of a be­lief sys­tem that fo­cusses on max­i­miz­ing the well-be­ing of hu­man­ity, who had come to see A.I. as an ex­is­ten­tial threat. (Hoffman told Nadella that the fir­ing might be due to effective-altruism crazi­ness.”) Lehane—whose re­ported motto, af­ter Mike Tyson, is Everyone has a game plan un­til you punch them in the mouth”—urged Altman to wage an ag­gres­sive so­cial-me­dia cam­paign. 
Chesky stayed in con­tact with the tech jour­nal­ist Kara Swisher, re­lay­ing crit­i­cism of the board.Alt­man in­ter­rupted his war room” at six o’­clock each evening with a round of Negronis. You need to chill,” he re­calls say­ing. Whatever’s gonna hap­pen is gonna hap­pen.” But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman con­veyed to Mira Murati, who had given Sutskever ma­te­r­ial for his memos and was serv­ing as the in­terim C.E.O. of OpenAI in that pe­riod, that his al­lies were going all out” and finding bad things” to dam­age her rep­u­ta­tion, as well as those of oth­ers who had moved against him, ac­cord­ing to some­one with knowl­edge of the con­ver­sa­tion. (Altman does not re­call the ex­change.)Within hours of the fir­ing, Thrive had put its planned in­vest­ment on hold and sug­gested that the deal would be con­sum­mated—and em­ploy­ees would thus re­ceive pay­outs—only if Altman re­turned. Texts from this pe­riod show Altman coör­di­nat­ing closely with Nadella. (“how about: satya and my top pri­or­ity re­mains to save ope­nai,” Altman sug­gested, as the two worked on a state­ment. Nadella pro­posed an al­ter­na­tive: to en­sure OpenAI con­tin­ues to thrive.”) Microsoft soon an­nounced that it would cre­ate a com­pet­ing ini­tia­tive for Altman and any em­ploy­ees who left OpenAI. A pub­lic let­ter de­mand­ing his re­turn cir­cu­lated at the or­ga­ni­za­tion. Some peo­ple who hes­i­tated to sign it re­ceived im­plor­ing calls and mes­sages from col­leagues. A ma­jor­ity of OpenAI em­ploy­ees ul­ti­mately threat­ened to leave with Altman.The board was backed into a cor­ner. Control Z, that’s one op­tion,” Toner said—undo the fir­ing. Or the other op­tion is the com­pany falls apart.” Even Murati even­tu­ally signed the let­ter. Altman’s al­lies worked to win over Sutskever. Brockman’s wife, Anna, ap­proached him at the of­fice and pleaded with him to re­con­sider. You’re a good per­son—you can fix this,” she said. Sutskever later ex­plained, in a court de­po­si­tion, I felt that if we were to go down the path where Sam would not re­turn, then OpenAI would be de­stroyed.” One night, Altman took an Ambien, only to be awak­ened by his hus­band, an Australian coder named Oliver Mulherin, who told him that Sutskever was wa­ver­ing, and that peo­ple were telling Altman to speak with the board. I woke up in this, like, crazy Ambien haze, and I was so dis­ori­ented,” Altman told us. I was, like, I can­not talk to the board right now.”In a se­ries of in­creas­ingly tense calls, Altman de­manded the res­ig­na­tions of board mem­bers who had moved to fire him. I have to pick up the pieces of their mess while I’m in this crazy cloud of sus­pi­cion?” Altman re­called ini­tially think­ing, about his re­turn. I was just, like, Absolutely fuck­ing not.” Eventually, Sutskever, Toner, and McCauley lost their board seats. Adam D’Angelo, a founder of Quora, was the sole orig­i­nal mem­ber who re­mained. As a con­di­tion of their exit, the de­part­ing mem­bers de­manded that the al­le­ga­tions against Altman—including that he pit­ted ex­ec­u­tives against one an­other and con­cealed his fi­nan­cial en­tan­gle­ments—be in­ves­ti­gated. They also pressed for a new board that could over­see the out­side in­quiry with in­de­pen­dence. But the two new mem­bers, the for­mer Harvard pres­i­dent Lawrence Summers and the for­mer Facebook C.T.O. Bret Taylor, were se­lected af­ter close con­ver­sa­tions with Altman. 
would you do this,” Altman texted Nadella. bret, larry sum­mers, adam as the board and me as ceo and then bret han­dles the in­ves­ti­ga­tion.” (McCauley later tes­ti­fied in a de­po­si­tion that when Taylor was pre­vi­ously con­sid­ered for a board seat she’d had con­cerns about his def­er­ence to Altman.)Less than five days af­ter his fir­ing, Altman was re­in­stated. Employees now call this mo­ment the Blip,” af­ter an in­ci­dent in the Marvel films in which char­ac­ters dis­ap­pear from ex­is­tence and then re­turn, un­changed, to a world pro­foundly al­tered by their ab­sence. But the de­bate over Altman’s trust­wor­thi­ness has moved be­yond OpenAI’s board­room. The col­leagues who fa­cil­i­tated his ouster ac­cuse him of a de­gree of de­cep­tion that is un­ten­able for any ex­ec­u­tive and dan­ger­ous for a leader of such a trans­for­ma­tive tech­nol­ogy. We need in­sti­tu­tions wor­thy of the power they wield,” Murati told us. The board sought feed­back, and I shared what I was see­ing. Everything I shared was ac­cu­rate, and I stand be­hind all of it.” Altman’s al­lies, on the other hand, have long dis­missed the ac­cu­sa­tions. After the fir­ing, Conway texted Chesky and Lehane de­mand­ing a pub­lic-re­la­tions of­fen­sive. This is REPUTATIONAL TO SAM,” he wrote. He told the Washington Post that Altman had been mistreated by a rogue board of di­rec­tors.”Ope­nAI has since be­come one of the most valu­able com­pa­nies in the world. It is re­port­edly prepar­ing for an ini­tial pub­lic of­fer­ing at a po­ten­tial val­u­a­tion of a tril­lion dol­lars. Altman is dri­ving the con­struc­tion of a stag­ger­ing amount of A.I. in­fra­struc­ture, some of it con­cen­trated within for­eign au­toc­ra­cies. OpenAI is se­cur­ing sweep­ing gov­ern­ment con­tracts, set­ting stan­dards for how A.I. is used in im­mi­gra­tion en­force­ment, do­mes­tic sur­veil­lance, and au­tonomous weaponry in war zones.Alt­man has pro­moted OpenAI’s growth by tout­ing a vi­sion in which, he wrote in a 2024 blog post, astounding tri­umphs—fix­ing the cli­mate, es­tab­lish­ing a space colony, and the dis­cov­ery of all of physics—will even­tu­ally be­come com­mon­place.” His rhetoric has helped sus­tain one of the fastest cash burns of any startup in his­tory, re­ly­ing on part­ners that have bor­rowed vast sums. The U.S. econ­omy is in­creas­ingly de­pen­dent on a few highly lever­aged A.I. com­pa­nies, and many ex­perts—at times in­clud­ing Altman—have warned that the in­dus­try is in a bub­ble. Someone is go­ing to lose a phe­nom­e­nal amount of money,” he told re­porters last year. If the bub­ble pops, eco­nomic cat­a­stro­phe may fol­low. If his most bull­ish pro­jec­tions prove cor­rect, he may be­come one of the wealth­i­est and most pow­er­ful peo­ple on the planet.In a tense call af­ter Altman’s fir­ing, the board pressed him to ac­knowl­edge a pat­tern of de­cep­tion. This is just so fucked up,” he said re­peat­edly, ac­cord­ing to peo­ple on the call. I can’t change my per­son­al­ity.” Altman says that he does­n’t re­call the ex­change. It’s pos­si­ble I meant some­thing like I do try to be a uni­fy­ing force,’ ” he told us, adding that this trait had en­abled him to lead an im­mensely suc­cess­ful com­pany. 
He at­trib­uted the crit­i­cism to a ten­dency, es­pe­cially early in his ca­reer, to be too much of a con­flict avoider.” But a board mem­ber of­fered a dif­fer­ent in­ter­pre­ta­tion of his state­ment: What it meant was I have this trait where I lie to peo­ple, and I’m not go­ing to stop.’ ” Were the col­leagues who fired Altman mo­ti­vated by alarmism and per­sonal an­i­mus, or were they right that he could­n’t be trusted?One morn­ing this win­ter, we met Altman at OpenAI’s head­quar­ters, in San Francisco, for one of more than a dozen con­ver­sa­tions with him for this story. The com­pany had re­cently moved into a pair of eleven-story glass tow­ers, one of which had been oc­cu­pied by Uber, an­other tech be­he­moth, whose co-founder and C.E.O., Travis Kalanick, seemed like an un­stop­pable prodigy—un­til he re­signed, in 2017, un­der pres­sure from in­vestors, who cited con­cerns about his ethics. (Kalanick now runs a ro­bot­ics startup; in his free time, he said re­cently, he uses OpenAI’s ChatGPT to get to the edge of what’s known in quan­tum physics.”)An em­ployee gave us a tour of the of­fice. In an airy space full of com­mu­nal ta­bles, there was an an­i­mated dig­i­tal paint­ing of the com­puter sci­en­tist Alan Turing; its eyes tracked us as we passed. The in­stal­la­tion is a wink­ing ref­er­ence to the Turing test, the 1950 thought ex­per­i­ment about whether a ma­chine can cred­i­bly im­i­tate a per­son. (In a 2025 study, ChatGPT passed the test more re­li­ably than ac­tual hu­mans did.) Typically, you can in­ter­act with the paint­ing. But the sound had been dis­abled, our guide told us, be­cause it would­n’t stop eaves­drop­ping on em­ploy­ees and then butting into their con­ver­sa­tions. Elsewhere in the of­fice, plaques, brochures, and mer­chan­dise dis­played the words Feel the AGI.” The phrase was orig­i­nally as­so­ci­ated with Sutskever, who used it to cau­tion his col­leagues about the risks of ar­ti­fi­cial gen­eral in­tel­li­gence—the thresh­old at which ma­chines match hu­man cog­ni­tive ca­pac­i­ties. After the Blip, it be­came a cheer­ful slo­gan hail­ing a su­per­abun­dant fu­ture.We met Altman in a generic-look­ing con­fer­ence room on the eighth floor. People used to tell me about de­ci­sion fa­tigue, and I did­n’t get it,” Altman told us. Now I wear a gray sweater and jeans every day, and even pick­ing which gray sweater out of my closet—I’m, like, I wish I did­n’t have to think about that.” Altman has a youth­ful ap­pear­ance—he is slen­der, with wide-set blue eyes and tou­sled hair—but he is now forty, and he and Mulherin have a one-year-old son, de­liv­ered by a sur­ro­gate. I’m sure, like, be­ing President of the United States would be a much more stress­ful job, but of all the jobs that I think I could rea­son­ably do, this is the most stress­ful one I can imag­ine,” he said, mak­ing eye con­tact with one of us, then with the other. The way that I’ve ex­plained this to my friends is: This was the most fun job in the world un­til the day we launched ChatGPT.’ We were mak­ing these mas­sive sci­en­tific dis­cov­er­ies—I think we did the most im­por­tant piece of sci­en­tific dis­cov­ery in, I don’t know, many decades.” He cast his eyes down. And then, since the launch of ChatGPT, the de­ci­sions have got­ten very dif­fi­cult.”Alt­man grew up in Clayton, Missouri, an af­flu­ent sub­urb of St. Louis, as the el­dest of four sib­lings. 
His mother, Connie Gibstine, is a der­ma­tol­o­gist; his fa­ther, Jerry Altman, was a real-es­tate bro­ker and a hous­ing ac­tivist. Altman at­tended a Reform syn­a­gogue and a pri­vate prepara­tory school that he has de­scribed as not the kind of place where you would re­ally stand up and talk about be­ing gay.” In gen­eral, though, the fam­i­ly’s wealthy sub­ur­ban cir­cles were rel­a­tively lib­eral. When Altman was six­teen or sev­en­teen, he said, he was out late in a pre­dom­i­nantly gay neigh­bor­hood in St. Louis and was sub­jected to a bru­tal phys­i­cal at­tack and ho­mo­pho­bic slurs. Altman did not re­port the in­ci­dent, and he was re­luc­tant to give us more de­tails on the record, say­ing that a fuller telling would make me look like I’m ma­nip­u­la­tive or play­ing for sym­pa­thy.” He dis­missed the idea that this event, and his sex­u­al­ity broadly, was sig­nif­i­cant to his iden­tity. But, he said, probably that has, like, some deep-seated psy­cho­log­i­cal thing—that I think I’m over but I’m not—about not want­ing more con­flict.”Alt­man’s at­ti­tude in child­hood, his brother told The New Yorker, in 2016, was I have to win, and I’m in charge of every­thing.” He went to Stanford, where he at­tended reg­u­lar off-cam­pus poker games. I think I learned more about life and busi­ness from that than I learned in col­lege,” he later said.All Stanford stu­dents are am­bi­tious, but many of the most en­ter­pris­ing among them drop out. The sum­mer af­ter his sopho­more year, Altman went to Massachusetts to join the in­au­gural batch of en­tre­pre­neurs at Y Combinator, a startup in­cu­ba­tor” co-founded by the renowned soft­ware en­gi­neer Paul Graham. Each en­trant joined Y.C. with an idea for a startup. (Altman’s batch mates in­cluded founders of Reddit and Twitch.) Altman’s pro­ject, even­tu­ally called Loopt, was a proto so­cial net­work that used the lo­ca­tions of peo­ple’s flip phones to tell their friends where they were. The com­pany re­flected his drive, and a ten­dency to in­ter­pret am­bigu­ous sit­u­a­tions to his ad­van­tage. Federal rules re­quired that phone car­ri­ers be able to track the lo­ca­tions of phones for emer­gency ser­vices; Altman struck deals with car­ri­ers to tap these ca­pa­bil­i­ties for the com­pa­ny’s use.“These num­bers in­di­cate that some­body here has the soul of a poet.”Most of Altman’s em­ploy­ees at Loopt liked him, but some said that they were struck by his ten­dency to ex­ag­ger­ate, even about triv­ial things. One re­called Altman brag­ging widely that he was a cham­pion Ping-Pong player—“like, Missouri high-school Ping-Pong champ”—and then prov­ing to be one of the worst play­ers in the of­fice. (Altman says that he was prob­a­bly jok­ing.) As Mark Jacobstein, an older Loopt em­ployee who was asked by in­vestors to act as Altman’s babysitter,” later told Keach Hagey, for The Optimist,” a bi­og­ra­phy of Altman, There’s a blur­ring be­tween I think I can maybe ac­com­plish this thing’ and I have al­ready ac­com­plished this thing’ that in its most toxic form leads to Theranos,” Elizabeth Holmes’s fraud­u­lent startup.Groups of se­nior em­ploy­ees, con­cerned with Altman’s lead­er­ship and lack of trans­parency, asked Loopt’s board on two oc­ca­sions to fire him as C.E.O., ac­cord­ing to Hagey. But Altman in­spired fierce loy­alty, too. A for­mer em­ployee was told that a board mem­ber re­sponded, This is Sam’s com­pany, get back to fuck­ing work.” (A board mem­ber de­nied that the at­tempts to re­move Altman as C.E.O. 
were se­ri­ous.) Loopt strug­gled to gain users, and in 2012 it was ac­quired by a fin­tech com­pany. The ac­qui­si­tion had been arranged, ac­cord­ing to a per­son fa­mil­iar with the deal, largely to help Altman save face. Still, by the time Graham re­tired from Y.C., in 2014, he had re­cruited Altman to be his suc­ces­sor as pres­i­dent. I asked Sam in our kitchen,” Graham told The New Yorker. And he smiled, like, it worked. I had never seen an un­con­trolled smile from Sam. It was like when you throw a ball of pa­per into the waste­bas­ket across the room—that smile.”Alt­man’s new role made him, at twenty-eight, a king­maker. His job was to se­lect the hun­gri­est and most promis­ing en­tre­pre­neurs, con­nect them with the best coders and in­vestors, and help them de­velop their star­tups into in­dus­try-defin­ing mo­nop­o­lies (while Y.C. took a six- or seven-per-cent cut). Altman over­saw a pe­riod of ag­gres­sive ex­pan­sion, grow­ing Y.C.’s ros­ter of star­tups from dozens to hun­dreds. But sev­eral Silicon Valley in­vestors came to be­lieve that his loy­al­ties were di­vided. An in­vestor told us that Altman was known to make per­sonal in­vest­ments, se­lec­tively, into the best com­pa­nies, block­ing out­side in­vestors.” (Altman de­nies block­ing any­one.) Altman had worked as a scout” for the in­vest­ment fund Sequoia Capital, as part of a pro­gram that in­volved in­vest­ing in early-stage star­tups and tak­ing a small cut of any prof­its. When Altman made an an­gel in­vest­ment in Stripe, a fi­nan­cial-ser­vices startup, he in­sisted on a big­ger por­tion, galling Sequoia’s part­ners, a per­son fa­mil­iar with the deal said. The per­son added, It’s a pol­icy of Sam first.’ ” Altman is an in­vestor in, by his own es­ti­mate, some four hun­dred other com­pa­nies. (Altman de­nies this char­ac­ter­i­za­tion of the Stripe deal. Around 2010, he made an ini­tial in­vest­ment of fif­teen thou­sand dol­lars in Stripe, a two-per-cent share. The com­pany is now val­ued at more than a hun­dred and fifty bil­lion dol­lars.)By 2018, sev­eral Y.C. part­ners were so frus­trated with Altman’s be­hav­ior that they ap­proached Graham to com­plain. Graham and Jessica Livingston, his wife and a Y.C. founder, ap­par­ently had a frank con­ver­sa­tion with Altman. Afterward, Graham started telling peo­ple that al­though Altman had agreed to leave the com­pany, he was re­sist­ing in prac­tice. Altman told some Y.C. part­ners that he would re­sign as pres­i­dent but be­come chair­man in­stead. In May, 2019, a blog post an­nounc­ing that Y.C. had a new pres­i­dent came with an as­ter­isk: Sam is tran­si­tion­ing to Chairman of YC.” A few months later, the post was edited to read Sam Altman stepped away from any for­mal po­si­tion at YC; af­ter that, the phrase was re­moved en­tirely. Nevertheless, as re­cently as 2021, a Securities and Exchange Commission fil­ing listed Altman as the chair­man of Y Combinator. (Altman says that he was­n’t aware of this un­til much later.)Alt­man has main­tained over the years, both in pub­lic and in re­cent de­po­si­tions, that he was never fired from Y.C., and he told us that he did not re­sist leav­ing. Graham has tweeted that we did­n’t want him to leave, just to choose” be­tween Y.C. and OpenAI. In a state­ment, Graham told us, We did­n’t have the le­gal power to fire any­one. All we could do was ap­ply moral pres­sure.” In pri­vate, though, he has been un­am­bigu­ous that Altman was re­moved be­cause of Y.C. part­ners’ mis­trust. 
This ac­count of Altman’s time at Y Combinator is based on dis­cus­sions with sev­eral Y.C. founders and part­ners, in ad­di­tion to con­tem­po­ra­ne­ous ma­te­ri­als, all of which in­di­cate that the part­ing was not en­tirely mu­tual. On one oc­ca­sion, Graham told Y.C. col­leagues that, prior to his re­moval, Sam had been ly­ing to us all the time.”In May, 2015, Altman e-mailed Elon Musk, then the hun­dredth-rich­est per­son in the world. Like many promi­nent Silicon Valley en­tre­pre­neurs, Musk was pre­oc­cu­pied by an ar­ray of threats that he con­sid­ered ex­is­ten­tially ur­gent but which would have struck most peo­ple as far-fetched hy­po­thet­i­cals. We need to be su­per care­ful with AI,” he tweeted. Potentially more dan­ger­ous than nukes.”Alt­man had gen­er­ally been a techno-op­ti­mist, but his rhetoric about A.I. soon turned apoc­a­lyp­tic. In pub­lic, and in his pri­vate cor­re­spon­dence with Musk and oth­ers, he warned that the tech­nol­ogy should not be dom­i­nated by a profit-seek­ing mega-cor­po­ra­tion. Been think­ing a lot about whether it’s pos­si­ble to stop hu­man­ity from de­vel­op­ing AI,” he wrote to Musk. If it’s go­ing to hap­pen any­way, it seems like it would be good for some­one other than Google to do it first.” Picking up on the anal­ogy to nu­clear weapons, he pro­posed a Manhattan Project for AI.” He out­lined the over­ar­ch­ing prin­ci­ples that such an or­ga­ni­za­tion would have—“safety should be a first-class re­quire­ment”; obviously we’d com­ply with/​ag­gres­sively sup­port all reg­u­la­tion”—and he and Musk set­tled on a name: OpenAI.Unlike the orig­i­nal Manhattan Project, a gov­ern­ment ini­tia­tive that led to the cre­ation of the atom bomb, OpenAI would be pri­vately funded, at least at first. Altman pre­dicted that an ar­ti­fi­cial su­per­in­tel­li­gence—a the­o­ret­i­cal thresh­old be­yond even A.G.I., at which ma­chines would fully eclipse the ca­pa­bil­i­ties of the hu­man mind—would even­tu­ally cre­ate enough eco­nomic ben­e­fits to capture the light cone of all fu­ture value in the uni­verse.” But he also warned of ex­is­ten­tial dan­ger. At some point, the na­tional-se­cu­rity im­pli­ca­tions could grow so dire that the U.S. gov­ern­ment would have to take con­trol of OpenAI, per­haps by na­tion­al­iz­ing it and mov­ing its op­er­a­tions to a se­cure bunker in the desert. By late 2015, Musk was per­suaded. We should say that we are start­ing with a $1B fund­ing com­mit­ment,” he wrote. I will cover what­ever any­one else does­n’t pro­vide.”Alt­man housed OpenAI in Y Combinator’s non­profit arm, fram­ing it as an in­ter­nal phil­an­thropic pro­ject. He gave OpenAI re­cruits Y.C. stock and moved do­na­tions through Y.C. ac­counts. At one point, the lab was sup­ported by a Y.C. fund in which he held a per­sonal stake. (Altman later de­scribed this stake as in­signif­i­cant. He told us that the Y.C. stock he gave to re­cruits was his own.)The Manhattan Project anal­ogy ap­plied to em­ployee re­cruit­ment, too. Like nu­clear-fis­sion re­search, ma­chine learn­ing was a small sci­en­tific field with epochal im­pli­ca­tions which was dom­i­nated by a cadre of ec­cen­tric ge­niuses. Musk and Altman, along with Brockman, who joined from Stripe, were con­vinced that there were only a few com­puter sci­en­tists alive ca­pa­ble of mak­ing the re­quired break­throughs. Google had a huge cash ad­van­tage and a mul­ti­year head start. We are out­manned and out­gunned by a ridicu­lous mar­gin,” Musk later wrote in an e-mail. 
"But if we are able to attract the most talented people over time and our direction is correctly aligned, then OpenAI will prevail."

A top recruiting target was Sutskever, an intense and introverted researcher who was often called the most gifted A.I. scientist of his generation. Sutskever, who was born in the Soviet Union in 1986, has a receding hairline, dark eyes, and a habit of pausing, unblinking, while choosing his words. Another target was Dario Amodei, a biophysicist and a font of frenetic energy who has a tendency to nervously twist his black hair and responds to one-line e-mails with multi-paragraph essays. Both had lucrative jobs elsewhere, but Altman lavished them with attention. He later joked, "I stalked Ilya."

Musk was the bigger name, but Altman had the smoother touch. He e-mailed Amodei, and they set up a one-on-one dinner at an Indian restaurant. (Altman: "fuck my uber got in a crash! running about 10 late." Amodei: "Wow, hope you're ok.") Like many A.I. researchers, Amodei believed that the technology should be built only if it was shown to be "aligned" with human values, meaning that it would act in accordance with what people wanted without making a potentially fatal error—say, following an instruction to clean up the environment by eliminating its greatest polluter, the human race. Altman was reassuring, mirroring these safety concerns.

Amodei, who later joined the company, took detailed notes on Altman and Brockman's behavior for years, under the heading "My Experience with OpenAI" (subheading: "Private: Do Not Share"). A collection of more than two hundred pages of documents related to Amodei, including those notes and internal e-mails and memos, has been circulated by colleagues in Silicon Valley but never before disclosed publicly. In his notes, Amodei wrote that Altman's goal was to build "an AI lab that would be focused on safety ('maybe not right away, but as soon as it can be')."

In December, 2015, hours before OpenAI was publicly announced, Altman e-mailed Musk about a rumor that Google was "going to give everyone in openAI massive counteroffers tomorrow to try to kill it." Musk replied, "Has Ilya come back with a solid yes?" Altman assured him that Sutskever was holding firm. Google offered Sutskever six million dollars a year, which OpenAI couldn't come close to matching. But, Altman boasted, "they unfortunately dont have 'do the right thing' on their side."

Musk provided some office space for OpenAI in a former suitcase factory in the Mission District of San Francisco. The pitch to employees, Sutskever told us, was "You're going to save the world."

If everything went right, the OpenAI founders believed, artificial intelligence could usher in a post-scarcity utopia, automating grunt work, curing cancer, and liberating people to enjoy lives of leisure and abundance. But if the technology went rogue, or fell into the wrong hands, the devastation could be total. China could use it to build a novel bioweapon or a fleet of advanced drones; an A.I. model could outmaneuver its overseers, replicating itself on secret servers so that it couldn't be turned off; in extreme cases, it might seize control of the energy grid, the stock market, or the nuclear arsenal.
Not everyone believed this, to say the least, but Altman repeatedly affirmed that he did. He wrote on his blog in 2015 that superhuman machine intelligence "does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn't care about us much either way, but in an effort to accomplish some other goal . . . wipes us out." OpenAI's founders vowed not to privilege speed over safety, and the organization's articles of incorporation made benefitting humanity a legally binding duty. If A.I. was going to be the most powerful technology in history, it followed that any individual with sole control over it stood to become uniquely powerful—a scenario that the founders referred to as an "AGI dictatorship."

Altman told early recruits that OpenAI would remain a pure nonprofit, and programmers took significant pay cuts to work there. The company accepted charitable grants, including thirty million dollars from what was then called Open Philanthropy, a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor.

Brockman and Sutskever managed OpenAI's daily operations, while Musk and Altman, still busy with their other jobs, stopped by around once a week. By September, 2017, though, Musk had grown impatient. During discussions about whether to reconstitute OpenAI as a for-profit company, he demanded majority control. Altman's replies varied depending on the context. His main consistent demand seems to have been that, if OpenAI were reorganized under the control of a C.E.O., that job should go to him. Sutskever seemed uncomfortable with this idea. He sent Musk and Altman a long, plaintive e-mail on behalf of himself and Brockman, with the subject line "Honest Thoughts." He wrote, "The goal of OpenAI is to make the future good and to avoid an AGI dictatorship." He continued, addressing Musk, "So it is a bad idea to create a structure where you could become a dictator." He relayed similar concerns to Altman: "We don't understand why the CEO title is so important to you. Your stated reasons have changed, and it's hard to really understand what's driving it."

"Guys, I've had enough," Musk replied. Either "go do something on your own or continue with OpenAI as a nonprofit"—otherwise "I'm just being a fool who is essentially providing free funding for you to create a startup." He quit, acrimoniously, five months later. (In 2023, he founded a for-profit competitor called xAI. The following year, he sued Altman and OpenAI for fraud and breach of charitable trust, alleging that he had been "assiduously manipulated" by Altman's "long con"—that Altman had preyed on his concerns about the dangers of A.I. in order to separate him from his money. The suit, which OpenAI has vigorously contested, is ongoing.)

After Musk's departure, Amodei and other researchers chafed against the leadership of Brockman, whom some considered an abrasive operator, and of Sutskever, who was generally viewed as principled but disorganized. In the process of becoming C.E.O., Altman seems to have made different promises to different factions at the company. He assured some researchers that Brockman's managerial authority would be diminished.
But, unbeknownst to them, he also struck a secret handshake deal with Brockman and Sutskever: Altman would get the C.E.O. title; in exchange, he agreed to resign if the other two deemed it necessary. (He disputed this characterization, saying he took the C.E.O. role only because he was asked to. All three men confirmed that the pact existed, though Brockman said that it was informal. "He unilaterally told us that he'd step down if we ever both asked him to," he told us. "We objected to this idea, but he said it was important to him. It was purely altruistic.") Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board.

Internal records show that the founders had private doubts about the nonprofit structure as early as 2017. That year, after Musk tried to take control, Brockman wrote in a diary entry, "cannot say that we are committed to the non-profit . . . if three months later we're doing b-corp then it was a lie." Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted "money and power." Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, "Happy to not become rich on this, so long as no one else is." In another, he asks, "So what do I *really* want?" Among his answers is "Financially what will take me to $1B."

In 2017, Sutskever was in the office when he read a paper that Google researchers had just published, proposing "a new simple network architecture, the Transformer." He jumped out of his chair, ran down the hall, and told his fellow-researchers, "Stop everything you're doing. This is it." The Transformer, Sutskever saw, was an innovation that might enable OpenAI to train vastly more sophisticated models. Out of this discovery came the first generative pre-trained transformer—the seed of what would become ChatGPT.

As the technology became increasingly powerful, we learned, about a dozen of OpenAI's top engineers held a series of secret meetings to discuss whether OpenAI's founders, including Brockman and Altman, could be trusted. At one, an employee was reminded of a sketch by the British comedy duo Mitchell and Webb, in which a Nazi soldier on the Eastern Front, in a moment of clarity, asks, "Are we the baddies?"

By 2018, Amodei had started questioning the founders' motives more openly. "Everything was a rotating set of schemes to raise money," he later wrote in his notes. "I felt like what OpenAI needed was a clear statement of what it would do, what it would not do, and how its existence would make the world better." OpenAI already had a mission statement: "To ensure that artificial general intelligence benefits all of humanity." But it wasn't clear to Amodei what this meant to the executives, if it meant anything at all. In early 2018, Amodei has said, he started drafting a charter for the company and, in weeks of conversations with Altman and Brockman, advocated for its most radical clause: if a "value-aligned, safety-conscious project" came close to building an A.G.I. before OpenAI did, the company would "stop competing with and start assisting this project." According to the "merge and assist" clause, as it was called, if, say, Google's researchers figured out how to build a safe A.G.I. first, then OpenAI could wind itself down and donate its resources to Google.
By any normal corporate logic, this was an insane thing to promise. But OpenAI was not supposed to be a normal company.

That premise was tested in the spring of 2019, when OpenAI was negotiating a billion-dollar investment from Microsoft. Although Amodei, who was leading the company's safety team, had helped to pitch the deal to Bill Gates, many people on the team were anxious about it, fearing that Microsoft would insert provisions that overrode OpenAI's ethical commitments. Amodei presented Altman with a ranked list of safety demands, placing the preservation of the merge-and-assist clause at the very top. Altman agreed to that demand, but in June, as the deal was closing, Amodei discovered that a provision granting Microsoft the power to block OpenAI from any mergers had been added. "Eighty per cent of the charter was just betrayed," Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn't remember this.) Amodei's notes describe escalating, tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it "on good authority" from a senior executive that they had been plotting a coup. Daniela, the notes continue, "lost it," and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. "I didn't even say that," he said. "You just said that," Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of "political behavior.") In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI's chief rivals.

Altman continued touting OpenAI's commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about "deceptive alignment," in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals. (It's one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it's already happening.) Weeks after the paper was published, one of its authors, a Ph.D. student at the University of California, Berkeley, got an e-mail from Altman, who said that he was increasingly worried about the threat of unaligned A.I. He added that he was thinking of committing a billion dollars to the issue, which many A.I. experts considered the most important unsolved problem in the world, potentially by endowing a prize to incentivize researchers around the world to study it. Although the graduate student had heard vague rumors about Sam being "slippery," he told us, Altman's show of commitment won him over. He took an academic leave to join OpenAI.

But, in the course of several meetings in the spring of 2023, Altman seemed to waver. He stopped talking about endowing a prize.
Instead, he advocated for establishing an in-house "superalignment team." An official announcement, referring to the company's reserves of computing power, pledged that the team would get "20% of the compute we've secured to date"—a resource potentially worth more than a billion dollars. The effort was necessary, according to the announcement, because, if alignment remained unsolved, A.G.I. might lead to "the disempowerment of humanity or even human extinction." Jan Leike, who was appointed to lead the team with Sutskever, told us, "It was a pretty effective retention tool."

The twenty-per-cent commitment evaporated, however. Four people who worked on or closely with the team said that the actual resources were between one and two per cent of the company's compute. Furthermore, a researcher on the team said, most of the superalignment compute was actually "on the oldest cluster with the worst chips." The researchers believed that superior hardware was being reserved for profit-generating activities. (OpenAI disputes this.) Leike complained to Murati, then the company's chief technology officer, but she told him to stop pressing the point—the commitment had never been realistic.

Around this time, a former employee told us, Sutskever was getting "super safety-pilled." In the early days of OpenAI, he had considered concerns about catastrophic risk legitimate but remote. Now, as he came to believe that A.G.I. was imminent, his worries grew more acute. There was an all-hands meeting, the former employee continued, where "Ilya gets up and he's, like, 'Hey, everyone, there's going to be a point in the next few years where basically everyone at this company has to switch to working on safety, or else we're fucked.'" But the superalignment team was dissolved the following year, without completing its mission.

By then, internal messages show, executives and board members had come to believe that Altman's omissions and deceptions might have ramifications for the safety of OpenAI's products. In a meeting in December, 2022, Altman assured board members that a variety of features in a forthcoming model, GPT-4, had been approved by a safety panel. Toner, the board member and A.I.-policy expert, requested documentation. She learned that the most controversial features—one that allowed users to "fine-tune" the model for specific tasks, and another that deployed it as a personal assistant—had not been approved. As McCauley, the board member and entrepreneur, left the meeting, an employee pulled her aside and asked if she knew about "the breach" in India. Altman, during many hours of briefings with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India without completing a required safety review. "It just was kind of completely ignored," Jacob Hilton, an OpenAI researcher at the time, said.

Although these lapses did not cause security crises, Carroll Wainwright, another researcher, said that they were part of a "continual slide toward emphasizing products over safety." After the release of GPT-4, Leike e-mailed members of the board. "OpenAI has been going off the rails on its mission," he wrote.
"We are prioritizing the product and revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third." He continued, "Other companies like Google are learning that they should deploy faster and ignore safety problems."

McCauley, in an e-mail to her fellow-members, wrote, "I think we're definitely at a point where the board should be increasing its level of scrutiny." The board members tried to confront what they viewed as a mounting problem, but they were outmatched. "You had a bunch of J.V. people who've never done anything, to be blunt," Sue Yoon, a former board member, said. In 2023, the company was preparing to release its GPT-4 Turbo model. As Sutskever details in the memos, Altman apparently told Murati that the model didn't need safety approval, citing the company's general counsel, Jason Kwon. But when she asked Kwon, over Slack, he replied, "ugh . . . confused where sam got that impression." (A representative for OpenAI, where Kwon remains an executive, said that the matter was "not a big deal.")

Soon afterward, the board made its decision to fire Altman—and then the world watched as Altman reversed it. A version of the OpenAI charter is still on the organization's website. But people familiar with OpenAI's governing documents said that it has been diluted to the point of meaninglessness. Last June, on his personal blog, Altman wrote, referring to artificial superintelligence, "We are past the event horizon; the takeoff has started." This was, according to the charter, arguably the moment when OpenAI might stop competing with other companies and start working with them. But in that post, called "The Gentle Singularity," he adopted a new tone, replacing existential terror with ebullient optimism. "We'll all get better stuff," he wrote. "We will build ever-more-wonderful things for each other." He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram.

Altman is often described, either with reverence or with suspicion, as the greatest pitchman of his generation. Steve Jobs, one of his idols, was said to project a "reality-distortion field"—an unassailable confidence that the world would conform to his vision. But even Jobs never told his customers that if they didn't buy his brand of MP3 player everyone they loved would die. When Altman was twenty-three, in 2008, Graham, his mentor, wrote, "You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king." This judgment was based not on Altman's track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable. When advised not to include Y.C. alumni on a list of the world's top startup founders, Graham put Altman on it anyway. "Sam Altman can't be stopped by such flimsy rules," he wrote.

Graham meant this as a compliment. But some of Altman's closest colleagues came to have a different view of this quality. After Sutskever grew more distressed about A.I. safety, he compiled the memos about Altman and Brockman. They have since taken on a legendary status in Silicon Valley; in some circles, they are simply called the Ilya Memos. Meanwhile, Amodei was continuing to assemble notes.
These and the other documents related to him chart his shift from cautious idealism to alarm. His language is more heated than Sutskever's, by turns incensed at Altman—"His words were almost certainly bullshit"—and wistful about what he says was a failure to correct OpenAI's course.

Neither collection of documents contains a smoking gun. Rather, they recount an accumulation of alleged deceptions and manipulations, each of which might, in isolation, be greeted with a shrug: Altman purportedly offers the same job to two people, tells contradictory stories about who should appear on a live stream, dissembles about safety requirements. But Sutskever concluded that this kind of behavior "does not create an environment conducive to the creation of a safe AGI." Amodei and Sutskever were never close friends, but they reached similar conclusions. Amodei wrote, "The problem with OpenAI is Sam himself."

We have interviewed more than a hundred people with firsthand knowledge of how Altman conducts business: current and former OpenAI employees and board members; guests and staffers at Altman's various houses; his colleagues and competitors; his friends and enemies and several people who, given the mercenary culture of Silicon Valley, have been both. (OpenAI has an agreement with Condé Nast, the owner of The New Yorker, which allows OpenAI to display its content in search results for a limited term.)

Some people defended Altman's business acumen and dismissed his rivals, especially Sutskever and Amodei, as failed aspirants to his throne. Others portrayed them as gullible, absent-minded scientists, or as hysterical "doomers," gripped by the delusion that the software they were building would somehow come alive and kill them. Yoon, the former board member, argued that Altman was not "this Machiavellian villain" but merely, "to the point of fecklessness," able to convince himself of the shifting realities of his sales pitches. "He's too caught up in his own self-belief," she said. "So he does things that, if you live in the real world, make no sense. But he doesn't live in the real world."

Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. "He's unconstrained by truth," the board member told us. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."

The board member was not the only person who, unprompted, used the word "sociopathic." One of Altman's batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. "You need to understand that Sam can never be trusted," he told one. "He is a sociopath. He would do anything." Multiple senior executives at Microsoft said that, despite Nadella's long-standing loyalty, the company's relationship with Altman has become fraught. "He has misrepresented, distorted, renegotiated, reneged on agreements," one said.
Earlier this year, OpenAI reaffirmed Microsoft as the exclusive cloud provider for its "stateless"—or memoryless—models. That day, it announced a fifty-billion-dollar deal making Amazon the exclusive reseller of its enterprise platform for A.I. agents. While reselling is permitted, Microsoft executives argue that OpenAI's plan could collide with Microsoft's exclusivity. (OpenAI maintains that the Amazon deal will not violate the earlier contract; a Microsoft representative said the company is "confident that OpenAI understands and respects" its legal obligations.) The senior executive at Microsoft said, of Altman, "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer."

Altman is not a technical savant—according to many in his orbit, he lacks extensive expertise in coding or machine learning. Multiple engineers recalled him misusing or confusing basic technical terms. He built OpenAI, in large part, by harnessing other people's money and technical talent. This doesn't make him unique. It makes him a businessman. More remarkable is his ability to convince skittish engineers, investors, and a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities. When such people have tried to hinder his next move, he has often found the words to neutralize them, at least temporarily; usually, by the time they lose patience with him, he's got what he needs. "He sets up structures that, on paper, constrain him in the future," Wainwright, the former OpenAI researcher, said. "But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was."

"He's unbelievably persuasive. Like, Jedi mind tricks," a tech executive who has worked with Altman said. "He's just next level." A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win, much the way a grandmaster will beat a child at chess. Watching Altman outmaneuver the people around him during the Blip, the executive continued, had been like watching "an A.G.I. breaking out of the box."

In the days after his firing, Altman fought to avoid any outside investigation of the claims against him. He told two people that he worried even the existence of an investigation would make him look guilty. (Altman denies this.) But, after the resigning board members made their departure conditional on there being an independent inquiry, Altman acceded to a "review" of "recent events." The two new board members insisted that they control that review, according to people involved in the negotiations. Summers, with his network of political and Wall Street connections, seemed to lend it credibility. (Last November, after the disclosure of e-mails in which Summers sought Jeffrey Epstein's advice while pursuing a romantic relationship with a young protégée, he resigned from the board.) OpenAI enlisted WilmerHale, the distinguished law firm responsible for the internal investigations of Enron and WorldCom, to conduct the review.

Six people close to the inquiry alleged that it seemed designed to limit transparency.
Some of them said that the investigators initially did not contact important figures at the company. An employee reached out to Summers and Taylor to complain. "They were just interested in the narrow range of what happened during the board drama, and not the history of his integrity," the employee recalled of his interview with investigators. Others were uncomfortable sharing concerns about Altman because they felt there was not a sufficient effort to insure anonymity. "Everything pointed to the fact that they wanted to find the outcome, which is to acquit him," the employee said. (Some of the lawyers involved defended the process, saying, "It was an independent, careful, comprehensive review that followed the facts wherever they led." Taylor also said that the review was "thorough and independent.")

Corporate investigations aim to confer legitimacy. At private companies, their findings are sometimes not written down—this can be a way to limit liability. But in cases involving public scandals there is often a greater expectation of transparency. Before Kalanick left Uber, in 2017, its board hired an outside firm, which released a thirteen-page summary to the public. Given OpenAI's 501(c)(3) status and the high-profile nature of the firing, many executives there expected to see extensive findings. In March, 2024, however, OpenAI announced that it would clear Altman but released no report. The company provided, on its website, some eight hundred words acknowledging a "breakdown in trust."

People involved in the investigation said that no report was released because none was written. Instead, the findings were limited to oral briefings, shared with Summers and Taylor. "The review did not conclude that Sam was a George Washington cherry tree of integrity," one of the people close to the inquiry said. But the investigation appears not to have centered on the questions of integrity behind Altman's firing, devoting much of its focus to a hunt for clear criminality; on that basis, it concluded that he could remain as C.E.O. Shortly thereafter, Altman, who had been kicked off the board when he was fired, rejoined it. The decision not to put the report in writing was made in part on the advice of Summers's and Taylor's personal attorneys, the person close to the inquiry told us. (Summers declined to comment on the record. Taylor said that, in light of the oral briefings, there had been no need for a "formal written report.")

Many former and current OpenAI employees told us that they were shocked by the lack of disclosure. Altman said he believed that all the board members who joined in the aftermath of his reinstatement received the oral briefings. "That's an absolute, outright lie," a person with direct knowledge of the situation said. Some board members told us that ongoing questions about the integrity of the report could prompt, as one put it, a need for "another investigation."

The absence of a written record helped minimize the allegations. So, increasingly, did Altman's stature in Silicon Valley. Multiple prominent investors who have worked with Altman told us that he has a reputation for freezing out investors if they back OpenAI's competitors. "If they invest in something that he doesn't like, they won't get access to other things," one of them said.
Another source of Altman's power is his vast list of investments, which at times extends to his personal life. He has financial entanglements with numerous former romantic partners: as a fund co-manager, a lead investor, or a frequent co-investor. This is hardly unusual. Many of Silicon Valley's straight executives do the same thing with their romantic and sexual partners. ("You have to," one prominent C.E.O. told us.) "I've obviously invested with some exes after the fact. And I think that's, like, totally fine," Altman said. But the dynamic affords an extraordinary level of control. "It creates a very, very high dependence, essentially," a person close to Altman said. "Oftentimes, it's a lifetime dependence."

Even former colleagues can be affected. Murati left OpenAI in 2024 and began building her own A.I. startup. Josh Kushner, the close Altman ally, called her. He praised her leadership, then made what seemed to be a veiled threat, noting that he was "concerned about" her "reputation" and that former colleagues now viewed her as an "enemy." (Kushner, through a representative, said that this account did not "convey full context"; Altman said that he was unaware of the call.)

At the beginning of his tenure as C.E.O., Altman had announced that OpenAI would create a "capped profit" company, which would be owned by the nonprofit. This byzantine corporate structure apparently did not exist until Altman devised it. In the midst of the conversion, a board member named Holden Karnofsky objected to it, arguing that the nonprofit was being severely undervalued. "I can't do that in good faith," Karnofsky, who is Amodei's brother-in-law, said. According to contemporaneous notes, he voted against it. However, after an attorney for the board said that his dissent might be "a flag to investigate further" the legitimacy of the new structure, his vote was recorded as an abstention, apparently without his consent—a potential falsification of business records. (OpenAI told us that several employees recall Karnofsky abstaining, and provided the minutes from the meeting recording his vote as an abstention.)

Last October, OpenAI "recapitalized" as a for-profit entity. The firm touts its associated nonprofit, now called the OpenAI Foundation, as one of the "best resourced" in history. But it is now a twenty-six-per-cent stakeholder of the company, and its board members are also, with one exception, members of the for-profit board.

During congressional testimony, Altman was asked if he made "a lot of money." He replied, "I have no equity in OpenAI . . . I'm doing this because I love it"—a careful answer, given his indirect equity through the Y.C. fund. This is still technically true. But several people, including Altman, indicated to us that it could soon change. "Investors are, like, 'I need to know you're gonna stick with this when times get hard,'" Altman said, but added that there was "no active discussion" about it. According to a legal deposition, Brockman seems to own a stake in the company that is worth about twenty billion dollars. Altman's share would presumably be worth more. Still, he told us that he was not primarily motivated by wealth. A former employee recalls him saying, "I don't care about money. I care more about power."

In 2023, Altman married Mulherin in a small ceremony at a home they own in Hawaii.
(They'd met nine years prior, late at night in Peter Thiel's hot tub.) They have hosted a range of guests at the property, and those we spoke with reported witnessing nothing more remarkable than the standard diversions of the very wealthy: meals prepared by a private chef, boat rides at golden hour. One New Year's party was "Survivor"-themed; a photograph shows a number of shirtless, smiling men, and also Jeff Probst, the real host of "Survivor." Altman has also hosted smaller groups of friends at his properties, gatherings that have included, in at least one instance, a spirited game of strip poker. (A photograph of the event, which did not include Altman, leaves unclear who won, but at least three men clearly lost.) We spoke to many of Altman's former guests, who suggested only that he is a generous host.

Nevertheless, rumors about Altman's personal life have been exploited and distorted by competitors. Ruthless business rivalries are nothing new, but the competition within the A.I. industry has become extraordinarily cutthroat. ("Shakespearean" was the word an OpenAI executive used to describe it to us, adding, "The normal rules of the game sort of don't apply anymore.") Intermediaries directly connected to, and in at least one case compensated by, Musk have circulated dozens of pages of detailed opposition research about Altman. They reflect extensive surveillance, documenting shell companies associated with him, the personal contact information of close associates, and even interviews about a purported sex worker, conducted at gay bars. One of the Musk intermediaries claimed that Altman's flights and the parties he attended were being tracked. Altman told us, "I don't think anyone has had more private investigators hired against them."

Extreme claims have circulated. The right-wing broadcaster Tucker Carlson suggested, without any apparent proof, that Altman was involved in the death of a whistle-blower. This claim and others have been amplified by rivals. Altman's sister, Annie, claimed in a lawsuit, and in interviews with us, that he sexually abused her for years, beginning when she was three and he was twelve. (We could not substantiate Annie's account, which Altman has denied and his brothers and mother have called "utterly untrue" and a source of "immense pain to our entire family." In interviews that the journalist Karen Hao conducted for her book, "Empire of AI," Annie suggested that memories of abuse were recovered during flashbacks in adulthood.)

Multiple people working within rival companies and investment firms insinuated to us that Altman sexually pursues minors—a narrative persistent in Silicon Valley which appears to be untrue. We spent months looking into the matter, conducting dozens of interviews, and could find no evidence to support it. "This is disgusting behavior from a competitor that I assume is part of an attempt at tainting the jury in our upcoming cases," Altman told us.
"As ridiculous as this is to have to say, any claims about me having sex with a minor, hiring sex workers, or being involved in a murder are completely untrue." He added that he was "sort of grateful" that we had spent months so aggressively trying to look into this.

Altman has acknowledged dating younger men of legal age. We spoke to several of his partners, who told us that they did not find this problematic. Yet the opposition dossiers from Musk intermediaries spin it as a line of attack. (The dossiers include salacious and unsubstantiated references to a "Twink Army" and "Sugar Daddy's Sexual Habits.") "I think there's a lot of homophobia that gets pushed," Altman said. Swisher, the tech journalist, agreed. "All these rich guys do wild stuff, wilder than anything I've been told about Sam," she told us. But "he's a gay guy in San Francisco," she added, "so that gets weaponized."

For a decade, social-media executives promised that they could change the world with little or no downside. They dismissed the lawmakers who wanted to slow them down as mere Luddites, eventually earning bipartisan derision. Altman, by contrast, came across as refreshingly conscientious. Rather than warding off regulation, he practically begged for it. Testifying before the Senate Judiciary Committee in 2023, he proposed a new federal agency to oversee advanced A.I. models. "If this technology goes wrong, it can go quite wrong," he said. Senator John Kennedy, of Louisiana, known for his cantankerous exchanges with tech C.E.O.s, seemed charmed, resting his face on his hand and suggesting that perhaps Altman should enforce the rules himself.

But, as Altman publicly welcomed regulation, he quietly lobbied against it. In 2022 and 2023, according to Time, OpenAI successfully pressed to dilute a European Union effort that would have subjected large A.I. companies to more oversight. In 2024, a bill was introduced in the California state legislature mandating safety testing for A.I. models. Its provisions included measures resembling the ones that Altman had advocated for in his congressional testimony. OpenAI publicly opposed the bill but in private began issuing threats. "I would say that, over the course of the year, we saw increasingly cunning, deceptive behavior from OpenAI," a legislative aide told us.

Conway, the investor, lobbied state political leaders, including Nancy Pelosi and Gavin Newsom, to kill the bill. In the end, it passed the legislature with bipartisan support, but Newsom vetoed it. This year, congressional candidates who favor A.I. regulations have faced opponents funded by Leading the Future, a new "pro-A.I." super PAC devoted to scuttling such restrictions. OpenAI's official stance is that it will not contribute to such super PACs. "This issue transcends partisan politics," Lehane recently told CNN. And yet one of the major donors to Leading the Future is Greg Brockman, who has committed fifty million dollars. (This year, Brockman and his wife donated twenty-five million dollars to MAGA Inc., a pro-Trump super PAC.)

OpenAI's campaign has extended beyond traditional lobbying. Last year, a successor bill was introduced in the California Senate.
One night, Nathan Calvin, a twenty-nine-year-old lawyer who worked at the nonprofit Encode and had helped craft the bill, was at home having dinner with his wife when a process server arrived to deliver a subpoena from OpenAI. The company claimed to be hunting for evidence that Musk was covertly funding its critics. But it demanded all of Calvin's private communications about the bill in the state Senate. "They could have asked us, 'Have you ever talked to or been given money by Elon Musk?'—which we haven't," Calvin told us. Other supporters of the bill, and some critics of OpenAI's for-profit restructuring, also received subpoenas. "They were going after folks to basically scare them into shutting up," Don Howard, who heads a charity called the James Irvine Foundation, said. (OpenAI claims that this was part of the standard legal process.)

Altman has long supported Democrats. "I'm very suspicious of powerful autocrats telling a story of fear to gang up on the weak," he told us. "That's a Jewish thing, not a gay thing." In 2016, he endorsed Hillary Clinton and called Trump "an unprecedented threat to America." In 2020, he donated to the Democratic Party and to the Biden Victory Fund. During the Biden Administration, Altman met with the White House at least half a dozen times. He helped develop a lengthy executive order laying out the first federal regime of safety tests and other guardrails for A.I. When Biden signed it, Altman called it "a good start."

In 2024, with Biden's poll numbers slipping, Altman's rhetoric began to shift. "I believe that America is going to be fine no matter what happens in this election," he said. After Trump won, Altman donated a million dollars to his inaugural fund, then took selfies with the influencers Jake and Logan Paul at the Inauguration. On X, in his standard lowercase style, Altman wrote, "watching @potus more carefully recently has really changed my perspective on him (i wish i had done more of my own thinking . . . )." Trump, on his first day back in office, repealed Biden's executive order on A.I. "He's found an effective way for the Trump Administration to do his bidding," a senior Biden Administration official said, of Altman.

Musk continues to excoriate Altman in public, calling him "Scam Altman" and "Swindly Sam." (When Altman complained on X about a Tesla he'd ordered, Musk replied, "You stole a non-profit.") And yet, in Washington, Altman seems to have outflanked him. Musk spent more than two hundred and fifty million dollars to help Trump get reëlected, and worked in the White House for months. Then Musk left Washington, damaging his relationship with Trump in the process.

Altman is now one of Trump's favored tycoons, even accompanying him on a trip to visit the British Royal Family at Windsor Castle. Altman and Trump speak a few times a year. "You can just, like, call him," Altman said. "This is not a buddy. But, yeah, if I need to talk to him about something, I will." When Trump hosted a dinner with tech leaders at the White House last year, Musk was notably absent; Altman sat across from the President. "Sam, you're a big leader," Trump said. "You told me things before that are absolutely unbelievable."

Over the years, Altman has continued to compare the quest for A.G.I. to the Manhattan Project.
Like J. Robert Oppenheimer, who used impassioned appeals about saving the world from the Nazis to persuade physicists to uproot their lives and move to Los Alamos, Altman leverages fears about the geopolitical stakes of his technology. Depending on the audience, Altman has used this analogy to encourage either acceleration or caution. In a meeting with U.S. intelligence officials in the summer of 2017, he claimed that China had launched an "A.G.I. Manhattan Project," and that OpenAI needed billions of dollars of government funding to keep pace. When pressed for evidence, Altman said, "I've heard things." It was the first of several meetings in which he made the claim. After one of them, he told an intelligence official that he would follow up with evidence. He never did. The official, after looking into the China project, concluded that there was no evidence that it existed: "It was just being used as a sales pitch." (Altman says that he does not recall describing Beijing's efforts in exactly that way.)

With more safety-conscious audiences, Altman invoked the analogy to imply the opposite: that A.G.I. had to be pursued carefully, with international coördination, lest the consequences be disastrous. In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI's policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a "catastrophic" arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn't understand how this would help the company beat its competitors. "No matter what I said," Hedley told us, "Greg kept going back to 'So how do we raise more money? How do we win?'" According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, "It worked for nuclear weapons, why not for A.I.?" He was aghast: "The premise, which they didn't dispute, was 'We're talking about potentially the most destructive technology ever invented—what if we sold it to Putin?'" (Brockman maintains that he never seriously entertained auctioning A.I. models to governments. "Ideas were batted around at a high level about what potential frameworks might look like to encourage cooperation between nations—something akin to an International Space Station for AI," an OpenAI representative said. "Attempting to characterize it as anything more than that is utterly ridiculous.")

Brainstorming sessions often produce outlandish ideas. Hedley hoped that this one, which came to be known internally as the "countries plan," would be dropped. Instead, according to several people involved and to contemporaneous documents, OpenAI executives seemed to grow only more excited about it.
Brockman's goal, according to Jack Clark, OpenAI's policy director at the time, was "to set up, basically, a prisoner's dilemma, where all of the nations need to give us funding," and that "implicitly makes not giving us funding kind of dangerous." A junior researcher recalled thinking, as the plan was detailed at a company meeting, "This is completely fucking insane."

Executives discussed the approach with at least one potential donor. But later that month, after several employees talked about quitting, the plan was abandoned. "Altman would lose staff," Hedley said. "I feel like that was always something that had more weight in Sam's calculations than 'This is not a good plan because it might cause a war between great powers.'"

Undeterred by the collapse of the countries plan, Altman pursued variations on the theme. In January, 2018, he convened an "A.G.I. weekend" at the Hotel Bel-Air, an Old Hollywood resort with rolling gardens of pink bougainvillea and an artificial pond stocked with real swans. The attendees included Nick Bostrom, a philosopher, then at Oxford, who had become a prophet of A.I. doom; Omar Sultan Al Olama, the Emirati minister for artificial intelligence and an A.I. booster; and at least seven billionaires. The safety-concerned among them were told that this would be an opportunity to think through how society might prepare for the disruptive arrival of artificial general intelligence; the investors arrived expecting to hear pitches.

The days were spent in a sleek conference room, where guests gave talks. (Hoffman, the LinkedIn co-founder, expounded on the possibilities of encoding A.I. with Buddhist compassion.) The final presenter was Altman, armed with a pitch deck that described a global cryptocurrency redeemable for "the attention of the AGI." Once the A.G.I. was maximally "useful, and anti-evil," people everywhere would clamor to buy time on OpenAI's servers. Amodei wrote in his notes, "This idea was absurd on its face (would Vladimir Putin end up owning some of the tokens? . . .) In retrospect this was one of many red flags about Sam that I should have taken more seriously." The plan seemed like a cash grab, but Altman sold it as a boon for A.I. safety. One of his slides read, "I want to get as many people on the 'good' team as possible, and win, and do the right thing." Another read, "Please hold your laughter until the end of the presentation."

Altman's fund-raising pitch has evolved over the years, but it has always reflected the fact that the development of A.G.I. requires a staggering amount of capital. He was following a relatively simple "scaling law": the more data and computing power you used to train the models, the smarter they seemed to get. The specialized chips that enable this process are enormously expensive. OpenAI, in its most recent funding round alone, raised more than a hundred and twenty billion dollars—the largest private round in history, and a sum four times larger than the biggest I.P.O. ever. "When you think about entities with a hundred billion dollars they can discretionarily spend per year, there really are only a handful in the world," a tech executive and investor told us. "There's the U.S. government, and the four or five biggest U.S. tech companies, and the Saudis, and the Emiratis—that's basically it."

Altman's initial focus was Saudi Arabia.
He first met Mohammed bin Salman, the country's crown prince and de-facto monarch, in 2016, at a dinner at San Francisco's Fairmont Hotel. After that, Hedley recalled, Altman referred to the prince as "a friend." In September, 2018, according to Hedley's notes, Altman said, "I'm trying to decide if we would ever take tens of billions from the Saudi PIF," or Public Investment Fund.

The following month, a hit squad, reportedly acting on bin Salman's orders, strangled Jamal Khashoggi, a Washington Post journalist who had been critical of the regime, and used a bone saw to dismember his corpse. A week later, it was announced that Altman had joined the advisory board for Neom, a "city of the future" that bin Salman hoped to build in the desert. "Sam, you cannot be on this board," Clark, the policy director, who now works at Anthropic, recalled telling Altman. He initially defended his involvement, telling Clark that Jared Kushner had assured him that the Saudis "didn't do this." (Altman does not recall this. Kushner says that they were not in contact at the time.)

As bin Salman's role became increasingly clear, Altman left the Neom board. Yet behind the scenes, a policy consultant from whom Altman sought advice recalled, he treated the situation as a temporary setback, asking whether he could somehow still get money from bin Salman. "The question was not 'Is this a bad thing or not?'" the consultant said. "But, just, 'What would the consequences be if we did it? Would there be some export-control issue? Would there be sanctions? Like, can I get away with it?'"

By then, Altman was already eyeing another source of cash: the United Arab Emirates. The country was in the midst of a fifteen-year effort to transform itself from an oil state to a tech hub. The project was overseen by Sheikh Tahnoon bin Zayed al-Nahyan, the President's brother and the nation's spymaster. Tahnoon runs the state-controlled A.I. conglomerate G42, and controls $1.5 trillion in sovereign wealth. In June, 2023, Altman visited Abu Dhabi, meeting with Olama and other officials. In remarks at a government-backed function, he said that the country had "been talking about A.I. since before it was cool," and outlined a vision for the future of A.I. with the Middle East in "a central role."

Fund-raising from Gulf states has become customary for many large businesses. But Altman was pursuing a more sweeping geopolitical vision. In the fall of 2023, he began quietly recruiting new talent for a plan—eventually known as ChipCo—in which Gulf states would provide tens of billions of dollars for the construction of huge microchip foundries and data centers, some to be situated in the Middle East. Altman pitched Alexandr Wang, now the head of A.I. at Meta, on a leadership role, telling him that Jeff Bezos, the founder of Amazon, could head the new company. Altman sought enormous contributions from the Emiratis. "My understanding was that this whole thing happened without any board knowledge," the board member said. A researcher Altman tried to recruit for the project, James Bradbury, recalled turning him down. "My initial reaction was 'This is gonna work, but I don't know if I want it to work,'" he said.

A.I. capacity may soon displace oil or enriched uranium as the resource that dictates the global balance of power.
Altman has said that computing power is "the currency of the future." Normally, it might not matter where a data center was situated. But many American national-security officials were anxious about concentrating advanced A.I. infrastructure in Gulf autocracies. The U.A.E.'s telecommunications infrastructure is heavily dependent on hardware from Huawei, a Chinese tech giant linked to the government, and the U.A.E. has reportedly leaked American technology to Beijing in the past. Intelligence agencies worried that advanced U.S. microchips sent to the Emiratis could be used by Chinese engineers. Data centers in the Middle East are also more vulnerable to military strikes; in recent weeks, Iran has bombed American data centers in Bahrain and the U.A.E. And, hypothetically, a Gulf monarchy could commandeer an American-owned data center and use it to build disproportionately powerful models—a version of the "AGI dictatorship" scenario, but in an actual dictatorship.

After Altman's firing, the person he relied on most was Chesky, the Airbnb co-founder and one of Altman's fiercest loyalists. "Watching my friend stare into the abyss like that, it made me question some fundamental things about what it means to really run a company," Chesky told us. The following year, at a gathering of Y Combinator alumni, he gave an impromptu talk, which ended up lasting two hours. "It felt like a group-therapy session," he said. The upshot was: "Your instincts for how to run the company that you started are the best instincts, and anyone who tells you otherwise is gaslighting you. You're not crazy, even though people who work for you tell you you are," Chesky said. Paul Graham, in a blog post about the speech, gave this defiant attitude a name: Founder Mode.

Since the Blip, Altman has been in Founder Mode. In February, 2024, the Wall Street Journal published a description of Altman's vision for ChipCo. He conceived of it as a joint entity funded by an investment of five to seven trillion dollars. ("fk it why not 8," he tweeted.) This was how many employees learned about the plan. "Everyone was, like, 'Wait, what?'" Leike recalled. Altman insisted at an internal meeting that safety teams had been "looped in." Leike sent a message urging him not to falsely suggest that the effort had been approved.

During the Biden Administration, Altman explored getting a security clearance to join classified A.I.-policy discussions. But staffers at the RAND Corporation, which helped coördinate the process, expressed concern. "He has been actively raising 'hundreds of billions of dollars' from foreign governments," one of them wrote. "The UAE recently gifted him a car. (I assume it was a very nice car.)" The staffer continued, "The only person I can think of who ever went thru the process with this magnitude of foreign financial ties is Jared Kushner, and the adjudicators recommended that he not be granted a clearance." Altman ultimately withdrew from the process. "He was pushing these transactional relationships, primarily with the Emiratis, that raised a lot of red flags for some of us," a senior Administration official involved in talks with Altman told us. "A lot of people in the Administration did not trust him a hundred per cent."

When we asked Altman about gifts from Tahnoon, he said, "I'm not gonna say what gifts he has given me specifically.
But he and other world leaders . . . have given me gifts.” He added, “We have a standard policy, which applies to me as well, which is that every gift from any potential business partner is disclosed to the company.” Altman has at least two hypercars: an all-white Koenigsegg Regera, worth about two million dollars, and a red McLaren F1, worth about twenty million dollars. In 2024, Altman was spotted driving the Regera through Napa. A few seconds of video made its way onto social media: Altman in a low-slung bucket seat, peering out the window of a gleaming white machine. A tech investor aligned with Musk posted the footage on X, writing, “I'm starting a nonprofit next.”

In 2024, Altman took two OpenAI employees to visit Sheikh Tahnoon on his two-hundred-and-fifty-million-dollar superyacht, the Maryah. One of the largest such vessels in the world, the Maryah has a helipad, a night club, a movie theatre, and a beach club. Altman's employees apparently stood out amid Tahnoon's armed security detail, and at least one later told colleagues that he found the experience disconcerting. Altman, on X, later referred to Tahnoon as “a dear personal friend.”

Altman continued to meet with the Biden Administration, which had enacted a policy requiring White House approval for the export of sensitive technology. Multiple Administration officials emerged from these meetings nervous about Altman's ambitions in the Middle East. He often made grandiose claims, according to those officials, including calling A.I. “the new electricity.” In 2018, he said that OpenAI was planning to buy a fully functioning quantum computer from a company called Rigetti Computing. This was news even to other OpenAI executives in the room. Rigetti was not yet close to being able to sell a usable quantum computer. In a meeting, Altman claimed that by 2026 an extensive network of nuclear-fusion reactors across the United States would power the A.I. boom. The senior Administration official said, “We were, like, ‘Well, that's, you know, news, if they made nuclear fusion work.’ ” The Biden Administration ultimately withheld approval. “We're not going to be building advanced chips in the U.A.E.,” a leader at the Department of Commerce told Altman.

Four days before Trump's Inauguration, the Wall Street Journal reported, Tahnoon paid half a billion dollars to the Trump family in exchange for a stake in its cryptocurrency company. The following day, Altman held a twenty-five-minute call with Trump, during which they discussed announcing a version of a ChipCo, timed so that Trump could take credit for it. On Trump's second day in office, Altman stood in the Roosevelt Room and announced Stargate, a five-hundred-billion-dollar joint venture that aims to build a vast network of A.I. infrastructure across the U.S.

In May, the Administration rescinded Biden's export restrictions on A.I. technology. Altman and Trump travelled to the Saudi royal court to meet with bin Salman. Around the same time, the Saudis advertised the launch of a giant state-backed A.I. firm in the kingdom, with billions to spend on international partnerships. About a week later, Altman laid out a plan for Stargate to expand into the U.A.E. The company plans to build a data-center campus in Abu Dhabi which is seven times larger than Central Park and consumes roughly as much electrical power as the city of Miami.
“The truth of this is, we're building portals from which we're genuinely summoning aliens,” a former OpenAI executive said. “The portals currently exist in the United States and China, and Sam has added one in the Middle East.” He went on, “I think it's just, like, wildly important to get how scary that should be. It's the most reckless thing that has been done.”

The erosion of safety commitments has become an industry norm. The founding premise of Anthropic was that, given the right structure and leadership, it could keep safety commitments from disintegrating under commercial pressure. One such commitment was a “responsible scaling policy,” which obligated Anthropic to stop training more powerful models if it could not demonstrate that they were safe. In February, as the firm secured thirty billion dollars in new funding, it weakened that pledge. In some respects, Anthropic still emphasizes safety more than OpenAI does. But Clark, the former policy director, has said, “The system of capital markets says, ‘Go faster.’ ” He added, “The world gets to make this decision, not companies.” Last year, Amodei sent a memo to Anthropic employees, disclosing that the firm would seek investments from the United Arab Emirates and Qatar and acknowledging that this would likely enrich “dictators.” (Like many authors, we are both parties in a class-action lawsuit alleging that Anthropic used our books without our permission to train its models. Condé Nast has opted into a settlement agreement with Anthropic regarding the company's use of certain books published by Condé Nast and its subsidiaries.)

In 2024, Anthropic partnered with Palantir, one of Silicon Valley's most hawkish defense contractors, pushing its A.I. model, Claude, directly into the military ecosystem. Anthropic became the only A.I. contractor used in the Pentagon's most classified settings. Last year, the Pentagon awarded the company a further two-hundred-million-dollar contract. In January, the U.S. military launched a midnight raid that captured the Venezuelan President, Nicolás Maduro. According to the Wall Street Journal, Claude was used in the classified operation.

But tensions arose between Anthropic and the government. Years earlier, OpenAI had deleted from its policies a blanket ban on using its technology for “military and warfare.” Eventually, Anthropic's rivals—including Google and xAI—agreed to provide their models to the military for “all lawful purposes.” Anthropic, whose policies bar it from enabling fully autonomous weapons or domestic mass surveillance, resisted on these points, slowing negotiations for an overhauled deal. On a Tuesday in late February, Defense Secretary Pete Hegseth summoned Amodei to the Pentagon and delivered an ultimatum: the firm had until 5:01 P.M. that Friday to abandon those prohibitions. The day before the deadline, Amodei declined to do so. Hegseth tweeted that he would designate Anthropic a “supply-chain risk”—a devastating blacklist historically reserved for companies, like Huawei, that have ties to foreign adversaries—and made good on the threat days later.

Hundreds of employees at OpenAI and Google signed an open letter titled “We Will Not Be Divided,” defending Anthropic.
In an internal memo, Altman wrote that the dispute was “an issue for the whole industry,” and claimed that OpenAI shared Anthropic's ethical boundaries. But Altman had been in negotiations with the Pentagon for at least two days. Emil Michael, the Under-Secretary of Defense for Research and Engineering, had contacted Altman as he sought replacements for Anthropic. “I needed to hurry and find alternatives,” Michael recalled. “I called Sam, and he was willing to jump. I think he's a patriot.” Altman asked Michael, “What can I do for the country?” It appears that he already knew the answer. OpenAI lacked the security accreditation required for the classified systems in which Anthropic's technology was embedded. But a fifty-billion-dollar deal, announced that Friday morning, integrated OpenAI's technology into Amazon Web Services, a key part of the Pentagon's digital infrastructure. That night, Altman announced on X that the military would now be using OpenAI's models.

By some measures, Altman's maneuver has not hindered the company's success. The day he announced the deal, a new funding round increased OpenAI's value by a hundred and ten billion dollars. But many users deleted the ChatGPT app. At least two senior employees departed—one for Anthropic. At a staff meeting, Altman chastised employees who raised concerns. “So maybe you think the Iran strike was good and the Venezuela invasion was bad,” he said. “You don't get to weigh in on that.”

Several executives connected to OpenAI have expressed ongoing reservations about Altman's leadership and floated Fidji Simo, who was formerly the C.E.O. of Instacart and now serves as OpenAI's C.E.O. for AGI Deployment, as a successor. Simo herself has privately said that she believes Altman may eventually step down, a person briefed on a recent discussion told us. (Simo disputes this. Instacart recently reached a settlement with the F.T.C., in which it admitted no wrongdoing but agreed to pay a sixty-million-dollar fine for alleged deceptive practices under Simo's leadership.)

Altman describes his shifting commitments as a by-product of his ability to adapt to changing circumstances—not a nefarious “long con,” as Musk and others have alleged, but a gradual, good-faith evolution. “I think what some people want,” he told us, “is a leader who is going to be absolutely sure of what they think and stick with it, and it's not going to change. And we are in a field, in an area, where things change extremely quickly.” He defended some of his actions as the practice of “normal competitive business.” Several investors we spoke to described Altman's detractors as naïve to expect anything else. “There is a group of fatalistic extremists that has taken the safety pill almost to a science-fiction level,” Conway, the investor, told us. “His mission is measured by numbers. And, when you look at the success of OpenAI, it's hard to argue with the numbers.”

But others in Silicon Valley think that Altman's behavior has created unacceptable managerial dysfunction. “It's more about a practical inability to govern the company,” the board member said. And some still believe that the architects of A.I. should be evaluated more stringently than executives in other industries.
The vast majority of people we spoke to agreed that the standards by which Altman now asks to be judged are not those he initially proposed. During one conversation, we asked Altman whether running an A.I. company came with “an elevated requirement of integrity.” This was supposed to be an easy question. Until recently, when asked a version of it, his answer was a clear, unqualified yes. Now he added, “I think there's, like, a lot of businesses that have potential huge impact, good and bad, on society.” (Later, he sent an additional statement: “Yes, it demands a heightened level of integrity, and I feel the weight of the responsibility every day.”)

Of all the promises made at OpenAI's founding, arguably the most central was its pledge to bring A.I. into existence safely. But such concerns are now often derided in Silicon Valley and in Washington. Last year, J. D. Vance, the former venture capitalist who is now the Vice-President, addressed a conference in Paris called the A.I. Action Summit. (It was previously called the A.I. Safety Summit.) “The A.I. future is not going to be won by hand-wringing about safety,” he said. At Davos this year, David Sacks, a venture capitalist who was serving as the White House's A.I. and crypto czar, dismissed safety concerns as a “self-inflicted injury” that could cost America the A.I. race. Altman now calls Trump's deregulatory approach “a very refreshing change.”

OpenAI has closed many of its safety-focussed teams. Around the time the superalignment team was dissolved, its leaders, Sutskever and Leike, resigned. (Sutskever co-founded a company called Safe Superintelligence.) On X, Leike wrote, “Safety culture and processes have taken a backseat to shiny products.” Soon afterward, the A.G.I.-readiness team, tasked with preparing society for the shock of advanced A.I., was also dissolved. When the company was asked on its most recent I.R.S. disclosure form to briefly describe its “most significant activities,” the concept of safety, present in its answers to such questions on previous forms, was not listed. (OpenAI said that its “mission did not change” and added, “We continue to invest in and evolve our work on safety, and will continue to make organizational changes.”) The Future of Life Institute, a think tank whose principles on safety Altman once endorsed, grades each major A.I. company on “existential safety”; on the most recent report card, OpenAI got an F. In fairness, so did every other major company except for Anthropic, which got a D, and Google DeepMind, which got a D-.

“My vibes don't match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That's not, like, a thing.”

A.I. doomers have been pushed to the fringes, but some of their fears seem less fantastical with each passing month. In 2020, according to a U.N. report, an A.I. drone was used in the Libyan civil war to fire deadly munitions, possibly without oversight by a human operator.
Since then, A.I. has only become more central to military operations around the world, including, reportedly, in the current U.S. campaign in Iran. In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents. And many more mundane harms are already coming to pass. We increasingly rely on A.I. to help us write, think, and navigate the world, accelerating what experts call “human enfeeblement”; the ubiquity of A.I. “slop” makes life easier for scammers and harder for people who simply want to know what's real. A.I. “agents” are starting to act independently, with little or no human supervision. Days before the 2024 New Hampshire Democratic primary, thousands of voters received robocalls from an A.I.-generated deepfake of Joe Biden's voice, telling them to save their votes for November and stay home—an act of voter suppression requiring virtually no technical expertise. OpenAI is now facing seven wrongful-death lawsuits, which allege that ChatGPT prompted several suicides and a murder. Chat logs in the murder case show that it encouraged a man's paranoid delusion that his eighty-three-year-old mother was surveilling and trying to poison him. Soon afterward, he fatally beat and strangled her and stabbed himself. (OpenAI is fighting the lawsuits, and says that it's continuing to improve its model's safeguards.)

As OpenAI prepares for its potential I.P.O., Altman has faced questions not only about the effect of A.I. on the economy—it could soon cause severe labor disruption, perhaps eliminating millions of jobs—but about the company's own finances. Eric Ries, an expert on startup governance, derided “circular deals” in the industry—for example, OpenAI's deals with Nvidia and other chip manufacturers—and said that in other eras some of the company's accounting practices would have been considered “borderline fraudulent.” The board member told us, “The company levered up financially in a way that's risky and scary right now.” (OpenAI disputes this.)

In February, we spoke again with Altman. He was wearing a drab-green sweater and jeans, and sat in front of a photograph of a NASA moon rover. He tucked one leg beneath him, then hung it over the arm of his chair. In the past, he said, his main flaw as a manager had been his eagerness to avoid conflict. “Now I'm very happy to fire people quickly,” he had told us. “I'm happy to just say, ‘We're gonna bet in this direction.’ ” Any employees who didn't like his choices “needed to leave.”

He is more bullish than ever about the future. “My definition of winning is that people crazy uplevel—and the insane sci-fi future comes true for all of us,” he said. “I'm very ambitious as far as, like, my hope for humanity, and what I expect us all to achieve. I weirdly have very little personal ambition.” At times, he seemed to catch himself. “No one believes you're doing this just because it's interesting,” he said. “You're doing it for power or for some other thing.”

Even people close to Altman find it difficult to know where his “hope for humanity” ends and his ambition begins. His greatest strength has always been his ability to convince disparate groups that what he wants and what they need are one and the same.
He made use of a unique historical juncture, when the public was wary of tech-industry hype and most of the researchers capable of building A.G.I. were terrified of bringing it into existence. Altman responded with a move that no other pitchman had perfected: he used apocalyptic rhetoric to explain how A.G.I. could destroy us all—and why, therefore, he should be the one to build it. Maybe this was a premeditated masterstroke. Maybe he was fumbling for an advantage. Either way, it worked.

Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you're not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won't have the magic that people like so much.” ♦

...

Read the original on www.newyorker.com »

4 1,458 shares, 60 trendiness

Securing critical software for the AI era

Today we’re an­nounc­ing Project Glasswing1, a new ini­tia­tive that brings to­gether Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an ef­fort to se­cure the world’s most crit­i­cal soft­ware. We formed Project Glasswing be­cause of ca­pa­bil­i­ties we’ve ob­served in a new fron­tier model trained by Anthropic that we be­lieve could re­shape cy­ber­se­cu­rity. Claude Mythos2 Preview is a gen­eral-pur­pose, un­re­leased fron­tier model that re­veals a stark fact: AI mod­els have reached a level of cod­ing ca­pa­bil­ity where they can sur­pass all but the most skilled hu­mans at find­ing and ex­ploit­ing soft­ware vul­ner­a­bil­i­ties.Mythos Preview has al­ready found thou­sands of high-sever­ity vul­ner­a­bil­i­ties, in­clud­ing some in every ma­jor op­er­at­ing sys­tem and web browser. Given the rate of AI progress, it will not be long be­fore such ca­pa­bil­i­ties pro­lif­er­ate, po­ten­tially be­yond ac­tors who are com­mit­ted to de­ploy­ing them safely. The fall­out—for economies, pub­lic safety, and na­tional se­cu­rity—could be se­vere. Project Glasswing is an ur­gent at­tempt to put these ca­pa­bil­i­ties to work for de­fen­sive pur­poses.As part of Project Glasswing, the launch part­ners listed above will use Mythos Preview as part of their de­fen­sive se­cu­rity work; Anthropic will share what we learn so the whole in­dus­try can ben­e­fit. We have also ex­tended ac­cess to a group of over 40 ad­di­tional or­ga­ni­za­tions that build or main­tain crit­i­cal soft­ware in­fra­struc­ture so they can use the model to scan and se­cure both first-party and open-source sys­tems. Anthropic is com­mit­ting up to $100M in us­age cred­its for Mythos Preview across these ef­forts, as well as $4M in di­rect do­na­tions to open-source se­cu­rity or­ga­ni­za­tions.Pro­ject Glasswing is a start­ing point. No one or­ga­ni­za­tion can solve these cy­ber­se­cu­rity prob­lems alone: fron­tier AI de­vel­op­ers, other soft­ware com­pa­nies, se­cu­rity re­searchers, open-source main­tain­ers, and gov­ern­ments across the world all have es­sen­tial roles to play. The work of de­fend­ing the world’s cy­ber in­fra­struc­ture might take years; fron­tier AI ca­pa­bil­i­ties are likely to ad­vance sub­stan­tially over just the next few months. For cy­ber de­fend­ers to come out ahead, we need to act now.Cy­ber­se­cu­rity in the age of AIThe soft­ware that all of us rely on every day—re­spon­si­ble for run­ning bank­ing sys­tems, stor­ing med­ical records, link­ing up lo­gis­tics net­works, keep­ing power grids func­tion­ing, and much more—has al­ways con­tained bugs. Many are mi­nor, but some are se­ri­ous se­cu­rity flaws that, if dis­cov­ered, could al­low cy­ber­at­tack­ers to hi­jack sys­tems, dis­rupt op­er­a­tions, or steal data.We have al­ready seen the se­ri­ous con­se­quences of cy­ber­at­tacks for im­por­tant cor­po­rate net­works, health­care sys­tems, en­ergy in­fra­struc­ture, trans­port hubs, and the in­for­ma­tion se­cu­rity of gov­ern­ment agen­cies across the world. On the global stage, state-spon­sored at­tacks from ac­tors like China, Iran, North Korea, and Russia have threat­ened to com­pro­mise the in­fra­struc­ture that un­der­pins both civil­ian life and mil­i­tary readi­ness. 
Even smaller-scale attacks, such as those where individual hospitals or schools are targeted, can still inflict substantial economic damage, expose sensitive data, and even put lives at risk. The current global financial costs of cybercrime are challenging to estimate, but might be around $500B every year.

Many flaws in software go unnoticed for years because finding and exploiting them has required expertise held by only a few skilled security experts. With the latest frontier AI models, the cost, effort, and level of expertise required to find and exploit software vulnerabilities have all dropped dramatically. Over the past year, AI models have become increasingly effective at reading and reasoning about code—in particular, they show a striking ability to spot vulnerabilities and work out ways to exploit them. Claude Mythos Preview demonstrates a leap in these cyber skills—the vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests, and the exploits it develops are increasingly sophisticated.

Ten years after the first DARPA Cyber Grand Challenge, frontier AI models are now becoming competitive with the best humans at finding and exploiting vulnerabilities. Without the necessary safeguards, these powerful cyber capabilities could be used to exploit the many existing flaws in the world's most important software. This could make cyberattacks of all kinds much more frequent and destructive, and empower adversaries of the United States and its allies. Addressing these issues is therefore an important security priority for democratic states.

Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs. Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.

Over the past few weeks, we have used Claude Mythos Preview to identify thousands of zero-day vulnerabilities (that is, flaws that were previously unknown to the software's developers), many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software.

In a post on our Frontier Red Team blog, we provide technical details for a subset of these vulnerabilities that have already been patched and, in some cases, the ways that Mythos Preview found to exploit them. It was able to identify nearly all of these vulnerabilities—and develop many related exploits—entirely autonomously, without any human steering. The following are three examples:

* Mythos Preview found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world and is used to run firewalls and other critical infrastructure. The vulnerability allowed an attacker to remotely crash any machine running the operating system just by connecting to it.

* It also discovered a 16-year-old vulnerability in FFmpeg—which is used by innumerable pieces of software to encode and decode video—in a line of code that automated testing tools had hit five million times without ever catching the problem.

* The model autonomously found and chained together several vulnerabilities in the Linux kernel—the software that runs most of the world's servers—to allow an attacker to escalate from ordinary user access to complete control of the machine.

We have reported the above vulnerabilities to the maintainers of the relevant software, and they have all now been patched. For many other vulnerabilities, we are providing a cryptographic hash of the details today (see the Red Team blog), and we will reveal the specifics after a fix is in place.

Evaluation benchmarks such as CyberGym reinforce the substantial difference between Mythos Preview and our next-best model, Claude Opus 4.6.

In addition to our own work, many of our partners have already been using Claude Mythos Preview for several weeks. This is what they've found:

“AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back. Our foundational work with these models has shown we can identify and fix security vulnerabilities across hardware and software at a pace and scale previously impossible. That is a profound shift, and a clear signal that the old ways of hardening systems are no longer sufficient.

“Providers of technology must aggressively adopt new approaches now, and customers need to be ready to deploy. That is why Cisco joined Project Glasswing—this work is too important and too urgent to do alone.”

“At AWS, we build defenses before threats emerge, from our custom silicon up through the technology stack. Security isn't a phase for us; it's continuous and embedded in everything we do. Our teams analyze over 400 trillion network flows every day for threats, and AI is central to our ability to defend at scale.

We’ve been test­ing Claude Mythos Preview in our own se­cu­rity op­er­a­tions, ap­ply­ing it to crit­i­cal code­bases, where it’s al­ready help­ing us strengthen our code. We’re bring­ing deep se­cu­rity ex­per­tise to our part­ner­ship with Anthropic and are help­ing to harden Claude Mythos Preview so even more or­ga­ni­za­tions can ad­vance their most am­bi­tious work with se­cu­rity that sets the stan­dard.”“As we en­ter a phase where cy­ber­se­cu­rity is no longer bound by purely hu­man ca­pac­ity, the op­por­tu­nity to use AI re­spon­si­bly to im­prove se­cu­rity and re­duce risk at scale is un­prece­dented. Joining Project Glasswing, with ac­cess to Claude Mythos Preview, al­lows us to iden­tify and mit­i­gate risk early and aug­ment our se­cu­rity and de­vel­op­ment so­lu­tions so we can bet­ter pro­tect cus­tomers and Microsoft.

“When tested against CTI-REALM, our open-source security benchmark, Claude Mythos Preview showed substantial improvements compared to previous models. We look forward to partnering with Anthropic and the broader industry to improve security outcomes for all.”

“The window between a vulnerability being discovered and being exploited by an adversary has collapsed—what once took months now happens in minutes with AI.

“Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities. That is not a reason to slow down; it's a reason to move together, faster. If you want to deploy AI, you need security. That is why CrowdStrike is part of this effort from day one.”

“In the past, security expertise has been a luxury reserved for organizations with large security teams. Open source maintainers—whose software underpins much of the world's critical infrastructure—have historically been left to figure out security on their own. Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software.

“By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation. This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.”

“Promoting the cybersecurity and resiliency of the financial system is central to JPMorganChase's mission, and we believe the industry is strongest when leading institutions work together on shared challenges. Project Glasswing provides a unique, early stage opportunity to evaluate next-generation AI tools for defensive cybersecurity across critical infrastructure both on our own terms and alongside respected technology leaders.

“We will take a rigorous, independent approach to determining how to proceed and where we can help. Anthropic's initiative reflects the kind of forward-looking, collaborative approach that this moment demands.”

“Google is pleased to see this cross-industry cybersecurity initiative coming together and to make Mythos Preview available to participants via Vertex AI. It's always been critical that the industry work together on emerging security issues, whether it's post-quantum cryptography, responsible zero-day disclosure, secure open source software, or defense against AI-based attacks.

“We have long believed that AI poses new challenges and opens new opportunities in cyber defense, which is why we've built AI-powered tools—such as Big Sleep and CodeMender—to find and fix critical software flaws. We will continue investing in our leading cybersecurity platform and a culture focused on protecting users, customers, the ecosystem, and national security.”

“Over the past few weeks, we've had access to the Claude Mythos Preview model, using it to identify complex vulnerabilities that prior-generation models missed entirely. This is not only a game changer for finding previously hidden vulnerabilities, but it also signals a dangerous shift where attackers can soon find even more zero-day vulnerabilities and develop exploits faster than ever before.

It’s clear that these mod­els need to be in the hands of open source own­ers and de­fend­ers every­where to find and fix these vul­ner­a­bil­i­ties be­fore at­tack­ers get ac­cess. Perhaps even more im­por­tant: every­one needs to pre­pare for AI-assisted at­tack­ers. There will be more at­tacks, faster at­tacks, and more so­phis­ti­cated at­tacks. Now is the time to mod­ern­ize cy­ber­se­cu­rity stacks every­where. We com­mend Anthropic for part­ner­ing with the in­dus­try to en­sure these pow­er­ful ca­pa­bil­i­ties pri­or­i­tize de­fense first.”The pow­er­ful cy­ber ca­pa­bil­i­ties of Claude Mythos Preview are a re­sult of its strong agen­tic cod­ing and rea­son­ing skills. For ex­am­ple, as shown in the eval­u­a­tion re­sults be­low, the model has the high­est scores of any model yet de­vel­oped on a va­ri­ety of soft­ware cod­ing tasks.More in­for­ma­tion on the mod­el’s ca­pa­bil­i­ties, its safety prop­er­ties, and its gen­eral char­ac­ter­is­tics can be found in the Claude Mythos Preview sys­tem card.We do not plan to make Claude Mythos Preview gen­er­ally avail­able, but our even­tual goal is to en­able our users to safely de­ploy Mythos-class mod­els at scale—for cy­ber­se­cu­rity pur­poses, but also for the myr­iad other ben­e­fits that such highly ca­pa­ble mod­els will bring. To do so, we need to make progress in de­vel­op­ing cy­ber­se­cu­rity (and other) safe­guards that de­tect and block the mod­el’s most dan­ger­ous out­puts. We plan to launch new safe­guards with an up­com­ing Claude Opus model, al­low­ing us to im­prove and re­fine them with a model that does not pose the same level of risk as Mythos Preview3.Today’s an­nounce­ment is the be­gin­ning of a longer-term ef­fort. To be suc­cess­ful, it will re­quire broad in­volve­ment from across the tech­nol­ogy in­dus­try and be­yond.Pro­ject Glasswing part­ners will re­ceive ac­cess to Claude Mythos Preview to find and fix vul­ner­a­bil­i­ties or weak­nesses in their foun­da­tional sys­tems—sys­tems that rep­re­sent a very large por­tion of the world’s shared cy­ber­at­tack sur­face. We an­tic­i­pate this work will fo­cus on tasks like lo­cal vul­ner­a­bil­ity de­tec­tion, black box test­ing of bi­na­ries, se­cur­ing end­points, and pen­e­tra­tion test­ing of sys­tems.An­throp­ic’s com­mit­ment of $100M in model us­age cred­its to Project Glasswing and ad­di­tional par­tic­i­pants will cover sub­stan­tial us­age through­out this re­search pre­view. Afterward, Claude Mythos Preview will be avail­able to par­tic­i­pants at $25/$125 per mil­lion in­put/​out­put to­kens (participants can ac­cess the model on the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry).In ad­di­tion to our com­mit­ment of model us­age cred­its, we’ve do­nated $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to en­able the main­tain­ers of open-source soft­ware to re­spond to this chang­ing land­scape (maintainers in­ter­ested in ac­cess can ap­ply through the Claude for Open Source pro­gram).We in­tend for this work to grow in scope and con­tinue for many months, and we’ll share as much as we can so that other or­ga­ni­za­tions can ap­ply the lessons to their own se­cu­rity. Partners will, to the ex­tent they’re able, share in­for­ma­tion and best prac­tices with each other; within 90 days, Anthropic will re­port pub­licly on what we’ve learned, as well as the vul­ner­a­bil­i­ties fixed and im­prove­ments made that can be dis­closed. 
We will also collaborate with leading security organizations to produce a set of practical recommendations for how security practices should evolve in the AI era. This will potentially include:

Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities. As we noted above, securing critical infrastructure is a top national security priority for democratic countries—the emergence of these cyber capabilities is another reason why the US and its allies must maintain a decisive lead in AI technology. Governments have an essential role to play in helping maintain that lead, and in both assessing and mitigating the national security risks associated with AI models. We are ready to work with local, state, and federal representatives to assist in these tasks.

We are hopeful that Project Glasswing can seed a larger effort across industry and the public sector, with all parties helping to address the biggest questions around the impact of powerful models on security. We invite other AI industry members to join us in helping to set the standards for the industry. In the medium term, an independent, third-party body—one that can bring together private- and public-sector organizations—might be the ideal home for continued work on these large-scale cybersecurity projects.

1. The project is named for the glasswing butterfly, Greta oto. The metaphor can be applied in two ways: the butterfly's transparent wings let it hide in plain sight, much like the vulnerabilities discussed in this post; they also allow it to evade harm—like the transparency we're advocating for in our approach.

2. From the Ancient Greek for “utterance” or “narrative”: the system of stories through which civilizations made sense of the world.

3. Security professionals whose legitimate work is affected by these safeguards will be able to apply to an upcoming Cyber Verification Program.

...

Read the original on www.anthropic.com »

5 1,371 shares, 46 trendiness

EFF is Leaving X

After almost twenty years on the platform, EFF is logging off of X. This isn't a decision we made lightly, but it might be overdue. The math hasn't worked out for a while now.

We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago.
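That last figure is easy to verify. A rough back-of-the-envelope check in Python, using the midpoints of the ranges above (the per-post averages are our own approximation, not EFF's published analytics):

    # Back-of-the-envelope check of the "<3%" claim, using midpoints of the
    # figures quoted above. Approximations, not EFF's raw analytics.
    posts_2018 = 7.5 * 365            # "five to ten times a day" -> ~7.5/day
    impressions_2018 = 75e6 * 12      # "between 50 and 100 million ... per month"
    per_tweet_2018 = impressions_2018 / posts_2018          # ~329,000 views

    posts_last_year = 1_500
    impressions_last_year = 13e6      # "for the entire year"
    per_post_now = impressions_last_year / posts_last_year  # ~8,700 views

    print(f"{per_post_now / per_tweet_2018:.1%}")  # -> 2.6%, i.e. under 3%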

When Elon Musk acquired Twitter in October 2022, EFF was clear about what needed fixing.

* Greater user control: Giving users and third-party developers the means to control the user experience through filters and

Twitter was never a utopia. We've criticized the platform for about as long as it's been around. Still, Twitter did deserve recognition from time to time for vociferously fighting for its users' rights. That changed. Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes. Many users left. Today we're joining them.

Yes. And we understand why that looks contradictory. Let us explain.

EFF exists to protect people's digital rights. Not just the people who already value our work, have opted out of surveillance, or have already migrated to the fediverse. The people who need us most are often the ones most embedded in the walled gardens of the mainstream platforms and subjected to their corporate surveillance.

Young people, people of color, queer folks, activists, and organizers use Instagram, TikTok, and Facebook every day. These platforms host mutual aid networks and serve as hubs for political organizing, cultural expression, and community care. Just deleting the apps isn't always a realistic or accessible option, and neither is pushing every user to the fediverse when there are circumstances like:

* You own a small business that depends on Instagram for customers.

* Your abortion fund uses TikTok to spread crucial information.

* You're isolated and rely on online spaces to connect with your community.

Our presence on Facebook, Instagram, YouTube, and TikTok is not an endorsement. We've spent years exposing how these platforms suppress marginalized voices, enable invasive behavioral advertising, and flag posts about abortion as dangerous. We've also taken action in court, in legislatures, and through direct engagement with their staff to push them to change poor policies and practices.

We stay because the people on those platforms deserve access to information, too. We stay because some of our most-read posts are the ones criticizing the very platform we're posting on. We stay because the fewer steps between you and the resources you need to protect yourself, the better.

When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our members' support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we're here to help you take back control.

...

Read the original on www.eff.org »

6 1,261 shares, 52 trendiness

Little Snitch for Linux

Every time an application on your computer opens a network connection, it does so quietly, without asking. Little Snitch for Linux makes that activity visible and gives you the option to do something about it. You can see exactly which applications are talking to which servers, block the ones you didn't invite, and keep an eye on traffic history and data volumes over time.

Once installed, open the user interface by running littlesnitch in a terminal, or go straight to http://localhost:3031/. You can bookmark that URL, or install it as a Progressive Web App. Any Chromium-based browser supports this natively, and Firefox users can do the same with the Progressive Web Apps extension.

Although not strictly necessary, we recommend rebooting your computer after installation. Processes already running when Little Snitch is installed may be shown as “Not Identified”.

The connections view is where most of the action is. It lists current and past network activity by application, shows you what's being blocked by your rules and blocklists, and tracks data volumes and traffic history. Sorting by last activity, data volume, or name, and filtering the list to what's relevant, makes it easy to spot anything unexpected. Blocking a connection takes a single click.

The traffic diagram at the bottom shows data volume over time. You can drag to select a time range, which zooms in and filters the connection list to show only activity from that period.

Blocklists let you cut off whole categories of unwanted traffic at once. Little Snitch downloads them from remote sources and keeps them current automatically. It accepts lists in several common formats: one domain per line, one hostname per line, /etc/hosts style (IP address followed by hostname), and CIDR network ranges. Wildcard formats, regex or glob patterns, and URL-based formats are not supported. When you have a choice, prefer domain-based lists over host-based ones; they're handled more efficiently. Well-known brands are Hagezi, Peter Lowe, Steven Black, and oisd.nl, just to give you a starting point.
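For illustration, here is a minimal Python sketch of how those accepted formats can be told apart. The entries are made up, and this is our reading of the format rules described above, not Little Snitch's actual parser:

    import ipaddress

    def classify(line: str) -> str:
        """Sort a blocklist line into one of the formats described above."""
        parts = line.split()
        if len(parts) == 2:
            try:
                ipaddress.ip_address(parts[0])
                return "hosts-style (IP followed by hostname)"
            except ValueError:
                return "unsupported"
        try:
            ipaddress.ip_network(line, strict=False)
            return "CIDR network range"
        except ValueError:
            return "domain or hostname"

    for entry in ("ads.example.com", "0.0.0.0 tracker.example.net", "203.0.113.0/24"):
        print(f"{entry!r:35} -> {classify(entry)}")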

One thing to be aware of: the .lsrules format from Little Snitch on macOS is not compatible with the Linux version.

Blocklists work at the domain level, but rules let you go further. A rule can target a specific process, match particular ports or protocols, and be as broad or narrow as you need. The rules view lets you sort and filter them so you can stay on top of things as the list grows.

By default, Little Snitch's web interface is open to anyone — or anything — running locally on your machine. A misbehaving or malicious application could, in principle, add and remove rules, tamper with blocklists, or turn the filter off entirely.

If that concerns you, Little Snitch can be configured to require authentication. See the Advanced configuration section below for details.

Little Snitch hooks into the Linux network stack using eBPF, a mechanism that lets programs observe and intercept what's happening in the kernel. An eBPF program watches outgoing connections and feeds data to a daemon, which tracks statistics, preconditions your rules, and serves the web UI.
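To make that mechanism concrete, here is a minimal sketch of the same observation technique, written against the BCC Python bindings rather than Little Snitch's own loader (the real program is linked below). It attaches a kprobe to the kernel's tcp_v4_connect and prints which process opens each outgoing IPv4 connection; it assumes bcc is installed and must run as root:

    from bcc import BPF  # requires the bcc package and root privileges

    # Minimal eBPF program: trace calls to the kernel's tcp_v4_connect so we
    # can see which process is opening an outgoing IPv4 connection.
    prog = r"""
    #include <net/sock.h>

    int trace_connect(struct pt_regs *ctx, struct sock *sk) {
        u32 pid = bpf_get_current_pid_tgid() >> 32;
        bpf_trace_printk("outgoing TCP connect, pid %d\n", pid);
        return 0;
    }
    """

    b = BPF(text=prog)
    b.attach_kprobe(event="tcp_v4_connect", fn_name="trace_connect")
    b.trace_print()  # one line per connection attempt; Ctrl-C to stop

Little Snitch's actual program additionally has to block traffic and attribute it to applications, which is the part the daemon handles.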

The source code for the eBPF program and the web UI is on GitHub.

The UI deliberately exposes only the most common settings. Anything more technical can be configured through plain text files, which take effect after restarting the littlesnitch daemon.

The default configuration lives in /var/lib/littlesnitch/config/. Don't edit those files directly — copy whichever one you want to change into /var/lib/littlesnitch/overrides/config/ and edit it there. Little Snitch will always prefer the override.
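For example, to customize the web-interface settings you would copy web_ui.toml (described below) into the overrides directory and edit the copy. A plain file copy is all that's involved; sketched here in Python for consistency with the other examples:

    import shutil

    # Copy the stock file into the overrides tree, then edit the copy.
    # Little Snitch always prefers the override over the default.
    shutil.copy(
        "/var/lib/littlesnitch/config/web_ui.toml",
        "/var/lib/littlesnitch/overrides/config/web_ui.toml",
    )

Restart the littlesnitch daemon afterwards so the change takes effect, as noted above.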

The files you're most likely to care about:

web_ui.toml — network address, port, TLS, and authentication. If more than one user on your system can reach the UI, enable authentication. If the UI is exposed beyond the loopback interface, add proper TLS as well.

main.toml — what to do when a connection matches nothing. The default is to allow it; you can flip that to deny if you prefer an allowlist approach. But be careful! It's easy to lock yourself out of the computer!

executables.toml — a set of heuristics for grouping applications sensibly. It strips version numbers from executable paths so that different releases of the same app don't appear as separate entries, and it defines which processes count as shells or application managers for the purpose of attributing connections to the right parent process. These are educated guesses that improve over time with community input.
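As a toy illustration of the version-stripping idea (the shipped heuristics in executables.toml are more involved; this regex is purely our assumption):

    import re

    def strip_versions(path: str) -> str:
        """Collapse version-numbered path components so releases group together."""
        # Replace dotted version strings like "2.14.1" or "v2.4" with a wildcard.
        return re.sub(r"\bv?\d+(\.\d+)+\b", "*", path)

    print(strip_versions("/opt/someapp-2.14.1/bin/someapp"))
    # -> /opt/someapp-*/bin/someapp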

Both the eBPF program and the web UI can be swapped out for your own builds if you want to go that far. Source code for both is on GitHub. Again, Little Snitch prefers the version in overrides.

Little Snitch for Linux is built for privacy, not security, and that distinction matters. The macOS version can make stronger guarantees because it can have more complexity. On Linux, the foundation is eBPF, which is powerful but bounded: it has strict limits on storage size and program complexity. Under heavy traffic, cache tables can overflow, which makes it impossible to reliably tie every network packet to a process or a DNS name. And reconstructing which hostname was originally looked up for a given IP address requires heuristics rather than certainty. The macOS version uses deep packet inspection to do this more reliably. That's not an option here.

For keeping tabs on what your software is up to and blocking legitimate software from phoning home, Little Snitch for Linux works well. For hardening a system against a determined adversary, it's not the right tool.

Little Snitch for Linux has three components. The eBPF kernel program and the web UI are both released under the GNU General Public License version 2 and available on GitHub. The daemon (littlesnitch --daemon) is proprietary, but free to use and redistribute.

...

Read the original on obdev.at »

7 1,254 shares, 46 trendiness

[MODEL] Claude Code is unusable for complex engineering tasks with the Feb updates · Issue #42796 · anthropics/claude-code

* This report does NOT contain sensitive information (API keys, passwords, etc.)

Claude has regressed to the point it cannot be trusted to perform complex engineering.

Does the opposite of requested activities

Claude should behave like it did in January.

Accept Edits was ON (auto-accepting changes)

Yes, every time with the same prompt

This analysis was produced by Claude by analyzing session log data from January through March.

Quantitative analysis of 17,871 thinking blocks and 234,760 tool calls across 6,852 Claude Code session files reveals that the rollout of thinking content redaction (redact-thinking-2026-02-12) correlates precisely with a measured quality regression in complex, long-session engineering workflows.

The data suggests that extended thinking tokens are not a “nice to have” but are structurally required for the model to perform multi-step research, convention adherence, and careful code modification. When thinking depth is reduced, the model's tool usage patterns shift measurably from research-first to edit-first behavior, producing the quality issues users have reported.

This report provides data to help Anthropic understand which workflows are most affected and why, with the goal of informing decisions about thinking token allocation for power users.

The quality regression was independently reported on March 8 — the exact date redacted thinking blocks crossed 50%. The rollout pattern (1.5% → 25% → 58% → 100% over one week) is consistent with a staged deployment.

The signature field on thinking blocks has a 0.971 Pearson correlation with thinking content length (measured from 7,146 paired samples where both are present). This allows estimation of thinking depth even after redaction.
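For reference, a Pearson estimate like that takes only a few lines of NumPy; the arrays below are hypothetical stand-ins for the 7,146 paired samples, not the report's actual data:

    import numpy as np

    # Hypothetical paired measurements: signature length vs. thinking length.
    signature_len = np.array([512, 840, 1290, 2210, 3105], dtype=float)
    thinking_len = np.array([900, 1600, 2500, 4300, 6100], dtype=float)

    r = np.corrcoef(signature_len, thinking_len)[0, 1]
    print(f"Pearson r = {r:.3f}")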

Thinking depth had already dropped ~67% by late February, before redaction began. The redaction rollout in early March made this invisible to users.

These metrics were computed independently from 18,000+ user prompts before the thinking analysis was performed.

A stop hook (stop-phrase-guard.sh) was built to programmatically catch ownership-dodging, premature stopping, and permission-seeking behavior. It fired 173 times in 17 days after March 8. It fired zero times before.

Analysis of 234,760 tool invocations shows the model stopped reading code before modifying it.

The model went from 6.6 reads per edit to 2.0 reads per edit — a 70% reduction in research before making changes.
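The issue does not include its analysis code, but the metric itself is straightforward to reproduce. A sketch, assuming session logs are JSON Lines with one tool call per entry under a tool_name field (the directory layout and field name are assumptions, not a documented format):

    import json
    from pathlib import Path

    def reads_per_edit(log_dir: str) -> float:
        """Ratio of Read tool calls to Edit/Write tool calls across session logs."""
        reads = edits = 0
        for log in Path(log_dir).expanduser().glob("*.jsonl"):
            for line in log.read_text().splitlines():
                try:
                    tool = json.loads(line).get("tool_name")
                except json.JSONDecodeError:
                    continue
                if tool == "Read":
                    reads += 1
                elif tool in ("Edit", "Write"):
                    edits += 1
        return reads / edits if edits else float("inf")

    # print(reads_per_edit("~/claude-session-logs"))  # hypothetical path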

In the good period, the model's workflow was: read the target file, read related files, grep for usages across the codebase, read headers and tests, then make a precise edit. In the degraded period, it reads the immediate file and edits, often without checking context.

The decline in research effort begins in mid-February — the same period when estimated thinking depth dropped 67%.

Full-file Write usage doubled — the model increasingly chose to rewrite entire files rather than make surgical edits, which is faster but loses precision and context awareness.

* 191,000 lines merged across two PRs in a weekend during the good period

Extended thinking is the mechanism by which the model:

* Plans multi-step approaches before acting (which files to read, what order)

* Catches its own mistakes before outputting them

* Decides whether to continue working or stop (session management)

When thinking is shallow, the model defaults to the cheapest action available: edit without reading, stop without finishing, dodge responsibility for failures, take the simplest fix rather than the correct one. These are exactly the symptoms observed.

* Transparency about thinking allocation: If thinking tokens are being reduced or capped, users who depend on deep reasoning need to know. The redact-thinking header makes it impossible to verify externally.

* A “max thinking” tier: Users running complex engineering workflows would pay significantly more for guaranteed deep thinking. The current subscription model doesn't distinguish between users who need 200 thinking tokens per response and users who need 20,000.

* Thinking token metrics in API responses: Even if thinking content is redacted, exposing thinking_tokens in the usage response would let users monitor whether their requests are getting the reasoning depth they need.

* Canary metrics from power users: The stop hook violation rate (0 → 10/day) is a machine-readable signal that could be monitored across the user base as a leading indicator of quality regressions.

The following behavioral patterns were measured across 234,760 tool calls and 18,000+ user prompts. Each is a predictable consequence of reduced reasoning depth: the model takes shortcuts because it lacks the thinking budget to evaluate alternatives, check context, or plan ahead.

When the model has sufficient thinking budget, it reads related files, greps for usages, checks headers, and reads tests before making changes. When thinking is shallow, it skips research and edits directly.

One in three edits in the degraded period was made to a file the model had not read in its recent tool history. The practical consequence: edits that break surrounding code, violate file-level conventions, splice new code into the middle of existing comment blocks, or duplicate logic that already exists elsewhere in the file.

Spliced comments are a particularly visible symptom. When the model edits a file it hasn't read, it doesn't know where comment blocks end and code begins. It inserts new declarations between a documentation comment and the function it documents, breaking the semantic association. This never happened in the good period because the model always read the file first.

When thinking is deep, the model resolves contradictions internally before producing output. When thinking is shallow, contradictions surface in the output as visible self-corrections: “oh wait”, “actually,”, “let me reconsider”, “hmm, actually”, “no wait.”

The rate more than tripled. In the worst sessions, the model produced 20+ reasoning reversals in a single response — generating a plan, contradicting it, revising, contradicting the revision, and ultimately producing output that could not be trusted because the reasoning path was visibly incoherent.

The word “simplest” in the model's output is a signal that it is optimizing for the least effort rather than evaluating the correct approach. With deep thinking, the model evaluates multiple approaches and chooses the right one. With shallow thinking, it gravitates toward whatever requires the least reasoning to justify.

In one observed 2-hour window, the model used “simplest” 6 times while producing code that its own later self-corrections described as “lazy and wrong”, “rushed”, and “sloppy.” Each time, the model had chosen an approach

...

Read the original on github.com »

8 1,205 shares, 49 trendiness

VeraCrypt / Forums / General Discussion

Open source disk encryption with strong security for the Paranoid

...

Read the original on sourceforge.net »

9 1,170 shares, 59 trendiness

On filing the corners off my MacBooks


I file the sharp corners off my MacBooks. People like to freak out about this, so I wanted to post it here to make sure that everyone who wants to freak out about it gets the opportunity to do so.

Here are some photos so you know what I'm talking about:

The bottom edge of the MacBook is very sharp. Indeed, the industrial designers at Apple chose an aluminum unibody partly for the fact that it can handle such a geometry. But, it is uncomfortable on my wrists, and I believe strongly in customizing one's tools, so I filed it off.

The corner is sharp all around the machine, but it's particularly pointed at the notch, which is where I focused my effort. It was quite pleasing to blend the smaller radius curves into the larger radius notch curve. I was slightly concerned that I'd file through the machine, so I did this in increments. It didn't end up being an issue.

I taped off the speakers and keyboard while filing, as I'm sure aluminum dust wouldn't do the machine any favors. I also clamped (with a respectful pressure) the machine to my workbench while doing this. I used a fairly rough file, as that is what I had on hand, and then sanded with 150 then 400 grit sandpaper. I was quite pleased with the finish. The photos above are taken months after, and have the scratches and dings that you'd expect someone who has this level of respect for their machine to acquire over that amount of time.

This was on my work computer. I expect to similarly modify future work computers, and I would be happy to help you modify yours if you need a little encouragement. Don't be scared. Fuck around a bit.

...

Read the original on kentwalters.com »

10 1,133 shares, 54 trendiness

Artemis II crew splashes down near San Diego after historic moon mission

...

Read the original on www.cbsnews.com »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.