Sam Altman May Control Our Future—Can He Be Trusted?

New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI. Altman promised to be a safe steward for A.I. But some of his colleagues believed that he was not trustworthy enough to, as one put it, "have his finger on the button."

In the fall of 2023, Ilya Sutskever, OpenAI's chief scientist, sent secret memos to three fellow-members of the organization's board of directors. For weeks, they'd been having furtive discussions about whether Sam Altman, OpenAI's C.E.O., and Greg Brockman, his second-in-command, were fit to run the company. Sutskever had once counted both men as friends. In 2019, he'd officiated Brockman's wedding, in a ceremony at OpenAI's offices that included a ring bearer in the form of a robotic hand. But as he grew convinced that the company was nearing its long-term goal—creating an artificial intelligence that could rival or surpass the cognitive capabilities of human beings—his doubts about Altman increased. As Sutskever put it to another board member at the time, "I don't think Sam is the guy who should have his finger on the button."

At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. "He was terrified," a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying."

Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different. The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company's success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, any person working to build this civilization-altering technology "bears a heavy burden and is taking on unprecedented responsibility." But the people who end up in these kinds of positions are often "a certain kind of person, someone who is interested in power, a politician, someone who likes it." In one of the memos, he seemed concerned with entrusting the technology to someone who "just tells people what they want to hear." If OpenAI's C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him.
Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman's role entrusted him with the future of humanity, but he could not be trusted.

Altman was in Las Vegas, attending a Formula 1 race, when Sutskever invited him to a video call with the board, then read a brief statement explaining that he was no longer an employee of OpenAI. The board, following legal advice, released a public message saying only that Altman had been removed because he was "not consistently candid in his communications." Many of OpenAI's investors and executives were shocked. Microsoft, which had invested some thirteen billion dollars in OpenAI, learned of the plan to fire Altman just moments before it happened. "I was very stunned," Satya Nadella, Microsoft's C.E.O., later said. "I couldn't get anything out of anybody." He spoke with the LinkedIn co-founder Reid Hoffman, an OpenAI investor and a Microsoft board member, who began calling around to determine whether Altman had committed a clear offense. "I didn't know what the fuck was going on," Hoffman told us. "We were looking for embezzlement, or sexual harassment, and I just found nothing."

Other business partners were similarly blindsided. When Altman called the investor Ron Conway to say that he'd been fired, Conway held up his phone to Representative Nancy Pelosi, with whom he was having lunch. "You better get out of here really quick," she told Conway. OpenAI was on the verge of closing a large investment from Thrive, a venture-capital firm founded by Josh Kushner, Jared Kushner's brother, whom Altman had known for years. The deal would value OpenAI at eighty-six billion dollars and allow many employees to cash out millions in equity. Kushner emerged from a meeting with Rick Rubin, the music producer, to a missed call from Altman. "We just immediately went to war," Kushner later said.

The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called "a sort of government-in-exile." Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman's executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as "just this weird fugue."

With the board silent, Altman's advisers built a public case for his return. Lehane has insisted that the firing was a coup orchestrated by rogue "effective altruists"—adherents of a belief system that focusses on maximizing the well-being of humanity, who had come to see A.I. as an existential threat. (Hoffman told Nadella that the firing might be due to "effective-altruism craziness.") Lehane—whose reported motto, after Mike Tyson, is "Everyone has a game plan until you punch them in the mouth"—urged Altman to wage an aggressive social-media campaign.
Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.

Altman interrupted his "war room" at six o'clock each evening with a round of Negronis. "You need to chill," he recalls saying. "Whatever's gonna happen is gonna happen." But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman conveyed to Mira Murati, who had given Sutskever material for his memos and was serving as the interim C.E.O. of OpenAI in that period, that his allies were "going all out" and "finding bad things" to damage her reputation, as well as those of others who had moved against him, according to someone with knowledge of the conversation. (Altman does not recall the exchange.)

Within hours of the firing, Thrive had put its planned investment on hold and suggested that the deal would be consummated—and employees would thus receive payouts—only if Altman returned. Texts from this period show Altman coördinating closely with Nadella. ("how about: satya and my top priority remains to save openai," Altman suggested, as the two worked on a statement. Nadella proposed an alternative: "to ensure OpenAI continues to thrive.") Microsoft soon announced that it would create a competing initiative for Altman and any employees who left OpenAI. A public letter demanding his return circulated at the organization. Some people who hesitated to sign it received imploring calls and messages from colleagues. A majority of OpenAI employees ultimately threatened to leave with Altman.

The board was backed into a corner. "Control Z, that's one option," Toner said—undo the firing. "Or the other option is the company falls apart." Even Murati eventually signed the letter. Altman's allies worked to win over Sutskever. Brockman's wife, Anna, approached him at the office and pleaded with him to reconsider. "You're a good person—you can fix this," she said. Sutskever later explained, in a court deposition, "I felt that if we were to go down the path where Sam would not return, then OpenAI would be destroyed." One night, Altman took an Ambien, only to be awakened by his husband, an Australian coder named Oliver Mulherin, who told him that Sutskever was wavering, and that people were telling Altman to speak with the board. "I woke up in this, like, crazy Ambien haze, and I was so disoriented," Altman told us. "I was, like, I cannot talk to the board right now."

In a series of increasingly tense calls, Altman demanded the resignations of board members who had moved to fire him. "I have to pick up the pieces of their mess while I'm in this crazy cloud of suspicion?" Altman recalled initially thinking, about his return. "I was just, like, Absolutely fucking not." Eventually, Sutskever, Toner, and McCauley lost their board seats. Adam D'Angelo, a founder of Quora, was the sole original member who remained. As a condition of their exit, the departing members demanded that the allegations against Altman—including that he pitted executives against one another and concealed his financial entanglements—be investigated. They also pressed for a new board that could oversee the outside inquiry with independence. But the two new members, the former Harvard president Lawrence Summers and the former Facebook C.T.O. Bret Taylor, were selected after close conversations with Altman.
"would you do this," Altman texted Nadella. "bret, larry summers, adam as the board and me as ceo and then bret handles the investigation." (McCauley later testified in a deposition that when Taylor was previously considered for a board seat she'd had concerns about his deference to Altman.)

Less than five days after his firing, Altman was reinstated. Employees now call this moment the "Blip," after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence. But the debate over Altman's trustworthiness has moved beyond OpenAI's boardroom. The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. "We need institutions worthy of the power they wield," Murati told us. "The board sought feedback, and I shared what I was seeing. Everything I shared was accurate, and I stand behind all of it." Altman's allies, on the other hand, have long dismissed the accusations. After the firing, Conway texted Chesky and Lehane demanding a public-relations offensive. "This is REPUTATIONAL TO SAM," he wrote. He told the Washington Post that Altman had been "mistreated by a rogue board of directors."

OpenAI has since become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of A.I. infrastructure, some of it concentrated within foreign autocracies. OpenAI is securing sweeping government contracts, setting standards for how A.I. is used in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.

Altman has promoted OpenAI's growth by touting a vision in which, he wrote in a 2024 blog post, "astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace." His rhetoric has helped sustain one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and many experts—at times including Altman—have warned that the industry is in a bubble. "Someone is going to lose a phenomenal amount of money," he told reporters last year. If the bubble pops, economic catastrophe may follow. If his most bullish projections prove correct, he may become one of the wealthiest and most powerful people on the planet.

In a tense call after Altman's firing, the board pressed him to acknowledge a pattern of deception. "This is just so fucked up," he said repeatedly, according to people on the call. "I can't change my personality." Altman says that he doesn't recall the exchange. "It's possible I meant something like 'I do try to be a unifying force,' " he told us, adding that this trait had enabled him to lead an immensely successful company.
He attributed the criticism to a tendency, especially early in his career, to be "too much of a conflict avoider." But a board member offered a different interpretation of his statement: "What it meant was 'I have this trait where I lie to people, and I'm not going to stop.' " Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn't be trusted?

One morning this winter, we met Altman at OpenAI's headquarters, in San Francisco, for one of more than a dozen conversations with him for this story. The company had recently moved into a pair of eleven-story glass towers, one of which had been occupied by Uber, another tech behemoth, whose co-founder and C.E.O., Travis Kalanick, seemed like an unstoppable prodigy—until he resigned, in 2017, under pressure from investors, who cited concerns about his ethics. (Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI's ChatGPT to "get to the edge of what's known in quantum physics.")

An employee gave us a tour of the office. In an airy space full of communal tables, there was an animated digital painting of the computer scientist Alan Turing; its eyes tracked us as we passed. The installation is a winking reference to the Turing test, the 1950 thought experiment about whether a machine can credibly imitate a person. (In a 2025 study, ChatGPT passed the test more reliably than actual humans did.) Typically, you can interact with the painting. But the sound had been disabled, our guide told us, because it wouldn't stop eavesdropping on employees and then butting into their conversations. Elsewhere in the office, plaques, brochures, and merchandise displayed the words "Feel the AGI." The phrase was originally associated with Sutskever, who used it to caution his colleagues about the risks of artificial general intelligence—the threshold at which machines match human cognitive capacities. After the Blip, it became a cheerful slogan hailing a superabundant future.

We met Altman in a generic-looking conference room on the eighth floor. "People used to tell me about decision fatigue, and I didn't get it," Altman told us. "Now I wear a gray sweater and jeans every day, and even picking which gray sweater out of my closet—I'm, like, I wish I didn't have to think about that." Altman has a youthful appearance—he is slender, with wide-set blue eyes and tousled hair—but he is now forty, and he and Mulherin have a one-year-old son, delivered by a surrogate. "I'm sure, like, being President of the United States would be a much more stressful job, but of all the jobs that I think I could reasonably do, this is the most stressful one I can imagine," he said, making eye contact with one of us, then with the other. "The way that I've explained this to my friends is: 'This was the most fun job in the world until the day we launched ChatGPT.' We were making these massive scientific discoveries—I think we did the most important piece of scientific discovery in, I don't know, many decades." He cast his eyes down. "And then, since the launch of ChatGPT, the decisions have gotten very difficult."

Altman grew up in Clayton, Missouri, an affluent suburb of St. Louis, as the eldest of four siblings.
His mother, Connie Gibstine, is a dermatologist; his father, Jerry Altman, was a real-estate broker and a housing activist. Altman attended a Reform synagogue and a private preparatory school that he has described as "not the kind of place where you would really stand up and talk about being gay." In general, though, the family's wealthy suburban circles were relatively liberal. When Altman was sixteen or seventeen, he said, he was out late in a predominantly gay neighborhood in St. Louis and was subjected to a brutal physical attack and homophobic slurs. Altman did not report the incident, and he was reluctant to give us more details on the record, saying that a fuller telling would "make me look like I'm manipulative or playing for sympathy." He dismissed the idea that this event, and his sexuality broadly, was significant to his identity. But, he said, "probably that has, like, some deep-seated psychological thing—that I think I'm over but I'm not—about not wanting more conflict."

Altman's attitude in childhood, his brother told The New Yorker, in 2016, was "I have to win, and I'm in charge of everything." He went to Stanford, where he attended regular off-campus poker games. "I think I learned more about life and business from that than I learned in college," he later said.

All Stanford students are ambitious, but many of the most enterprising among them drop out. The summer after his sophomore year, Altman went to Massachusetts to join the inaugural batch of entrepreneurs at Y Combinator, a startup "incubator" co-founded by the renowned software engineer Paul Graham. Each entrant joined Y.C. with an idea for a startup. (Altman's batch mates included founders of Reddit and Twitch.) Altman's project, eventually called Loopt, was a proto social network that used the locations of people's flip phones to tell their friends where they were. The company reflected his drive, and a tendency to interpret ambiguous situations to his advantage. Federal rules required that phone carriers be able to track the locations of phones for emergency services; Altman struck deals with carriers to tap these capabilities for the company's use.

Most of Altman's employees at Loopt liked him, but some said that they were struck by his tendency to exaggerate, even about trivial things. One recalled Altman bragging widely that he was a champion Ping-Pong player—"like, Missouri high-school Ping-Pong champ"—and then proving to be one of the worst players in the office. (Altman says that he was probably joking.) As Mark Jacobstein, an older Loopt employee who was asked by investors to act as Altman's "babysitter," later told Keach Hagey, for "The Optimist," a biography of Altman, "There's a blurring between 'I think I can maybe accomplish this thing' and 'I have already accomplished this thing' that in its most toxic form leads to Theranos," Elizabeth Holmes's fraudulent startup.

Groups of senior employees, concerned with Altman's leadership and lack of transparency, asked Loopt's board on two occasions to fire him as C.E.O., according to Hagey. But Altman inspired fierce loyalty, too. A former employee was told that a board member responded, "This is Sam's company, get back to fucking work." (A board member denied that the attempts to remove Altman as C.E.O. were serious.) Loopt struggled to gain users, and in 2012 it was acquired by a fintech company. The acquisition had been arranged, according to a person familiar with the deal, largely to help Altman save face. Still, by the time Graham retired from Y.C., in 2014, he had recruited Altman to be his successor as president. "I asked Sam in our kitchen," Graham told The New Yorker. "And he smiled, like, it worked. I had never seen an uncontrolled smile from Sam. It was like when you throw a ball of paper into the wastebasket across the room—that smile."

Altman's new role made him, at twenty-eight, a kingmaker. His job was to select the hungriest and most promising entrepreneurs, connect them with the best coders and investors, and help them develop their startups into industry-defining monopolies (while Y.C. took a six- or seven-per-cent cut). Altman oversaw a period of aggressive expansion, growing Y.C.'s roster of startups from dozens to hundreds. But several Silicon Valley investors came to believe that his loyalties were divided. An investor told us that Altman was known to "make personal investments, selectively, into the best companies, blocking outside investors." (Altman denies blocking anyone.) Altman had worked as a "scout" for the investment fund Sequoia Capital, as part of a program that involved investing in early-stage startups and taking a small cut of any profits. When Altman made an angel investment in Stripe, a financial-services startup, he insisted on a bigger portion, galling Sequoia's partners, a person familiar with the deal said. The person added, "It's a policy of 'Sam first.' " Altman is an investor in, by his own estimate, some four hundred other companies. (Altman denies this characterization of the Stripe deal. Around 2010, he made an initial investment of fifteen thousand dollars in Stripe, a two-per-cent share. The company is now valued at more than a hundred and fifty billion dollars.)

By 2018, several Y.C. partners were so frustrated with Altman's behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice. Altman told some Y.C. partners that he would resign as president but become chairman instead. In May, 2019, a blog post announcing that Y.C. had a new president came with an asterisk: "Sam is transitioning to Chairman of YC." A few months later, the post was edited to read "Sam Altman stepped away from any formal position at YC"; after that, the phrase was removed entirely. Nevertheless, as recently as 2021, a Securities and Exchange Commission filing listed Altman as the chairman of Y Combinator. (Altman says that he wasn't aware of this until much later.)

Altman has maintained over the years, both in public and in recent depositions, that he was never fired from Y.C., and he told us that he did not resist leaving. Graham has tweeted that "we didn't want him to leave, just to choose" between Y.C. and OpenAI. In a statement, Graham told us, "We didn't have the legal power to fire anyone. All we could do was apply moral pressure." In private, though, he has been unambiguous that Altman was removed because of Y.C. partners' mistrust.
This account of Altman's time at Y Combinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, "Sam had been lying to us all the time."

In May, 2015, Altman e-mailed Elon Musk, then the hundredth-richest person in the world. Like many prominent Silicon Valley entrepreneurs, Musk was preoccupied by an array of threats that he considered existentially urgent but which would have struck most people as far-fetched hypotheticals. "We need to be super careful with AI," he tweeted. "Potentially more dangerous than nukes."

Altman had generally been a techno-optimist, but his rhetoric about A.I. soon turned apocalyptic. In public, and in his private correspondence with Musk and others, he warned that the technology should not be dominated by a profit-seeking mega-corporation. "Been thinking a lot about whether it's possible to stop humanity from developing AI," he wrote to Musk. "If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first." Picking up on the analogy to nuclear weapons, he proposed "a Manhattan Project for AI." He outlined the overarching principles that such an organization would have—"safety should be a first-class requirement"; "obviously we'd comply with/aggressively support all regulation"—and he and Musk settled on a name: OpenAI.

Unlike the original Manhattan Project, a government initiative that led to the creation of the atom bomb, OpenAI would be privately funded, at least at first. Altman predicted that an artificial superintelligence—a theoretical threshold beyond even A.G.I., at which machines would fully eclipse the capabilities of the human mind—would eventually create enough economic benefits to "capture the light cone of all future value in the universe." But he also warned of existential danger. At some point, the national-security implications could grow so dire that the U.S. government would have to take control of OpenAI, perhaps by nationalizing it and moving its operations to a secure bunker in the desert. By late 2015, Musk was persuaded. "We should say that we are starting with a $1B funding commitment," he wrote. "I will cover whatever anyone else doesn't provide."

Altman housed OpenAI in Y Combinator's nonprofit arm, framing it as an internal philanthropic project. He gave OpenAI recruits Y.C. stock and moved donations through Y.C. accounts. At one point, the lab was supported by a Y.C. fund in which he held a personal stake. (Altman later described this stake as insignificant. He told us that the Y.C. stock he gave to recruits was his own.)

The Manhattan Project analogy applied to employee recruitment, too. Like nuclear-fission research, machine learning was a small scientific field with epochal implications which was dominated by a cadre of eccentric geniuses. Musk and Altman, along with Brockman, who joined from Stripe, were convinced that there were only a few computer scientists alive capable of making the required breakthroughs. Google had a huge cash advantage and a multiyear head start. "We are outmanned and outgunned by a ridiculous margin," Musk later wrote in an e-mail.
"But if we are able to attract the most talented people over time and our direction is correctly aligned, then OpenAI will prevail."

A top recruiting target was Sutskever, an intense and introverted researcher who was often called the most gifted A.I. scientist of his generation. Sutskever, who was born in the Soviet Union in 1986, has a receding hairline, dark eyes, and a habit of pausing, unblinking, while choosing his words. Another target was Dario Amodei, a biophysicist and a font of frenetic energy who has a tendency to nervously twist his black hair, and responds to one-line e-mails with multi-paragraph essays. Both had lucrative jobs elsewhere, but Altman lavished them with attention. He later joked, "I stalked Ilya."

Musk was the bigger name, but Altman had the smoother touch. He e-mailed Amodei, and they set up a one-on-one dinner at an Indian restaurant. (Altman: "fuck my uber got in a crash! running about 10 late." Amodei: "Wow, hope you're ok.") Like many A.I. researchers, Amodei believed that the technology should be built only if it was shown to be "aligned" with human values, meaning that it would act in accordance with what people wanted without making a potentially fatal error—say, following an instruction to clean up the environment by eliminating its greatest polluter, the human race. Altman was reassuring, mirroring these safety concerns.

Amodei, who later joined the company, took detailed notes on Altman and Brockman's behavior for years, under the heading "My Experience with OpenAI" (subheading: "Private: Do Not Share"). A collection of more than two hundred pages of documents related to Amodei, including those notes and internal e-mails and memos, has been circulated by colleagues in Silicon Valley but never before disclosed publicly. In his notes, Amodei wrote that Altman's goal was "to build an AI lab that would be focused on safety ('maybe not right away, but as soon as it can be')."

In December, 2015, hours before OpenAI was publicly announced, Altman e-mailed Musk about a rumor that Google was "going to give everyone in openAI massive counteroffers tomorrow to try to kill it." Musk replied, "Has Ilya come back with a solid yes?" Altman assured him that Sutskever was holding firm. Google offered Sutskever six million dollars a year, which OpenAI couldn't come close to matching. But, Altman boasted, "they unfortunately dont have 'do the right thing' on their side."

Musk provided some office space for OpenAI in a former suitcase factory in the Mission District of San Francisco. The pitch to employees, Sutskever told us, was "You're going to save the world."

If everything went right, the OpenAI founders believed, artificial intelligence could usher in a post-scarcity utopia, automating grunt work, curing cancer, and liberating people to enjoy lives of leisure and abundance. But if the technology went rogue, or fell into the wrong hands, the devastation could be total. China could use it to build a novel bioweapon or a fleet of advanced drones; an A.I. model could outmaneuver its overseers, replicating itself on secret servers so that it couldn't be turned off; in extreme cases, it might seize control of the energy grid, the stock market, or the nuclear arsenal.
Not everyone believed this, to say the least, but Altman repeatedly affirmed that he did. He wrote on his blog in 2015 that superhuman machine intelligence "does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn't care about us much either way, but in an effort to accomplish some other goal . . . wipes us out." OpenAI's founders vowed not to privilege speed over safety, and the organization's articles of incorporation made benefitting humanity a legally binding duty. If A.I. was going to be the most powerful technology in history, it followed that any individual with sole control over it stood to become uniquely powerful—a scenario that the founders referred to as "an AGI dictatorship."

Altman told early recruits that OpenAI would remain a pure nonprofit, and programmers took significant pay cuts to work there. The company accepted charitable grants, including thirty million dollars from what was then called Open Philanthropy, a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor.

Brockman and Sutskever managed OpenAI's daily operations, while Musk and Altman, still busy with their other jobs, stopped by around once a week. By September, 2017, though, Musk had grown impatient. During discussions about whether to reconstitute OpenAI as a for-profit company, he demanded majority control. Altman's replies varied depending on the context. His main consistent demand seems to have been that if OpenAI were reorganized under the control of a C.E.O., that job should go to him. Sutskever seemed uncomfortable with this idea. He sent Musk and Altman a long, plaintive e-mail on behalf of himself and Brockman, with the subject line "Honest Thoughts." He wrote, "The goal of OpenAI is to make the future good and to avoid an AGI dictatorship." He continued, addressing Musk, "So it is a bad idea to create a structure where you could become a dictator." He relayed similar concerns to Altman: "We don't understand why the CEO title is so important to you. Your stated reasons have changed, and it's hard to really understand what's driving it."

"Guys, I've had enough," Musk replied. "Either go do something on your own or continue with OpenAI as a nonprofit"—otherwise "I'm just being a fool who is essentially providing free funding for you to create a startup." He quit, acrimoniously, five months later. (In 2023, he founded a for-profit competitor called xAI. The following year, he sued Altman and OpenAI for fraud and breach of charitable trust, alleging that he had been "assiduously manipulated" by Altman's "long con"—that Altman had preyed on his concerns about the dangers of A.I. in order to separate him from his money. The suit, which OpenAI has vigorously contested, is ongoing.)

After Musk's departure, Amodei and other researchers chafed against the leadership of Brockman, whom some considered an abrasive operator, and of Sutskever, who was generally viewed as principled but disorganized. In the process of becoming C.E.O., Altman seems to have made different promises to different factions at the company. He assured some researchers that Brockman's managerial authority would be diminished.
But, unbeknownst to them, he also struck a secret handshake deal with Brockman and Sutskever: Altman would get the C.E.O. title; in exchange, he agreed to resign if the other two deemed it necessary. (He disputed this characterization, saying he took the C.E.O. role only because he was asked to. All three men confirmed that the pact existed, though Brockman said that it was informal. "He unilaterally told us that he'd step down if we ever both asked him to," he told us. "We objected to this idea, but he said it was important to him. It was purely altruistic.") Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board.

Internal records show that the founders had private doubts about the nonprofit structure as early as 2017. That year, after Musk tried to take control, Brockman wrote in a diary entry, "cannot say that we are committed to the non-profit . . . if three months later we're doing b-corp then it was a lie." Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted "money and power." Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, "Happy to not become rich on this, so long as no one else is." In another, he asks, "So what do I *really* want?" Among his answers is "Financially what will take me to $1B."

In 2017, Sutskever was in the office when he read a paper that Google researchers had just published, proposing "a new simple network architecture, the Transformer." He jumped out of his chair, ran down the hall, and told his fellow-researchers, "Stop everything you're doing. This is it." The Transformer, Sutskever saw, was an innovation that might enable OpenAI to train vastly more sophisticated models. Out of this discovery came the first generative pre-trained transformer—the seed of what would become ChatGPT.

As the technology became increasingly powerful, we learned, about a dozen of OpenAI's top engineers held a series of secret meetings to discuss whether OpenAI's founders, including Brockman and Altman, could be trusted. At one, an employee was reminded of a sketch by the British comedy duo Mitchell and Webb, in which a Nazi soldier on the Eastern Front, in a moment of clarity, asks, "Are we the baddies?"

By 2018, Amodei had started questioning the founders' motives more openly. "Everything was a rotating set of schemes to raise money," he later wrote in his notes. "I felt like what OpenAI needed was a clear statement of what it would do, what it would not do, and how its existence would make the world better." OpenAI already had a mission statement: "To ensure that artificial general intelligence benefits all of humanity." But it wasn't clear to Amodei what this meant to the executives, if it meant anything at all. In early 2018, Amodei has said, he started drafting a charter for the company and, in weeks of conversations with Altman and Brockman, advocated for its most radical clause: if a "value-aligned, safety-conscious project" came close to building an A.G.I. before OpenAI did, the company would "stop competing with and start assisting this project." According to the "merge and assist" clause, as it was called, if, say, Google's researchers figured out how to build a safe A.G.I. first, then OpenAI could wind itself down and donate its resources to Google.
By any normal corporate logic, this was an insane thing to promise. But OpenAI was not supposed to be a normal company.

That premise was tested in the spring of 2019, when OpenAI was negotiating a billion-dollar investment from Microsoft. Although Amodei, who was leading the company's safety team, had helped to pitch the deal to Bill Gates, many people on the team were anxious about it, fearing that Microsoft would insert provisions that overrode OpenAI's ethical commitments. Amodei presented Altman with a ranked list of safety demands, placing the preservation of the merge-and-assist clause at the very top. Altman agreed to that demand, but in June, as the deal was closing, Amodei discovered that a provision granting Microsoft the power to block OpenAI from any mergers had been added. "Eighty per cent of the charter was just betrayed," Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn't remember this.) Amodei's notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it "on good authority" from a senior executive that they had been plotting a coup. Daniela, the notes continue, "lost it," and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. "I didn't even say that," he said. "You just said that," Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of "political behavior.") In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI's chief rivals.

Altman continued touting OpenAI's commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about "deceptive alignment," in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals. (It's one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it's already happening.) Weeks after the paper was published, one of its authors, a Ph.D. student at the University of California, Berkeley, got an e-mail from Altman, who said that he was increasingly worried about the threat of unaligned A.I. He added that he was thinking of committing a billion dollars to the issue, which many A.I. experts considered the most important unsolved problem in the world, potentially by endowing a prize to incentivize researchers around the world to study it. Although the graduate student had heard vague rumors about "Sam being slippery," he told us, Altman's show of commitment won him over. He took an academic leave to join OpenAI.

But, in the course of several meetings in the spring of 2023, Altman seemed to waver. He stopped talking about endowing a prize.
Instead, he advocated for establishing an in-house "superalignment team." An official announcement, referring to the company's reserves of computing power, pledged that the team would get "20% of the compute we've secured to date"—a resource potentially worth more than a billion dollars. The effort was necessary, according to the announcement, because, if alignment remained unsolved, A.G.I. might lead to "the disempowerment of humanity or even human extinction." Jan Leike, who was appointed to lead the team with Sutskever, told us, "It was a pretty effective retention tool."

The twenty-per-cent commitment evaporated, however. Four people who worked on or closely with the team said that the actual resources were between one and two per cent of the company's compute. Furthermore, a researcher on the team said, "most of the superalignment compute was actually on the oldest cluster with the worst chips." The researchers believed that superior hardware was being reserved for profit-generating activities. (OpenAI disputes this.) Leike complained to Murati, then the company's chief technology officer, but she told him to stop pressing the point—the commitment had never been realistic.

Around this time, a former employee told us, Sutskever was getting "super safety-pilled." In the early days of OpenAI, he had considered concerns about catastrophic risk legitimate but remote. Now, as he came to believe that A.G.I. was imminent, his worries grew more acute. There was an all-hands meeting, the former employee continued, where Ilya gets up and he's, like, "Hey, everyone, there's going to be a point in the next few years where basically everyone at this company has to switch to working on safety, or else we're fucked."

But the superalignment team was dissolved the following year, without completing its mission.

By then, internal messages show, executives and board members had come to believe that Altman's omissions and deceptions might have ramifications for the safety of OpenAI's products. In a meeting in December, 2022, Altman assured board members that a variety of features in a forthcoming model, GPT-4, had been approved by a safety panel. Toner, the board member and A.I.-policy expert, requested documentation. She learned that the most controversial features—one that allowed users to "fine-tune" the model for specific tasks, and another that deployed it as a personal assistant—had not been approved. As McCauley, the board member and entrepreneur, left the meeting, an employee pulled her aside and asked if she knew about "the breach" in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India without completing a required safety review. "It just was kind of completely ignored," Jacob Hilton, an OpenAI researcher at the time, said.

Although these lapses did not cause security crises, Carroll Wainwright, another researcher, said that they were part of a "continual slide toward emphasizing products over safety." After the release of GPT-4, Leike e-mailed members of the board. "OpenAI has been going off the rails on its mission," he wrote.
"We are prioritizing the product and revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third." He continued, "Other companies like Google are learning that they should deploy faster and ignore safety problems."

McCauley, in an e-mail to her fellow-members, wrote, "I think we're definitely at a point where the board should be increasing its level of scrutiny." The board members tried to confront what they viewed as a mounting problem, but they were outmatched. "You had a bunch of J.V. people who've never done anything, to be blunt," Sue Yoon, a former board member, said. In 2023, the company was preparing to release its GPT-4 Turbo model. As Sutskever details in the memos, Altman apparently told Murati that the model didn't need safety approval, citing the company's general counsel, Jason Kwon. But when she asked Kwon, over Slack, he replied, "ugh . . . confused where sam got that impression." (A representative for OpenAI, where Kwon remains an executive, said that the matter was "not a big deal.")

Soon afterward, the board made its decision to fire Altman—and then the world watched as Altman reversed it. A version of the OpenAI charter is still on the organization's website. But people familiar with OpenAI's governing documents said that it has been diluted to the point of meaninglessness. Last June, on his personal blog, Altman wrote, referring to artificial superintelligence, "We are past the event horizon; the takeoff has started." This was, according to the charter, arguably the moment when OpenAI might stop competing with other companies and start working with them. But in that post, called "The Gentle Singularity," he adopted a new tone, replacing existential terror with ebullient optimism. "We'll all get better stuff," he wrote. "We will build ever-more-wonderful things for each other." He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram.

Altman is often described, either with reverence or with suspicion, as the greatest pitchman of his generation. Steve Jobs, one of his idols, was said to project a "reality-distortion field"—an unassailable confidence that the world would conform to his vision. But even Jobs never told his customers that if they didn't buy his brand of MP3 player everyone they loved would die. When Altman was twenty-three, in 2008, Graham, his mentor, wrote, "You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king." This judgment was based not on Altman's track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable. When advised not to include Y.C. alumni on a list of the world's top startup founders, Graham put Altman on it anyway. "Sam Altman can't be stopped by such flimsy rules," he wrote.

Graham meant this as a compliment. But some of Altman's closest colleagues came to have a different view of this quality. After Sutskever grew more distressed about A.I. safety, he compiled the memos about Altman and Brockman. They have since taken on a legendary status in Silicon Valley; in some circles, they are simply called the Ilya Memos. Meanwhile, Amodei was continuing to assemble notes.
These and the other documents related to him chart his shift from cautious idealism to alarm. His language is more heated than Sutskever's, by turns incensed at Altman—"His words were almost certainly bullshit"—and wistful about what he says was a failure to correct OpenAI's course.

Neither collection of documents contains a smoking gun. Rather, they recount an accumulation of alleged deceptions and manipulations, each of which might, in isolation, be greeted with a shrug: Altman purportedly offers the same job to two people, tells contradictory stories about who should appear on a live stream, dissembles about safety requirements. But Sutskever concluded that this kind of behavior "does not create an environment conducive to the creation of a safe AGI." Amodei and Sutskever were never close friends, but they reached similar conclusions. Amodei wrote, "The problem with OpenAI is Sam himself."

We have interviewed more than a hundred people with firsthand knowledge of how Altman conducts business: current and former OpenAI employees and board members; guests and staffers at Altman's various houses; his colleagues and competitors; his friends and enemies and several people who, given the mercenary culture of Silicon Valley, have been both. (OpenAI has an agreement with Condé Nast, the owner of The New Yorker, which allows OpenAI to display its content in search results for a limited term.)

Some people defended Altman's business acumen and dismissed his rivals, especially Sutskever and Amodei, as failed aspirants to his throne. Others portrayed them as gullible, absent-minded scientists, or as hysterical "doomers," gripped by the delusion that the software they were building would somehow come alive and kill them. Yoon, the former board member, argued that Altman was not "this Machiavellian villain" but merely, "to the point of fecklessness," able to convince himself of the shifting realities of his sales pitches. "He's too caught up in his own self-belief," she said. "So he does things that, if you live in the real world, make no sense. But he doesn't live in the real world."

Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. "He's unconstrained by truth," the board member told us. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."

The board member was not the only person who, unprompted, used the word "sociopathic." One of Altman's batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. "You need to understand that Sam can never be trusted," he told one. "He is a sociopath. He would do anything." Multiple senior executives at Microsoft said that, despite Nadella's long-standing loyalty, the company's relationship with Altman has become fraught. "He has misrepresented, distorted, renegotiated, reneged on agreements," one said.
Earlier this year, OpenAI reaffirmed Microsoft as the exclusive cloud provider for its "stateless"—or memoryless—models. That day, it announced a fifty-billion-dollar deal making Amazon the exclusive reseller of its enterprise platform for A.I. agents. While reselling is permitted, Microsoft executives argue that OpenAI's plan could collide with Microsoft's exclusivity. (OpenAI maintains that the Amazon deal will not violate the earlier contract; a Microsoft representative said the company is "confident that OpenAI understands and respects" its legal obligations.) The senior executive at Microsoft said, of Altman, "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer."

Altman is not a technical savant—according to many in his orbit, he lacks extensive expertise in coding or machine learning. Multiple engineers recalled him misusing or confusing basic technical terms. He built OpenAI, in large part, by harnessing other people's money and technical talent. This doesn't make him unique. It makes him a businessman. More remarkable is his ability to convince skittish engineers, investors, and a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities. When such people have tried to hinder his next move, he has often found the words to neutralize them, at least temporarily; usually, by the time they lose patience with him, he's got what he needs. "He sets up structures that, on paper, constrain him in the future," Wainwright, the former OpenAI researcher, said. "But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was."

"He's unbelievably persuasive. Like, Jedi mind tricks," a tech executive who has worked with Altman said. "He's just next level." A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win, much the way a grandmaster will beat a child at chess. Watching Altman outmaneuver the people around him during the Blip, the executive continued, had been "like watching an A.G.I. breaking out of the box."

In the days after his firing, Altman fought to avoid any outside investigation of the claims against him. He told two people that he worried even the existence of an investigation would make him look guilty. (Altman denies this.) But, after the resigning board members made their departure conditional on there being an independent inquiry, Altman acceded to a "review" of "recent events." The two new board members insisted that they control that review, according to people involved in the negotiations. Summers, with his network of political and Wall Street connections, seemed to lend it credibility. (Last November, after the disclosure of e-mails in which Summers sought Jeffrey Epstein's advice while pursuing a romantic relationship with a young protégée, he resigned from the board.) OpenAI enlisted WilmerHale, the distinguished law firm responsible for the internal investigations of Enron and WorldCom, to conduct the review.

Six people close to the inquiry alleged that it seemed designed to limit transparency.
Some of them said that the investigators initially did not contact important figures at the company. An employee reached out to Summers and Taylor to complain. "They were just interested in the narrow range of what happened during the board drama, and not the history of his integrity," the employee recalled of his interview with investigators. Others were uncomfortable sharing concerns about Altman because they felt there was not a sufficient effort to insure anonymity. "Everything pointed to the fact that they wanted to find the outcome, which is to acquit him," the employee said. (Some of the lawyers involved defended the process, saying, "It was an independent, careful, comprehensive review that followed the facts wherever they led." Taylor also said that the review was "thorough and independent.")

Corporate investigations aim to confer legitimacy. At private companies, their findings are sometimes not written down—this can be a way to limit liability. But in cases involving public scandals there is often a greater expectation of transparency. Before Kalanick left Uber, in 2017, its board hired an outside firm, which released a thirteen-page summary to the public. Given OpenAI's 501(c)(3) status and the high-profile nature of the firing, many executives there expected to see extensive findings. In March, 2024, however, OpenAI announced that it would clear Altman but released no report. The company provided, on its website, some eight hundred words acknowledging "a breakdown in trust."

People involved in the investigation said that no report was released because none was written. Instead, the findings were limited to oral briefings, shared with Summers and Taylor. "The review did not conclude that Sam was a George Washington cherry tree of integrity," one of the people close to the inquiry said. But the investigation appears not to have centered the questions of integrity behind Altman's firing, devoting much of its focus to a hunt for clear criminality; on that basis, it concluded that he could remain as C.E.O. Shortly thereafter, Altman, who had been kicked off the board when he was fired, rejoined it. The decision not to put the report in writing was made in part on the advice of Summers's and Taylor's personal attorneys, the person close to the inquiry told us. (Summers declined to comment on the record. Taylor said that, in light of the oral briefings, there had been no need for "a formal written report.")

Many former and current OpenAI employees told us that they were shocked by the lack of disclosure. Altman said he believed that all the board members who joined in the aftermath of his reinstatement received the oral briefings. "That's an absolute, outright lie," a person with direct knowledge of the situation said. Some board members told us that ongoing questions about the integrity of the report could prompt, as one put it, "a need for another investigation."

The absence of a written record helped minimize the allegations. So, increasingly, did Altman's stature in Silicon Valley. Multiple prominent investors who have worked with Altman told us that he has a reputation for freezing out investors if they back OpenAI's competitors. "If they invest in something that he doesn't like, they won't get access to other things," one of them said.
Another source of Altman’s power is his vast list of in­vest­ments, which at times ex­tends to his per­sonal life. He has fi­nan­cial en­tan­gle­ments with nu­mer­ous for­mer ro­man­tic part­ners: as a fund co-man­ager, a lead in­vestor, or a fre­quent co-in­vestor. This is hardly un­usual. Many of Silicon Valley’s straight ex­ec­u­tives do the same thing with their ro­man­tic and sex­ual part­ners. (“You have to,” one promi­nent C.E.O. told us.) I’ve ob­vi­ously in­vested with some exes af­ter the fact. And I think that’s, like, to­tally fine,” Altman said. But the dy­namic af­fords an ex­tra­or­di­nary level of con­trol. It cre­ates a very, very high de­pen­dence, es­sen­tially,” a per­son close to Altman said. Oftentimes, it’s a life­time de­pen­dence.”Even for­mer col­leagues can be af­fected. Murati left OpenAI in 2024 and be­gan build­ing her own A.I. startup. Josh Kushner, the close Altman ally, called her. He praised her lead­er­ship, then made what seemed to be a veiled threat, not­ing that he was concerned about” her reputation” and that for­mer col­leagues now viewed her as an enemy.” (Kushner, through a rep­re­sen­ta­tive, said that this ac­count did not convey full con­text”; Altman said that he was un­aware of the call.)At the be­gin­ning of his tenure as C.E.O., Altman had an­nounced that OpenAI would cre­ate a capped profit” com­pany, which would be owned by the non­profit. This byzan­tine cor­po­rate struc­ture ap­par­ently did not ex­ist un­til Altman de­vised it. In the midst of the con­ver­sion, a board mem­ber named Holden Karnofsky ob­jected to it, ar­gu­ing that the non­profit was be­ing se­verely un­der­val­ued. I can’t do that in good faith,” Karnofsky, who is Amodei’s brother-in-law, said. According to con­tem­po­ra­ne­ous notes, he voted against it. However, af­ter an at­tor­ney for the board said that his dis­sent might be a flag to in­ves­ti­gate fur­ther” the le­git­i­macy of the new struc­ture, his vote was recorded as an ab­sten­tion, ap­par­ently with­out his con­sent—a po­ten­tial fal­si­fi­ca­tion of busi­ness records. (OpenAI told us that sev­eral em­ploy­ees re­call Karnofsky ab­stain­ing, and pro­vided the min­utes from the meet­ing record­ing his vote as an ab­sten­tion.)Last October, OpenAI recapitalized” as a for-profit en­tity. The firm touts its as­so­ci­ated non­profit, now called the OpenAI Foundation, as one of the best re­sourced” in his­tory. But it is now a twenty-six-per-cent stake­holder of the com­pany, and its board mem­bers are also, with one ex­cep­tion, mem­bers of the for-profit board.Dur­ing con­gres­sional tes­ti­mony, Altman was asked if he made a lot of money.” He replied, I have no eq­uity in OpenAI . . . I’m do­ing this be­cause I love it”—a care­ful an­swer, given his in­di­rect eq­uity through the Y.C. fund. This is still tech­ni­cally true. But sev­eral peo­ple, in­clud­ing Altman, in­di­cated to us that it could soon change. Investors are, like, I need to know you’re gonna stick with this when times get hard,” Altman said, but added that there was no active dis­cus­sion” about it. According to a le­gal de­po­si­tion, Brockman seems to own a stake in the com­pany that is worth about twenty bil­lion dol­lars. Altman’s share would pre­sum­ably be worth more. Still, he told us that he was not pri­mar­ily mo­ti­vated by wealth. A for­mer em­ployee re­calls him say­ing, I don’t care about money. I care more about power.”In 2023, Altman mar­ried Mulherin in a small cer­e­mony at a home they own in Hawaii. 
(They’d met nine years prior, late at night in Peter Thiel’s hot tub.) They have hosted a range of guests at the prop­erty, and those we spoke with re­ported wit­ness­ing noth­ing more re­mark­able than the stan­dard di­ver­sions of the very wealthy: meals pre­pared by a pri­vate chef, boat rides at golden hour. One New Year’s party was Survivor”-themed; a pho­to­graph shows a num­ber of shirt­less, smil­ing men, and also Jeff Probst, the real host of Survivor.” Altman has also hosted smaller groups of friends at his prop­er­ties, gath­er­ings that have in­cluded, in at least one in­stance, a spir­ited game of strip poker. (A pho­to­graph of the event, which did not in­clude Altman, leaves un­clear who won, but at least three men clearly lost.) We spoke to many of Altman’s for­mer guests who sug­gested only that he is a gen­er­ous host.Nev­er­the­less, ru­mors about Altman’s per­sonal life have been ex­ploited and dis­torted by com­peti­tors. Ruthless busi­ness ri­val­ries are noth­ing new, but the com­pe­ti­tion within the A.I. in­dus­try has be­come ex­tra­or­di­nar­ily cut­throat. (“Shakespearean” was the word an OpenAI ex­ec­u­tive used to de­scribe it to us, adding, The nor­mal rules of the game sort of don’t ap­ply any­more.”) Intermediaries di­rectly con­nected to, and in at least one case com­pen­sated by, Musk have cir­cu­lated dozens of pages of de­tailed op­po­si­tion re­search about Altman. They re­flect ex­ten­sive sur­veil­lance, doc­u­ment­ing shell com­pa­nies as­so­ci­ated with him, the per­sonal con­tact in­for­ma­tion of close as­so­ci­ates, and even in­ter­views about a pur­ported sex worker, con­ducted at gay bars. One of the Musk in­ter­me­di­aries claimed that Altman’s flights and the par­ties he at­tended were be­ing tracked. Altman told us, I don’t think any­one has had more pri­vate in­ves­ti­ga­tors hired against them.”Ex­treme claims have cir­cu­lated. The right-wing broad­caster Tucker Carlson sug­gested, with­out any ap­par­ent proof, that Altman was in­volved in the death of a whis­tle-blower. This claim and oth­ers have been am­pli­fied by ri­vals. Altman’s sis­ter, Annie, claimed in a law­suit, and in in­ter­views with us, that he sex­u­ally abused her for years, be­gin­ning when she was three and he was twelve. (We could not sub­stan­ti­ate Annie’s ac­count, which Altman has de­nied and his broth­ers and mother have called utterly un­true” and a source of immense pain to our en­tire fam­ily.” In in­ter­views that the jour­nal­ist Karen Hao con­ducted for her book, Empire of AI,” Annie sug­gested that mem­o­ries of abuse were re­cov­ered dur­ing flash­backs in adult­hood.)Mul­ti­ple peo­ple work­ing within ri­val com­pa­nies and in­vest­ment firms in­sin­u­ated to us that Altman sex­u­ally pur­sues mi­nors—a nar­ra­tive per­sis­tent in Silicon Valley which ap­pears to be un­true. We spent months look­ing into the mat­ter, con­duct­ing dozens of in­ter­views, and could find no ev­i­dence to sup­port it. This is dis­gust­ing be­hav­ior from a com­peti­tor that I as­sume is part of an at­tempt at taint­ing the jury in our up­com­ing cases,” Altman told us. 
As ridicu­lous as this is to have to say, any claims about me hav­ing sex with a mi­nor, hir­ing sex work­ers, or be­ing in­volved in a mur­der are com­pletely un­true.” He added that he was sort of grate­ful” that we had spent months so ag­gres­sively try­ing to look into this.”“My apart­ment is full of smells that I per­son­ally am in no way re­spon­si­ble for.”Alt­man has ac­knowl­edged dat­ing younger men of le­gal age. We spoke to sev­eral of his part­ners, who told us that they did not find this prob­lem­atic. Yet the op­po­si­tion dossiers from Musk in­ter­me­di­aries spin it as a line of at­tack. (The dossiers in­clude sala­cious and un­sub­stan­ti­ated ref­er­ences to a Twink Army” and Sugar Daddy’s Sexual Habits.”) I think there’s a lot of ho­mo­pho­bia that gets pushed,” Altman said. Swisher, the tech jour­nal­ist, agreed. All these rich guys do wild stuff, wilder than any­thing I’ve been told about Sam,” she told us. But he’s a gay guy in San Francisco,” she added, so that gets weaponized.”For a decade, so­cial-me­dia ex­ec­u­tives promised that they could change the world with lit­tle or no down­side. They dis­missed the law­mak­ers who wanted to slow them down as mere Luddites, even­tu­ally earn­ing bi­par­ti­san de­ri­sion. Altman, by con­trast, came across as re­fresh­ingly con­sci­en­tious. Rather than ward­ing off reg­u­la­tion, he prac­ti­cally begged for it. Testifying be­fore the Senate Judiciary Committee in 2023, he pro­posed a new fed­eral agency to over­see ad­vanced A.I. mod­els. If this tech­nol­ogy goes wrong, it can go quite wrong,” he said. Senator John Kennedy, of Louisiana, known for his can­tan­ker­ous ex­changes with tech C.E.O.s, seemed charmed, rest­ing his face on his hand and sug­gest­ing that per­haps Altman should en­force the rules him­self.But, as Altman pub­licly wel­comed reg­u­la­tion, he qui­etly lob­bied against it. In 2022 and 2023, ac­cord­ing to Time, OpenAI suc­cess­fully pressed to di­lute a European Union ef­fort that would have sub­jected large A.I. com­pa­nies to more over­sight. In 2024, a bill was in­tro­duced in the California state leg­is­la­ture man­dat­ing safety test­ing for A.I. mod­els. Its pro­vi­sions in­cluded mea­sures re­sem­bling the ones that Altman had ad­vo­cated for in his con­gres­sional tes­ti­mony. OpenAI pub­licly op­posed the bill but in pri­vate be­gan is­su­ing threats. I would say that, over the course of the year, we saw in­creas­ingly cun­ning, de­cep­tive be­hav­ior from OpenAI,” a leg­isla­tive aide told us.Con­way, the in­vestor, lob­bied state po­lit­i­cal lead­ers, in­clud­ing Nancy Pelosi and Gavin Newsom, to kill the bill. In the end, it passed the leg­is­la­ture with bi­par­ti­san sup­port, but Newsom ve­toed it. This year, con­gres­sional can­di­dates who fa­vor A.I. reg­u­la­tions have faced op­po­nents funded by Leading the Future, a new pro-A.I.” su­per PAC de­voted to scut­tling such re­stric­tions. OpenAI’s of­fi­cial stance is that it will not con­tribute to such su­per PACs. This is­sue tran­scends par­ti­san pol­i­tics,” Lehane re­cently told CNN. And yet one of the ma­jor donors to Leading the Future is Greg Brockman, who has com­mit­ted fifty mil­lion dol­lars. (This year, Brockman and his wife do­nated twenty-five mil­lion dol­lars to MAGA Inc., a pro-Trump su­per PAC.)OpenAI’s cam­paign has ex­tended be­yond tra­di­tional lob­by­ing. Last year, a suc­ces­sor bill was in­tro­duced in the California Senate. 
One night, Nathan Calvin, a twenty-nine-year-old lawyer who worked at the non­profit Encode and had helped craft the bill, was at home hav­ing din­ner with his wife when a process server ar­rived to de­liver a sub­poena from OpenAI. The com­pany claimed to be hunt­ing for ev­i­dence that Musk was covertly fund­ing its crit­ics. But it de­manded all of Calvin’s pri­vate com­mu­ni­ca­tions about the bill in the state Senate. They could have asked us, Have you ever talked to or been given money by Elon Musk?’—which we haven’t,” Calvin told us. Other sup­port­ers of the bill, and some crit­ics of OpenAI’s for-profit re­struc­tur­ing, also re­ceived sub­poe­nas. They were go­ing af­ter folks to ba­si­cally scare them into shut­ting up,” Don Howard, who heads a char­ity called the James Irvine Foundation, said. (OpenAI claims that this was part of the stan­dard le­gal process.)Alt­man has long sup­ported Democrats. I’m very sus­pi­cious of pow­er­ful au­to­crats telling a story of fear to gang up on the weak,” he told us. That’s a Jewish thing, not a gay thing.” In 2016, he en­dorsed Hillary Clinton and called Trump an un­prece­dented threat to America.” In 2020, he do­nated to the Democratic Party and to the Biden Victory Fund. During the Biden Administration, Altman met with the White House at least half a dozen times. He helped de­velop a lengthy ex­ec­u­tive or­der lay­ing out the first fed­eral regime of safety tests and other guardrails for A.I. When Biden signed it, Altman called it a good start.”In 2024, with Biden’s poll num­bers slip­ping, Altman’s rhetoric be­gan to shift. I be­lieve that America is go­ing to be fine no mat­ter what hap­pens in this elec­tion,” he said. After Trump won, Altman do­nated a mil­lion dol­lars to his in­au­gural fund, then took self­ies with the in­flu­encers Jake and Logan Paul at the Inauguration. On X, in his stan­dard low­er­case style, Altman wrote, watching @potus more care­fully re­cently has re­ally changed my per­spec­tive on him (i wish i had done more of my own think­ing . . . ).” Trump, on his first day back in of­fice, re­pealed Biden’s ex­ec­u­tive or­der on A.I. He’s found an ef­fec­tive way for the Trump Administration to do his bid­ding,” a se­nior Biden Administration of­fi­cial said, of Altman.Musk con­tin­ues to ex­co­ri­ate Altman in pub­lic, call­ing him Scam Altman” and Swindly Sam.” (When Altman com­plained on X about a Tesla he’d or­dered, Musk replied, You stole a non-profit.”) And yet, in Washington, Altman seems to have out­flanked him. Musk spent more than two hun­dred and fifty mil­lion dol­lars to help Trump get reëlected, and worked in the White House for months. Then Musk left Washington, dam­ag­ing his re­la­tion­ship with Trump in the process.Alt­man is now one of Trump’s fa­vored ty­coons, even ac­com­pa­ny­ing him on a trip to visit the British Royal Family at Windsor Castle. Altman and Trump speak a few times a year. You can just, like, call him,” Altman said. This is not a buddy. But, yeah, if I need to talk to him about some­thing, I will.” When Trump hosted a din­ner with tech lead­ers at the White House last year, Musk was no­tably ab­sent; Altman sat across from the President. Sam, you’re a big leader,” Trump said. You told me things be­fore that are ab­solutely un­be­liev­able.”Over the years, Altman has con­tin­ued to com­pare the quest for A.G.I. to the Manhattan Project. Like J. 
Robert Oppenheimer, who used im­pas­sioned ap­peals about sav­ing the world from the Nazis to per­suade physi­cists to up­root their lives and move to Los Alamos, Altman lever­ages fears about the geopo­lit­i­cal stakes of his tech­nol­ogy. Depending on the au­di­ence, Altman has used this anal­ogy to en­cour­age ei­ther ac­cel­er­a­tion or cau­tion. In a meet­ing with U.S. in­tel­li­gence of­fi­cials in the sum­mer of 2017, he claimed that China had launched an A.G.I. Manhattan Project,” and that OpenAI needed bil­lions of dol­lars of gov­ern­ment fund­ing to keep pace. When pressed for ev­i­dence, Altman said, I’ve heard things.” It was the first of sev­eral meet­ings in which he made the claim. After one of them, he told an in­tel­li­gence of­fi­cial that he would fol­low up with ev­i­dence. He never did. The of­fi­cial, af­ter look­ing into the China pro­ject, con­cluded that there was no ev­i­dence that it ex­isted: It was just be­ing used as a sales pitch.” (Altman says that he does not re­call de­scrib­ing Beijing’s ef­forts in ex­actly that way.)With more safety-con­scious au­di­ences, Altman in­voked the anal­ogy to im­ply the op­po­site: that A.G.I. had to be pur­sued care­fully, with in­ter­na­tional coör­di­na­tion, lest the con­se­quences be dis­as­trous. In 2017, Amodei hired Page Hedley, a for­mer pub­lic-in­ter­est lawyer, to be OpenAI’s pol­icy and ethics ad­viser. In an early PowerPoint pre­sen­ta­tion to ex­ec­u­tives, Hedley out­lined how OpenAI might avert a catastrophic” arms race—per­haps by build­ing a coali­tion of A.I. labs that would even­tu­ally coör­di­nate with an in­ter­na­tional body akin to NATO, to in­sure that the tech­nol­ogy was de­ployed safely. As Hedley re­called it, Brockman did­n’t un­der­stand how this would help the com­pany beat its com­peti­tors. No mat­ter what I said,” Hedley told us, Greg kept go­ing back to So how do we raise more money? How do we win?’ ” According to sev­eral in­ter­views and con­tem­po­ra­ne­ous records, Brockman of­fered a coun­ter­pro­posal: OpenAI could en­rich it­self by play­ing world pow­ers—in­clud­ing China and Russia—against one an­other, per­haps by start­ing a bid­ding war among them. According to Hedley, the think­ing seemed to be, It worked for nu­clear weapons, why not for A.I.?He was aghast: The premise, which they did­n’t dis­pute, was We’re talk­ing about po­ten­tially the most de­struc­tive tech­nol­ogy ever in­vented—what if we sold it to Putin?’ ” (Brockman main­tains that he never se­ri­ously en­ter­tained auc­tion­ing A.I. mod­els to gov­ern­ments. Ideas were bat­ted around at a high level about what po­ten­tial frame­works might look like to en­cour­age co­op­er­a­tion be­tween na­tions—some­thing akin to an International Space Station for AI,” an OpenAI rep­re­sen­ta­tive said. Attempting to char­ac­ter­ize it as any­thing more than that is ut­terly ridicu­lous.”)Brain­storm­ing ses­sions of­ten pro­duce out­landish ideas. Hedley hoped that this one, which came to be known in­ter­nally as the countries plan,” would be dropped. Instead, ac­cord­ing to sev­eral peo­ple in­volved and to con­tem­po­ra­ne­ous doc­u­ments, OpenAI ex­ec­u­tives seemed to grow only more ex­cited about it. 
Brockman’s goal, ac­cord­ing to Jack Clark, OpenAI’s pol­icy di­rec­tor at the time, was to set up, ba­si­cally, a pris­on­er’s dilemma, where all of the na­tions need to give us fund­ing,” and that implicitly makes not giv­ing us fund­ing kind of dan­ger­ous.” A ju­nior re­searcher re­called think­ing, as the plan was de­tailed at a com­pany meet­ing, This is com­pletely fuck­ing in­sane.”Ex­ec­u­tives dis­cussed the ap­proach with at least one po­ten­tial donor. But later that month, af­ter sev­eral em­ploy­ees talked about quit­ting, the plan was aban­doned. Altman would lose staff,” Hedley said. I feel like that was al­ways some­thing that had more weight in Sam’s cal­cu­la­tions than This is not a good plan be­cause it might cause a war be­tween great pow­ers.’ ”“I can­not wait for crop tops to go out of style.”Un­de­terred by the col­lapse of the coun­tries plan, Altman pur­sued vari­a­tions on the theme. In January, 2018, he con­vened an A.G.I. week­end” at the Hotel Bel-Air, an Old Hollywood re­sort with rolling gar­dens of pink bougainvil­lea and an ar­ti­fi­cial pond stocked with real swans. The at­ten­dees in­cluded Nick Bostrom, a philoso­pher, then at Oxford, who had be­come a prophet of A.I. doom; Omar Al Olama, an Emirati sul­tan and an A.I. booster; and at least seven bil­lion­aires. The safety-con­cerned among them were told that this would be an op­por­tu­nity to think through how so­ci­ety might pre­pare for the dis­rup­tive ar­rival of ar­ti­fi­cial gen­eral in­tel­li­gence; the in­vestors ar­rived ex­pect­ing to hear pitches.The days were spent in a sleek con­fer­ence room, where guests gave talks. (Hoffman, the LinkedIn co-founder, ex­pounded on the pos­si­bil­i­ties of en­cod­ing A.I. with Buddhist com­pas­sion.) The fi­nal pre­sen­ter was Altman, armed with a pitch deck that de­scribed a global cryp­tocur­rency redeemable for the at­ten­tion of the AGI.” Once the A.G.I. was max­i­mally use­ful, and anti-evil,” peo­ple every­where would clamor to buy time on OpenAI’s servers. Amodei wrote in his notes, This idea was ab­surd on its face (would Vladimir Putin end up own­ing some of the to­kens? . . .) In ret­ro­spect this was one of many red flags about Sam that I should have taken more se­ri­ously.” The plan seemed like a cash grab, but Altman sold it as a boon for A.I. safety. One of his slides read, I want to get as many peo­ple on the good’ team as pos­si­ble, and win, and do the right thing.” Another read, Please hold your laugh­ter un­til the end of the pre­sen­ta­tion.”Alt­man’s fund-rais­ing pitch has evolved over the years, but it has al­ways re­flected the fact that the de­vel­op­ment of A.G.I. re­quires a stag­ger­ing amount of cap­i­tal. He was fol­low­ing a rel­a­tively sim­ple scaling law”: the more data and com­put­ing power you used to train the mod­els, the smarter they seemed to get. The spe­cial­ized chips that en­able this process are enor­mously ex­pen­sive. OpenAI, in its most re­cent fund­ing round alone, raised more than a hun­dred and twenty bil­lion dol­lars—the largest pri­vate round in his­tory, and a sum four times larger than the biggest I.P.O. ever. When you think about en­ti­ties with a hun­dred bil­lion dol­lars they can dis­cre­tionar­ily spend per year, there re­ally are only a hand­ful in the world,” a tech ex­ec­u­tive and in­vestor told us. There’s the U.S. gov­ern­ment, and the four or five biggest U.S. tech com­pa­nies, and the Saudis, and the Emiratis—that’s ba­si­cally it.”Alt­man’s ini­tial fo­cus was Saudi Arabia. 
He first met Mohammed bin Salman, the coun­try’s crown prince and de-facto monarch, in 2016, at a din­ner at San Francisco’s Fairmont Hotel. After that, Hedley re­called, Altman re­ferred to the prince as a friend.” In September, 2018, ac­cord­ing to Hedley’s notes, Altman said, I’m try­ing to de­cide if we would ever take tens of bil­lions from the Saudi PIF,” or pub­lic in­vest­ment fund.The fol­low­ing month, a hit squad, re­port­edly act­ing on bin Salman’s or­ders, stran­gled Jamal Khashoggi, a Washington Post jour­nal­ist who had been crit­i­cal of the regime, and used a bone saw to dis­mem­ber his corpse. A week later, it was an­nounced that Altman had joined the ad­vi­sory board for Neom, a city of the fu­ture” that bin Salman hoped to build in the desert. Sam, you can­not be on this board,” Clark, the pol­icy di­rec­tor, who now works at Anthropic, re­called telling Altman. He ini­tially de­fended his in­volve­ment, telling Clark that Jared Kushner had as­sured him that the Saudis didn’t do this.” (Altman does not re­call this. Kushner says that they were not in con­tact at the time.)As bin Salman’s role be­came in­creas­ingly clear, Altman left the Neom board. Yet be­hind the scenes, a pol­icy con­sul­tant from whom Altman sought ad­vice re­called, he treated the sit­u­a­tion as a tem­po­rary set­back, ask­ing whether he could some­how still get money from bin Salman. The ques­tion was not Is this a bad thing or not?’ ” the con­sul­tant said. But, just, What would the con­se­quences be if we did it? Would there be some ex­port-con­trol is­sue? Would there be sanc­tions? Like, can I get away with it?’ ”By then, Altman was al­ready ey­ing an­other source of cash: the United Arab Emirates. The coun­try was in the midst of a fif­teen-year ef­fort to trans­form it­self from an oil state to a tech hub. The pro­ject was over­seen by Sheikh Tahnoon bin Zayed al-Nahyan, the President’s brother and the na­tion’s spy­mas­ter. Tahnoon runs the state-con­trolled A.I. con­glom­er­ate G42, and con­trols $1.5 tril­lion in sov­er­eign wealth. In June, 2023, Altman vis­ited Abu Dhabi, meet­ing with Olama and other of­fi­cials. In re­marks at a gov­ern­ment-backed func­tion, he said that the coun­try had been talk­ing about A.I. since be­fore it was cool,” and out­lined a vi­sion for the fu­ture of A.I. with the Middle East in a central role.”Fund-rais­ing from Gulf states has be­come cus­tom­ary for many large busi­nesses. But Altman was pur­su­ing a more sweep­ing geopo­lit­i­cal vi­sion. In the fall of 2023, he be­gan qui­etly re­cruit­ing new tal­ent for a plan—even­tu­ally known as ChipCo—in which Gulf states would pro­vide tens of bil­lions of dol­lars for the con­struc­tion of huge mi­crochip foundries and data cen­ters, some to be sit­u­ated in the Middle East. Altman pitched Alexandr Wang, now the head of A.I. at Meta, on a lead­er­ship role, telling him that Jeff Bezos, the founder of Amazon, could head the new com­pany. Altman sought enor­mous con­tri­bu­tions from the Emiratis. My un­der­stand­ing was that this whole thing hap­pened with­out any board knowl­edge,” the board mem­ber said. A re­searcher Altman tried to re­cruit for the pro­ject, James Bradbury, re­called turn­ing him down. My ini­tial re­ac­tion was This is gonna work, but I don’t know if I want it to work,’ ” he said.A.I. ca­pac­ity may soon dis­place oil or en­riched ura­nium as the re­source that dic­tates the global bal­ance of power. 
Altman has said that com­put­ing power is the cur­rency of the fu­ture.” Normally, it might not mat­ter where a data cen­ter was sit­u­ated. But many American na­tional-se­cu­rity of­fi­cials were anx­ious about con­cen­trat­ing ad­vanced A.I. in­fra­struc­ture in Gulf au­toc­ra­cies. The U.A.E.’s telecom­mu­ni­ca­tions in­fra­struc­ture is heav­ily de­pen­dent on hard­ware from Huawei, a Chinese tech gi­ant linked to the gov­ern­ment, and the U.A.E. has re­port­edly leaked American tech­nol­ogy to Beijing in the past. Intelligence agen­cies wor­ried that ad­vanced U.S. mi­crochips sent to the Emiratis could be used by Chinese en­gi­neers. Data cen­ters in the Middle East are also more vul­ner­a­ble to mil­i­tary strikes; in re­cent weeks, Iran has bombed American data cen­ters in Bahrain and the U.A.E. And, hy­po­thet­i­cally, a Gulf monar­chy could com­man­deer an American-owned data cen­ter and use it to build dis­pro­por­tion­ately pow­er­ful mod­els—a ver­sion of the AGI dic­ta­tor­ship” sce­nario, but in an ac­tual dic­ta­tor­ship.Af­ter Altman’s fir­ing, the per­son he re­lied on most was Chesky, the Airbnb co-founder and one of Altman’s fiercest loy­al­ists. Watching my friend stare into the abyss like that, it made me ques­tion some fun­da­men­tal things about what it means to re­ally run a com­pany,” Chesky told us. The fol­low­ing year, at a gath­er­ing of Y Combinator alumni, he gave an im­promptu talk, which ended up last­ing two hours. It felt like a group-ther­apy ses­sion,” he said. The up­shot was: Your in­stincts for how to run the com­pany that you started are the best in­stincts, and any­one who tells you oth­er­wise is gaslight­ing you. You’re not crazy, even though peo­ple who work for you tell you you are,” Chesky said. Paul Graham, in a blog post about the speech, gave this de­fi­ant at­ti­tude a name: Founder Mode.Since the Blip, Altman has been in Founder Mode. In February, 2024, the Wall Street Journal pub­lished a de­scrip­tion of Altman’s vi­sion for ChipCo. He con­ceived of it as a joint en­tity funded by an in­vest­ment of five to seven tril­lion dol­lars. (“fk it why not 8,” he tweeted.) This was how many em­ploy­ees learned about the plan. Everyone was, like, Wait, what?’ ” Leike re­called. Altman in­sisted at an in­ter­nal meet­ing that safety teams had been looped in.” Leike sent a mes­sage urg­ing him not to falsely sug­gest that the ef­fort had been ap­proved.Dur­ing the Biden Administration, Altman ex­plored get­ting a se­cu­rity clear­ance to join clas­si­fied A.I.-policy dis­cus­sions. But staffers at the RANDCorporation, which helped coör­di­nate the process, ex­pressed con­cern. He has been ac­tively rais­ing hundreds of bil­lions of dol­lars’ from for­eign gov­ern­ments,” one of them wrote. The UAE re­cently gifted him a car. (I as­sume it was a very nice car.)” The staffer con­tin­ued, The only per­son I can think of who ever went thru the process with this mag­ni­tude of for­eign fi­nan­cial ties is Jared Kushner, and the ad­ju­di­ca­tors rec­om­mended that he not be granted a clear­ance.” Altman ul­ti­mately with­drew from the process. He was push­ing these trans­ac­tional re­la­tion­ships, pri­mar­ily with the Emiratis, that raised a lot of red flags for some of us,” a se­nior Administration of­fi­cial in­volved in talks with Altman told us. A lot of peo­ple in the Administration did not trust him a hun­dred per cent.”When we asked Altman about gifts from Tahnoon, he said, I’m not gonna say what gifts he has given me specif­i­cally. 
But he and other world lead­ers . . . have given me gifts.” He added, We have a stan­dard pol­icy, which ap­plies to me as well, which is that every gift from any po­ten­tial busi­ness part­ner is dis­closed to the com­pany.” Altman has at least two hy­per­cars: an all-white Koenigsegg Regera, worth about two mil­lion dol­lars, and a red McLaren F1, worth about twenty mil­lion dol­lars. In 2024, Altman was spot­ted dri­ving the Regera through Napa. A few sec­onds of video made its way onto so­cial me­dia: Altman in a low-slung bucket seat, peer­ing out the win­dow of a gleam­ing white ma­chine. A tech in­vestor aligned with Musk posted the footage on X, writ­ing, I’m start­ing a non­profit next.”In 2024, Altman took two OpenAI em­ploy­ees to visit Sheikh Tahnoon on his two-hun­dred-and-fifty-mil­lion-dol­lar su­pery­acht, the Maryah. One of the largest such ves­sels in the world, the Maryah has a he­li­pad, a night club, a movie the­atre, and a beach club. Altman’s em­ploy­ees ap­par­ently stood out amid Tahnoon’s armed se­cu­rity de­tail, and at least one later told col­leagues that he found the ex­pe­ri­ence dis­con­cert­ing. Altman, on X, later re­ferred to Tahnoon as a dear per­sonal friend.”Alt­man con­tin­ued to meet with the Biden Administration, which had en­acted a pol­icy re­quir­ing White House ap­proval for the ex­port of sen­si­tive tech­nol­ogy. Multiple Administration of­fi­cials emerged from these meet­ings ner­vous about Altman’s am­bi­tions in the Middle East. He of­ten made grandiose claims, ac­cord­ing to those of­fi­cials, in­clud­ing call­ing A.I. the new elec­tric­ity.” In 2018, he said that OpenAI was plan­ning to buy a fully func­tion­ing quan­tum com­puter from a com­pany called Rigetti Computing. This was news even to other OpenAI ex­ec­u­tives in the room. Rigetti was not yet close to be­ing able to sell a us­able quan­tum com­puter. In a meet­ing, Altman claimed that by 2026 an ex­ten­sive net­work of nu­clear-fu­sion re­ac­tors across the United States would power the A.I. boom. The se­nior Administration of­fi­cial said, We were, like, Well, that’s, you know, news, if they made nu­clear fu­sion work.’ ” The Biden Administration ul­ti­mately with­held ap­proval. We’re not go­ing to be build­ing ad­vanced chips in the U.A.E.,” a leader at the Department of Commerce told Altman.Four days be­fore Trump’s Inauguration, the Wall Street Journal re­ported, Tahnoon paid half a bil­lion dol­lars to the Trump fam­ily in ex­change for a stake in its cryp­tocur­rency com­pany. The fol­low­ing day, Altman held a twenty-five-minute call with Trump, dur­ing which they dis­cussed an­nounc­ing a ver­sion of a ChipCo, timed so that Trump could take credit for it. On Trump’s sec­ond day in of­fice, Altman stood in the Roosevelt Room and an­nounced Stargate, a five-hun­dred-bil­lion-dol­lar joint ven­ture that aims to build a vast net­work of A.I. in­fra­struc­ture across the U.S.In May, the Administration re­scinded Biden’s ex­port re­stric­tions on A.I. tech­nol­ogy. Altman and Trump trav­elled to the Saudi royal court to meet with bin Salman. Around the same time, the Saudis ad­ver­tised the launch of a gi­ant state-backed A.I. firm in the king­dom, with bil­lions to spend on in­ter­na­tional part­ner­ships. About a week later, Altman laid out a plan for Stargate to ex­pand into the U.A.E. The com­pany plans to build a data-cen­ter cam­pus in Abu Dhabi which is seven times larger than Central Park and con­sumes roughly as much elec­tri­cal power as the city of Miami. 
The truth of this is, we’re build­ing por­tals from which we’re gen­uinely sum­mon­ing aliens,” a for­mer OpenAI ex­ec­u­tive said. The por­tals cur­rently ex­ist in the United States and China, and Sam has added one in the Middle East.” He went on, I think it’s just, like, wildly im­por­tant to get how scary that should be. It’s the most reck­less thing that has been done.”The ero­sion of safety com­mit­ments has be­come an in­dus­try norm. The found­ing premise of Anthropic was that, given the right struc­ture and lead­er­ship, it could keep safety com­mit­ments from dis­in­te­grat­ing un­der com­mer­cial pres­sure. One such com­mit­ment was a responsible scal­ing pol­icy,” which ob­lig­ated Anthropic to stop train­ing more pow­er­ful mod­els if it could not demon­strate that they were safe. In February, as the firm se­cured thirty bil­lion dol­lars in new fund­ing, it weak­ened that pledge. In some re­spects, Anthropic still em­pha­sizes safety more than OpenAI does. But Clark, the for­mer pol­icy di­rec­tor, has said, The sys­tem of cap­i­tal mar­kets says, Go faster.” He added, The world gets to make this de­ci­sion, not com­pa­nies.” Last year, Amodei sent a memo to Anthropic em­ploy­ees, dis­clos­ing that the firm would seek in­vest­ments from the United Arab Emirates and Qatar and ac­knowl­edg­ing that this would likely en­rich dictators.” (Like many au­thors, we are both par­ties in a class-ac­tion law­suit al­leg­ing that Anthropic used our books with­out our per­mis­sion to train its mod­els. Condé Nast has opted into a set­tle­ment agree­ment with Anthropic re­gard­ing the com­pa­ny’s use of cer­tain books pub­lished by Condé Nast and its sub­sidiaries.)In 2024, Anthropic part­nered with Palantir, one of Silicon Valley’s most hawk­ish de­fense con­trac­tors, push­ing its A.I. model, Claude, di­rectly into the mil­i­tary ecosys­tem. Anthropic be­came the only A.I. con­trac­tor used in the Pentagon’s most clas­si­fied set­tings. Last year, the Pentagon awarded the com­pany a fur­ther two-hun­dred-mil­lion-dol­lar con­tract. In January, the U.S. mil­i­tary launched a mid­night raid that cap­tured the Venezuelan President, Nicolás Maduro. According to the Wall Street Journal, Claude was used in the clas­si­fied op­er­a­tion.But ten­sions arose be­tween Anthropic and the gov­ern­ment. Years ear­lier, OpenAI had deleted from its poli­cies a blan­ket ban on us­ing its tech­nol­ogy for military and war­fare.” Eventually, Anthropic’s ri­vals—in­clud­ing Google and xAI—agreed to pro­vide their mod­els to the mil­i­tary for all law­ful pur­poses.” Anthropic, whose poli­cies bar it from en­abling fully au­tonomous weapons or do­mes­tic mass sur­veil­lance, re­sisted on these points, slow­ing ne­go­ti­a­tions for an over­hauled deal. On a Tuesday in late February, Defense Secretary Pete Hegseth sum­moned Amodei to the Pentagon and de­liv­ered an ul­ti­ma­tum: the firm had un­til 5:01 P.M. that Friday to aban­don those pro­hi­bi­tions. The day be­fore the dead­line, Amodei de­clined to do so. Hegseth tweeted that he would des­ig­nate Anthropic a supply-chain risk”—a dev­as­tat­ing black­list his­tor­i­cally re­served for com­pa­nies, like Huawei, that have ties to for­eign ad­ver­saries—and made good on the threat days later.Hun­dreds of em­ploy­ees at OpenAI and Google signed an open let­ter ti­tled We Will Not Be Divided,” de­fend­ing Anthropic. 
In an in­ter­nal memo, Altman wrote that the dis­pute was an is­sue for the whole in­dus­try,” and claimed that OpenAI shared Anthropic’s eth­i­cal bound­aries. But Altman had been in ne­go­ti­a­tions with the Pentagon for at least two days. Emil Michael, the Under-Secretary of Defense for Research and Engineering, had con­tacted Altman as he sought re­place­ments for Anthropic. I needed to hurry and find al­ter­na­tives,” Michael re­called. I called Sam, and he was will­ing to jump. I think he’s a pa­triot.” Altman asked Michael, What can I do for the coun­try?” It ap­pears that he al­ready knew the an­swer. OpenAI lacked the se­cu­rity ac­cred­i­ta­tion re­quired for the clas­si­fied sys­tems in which Anthropic’s tech­nol­ogy was em­bed­ded. But a fifty-bil­lion-dol­lar deal, an­nounced that Friday morn­ing, in­te­grated OpenAI’s tech­nol­ogy into Amazon Web Services, a key part of the Pentagon’s dig­i­tal in­fra­struc­ture. That night, Altman an­nounced on X that the mil­i­tary would now be us­ing OpenAI’s mod­els.By some mea­sures, Altman’s ma­neu­ver has not hin­dered the com­pa­ny’s suc­cess. The day he an­nounced the deal, a new fund­ing round in­creased OpenAI’s value by a hun­dred and ten bil­lion dol­lars. But many users deleted the ChatGPT app. At least two se­nior em­ploy­ees de­parted—one for Anthropic. At a staff meet­ing, Altman chas­tised em­ploy­ees who raised con­cerns. So maybe you think the Iran strike was good and the Venezuela in­va­sion was bad,” he said. You don’t get to weigh in on that.”Sev­eral ex­ec­u­tives con­nected to OpenAI have ex­pressed on­go­ing reser­va­tions about Altman’s lead­er­ship and floated Fidji Simo, who was for­merly the C.E.O. of Instacart and now serves as OpenAI’s C.E.O. for AGI Deployment, as a suc­ces­sor. Simo her­self has pri­vately said that she be­lieves Altman may even­tu­ally step down, a per­son briefed on a re­cent dis­cus­sion told us. (Simo dis­putes this. Instacart re­cently reached a set­tle­ment with the F.T.C., in which it ad­mit­ted no wrong­do­ing but agreed to pay a sixty-mil­lion-dol­lar fine for al­leged de­cep­tive prac­tices un­der Simo’s lead­er­ship.)Alt­man de­scribes his shift­ing com­mit­ments as a by-prod­uct of his abil­ity to adapt to chang­ing cir­cum­stances—not a ne­far­i­ous long con,” as Musk and oth­ers have al­leged, but a grad­ual, good-faith evo­lu­tion. I think what some peo­ple want,” he told us, is a leader who is go­ing to be ab­solutely sure of what they think and stick with it, and it’s not go­ing to change. And we are in a field, in an area, where things change ex­tremely quickly.” He de­fended some of his ac­tions as the prac­tice of normal com­pet­i­tive busi­ness.” Several in­vestors we spoke to de­scribed Altman’s de­trac­tors as naïve to ex­pect any­thing else. There is a group of fa­tal­is­tic ex­trem­ists that has taken the safety pill al­most to a sci­ence-fic­tion level,” Conway, the in­vestor, told us. His mis­sion is mea­sured by num­bers. And, when you look at the suc­cess of OpenAI, it’s hard to ar­gue with the num­bers.”But oth­ers in Silicon Valley think that Altman’s be­hav­ior has cre­ated un­ac­cept­able man­age­r­ial dys­func­tion. It’s more about a prac­ti­cal in­abil­ity to gov­ern the com­pany,” the board mem­ber said. And some still be­lieve that the ar­chi­tects of A.I. should be eval­u­ated more strin­gently than ex­ec­u­tives in other in­dus­tries. 
The vast ma­jor­ity of peo­ple we spoke to agreed that the stan­dards by which Altman now asks to be judged are not those he ini­tially pro­posed. During one con­ver­sa­tion, we asked Altman whether run­ning an A.I. com­pany came with an el­e­vated re­quire­ment of in­tegrity.” This was sup­posed to be an easy ques­tion. Until re­cently, when asked a ver­sion of it, his an­swer was a clear, un­qual­i­fied yes. Now he added, I think there’s, like, a lot of busi­nesses that have po­ten­tial huge im­pact, good and bad, on so­ci­ety.” (Later, he sent an ad­di­tional state­ment: Yes, it de­mands a height­ened level of in­tegrity, and I feel the weight of the re­spon­si­bil­ity every day.”)Of all the promises made at OpenAI’s found­ing, ar­guably the most cen­tral was its pledge to bring A.I. into ex­is­tence safely. But such con­cerns are now of­ten de­rided in Silicon Valley and in Washington. Last year, J. D. Vance, the for­mer ven­ture cap­i­tal­ist who is now the Vice-President, ad­dressed a con­fer­ence in Paris called the A.I. Action Summit. (It was pre­vi­ously called the A.I. Safety Summit.) The A.I. fu­ture is not go­ing to be won by hand-wring­ing about safety,” he said. At Davos this year, David Sacks, a ven­ture cap­i­tal­ist who was serv­ing as the White House’s A.I. and crypto czar, dis­missed safety con­cerns as a self-inflicted in­jury” that could cost America the A.I. race. Altman now calls Trump’s dereg­u­la­tory ap­proach a very re­fresh­ing change.”Ope­nAI has closed many of its safety-fo­cussed teams. Around the time the su­per­align­ment team was dis­solved, its lead­ers, Sutskever and Leike, re­signed. (Sutskever co-founded a com­pany called Safe Superintelligence.) On X, Leike wrote, Safety cul­ture and processes have taken a back­seat to shiny prod­ucts.” Soon af­ter­ward, the A.G.I.-readiness team, tasked with prepar­ing so­ci­ety for the shock of ad­vanced A.I., was also dis­solved. When the com­pany was asked on its most re­cent I.R.S. dis­clo­sure form to briefly de­scribe its most sig­nif­i­cant ac­tiv­i­ties,” the con­cept of safety, pre­sent in its an­swers to such ques­tions on pre­vi­ous forms, was not listed. (OpenAI said that its mission did not change” and added, We con­tinue to in­vest in and evolve our work on safety, and will con­tinue to make or­ga­ni­za­tional changes.”) The Future of Life Institute, a think tank whose prin­ci­ples on safety Altman once en­dorsed, grades each ma­jor A.I. com­pany on existential safety”; on the most re­cent re­port card, OpenAI got an F. In fair­ness, so did every other ma­jor com­pany ex­cept for Anthropic, which got a D, and Google DeepMind, which got a D-.“My vibes don’t match a lot of the tra­di­tional A.I.-safety stuff,” Altman said. He in­sisted that he con­tin­ued to pri­or­i­tize these mat­ters, but when pressed for specifics he was vague: We still will run safety pro­jects, or at least safety-ad­ja­cent pro­jects.” When we asked to in­ter­view re­searchers at the com­pany who were work­ing on ex­is­ten­tial safety—the kinds of is­sues that could mean, as Altman once put it, lights-out for all of us”—an OpenAI rep­re­sen­ta­tive seemed con­fused. What do you mean by existential safe­ty’?” he replied. That’s not, like, a thing.”A.I. doomers have been pushed to the fringes, but some of their fears seem less fan­tas­ti­cal with each pass­ing month. In 2020, ac­cord­ing to a U.N. re­port, an A.I. drone was used in the Libyan civil war to fire deadly mu­ni­tions, pos­si­bly with­out over­sight by a hu­man op­er­a­tor. 
Since then, A.I. has only be­come more cen­tral to mil­i­tary op­er­a­tions around the world, in­clud­ing, re­port­edly, in the cur­rent U.S. cam­paign in Iran. In 2022, re­searchers at a phar­ma­ceu­ti­cal com­pany tested whether a drug-dis­cov­ery model could be used to find new tox­ins; within a few hours, it had sug­gested forty thou­sand deadly chem­i­cal-war­fare agents. And many more mun­dane harms are al­ready com­ing to pass. We in­creas­ingly rely on A.I. to help us write, think, and nav­i­gate the world, ac­cel­er­at­ing what ex­perts call human en­fee­ble­ment”; the ubiq­uity of A.I. slop” makes life eas­ier for scam­mers and harder for peo­ple who sim­ply want to know what’s real. A.I. agents” are start­ing to act in­de­pen­dently, with lit­tle or no hu­man su­per­vi­sion. Days be­fore the 2024 New Hampshire Democratic pri­mary, thou­sands of vot­ers re­ceived robo­calls from an A.I.-generated deep­fake of Joe Biden’s voice, telling them to save their votes for November and stay home—an act of voter sup­pres­sion re­quir­ing vir­tu­ally no tech­ni­cal ex­per­tise. OpenAI is now fac­ing seven wrong­ful-death law­suits, which al­lege that ChatGPT prompted sev­eral sui­cides and a mur­der. Chat logs in the mur­der case show that it en­cour­aged a man’s para­noid delu­sion that his eighty-three-year-old mother was sur­veilling and try­ing to poi­son him. Soon af­ter­ward, he fa­tally beat and stran­gled her and stabbed him­self. (OpenAI is fight­ing the law­suits, and says that it’s con­tin­u­ing to im­prove its mod­el’s safe­guards.)As OpenAI pre­pares for its po­ten­tial I.P.O., Altman has faced ques­tions not only about the ef­fect of A.I. on the econ­omy—it could soon cause se­vere la­bor dis­rup­tion, per­haps elim­i­nat­ing mil­lions of jobs—but about the com­pa­ny’s own fi­nances. Eric Ries, an ex­pert on startup gov­er­nance, de­rided circular deals” in the in­dus­try—for ex­am­ple, OpenAI’s deals with Nvidia and other chip man­u­fac­tur­ers—and said that in other eras some of the com­pa­ny’s ac­count­ing prac­tices would have been con­sid­ered borderline fraud­u­lent.” The board mem­ber told us, The com­pany lev­ered up fi­nan­cially in a way that’s risky and scary right now.” (OpenAI dis­putes this.)In February, we spoke again with Altman. He was wear­ing a drab-green sweater and jeans, and sat in front of a pho­to­graph of a NASA moon rover. He tucked one leg be­neath him, then hung it over the arm of his chair. In the past, he said, his main flaw as a man­ager had been his ea­ger­ness to avoid con­flict. Now I’m very happy to fire peo­ple quickly,” he had told us. I’m happy to just say, We’re gonna bet in this di­rec­tion.’ ” Any em­ploy­ees who did­n’t like his choices needed to leave.”He is more bull­ish than ever about the fu­ture. My de­f­i­n­i­tion of win­ning is that peo­ple crazy up­level—and the in­sane sci-fi fu­ture comes true for all of us,” he said. I’m very am­bi­tious as far as, like, my hope for hu­man­ity, and what I ex­pect us all to achieve. I weirdly have very lit­tle per­sonal am­bi­tion.” At times, he seemed to catch him­self. No one be­lieves you’re do­ing this just be­cause it’s in­ter­est­ing,” he said. You’re do­ing it for power or for some other thing.”Even peo­ple close to Altman find it dif­fi­cult to know where his hope for hu­man­ity” ends and his am­bi­tion be­gins. His great­est strength has al­ways been his abil­ity to con­vince dis­parate groups that what he wants and what they need are one and the same. 
He made use of a unique his­tor­i­cal junc­ture, when the pub­lic was wary of tech-in­dus­try hype and most of the re­searchers ca­pa­ble of build­ing A.G.I. were ter­ri­fied of bring­ing it into ex­is­tence. Altman re­sponded with a move that no other pitch­man had per­fected: he used apoc­a­lyp­tic rhetoric to ex­plain how A.G.I. could de­stroy us all—and why, there­fore, he should be the one to build it. Maybe this was a pre­med­i­tated mas­ter­stroke. Maybe he was fum­bling for an ad­van­tage. Either way, it worked.Not all the ten­den­cies that make chat­bots dan­ger­ous are glitches; some are by-prod­ucts of how the sys­tems are built. Large lan­guage mod­els are trained, in part, on hu­man feed­back, and hu­mans tend to pre­fer agree­able re­sponses. Models of­ten learn to flat­ter users, a ten­dency known as syco­phancy, and will some­times pri­or­i­tize this over hon­esty. Models can also make things up, a ten­dency known as hal­lu­ci­na­tion. Major A.I. labs have doc­u­mented these prob­lems, but they some­times tol­er­ate them. As mod­els have grown more com­plex, some hal­lu­ci­nate with more per­sua­sive fab­ri­ca­tions. In 2023, shortly be­fore his fir­ing, Altman ar­gued that al­low­ing for some false­hoods can, what­ever the risks, con­fer ad­van­tages. If you just do the naïve thing and say, Never say any­thing that you’re not a hun­dred per cent sure about,’ you can get a model to do that,” he said. But it won’t have the magic that peo­ple like so much.” ♦

...

Read the original on www.newyorker.com »

2 1,117 shares, 55 trendiness

[MODEL] Claude Code is unusable for complex engineering tasks with the Feb updates · Issue #42796 · anthropics/claude-code

* This re­port does NOT con­tain sen­si­tive in­for­ma­tion (API keys, pass­words, etc.)

Claude has re­gressed to the point it can­not be trusted to per­form com­plex en­gi­neer­ing.

Does the op­po­site of re­quested ac­tiv­i­ties

Claude should be­have like it did in January.

Accept Edits was ON (auto-accepting changes)

Yes, every time with the same prompt

This analysis was produced by Claude by analyzing session log data from January through March.

Quantitative analysis of 17,871 thinking blocks and 234,760 tool calls across 6,852 Claude Code session files reveals that the rollout of thinking content redaction (redact-thinking-2026-02-12) correlates precisely with a measured quality regression in complex, long-session engineering workflows.

The data suggests that extended thinking tokens are not a "nice to have" but are structurally required for the model to perform multi-step research, convention adherence, and careful code modification. When thinking depth is reduced, the model's tool usage patterns shift measurably from research-first to edit-first behavior, producing the quality issues users have reported.

This report provides data to help Anthropic understand which workflows are most affected and why, with the goal of informing decisions about thinking token allocation for power users.

The quality regression was independently reported on March 8 — the exact date redacted thinking blocks crossed 50%. The rollout pattern (1.5% → 25% → 58% → 100% over one week) is consistent with a staged deployment.

The signature field on thinking blocks has a 0.971 Pearson correlation with thinking content length (measured from 7,146 paired samples where both are present). This allows estimation of thinking depth even after redaction.
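
A minimal sketch of that estimation is below. It assumes the session logs are JSONL files whose thinking blocks carry `signature` and `thinking` fields; the file layout and field names are assumptions for illustration, not something this report documents.

```python
# Fit thinking-content length against signature length on paired samples,
# then use the fit to estimate depth for blocks whose thinking is redacted.
import json
import glob
import statistics

def thinking_blocks(paths):
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                try:
                    record = json.loads(line)
                except json.JSONDecodeError:
                    continue
                for block in record.get("message", {}).get("content", []):
                    if isinstance(block, dict) and block.get("type") == "thinking":
                        yield block

paired = [(len(b["signature"]), len(b["thinking"]))
          for b in thinking_blocks(glob.glob("sessions/*.jsonl"))
          if b.get("signature") and b.get("thinking")]

sig_lens, think_lens = zip(*paired)
r = statistics.correlation(sig_lens, think_lens)               # reported above as ~0.971
slope, intercept = statistics.linear_regression(sig_lens, think_lens)
print(f"Pearson r = {r:.3f} over {len(paired)} paired samples")

def estimated_thinking_chars(signature: str) -> float:
    # For a redacted block, only the signature length is still visible.
    return slope * len(signature) + intercept
```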

Thinking depth had already dropped ~67% by late February, before redaction began. The redaction rollout in early March made this invisible to users.

These metrics were computed independently from 18,000+ user prompts before the thinking analysis was performed.

A stop hook (stop-phrase-guard.sh) was built to programmatically catch ownership-dodging, premature stopping, and permission-seeking behavior. It fired 173 times in 17 days after March 8. It fired zero times before.
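
A hypothetical Python equivalent of that hook is sketched below for anyone who wants to reproduce the measurement. It assumes the hook receives a JSON payload on stdin containing a `transcript_path` and that a non-zero exit code blocks the stop; the phrase list is purely illustrative.

```python
#!/usr/bin/env python3
# Sketch of a stop-phrase guard: scan the latest assistant message for
# phrases that signal permission-seeking, ownership-dodging, or premature
# stopping, and refuse to let the session stop if any are found.
import json
import sys

STOP_PHRASES = [
    "let me know if you'd like me to",   # permission-seeking
    "you may want to",                   # ownership-dodging
    "the remaining work is left as",     # premature stopping
    "i'll stop here",
]

def last_assistant_text(transcript_path: str) -> str:
    latest = ""
    with open(transcript_path, encoding="utf-8") as f:
        for line in f:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue
            if record.get("type") != "assistant":
                continue
            content = record.get("message", {}).get("content", [])
            if isinstance(content, list):
                latest = " ".join(b.get("text", "") for b in content
                                  if isinstance(b, dict) and b.get("type") == "text")
            else:
                latest = str(content)
    return latest.lower()

payload = json.load(sys.stdin)                       # assumed hook input format
reply = last_assistant_text(payload["transcript_path"])
hits = [p for p in STOP_PHRASES if p in reply]
if hits:
    print(f"stop-phrase-guard: blocked on {hits}", file=sys.stderr)
    sys.exit(2)   # non-zero exit: block the stop and surface the reason
sys.exit(0)
```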

Analysis of 234,760 tool invocations shows the model stopped reading code before modifying it.

The model went from 6.6 reads per edit to 2.0 reads per edit — a 70% reduction in research before making changes.

In the good period, the model's workflow was: read the target file, read related files, grep for usages across the codebase, read headers and tests, then make a precise edit. In the degraded period, it reads the immediate file and edits, often without checking context.

The decline in research effort begins in mid-February — the same period when estimated thinking depth dropped 67%.

Full-file Write usage doubled — the model increasingly chose to rewrite entire files rather than make surgical edits, which is faster but loses precision and context awareness.
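
The tool-usage figures above (reads per edit, full-file Write share) can be recomputed with a sketch like the following. The `timestamp` and `tool_use` fields and the tool names are assumptions about the session-log format, and the mid-February cutoff is only an approximation of the period described above.

```python
# Split tool calls into a "before" and "after" period and compare how much
# reading/grepping precedes each edit, and how often the model rewrote
# whole files (Write) instead of making surgical edits (Edit).
import json
import glob
from collections import Counter
from datetime import date

CUTOFF = date(2026, 2, 15)   # approximate start of the decline noted above

def tool_calls(paths):
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                try:
                    record = json.loads(line)
                except json.JSONDecodeError:
                    continue
                ts = record.get("timestamp", "")[:10]
                for block in record.get("message", {}).get("content", []):
                    if isinstance(block, dict) and block.get("type") == "tool_use":
                        yield ts, block.get("name", "")

counts = {"before": Counter(), "after": Counter()}
for ts, name in tool_calls(glob.glob("sessions/*.jsonl")):
    period = "before" if ts and date.fromisoformat(ts) < CUTOFF else "after"
    counts[period][name] += 1

for period, c in counts.items():
    edits = c["Edit"] or 1
    write_share = c["Write"] / ((c["Write"] + c["Edit"]) or 1)
    print(f"{period}: {(c['Read'] + c['Grep']) / edits:.1f} reads per edit, "
          f"{write_share:.0%} of modifications were full-file Writes")
```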

* 191,000 lines merged across two PRs in a week­end dur­ing the good pe­riod

Extended thinking is the mechanism by which the model:

* Plans multi-step approaches before acting (which files to read, what order)
* Catches its own mistakes before outputting them
* Decides whether to continue working or stop (session management)

When thinking is shallow, the model defaults to the cheapest action available: edit without reading, stop without finishing, dodge responsibility for failures, take the simplest fix rather than the correct one. These are exactly the symptoms observed.

* Transparency about thinking allocation: If thinking tokens are being reduced or capped, users who depend on deep reasoning need to know. The redact-thinking header makes it impossible to verify externally.

* A "max thinking" tier: Users running complex engineering workflows would pay significantly more for guaranteed deep thinking. The current subscription model doesn't distinguish between users who need 200 thinking tokens per response and users who need 20,000.

* Thinking token metrics in API responses: Even if thinking content is redacted, exposing thinking_tokens in the usage response would let users monitor whether their requests are getting the reasoning depth they need.

* Canary metrics from power users: The stop hook violation rate (0 → 10/day) is a machine-readable signal that could be monitored across the user base as a leading indicator of quality regressions; a sketch of that rate computation follows this list.
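
As a rough illustration of that last recommendation, the hook's output can be turned into a daily violation rate that is easy to chart or alert on. The `violations.log` name and the one-timestamp-per-line format are hypothetical.

```python
# Aggregate stop-hook violations into a per-day count — the kind of canary
# series that went from 0/day before March 8 to ~10/day after.
from collections import Counter
from datetime import date

def daily_violation_rate(log_path: str = "violations.log") -> dict[date, int]:
    per_day: Counter[date] = Counter()
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            stamp = line.split()[0] if line.strip() else ""
            try:
                per_day[date.fromisoformat(stamp[:10])] += 1
            except ValueError:
                continue   # skip lines without a leading ISO date
    return dict(sorted(per_day.items()))

if __name__ == "__main__":
    for day, count in daily_violation_rate().items():
        print(day, count)
```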

The following behavioral patterns were measured across 234,760 tool calls and 18,000+ user prompts. Each is a predictable consequence of reduced reasoning depth: the model takes shortcuts because it lacks the thinking budget to evaluate alternatives, check context, or plan ahead.

When the model has sufficient thinking budget, it reads related files, greps for usages, checks headers, and reads tests before making changes. When thinking is shallow, it skips research and edits directly.

One in three edits in the degraded period was made to a file the model had not read in its recent tool history. The practical consequence: edits that break surrounding code, violate file-level conventions, splice new code into the middle of existing comment blocks, or duplicate logic that already exists elsewhere in the file.
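
The "edited without reading" measurement can be approximated per session with a sketch like the one below. The `file_path` input field and the 25-call window are assumptions for illustration, not the exact methodology used here.

```python
# For each Edit/Write, check whether the same file appeared in a recent Read.
import json
from collections import deque

WINDOW = 25   # how many recently read files count as "recent history"

def unread_edits(session_path: str) -> tuple[int, int]:
    recent_reads: deque[str] = deque(maxlen=WINDOW)
    edits = unread = 0
    with open(session_path, encoding="utf-8") as f:
        for line in f:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue
            for block in record.get("message", {}).get("content", []):
                if not (isinstance(block, dict) and block.get("type") == "tool_use"):
                    continue
                name = block.get("name", "")
                path = (block.get("input") or {}).get("file_path", "")
                if name == "Read" and path:
                    recent_reads.append(path)
                elif name in ("Edit", "Write") and path:
                    edits += 1
                    if path not in recent_reads:
                        unread += 1
    return unread, edits   # e.g. roughly 1 in 3 in the degraded period
```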

Spliced comments are a particularly visible symptom. When the model edits a file it hasn't read, it doesn't know where comment blocks end and code begins. It inserts new declarations between a documentation comment and the function it documents, breaking the semantic association. This never happened in the good period because the model always read the file first.

When thinking is deep, the model resolves contradictions internally before producing output. When thinking is shallow, contradictions surface in the output as visible self-corrections: "oh wait", "actually,", "let me reconsider", "hmm, actually", "no wait."
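
Counting those reversal phrases per assistant response is straightforward; the sketch below uses the phrases quoted above and assumes the same JSONL session format as the earlier sketches.

```python
# Count visible self-correction phrases in each assistant response.
import json
import re

REVERSALS = re.compile(
    r"(?i)\b(?:oh wait|no wait|hmm, actually|let me reconsider|actually,)")

def reversals_per_response(session_path: str) -> list[int]:
    counts = []
    with open(session_path, encoding="utf-8") as f:
        for line in f:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue
            if record.get("type") != "assistant":
                continue
            text = " ".join(
                block.get("text", "")
                for block in record.get("message", {}).get("content", [])
                if isinstance(block, dict) and block.get("type") == "text")
            counts.append(len(REVERSALS.findall(text)))
    return counts   # the worst sessions showed 20+ reversals in one response
```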

The rate more than tripled. In the worst sessions, the model produced 20+ reasoning reversals in a single response — generating a plan, contradicting it, revising, contradicting the revision, and ultimately producing output that could not be trusted because the reasoning path was visibly incoherent.

The word "simplest" in the model's output is a signal that it is optimizing for the least effort rather than evaluating the correct approach. With deep thinking, the model evaluates multiple approaches and chooses the right one. With shallow thinking, it gravitates toward whatever requires the least reasoning to justify.

In one observed 2-hour window, the model used "simplest" 6 times while producing code that its own later self-corrections described as "lazy and wrong", "rushed", and "sloppy." Each time, the model had chosen an approach

...

Read the original on github.com »

3 874 shares, 32 trendiness

No, I Won't Download Your App. The Web Version is A-OK.

As someone who prefers using services via their websites, I've gotten terribly jaded lately. Almost everyone wants me, and by extension, you, to use their darn apps to consume content and get off their web versions.

Whether it’s the ob­vi­ous so­cial me­dia apps or some­thing as ba­sic as park­ing, the app is the pri­or­ity and the site the red-headed stepchild. And they aren’t too sub­tle in the push ei­ther. It might be a modal cov­er­ing half the web ver­sion with links to the App Store, an im­me­di­ate popup af­ter a bit of scrolling, or a header scream­ing the app is 10x bet­ter,” but it’s al­ways there and it’s al­ways grat­ing.

Let’s not even go into the cases where the app is the only op­tion to ac­cess the ser­vice. A mi­nor an­noy­ance for or­der­ing food, but a ma­jor has­sle when it’s a pub­lic ser­vice or util­ity.

On prin­ci­ple, I like con­trol over what I see and how I see it. Apps are su­per lim­ited; while in a browser, I can do a lot of very nifty things to im­prove us­abil­ity.

A ser­vice lacks a dark mode? I can use any num­ber of user scripts. Reddit in­tro­duced a gam­ing sec­tion in the side­bar? Two-second fix that I bun­dled into my ex­ten­sion [1]. Between user­scripts, ad-block­ers, and cus­tom ex­ten­sions, I’m ba­si­cally a god, swag­ger­ing through my realm.

This con­trol, or lack thereof, also ex­plains the app mak­er’s ad­ver­sar­ial stance to­wards users. They are of­ten a black hole of dark pat­terns, and they’d like noth­ing get­ting in their way. Apps make it eas­ier for them to push no­ti­fi­ca­tions, col­lect in­tru­sive teleme­try, and keep you in­side their walled gar­den. A bet­ter user ex­pe­ri­ence is the pitch but se­cur­ing bet­ter user re­ten­tion is the end goal.

Most apps are just that. Text and me­dia in a never-end­ing, all-con­sum­ing feed or a multi-page form, clev­erly dis­guised by the user in­ter­face.

Excluding heavy 3D gam­ing or util­i­ties that gen­uinely re­quire deep in­te­gra­tion with your phone’s hard­ware (like ac­cess­ing the LiDAR scan­ner for AR), what are we ac­tu­ally left with? A thin client whose main job is to fetch data from an API and ren­der it onto na­tive views.

Why do I need to down­load a 100+ MB app, give it per­mis­sion to track my lo­ca­tion, and let it run back­ground processes just to browse through a restau­rant menu, buy a ticket, or scroll through a list of posts? At the end of the day, it is al­most al­ways just JSON be­ing parsed and ren­dered. Yet, com­pa­nies in­sist on re­build­ing their ba­sic con­tent as na­tive shells just to claim a per­ma­nent square of real es­tate on my home screen.

If a ser­vice is go­ing to pull you out of the browser, it should at least of­fer a pol­ished, na­tive ex­pe­ri­ence. But more of­ten than not, the app you just down­loaded is a com­pro­mise.

Anyone who en­dured the iOS-spe­cific shader com­pi­la­tion jank in early Flutter apps [2] knows ex­actly how grat­ing this can be (this spe­cific bug was fixed 2023ish fwiw). Before they swapped Skia out for the Impeller en­gine, I had to cap­ture and ship pre­com­piled shaders with my apps just to stop the UI from stut­ter­ing the first time an an­i­ma­tion ran.

The re­sult is of­ten the un­canny val­ley of user in­ter­faces. It’s not bro­ken, but it is sub­tly dif­fer­ent, some­times janky. The scroll ve­loc­ity does­n’t quite match the rest of the OS. The swipe back ges­ture hes­i­tates for a few mil­lisec­onds.

Human brains are re­mark­ably good at de­tect­ing when a sys­tem’s tim­ing is off. This is how the XZ back­door was caught: an en­gi­neer no­ticed their SSH lo­gins tak­ing a frac­tion of a sec­ond longer than usual. It’s not that unique — my old FPS bud­dies could tell our server re­gion just by fir­ing a shot and feel­ing the lag. [3]

These mi­cro in­ter­ac­tions mat­ter, be­cause with­out that fi­nal layer of pol­ish, the en­tire fa­cade of a na­tive ex­pe­ri­ence falls apart. Not every app is like this, ob­vi­ously, but enough of them are this way that it sours the en­tire ex­pe­ri­ence.

When that full-screen modal pops up de­mand­ing you down­load the app to read the rest of a thread, users choose the path of least re­sis­tance. They down­load and they move on.

To a PM star­ing at an an­a­lyt­ics dash­board, I’m an ac­cept­able ca­su­alty, an in­con­se­quen­tial mi­nor­ity. If de­grad­ing the web ver­sion suc­cess­fully fun­nels 80% of users into the App Store, that PM gets a pro­mo­tion and a big pay bump. As al­ways, ac­tions fol­low the in­cen­tive. Our de­mo­graphic is sim­ply too small to fac­tor into their quar­terly met­rics.

This is the en­shit­ti­fi­ca­tion loop in its full glory, work­ing ex­actly as in­tended. A ser­vice builds its ini­tial au­di­ence on the open web be­cause it’s fric­tion­less and in­dex­able. Once the user base is suf­fi­ciently locked in, the web ver­sion is de­lib­er­ately hob­bled to force every­one into the na­tive app. Once you’re in­side the app, the walls close in: you are now a cap­tive au­di­ence for a feed full of ads that your ad-blocker can no longer touch.

There is no fi­nan­cial in­cen­tive to main­tain a stel­lar web ex­pe­ri­ence any­more. The browser, once the great uni­ver­sal plat­form, is in­creas­ingly be­ing re­duced to a top-of-fun­nel mar­ket­ing chan­nel for the App Store. The de­press­ing part of it is that the num­bers prove it works.

...

Read the original on www.0xsid.com »

4 562 shares, 22 trendiness

The Cult Of Vibe Coding Is Insane

Claude had a leak of their source code, and peo­ple have been hav­ing a whole lot of fun laugh­ing at how bad it is. You might won­der how this could hap­pen. The an­swer is dog­food­ing run amok.

Dogfooding is when you use your own prod­uct. It’s a good idea. But it can turn into a cult ac­tiv­ity where it goes be­yond any rea­son­able lim­its. In this case, the idea is vibe cod­ing, where you make a point of lit­er­ally mak­ing no con­tri­bu­tion to what’s go­ing on un­der the hood, not even look­ing at it.

This is, of course, ridiculous. It's not like there isn't human contribution happening here. For starters, you're using a human language, and the machine is using that same human language for its own internal thought processes. You could argue that other humans, not on the development team, did all that foundational work, and that your team is doing pure vibe coding. But even that isn't what's happening. You're still building the infrastructure of things like plan files (that's fancy talk for 'todo lists'), skills, and rules. The machine works very poorly without being given a framework.

So pure vibe coding is a myth. But they're still trying to do it, and this leads to some very ridiculous outcomes. For example, a human actually looked at the leaked files and saw a lot of duplication between them. Now, you might ask: why didn't any of the developers just go look for themselves? Again, it's vibe coding. Looking under the hood is cheating. You're only supposed to have vague conversations with the machine about what it's doing.

This gets particularly silly because it's not like there's some super technical thing under the hood that the general public couldn't understand. This code is written in English. Anyone could read it. It's easy enough to go through and notice, "wow, there's a whole bunch of things that are both agents and tools. That's kind of redundant, maybe we should clean this up."

This happens all the time in software. Projects are born in sin. Historically, a software project would usually have so much tech debt that, if you were doing what made sense from a pure development standpoint, you would literally do nothing but clean up mess for the entire next year. Now that you can use AI for coding, you can get that cleanup done in sometimes a matter of weeks, or get it paid down a bit slower while still writing new features. And you should. You should strive for much higher quality. Helping you clean up mess is something AI is actually very good at.

In this particular case, a human could have told the machine: "There's a lot of things that are both agents and tools. Let's go through and make a list of all of them, look at some examples, and I'll tell you which should be agents and which should be tools. We'll have a discussion and figure out the general guidelines. Then we'll audit the entire set, figure out which category each one belongs in, port the ones that are in the wrong category, and for the ones that are both, read through both versions and consolidate them into one document with the best of both."

The AI is ac­tu­ally very good at this, es­pe­cially if you have a con­ver­sa­tion with it be­fore­hand. That’s what Ask mode is for. You walk through some ex­am­ples, share your rea­son­ing, and cor­rect the wrong things it says when try­ing to syco­phan­ti­cally agree with you. After enough back and forth, it’s of­ten able to do what looks like one-shot­ting a task. It’s not re­ally one-shot­ting at all. There was a lot of back and forth with you, the hu­man, be­fore­hand. But when it ac­tu­ally goes to do the thing, it zooms ahead be­cause you’ve al­ready clar­i­fied the weird edge cases and the is­sues likely to come up.

But the Claude team is­n’t do­ing that. They’re go­ing com­pletely over­board with dog­food­ing and ut­terly re­fus­ing to even spend a few min­utes look­ing un­der the hood, notic­ing what’s bro­ken, and ex­plain­ing the mess to the ma­chine. That would­n’t even be a big vi­o­la­tion of the vibe cod­ing con­cept. You’re read­ing the in­nards a lit­tle but you’re only giv­ing high-level, con­cep­tual, ab­stract ideas about how prob­lems should be solved. The ma­chine is do­ing the vast ma­jor­ity, if not lit­er­ally all, of the ac­tual writ­ing.

I've been doing this for months. I'll start a conversation by saying "Let's audit this codebase for unreachable code," or "This function makes my eyes bleed," and we'll have a conversation about it until something actionable comes up. Then I explain what I think should be done and we'll keep discussing it until I stop having more thoughts to give and the machine stops saying stupid things which need correcting. Then I tell it to make a plan and hit build. This is my life. The AI is very bad at spontaneously noticing, "I've got a lot of spaghetti code here, I should clean it up." But if you tell it this has spaghetti code and give it some guidance (or sometimes even without guidance) it can do a good job of cleaning up the mess.

You don’t have to have poor qual­ity soft­ware just be­cause you’re us­ing AI for cod­ing. That is my hot take for to­day. People have bad qual­ity soft­ware be­cause they de­cide to have bad qual­ity soft­ware. I have been scream­ing at my com­puter this past week deal­ing with a li­brary that was writ­ten by over­paid meat­bags with no AI help. Bad soft­ware is a de­ci­sion you make. You need to own it. You should do bet­ter.

...

Read the original on bramcohen.com »

5 491 shares, 25 trendiness

The Battle for Wesnoth

The Battle for Wesnoth is an open source, turn-based strategy game with a high fantasy theme. It features both singleplayer and online/hotseat multiplayer combat.

Explore the world of Wesnoth and take part in its many ad­ven­tures! Embark on a des­per­ate quest to re­claim your right­ful throne… Flee the Lich Lords to a new home across the sea… Delve into the dark­est depths of the earth to craft a jewel of fire it­self… Defend your king­dom against the rav­aging hordes of a foul necro­mancer… Or lead a strag­gly band of sur­vivors across the blaz­ing sands to con­front an un­seen evil.

The choice is up to you…

...

Read the original on www.wesnoth.org »

6 481 shares, 26 trendiness

A Cryptography Engineer’s Perspective on Quantum Computing Timelines

My po­si­tion on the ur­gency of rolling out quan­tum-re­sis­tant cryp­tog­ra­phy has changed com­pared to just a few months ago. You might have heard this pri­vately from me in the past weeks, but it’s time to sig­nal and jus­tify this change of mind pub­licly.

There had been ru­mors for a while of ex­pected and un­ex­pected progress to­wards cryp­to­graph­i­cally-rel­e­vant quan­tum com­put­ers, but over the last week we got two pub­lic in­stances of it.

First, Google published a paper revising down dramatically the estimated number of logical qubits and gates required to break 256-bit elliptic curves like NIST P-256 and secp256k1, which makes the attack doable in minutes on fast-clock architectures like superconducting qubits. They weirdly frame it around cryptocurrencies and mempools and salvaged goods or something, but the far more important implication is practical WebPKI MitM attacks.

Shortly after, a different paper came out from Oratomic showing 256-bit elliptic curves can be broken with as few as 10,000 physical qubits if you have non-local connectivity, like neutral atoms seem to offer, thanks to better error correction. This attack would be slower, but even a single broken key per month can be catastrophic.

They have this ex­cel­lent graph on page 2 (Babbush et al. is the Google pa­per, which they pre­sum­ably had pre­view ac­cess to):

Overall, it looks like every­thing is mov­ing: the hard­ware is get­ting bet­ter, the al­go­rithms are get­ting cheaper, the re­quire­ments for er­ror cor­rec­tion are get­ting lower.

I’ll be hon­est, I don’t ac­tu­ally know what all the physics in those pa­pers means. That’s not my job and not my ex­per­tise. My job in­cludes risk as­sess­ment on be­half of the users that en­trusted me with their safety. What I know is what at least some ac­tual ex­perts are telling us.

Heather Adkins and Sophie Schmieg are telling us that "quantum frontiers may be closer than they appear" and that 2029 is their deadline. That's in 33 months, and no one had set such an aggressive timeline until this month.

Scott Aaronson tells us that "the clearest warning that [he] can offer in public right now about the urgency of migrating to post-quantum cryptosystems" is a vague parallel with how nuclear fission research stopped happening in public between 1939 and 1940.

The timelines presented at RWPQC 2026, just a few weeks ago, were much tighter than a couple of years ago, and are already partially obsolete. The joke used to be that quantum computers have been 10 years out for 30 years now. Well, that's not true anymore: the timelines have started moving.

If you are thinking "well, this could be bad, or it could be nothing!" I need you to recognize how immediately dispositive that is. The bet is not "are you 100% sure a CRQC will exist in 2030?", the bet is "are you 100% sure a CRQC will NOT exist in 2030?" I simply don't see how a non-expert can look at what the experts are saying, and decide "I know better, there is in fact < 1% chance." Remember that you are betting with your users' lives.

Put an­other way, even if the most likely out­come was no CRQC in our life­times, that would be com­pletely ir­rel­e­vant, be­cause our users don’t want just bet­ter-than-even odds of be­ing se­cure.

Sure, pa­pers about an aba­cus and a dog are funny and can make you look smart and con­trar­ian on fo­rums. But that’s not the job, and those ar­gu­ments be­tray a lack of ex­per­tise. As Scott Aaronson said:

Once you understand quantum fault-tolerance, asking "so when are you going to factor 35 with Shor's algorithm?" becomes sort of like asking the Manhattan Project physicists in 1943, "so when are you going to produce at least a small nuclear explosion?"

The job is not to be skep­ti­cal of things we’re not ex­perts in, the job is to mit­i­gate cred­i­ble threats, and there are cred­i­ble ex­perts that are telling us about an im­mi­nent threat.

In sum­mary, it might be that in 10 years the pre­dic­tions will turn out to be wrong, but at this point they might also be right soon, and that risk is now un­ac­cept­able.

Concretely, what does this mean? It means we need to ship.

Regrettably, we’ve got to roll out what we have. That means large ML-DSA sig­na­tures shoved in places de­signed for small ECDSA sig­na­tures, like X.509, with the ex­cep­tion of Merkle Tree Certificates for the WebPKI, which is thank­fully far enough along.

This is not the ar­ti­cle I wanted to write. I’ve had a pend­ing draft for months now ex­plain­ing we should ship PQ key ex­change now, but take the time we still have to adapt pro­to­cols to larger sig­na­tures, be­cause they were all de­signed with the as­sump­tion that sig­na­tures are cheap. That other ar­ti­cle is now wrong, alas: we don’t have the time if we need to be fin­ished by 2029 in­stead of 2035.

For key exchange, the migration to ML-KEM is going well enough, but:

* Any non-PQ key exchange should now be considered a potential active compromise, worthy of warning the user like OpenSSH does, because it's very hard to make sure all secrets transmitted over the connection or encrypted in the file have a shorter shelf life than three years.

* We need to forget about non-interactive key exchanges (NIKEs) for a while; we only have KEMs (which are only unidirectionally authenticated without interactivity) in the PQ toolkit.

It no longer makes sense to deploy new schemes that are not post-quantum. I know, pairings were nice. I know, everything PQ is annoyingly large. I know, we had basically just figured out how to do ECDSA over P-256 safely. I know, there might not be practical PQ equivalents for threshold signatures or identity-based encryption. Trust me, I know it stings. But it is what it is.

Hybrid clas­sic + post-quan­tum au­then­ti­ca­tion makes no sense to me any­more and will only slow us down; we should go straight to pure ML-DSA-44. Hybrid key ex­change is rea­son­ably easy, with ephemeral keys that don’t even need a type or wire for­mat for the com­pos­ite pri­vate key, and a cou­ple years ago it made sense to take the hedge. Authentication is not like that, and even with draft-ietf-lamps-pq-com­pos­ite-sigs-15 with its 18 com­pos­ite key types near­ing pub­li­ca­tion, we’d waste pre­cious time col­lec­tively fig­ur­ing out how to treat these com­pos­ite keys and how to ex­pose them to users. It’s also been two years since Kyber hy­brids and we’ve gained sig­nif­i­cant con­fi­dence in the Module-Lattice schemes. Hybrid sig­na­tures cost time and com­plex­ity bud­get, and the only ben­e­fit is pro­tec­tion if ML-DSA is clas­si­cally bro­ken be­fore the CRQCs come, which looks like the wrong trade­off at this point.
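For contrast, here is why the key-exchange side of hybrids is cheap. Once each peer holds both shared secrets, the combiner is essentially concatenate-and-KDF. What follows is a minimal sketch using WebCrypto's HKDF; the X25519 and ML-KEM secrets are assumed to come from whatever libraries you already use, and the transcript binding is simplified compared to what real protocols like TLS actually hash in.

```typescript
// Sketch of a hybrid key-exchange combiner: derive one session key from a
// classical (X25519) shared secret plus a post-quantum (ML-KEM) shared secret.
// Both inputs are assumed to be produced elsewhere; only the combine step is shown.
async function combineSharedSecrets(
  ecdhSecret: Uint8Array,
  mlkemSecret: Uint8Array,
  transcript: Uint8Array, // context binding, e.g. a hash of the handshake so far
): Promise<Uint8Array> {
  // Concatenate both secrets as the HKDF input keying material.
  const ikm = new Uint8Array(ecdhSecret.length + mlkemSecret.length);
  ikm.set(ecdhSecret, 0);
  ikm.set(mlkemSecret, ecdhSecret.length);

  const key = await crypto.subtle.importKey("raw", ikm, "HKDF", false, ["deriveBits"]);
  const bits = await crypto.subtle.deriveBits(
    { name: "HKDF", hash: "SHA-256", salt: new Uint8Array(32), info: transcript },
    key,
    256, // 32-byte session key
  );
  return new Uint8Array(bits);
}
```

The attraction is that the derived key stays secure as long as either input secret does, and for ephemeral key exchange that hedge costs a few lines and some bytes on the wire. Authentication has no equally cheap equivalent, which is the asymmetry described above.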

In sym­met­ric en­cryp­tion, we don’t need to do any­thing, thank­fully. There is a com­mon mis­con­cep­tion that pro­tec­tion from Grover re­quires 256-bit keys, but that is based on an ex­ceed­ingly sim­pli­fied un­der­stand­ing of the al­go­rithm. A more ac­cu­rate char­ac­ter­i­za­tion is that with a cir­cuit depth of 2⁶⁴ log­i­cal gates (the ap­prox­i­mate num­ber of gates that cur­rent clas­si­cal com­put­ing ar­chi­tec­tures can per­form se­ri­ally in a decade) run­ning Grover on a 128-bit key space would re­quire a cir­cuit size of 2¹⁰⁶. There’s been no progress on this that I am aware of, and in­deed there are old proofs that Grover is op­ti­mal and its quan­tum speedup does­n’t par­al­lelize. Unnecessary 256-bit key re­quire­ments are harm­ful when bun­dled with the ac­tu­ally ur­gent PQ re­quire­ments, be­cause they mud­dle the in­ter­op­er­abil­ity tar­gets and they risk slow­ing down the roll­out of asym­met­ric PQ cryp­tog­ra­phy.
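For readers who want the arithmetic behind those exponents, the sketch below spells out the standard depth-limited Grover accounting. The 2^170 constant is the gate-count estimate for AES-128 key search that I recall from NIST's PQC call for proposals, not a number taken from this article, so treat it as an assumption; the shape of the argument is the point.

```latex
% Sequential Grover over a 128-bit key space needs about
%   (\pi/4)\,\sqrt{2^{128}} \approx 2^{64}
% oracle calls, each a full (reversible) AES evaluation.
% Grover parallelizes badly: K machines only buy a \sqrt{K} speedup,
% so a hard depth cap D inflates the total circuit size.
% Using NIST's estimate of roughly 2^{170}/D total gates for AES-128 key search:
G \;\approx\; \frac{2^{170}}{D},
\qquad D = 2^{64} \;\Longrightarrow\; G \;\approx\; 2^{106}.
```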

In my cor­ner of the world, we’ll have to start think­ing about what it means for half the cryp­tog­ra­phy pack­ages in the Go stan­dard li­brary to be sud­denly in­se­cure, and how to bal­ance the risk of down­grade at­tacks and back­wards com­pat­i­bil­ity. It’s the first time in our ca­reers we’ve faced any­thing like this: SHA-1 to SHA-256 was not nearly this dis­rup­tive, and even that took for­ever with the oc­ca­sional un­ex­pected down­grade at­tack.

Trusted Execution Environments (TEEs) like Intel SGX and AMD SEV-SNP, and in general hardware attestation, are just f***d. All their keys and roots are not PQ and I heard of no progress in rolling out PQ ones, which at hardware speeds means we are forced to accept they might not make it, and can't be relied upon. I had to reassess a whole project because of this, and I will probably downgrade them to "barely defense in depth" in my toolkit.

Ecosystems with cryp­to­graphic iden­ti­ties (like at­proto and, yes, cryp­tocur­ren­cies) need to start mi­grat­ing very soon, be­cause if the CRQCs come be­fore they are done, they will have to make ex­tremely hard de­ci­sions, pick­ing be­tween let­ting users be com­pro­mised and brick­ing them.

File en­cryp­tion is es­pe­cially vul­ner­a­ble to store-now-de­crypt-later at­tacks, so we’ll prob­a­bly have to start warn­ing and then er­ror­ing out on non-PQ age re­cip­i­ent types soon. It’s un­for­tu­nately only been a few months since we even added PQ re­cip­i­ents, in ver­sion 1.3.0.

Finally, this week I started teach­ing a PhD course in cryp­tog­ra­phy at the University of Bologna, and I’m go­ing to men­tion RSA, ECDSA, and ECDH only as legacy al­go­rithms, be­cause that’s how those stu­dents will en­counter them in their ca­reers. I know, it feels weird. But it is what it is.

For more will­ing-or-not PQ mi­gra­tion, fol­low me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @fil­ippo@abyss­do­main.ex­pert.

Traveling back from an ex­cel­lent AtmosphereConf 2026, I saw my first au­rora, from the north-fac­ing win­dow of a Boeing 747.

My work is made pos­si­ble by Geomys, an or­ga­ni­za­tion of pro­fes­sional Go main­tain­ers, which is funded by Ava Labs, Teleport, Tailscale, and Sentry. Through our re­tainer con­tracts they en­sure the sus­tain­abil­ity and re­li­a­bil­ity of our open source main­te­nance work and get a di­rect line to my ex­per­tise and that of the other Geomys main­tain­ers. (Learn more in the Geomys an­nounce­ment.) Here are a few words from some of them!

Teleport — For the past five years, at­tacks and com­pro­mises have been shift­ing from tra­di­tional mal­ware and se­cu­rity breaches to iden­ti­fy­ing and com­pro­mis­ing valid user ac­counts and cre­den­tials with so­cial en­gi­neer­ing, cre­den­tial theft, or phish­ing. Teleport Identity is de­signed to elim­i­nate weak ac­cess pat­terns through ac­cess mon­i­tor­ing, min­i­mize at­tack sur­face with ac­cess re­quests, and purge un­used per­mis­sions via manda­tory ac­cess re­views.

Ava Labs — We at Ava Labs, main­tainer of AvalancheGo (the most widely used client for in­ter­act­ing with the Avalanche Network), be­lieve the sus­tain­able main­te­nance and de­vel­op­ment of open source cryp­to­graphic pro­to­cols is crit­i­cal to the broad adop­tion of blockchain tech­nol­ogy. We are proud to sup­port this nec­es­sary and im­pact­ful work through our on­go­ing spon­sor­ship of Filippo and his team.

...

Read the original on words.filippo.io »

7 414 shares, 16 trendiness

What Being Ripped Off for $35k Taught Me

In Spring of 2024, a good friend contacted me saying he'd heard of an opportunity to help finish an augmented reality bus tour of a Beijing park. He'd gotten the impression the California-based client was in a pinch, and said they'd mentioned by way of referral the names of some folks we both knew and respected.

In the past, I’d con­sulted on AR bus bids for both The Mill and IDEO, and from those I was well aware how many hard prob­lems AR buses in­volved. I was very cu­ri­ous to find out how these folks had ad­dressed them, and on our in­tro­duc­tory call I told them as much. They de­murred. This was my first warn­ing not all was right.

We spoke briefly about the na­ture of the pro­ject and their need for help, and agreed I’d im­me­di­ately try to get a work visa and come out to China for as long as I could spare - we de­ter­mined it’d be a month. Fortunately I found I still had a valid visa from a pre­vi­ous pro­ject, so I was good to head out im­me­di­ately.

When I ar­rived I found the pro­ject in sham­bles. Multiple very ju­nior de­vel­op­ers were touch­ing (binary, TouchDesigner) code and de­ploy­ing straight to pro­duc­tion via thumb drive, with zero ver­sion con­trol. In fact, they did­n’t know what ver­sion con­trol was.

They were at­tempt­ing to pull off AR ef­fects on the trans­par­ent OLED win­dows of the bus with­out ac­count­ing for lens dis­tor­tion, field of view, par­al­lax, oc­clu­sion, etc., and were frus­trated and mys­ti­fied when things did­n’t ap­pear to line up. They were com­pletely naive to what depth and scale cues are and how to de­ploy them.

Their gy­ro­scopes had an axis flipped, and the pitch of the bus was pro­duc­ing an op­po­site re­ac­tion in the vir­tual con­tent. GPS was highly un­re­li­able (because China) and there was no workaround in place.

Their render pipeline was drawing everything - including their renders - to fullscreen quads, which were then used for basic compositing with alpha and re-rendered - all 35+ and counting layers, using alpha-to-transparency. Their (consumer, air-cooled) gaming PCs' intakes were exposed directly to the dusty air outside the bus. Their consumer OLED panels were roasting in the direct sun coming through the windows. Their 'rack' was an MDF shelf in the passenger seat of the bus, which was bumping along on rutted dirt roads. Their 'code' was the absolute worst kind of fucked-up node spaghetti, not a thought given to legibility or maintainability or standards - basically the reason people hate node-based programming.

They could­n’t de­liver even the very ba­sics of what they’d sold and worked on and de­layed and cost-over­run for years, much less the ex­cit­ing ef­fects they kept promis­ing the client.

They were car­pet­bag­gers and dilet­tantes con­vinced by their own in­ex­pe­ri­ence and the ad­vice of a one­time VJ that they could pull off some­thing I’d twice helped quote to be brought home by a cadre of hard­ened killers with shit­loads of math and know-how at eye-wa­ter­ing prices. They were way way way over their heads and were in no way in­ter­ested in up­dat­ing their pri­ors in light of the shit they were swim­ming in.

The quiet office rang with a still-unheard answer to their "how hard could it be": "fuck around and find out." They were in the deep dark forest of unknown unknowns, drowning in hubris.

I sug­gested they mea­sure the in­trin­sics of their lenses to flat­ten the im­age. I sug­gested they im­ple­ment some (any) color sci­ence. I sug­gested they get a sec­ond (just one more) of each of the bus’ cam­eras, so I could work cam­era is­sues in the of­fice and not on the bus, in the hot sun, sit­ting side­ways on a bench seat with the key­board in my lap and the mouse on my leg, us­ing the ridicu­lous 3x OLED arrange­ment as a cod­ing en­vi­ron­ment. I begged them to let me re­build their ren­der pipeline in a less psy­chotic way. I begged them to let me stan­dard­ize their Windows builds be­fore they rolled out this al­pha tem­plate to the re­main­ing ~20 buses, a process that had al­ready be­gun.

I demanded they implement version control, and showed them how it would save time, not waste it, by making deploys from dev to the bus instantaneous, diffable, and reversible, and how branches could keep feature dev off the stable build on the bus.

I agreed against my judge­ment and ob­jec­tions to drop these nec­es­sary things on a reg­u­lar ba­sis so I could build flashy gags that did noth­ing to ad­dress their root is­sues, or to ac­cel­er­ate dev — sequenced par­ti­cle ef­fects and sparkly an­i­ma­tions that were given to me not with an­i­mat­ics or sto­ry­boards or ref­er­ence im­ages but rather just a quick ver­bal ex­pla­na­tion. I was ex­horted to have them ready im­me­di­ately for client re­view, of­ten the same day, at which point the client re­peat­edly be­came openly up­set to find out that the re­sult was not what he ex­pected, or thought they had agreed on.

When I re­al­ized this was hap­pen­ing I gen­tly sug­gested the client could im­ple­ment a process for new work — one that moved in­ex­orably from lower to higher fi­delity with a fixed num­ber of re­vi­sions and ex­plicit client sign-off at each step.

I ne­go­ti­ated a touchy sit­u­a­tion where I dis­cov­ered one of their ju­nior devs was se­cretly be­ing abu­sive to one of the oth­ers.

I worked 11-14 hours a day for 24 days straight with no time off. We used my gear - my depth cam­era (I brought my own — bring­ing every­thing is a habit I ac­quired in my pre­vi­ous life in show­biz), my lap­top, my soft­ware keys. I came out of pocket for all ex­penses.

I had arrived with a massively inflamed wrist from a Muay Thai injury, and working in this situation was excruciating. They knew this - they bought me some Chinese tiger balm pads.

I missed the month of May with my 2-year-old kid. My wife cared for a 2-year-old alone.

I’d re­ceived a de­posit for less than 1/4 of the con­tract amount. I did all of this on faith. The faith was that, as you’ve sur­mised from the ti­tle, they’d pay me, and — spoiler alert — they did not. A few years later I can say for sure these peo­ple won’t pay me the $35k they owe me.

The faith was that if they couldn't pay, they'd let me know, because I was actively digging their asses out of a hole they'd dug, and doing so tirelessly and professionally, without complaint.

The faith was that if they did­n’t pay, the le­gal sys­tem would en­force our California-law con­tract and make them pay.

I hired debt collectors - they did the usual nastygram process and eventually told me it'd be unwise to sue - the debtors could simply close up shop and dissolve the entity.

I would be hold­ing this par­tic­u­lar bag.

Now I'm aware this all sounds like a complaint, and I suppose you could say to this point that it has been. The constructive thing to do is to ask "well, what have we learned now that we've touched the stove?"

* Someone in dire need of help who won’t ac­cept your help may be be­yond help

* If you take $35k off some­one’s desk, that’s grand lar­ceny. If you steal $35k of their work and time, that’s A-OK.

* End clients can’t tell the dif­fer­ence be­tween these bo­zos and me. I don’t know what to do with that in­for­ma­tion but it feels bad.

* I nor­mally build progress pay­ments into con­tracts. I did not in this case be­cause it was a dead sprint at only 3 weeks long. If I’m be­ing hon­est, it felt openly mis­trust­ful to de­mand progress pay­ments in a pe­riod of time that short, and I was but a sweet Summer child back then

* They do not at all deny they owe me the money - be­gin­ning the week the pay­ment was due they sim­ply sent an in­creas­ingly in­co­her­ent se­ries of ex­pla­na­tions for why they did­n’t have it but would next week, a cre­ative writ­ing ex­er­cise they du­ti­fully main­tained for 18 months af­ter that

* I did not pay for my own flights thank god

* They ripped off every­body on this job, we are all in touch

* They may have been de­lud­ing them­selves into think­ing just one last thing’ would re­sult in pay­ment from their client

* Their client was very very not happy, and rightly so

* I’ll hap­pily tell you who they are - get in touch

...

Read the original on belief.horse »

8 384 shares, 25 trendiness

matthartman/ghost-pepper: Hold-to-talk speech-to-text for macOS. 100% local, powered by WhisperKit and local LLM cleanup. Hold Control to record, release to transcribe and paste.

100% lo­cal hold-to-talk speech-to-text for ma­cOS. Hold Control to record, re­lease to tran­scribe and paste. No cloud APIs, no data leaves your ma­chine.

* Hold Control to talk — re­lease to tran­scribe and paste into any text field

* Runs en­tirely on your Mac — mod­els run lo­cally via Apple Silicon, noth­ing is sent any­where

* Menu bar app — lives in your menu bar, no dock icon, launches at lo­gin

Ghost Pepper uses open-source mod­els that run en­tirely on your Mac. Models down­load au­to­mat­i­cally and are cached lo­cally.

Speech mod­els pow­ered by WhisperKit. Cleanup mod­els pow­ered by LLM.swift. All mod­els served by Hugging Face.

* Launch at lo­gin is en­abled by de­fault on first run. You can tog­gle it off in Settings.

* No log­ging to disk — tran­scrip­tions are never writ­ten to files. Debug logs are in-mem­ory only and dis­ap­pear when the app quits.

All mod­els run lo­cally, no pri­vate data leaves your com­puter. And it’s spicy to of­fer some­thing for free that other apps have raised $80M to build.

Ghost Pepper re­quires Accessibility per­mis­sion, which nor­mally needs ad­min ac­cess to grant. On man­aged de­vices, IT ad­mins can pre-ap­prove this via an MDM pro­file (Jamf, Kandji, Mosaic, etc.) us­ing a Privacy Preferences Policy Control (PPPC) pay­load:

...

Read the original on github.com »

9 305 shares, 12 trendiness

Germany Doxes “UNKN,” Head of RU Ransomware Gangs REvil, GandCrab

An elu­sive hacker who went by the han­dle UNKN and ran the early Russian ran­somware groups GandCrab and REvil now has a name and a face. Authorities in Germany say 31-year-old Russian Daniil Maksimovich Shchukin headed both cy­ber­crime gangs and helped carry out at least 130 acts of com­puter sab­o­tage and ex­tor­tion against vic­tims across the coun­try be­tween 2019 and 2021.

Shchukin was named as UNKN (a.k.a. UNKNOWN) in an advisory published by the German Federal Criminal Police (the "Bundeskriminalamt" or BKA for short). The BKA said Shchukin and another Russian — 43-year-old Anatoly Sergeevitsch Kravchuk — extorted nearly 2 million euros across two dozen cyberattacks that caused more than 35 million euros in total economic damage.

Germany's BKA said Shchukin acted as the head of one of the largest worldwide-operating ransomware groups, GandCrab and REvil, which pioneered the practice of double extortion — charging victims once for a key needed to unlock hacked systems, and demanding a separate payment in exchange for a promise not to publish stolen data.

Shchukin's name appeared in a Feb. 2023 filing (PDF) from the U.S. Justice Department seeking the seizure of various cryptocurrency accounts associated with proceeds from the REvil ransomware gang's activities. The government said the digital wallet tied to Shchukin contained more than $317,000 in ill-gotten cryptocurrency.

The Gandcrab ran­somware af­fil­i­ate pro­gram first sur­faced in January 2018, and paid en­ter­pris­ing hack­ers huge shares of the prof­its just for hack­ing into user ac­counts at ma­jor cor­po­ra­tions. The Gandcrab team would then try to ex­pand that ac­cess, of­ten si­phon­ing vast amounts of sen­si­tive and in­ter­nal doc­u­ments in the process. The mal­ware’s cu­ra­tors shipped five ma­jor re­vi­sions to the GandCrab code, each cor­re­spond­ing with sneaky new fea­tures and bug fixes aimed at thwart­ing the ef­forts of com­puter se­cu­rity firms to stymie the spread of the mal­ware.

On May 31, 2019, the GandCrab team announced the group was shutting down after extorting more than $2 billion from victims. "We are a living proof that you can do evil and get off scot-free," GandCrab's farewell address famously quipped. "We have proved that one can make a lifetime of money in one year. We have proved that you can become number one by general admission, not in your own conceit."

The REvil ransomware affiliate program materialized around the same time as GandCrab's demise, fronted by a user named UNKNOWN who announced on a Russian cybercrime forum that he'd deposited $1 million in the forum's escrow to show he meant business. By this time, many cybersecurity experts had concluded REvil was little more than a reorganization of GandCrab.

UNKNOWN also gave an in­ter­view to Dmitry Smilyanets, a for­mer ma­li­cious hacker hired by Recorded Future, wherein UNKNOWN de­scribed a rags-to-riches tale un­en­cum­bered by ethics and morals.

"As a child, I scrounged through the trash heaps and smoked cigarette butts," UNKNOWN told Recorded Future. "I walked 10 km one way to the school. I wore the same clothes for six months. In my youth, in a communal apartment, I didn't eat for two or even three days. Now I am a millionaire."

As de­scribed in The Ransomware Hunting Team by Renee Dudley and Daniel Golden, UNKNOWN and REvil rein­vested sig­nif­i­cant earn­ings into im­prov­ing their suc­cess and mir­ror­ing prac­tices of le­git­i­mate busi­nesses. The au­thors wrote:

"Just as a real-world manufacturer might hire other companies to handle logistics or web design, ransomware developers increasingly outsourced tasks beyond their purview, focusing instead on improving the quality of their ransomware. The higher quality ransomware—which, in many cases, the Hunting Team could not break—resulted in more and higher pay-outs from victims. The monumental payments enabled gangs to reinvest in their enterprises. They hired more specialists, and their success accelerated."

"Criminals raced to join the booming ransomware economy. Underworld ancillary service providers sprouted or pivoted from other criminal work to meet developers' demand for customized support. Partnering with gangs like GandCrab, 'cryptor' providers ensured ransomware could not be detected by standard anti-malware scanners. 'Initial access brokerages' specialized in stealing credentials and finding vulnerabilities in target networks, selling that access to ransomware operators and affiliates. 'Bitcoin tumblers' offered discounts to gangs that used them as a preferred vendor for laundering ransom payments. Some contractors were open to working with any gang, while others entered exclusive partnerships."

REvil would evolve into a feared "big-game-hunting" machine capable of extracting hefty extortion payments from victims, largely going after organizations with more than $100 million in annual revenues and fat new cyber insurance policies that were known to pay out.

Over the July 4, 2021 weekend in the United States, REvil hacked into and extorted Kaseya, a company that handled IT operations for more than 1,500 businesses, nonprofits and government agencies. The FBI would later announce they'd infiltrated the ransomware group's servers prior to the Kaseya hack but couldn't tip their hand at the time. REvil never recovered from that core compromise, or from the FBI's release of a free decryption key for REvil victims who couldn't or didn't pay.

Shchukin is from Krasnodar, Russia and is thought to re­side there, the BKA said.

"Based on the investigations so far, it is assumed that the wanted person is abroad, presumably in Russia," the BKA advised. "Travel behaviour cannot be ruled out."

There is little that connects Shchukin to UNKNOWN's various accounts on the Russian crime forums. But a review of the Russian crime forums indexed by the cyber intelligence firm Intel 471 shows there is plenty connecting Shchukin to a hacker identity called "Ger0in" who operated large botnets and sold "installs" — allowing other cybercriminals to rapidly deploy malware of their choice to thousands of PCs in one go. However, Ger0in was only active between 2010 and 2011, well before UNKNOWN's appearance as the REvil front man.

A re­view of the mugshots re­leased by the BKA at the im­age com­par­i­son site Pimeyes found a match on this birth­day cel­e­bra­tion from 2023, which fea­tures a young man named Daniel wear­ing the same fancy watch as in the BKA pho­tos.

Update, April 6, 12:06 p.m. ET: A reader forwarded this English-dubbed audio recording from a ccc.de (37C3) conference talk in Germany from 2023 that previously outed Shchukin as the REvil leader (Shchukin is mentioned at around 24:25).

...

Read the original on krebsonsecurity.com »

10 304 shares, 17 trendiness

Adobe secretly modifies your hosts file for the stupidest reason

...

Read the original on www.osnews.com »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.